Search results for: computational methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16989

13419 Ecosystem Model for Environmental Applications

Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru

Abstract:

This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, in order to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment, and it can be used as a practical instrument for decision-making.

Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions

Procedia PDF Downloads 427
13418 Neuroecological Approach for Anthropological Studies in Archaeology

Authors: Kalangi Rodrigo

Abstract:

The term neuroecology denotes the study of adaptive variation in cognition and the brain. The field emerged in the 1980s, when researchers began to apply the methods of comparative evolutionary biology to cognitive processes and the neural mechanisms underlying cognition. In archaeology and anthropology, we observe behaviors such as social learning, innovative feeding and foraging, tool use, and social manipulation to infer the cognitive processes of ancient mankind. In comparative studies, brainstem size has been used as a control variable, and phylogeny has been controlled using independent contrasts. Both disciplines need to be enriched with comparative literature as well as neurological, experimental, and behavioral studies among tribal peoples and primate groups, which will carry the research to its full potential. Neuroecology examines the relations between ecological selection pressures and human or sex differences in cognition and the brain. The goal of neuroecology is to understand how natural selection acts on perception and its neural apparatus. Furthermore, neuroecology will eventually lead both principal disciplines toward ethology, where human behavior and social organization are studied from a biological perspective, whether ethnoarchaeological or prehistoric. Archaeology should adopt the general approach of neuroecology: phylogenetic comparative methods can be used in the field, together with new findings on the cognitive mechanisms and brain structures involved in mating systems, social organization, communication and foraging. The contribution of neuroecology to archaeology and anthropology is the information it provides on the selective pressures that have influenced the evolution of human cognition and brain structure. It will shed new light on the path of evolutionary studies, including behavioral ecology, primate archaeology and cognitive archaeology.

Keywords: Neuroecology, Archaeology, Brain Evolution, Cognitive Archaeology

Procedia PDF Downloads 124
13417 Determination of Geotechnical Properties of Travertine Lithotypes in Van-Turkey

Authors: Ali Ozvan, Ismail Akkaya, Mucip Tapan

Abstract:

Travertine is generally a weak to medium-strong rock, and the physical, mechanical and structural properties of travertines have a direct impact on geotechnical studies. New settlement areas were designated on travertine units after the two destructive earthquakes that occurred on October 23rd, 2011 (M=7.1) and November 9th, 2011 (M=5.6) in the Tabanlı and Edremit districts of Van province, Turkey, respectively. In the study area, the travertines comprise lithotypes with different engineering properties, such as strong crystalline crust, medium-strong shrub, and weak reed, which affect the mechanical and engineering properties of the travertine; each level presents different handicaps. Travertine has a higher strength than soil ground; however, it can suffer from drawbacks such as poor rock mass quality, karst caves and weathering alteration. The physico-mechanical properties of the travertine in the study area were determined by laboratory tests and field observations. Uniaxial compressive strength (UCS) values were estimated by indirect methods, and a strength map of the different lithotypes of the Edremit travertine was created in order to define suitable settlement areas. Rock mass properties and the underground structure were also determined by boreholes, field studies and geophysical methods. The purpose of this study is to investigate the relationship between lithotype and the physico-mechanical properties of travertines. According to the results, lithotype affects the physical, mechanical and rock mass properties of the travertine levels. Several research methods show that various hazards may arise in such areas when the active tectonic structure is evaluated along with the karstic cavities within the travertine and the varying lithotype qualities.

Keywords: travertine, lithotype, geotechnical parameters, Van earthquake

Procedia PDF Downloads 234
13416 Barnard Feature Point Detector for Low-Contrast Periapical Radiography Images

Authors: Chih-Yi Ho, Tzu-Fang Chang, Chih-Chia Huang, Chia-Yen Lee

Abstract:

In dental clinics, dentists use periapical radiography images to assess the effectiveness of endodontic treatment of teeth with chronic apical periodontitis. Periapical radiography images are taken at different times to assess alveolar bone variation before and after root canal treatment, and thereby to judge whether the treatment was successful. Current clinical assessment of apical tissue recovery relies only on the dentist's personal experience, and it is difficult to obtain standardized, objective interpretations given differences in the dentist's or radiologist's background and knowledge. If periapical radiography images taken at different times could be registered well, the endodontic treatment could be evaluated objectively. In image registration, representative control points must be assigned to the transformation model for the registration to perform well. However, detecting representative control points (feature points) on periapical radiography images is generally very difficult: regardless of which traditional detection method is used, sufficient feature points may not be detected due to the low-contrast characteristics of the x-ray image. The Barnard detector is a feature point detection algorithm based on grayscale value gradients, which can obtain sufficient feature points even when gray-scale contrast is low. However, the Barnard detector tends to detect too many feature points, and they tend to be too clustered. This study uses the local extrema of clustered feature points and a suppression radius to overcome this problem, and compares different feature point detection methods. In preliminary results, the proposed method detected feature points suitable as representative control points.
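A minimal sketch (not the authors' implementation) of the suppression-radius idea mentioned in the abstract: candidate feature points are visited in order of decreasing response, and a candidate is discarded if a stronger point has already been kept within the suppression radius. The candidate coordinates and scores below are invented for illustration; a real score could be a Barnard-style gradient-magnitude response.

```python
# Hedged sketch: radius-based suppression of clustered feature-point
# candidates. Each candidate is (x, y, score).

def suppress_by_radius(candidates, radius):
    """Keep only locally maximal candidates: a candidate is dropped if a
    stronger candidate was already kept within `radius` pixels."""
    kept = []
    r2 = radius * radius
    # Visit strongest candidates first.
    for x, y, s in sorted(candidates, key=lambda c: -c[2]):
        if all((x - kx) ** 2 + (y - ky) ** 2 > r2 for kx, ky, _ in kept):
            kept.append((x, y, s))
    return kept

points = [(10, 10, 0.9), (11, 10, 0.8), (40, 40, 0.7), (12, 11, 0.95)]
control_points = suppress_by_radius(points, radius=5)
```

With a radius of 5 pixels, the three near-duplicate candidates around (10, 10) collapse to the single strongest one, leaving two well-separated control points.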

Keywords: feature detection, Barnard detector, registration, periapical radiography image, endodontic treatment

Procedia PDF Downloads 445
13415 The Population Death Model and Influencing Factors from the Data of The "Sixth Census": Zhangwan District Case Study

Authors: Zhou Shangcheng, Yi Sicen

Abstract:

Objective: To understand the mortality patterns of Zhangwan District in 2010 and provide a basis for the development of scientific and rational health policy. Methods: Data were collected from the Sixth Census of Zhangwan District and the disease surveillance system. The statistical analysis covered differences in mortality by age, gender, region and time, together with related factors. Methods developed for the Global Burden of Disease (GBD) Study by the World Bank and the World Health Organization (WHO) were adapted and applied to Zhangwan District population health data, and the DALY rate per 1,000 was calculated for the various causes of death. SPSS 16 was used for the statistical analysis. Results: The crude mortality rate of Zhangwan District was 6.03‰, with a significant difference between males (7.37‰) and females (4.68‰). Life expectancy at birth in Zhangwan District in 2010 was 78.40 years (males 75.93, females 81.03). The five leading causes of YLL, in descending order, were cardiovascular diseases (42.63 DALY/1,000), malignant neoplasms (23.73 DALY/1,000), unintentional injuries (5.84 DALY/1,000), respiratory diseases (5.43 DALY/1,000) and respiratory infections (2.44 DALY/1,000). In addition, marital status and educational level were associated with mortality to a certain extent. Conclusion: Zhangwan District, at the city level, has a relatively low mortality level; the mortality of its total population shows a downward trend and life expectancy is rising.
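The burden measures cited in the abstract can be illustrated with a small sketch. This is a simplified, hypothetical calculation in the spirit of GBD-style YLL rates, not the study's actual data or the full GBD method (which, in some versions, also applies age weighting and discounting); every count below is invented.

```python
# Hedged sketch: crude mortality (per mille) and a simple YLL rate per
# 1,000 population. Population, deaths and life expectancies are
# illustrative, not census data.

def crude_mortality_permille(deaths, population):
    return 1000.0 * deaths / population

def yll_rate_per_1000(deaths_by_age, remaining_le, population):
    # YLL = sum over ages of (deaths at that age) x (remaining life
    # expectancy at that age).
    yll = sum(d * remaining_le[a] for a, d in deaths_by_age.items())
    return 1000.0 * yll / population

population = 365_000
deaths_by_age = {60: 50, 70: 120, 80: 200}      # illustrative counts
remaining_le = {60: 21.0, 70: 13.5, 80: 7.5}    # illustrative years left
rate = yll_rate_per_1000(deaths_by_age, remaining_le, population)
```

The same pattern, applied per cause of death, yields cause-specific DALY/1,000 figures like those reported in the abstract.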

Keywords: sixth census, Zhangwan district, death level differences, influencing factors, cause of death

Procedia PDF Downloads 274
13414 Microstructure Evolution and Pre-transformation Microstructure Reconstruction in Ti-6Al-4V Alloy

Authors: Shreyash Hadke, Manendra Singh Parihar, Rajesh Khatirkar

Abstract:

In the present investigation, the variation in microstructure with heat treatment conditions, i.e., temperature and time, was observed. Ti-6Al-4V alloy was subjected to solution annealing treatments in the β (1066 °C) and α+β (930 °C and 850 °C) phase fields, followed by quenching, air cooling and furnace cooling to room temperature, respectively. The effect of solution annealing and cooling on the microstructure was studied using optical microscopy (OM), scanning electron microscopy (SEM), electron backscattered diffraction (EBSD) and x-ray diffraction (XRD). The chemical composition of the β phase under the different conditions was determined with an energy dispersive spectrometer (EDS) attached to the SEM. Furnace cooling resulted in a coarser (α+β) structure, while air cooling produced a much finer structure with a Widmanstätten morphology of α at the grain boundaries. Quenching from the solution annealing temperature formed α’ martensite, its proportion depending on the temperature in the β phase field. It is well known that the β to α transformation follows the Burgers orientation relationship (OR). In order to reconstruct the microstructure of the parent β phase, a MATLAB code was written implementing the neighbor-to-neighbor, triplet and Tari methods. The code was tested on the annealed samples (solution annealing at 1066 °C followed by furnace cooling to room temperature). The parent phase data thus generated were then plotted using the TSL-OIM software, and the reconstruction results of the three methods were compared and analyzed. Tari's approach (a clustering approach) gave better results than the neighbor-to-neighbor and triplet methods, but the triplet method required the least time of the three.

Keywords: Ti-6Al-4V alloy, microstructure, electron backscattered diffraction, parent phase reconstruction

Procedia PDF Downloads 451
13413 Effects of Centrifugation, Encapsulation Method and Different Coating Materials on the Total Antioxidant Activity of the Microcapsules of Powdered Cherry Laurels

Authors: B. Cilek Tatar, G. Sumnu, M. Oztop, E. Ayaz

Abstract:

Encapsulation protects sensitive food ingredients against heat, oxygen, moisture and pH until they are released into the system, and it can mask the unwanted taste of nutrients added to foods for fortification purposes. Cherry laurels (Prunus laurocerasus) contain phenolic compounds, which decrease proneness to several chronic diseases, such as some types of cancer and cardiovascular diseases. The objective of this research was to study the effects of centrifugation, different coating materials and homogenization methods on the microencapsulation of powders obtained from cherry laurels. Maltodextrin and a maltodextrin:whey protein mixture with a ratio of 1:3 (w/w) were chosen as coating materials, with the total solid content of the coating materials kept constant at 10% (w/w). Capsules were obtained from powders of freeze-dried cherry laurels through an encapsulation process using a silent crusher homogenizer or microfluidization. Freeze-dried cherry laurels were the core material, with a core-to-coating ratio of 1:10 by weight. To homogenize the mixture, a high-speed homogenizer was used at 4000 rpm for 5 min; the mixtures were then treated either by silent crusher for 1 min at 75,000 rpm or by microfluidizer at 50 MPa for 3 passes. Freeze drying for 48 hours was applied to the emulsions to obtain capsules in powder form, after which the dry capsules were ground manually into a fine powder. The microcapsules were analyzed for total antioxidant activity with the DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging method. Prior to high-speed homogenization, the samples were centrifuged (4000 rpm, 1 min); centrifugation was found to have a positive effect on the total antioxidant activity of the capsules. Microcapsules treated by microfluidizer had higher total antioxidant activities than those treated by silent crusher.
Increasing the whey protein concentration in the coating material (using the maltodextrin:whey protein 1:3 mixture) also had a positive effect on total antioxidant activity for both the silent crusher and microfluidization methods. Therefore, capsules prepared by microfluidization of centrifuged mixtures can be selected as the best encapsulation conditions for cherry laurel powder in terms of total antioxidant activity. This study shows that capsules prepared by these methods can be recommended for incorporation into foods to enhance their functionality by increasing antioxidant activity.
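The DPPH assay used for the antioxidant measurements rests on a simple absorbance ratio. A hedged sketch of the standard radical-scavenging formula follows (this is the generic assay calculation, not the authors' exact protocol; the absorbance values are invented):

```python
# Hedged sketch: DPPH radical-scavenging activity from absorbance
# readings (typically taken at 517 nm).

def dpph_scavenging_percent(a_control, a_sample):
    """Percent inhibition of DPPH absorbance relative to the control."""
    return 100.0 * (a_control - a_sample) / a_control

a_control = 0.80  # DPPH solution without extract (illustrative)
a_sample = 0.32   # DPPH solution with microcapsule extract (illustrative)
activity = dpph_scavenging_percent(a_control, a_sample)
```

A larger drop in absorbance (stronger quenching of the purple DPPH radical) gives a higher scavenging percentage, i.e., higher total antioxidant activity.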

Keywords: antioxidant activity, cherry laurel, microencapsulation, microfluidization

Procedia PDF Downloads 297
13412 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (ED) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through Information and Communication Technology (ICT). The study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, with a second search completed in April 2020 to capture more recent publications. Study selection was undertaken independently by the authors. Both qualitative and quantitative studies were included that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key themes that emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination: disaster plans are in place in hospitals, and in some cases there are interagency agreements with pre-hospital services and relevant stakeholders; however, the plans highlighted in these studies lacked information on coordinated communication within and between the pre-hospital and hospital settings.
(2) Communication systems used in disasters: although various communication systems are used within and between hospitals and pre-hospital services, technical issues have hindered communication between teams during disasters. (3) Integrated information management systems: there is a need for an integrated health information system that can help pre-hospital and hospital staff record patient data and ensure the data are shared. (4) Disaster training and drills: while some studies analyzed disaster drills and training, the majority focused on hospital departments other than EMTs; these studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff during disaster response. Although different types of ICT are used, various issues remain that affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 129
13411 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation

Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir

Abstract:

It is clear that quality concrete must be made with selected materials chosen in optimum proportions that leave, after placement, a minimum of voids in the material produced. The formulation methods in common use are for the most part based on a granular curve describing an ‘optimal granularity’. Many authors have engaged in fundamental research on granular arrangements. Comparing mathematical models of these granular arrangements with experimental measurements of compactness verifies that the minimum porosity P follows, over the granular extent, a power law. Thus the best compactness in a finite medium is obtained with power laws, such as those of Furnas, Fuller or Talbot, each preferring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that the optimal granularity of Caquot is approximated by a power law. By analogy, it can then be analyzed as a granular structure of fractal type, since the internal-similarity properties that characterize fractal objects are likewise expressed by a power law. Optimized mixtures may thus be described as a hierarchy of successive granular classes filling the voids at each scale, a regular hierarchical distribution which would give, at different scales and by cascading effects, the same structure to the mix. This model is likely appropriate for the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions up to chippings of sometimes several tens of millimeters. In this research, the aim is to illustrate the application of fractal analysis to characterizing optimized granular concrete mixtures through a so-called fractal dimension: different concretes were studied, and we show that their granular mixtures have a fractal structure regardless of the method of formulation or the type of concrete.
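The Fuller/Talbot-type power laws mentioned above can be sketched as a cumulative grading curve P(d) = 100·(d/D)^n, the cumulative percent passing sieve size d for a maximum aggregate size D; a power-law size distribution of this kind is the signature of the self-similar (fractal-like) packing the abstract argues for. The sieve sizes and exponent below are illustrative choices, not data from the study.

```python
# Hedged sketch: a Fuller/Talbot-type grading curve. The exponent n is
# typically chosen between about 0.20 and 0.50 (n = 0.5 is the classic
# Fuller curve).

def percent_passing(d, d_max, n=0.5):
    """Cumulative percent of material finer than sieve size d."""
    return 100.0 * (d / d_max) ** n

d_max = 20.0                          # mm, illustrative maximum size
sieves = [0.5, 2.0, 5.0, 10.0, 20.0]  # mm, illustrative sieve sizes
curve = [percent_passing(d, d_max, n=0.5) for d in sieves]
```

On a log-log plot this curve is a straight line of slope n, which is how a power-law (and hence fractal-like) granular structure is recognized in practice.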

Keywords: concrete formulation, fractal character, granular packing, method of formulation

Procedia PDF Downloads 264
13410 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study explores the utility of incorporating internet-based data to better understand smallholder farming communities in Scotland through online text extraction and the subsequent mining of that data. Web scraping of livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics. Results from bi-grams and topic modelling uncover four main topics of interest pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found in both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it produced clusters relating to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but human validation of the data remains crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
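As a minimal illustration of the bi-gram step (a simple precursor to the topic modelling used in the study), the standard library suffices. The forum posts below are invented examples, not scraped data.

```python
# Hedged sketch: counting bi-grams across forum posts to surface
# recurring two-word themes.
from collections import Counter
import re

def bigrams(text):
    """Lowercase, tokenize, and pair each word with its successor."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return list(zip(tokens, tokens[1:]))

posts = [  # invented examples standing in for scraped posts
    "Advice on feeding young pigs before slaughter",
    "Feeding young hens: what works for your flock?",
]
counts = Counter(b for post in posts for b in bigrams(post))
top = counts.most_common(1)
```

Frequent bi-grams like ("feeding", "young") point to candidate husbandry themes, which topic modelling then groups into coherent clusters.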

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 105
13409 A Quantitative Study on the Effects of School Development on Character Development

Authors: Merve Gücen

Abstract:

One of the aims of education is to cultivate individuals who have embraced universal moral principles and who transform those principles into moral values. Character education aims to shape individuals' behaviors and mental activities so that moral principles become moral values in their lives. As a result of this education, individuals are expected to develop positive character traits and become morally sound individuals. What are the characteristics of the factors that influence character education? How should character education help individuals develop positive character traits? Which methods are more effective? These questions arise when studying character education, and our research was developed within their framework. The aim of our study is to make the most effective use of education as a factor affecting character. In this context, we explain character definition, character development, character education and the factors affecting character education using qualitative research methods. Character education programs applied in various countries were examined, and a character education program consisting of Islamic values was prepared and implemented in an International Imam Hatip High School in Istanbul. The program was carried out in collaboration with the school and families: various seminars were organized in the school, and family participation was ensured. In the last phase of our study, we worked with the students and their families on the effectiveness of the events held during the program. We found that activities such as storytelling and theater in character education programs were effective in helping individuals recognize wrong behaviors, and that our program had a positive effect on the quality of education.
The program was also observed to affect the behavior of staff at the educational institution.

Keywords: character development, family activities, values education, education program

Procedia PDF Downloads 174
13408 Numerical Analysis of Engine Performance and Emission of a 2-Stroke Opposed Piston Hydrogen Engine

Authors: Bahamin Bazooyar, Xinyan Wang, Hua Zhao

Abstract:

As a zero-carbon fuel, hydrogen can be used in combustion engines to avoid carbon emissions. This paper numerically investigates the performance of a two-stroke opposed piston hydrogen engine using three-dimensional (3D) Computational Fluid Dynamics (CFD) simulations. The engine displacement is 12.2 cm, and the compression ratio is 39. RANS simulations with the k-ε turbulence model and coupled chemistry combustion models are performed at an engine speed of 4500 rpm and hydrogen flow rates of up to 100 g/s. In order to model the hydrogen injection process, the hydrogen nozzle was given a refined mesh, and the injection pressure was varied between 100 and 200 bar. To optimize the hydrogen combustion process, the injection timing was varied between 15° before top dead center and 10°. The results showed that combustion efficiency was mostly influenced by the injection pressure, due to its impact on fuel/air mixing and charge inhomogeneity. Nitrogen oxide (NOₓ) emissions are well correlated with engine peak temperatures, demonstrating that the thermal NO mechanism is dominant under engine conditions. Through the optimization of hydrogen injection timing and pressure, a peak thermal efficiency of 45% and NOₓ emissions of 15 ppm/kWh can be achieved at an injection timing of 350 °CA and a pressure of 160 bar.

Keywords: engine, hydrogen, diesel, two-stroke, opposed-piston, decarbonisation

Procedia PDF Downloads 19
13407 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)

Authors: Eliane G. Tótoli, Hérida Regina N. Salgado

Abstract:

Daptomycin is an important antimicrobial agent in current clinical practice, since it is very active against some Gram-positive bacteria that pose particular challenges to medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). Environmental preservation has received special attention in recent years. Considering the evident need to protect the natural environment and the introduction of strict quality requirements for analytical procedures used in pharmaceutical analysis, industry must seek environmentally friendly alternatives for its routine analytical methods and other processes. In view of these factors, green analytical chemistry is now prevalent and encouraged, and in this context infrared spectroscopy stands out: it is a method that does not use organic solvents and, although formally accepted for the identification of individual compounds, it also allows the quantitation of substances. Considering that few green analytical methods are described in the literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After the infrared spectrum was obtained, quantitative analysis was carried out with the assistance of the IR Solution software in the spectral region between 1575 and 1700 cm⁻¹, corresponding to a carbonyl band of the daptomycin molecule, whose height was analyzed in terms of absorbance.
The method was validated according to ICH guidelines with regard to linearity, precision (repeatability and intermediate precision), accuracy and robustness. Results and discussion: The method proved linear (r = 0.9999), precise (RSD% < 2.0), accurate and robust over a concentration range of 0.2 to 0.6 mg/pellet. In addition, the technique does not use organic solvents, a great advantage over the most common analytical methods: it minimizes the generation of organic solvent waste by industry and thereby reduces the environmental impact of its activities. Conclusion: The validated method proved adequate for quantifying daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, in line with the global trend.
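The linearity result above corresponds to fitting a least-squares calibration line of carbonyl-band absorbance against drug amount per pellet, in Beer-Lambert fashion. A hedged sketch follows, with invented, perfectly linear absorbance readings over the study's 0.2-0.6 mg/pellet range (a real calibration would use measured band heights):

```python
# Hedged sketch: least-squares calibration line and back-calculation of
# an unknown sample amount from its absorbance.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

amounts = [0.2, 0.3, 0.4, 0.5, 0.6]       # mg/pellet (range from study)
absorb = [0.11, 0.16, 0.21, 0.26, 0.31]   # illustrative band heights
slope, intercept = fit_line(amounts, absorb)
unknown = (0.19 - intercept) / slope      # mg in an unknown pellet
```

Inverting the fitted line, as in the last step, is how the drug content of a routine quality-control sample would be read off its measured absorbance.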

Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region

Procedia PDF Downloads 382
13406 Shark Detection and Classification with Deep Learning

Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti

Abstract:

Effective shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images, and packaged the object-detection and image-classification models into a Shark Detector bundle. The Shark Detector recognizes and classifies sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common shark data-generation approaches: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested how genus and species prediction correctness varies with training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). It sorted heterogeneous image datasets sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera, and the average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species.
All data-generation methods were processed without manual interaction. As media-based remote monitoring comes to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.

Keywords: classification, data mining, Instagram, remote monitoring, sharks

Procedia PDF Downloads 125
13405 A Comprehensive Safety Analysis for a Pressurized Water Reactor Fueled with Mixed-Oxide Fuel as an Accident Tolerant Fuel

Authors: Mohamed Y. M. Mohsen

Abstract:

The viability of utilising mixed-oxide fuel (MOX) ((U₀.₉, rgPu₀.₁)O₂) as an accident-tolerant fuel (ATF) has been thoroughly investigated; MOX fuel provides the best example of a nuclear waste recycling process. The MCNPX 2.7 code was used to determine the main neutronic features, especially the radial power distribution, in order to identify the hot channel on which the thermal-hydraulic (TH) study was performed. Based on computational fluid dynamics, a rod-centered thermal-hydraulic subchannel model was implemented in COMSOL Multiphysics. The TH analysis was used to determine the axial and radial temperature distributions of the fuel and cladding materials, as well as the departure from nucleate boiling ratio (DNBR) along the coolant channel. COMSOL Multiphysics can simulate coupled multiphysics problems, such as the coupling between heat transfer and solid mechanics, and this coupling was used to simulate the main solid-structure parameters: the von Mises stress, volumetric strain, and displacement. When the neutronic, TH, and solid-structure performances of UO₂ and ((U₀.₉, rgPu₀.₁)O₂) were compared, the results showed considerable improvement and increased safety margins with ((U₀.₉, rgPu₀.₁)O₂).

Keywords: mixed-oxide, MCNPX, neutronic analysis, COMSOL-multiphysics, thermal-hydraulic, solid structure

Procedia PDF Downloads 110
13404 Vision Zero for the Caribbean Using the Systemic Approach for Road Safety: A Case Study Analyzing Jamaican Road Crash Data (Ongoing)

Authors: Rachelle McFarlane

Abstract:

The Second Decade of Action for Road Safety has begun with increased focus on countries that are disproportionately affected by road fatalities. Researchers highlight the low effectiveness of road safety campaigns in Latin America and the Caribbean (LAC), a region still reporting approximately 130,000 deaths and six million injuries annually. The regional fatality rate is 19.2 per 100,000, with heightened concern for persons aged 15 to 44 years. In 2021, 483 Jamaicans died in 435 crashes, with 33% of these fatalities occurring during Covid-19 curfew hours. The study objective is to conduct a systemic safety review of Jamaican road crashes and provide a framework for its use in complementing traditional methods. The methodology involves the use of the FHWA Systemic Safety Project Selection Tool for analysis. This tool reviews systemwide data in order to identify risk factors across the network associated with severe and fatal crashes, rather than only hotspots. A total of 10,379 crashes with 745 fatalities and serious injuries were reviewed. Of the focus crash types listed, 50% of 'Pedestrian' crashes resulted in fatalities and serious injuries, followed by 32% of 'Bicycle', 24% of 'Single', and 12% of 'Head-on' crashes. This study seeks to understand the risk factors associated with these priority crash types across the network and to recommend cost-effective countermeasures across common sites. As we press towards Vision Zero, the inclusion of the systemic safety review method, complementing traditional methods, may create a wider impact in reducing road fatalities and serious injuries by targeting issues that share focus crash types and contributing factors across the network.
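The systemic-style ranking of focus crash types by their share of fatal and serious outcomes can be sketched as a simple aggregation; the crash counts below are hypothetical placeholders chosen to reproduce the reported percentages, not the Jamaican dataset itself:

```python
# Sketch of ranking focus crash types by the share that results in a
# fatality or serious injury (KSI). Counts are hypothetical placeholders.

def ksi_share(crashes):
    """crashes: {type: (total, ksi)} -> {type: KSI share of total}."""
    return {t: ksi / total for t, (total, ksi) in crashes.items()}

sample = {
    "Pedestrian": (200, 100),   # 50% KSI (illustrative counts)
    "Bicycle":    (100, 32),    # 32%
    "Single":     (400, 96),    # 24%
    "Head-on":    (250, 30),    # 12%
}

shares = ksi_share(sample)
for crash_type, share in sorted(shares.items(),
                                key=lambda kv: kv[1], reverse=True):
    print(f"{crash_type}: {share:.0%}")
```

Ranking by severity share rather than raw crash counts is what distinguishes the systemic approach from conventional hotspot analysis.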

Keywords: systemic safety review, risk factors, road crashes, crash types

Procedia PDF Downloads 94
13403 Decision Support System Based On GIS and MCDM to Identify Land Suitability for Agriculture

Authors: Abdelkader Mendas

Abstract:

The integration of MultiCriteria Decision Making (MCDM) approaches in a Geographical Information System (GIS) provides a powerful spatial decision support system which offers the opportunity to efficiently produce land suitability maps for agriculture. Indeed, GIS is a powerful tool for analyzing spatial data and establishing a process for decision support. Because of their spatial aggregation functions, MCDM methods can facilitate decision making in situations where several solutions are available, various criteria have to be taken into account, and decision-makers are in conflict. The parameters and the classification system used in this work are inspired by the FAO (Food and Agriculture Organization) approach dedicated to sustainable agriculture. The main purpose of this research is to propose a conceptual and methodological framework for the combination of GIS and multicriteria methods in a single coherent system that takes into account the whole process, from the acquisition of spatially referenced data to decision-making. In this context, a spatial decision support system for establishing land suitability maps for agriculture has been developed: the algorithm of the multicriteria analysis method ELECTRE Tri (ELimination Et Choix Traduisant la REalité) is incorporated into a GIS environment and added to the other analysis functions of the GIS. This approach has been tested on an area in Algeria, and a land suitability map for durum wheat has been produced. The obtained results show that the ELECTRE Tri method, integrated into a GIS, is well suited to the problem of land suitability for agriculture. The coherence of the obtained maps confirms the system's effectiveness.
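The category-assignment idea behind ELECTRE Tri can be sketched in a heavily simplified form: each land unit is compared against ordered category boundary profiles via a weighted concordance index. The criteria, weights, profiles, and cutting level below are illustrative assumptions, not those of the study, and the discordance/veto part of the full method is omitted:

```python
# A highly simplified sketch of the outranking idea behind ELECTRE Tri:
# a land unit is assigned to a suitability category by comparing it with
# category boundary profiles. Criteria, weights, and thresholds are
# illustrative assumptions; discordance is omitted.

def concordance(unit, profile, weights):
    """Weighted share of criteria on which the unit meets the profile."""
    total = sum(weights.values())
    met = sum(w for c, w in weights.items() if unit[c] >= profile[c])
    return met / total

def assign_category(unit, profiles, weights, cut=0.7):
    """Pessimistic assignment: best category whose profile is outranked."""
    for name, profile in profiles:  # ordered best -> worst
        if concordance(unit, profile, weights) >= cut:
            return name
    return "unsuitable"

weights = {"soil_depth": 0.4, "slope_score": 0.3, "rainfall": 0.3}
profiles = [
    ("highly suitable",
     {"soil_depth": 0.8, "slope_score": 0.8, "rainfall": 0.7}),
    ("moderately suitable",
     {"soil_depth": 0.5, "slope_score": 0.5, "rainfall": 0.4}),
]

unit = {"soil_depth": 0.9, "slope_score": 0.6, "rainfall": 0.5}
print(assign_category(unit, profiles, weights))  # "moderately suitable"
```

In the GIS integration, this assignment would run per raster cell or parcel, producing the suitability map directly from the criterion layers.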

Keywords: multicriteria decision analysis, decision support system, geographical information system, land suitability for agriculture

Procedia PDF Downloads 644
13402 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, have exhibited an alarming surge in frequency in recent times, further adding to a perennial global concern. Forest fires often lead to devastating consequences ranging from loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management.
This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 77
13401 Computational Analysis of Variation in Thrust of Oblique Detonation Ramjet Engine With Adaptive Inlet

Authors: Aditya, Ganapati Joshi, Vinod Kumar

Abstract:

In the modern-warfare era, the prime requirement is high speed and Mach number. When a missile strikes in the hypersonic regime, the opponent can detect it with an anti-defence system but cannot stop it from causing damage. Two engine types can operate at these speeds: the ramjet and the scramjet. The ramjet becomes problematic when the Mach number exceeds 4, as the static pressure at the inlet approaches the exit pressure. The scramjet addresses this problem: it works on much the same principle, but the flow is not slowed down in the diffuser as much as in the ramjet. However, it suffers from problems such as inlet buzz, thermal choking, mixing of fuel and oxidizer, thermal heating, and many more. The engine developed here follows the same principle as the scramjet, but combustion occurs by detonation instead of deflagration. A problem arises when the Mach number varies while the inlet geometry is fixed; this leads to inlet spillage, which adversely affects thrust. Here, an adaptive inlet made of shape memory alloys is proposed, which enhances the inlet mass flow rate as well as the thrust.
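Why spillage degrades thrust can be sketched with the ideal momentum-thrust relation F = ṁ(vₑ − v₀): spilled flow directly reduces the captured mass flow ṁ. The mass flow, velocities, and the simple 20% spillage factor below are illustrative assumptions, not results from the study:

```python
# Sketch of the thrust penalty from inlet spillage: ideal momentum
# thrust F = mdot * (v_exit - v_flight). All values are illustrative.

def thrust(mdot, v_exit, v_flight):
    """Ideal momentum thrust in newtons; fuel mass flow neglected."""
    return mdot * (v_exit - v_flight)

mdot_design = 50.0    # design capture mass flow, kg/s (assumed)
v_exit = 2400.0       # nozzle exit velocity, m/s (assumed)
v_flight = 1700.0     # flight velocity, m/s (assumed)

fixed_inlet = thrust(mdot_design * 0.8, v_exit, v_flight)  # 20% spillage
adaptive_inlet = thrust(mdot_design, v_exit, v_flight)     # spillage recovered

print(fixed_inlet, adaptive_inlet)  # 28000.0 N vs 35000.0 N
```

An adaptive inlet that restores the design capture area recovers the spilled mass flow, and the thrust gain scales with it directly.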

Keywords: detonation, ramjet engine, shape memory alloy, ignition delay, shock-boundary layer interaction, eddy dissipation, asymmetric nozzle

Procedia PDF Downloads 107
13400 Interplay of Power Management at Core and Server Level

Authors: Jörg Lenhardt, Wolfram Schiffmann, Jörg Keller

Abstract:

As the feature sizes of recent Complementary Metal Oxide Semiconductor (CMOS) devices decrease, static power increasingly dominates their energy consumption. Thus, power savings obtained from Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the relative difference between idle and fully loaded power consumption increases. This means that future chips and whole server systems gain more power saving potential through power-aware load balancing, whereas in former times this power saving approach had only limited effect and thus was not widely adopted. While powering off complete servers has been used to save energy, it will be superfluous in many cases once cores can be powered down. An important advantage that comes with this is a largely reduced time to respond to increased computational demand. We include the above developments in a server power model and quantify the advantage. Our conclusion is that strategies from datacenters for when to power off server systems might be used in the future at the core level, while load balancing mechanisms previously used at the core level might be used in the future at the server level.
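A server power model of the kind described can be sketched with per-core idle and maximum power plus a shared static floor; gating unused cores removes their idle draw and widens the idle-to-loaded gap. All wattages below are illustrative assumptions, not figures from the paper:

```python
# Sketch of a server power model with power-gated cores. Unused cores
# draw no power; active cores draw idle power plus a utilization-
# proportional dynamic share. All wattages are illustrative assumptions.

def server_power(active_cores, utilization, p_static=20.0,
                 p_core_idle=5.0, p_core_max=15.0):
    """Total server power (W); utilization in [0, 1] per active core."""
    per_core = p_core_idle + (p_core_max - p_core_idle) * utilization
    return p_static + active_cores * per_core

busy = server_power(active_cores=8, utilization=1.0)       # fully loaded
idle_gated = server_power(active_cores=1, utilization=0.0) # 7 cores off

print(busy, idle_gated)  # 140.0 W vs 25.0 W
```

The large busy-to-idle ratio is what makes power-aware load balancing worthwhile: consolidating load onto few servers (or few cores) lets the rest drop close to the static floor.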

Keywords: power efficiency, static power consumption, dynamic power consumption, CMOS

Procedia PDF Downloads 224
13399 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of a UNet-based segmentation part, lung extraction, and a classification part, with optional slice-removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices and is rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
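The Dice score used to compare the segmentation methods above can be sketched as follows; the tiny binary masks are illustrative, not data from the COV19-CT-DB dataset:

```python
import numpy as np

# Minimal sketch of the Dice coefficient between two binary
# segmentation masks: Dice = 2 |A ∩ B| / (|A| + |B|).

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (0..1)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
target = np.array([[1, 1, 0],
                   [0, 0, 0],
                   [0, 1, 0]])

print(round(float(dice_score(pred, target)), 3))  # 0.667
```

In practice the score is computed per CT slice against the annotated lung or lesion mask and averaged over the validation set when comparing segmentation variants.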

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 137
13398 Time Pressure and Its Effect at Tactical Level of Disaster Management

Authors: Agoston Restas

Abstract:

Introduction: In managing disasters, decision makers often face situations in which any early sign of a drastic change is missing, so that improvised decision making is required. The complexity, ambiguity, uncertainty, or volatility of the situation can often demand improvisation in decision making. This can occur at any level of management (strategic, operational, and tactical), but at the tactical level the main reason for improvisation is surely time pressure, which is certainly the biggest problem during the management. Methods: The author used different tools and methods to achieve his goals: one of them was the study of the relevant literature; another was his own experience as a firefighting manager. Other results come from two surveys: one was an essay analysis, the second a word association test specially created for the research. Results and discussion: This article shows that, in certain situations, multi-criteria, evaluative decision-making processes simply cannot be used, or only in a limited manner. However, managers, directors, or commanders often find themselves in situations that cannot be ignored, in which decisions must be made in a short time. The functional background of decisions made in a short time, and their mechanism, which differs from the conventional one, have been studied recently, and this special decision procedure was given the name recognition-primed decision. In the article, the author illustrates the limits of analytical decision-making, presents the general operating mechanism of recognition-primed decision-making, elaborates on its special model relevant to managers at the tactical level, and explores and systemizes the factors that facilitate (catalyze) the process, with an example involving fire managers.

Keywords: decision making, disaster managers, recognition primed decision, model for making decisions in emergencies

Procedia PDF Downloads 262
13397 Aerodynamic Study of Formula 1 Car in Upside Down Configuration

Authors: Hrishit Mitra, Saptarshi Mandal

Abstract:

The study of aerodynamics for Formula 1 cars is crucial in determining their performance. In the current F1 industry, where each engine manufacturer exhibits a torque and peak speed that differ by less than 5%, maximizing performance depends heavily on the utilization of aerodynamics. This work examines the aerodynamic characteristics of an F1 car using computational fluid dynamics in order to substantiate the hypothesis that an F1 car can drive upside down in a tunnel without any external assistance, purely due to the downforce it produces. In addition, this study suggests the implementation of a 'flexi-wing' front in F1 cars to optimize downforce and reduce drag. Furthermore, this paper provides a concise overview of the historical development of aerodynamics in F1, with specific emphasis on the progression of aerodynamics and the impact of downforce on vehicle dynamics. Next, an examination of the wings is provided, comparing the performance of the suggested wing at high and low speeds. Three simulations were conducted: one to test the complete aerodynamics and validate the hypothesis discussed above, and two focused specifically on the flexi wing, one at high speed and one at low speed. The collected results were examined to analyze the performance of the front flexi wing. The performance analysis was based on the measured downforce and drag coefficient, as well as the pressure and velocity distributions.
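The upside-down hypothesis reduces to a simple inequality: inverted driving is possible once downforce exceeds weight, 0.5 ρ v² C_L A > m g. The lift coefficient, reference area, and car mass below are rough illustrative values, not figures from the study:

```python
# Sketch of the inverted-driving condition: aerodynamic downforce must
# exceed the car's weight. Coefficients and mass are rough assumptions.

def downforce(v, c_l=3.5, area=1.5, rho=1.225):
    """Aerodynamic downforce (N) at speed v (m/s)."""
    return 0.5 * rho * v**2 * c_l * area

def can_drive_inverted(v, mass=798.0, g=9.81):
    """True once downforce exceeds weight m*g."""
    return downforce(v) > mass * g

print(can_drive_inverted(30.0))  # ~2.9 kN downforce < ~7.8 kN weight
print(can_drive_inverted(60.0))  # ~11.6 kN downforce > weight
```

Because downforce grows with v², the condition flips at some threshold speed; a CFD study like the one above effectively pins down the real C_L·A that sets that threshold.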

Keywords: high speed flexi wing, low speed flexi wing, F1 car aerodynamics, F1 car drag reduction

Procedia PDF Downloads 19
13396 Viscous Flow Computations for the Diffuser Section of a Large Cavitation Tunnel

Authors: Ahmet Y. Gurkan, Cagatay S. Koksal, Cagri Aydin, U. Oral Unal

Abstract:

The present paper covers the viscous flow computations for the asymmetric diffuser section of a large, high-speed cavitation tunnel which will be constructed at Istanbul Technical University. The analyses were carried out using the incompressible Reynolds-Averaged Navier-Stokes (RANS) equations. While determining the diffuser geometry, a high-quality, separation-free flow field with minimum energy losses was particularly aimed for. The expansion angle has a critical role in the diffuser's hydrodynamic performance. In order to obtain a relatively short diffuser length, due to the constructive limitations, together with hydrodynamic energy effectiveness, three diffuser sections with varying expansion angles for the side and bottom walls were considered. A systematic study was performed to determine the most effective diffuser configuration. The results revealed that the inlet condition of the diffuser greatly affects its flow field. The inclusion of the contraction section in the computations substantially modified the flow topology in the diffuser. The effect of the diffuser flow on the test section flow characteristics was clearly observed. The influence of introducing small chamfers at the corners of the diffuser geometry is also presented.
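The length-versus-angle trade-off behind the design can be sketched with the ideal (loss-free, incompressible) pressure recovery, Cp = 1 − 1/AR², and simple geometry: a larger expansion angle shortens the diffuser for the same area ratio but risks separation. The dimensions below are illustrative, not those of the Istanbul Technical University tunnel:

```python
import math

# Sketch of the diffuser design trade-off: ideal pressure recovery is
# fixed by the area ratio, while the expansion angle sets the length.
# Dimensions are illustrative assumptions.

def ideal_pressure_recovery(area_ratio):
    """Cp for an incompressible, loss-free diffuser: 1 - 1/AR^2."""
    return 1.0 - 1.0 / area_ratio**2

def diffuser_length(inlet_h, outlet_h, half_angle_deg):
    """Length needed to expand between the two heights at a half angle."""
    return (outlet_h - inlet_h) / (2.0 * math.tan(math.radians(half_angle_deg)))

print(ideal_pressure_recovery(2.0))    # 0.75 for area ratio 2
print(diffuser_length(1.0, 2.0, 3.0))  # ~9.5 m at a 3-degree half angle
print(diffuser_length(1.0, 2.0, 5.0))  # ~5.7 m at 5 degrees: shorter, riskier
```

The RANS computations then tell how much of the ideal Cp is actually recovered at each candidate angle, and whether the shorter geometry stays separation-free.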

Keywords: asymmetric diffuser, diffuser design, cavitation tunnel, viscous flow, computational fluid dynamics (CFD), rans

Procedia PDF Downloads 366
13395 Reduction of Aerodynamic Drag Using Vortex Generators

Authors: Siddharth Ojha, Varun Dua

Abstract:

One of the most important sources of aerodynamic drag in sedan automobiles is fluid flow separation near the vehicle's rear end. To retard the separation of flow, bump-shaped vortex generators are being tested for implementation at the roof end of a sedan vehicle. Frequently used in aircraft to prevent the separation of fluid flow, vortex generators themselves produce drag, but they also substantially reduce drag by preventing flow separation downstream. The net effect of vortex generators can be calculated by summing their positive and negative impacts. Since this effect depends on the dimensions and geometry of the vortex generators, those present on the vehicle roof are optimized for maximum efficiency and performance. The model was tested through ANSYS CFD analysis and modeling, and in the wind tunnel for observing properties such as aerodynamic drag and flow separation; a significant delay in flow separation was achieved by employing vortex generators on the scaled model. The major conclusions recorded during the analysis were a substantial 24% reduction in aerodynamic drag and a 14% increase in the efficiency of the sedan automobile, as the flow separation from the surface is delayed. This paper presents the results of the optimization, the effect of vortex generators on the flow field, and the mechanism by which these effects occur and are regulated.
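What a 24% drag reduction means in terms of force can be sketched from the drag equation, F_d = 0.5 ρ C_d A v²; the baseline drag coefficient and frontal area below are typical sedan values assumed for illustration, not measurements from the study:

```python
# Sketch of the drag force before and after a 24% drag reduction,
# F_d = 0.5 * rho * C_d * A * v^2. Baseline values are assumptions.

def drag_force(v, c_d, area=2.2, rho=1.225):
    """Aerodynamic drag (N) at speed v (m/s)."""
    return 0.5 * rho * c_d * area * v**2

c_d_base = 0.32                    # typical sedan drag coefficient (assumed)
c_d_vg = c_d_base * (1.0 - 0.24)   # 24% reduction reported above

v = 33.0  # ~120 km/h
baseline = drag_force(v, c_d_base)
with_vg = drag_force(v, c_d_vg)

print(baseline, with_vg)  # with_vg is 76% of baseline at any speed
```

Because drag power is F_d·v, the same 24% cut in drag translates into a proportional cut in the aerodynamic portion of fuel consumption at cruise.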

Keywords: aerodynamics, aerodynamic devices, body, computational fluid dynamics (CFD), flow visualization

Procedia PDF Downloads 225
13394 Numerical Study on the Performance of Upgraded Victorian Brown Coal in an Ironmaking Blast Furnace

Authors: Junhai Liao, Yansong Shen, Aibing Yu

Abstract:

A 3D numerical model is developed to simulate the complicated in-furnace combustion phenomena in the lower part of an ironmaking blast furnace (BF) using pulverized coal injection (PCI) technology to reduce the consumption of relatively expensive coke. The computational domain covers the blowpipe, tuyere, raceway, and coke bed of the BF. The model is validated against experimental data in terms of gaseous compositions and coal burnout. Parameters such as coal properties and some key operational variables play an important role in the performance of coal combustion. Their diverse effects on different combustion characteristics are examined in the domain in terms of gas compositions, temperature, and burnout. The heat generated by the combustion of upgraded Victorian brown coal is able to meet the heating requirement of a BF, hence making the injection of upgraded brown coal into a BF feasible. It is evidenced that the model is suitable for investigating the mechanism of the PCI operation in a BF. The prediction results provide scientific insights for the optimization and control of the PCI operation. This model reduces the cost of investigating and understanding the comprehensive combustion phenomena of upgraded Victorian brown coal in a full-scale BF.

Keywords: blast furnace, numerical study, pulverized coal injection, Victorian brown coal

Procedia PDF Downloads 246
13393 Analysis of Genomics Big Data in Cloud Computing Using Fuzzy Logic

Authors: Mohammad Vahed, Ana Sadeghitohidi, Majid Vahed, Hiroki Takahashi

Abstract:

In the genomics field, huge amounts of data have been produced by next-generation sequencers (NGS). Data volumes are growing very rapidly; it has been postulated that more than one billion bases will be produced per year in 2020. The growth rate of produced data is much faster than Moore's law in computer technology. This makes it more difficult to deal with genomics data, such as storing the data, searching for information, and finding hidden information. It is therefore necessary to develop an analysis platform for genomics big data. Newly developed cloud computing enables us to deal with big data more efficiently. Hadoop is one of the frameworks for distributed computing and forms the core of Big Data as a Service (BDaaS). Although many services, e.g., Amazon, have adopted this technology, there are few applications in the biology field. Here, we propose a new algorithm to deal more efficiently with genomics big data, e.g., sequencing data. Our algorithm consists of two parts: first, BDaaS is applied for handling the data more efficiently; second, a hybrid method of MapReduce and fuzzy logic is applied for data processing. This step can be parallelized in implementation. Our algorithm has great potential in the computational analysis of genomics big data, e.g., de novo genome assembly and sequence similarity search. We will discuss our algorithm and its feasibility.
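The MapReduce half of the approach can be sketched on a single machine as k-mer counting over sequencing reads: a map phase emits (k-mer, 1) pairs and a reduce phase sums them. In a real deployment these phases would run distributed on Hadoop, and the fuzzy-logic component is not shown; the reads and k below are illustrative:

```python
from collections import defaultdict

# Single-machine sketch of the MapReduce pattern applied to k-mer
# counting in sequencing reads. Reads and k are illustrative.

def map_phase(read, k=3):
    """Emit (k-mer, 1) pairs from one read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_phase(pairs):
    """Sum counts per k-mer (the shuffle step is implicit here)."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["GATTACA", "TTACAG"]
pairs = [p for read in reads for p in map_phase(read)]
counts = reduce_phase(pairs)
print(counts["TAC"])  # "TAC" appears in both reads -> 2
```

Because each read is mapped independently and each k-mer is reduced independently, both phases parallelize naturally across cluster nodes, which is what makes the pattern a fit for NGS-scale data.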

Keywords: big data, fuzzy logic, MapReduce, Hadoop, cloud computing

Procedia PDF Downloads 303
13392 Determination of Identification and Antibiotic Resistance Rates of Pseudomonas aeruginosa Strains from Various Clinical Specimens in a University Hospital for Two Years, 2013-2015

Authors: Recep Kesli, Gulsah Asik, Cengiz Demir, Onur Turkyilmaz

Abstract:

Objective: Pseudomonas aeruginosa (P. aeruginosa) is an important nosocomial pathogen which causes serious hospital infections and is resistant to many commonly used antibiotics. P. aeruginosa can develop resistance during therapy, and it is also very resistant to disinfectant chemicals; it may be found in respiratory support devices in hospitals. In this study, the antibiotic resistance of P. aeruginosa strains isolated from bronchial aspiration samples was evaluated retrospectively. Methods: Between October 2013 and September 2015, a total of 318 P. aeruginosa isolates were obtained from clinical samples from various intensive care units and inpatients hospitalized at Afyon Kocatepe University, ANS Practice and Research Hospital. Isolated bacteria were identified using both conventional methods and the automated identification system VITEK 2 (bioMérieux, Marcy l'Étoile, France). Antibacterial resistance tests were performed using the Kirby-Bauer disc diffusion method (Oxoid, Hampshire, England) following the recommendations of the CLSI. Results: The antibiotic resistance rates of the 318 identified P. aeruginosa strains were as follows for the tested antibiotics: amikacin 32%, gentamicin 42%, imipenem 43%, meropenem 43%, ciprofloxacin 50%, levofloxacin 57%, cefepime 38%, ceftazidime 63%, and piperacillin/tazobactam 85%. Conclusion: Resistance profiles change with years and provinces for P. aeruginosa, so these findings should be considered in empirical treatment choices. In this study, the highest and lowest resistance rates were found against piperacillin/tazobactam (85%) and amikacin (32%).

Keywords: Pseudomonas aeruginosa, antibiotic resistance rates, intensive care unit, Pseudomonas spp.

Procedia PDF Downloads 293
13391 Simultaneous Bilateral Patella Tendon Rupture: A Systematic Review

Authors: André Rui Coelho Fernandes, Mariana Rufino, Divakar Hamal, Amr Sousa, Emma Fossett, Kamalpreet Cheema

Abstract:

Aim: A single patella tendon rupture is relatively uncommon, but a simultaneous bilateral event is a rare occurrence and has been scarcely reviewed in the literature. This review was carried out to analyse the existing literature on this event, with the aim of proposing a standardised approach to the diagnosis and management of this injury. Methods: A systematic review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Three independent reviewers conducted searches in PubMed, OvidSP for Medline and Embase, as well as the Cochrane Library, using the same search strategy. From a total of 183 studies, 45 were included, covering 90 patellas. Results: 46 patellas had a Type 1 rupture, equating to 51%, with Type 3 being the least common, with only 7 patellas sustaining this injury. The mean Insall-Salvati ratio for each knee was 1.62 (right) and 1.60 (left). Direct primary repair was the most common surgical technique compared to tendon reconstruction, with end-to-end and transosseous techniques split almost equally. Brace immobilisation was preferred over cast, with a mean start to weight-bearing of 3.23 weeks post-operatively. Conclusions: Bilateral patellar tendon rupture is a rare injury that should be considered in patients with knee extensor mechanism disruption. The key limitation of this study was the low number of patients encompassed by the eligible literature. There is space for a higher level of evidence study, specifically regarding surgical treatment choice and methods, as well as post-operative management, which could potentially improve the outcomes in the management of this injury.
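The Insall-Salvati ratio reported above is a simple quotient measured on a lateral radiograph: patellar tendon length divided by patellar length, with values above roughly 1.2 conventionally suggesting patella alta, as expected after tendon rupture. The measurements below are illustrative, not patient data from the review:

```python
# Sketch of the Insall-Salvati ratio: patellar tendon length divided by
# the longest sagittal patellar length. Measurements are illustrative.

def insall_salvati(tendon_length_mm, patella_length_mm):
    """Ratio > ~1.2 conventionally suggests patella alta."""
    return tendon_length_mm / patella_length_mm

ratio = insall_salvati(tendon_length_mm=51.8, patella_length_mm=32.0)
print(round(ratio, 2))  # ~1.62, the order of the review's right-knee mean
```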

Keywords: trauma and orthopaedic surgery, bilateral patella, tendon rupture, trauma

Procedia PDF Downloads 141
13390 Endoscopic Treatment of Esophageal Injuries Using Vacuum Therapy

Authors: Murad Gasanov, Shagen Danielyan, Ali Gasanov, Yuri Teterin, Peter Yartsev

Abstract:

Background: Despite the advances made in modern surgery, the treatment of patients with esophageal injuries remains one of the most topical and complex issues. In recent years, high-technology minimally invasive methods, such as endoscopic vacuum therapy (EVT), have been introduced in the treatment of esophageal injuries. The effectiveness of EVT has been sufficiently studied in cases of failure of esophageal anastomoses; however, the application of this method to mechanical esophageal injuries is limited to a small series of observations, indicating the need for additional study. Aim: The aim was to analyze our own experience in the use of endoscopic vacuum therapy (EVT) in the comprehensive treatment of patients with esophageal injuries. Methods: We analyzed the results of treatment of 24 patients with mechanical injuries of the esophagus over the period 2019-2021. Complex treatment of the patients included the use of minimally invasive technologies, including percutaneous endoscopic gastrostomy (PEG), EVT, and video-assisted thoracoscopic debridement. Evaluation of the effectiveness of treatment was carried out using multislice computed tomography (MSCT), endoscopy, and laboratory tests. The duration of inpatient treatment, the duration of EVT, the number of system replacements, complications, and mortality were taken into account. Results: EVT in patients with mechanical injuries of the esophagus achieved epithelialization of the esophageal defect in 21 patients (87.5%), in the form of a linear scar at the site of perforation or a pseudodiverticulum. Complications were noted in 4 patients (16.6%), including bleeding (2) and esophageal stenosis in the perforation area (2). A lethal outcome occurred in one case (4.2%). Conclusion: EVT may be the method of choice in the complex treatment of patients with esophageal lesions.

Keywords: esophagus injuries, damage to the esophagus, perforation of the esophagus, spontaneous perforation of the esophagus, mediastinitis, endoscopic vacuum therapy

Procedia PDF Downloads 110