Search results for: computer vision
135 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle
Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores
Abstract:
This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone control that goes beyond direct manual input. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. The raw voltages from each sensor were collected with an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, which controlled the motion of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were a resting position, a thumbs up, a hand swipe right motion, and a flexing position. The MATLAB software was used to collect, process, and analyze the signals from the sensors, and a machine learning protocol was used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. Whenever an output probability was greater than or equal to 80% for a specific target class, the drone performed the expected motion, and each movement was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs classified by the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the details of the ANN architecture, and the confusion matrix results.
Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino
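The feature pipeline described above is straightforward to reproduce. The following minimal Python sketch, offered as an illustration rather than the authors' MATLAB code, computes the three window features (mean, RMS, standard deviation) over two-second windows and applies the 80% probability gate to the network output; the 4 Hz sampling rate, gesture labels, and probability values are assumptions for illustration.

```python
import numpy as np

def window_features(signal, fs, window_s=2.0):
    """Split an EMG voltage trace into windows and compute the three
    features the abstract describes: mean, RMS, and standard deviation."""
    n = int(fs * window_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    return np.array([
        [w.mean(), np.sqrt(np.mean(w ** 2)), w.std()] for w in windows
    ])

def command_from_probabilities(probs, labels, threshold=0.80):
    """Gate the ANN output: act only when a class probability >= 80%."""
    best = int(np.argmax(probs))
    return labels[best] if probs[best] >= threshold else None

# Illustrative use with a synthetic trace sampled at 4 Hz
fs = 4  # gives 8 samples per two-second window, as in the abstract
emg = np.random.rand(fs * 60)  # one minute of readings
X = window_features(emg, fs)
cmd = command_from_probabilities(np.array([0.05, 0.87, 0.05, 0.03]),
                                 ["rest", "up", "right", "land"])
print(X.shape, cmd)
```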
Procedia PDF Downloads 174
134 Bank Failures: A Question of Leadership
Authors: Alison L. Miles
Abstract:
Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation, and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during times of crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO, and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, and also to the debate on risk management and regulatory reform in the banking sector. In order to first test the proposition (P1) that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the 65 largest global banks and financial institutions according to the Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, was collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO, and CFO. The results were then compared with leadership models to form a picture of leadership in the sector during the research period. As this resulted in separate results that needed combining, the SPSS data editor was used to aggregate the results across the studies using the variables 'leadership style' and 'company financial performance', together with the size of the company. In order to test the proposition (P2) that there was a prevalent leadership style in the banks that failed, and the proposition (P3) that this was different from those that did not, further quantitative analysis was carried out on the leadership styles of the chair, CEO, and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included Lehman Bros, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis, and Allied Irish. The findings show that although regulatory reform has been a key mechanism for controlling behavior in the banking sector, consideration of the leadership characteristics of those running the board is a key factor. They add weight to the argument that if each crisis is met with the same pattern of popular fury with the financier, increased regulation, followed by back to business as usual, the cycle of failure will always be repeated, and they show that, through a different lens, new paradigms can be formed and future clashes avoided.
Keywords: banking, financial crisis, leadership, risk
Procedia PDF Downloads 318
133 Modelling of Groundwater Resources for Al-Najaf City, Iraq
Authors: Hayder H. Kareem, Shunqi Pan
Abstract:
Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region, where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources have become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict flow patterns and assess water resource security, as well as groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in the mid-west of Iraq, adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model also incorporates the distribution of 69 wells in the area, with steady pre-defined hydraulic heads along its boundaries. The model is then run with a recharge rate (from precipitation) of 7.55 mm/year, obtained from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated against the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement: the standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE, and correlation coefficient are 0.297 m, 2.087 m, 6.899%, and 0.971, respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year. Hydraulic conductivity is found to be another parameter that can affect the results significantly; therefore, it requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11759 m3/day to the groundwater, instead of gaining 11178 m3/day from the groundwater if there were no pumping from the wells. The results obtained from the study are expected to provide important information for the sustainable and effective planning and management of the regional groundwater resources for Al-Najaf City.
Keywords: Al-Najaf City, conceptual modelling, groundwater, unconfined aquifer, Visual MODFLOW
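For readers who want to reproduce the calibration statistics quoted above, the following Python sketch computes RMSE, normalized RMSE, and the correlation coefficient from paired observed and simulated heads. It is a minimal sketch, not part of the original study; the SEE and normalization conventions shown are common choices and are assumptions here.

```python
import numpy as np

def calibration_stats(observed, simulated):
    """Goodness-of-fit measures of the kind reported for the 50
    calibration wells: SEE, RMSE, normalized RMSE, and correlation."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residuals = simulated - observed
    rmse = np.sqrt(np.mean(residuals ** 2))
    nrmse = rmse / (observed.max() - observed.min())  # range-normalized
    r = np.corrcoef(observed, simulated)[0, 1]
    # One common SEE convention; the study's exact definition may differ.
    see = residuals.std(ddof=1) / np.sqrt(len(observed))
    return {"SEE_m": see, "RMSE_m": rmse, "NRMSE_%": 100 * nrmse, "r": r}

heads_obs = np.array([18.2, 17.9, 16.5, 15.8, 14.9])   # hypothetical heads (m)
heads_sim = np.array([18.0, 18.3, 16.1, 15.6, 15.3])
print(calibration_stats(heads_obs, heads_sim))
```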
Procedia PDF Downloads 213
132 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation leads to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. A total of 49 slices were obtained from a 150 mm cube, with an interval between slices of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later undergo a compression test in a universal testing machine (UTM) to determine its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated in ABAQUS by importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from actual structures, and digital image processing can be used to find the shape and contents of aggregates in concrete. This may be further compared with test results of concrete cores and used as an important tool for the strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
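The pixel-classification step described above maps naturally onto a few lines of NumPy. The sketch below illustrates the abstract's 0-9 relative-strength scale (0 for voids, 1-3 for the aggregate-mortar boundary, 4-6 for mortar, 7-9 for aggregates); the linear rescaling from 8-bit grayscale is an assumption, since the authors' exact calibration is not given.

```python
import numpy as np

# Constituent labels for the abstract's 0-9 relative-strength scale:
# 0 -> void, 1-3 -> ITZ boundary, 4-6 -> mortar, 7-9 -> aggregate.
VOID, ITZ, MORTAR, AGGREGATE = 0, 1, 2, 3

def classify_slice(gray_slice):
    """Map an 8-bit grayscale CT slice (values 0-255) onto the four
    mesoscale constituents. The linear rescaling to 0-9 is assumed."""
    scaled = np.round(gray_slice.astype(float) / 255.0 * 9).astype(int)
    labels = np.full(scaled.shape, MORTAR)   # default covers 4-6
    labels[scaled == 0] = VOID
    labels[(scaled >= 1) & (scaled <= 3)] = ITZ
    labels[(scaled >= 7) & (scaled <= 9)] = AGGREGATE
    return labels

slice_bw = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
print(np.bincount(classify_slice(slice_bw).ravel(), minlength=4))
```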
Procedia PDF Downloads 241
131 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology
Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov
Abstract:
This work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on a nondestructive testing method that analyzes the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials, and lubricants, based on data about wear processes, in order to increase the service life and ensure the safe operation of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. While the engine is running, oil samples are taken, and the state of the friction surfaces is evaluated according to the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, size ratio, and number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. Such morphological analysis of wear particles has been introduced into the routine of condition monitoring and diagnostics of various aircraft engines, including gas turbine engines, since the characteristic wear type of the central drive and the drive box is surface fatigue wear; the onset of its development, accompanied by the formation of microcracks, leads first to spherical particles up to 10 μm in size and subsequently to flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the drive box of the engine over its operating time, a study of wear kinetics was carried out. From the results of the study, and by comparing the series of tribodiagnostic criteria, wear state ratings, and the statistics of the morphological analysis results, norms for the normal operating regime were developed. The study made it possible to develop wear state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased wear mode and, accordingly, preventing engine failures in flight.
Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle
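As a small illustration of the size-based particle taxonomy described above, the sketch below bins hypothetical particle diameters into the two classes the abstract associates with surface fatigue wear; the exact bin edges are assumptions, since the abstract gives only approximate ranges.

```python
def classify_particle(diameter_um):
    """Bin a wear particle by size following the abstract: spherical
    particles up to ~10 um mark the onset of surface fatigue, flocculent
    particles of 20-200 um mark its later development. Bin edges are
    assumed for illustration."""
    if diameter_um <= 10:
        return "spherical (fatigue onset)"
    if 20 <= diameter_um <= 200:
        return "flocculent (developed fatigue)"
    return "other / unclassified"

counts = {}
for d in [2.5, 8.0, 14.0, 45.0, 120.0, 180.0, 250.0]:  # hypothetical sizes, um
    label = classify_particle(d)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```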
Procedia PDF Downloads 261
130 Variability of Physico-Chemical and Carbonate Chemistry of Seawater in Selected Portions of the Central Atlantic Coastline of Ghana
Authors: Robert Kwame Kpaliba, Dennis Kpakpor Adotey, Yaw Serfor-Armah
Abstract:
The increase in oceanic absorption of carbon dioxide from the atmosphere due to climate change has led to an appreciable change in the chemistry of the oceans. The change in oceanic pH, referred to as ocean acidification, poses multiple threats and stresses to marine species, biodiversity, goods and services, and livelihoods. Marine ecosystems are continuously threatened by a plethora of natural and anthropogenic stressors, including carbon dioxide (CO₂) emissions, causing changes that have not been experienced for approximately 60 years. Little has been done in Africa as a whole, and in Ghana in particular, to improve the understanding of the variations in the carbonate chemistry of seawater and the biophysical impacts of ocean acidification on the security of seafood, nutrition, and climate and environmental change. There is, therefore, a need for regular monitoring of the carbonate chemistry of seawater along Ghana's coastline to generate reliable data to aid marine policy formulation. Samples of seawater were collected three times every month for a one-year period from five study sites, and the various parameters were analyzed. The measured physico-chemical and carbonate chemistry parameters were analyzed using simple statistics, and correlation tests and ANOVA were run on both sets of parameters. The carbonate chemistry parameters, except total alkalinity and pH, were computed using a computer software program (CO₂cal v4.0.9). The study assessed the variability of seawater carbonate chemistry in selected portions of the Central Atlantic coastline of Ghana (Tsokomey/Bortianor, Kokrobitey, Gomoa Nyanyanor, Gomoa Fetteh, and Senya Breku landing beaches) over a 1-year period (June 2016-May 2017). For the physico-chemical parameters, there was insignificant variation in nitrate (NO₃⁻) (1.62-2.3 mg/L), ammonia (NH₃) (1.52-2.05 mg/L), and salinity (34.50-34.74 ppt). The carbonate chemistry parameters for all five study sites showed significant variation: partial pressure of carbon dioxide (pCO₂) (414.08-715.5 µmol/kg), carbonate ion (CO₃²⁻) (115-157.92 µmol/kg), pH (7.9-8.12), total alkalinity (TA) (1711.8-1986 µmol/kg), total carbon dioxide (TCO₂) (1512.1-1792 µmol/kg), dissolved carbon dioxide (CO₂aq) (10.97-18.92 µmol/kg), Revelle factor (RF) (9.62-11.84), aragonite saturation (ΩAr) (0.75-1.48), and calcite saturation (ΩCa) (1.08-2.14). The study revealed that the partial pressure of carbon dioxide and temperature did not have a significant effect on each other (r² = 0.31, p = 0.0717). There was an appreciable effect of pH on dissolved carbon dioxide (r² = 0.921, p < 0.0001). The relationship between total alkalinity and dissolved carbon dioxide was appreciable (r² = 0.731, p = 0.0008). There was a significant correlation between total carbon dioxide and dissolved carbon dioxide (r² = 0.852, p < 0.0001). The Revelle factor correlated strongly with dissolved carbon dioxide (r² = 0.982, p < 0.0001), and the partial pressure of carbon dioxide correlated strongly with atmospheric carbon dioxide (r² = 0.9999, p < 0.0001).
Keywords: carbonate chemistry, seawater, Central Atlantic coastline, Ghana, ocean acidification
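The correlation results quoted above can be reproduced with a few lines of Python. The sketch below uses scipy.stats.pearsonr to report r-squared and p-values in the abstract's style; the monthly values shown are hypothetical, loosely based on the reported ranges.

```python
import numpy as np
from scipy.stats import pearsonr

def r2_and_p(x, y):
    """Pearson correlation reported as r-squared plus p-value, matching
    the reporting style of the abstract (e.g., pH vs. dissolved CO2)."""
    r, p = pearsonr(x, y)
    return r ** 2, p

# Hypothetical monthly means for two of the measured parameters
ph = np.array([8.12, 8.05, 8.01, 7.98, 7.95, 7.93,
               7.90, 7.96, 8.00, 8.04, 8.08, 8.10])
co2_aq = np.array([11.0, 12.1, 13.0, 14.2, 15.1, 16.0,
                   18.9, 15.5, 14.0, 13.2, 12.0, 11.4])  # umol/kg
r2, p = r2_and_p(ph, co2_aq)
print(f"r^2 = {r2:.3f}, p = {p:.4f}")
```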
Procedia PDF Downloads 561
129 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability across a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repeatable test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data from test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small real-world systems. The most prominent EC is Subset Accuracy. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
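Subset Accuracy, the most prominent evaluation criterion above, is deliberately strict: a multi-label prediction only counts as correct if every label in the row matches. As a brief illustration (not the authors' code), scikit-learn's accuracy_score computes exactly this for multi-label indicator matrices:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Rows are test-step specifications; columns are automation components.
# A 1 means the component is part of the step's implementation.
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0],   # one component missed -> whole row wrong
                   [1, 0, 1],
                   [0, 0, 1]])

# For multi-label indicator input, accuracy_score is Subset Accuracy:
# a prediction only counts if every label in the row matches exactly.
print(accuracy_score(y_true, y_pred))  # 0.75
```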
Procedia PDF Downloads 133
128 Using Balanced Scorecard Performance Metrics in Gauging the Delivery of Stakeholder Value in Higher Education: the Assimilation of Industry Certifications within a Business Program Curriculum
Authors: Thomas J. Bell III
Abstract:
This paper explores the value of assimilating certification training within a traditional course curriculum. This innovative approach is believed to increase stakeholder value within the Computer Information Systems program at Texas Wesleyan University. Stakeholder value is obtained from increased job marketability and the critical thinking skills that create employment-ready graduates. This paper views value as first developing the capability to earn an industry-recognized certification, which gives the student greater job placement compatibility while exercising critical thinking skills in a liberal arts business program. Graduates with industry-based credentials are often given preference in the hiring process, particularly in the information technology sector, and without a pioneering curriculum that better prepares students for an ever-changing employment market, a program's educational value is open to question. Since certifications are trending in the hiring process, academic programs should explore the viability of incorporating certification training into teaching pedagogy and course curricula. This study examines the use of the balanced scorecard across four performance dimensions (financial, customer, internal process, and innovation) to measure the stakeholder value of certification training within a traditional course curriculum. The balanced scorecard, as a strategic management tool, may provide insight for prioritizing resources and making the decisions needed to achieve various curriculum objectives and long-term value while meeting the needs of multiple stakeholders, such as students, universities, faculty, and administrators. The research methodology consists of quantitative analysis that includes (1) surveying over one hundred students in the CIS program to learn what factor(s) contributed to their certification exam success or failure, (2) interviewing representatives from the Texas Workforce Commission to identify employment needs and trends in the North Texas (Dallas/Fort Worth) area, (3) reviewing notable Workforce Innovation and Opportunity Act publications on training trends across several local business sectors, and (4) analyzing control variables to identify specific correlations between industry alignment and job placement. These findings may provide helpful insight into impactful pedagogical techniques and curricula that positively contribute to certification credentialing success. Should these industry-certified students land industry-related jobs whose value correlates with their certification credentials, arguably, stakeholder value has been realized.
Keywords: certification exam teaching pedagogy, exam preparation, testing techniques, exam study tips, passing certification exams, embedding industry certification and curriculum alignment, balanced scorecard performance evaluation
Procedia PDF Downloads 108
127 Building Community through Discussion Forums in an Online Accelerated MLIS Program: Perspectives of Instructors and Students
Authors: Mary H Moen, Lauren H. Mandel
Abstract:
Creating a sense of community in online learning is important for student engagement and success. The integration of discussion forums within online learning environments presents an opportunity to explore how this computer-mediated communication format can cultivate a sense of community among students in accelerated master's degree programs. This research has two aims: to delve into the ways instructors utilize this communication technology to create community, and to understand the feelings and experiences of graduate students participating in these forums with regard to their effectiveness in community building. This study takes a two-phase approach encompassing qualitative and quantitative methodologies. The data will be collected in an online accelerated Master of Library and Information Studies program at a public university in the northeast of the United States. Phase 1 is a content analysis of the syllabi from all courses taught in the 2023 calendar year, which explores the format and rules governing discussion forum assignments. Four to six individual interviews of department faculty and part-time faculty will also be conducted to illuminate their perceptions of the successes and challenges of their discussion forum activities. Phase 2 will be an online survey administered to students in the program during the 2023 calendar year. Quantitative data will be collected for statistical analysis, and short-answer responses will be analyzed for themes. The survey is adapted from the Classroom Community Scale Short-Form (CCS-SF), which measures students' self-reported feelings of connectedness and learning. The prompts will contextualize the items in terms of students' experience of discussion forums during the program. Short-answer responses on the challenges and successes of using discussion forums will be analyzed to gauge student perceptions and experiences of using this type of communication technology in education. This research study is in progress. The authors anticipate that the findings will provide a comprehensive understanding of the varied approaches instructors use in discussion forums for community-building purposes in an accelerated MLIS program. They predict that the more varied, flexible, and consistent the uses of discussion forums are, the greater the sense of community students will report. Additionally, students' and instructors' perceptions and experiences within these forums will shed light on the successes and challenges faced, thereby offering valuable recommendations for enhancing online learning environments. The findings are significant because they can contribute actionable insights for instructors, educational institutions, and curriculum designers aiming to optimize the use of discussion forums in online accelerated graduate programs, ultimately fostering a richer and more engaging learning experience for students.
Keywords: accelerated online learning, discussion forums, LIS programs, sense of community, g
Procedia PDF Downloads 87
126 Developing a Framework for Designing Digital Assessments for Middle-School Aged Deaf or Hard of Hearing Students in the United States
Authors: Alexis Polanco Jr, Tsai Lu Liu
Abstract:
Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from the fact that DHH assessment design sits at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (in user experience design (UXD), for example, there are Jakob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise and how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, etc., these SMEs were asked whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology. The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; and C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.
Keywords: deaf, hard of hearing, design, guidelines, education, assessment
Procedia PDF Downloads 67
125 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure the efficient operation of gas transmittal GCUs, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Vibration mitigation makes it possible to minimize the LP gap, so it is advantageous to study the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics, and the influence of the latter on the rotor structure, within a unidirectionally coupled dynamic FSI problem. The dependence of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations was studied under various gas speeds and pressures, shaft rotation speeds, vibration amplitudes, and working media. The multi-processor program ANSYS CFX was chosen as the numerical computation tool, and the problem was solved using the PNRPU high-capacity computer complex. The deformed shaft vibrations are replaced with a rigid profile that moves in the fixed annulus 'up-and-down' according to a set harmonic rule. This reduces the task to a nonstationary gas-dynamic problem and yields the time dependence of the total gas-dynamic force acting on the shaft. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency, while the phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2; the damping constant has its maximum value at 1 MPa pressure in the gap. An increase of the shaft oscillation frequency from 50 to 150 Hz at P = 10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant has its maximum value at 50 Hz, equaling 1.012. An increase of the shaft vibration amplitude from 20 to 80 µm at P = 10 MPa raises the gas-dynamic force amplitude up to 20 times, and the damping constant increases from 0.092 to 0.251. Calculations for various working substances (methane, perfect gas, air at 25 °C) show that the minimum persistent gas-dynamic force oscillation amplitude at P = 0.1 MPa is observed in methane, and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. At P = 10 MPa, the maximum gas-dynamic force oscillation amplitude is observed in methane and the minimum in air, and air demonstrates surging. An increase of the leakage speed from 0 to 20 m/s through the LP at P = 0.1 MPa causes the gas-dynamic force oscillation amplitude to decrease by 3 orders of magnitude, while the oscillation frequency and phase shift increase 2 times and stabilize. An increase of the leakage speed from 0 to 20 m/s in the LP at P = 1 MPa causes the gas-dynamic force oscillation amplitude to decrease by almost 4 orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. The flow rate proved to have a strong influence on the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on the operating conditions: as pressure grows, vibrations are most affected in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
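The phase shift angle between the gas-dynamic force and the shaft displacement, central to the results above, can be estimated by fitting each signal to a sinusoid at the known shaft frequency. The following Python sketch demonstrates the idea on synthetic signals; the amplitudes and the imposed π/2 shift are assumptions for illustration.

```python
import numpy as np

def amplitude_and_phase(t, x, f):
    """Least-squares fit of x(t) ~ A*sin(2*pi*f*t + phi); returns (A, phi).
    Fitting both the shaft displacement and the total gas-dynamic force
    and differencing the two phases gives the phase shift angle."""
    w = 2 * np.pi * f
    M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    a, b, _ = np.linalg.lstsq(M, x, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

f = 50.0                        # shaft oscillation frequency, Hz
t = np.linspace(0, 0.2, 2000)
disp = 20e-6 * np.sin(2 * np.pi * f * t)              # 20 um shaft motion
force = 3.1 * np.sin(2 * np.pi * f * t + np.pi / 2)   # synthetic force, N
_, phi_d = amplitude_and_phase(t, disp, f)
_, phi_f = amplitude_and_phase(t, force, f)
print((phi_f - phi_d) / np.pi, "pi rad")  # ~0.5, i.e. pi/2
```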
Procedia PDF Downloads 296
124 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits
Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski
Abstract:
Robotic process automation (RPA) is a technology in which repeatable business processes are performed by computer programs, robots that simulate the work of a human being. This approach replaces an existing employee's activities with dedicated software (software robots), primarily for repeated and uncomplicated tasks characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of finance, accounting, and human resources management. By utilizing this technology, the effectiveness of operations increases while workload is reduced and possible errors in the process are minimized, bringing a measurable decrease in the cost of providing services. However the use of modern information technology is assessed, there are also some doubts as to whether we should replace human activities in the automation of business processes. After the initial awe at the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, in which business processes is the greatest potential for RPA? A closer look at these issues was provided in this research, during which the respondents' views of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes were verified. As a result of an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e., Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory firms, and their customers. Answers were provided by representatives in the following positions in their organizations: Members of the Board, Directors, Managers, and Experts/Specialists. The structure of the survey allowed the respondents to supplement the survey with additional comments and observations. The results formed the basis for the creation of a business case calculating the tangible benefits associated with the implementation of automation in selected financial processes. The results of the statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the kind of company in the business services industry and the perception of the impact of RPA on individual benefits, was thus not confirmed. Based on the survey results, the authors performed a simulation of the business case for the implementation of RPA in selected finance and accounting processes. The calculated payback periods were diametrically different, ranging from 2 months for the Accounts Payable process, with 75% savings, to the extreme case of the Taxes process, where implementation and maintenance costs exceed the savings resulting from the use of the robot.
Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits
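The payback calculation underlying the business case above reduces to a simple formula: the one-off implementation cost divided by the net monthly saving. The Python sketch below captures that logic; the cost figures are hypothetical and chosen only to reproduce a 2-month payback like the Accounts Payable case.

```python
def payback_months(implementation_cost, monthly_run_cost,
                   monthly_labor_cost, savings_rate):
    """Simple payback period for an RPA business case: months until the
    cumulative net monthly saving covers the one-off implementation cost.
    Returns None when running the robot costs more than it saves, as in
    the abstract's Taxes example."""
    net_monthly = monthly_labor_cost * savings_rate - monthly_run_cost
    if net_monthly <= 0:
        return None
    return implementation_cost / net_monthly

# Hypothetical Accounts Payable figures: 75% of a 20k monthly cost saved
print(payback_months(28000, 1000, 20000, 0.75))  # 2.0 months
```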
Procedia PDF Downloads 138
123 Trainability of Executive Functions during Preschool Age: Analysis of Inhibition of 5-Year-Old Children
Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga
Abstract:
Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to the cognitive processes that, by controlling, modulating, and coordinating sub-processes, make it possible to achieve superior goals. Major components include working memory, inhibition, and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with pre- and post-test design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, children from the intervention group participated in special games that train executive functions (i.e., changing the rules of a game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen flanker task was employed to analyze the participants' inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always the central fish in a row of five). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show more improvement in the two parameters than children from the control group. Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). Likewise, the comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1%, post: 95.9%, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8%, post: 92.9%). Vice versa, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9%, post: 83.5%), while the control group revealed a significant difference (pre: 68.9%, post: 80.3%, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group's low pre-test scores. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.
Keywords: executive functions, flanker task, inhibition, preschool children
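The four comparisons reported above (response time and accuracy, congruent and incongruent) reduce to grouped means over the trial log. The pandas sketch below shows that aggregation; the trial values are hypothetical, loosely echoing the group means in the abstract.

```python
import pandas as pd

# Hypothetical trial log from the flanker task: one row per response.
trials = pd.DataFrame({
    "group":     ["IG", "IG", "IG", "CG", "CG", "CG"] * 2,
    "phase":     ["pre"] * 6 + ["post"] * 6,
    "congruent": [True, False, True, True, False, True] * 2,
    "rt_s":      [1.34, 1.44, 1.30, 1.31, 1.38, 1.29,
                  1.12, 1.25, 1.10, 1.24, 1.38, 1.22],
    "correct":   [1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
})

# Mean response time and accuracy per group, phase, and congruency,
# i.e., the four criteria compared in the abstract.
summary = trials.groupby(["group", "phase", "congruent"]).agg(
    mean_rt=("rt_s", "mean"), accuracy=("correct", "mean"))
print(summary)
```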
Procedia PDF Downloads 253
122 Hybrid versus Cemented Fixation in Total Knee Arthroplasty: Mid-Term Follow-Up
Authors: Pedro Gomes, Luís Sá Castelo, António Lopes, Marta Maio, Pedro Mota, Adélia Avelar, António Marques Dias
Abstract:
Introduction: Total knee arthroplasty (TKA) has contributed to the improvement of patients' quality of life, although it has been associated with some complications, including component loosening and polyethylene wear. To prevent these complications, various fixation techniques have been employed. Hybrid TKA, with a cemented tibial and cementless femoral component, has shown favourable outcomes, although consensus in the literature is still lacking. Objectives: To evaluate the clinical and radiographic results of hybrid versus cemented TKA with an average 5-year follow-up and to analyse the survival rates. Methods: A retrospective study of 125 TKAs performed in 92 patients at our institution between 2006 and 2008, with a minimum follow-up of 2 years. The same prosthesis was used in all knees. Hybrid TKA fixation was performed in 96 knees, with a mean follow-up of 4.8±1.7 years (range, 2-8.3 years), and 29 TKAs received fully cemented fixation, with a mean follow-up of 4.9±1.9 years (range, 2-8.3 years). Selection for hybrid fixation was nonrandomized and based on femoral component fit. The Oxford Knee Score (OKS, 0-48) was used for clinical assessment, and the Knee Society Roentgenographic Evaluation Scoring System was used for the radiographic outcome. The survival rate was calculated using the Kaplan-Meier method, with failures defined as revision of either the tibial or femoral component, for aseptic failures and for all causes (aseptic and infection). Analysis of survivorship data was performed using the log-rank test; SPSS (v22) was used for statistical analysis. Results: The hybrid group consisted of 72 females (75%) and 24 males (25%), with mean age 64±7 years (range, 50-78 years); the preoperative diagnosis was osteoarthritis (OA) in 94 knees (98%), rheumatoid arthritis (RA) in 1 knee (1%), and posttraumatic arthritis (PTA) in 1 knee (1%). The fully cemented group consisted of 23 females (79%) and 6 males (21%), with mean age 65±7 years (range, 47-78 years); the preoperative diagnosis was OA in 27 knees (93%) and PTA in 2 knees (7%). The Oxford Knee Scores were similar between the 2 groups (hybrid 40.3±2.8 versus cemented 40.2±3). The percentage of radiolucencies seen on the femoral side was slightly higher in the cemented group than in the hybrid group (20.7% versus 11.5%, p = 0.223). In the cemented group, there were significantly more Zone 4 radiolucencies than in the hybrid group (13.8% versus 2.1%, p = 0.026). Revisions for all causes were performed in 4 of the 96 hybrid TKAs (4.2%) and 1 of the 29 cemented TKAs (3.5%). The reason for revision was aseptic loosening in 3 hybrid TKAs and 1 cemented TKA; revision was performed for infection in 1 hybrid TKA. The hybrid group demonstrated a 7-year survival rate of 93% for all-cause failures and 94% for aseptic loosening. No significant difference in survivorship was seen between the groups for all-cause or aseptic failures. Conclusions: Hybrid TKA yields similar intermediate-term results and survival rates to fully cemented total knee arthroplasty and remains a viable option in knee joint replacement surgery.
Keywords: hybrid, survival rate, total knee arthroplasty, orthopaedic surgery
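The survival analysis described above (Kaplan-Meier curves compared with a log-rank test) can be reproduced with the lifelines package in Python. The sketch below uses hypothetical follow-up data, not the study's SPSS output.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time to revision (years) and an event
# flag (1 = revised, 0 = censored at last follow-up).
t_hybrid = np.array([8.0, 7.5, 6.1, 5.2, 4.8, 3.9, 2.4, 2.0])
e_hybrid = np.array([0,   0,   1,   0,   1,   0,   1,   0])
t_cement = np.array([8.3, 6.7, 5.5, 4.1, 2.2])
e_cement = np.array([0,   0,   1,   0,   0])

# Kaplan-Meier estimate for one group
km = KaplanMeierFitter()
km.fit(t_hybrid, event_observed=e_hybrid, label="hybrid")
print(km.survival_function_)

# Log-rank test comparing the two fixation groups
result = logrank_test(t_hybrid, t_cement,
                      event_observed_A=e_hybrid, event_observed_B=e_cement)
print(result.p_value)
```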
Procedia PDF Downloads 594
121 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers
Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin
Abstract:
Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance, and there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier learning process: the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. The stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. It was anticipated that there would be a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. Concerning the response times, our results showed no difference between the four groups; furthermore, within each group, the average response times were very close regardless of emotional valence. Group differences appear, however, when considering the error rates: the Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy, however, was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects the access to phonological representations during these learning processes, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.
Keywords: anxiety, emotional valence, childhood, lexical access
Procedia PDF Downloads 288
120 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: how do self-organization processes work in open, complex, non-equilibrium thermodynamic systems? Central to this investigation is the principle of Maximum Entropy Production (MEP), which suggests that such systems evolve toward states that maximize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, an increase in system information is predicted, as more information is required to describe the developing structure. To test this, an agent-based model is developed that simulates an ant colony forming a path between a food source and its nest. Using the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and the distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. The simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed: no path formation occurred with fewer than five ants, clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors such as changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system, providing a foundation for further in-depth exploration of the complex behaviors of these systems and contributing to the development of more efficient self-organizing systems across various scientific fields.
Keywords: complexity, self-organization, agent-based modelling, efficiency
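The entropy measure at the heart of the model above can be computed directly from agent positions. The following Python sketch bins ant coordinates on a grid and evaluates the Shannon entropy, so a uniform scatter scores high and a formed path scores low; the world size, bin count, and position data are assumptions for illustration.

```python
import numpy as np

def spatial_entropy(x, y, world_size, n_bins=20):
    """Shannon entropy of the ants' spatial distribution: bin agent
    positions on a grid and compute -sum(p * log p)."""
    H, _, _ = np.histogram2d(x, y, bins=n_bins,
                             range=[[0, world_size], [0, world_size]])
    p = H.ravel() / H.sum()
    p = p[p > 0]  # drop empty cells; 0*log(0) contributes nothing
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
scattered = rng.uniform(0, 50, size=(2, 1000))       # disordered start
on_path = np.vstack([rng.uniform(0, 50, 1000),       # ordered path
                     25 + rng.normal(0, 0.5, 1000)])
print(spatial_entropy(*scattered, 50), spatial_entropy(*on_path, 50))
```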
Procedia PDF Downloads 69
119 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. Craft techniques and their associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product, or the mastery of the artist does not seem to be a sufficient reason to preserve this productive model. In recent years, the adoption of open source digital manufacturing technologies in small art workshops can favor their permanence, offering great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use pieces modeled by computer and made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. Models printed in PLA are limited to approximate minimum sizes of 3 cm, and the optimal layer height resolution is 0.1 mm. Due to these limitations, it is not the most suitable technology for the artistic casting of smaller pieces. One alternative that overcomes the size limitation is selective laser sintering (SLS) printers; another possibility is DMLS (Direct Metal Laser Sintering), in which a laser hardens metal powder layer by layer. However, due to its high cost, this is a technology that is difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolution for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes) and can print models with castable resins that allow subsequent direct artistic casting in precious metals or adaptation to processes such as electroforming. In this work, the design of a DLP 3D printer using backlit LCD screens with ultraviolet light is detailed. Its development is totally open source, and it is proposed as a kit made up of electronic components based on Arduino and mechanical components that are easy to access on the market. The CAD files for its components can be manufactured on low-cost FDM 3D printers. The result costs less than 500 Euros and offers high resolution and an open design with free access that allows not only its manufacture but also its improvement. In future work, we intend to carry out different comparative analyses to accurately estimate the print quality, as well as the real cost of artistic works made with it.
Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
118 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs, based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method, based on Markov decision processes, that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered: renewable generation is provided by photovoltaic panels, an electrochemical battery ensures short-term electricity storage, and the controllable unit is a hydrogen tank used as long-term storage. The proposed approach focuses on the transfer of agent learning for near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires significant computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer; the latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
Keywords: batch-constrained reinforcement learning, control, design, optimal
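To make the offline-training idea concrete, the sketch below shows tabular batch Q-learning on a toy problem: the agent learns only from a fixed buffer of transitions, with no environment interaction during training. This is a minimal sketch of the general offline setting, not BCQ itself (BCQ additionally constrains the policy to actions represented in the buffer), and the toy state/action spaces are assumptions.

```python
import numpy as np

# Tabular batch Q-learning on a toy storage problem: train only from a
# fixed buffer of (state, action, reward, next_state) transitions.
n_states, n_actions = 10, 3   # e.g., discretized storage level / charge-hold-discharge
rng = np.random.default_rng(1)

# Fixed buffer, e.g., gathered by previously trained agents
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(5000)]

Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95
for _ in range(20):                      # epochs over the fixed buffer
    for s, a, r, s_next in buffer:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

policy = Q.argmax(axis=1)                # greedy EMS policy per state
print(policy)
```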
Procedia PDF Downloads 124117 Learners’ Preferences in Selecting Language Learning Institute (A Study in Iran)
Authors: Hoora Dehghani, Meisam Shahbazi, Reza Zare
Abstract:
During the previous decade, a significant evolution occurred in the number of private educational centers and, accordingly, in the number of providers and students at these centers around the world. The number of language teaching institutes in Iran, which are considered private educational sectors, is also growing exponentially, as the demand for learning foreign languages has increased sharply in recent years. This has caused competition among institutions in providing better services, tailored to students' demands, in order to win the competition. Along with the growth of the education industry, higher education institutes should apply marketing-related concepts and view students as customers, because students' outlooks are similar to those of consumers of education. Studying the factors that influence the selection of an institute has multiple benefits. Firstly, it acquaints institutions with students' choice factors. Secondly, institutions can use the obtained information to improve their marketing methods. It also helps institutions understand students' outlooks, which can be applied to improve the student experience. Moreover, it provides practical evidence for educational centers to plan useful amenities and programs, to use efficient policies to cater to the market, and to execute methods that increase students' feelings of contentment and assurance. Thus, this study explored the factors that influence the selection of a language learning institute by language learners, and examined and compared their importance across age groups and genders. In the first phase of the study, the researchers purposefully selected 15 language learners as representative cases within the specified age ranges and genders, interviewed them to explore the elements in their language institute selection process, and analyzed the results qualitatively. In the second phase, the researchers formulated the identified elements as items of a questionnaire, and 1000 English learners across varying educational contexts rated them. The TOPSIS method was used to analyze the data quantitatively, representing the level of importance of the items for the participants in general and within each subcategory (gender and age group). The results indicated that educational quality, teaching method, duration of the training course, establishment of need-oriented courses, and easy access were the most important elements. On the other hand, offering training in different languages, specialized education in only one language, the uniform and appropriate appearance of office staff, having professors who are native speakers of the language of instruction, and applying computer or online tests instead of the usual paper tests were, respectively, the least important choice factors in selecting a language institute. Besides, comparisons among different groups' ratings of the choice factors were made, which revealed differences among the groups' priorities in choosing a language institute. Keywords: choice factors, EFL institute selection, English learning, need analysis, TOPSIS
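As an illustration of the ranking step, the following sketch implements the standard TOPSIS procedure (vector normalization, weighting, ideal and anti-ideal points, closeness coefficient). The matrix values, weights, and the choice of benefit/cost criteria are illustrative assumptions, not the study's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns); returns the
    closeness coefficient of each alternative (higher = better)."""
    M = np.asarray(matrix, float)
    V = weights * M / np.linalg.norm(M, axis=0)              # normalize, weight
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))   # ideal point
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))  # anti-ideal point
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Three hypothetical choice factors scored on mean rating (benefit)
# and rating spread (cost: lower disagreement among raters is better).
scores = topsis([[4.6, 0.5],
                 [3.1, 1.2],
                 [4.2, 0.8]],
                weights=np.array([0.7, 0.3]),
                benefit=np.array([True, False]))
print(scores.argsort()[::-1])  # indices of factors, most important first
```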
Procedia PDF Downloads 165116 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems
Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme
Abstract:
Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. Marker positions are reconstructed by triangulation from the camera images, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become popular, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel is reduced, thus increasing image resolution. However, the cross section of the capture volume also decreases, reducing the visible area; because of this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and to optimize camera placement for a given capture volume and set of subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for the cameras using SOLIDWORKS (Dassault Systems, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. This approach enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility. It is also possible to compare different camera setup options on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options to each other, as well as to optimize a given camera setup based on quantitative results. Keywords: motion capture, cameras, biomechanics, gait analysis
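The tabulation step can be approximated outside the CAD environment with simple cone-of-view geometry. The sketch below counts how many cameras see a marker and checks the two-camera triangulation condition; the camera poses, FOV angle, and absence of body occlusion are assumptions of the example (the CAD approach described above is what actually accounts for occlusions).

```python
import numpy as np

def cameras_seeing(marker, cam_positions, cam_aims, fov_deg):
    """Count cameras whose FOV cone contains the marker."""
    half = np.radians(fov_deg / 2.0)
    seen = 0
    for pos, aim in zip(cam_positions, cam_aims):
        view = aim / np.linalg.norm(aim)            # camera optical axis
        ray = marker - pos
        ray = ray / np.linalg.norm(ray)             # direction to marker
        angle = np.arccos(np.clip(view @ ray, -1.0, 1.0))
        if angle <= half:
            seen += 1
    return seen

# Two cameras in opposite upper corners of the 4.9 m x 3.7 m x 2.4 m volume,
# both aimed at the centre of the capture volume.
cams = np.array([[0.0, 0.0, 2.4], [4.9, 3.7, 2.4]])
centre = np.array([2.45, 1.85, 1.0])
aims = centre - cams
marker = np.array([2.5, 1.8, 1.2])   # a marker near hip height at mid-stance
n = cameras_seeing(marker, cams, aims, fov_deg=60)
print(n, "cameras see the marker; triangulable:", n >= 2)
```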
Procedia PDF Downloads 310115 Simulation Research of the Aerodynamic Drag of 3D Structures for Individual Transport Vehicle
Authors: Pawel Magryta, Mateusz Paszko
Abstract:
In today's world, a big problem of individual mobility occurs, especially in large urban areas. Commonly used means of ground transport, such as buses, trains, or cars, do not fulfill their tasks, i.e., they are not able to meet the increasing mobility needs of the growing urban population. In addition, there are limits on the construction of civil infrastructure in cities. Nowadays, the most common idea is to transfer part of urban transport to the level of air transport. However, to do this, there is a need to develop an individual flying transport vehicle. The biggest problem in this concept is the type of propulsion system from which the vehicle will obtain a lifting force. Standard propeller drives appear to be too noisy. One idea is to provide the required take-off and flight power using an innovative ejector system. This kind of system would be designed through a suitable choice of three-dimensional geometric structure, with a specially shaped nozzle, in order to generate overpressure. The authors' idea is to make a device that would allow the overpressure to accumulate, using a five-sided geometrical structure bounded on one side by the blowing flow of an air jet. In order to test this hypothesis, a computer simulation study of the aerodynamic drag of such 3D structures has been made. Based on the results of these studies, tests on a real model were also performed. The final stage of the work was a comparative analysis of the results of the simulations and the real tests. The CFD simulation studies of the air flow were conducted using the Star CD - Star Pro 3.2 software. The design of the virtual model was made using the Catia v5 software. Apart from the objective of obtaining an advanced aviation propulsion system, all of the tests and modifications of the 3D structures were also aimed at achieving high efficiency of this device while maintaining the ability to generate high overpressure values. This was possible only in the case of a large mass flow rate of air. All these aspects could be verified using CFD methods for observing the flow of the working medium in the tested model. During the simulation tests, the distribution and magnitude of the pressure and velocity vectors were analyzed. Simulations were made with different boundary conditions (supply air pressure) but with fixed external conditions (ambient temperature, ambient pressure, etc.). The maximum value of the obtained overpressure is 2 kPa. This value is too low to exploit this device for an individual transport vehicle. Both the simulation model and the real object show a linear dependence of the obtained overpressure values on the different geometrical parameters of the three-dimensional structures. The application of computational software greatly simplifies and streamlines the design and simulation capabilities. This work has been financed by the Polish Ministry of Science and Higher Education. Keywords: aviation propulsion, CFD, 3D structure, aerodynamic drag
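A comparative analysis of the kind described can be reduced to fitting the reported linear dependence for both data sets. The sketch below shows the idea with illustrative numbers, not the authors' measurements.

```python
import numpy as np

supply_kpa = np.array([100, 200, 300, 400, 500])       # boundary condition
cfd_kpa    = np.array([0.40, 0.80, 1.20, 1.60, 2.00])  # simulated overpressure
bench_kpa  = np.array([0.35, 0.75, 1.10, 1.50, 1.90])  # measured overpressure

# Fit overpressure = slope * supply + intercept for each data set and
# compare the slopes to quantify simulation/experiment agreement.
for name, y in (("CFD", cfd_kpa), ("bench", bench_kpa)):
    slope, intercept = np.polyfit(supply_kpa, y, deg=1)
    print(f"{name}: overpressure = {slope:.4f} * supply + {intercept:.3f} kPa")
```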
Procedia PDF Downloads 311114 Disability in the Course of a Chronic Disease: The Example of People Living with Multiple Sclerosis in Poland
Authors: Milena Trojanowska
Abstract:
Disability is a phenomenon for which meanings and definitions have evolved over the decades. This became the trigger for starting a project to answer the question of what disability constitutes in the course of an incurable chronic disease. The chosen research group is people living with multiple sclerosis. The contextual phase of the research was participant observation at the Polish Multiple Sclerosis Society, the largest NGO in Poland supporting people living with MS and their relatives. The research techniques used in the project are (in order of implementation): group interviews with people living with MS and their relatives, narrative interviews, an asynchronous technique, and participant observation during events organised for people living with MS and their relatives. The researcher is currently conducting follow-up interviews, as inaccuracies in the respondents' narratives were identified during the data analysis. Interviews and supplementary research techniques were used over the four years of the research, and the researcher also benefited from experience gained from 12 years of working with NGOs (diaries, notes). The research was carried out in Poland with the participation of people living in this country only. The research has been based on grounded theory methodology in the constructivist perspective developed by Kathy Charmaz. The goal was to follow the idea that research must be reliable, original, and useful. The aim was to construct an interpretive theory that assumes the temporality and processuality of social life. The Atlas.ti software, a program from the CAQDAS (Computer-Assisted Qualitative Data Analysis Software) group, was used to collect and analyse the research material. Several key factors influencing the construction of a disability identity by people living with multiple sclerosis were identified: the course of interaction with significant relatives; the expectation of identification with disability (expressed by close relatives); economic profitability (pension, allowances); institutional advantages (e.g., a parking card); independence and autonomy (not equated with physical condition, but with access to adapted infrastructure and resources to support daily functioning); the way a person with MS construes the meaning of disability; physical and mental state; and the medical diagnosis of the illness. In addition, it has been shown that making an assumption about the experience of disability in the course of MS is a form of cognitive reductionism leading to further phenomena, such as the expectation that the person with MS will construct a social identity as a person with a disability (e.g., giving up work) and the occurrence of institutional inequalities. It can also be a determinant of the choice of a life strategy that limits social and individual functioning, even if this necessity is not dictated by the person's physical or psychological condition. The results of the research are important for the development of knowledge about the phenomenon of disability. They indicate the contextuality and complexity of the disability phenomenon, which, in the light of the research, is a set of different phenomena of a heterogeneous nature and multifaceted causality. This knowledge can also be useful for institutions and organisations in the non-governmental sector supporting people with disabilities and people living with multiple sclerosis. Keywords: disability, multiple sclerosis, grounded theory, Poland
Procedia PDF Downloads 108113 Multicultural Education in the National Context: A Study of Peoples' Friendship University of Russia
Authors: Maria V. Mishatkina
Abstract:
The modelling of a dialogical environment is an essential feature of modern education. The dialogue of cultures is a foundation and an important prerequisite for the formation of a person's main moral qualities, such as the ability to understand another person, which is manifested in values such as tolerance, respect, mutual assistance, and mercy. The formation of a modern expert occurs in an educational environment that is significantly different from what we had several years ago. Nowadays, university education has qualitatively new characteristics. They may be observed at Peoples' Friendship University of Russia (RUDN University), a top Russian higher education institution which unites representatives of more than 150 countries. The content of its educational strategies is not an adapted cultural experience but material at the interface between science and innovation. Besides, RUDN University's profiles and specializations do not map directly onto professional structures. People study not a profession in the strict sense, but the basic scientific foundation of an activity in different socio-cultural areas (science, business, and education). RUDN University also provides a considerable set of professional education components: foreign language skills; economic, political, ethnic, communication, and computer culture; theory of information; and basic management skills. Moreover, there is a rich social life (festive multicultural events, theme parties, journeys), and there are prospects concerning an inclusive approach to education (for example, a special course, 'Social Pedagogy: Issues of Tolerance'). In our research, we use such methods as the analysis of modern and contemporary scientific literature, an opinion poll (involving students, teachers, and research workers), and comparative data analysis. We came to the conclusion that the knowledge transfer of an RUDN student happens through setting goals, problems, issues, tasks, and situations which simulate the future innovative, ambiguous environment, potentially preparing him or her for a dialogical way of life. However, all these factors may not take effect if there is no 'personal inspiration' of students by communicative and dialogical values, no participation in the system of meanings and tools of learning activity represented by cooperation within the framework of a dialogue of scientific and pedagogical schools. We also found that the dominant strategies for ensuring the quality of education are those that put students in the position of subjects of their own education. Today these strategies should involve such approaches and methods as task-based, contextual, modelling, specialized, game-imitation, and dialogical approaches, the method of practical situations, etc. Therefore, the university in the modern sense is not only an educational institution but also a generator of innovation, cooperation among nations, and cultural progress. RUDN University has been performing exactly this mission for many decades. Keywords: dialogical developing situation, dialogue of cultures, readiness for dialogue, university graduate
Procedia PDF Downloads 221112 The Use of Image Analysis Techniques to Describe a Cluster Cracks in the Cement Paste with the Addition of Metakaolinite
Authors: Maciej Szeląg, Stanisław Fic
Abstract:
The impact of elevated temperatures on construction materials manifests in changes to their physical and mechanical characteristics. The stresses and thermal deformations that occur inside the volume of the material cause its progressive degradation as the temperature increases. Finally, the reactions and transformations of the multiphase structure of the cementitious composite cause its complete destruction. A particularly dangerous phenomenon is the impact of thermal shock – a sudden high-temperature load. Thermal shock leads to a high temperature gradient between the outer surface and the interior of the element in a relatively short time. The result of the process mentioned above is the formation of cracks and scratches on the material's surface and inside the material. The article describes the use of computer image analysis techniques to identify and assess the structure of the cluster cracks on the surfaces of modified cement pastes, caused by thermal shock. Four series of specimens were tested. Two Portland cements were used (CEM I 42.5R and CEM I 52.5R). In addition, two of the series contained metakaolinite as a replacement for 10% of the cement content. Samples in each series were made with three w/b (water/binder) ratios: 0.4, 0.5, and 0.6. Surface cracks in the samples were created by a sudden temperature load at 200°C for 4 hours. Images of the cracked surfaces were obtained via scanning at 1200 DPI; digital processing and measurements were performed using ImageJ v. 1.46r software. In order to examine the cracked surface of the cement paste as a system of closed clusters, the theory of dispersal systems was used to describe the structure of the cement paste. Water is taken as the dispersing phase and the binder as the dispersed phase – the initial stage of cement paste structure creation. A cluster itself is considered to be the area on the specimen surface that is limited by cracks (created by sudden temperature loading) or by the edge of the sample. To describe the structure of the cracks, two stereological parameters were proposed: Ā – the average cluster area, and L̄ – the average cluster perimeter. The goal of this study was to compare the investigated stereological parameters with the mechanical properties of the tested specimens. Compressive and tensile strength tests were carried out according to EN standards. The method used in the study allowed the quantitative determination of defects occurring on the surfaces of the examined modified cement pastes. Based on the results, it was found that the nature of the cracks depends mainly on the physical parameters of the cement and the intermolecular interactions in the dispersal environment. Additionally, it was noted that the Ā/L̄ relation of the created clusters can be described by one function for all tested samples. This fact testifies to the constant geometry of the thermal cracks regardless of the presence of metakaolinite, the type of cement, and the w/b ratio. Keywords: cement paste, cluster cracks, elevated temperature, image analysis, metakaolinite, stereological parameters
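For readers who prefer a scriptable alternative to ImageJ, the following sketch computes the two proposed stereological parameters from a scanned surface image using scikit-image; the file name and the Otsu thresholding step are assumptions of the example.

```python
import numpy as np
from skimage import io, filters, measure

img = io.imread("cracked_surface.png", as_gray=True)  # 1200 DPI scan

# Cracks scan darker than the paste: threshold with Otsu, then invert so
# that the clusters (paste areas bounded by cracks or the sample edge)
# become the labelled foreground regions.
cracks = img < filters.threshold_otsu(img)
clusters = measure.label(~cracks)
props = measure.regionprops(clusters)

A_mean_px = np.mean([p.area for p in props])        # average cluster area
L_mean_px = np.mean([p.perimeter for p in props])   # average cluster perimeter

mm_per_px = 25.4 / 1200                             # from the scan resolution
print("Mean cluster area:", A_mean_px * mm_per_px**2, "mm^2")
print("Mean cluster perimeter:", L_mean_px * mm_per_px, "mm")
```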
Procedia PDF Downloads 390111 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is carried out here by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared to those of HYDRUS (software based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation has been the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust, and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the online Tapenade AD engine. Field data were collected for a three-layered CW located in Strasbourg (Alsace, France), at the water's edge of the urban stream Ostwaldergraben, over several months. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed. Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
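For reference, the van Genuchten-Mualem parametrization discussed above can be written down in a few lines; the parameter values below are illustrative textbook values for a sandy medium, not the calibrated values for the Ostwaldergraben site.

```python
import numpy as np

def theta(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at pressure head h (h < 0 when unsaturated)."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * Se

def K(h, theta_r, theta_s, alpha, n, Ks, l=0.5):
    """Unsaturated hydraulic conductivity from the Mualem model."""
    m = 1.0 - 1.0 / n
    Se = (theta(h, theta_r, theta_s, alpha, n) - theta_r) / (theta_s - theta_r)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Retention and conductivity at 30 cm of suction, with textbook sand
# parameters (theta_r, theta_s, alpha [1/cm], n, Ks [cm/day]).
print(theta(-30.0, 0.045, 0.43, 0.145, 2.68))
print(K(-30.0, 0.045, 0.43, 0.145, 2.68, Ks=712.8))
```

The strong non-linearity of these two curves in α and n is what makes the calibration sensitive and motivates the derivative information supplied by automatic differentiation.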
Procedia PDF Downloads 164110 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship
Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris
Abstract:
A gas turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria, to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet, due to encounters with varying weather conditions, have negative effects on the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar tend to affect the power output and other thermodynamic parameters of the gas turbine cycle. Therefore, this paper seeks to evaluate how a simple-cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also focuses on the effect of varying aerodynamic and hydrodynamic conditions, which degrade the efficient operation of the propulsion system through an increase in resistance resulting from projected levels of ship hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages covering a variety of weather conditions in the winter, spring, and summer seasons. Real-time daily temperatures and sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature, power output, and total fuel consumed per voyage, among other performance variables, were separately predicted under both calm and adverse weather conditions. The collated data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, adopting the Beaufort scale for determining the magnitude of sea waves resulting from rough weather. The simulation of the gas turbine performance and the voyage analysis were carried out using the integrated Cranfield-University-developed computer codes known as 'Turbomatch' and 'Poseidon'. The project is aimed at developing a method for predicting the off-design behavior of a marine gas turbine when installed and operated as the main prime mover for both propulsion and the powering of all other auxiliary services onboard a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of the marine gas turbine's part- and full-load performance as it relates to the fuel requirement for a complete voyage. Keywords: cruise ship, gas turbine, hull fouling, performance, propulsion, weather
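To illustrate why ambient deviations matter, the sketch below applies generic rule-of-thumb sensitivities to shift shaft power away from the ISO reference point. The coefficients, rating, and leg conditions are assumptions for illustration only; the study itself obtains these effects from the Turbomatch performance code.

```python
ISO_T_C = 15.0             # reference inlet temperature
ISO_P_BAR = 1.013          # reference ambient pressure
DPOWER_PER_DEGC = -0.006   # assumed ~ -0.6 % power per degree above ISO
RATED_POWER_MW = 25.0      # assumed rating of the installed gas turbine

def shaft_power(amb_t_c, amb_p_bar):
    """First-order estimate of available shaft power away from the ISO point."""
    temp_factor = 1.0 + DPOWER_PER_DEGC * (amb_t_c - ISO_T_C)
    press_factor = amb_p_bar / ISO_P_BAR   # power roughly scales with inlet density
    return RATED_POWER_MW * temp_factor * press_factor

# A hot tropical leg out of Lagos versus a cooler Mediterranean leg.
for leg, (t_c, p_bar) in {"Gulf of Guinea": (31.0, 1.009),
                          "Mediterranean": (18.0, 1.018)}.items():
    print(f"{leg}: {shaft_power(t_c, p_bar):.1f} MW available")
```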
Procedia PDF Downloads 165109 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of gestation or close follow-up should a cardiac anomaly be proved. Fetal heart examination plays an important part in the assessment of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to better quantify valve regurgitation and ventricular function, in order to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. The Doppler ultrasound (DUS) image is a kind of real-time image with a better imaging effect on blood vessels and soft tissues. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data concomitant with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set consisting of numerous 2D frames one behind the other. The volume acquisition can be done as static 3D, as online 4D (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or as spatiotemporal image correlation – STIC (offline 4D, a circular volume sweep). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), minimally invasive catheter intervention, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will 'repair' congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level. Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 43108 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks of digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aiming to create more stable, secure, and efficient autonomous systems. To learn about and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment, as it provides separation from the open Internet while remaining accessible via blockchain keys. Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
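As a toy illustration of the peer-policing principle (not of any specific blockchain protocol), the sketch below lets devices vote on whether a reading is consistent with their own observations; the tolerance and quorum values are assumptions for the example.

```python
def peers_accept(reading, peer_readings, tolerance=0.1, quorum=0.66):
    """Each peer votes on whether the reading is consistent with its own
    observation; the reading is accepted when a quorum of peers agrees."""
    votes = [abs(reading - p) <= tolerance * max(abs(p), 1e-9)
             for p in peer_readings]
    return sum(votes) / len(votes) >= quorum

# A healthy temperature cluster versus a tampered sensor injecting garbage.
peers = [21.0, 20.8, 21.3, 20.9, 21.1]
print(peers_accept(21.2, peers))   # True: consistent with the cluster
print(peers_accept(55.0, peers))   # False: flagged as deviant behaviour
```

In a full deployment, the accepted/rejected verdicts would be recorded on the shared ledger so that a persistently deviant device can be excluded by consensus rather than by a central authority.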
Procedia PDF Downloads 107107 Digital Technology Relevance in Archival and Digitising Practices in the Republic of South Africa
Authors: Tashinga Matindike
Abstract:
By means of definition, digital artworks encompass an array of artistic productions expressed in a technological form as an essential part of the creative process. Examples include illustrations, photos, videos, sculptures, and installations. Within the context of the visual arts, the process of repatriation involves the return of once-appropriated goods. Archiving denotes the preservation of a commodity for storage purposes in order to nurture its continuity. The aforementioned definitions form the foundation of the academic framework and the premise of the argument outlined in this paper. This paper aims to define, discuss, and decipher the complexities involved in digitising artworks, whilst explaining the benefits of the process, particularly within the South African context, which is rich in tangible and intangible traditional cultural material, objects, and performances. With the internet having been introduced to the African continent in the early 1990s, this new form of technology initiated a high degree of efficiency, which also resulted in the progressive transformation of computer-generated visual output. Subsequently, this had a revolutionary influence on the manner in which technological software was developed and utilised in art-making. One of the first visual artists to make use of digital technology software in his creative productions was the United States-based artist John Whitney. His inventive work contributed greatly to the onset and development of digital animation. Comparable in technique and originality, South African contemporary visual artists who make digital artworks, both locally and internationally, include David Goldblatt, Katherine Bull, Fritha Langerman, David Masoga, Zinhle Sethebe, Alicia Mcfadzean, Ivan Van Der Walt, Siobhan Twomey, and Fhatuwani Mukheli. In conclusion, the main objective of this paper is to address the following questions: In which ways has the South African community of visual artists made use of and benefited from technology, in its digital form, as a means to further advance creativity? What are the positive changes that have resulted in art production in South Africa since the onset and use of digital technological software? How has digitisation changed the manner in which we record, interpret, and archive both written and visual information? What is the role of South African art institutions in the development of digital technology and its use in the field of visual art? What role does digitisation play in the process of the repatriation of artworks and artefacts? The methodology of this paper takes a multifaceted form, inclusive of the analysis of data attained by means of qualitative and quantitative approaches. Keywords: digital art, digitisation, technology, archiving, transformation and repatriation
Procedia PDF Downloads 52106 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients With Acute Pancreatitis
Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov
Abstract:
Introduction: Acute pancreatitis (AP) is a socially significant public health problem and continues to be one of the most common causes of hospitalization of patients with pathology of the gastrointestinal tract. It is characterized by high mortality rates, which reach 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of not more than 1 cm, the absence of large vessels (more than 3 mm) along the puncture trajectory, and a size of the formation of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ('double pigtail') was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a 7 Fr cystonasal drain was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. FC corresponding to interstitial edematous pancreatitis was detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends. In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative period was noted in 4 (6.3%) cases, manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases, this required angiography and endovascular embolization of the a. gastroduodenalis; in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution. A combination of both methods was used in 1 (1.6%) patient. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%). In 3 (4.7%) patients, the cause of death was multiple organ failure; in 1 (1.6%), severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP, but it also allows the determination of further tactics for their intraluminal drainage. 2. Endoscopic intraluminal drainage of fluid zones is, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose. Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage
Procedia PDF Downloads 111