Search results for: COMSOL optimization simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7739

209 Challenge in Teaching Physics during the Pandemic: Another Way of Teaching and Learning

Authors: Edson Pierre, Gustavo de Jesus Lopez Nunez

Abstract:

The objective of this work is to analyze how physics can be taught remotely, using platforms and software to engage 2nd-year high school students at Colégio Cívico Militar Professor Carmelita Souza Dias, and to show how remote teaching can serve as a teaching-learning strategy during a period of social distancing. Teaching physics has long been a challenge for teachers and students, who commonly perceive the subject as difficult to teach and to learn. The challenge grew in 2020 and 2021 with the impact of the new coronavirus (Sars-CoV-2) pandemic and its variants, which affected the entire world. These changes gave rise to a new teaching modality: remote teaching. It brought new challenges, one of which was providing research experiences at a distance, especially in physics teaching, where learning difficulties already exist and students often cannot relate the theory presented in class to the reality that surrounds them. The teaching of physics in schools faces difficulties that make the field increasingly unattractive to young people as a career choice. Yet the study of physics is very important, as it places students before concrete, real situations that physical principles can explain, helping them to understand nature and nurturing a taste for science. New platforms and software can help: PhET Interactive Simulations from the University of Colorado Boulder, for example, is a virtual laboratory with numerous simulations of scientific experiments that improve the understanding of the content in a practical way and facilitate student learning. Being a simple, practical and free simulation tool, it attracts students' attention and helps them acquire greater knowledge of the subject studied; quizzes can likewise introduce a healthy competitiveness that generates knowledge and interest in the topics covered. The present study takes the Theory of Social Representations as its theoretical reference, examining, through a qualitative methodology, the content and process of construction of the representations held by the teachers who are the subjects of our investigation regarding the evaluation of teaching and learning processes. The results show that remote teaching was indeed a very important strategy for the process of teaching and learning physics in the 2nd year of high school, providing greater interaction between teacher and student. The teacher therefore continues to play a fundamental role: with technology increasingly present in the educational environment, the teacher remains the main protagonist of this process.

Keywords: physics teaching, technologies, remote learning, pandemic

Procedia PDF Downloads 67
208 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay

Abstract:

Working with a dielectric photonic crystal (PC) structure which does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments which sandwich the defect layer. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bi-directional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows, preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled. This transmission is due to evanescent waves which reach the deeply embedded defect layer and couple to higher order modes. In an additional selected configuration, whichever surface is illuminated, i.e., for both upper- and lower-surface illumination, the incident beam is split into two beams of equal intensity at the output surface, the intensities of the outgoing beams being the same for both illumination cases. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated and the structure mimics the behavior of symmetric structures. Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si and alumina) reveal unidirectional splitting for a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials that are utilized are low-loss and weakly dispersive in a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to the mentioned ranges.
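
As a rough illustration of the diffraction mechanism invoked above, the short sketch below evaluates the grating equation at normal incidence, sin(theta_m) = m*wavelength/period, for a layer whose period is twice that of the regular PC; the wavelength-to-period ratio is an illustrative assumption, not a value from the study.

```python
import numpy as np

# Minimal sketch: propagating diffraction orders of a periodic layer at normal
# incidence, from the grating equation sin(theta_m) = m * wavelength / period.
# The wavelength-to-period ratio below is an illustrative assumption, not a
# value from the study; the defect layer period is taken as twice the PC period.
wavelength = 1.0                # arbitrary units
period = 2.2 * wavelength       # assumed defect-layer period

for m in range(-3, 4):
    s = m * wavelength / period
    if abs(s) <= 1.0:
        theta = np.degrees(np.arcsin(s))
        print(f"order {m:+d}: propagating, deflected by {theta:6.1f} deg")
    else:
        print(f"order {m:+d}: evanescent")
```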

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 184
207 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both economic and environmental points of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in two-stage counter-current extraction at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum, with 99% purity, was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite Syn-MoO₃ structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO₃ can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.
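
For readers who want a feel for the counter-current staging argument, the sketch below applies the ideal multistage extraction relation (fraction of Mo remaining after n stages = (E-1)/(E^(n+1)-1), with extraction factor E = D*(O/A)) to a distribution ratio back-calculated from the reported single-contact result; the constant-D assumption is a simplification, which is why it slightly overshoots the reported 95.4% two-stage figure.

```python
# Minimal sketch: ideal counter-current extraction with a constant extraction
# factor E = D * (O/A). The fraction of Mo remaining in the aqueous phase after
# n stages is (E - 1) / (E**(n + 1) - 1). A constant distribution ratio D is an
# assumption; real McCabe-Thiele isotherms do not require it.
single_contact_extraction = 0.85    # ~85% Mo extracted in one contact (from the abstract)
o_over_a = 1.0                      # phase ratio O/A = 1:1

D = single_contact_extraction / (1.0 - single_contact_extraction) / o_over_a
E = D * o_over_a

for n_stages in (1, 2, 3):
    remaining = (E - 1.0) / (E ** (n_stages + 1) - 1.0)
    print(f"{n_stages} counter-current stage(s): ~{100.0 * (1.0 - remaining):.1f}% Mo extracted")
```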

Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery

Procedia PDF Downloads 172
206 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care; a better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or excess zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Moreover, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the approximate maximum likelihood estimator (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test follows an asymptotically standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model rather than the Poisson model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information.
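
The abstract notes that, in the absence of measurement error, the proposed statistic reduces to the usual score test for zero inflation. The sketch below implements that simpler baseline (a van den Broek-type score test against an intercept-only Poisson fit on simulated counts); it is not the authors' measurement-error-corrected test.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulate zero-inflated Poisson counts: with probability pi the count is a
# structural zero, otherwise Poisson(lam). Parameters are illustrative only.
n, lam, pi = 500, 1.33, 0.15
y = np.where(rng.random(n) < pi, 0, rng.poisson(lam, n))

# Score test for zero inflation against an intercept-only Poisson fit
# (van den Broek, 1995). Under H0 the statistic is asymptotically chi-square(1).
lam_hat = y.mean()            # Poisson MLE of the mean (intercept-only model)
p0_hat = np.exp(-lam_hat)     # fitted P(Y = 0)
n0 = np.sum(y == 0)           # observed zero count

numerator = ((n0 - n * p0_hat) / p0_hat) ** 2
denominator = n * (1.0 - p0_hat) / p0_hat - n * lam_hat
score_stat = numerator / denominator
p_value = chi2.sf(score_stat, df=1)

print(f"observed zeros = {n0}, expected under Poisson = {n * p0_hat:.1f}")
print(f"score statistic = {score_stat:.2f}, p-value = {p_value:.4f}")
```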

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 289
205 Early Predictive Signs for Kasai Procedure Success

Authors: Medan Isaeva, Anna Degtyareva

Abstract:

Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who received the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for early detection of patients who are less likely to benefit from the Kasai procedure. This enables clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. As a single-center study, its results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, other potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored, which could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.
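
As a schematic of the modelling step described above, the sketch below fits a multifactorial logistic regression of surgical success on three illustrative predictors (gallbladder visibility, post-operative cholesterol and direct bilirubin). The variable names, simulated values and effect sizes are hypothetical stand-ins, not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 166  # same cohort size as the study; all data below are simulated

# Hypothetical predictors: gallbladder detectable (0/1), post-operative
# cholesterol (mmol/L) and post-operative direct bilirubin (umol/L).
gallbladder = rng.integers(0, 2, n)
cholesterol = rng.normal(4.5, 1.0, n)
direct_bili = rng.normal(60.0, 25.0, n)

# Simulated outcome consistent with the reported directions of effect: a
# detectable gallbladder raises, high cholesterol/bilirubin lower, the odds.
lin_pred = 0.8 * gallbladder - 0.4 * (cholesterol - 4.5) - 0.03 * (direct_bili - 60.0)
success = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

X = sm.add_constant(np.column_stack([gallbladder, cholesterol, direct_bili]))
fit = sm.Logit(success, X).fit(disp=False)
print(fit.params)   # fitted log-odds coefficients for intercept and the three predictors
```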

Keywords: biliary atresia, kasai operation, prognostic model, native liver survival

Procedia PDF Downloads 55
204 Effect of Thermal Treatment on Mechanical Properties of Reduced Activation Ferritic/Martensitic Eurofer Steel Grade

Authors: Athina Puype, Lorenzo Malerba, Nico De Wispelaere, Roumen Petrov, Jilt Sietsma

Abstract:

Reduced activation ferritic/martensitic (RAFM) steels like EUROFER97 are primary candidate structural materials for first wall application in the future demonstration (DEMO) fusion reactor. Existing steels of this type obtain their functional properties by a two-stage heat treatment, which consists of an annealing stage at 980°C for thirty minutes followed by quenching and an additional tempering stage at 750°C for two hours. This thermal quench and temper (Q&T) treatment creates a microstructure of tempered martensite with, as main precipitates, M23C6 carbides, with M = Fe, Cr and carbonitrides of MX type, e.g. TaC and VN. The resulting microstructure determines the mechanical properties of the steel. The ductility is largely determined by the tempered martensite matrix, while the resistance to mechanical degradation, determined by the spatial and size distribution of precipitates and the martensite crystals, plays a key role in the high temperature properties of the steel. Unfortunately, the high temperature response of EUROFER97 is currently insufficient for long term use in fusion reactors, due to instability of the matrix phase and coarsening of the precipitates at prolonged high temperature exposure. The objective of this study is to induce grain refinement by appropriate modifications of the processing route in order to increase the high temperature strength of a lab-cast EUROFER RAFM steel grade. The goal of the work is to obtain improved mechanical behavior at elevated temperatures with respect to conventionally heat treated EUROFER97. A dilatometric study was conducted to study the effect of the annealing temperature on the mechanical properties after a Q&T treatment. The microstructural features were investigated with scanning electron microscopy (SEM), electron back-scattered diffraction (EBSD) and transmission electron microscopy (TEM). Additionally, hardness measurements, tensile tests at elevated temperatures and Charpy V-notch impact testing of KLST-type MCVN specimens were performed to study the mechanical properties of the furnace-heated lab-cast EUROFER RAFM steel grade. A significant prior austenite grain (PAG) refinement was obtained by lowering the annealing temperature of the conventionally used Q&T treatment for EUROFER97. The reduction of the PAG results in finer martensitic constituents upon quenching, which offers more nucleation sites for carbide and carbonitride formation upon tempering. The ductile-to-brittle transition temperature (DBTT) was found to decrease with decreasing martensitic block size. Additionally, an increased resistance against high temperature degradation was accomplished in the fine grained martensitic materials with smallest precipitates obtained by tailoring the annealing temperature of the Q&T treatment. It is concluded that the microstructural refinement has a pronounced effect on the DBTT without significant loss of strength and ductility. Further investigation into the optimization of the processing route is recommended to improve the mechanical behavior of RAFM steels at elevated temperatures.

Keywords: ductile-to-brittle transition temperature (DBTT), EUROFER, reduced activation ferritic/martensitic (RAFM) steels, thermal treatments

Procedia PDF Downloads 301
203 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications can affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach in real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite difference time domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
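
To make the classification step more concrete, here is a minimal 1D convolutional network of the general kind described, written in PyTorch with a synthetic three-channel intensity trace standing in for the tri-color photon signal; the layer sizes, trace length and number of candidate proteins are assumptions, and this is not the authors' architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a 1D CNN mapping a three-channel intensity trace (one
# channel per amino-acid tag color) to one of N candidate proteins. Trace
# length, layer sizes and protein count are illustrative assumptions.
N_PROTEINS, TRACE_LEN = 100, 512

class ProteinIdCNN(nn.Module):
    def __init__(self, n_proteins=N_PROTEINS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (TRACE_LEN // 16), n_proteins)

    def forward(self, x):               # x: (batch, 3, TRACE_LEN)
        return self.classifier(self.features(x).flatten(1))

model = ProteinIdCNN()
fake_traces = torch.randn(8, 3, TRACE_LEN)   # synthetic stand-in data
print(model(fake_traces).shape)              # torch.Size([8, 100])
```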

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 102
202 Economic Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis Pagone Emmanuele, Agbadede Roupa, Allison Isaiah

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric carbon dioxide has risen by almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero-emission power plant. Advanced zero-emission power plants make use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899, when Walter Hermann Nernst investigated electric currents between metals and solutions; he found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero-emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP cycle has drawn a lot of attention because of its ability to capture ~100% of CO2, a cost reduction of about 30-50% compared with other carbon abatement technologies, an efficiency penalty smaller than that of its counterparts, and almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. Advanced zero-emission power plants differ from a conventional gas turbine in that the combustor is substituted with the mixed conductive membrane reactor (MCM reactor). The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was modelled in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout – AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine – AZEP 85% (85% CO2 capture). This paper discusses a Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.
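
As a quick sanity check on the quoted compressor exit state (about 723 K at 2 MPa), the sketch below applies the standard isentropic-compression relation with an assumed ambient inlet state and an assumed isentropic efficiency; neither value is taken from the paper.

```python
# Minimal sketch: compressor exit temperature from the isentropic relation
# T2s = T1 * (P2 / P1)**((gamma - 1) / gamma), corrected by an assumed
# isentropic efficiency. Inlet state and efficiency are illustrative values.
T1 = 288.0                 # K, ambient inlet temperature
P1, P2 = 0.101e6, 2.0e6    # Pa, inlet and exit pressures
gamma = 1.4                # ratio of specific heats for air
eta_c = 0.87               # assumed compressor isentropic efficiency

T2s = T1 * (P2 / P1) ** ((gamma - 1.0) / gamma)
T2 = T1 + (T2s - T1) / eta_c
print(f"ideal (isentropic) exit temperature : {T2s:.0f} K")
print(f"exit temperature for eta_c = {eta_c} : {T2:.0f} K")
```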

Keywords: gas turbine, global warming, greenhouse gas, fossil fuel power plants

Procedia PDF Downloads 397
201 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of the flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used in this study, collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure, but the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The result also shows that the impact of flood potential differs by the severity and frequency of precipitation: the negative impact for a relatively mild, high-frequency flood potential is stronger than that for a heavy, low-probability flood potential, indicating that home buyers are more concerned about the frequency than the intensity of flooding. Another contribution of this study is methodological. Classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial-related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A body of literature applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial-related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy application of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses; the effect of flood prevention might vary dramatically by location.
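
For readers unfamiliar with GWR, its core computation is an ordinary weighted least-squares fit repeated at every location, with weights that decay with distance. The sketch below shows that kernel-weighted step on synthetic data with a Gaussian kernel and a fixed bandwidth; it omits the spatial fixed effects the authors combine with GWR, and none of the numbers relate to the Taipei dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hedonic data: price depends on a flood-potential dummy whose effect
# drifts across space. Purely illustrative; not the Taipei dataset.
n = 300
coords = rng.uniform(0.0, 10.0, size=(n, 2))
flood = rng.integers(0, 2, n)
true_local_effect = -1.0 - 0.2 * coords[:, 0]          # effect varies west to east
price = 10.0 + true_local_effect * flood + rng.normal(0.0, 0.3, n)

X = np.column_stack([np.ones(n), flood])
bandwidth = 2.0                                         # fixed Gaussian bandwidth (assumed)

def gwr_coefficients(target_xy):
    """Weighted least squares at a single regression point."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)             # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ price)  # [local intercept, local flood effect]

for xy in [(1.0, 5.0), (9.0, 5.0)]:
    b0, b1 = gwr_coefficients(np.array(xy))
    print(f"at {xy}: local flood-potential coefficient = {b1:.2f}")
```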

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 290
200 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both economic and environmental points of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in two-stage counter-current extraction at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum, with 99% purity, was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite Syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO3 can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.

Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery

Procedia PDF Downloads 269
199 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches

Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang

Abstract:

Numerous studies have reported that unilateral laminotomy and bilateral laminotomies are successful decompression methods for managing spinal stenosis. Unilateral laminotomy is, however, rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with the benefit of fewer complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion (ROM), and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following states: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1-S1) containing the vertebral bodies, discs, and ligaments was constructed and used to simulate unilateral and bilateral laminotomies at L3-L4 and L4-L5. The bottom surface of the S1 vertebral body was fully geometrically constrained, and a 10 Nm pure moment was applied to the top surface of the L1 vertebral body to drive the lumbar spine in flexion and extension. The experimental results showed that, in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively. No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively; at L4-L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental findings: no significant differences were found at L4-L5 in either flexion or extension in each group, and variations of only 0.02 and 0.04 degrees were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences between unilateral and bilateral laminotomies during flexion and extension in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulty and prevent perioperative complications; this study supports this benefit through biomechanical analysis. The results may provide some guidance for surgeons in making the final decision.

Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis

Procedia PDF Downloads 401
198 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, that model was built to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. The first goal was therefore to complete the modelling and update all stormwater network components. Available GIS data were then used to calculate different catchment properties such as slope, length and imperviousness. In order to calibrate and validate the model, data from two temporary pipe flow monitoring stations, collected during the previous summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The catchment length, although calculated, was also tested, because it is an approximate representation of the catchment shape, and the surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815 with the observations, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63, respectively, were all found to be within acceptable ranges.
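
The calibration statistics quoted above (correlation coefficient, peak error, volume error, maximum differences) are straightforward to compute for any observed/simulated flow pair. The sketch below does so for made-up hydrographs, using commonly assumed definitions of these measures, since the abstract does not spell out the formulas.

```python
import numpy as np

# Illustrative observed and simulated hydrographs (m^3/s); not Edmonton data.
observed = np.array([0.2, 0.5, 1.8, 3.6, 2.9, 1.4, 0.7, 0.3])
simulated = np.array([0.25, 0.55, 1.7, 3.62, 3.0, 1.5, 0.65, 0.35])

# Commonly assumed definitions of the reported goodness-of-fit measures:
corr = np.corrcoef(observed, simulated)[0, 1]
peak_error = 100.0 * (simulated.max() - observed.max()) / observed.max()
volume_error = 100.0 * (simulated.sum() - observed.sum()) / observed.sum()
max_pos_diff = np.max(simulated - observed)
max_neg_diff = np.min(simulated - observed)

print(f"correlation coefficient : {corr:.3f}")
print(f"peak error              : {peak_error:+.2f} %")
print(f"volume error            : {volume_error:+.2f} %")
print(f"max +/- differences     : {max_pos_diff:+.2f} / {max_neg_diff:+.2f} m^3/s")
```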

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 300
197 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed or friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and finally to identify a process window that seems relevant. The deposition of material through the AFSM process takes place in several phases; in chronological order, these phases are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces the temperature rise of the system composed of the tool, the filler material, and the substrate through pure friction. Analytical modeling of the friction-based heat generation considers the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system with the rise in temperature and the smoothing of the roughness of the materials in contact over time. This study proposes, through numerical modeling followed by experimental validation, to examine the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for more efficient process implementation.
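
The analytic friction-heating model mentioned above is commonly written, for a flat circular contact under uniform pressure, as Q = (2/3)*pi*mu*p*omega*R^3. The sketch below evaluates that expression for assumed tool dimensions and process parameters (none of them the study's actual values) to show how rotation speed, axial force and friction coefficient enter.

```python
import numpy as np

# Minimal sketch: frictional heat input for a flat circular tool/substrate
# contact under uniform pressure, Q = (2/3) * pi * mu * p * omega * R**3
# (the integral of mu * p * omega * r over the contact area).
# All numerical values below are illustrative assumptions.
R = 0.010             # contact radius, m
axial_force = 6000.0  # N
mu = 0.3              # friction coefficient (taken constant here)
rpm = 400.0           # tool rotation speed, rev/min

p = axial_force / (np.pi * R ** 2)      # contact pressure, Pa
omega = 2.0 * np.pi * rpm / 60.0        # angular speed, rad/s

Q = (2.0 / 3.0) * np.pi * mu * p * omega * R ** 3
print(f"contact pressure : {p / 1e6:.1f} MPa")
print(f"heat generation  : {Q:.0f} W")
```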

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 147
196 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because the optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser's waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of the light-grating coupling and to a search for grating parameters that enhance the near field beneath the tip. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our consideration, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every single slit of the grating due to the lightning rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on the geometry and material of the grating. The phase-modulating grating on the probe is a sort of metasurface that manipulates the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensity of the surface mode to that of the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression with the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation performed here demonstrate that various surface waves can be effectively excited by using the overtones of the period of the grating or by phase modulation of the incident field. Gratings with such periods are easy to fabricate, and the tapered probe with the grating effectively enhances and localizes the incident field at the sample.
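
The two ingredients named above, the angular spectrum decomposition of a phase-modulated field and the grating-coupling (phase-matching) condition, are sketched below for generic values of wavelength, permittivities and grating period rather than the ones studied; at normal incidence the condition reduces to k_spp = m*2*pi/period.

```python
import numpy as np

# Illustrative values only (gold-like metal at 633 nm in air), not the studied system.
wavelength = 633e-9
eps_metal, eps_dielectric = -11.7 + 1.2j, 1.0
k0 = 2.0 * np.pi / wavelength

# Surface plasmon wavevector on a flat metal/dielectric interface.
k_spp = (k0 * np.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))).real

# Grating-coupling (phase-matching) condition at normal incidence: k_spp = m * 2*pi / period.
for m in (1, 2, 3):
    print(f"order {m}: matching grating period ~ {m * 2.0 * np.pi / k_spp * 1e9:.0f} nm")

# Angular spectrum of a weakly phase-modulated incident field (first-order matched grating).
period = 2.0 * np.pi / k_spp
x = np.linspace(-5e-6, 5e-6, 4096)
field = np.exp(1j * 0.5 * np.sin(2.0 * np.pi * x / period))
spectrum = np.fft.fftshift(np.fft.fft(field))
kx = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))
peak_kx = kx[np.argmax(np.abs(spectrum) * (kx > 0.5 * k_spp))]
print(f"dominant positive spatial frequency / k_spp = {peak_kx / k_spp:.2f}")
```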

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 283
195 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to loss of human life and drilling facilities. Current well-control practices incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties, to enable realistic kick treatment; a corresponding numerical solution method is built with an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom hole pressure could be underestimated by 4.2%, while the bottom hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom hole pressure could be overestimated by 11.4% under steady flow conditions. Moreover, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has a minor effect on the steady wellbore temperature. Also, as the choke pressure increases, less gas exists in the annulus in the form of free gas.

Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 119
194 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease instrument adjustment within a glove-box environment and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, used to create a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation with a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500-600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the microdevice, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be fully generated in less than one second, making the gradient generation more time-efficient than in other source-sink passive diffusion devices. The resulting linear gradient generator was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold and the analysis time eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
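
The Taylor-Aris mechanism invoked above can be quantified through the effective axial dispersion coefficient D_eff = D_m*(1 + Pe^2/48), with Pe = U*a/D_m for a circular channel of radius a. The sketch below evaluates it for the stated 200 μm x 5 cm channel with an assumed mean velocity and molecular diffusivity, neither of which is given in the abstract.

```python
# Minimal sketch: Taylor-Aris effective axial dispersion in a circular
# micro-channel, D_eff = D_m * (1 + Pe**2 / 48) with Pe = U * a / D_m.
# Channel dimensions follow the abstract; U and D_m are assumed values.
a = 100e-6        # channel radius, m (200 um diameter)
L = 0.05          # channel length, m (5 cm)
U = 0.01          # mean flow velocity, m/s (assumed)
D_m = 1e-9        # molecular diffusivity, m^2/s (assumed, typical small ion)

Pe = U * a / D_m
D_eff = D_m * (1.0 + Pe ** 2 / 48.0)
spread_1s = (2.0 * D_eff * 1.0) ** 0.5      # axial spreading length after 1 s

print(f"Peclet number              : {Pe:.0f}")
print(f"effective dispersion D_eff : {D_eff:.2e} m^2/s ({D_eff / D_m:.0f} x molecular)")
print(f"axial spread after 1 s     : {spread_1s * 1e3:.1f} mm over a {L * 1e2:.0f} cm channel")
```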

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 388
193 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and the time to market of the business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
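
To make the "declarative definition in, cloud template out" idea concrete, here is a toy transformation in Python; the input schema and the resulting template fragment are invented for illustration and do not reflect i2kit's actual formats.

```python
import json

# Toy microservice definition (hypothetical schema, not i2kit's real input format).
definition = {
    "name": "orders",
    "replicas": 2,
    "containers": [{"image": "registry.example.com/orders:1.4", "port": 8080}],
}

def to_cloudformation(svc):
    """Render a minimal CloudFormation-style template: one EC2 instance per
    replica plus a classic load balancer. Grossly simplified on purpose."""
    resources = {}
    for i in range(svc["replicas"]):
        resources[f'{svc["name"]}Instance{i}'] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-PLACEHOLDER"},   # AMI built with linuxkit
        }
    resources[f'{svc["name"]}LoadBalancer'] = {
        "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
        "Properties": {
            "Listeners": [{"LoadBalancerPort": "80",
                           "InstancePort": str(svc["containers"][0]["port"]),
                           "Protocol": "HTTP"}],
            "Instances": [{"Ref": f'{svc["name"]}Instance{i}'}
                          for i in range(svc["replicas"])],
        },
    }
    return {"AWSTemplateFormatVersion": "2010-09-09", "Resources": resources}

print(json.dumps(to_cloudformation(definition), indent=2))
```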

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 180
192 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis

Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma

Abstract:

According to the 2013 UNAIDS estimate, nearly 52% of all individuals living with HIV are now women of reproductive age (15-44 years). Seventy-five percent of HIV acquisitions occur through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounts for 30% of STI cases and is associated with pelvic inflammatory disease (PID), vaginitis and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was therefore thought worthwhile to integrate HIV prevention and reproductive health services, including protection against unintended pregnancy for women, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) was proposed as a spermicidal agent with microbicidal activity, but on the contrary it increased HIV susceptibility due to its surfactant action. Thus, to meet the urgent need for novel, woman-controlled, non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. First, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas and antifungal activities, along with safety profiling towards cervicovaginal cells. To further extend the scope of the above study, benzenepropanamine was hybridized with thiourea so as to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas and antimicrobial activities, as well as their safety against vaginal flora and cervical cells. The simulated vaginal fluid (SVF) stability and pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption into the systemic circulation and subsequent exposure in blood plasma through the vaginal wall. The study resulted in the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl)piperazin-1-carbothioamide (29), exhibiting a better activity profile than N-9, with RT inhibition (72.30%), anti-Trichomonas activity (MIC of 46.72 µM against the MTZ-susceptible strain and 187.68 µM against the resistant strain), spermicidal activity (MEC, 0.01%) and antifungal activity (MIC, 3.12-50 µg/mL) against four fungal strains. Its high safety towards the vaginal epithelium (HeLa cells), compatibility with vaginal flora (Lactobacillus), SVF stability and minimal vaginal absorption supported its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of the most promising compound (29) with HIV-1 reverse transcriptase; it revealed that compound (29) interacts with HIV-1 RT similarly to the standard drug nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties resulted in a novel lead with multiple activities, including RT inhibition. Further lead optimization may result in effective vaginal microbicides having spermicidal, anti-Trichomonas, antifungal and anti-HIV potential altogether, with enhanced safety towards cervico-vaginal cells in comparison to nonoxynol-9.

Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide

Procedia PDF Downloads 344
191 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have been of greater importance than shale oil reservoirs since 2009, and given the current nature of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin), with the analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells coming from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recoverable (EUR). The EUR ranges from each basin were loaded into the Palisade Risk software, and a lognormal distribution, typical of Barnett shale wells, was fitted to the dataset. A Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From the cumulative distribution plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50 and P90. The rescaled production was entered into the economic model to determine the effect of the finding and development cost and gas price on the net present value (10% discount rate per year), as well as to determine the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of the drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008-2015. The major findings of this study were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any of the finding and development costs, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic at different ranges of costs and gas prices, to identify the basins that are most economic, and to identify the wells that satisfy the investment hurdle.
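As a rough illustration of the Monte Carlo and discounted-cash-flow workflow described above, the Python sketch below samples lognormal EURs, spreads each well's recovery over an exponential decline profile and checks the NPV and payback hurdles; the lognormal parameters, decline rate, currency conversion and horizon are illustrative assumptions and not the study's fitted values or Palisade Risk outputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- assumed illustrative inputs (not the paper's fitted values) ---
mu_log_eur, sigma_log_eur = np.log(2.0), 0.8    # lognormal EUR in BCF
n_wells = 1000                                  # Monte Carlo iterations
fd_cost = 2.0e6 * 1.3                           # £2M converted to $ at an assumed 1.3 $/£
gas_price = 4.0                                 # $/MCF
discount_rate = 0.10                            # per year
decline = 0.35                                  # assumed exponential decline, 1/yr
months = np.arange(1, 241)                      # 20-year horizon

# Sample EURs and convert BCF -> MCF
eur_mcf = rng.lognormal(mu_log_eur, sigma_log_eur, n_wells) * 1.0e6
p10, p50, p90 = np.percentile(eur_mcf, [90, 50, 10])  # P10 = high case (90th percentile)

# Exponential decline profile scaled so cumulative production equals the EUR
t_yr = months / 12.0
frac = np.exp(-decline * (t_yr - 1 / 12)) - np.exp(-decline * t_yr)
frac /= frac.sum()                              # monthly fraction of EUR produced
prod = eur_mcf[:, None] * frac[None, :]         # MCF per month per well

# Discounted cash flow: revenue only, against the finding & development cost
disc = (1 + discount_rate) ** (-t_yr)
npv = (prod * gas_price * disc).sum(axis=1) - fd_cost

# Simple payback check: first month where cumulative undiscounted cash >= F&D cost
cum_cash = np.cumsum(prod * gas_price, axis=1)
payback_month = np.argmax(cum_cash >= fd_cost, axis=1) + 1
paid_back = cum_cash[:, -1] >= fd_cost

print(f"P10/P50/P90 EUR (BCF): {p10/1e6:.2f} / {p50/1e6:.2f} / {p90/1e6:.2f}")
print(f"Share of wells with NPV10 > 0: {(npv > 0).mean():.1%}")
print(f"Share paying back within 60 months: "
      f"{((payback_month <= 60) & paid_back).mean():.1%}")
```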

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recoverable (EUR)

Procedia PDF Downloads 302
190 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied during its deceleration stage, after an initial free-fall phase (in which the microgravity effect is generated), using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and the vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the airfoil S1091 has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients throughout the simulation process. Employing a statistical response surface methodology, the case study is parameterized considering the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), the Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficient for every AoA is determined. Using these values, the terminal speed at each position is calculated considering a specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage, without compromising the payload, can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is almost like the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be utilized for several missions, allowing repeatability of microgravity experiments.
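The response-surface step can be illustrated with a short Python sketch that fits a 2nd-degree polynomial to a handful of (AoA, Cd) design points and converts the modeled Cd into a terminal speed from the force balance m·g = ½·ρ·v²·Cd·A. The tabulated Cd values (apart from the reported maximum of 1.18), the mass and the reference area are hypothetical placeholders rather than the CFD results of this study.

```python
import numpy as np

# Hypothetical (AoA, Cd) design points standing in for the CFD/CCD results
aoa_deg = np.array([2, 20, 40, 60, 80])
cd      = np.array([0.08, 0.35, 0.70, 1.00, 1.18])   # Cd at 80° matches the reported maximum

# 2nd-degree polynomial response surface Cd(AoA)
coeffs = np.polyfit(aoa_deg, cd, deg=2)
cd_model = np.poly1d(coeffs)

# Terminal speed from the force balance m*g = 0.5*rho*vt^2*Cd*A
m, g, rho, area = 8.0, 9.81, 1.225, 0.9     # assumed vehicle mass (kg) and reference area (m^2)

def terminal_speed(aoa):
    cd_val = float(cd_model(aoa))
    return np.sqrt(2 * m * g / (rho * cd_val * area))

for a in (10, 30, 50, 70, 80):
    print(f"AoA {a:2d} deg: Cd = {cd_model(a):.2f}, v_t = {terminal_speed(a):.1f} m/s")
```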

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
189 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, which is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through an intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up the reliability prediction model. Next, for design validation, reliability prediction, specifically solder joint simulation, is established. The SSDs are stratified into Non-Operating and Operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase is implemented, whereby Design for Assembly (DFA) rules are updated. At this stage, design changes and process and equipment parameters are under control. Predictable product reliability early in product development enables on-time delivery of qualification samples to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows a focus on increasing the product margin, which increases customer confidence in product reliability.
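As a hedged aside on the Temperature Cycle Test mentioned above, a generic way to relate accelerated TCT cycles to field cycles for solder joints is the Norris-Landzberg acceleration-factor model. The sketch below is a textbook-style illustration with typical literature constants for SnPb solder; it is not the reliability prediction model developed in this work, and lead-free joints would need re-calibrated constants.

```python
import math

def norris_landzberg_af(dT_test, dT_field, f_test, f_field,
                        Tmax_test_C, Tmax_field_C,
                        n=1.9, m=1/3, Ea_eV=0.122, k_eV=8.617e-5):
    """Acceleration factor AF = N_field / N_test for thermal cycling of solder joints.

    Defaults are typical literature constants for SnPb solder (assumption);
    SAC and other lead-free alloys require different n, m and Ea.
    """
    Tmax_test = Tmax_test_C + 273.15
    Tmax_field = Tmax_field_C + 273.15
    return ((dT_test / dT_field) ** n
            * (f_field / f_test) ** m
            * math.exp(Ea_eV / k_eV * (1 / Tmax_field - 1 / Tmax_test)))

# Example: a -40/125 °C TCT at 48 cycles/day versus a 0/60 °C field profile at 1 cycle/day
af = norris_landzberg_af(dT_test=165, dT_field=60,
                         f_test=48, f_field=1,
                         Tmax_test_C=125, Tmax_field_C=60)
print(f"Acceleration factor: {af:.1f}")
```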

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 174
188 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013 and March-April 2014, representing the monsoon, post-monsoon and pre-monsoon seasons, respectively, during which high-temporal-resolution in-situ data sets are available. The model is first validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides at the semi-diurnal frequency are dominant in both the observations and the model simulations for November-December and March-April. However, in August, the spectral estimate is found to be a maximum near the inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitudes are found to be well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance in all seasons. The phase speed, group speed and wavelength are found to be largest in the post-monsoon season compared with the other two seasons. The model simulations suggest that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence the local mixing due to the internal tide is largest at these sites. The spatial distribution of available potential energy is found to be at a maximum in November (20 kg/m²) in the northern Bay of Bengal and at a minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results are analysed.
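The EOF step can be sketched in a few lines of Python: the baroclinic velocity anomaly matrix (depth by time) is decomposed with a singular value decomposition, and the squared singular values give the fraction of variance captured by each vertical mode. The synthetic two-mode velocity section below is only a stand-in for the MITgcm output or mooring records used in the study.

```python
import numpy as np

# Synthetic stand-in for a baroclinic velocity section: rows = depth levels,
# columns = hourly time steps (the real input would be model or mooring data).
rng = np.random.default_rng(0)
n_depth, n_time = 40, 24 * 60               # 60 days of hourly data
z = np.linspace(0, 1, n_depth)
t = np.arange(n_time)
m2_period = 12.42                           # M2 period in hours

mode1 = np.cos(np.pi * z)[:, None] * np.cos(2 * np.pi * t / m2_period)[None, :]
mode2 = np.cos(2 * np.pi * z)[:, None] * np.cos(2 * np.pi * t / m2_period + 1.0)[None, :]
u = 0.20 * mode1 + 0.05 * mode2 + 0.02 * rng.standard_normal((n_depth, n_time))

# EOF decomposition: remove the time mean, then take the SVD of the anomaly matrix
u_anom = u - u.mean(axis=1, keepdims=True)
eofs, sing_vals, pcs = np.linalg.svd(u_anom, full_matrices=False)
# eofs[:, k] is the vertical structure of mode k; pcs[k, :] its time series
variance_frac = sing_vals**2 / np.sum(sing_vals**2)

for k in range(3):
    print(f"EOF mode {k + 1}: {variance_frac[k]:.1%} of total variance")
print(f"First three modes combined: {variance_frac[:3].sum():.1%}")
```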

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 170
187 High Throughput Virtual Screening against ns3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Currently, management involves only supportive care and symptom relief, while prevention relies on vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. To this end, docking experiments with kinase inhibitors were carried out against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts in computational drug design against JEV revealed some lead molecules through virtual screening using public-domain software. To identify leads more specifically and accurately, the proprietary software Schrödinger GLIDE has been used in this study. The druggability of the pockets in the NS3 helicase crystal structure was first calculated with SITEMAP. The sites were then screened according to their compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening with GLIDE was performed on ligands acquired from the KinaseSARfari, KinaseKnowledgebase and Published Inhibitor Set databases. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and the similar human proteins were docked against all the best-scoring ligands. Ligands scoring poorly against the human proteins were retained for further study, while the high-scoring ones were screened out. Seventy-three ligands were listed as the best-scoring ones after performing HTVS. The protein structure alignment of NS3 revealed three human proteins with RMSD values of less than 2 Å. Docking against these three proteins revealed the inhibitors that could interfere with and inhibit the human proteins; those inhibitors were screened out. Among the ones left, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues that are essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from providing suitable leads, specific NS3 helicase-inhibitor interactions were identified. The selection of target modification strategies complementing the docking methodology, which can result in better lead compounds, is in progress; such enhanced leads can support better in vitro testing.
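The screening-and-counter-screening logic described above can be summarized by a small filter: keep ligands whose docking score against NS3 helicase is better than a chosen cutoff, but whose scores against the VAST-matched human proteins are worse than a selectivity cutoff. The ligand names, scores and thresholds in the sketch below are hypothetical and do not come from the GLIDE runs of this study.

```python
import pandas as pd

# Hypothetical docking-score table; in the study these values would come from
# GLIDE runs against NS3 helicase and the three VAST-matched human proteins.
scores = pd.DataFrame({
    "ligand":    ["L001", "L002", "L003", "L004", "L005"],
    "ns3_score": [-9.8, -8.9, -7.1, -9.2, -6.0],       # more negative = better binding
    "human1":    [-5.1, -8.8, -4.9, -5.5, -4.0],
    "human2":    [-4.8, -7.9, -5.2, -4.9, -3.8],
    "human3":    [-5.5, -9.1, -4.5, -5.0, -4.2],
})

NS3_CUTOFF = -8.0      # keep only ligands docking strongly to the viral target
HUMAN_CUTOFF = -7.0    # discard ligands that also dock strongly to the human proteins

potent = scores["ns3_score"] <= NS3_CUTOFF
selective = (scores[["human1", "human2", "human3"]] > HUMAN_CUTOFF).all(axis=1)

hits = scores[potent & selective]
print(hits[["ligand", "ns3_score"]])
```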

Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase

Procedia PDF Downloads 232
186 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands through words, and when the program is compiled, whatever is produced as output is limited to what the computer was given as a capability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just as we can program a virtual world on a computer. GOD did mention in the Quran that one day where GOD's throne is equals 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, produced by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform quadrillions (10¹⁵) of floating-point operations per second, a number a human cannot even fathom. To put it further in perspective, GOD is calculating while the computer is going through those calculations each second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD said in The Quran that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual-reality setting. GOD is recording it, from every angle to every thought, to every action. This conveys how daunting the Day of Judgment will be, when one realizes that it is going to be a fully immersive video as we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 107
185 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case in which this type of ice forms is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of the brine pockets until a local equilibrium state is reached. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer and partial solidification and melting together. The properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases for this type of ice: one side is treated as a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are obtained for different salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variation of temperature, brine volume fraction and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation; this rate decreases as time passes. Brine pockets are smaller in regions closer to the colder side than near the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals improves this. The analytical model, solved with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
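A minimal Method of Lines sketch for the conduction part of the problem is given below: the slab is discretized in space, each node's temperature becomes one ordinary differential equation, one face is insulated and the other is suddenly held at a cold constant temperature. The diffusivity, thickness and temperatures are illustrative, and the latent-heat and brine-salinity updates of the full model are deliberately omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of Lines for 1-D transient conduction, dT/dt = alpha * d2T/dx2,
# with an insulated face at x = 0 and a suddenly imposed cold face at x = L.
L, n = 0.05, 51                      # slab thickness (m), number of grid points
alpha = 1.2e-6                       # assumed effective thermal diffusivity (m^2/s)
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
T0, T_cold = -2.0, -20.0             # initial and imposed boundary temperatures (°C)

def rhs(t, T):
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt[0] = alpha * 2 * (T[1] - T[0]) / dx**2      # insulated (zero-flux) face
    dTdt[-1] = 0.0                                   # fixed-temperature face
    return dTdt

T_init = np.full(n, T0)
T_init[-1] = T_cold
sol = solve_ivp(rhs, (0.0, 3600.0), T_init, method="BDF",
                t_eval=np.linspace(0.0, 3600.0, 7))

for tk, Tk in zip(sol.t, sol.y.T):
    print(f"t = {tk/60:5.1f} min: T(insulated face) = {Tk[0]:6.2f} °C")
```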

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 217
184 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and slight deviations arise from scale differences. However, insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are carried over to a future civil aircraft with a size quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a CFD study of the geometric similarity of airfoil parameters and of surface mesh quality is conducted to assess how well different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, namely the wing root and wingtip of a conventional civil aircraft and the wing root of a giant hybrid wing, each parameterized by three methods to compare the calculation differences between airfoil sizes. The fixed settings in this study are the NACA 0012 section, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge-mesh divisions and the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, the parameterization method, the number of control points and the number of mesh divisions should be adapted to improve the accuracy of the predicted aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization requires more and larger data sets to support the accuracy of the airfoil's aerodynamic performance, which runs up against the limits of computer capacity. When using the B-spline curve method, the number of control points and the number of mesh divisions must be set appropriately to obtain higher accuracy; this balance cannot be defined directly and has to be found iteratively by adding and removing control points. Lastly, when using the CST method, a limited number of control points is enough to parameterize the larger wing accurately, and a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
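For reference, the CST method mentioned above combines a class function x^0.5·(1-x) with a Bernstein-polynomial shape function whose few weights control the whole surface, which is why it scales well to large chords. The sketch below uses illustrative weights for a roughly NACA-0012-like symmetric section; they are not the fitted parameterization used in this study.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, dz_te=0.0, n1=0.5, n2=1.0):
    """Class/Shape function Transformation (CST) surface.

    x       : chordwise coordinates in [0, 1]
    weights : Bernstein coefficients controlling the shape
    dz_te   : trailing-edge thickness offset
    """
    x = np.asarray(x, dtype=float)
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn + x * dz_te

# Illustrative weights for a roughly 12%-thick symmetric section (assumption)
x = np.linspace(0.0, 1.0, 101)
w_upper = [0.17, 0.15, 0.14, 0.15]
y_upper = cst_surface(x, w_upper)
y_lower = -y_upper                      # symmetric airfoil

chord = 48.0                            # scale the unit chord to the 48 m root section
print(f"Max thickness-to-chord ratio: {(y_upper - y_lower).max():.3f}")
print(f"Max half-thickness at {chord:.0f} m chord: {(y_upper * chord).max():.2f} m")
```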

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 222
183 Monte Carlo Risk Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele

Abstract:

Climate change represents one of the most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process, which was first introduced in 1899 when Walther Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen moves through it. In a bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP has drawn a lot of attention because of its ability to capture ~100% of CO2, a cost reduction of about 30-50% compared to other carbon abatement technologies, an efficiency penalty that is smaller than that of its counterparts, and almost zero NOx emissions due to the very low nitrogen concentration in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is replaced by the mixed conductive membrane reactor (MCM reactor). The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design-point specification of the plant and the efficiency of the heat exchange system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air passing through the membrane. The AZEP cycle was modelled in Fortran, and the economic analysis was conducted in Excel and MATLAB, followed by an optimization case study. This paper discusses a techno-economic analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture).
Building on that assessment, the paper then presents a Monte Carlo risk analysis of these four AZEP cycle layouts.
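A schematic example of such a Monte Carlo risk analysis is sketched below: uncertain capital cost, efficiency and prices are sampled from simple triangular distributions and propagated to a net present value distribution. All numbers are illustrative placeholders and are not the techno-economic parameters of the four AZEP layouts.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                                            # Monte Carlo iterations

# Illustrative input distributions (not the study's calibrated values):
capex = rng.triangular(0.9e9, 1.1e9, 1.5e9, n)         # plant capital cost, $
eff   = rng.triangular(0.44, 0.47, 0.50, n)            # net cycle efficiency
fuel  = rng.triangular(4.0, 6.0, 9.0, n)               # fuel price, $/MWh of fuel energy
elec  = rng.triangular(55.0, 65.0, 80.0, n)            # electricity price, $/MWh

power_mw, cap_factor, life_yr, disc = 400.0, 0.85, 25, 0.08
mwh_per_yr = power_mw * cap_factor * 8760.0

annual_cash = mwh_per_yr * (elec - fuel / eff)         # revenue minus fuel cost
annuity = (1 - (1 + disc) ** -life_yr) / disc          # present-value annuity factor
npv = annual_cash * annuity - capex

print(f"Mean NPV: ${npv.mean()/1e9:.2f} bn")
print(f"P(NPV < 0): {(npv < 0).mean():.1%}")
print(f"5th / 95th percentile NPV: "
      f"${np.percentile(npv, 5)/1e9:.2f} bn / ${np.percentile(npv, 95)/1e9:.2f} bn")
```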

Keywords: gas turbine, global warming, green house gases, power plants

Procedia PDF Downloads 472
182 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to the maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least squares estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are normally distributed and, hence, the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transformation and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors that follow a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant hypothesis tests are developed and examined for desirable properties in terms of size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation.
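The kind of robustness comparison reported above can be illustrated with a small simulation: regression errors are drawn from a skew t construction (a skew-normal numerator divided by a scaled chi-square root), and an ordinary least squares fit is compared with a robust fit. For simplicity, the robust fit below uses a Huber loss as a stand-in estimator; it is not the closed-form modified maximum likelihood method of this study.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n, beta_true = 200, np.array([2.0, 1.5])

# Skewed, fat-tailed errors from Azzalini's construction:
# a skew-normal numerator divided by sqrt(chi2/df) gives a skew t variate.
delta, df = 0.9, 4
z0, z1 = rng.standard_normal(n), rng.standard_normal(n)
skew_normal = delta * np.abs(z0) + np.sqrt(1 - delta**2) * z1
errors = skew_normal / np.sqrt(rng.chisquare(df, n) / df)

x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = X @ beta_true + errors

# Ordinary least squares
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Robust M-estimate with a Huber loss (stand-in for the MML estimator)
res = least_squares(lambda b: y - X @ b, x0=beta_ols, loss="huber", f_scale=1.0)
beta_robust = res.x

print(f"true slope {beta_true[1]:.2f}, OLS {beta_ols[1]:.3f}, robust {beta_robust[1]:.3f}")
```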

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
181 DIF-JACKET: A Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their lives. Although thermal protective clothing research and development has been seeking solutions to minimize firefighters' heat load and skin burns, currently commercially available solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system by following a procedure based on the application of numerical models to optimize the design and on a combination of protective clothing components arranged in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while, at the same time, complying with the international standard for firefighters' protective clothing – laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest and abdomen) and to be worn over a fire-resistant jacket was envisaged. Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistances. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
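The basic energy argument for the PCM layer can be put into numbers with a short back-of-the-envelope sketch: the heat the layer can absorb per unit area (sensible plus latent) divided by the net incident heat flux gives the approximate extra exposure time it buys. All values below are illustrative assumptions, not the project's measured material or exposure data.

```python
# Rough energy balance for the extra protection time a PCM layer can buy,
# using illustrative values (not the project's measured material data).
heat_flux = 5_000.0        # net heat flux reaching the PCM layer, W/m^2
areal_mass = 1.2           # PCM mass per unit area of the vest, kg/m^2
latent_heat = 180_000.0    # latent heat of fusion, J/kg
cp = 2_000.0               # specific heat of the PCM, J/(kg*K)
dT_sensible = 20.0         # heating from storage to melting temperature, K

energy_per_m2 = areal_mass * (cp * dT_sensible + latent_heat)   # J/m^2
extra_time_s = energy_per_m2 / heat_flux

print(f"Stored energy: {energy_per_m2/1000:.0f} kJ/m^2")
print(f"Approximate extra exposure time: {extra_time_s:.0f} s "
      f"({extra_time_s/60:.1f} min)")
```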

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 165
180 A Literature Review on the Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (EDs) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and hospital ED professionals through the use of Information and Communication Technology (ICT). The study followed a systematic approach; six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and the IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analysed according to the key screening themes which emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals and that, in some cases, there are interagency agreements with pre-hospital and other relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communications within and between pre-hospital and hospital settings. (2) Communication systems used in the disaster. This theme highlighted that, although various communication systems are used between and within hospitals and pre-hospital services, technical issues have hampered communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system which can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analysed disaster drills and training, the majority of these studies focused on hospital departments other than EMTs; they suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and hospital ED staff in relation to disaster response. The review shows that, although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.

Keywords: communication, emergency communication services, emergency medical teams, emergency physicians, emergency nursing, paramedics, information and communication technology, communication systems

Procedia PDF Downloads 86