Search results for: nozzle length
227 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score
Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah
Abstract:
Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation, and homicidal attempts. Assessing burn injuries, and rating permanent disability and disfigurement after thermal injury for the purpose of compensation claims, is a challenging problem. This necessitates the development of reliable scoring systems that yield an expected likelihood of permanent disability or fatal outcome following burn injury. The study was designed to identify the risk factors for mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Patients were examined after obtaining informed consent, and the data were collected in specially designed sheets covering demographic data, burn details and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The mean age of the patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases. In addition, 8% of patients sustained burns over more than 60% of the total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died from wound sepsis, multi-organ failure or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27 and the mean FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, significantly higher than that of the BOBI score (0.883). A statistically significant association was found between both predictive models and the outcome. The study concluded that both scoring systems were useful in predicting mortality in acutely burnt patients; however, the FLAMES score could be applied with a higher level of accuracy. Keywords: BOBI, burns, FLAMES, scoring systems, outcome
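As a rough illustration of the comparison reported above, the sketch below computes the area under the ROC curve for two prognostic scores against a binary mortality outcome; the score and outcome arrays are hypothetical placeholders, not data from the study.
```python
# Hedged sketch (not the authors' code): comparing two prognostic scores by
# the area under the ROC curve, as done for FLAMES vs. BOBI in the study.
# All arrays below are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

died = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])            # 1 = death, 0 = survival
bobi = np.array([0, 1, 3, 1, 4, 5, 0, 2, 6, 1])            # BOBI score (higher = worse)
flames = np.array([-8, -6, -1, -5, 0, 2, -7, -4, 3, -6])   # FLAMES score (higher = worse)

print(f"AUC BOBI   = {roc_auc_score(died, bobi):.3f}")
print(f"AUC FLAMES = {roc_auc_score(died, flames):.3f}")
```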
Procedia PDF Downloads 334
226 The Effect of Technology on Skin Development and Progress
Authors: Haidy Weliam Megaly Gouda
Abstract:
Dermatology is often a neglected specialty in low-resource settings despite the high morbidity associated with skin disease. This becomes even more significant in the context of HIV infection, as dermatological conditions are more common and more aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population of a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on the available literature and on data collected from clinics, with the aim of improving diagnostic accuracy and providing guidance for the treatment of skin disease in HIV-positive patients. A literature search within Embase, Medline and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes when they develop in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After data collection and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included, and a booklet was created using a simple layout of a picture, a diagnostic description of the disease and treatment options. Clinical photographs were collected from local clinics (with the full consent of the patient) or from the book ‘Common Skin Diseases in Africa’ (permission granted if fully acknowledged and used in a not-for-profit capacity). The tool was evaluated by the local staff alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice. Keywords: prevalence and pattern of skin diseases, impact on quality of life, rural Nepal, interventions, quality switched ruby laser, skin color river blindness, clinical signs, circularity index, grey level run length matrix, grey level co-occurrence matrix, local binary pattern, object detection, ring detection, shape identification
Procedia PDF Downloads 60
225 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance
Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty
Abstract:
One of the most important tasks in operating desalination plants using a reverse osmosis (RO) method is preventing RO membrane fouling caused by foulants found in seawater. Optimal design of the pre-treatment process of RO process for plants enables the reduction of foulants. Therefore, a quantitative evaluation of the fouling risk in pre-treated water, which is fed to RO, is required for optimal design. Some measurement methods for water quality such as silt density index (SDI) and total organic carbon (TOC) have been conservatively applied for evaluations. However, these methods have not been effective in some situations for evaluating the fouling risk of RO feed water. Furthermore, stable management of plants will be possible by alerts and appropriate control of the pre-treatment process by using the method if it can be applied to the inline monitoring system for the fouling risk of RO feed water. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants found in seawater using a sensor whose surface is coated with polyamide thin film, which is the main material of a RO membrane. The increase of the weight of the sensor after a certain length of time in which the sample water passes indicates the fouling risk of the sample directly. We classified the values as “FP: Fouling Potential”. The characteristics of the method are to measure the very small amount of substances in seawater in a short time: < 2h, and from a small volume of the sample water: < 50mL. Using some RO cell filtration units, a higher correlation between the pressure increase given by RO fouling and the FP from the method than SDI and TOC was confirmed in the laboratory-scale test. Then, to establish the correlation in the actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we have started a long-term test at an experimental desalination site by the Red Sea in Jeddah, Kingdom of Saudi Arabia. Implementing inline equipment for the method made it possible to measure FP intermittently (4 times per day) and automatically. Moreover, for two 3-month long operations, the RO operation pressure among feed water samples of different qualities was compared. The pressure increase through a RO membrane module was observed at a high FP RO unit in which feed water was treated by a cartridge filter only. On the other hand, the pressure increase was not observed at a low FP RO unit in which feed water was treated by an ultra-filter during the operation. Therefore, the correlation in an actual scale RO membrane was established in two runs of two types of feed water. The result suggested that the FP method enables the evaluation of the fouling risk of RO feed water.Keywords: fouling, monitoring, QCM, water quality
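A minimal sketch of how an FP value could be reduced from a single QCM run is given below; the Sauerbrey conversion, the crystal constant and the per-hour normalisation are assumptions added for illustration, since the abstract defines FP only as the sensor weight gain over the test.
```python
# Hedged sketch: deriving a fouling-potential (FP) value from one QCM run.
# The Sauerbrey conversion and the per-hour normalisation are illustrative
# assumptions; the study defines FP simply as the sensor weight gain.
C_SAUERBREY = 17.7     # ng/(cm^2*Hz), mass sensitivity of a 5 MHz AT-cut crystal
delta_f_hz = -12.4     # measured resonance-frequency shift after the run (hypothetical)
duration_h = 2.0       # run length, h (the method targets < 2 h)

mass_gain = -C_SAUERBREY * delta_f_hz      # areal mass deposited, ng/cm^2
fp = mass_gain / duration_h                # fouling potential, ng/(cm^2*h)
print(f"mass gain = {mass_gain:.1f} ng/cm^2  ->  FP = {fp:.1f} ng/cm^2/h")
```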
Procedia PDF Downloads 211
224 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of the loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed in several previous earthquakes, and comprehensive reports on these events exist. One of the main causes of damage to buried pipelines during earthquakes is liquefaction, for which the necessary conditions are loose sandy soil, saturation of the soil layer and sufficient earthquake intensity. Because pipelines differ greatly from other structures (being long and light), comparison of the results of previous earthquakes with those for other structures shows that the liquefaction hazard for buried pipelines is not high unless the governing factors, such as earthquake intensity and loose soil, are severe. Recent liquefaction research for buried pipelines includes experimental and theoretical studies as well as damage investigations from actual earthquakes. Damage statistics from past severe earthquakes reveal that the damage ratio of pipelines (number/km) is much larger in liquefied ground than in shaken ground without liquefaction, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were severely damaged by liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meter in diameter, covered by light sandy soil with a burial depth of 2.5 meters below the surface. Since the finite element method has been applied relatively successfully to geotechnical problems, it was used for the numerical analysis. Evaluating this case requires geotechnical information, classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between the soil and the pipeline. The results of this study indicate that the effect of liquefaction is a function of pipe diameter, soil type and peak ground acceleration, and that the percentage of damage clearly increases with liquefaction severity. The results also indicate that, although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined “failures” include failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given to decrease the liquefaction risk for buried pipelines. Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 510
223 Energy Harvesting and Storage System for Marine Applications
Authors: Sayem Zafar, Mahmood Rahi
Abstract:
Rigorous international maritime regulations are in place to limit boat and ship hydrocarbon emissions. The global sustainability goals are reducing the fuel consumption and minimizing the emissions from the ships and boats. These maritime sustainability goals have attracted a lot of research interest. Energy harvesting and storage system is designed in this study based on hybrid renewable and conventional energy systems. This energy harvesting and storage system is designed for marine applications, such as, boats and small ships. These systems can be utilized for mobile use or off-grid remote electrification. This study analyzed the use of micro power generation for boats and small ships. The energy harvesting and storage system has two distinct systems i.e. dockside shore-based system and on-board system. The shore-based system consists of a small wind turbine, photovoltaic (PV) panels, small gas turbine, hydrogen generator and high-pressure hydrogen storage tank. This dockside system is to provide easy access to the boats and small ships for supply of hydrogen. The on-board system consists of hydrogen storage tanks and fuel cells. The wind turbine and PV panels generate electricity to operate electrolyzer. A small gas turbine is used as a supplementary power system to contribute in case the hybrid renewable energy system does not provide the required energy. The electrolyzer performs the electrolysis on distilled water to produce hydrogen. The hydrogen is stored in high-pressure tanks. The hydrogen from the high-pressure tank is filled in the low-pressure tanks on-board seagoing vessels to operate the fuel cell. The boats and small ships use the hydrogen fuel cell to provide power to electric propulsion motors and for on-board auxiliary use. For shore-based system, a small wind turbine with the total length of 4.5 m and the disk diameter of 1.8 m is used. The small wind turbine dimensions make it big enough to be used to charge batteries yet small enough to be installed on the rooftops of dockside facility. The small dimensions also make the wind turbine easily transportable. In this paper, PV, sizing and solar flux are studied parametrically. System performance is evaluated under different operating and environmental conditions. The parametric study is conducted to evaluate the energy output and storage capacity of energy storage system. Results are generated for a wide range of conditions to analyze the usability of hybrid energy harvesting and storage system. This energy harvesting method significantly improves the usability and output of the renewable energy sources. It also shows that small hybrid energy systems have promising practical applications.Keywords: energy harvesting, fuel cell, hybrid energy system, hydrogen, wind turbine
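The sketch below illustrates the dockside energy balance described above, from daily wind and PV output to hydrogen produced by the electrolyzer and electricity recovered by the on-board fuel cell; every number in it is an illustrative assumption, not a value from the study.
```python
# Hedged sizing sketch for the dockside hydrogen loop described above.
# All figures are illustrative assumptions, not values from the study.
wind_kwh_day = 8.0                # daily wind-turbine yield, kWh (assumed)
pv_kwh_day = 12.0                 # daily PV yield, kWh (assumed)
electrolyzer_kwh_per_kg = 53.0    # typical electrolyzer consumption, kWh per kg H2 (assumed)
fuel_cell_eff = 0.50              # on-board fuel-cell electrical efficiency (assumed)
H2_LHV_KWH_PER_KG = 33.3          # lower heating value of hydrogen

h2_kg_day = (wind_kwh_day + pv_kwh_day) / electrolyzer_kwh_per_kg
onboard_kwh_day = h2_kg_day * H2_LHV_KWH_PER_KG * fuel_cell_eff
print(f"H2 ~ {h2_kg_day:.2f} kg/day  ->  ~ {onboard_kwh_day:.1f} kWh/day usable on board")
```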
Procedia PDF Downloads 136
222 Exploring Ugliness as an Aesthetic Theme in Contemporary Chinese Literature through Analyzing Five Dragons, Protagonist in Rice by Xianfeng Writer Su Tong
Authors: Ku Yu Yiu
Abstract:
Writers have included the ugly in their works for centuries, but ugliness has often served merely as a contrast to bring out the beautiful, not having emerged as an independent aesthetic category until recent history. In the 1980s, China was going through a series of changes and transformations; the wounds and scars from the Cultural Revolution, a freer literary atmosphere then, and the introduction of Western thoughts into China gave rise to a trend of penning the ugly and the repulsive among writers. Such trend of utilizing 'Ugliness' as a theme of writing in Chinese literature is especially observed among Xianfeng writers (China’s pioneer writers or avant-garde writers). As a prominent Xianfeng writer, Su Tong (1963-) also incorporates ugliness into his novels: shoddy environment, degenerate and ruthless society, distorted and decadent humanity are part and parcel of his deliberate efforts of exploring and depicting the ugly aspects of the world. His full-length novel Rice, staging the appalling protagonist Five Dragons, is a prime example. In fact, all characters in Rice exhibit Ugliness but Five Dragons’s turning into a figure of ugly spite is the most thorough and complete, making Rice a masterpiece of Su Tong’s art in projecting the Ugliness embedded in society and human nature. Approaching Rice from the angle of the aesthetics of the Ugly and selecting Five Dragons as the subject of close reading and analysis, this paper offers insights into both Su Tong’s distinct style of foregrounding and unfolding Ugliness in his novel and the workings of such text when he deploys the Ugly as a center component of his writing. In addition to citing from the discussion of Rice by literary critics and the author himself, this paper also presents textual evidence and analyzes the imageries/motifs and calculated vocabulary/narration employed by Su Tong to illustrate how Five Dragons' extreme behaviors and psychological states are integral to the plot and ultimately to the manifestation of ugliness as the novel’s theme. This study reveals that although the psyche and doings of Five Dragons and other 'ugly' characters are, as the author once stated, imagined products of the writer Su Tong himself, Rice sheds light onto the ugly aspects of life in China in 1920s-30s. Three aspects of Ugliness are identified and discussed in the paper. Lastly, this paper also suggests some effects of Su Tong’s exploration of Ugliness in Rice, proposing that the portrayal of Ugliness per se is not the ends of Su Tong’s mastery of the aesthetics of the Ugly but rather a means to making his writing transcend from provoking spontaneous moral judgment in readers on the doings of Five Dragons to prompting readers to ponder on philosophical questions such as how humanity can still be possible when an individual confronts the dark sides of a self, a society, and his/her fate.Keywords: aesthetics, Rice, Su Tong, Ugly
Procedia PDF Downloads 168
221 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger
Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez
Abstract:
One of the most common heat exchanger technologies in engineering processes, particularly in the food industry, is the double-pipe heat exchanger (DPHx). To improve heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area and consequently increases the convective surface area. This enhances heat transfer in forced convection by promoting secondary recirculating flows. One of the most widely used tools to analyse heat exchanger efficiency is computational fluid dynamics (CFD), a complementary activity to experimental studies as well as a previous step in the design of heat exchangers. In this study, the behaviour of a double-pipe heat exchanger with two different inner tubes, a smooth tube and a spirally corrugated tube, has been analysed. Experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 were carried out to analyse the influence of the geometrical parameters of spirally corrugated tubes at turbulent flow. To validate the numerical results, an experimental setup was used. To heat up or cool down the cold fluid as it passes through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests were carried out for the smooth tube and for the corrugated one. In all tests, the hot fluid has a constant flow rate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flow rate ranges from 25 l/min (Test 1) to 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes have an external diameter of 24 mm, a 1 mm stainless steel wall and a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized by three non-dimensional parameters: the ratio of corrugation height to diameter (H/D), the ratio of helical pitch to diameter (P/D) and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and experimental results; the smallest differences were found for the fluid temperatures. In all the analysed tests and for both tubes, the temperatures obtained numerically were slightly higher than the experimental results, with differences ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that, for the corrugated tube, the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube. Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation
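Reading the severity index as SI = H²/(P·D), the three non-dimensional corrugation parameters quoted above follow directly from the stated geometry, as in this short sketch.
```python
# The three non-dimensional corrugation parameters of the spirally corrugated
# tube, using the geometry given in the abstract and reading SI = H^2/(P*D).
H = 1.1    # corrugation height, mm
P = 25.0   # helical pitch, mm
D = 24.0   # external diameter of the inner tube, mm

print(f"H/D = {H / D:.4f}")           # ~0.046
print(f"P/D = {P / D:.3f}")           # ~1.042
print(f"SI  = {H**2 / (P * D):.5f}")  # ~0.00202
```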
Procedia PDF Downloads 144
220 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat
Authors: M. Venegas, M. De Vega, N. García-Hernando
Abstract:
Absorption cooling chillers have received growing attention over the past few decades as they allow the use of low-grade heat to produce the cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell and tubes heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconvenient for the generalization of the absorption technology use, limiting its benefits in the contribution to the reduction in CO2 emissions, particularly for the H2O-LiBr solution which can work with low heat temperature sources as provided by solar panels. In the present work a promising new technology is under study, consisting in the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration here proposed consists in one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels between them. The solution entering the absorber is previously subcooled using ambient air. In this way, the need for a cooling tower is avoided. A model of the configuration proposed is developed based on mass and energy balances and some correlations were selected to predict the heat and mass transfer coefficients. The concentration and temperatures along the channels cannot be explicitly determined from the set of equations obtained. For this reason, the equations were implemented in a computer code using Engineering Equation Solver software, EES™. With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered the solution channel length is recommended to be lower than 3 cm. Maximum values of R obtained in this work are higher than the ones found in optimized horizontal falling film absorbers using the same solution. Results obtained also show the variation of R and the chiller efficiency (COP) for different ambient temperatures and desorption temperatures typically obtained using flat plate solar collectors. The configuration proposed of adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology to reduce the size of the absorption chillers, allowing the use of low temperature solar heat and avoiding the need for cooling towers.Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy
Procedia PDF Downloads 283
219 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks
Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed
Abstract:
This study focuses on the heat transfer analysis of magnetohydrodynamic (MHD) squeezing flow of a viscous incompressible fluid between parallel disks. The upper disk moves both upward and downward, while the lower disk remains stationary but permeable. By employing similarity transformations, the complex governing equations are reduced to a system of nonlinear ordinary differential equations describing the flow behaviour. To solve this system, a numerical approach, the Chebyshev collocation method, is utilized; it provides accurate approximations to the nonlinear equations and enables efficient computation of the heat transfer properties. The study investigates the influence of the flow parameters and compares the obtained results with the existing literature, establishing the validity and consistency of the numerical approach. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. The major findings shed light on the influence of the flow parameters on the heat transfer characteristics of the squeezing flow: the squeeze number (S), the suction/injection parameter (A), the Hartmann number (M), the Prandtl number (Pr), the modified Eckert number (Ec) and the dimensionless length (δ), together with magnetic field strength, disk motion amplitude and fluid viscosity, govern the heat transfer rate between the disks. These findings contribute to a comprehensive understanding of the system's behaviour and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of MHD squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach, and their agreement with previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas. Keywords: squeezing flow, magneto-hydrodynamics (MHD), Chebyshev collocation method (CCM), parallel manifolds, finite difference method (FDM)
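The study solves the full nonlinear similarity equations; as a hedged illustration of the collocation machinery only, the sketch below builds a Chebyshev differentiation matrix on Gauss-Lobatto nodes and uses it to solve a simple linear two-point boundary value problem, the same discretisation a Newton iteration would call repeatedly for the nonlinear system.
```python
# Hedged sketch of Chebyshev collocation on a linear stand-in problem,
# u''(x) = exp(x), u(-1) = u(1) = 0, not the study's MHD squeezing-flow system.
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x (Trefethen-style)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal from the negative row sums
    return D, x

N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # second-derivative operator, Dirichlet BCs
u_inner = np.linalg.solve(D2, np.exp(x[1:-1]))
u = np.concatenate([[0.0], u_inner, [0.0]])

u_exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print("max error:", np.max(np.abs(u - u_exact)))   # spectral accuracy expected
```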
Procedia PDF Downloads 73
218 The Influence of Newest Generation Butyrate Combined with Acids, Medium Chain Fatty Acids and Plant Extract on the Performance and Physiological State of Laying Hens
Authors: Vilma Sasyte, Vilma Viliene, Asta Raceviciute-Stupeliene, Agila Dauksiene, Romas Gruzauskas, Virginijus Slausgalvis, Jamal Al-Saifi
Abstract:
The aim of the present study was to investigate the effect of butyrate, acids, medium-chain fatty acids and plant extract mixture on performance, blood and gastrointestinal tract characteristics of laying hens’. For the period of 8 weeks, 24 Hisex Brown laying hens were randomly assigned to 2 dietary treatments: 1) control wheat-corn-soybean meal based diet (Control group), 2) control diet supplemented with the mixture of butyrate, acids, medium chain fatty acids and plant extract (Lumance®) at the level of 1.5 g/kg of feed (Experimental group). Hens were fed with a crumbled diet at 125 g per day. Housing and feeding conditions were the same for all groups and met the requirements of growth for laying hens of Hisex Brown strain. In the blood serum total protein, bilirubin, cholesterol, DTL- and MTL- cholesterol, triglycerides, glucose, GGT, GOT, GPT, alkaline phosphatase, alpha amylase, contents of c-reactive protein, uric acid, and lipase were analyzed. Development of intestines and internal organs (intestinal length, intestinal weight, the weight of glandular and muscular stomach, pancreas, heart, and liver) were determined. The concentration of short chain fatty acids in caecal content was measured using the method of HPLC. The results of the present study showed that 1.5 g/kg supplementation of feed additive affected egg production and feed conversion ratio for the production of 1 kg of egg mass. Dietary supplementation of analyzed additive in the diets increased the concentration of triglycerides, GOT, alkaline phosphatase and decreased uric acid content compared with the control group (P<0.05). No significant difference for others blood indices in comparison to the control was observed. The addition of feed additives in laying hens’ diets increased intestinal weight by 11% and liver weight by 14% compared with the control group (P<0.05). The short chain fatty acids (propionic, acetic and butyric acids) in the caecum of laying hens in experimental groups decreased compared with the control group. The supplementation of the mixture of butyrate, acids, medium-chain fatty acids and plant extract at the level of 1.5 g/kg in the laying hens’ diets had the effect on the performance, some gastrointestinal tract function and blood parameters of laying hens.Keywords: acids, butyrate, laying hens, MCFA, performance, plant extract, psysiological state
Procedia PDF Downloads 295
217 Analyzing Temperature and Pressure Performance of a Natural Air-Circulation System
Authors: Emma S. Bowers
Abstract:
Perturbations in global environments and temperatures have heightened the urgency of creating cost-efficient, energy-neutral building techniques. Structural responses to this thermal crisis have included designs (including those of the building standard PassivHaus) with airtightness, window placement, insulation, solar orientation, shading, and heat-exchange ventilators as potential solutions or interventions. Limitations in the predictability of the circulation of cooled air through the ambient temperature gradients throughout a structure are one of the major obstacles facing these enhanced building methods. A diverse range of air-cooling devices utilizing varying technologies is implemented around the world. Many of them worsen the problem of climate change by consuming energy. Using natural ventilation principles of air buoyancy and density to circulate fresh air throughout a building with no energy input can combat these obstacles. A unique prototype of an energy-neutral air-circulation system was constructed in order to investigate potential temperature and pressure gradients related to the stack effect (updraft of air through a building due to changes in air pressure). The stack effect principle maintains that since warmer air rises, it will leave an area of low pressure that cooler air will rush in to fill. The result is that warmer air will be expelled from the top of the building as cooler air is directed through the bottom, creating an updraft. Stack effect can be amplified by cooling the air near the bottom of a building and heating the air near the top. Using readily available, mostly recyclable or biodegradable materials, an insulated building module was constructed. A tri-part construction model was utilized: a subterranean earth-tube heat exchanger constructed of PVC pipe and placed in a horizontally oriented trench, an insulated, airtight cube aboveground to represent a building, and a solar chimney (painted black to increase heat in the out-going air). Pressure and temperature sensors were placed at four different heights within the module as well as outside, and data was collected for a period of 21 days. The air pressures and temperatures over the course of the experiment were compared and averaged. The promise of this design is that it represents a novel approach which directly addresses the obstacles of air flow and expense, using the physical principle of stack effect to draw a continuous supply of fresh air through the structure, using low-cost and readily available materials (and zero manufactured energy). This design serves as a model for novel approaches to creating temperature controlled buildings using zero energy and opens the door for future research into the effects of increasing module scale, increasing length and depth of the earth tube, and shading the building. (Model can be provided).Keywords: air circulation, PassivHaus, stack effect, thermal gradient
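The driving force behind the updraft described above can be estimated with the usual stack-effect relation, Δp ≈ ρ_out·g·h·(T_in − T_out)/T_in, as in the following sketch; the temperatures and effective stack height are hypothetical, not measurements from the module.
```python
# Hedged sketch of the stack-effect driving pressure for a column of height h.
# Densities follow the ideal-gas law at constant pressure, so rho ~ 1/T.
g = 9.81            # m/s^2
h = 2.5             # effective stack height, m (hypothetical for the module)
T_out = 308.15      # intake / outdoor air temperature, K (35 C, assumed)
T_in = 328.15       # air temperature at the top of the solar chimney, K (55 C, assumed)
rho_out = 101325.0 / (287.05 * T_out)     # dry-air density, kg/m^3

dp = rho_out * g * h * (T_in - T_out) / T_in
print(f"stack pressure ~ {dp:.2f} Pa")    # small, but enough to drive a steady updraft
```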
Procedia PDF Downloads 153
216 Comparative Analysis of Costs and Well Drilling Techniques for Water, Geothermal Energy, Oil and Gas Production
Authors: Thales Maluf, Nazem Nascimento
Abstract:
The development of society relies heavily on the total amount of energy obtained and its consumption. Over the years, there has been an advancement on energy attainment, which is directly related to some natural resources and developing systems. Some of these resources should be highlighted for its remarkable presence in world´s energy grid, such as water, petroleum, and gas, while others deserve attention for representing an alternative to diversify the energy grid, like geothermal sources. Therefore, because all these resources can be extracted from the underground, drilling wells is a mandatory activity in terms of exploration, and it involves a previous geological study and an adequate preparation. It also involves a cleaning process and an extraction process that can be executed by different procedures. For that reason, this research aims the enhancement of exploration processes through a comparative analysis of drilling costs and techniques used to produce them. The analysis itself is based on a bibliographical review based on books, scientific papers, schoolwork and mainly explore drilling methods and technologies, equipment used, well measurements, extraction methods, and production costs. Besides techniques and costs regarding the drilling processes, some properties and general characteristics of these sources are also compared. Preliminary studies show that there are some major differences regarding the exploration processes, mostly because these resources are naturally distinct. Water wells, for instance, have hundreds of meters of length because water is stored close to the surface, while oil, gas, and geothermal production wells can reach thousands of meters, which make them more expensive to be drilled. The drilling methods present some general similarities especially regarding the main mechanism of perforation, but since water is a resource stored closer to the surface than the other ones, there is a wider variety of methods. Water wells can be drilled by rotary mechanisms, percussion mechanisms, rotary-percussion mechanisms, and some other simpler methods. Oil and gas production wells, on the other hand, require rotary or rotary-percussion drilling with a proper structure called drill rig and resistant materials for the drill bits and the other components, mostly because they´re stored in sedimentary basins that can be located thousands of meters under the ground. Geothermal production wells also require rotary or rotary-percussion drilling and require the existence of an injection well and an extraction well. The exploration efficiency also depends on the permeability of the soil, and that is why it has been developed the Enhanced Geothermal Systems (EGS). Throughout this review study, it can be verified that the analysis of the extraction processes of energy resources is essential since these resources are responsible for society development. Furthermore, the comparative analysis of costs and well drilling techniques for water, geothermal energy, oil, and gas production, which is the main goal of this research, can enable the growth of energy generation field through the emergence of ideas that improve the efficiency of energy generation processes.Keywords: drilling, water, oil, Gas, geothermal energy
Procedia PDF Downloads 143
215 Advancing Sustainable Seawater Desalination Technologies: Exploring the Sub-Atmospheric Vapor Pipeline (SAVP) and Energy-Efficient Solution for Urban and Industrial Water Management in Smart, Eco-Friendly, and Green Building Infrastructure
Authors: Mona Shojaei
Abstract:
The Sub-Atmospheric Vapor Pipeline (SAVP) introduces a distinct approach to seawater desalination with promising applications in both land and industrial sectors. SAVP systems exploit the temperature difference between a hot source and a cold environment to facilitate efficient vapor transfer, offering substantial benefits in diverse industrial and field applications. This approach incorporates dynamic boundary conditions, where the temperatures of hot and cold sources vary over time, particularly in natural and industrial environments. Such variations critically influence convection and diffusion processes, introducing challenges that require the refinement of the convection-diffusion equation and the derivation of temperature profiles along the pipeline through advanced engineering mathematics. This study formulates vapor temperature as a function of time and length using two mathematical approaches: Eigen functions and Green’s equation. Combining detailed theoretical modeling, mathematical simulations, and extensive field and industrial tests, this research underscores the SAVP system’s scalability for real-world applications. Results reveal a high degree of accuracy, highlighting SAVP’s significant potential for energy conservation and environmental sustainability. Furthermore, the integration of SAVP technology within smart and green building systems creates new opportunities for sustainable urban water management. By capturing and repurposing vapor for non-potable uses such as irrigation, greywater recycling, and ecosystem support in green spaces, SAVP aligns with the principles of smart and green buildings. Smart buildings emphasize efficient resource management, enhanced system control, and automation for optimal energy and water use, while green buildings prioritize environmental impact reduction and resource conservation. SAVP technology bridges both paradigms, enhancing water self-sufficiency and reducing reliance on external water supplies. The sustainable and energy-efficient properties of SAVP make it a vital component in resilient infrastructure development, addressing urban water scarcity while promoting eco-friendly living. This dual alignment with smart and green building goals positions SAVP as a transformative solution in the pursuit of sustainable urban resource management.Keywords: sub-atmospheric vapor pipeline, seawater desalination, energy efficiency, vapor transfer dynamics, mathematical modeling, sustainable water solutions, smart buildings
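As a hedged sketch of the generic machinery the abstract refers to (the study's own boundary conditions are time-dependent and more involved), a one-dimensional convection-diffusion model of the vapor temperature along the pipeline can be reduced to the heat equation and then expanded in eigenfunctions or written with a Green's function:
```latex
% Hedged generic sketch, not the study's exact formulation.
\begin{align*}
&\text{1-D convection--diffusion along the pipeline:}\quad
  \frac{\partial T}{\partial t} + u\,\frac{\partial T}{\partial x}
  = \alpha\,\frac{\partial^{2} T}{\partial x^{2}}, \qquad 0<x<L. \\[4pt]
&\text{With } T(x,t)=\theta(x,t)\,
  \exp\!\Big(\tfrac{u x}{2\alpha}-\tfrac{u^{2} t}{4\alpha}\Big),
  \text{ the function } \theta \text{ satisfies } \theta_t=\alpha\,\theta_{xx}. \\[4pt]
&\text{Eigenfunction expansion (homogeneous Dirichlet ends):}\quad
  \theta(x,t)=\sum_{n=1}^{\infty} c_n \sin\!\Big(\frac{n\pi x}{L}\Big)\,
  e^{-\alpha (n\pi/L)^{2} t}, \qquad
  c_n=\frac{2}{L}\int_{0}^{L}\theta(\xi,0)\sin\!\Big(\frac{n\pi\xi}{L}\Big)\,d\xi. \\[4pt]
&\text{Equivalent Green's-function form:}\quad
  \theta(x,t)=\int_{0}^{L} G(x,\xi,t)\,\theta(\xi,0)\,d\xi, \qquad
  G(x,\xi,t)=\frac{2}{L}\sum_{n=1}^{\infty}
  \sin\!\Big(\frac{n\pi x}{L}\Big)\sin\!\Big(\frac{n\pi\xi}{L}\Big)
  e^{-\alpha (n\pi/L)^{2} t}.
\end{align*}
```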
Procedia PDF Downloads 10
214 Evaluating the Impact of Nursing Protocols on External Ventricular Drain Infection Control in Adult Neurosurgery Patients with External Ventricular Drainage at Directorate General of Khoula Hospital ICU, Oman: A Cluster-Randomized Trial
Authors: Shamsa Al Sharji, Athar Al Jabri, Haitham Al Dughaishi, Mirfat Al Barwani, Raja Al Rawahi, Raiya Al Rajhi, Shurooq Al Ruqaishi, Thamreen Al Zadjali, Iman Al Humaidi
Abstract:
Background: External Ventricular Drains (EVDs) are critical in managing traumatic brain injuries and hydrocephalus by controlling intracranial pressure, but they carry a high risk of infection. Infection rates vary globally, ranging from 5% to 45%, leading to increased morbidity, prolonged hospital stays, and higher healthcare costs. Nursing protocols play a pivotal role in reducing these infection rates. This study investigates the impact of a structured nursing protocol on EVD-associated infections in adult neurosurgery patients at the Directorate General of Khoula Hospital, Oman, from January to September 2024. Methods: A cluster-randomized trial was conducted across neurosurgery wards and the ICU. The intervention group followed a comprehensive nursing protocol, including strict sterile insertion, standardized dressing changes, infection control training, and regular clinical audits. The control group received standard care. The primary outcome was the incidence of EVD-associated infections, with secondary outcomes including protocol compliance, infection severity, recovery times, length of stay, and 30-day mortality. Statistical analysis was conducted using Chi-square tests, paired t-tests, and logistic regression to assess the differences between groups. Results: The study involved 75 patients, with an overall infection rate of 13.3%. The intervention group showed a reduced infection rate of 8.9% compared to 20% in the control group. Compliance rates for key nursing actions were high, with 89.7% for hand hygiene and 86.2% for wound dressing. The relative risk of infection was 0.44 in the intervention group, reflecting a 55.6% reduction. Logistic regression identified obesity as a significant predictor of EVD infections. Although mortality rates were slightly higher in the intervention group, the number needed to treat (NNT) of 9 suggests that the nursing protocol may improve survival outcomes. Conclusion: This study demonstrates that structured nursing protocols can reduce EVD-related infections and improve patient outcomes in neurosurgery. While the findings are promising, further research with larger sample sizes is needed to confirm these results and optimize infection control strategies in neurosurgical care.Keywords: EVD, CSF, nursing protocol, EVD infection
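The headline effect measures quoted above follow directly from the two infection rates, as the short sketch below shows.
```python
# Reproducing the effect measures reported above from the two infection rates.
p_control = 0.20        # infection rate, standard care
p_protocol = 0.089      # infection rate, structured nursing protocol

rr = p_protocol / p_control          # relative risk
rrr = 1.0 - rr                       # relative risk reduction
arr = p_control - p_protocol         # absolute risk reduction
nnt = 1.0 / arr                      # number needed to treat

print(f"RR = {rr:.2f}, RRR = {rrr:.1%}, ARR = {arr:.1%}, NNT ~ {round(nnt)}")
```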
Procedia PDF Downloads 21
213 Comparative Study on Efficacy and Clinical Outcomes in Minimally Invasive Surgery Transforaminal Interbody Fusion vs Minimally Invasive Surgery Lateral Interbody Fusion
Authors: Sundaresan Soundararajan, George Ezekiel Silvananthan, Chor Ngee Tan
Abstract:
Introduction: Transforaminal Interbody Fusion (TLIF) has been adopted for many decades, whereas XLIF, still in relative infancy, has grown to be accepted as a new Minimally Invasive Surgery (MIS) option. There is a paucity of reports directly comparing lateral approach surgery to other MIS options such as TLIF in the treatment of lumbar degenerative disc disease. Aims/Objectives: The objective of this study was to compare the efficacy and clinical outcomes of Minimally Invasive Transforaminal Interbody Fusion (TLIF) and Minimally Invasive Lateral Interbody Fusion (XLIF) in the treatment of patients with degenerative disc disease of the lumbar spine. Methods: A single-center, retrospective cohort study involving a total of 38 patients undergoing surgical intervention between 2010 and 2013 for degenerative disc disease of the lumbar spine at the single L4/L5 level. 18 patients were treated with MIS TLIF, and 20 patients were treated with XLIF. Results: The XLIF group showed a shorter duration of surgery than the TLIF group (176 mins vs. 208.3 mins, P = 0.03). Length of hospital stay was also significantly shorter in the XLIF group (5.9 days vs. 9 days, P = 0.03). Intraoperative blood loss favoured XLIF, as 85% of patients had blood loss of less than 100 cc compared to 58% in the TLIF group (P = 0.03). Radiologically, disc height was improved significantly more post-operatively in the XLIF group than in the TLIF group (0.56 mm vs. 0.39 mm, P = 0.01). The foraminal height increment was also higher in the XLIF group (0.58 mm vs. 0.45 mm, P = 0.06). Clinically, back pain and leg pain improved in 85% of patients in the XLIF group and 78% in the TLIF group. Post-operative hip flexion weakness was more common in the XLIF group (40%) than in the TLIF group (0%); however, this weakness resolved within 6 months post-operatively. There was one case of dural tear and one of surgical site infection in the TLIF group, and none in the XLIF group. Visual Analog Scale (VAS) scores 6 months post-operatively showed comparable reductions in both groups. The TLIF group showed Oswestry Disability Index (ODI) improvement in 67% of its patients, while the XLIF group showed improvement in 70%. Conclusions: Lateral approach surgery shows clinical outcomes in the resolution of back pain and radiculopathy comparable to conventional MIS techniques such as TLIF. With a significantly shorter duration of surgery, minimal blood loss and a shorter hospital stay, XLIF seems to be a reasonable MIS option compared to other MIS techniques for treating degenerative lumbar disc disease. Keywords: extreme lateral interbody fusion, lateral approach, minimally invasive, XLIF
Procedia PDF Downloads 218
212 Genetic Variability and Heritability Among Indigenous Pearl Millet (Pennisetum Glaucum L. R. BR.) in Striga Infested Fields of Sudan Savanna, Nigeria
Authors: Adamu Usman, Grace Stanley Balami
Abstract:
Pearl millet (Pennisetum glaucum L. R. Br.) is a cereal cultivated in arid and semi-arid areas of the world and supports more than 100 million people. The parasitic weed Striga hermonthica (Del.) Benth. is a major constraint to its production, with estimated yield losses of 10-95% depending on variety, ecology and cultural practices. The potential for selection of traits for grain yield in pearl millet has been reported, and it depends on genotypic variability and heritability among landraces; variability and heritability among cultivars could therefore offer opportunities for improvement. The study was conducted to determine the genetic variability among cultivars and to estimate the broad-sense heritability of grain yield and related traits. F1 breeding populations were generated from 9 parental cultivars, viz. Ex-Gubio, Ex-Monguno and Ex-Baga as males and PEO 5984, Super-SOSAT, SOSAT-C88, Ex-Borno and LCIC9702 as females, through Line × Tester mating during the 2017 dry season at Lushi Irrigation Station, Bauchi Metropolitan, Bauchi State, Nigeria. The F1 populations and the parents were evaluated during the 2018 cropping season at Bauchi and Maiduguri. Data collected were subjected to analysis of variance. The results showed significant differences among cultivars and among traits, indicating variability. Number of plants at emergence, days to 50% flowering, days to 100% flowering, plant height, panicle length, number of plants at harvest, Striga count at 90 days after sowing, panicle weight and grain yield were significantly different. Significant variability offers an opportunity for improvement, as superior individuals can be isolated. Genotypic variance estimates of the traits were largely greater than the environmental variances, except for plant height and 1000-seed weight; the environmental variances were low and in some cases negligible. The phenotypic variances of all traits were higher than the genotypic variances, and similarly the phenotypic coefficient of variation (PCV) was higher than the genotypic coefficient of variation (GCV). High heritability was found for days to 50% flowering (90.27%), Striga count at 90 days after sowing (90.07%), number of plants at harvest (87.97%), days to 100% flowering (83.89%), number of plants at emergence (82.19%) and plant height (73.18%). The high heritability estimates could be due to the presence of additive gene effects. The results reveal wide variability among genotypes and traits. Traits with high heritability could respond readily to selection, and the high GCV, PCV and heritability estimates indicate that selection for these traits is possible and could be effective. Keywords: variability, heritability, phenotypic, genotypic, striga
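The heritability and coefficient-of-variation estimates referred to above are obtained from the ANOVA variance components in the usual way; the sketch below shows the arithmetic with hypothetical components (the study's own values are not reproduced here).
```python
# Sketch of broad-sense heritability, GCV and PCV from variance components.
# The variance components and trait mean below are hypothetical.
import math

V_g = 38.0          # genotypic variance (hypothetical)
V_e = 6.5           # environmental (error) variance (hypothetical)
trait_mean = 52.0   # trait mean, e.g. days to 50% flowering (hypothetical)

V_p = V_g + V_e                               # phenotypic variance
H2 = V_g / V_p                                # broad-sense heritability
gcv = 100.0 * math.sqrt(V_g) / trait_mean     # genotypic coefficient of variation, %
pcv = 100.0 * math.sqrt(V_p) / trait_mean     # phenotypic coefficient of variation, %

print(f"H^2 = {H2:.2%}, GCV = {gcv:.1f}%, PCV = {pcv:.1f}%")   # PCV > GCV always
```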
Procedia PDF Downloads 54
211 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multi-slice and helical scanners, concerns have arisen about the current dose measurement protocols and instrumentation in computed tomography (CT). The current methodology of dose evaluation, based on measuring the integral of a single-slice dose profile with a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done over any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions, known as the MOSkin, is developed by the Centre for Medical Radiation Physics at the University of Wollongong and measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or of internal point doses when placed within a phantom. The aim of this research was therefore to characterize the response of the MOSkin dosimeter in X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10. Finally, the MOSkin was used for the accumulated dose evaluation of scans on a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results showed that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin was 9.208, 7.691 and 6.723 mV/cGy for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures. Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
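A hedged sketch of the accumulated-dose evaluation is given below: readings from the translated MOSkin are converted to dose with the measured sensitivity and the profile is integrated over the scan axis; the profile values are hypothetical, and the division by the 9 mm nominal collimation follows the usual CTDI-style line-integral definition (an assumption, since the abstract does not spell out its normalisation).
```python
# Hedged sketch: accumulated dose from a translated point-dosimeter profile.
# Profile readings and step size are hypothetical placeholders.
import numpy as np

sensitivity_mV_per_cGy = 9.208                        # RQT 8 value reported above
z_mm = np.arange(-50.0, 50.0 + 1.0, 1.0)              # positions along the scan axis, mm
signal_mV = 30.0 * np.exp(-0.5 * (z_mm / 8.0) ** 2)   # hypothetical MOSkin readings

dose_cGy = signal_mV / sensitivity_mV_per_cGy         # point doses along the profile
integral_cGy_mm = np.trapz(dose_cGy, z_mm)            # line integral of the dose profile
accumulated = integral_cGy_mm / 9.0                   # normalised by nominal collimation (assumed)
print(f"profile integral = {integral_cGy_mm:.1f} cGy*mm -> accumulated dose ~ {accumulated:.2f} cGy")
```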
Procedia PDF Downloads 309
210 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by Reactive Ion Beam Sputter Deposition (IBSD) Method
Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao
Abstract:
In current semiconductor and nanotechnology research, quality material synthesis, proper characterization and production are major challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high-quality Cu2O nanorods for improving the efficiency of thin film solar cells and for other potential applications. Well-structured cuprous oxide (Cu2O) nanorods were successfully fabricated by the IBSD method, in which the Cu2O samples were grown on silicon substrates at a substrate temperature of 400°C in an IBSD chamber at a pressure of 4.5 x 10-5 torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited by the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single-phase, pure, polycrystalline Cu2O NRs with a diameter of ~500 nm and a length of ~4.5 µm were grown. On increasing the oxygen flow rate, a pure single-phase polycrystalline Cu2O thin film (TF) was obtained at an Ar:O2 ratio of 6:1. Field emission scanning electron microscope (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single-phase Cu2O in both samples. The differences between the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated, and the results show differences in intensities, in the number of bands and in band positions. The Raman characterization shows that the Cu2O NRs sample has more pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which shows only one second-overtone Raman signal at 217 cm-1. Temperature-dependent photoluminescence (PL) spectra show that the defect luminescence band centered at 720 nm (1.72 eV) is dominant for the Cu2O NRs, whereas the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples arises from the change in oxygen flow rate in the process window of the sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties for the tunable fabrication of nano/micro-structured Cu2O samples, aiming at improving the efficiency of thin film solar cells in addition to other potential applications. Finally, the novel morphologies and the excellent structural and optical properties exhibited show that the grown Cu2O NRs sample has sufficient quality to be used in further research on nano/micro-structured semiconductor materials. Keywords: defect levels, nanorods, photoluminescence, Raman modes
Procedia PDF Downloads 240
209 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space
Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari
Abstract:
Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (APIs). Typically, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and bad peak shape of the acetic acid peaks were observed, caused by the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and by the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as the diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS rely on a derivatization technique to protect the acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process and to assure the fitness of the procedure; therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-WAXetr column manufactured by Agilent, with an internal diameter of 530 µm, a film thickness of 2.0 µm and a length of 30 m. A constant flow of 6.0 mL/min of helium, in constant makeup mode, was selected for the carrier gas. The present method is simple, rapid and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol and 100 ppm to 400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for the testing of residual solvents in amino acid drug substances. Keywords: amino acid, head space, gas chromatography, total error
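A much-simplified sketch of the accuracy-profile check in the total error approach is shown below: at each concentration level the relative bias and precision are combined into a two-sided interval and compared with the ±30%/±40% acceptance limits. This is a plain bias ± 2·RSD interval, not the formal β-expectation tolerance interval, and the recovery data are hypothetical.
```python
# Simplified total-error / accuracy-profile check (not the exact
# beta-expectation tolerance interval used formally in the validation).
import statistics

def accuracy_profile(recoveries_pct, limit_pct, k=2.0):
    bias = statistics.mean(recoveries_pct) - 100.0     # relative bias, %
    rsd = statistics.stdev(recoveries_pct)             # precision estimate, %
    low, high = bias - k * rsd, bias + k * rsd         # simplified ~95% interval
    return bias, rsd, (low, high), abs(low) <= limit_pct and abs(high) <= limit_pct

recoveries = [96.2, 101.5, 98.7, 103.0, 97.4, 99.8]    # % recovery at one level (hypothetical)
bias, rsd, interval, ok = accuracy_profile(recoveries, limit_pct=30.0)  # 40.0 at the LOQ level
print(f"bias = {bias:+.1f}%, RSD = {rsd:.1f}%, interval = ({interval[0]:.1f}%, {interval[1]:.1f}%), pass = {ok}")
```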
Procedia PDF Downloads 147208 Solar Electric Propulsion: The Future of Deep Space Exploration
Authors: Abhishek Sharma, Arnab Banerjee
Abstract:
The research is intended to study the solar electric propulsion (SEP) technology for planetary missions. The main benefits of using solar electric propulsion for such missions are shorter flight times, more frequent target accessibility and the use of a smaller launch vehicle than that required by a comparable chemical propulsion mission. Energized by electric power from on-board solar arrays, the electrically propelled system uses 10 times less propellant than conventional chemical propulsion system, yet the reduced fuel mass can provide vigorous power which is capable of propelling robotic and crewed missions beyond the Lower Earth Orbit (LEO). The various thrusters used in the SEP are gridded ion thrusters and the Hall Effect thrusters. The research is solely aimed to study the ion thrusters and investigate the complications related to it and what can be done to overcome the glitches. The ion thrusters are used because they are found to have a total lower propellant requirement and have substantially longer time. In the ion thrusters, the anode pushes or directs the incoming electrons from the cathode. But the anode is not maintained at a very high potential which leads to divergence. Divergence leads to the charges interacting against the surface of the thruster. Just as the charges ionize the xenon gases, they are capable of ionizing the surfaces and over time destroy the surface and hence contaminate it. Hence the lifetime of thruster gets limited. So a solution to this problem is using substances which are not easy to ionize as the surface material. Another approach can be to increase the potential of anode so that the electrons don’t deviate much or reduce the length of thruster such that the positive anode is more effective. The aim is to work on these aspects as to how constriction of the deviation of charges can be done by keeping the input power constant and hence increase the lifetime of the thruster. Predominantly ring cusp magnets are used in the ion thrusters. However, the study is also intended to observe the effect of using solenoid for producing micro-solenoidal magnetic field apart from using the ring cusp magnetic field which are used in the discharge chamber for prevention of interaction of electrons with the ionization walls. Another foremost area of interest is what are the ways by which power can be provided to the Solar Electric Propulsion Vehicle for lowering and boosting the orbit of the spacecraft and also provide substantial amount of power to the solenoid for producing stronger magnetic fields. This can be successfully achieved by using the concept of Electro-dynamic tether which will serve as a power source for powering both the vehicle and the solenoids in the ion thruster and hence eliminating the need for carrying extra propellant on the spacecraft which will reduce the weight and hence reduce the cost of space propulsion.Keywords: electro-dynamic tether, ion thruster, lifetime of thruster, solar electric propulsion vehicle
Procedia PDF Downloads 210207 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms
Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier
Abstract:
Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to the shoulder-surfing effect. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated the gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: Creation mode and Replication mode. In creation mode (Session 1), users were asked to create six different passwords and reenter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different duration timers, such as 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. There were 74, 57, 50, and 44 users participated in Session 1, Session 2, Session 3, and Session 4 respectfully. In this study, the machine learning algorithms have been applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare the performance in user authentication: namely, Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with Gaussian Radial Basis Kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next. It is difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability
Procedia PDF Downloads 102206 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in the intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of the collected data is increasing quickly, but with the increase of the data, the computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing the redundancy in the rough sets can be achieved with the reduct. A lot of algorithms of generating the reduct were developed, but most of them are only software implementations, therefore have many limitations. Microprocessor uses the fixed word length, consumes a lot of time for either fetching as well as processing of the instruction and data; consequently, the software based implementations are relatively slow. Hardware systems don’t have these limitations and can process the data faster than a software. Reduct is the subset of the decision attributes that provides the discernibility of the objects. For the given decision table there can be more than one reduct. Core is the set of all indispensable condition attributes. None of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct consists of all the attributes from the core. In this paper, the hardware implementation of the two-stage greedy algorithm to find the one reduct is presented. The decision table is used as an input. Output of the algorithm is the superreduct which is the reduct with some additional removable attributes. First stage of the algorithm is calculating the core using the discernibility matrix. Second stage is generating the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. Described above algorithm has two disadvantages: i) generating the superreduct instead of reduct, ii) additional first stage may be unnecessary if the core is empty. But for the systems focused on the fast computation of the reduct the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block, and thus add respectively little time to the whole process. Algorithm presented in this paper was implemented in Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by the comparators connected to the block called 'singleton detector', which detects if the input word contains only single 'one'. Calculating the number of occurrences of the attribute is performed in the combinational block made up of the cascade of the adders. The superreduct generation process is iterative and thus needs the sequential circuit for controlling the calculations. For the research purpose, the algorithm was also implemented in C language and run on a PC. The times of execution of the reduct calculation in a hardware and software were considered. Results show increase in the speed of data processing.Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
Procedia PDF Downloads 218205 Finite Element Modeling and Analysis of Reinforced Concrete Coupled Shear Walls Strengthened with Externally Bonded Carbon Fiber Reinforced Polymer Composites
Authors: Sara Honarparast, Omar Chaallal
Abstract:
Reinforced concrete (RC) coupled shear walls (CSWs) are very effective structural systems in resisting lateral loads due to winds and earthquakes and are particularly used in medium- to high-rise RC buildings. However, most of existing old RC structures were designed for gravity loads or lateral loads well below the loads specified in the current modern seismic international codes. These structures may behave in non-ductile manner due to poorly designed joints, insufficient shear reinforcement and inadequate anchorage length of the reinforcing bars. This has been the main impetus to investigate an appropriate strengthening method to address or attenuate the deficiencies of these structures. The objective of this paper is to twofold: (i) evaluate the seismic performance of existing reinforced concrete coupled shear walls under reversed cyclic loading; and (ii) investigate the seismic performance of RC CSWs strengthened with externally bonded (EB) carbon fiber reinforced polymer (CFRP) sheets. To this end, two CSWs were considered as follows: (a) the first one is representative of old CSWs and therefore was designed according to the 1941 National Building Code of Canada (NBCC, 1941) with conventionally reinforced coupling beams; and (b) the second one, representative of new CSWs, was designed according to modern NBCC 2015 and CSA/A23.3 2014 requirements with diagonally reinforced coupling beam. Both CSWs were simulated using ANSYS software. Nonlinear behavior of concrete is modeled using multilinear isotropic hardening through a multilinear stress strain curve. The elastic-perfectly plastic stress-strain curve is used to simulate the steel material. Bond stress–slip is modeled between concrete and steel reinforcement in conventional coupling beam rather than considering perfect bond to better represent the slip of the steel bars observed in the coupling beams of these CSWs. The old-designed CSW was strengthened using CFRP sheets bonded to the concrete substrate and the interface was modeled using an adhesive layer. The behavior of CFRP material is considered linear elastic up to failure. After simulating the loading and boundary conditions, the specimens are analyzed under reversed cyclic loading. The comparison of results obtained for the two unstrengthened CSWs and the one retrofitted with EB CFRP sheets reveals that the strengthening method improves the seismic performance in terms of strength, ductility, and energy dissipation capacity.Keywords: carbon fiber reinforced polymer, coupled shear wall, coupling beam, finite element analysis, modern code, old code, strengthening
Procedia PDF Downloads 197204 Life at the Fence: Lived Experiences of Navigating Cultural and Social Complexities among South Sudanese Refugees in Australia
Authors: Sabitra Kaphle, Rebecca Fanany, Jenny Kelly
Abstract:
Australia welcomes significant numbers of humanitarian arrivals every year with the commitment to provide equal opportunities and the resources required for integration into the new society. Over the last two decades, more than 24,000 South Sudanese people have come to call Australia home. Most of these refugees experienced several challenges whilesettlinginto the new social structures and service systems in Australia. The aim of the research is to explore the factors influencing social and cultural integration of South Sudanese refugees who have settled in Australia. Methodology: This studyused a phenomenological approach based on in-depth interviews designed to elicit the lived experiences of South Sudanese refugees settled in Australia. It applied the principles of narrative ethnography, allowing participants an opportunity to speak about themselves and their experiences of social and cultural integration-using their own words. Twenty-six participants were recruited to the study. Participants were long-term residents (over 10 years of settlement experience)who self-identified as refugees from South Sudan. Participants were given an opportunity to speak in the language of their choice, and interviews were conducted by a bilingual interviewer in their preferred language, time, and location. Interviews were recorded and transcribed verbatim and translated to Englishfor thematic analysis. Findings: Participants’ experiences portray the complexities of integrating into a new society due tothe daily challenges that South Sudaneserefugees face. Themes emerged from narrativesindicated that South Sudanese refugees express a high level of association with a Sudanese identity while demonstrating a significant level of integration into the Australian society. Despite this identity dilemma, these refugees show a high level of consensus about the experiencesof living in Australia that is closely associated with a group identity. In the process of maintaining identity andsocial affiliation, there are significant inter-generational cultural conflicts that participants experience in adapting to Australian society. It has been elucidated that identityconflict often emerges centeringon what constitutes authentic cultural practice as well as who is entitled to claim to be a member of the South Sudanese culture. Conclusions: Results of this study suggest that the cultural identity and social affiliations of South Sudanese refugees settling into Australian society are complex and multifaceted. While there are positive elements of theirintegration into the new society, inter-generational conflictsand identity confusion require further investigation to understand the context that will assist refugees to integrate more successfully into their new society. Given the length of stay of these refugees in Australia, government and settlement agencies may benefit from developing appropriate resources and process that are adaptive to the social and cultural context in which newly arrived refugees will live.Keywords: cultural integration, inter-generational conflict, lived experiences, refugees, South sudanese
Procedia PDF Downloads 114203 Debriefing Practices and Models: An Integrative Review
Authors: Judson P. LaGrone
Abstract:
Simulation-based education in curricula was once a luxurious component of nursing programs but now serves as a vital element of an individual’s learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed to allow the instructor(s) or trained professional(s) to act as a debriefer to guide a reflection with a purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience to allow the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systemic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was useful to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature with similarities of effectiveness for participants in clinical simulation-based pedagogy. Themes identified included (a) importance of debriefing in simulation-based pedagogy, (b) environment for which debriefing takes place is an important consideration, (c) individuals who should conduct the debrief, (d) length of debrief, and (e) methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models differed from self-debriefing, facilitator-led debriefing, video-assisted debriefing, rapid cycle deliberate practice, and reflective debriefing. A reoccurring finding was centered around the emphasis of continued research for systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency of debriefing models among nursing curriculum with an increasing rate of ill-prepared faculty to facilitate the debriefing phase of the simulation.Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education
Procedia PDF Downloads 141202 Validation of Global Ratings in Clinical Performance Assessment
Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek
Abstract:
This study aimed to determine the reliability of clinical performance assessments, having been emphasized by ability-based education, and professors overall assessment methods. We addressed the following problems: First, we try to find out whether there is a difference in what we consider to be the main variables affecting the clinical performance test according to the evaluator’s working period and the number of evaluation experience. Second, we examined the relationship among the global rating score (G), analytic global rating score (Gc), and the sum of the analytical checklists (C). What are the main factors affecting clinical performance assessments in relation to the numbers of times the evaluator had administered evaluations and the length of their working period service? What is the relationship between overall assessment score and analytic checklist score? How does analytic global rating with 6 components in OSCE and 4 components in sub-domains (Gc) CPX: aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude overall assessment score and task-specific analytic checklist score sum (C) affect the professor’s overall global rating assessment score (G)? We studied 75 professors who attended a 2016 Bugyeoung Consortium clinical skills performances test evaluating third and fourth year medical students at the Pusan National University Medical school in South Korea (39 prof. in OSCE, 36 prof. in CPX; all consented to participate in our study). Each evaluator used 3 forms; a task-specific analytic checklist, subsequent analytic global rating scale with sub-6 domains, and overall global scale. After the evaluation, the professors responded to the questionnaire on the important factors of clinical performance assessment. The data were analyzed by frequency analysis, correlation analysis, and hierarchical regression analysis using SPSS 21.0. Their understanding of overall assessment was analyzed by dividing the subjects into groups based on experiences. As a result, they considered ‘precision’ most important in overall OSCE assessment, and ‘precise accuracy physical examination’, ‘systemic approaches to taking patient history’, and ‘diagnostic skill capability’ in overall CPX assessment. For OSCE, there was no clear difference of opinion about the main factors, but there was for CPX. Analytic global rating scale score, overall rating scale score, and analytic checklist score had meaningful mutual correlations. According to the regression analysis results, task-specific checklist score sum had the greatest effect on overall global rating. professors regarded task-specific analytic checklist total score sum as best reflecting overall OSCE test score, followed by aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude on a subsequent analytic global rating scale. For CPX, subsequent analytic global rating scale score, overall global rating scale score, and task-specific checklist score had meaningful mutual correlations. These findings support explanations for validity of professors’ global rating in clinical performance assessment.Keywords: global rating, clinical performance assessment, medical education, analytic checklist
Procedia PDF Downloads 233201 Residual Plastic Deformation Capacity in Reinforced Concrete Beams Subjected to Drop Weight Impact Test
Authors: Morgan Johansson, Joosef Leppanen, Mathias Flansbjer, Fabio Lozano, Josef Makdesi
Abstract:
Concrete is commonly used for protective structures and how impact loading affects different types of concrete structures is an important issue. Often the knowledge gained from static loading is also used in the design of impulse loaded structures. A large plastic deformation capacity is essential to obtain a large energy absorption in an impulse loaded structure. However, the structural response of an impact loaded concrete beam may be very different compared to a statically loaded beam. Consequently, the plastic deformation capacity and failure modes of the concrete structure can be different when subjected to dynamic loads; and hence it is not sure that the observations obtained from static loading are also valid for dynamic loading. The aim of this paper is to investigate the residual plastic deformation capacity in reinforced concrete beams subjected to drop weight impact tests. A test-series consisting of 18 simply supported beams (0.1 x 0.1 x 1.18 m, ρs = 0.7%) with a span length of 1.0 m and subjected to a point load in the beam mid-point, was carried out. 2x6 beams were first subjected to drop weight impact tests, and thereafter statically tested until failure. The drop in weight had a mass of 10 kg and was dropped from 2.5 m or 5.0 m. During the impact tests, a high-speed camera was used with 5 000 fps and for the static tests, a camera was used with 0.5 fps. Digital image correlation (DIC) analyses were conducted and from these the velocities of the beam and the drop weight, as well as the deformations and crack propagation of the beam, were effectively measured. Additionally, for the static tests, the applied load and midspan deformation were measured. The load-deformation relations for the beams subjected to an impact load were compared with 6 reference beams that were subjected to static loading only. The crack pattern obtained were compared using DIC, and it was concluded that the resulting crack formation depended much on the test method used. For the static tests, only bending cracks occurred. For the impact loaded beams, though, distinctive diagonal shear cracks also formed below the zone of impact and less wide shear cracks were observed in the region half-way to the support. Furthermore, due to wave propagation effects, bending cracks developed in the upper part of the beam during initial loading. The results showed that the plastic deformation capacity increased for beams subjected to drop weight impact tests from a high drop height of 5.0 m. For beams subjected to an impact from a low drop height of 2.5 m, though, the plastic deformation capacity was in the same order of magnitude as for the statically loaded reference beams. The beams tested were designed to fail due to bending when subjected to a static load. However, for the impact tested beams, one beam exhibited a shear failure at a significantly reduced load level when it was tested statically; indicating that there might be a risk of reduced residual load capacity for impact loaded structures.Keywords: digital image correlation (DIC), drop weight impact, experiments, plastic deformation capacity, reinforced concrete
Procedia PDF Downloads 142200 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends
Authors: Reena Aggarwal, Neha Kestwal
Abstract:
The Himalayan Nettle (Girardinia diversifolia) has been used for centuries as fibre and food source by Himalayan communities. Himalayan Nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board based in Uttarakhand, India is working extensively with the nettle fibre to explore the potential of nettle for textile production in the region. The fiber is a potential resource for rural enterprise development for some high altitude pockets of the state and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan Nettle is an unconventional natural fiber with functional characteristics of shrink resistance, degree of pathogen and fire resistance and can blend nicely with other fibres. Most importantly, they generate mainly organic wastes and leave residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also be used as a source of cellulose feedstock for regenerated cellulosic products. Being naturally bio- degradable, the fibre can be composted if required. Though a lot of research activities and training are directed towards fibre extraction and processing techniques in different craft clusters villagers of different clusters of Uttarkashi, Chamoli and Bageshwar of Uttarakhand like retting and Degumming process, very little is been done to analyse the crucial properties of nettle fiber like shrinkage and wash fastness. These properties are very crucial to obtain desired quality of fibre for further processing of yarn making and weaving and in developing these fibers into fine saleable products. This research therefore is focused towards various on-field experiments which were focused on shrinkage properties conducted on cotton, nettle and cotton/nettle blended yarn samples. The objective of the study was to analyze the scope of the blended fiber for developing into wearable fabrics. For the study, after conducting the initial fiber length and fineness testing, cotton and nettle fibers were mixed in 60:40 ratio and five varieties of yarns were spun in open end spinning mill having yarn count of 3s, 5s, 6s, 7s and 8s. Samples of 100% Nettle 100% cotton fibers in 8s count were also developed for the study. All the six varieties of yarns were tested with shrinkage test and results were critically analyzed as per ASTM method D2259. It was observed that 100% Nettle has a least shrinkage of 3.36% while pure cotton has shrinkage approx. 13.6%. Yarns made of 100% Cotton exhibits four times more shrinkage than 100% Nettle. The results also show that cotton and Nettle blended yarn exhibit lower shrinkage than 100% cotton yarn. It was thus concluded that as the ratio of nettle increases in the samples, the shrinkage decreases in the samples. These results are very crucial for Uttarakhand people who want to commercially exploit the abundant nettle fiber for generating sustainable employment.Keywords: Himalayan nettle, sustainable, shrinkage, blending
Procedia PDF Downloads 240199 Coupling Strategy for Multi-Scale Simulations in Micro-Channels
Authors: Dahia Chibouti, Benoit Trouette, Eric Chenier
Abstract:
With the development of micro-electro-mechanical systems (MEMS), understanding fluid flow and heat transfer at the micrometer scale is crucial. In the case where the flow characteristic length scale is narrowed to around ten times the mean free path of gas molecules, the classical fluid mechanics and energy equations are still valid in the bulk flow, but particular attention must be paid to the gas/solid interface boundary conditions. Indeed, in the vicinity of the wall, on a thickness of about the mean free path of the molecules, called the Knudsen layer, the gas molecules are no longer in local thermodynamic equilibrium. Therefore, macroscopic models based on the continuity of velocity, temperature and heat flux jump conditions must be applied at the fluid/solid interface to take this non-equilibrium into account. Although these macroscopic models are widely used, the assumptions on which they depend are not necessarily verified in realistic cases. In order to get rid of these assumptions, simulations at the molecular scale are carried out to study how molecule interaction with walls can change the fluid flow and heat transfers at the vicinity of the walls. The developed approach is based on a kind of heterogeneous multi-scale method: micro-domains overlap the continuous domain, and coupling is carried out through exchanges of information between both the molecular and the continuum approaches. In practice, molecular dynamics describes the fluid flow and heat transfers in micro-domains while the Navier-Stokes and energy equations are used at larger scales. In this framework, two kinds of micro-simulation are performed: i) in bulk, to obtain the thermo-physical properties (viscosity, conductivity, ...) as well as the equation of state of the fluid, ii) close to the walls to identify the relationships between the slip velocity and the shear stress or between the temperature jump and the normal temperature gradient. The coupling strategy relies on an implicit formulation of the quantities extracted from micro-domains. Indeed, using the results of the molecular simulations, a Bayesian regression is performed in order to build continuous laws giving both the behavior of the physical properties, the equation of state and the slip relationships, as well as their uncertainties. These latter allow to set up a learning strategy to optimize the number of micro simulations. In the present contribution, the first results regarding this coupling associated with the learning strategy are illustrated through parametric studies of convergence criteria, choice of basis functions and noise of input data. Anisothermic flows of a Lennard Jones fluid in micro-channels are finally presented.Keywords: multi-scale, microfluidics, micro-channel, hybrid approach, coupling
Procedia PDF Downloads 164198 Hedonic Pricing Model of Parboiled Rice
Authors: Roengchai Tansuchat, Wassanai Wattanutchariya, Aree Wiboonpongse
Abstract:
Parboiled rice is one of the most important food grains and classified in cereal and cereal product. In 2015, parboiled rice was traded more than 14.34 % of total rice trade. The major parboiled rice export countries are Thailand and India, while many countries in Africa and the Middle East such as Nigeria, South Africa, United Arab Emirates, and Saudi Arabia, are parboiled rice import countries. In the global rice market, parboiled rice pricing differs from white rice pricing because parboiled rice is semi-processing product, (soaking, steaming and drying) which affects to their color and texture. Therefore, parboiled rice export pricing does not depend only on the trade volume, length of grain, and percentage of broken rice or purity but also depend on their rice seed attributes such as color, whiteness, consistency of color and whiteness, and their texture. In addition, the parboiled rice price may depend on the country of origin, and other attributes, such as certification mark, label, packaging, and sales locations. The objectives of this paper are to study the attributes of parboiled rice sold in different countries and to evaluate the relationship between parboiled rice price in different countries and their attributes by using hedonic pricing model. These results are useful for product development, and marketing strategies development. The 141 samples of parboiled rice were collected from 5 major parboiled rice consumption countries, namely Nigeria, South Africa, Saudi Arabia, United Arab Emirates and Spain. The physicochemical properties and optical properties, namely size and shape of seed, colour (L*, a*, and b*), parboiled rice texture (hardness, adhesiveness, cohesiveness, springiness, gumminess, and chewiness), nutrition (moisture, protein, carbohydrate, fat, and ash), amylose, package, country of origin, label are considered as explanatory variables. The results from parboiled rice analysis revealed that most of samples are classified as long grain and slender. The highest average whiteness value is the parboiled rice sold in South Africa. The amylose value analysis shows that most of parboiled rice is non-glutinous rice, classified in intermediate amylose content range, and the maximum value was found in United Arab Emirates. The hedonic pricing model showed that size and shape are the key factors to determine parboiled rice price statistically significant. In parts of colour, brightness value (L*) and red-green value (a*) are statistically significant, but the yellow-blue value (b*) is insignificant. In addition, the texture attributes that significantly affect to the parboiled rice price are hardness, adhesiveness, cohesiveness, and gumminess. The findings could help both parboiled rice miller, exporter and retailers formulate better production and marketing strategies by focusing on these attributes.Keywords: hedonic pricing model, optical properties, parboiled rice, physicochemical properties
Procedia PDF Downloads 330