Search results for: multi-phase induction machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3644

554 Potential Use of Leaching Gravel as a Raw Material in the Preparation of Geopolymeric Material as an Alternative to Conventional Cement Materials

Authors: Arturo Reyes Roman, Daniza Castillo Godoy, Francisca Balarezo Olivares, Francisco Arriagada Castro, Miguel Maulen Tapia

Abstract:

Mining waste–based geopolymers are a sustainable alternative to conventional cement materials because they valorize mining wastes and yield new construction materials with reduced environmental footprints. The objective of this study was to determine the potential of leaching gravel (LG) from hydrometallurgical copper processing to be used as a raw material in the manufacture of geopolymer. NaOH, Na2SiO3 (modulus 1.5), and LG were mixed, wetted with an appropriate amount of tap water, and stirred until a homogeneous paste was obtained. A liquid/solid ratio of 0.3 was used for preparing the mixtures. The paste was then cast in 50 mm cubic moulds for the determination of compressive strength. The samples were left to dry for 24 h at room temperature, then unmoulded and analysed after 28 days of curing. The compressive test was conducted in a compression machine (15/300 kN). According to the laser diffraction spectroscopy (LDS) analysis, 90% of LG particles were below 500 μm. The X-ray diffraction (XRD) analysis identified crystalline phases of albite (30%), quartz (16%), anorthite (16%), and phillipsite (14%). The X-ray fluorescence (XRF) determinations showed mainly 55% SiO2, 13% Al2O3, and 9% CaO. ICP-OES concentrations of Fe, Ca, Cu, Al, As, V, Zn, Mo, and Ni were 49.545; 24.735; 6.172; 14.152; 239.5; 129.6; 41.1; 15.1; and 13.1 mg kg-1, respectively. The geopolymer samples showed compressive strengths ranging between 2 and 10 MPa. In comparison with the raw material composition, the amorphous fraction of the geopolymer was 35%, whereas the crystalline percentage of the main mineral phases decreased. Further studies are needed to find the optimal combinations of materials to produce a more resistant and environmentally safe geopolymer. In particular, compressive strengths higher than 15 MPa are necessary for use as construction units such as bricks.

Keywords: mining waste, geopolymer, construction material, alkaline activation

Procedia PDF Downloads 94
553 Leveraging the HDAC Inhibitory Pharmacophore to Construct Deoxyvasicinone Based Tractable Anti-Lung Cancer Agent and pH-Responsive Nanocarrier

Authors: Ram Sharma, Esha Chatterjee, Santosh Kumar Guru, Kunal Nepali

Abstract:

A tractable anti-lung cancer agent was identified via the installation of a ring C-expanded synthetic analogue of the alkaloid vasicinone [7,8,9,10-tetrahydroazepino[2,1-b]quinazolin-12(6H)-one (TAZQ)] as the surface recognition part in the HDAC inhibitory three-component model. Notably, TAZQ was deemed suitable for accommodation in the HDAC inhibitory pharmacophore on the basis of a fragment recruitment process conducted by our laboratory. TAZQ was pinpointed through the fragment screening program as a synthetically flexible fragment endowed with moderate cell growth inhibitory activity against lung cancer cell lines, and it was anticipated that using this fragment to generate HDAC inhibitors bearing a hydroxamic acid functionality (zinc-binding motif) would boost the antitumor efficacy of TAZQ. Consistent with our aim of applying epigenetic targets to the treatment of lung cancer, a strikingly potent anti-lung cancer scaffold (compound 6) was pinpointed through a series of in-vitro experiments. Notably, the compound manifested a striking activity profile against KRAS- and EGFR-mutant lung cancer cell lines (IC50 = 0.80 - 0.96 µM), and the effects were found to be mediated through preferential HDAC6 inhibition (IC50 = 12.9 nM). In addition to HDAC6 inhibition, the compound also elicited HDAC1 and HDAC3 inhibitory activity with IC50 values of 49.9 nM and 68.5 nM, respectively. The HDAC inhibitory ability of compound 6 was also confirmed by western blot experiments, which revealed its potential to decrease the expression levels of HDAC isoforms (HDAC1, HDAC3, and HDAC6); complete downregulation of the HDAC6 isoform was exerted by compound 6 at 0.5 and 1 µM. Moreover, in another western blot experiment, treatment with hydroxamic acid 6 led to upregulation of H3 acK9 and α-tubulin acK40 levels, ascertaining its inhibitory activity toward both class I and class IIB HDACs. The results of other assays were also encouraging, as treatment with compound 6 led to suppression of the colony formation ability of A549 cells, induction of apoptosis, and an increase in autophagic flux. In silico studies allowed us to rationalize the results of the experimental assays, and some key interactions of compound 6 with the amino acid residues of HDAC isoforms were identified. In light of the impressive activity spectrum of compound 6, a pH-responsive nanocarrier (hyaluronic acid-compound 6 nanoparticles) was prepared. The dialysis bag approach was used to assess the nanoparticles under both normal and acidic conditions, and the pH-sensitive nature of the hyaluronic acid-compound 6 nanoparticles was confirmed. The nanoformulation was devoid of cytotoxicity against L929 mouse fibroblast cells (normal setting) and exhibited selective cytotoxicity towards the A549 lung cancer cell line. In a nutshell, compound 6 appears to be a promising adduct, and a detailed investigation of this compound might yield a therapeutic for the treatment of lung cancer.

Keywords: HDAC inhibitors, lung cancer, scaffold, hyaluronic acid, nanoparticles

Procedia PDF Downloads 95
552 Epigenetic Modification Observed in Yeast Chromatin Remodeler Ino80p

Authors: Chang-Hui Shen, Michelle Esposito, Andrew J. Shen, Michael Adejokun, Diana Laterman

Abstract:

The packaging of DNA into nucleosomes is critical to genomic compaction, yet it can leave gene promoters inaccessible to activator proteins or transcription machinery and thus prevent transcriptional initiation. Chromatin remodelers and histone acetyltransferases (HATs) are the two main transcription co-activators that can reconfigure chromatin structure for transcriptional activation. Ino80p is the core component of the INO80 remodeling complex. Recently, it was shown that Ino80p dissociates from the yeast INO1 promoter after induction. However, when certain HATs were deleted or mutated, Ino80p accumulated at the promoters during gene activation. This suggests a link between the presence of HATs and the dissociation of Ino80p. However, it had yet to be demonstrated that Ino80p can be acetylated. To determine if Ino80p can be acetylated, wild-type Saccharomyces cerevisiae cells carrying Ino80p engineered with a double FLAG tag (MATa INO80-FLAG his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) were grown to mid-log phase, as were non-tagged wild type (WT) (MATa his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) and ino80∆ (MATa ino80∆::TRP1 his3∆200 leu2∆0 met15∆0 trp1∆63 ura3∆0) cells as controls. Cells were harvested, and the cell lysates were subjected to immunoprecipitation (IP) with α-FLAG resin to isolate Ino80p. The eluted IP samples were subjected to SDS-PAGE and Western blot analysis, and the blots were probed with α-FLAG and α-acetyl lysine antibodies, respectively. For the blot probed with α-FLAG, one prominent band was observed in the INO80-FLAG cells, but no band was detected in the IP samples from the WT and ino80∆ cells. For the blot probed with the α-acetyl lysine antibody, we detected acetylated Ino80p in the INO80-FLAG strain, while no bands were observed in the control strains. As such, our results showed that Ino80p can be acetylated. This acetylation can explain the co-activator recruitment patterns observed in current gene activation models. In yeast INO1, it has been shown that Ino80p is recruited to the promoter during repression and then dissociates from the promoter once de-repression begins. Histone acetyltransferases, on the other hand, show the opposite pattern of recruitment, as their presence at the promoter increases as INO1 de-repression commences. This Ino80p recruitment pattern changes significantly in HAT mutant strains: instead of dissociating, Ino80p accumulates at the promoter under de-repressing conditions in the absence of functional HATs, such as Gcn5p or Esa1p. As such, Ino80p acetylation may be required for its proper dissociation from promoters. The dissociation mechanism of remodelers may also have wide-ranging implications for transcriptional initiation, elongation, and even repression, as it allows increased spatial access to the promoter for the various transcription factors and regulators that need to bind in that region. Our findings suggest a previously uncharacterized interaction between Ino80p and other co-activators recruited to promoters. Further analysis of Ino80p acetylation will therefore provide insight not only into the role of epigenetic modifications in transcriptional activation but also into the interactions occurring between co-activators at gene promoters during gene regulation.

Keywords: acetylation, chromatin remodeler, epigenetic modification, Ino80p

Procedia PDF Downloads 170
551 Monitor Student Concentration Levels on Online Education Sessions

Authors: M. K. Wijayarathna, S. M. Buddika Harshanath

Abstract:

Monitoring student engagement has become a crucial part of the educational process and a reliable indicator of the capacity to retain information. As online learning classrooms are now common, students' attention levels have become increasingly important, yet it is difficult to check each student's concentration level in an online classroom setting. To profile student attention across gradients of engagement, a study is planned using machine learning models. Using a convolutional neural network, the findings and the confidence score of a high-accuracy model are obtained. In this research, convolutional neural networks are used to help discover the essential emotions that are critical in defining various levels of participation. Students' attention levels were shown to be influenced by emotions such as calm, enjoyment, surprise, and fear. An improved virtual learning system can be created as a result of these data, allowing teachers to focus their support and advice on those students who need it. Student participation has become a crucial component of the learning technique and a consistent predictor of a student's capacity to retain material in the classroom. Convolutional neural networks are planned for implementing the platform. As a preliminary step, a video of the pupil is captured; frames are then extracted from the recording and processed by a convolutional neural network built with the Keras toolkit. Two convolutional neural network methods are planned to determine the pupils' attention level. Finally, the predicted student attention levels are to be displayed on the graphical user interface of the system.
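
A minimal sketch of the kind of Keras CNN described above, classifying extracted video frames into attention-related emotion classes. The input size, label set, and layer choices are assumptions for illustration only, not the authors' architecture.

```python
# Illustrative sketch only: a small Keras CNN for classifying facial-expression
# frames into emotion classes linked to attention (calm, enjoyment, surprise, fear).
# Input size, class list, and architecture are assumptions, not the authors' model.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

EMOTIONS = ["calm", "enjoyment", "surprise", "fear"]  # assumed label set

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),          # assumed grayscale face crops
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data stands in for face frames extracted from the webcam recording.
x = np.random.rand(16, 48, 48, 1).astype("float32")
y = np.random.randint(0, len(EMOTIONS), size=16)
model.fit(x, y, epochs=1, verbose=0)
probs = model.predict(x, verbose=0)           # per-frame emotion probabilities
print("confidence of first frame:", probs[0].max())
```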

Keywords: HTML5, JavaScript, Python flask framework, AI, graphical user interface

Procedia PDF Downloads 99
550 Voting Representation in Social Networks Using Rough Set Techniques

Authors: Yasser F. Hassan

Abstract:

Social networking involves the use of an online platform or website that enables people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for people to interact over the internet, e.g., via e-mail and instant messaging. This work analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person – one decision – one vote), a new voting system is designed in which agents with opposed preferences are endowed with a given number of votes to distribute freely among some issues. The paper uses ideas from machine learning, artificial intelligence, and soft computing to provide a model of the development of voting system response in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning, and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior.

Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices

Procedia PDF Downloads 393
549 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite-Yttrium Phosphate Bioceramic Composite

Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul

Abstract:

The use of hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured to a particular shape by a pressing and sintering route. However, machining processes are still a basic requirement to give a near-net shape to those implants and to ensure dimensional and geometrical accuracy. In this context, optimising the machining parameters is an important factor in understanding the machinability of the materials and in reducing the production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of hydroxyapatite-yttrium phosphate. The phosphates are used in varying ratios for a comparative study of the effect on flexural strength, hardness, machining (drilling) parameters, and bioactivity. The maximum flexural strength and hardness of the composite that could be attained are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine fitted with a dynamometer, using high speed steel (HSS) and solid carbide (SC) drills. The effect of variation in drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force, and tool wear is studied. It is observed that the thrust force and torque vary greatly with the increase in speed, feed, and yttrium phosphate content in the composite. Significant differences in thrust and torque are also noticed due to the change of drills. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of bone-like apatite becomes denser with the increase in the number of days for all compositions of the composites, and it is comparable to that of pure hydroxyapatite.

Keywords: bioactivity, drilling, hydroxyapatite, yttrium phosphate

Procedia PDF Downloads 300
548 Decision-Making Strategies on Smart Dairy Farms: A Review

Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, G. Corkery, E. Broderick, J. Walsh

Abstract:

Farm management and operations will change drastically owing to access to real-time data, real-time forecasting, and tracking of physical items, in combination with Internet of Things developments that further automate farm operations. Dairy farms have embraced technological innovations and acquired vast permanent data streams during the past decade; however, the integration of this information to improve whole-farm management and decision-making does not yet exist. It is now imperative to develop a system that can collect, integrate, manage, and analyse on-farm and off-farm data in real time for practical and relevant environmental and economic actions. The developed systems, based on machine learning and artificial intelligence, need to be connected to produce useful output, a better understanding of the whole farming system, and its environmental impact. Evolutionary computing can be very effective in finding optimal combinations among large sets of options and, ultimately, in strategy determination. The system of the future should be able to manage the dairy farm as well as an experienced dairy farm manager supported by a team of the best agricultural advisors. All these changes should bring resilience and sustainability to dairy farming as well as improve and maintain good animal welfare and the quality of dairy products. This review aims to provide an insight into the state of the art of big data applications and evolutionary computing in relation to smart dairy farming and to identify the most important research and development challenges to be addressed in the future. Smart dairy farming influences every area of management, and its uptake has become a continuing trend.

Keywords: big data, evolutionary computing, cloud, precision technologies

Procedia PDF Downloads 189
547 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors' notes reflect their impressions, attitudes, clinical sense, and opinions about patients' conditions and progress, as well as other information that is essential for doctors' daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors' decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which was based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
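
A rough illustration of the fine-tuning step described above: adapting a pre-trained clinical BERT checkpoint to a binary clinical-sentiment label with the Hugging Face transformers API. The checkpoint name, label scheme, and hyperparameters below are assumptions, not the authors' published training setup.

```python
# Minimal sketch (not the authors' code): fine-tune a clinical BERT checkpoint
# for binary clinical-sentiment classification of ICU notes.
# Checkpoint name, labels, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "emilyalsentzer/Bio_ClinicalBERT"   # assumed base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

notes = ["Patient improving, weaned off vasopressors overnight.",
         "Worsening hypoxia despite proning, family meeting held."]
labels = torch.tensor([1, 0])                    # 1 = positive sentiment, 0 = negative

batch = tokenizer(notes, padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)              # cross-entropy loss on the two notes
out.loss.backward()
optimizer.step()
print("training loss:", float(out.loss))
```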

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 206
546 Accuracy/Precision Evaluation of Excalibur I: A Neurosurgery-Specific Haptic Hand Controller

Authors: Hamidreza Hoshyarmanesh, Benjamin Durante, Alex Irwin, Sanju Lama, Kourosh Zareinia, Garnette R. Sutherland

Abstract:

This study reports on a proposed method to evaluate the accuracy and precision of Excalibur I, a neurosurgery-specific haptic hand controller designed and developed at Project neuroArm. Having an efficient and successful robot-assisted telesurgery is considerably contingent on how accurately and precisely a haptic hand controller (master/local robot) can interpret the kinematic indices of motion, i.e., position and orientation, from the surgeon's upper limb to the slave/remote robot. A proposed test rig was designed and manufactured according to standard ASTM F2554-10 to determine the accuracy and precision range of Excalibur I at four different locations within its workspace: central workspace, extreme forward, far left, and far right. The test rig is metrologically characterized by a coordinate measuring machine (accuracy and repeatability < ± 5 µm). Only the serial linkage of the haptic device is examined due to the use of the Structural Length Index (SLI). The results indicate that accuracy decreases when moving from the central area of the workspace towards its borders. In a comparative study, Excalibur I performs on par with the PHANToM Premium 3.0 and is more accurate/precise than the PHANToM Premium 1.5. The error in the Cartesian coordinate system shows a dominant component in one direction (δx, δy, or δz) for movements on horizontal, vertical, and inclined surfaces. The average error magnitude of three attempts is recorded, considering all three error components. This research is the first promising step towards quantifying the kinematic performance of Excalibur I.
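
In essence, the accuracy and precision evaluated above reduce to the mean error magnitude and the scatter of repeated position measurements against the CMM-characterized reference. The toy sketch below uses invented numbers and simplified definitions, not the ASTM F2554-10 computation itself.

```python
# Toy sketch: accuracy (mean error magnitude) and precision (scatter of repeats)
# of a hand controller's reported position vs. a CMM reference. Numbers invented.
import numpy as np

reference = np.array([120.000, 85.000, 40.000])          # CMM-measured point (mm)
attempts = np.array([                                     # controller-reported positions
    [120.110, 84.950, 40.080],
    [120.090, 84.970, 40.120],
    [120.130, 84.940, 40.060],
])

errors = attempts - reference                             # per-axis error components
accuracy = np.linalg.norm(errors, axis=1).mean()          # mean 3-D error magnitude (mm)
# simplified precision: mean scatter about the average reported position
precision = np.linalg.norm(attempts - attempts.mean(axis=0), axis=1).mean()
print(f"accuracy ~ {accuracy:.3f} mm, precision ~ {precision:.3f} mm")
```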

Keywords: accuracy, advanced metrology, hand controller, precision, robot-assisted surgery, tele-operation, workspace

Procedia PDF Downloads 336
545 Enhancing Vehicle Efficiency Through Vapor Absorption Refrigeration Systems

Authors: Yoftahe Nigussie Worku

Abstract:

This paper explores the utilization of vapor absorption refrigeration systems (VARS) as an alternative to conventional vapor compression refrigeration systems (VCRS) in vehicle air conditioning (AC) systems. Currently, most vehicles employ VCRS, which relies on engine power to drive the compressor, leading to additional fuel consumption. In contrast, VARS harnesses low-grade heat, specifically from the exhaust of high-power internal combustion engines, reducing the burden on the vehicle's engine. The historical development of vapor absorption technology is outlined, dating back to Michael Faraday's discovery in 1824 and the subsequent creation of the first vapor absorption refrigeration machine by Ferdinand Carré in 1860. The paper delves into the fundamental principles of VARS, emphasizing the replacement of mechanical processes with physicochemical interactions that utilize heat rather than mechanical work. The study compares the basic concepts of current vapor compression systems with the proposed vapor absorption systems, highlighting the efficiency gains achieved by eliminating the need for engine-driven compressors. The vapor absorption refrigeration cycle (VARC) is detailed, focusing on the generator's role in separating and vaporizing ammonia, chosen for its low-temperature evaporation characteristics. The problem statement underscores the need for increased efficiency in vehicle AC systems beyond the limitations of VCRS. By introducing VARS driven by low-grade heat, the paper advocates a reduction in engine power consumption and, consequently, a decrease in fuel usage. This research contributes to the ongoing efforts to enhance sustainability and efficiency in automotive climate control systems.

Keywords: VCRS, VARS, efficiency, sustainability

Procedia PDF Downloads 74
544 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay

Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer

Abstract:

Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM tunnels, 3 meters in diameter, with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width, and check the structural adequacy of the installed pipe. In the numerical models, a Mohr-Coulomb constitutive model including the effect of unloading was adopted for the fat clays, while the generalized Hoek-Brown model was employed for the bedrock layer. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. These well-defined excavation sequences made the analysis of this very soft clay possible; otherwise, obtaining convergence in the numerical analysis would have been impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.

Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM

Procedia PDF Downloads 82
543 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain-Barré Syndrome

Authors: Geetanjali Sharma

Abstract:

Guillain-Barré syndrome (GBS) is an auto-immune mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic, and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and was conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to find early central demyelination in clinically diagnosed patients of GBS. These patients were referred from the Department of Medicine of our Institute to our department for electro-diagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6 and 65 years. Brainstem auditory evoked potentials (BAEP) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to IV and the inter-peak latencies I-III, III-IV, and I-V. A statistically significant increase in absolute peak and inter-peak latencies in the GBS group as compared with the control group was noted. The evoked potential results reflect impairment of auditory pathways, probably due to focal demyelination in Schwann cell-derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of these subclinical abnormalities is important, as timely intervention reduces morbidity.

Keywords: brainstem, demyelination, evoked potential, Guillain-Barré

Procedia PDF Downloads 302
542 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in the Kidney Disease

Authors: Leonardo C. Pacheco-Londoño, Nataly J Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera

Abstract:

At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for Surface-Enhanced Raman Spectroscopy (SERS) aimed at diagnosing kidney diseases arising from lupus nephritis (LN), preeclampsia (PC), and hypertension (H). Discrimination models were developed for distinguishing patients with and without kidney disease based on the SERS signals from urine samples using partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Model performance was assessed through cross-validation and external validation, and parameters such as sensitivity and specificity were determined; the models showed average values of 0.9 for both parameters. Additionally, a secondary analysis was performed using machine learning (ML) models, wherein different ML algorithms were evaluated for their efficiency. Finally, it is worth highlighting that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment and informed consent for patient samples.
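
PLS-DA of the kind used above can be approximated in scikit-learn by regressing one-hot class indicators on the spectra with PLSRegression and assigning each sample to the largest predicted indicator. The sketch below is illustrative only; the number of latent variables and the synthetic spectra are assumptions, not the study's data or tuning.

```python
# Illustrative PLS-DA sketch for SERS spectra (not the authors' pipeline):
# one-hot encode the class labels, fit PLSRegression, classify by argmax.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 600))          # stand-in for baseline-corrected SERS spectra
y = rng.integers(0, 2, size=120)         # 1 = kidney disease, 0 = control (assumed labels)

Y = np.eye(2)[y]                         # one-hot indicator matrix
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, test_size=0.3,
                                                       random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, Y_tr)   # latent-variable count: assumed
y_pred = pls.predict(X_te).argmax(axis=1)

sensitivity = recall_score(y_te, y_pred, pos_label=1)
specificity = recall_score(y_te, y_pred, pos_label=0)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```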

Keywords: SERS, Raman, PLS-DA, kidney diseases

Procedia PDF Downloads 45
541 Improving Efficiency and Effectiveness of FMEA Studies

Authors: Joshua Loiselle

Abstract:

This paper discusses the challenges engineering teams face in conducting Failure Modes and Effects Analysis (FMEA) studies. This paper focuses on the specific topic of improving the efficiency and effectiveness of FMEA studies. Modern economic needs and increased business competition require engineers to constantly develop newer and better solutions within shorter timeframes and tighter margins. In addition, documentation requirements for meeting standards/regulatory compliance and customer needs are becoming increasingly complex and verbose. Managing open actions and continuous improvement activities across all projects, product variations, and processes in addition to daily engineering tasks is cumbersome, time consuming, and is susceptible to errors, omissions, and non-conformances. FMEA studies are proven methods for improving products and processes while subsequently reducing engineering workload and improving machine and resource availability through a pre-emptive, systematic approach of identifying, analyzing, and improving high-risk components. If implemented correctly, FMEA studies significantly reduce costs and improve productivity. However, the value of an effective FMEA is often shrouded by a lack of clarity and structure, misconceptions, and previous experiences and, as such, FMEA studies are frequently grouped with the other required information and documented retrospectively in preparation of customer requirements or audits. Performing studies in this way only adds cost to a project and perpetuates the misnomer that FMEA studies are not value-added activities. This paper discusses the benefits of effective FMEA studies, the challenges related to conducting FMEA studies, best practices for efficiently overcoming challenges via structure and automation, and the benefits of implementing those practices.

Keywords: FMEA, quality, APQP, PPAP

Procedia PDF Downloads 304
540 An Explanatory Study Approach Using Artificial Intelligence to Forecast Solar Energy Outcome

Authors: Agada N. Ihuoma, Nagata Yasunori

Abstract:

Artificial intelligence (AI) techniques play a crucial role in predicting the expected energy outcome and in the performance analysis, modeling, and control of renewable energy. Renewable energy is becoming more popular for economic and environmental reasons. In the face of global energy consumption and the increased depletion of most fossil fuels, the world is faced with the challenge of meeting ever-increasing energy demands. Therefore, incorporating artificial intelligence to predict solar radiation outcomes from intermittent sunlight is crucial to enable a balance between the supply of and demand for energy on loads, predict the performance and outcome of solar energy, enhance production planning and energy management, and ensure proper sizing of parameters when generating clean energy. However, one of the major problems of forecasting lies in the algorithms used to control, model, and predict the performance of energy systems, which are complicated and involve large computing power, differential equations, and time series. Also, unreliable (poor-quality) solar radiation data for a geographical location, as well as insufficiently long series, can be a bottleneck. To overcome these problems, this study employs the Anaconda Navigator (Jupyter Notebook) environment for machine learning, which can combine large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features to predict the performance and outcome of solar energy. This, in turn, enables the balance of supply and demand on loads as well as enhanced production planning and energy management.
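
The backward-elimination linear regression named in the keywords can be sketched as follows with statsmodels: repeatedly drop the predictor with the highest p-value until all remaining predictors are significant. The predictor names, synthetic data, and the 0.05 threshold are assumptions for illustration, not the study's dataset.

```python
# Hedged sketch of backward elimination for a linear solar-output model
# (illustrative only; predictors, data, and the 0.05 threshold are assumptions).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "irradiance": rng.uniform(0, 1000, n),     # W/m^2
    "temperature": rng.uniform(10, 40, n),     # deg C
    "humidity": rng.uniform(20, 90, n),        # %
    "wind_speed": rng.uniform(0, 10, n),       # m/s
})
df["energy_out"] = 0.8 * df["irradiance"] - 2.0 * df["temperature"] + rng.normal(0, 50, n)

def backward_elimination(X, y, alpha=0.05):
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model, cols                  # all remaining predictors significant
        cols.remove(worst)                      # drop the least significant predictor
    return None, cols

model, kept = backward_elimination(df.drop(columns="energy_out"), df["energy_out"])
print("retained predictors:", kept)
```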

Keywords: artificial Intelligence, backward elimination, linear regression, solar energy

Procedia PDF Downloads 157
539 Determination of Selected Engineering Properties of Giant Palm Seeds (Borassus Aethiopum) in Relation to Its Oil Potential

Authors: Rasheed Amao Busari, Ahmed Ibrahim

Abstract:

The engineering properties of giant palm seeds are crucial for the rational design of processing and handling systems. This research was conducted to investigate some engineering properties of giant palm seeds in relation to their oil potential. Ripe giant palm fruits were sourced from parts of Zaria in Kaduna State and Ado Ekiti in Ekiti State, Nigeria. The mesocarps of the collected fruits were removed to obtain the nuts, and the nuts were dried under ambient conditions for several days. The actual moisture content of the nuts at the time of the experiment was determined using a KT100S moisture meter and ranged from 17.9% to 19.15%. The physical properties determined were the axial dimensions, geometric mean diameter, arithmetic mean diameter, sphericity, true and bulk densities, porosity, angle of repose, and coefficient of friction. The nuts were measured using a vernier caliper for physical assessment of their sizes. The axial dimensions of 100 nuts were taken, and the results show that the size ranges from 7.30 to 9.32 cm for the major diameter, 7.2 to 8.9 cm for the intermediate diameter, and 4.2 to 6.33 cm for the minor diameter. The mechanical properties determined were compressive force, compressive stress, and deformation, both at peak and at break, using an Instron hydraulic universal testing machine. The work also revealed that the giant palm seed can be classified as an oil-bearing seed, giving an oil yield of 18% by the solvent extraction method. The results obtained from the study will help in solving problems of equipment design, handling, and further processing of the seeds.

Keywords: giant palm seeds, engineering properties, oil potential, moisture content, giant palm fruit

Procedia PDF Downloads 77
538 Effect of Shot Peening on the Mechanical Properties for Welded Joints of Aluminium Alloy 6061-T6

Authors: Muna Khethier Abbass, Khairia Salman Hussan, Huda Mohummed AbdudAlaziz

Abstract:

This work aims to study the effect of shot peening on the mechanical properties of welded joints produced by two different welding processes, tungsten inert gas (TIG) welding and friction stir welding (FSW), of aluminum alloy 6061-T6. The arc welding process (TIG) was carried out on sheets with dimensions of 100 x 50 x 6 mm to obtain welded joints using an ER4043 (AlSi5) electrode as the filler metal and argon as the shielding gas. The friction stir welding process was carried out using a CNC milling machine with a tool rotational speed of 1000 rpm and a welding speed of 20 mm/min to obtain the same butt welded joints. The welded pieces were tested by X-ray radiography to detect internal defects, and faulty welded pieces were excluded. Tensile test specimens were prepared from the welded joints and the base alloy with dimensions according to ASTM17500 and then subjected to a shot peening process using steel balls of 0.9 mm diameter for 15 min. All specimens were subjected to Vickers hardness testing and microstructure examination to study the effect of the welding process (TIG and FSW) on the microstructure of the weld zones. Results showed a general decay of the mechanical properties of the TIG and FSW welded joints compared with the base alloy, while the FSW welded joint gave better mechanical properties than the TIG welded joint. This is due to the microstructural changes during the welding process. It was found that surface hardening by shot peening improved the mechanical properties of both welded joints; this is due to the compressive residual stresses generated in the weld zones, which were measured using X-ray diffraction (XRD) inspection.

Keywords: friction stir welding, TIG welding, mechanical properties, shot peening

Procedia PDF Downloads 339
537 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” the employees and establishing robust Data Management training programs play an essential role and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud Computing, data optimization

Procedia PDF Downloads 70
536 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body-machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control to benefit patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibration discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs that use vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A "standard" stimulus (186 Hz; 750 ms) and "probe" stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probability that each probe stimulus felt more intense than the standard stimulus was computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially versus simultaneously at the torso as well as the knee. Thresholds were small (better) and relatively insensitive to timing differences for vibrations applied at the shoulder. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
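
The threshold definition above (one standard deviation of a cumulative Gaussian fitted to the probe-judged-more-intense probabilities) can be computed as in the SciPy sketch below; the probe frequencies and response proportions are invented illustration values, not study data.

```python
# Illustrative sketch: fit a cumulative Gaussian psychometric function to
# "probe felt more intense" probabilities and report sigma as the threshold.
# Frequencies and proportions below are invented for demonstration only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

probe_hz = np.array([100, 127, 154, 168, 181, 186, 191, 204, 218, 231, 235])
p_probe_more_intense = np.array([0.05, 0.10, 0.20, 0.30, 0.42, 0.50,
                                 0.58, 0.72, 0.85, 0.92, 0.95])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, probe_hz, p_probe_more_intense,
                           p0=[186.0, 20.0])
print(f"point of subjective equality: {mu:.1f} Hz, "
      f"discrimination threshold: {sigma:.1f} Hz")
```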

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 64
535 Modeling of a Pilot Installation for the Recovery of Residual Sludge from Olive Oil Extraction

Authors: Riad Benelmir, Muhammad Shoaib Ahmed Khan

Abstract:

The socio-economic importance of olive oil production is significant in the Mediterranean region, both in terms of wealth and tradition. However, the extraction of olive oil generates huge quantities of wastes that may have a great impact on the land and water environment because of their high phytotoxicity. Olive mill wastewater (OMWW) in particular is one of the major environmental pollutants of the olive oil industry. This project aims to design smart and sustainable integrated thermochemical catalytic processing of residues from olive mills by hydrothermal carbonization (HTC) of olive mill wastewater (OMWW) and fast pyrolysis of olive mill wastewater sludge (OMWS). The byproducts resulting from OMWW-HTC treatment are a carbon-enriched solid phase, called biochar, and a liquid phase (residual water with less dissolved organic and phenolic compounds). The HTC biochar can be tested as a fuel in combustion systems and will also be utilized in high-value applications, such as a soil bio-fertilizer and as a catalyst or catalyst support. The HTC residual water is characterized, treated, and used in soil irrigation once the organic and toxic compounds are reduced below the permitted limits. The project concept also includes the conversion of OMWS to a green diesel through a catalytic pyrolysis process. The green diesel is then used as a biofuel in an internal combustion engine (IC engine) for clean transportation. In this work, a theoretical study is considered for the use of heat from the pyrolysis non-condensable gases in a sorption-refrigeration machine for pyrolysis gas cooling and condensation of bio-oil vapors.

Keywords: biomass, olive oil extraction, adsorption cooling, pyrolysis

Procedia PDF Downloads 90
534 Diabetes Mellitus and Blood Glucose Variability Increases the 30-day Readmission Rate after Kidney Transplantation

Authors: Harini Chakkera

Abstract:

Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for September 2015 to December 2018. Information was linked to the electronic health record to determine a diagnosis of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, and 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and the recipient and donor BMI values as the most predictive risk factors of 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
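
A minimal sketch of the prediction step described above: an XGBoost classifier trained on glucometric and demographic features, with AUC averaged over bootstrapped test partitions. The feature names echo the abstract, but the data, hyperparameters, and partitioning details are assumptions rather than the study's code.

```python
# Hedged sketch (not the study code): XGBoost for 30-day readmission with
# AUC averaged over bootstrapped train/test partitions. Features are assumed.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 1036
X = pd.DataFrame({
    "pretransplant_diabetes": rng.integers(0, 2, n),
    "min_glucose": rng.normal(90, 20, n),
    "max_glucose": rng.normal(220, 60, n),
    "length_of_stay": rng.integers(3, 15, n),
    "recipient_bmi": rng.normal(28, 5, n),
    "donor_bmi": rng.normal(27, 5, n),
})
y = rng.integers(0, 2, n)                       # 1 = readmitted within 30 days (dummy)

aucs = []
for seed in range(5):                           # five bootstrapped partitions
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                        eval_metric="logloss").fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print(f"mean AUC over partitions: {np.mean(aucs):.3f}")
```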

Keywords: kidney, transplant, diabetes, insulin

Procedia PDF Downloads 90
533 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic study. The quantitative and qualitative information provided by satellite and remote sensing images minimizes the complexity of work and saves time. Data/images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithm, among those available, that can classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many researchers are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects important features without any human supervision, and the ANN algorithm stores information on the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) Airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and their performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
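
To make the ANN-versus-CNN comparison above concrete, the sketch below builds both a dense network and a small convolutional network for 28x28x4 SAT-4-style patches and compares accuracy and loss. It is a schematic with dummy data and assumed layer sizes, not the project's exact architectures.

```python
# Illustrative sketch: compare a dense ANN and a small CNN on SAT-4-style
# 28x28x4 patches with 4 land-cover classes. Architectures are assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x = np.random.rand(64, 28, 28, 4).astype("float32")   # stand-in for SAT-4 patches
y = np.random.randint(0, 4, size=64)                   # 4 land-cover classes (dummy)

def build_ann():
    return keras.Sequential([layers.Input((28, 28, 4)), layers.Flatten(),
                             layers.Dense(128, activation="relu"),
                             layers.Dense(4, activation="softmax")])

def build_cnn():
    return keras.Sequential([layers.Input((28, 28, 4)),
                             layers.Conv2D(32, 3, activation="relu"),
                             layers.MaxPooling2D(),
                             layers.Flatten(),
                             layers.Dense(4, activation="softmax")])

for name, model in [("ANN", build_ann()), ("CNN", build_cnn())]:
    model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=1, verbose=0)
    loss, acc = model.evaluate(x, y, verbose=0)
    print(f"{name}: loss={loss:.3f}, accuracy={acc:.3f}")
```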

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 159
532 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs

Authors: H. M. Soroush

Abstract:

The problem of scheduling products and services for on-time delivery is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and we introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
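
The objective above, the expected weighted number of tardy jobs under random processing times, can be estimated for any candidate sequence by Monte Carlo simulation. The sketch below compares two simple illustrative orderings (earliest due date and weighted shortest processing time); the job data, normal processing-time distributions, and orderings are assumptions, not the paper's heuristics.

```python
# Hedged sketch: Monte Carlo estimate of the expected weighted number of tardy
# jobs for a given sequence on a single server. Job data are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
# (mean processing time, std dev, due date, weight) per job -- assumed values
jobs = [(4, 1.0, 6, 3), (3, 0.5, 5, 1), (6, 2.0, 15, 5), (2, 0.5, 4, 2)]

def expected_weighted_tardy(sequence, n_sim=5000):
    total = 0.0
    for _ in range(n_sim):
        t = 0.0
        for j in sequence:
            mean, sd, due, w = jobs[j]
            t += max(rng.normal(mean, sd), 0.0)   # sampled processing time
            if t > due:
                total += w                        # job j is tardy in this replication
    return total / n_sim

edd = sorted(range(len(jobs)), key=lambda j: jobs[j][2])                 # earliest due date
wspt = sorted(range(len(jobs)), key=lambda j: jobs[j][0] / jobs[j][3])   # weighted SPT
for name, seq in [("EDD", edd), ("weighted SPT", wspt)]:
    print(f"{name} sequence {seq}: expected weighted tardy jobs "
          f"= {expected_weighted_tardy(seq):.2f}")
```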

Keywords: number of late jobs, scheduling, single server, stochastic

Procedia PDF Downloads 497
531 A High Content Screening Platform for the Accurate Prediction of Nephrotoxicity

Authors: Sijing Xiong, Ran Su, Lit-Hsin Loo, Daniele Zink

Abstract:

The kidney is a major target for toxic effects of drugs, industrial and environmental chemicals and other compounds. Typically, nephrotoxicity is detected late during drug development, and regulatory animal models could not solve this problem. Validated or accepted in silico or in vitro methods for the prediction of nephrotoxicity are not available. We have established the first and currently only pre-validated in vitro models for the accurate prediction of nephrotoxicity in humans and the first predictive platforms based on renal cells derived from human pluripotent stem cells. In order to further improve the efficiency of our predictive models, we recently developed a high content screening (HCS) platform. This platform employed automated imaging in combination with automated quantitative phenotypic profiling and machine learning methods. 129 image-based phenotypic features were analyzed with respect to their predictive performance in combination with 44 compounds with different chemical structures that included drugs, environmental and industrial chemicals and herbal and fungal compounds. The nephrotoxicity of these compounds in humans is well characterized. A combination of chromatin and cytoskeletal features resulted in high predictivity with respect to nephrotoxicity in humans. Test balanced accuracies of 82% or 89% were obtained with human primary or immortalized renal proximal tubular cells, respectively. Furthermore, our results revealed that a DNA damage response is commonly induced by different PTC-toxicants with diverse chemical structures and injury mechanisms. Together, the results show that the automated HCS platform allows efficient and accurate nephrotoxicity prediction for compounds with diverse chemical structures.
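
A skeletal version of the classification step described above: a machine-learning classifier trained on image-derived phenotypic features (e.g., chromatin and cytoskeletal descriptors) and scored by test balanced accuracy. The classifier choice, feature matrix, and labels are placeholders, not the authors' pipeline.

```python
# Illustrative sketch only: classify compounds as nephrotoxic or not from
# image-based phenotypic features and report test balanced accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(3)
X = rng.normal(size=(44 * 8, 129))    # 129 phenotypic features per treated-well replicate
y = rng.integers(0, 2, size=44 * 8)   # 1 = nephrotoxic in humans, 0 = non-toxic (dummy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test balanced accuracy:",
      round(balanced_accuracy_score(y_te, clf.predict(X_te)), 2))
```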

Keywords: high content screening, in vitro models, nephrotoxicity, toxicity prediction

Procedia PDF Downloads 312
530 Characteristics of the Particle Size Distribution and Exposure Concentrations of Nanoparticles Generated from the Laser Metal Deposition Process

Authors: Yu-Hsuan Liu, Ying-Fang Wang

Abstract:

The objectives of the present study are to characterize nanoparticles generated from the laser metal deposition (LMD) process and to estimate the particle concentrations deposited in the head (H), tracheobronchial (TB), and alveolar (A) regions, respectively. The studied LMD chamber (3.6 m × 3.8 m × 2.9 m) is equipped with a robotic laser metal deposition machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used to conduct static sampling inside the chamber for nanoparticle number concentration and particle size distribution measurements. The SMPS recorded the particle number concentration every 3 minutes and covered diameters from 11 to 372 nm when the aerosol and sheath flow rates were set at 0.6 and 6 L/min, respectively. The resultant size distributions were used to predict the deposition of nanoparticles in the H, TB, and A regions of the respiratory tract using the UK National Radiological Protection Board's (NRPB's) LUDEP software. Results show that the number concentrations of nanoparticles in the indoor background and the LMD chamber were 4.8×10³ and 4.3×10⁵ #/cm³, respectively. The nanoparticles emitted from the LMD process followed a unimodal distribution with a number median diameter (NMD) of 142 nm and a geometric standard deviation (GSD) of 1.86. The fraction of nanoparticles deposited in the alveolar region (A: 69.8%) was higher than in the other two regions, the head region (H: 10.9%) and the tracheobronchial region (TB: 19.3%). This study conducted static sampling to measure the nanoparticles in the LMD process, and the results show that the fraction of particles deposited in the A region was higher than in the other two regions. Therefore, the characteristics of nanoparticles emitted from the LMD process could provide valuable science-based evidence for exposure assessments in the future.
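
The NMD and GSD reported above summarize a lognormal description of the SMPS number-size distribution; the sketch below shows that computation for a binned distribution. The bin diameters and counts are invented, and the subsequent H/TB/A deposition split would still require a lung-deposition model such as LUDEP.

```python
# Illustrative sketch: compute the number median diameter (NMD) and geometric
# standard deviation (GSD) from binned SMPS data. Bins and counts are invented.
import numpy as np

d_nm = np.array([20, 40, 70, 100, 140, 200, 280, 370])      # bin midpoints (nm)
counts = np.array([2e3, 8e3, 3e4, 7e4, 9e4, 6e4, 2e4, 5e3])  # number conc. per bin

ln_d = np.log(d_nm)
mu = np.average(ln_d, weights=counts)                         # count-weighted mean of ln(d)
sigma = np.sqrt(np.average((ln_d - mu) ** 2, weights=counts))

nmd = np.exp(mu)       # number median diameter
gsd = np.exp(sigma)    # geometric standard deviation
print(f"NMD = {nmd:.0f} nm, GSD = {gsd:.2f}")
```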

Keywords: exposure assessment, laser metal deposition process, nanoparticle, respiratory region

Procedia PDF Downloads 284
529 Metabolic Changes during Reprogramming of Wheat and Triticale Microspores

Authors: Natalia Hordynska, Magdalena Szechynska-Hebda, Miroslaw Sobczak, Elzbieta Rozanska, Joanna Troczynska, Zofia Banaszak, Maria Wedzony

Abstract:

Albinism is a common problem encountered in wheat and triticale breeding programs, which require in vitro culture steps, e.g., the generation of doubled haploids via the androgenesis process. Genetic factors are a major determinant of albinism; however, environmental conditions such as temperature and media composition influence the frequency of albino plant formation. Cold incubation of wheat and triticale spikes induced a switch from gametophytic to sporophytic development. Further, androgenic structures formed from anthers of the genotypes susceptible to androgenesis, or treated with cold stress, had a pool of structurally primitive plastids with small starch granules or swollen thylakoids. High temperature was a factor inducing androgenesis of wheat and triticale, but at the same time, it was a factor favoring the formation of albino plants. In genotypes susceptible to albinism, or after heat stress, cells formed from anthers were vacuolated, and plastids were eliminated. Partial or complete loss of chlorophyll pigments and incomplete differentiation of chloroplast membranes result in the formation of tissues or whole plants unable to perform photosynthesis. Indeed, susceptibility to the androgenesis process was associated with an increase in the total concentration of photosynthetic pigments in anthers, spikes, and regenerated plants. The proper balance of the synthesis of the various pigments was the starting point for their proper incorporation into photosynthetic membranes. In contrast, genotypes resistant to the androgenesis process and those treated with heat contained 100 times lower contents of photosynthetic pigments. In particular, the synthesis of violaxanthin, zeaxanthin, lutein, and chlorophyll b was limited. Furthermore, deregulation of starch and lipid synthesis, which led to the formation of very complex starch granules and an increased number of oleosomes, respectively, correlated with reduced efficiency of androgenesis. The content of other sugars varied depending on the genotype and the type of stress. The highest content of various sugars was found for genotypes susceptible to androgenesis, and a highly reduced content for genotypes resistant to androgenesis. The most important sugars seem to be glucose and fructose. They are involved in sugar sensing and signaling pathways, which affect the expression of various genes and regulate plant development. Sucrose, on the other hand, seems to have a minor effect at each stage of androgenesis. The sugar metabolism was related to the metabolic activity of the microspores. The genotypes susceptible to the androgenesis process had much faster mitochondrion- and chloroplast-dependent energy conversion and higher heat production by tissues. Thus, the effectiveness of metabolic processes, their balance, and their flexibility under stress was a factor determining the direction of microspore development and, in the later stages of the androgenesis process, a factor supporting the induction of androgenic structures, chloroplast formation, and the regeneration of green plants. The work was financed by the Ministry of Agriculture and Rural Development within the program 'Biological Progress in Plant Production', project no. HOR.hn.802.15.2018.

Keywords: androgenesis, chloroplast, metabolism, temperature stress

Procedia PDF Downloads 260
528 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models

Authors: Ethan James

Abstract:

Over one billion people worldwide suffer some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are misdiagnosed or remain undiagnosed altogether due to limited diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently attracted considerable interest in ophthalmology for image-based diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of the retina. In ophthalmology, DL has been applied to fundus photographs, OCT scans, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for these retinal images that provides a diagnostic accuracy above 90%. The purpose of this project was therefore to develop an AI model that uses machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a residual neural network architecture with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, thereby facilitating earlier treatment and improving post-treatment outcomes.
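
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the kind of residual CNN classifier described. The class names, layer sizes, input resolution, and number of disease classes are illustrative assumptions, and standard adaptive average pooling stands in for the unspecified "cyclic pooling" stage.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Basic residual block: two 3x3 convolutions with a skip connection.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection

class OCTResNet(nn.Module):
    # Hypothetical residual CNN for classifying single-channel OCT slices into disease categories.
    def __init__(self, num_classes=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.pool = nn.AdaptiveAvgPool2d(1)  # placeholder for the paper's "cyclic pooling"
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.stem(x)
        x = self.blocks(x)
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = OCTResNet(num_classes=4)
logits = model(torch.randn(8, 1, 224, 224))  # batch of 8 grayscale 224x224 OCT slices
print(logits.shape)  # torch.Size([8, 4])

In practice such a network would be trained with a cross-entropy loss on labelled OCT scans and evaluated against held-out data; the sketch above only shows the forward pass.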

Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina

Procedia PDF Downloads 181
527 Making the Right Call for Falls: Evaluating the Efficacy of a Multi-Faceted Trust Wide Approach to Improving Patient Safety Post Falls

Authors: Jawaad Saleem, Hannah Wright, Peter Sommerville, Adrian Hopper

Abstract:

Introduction: Inpatient falls are the most commonly reported patient safety incidents and carry a significant burden in terms of resources, morbidity, and mortality. Ensuring adequate post-falls management of patients by staff is therefore paramount to maintaining patient safety, especially in out-of-hours and resource-stretched settings. Aims: This quality improvement project aims to improve the current practice of falls management at Guy's and St Thomas' Hospital, London, as compared with our 2016 quality improvement project findings. Furthermore, it looks to increase junior doctors' confidence in managing falls and their use of the new guidance protocols. Methods: The multifaceted interventions implemented included the development of new trust-wide guidelines detailing management pathways for patients after falls, available on the intranet; the production of 2,000 lanyard cards summarising these guidelines, distributed amongst junior doctors and staff; and a ‘safety signal’ email sent from the Trust chief medical officer to all staff raising awareness of falls and the guidelines. Formal falls teaching was also implemented for new doctors at induction. Using an established incident database, 189 consecutive falls in 2017 were retrospectively analysed electronically and compared against the variables measured in 2016, after the interventions. A separate serious incident database was used to analyse 50 falls from May 2015 to March 2018 to ascertain the statistical significance of the impact of our interventions on serious incidents. A questionnaire similar to the 2016 version was administered to the 2017 cohort of foundation year one (FY1) doctors and the results compared with 2016. Results: Questionnaire data demonstrated improved awareness and use of the guidelines, increased confidence, and an increase in training. 97% of FY1 trainees felt that the interventions had increased their awareness of the impact of falls on patients in the trust. Data from the incident database showed that the time to review patients after a fall had decreased from an average of 130 to 86 minutes. Improvement was also demonstrated in the time to order and schedule X-ray and CT imaging (3 and 5 hours, respectively). Data from the serious incident database showed that ‘the time from fall until harm was detected’ was statistically significantly lower (P = 0.044) post-intervention. We also showed that the incidence of significant delays in detecting harm (> 10 hours) was reduced post-intervention. Conclusions: Our interventions have helped to significantly reduce the average time to assess patients and to order and schedule appropriate imaging after falls. Delays of over ten hours to detect serious injuries after falls were commonplace; since the intervention, their frequency has markedly reduced. We suggest this will lead to earlier identification of patient harm, fewer clinical incidents relating to falls, and thus improved overall patient safety. Our interventions have also helped increase clinical staff confidence in, management of, and awareness of falls in the trust. Next steps include expanding the teaching sessions and improving multidisciplinary team involvement to sustain this improvement.
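
The abstract does not state which statistical test produced P = 0.044, so the following Python sketch shows only one plausible way such a comparison could be run: a non-parametric test on 'time from fall until harm was detected' before and after the interventions. The file name serious_incident_falls.csv and the columns period and minutes_to_harm_detected are hypothetical placeholders, not the trust's actual database export.

import pandas as pd
from scipy import stats

# Assumed export: one row per serious-incident fall, with the study period
# and the time in minutes until harm was detected.
df = pd.read_csv("serious_incident_falls.csv")
pre = df.loc[df["period"] == "pre-intervention", "minutes_to_harm_detected"]
post = df.loc[df["period"] == "post-intervention", "minutes_to_harm_detected"]

# Non-parametric comparison, since time-to-detection data are typically skewed.
u_stat, p_value = stats.mannwhitneyu(pre, post, alternative="greater")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")

# Proportion of falls with a delay of more than 10 hours (600 minutes) before harm was detected.
for label, times in [("pre", pre), ("post", post)]:
    print(label, (times > 600).mean())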

Keywords: patient safety, quality improvement, serious incidents, falls, clinical care

Procedia PDF Downloads 124
526 Performance Analysis of Pumps-as-Turbine Under Cavitating Conditions

Authors: Calvin Stephen, Biswajit Basu, Aonghus McNabola

Abstract:

Market liberalization in the power sector has led to the emergence of micro-hydropower schemes that depend on the use of pumps-as-turbines in applications that were not considered suitable hydropower sites in earlier years. These applications include energy recovery in water supply networks, sewage systems, irrigation systems, alcohol breweries, underground mining and desalination plants. As a result, pumps-as-turbine technology has been adopted at an accelerating rate due to the economic advantages it presents over conventional turbines in the micro-hydropower space. The performance of these machines under cavitating conditions, however, is not well understood, as the literature on their turbine mode of operation is scarce. In hydraulic machines, cavitation is a common occurrence which must be understood in order to safeguard the machine and prolong its operating life. The overall purpose of this study is to investigate the effects of cavitation on the performance of a pumps-as-turbine system over its entire operating range. At various operating speeds, the cavitating region is identified experimentally while monitoring the effect this has on the power produced by the machine. Initial results indicate the occurrence of cavitation at higher flow rates for lower operating speeds, and at lower flow rates for higher operating speeds. This implies that, for cavitation-free operation, low-speed pumps-as-turbines should be used under low flow rate conditions, whereas high-speed machines should be adopted for sites with higher flow rates. Such a complete understanding of pumps-as-turbine suction performance can help avoid cavitation-induced failures and hence improve the reliability of the micro-hydropower plant.
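
The abstract does not specify how the onset of cavitation is quantified. A standard descriptor for hydraulic turbomachinery that a study of this kind could use is the Thoma cavitation coefficient, sketched below in LaTeX; the symbols are the conventional ones and are not taken from the paper.

\[
  \mathrm{NPSH_a} = \frac{p_{\mathrm{atm}} - p_v}{\rho g} - h_s,
  \qquad
  \sigma = \frac{\mathrm{NPSH_a}}{H}
\]

Here p_atm is the local atmospheric pressure, p_v the vapour pressure of the water, \rho its density, g the gravitational acceleration, h_s the static suction (draft) head above the tailwater, and H the net head across the machine. Cavitation is expected once \sigma falls below a critical value \sigma_c determined experimentally for each operating speed, which is consistent with the speed- and flow-dependent onset reported above.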

Keywords: cavitation, micro-hydropower, pumps-as-turbine, system design

Procedia PDF Downloads 118
525 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan’s syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. The physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies; it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case had a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks' gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and postpartum arteriography showed a right VAD. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, confirmed at autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy in early labour with a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurred vision, and the BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic resonance imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit. The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks' gestation with a BP of 155/105, a constant headache and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was delivered the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with acceptable outcomes. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis, as we hypothesize that early and aggressive management of vertebral artery dissection may prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection

Procedia PDF Downloads 278