Search results for: block toppling failure
2997 Business Continuity Risk Review for a Large Petrochemical Complex
Authors: Michel A. Thomet
Abstract:
A discrete-event simulation model was used to perform a Reliability-Availability-Maintainability (RAM) study of a large petrochemical complex comprising sixteen process units and seven feeds and intermediate streams. All the feeds and intermediate streams have associated storage tanks, so that if a processing unit fails and shuts down, the downstream units can keep producing their outputs. This also helps the upstream units, which do not have to reduce their outputs but can store their excess production until the failed unit restarts. Each process unit and each pipe section carrying the feeds and intermediate streams has a probability of failure with an associated distribution and a Mean Time Between Failures (MTBF), as well as a distribution of the time to restore and a Mean Time To Restore (MTTR). The utilities supporting the process units can also fail and have their own distributions with specific MTBF and MTTR. The model runs cover ten years or more and are repeated several times to obtain statistically relevant results. One of the main results is the On-Stream Factor (OSF) of each process unit (the percentage of hours in a year when the unit is running in nominal conditions). One objective of the study was to investigate whether the storage capacity for each of the feeds and intermediate streams was adequate. This was done by increasing the storage capacities in several steps and re-running the simulation to see whether, and by how much, the OSFs improved. Other objectives were to determine whether utility failures were an important factor in the overall OSF, and what could be done to reduce their failure rates through redundant equipment.
Keywords: business continuity, on-stream factor, petrochemical, RAM study, simulation, MTBF
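The OSF estimate at the heart of such a RAM study can be illustrated with a minimal single-unit Monte Carlo sketch. This is an editorial illustration, not the paper's model: the actual study simulates sixteen interacting units, storage tanks, and utilities, and the MTBF/MTTR values below are hypothetical.

```python
import random

def simulate_osf(mtbf_h, mttr_h, horizon_h=10 * 8760, seed=42):
    """Estimate a single unit's On-Stream Factor (OSF) by alternating
    exponentially distributed run and repair intervals over the horizon."""
    rng = random.Random(seed)
    t, uptime = 0.0, 0.0
    while t < horizon_h:
        run = rng.expovariate(1.0 / mtbf_h)        # time until next failure
        uptime += min(run, horizon_h - t)          # clip at end of horizon
        t += run
        if t < horizon_h:
            t += rng.expovariate(1.0 / mttr_h)     # repair time (downtime)
    return uptime / horizon_h

# A unit failing on average every 2000 h and restored in 48 h on average:
osf = simulate_osf(2000, 48)   # close to 2000 / (2000 + 48)
```

In a full study, runs like this are repeated many times (and across the whole network of units) to obtain statistically stable OSF estimates.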
Procedia PDF Downloads 217
2996 A Case Study on Re-Assessment Study of an Earthfill Dam at Latamber, Pakistan
Authors: Afnan Ahmad, Shahid Ali, Mujahid Khan
Abstract:
This research presents a parametric study of an existing earthfill dam located at Latamber, Karak city, Pakistan. The study consists of carrying out seepage, slope stability, and earthquake analyses of the dam for the existing geometry and for a modified geometry. Dams are massive as well as expensive hydraulic structures and therefore need proper attention. Additionally, this dam falls in zone 2B of Pakistan, an earthquake-prone area where peak ground accelerations range from 0.16g to 0.24g, so it should be dealt with great care, as the failure of any dam can cause irreparable losses. Similarly, seepage as well as slope failure can cause damage that can lead to failure of the dam. Therefore, keeping in view the importance of dam construction and the associated costs, the main focus is to carry out a parametric study of the newly constructed dam. GeoStudio software is used for the analysis: Seep/W for seepage analysis, Slope/W for slope stability analysis, and Quake/W for earthquake analysis. Based on the geometrical, hydrological, and geotechnical data, seepage and slope stability analyses of different proposed geometries of the dam are carried out along with the seismic analysis. A rigorous 2-D analysis was carried out using limit equilibrium and finite element methods. The seismic study began with a static analysis, continuing with a dynamic response analysis. The seismic analyses permitted evaluation of the overall patterns of the Latamber dam behavior in terms of displacement, stress, strain, and acceleration fields. Similarly, the seepage analysis evaluates seepage through the foundation and embankment of the dam, while the slope stability analysis estimates the factor of safety of the upstream and downstream slopes.
The results of the analysis demonstrate that, among multiple geometries, the Latamber dam is secure against seepage (piping) failure and slope instability on both the upstream and downstream faces. Moreover, the dam is safe against dynamic loading, and no liquefaction has been observed while changing its geometry within permissible limits.
Keywords: earth-fill dam, finite element, liquefaction, seepage analysis
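As a rough illustration of the slope-stability side of such a study, the textbook infinite-slope factor of safety can be computed in a few lines. This is a simplification for intuition only, not the Slope/W limit-equilibrium model used in the study, and all parameter values are hypothetical.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, depth, beta_deg, ru=0.0):
    """Factor of safety of an infinite slope (textbook simplification).
    c: cohesion (kPa), phi_deg: friction angle (deg), gamma: unit weight
    (kN/m^3), depth: failure-plane depth (m), beta_deg: slope angle (deg),
    ru: pore-pressure ratio reducing effective normal stress."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma = gamma * depth * math.cos(beta) ** 2            # normal stress on plane
    tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c + sigma * (1.0 - ru) * math.tan(phi)) / tau

# Hypothetical embankment material on a 25-degree slope:
fos = infinite_slope_fos(c=10, phi_deg=30, gamma=18, depth=5, beta_deg=25)
```

A factor of safety above 1 indicates the resisting shear strength exceeds the driving stress; raising the pore-pressure ratio `ru` (e.g. under seepage) lowers the factor of safety, which is why seepage and slope stability are analyzed together.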
Procedia PDF Downloads 163
2995 Induction Machine Bearing Failure Detection Using Advanced Signal Processing Methods
Authors: Abdelghani Chahmi
Abstract:
This article examines the detection and localization of faults in electrical systems, particularly those using asynchronous machines. First, the failure process is characterized and relevant symptoms are defined; based on these processes and symptoms, a model of the malfunctions is obtained. Second, the development of the machine diagnosis is presented. Because studies of malfunctions in electrical systems can rely on only a small amount of experimental data, it has been essential to develop simulation tools that allow the faulty behavior to be characterized. Fault detection uses signal processing techniques in known operating phases.
Keywords: induction motor, modeling, bearing damage, airgap eccentricity, torque variation
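A first step in such signal-based diagnosis is locating characteristic fault frequencies in the spectrum. The sketch below is illustrative only: the signal model and the 87 Hz "defect" component are made up, standing in for a bearing characteristic frequency rather than taken from the article.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest spectral peak, a first
    step in locating bearing characteristic fault frequencies."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0                 # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Simulated stator-current-like signal: 50 Hz supply fundamental plus a
# weak 87 Hz component standing in for a bearing defect frequency.
fs = 2000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 87 * t)
peak = dominant_frequency(x, fs)
```

In practice the supply fundamental dominates, so diagnosis looks at sideband components around it rather than the global peak; this sketch only shows the basic spectral machinery.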
Procedia PDF Downloads 138
2994 Lateritic Soils from Ceara, Brazil: Sustainable Use in Constructive Blocks for Social Housing
Authors: Ivelise M. Strozberg, Juliana Sales Frota, Lucas de Oliveira Vale
Abstract:
The state of Ceara, located in the northeast region of Brazil, is abundant in lateritic soil, which has usually been discarded due to its lack of agricultural potential, while materials of a similar nature have been used for decades as constituents of housing constructive elements in many parts of the world, such as India and Portugal. Since much of the semi-arid housing in the state of Ceara fails to meet the minimum comfort and safety requirements, this research proposed to study the Ceara lateritic soil and the possibility of its use as a sustainable building block constituent for social housing, contributing to the improvement of the region's living conditions. To achieve this objective, soil samples were collected from five different locations within the region, three of which presented lateritic nature, and were characterized according to the Unified Soil Classification System and the MCT methodology, a Brazilian methodology developed during the 1980s to better describe and approach tropical soils, their characterization, and behavior. Two of these samples were used to build two different miniature block prototypes, which were manually molded, heated at low temperatures (< 300 ºC) in order to save energy and lessen the high CO₂ emission rate common in traditional firing methods, and then submitted to load tests. Among the soils tested, the one with the highest degree of laterization and greater presence of fines constituted the block with the best performance in terms of flexural strength, presenting resistance gains when heated at increasing temperatures, which indicates that this type of soil has potential as a construction material.
Keywords: constructive blocks, lateritic soil, MCT methodology, sustainability
Procedia PDF Downloads 124
2993 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China
Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu
Abstract:
Increases in travel fare and station failures have a large impact on passengers' travel. This paper analyzes the accessibility reliability of Urban Rail Transit (URT) stations under fare increases and station failures. Firstly, passengers' travel paths are reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm with the Ratio of Station Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are proposed based on an analysis of passengers' travel characteristics. Thirdly, station accessibility under different scenarios is measured, and the rate of accessibility change is proposed as the indicator of a station's accessibility reliability. Finally, the accessibility of Hangzhou metro stations is analyzed with the formulated models. The results show that Jinjiang station and Liangzhu station are the most important and most convenient stations in the Hangzhou metro, respectively. Station failure, and the combination of a fare increase with station failure, have a large impact on station accessibility, whereas a fare increase alone does not. Stations on Hangzhou metro Line 1 have relatively poor accessibility reliability, and Fengqi Road station's accessibility reliability is the weakest. For the Hangzhou metro operations department, constructing new metro lines around Line 1 and protecting Line 1's stations preferentially can effectively improve the accessibility reliability of the Hangzhou metro.
Keywords: automatic fare collection data, AFC, station accessibility reliability, stochastic user equilibrium, urban rail transit, URT
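The LeaderRank step can be illustrated on a toy graph. The four-station network below is hypothetical, and this is the standard LeaderRank formulation (ground node plus power iteration), not necessarily the exact variant the study combines with RSAPV.

```python
import numpy as np

def leaderrank(adj, tol=1e-9):
    """LeaderRank importance scores for a directed graph given as an
    adjacency matrix. A ground node linked bidirectionally to every node
    replaces PageRank's damping factor; after convergence the ground
    node's score is shared out equally among the real nodes."""
    n = len(adj)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = adj
    A[n, :n] = 1.0           # ground node -> every node
    A[:n, n] = 1.0           # every node -> ground node
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transitions
    s = np.ones(n + 1)
    for _ in range(10000):                 # power iteration
        s_new = P.T @ s
        if np.abs(s_new - s).max() < tol:
            break
        s = s_new
    return s_new[:n] + s_new[n] / n        # redistribute ground score

# Toy 4-station network; station 0 receives the most incoming links.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [1, 0, 0, 1],
                [1, 1, 0, 0]], dtype=float)
scores = leaderrank(adj)
```

The ground node guarantees the random walk is irreducible and aperiodic, so the iteration converges without a damping parameter; the station with the highest score (here, the most-linked station 0) is the most "important" in the LeaderRank sense.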
Procedia PDF Downloads 133
2992 Development of Alpha Spectroscopy Method with Solid State Nuclear Track Detector Using Aluminium Thin Films
Authors: Nidal Dwaikat
Abstract:
This work presents the development of an alpha spectroscopy method using solid-state nuclear track detectors with aluminum thin films. The resolution of the method is high, and it is able to discriminate between alpha particles at different incident energies. It can measure the exact number of alpha particles at a specific energy without needing a calibration of alpha track diameter versus alpha energy. The method was tested using a Cf-252 alpha standard source at energies of 5.11 MeV, 3.86 MeV, and 2.7 MeV, produced by varying the detector-to-source distance. On the front side, two detectors were covered with aluminum thin films and a third detector was left uncovered. The thicknesses of the aluminum films were selected carefully (using SRIM 2013) such that the first film blocks the two lower-energy alpha particles (3.86 MeV and 2.7 MeV) while alpha particles at the higher energy (5.11 MeV) can penetrate the film and reach the detector's surface. The second film blocks only the alpha particles at the lowest energy (2.7 MeV) and allows alpha particles at the two higher energies (5.11 MeV and 3.86 MeV) to penetrate and produce tracks. The uncovered detector records tracks from alpha particles at all three energies. For quality assurance and accuracy, the detectors were mounted on sufficiently thick copper substrates to block exposure from the backside. The tracks on the first detector are due to alpha particles at 5.11 MeV. The difference between the track count on the second detector and that on the first detector is due to alpha particles at 3.86 MeV. Finally, by subtracting the track count on the second detector from that on the third (uncovered) detector, we find the number of tracks due to alpha particles at 2.7 MeV. Once the efficiency calibration factor is known, the activity of the standard source can be calculated exactly.
Keywords: aluminium thin film, alpha particles, copper substrate, CR-39 detector
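The subtraction logic described above reduces to simple arithmetic on the three track counts. The counts below are made-up numbers for illustration, not measured data.

```python
def counts_per_energy(n_film1, n_film2, n_bare):
    """Recover track counts per alpha energy from the three detectors:
    film 1 passes only 5.11 MeV; film 2 passes 5.11 and 3.86 MeV;
    the bare detector records all three energies."""
    n_511 = n_film1               # tracks from 5.11 MeV alphas
    n_386 = n_film2 - n_film1     # tracks from 3.86 MeV alphas
    n_270 = n_bare - n_film2      # tracks from 2.7 MeV alphas
    return n_511, n_386, n_270

# Hypothetical counts: 120 behind film 1, 275 behind film 2, 390 bare.
n_511, n_386, n_270 = counts_per_energy(120, 275, 390)  # -> (120, 155, 115)
```

Dividing each recovered count by the detector efficiency and counting time then yields the activity attributable to each energy.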
Procedia PDF Downloads 364
2991 Electrospun Nanofibers from Amphiphilic Block Copolymers and Their Graphene Nanocomposites
Authors: Hussein M. Etmimi, Peter E. Mallon
Abstract:
Electrospinning uses an electrical charge to draw very fine fibers (typically on the micro or nano scale) from a liquid or molten precursor. Over the years, this method has become a widely used and successful technique for processing polymer materials and their composites into nanofibers. The main focus of this work is to study the electrospinning of multi-phase amphiphilic copolymers and their nanocomposites, which contain graphene as the nanofiller. In such amphiphilic materials, the constituent segments are incompatible, and thus the solid-state morphology is determined by the composition of the various constituents as well as the method of preparation. In this study, amphiphilic block copolymers of poly(dimethyl siloxane) and poly(methyl methacrylate) (PDMS-b-PMMA) with well-defined structures were synthesized, and the solution electrospinning of these materials and their properties were investigated. Atom transfer radical polymerization (ATRP) was used to obtain the controlled block copolymers with relatively high molar masses and narrow dispersity. First, PDMS macroinitiators with different chain lengths of 1000, 5000, and 10000 g/mol were synthesized by the reaction of monocarbinol-terminated PDMS with α-bromoisobutyryl bromide initiator. The obtained macroinitiators were used for the polymerization of methyl methacrylate monomer to obtain the desired block copolymers via the ATRP process. Graphene oxide (GO) at different loadings was then added to the copolymer solution, and the resultant nanocomposites were successfully electrospun into nanofibers. The electrospinning was carried out using a dimethylformamide/chloroform mixture (60:40 vol%) as the solution medium. Scanning electron microscopy (SEM) showed the successful formation of electrospun fibers with dimensions in the nanometer range. X-ray diffraction indicated that the GO nanosheets had an exfoliated structure, irrespective of the filler loading.
Thermogravimetric analysis showed that the thermal stability of the nanofibers was improved in the presence of GO, independent of the filler loading. Differential scanning calorimetry showed that the glass transition temperature of the nanofibers increased significantly in the presence of GO, as a function of the filler loading.
Keywords: electrospinning, graphene oxide, nanofibers, polymethyl methacrylate (PMMA)
Procedia PDF Downloads 304
2990 A Scenario-Based Experiment Comparing Managerial and Front-Line Employee Apologies in Terms of Customers' Perceived Justice, Satisfaction, and Commitment
Authors: Ioana Dallinger, Vincent P. Magnini
Abstract:
Due to the many moving parts and the high human component, mistakes and failures sometimes occur during transactions in service environments. Because a certain portion of such failures is unavoidable, many service providers constantly look for guidance regarding optimal ways to manage failures and recoveries. Through a scenario-based experiment, the findings of this study run counter to the empowerment approach (i.e., that frontline employees should be empowered to resolve failure situations on their own). Specifically, this study finds that customers' perceptions of distributive, procedural, and interactional justice are significantly higher [p-values < .05] when a manager delivers an apology as opposed to the frontline provider. Moreover, customers' satisfaction with the recovery and commitment to the firm are also significantly stronger [p-values < .05] when a manager apologizes. Interestingly, this study also empirically tests the effects of combined apologies from both the manager and the employee and finds that the combined approach yields better results for customers' interactional justice perceptions and for their satisfaction with recovery, but not for their distributive or procedural justice perceptions or consequent commitment to the firm. This study can serve as a springboard for further research. For example, perceptions and attitudes regarding employee empowerment vary across country cultures. Furthermore, a number of factors can likely moderate the cause-and-effect relationship between a failure recovery and customers' post-recovery perceptions [e.g., the severity of the failure].
Keywords: apology, empowerment, service failure recovery, service recovery
Procedia PDF Downloads 294
2989 The Association of Slope Failure and Lineament Density along the Ranau-Tambunan Road, Sabah, Malaysia
Authors: Norbert Simon, Rodeano Roslee, Abdul Ghani Rafek, Goh Thian Lai, Azimah Hussein, Lee Khai Ern
Abstract:
The 54 km stretch of the Ranau-Tambunan (RTM) road in Sabah is subject to slope failures almost every year. This study focuses on identifying sections of the road that are susceptible to failure based on temporal landslide density and lineament density analyses. In addition, rock slopes in several sections of the road were assessed using the geological strength index (GSI) technique. The analysis involved 148 landslides recorded in 1978, 1994, 2009, and 2011. The landslides were digitized as points, and the point density was calculated for every 1 km² of the road. The lineaments of the area were interpreted from the Landsat 7 15 m panchromatic band. The lineament density was then calculated for every 1 km² of the area using the same technique as the slope failure density calculation. The landslide and lineament densities were classified into three classes indicating the level of susceptibility (low, moderate, high). Subsequently, the two density maps were overlaid to produce the final susceptibility map. The overlap of the two high-susceptibility classes signifies a high potential for slope failure at those locations in the future. The final susceptibility map indicates that 22 sections of the road are highly susceptible. Seven rock slopes were assessed along the RTM road using the GSI technique. The assessment found that rock slopes along this road are highly fractured and weathered and can be classified into the fair to poor categories. The poor condition of the rock slopes can be attributed to the high lineament density present in the study area. Six of the rock slopes are located in the high-susceptibility zones.
A detailed investigation of the 22 highly susceptible sections of the RTM road should be conducted, given their higher susceptibility to failure, in order to prevent untoward incidents to road users in the future.
Keywords: GSI, landslide, landslide density, landslide susceptibility, lineament density
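The overlay of the two density maps can be illustrated as a classify-and-intersect operation: cells falling in the high class on both maps are flagged. The thresholds and density values below are hypothetical, not the study's calibrated classes.

```python
import numpy as np

def susceptibility(landslide_density, lineament_density, thresholds):
    """Combine per-km^2 landslide and lineament densities: a cell is
    flagged highly susceptible only where BOTH densities fall in the
    'high' class (an illustrative reading of the overlay step)."""
    low, high = thresholds
    def classify(d):
        return np.digitize(d, [low, high])   # 0 = low, 1 = moderate, 2 = high
    return (classify(landslide_density) == 2) & (classify(lineament_density) == 2)

ls = np.array([0.5, 2.0, 4.5, 6.0])   # landslides per km^2 (hypothetical)
ln = np.array([1.0, 5.0, 4.0, 7.0])   # lineament density per km^2 (hypothetical)
flags = susceptibility(ls, ln, thresholds=(1.5, 3.5))
# -> only cells where both densities fall in the high class are flagged
```

In a GIS workflow the same intersection is performed over raster layers, but the logic per cell is identical.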
Procedia PDF Downloads 396
2988 Modeling Thermal Changes of Urban Blocks in Relation to the Landscape Structure and Configuration in Guilan Province
Authors: Roshanak Afrakhteh, Abdolrasoul Salman Mahini, Mahdi Motagh, Hamidreza Kamyab
Abstract:
Urban Heat Islands (UHIs) are distinctive urban areas characterized by densely populated central cores surrounded by less densely populated peripheral lands. These areas experience elevated temperatures, primarily due to impermeable surfaces and specific land use patterns. The consequences of these temperature variations are far-reaching, impacting the environment and society negatively and leading to increased energy consumption, air pollution, and public health concerns. This paper emphasizes the need for simplified approaches to comprehend UHI temperature dynamics and explains how urban development patterns contribute to land surface temperature variation. To illustrate this relationship, the study focuses on the Guilan Plain, utilizing techniques such as principal component analysis and generalized additive models. The research centered on mapping land use and land surface temperature in the low-lying area of Guilan province. Satellite data from Landsat sensors for three different time periods (2002, 2012, and 2021) were employed. Using eCognition software, a spatial unit known as a "city block" was delineated through object-based analysis. The study also applied the normalized difference vegetation index (NDVI) method in estimating land surface radiance. Predictive variables for urban land surface temperature within residential city blocks were identified and categorized as intrinsic (related to the block's structure) and neighboring (related to adjacent blocks) variables. Principal component analysis (PCA) was used to select significant variables, and a generalized additive model (GAM), implemented using R's mgcv package, modeled the relationship between urban land surface temperature and the predictor variables. Notable findings included variations in urban temperature across different years attributed to environmental and climatic factors.
Block size, shared boundary, mother polygon area, and perimeter-to-area ratio were identified as the main variables for the generalized additive regression model. The model showed non-linear relationships: block size, shared boundary, and mother polygon area were positively correlated with temperature, while the perimeter-to-area ratio displayed a negative trend. The discussion highlights the challenges of predicting urban surface temperature and the significance of block size in determining urban temperature patterns. It also underscores the importance of spatial configuration and unit structure in shaping these patterns. In conclusion, this study contributes to the growing body of research on the connection between land use patterns and urban surface temperature. Block size, along with block dispersion and aggregation, emerged as key factors influencing urban surface temperature in residential areas. The proposed methodology enhances our understanding of parameter significance in shaping urban temperature patterns across various regions, particularly in Iran.
Keywords: urban heat island, land surface temperature, LST modeling, GAM, Guilan province
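The PCA-based variable-screening step can be illustrated with a small sketch. The study itself ran the PCA and GAM in R (mgcv); the Python version below uses synthetic data with one deliberately correlated predictor pair, standing in for correlated block metrics such as area and perimeter.

```python
import numpy as np

def pca_loadings(X):
    """Principal components of standardized block predictors (e.g. block
    size, shared boundary, mother polygon area, perimeter-to-area ratio).
    Returns explained-variance ratios and component loadings."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize columns
    cov = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # largest variance first
    ratios = eigvals[order] / eigvals.sum()
    return ratios, eigvecs[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # 200 blocks, 4 predictors
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.3, size=200)  # correlated pair
ratios, loadings = pca_loadings(X)
```

Because the first two predictors are strongly correlated, the first component absorbs most of their shared variance; retaining the leading components (or the variables loading on them) is what reduces multicollinearity before fitting the GAM.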
Procedia PDF Downloads 73
2987 River Bank Erosion Studies: A Review on Investigation Approaches and Governing Factors
Authors: Azlinda Saadon
Abstract:
This paper provides a detailed review of river bank erosion studies with respect to their processes, methods of measurement, and the factors governing river bank erosion. Bank erosion processes are commonly associated with the initiation and development of river changes through width adjustment and planform evolution. They consist of two main types: basal erosion due to fluvial hydraulic forces, and bank failure under the influence of gravity. Most studies have focused on only one factor rather than integrating both. Evidence from previous works has shown the interaction between both processes of fluvial hydraulic force and bank failure. Bank failure is often treated as a probabilistic phenomenon without considering the physical characteristics and geotechnical aspects of the bank. This review summarizes the findings of previous investigators with respect to measurement techniques and prediction rates of river bank erosion through field investigation, physical model, and numerical model approaches. Factors governing river bank erosion, considering the physical characteristics of fluvial erosion, are defined.
Keywords: river bank erosion, bank erosion, dimensional analysis, geotechnical aspects
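The basal (fluvial) erosion component is commonly quantified in this literature with an excess-shear-stress model: erosion proceeds at a rate proportional to the shear stress exceeding a critical threshold. A minimal sketch, with all parameter values hypothetical:

```python
def basal_erosion_rate(tau, tau_c, k_d):
    """Excess-shear-stress model of basal (fluvial) erosion: erosion
    rate (m/s) is proportional to the applied bed/bank shear stress
    tau (Pa) in excess of the critical shear stress tau_c (Pa), with
    erodibility coefficient k_d (m^3/(N*s)). Zero below the threshold."""
    return k_d * max(tau - tau_c, 0.0)

# Hypothetical flow: tau = 12 Pa, critical tau_c = 5 Pa, k_d = 1e-6:
rate = basal_erosion_rate(12.0, 5.0, 1e-6)   # -> 7e-06 m/s
```

Bank failure, by contrast, is typically modeled separately with geotechnical stability criteria; integrated approaches couple the two by letting basal erosion steepen the bank until a failure criterion is reached.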
Procedia PDF Downloads 432
2986 Parkinson's Disease Hand-Eye Coordination and Dexterity Evaluation System
Authors: Wann-Yun Shieh, Chin-Man Wang, Ya-Cheng Shieh
Abstract:
This study aims to develop an objective scoring system to evaluate hand-eye coordination and hand dexterity in Parkinson's disease. The system contains three boards, each fitted with sensors that sense a user's finger operations. The operations include the peg test, the block test, and the blind block test. A user has to use vision, hearing, and tactile abilities to complete these operations, and the board records the results automatically. These results can help physicians evaluate a user's reaction, coordination, and dexterity. The results are collected in a cloud database for further analysis and statistics. A researcher can use this system to obtain systematic, graphic reports for an individual or a group of users. In particular, a deep learning model is developed to learn the features of the data from different users. This model will help physicians assess Parkinson's disease symptoms with a more intelligent algorithm.
Keywords: deep learning, hand-eye coordination, reaction, hand dexterity
Procedia PDF Downloads 64
2985 Cooperative Scheme Using Adjacent Base Stations in Wireless Communication
Authors: Young-Min Ko, Seung-Jun Yu, Chang-Bin Ha, Hyoung-Kyu Song
Abstract:
In a wireless communication system, the failure of a base station can result in a communication disruption in the cell. This paper proposes a way to deal with base station failure in an OFDM-based wireless communication system. Cooperative communication among the adjacent base stations can solve the problem. High performance is obtained by configuring the transmission signals with a cyclic delay diversity (CDD) scheme in the cooperative communication. The cooperative scheme can be an effective solution in this particular situation.
Keywords: base station, CDD, OFDM, diversity gain, MIMO
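The CDD idea can be sketched in a few lines: each cooperating station transmits a cyclically shifted copy of the same time-domain OFDM symbol, which the receiver sees as additional frequency selectivity rather than interference. The subcarrier count and delay values below are illustrative, not the paper's system parameters.

```python
import numpy as np

def cdd_transmit(ofdm_symbol, delays):
    """Cyclic Delay Diversity: each cooperating transmitter sends a
    cyclically shifted copy of the same time-domain OFDM symbol. Each
    shift appears per subcarrier as a linear phase rotation, so spatial
    diversity is converted into frequency selectivity that the
    receiver's equalizer can exploit."""
    return [np.roll(ofdm_symbol, d) for d in delays]

N = 64                                       # number of subcarriers
rng = np.random.default_rng(1)
data = np.exp(2j * np.pi * rng.random(N))    # unit-modulus subcarrier symbols
tx = np.fft.ifft(data)                       # OFDM modulation via IFFT
branches = cdd_transmit(tx, delays=[0, 3])   # two stations: shifts 0 and 3

# A cyclic shift of d samples multiplies subcarrier k by exp(-2j*pi*k*d/N):
k = np.arange(N)
shifted_spectrum = np.fft.fft(branches[1])
```

Because each branch is just a cyclically rotated copy of the same symbol, no extra signaling is needed between the cooperating stations beyond agreeing on the delay values.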
Procedia PDF Downloads 483
2984 Sodium-Glucose Co-Transporter-2 Inhibitors in Heart Failure with Mildly Reduced Ejection Fraction: Future Perspectives in Patients with Neoplasia
Authors: M. A. Munteanu, A. M. Lungu, A. I. Chivescu, V. Teodorescu, E. Tufanoiu, C. Nicolae, T. I. Nanea
Abstract:
Introduction: Sodium-glucose co-transporter 2 inhibitors (SGLT2i), first developed as antidiabetic medications, have demonstrated numerous positive effects on the cardiovascular system, especially in the prevention of heart failure (HF). HF is a challenging, multifaceted disease that needs all-encompassing therapy. It should not be viewed as a limited form of heart illness but rather as a systemic disease that leads to multiple organ failure and death. SGLT2i are an extremely effective tool for treating HF through their pleiotropic effects. In addition to their use in patients with diabetes mellitus who are at high cardiovascular risk or who have already experienced a cardiovascular event, SGLT2i administration has been shown to have positive effects on a variety of HF manifestations and stages, regardless of whether the patient has diabetes mellitus. Material and Methods: In accordance with the guidelines, 110 patients (83 males and 27 females) with heart failure with mildly reduced ejection fraction (HFmrEF), with T2D and neoplasia, were enrolled in the prospective study. The structural and functional state of the left ventricular myocardium and the ejection fraction were assessed through echocardiography. Patients were randomized to receive once-daily dapagliflozin 10 mg. Results: Patients with HFmrEF were divided into three subgroups according to age: 7% (8) of patients were aged < 45 years, 35% (28) were aged 46-59 years, and 58% (74) were aged > 60 years. The most prevalent comorbidities were hypertension (43.1%), coronary heart disease (40%), and obesity (33.2%). Study drug discontinuation and serious adverse events have so far been infrequent in the subgroups, in both men and women. Conclusions: SGLT2 inhibitors are a novel class of antidiabetic agents that have demonstrated positive efficacy and safety outcomes in the setting of HFmrEF.
To date, in our study, dapagliflozin has been safe and well tolerated irrespective of sex.
Keywords: diabetes mellitus type 2, sodium-glucose co-transporter-2 inhibitors, heart failure, neoplasia
Procedia PDF Downloads 88
2983 Global Best Practice Paradox: The Failure of a One-Size-Fits-All Approach to Development, a Case Study of Pakistan
Authors: Muhammad Naveed Iftikhar, Farah Khalid
Abstract:
Global best practices, as ordained by international organizations, comprise a broad top-down approach to development problems that does not take country-specific factors into account. The political economy of each country is extremely different, and the failure of several attempts by international organizations to implement global best practice models in developing countries, each with its unique set of variables, shows that this is not the most efficient solution to development problems. This paper is a humble attempt at shedding light on some specific examples of failures of global best practices. Pakistan has its unique set of problems, and unless those are added to the broader equation of development, country-specific reform and growth will continue to pose a challenge to reform programs initiated by international organizations. The three case studies presented in this paper are just a few prominent examples of the failure of the global best practice, top-down, universalistic approach to development as ordained by international organizations. Development and reform can only be achieved if local dynamics are given their due importance. The modus operandi of international organizations needs to be tailored to each country's unique politico-economic environment.
Keywords: best practice, development, context
Procedia PDF Downloads 473
2982 A Crossover Study of Therapeutic Equivalence of Generic Product Versus Reference Product of Ivabradine in Patients with Chronic Heart Failure
Authors: Hadeer E. Eliwa, Naglaa S. Bazan, Ebtissam A. Darweesh, Nagwa A. Sabri
Abstract:
Background: Generic substitution of brand ivabradine prescriptions can reduce drug expenditures and improve adherence. However, distrust of generic medicines by practitioners and patients, due to doubts regarding their quality and fear of counterfeiting, compromises the acceptance of this practice. Aim: The goal of this study is to compare the therapeutic equivalence of the brand product versus the generic product of ivabradine in adult patients with chronic heart failure with reduced ejection fraction (≤ 40%) (HFrEF). Methodology: Thirty-two Egyptian patients with chronic heart failure with reduced ejection fraction (HFrEF) were treated with branded ivabradine (Procrolan®) and generic ivabradine (Bradipect®) during 24 (2×12) weeks. Primary outcomes were resting heart rate (HR), NYHA functional class (FC), quality of life (QoL) assessed with the Minnesota Living with Heart Failure (MLWHF) questionnaire, and EF. Secondary outcomes were the number of hospitalizations for worsening HFrEF and adverse effects. No washout period was allowed. Findings: At the 12th week, the reduction in HR was comparable in the two groups (90.13±7.11 to 69±11.41 vs 96.13±17.58 to 67.31±8.68 bpm in the brand and generic groups, respectively). Also, the increase in EF was comparable in the two groups (27.44±4.59 to 33.38±5.62 vs 32±5.96 to 39.31±8.95 in the brand and generic groups, respectively). The improvement in NYHA FC was comparable in both groups (87.5% in the brand group vs 93.8% in the generic group). The mean QoL score improved from 31.63±15.8 to 19.6±14.7 vs 35.68±17.63 to 22.9±15.1 for the brand and generic groups, respectively. Similarly, at the end of 24 weeks, no significant changes were observed relative to the data recorded at the 12th week regarding HR, EF, QoL, and NYHA FC. Only minor side effects, mainly phosphenes, and a comparable number of hospitalizations were observed in both groups. Conclusion: The study revealed no statistically significant differences in therapeutic effect or safety between generic and branded ivabradine.
We conclude that practitioners can safely interchange between them for economic reasons.
Keywords: Bradipect®, heart failure, ivabradine, Procrolan®, therapeutic equivalence
Procedia PDF Downloads 457
2981 Sudden Death of a Cocaine Body Packer: Autopsy Examination Findings
Authors: Parthasarathi Pramanik
Abstract:
Body packing is a way of transferring drugs across international borders or into any drug-prohibited area. The drugs are usually hidden in packets inside anatomical body cavities such as the mouth, intestines, rectum, ear, or vagina. Cocaine is a very common drug for body packing across the world. A 48-year-old male was reported dead in his hotel after complaining of chest pain and vomiting. At autopsy, there were eighty-two white cylindrical body packs in the stomach and the small and large intestines. The seals of a few of the packets were open. Toxicological examination revealed the presence of cocaine in the stomach, liver, kidney, and hair samples. Microscopically, the presence of myocardial necrosis with interstitial oedema, along with hypertrophy and fibrosis of the myocardial fibres, suggested heart failure due to cocaine cardiotoxicity. Focal lymphocyte infiltration and perivascular fibrosis in the myocardium also indicated chronic cocaine toxicity. After careful autopsy examination, it was concluded that the victim died of congestive heart failure secondary to acute and chronic cocaine poisoning.
Keywords: cardiac failure, cocaine, body packer, sudden death
Procedia PDF Downloads 317
2980 Consumption of Fat Burners Leads to Acute Liver Failure: A Systematic Review Protocol
Authors: Anjana Aggarwal, Sheilja Walia
Abstract:
The prevalence of obesity and overweight is increasing due to sedentary lifestyles and busy schedules that leave less time for physical exercise. To reduce weight, people look for easier and more convenient options; the easiest is the use of dietary supplements and fat burners. These are products that claim to decrease body weight by increasing the basal metabolic rate. Various reports link the consumption of fat burners to heart palpitations, seizures, anxiety, depression, psychosis, bradycardia, insomnia, muscle contractions, hepatotoxicity, and even liver failure. Case reports and case series indicate that ingredients present in fat burners have caused acute liver failure (ALF) and hepatic toxicity in many cases. Another contributing factor is the absence of Food and Drug Administration regulation of these products, leading to increased consumption and a higher risk of liver disease in the population. This systematic review aims to attain a better understanding of the dietary supplements used globally to reduce weight and to document the case reports and series of acute liver failure caused by the consumption of fat burners. Electronic databases such as PubMed, Cochrane, and Google Scholar will be systematically searched for relevant articles. Websites of dietary products and brands that sell such supplements, journals of hepatology, reports of national and international projects on ALF, and the grey literature will also be reviewed to gain a better understanding of the topic. The author will select and screen the articles in consultation with the co-author. Studies will be selected based on predefined inclusion and exclusion criteria. The case reports and case series included in the final list of studies will be assessed for methodological quality using the CARE guidelines.
The results of this study will provide insights into and a better understanding of fat burners. Since these supplements are easily available on the market without any restrictions on their sale, people are unaware of their adverse effects, and their consumption can cause acute liver failure. This review will thus provide a platform for larger future studies. Keywords: acute liver failure, dietary supplements, fat burners, weight loss supplements
Procedia PDF Downloads 84
2979 Software Reliability Prediction Model Analysis
Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria
Abstract:
Software reliability prediction offers a valuable opportunity to measure the software failure rate at any point throughout system test, and a software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model that assumes time redundancy, whose value (the number of repeated transmissions of basic blocks) can serve as an optimization parameter. We consider the mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence that consists of a random number of basic blocks. We consider the system software unreliable, with the time between adjacent failures following an exponential distribution. Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability
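The exponential failure law at the core of this model can be sketched numerically. The rate and block count below are illustrative values, not taken from the paper; the Erlang CDF is the standard distribution function for the total transmission time of a fixed number of independent exponentially distributed basic blocks.

```python
import math

def reliability(lam, t):
    """Survival probability under an exponential failure law: R(t) = exp(-lam*t)."""
    return math.exp(-lam * t)

def erlang_cdf(n, lam, t):
    """CDF of the total transmission time of n basic blocks, each taking an
    independent Exp(lam) time, i.e. an Erlang(n, lam) distribution."""
    return 1.0 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                     for k in range(n))

lam = 0.5  # assumed failure rate per unit time, for illustration only
print(round(reliability(lam, 2.0), 4))    # e^{-1} ≈ 0.3679
print(round(erlang_cdf(1, lam, 2.0), 4))  # 1 - e^{-1} ≈ 0.6321
```

For a random number of blocks, the transmission-time DF the paper seeks would be a mixture of such Erlang CDFs weighted by the block-count distribution.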
Procedia PDF Downloads 463
2978 Corrosion Risk Assessment/Risk Based Inspection (RBI)
Authors: Lutfi Abosrra, Alseddeq Alabaoub, Nuri Elhaloudi
Abstract:
Corrosion processes in the Oil & Gas industry can lead to failures that are usually costly: costly to repair, costly in terms of lost or contaminated product, costly in terms of environmental damage, and possibly costly in terms of human safety. This article describes the results of the corrosion review and criticality assessment performed at Mellitah Gas (SRU unit) for pressure equipment and piping systems. The information gathered through the review was intended for developing a qualitative RBI study. The corrosion criticality assessment was carried out by applying company procedures and recommended industrial practices such as API 571, API 580/581, and ASME PCC-3, which provide guidelines for establishing corrosion integrity assessment. The corrosion review is intimately related to the probability of failure (POF). During the corrosion study, the process units were reviewed by following the applicable process flow diagrams (PFDs) in the presence of Mellitah's personnel from process engineering, inspection, corrosion/materials, and reliability engineering. The expected corrosion damage mechanisms (internal and external) were identified, and the corrosion rate was estimated for every piece of equipment and corrosion loop in the process units. A combination of consequence and likelihood of failure was used to determine the corrosion risk. A qualitative consequence of failure (COF) was assigned to each individual item based on the characteristics of the fluid (flammability, toxicity, and pollution) at three levels (High, Medium, and Low). A qualitative probability of failure (POF) was applied to evaluate the internal and external degradation mechanisms, using a point-based scale (0 to 10) to prioritize risk as Low, Medium, or High. Keywords: corrosion, criticality assessment, RBI, POF, COF
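A qualitative ranking of the kind described, a 0-10 POF score crossed with a COF level, can be sketched as a lookup matrix. The band thresholds and matrix entries below are assumptions made for illustration; they are not the values prescribed by API 580/581 or by Mellitah's procedures.

```python
def likelihood_level(points):
    # Map the 0-10 POF score into Low / Medium / High bands (assumed cut-offs).
    if points <= 3:
        return "Low"
    if points <= 6:
        return "Medium"
    return "High"

# (POF level, COF level) -> overall risk; an illustrative 3x3 matrix.
RISK_MATRIX = {
    ("Low", "Low"): "Low",     ("Low", "Medium"): "Low",     ("Low", "High"): "Medium",
    ("Medium", "Low"): "Low",  ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("High", "Low"): "Medium", ("High", "Medium"): "High",   ("High", "High"): "High",
}

def corrosion_risk(pof_points, cof_level):
    """Combine a POF score with a qualitative COF level into an overall risk rank."""
    return RISK_MATRIX[(likelihood_level(pof_points), cof_level)]

print(corrosion_risk(8, "High"))    # High
print(corrosion_risk(2, "Medium"))  # Low
```

In an actual RBI study, the matrix entries come from the operator's risk criteria rather than a symmetric default like this one.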
Procedia PDF Downloads 79
2977 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with one of two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). This study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In this two-center study, 123 patients underwent thoracoscopic ablation of AF between 2016 and 2020. Patients were divided into two groups: the first group comprised patients treated with the AtriCure device (N=63) and the second those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%), and long-standing persistent AF (17.5%); in group 2, the proportions were 13.3%, 13.3%, and 73.3%, respectively. The median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1 and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA class II) and 4.8% with chronic heart failure (NYHA class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and a cardiopulmonary exercise test. Duration of freedom from AF, the distant mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the rate of adverse events was 14.3% in group 1 and 16.7% in group 2.
The mean follow-up period was 50.4 (31.8; 64.8) months in group 1 and 30.5 (14.1; 37.5) months in group 2 (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% required additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2, the figures were 90% and 18.3%, respectively (P<0.02 for total freedom from AF). At follow-up, the distant mortality rate was 4.8% in group 1, with no fatal events in group 2. The prevalence of cerebrovascular events was higher in group 1 than in group 2 (6.7% vs. 1.7%). Conclusions: Despite the relatively shorter follow-up of group 2, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long term. Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
Procedia PDF Downloads 109
2976 A Machine Learning Based Method to Detect System Failure in Resource Constrained Environment
Authors: Payel Datta, Abhishek Das, Abhishek Roychoudhury, Dhiman Chattopadhyay, Tanushyam Chattopadhyay
Abstract:
Machine learning (ML) and deep learning (DL) are predominantly used in image/video processing, natural language processing (NLP), and audio and speech recognition, but far less in system performance evaluation. In this paper, the authors describe the architecture of an abstraction layer constructed using ML/DL to detect system failure. The proposed system detects failure by evaluating the performance metrics of an IoT service deployed in a constrained infrastructure environment. It has been tested on a manually annotated data set containing different system metrics, such as number of threads, throughput, average response time, CPU usage, memory usage, and network input/output, captured in different hardware environments: edge (an Atom-based gateway) and cloud (AWS EC2). The main challenge in developing such a system is that the classification accuracy should be 100%, since an error in the system degrades service performance and consequently affects the reliability and high availability that are mandatory for an IoT system. The proposed ML/DL classifiers work with 100% accuracy on a data set of nearly 4,000 samples captured within the organization. Keywords: machine learning, system performance, performance metrics, IoT, edge
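As a rough illustration of classifying system health from performance metrics, here is a minimal nearest-centroid classifier on invented telemetry. It is a toy stand-in only: the metric values, the labels, and the nearest-centroid rule itself are assumptions for this sketch, not the authors' ML/DL architecture.

```python
import math

# Toy training rows: (threads, cpu %, mem %, avg response ms) -> label (invented data).
TRAIN = [
    ((8, 35.0, 40.0, 120.0), "healthy"),
    ((8, 38.0, 42.0, 130.0), "healthy"),
    ((64, 97.0, 91.0, 2400.0), "failing"),
    ((64, 95.0, 88.0, 2100.0), "failing"),
]

def centroid(rows):
    """Component-wise mean of a list of equal-length metric tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

CENTROIDS = {
    label: centroid([x for x, y in TRAIN if y == label])
    for label in {y for _, y in TRAIN}
}

def classify(sample):
    """Nearest-centroid rule: pick the label whose centroid is closest (Euclidean)."""
    dist = lambda a, b: math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(CENTROIDS, key=lambda lab: dist(sample, CENTROIDS[lab]))

print(classify((60, 96.0, 90.0, 2300.0)))  # failing
print(classify((10, 36.0, 41.0, 125.0)))   # healthy
```

A production version would normalize the metrics first, since response time in milliseconds otherwise dominates the distance.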
Procedia PDF Downloads 193
2975 Hypocalcaemia Inducing Heart Failure: A Rare Presentation
Authors: A. Kherraf, M. Bouziane, L. Azzouzi, R. Habbal
Abstract:
Introduction: Hypocalcemia is a rare cause of heart failure. We report the clinical case of a young patient with reversible dilated cardiomyopathy secondary to hypocalcemia in the context of hypoparathyroidism. Clinical case: A 23-year-old patient with a history of thyroidectomy for papillary thyroid carcinoma three years earlier presented to the emergency room with progressive-onset dyspnea and edema of the lower limbs. Clinical examination showed hypotension at 90/70 mmHg, tachycardia at 102 bpm, and lower-limb edema. The ECG showed a regular sinus rhythm with a corrected QT interval prolonged to 520 ms. The chest X-ray showed cardiomegaly. Echocardiography revealed dilated cardiomyopathy with biventricular dysfunction and a left ventricular ejection fraction of 45%, as well as moderate mitral insufficiency due to restriction of the posterior mitral leaflet, moderate tricuspid insufficiency, and a dilated inferior vena cava with an estimated pulmonary arterial pressure of 46 mmHg. Blood tests revealed severe hypocalcemia at 38 mg/L with normal albumin and thyroxine levels, as well as hyperphosphatemia and increased TSH. The patient received calcium and vitamin D supplementation and was treated with beta-blockers, ACE inhibitors, and diuretics, with good progress and progressive normalization of cardiac function. Discussion: The cardiovascular manifestations of hypocalcemia usually appear at profoundly low serum calcium levels. Hypocalcemia can lead to hypotension, arrhythmias, ventricular fibrillation, a prolonged QT interval, or even heart failure. Heart failure is a rare and serious complication of hypocalcemia but is most often characterized by complete normalization of myocardial function after treatment. The etiology of the hypocalcemia in this case was probably accidental parathyroid removal during thyroidectomy, which is why careful monitoring of calcium levels is recommended after surgery.
Conclusion: Hypocalcemic heart failure is a rare but reversible heart disease. Systematic monitoring of serum calcium should be performed in all patients after thyroid surgery to avoid complications related to hypoparathyroidism. Keywords: hypocalcemia, heart failure, thyroid surgery, hypoparathyroidism
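The prolonged corrected QT interval reported in this case is conventionally obtained with Bazett's formula, QTc = QT / sqrt(RR), with RR in seconds. The measured QT value below is back-calculated for illustration; only the 102 bpm heart rate and the 520 ms QTc come from the case report.

```python
import math

def qtc_bazett(qt_ms, heart_rate_bpm):
    """Bazett-corrected QT interval in ms: QTc = QT / sqrt(RR), RR in seconds."""
    rr_s = 60.0 / heart_rate_bpm
    return qt_ms / math.sqrt(rr_s)

# Illustrative arithmetic only: a measured QT of about 399 ms at 102 bpm
# corrects to roughly the 520 ms reported in the case.
print(round(qtc_bazett(399, 102)))  # ≈ 520
```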
Procedia PDF Downloads 141
2974 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on production capacity and product quality, and it therefore deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working time. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. When a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, or a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. The sequential preventive-maintenance-last (PML) policy is defined as one in which the system is maintained before any CM occurs, either at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory. Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
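The shock process described, NHPP arrivals with minor failures fixed by minimal repair and catastrophic failures triggering CM, can be simulated by thinning. The intensity function, rates, and catastrophic-failure probability below are illustrative assumptions, not the paper's model parameters (which use number-dependent probabilities).

```python
import random

def simulate_interval(lam_max, intensity, p_catastrophic, horizon, rng):
    """Simulate shocks on [0, horizon] from an NHPP via thinning: propose
    arrivals at rate lam_max, accept each with probability intensity(t)/lam_max.
    Each accepted shock is type-II (catastrophic -> CM) with probability
    p_catastrophic, else type-I (minor -> minimal repair, system keeps running).
    Returns (number_of_minor_failures, time_of_first_CM_or_None)."""
    t, minors = 0.0, 0
    while True:
        t += rng.expovariate(lam_max)                  # candidate arrival
        if t > horizon:
            return minors, None                        # interval ends with no CM
        if rng.random() <= intensity(t) / lam_max:     # accepted shock
            if rng.random() < p_catastrophic:
                return minors, t                       # CM removes the failure
            minors += 1                                # minimal repair, continue

rng = random.Random(42)
minors, cm_time = simulate_interval(
    lam_max=2.0,
    intensity=lambda t: 0.5 + 0.1 * t,  # an assumed increasing shock rate, max 1.5 < lam_max
    p_catastrophic=0.2, horizon=10.0, rng=rng)
print(minors, cm_time)
```

Repeating this over many intervals gives Monte Carlo estimates of repair counts and CM times against which the analytic cost-rate expressions can be checked.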
Procedia PDF Downloads 273
2973 Genetic Algorithm Based Node Fault Detection and Recovery in Distributed Sensor Networks
Authors: N. Nalini, Lokesh B. Bhajantri
Abstract:
In Distributed Sensor Networks, sensor nodes are prone to failure due to energy depletion and other causes, so fault tolerance is essential in a distributed sensor environment. Energy efficiency, network or topology control, and fault tolerance are the most important issues in the development of next-generation Distributed Sensor Networks (DSNs). This paper proposes node fault detection and recovery using a Genetic Algorithm (GA) in a DSN when some of the sensor nodes are faulty. The main objective of this work is to provide a fault-tolerance mechanism that is energy efficient and responsive to the network, using a GA to detect faulty nodes based on node energy depletion and link failure between nodes. The proposed fault detection model detects faults at the node level and at the network level (link failure and packet error). Finally, the performance parameters of the proposed scheme are evaluated. Keywords: distributed sensor networks, genetic algorithm, fault detection and recovery, information technology
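A minimal GA of the kind this abstract sketches might encode a candidate fault assignment as a bit string and score it against energy and link indicators. The telemetry values, thresholds, and GA settings below are invented for illustration and do not reproduce the authors' algorithm.

```python
import random

# Assumed per-node telemetry: residual battery fraction and link status (toy data).
ENERGY = [0.9, 0.1, 0.8, 0.05, 0.7, 0.85]
LINK_OK = [True, True, False, False, True, True]

def fitness(chrom):
    """Score a fault assignment: a node marked faulty (1) should show depleted
    energy or a dead link; a node marked healthy (0) should show neither."""
    score = 0
    for bit, e, link in zip(chrom, ENERGY, LINK_OK):
        suspect = e < 0.2 or not link
        score += 1 if bit == int(suspect) else 0
    return score

def evolve(pop_size=20, generations=100, rng=random.Random(7)):
    n = len(ENERGY)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:              # bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # should converge toward marking nodes 1, 2, 3 (low energy or dead link)
```

A real DSN deployment would evaluate fitness from reported telemetry rather than a fixed table, and would trigger a recovery step (rerouting around flagged nodes) after detection.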
Procedia PDF Downloads 451
2972 Drug Therapy Problems and Associated Factors among Patients with Heart Failure in the Medical Ward of Arba Minch General Hospital, Ethiopia
Authors: Debalke Dale, Bezabh Geneta, Yohannes Amene, Yordanos Bergene, Mohammed Yimam
Abstract:
Background: A drug therapy problem (DTP) is an event or circumstance involving drug therapy that actually or potentially interferes with the desired outcome and requires professional judgment to resolve. Heart failure is an emerging worldwide threat whose prevalence and health-loss burden constantly increase, especially in the young and in low-to-middle-income countries. Population-based studies of the incidence and prevalence of heart failure (HF) are lacking in sub-Saharan African countries, including Ethiopia. Objective: This study was designed to assess drug therapy problems and associated factors among patients with HF in the medical ward of Arba Minch General Hospital (AGH), Ethiopia, from June 5 to August 20, 2022. Methods: A retrospective cross-sectional study was conducted among 180 patients with HF who were admitted to the medical ward of AGH. Data were collected from patients' cards using questionnaires, then categorized and analyzed using SPSS version 25.0 software and presented in tables and text according to the nature of the data. Result: Out of the total, 85 (57.6%) were female, and 113 (75.3%) patients were aged over fifty years. Of the 150 study participants, 86 (57.3%) had at least one identified DTP, and a total of 116 DTPs were identified, or 0.77 DTPs per patient. The most common types of DTP were the need for additional drug therapy (36%), followed by unnecessary drug therapy (32%) and dose too low (15%). Patients on polypharmacy were 5.86 times more likely to develop DTPs than those who were not (AOR = 5.86, 95% CI = 1.625–16.536, P = 0.005), and patients with more co-morbid conditions were 3.68 times more likely to develop DTPs than those with fewer co-morbidities (AOR = 3.68, 95% CI = 1.28–10.5, P = 0.015). Conclusion: The results of this study indicate that drug therapy problems are common among medical ward patients with heart failure.
These problems adversely affect patients' treatment outcomes, so they require the special attention of healthcare professionals. Keywords: heart failure, drug therapy problems, Arba Minch General Hospital, Ethiopia
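For readers unfamiliar with the reported effect sizes, a crude odds ratio from a 2×2 table is the cross-product ratio. The counts below are hypothetical; the study's own estimates are adjusted ORs from a multivariable model, which a crude calculation like this does not reproduce.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Crude odds ratio for a 2x2 table: (a*d) / (b*c)."""
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical counts for polypharmacy vs. DTP occurrence (not the study's raw data):
# 40 polypharmacy patients with DTPs, 10 without; 46 non-polypharmacy with DTPs, 54 without.
print(round(odds_ratio(40, 10, 46, 54), 2))  # 4.7
```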
Procedia PDF Downloads 104
2971 NFResNet: Multi-Scale and U-Shaped Networks for Deblurring
Authors: Tanish Mittal, Preyansh Agrawal, Esha Pahwa, Aarya Makwana
Abstract:
Multi-scale and U-shaped networks are widely used in various image restoration problems, including deblurring. Keeping this wide range of applications in mind, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block, called NFResblock, which consists of a Fast Fourier Transform layer and a series of modified Non-linear Activation Free blocks. Based on these architectures and additions, we introduce NFResNet and NFResNet+, which are modified multi-scale and U-Net architectures, respectively. We use three different loss functions to train these architectures: Charbonnier loss, edge loss, and frequency reconstruction loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values. Keywords: multi-scale, UNet, deblurring, FFT, resblock, NAF-block, NFResNet, Charbonnier, edge, frequency reconstruction
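Two of the loss functions named here are easy to sketch in one dimension. This is a hedged illustration only: the paper applies these losses to images inside a training loop, and the naive DFT below is a small stand-in for the FFT used in practice.

```python
import math, cmath

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth L1 variant, mean of sqrt(diff^2 + eps^2)."""
    return sum(math.sqrt((p - t) ** 2 + eps ** 2) for p, t in zip(pred, target)) / len(pred)

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a real 1-D sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * i * k / n) for k in range(n))
            for i in range(n)]

def frequency_loss(pred, target):
    """L1 distance between DFT magnitudes: a 1-D stand-in for the
    frequency reconstruction loss computed in the FFT domain."""
    fp, ft = dft(pred), dft(target)
    return sum(abs(abs(a) - abs(b)) for a, b in zip(fp, ft)) / len(pred)

pred = target = [0.1, 0.4, 0.3, 0.2]
print(round(charbonnier(pred, target), 4))     # ≈ eps = 0.001 when pred == target
print(round(frequency_loss(pred, target), 6))  # 0.0 for identical signals
```

The eps term is what keeps the Charbonnier loss differentiable at zero, which is the reason it is preferred over plain L1 for restoration training.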
Procedia PDF Downloads 135
2970 Comparison of Blockchain Ecosystem for Identity Management
Authors: K. S. Suganya, R. Nedunchezhian
Abstract:
In recent years, blockchain technology has emerged as one of the most significant discoveries of the digital era, after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that records all of a user's transaction details in a block. A global copy of the block is shared among all peer-to-peer network users after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making the transaction trust-free. Blockchain also resolves the double-spending problem using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone of all its transactions, but in recent years it has also found uses in many fields, such as smart contracts, smart city management, and healthcare. Identity management against digital identity theft has become a major concern for financial and other organizations. To counter digital identity theft, blockchain technology can be combined with existing identity management systems to maintain a distributed public ledger containing details of an individual's identity (digital birth certificates, citizenship number, bank details, voter details, driving license) in the form of blocks that, once verified on the blockchain, become time-stamped, unforgeable, and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of the users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems along with their research scope.
This paper also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management. Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity
Procedia PDF Downloads 130
2969 Stability Analysis of Slopes during Pile Driving
Authors: Yeganeh Attari, Gudmund Reidar Eiksund, Hans Peter Jostad
Abstract:
In geotechnical practice, there is no industry-standard method to account for the reduction in the safety factor of a slope caused by soil displacement and pore pressure build-up during pile installation. Pile driving causes large strains and generates excess pore pressures in a zone that can extend many diameters from the installed pile, decreasing the shear strength of the surrounding soil. This phenomenon may cause slope failure. Moreover, dissipation of the installation-induced excess pore pressure may weaken areas outside the volume of soil remoulded during installation. Because of complex interactions between changes in mean stress and shearing, predicting the installation-induced pore pressure response is challenging, and following the rate and path of pore pressure dissipation in a slope stability analysis is a complex task. In cohesive soils, it is necessary to use soil models that account for strain softening in the analysis. Several cases of slope failure due to pile driving have been reported in the literature, for instance, a landslide in Gothenburg that destroyed more than thirty houses and the Rigaud landslide in Quebec, which resulted in loss of life. Several methods have been suggested to predict the effect of pile driving on total and effective stress, pore pressure changes, and their effect on soil strength, but the problem is still not well understood or agreed upon. In Norway, the general approaches applied by geotechnical engineers are based on old empirical methods with little rigorous theoretical background. While the limitations of such methods are discussed, this paper attempts to capture the reduction in the factor of safety of a slope during pile driving using coupled finite element analysis and the cavity expansion method.
This is demonstrated by analyzing a case of slope failure due to pile driving in Norway. Keywords: cavity expansion method, excess pore pressure, pile driving, slope failure
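A common closed-form idealisation behind cavity expansion estimates of driving-induced pore pressure is the undrained cylindrical solution: a plastic zone of radius rp = r0·sqrt(G/su) forms around the pile, with excess pore pressure of roughly 2·su·ln(rp/r) inside it and approximately zero beyond. The soil parameters below are assumed illustrative values, not data from the Norwegian case.

```python
import math

def excess_pore_pressure(r, r0, su, G):
    """Excess pore pressure at radius r around a driven pile of radius r0,
    from undrained cylindrical cavity expansion theory (an idealisation):
    plastic radius rp = r0*sqrt(G/su); du = 2*su*ln(rp/r) for r0 <= r <= rp,
    and du ~ 0 outside the plastic zone. su and G in kPa, radii in m."""
    if r < r0:
        raise ValueError("r must lie outside the pile radius")
    rp = r0 * math.sqrt(G / su)
    return 2.0 * su * math.log(rp / r) if r <= rp else 0.0

su, G, r0 = 30.0, 3000.0, 0.25   # assumed undrained strength, shear modulus, pile radius
# Plastic radius rp = 0.25 * sqrt(100) = 2.5 m.
print(round(excess_pore_pressure(0.25, r0, su, G), 1))  # at the shaft: 2*30*ln(10) ≈ 138.2 kPa
print(excess_pore_pressure(5.0, r0, su, G))             # 0.0 beyond the plastic zone
```

In a coupled FE analysis, such a radial pressure field would serve as the initial condition from which consolidation and the evolving factor of safety are computed.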
Procedia PDF Downloads 149
2968 A Picture Naming Study of European Portuguese-English Bilinguals on Cognates Switch Effects
Authors: Minghui Zou
Abstract:
This study investigates whether and how cognate status influences switching costs in bilingual language production. Two picture-naming tasks will be conducted in this proposed study, manipulating how cognates and non-cognates are presented, i.e., separately in two testing blocks vs. intermixed in a single testing block. Participants in each experiment will be 24 unbalanced bilingual speakers of L1 European Portuguese and L2 English. Stimuli will include pictures of 12 cognate nouns and 12 non-cognate nouns. It is hypothesized that there will be cognate switch facilitation effects among unbalanced bilinguals in both of their languages, whether stimuli are presented in two separate testing blocks or in one mixed testing block. Shorter reaction times and higher naming accuracy are expected in cognate switch trials than in non-cognate switch trials. Keywords: cognates, language switching costs, picture naming, European Portuguese, cognate facilitation effect
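Switching costs of the kind this design measures are typically computed as mean switch-trial reaction time minus mean repeat-trial reaction time, separately per cognate condition. The trial log below is invented to show the arithmetic; it is not data from the proposed experiments.

```python
from statistics import mean

# Toy trial log: (trial type, naming latency in ms). All values are invented.
TRIALS = [
    ("cognate_switch", 780), ("cognate_repeat", 700),
    ("cognate_switch", 760), ("cognate_repeat", 720),
    ("noncognate_switch", 900), ("noncognate_repeat", 750),
    ("noncognate_switch", 880), ("noncognate_repeat", 770),
]

def switch_cost(prefix):
    """Switch cost = mean RT on switch trials minus mean RT on repeat trials."""
    sw = mean(rt for t, rt in TRIALS if t == prefix + "_switch")
    rp = mean(rt for t, rt in TRIALS if t == prefix + "_repeat")
    return sw - rp

print(switch_cost("cognate"))     # 60: the smaller cost facilitation would predict
print(switch_cost("noncognate"))  # 130
```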
Procedia PDF Downloads 37