Search results for: random loading
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3558

2778 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques

Authors: Songul Cinaroglu

Abstract:

Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, after dividing them into two clusters according to similarities in input and output indicators. The numbers of beds, physicians, and nurses were chosen as input variables, and the numbers of outpatients, inpatients, and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs with higher population, demand, and service density than the others, and the difference between clusters was statistically significant for all study variables (p < 0.001). After clustering, DEA was performed both overall and for the two clusters separately. Overall, 11% of PHUs were efficient; within the first and second clusters, 21% and 17% were efficient, respectively. PHUs representing the urban parts of the country, with higher population and service density, are thus more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor in PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
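As a methodological aside, input-oriented CCR efficiency scores of the kind DEA produces can be computed by solving one small linear program per unit. The sketch below uses made-up single-input/single-output hospital data (the study's six indicators are not reproduced here) and SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical illustration: 3 hospital units, one input (beds) and one
# output (outpatients, thousands). Real DEA would use all six indicators.
X = np.array([[100.0], [200.0], [150.0]])  # inputs,  shape (n_units, n_inputs)
Y = np.array([[50.0], [80.0], [90.0]])     # outputs, shape (n_units, n_outputs)
n = len(X)

def ccr_efficiency(o):
    """Input-oriented CCR score for unit o:
    min theta  s.t.  sum_j l_j x_j <= theta * x_o,  sum_j l_j y_j >= y_o."""
    # decision variables: [theta, l_1 .. l_n]
    c = np.r_[1.0, np.zeros(n)]
    # input rows:  sum_j l_j x_j - theta * x_o <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    b_in = np.zeros(X.shape[1])
    # output rows: -sum_j l_j y_j <= -y_o
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

scores = [ccr_efficiency(o) for o in range(n)]
print([round(s, 3) for s in scores])  # unit 3 has the best output/input ratio
```

Extending to multiple inputs and outputs only adds rows to the two constraint blocks; the structure of the program is unchanged.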

Keywords: public hospital unions, efficiency, data envelopment analysis, random forest

Procedia PDF Downloads 116
2777 The Effect of Traffic Load on the Maximum Response of a Cable-Stayed Bridge under Blast Loads

Authors: S. K. Hashemi, M. A. Bradford, H. R. Valipour

Abstract:

The recent collapse of bridges has raised awareness about the safety and robustness of bridges subjected to extreme loading scenarios such as intentional or unintentional blast loads. The air blast generated by the explosion of bombs or fuel tankers produces high-magnitude, short-duration loading that can cause severe structural damage and the loss of critical structural members. Hence, more attention needs to be paid to bridge structures in order to develop guidelines that increase their resistance against probable blasts. Recent advancements in numerical methods have provided viable and cost-effective means to simulate complicated blast scenarios and subsequently provide useful references for the safeguarding design of critical infrastructure. In previous studies of common bridge responses to blast load, the traffic load is sometimes not included in the analysis. Including traffic load increases the axial compression in bridge piers, especially when the axial load is relatively small, and it can also reduce the uplift of girders and deck when the bridge experiences an under-deck explosion. For more complicated structures like cable-stayed or suspension bridges, however, the effect of traffic loads can be completely different: the tension in the cables increases, and progressive collapse is more likely while traffic loads are present. Accordingly, this study simulates the effect of traffic load cases on the maximum local and global response of an entire cable-stayed bridge subjected to blast loading, using the LS-DYNA explicit finite element code. The blast loads range from small to large explosions placed at different positions above the deck. Furthermore, the variation of the traffic load factor in the load combination and its effect on the dynamic response of the bridge under blast load is investigated.
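High-magnitude, short-duration blast loads of this kind are often idealized in preliminary work by the modified Friedlander overpressure history; the peak pressure, positive-phase duration, and decay coefficient below are purely illustrative and are not taken from the study:

```python
import math

def friedlander(t, p0, td, b):
    """Modified Friedlander overpressure at time t (t = 0 is shock arrival):
    p(t) = p0 * (1 - t/td) * exp(-b*t/td) over the positive phase [0, td]."""
    if t < 0:
        return 0.0
    return p0 * (1.0 - t / td) * math.exp(-b * t / td)

# Hypothetical pulse: 500 kPa peak, 8 ms positive phase, decay coefficient 1.5
p0, td, b = 500.0, 0.008, 1.5
dt = 1e-5
ts = [i * dt for i in range(round(td / dt) + 1)]
ps = [friedlander(t, p0, td, b) for t in ts]
# positive-phase impulse (trapezoidal integration), a key damage parameter
impulse = sum(0.5 * (ps[i] + ps[i + 1]) * dt for i in range(len(ps) - 1))
print("peak %.1f kPa, impulse %.3f kPa*s" % (ps[0], impulse))
```

In an explicit FE analysis such as LS-DYNA's, a history like this is applied as a pressure-time curve on the loaded surfaces, alongside the gravity and traffic loads in the load combination.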

Keywords: blast, cable-stayed bridge, LS-DYNA, numerical, traffic load

Procedia PDF Downloads 322
2776 Synthesis of a Hybrid of PEG-b-PCL and G1-PEA Dendrimer Based Six-Armed Star Polymer for Nano Delivery of Vancomycin

Authors: Calvin A. Omolo, Rahul S. Kalhapure, Mahantesh Jadhav, Sanjeev Rambharose, Chunderika Mocktar, Thirumala Govender

Abstract:

Treatment of infections is compromised by the limitations of conventional dosage forms and by drug resistance. Nanocarrier systems are a strategy to overcome these challenges and improve therapy, so the development of novel materials for drug delivery via nanocarriers is essential. The aim of the study was to synthesize a multi-arm polymer (6-mPEPEA) for enhanced activity of vancomycin (VM) against susceptible and resistant Staphylococcus aureus (MRSA). The synthesis of the star polymer followed reported procedures. The synthesized 6-mPEPEA was characterized by FTIR, ¹H and ¹³C NMR, and MTT assays. VM-loaded micelles were prepared from 6-mPEPEA and characterized for size, polydispersity index (PI), and surface charge (ZP) by Dynamic Light Scattering, morphology by TEM, drug loading by UV spectrophotometry, drug release by the dialysis bag method, and in vitro and in vivo efficacy against sensitive and resistant S. aureus. 6-mPEPEA was synthesized, and its structure was confirmed. MTT assays confirmed its nontoxic nature, with high cell viability (77%-85%). Unimolecular spherical micelles were prepared. Size, PI, and ZP were 52.48 ± 2.6 nm, 0.103 ± 0.047, and -7.3 ± 1.3 mV, respectively, and drug loading was 62.24 ± 3.8%. There was 91% drug release from VM-6-mPEPEA after 72 hours. In vitro antibacterial tests revealed that VM-6-mPEPEA had 8- and 16-fold greater activity against S. aureus and MRSA, respectively, when compared to bare VM. Further investigation using flow cytometry showed that VM-6-mPEPEA had a 99.5% killing rate of MRSA at the MIC concentration. In vivo antibacterial activity revealed a 190- and a 15-fold reduction in MRSA load with VM-6-mPEPEA treatment relative to the untreated and VM-treated groups, respectively. These findings confirm the potential of 6-mPEPEA as a promising biodegradable nanocarrier for antibiotic delivery to improve the treatment of bacterial infections.

Keywords: biosafe, MRSA, nanocarrier, resistance, unimolecular-micelles

Procedia PDF Downloads 174
2775 Evolution of Predator-prey Body-size Ratio: Spatial Dimensions of Foraging Space

Authors: Xin Chen

Abstract:

It has been widely observed that marine food webs have significantly larger predator-prey body-size ratios than their terrestrial counterparts. A number of hypotheses have been proposed to account for this difference on the basis of primary productivity, trophic structure, biophysics, bioenergetics, habitat features, energy efficiency, etc. In this study, an alternative explanation is suggested based on the difference in the spatial dimensions of foraging arenas: terrestrial animals primarily forage in two-dimensional arenas, while marine animals mostly forage in three-dimensional arenas. Using two-dimensional and three-dimensional random walk simulations, it is shown that marine predators foraging in three dimensions would normally have greater foraging efficiency than terrestrial predators foraging in two. Marine prey dispersed in three dimensions also usually forms greater swarms or aggregations than terrestrial prey dispersed in two dimensions, which again favours greater predator foraging efficiency in marine animals. As an analytical tool, a Lotka-Volterra-based adaptive dynamical model is developed with the predator-prey ratio embedded as an adaptive variable. The model predicts that high predator foraging efficiency and high prey conversion rate will dynamically lead to the evolution of a greater predator-prey ratio. Therefore, marine food webs with three-dimensional foraging space, which generally have higher predator foraging efficiency, will evolve a greater predator-prey ratio than terrestrial food webs.
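A random-walk foraging comparison of this kind can be sketched as follows. This is a minimal toy with assumed arena size, step length, and prey density, meant to illustrate the simulation setup rather than reproduce the study's quantitative result:

```python
import numpy as np

rng = np.random.default_rng(0)

def encounters(dim, n_prey=200, n_steps=5000, box=20.0, radius=1.0):
    """Count how often a randomly walking predator passes within `radius`
    of stationary prey scattered in a periodic arena of side `box`."""
    prey = rng.uniform(0.0, box, size=(n_prey, dim))
    pos = np.full(dim, box / 2.0)
    hits = 0
    for _ in range(n_steps):
        pos = (pos + rng.normal(0.0, 0.5, size=dim)) % box  # periodic walk
        hits += int(np.min(np.linalg.norm(prey - pos, axis=1)) < radius)
    return hits

h2, h3 = encounters(2), encounters(3)
print("2-D encounters:", h2, " 3-D encounters:", h3)
```

Note that encounter rates in such a toy depend strongly on the assumed prey density per unit area or volume and on the detection radius, which is precisely the kind of parameter the full study would vary.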

Keywords: predator-prey, body size, lotka-volterra, random walk, foraging efficiency

Procedia PDF Downloads 66
2774 Analysis of Seismic Waves Generated by Blasting Operations and their Response on Buildings

Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo

Abstract:

The paper analyzes the response of buildings and industrial structures to seismic waves (low-frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation, such as earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating machines and/or machines with reciprocating motion, metal forming processes such as forging, shearing and stamping, chemical reactions, construction and earth moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to seismic, low-frequency mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure; it is important to determine the resonant frequencies in order to design suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (primary P-waves and secondary S-waves) and investigates the transfer regions. For the detection of seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis in the frequency domain are used, and the signal was also acquired and analyzed in the time domain. In the conclusions, the measured results of seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed. The response of the house, including its fundamental natural frequency and possible fatigue damage, is also assessed.
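The FFT step for picking a fundamental frequency out of a vibration record can be illustrated with a synthetic signal; the 12 Hz resonance, the 30 Hz overtone, and the sampling rate below are assumptions, not measured values from the study:

```python
import numpy as np

# Synthetic "velocity" record: a 12 Hz structural resonance plus a weaker
# 30 Hz component, sampled at 256 Hz for 4 s (values are illustrative only)
fs = 256.0
t = np.arange(0.0, 4.0, 1.0 / fs)            # 1024 samples
signal = 1.0 * np.sin(2 * np.pi * 12.0 * t) + 0.3 * np.sin(2 * np.pi * 30.0 * t)

# FFT-based amplitude spectrum: the dominant peak estimates the
# fundamental natural frequency of the measured structure
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
f_peak = freqs[np.argmax(spectrum)]
print("dominant frequency: %.2f Hz" % f_peak)
```

With a real blast record, windowing and averaging over several shots would precede the peak-picking, but the spectral estimate itself is the same operation.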

Keywords: building structure, seismic waves, spectral analysis, structural response

Procedia PDF Downloads 390
2773 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of force that can be applied to a specimen without causing permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture change depending on this condition. Because it is difficult to accurately detect yield points at the several stress concentration points in structural steel specimens, an effort has been made in this research to develop a convenient thermography-based (temperature-based) technique for the precise detection of yield point initiation during tensile tests. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, with deformation measured by various strain gauges and surface temperature monitored by the thermography camera. The yield point of the specimens was estimated with the help of the temperature dip that occurs, due to the thermoelastic effect, at the onset of plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of ambient temperature imperfection and the light source were checked by carrying out the tests in the daytime as well as at midnight and by calculating the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera; from this it can be concluded that the camera is independent of testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
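The SNR computation used to assess the camera data can be sketched on a synthetic temperature trace; the trend shape and noise level below are assumed for illustration, not taken from the experiments:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative check: a slow thermoelastic temperature trend with additive
# sensor noise, as might come from an infrared thermography camera
t = np.linspace(0.0, 10.0, 2000)
clean = 0.2 * np.sin(2 * np.pi * 0.1 * t)      # degC, hypothetical trend
noise = rng.normal(0.0, 0.02, size=t.size)     # camera noise, sigma 0.02 degC
measured = clean + noise

# SNR in dB from the known clean/noise split (known exactly here; in the
# experiments the noise floor is estimated from the recorded data)
snr_db = 10.0 * np.log10(np.mean(clean**2) / np.mean(noise**2))
print("SNR: %.1f dB" % snr_db)
```

Comparing such SNR values for day and night recordings is the kind of check that supports the conclusion that the camera is insensitive to visible lighting.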

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

Procedia PDF Downloads 91
2772 Determination of Klebsiella Pneumoniae Susceptibility to Antibiotics Using Infrared Spectroscopy and Machine Learning Algorithms

Authors: Manal Suleiman, George Abu-Aqil, Uraib Sharaha, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman, Mahmoud Huleihel

Abstract:

Klebsiella pneumoniae is one of the most aggressive multidrug-resistant bacteria associated with human infections, resulting in high mortality and morbidity. Thus, for effective treatment, it is important to diagnose both the species of the infecting bacteria and their susceptibility to antibiotics. Currently used methods for diagnosing bacterial susceptibility to antibiotics are time-consuming (about 24 h following the first culture), so there is a clear need for rapid methods of determining susceptibility. Infrared spectroscopy is a well-known, sensitive, and simple method able to detect minor biomolecular changes in biological samples associated with developing abnormalities. The main goal of this study is to evaluate the potential of infrared spectroscopy in tandem with the Random Forest and XGBoost machine learning algorithms to diagnose the susceptibility of Klebsiella pneumoniae to antibiotics within approximately 20 minutes following the first culture. In this study, 1190 Klebsiella pneumoniae isolates were obtained from different patients with urinary tract infections. The isolates were measured with an infrared spectrometer, and the spectra were analyzed by the Random Forest and XGBoost machine learning algorithms to determine their susceptibility to nine specific antibiotics. Our results confirm that it was possible to classify the isolates as sensitive or resistant to specific antibiotics with success rates in the range of 80%-85% for the different tested antibiotics. These results demonstrate the promising potential of infrared spectroscopy as a powerful diagnostic method for determining the susceptibility of Klebsiella pneumoniae to antibiotics.
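The classification workflow can be sketched as follows, with synthetic "spectra" standing in for the real infrared measurements; the band location, sample counts, and Random Forest settings are all assumptions made for the sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

# Synthetic "IR spectra": 200 isolates x 300 wavenumber bins. The resistant
# class carries a small extra absorbance band around bins 120-140.
n, bins = 200, 300
X = rng.normal(0.0, 1.0, size=(n, bins))
y = rng.integers(0, 2, size=n)                 # 0 = sensitive, 1 = resistant
band = np.zeros(bins)
band[120:140] = 0.8
X[y == 1] += band                              # class-specific spectral shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print("test accuracy: %.2f" % acc)
```

In the real pipeline, one such classifier would be trained per antibiotic, with the measured spectra in place of the synthetic matrix.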

Keywords: urinary tract infection (UTI), Klebsiella pneumoniae, bacterial susceptibility, infrared spectroscopy, machine learning

Procedia PDF Downloads 153
2771 The Role of Urban Development Patterns for Mitigating Extreme Urban Heat: The Case Study of Doha, Qatar

Authors: Yasuyo Makido, Vivek Shandas, David J. Sailor, M. Salim Ferwati

Abstract:

Mitigating extreme urban heat is challenging in a desert climate such as that of Doha, Qatar, since outdoor daytime temperatures are often too high for the human body to tolerate. Recent studies demonstrate that cities in arid and semiarid areas can exhibit ‘urban cool islands’ - urban areas that are cooler than the surrounding desert. However, how temperatures vary with the time of day, and which factors drive temperature change, remain open questions. To address these questions, we examined the spatial and temporal variation of air temperature in Doha, Qatar by conducting multiple vehicle-based local temperature observations. We also employed three statistical approaches to model surface temperatures using relevant predictors: (1) Ordinary Least Squares, (2) Regression Tree Analysis, and (3) Random Forest, for three time periods. Although the most important determinant factors varied by day and time, distance to the coast was the significant determinant at midday. A 70%/30% holdout method was used to create a testing dataset for validating the results through Pearson’s correlation coefficient. The Pearson analysis suggests that the Random Forest model predicts the surface temperatures more accurately than the other methods. We conclude with recommendations about the types of development patterns that show the greatest potential for reducing extreme heat in arid climates.
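The three-model comparison with a 70%/30% holdout and Pearson validation can be sketched on synthetic data; the predictors and coefficients below are invented stand-ins for the Doha measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Invented stand-ins for the predictors (e.g., distance to coast,
# vegetation, built fraction) and for afternoon temperature in degC
n = 500
X = rng.uniform(0.0, 1.0, size=(n, 3))
temp = 45.0 - 4.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0.0, 0.5, n)

# 70%/30% holdout, validated with Pearson's correlation coefficient
X_tr, X_te, y_tr, y_te = train_test_split(X, temp, test_size=0.3,
                                          random_state=0)
models = {"OLS": LinearRegression(),
          "Tree": DecisionTreeRegressor(random_state=0),
          "RF": RandomForestRegressor(n_estimators=200, random_state=0)}
pearson = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    pearson[name] = np.corrcoef(y_te, pred)[0, 1]  # Pearson's r on held-out data
    print("%-4s Pearson r = %.3f" % (name, pearson[name]))
```

On data with a strong linear component, as here, OLS and Random Forest both score highly; the study's finding is that on the real Doha data the Random Forest generalized best.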

Keywords: desert cities, tree-structure regression model, urban cool Island, vehicle temperature traverse

Procedia PDF Downloads 382
2770 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle

Authors: Hu Ding, Kai Liu, Guoan Tang

Abstract:

The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Gully feature detection, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study addresses gully-affected area detection in three catchments of the Chinese Loess Plateau, located in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology comprises a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influences of the segmentation strategy and of feature selection. Results showed that vertical and horizontal root-mean-square errors were below 0.5 m and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which incorporates topographic information, together with an optimal parameter combination, can improve the segmentation results. Moreover, the overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting gully-affected areas is more objective and effective than traditional methods. This study demonstrates that UAVs can bridge the gap between field measurement and satellite-based remote sensing, striking a balance between resolution and efficiency for catchment-scale gully erosion research.

Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest

Procedia PDF Downloads 203
2769 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion in the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource sharing architecture. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and the provision of computing environments. From a security viewpoint, however, clouds introduce an added level of risk, weakening protection mechanisms and complicating the assurance of privacy, data security, and on-demand service. Issues of trust, confidentiality, and integrity are elevated due to the multitenant resource sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability to provide the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only authorized parties access private data; integrity protects the data from being fabricated by an unauthorized user. In order to assure the provision of a secured cloud, a roadmap or model is therefore needed to analyze security problems, design mitigation strategies, and evaluate solutions. The aim of the paper is twofold: first, to highlight the factors that make cloud security critical, along with alleviation strategies; and second, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier with an accuracy of 99.8%. This model uses a reduced number of features, and a comparison with other classifiers is also presented.

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 311
2768 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business in the telecom industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, so it has become more important to understand customers’ needs in this strong market, especially for customers who are looking to change their service providers. Churn prediction is therefore a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish it. Churn prediction has become a very important machine learning classification topic in the telecommunications industry, and understanding the factors of customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify factors of customer churn based on past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. It then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. The results compare favorably with existing models: Gradient Boosting with the feature selection technique outperformed the others, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
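A sketch of the described pipeline, normalization then feature selection then Gradient Boosting evaluated by F1 and ROC-AUC, on a synthetic churn-like dataset; the feature counts and model settings are assumptions, and the real Orange features are not reproduced:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

# Synthetic imbalanced dataset standing in for the Orange churn data
X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

# normalization -> univariate feature selection -> Gradient Boosting
pipe = Pipeline([("scale", StandardScaler()),
                 ("select", SelectKBest(f_classif, k=12)),
                 ("gb", GradientBoostingClassifier(random_state=0))])
pipe.fit(X_tr, y_tr)
proba = pipe.predict_proba(X_te)[:, 1]
f1 = f1_score(y_te, pipe.predict(X_te))
auc = roc_auc_score(y_te, proba)
print("F1 = %.3f  ROC-AUC = %.3f" % (f1, auc))
```

Keeping the scaler and selector inside the `Pipeline` ensures both are fit only on the training fold, which matters for honest F1/AUC estimates.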

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 124
2767 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yields in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but has strict requirements regarding the dataset. A second contribution of the paper is the comparison of the model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at the county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible data offers several side perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
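The 5-fold cross-validation comparison with RMSEP and MAEP can be sketched as follows; the synthetic climate covariates and model settings are assumptions, not the USDA records or the paper's tuned models:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(3)

# Synthetic stand-in for county yield records: a few climate covariates
# driving corn yield (t/ha), including one interaction term, plus noise
n = 300
X = rng.normal(0.0, 1.0, size=(n, 5))
y = (9.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3]
     + rng.normal(0.0, 0.4, n))

cv = KFold(n_splits=5, shuffle=True, random_state=0)  # 5-fold CV
models = {"Ridge": Ridge(), "Lasso": Lasso(alpha=0.01),
          "kNN": KNeighborsRegressor(),
          "RF": RandomForestRegressor(n_estimators=200, random_state=0)}
scores = {}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)      # out-of-fold predictions
    rmsep = mean_squared_error(y, pred) ** 0.5        # RMSE of prediction
    maep = mean_absolute_error(y, pred)               # MAE of prediction
    scores[name] = (rmsep, maep)
    print("%-5s RMSEP=%.3f  MAEP=%.3f" % (name, rmsep, maep))
```

Using out-of-fold predictions for both metrics keeps the comparison fair across model families, which is the point of the paper's evaluation design.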

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 221
2766 Centrifuge Modelling Approach on Seismic Loading Analysis of Clay: A Geotechnical Study

Authors: Anthony Quansah, Tresor Ntaryamira, Shula Mushota

Abstract:

Models for geotechnical centrifuge testing are usually made from re-formed soil, allowing comparisons with naturally occurring soil deposits. However, there is a fundamental omission in this process, because natural soil is deposited in layers, creating a unique structure. The nonlinear dynamics of clay deposits are an essential part of how ground motions change under strong seismic loading, particularly when the diverse amplification behavior of acceleration and displacement is considered. The paper presents a review of centrifuge shaking table tests and numerical simulations exploring offshore clay deposits subjected to seismic loading. These observations are accurately reproduced in DEEPSOIL with appropriate soil models and parameters drawn from notable centrifuge modeling studies. Accurate 1-D site response analyses are then performed in both the time and frequency domains. The outcomes reveal that when a deep soft clay deposit is subjected to large earthquakes, significant acceleration attenuation may occur near the top of the deposit due to soil nonlinearity and even local shear failure; however, large amplification of displacement at low frequencies is expected regardless of the intensity of the base motions, which suggests that for displacement-sensitive offshore foundations and structures, such amplified low-frequency displacement response will play an essential part in seismic design. This research shows the centrifuge to be a tool for creating the layered samples that are important for modelling true soil behaviour (such as permeability) which is not identical in all directions; currently, there are few methods for creating layered soil samples.

Keywords: seismic analysis, layered modeling, terotechnology, finite element modeling

Procedia PDF Downloads 144
2765 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors

Authors: Sudhir Kumar Singh, Debashish Chakravarty

Abstract:

Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to the loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely the degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized to make accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases were studied by analyzing slope stability with the popular Finite Element Method, and the data thus obtained was used as training data for the supervised machine learning models. The input data was trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data was used to measure the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out due to its high accuracy of greater than 95%, providing a valuable tool at our disposal that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.

Keywords: finite element method, geotechnical engineering, machine learning, slope stability

Procedia PDF Downloads 91
2764 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos which differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see whether the rich spatiotemporal information embedded in the data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology—We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the Embryoscope 100% of the time for embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings—The performance of the model was evaluated using 100 random subsampling cross-validations (train 80% - test 20%). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which the labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy.
Conclusion—The high accuracy of the main analysis and the low accuracy of the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcome, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample, with complementary analyses predicting other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
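The tensor-feature idea can be sketched with a mode-1 unfolding and truncated SVD standing in for the (unspecified) tensor decomposition; the tensor dimensions, the planted spatiotemporal pattern, and the nearest-centroid classifier below are all synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 4th-order tensor: 60 embryos x 20 time points x 8 x 8 "image" grid.
# Class 1 embryos carry a subtle extra spatiotemporal pattern.
n, T, H, W = 60, 20, 8, 8
labels = np.repeat([0, 1], n // 2)
tensor = rng.normal(0.0, 1.0, size=(n, T, H, W))
pattern = np.outer(np.sin(np.linspace(0, np.pi, T)),
                   rng.normal(0.0, 1.0, H * W)).reshape(T, H, W)
tensor[labels == 1] += 0.4 * pattern

# Mode-1 unfolding + truncated SVD as a simple stand-in for tensor
# decomposition; the leading components act as per-embryo features
unfolded = tensor.reshape(n, -1)
U, s, Vt = np.linalg.svd(unfolded - unfolded.mean(0), full_matrices=False)
feats = U[:, :5] * s[:5]

# Leave-one-out nearest-centroid classification on the features
correct = 0
for i in range(n):
    mask = np.arange(n) != i
    c0 = feats[mask & (labels == 0)].mean(0)
    c1 = feats[mask & (labels == 1)].mean(0)
    pred = int(np.linalg.norm(feats[i] - c1) < np.linalg.norm(feats[i] - c0))
    correct += int(pred == labels[i])
print("LOO accuracy: %.2f" % (correct / n))
```

The planted pattern here is deliberately strong so the toy separates cleanly; the study's point is that real time-lapse tensors appear to carry an analogous, far subtler signal.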

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 85
2763 Experimental and Theoretical Study on Flexural Behaviors of Reinforced Concrete Cement (RCC) Beams by Using Carbon Fiber Reinforced Polymer (CFRP) Laminate as Retrofitting and Rehabilitation Method

Authors: Fils Olivier Kamanzi

Abstract:

This research paper shows that CFRP materials were used to rehabilitate 9 beams and retrofit 9 beams, each of size (125x250x2300) mm, cast from M50 grade concrete with 20% of the cement volume replaced by GGBS as a mineral admixture. A superplasticizer (Fosroc Conplast SP430) was used to reduce the water-cement ratio while maintaining good workability of the fresh concrete (slump 57 mm). The concrete mix ratio was 1:1.56:2.66 with a water-cement ratio of 0.31 (per ACI code). Samples of 6 cubes sized (150x150x150) mm, 6 cylinders sized (150Фx300H) mm, and 6 prisms sized (100x100x500) mm were cast, cured, and tested at 7, 14, and 28 days by compressive, tensile, and flexure tests; the mix design reached a compressive strength of 59.84 N/mm2. 21 beams were cast and cured for up to 28 days; 3 beams were tested under a two-point loading machine as control beams. 9 beams were distressed in flexure under two-point loading up to the yielding point, taken as 90% of the ultimate load. Three sets, each of three distressed beams, were rehabilitated using one, two, and three layers of CFRP sheets, respectively, and then retested up to failure. Another three sets were freshly retrofitted, also with one, two, and three layers of CFRP sheets, respectively, and tested by the two-point load method on a compression strength testing machine. The aim of this study is to determine the flexural strength and behavior of beams repaired and retrofitted with CFRP sheets, for good strength gain at reasonable cost. The results show that the rehabilitated beams increased in strength by 47%, 78%, and 89%, respectively, with the number of CFRP layers, and the retrofitted beams by 41%, 51%, and 68%, respectively. The conclusion is that three layers of CFRP sheets are the most effective for the bonded repair and retrofitting method.

Keywords: retrofitting, rehabilitation, CFRP, RCC beam, flexural strength and behavior, GGBS, epoxy resin

Procedia PDF Downloads 88
2762 Different Orientations of Shape Memory Alloy Wire in Automotive Sector Product

Authors: Srishti Bhatt, Vaibhav Bhavsar, Adil Hussain, Aashay Mhaske, S. C. Bali, T. S. Srikanth

Abstract:

Shape memory alloys (SMAs) are widely known for their unique shape-recovery properties. SMA-based actuation systems have a high force-to-weight ratio, are lightweight, and the material is biocompatible, which is why they are used in aerospace, robotics, automotive and biomedical applications. In the automotive industry, however, plenty of patents are available but commercially viable products in the market are few. This may be due to SMA material limitations such as small stroke, the direct dependence of life cycle on stroke and pull load of the wire, and high cycle time. In automotive applications an SMA actuator is required to deliver a large stroke, and the constraint of accommodating a long wire (limited to a maximum of about 4% strain for acceptable fatigue life) not only increases complexity but also adds cost. More than 200 different actuators are used in an automobile; the efficiency of several could be greatly increased by replacing them with SMA-based actuators, including the latch lock mechanism, glove box, headlamp leveling, side- and rear-mirror leveling, tailgate opener and fuel lid actuator. To overcome the limited space available for the required actuator stroke, this work studies the effect of different loading positions on SMA wires and of different wire orientations, using pulleys and lever-based systems, to achieve maximum stroke. The investigation shows that under the V-shaped orientation the required stroke and load-carrying capacity are obtained in a more compact package than with a straight wire. Similarly, the U-shaped orientation shows higher load-carrying capacity but reduced stroke, in line with the bundled-wire concept. The life cycles of these orientations were also evaluated.
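The stroke gain from routing the wire in a shallow V can be illustrated with simple geometry: a wire of half-length L anchored across a fixed half-span a has apex height h = √(L² − a²), and a small axial contraction produces a much larger apex travel when the V is shallow. A minimal sketch, with all dimensions purely illustrative (they are not the tested actuator's values):

```python
import math

def v_stroke(half_length, half_span, strain):
    """Apex travel of a V-routed wire anchored at both ends (lengths in mm)."""
    h0 = math.sqrt(half_length**2 - half_span**2)   # initial apex height
    lc = half_length * (1.0 - strain)               # contracted half-length
    return h0 - math.sqrt(lc**2 - half_span**2)     # apex displacement

# A 200 mm wire pulled straight recovers only 4 % of its length:
straight_stroke = 0.04 * 200.0                      # 8 mm
# The same wire folded into a shallow V (half-span 95 mm) amplifies the travel:
v = v_stroke(100.0, 95.0, 0.04)                     # roughly double the straight stroke
```

In general such kinematic amplification trades output force for stroke, which is consistent with the V (more stroke) versus U (more load, less stroke) comparison reported above.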

Keywords: actuators, automotive, nitinol, shape memory alloy, SMA wire orientations

Procedia PDF Downloads 76
2761 Analysis of Lift Arm Failure and Its Improvement for the Use in Farm Tractor

Authors: Japinder Wadhawan, Pradeep Rajan, Alok K. Saran, Navdeep S. Sidhu, Daanvir K. Dhir

Abstract:

Currently, research in the development of agricultural equipment and tractor parts in India focuses on innovation and the use of alternative materials such as austempered ductile iron (ADI). The three-point linkage mechanism of a tractor is subjected to unpredictable load conditions in the field, and one of the critical components vulnerable to failure is the lift arm. Conventionally, the lift arm is manufactured either by forging or by casting (SG iron), and the main objective of the present work is to reduce failure occurrences in the lift arm by changing the material to ADI without changing the existing design. The effect of four pertinent ADI processing variables, viz. austenitizing temperature, austenitizing time, austempering temperature and austempering time, was investigated using the Taguchi method for design of experiments. To analyze the effect of the parameters on the mechanical properties, means and signal-to-noise (S/N) ratios were calculated from a design of experiments with an L9 orthogonal array and the linear graph. The best combination for achieving the desired mechanical properties of the lift arm is austenitization at 860°C for 90 minutes and austempering at 350°C for 60 minutes. The developed component has a tensile strength of 925 MPa, 7.8% elongation and a toughness of 120 J, making ADI a more suitable material for lift arm manufacturing. A confirmatory experiment was performed and showed good agreement between predicted and experimental values. In addition, a CAD model of the existing design was developed in computer-aided design software, and structural loading calculations were performed with a commercial finite element analysis package. An optimized shape of the lift arm is also proposed, resulting in a lighter and cheaper product than the existing design that can withstand the same loading conditions effectively.
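For a larger-the-better response such as tensile strength, the Taguchi S/N ratio used to pick the best parameter combination is S/N = −10·log₁₀((1/n)·Σ 1/yᵢ²). A minimal sketch with made-up replicate data (the study's actual L9 responses are not reproduced here):

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio for a larger-the-better characteristic."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / y**2 for y in values) / n)

# Hypothetical tensile-strength replicates (MPa) for two L9 runs:
run_a = [902.0, 915.0, 925.0]
run_b = [860.0, 845.0, 850.0]
best = max((run_a, run_b), key=sn_larger_is_better)   # run with the higher S/N wins
```

The parameter levels of the run with the highest S/N ratio (here `run_a`) would be selected; the same formula is applied per factor level when averaging over the orthogonal array.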

Keywords: austempered ductile iron, design of experiment, finite element analysis, lift arm

Procedia PDF Downloads 223
2760 Rock-Bed Thermocline Storage: A Numerical Analysis of Granular Bed Behavior and Interaction with Storage Tank

Authors: Nahia H. Sassine, Frédéric-Victor Donzé, Arnaud Bruch, Barthélemy Harthong

Abstract:

Thermal energy storage (TES) systems are central elements of various types of power plants operated using renewable energy sources. Packed-bed TES can be considered a cost-effective solution in concentrated solar power (CSP) plants. Such a device is made up of a tank filled with a granular bed through which a heat-transfer fluid circulates. However, in such devices the tank may be subjected to catastrophic failure induced by a mechanical phenomenon known as thermal ratcheting: thermal stresses accumulate over cycles of loading and unloading until failure occurs. For instance, when rocks are used as the storage material, the tank wall expands more than the solid medium during the charge process, a gap is created between the rocks and the tank wall, and the filler material settles down to fill it. During discharge, the tank contracts against the bed, resulting in thermal stresses that may exceed the yield stress of the tank wall and generate plastic deformation. This phenomenon is repeated over the cycles, and the tank is slowly ratcheted outward until it fails. This paper aims at studying the evolution of tank wall stresses over granular-bed thermal cycles, taking into account both thermal and mechanical loads, with a numerical model based on the discrete element method (DEM). Simulations were performed for two thermal configurations: (i) the tank is heated homogeneously along its height, or (ii) with a vertical gradient of temperature. The resulting stresses applied to the tank are then compared, as well as the response of the internal granular material. Besides the influence of the thermal configuration on the storage tank response, other parameters are varied, such as the internal friction angle of the granular material, the dispersion of particle diameters and the tank dimensions, and their influence on the kinematics of the granular bed submitted to thermal cycles is highlighted.
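The ratcheting mechanism described above can be caricatured in one dimension: on each heating phase the wall expands more than the bed by the differential strain Δα·ΔT, the filler settles into the resulting gap, and on cooling the wall must squeeze the settled bed, picking up a hoop-stress increment of roughly E·Δα·ΔT per cycle until yield. This worst-case sketch (full settling every cycle, elastic-perfectly-plastic wall, all material values assumed) is only a motivation for the DEM study, not its model:

```python
# Toy 1-D ratcheting estimate -- assumed, illustrative values only.
E       = 200e9    # wall Young's modulus, Pa (steel, assumed)
alpha_w = 12e-6    # wall thermal expansion, 1/K (assumed)
alpha_b = 8e-6     # rock-bed effective expansion, 1/K (assumed)
dT      = 200.0    # temperature swing per cycle, K (assumed)
yield_s = 250e6    # wall yield stress, Pa (assumed)

stress, history = 0.0, []
for cycle in range(10):
    # Worst case: the whole differential-expansion gap is filled by settling,
    # so cooling imposes the full mismatch strain on the wall each cycle.
    stress += E * (alpha_w - alpha_b) * dT
    history.append(min(stress, yield_s))   # stress capped at yield (plastic flow)
    if stress >= yield_s:
        break
cycles_to_yield = len(history)
```

Even this crude estimate shows how quickly the accumulated stress can reach yield, which is why the paper resolves the actual gap formation and settling with a discrete element model.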

Keywords: discrete element method (DEM), thermal cycles, thermal energy storage, thermocline

Procedia PDF Downloads 395
2759 Asia Pacific University of Technology and Innovation

Authors: Esther O. Adebitan, Florence Oyelade

Abstract:

The Millennium Development Goals (MDGs) were initiated by the UN member nations' aspiration for the betterment of human life, expressed in a set of numerical and time-bound targets. More recently, the aspiration has been shifting away from mere achievement towards the sustainability of the achieved MDGs beyond the 2015 target. The main objective of this study was to assess how much the hotel industry within the Nigerian Federal Capital Territory (FCT), as a member of the global community, is involved in the achievement of sustainable MDGs within the FCT. The study had two population groups consisting of 160 hotels and the communities where these are located. A stratified random sampling technique was adopted to select 60 hotels based on a large, medium and small hotel categorisation, while a simple random sampling technique was used to elicit information from 30 residents of three of the hotels' host communities. The study was guided by three research questions and two hypotheses, aimed at ascertaining whether hotels see the need to be involved in, and have policies in pursuit of, achieving sustained MDGs, and at determining public opinion regarding hotels' contribution towards the achievement of the MDGs in their communities. A 22-item questionnaire was designed and administered to hotel managers, while an 11-item questionnaire was designed and administered to the hotels' host communities. Frequency distributions and percentages as well as Chi-square tests were used to analyse the data. Results showed no significant involvement of the hotel industry in achieving sustained MDGs in the FCT and a disconnect between the hotels and their immediate communities. The study recommended that hotels should, as part of their corporate social responsibility, pick at least one of the goals to work on in order to be involved in the attainment of enduring Millennium Development Goals.

Keywords: MDGs, hotels, FCT, host communities, corporate social responsibility

Procedia PDF Downloads 406
2758 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of PLS2 regression and multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading-weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has norm equal to 1. This new definition entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear with the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists of seeking q (and therefore u and t) so that the covariance between t and u is maximal. The solution to this problem is straightforward and consists of setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher-order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k = 1, ..., K), namely tk = XkXk'YY'q.
Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with the tk (k = 1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and the Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
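The first Rd-PLS component described above is easy to verify numerically: with centered X and Y, take q as the leading eigenvector of YY'XX'YY', then form u = YY'q and t = XX'YY'q. A small NumPy sketch on random data (not the paper's case studies):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 30, 6, 4
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)   # centered predictors
Y = rng.standard_normal((n, m)); Y -= Y.mean(axis=0)   # centered responses

M = Y @ Y.T @ X @ X.T @ Y @ Y.T        # symmetric, positive semi-definite
eigvals, eigvecs = np.linalg.eigh(M)   # eigh returns ascending eigenvalues
q = eigvecs[:, -1]                     # unit-norm eigenvector, largest eigenvalue

u = Y @ Y.T @ q                        # latent variable on the Y side
w = X.T @ Y @ Y.T @ q                  # loading vector, a combination of rows of X
t = X @ w                              # the Rd-PLS component t = XX'YY'q
```

Since t'u = q'(YY'XX'YY')q, the chosen q indeed maximizes the covariance between t and u over all unit-norm auxiliary vectors.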

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 131
2757 Pedestrian Behavioral Analysis for Safety at Road Crossing at Selected Intersections in Dhaka City

Authors: Sumit Roy

Abstract:

A clear understanding of pedestrian behaviour when crossing at intersections is needed both for providing the necessary infrastructure and for enhancing pedestrian safety at any intersection. Pedestrian road-crossing behaviour was studied at the Motijheel and Kakrail intersections, where Motijheel is a controlled roundabout and Kakrail is a signalized intersection. Around 60 people at each intersection were interviewed for a questionnaire survey, and video recordings at different times of day were made for observation at each intersection. At Motijheel, pedestrian road crossings were much more frequent than at Kakrail because the number of workplaces there is higher. The questionnaire survey found that 80% of pedestrians cross at the Motijheel intersection to avail themselves of buses, whose loading and unloading locations are at the intersection, whereas at Kakrail only 25% of pedestrians cross the road for buses, as buses do not slow down there. At Motijheel, 25 to 40% of pedestrians choose to jump over the barricade instead of using the overbridge, to save time and effort; those using the overbridge said they do so for safety. Moreover, pedestrians cross at the same pace during both red and green intervals when vehicles move in the range of 12.5 to 14.5 km/h and gaps between vehicles exceed 4 m; here the pedestrian crossing speed varies from 3.5 to 7.2 km/h. At Kakrail, the road-crossing situation can be classified into four categories. During red time, pedestrians do not wait to cross the road, and crossing speed varies from 3.5 to 7.2 km/h. When vehicle speed varies from 5.4 to 7.4 km/h and gaps between vehicles are 1.5 to 2 m, most pedestrians initially choose to wait and try to cross the road in groups at a crossing speed of 2.7 to 3.5 km/h.
When vehicle speed varies from 10.8 to 18 km/h and gaps between vehicles are 2 to 3 m, most people wait and cross the road in groups at 3.5 to 5.4 km/h. When vehicle speed varies from 25.2 to 32.4 km/h and gaps between vehicles are 4 to 6 m, most pedestrians choose to wait for the red time. At Kakrail, 87% of people said that they cross the road at risk, and 60% of pedestrians said that it is risky to get on and off the bus at this intersection. Planned locations of bus loading and unloading areas could improve pedestrian road-crossing behaviour at intersections.

Keywords: crossing speed, pedestrian behaviour, road crossing, use of overbridge

Procedia PDF Downloads 164
2756 The Causes and Effects of Delinquent Behaviour among Students in Juvenile Home: A Case Study of Osun State

Authors: Baleeqs O. Adegoke, Adeola O. Aburime

Abstract:

Juvenile delinquency is fast becoming one of the largest problems facing many societies, due to many different factors ranging from parental factors to bullying at school, all of which have led to different theoretical notions by different scholars. Delinquency is illegal or immoral behaviour, especially by a young person who behaves in a way that is illegal or that society does not approve of. The purpose of the study was to investigate the causes and effects of delinquent behaviours among adolescents in a juvenile home in Osun State. A descriptive survey research design was employed. A random sampling technique was used to select 100 adolescents in the juvenile home in Osun State, and questionnaires were developed and given to them. The data collected were analyzed using frequency counts and percentages for the demographic data in section A, while the two research hypotheses postulated for this study were tested using t-test statistics at the 0.05 level of significance. Findings revealed that the greatest school-related effect of delinquent behaviours among adolescents in the juvenile home, as reported by respondents, was their aggressive behaviour. Findings revealed a significant difference in the causes and effects of delinquent behaviours among adolescents in the juvenile home in Osun State. It was also revealed that there was no significant difference in the causes and effects of delinquent behaviours among secondary school students in Osun based on gender. The following recommendations were made to address the findings of this study: more teachers should be appointed in the observation home so that teaching can be provided to the different age groups of delinquents; developing the infrastructure of short-stay homes and the observation home is a top priority; and proper counseling sessions at regular intervals are highly essential for these juveniles.
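The gender comparison above rests on an independent-samples t-test at α = 0.05. A minimal sketch of the pooled-variance version on hypothetical scores (not the study's data; with two groups of about 50, df ≈ 98 and the two-tailed critical value is roughly 1.98):

```python
from statistics import mean, variance

def pooled_t(a, b):
    """Independent two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2                 # statistic and degrees of freedom

# Hypothetical delinquency-cause scores for boys and girls:
boys  = [3.1, 2.8, 3.4, 3.0, 2.9]
girls = [3.0, 3.2, 2.7, 3.1, 2.8]
t, df = pooled_t(boys, girls)
# |t| below the critical value (about 2.31 for df = 8, two-tailed)
# would echo the study's null finding for gender.
```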

Keywords: behaviour, delinquency, juvenile, random sampling, statistical techniques, survey

Procedia PDF Downloads 179
2755 Adsorbed Probe Molecules on Surface for Analyzing the Properties of Cu/SnO2 Supported Catalysts

Authors: Neha Thakur, Pravin S. More

Abstract:

The interaction of CO, H2 and LPG with Cu-dosed SnO2 catalysts was studied by means of Fourier transform infrared spectroscopy (FTIR). With increasing Cu loading, pronounced and progressive red shifts of the C–O stretching frequency ν(CO) associated with molecular CO adsorbed on the Cu/SnO2 component were observed. This decrease in ν(CO) correlates with the enhancement of CO dissociation at higher temperatures on Cu-promoted SnO2 catalysts under conditions where clean Cu is almost ineffective. In conclusion, the capability of the technique is discussed, and an approach for enhancing its sensitivity is proposed.

Keywords: FTIR, spectroscopy, CO dissociation, ν(CO)

Procedia PDF Downloads 293
2754 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method

Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy

Abstract:

The increasing demand for pile foundations in bridges has pointed towards the need to constantly improve existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simple energy-based approach to assess the displacement response of piles subjected to general loading conditions: an axial load, a lateral load and a bending moment. The governing differential equations and boundary conditions for a bridge pile embedded in multi-layered soil strata under these loading conditions are obtained from Hamilton's principle, using variational calculus and minimization of energies. Soil non-linearity is incorporated through simple constitutive relationships that account for the degradation of soil moduli with increasing strain. A simple power law based on the published literature is used, in which the soil is assumed to be nonlinear-elastic and perfectly plastic, and a Tresca yield surface is assumed in developing the variation of soil stiffness with strain level that defines the non-linearity of the soil strata. This numerical technique was applied to a pile foundation in two-layered soil strata for a pier supporting the bridge and solved using MATLAB R2019a. The analysis yields the pile displacement at any depth along the length of the pile. The results of the analysis are in good agreement with published field data and with three-dimensional finite element results obtained using ANSYS 2019R3. The methodology can be extended to study the response of multi-strata soil supporting group piles underneath bridge piers.
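The flavor of the energy-based approach can be sketched for the simplest linear case: a laterally loaded pile on a Winkler foundation, where minimizing Π = ∫(EI/2)·y''² dz + ∫(k/2)·y² dz − H·y(0) over a polynomial trial family reduces to a small linear solve. All values below are assumed for illustration only; the paper's formulation additionally handles axial load, moment, multiple layers and soil non-linearity:

```python
import numpy as np

EI = 1.0e6    # flexural rigidity, kN*m^2 (assumed)
k  = 1.0e4    # Winkler subgrade modulus, kN/m^2 (assumed)
H  = 100.0    # lateral load at the pile head, kN (assumed)
L  = 20.0     # embedded length, m (long compared with 1/beta)

# Trial functions phi_p(z) = (1 - z/L)^p, p = 2..10, enforcing y(L) = y'(L) = 0
# at the tip; the free-head conditions at z = 0 are natural in the energy method.
ps = np.arange(2, 11)
n = len(ps)
K = np.zeros((n, n))
for i, p in enumerate(ps):
    for j, q in enumerate(ps):
        bend = p * (p - 1) * q * (q - 1) / (L**3 * (p + q - 3))  # int phi_p'' phi_q'' dz
        soil = L / (p + q + 1)                                   # int phi_p phi_q dz
        K[i, j] = EI * bend + k * soil
F = H * np.ones(n)            # work of the head load, since phi_p(0) = 1
a = np.linalg.solve(K, F)     # stationarity of Pi  ->  K a = F
y_head = a.sum()              # head deflection y(0)

# Closed-form check for a free-head pile on an elastic foundation: y(0) = 2*H*beta/k
beta = (k / (4 * EI)) ** 0.25
y_exact = 2 * H * beta / k
```

The Ritz estimate reproduces the classical closed-form head deflection closely, which is the appeal of the energy route: more general loads, layering and non-linear stiffness simply change the terms of Π, not the solution machinery.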

Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method

Procedia PDF Downloads 122
2753 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface

Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari

Abstract:

With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test could be unavailable for all study subjects, due to the expensiveness or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
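With a gold standard on every subject, the unbiased nonparametric VUS estimator mentioned above is just the fraction of concordant triples: VUS = (1/(n₁n₂n₃))·Σ I(X₁ < X₂ < X₃), taking one subject from each ordered disease class. A brute-force sketch (ties ignored for brevity; the paper's bias-corrected estimators are not reproduced here):

```python
from itertools import product

def vus(class1, class2, class3):
    """Empirical volume under the ROC surface for three ordered disease classes."""
    triples = list(product(class1, class2, class3))
    concordant = sum(1 for x1, x2, x3 in triples if x1 < x2 < x3)
    return concordant / len(triples)

# A perfectly separating test marker gives VUS = 1; chance level is 1/6.
healthy, intermediate, diseased = [0.1, 0.2], [0.4, 0.5], [0.8, 0.9]
perfect = vus(healthy, intermediate, diseased)    # 1.0
```

Verification bias arises when this estimator is computed only on the verified subset; the imputation and re-weighting estimators above correct for that selection.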

Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis

Procedia PDF Downloads 405
2752 Adolescent Obesity Leading to Adulthood Cardiovascular Diseases among Punjabi Population

Authors: Manpreet Kaur, Badaruddoza, Sandeep Kaur Brar

Abstract:

The increasing prevalence of adolescent obesity is one of the major causes of hypertension in adulthood. Various statistical methods have been applied to examine the performance of anthropometric indices in identifying an adverse cardiovascular risk profile. The present work was undertaken to determine the significant traditional risk factors through principal component factor analysis (PCFA) in a population-based sample of Punjabi adolescents aged 10-18 years. Data were collected from adolescent children at different schools situated in urban areas of Punjab, India. PCFA was applied to extract orthogonal components from the anthropometric and physiometric variables, and the associations between traits and components are described by the factor loadings. The PCFA extracted four factors, which explained 84.21%, 84.06% and 83.15% of the total variance of the 14 original quantitative traits among boys, girls and the combined subjects, respectively. Factor 1 has high loadings of the traits that reflect adiposity, such as waist circumference, BMI and skinfolds, in both sexes. Waist circumference and body mass index are indicators of abdominal obesity, which increases the risk of cardiovascular disease, and the loadings of these two traits were highest in adolescent girls (WC = 0.924; BMI = 0.905). Factor 1 is therefore a strong indicator of atherosclerosis risk in adolescents. Factor 2 is predominantly loaded with blood pressure and related traits (SBP, DBP, MBP and pulse rate), reflecting the risk of essential hypertension, in adolescent girls and the combined subjects, whereas in boys factor 2 is loaded with obesity-related traits (weight and hip circumference). Comparably, factor 3 is loaded with blood pressures in boys and with height and WHR in girls, while factor 4 has a high loading of pulse pressure among boys, girls and the combined group of adolescents.
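The core computation behind PCFA, extracting orthogonal components from the correlation matrix of standardized traits and reading off the loadings, can be sketched in a few lines of NumPy. The data here are synthetic, not the study's anthropometric measurements, and any rotation step the authors may have applied is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((200, 5))               # 200 subjects, 5 traits (synthetic)
z = (data - data.mean(axis=0)) / data.std(axis=0)  # standardize each trait
R = np.corrcoef(z, rowvar=False)                   # correlation matrix of the traits

eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                  # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs * np.sqrt(eigvals)              # trait-component correlations
explained_pct = 100.0 * eigvals / eigvals.sum()    # % of total variance per factor
```

A trait with a loading near 1 on a factor, like WC = 0.924 on factor 1 above, is almost perfectly represented by that factor.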

Keywords: adolescent obesity, CVD, hypertension, Punjabi population

Procedia PDF Downloads 364
2751 Human-Wildlife Conflicts in Urban Areas of Zimbabwe

Authors: Davie G. Dave, Prisca H. Mugabe, Tonderai Mutibvu

Abstract:

Globally, human-wildlife conflicts (HWCs) are on the rise. Such is the case in urban areas of Zimbabwe, yet little has been documented about it. This study was done to provide insights into the occurrence of HWCs in urban areas. It was carried out in Harare, Bindura, Masvingo, Beitbridge and Chiredzi to determine the cause, nature, extent and frequency of occurrence of HWCs, the key wildlife species involved in conflicts, and the management practices used to combat wildlife conflicts in these areas. Several sampling techniques encompassing multi-stage, stratified random, purposive and simple random sampling were employed: residential areas were placed into three strata according to population density, residential areas were then selected, and finally the actual participants were chosen. Data were collected through a semi-structured questionnaire and key informant interviews. The results revealed that property destruction and crop damage were the most prevalent conflicts. Of the 15 animals that were cited, snakes, baboons and monkeys were associated with the most conflicts. The occurrence of HWCs was mainly attributed to the increase in both animal and human populations. To curtail these HWCs, local people mainly used non-lethal methods, whilst lethal methods were used by the authorities for some of the reported cases. The majority of the conflicts were seasonal and less severe. Respondents voiced growing concerns about wildlife conflicts, especially in areas with primates, such as Warren Park in Harare and Limpopo View in Beitbridge. There are HWC hotspots in urban areas, and to ameliorate this, a multi-action approach is suggested that includes general awareness campaigns on HWCs and land-use planning that involves the creation of green spaces to ease wildlife management.

Keywords: human-wildlife conflicts, mitigation measures, residential areas, types of conflicts, urban areas

Procedia PDF Downloads 53
2750 Important Factors Affecting the Effectiveness of Quality Control Circles

Authors: Sogol Zarafshan

Abstract:

The present study aimed to identify important factors affecting the effectiveness of quality control circles in a hospital and to rank them using a combination of fuzzy VIKOR and grey relational analysis (GRA). The study population consisted of five academic members and five experts in the field of nursing working in a hospital, who were selected using a purposive sampling method. In addition, a sample of 107 nurses was selected through simple random sampling using their employee codes and a random-number table. The required data were collected using a researcher-made questionnaire consisting of 12 factors. The validity of this questionnaire was confirmed by the opinions of the experts and academic members who participated in the study, as well as by confirmatory factor analysis; its reliability was also verified (α = 0.796). The collected data were analyzed using SPSS 22.0 and LISREL 8.8, as well as the VIKOR-GRA and IPA methods. The ranking of the factors affecting the effectiveness of quality control circles showed that the highest and lowest ranks were related to 'Managers' and supervisors' support' and 'Group leadership', respectively. The hospital performed best on factors such as 'Clear goals and objectives' and 'Group cohesiveness and homogeneity', and worst on 'Reward system' and 'Feedback system'. The results showed that although 'Training the members', 'Using the right tools' and 'Reward system' were factors of great importance, the organization's performance on these factors was poor. Therefore, these factors should receive more attention from the studied hospital's managers and should be improved as soon as possible.
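The GRA half of the ranking above computes, for each factor, a grey relational grade against an ideal reference series. A minimal sketch with made-up scores (distinguishing coefficient ζ = 0.5, larger-is-better normalization; the paper's fuzzy VIKOR stage and real questionnaire data are not reproduced):

```python
import numpy as np

def grey_relational_grades(scores, zeta=0.5):
    """Grey relational grade of each row (alternative) against the ideal series.

    Assumes every criterion is larger-is-better and has nonzero spread.
    """
    x = (scores - scores.min(axis=0)) / (scores.max(axis=0) - scores.min(axis=0))
    delta = np.abs(1.0 - x)                              # distance to the ideal (all ones)
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + zeta * dmax) / (delta + zeta * dmax)    # grey relational coefficients
    return xi.mean(axis=1)                               # average over criteria

# Rows: factors; columns: hypothetical evaluation-criteria scores.
scores = np.array([[9.0, 8.5, 9.2],    # e.g. managers' and supervisors' support
                   [6.0, 5.5, 7.0],
                   [4.0, 9.0, 5.0]])
grades = grey_relational_grades(scores)
ranking = np.argsort(grades)[::-1]      # best factor first
```

An alternative that attains the maximum on every criterion gets a grade of exactly 1; the fuzzy VIKOR stage would then be combined with these grades to produce the final ranking.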

Keywords: Quality control circles, Fuzzy VIKOR, Grey Relational Analysis, Importance–Performance Analysis

Procedia PDF Downloads 122
2749 The Value of Routine Terminal Ileal Biopsies for the Investigation of Diarrhea

Authors: Swati Bhasin, Ali Ahmed, Valence Xavier, Ben Liu

Abstract:

Aims: Diarrhea is a problem that frequently prompts referral from the general practitioner to the gastroenterology and surgical teams. To establish a diagnosis, these patients undergo colonoscopy. The current practice at our district general hospital is to take random left and right colonic biopsies. National guidelines issued by the British Society of Gastroenterology advise that all patients presenting with chronic diarrhea should have an ileoscopy as an indicator of colonoscopy completion. Our primary aim was to check whether terminal ileum (TI) biopsy is required to establish a diagnosis of inflammatory bowel disease (IBD). Methods: Data were collected retrospectively from November 2018 to November 2019. The target population was patients who underwent colonoscopy for diarrhea. Demographic data and the endoscopic and histological findings of the TI were assessed and analyzed. Results: 140 patients with a mean age of 57 years (19-84) underwent colonoscopy (M:F; 1:2.3). 92 patients had random colonic biopsies taken, and based on their histological results, 15 patients (16%) were diagnosed with IBD. The TI was successfully intubated in 40 patients, of whom 32 also had colonic biopsies taken; 8 patients did not have a colonic biopsy. A macroscopic abnormality in the TI was detected in 5 patients, all of whom were biopsied. Based on the histological results of the TI biopsies, 3 patients (12%) were diagnosed with IBD; all 3 also had colonic biopsies taken simultaneously, which showed inflammation. None of the patients had a diagnosis of IBD confirmed on TI intubation alone where colonic biopsies were not taken, and none where colonic biopsies were negative.
Conclusion: TI intubation is a highly skilled, time-consuming procedure with a higher risk of perforation which, as per our study, has little additional diagnostic value in finding IBD in patients with diarrhea if colonic biopsies are taken. We propose that diarrhea is a colonic symptom; therefore, colonic biopsies are positive for inflammation when the diarrhea is secondary to IBD. We conclude that IBD can be diagnosed simply with colonic biopsies.

Keywords: biopsy, colon, IBD, terminal ileum

Procedia PDF Downloads 111