Search results for: Couchsurfers prediction
432 Nutritional Profile and Food Intake Trends amongst Hospital Dieted Diabetic Eye Disease Patients of India
Authors: Parmeet Kaur, Nighat Yaseen Sofi, Shakti Kumar Gupta, Veena Pandey, Rajvaedhan Azad
Abstract:
Nutritional status and prevailing blood glucose level trends amongst hospitalized patients have been linked to clinical outcome. Therefore, the present study was undertaken to assess hospitalized Diabetic Eye Disease (DED) patients' anthropometric and dietary intake trends. DED patients with type 1 or 2 diabetes, aged over 20 years, were enrolled. Actual food intake was determined by the weighed food record method. The Mifflin-St Jeor predictive equation multiplied by a combined stress and activity factor of 1.3 was applied to estimate caloric needs. A questionnaire was further administered to obtain reasons for inadequate dietary intake. Results indicated the validity of joint analysis of body mass index in combination with waist circumference for clinical risk prediction. Dietary data showed a significant difference (p < 0.0005) between average daily caloric and carbohydrate intake and actual daily caloric and carbohydrate needs. Mean fasting and post-prandial plasma glucose levels were 150.71 ± 72.200 mg/dL and 219.76 ± 97.365 mg/dL, respectively. Improvements in food delivery systems and nutrition education were indicated for reducing plate waste and enabling a better understanding of the dietary aspects of diabetes management. A team approach of nurses, physicians and other health care providers is required besides the expertise of dietetics professionals. To conclude, the findings of the present study will be useful in planning the nutrition care process (NCP) for optimizing glucose control as a component of quality medical nutrition therapy (MNT) in hospitalized DED patients.
Keywords: nutritional status, diabetic eye disease, nutrition care process, medical nutrition therapy
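The caloric-needs estimate described above can be sketched in a few lines. This is a minimal illustration of the standard Mifflin-St Jeor resting-energy formula scaled by the study's combined factor of 1.3; the example inputs are hypothetical, not patient data from the study.

```python
def mifflin_st_jeor(weight_kg, height_cm, age_yr, sex):
    """Resting energy expenditure (kcal/day) via the Mifflin-St Jeor equation."""
    ree = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr
    return ree + (5.0 if sex == "male" else -161.0)

def estimated_caloric_need(weight_kg, height_cm, age_yr, sex, factor=1.3):
    """REE multiplied by a combined stress and activity factor (1.3 in the study)."""
    return mifflin_st_jeor(weight_kg, height_cm, age_yr, sex) * factor

# hypothetical patient: 70 kg, 170 cm, 50-year-old male
need = estimated_caloric_need(70, 170, 50, "male")
```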
Procedia PDF Downloads 354
431 Calculation of Secondary Neutron Dose Equivalent in Proton Therapy of Thyroid Gland Using FLUKA Code
Authors: M. R. Akbari, M. Sadeghi, R. Faghihi, M. A. Mosleh-Shirazi, A. R. Khorrami-Moghadam
Abstract:
Proton radiotherapy (PRT) is becoming an established treatment modality for cancer. Localized tumors such as undifferentiated thyroid tumors are insufficiently handled by conventional radiotherapy, whereas protons offer the prospect of increasing the tumor dose without exceeding the tolerance of the surrounding healthy tissues. In spite of the relatively high advantage of delivering a localized radiation dose to the tumor region, in proton therapy secondary neutron production can contribute significantly to the integral dose and lessen the advantages of this modality in contrast to conventional radiotherapy techniques. Furthermore, neutrons have a high quality factor; therefore, even a small physical dose can cause considerable biological effects. Measuring this neutron dose is a very critical step in the prediction of secondary cancer incidence. FLUKA Monte Carlo simulations have previously been used to evaluate the dose due to secondaries in proton therapy. In this study, after validating the simulated proton beam range in a water phantom against the CSDA range from NIST for the studied proton energy range (34-54 MeV), proton therapy of thyroid gland cancer was simulated using the FLUKA code. The secondary neutron dose equivalent of some organs and tissues beyond the target volume, caused by 34 and 54 MeV proton interactions, was calculated in order to evaluate secondary cancer incidence. A multilayer cylindrical neck phantom comprising all the layers of neck tissue, with a proton beam impinging normally on the phantom, was simulated. The trachea (together with the larynx) had the greatest dose equivalent (1.24×10⁻¹ and 1.45 pSv per primary proton at 34 and 54 MeV, respectively) among the simulated tissues beyond the target volume in the neck region.
Keywords: FLUKA code, neutron dose equivalent, proton therapy, thyroid gland
Procedia PDF Downloads 425
430 Finite Element Analysis for Earing Prediction Incorporating the BBC2003 Material Model with Fully Implicit Integration Method: Derivation and Numerical Algorithm
Authors: Sajjad Izadpanah, Seyed Hadi Ghaderi, Morteza Sayah Irani, Mahdi Gerdooei
Abstract:
In this research work, a sophisticated yield criterion known as BBC2003, capable of describing the planar anisotropic behavior of aluminum alloy sheets, was integrated into the commercial finite element code ABAQUS/Standard via a user subroutine. The complete formulation of the implementation process using a fully implicit integration scheme, i.e., the classic backward Euler method, is presented, and relevant aspects of the yield criterion are introduced. In order to solve the nonlinear differential and algebraic equations, a line-search algorithm was adopted in the user-defined material subroutine (UMAT) to expand the convergence domain of the iterative Newton-Raphson method. The developed subroutine was used to simulate a challenging computational problem with complex stress states, i.e., deep drawing of the anisotropic aluminum alloy AA3105. The accuracy and stability of the developed subroutine were confirmed by comparing the numerically predicted earing and thickness variation profiles with the experimental results, which showed excellent agreement. The integration of the BBC2003 yield criterion into ABAQUS/Standard represents a significant contribution to the field of computational mechanics and provides a useful tool for analyzing the mechanical behavior of anisotropic materials subjected to complex loading conditions.
Keywords: BBC2003 yield function, plastic anisotropy, fully implicit integration scheme, line search algorithm, explicit and implicit integration schemes
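The backward-Euler return mapping with a backtracking line search that the abstract describes can be sketched on a deliberately simple analogue: 1D plasticity with linear hardening rather than the BBC2003 criterion itself. All material parameters below are illustrative assumptions, and the real UMAT operates on the full anisotropic stress state.

```python
def return_mapping(eps_total, eps_p, E=70e3, sig0=100.0, H=1000.0):
    """Backward-Euler return mapping for 1D plasticity with linear
    hardening sig_y = sig0 + H*ep, solved by Newton iteration with a
    backtracking line search -- an illustrative analogue of the UMAT
    procedure, not the BBC2003 criterion itself."""
    sig_tr = E * (eps_total - eps_p)              # elastic trial stress

    def residual(dg):                             # yield consistency f(dg)
        return abs(sig_tr) - E * dg - (sig0 + H * (eps_p + dg))

    if residual(0.0) <= 0.0:
        return sig_tr, eps_p                      # elastic step, no return
    dg, h = 0.0, 1e-8
    for _ in range(50):
        r = residual(dg)
        if abs(r) < 1e-10:
            break
        drdg = (residual(dg + h) - r) / h         # numerical derivative
        step = -r / drdg                          # Newton step
        alpha = 1.0                               # backtracking line search:
        while abs(residual(dg + alpha * step)) > abs(r) and alpha > 1e-6:
            alpha *= 0.5                          # shrink until residual drops
        dg += alpha * step
    return sig_tr - E * dg, eps_p + dg            # stress back on yield surface
```

The line search mirrors the role it plays in the paper: it only accepts a (possibly shortened) Newton step when the residual actually decreases, which widens the convergence domain.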
Procedia PDF Downloads 75
429 Uterine Cervical Cancer: Early Treatment Assessment with T2- and Diffusion-Weighted MRI
Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist
Abstract:
Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemoradiotherapy. Treatment failure occurs in 30-50% of patients, with very poor prognosis. The treatment is standardized, with risk for both over- and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcome to allow for individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) for early prediction of therapy outcome and the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times: at baseline and 3, 5, and 12 weeks after treatment started. Tumor size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period, referred to as "poor prognosis" (PP); the remaining eight patients are referred to as "good prognosis" (GP). Tumor size was larger at all time points for PP than for GP. The ∆size between any of the four time points was the same for PP and GP patients. The sensitivity and specificity for predicting prognostic group from a remaining tumor on DWI were highest at 5 weeks: 83% (5/6) and 63% (5/8), respectively. The combination of tumor size at baseline and remaining tumor on DWI at 5 weeks reached an area under the curve (AUC) of 0.83 in ROC analysis. After 12 weeks, no remaining tumor was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumor size measurements did not improve the predictive value at any time point.
Conclusion: A large tumor at baseline MRI combined with a remaining tumor on DWI at 5 weeks predicted a poor prognosis.
Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma
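The 5-week sensitivity and specificity quoted above follow directly from confusion-matrix counts. A minimal sketch, with the counts inferred from the abstract's fractions (5/6 and 5/8; the exact evaluable-patient breakdown is an assumption):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts implied by the abstract at the 5-week time point:
# 5 of 6 evaluable PP patients flagged by remaining tumour on DWI,
# 5 of 8 GP patients correctly DWI-negative.
sens, spec = sens_spec(tp=5, fn=1, tn=5, fp=3)   # -> 0.83, 0.625
```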
Procedia PDF Downloads 143
428 Investigations on the Influence of Web Openings on the Load Bearing Behavior of Steel Beams
Authors: Felix Eyben, Simon Schaffrath, Markus Feldmann
Abstract:
A building's design should maximize its potential for use. Therefore, flexible use is always important when designing a steel structure. To create flexibility, steel beams with web openings are increasingly used, because they offer the advantage that cables, pipes and other technical equipment can easily be routed through without detours, allowing for more space-saving and aesthetically pleasing construction. This can also significantly reduce the height of ceiling systems. Until now, beams with web openings have not been explicitly considered in the European standard; this will change with the new EN 1993-1-13, in which design rules for different opening forms are defined. In order to further develop the design concepts, beams with web openings under bending are therefore investigated in terms of damage mechanics as part of a German national research project aiming to optimize the verifications for steel structures based on a wider database and a validated damage prediction. For this purpose, fundamental factors influencing the load-bearing behavior of girders with web openings under bending load were first investigated numerically, without taking material damage into account. Various parameter studies were carried out; the factors under study were the opening shape, size and position, as well as structural aspects such as the span length, arrangement of stiffeners and loading situation. The load-bearing behavior is evaluated using the resulting load-deformation curves. These results are compared with the design rules and critically analyzed. Experimental tests are also planned based on these results. Moreover, the implementation of damage mechanics in the form of the modified Bai-Wierzbicki model was examined.
After the experimental tests have been carried out, the numerical models will be validated and further influencing factors investigated on the basis of parametric studies.
Keywords: damage mechanics, finite element, steel structures, web openings
Procedia PDF Downloads 173
427 Improved Regression Relations Between Different Magnitude Types and the Moment Magnitude in the Western Balkan Earthquake Catalogue
Authors: Anila Xhahysa, Migena Ceyhan, Neki Kuka, Klajdi Qoshi, Damiano Koxhaj
Abstract:
The seismic event catalogue has been updated in the framework of a bilateral project supported by the Central European Investment Fund and with the extensive support of the Global Earthquake Model Foundation to update Albania's national seismic hazard model. The earthquake catalogue prepared within this project covers the Western Balkan area bounded by 38.0°-48°N, 12.5°-24.5°E and includes 41,806 earthquakes that occurred in the region between 510 BC and 2022. Since the moment magnitude characterizes the earthquake size accurately and the ground motion prediction equations selected for the seismic hazard assessment employ this scale, it was chosen as the uniform magnitude scale for the catalogue. Therefore, proxy values of moment magnitude had to be obtained by using new magnitude conversion equations from the local and other magnitude types to this unified scale. The Global Centroid Moment Tensor Catalogue was considered the most authoritative source of moment magnitude reports for moderate to large earthquakes; hence it was used as a reference for calibrating other sources. The best fit was observed in comparison with some regional agencies, whereas differences were observed in all magnitude ranges with the moment magnitudes reported from Italy, Greece and Turkey. For teleseismic magnitudes, to account for the non-linearity of the relationships, we used an exponential model for the derivation of the regression equations. The obtained regressions for the surface-wave magnitude and short-period body-wave magnitude show considerable differences from the Global Earthquake Model regression curves, especially in the low magnitude ranges. Moreover, a conversion relation was obtained between the local magnitude of Albania and the corresponding moment magnitude as reported by the global and regional agencies. As errors were present in both variables, Deming regression was used.
Keywords: regression, seismic catalogue, local magnitude, teleseismic magnitude, moment magnitude
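Deming regression, used above because both magnitude scales carry measurement error, has a closed-form solution for the slope. A minimal sketch (the variance ratio `delta` and the sample data are illustrative assumptions, not the catalogue's values):

```python
def deming(x, y, delta=1.0):
    """Deming regression slope and intercept; delta is the assumed ratio of
    the error variances var(err_y)/var(err_x) (delta=1 gives orthogonal
    regression). Both x and y are treated as noisy, unlike ordinary
    least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + ((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2) ** 0.5) \
            / (2 * sxy)
    return slope, my - slope * mx
```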
Procedia PDF Downloads 69
426 Lung Cancer Detection and Multi-Level Classification Using Discrete Wavelet Transform Approach
Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; early diagnosis, detection and prediction reduce the need for risky invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region properties (area, perimeter, diameter, centroid and eccentricity) are measured for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient's condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI
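The first classification level described above (KNN deciding normal vs. abnormal) can be sketched in pure Python. The feature vectors and labels below are hypothetical stand-ins for the GLCM/DWT features the paper extracts:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """First-level classifier sketch: majority vote among the k nearest
    training samples by Euclidean distance. `train` is a list of
    (feature_vector, label) pairs, e.g. GLCM texture features."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In the paper's pipeline, samples labeled "abnormal" by this stage would be passed on to the ANN for stage identification.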
Procedia PDF Downloads 153
425 In-silico Target Identification and Molecular Docking of Withaferin A and Withanolide D to Understand their Anticancer Therapeutic Potential
Authors: Devinder Kaur Sugga, Ekamdeep Kaur, Jaspreet Kaur, C. Rajesh, Preeti Rajesh, Harsimran Kaur
Abstract:
Withanolides are steroidal lactones, highly oxygenated phytoconstituents that can be developed as potential anti-carcinogenic agents. The two main withanolides, namely Withaferin A and Withanolide D, have been extensively studied for their pharmacological activities. Both withanolides are present in the leaves of Withania somnifera (WS), belonging to the family Solanaceae and also known as "Indian ginseng." In this study, the effects of WS leaf extract on the MCF7 breast cancer cell line were investigated by performing a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay to evaluate the cytotoxic effects and an in vitro wound-healing assay to study the effect on cancer cell migration. Our data suggest WS extracts have cytotoxic effects and are effective anti-migrating agents, and thus can be a source of potential candidates for the development of agents against metastasis. In light of these results, an in-silico approach to identify the possible protein targets interacting with withanolides was taken. Protein kinase C alpha (PKCα) was among the five top-ranked target proteins identified by the Swiss Target Prediction tool. PKCα is known to promote the growth and invasion of cancer cells and is being evaluated as a prognostic biomarker and therapeutic target in clinically aggressive tumors. Molecular docking of Withaferin A and Withanolide D was performed using AutoDock Vina. Both bioactive compounds interacted with PKCα. The targets predicted using this approach will serve as leads for the possible therapeutic potential of withanolides, the bioactive ingredients of WS extracts, as anti-cancer drugs.
Keywords: withania somnifera, withaferin A, withanolide D, PKCα
Procedia PDF Downloads 146
424 Distribution of Cytochrome P450 Gene in Patients Taking Medical Cannabis
Authors: Naso Isaiah Thanavisuth
Abstract:
Introduction: Medical cannabis can be used for the treatment of conditions including anorexia, pain, inflammation, multiple sclerosis, Parkinson's disease, epilepsy, cancer, and metabolic syndrome-related disorders. However, medical cannabis can lead to adverse effects (AEs), attributed mainly to delta-9-tetrahydrocannabinol (THC). In previous studies, the major THC-metabolizing enzyme is CYP2C9. In particular, variants of the CYP2C9 gene, CYP2C9*2 on exon 3 (C430T, Arg144Cys) and CYP2C9*3 on exon 7 (A1075C, Ile359Leu), decrease enzyme activity. Notwithstanding, there are no data describing whether CYP2C9 variants are a pharmacogenetic marker for prediction of THC-induced AEs in Thai patients. Objective: To investigate the association between the CYP2C9 gene and THC-induced AEs in Thai patients. Method: We enrolled 39 Thai patients treated with medical cannabis, men and women classified by clinical data. The quality of the extracted DNA was assessed using a NanoDrop ND-1000. CYP2C9*2 and *3 genotyping was conducted using the TaqMan real-time PCR assay (ABI, Foster City, CA, USA). Results: Of the Thai patients who received medical cannabis, twenty-four (61.54%) were female and fifteen (38.46%) were male, with an age range of 27-87 years. The most common AEs in patients treated with medical cannabis were tachycardia, arrhythmia, dry mouth, and nausea. In particular, thirteen (72.22%) of the patients with medical cannabis-induced AEs were female, with an age range of 33-69 years. In this study, none of the medical cannabis group carried CYP2C9*2 variants. The CYP2C9*3 variants (*1/*3, intermediate metabolizer, IM; and *3/*3, poor metabolizer, PM) were found in three of thirty-nine (7.69%) and one of thirty-nine (2.56%) patients, respectively. Conclusion: This is the first study to examine the genetic polymorphism of CYP2C9 and medical cannabis-induced AEs in the Thai population.
Although the CYP2C9*2 allele was not found in our cohort, variation in CYP2C9 alleles might serve as a pharmacogenetic marker for screening before initiating therapy with medical cannabis, for prevention of medical cannabis-induced AEs.
Keywords: CYP2C9, medical cannabis, adverse effects, THC, P450
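The *3 allele frequency implied by the genotype counts above can be checked with a one-line calculation over diploid subjects. A minimal sketch using the counts reported in the abstract:

```python
def allele_frequency(het, hom, n):
    """Frequency of a variant allele from genotype counts in n diploid
    subjects: het = heterozygous carriers (*1/*3), hom = homozygotes
    (*3/*3); each subject contributes two alleles."""
    return (het + 2 * hom) / (2 * n)

# Counts reported in the abstract: 3 intermediate metabolizers (*1/*3)
# and 1 poor metabolizer (*3/*3) among 39 patients.
freq_star3 = allele_frequency(het=3, hom=1, n=39)   # 5/78, about 6.4%
```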
Procedia PDF Downloads 105
423 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case
Authors: Arzu K. Kamberli, Tolga Ulusoy
Abstract:
Technological developments and the resulting global communication have made the 21st century an era in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital flows have accelerated, and capital inflows have brought with them crisis contagion. Together with irrational human behavior, financial crises have become a fundamental problem for countries and have increased researchers' interest in their causes and the periods in which they occur. The complex nature of financial crises, and their structure, which cannot be explained linearly, have therefore also been taken up by the new discipline of econophysics. As is known, although mechanisms for predicting financial crises exist, there is no definitive method. In this context, this study uses the concept of the electric field from electrostatics to develop an early econophysical approach to global financial crises. The aim is to define a model that can act before financial crises, identify financial fragility at an earlier stage, and help public and private sector members, policy makers and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used in the calculation of the electric field, and the forecasting of financial crises. The concept of Φ (financial flux), based on currency movements and money mobility, has been adopted for pre-warning of crises by taking advantage of this analogy.
The Φ (financial flux) values, calculated by this formula for the first time in this study, were analyzed with Matlab software, and in this context Φ was confirmed to give a pre-warning for the 2001 crises of Turkey and Argentina.
Keywords: econophysics, financial crisis, Gauss's law, physics
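The electrostatic side of the analogy is Gauss's law, which relates the flux of the electric field through a closed surface to the enclosed charge. The financial reading sketched below is our paraphrase of the abstract's analogy, not the authors' exact formula:

```latex
\Phi_E \;=\; \oint_{S} \mathbf{E}\cdot d\mathbf{A} \;=\; \frac{Q_{\text{enc}}}{\varepsilon_0}
\qquad\longleftrightarrow\qquad
\Phi_{\text{fin}} \;\sim\; \text{net capital flow across an economy's ``surface''}
```

In the analogy, a rapid change in the enclosed "charge" (accumulated capital imbalance) shows up in the flux before the crisis itself, which is what makes Φ usable as a pre-warning indicator.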
Procedia PDF Downloads 153
422 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories for charge that makes it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4×10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, and especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
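The Schottky-emission injection mechanism named above has a standard closed form: thermionic emission over a barrier lowered by the image force. A minimal sketch; the barrier height and relative permittivity below are illustrative assumptions, not the paper's material parameters:

```python
import math

Q = 1.602176634e-19      # elementary charge (C)
KB = 1.380649e-23        # Boltzmann constant (J/K)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
A_RICH = 1.20173e6       # Richardson constant (A m^-2 K^-2)

def schottky_current_density(E, T=300.0, phi_b_eV=1.0, eps_r=2.3):
    """Schottky (thermionic) emission current density over a metal-polymer
    barrier phi_b, lowered by the image-force term sqrt(q^3 E / 4 pi eps).
    E is the interface field in V/m; phi_b_eV and eps_r are assumed values."""
    lowering = math.sqrt(Q ** 3 * E / (4.0 * math.pi * eps_r * EPS0))  # in J
    barrier = phi_b_eV * Q - lowering
    return A_RICH * T ** 2 * math.exp(-barrier / (KB * T))
```

The sketch shows the qualitative behavior the model relies on: injection rises steeply with the local field because the effective barrier shrinks as the square root of E.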
Procedia PDF Downloads 354
421 Comparison of the Anthropometric Obesity Indices in Prediction of Cardiovascular Disease Risk: Systematic Review and Meta-analysis
Authors: Saeed Pourhassan, Nastaran Maghbouli
Abstract:
Statement of the problem: The relationship between obesity and cardiovascular diseases has been studied widely (1). The distribution of fat tissue gained attention in relation to cardiovascular risk factors during long-term research (2). The American College of Cardiology/American Heart Association (ACC/AHA) calculator is the most widely used and most reliable cardiovascular risk (CVR) assessment tool (3). This study aimed to determine which anthropometric index best discriminates high-CVR patients from low-risk ones using the ACC/AHA score, in addition to finding the best index as a CVR predictor among both genders in different races and countries. Methodology & theoretical orientation: The literature in PubMed, Scopus, Embase, Web of Science, and Google Scholar was searched by two independent investigators using the keywords "anthropometric indices," "cardiovascular risk," and "obesity." The search strategy was limited to studies published prior to Jan 2022 as full texts in the English language. Studies using the ACC/AHA risk assessment tool for CVR and reporting at least two anthropometric indices (classic and novel) were included. Study characteristics and data were extracted. The relative risks were pooled with the use of the random-effect model. The analysis was repeated in subgroups. Findings: Pooled relative risks for 7 studies with 16,348 participants were 1.56 (1.35-1.72) for BMI, 1.67 (1.36-1.83) for WC [waist circumference], 1.72 (1.54-1.89) for WHR [waist-to-hip ratio], 1.60 (1.44-1.78) for WHtR [waist-to-height ratio], 1.61 (1.37-1.82) for ABSI [a body shape index] and 1.63 (1.32-1.89) for CI [conicity index]. Considering gender, WC among females and WHR among men gained the highest RR. The heterogeneity of studies was moderate (I²: 56%), which was not decreased by subgroup analysis. Some indices, such as VAI and LAP, were evaluated in only one study.
Conclusion & significance: This meta-analysis showed that WHR predicts CVR better than BMI or WHtR. Some newer indices, like CI and ABSI, are less accurate than WHR and WC. Among women, WC seems to be a better choice for predicting cardiovascular disease risk.
Keywords: obesity, cardiovascular disease, risk assessment, anthropometric indices
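The random-effects pooling behind the relative risks above is commonly done DerSimonian-Laird style on the log scale, with within-study variances recovered from the 95% confidence intervals. A minimal sketch (the sample inputs are illustrative, not the meta-analysis data):

```python
import math

def pool_random_effects(rr, ci_low, ci_high):
    """DerSimonian-Laird random-effects pooling of relative risks given
    point estimates and 95% CIs, working on the log scale."""
    y = [math.log(r) for r in rr]
    # within-study variance approximated from the CI width on the log scale
    v = [((math.log(hi) - math.log(lo)) / (2 * 1.96)) ** 2
         for lo, hi in zip(ci_low, ci_high)]
    w = [1.0 / vi for vi in v]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return math.exp(pooled)
```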
Procedia PDF Downloads 102
420 Analysis of Dynamics Underlying the Observation Time Series by Using a Singular Spectrum Approach
Authors: O. Delage, H. Bencherif, T. Portafaix, A. Bourdier
Abstract:
The main purpose of time series analysis is to learn about the dynamics behind some time-ordered measurement data. Two approaches are used in the literature to gain a better knowledge of the dynamics contained in observation data sequences. The first of these approaches concerns time series decomposition, an important analysis step allowing patterns and behaviors to be extracted as components, providing insight into the mechanisms producing the time series. In many cases, however, time series are short, noisy, and non-stationary. To provide components that are physically meaningful, methods such as Empirical Mode Decomposition (EMD), the Empirical Wavelet Transform (EWT) or, more recently, Empirical Adaptive Wavelet Decomposition (EAWD) have been proposed. The second approach is to reconstruct the dynamics underlying the time series as a trajectory in state space by mapping the time series into a set of Rᵐ lag vectors using the method of delays (MOD). Takens proved that the trajectory obtained with the MOD technique is equivalent to the trajectory representing the dynamics behind the original time series. This work introduces singular spectrum decomposition (SSD), a new adaptive method for decomposing non-linear and non-stationary time series into narrow-banded components. This method originates from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. As the first step of SSD is to constitute a trajectory matrix by embedding a one-dimensional time series into a set of lagged vectors, SSD can also be seen as a reconstruction method like MOD. We first give a brief overview of the existing decomposition methods (EMD, EWT, EAWD). The SSD method is then described in detail and applied to experimental time series of observations resulting from total column ozone measurements.
The results obtained will be compared with those provided by the previously mentioned decomposition methods. We will also compare the reconstruction qualities of the observed dynamics obtained from the SSD and MOD methods.
Keywords: time series analysis, adaptive time series decomposition, wavelet, phase space reconstruction, singular spectrum analysis
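The embedding step shared by MOD and SSA/SSD (mapping a scalar series into lag vectors, i.e. building the Hankel-structured trajectory matrix) can be sketched in a few lines; the delay `tau` and window length `m` are analysis choices:

```python
def delay_embedding(series, m, tau=1):
    """Method-of-delays (MOD) reconstruction: map a scalar series into
    m-dimensional lag vectors x_i = (s_i, s_{i+tau}, ..., s_{i+(m-1)tau}).
    Stacking these rows gives the Hankel trajectory matrix that SSA/SSD
    then factorizes (e.g. by SVD) into narrow-banded components."""
    n = len(series) - (m - 1) * tau
    return [[series[i + j * tau] for j in range(m)] for i in range(n)]
```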
Procedia PDF Downloads 104
419 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation
Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou
Abstract:
An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate its damage potential to structures. Yet, none of them has proved able to predict adequately the seismic damage of any structural system. Therefore, alternative advanced intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. The adequacy of a number of advanced earthquake intensity measures in predicting the structural damage of 3D R/C buildings under seismic excitation attacking the building at an arbitrary incident angle is investigated in the present paper. To achieve this purpose, a plan-symmetric and a plan-asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures determined for an incident angle of 0°, as well as their maximum values over all seismic incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures which exhibited strong correlation with the seismic damage of the two buildings. However, their adequacy for estimation of the structural damage depends on the response parameter adopted.
Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures
Procedia PDF Downloads 493
418 Development of a Predictive Model to Prevent Financial Crisis
Authors: Tengqin Han
Abstract:
Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and thus may lead to a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% over the past decades. However, potential risks exist behind the appearance of prosperity, and among these risks the credit system is the most significant one. Due to the long term and large balance of mortgages, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis the data are cleaned up, and through bivariate analysis the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the 2005 analysis data are split into two parts: 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data, with a KS of 33. This indicates the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. The data from 2005 to 2010 are used for model development and validation.
Compared with the base model (without macroeconomic variables), the KS increased from 41 to 44, indicating that the macroeconomic variables can be used to improve the separation power of the model and make the prediction more accurate.
Keywords: delinquency, mortgage, model development, model validation
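The KS statistic quoted in both parts measures the maximum vertical distance between the cumulative score distributions of good and delinquent accounts. A minimal sketch of the computation (the arrays used in the example are illustrative, not the study's data):

```python
import numpy as np

def ks_statistic(scores_good, scores_bad):
    """Kolmogorov-Smirnov separation between two score distributions,
    on the 0-100 scale conventional in credit scoring."""
    thresholds = np.sort(np.concatenate([scores_good, scores_bad]))
    cdf_good = np.searchsorted(np.sort(scores_good), thresholds,
                               side="right") / len(scores_good)
    cdf_bad = np.searchsorted(np.sort(scores_bad), thresholds,
                              side="right") / len(scores_bad)
    return 100.0 * np.max(np.abs(cdf_good - cdf_bad))
```

A KS of 31 therefore means the two cumulative distributions are at most 31 percentage points apart at the best cut-off score.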
Procedia PDF Downloads 228

417 Theoretical Prediction on the Lifetime of Sessile Evaporating Droplet in Blade Cooling
Authors: Yang Shen, Yongpan Cheng, Jinliang Xu
Abstract:
Effective blade cooling is of great significance for improving the performance of turbines. Mist cooling is emerging as a promising alternative to traditional single-phase cooling. In mist cooling, the injected droplets evaporate rapidly and cool down the blade surface through the absorbed latent heat, so the lifetime of an evaporating droplet becomes critical for the design of the blade's cooling passages. So far there have been extensive studies on droplet evaporation, but most apply an isothermal model. In reality, the surface cooling effect can influence droplet evaporation greatly: it can prolong the droplet evaporation lifetime significantly. In our study, a new theoretical model for sessile droplet evaporation with the surface cooling effect is built in toroidal coordinates. Three evaporation modes are analyzed over the evaporation lifetime: the “Constant Contact Radius” (CCR) mode, the “Constant Contact Angle” (CCA) mode and the “stick-slip” (SS) mode. The dimensionless number E0 is introduced to indicate the strength of the evaporative cooling; it is defined based on the thermal properties of the liquid and the atmosphere. Our model accurately predicts the lifetime of evaporation, as validated against available experimental data. The temporal variations of droplet volume, contact angle and contact radius are then presented under the CCR, CCA and SS modes, and the following conclusions are obtained.
1) The larger the dimensionless number E0, the longer the lifetime is in all three evaporation cases; 2) the droplet volume over time still follows the “2/3 power law” in the CCA mode, as in the isothermal model without the cooling effect; 3) in the SS mode, a large transition contact angle reduces the evaporation time in the CCR stage and increases it in the CCA stage, so the overall lifetime is increased; 4) a correction factor for the instantaneous volume of the droplet is derived so that the droplet lifetime can be predicted accurately. These findings may be of great significance for exploring the dynamics and heat transfer of sessile droplet evaporation.
Keywords: blade cooling, droplet evaporation, lifetime, theoretical analysis
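The “2/3 power law” in conclusion 2) refers to the standard constant-contact-angle result, sketched here in its common isothermal form (the paper's non-isothermal version may carry a modified prefactor through E0):

```latex
% In CCA mode the evaporation rate scales with the contact radius,
% i.e. dV/dt is proportional to -V^{1/3}, which integrates to a linear
% decay of V^{2/3}:
\frac{\mathrm{d}V}{\mathrm{d}t}\;\propto\;-V^{1/3}
\quad\Longrightarrow\quad
V(t)^{2/3} \;=\; V_0^{2/3}\left(1-\frac{t}{t_f}\right),
\qquad 0 \le t \le t_f ,
```

where V_0 is the initial droplet volume and t_f the total evaporation lifetime.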
Procedia PDF Downloads 142

416 Influence of Improved Roughage Quality and Period of Meal Termination on Digesta Load in the Digestive Organs of Goats
Authors: Rasheed A. Adebayo, Mehluli M. Moyo, Ignatius V. Nsahlai
Abstract:
Ruminants are known to relish roughage for productivity but the effect of its quality on digesta load in rumen, omasum, abomasum and other distal organs of the digestive tract is yet unknown. Reticulorumen fill is a strong indicator for long-term control of intake in ruminants. As such, the measurement and prediction of digesta load in these compartments may be crucial to productivity in the ruminant industry. The current study aimed at determining the effect of (a) diet quality on digesta load in digestive organs of goats, and (b) period of meal termination on the reticulorumen fill and digesta load in other distal compartments of the digestive tract of goats. Goats were fed with urea-treated hay (UTH), urea-sprayed hay (USH) and non-treated hay (NTH). At the end of eight weeks of a feeding trial period, upon termination of a meal in the morning, afternoon or evening, all goats were slaughtered in random groups of three per day to measure reticulorumen fill and digesta loads in other distal compartments of the digestive tract. Both diet quality and period affected (P < 0.05) the measure of reticulorumen fill. However, reticulorumen fill in the evening was larger (P < 0.05) than afternoon, while afternoon was similar (P > 0.05) to morning. Also, diet quality affected (P < 0.05) the wet omasal digesta load, wet abomasum, dry abomasum and dry caecum digesta loads but did not affect (P > 0.05) both wet and dry digesta loads in other compartments of the digestive tract. Period of measurement did not affect (P > 0.05) the wet omasal digesta load, and both wet and dry digesta loads in other compartments of the digestive tract except wet abomasum digesta load (P < 0.05) and dry caecum digesta load (P < 0.05). Both wet and dry reticulorumen fill were correlated (P < 0.05) with omasum (r = 0.623) and (r = 0.723), respectively. 
In conclusion, the reticulorumen fill of goats decreased as roughage quality improved; the period of meal termination and of measurement of the fill is a key factor in the quantity of the digesta load.
Keywords: digesta, goats, meal termination, reticulorumen fill
Procedia PDF Downloads 373

415 Application of Artificial Neural Network for Single Horizontal Bare Tube and Bare Tube Bundles (Staggered) of Large Particles: Heat Transfer Prediction
Authors: G. Ravindranath, S. Savitha
Abstract:
This paper presents a heat transfer analysis of a single horizontal bare tube and of a staggered arrangement of bare tube bundles in a gas-solid (air-solid) fluidized bed, with predictions made by an Artificial Neural Network (ANN) based on experimental data. A fluidized bed provides a nearly isothermal environment with a high heat transfer rate to submerged objects: owing to thorough mixing and the large contact area between the gas and the particles, a fully fluidized bed has little temperature variation, and the gas leaves at a temperature close to that of the bed. The average heat transfer coefficient was measured by a local thermal simulation technique in a cold bubbling air-fluidized bed of size 0.305 m × 0.305 m. Studies were conducted for a single horizontal bare tube of 305 mm length and 28.6 mm outer diameter, and for bare tube bundles in a staggered arrangement, using beds of large particles (ragi and mustard, with average particle diameter greater than 1 mm). Within the range of experimental conditions, the influence of bed particle diameter (Dp) and fluidizing velocity (U), both significant parameters affecting heat transfer, was studied. Artificial Neural Networks have been receiving increasing attention for simulating engineering systems due to characteristics such as learning capability, fault tolerance, and non-linearity. Here, a feed-forward architecture trained by the back-propagation technique is adopted to reproduce the heat transfer behaviour found in the experimental results. The ANN is designed to suit the present system, which has three inputs and two outputs. The network predictions are found to be in very good agreement with the experimentally observed values of the bare-tube heat transfer coefficient (hb) and Nusselt number (Nub).
Keywords: fluidized bed, large particles, particle diameter, ANN
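A minimal sketch of the type of network described, a feed-forward net with one hidden layer trained by plain back-propagation; the synthetic data, hidden width of 8 and learning rate below are placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 inputs (e.g. particle diameter, fluidizing
# velocity, tube position) -> 2 outputs (e.g. hb and Nub), all scaled to 0-1.
X = rng.uniform(size=(200, 3))
Y = np.column_stack([X[:, 0] * X[:, 1], 0.5 * (X[:, 0] + X[:, 2])])

# One hidden sigmoid layer, linear output layer.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 2)); b2 = np.zeros(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
losses = []
for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)            # forward pass
    P = H @ W2 + b2
    err = P - Y                         # dLoss/dP for mean-squared error
    losses.append(float(np.mean(err ** 2)))
    # Back-propagate the error and take a gradient step.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

In the study the three inputs would be quantities such as Dp and U, and the two outputs hb and Nub, each suitably scaled before training.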
Procedia PDF Downloads 364

414 Temperature and Admixtures Effects on the Maturity of Normal and Super Fine Ground Granulated Blast Furnace Slag Mortars for the Precast Concrete Industry
Authors: Matthew Cruickshank, Chaaruchandra Korde, Roger P. West, John Reddy
Abstract:
Precast concrete element exports are growing in importance in Ireland's concrete industry and, with the increased global focus on reducing carbon emissions, the industry is exploring more sustainable alternatives, such as using ground granulated blast-furnace slag (GGBS) as a partial replacement for Portland cement. It is well established that GGBS, with its low early-age strength development, has limited use in precast manufacturing because of the need for early de-moulding, cutting of pre-stressed strands and lifting. Given this dichotomy, the effects of temperature and admixtures are explored in an attempt to achieve the required very early-age strength. Testing of mortar strength is mandated in the European cement standard, so here, with 50% GGBS and Super Fine GGBS, under three admixture conditions (none, conventional accelerator, novel accelerator) and two early-age curing temperature conditions (20°C and 35°C), standard mortar strengths are measured at six ages (16 hours and 1, 2, 3, 7 and 28 days). The present paper describes the effort towards developing maturity curves to aid in understanding the effect of these accelerating admixtures and of GGBS fineness on slag cement mortars, allowing prediction of their strength with time and temperature. This study is of particular importance to the precast industry, where concrete temperature can be controlled. For the climatic conditions in Ireland, heating precast beds for long hours adds cost and also contributes to the carbon footprint of the products. When transitioned from mortar to concrete, these maturity curves are expected to play a vital role in predicting the strength of GGBS concrete at a very early age, prior to demoulding.
Keywords: accelerating admixture, early age strength, ground granulated blast-furnace slag, GGBS, maturity, precast concrete
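Maturity methods combine the full time-temperature history into a single index so that strength can be read off one curve. As a sketch, the classic Nurse-Saul form is shown below; the paper does not state which maturity function it adopts, and the -10 °C datum temperature is an assumption:

```python
def nurse_saul_maturity(temps_c, dt_hours, datum_c=-10.0):
    """Nurse-Saul maturity index M = sum((T - T0) * dt) in degC-hours,
    where T is the average concrete temperature over each interval and
    T0 is the datum below which no strength gain is assumed."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# Illustrative: 16 hours of curing at the two test temperatures.
m20 = nurse_saul_maturity([20.0] * 16, dt_hours=1.0)  # 480 degC-h
m35 = nurse_saul_maturity([35.0] * 16, dt_hours=1.0)  # 720 degC-h
```

On this index, 16 hours at 35 °C is equivalent to 24 hours at 20 °C, which is the kind of trade-off a maturity curve lets a precaster exploit when deciding whether bed heating is worth its cost.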
Procedia PDF Downloads 157

413 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings
Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier
Abstract:
Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. The process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite Element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes in support of industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients and strain rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming experimental trials otherwise needed when introducing new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the normal contact pressure versus friction coefficient relationship is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. Compared to previous research, this new approach greatly improves the prediction of the forming geometry and the forming load during the staking operation. The paper also aims to standardise the FE approach to modelling ring compression tests and determining the friction calibration charts.
Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests
Procedia PDF Downloads 205

412 TNFRSF11B Gene Polymorphisms A163G and G1181C in Prediction of Osteoporosis Risk
Authors: I. Boroňová, J.Bernasovská, J. Kľoc, Z. Tomková, E. Petrejčíková, D. Gabriková, S. Mačeková
Abstract:
Osteoporosis is a complex disease characterized by low bone mineral density, which is determined by an interaction of genetics with metabolic and environmental factors. Current research in the genetics of osteoporosis focuses on the identification of responsible genes and polymorphisms. The TNFRSF11B gene plays a key role in bone remodeling. The aim of this study was to investigate the genotype and allele distributions of the A163G (rs3102735) osteoprotegerin gene promoter polymorphism and the G1181C (rs2073618) osteoprotegerin first exon polymorphism in a group of 180 unrelated postmenopausal women with diagnosed osteoporosis and 180 normal controls. Genomic DNA was isolated from peripheral blood leukocytes using standard methodology. Genotyping for the presence of the different polymorphisms was performed using Custom TaqMan® SNP Genotyping assays. Hardy-Weinberg equilibrium was tested for each SNP in both groups of participants using the chi-square (χ2) test. The distribution of the investigated genotypes in the group of patients with osteoporosis was as follows: AA (66.7%), AG (32.2%), GG (1.1%) for the A163G polymorphism; GG (19.4%), CG (44.4%), CC (36.1%) for the G1181C polymorphism. The distribution of genotypes in the normal controls was as follows: AA (71.1%), AG (26.1%), GG (2.8%) for the A163G polymorphism; GG (22.2%), CG (48.9%), CC (28.9%) for the G1181C polymorphism. For the A163G polymorphism, the variant G allele was more common among patients with osteoporosis: 17.2% versus 15.8% in the normal controls. Likewise, for the G1181C polymorphism, the C allele occurred more frequently in the group of patients with osteoporosis (58.3% versus 53.3%). Genotype and allele distributions showed no significant differences (A163G: χ2=0.270, p=0.605; χ2=0.250, p=0.616; G1181C: χ2=1.730, p=0.188; χ2=1.820, p=0.177). Our results represent an initial study; further studies on larger cohorts, including association studies, will be carried out.
Knowing the distribution of genotypes is important for assessing the impact of these polymorphisms on the various parameters associated with osteoporosis. Screening to identify “at-risk” women likely to develop osteoporosis, followed by early intervention, appears to be the most effective strategy for substantially reducing the risk of osteoporosis.
Keywords: osteoporosis, real-time PCR method, SNP polymorphisms
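The Hardy-Weinberg test mentioned above compares observed genotype counts with those expected from the estimated allele frequencies; a minimal sketch (the counts in the checks are illustrative, not the study's raw data):

```python
def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit statistic of observed genotype counts
    against Hardy-Weinberg expectations (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)       # estimated frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

The statistic is then compared against the χ² distribution with one degree of freedom (critical value 3.84 at p = 0.05).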
Procedia PDF Downloads 329

411 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen
Abstract:
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction; it is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on a discriminative Encoder-Decoder architecture have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model there. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and the logic information explicitly contained in the target explanation is easy to extract. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called the Variational Explanation Generator (VariationalEG), with a latent variable to model this space. Trained under the guidance of the explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable, to force it to encode logic information. Experiments on the explanation generation benchmark e-SNLI (explanation-Stanford Natural Language Inference) demonstrate that the proposed VariationalEG achieves significant improvement over previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
Keywords: natural language inference, explanation generation, variational auto-encoder, generative model
Procedia PDF Downloads 151

410 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks
Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer
Abstract:
New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a period of time; once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction, to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) models showed limited efficacy in interpreting the chemical footprints, owing to large non-linear relationships between predictors and predictand in a large sample set, likely caused by honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing the hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey versus multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics
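The three figures of merit quoted (R², RMSE, RPD) are linked: RPD is the spread of the reference values divided by the prediction error. A small sketch of how they are computed together (the population standard deviation is used here; conventions differ, so quoted RPDs can vary slightly between papers):

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (R^2, RMSE, RPD) for reference values and predictions.
    RPD is taken as SD(reference) / RMSE."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / n)
    return 1.0 - ss_res / ss_tot, rmse, sd / rmse
```

With these definitions RPD equals 1/√(1 − R²) on the same sample set; the quoted honey metrics come from separate calibration and validation sets, so the identity only holds approximately there.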
Procedia PDF Downloads 139

409 Prediction of Positive Cloud-to-Ground Lightning Striking Zones for Charged Thundercloud Based on Line Charge Model
Authors: Surajit Das Barman, Rakibuzzaman Shah, Apurv Kumar
Abstract:
Bushfires are known as one of the principal factors creating pyrocumulus thunderclouds, which cause the ignition of new fires through pyrocumulonimbus (pyroCb) lightning strikes and create major losses of lives and property worldwide. Conceptual model-based risk planning would be beneficial for predicting the lightning striking zones on the surface of the earth underneath a pyroCb thundercloud. A pyroCb thundercloud can generate both positive cloud-to-ground (+CG) and negative cloud-to-ground (-CG) lightning, of which +CG tends to ignite more bushfires and cause massive damage to nature and infrastructure. In this paper, a simple line-charge-structured thundercloud model is constructed in 2-D coordinates using the method of image charges to predict the probable +CG lightning striking zones on the earth's surface for two conceptual thundercloud charge configurations: a tilted dipole, and a conventional tripole structure with an excessive lower positive charge region, both of which lead to the production of +CG lightning. The electric potential and surface charge density along the earth's surface are investigated for both structures by continuously adjusting the position and charge density of their charge regions. Simulation results for the tilted dipole structure confirm that a down-shear extension of the upper positive charge region in the direction of the cloud's forward flank by 4 to 8 km results in negative surface charge density, with +CG lightning expected to strike within 7.8 km to 20 km of the earth periphery in the direction of the cloud's forward flank. The conceptual tripole charge structure with an enhanced lower positive charge region, on the other hand, develops negative surface charge density on the earth's surface in the range |x| < 6.5 km beneath the thundercloud and highly favors the production of +CG lightning strikes.
Keywords: pyrocumulonimbus, cloud-to-ground lightning, charge structure, surface charge density, forward flank
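For intuition on the image-charge construction: an infinite line charge λ at height h above a grounded plane, together with its image -λ at depth h, gives the induced surface charge density σ(x) = -λh/(π(x² + h²)). A sketch (the 8 km height and -0.5 C/m line density are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def induced_surface_density(x, h, lam):
    """Induced surface charge density (C/m^2) on a grounded plane below an
    infinite line charge lam (C/m) at height h, from the image-charge method:
    sigma(x) = -lam * h / (pi * (x**2 + h**2))."""
    return -lam * h / (np.pi * (x ** 2 + h ** 2))

# Sanity check: the induced charge per unit length integrates to -lam.
h, lam = 8e3, -0.5                          # assumed 8 km height, -0.5 C/m
x = np.linspace(-500e3, 500e3, 200_001)     # +/- 500 km along the ground
total = float(np.sum(induced_surface_density(x, h, lam)) * (x[1] - x[0]))
```

The sign of σ(x) along the ground, computed this way for each charge region of the dipole or tripole, is the quantity the paper maps to locate the zones where +CG strikes are favored.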
Procedia PDF Downloads 113

408 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion
Authors: Omran M. Kenshel, Alan J. O'Connor
Abstract:
Estimating the service life of Reinforced Concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners and engineers. Traditionally, bridge owners and engineers have relied largely on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods are needed to aid in making reliable and more accurate predictions of the service life of RC structures, so that funds can be directed to the bridges found to be most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted; this paper considers the service life of the structure from the Structural Capacity viewpoint only. Considering the great variability associated with the parameters involved in the estimation process, a probabilistic approach is most suitable. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the reliability (i.e. probability of failure) of the structure under consideration. In this paper the authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a given deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability (SV) is only evident if the reliability of the structure is governed by flexure failure rather than by shear failure.
Keywords: chloride-induced corrosion, Monte Carlo simulation, reinforced concrete, spatial variability
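A Monte Carlo estimate of the probability of failure can be sketched for the simplest limit state g = R − S, with capacity R and load effect S both normal; the distribution parameters below are illustrative placeholders, not values from the bridge study (where R degrades over time with chloride-induced corrosion):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

# Illustrative girder capacity R and load effect S (kN*m), assumed normal.
mu_r, sd_r = 5000.0, 400.0
mu_s, sd_s = 3500.0, 500.0

n = 1_000_000
r = rng.normal(mu_r, sd_r, n)
s = rng.normal(mu_s, sd_s, n)
pf_mc = float(np.mean(r - s < 0.0))      # estimated probability of failure

# Closed form for two independent normals, for comparison.
beta = (mu_r - mu_s) / sqrt(sd_r ** 2 + sd_s ** 2)   # reliability index
pf_exact = 0.5 * (1.0 - erf(beta / sqrt(2.0)))
```

Spatial variability enters such a model by sampling the deterioration parameters as correlated random fields along the girder, with the correlation length setting how quickly the field decorrelates from point to point.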
Procedia PDF Downloads 473

407 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers
Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato
Abstract:
The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and are therefore important for the design of structures. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representing structures such as solar panels and billboards. Experiments were conducted in the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in the two boundary layers, and the longitudinal integral length scale within these boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as the turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plate varied with turbulence intensity and length scale.
The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence
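The pressure coefficient reported is the standard normalisation of surface pressure by the free-stream dynamic pressure (a sketch using the usual conventions, not notation taken from the paper):

```python
def pressure_coefficient(p, p_ref, rho, u):
    """Cp = (p - p_ref) / (0.5 * rho * U**2), with p the surface pressure,
    p_ref the free-stream static pressure, rho the air density and
    U the free-stream velocity."""
    return (p - p_ref) / (0.5 * rho * u ** 2)
```

A peak Cp of 3, as measured in the 26% turbulence case, means the instantaneous peak pressure reached three times the free-stream dynamic pressure, which is why peak rather than mean loads govern the design of such plates.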
Procedia PDF Downloads 139

406 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris
Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini
Abstract:
International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate this innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit aims to deploy a taped tether that can be activated at the spacecraft's end of life to perform an autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to that of other disposal methods. In addition, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of tether length, width, and thickness are evaluated for various scenarios (i.e., different initial orbital parameters), and the results are compared to other end-of-life disposal methods with similar allocated resources. The performance of a more innovative configuration, in which the tape is coated with a low-work-function thermionic (LWT) material so that no active cathode component is required, is also briefly discussed. The results show that the electrodynamic tether option can be a competitive and high-performing solution for satellite disposal compared to other deorbit technologies.
Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers
Procedia PDF Downloads 177

405 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks
Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton
Abstract:
Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and by the availability of qualified personnel, so new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in the condition monitoring of blades. The analysis provides useful information on the different modes of vibration and their natural frequencies by exploring the different shapes that can be taken up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability in real-life scenarios as part of a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation at low computational cost, for which traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that the artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes. This is achieved without the need for extensive signal analysis. The approach offers the advantage that the network can classify mode shapes and be employed in real time, with simplicity of implementation and accuracy of prediction. The work paves the way for the further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition
Procedia PDF Downloads 156

404 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations
Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward
Abstract:
A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic, computed from a single lead of an electrocardiograph, to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, and to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient into H-RRT and NH-RRT cases. The analytic output and the categorization were then compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values, and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team
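The four reported figures all derive from the 2x2 confusion matrix of analytic output against the clinicians' H-RRT/NH-RRT labels; a minimal sketch (the example counts are illustrative, since the abstract does not give the H-RRT/NH-RRT split of the 21 patients):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# A perfect classification, as reported, makes all four metrics 1.0.
perfect = diagnostic_metrics(tp=9, fp=0, tn=12, fn=0)
```

With only 21 patients, a single misclassification would already move several of these metrics noticeably, which is worth keeping in mind when reading the 100% figures from a pilot cohort.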
Procedia PDF Downloads 143

403 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel
Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi
Abstract:
The yield point represents the upper limit of the forces which can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture change depending on this condition. As it is difficult to accurately detect the yield points at the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, with the deformation measured by various strain gauges and the surface temperature monitored with a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip which occurs, due to the thermoelastic effect, during plastic deformation. The scatter in the data was checked by performing a repeatability analysis. The effects of ambient temperature imperfections and of the light source were checked by carrying out the tests both in the daytime and at midnight; from the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera, it can be concluded that the camera is independent of the testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
Keywords: signal-to-noise ratio, thermoelastic effect, thermography, yield point
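The SNR check described can be sketched as the usual decibel ratio of signal power to residual noise power (a minimal illustration; the abstract does not specify the exact SNR definition used):

```python
import numpy as np

def snr_db(signal, noisy):
    """Signal-to-noise ratio in dB, 10*log10(P_signal / P_noise), taking
    the noise as the residual between the noisy record and the clean
    reference signal."""
    signal = np.asarray(signal, dtype=float)
    noise = np.asarray(noisy, dtype=float) - signal
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
```

Comparable SNR values for the daytime and midnight recordings are what justify the conclusion that the camera output is insensitive to the visible light source.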
Procedia PDF Downloads 107