Search results for: shock sensitivity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2247


537 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customer satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance of each random innovation to the variables in the VAR, to examine the effect of a standard-deviation shock to one of the innovations on current and future values through the impulse response function, and to determine the causal relationship between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations against the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector.
The Range of Values (ROV) method, in turn, was used to rank the contributions of the seven innovations to performance. The analysis was done over the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT, and POS transactions was positive and significant in the periods of analysis. The paper also confirmed with the entropy and range-of-values methods that, in the long run, both CHEQUE and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer, and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
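As a rough illustration of the entropy step of MCDM described in the abstract, the following sketch computes entropy weights for two performance criteria from a purely hypothetical decision matrix (the values and the reduced set of innovations are assumptions, not the paper's data):

```python
import numpy as np

# Hypothetical decision matrix: rows = innovations, columns = criteria (ROA, ROE).
# These numbers are illustrative only.
X = np.array([
    [0.9, 0.6],   # CHEQUE
    [0.8, 0.7],   # NEFT
    [0.5, 0.4],   # POS
])

P = X / X.sum(axis=0)                 # normalize each criterion column
k = 1.0 / np.log(X.shape[0])          # entropy constant 1/ln(m), m = alternatives
E = -k * (P * np.log(P)).sum(axis=0)  # entropy of each criterion, in [0, 1]
d = 1 - E                             # degree of diversification
w = d / d.sum()                       # entropy weights of the criteria
print(w)
```

The criterion with the larger weight is the one whose values spread more across alternatives, i.e., the one that better discriminates the innovations' contributions.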

Keywords: Bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 96
536 The Role of Uterine Artery Embolization in the Management of Postpartum Hemorrhage

Authors: Chee Wai Ku, Pui See Chin

Abstract:

As an emerging alternative to hysterectomy, uterine artery embolization (UAE) has been widely used in the management of fibroids and in controlling postpartum hemorrhage (PPH) unresponsive to other therapies. Research has shown UAE to be a safe, minimally invasive procedure with few complications and minimal effects on future fertility. We present two cases highlighting the use of UAE in preventing PPH in a patient with a large fibroid at the time of cesarean section, and in treating secondary PPH refractory to other therapies in another patient. The first case is a 36-year-old primiparous woman who booked at 18+6 weeks gestation with a 13.7 cm subserosal fibroid at the lower anterior wall of the uterus near the cervix and a 10.8 cm subserosal fibroid in the left wall. Prophylactic internal iliac artery occlusion balloons were placed prior to the planned classical midline cesarean section and inflated once the baby was delivered. Both uterine arteries were embolized subsequently. The estimated blood loss (EBL) was 400 ml, and hemoglobin (Hb) remained stable at 10 g/dL. An ultrasound scan 2 years postnatally showed stable uterine fibroids of 10.4 and 7.1 cm, significantly smaller than before. The second case is a 40-year-old G2P1 with a previous cesarean section for failure to progress. There were no antenatal problems, and there was no placenta previa. She presented in term labour and underwent an emergency cesarean section for failed vaginal birth after cesarean. Intraoperatively, extensive adhesions were noted with the bladder drawn high, and EBL was 300 ml. Postpartum recovery was uneventful. She presented with secondary PPH 3 weeks later, complicated by hypovolemic shock, and underwent an emergency examination under anesthesia and evacuation of the uterus, with an EBL of 2500 ml. Histology showed decidua with chronic inflammation. She was discharged well with no further PPH, but returned one week later with a further episode of secondary PPH.
Bedside ultrasound showed that the endometrium was thin, with no evidence of retained products of conception. Uterotonics were administered, and examination under anesthesia was performed, followed by insertion of a uterine Bakri balloon and a vaginal pack. EBL was 1000 ml. There was no definite cause of PPH, with no uterine atony or retained products of conception. To evaluate a potential cause, a pelvic angiogram and a superselective left uterine arteriogram were performed, which showed profuse contrast extravasation and acute bleeding from the left uterine artery. Superselective embolization of the left uterine artery was performed; no gross contrast extravasation from the right uterine artery was seen. These two cases demonstrate the efficacy of UAE: first, the prophylactic use of intra-arterial balloon catheters in pregnant patients with large fibroids, and second, the diagnosis and management of secondary PPH refractory to uterotonics and uterine tamponade. In both cases, the need for hysterectomy via laparotomy was avoided, preserving future fertility. UAE should be a consideration for hemodynamically stable patients in centres with access to interventional radiology.

Keywords: fertility preservation, secondary postpartum hemorrhage, uterine embolization, uterine fibroids

Procedia PDF Downloads 170
535 Transcriptome Analysis Reveals Role of Long Non-Coding RNA NEAT1 in Dengue Patients

Authors: Abhaydeep Pandey, Shweta Shukla, Saptamita Goswami, Bhaswati Bandyopadhyay, Vishnampettai Ramachandran, Sudhanshu Vrati, Arup Banerjee

Abstract:

Background: Long non-coding RNAs (lncRNAs) are important regulators of gene expression and play an important role in viral replication and disease progression. The role of lncRNA genes in the pathogenesis of dengue virus infection is currently unknown. Methods: To gain additional insights, we used unbiased RNA sequencing followed by an in silico analysis approach to identify differentially expressed lncRNAs and genes associated with dengue disease progression. We then focused on the lncRNA NEAT1 (Nuclear Paraspeckle Assembly Transcript 1), as it was found to be differentially expressed in PBMCs of dengue-infected patients. Results: NEAT1 expression was significantly down-regulated, relative to uncomplicated dengue infection (DI), as patients developed complications. Moreover, pairwise analysis of follow-up patients confirmed that suppression of NEAT1 expression was associated with a rapid fall in platelet count in dengue-infected patients. Severe dengue (DS) patients (n=18; platelet count < 20K) who recovered from infection showed high NEAT1 expression, as observed in healthy donors. By co-expression network analysis and subsequent validation, we found that expression of the coding gene IFI27 was significantly up-regulated in severe dengue cases and negatively correlated with NEAT1 expression. To discriminate DI from severe dengue, a receiver operating characteristic (ROC) curve was calculated; it revealed a sensitivity and specificity of 100% (95% CI: 85.69-97.22) and an area under the curve (AUC) of 0.97 for NEAT1. Conclusions: Altogether, our first observations demonstrate that monitoring NEAT1 and IFI27 expression in dengue patients could be useful in understanding dengue virus-induced disease progression, and these genes may be involved in pathophysiological processes.
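The AUC reported above has a simple rank interpretation: it is the probability that a randomly chosen severe case scores above a randomly chosen non-severe case. A minimal sketch with synthetic scores (not the study's data; the values below are assumptions):

```python
# Toy illustration of the ROC analysis: a severity score built, say, from
# inverse NEAT1 expression, so that severe dengue cases score higher.
def auc(scores_pos, scores_neg):
    """Probability a positive case ranks above a negative case (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

severe = [0.9, 0.8, 0.85, 0.95]  # hypothetical scores, severe dengue (DS)
mild = [0.2, 0.3, 0.1, 0.4]      # hypothetical scores, dengue infection (DI)
print(auc(severe, mild))  # → 1.0 for perfectly separated groups
```

An AUC of 0.97, as reported for NEAT1, means near-perfect but not complete separation of the two groups.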

Keywords: dengue, lncRNA, NEAT1, transcriptome

Procedia PDF Downloads 292
534 Clinical Relevance of TMPRSS2-ERG Fusion Marker for Prostate Cancer

Authors: Shalu Jain, Anju Bansal, Anup Kumar, Sunita Saxena

Abstract:

Objectives: The novel TMPRSS2:ERG gene fusion is a common somatic event in prostate cancer that in some studies is linked with a more aggressive disease phenotype. This study aims to determine whether clinical variables are associated with the presence of the TMPRSS2:ERG fusion gene transcript in Indian patients with prostate cancer. Methods: We evaluated the association of clinical variables with the presence or absence of the TMPRSS2:ERG gene fusion in prostate cancer and BPH patients. Patients referred for prostate biopsy because of an abnormal DRE and/or elevated sPSA were enrolled in this prospective clinical study. TMPRSS2:ERG mRNA copies were quantified in prostate biopsy samples (N=42) using a TaqMan-based real-time PCR assay. The T2:ERG assay detects the gene fusion mRNA isoform joining TMPRSS2 exon 1 to ERG exon 4. Results: Histopathology confirmed 25 cases as prostate adenocarcinoma (PCa) and 17 as benign prostatic hyperplasia (BPH). Of the 25 PCa cases, 16 (64%) were T2:ERG fusion positive, while all 17 BPH controls were fusion negative. The T2:ERG fusion transcript was thus exclusively specific for prostate cancer, as no BPH case carried the fusion, giving 100% specificity. The positive predictive value of the fusion marker for prostate cancer is therefore 100%, and the negative predictive value is 65.3%. The T2:ERG fusion marker is significantly associated with clinical variables such as the number of positive cores in the prostate biopsy, Gleason score, serum PSA, perineural invasion, perivascular invasion, and periprostatic fat involvement. Conclusions: Prostate cancer is a heterogeneous disease that may be defined by molecular subtypes such as the TMPRSS2:ERG fusion. In the present prospective study, the T2:ERG quantitative assay demonstrated high specificity for predicting biopsy outcome; sensitivity was similar to the prevalence of T2:ERG gene fusions in prostate tumors.
These data suggest that further improvement in diagnostic accuracy could be achieved using a nomogram that combines T2:ERG with other markers and risk factors for prostate cancer.
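The reported performance figures follow directly from the 2x2 counts in the abstract; a quick check (matching the paper's numbers up to rounding):

```python
# 2x2 counts from the abstract: 16 of 25 PCa biopsies fusion-positive, 0 of 17 BPH.
TP, FN = 16, 9    # prostate cancer: fusion-positive / fusion-negative
FP, TN = 0, 17    # BPH: fusion-positive / fusion-negative

sensitivity = TP / (TP + FN)   # 16/25 = 0.64, matching the fusion prevalence
specificity = TN / (TN + FP)   # 17/17 = 1.00
ppv = TP / (TP + FP)           # 16/16 = 1.00
npv = TN / (TN + FN)           # 17/26 ≈ 0.654, reported as 65.3%
print(sensitivity, specificity, ppv, npv)
```

This also makes the concluding remark concrete: with zero false positives, sensitivity equals the fraction of tumors carrying the fusion, which is why it mirrors the fusion's prevalence.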

Keywords: prostate cancer, genetic rearrangement, TMPRSS2:ERG fusion, clinical variables

Procedia PDF Downloads 428
533 Characterising the Dynamic Friction in the Staking of Plain Spherical Bearings

Authors: Jacob Hatherell, Jason Matthews, Arnaud Marmier

Abstract:

Anvil staking is a cold-forming process used in the assembly of plain spherical bearings into a rod-end housing. The process ensures that the bearing outer lip conforms to the chamfer in the matching rod end to produce a lightweight mechanical joint with sufficient strength to meet the push-out load requirement of the assembly. Finite element (FE) analysis is used extensively to predict the behaviour of metal flow in cold-forming processes to support industrial manufacturing and product development. Ongoing research aims to validate FE models across a wide range of bearing and rod-end geometries by systematically isolating and understanding the uncertainties caused by variations in material properties, load-dependent friction coefficients, and strain-rate sensitivity. The improved confidence in these models aims to eliminate the costly and time-consuming experimental trials involved in the introduction of new bearing designs. Previous literature has shown that friction coefficients do not remain constant during cold-forming operations; however, the understanding of this phenomenon varies significantly, and it is rarely implemented in FE models. In this paper, a new approach to evaluating the relationship between normal contact pressure and friction coefficient is outlined, using friction calibration charts generated via iterative FE models and ring compression tests. Compared to previous research, this new approach greatly improves the prediction of the formed geometry and the forming load during the staking operation. The paper also aims to standardise the FE approach to modelling ring compression tests and determining friction calibration charts.

Keywords: anvil staking, finite element analysis, friction coefficient, spherical plain bearing, ring compression tests

Procedia PDF Downloads 190
532 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and greater public demand regarding gas emissions and their effects on the environment and human health are pushing the automotive industry to reinforce research focused on reducing levels of contamination. This reduction can be achieved through improvements to internal combustion engines that reduce both specific fuel consumption and air pollutant emissions. These improvements can be investigated through numerical simulation, a technique that works alongside experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of varying the swirl and tumble coefficients on the performance and air pollutant emissions of an engine. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate at the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influence can be observed in comparison with the previous results.

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 85
531 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods for producing nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing in which biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides, and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy, and impedance spectroscopy. This presentation provides an overview of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in biosensing experiments, computational and statistical methods have been used to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas disease and leishmaniasis. Optimization of biosensing may include combining another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity with impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 314
530 Linking Temporal Changes of Climate Factors with Staple Cereal Yields in Southern Burkina Faso

Authors: Pius Borona, Cheikh Mbow, Issa Ouedraogo

Abstract:

In the Sahel, climate variability has been associated with a complex web of direct and indirect impacts. This natural phenomenon has been an impediment to agro-pastoral communities, who face uncertainty in their farming activities, their key source of livelihood. In this context, the role of climate variability in influencing the performance, quantity, and quality of staple cereal yields, vital for food and nutrition security, has been a topic of importance. The response of crops, and the resulting yield variability, is also a subject of immense debate due to the complexity of crop development at different stages. This complexity is further compounded by the influence of slowly changing non-climatic factors. With these challenges in mind, the present paper first explores the occurrence of climate variability at inter-annual and inter-decadal levels in southern Burkina Faso, as evidenced by variation in total annual rainfall and the number of rainy days, among other climatic descriptors. Further, it is shown how district-scale yields of the staple cereals in the study area, including maize, sorghum, and millet, associate to varying degrees with the inter-annual variation of selected climate variables. Statistical models show that the three cereals are broadly sensitive to the length of the growing period and the total number of dry days in the growing season. Maize yields, in addition, relate strongly to variation in rainfall amount (R2 = 51.8%), showing high moisture dependence during critical growth stages. Our conclusions emphasize the adoption of efficient water utilization platforms, especially those that have evidently increased yields, and the strengthening of forecast dissemination.
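The R2 statistic quoted for maize is the squared correlation between seasonal rainfall and yield: about half of the inter-annual yield variance is explained by rainfall amount. A small sketch with synthetic numbers (the rainfall and yield series below are illustrative assumptions, not the study's data):

```python
# Squared Pearson correlation (R^2) of a simple yield-vs-rainfall regression.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance sum
    sxx = sum((a - mx) ** 2 for a in x)                   # variance sum of x
    syy = sum((b - my) ** 2 for b in y)                   # variance sum of y
    return sxy * sxy / (sxx * syy)

rain = [620, 710, 540, 800, 660, 590]    # mm per season (synthetic)
maize = [1.4, 1.9, 1.1, 2.0, 1.5, 1.6]   # t/ha (synthetic)
print(round(r_squared(rain, maize), 3))
```

An R2 near 0.5, as reported, would leave the remaining yield variance to factors such as dry-day counts, growing-period length, and non-climatic drivers.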

Keywords: climate variability, cereal yields, seasonality, rain fed farming, Burkina Faso, rainfall

Procedia PDF Downloads 183
529 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security requires permitting the actions of authorized users on the DB and the objects inside it, while prohibiting those of non-authorized users and intruders. Successful organizations demand the confidentiality of their DBs: they do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are therefore the central security concerns. There are four types of controls for DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (Logistic, Hénon, etc.) systems. The most important characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. The pseudo-random number generators (PRNGs) derived from the different chaotic algorithms are implemented in Matlab, and their statistical properties are evaluated using NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), and the statistical properties of the resulting ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Hénon maps, where the two chaotic maps run side-by-side starting from random, independent initial conditions and parameters (the encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
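A minimal sketch of the hybrid idea (not the paper's Matlab implementation): two logistic maps, x → r·x·(1−x), iterated side-by-side from independent keys and combined into a keystream that XOR-encrypts a record. The initial conditions and parameters below are illustrative placeholders for the keys:

```python
# Keystream from two logistic maps running side-by-side; (x0, r) pairs act as keys.
def logistic_prng_bytes(n, x1=0.123456, x2=0.654321, r1=3.99, r2=3.97):
    out = bytearray()
    for _ in range(n):
        x1 = r1 * x1 * (1 - x1)   # iterate first chaotic map (stays in (0, 1))
        x2 = r2 * x2 * (1 - x2)   # iterate second chaotic map
        # combine both orbits into one keystream byte
        out.append((int(x1 * 256) ^ int(x2 * 256)) & 0xFF)
    return bytes(out)

key = logistic_prng_bytes(16)
cipher = bytes(p ^ k for p, k in zip(b"database record!", key))
plain = bytes(c ^ k for c, k in zip(cipher, key))
print(plain)  # → b'database record!'
```

The sensitivity to initial conditions means a receiver with even a slightly different (x0, r) produces a completely different keystream, which is what makes these parameters usable as keys; a production design would still need the statistical vetting (NIST suite) described above.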

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 251
528 Computational Fluid Dynamics Based Analysis of Heat Exchanging Performance of Rotary Thermal Wheels

Authors: H. M. D. Prabhashana Herath, M. D. Anuradha Wickramasinghe, A. M. C. Kalpani Polgolla, R. A. C. Prasad Ranasinghe, M. Anusha Wijewardane

Abstract:

The demand for thermal comfort in buildings in hot and humid climates is increasing progressively. In general, buildings in such climates spend more than 60% of their total energy cost on the air conditioning (AC) system. Hence, it is desirable to install energy-efficient AC systems, or to integrate energy recovery systems into both new and existing AC systems wherever possible, to reduce the energy consumed by the AC system. Integrating a rotary thermal wheel as the energy recovery device of an existing AC system has proven very promising, with attractive payback periods of less than 5 years. A rotary thermal wheel can be located in the air handling unit (AHU) of a central AC system to recover the energy available in the return air stream. In this study, a sensitivity analysis was performed using CFD (Computational Fluid Dynamics) software to determine the optimum design parameters (i.e., rotary speed and the parameters of the matrix profile) of a rotary thermal wheel for hot and humid climates. The simulations were performed for a sinusoidal matrix geometry. Variations of the sinusoidal matrix parameters, i.e., span length and height, were also analyzed to understand the heat exchanging performance and the pressure drop induced by the air flow. The results show that the heat exchanging performance increases with increasing wheel rpm; however, the rate of improvement falls off at higher rpm, so it is more advisable to operate the wheel at 10-20 rpm. For the geometry, it was found that sinusoidal channels with shorter spans and greater heights have higher heat exchanging capability. Among the sinusoidal profiles analyzed in the study, the geometry with 4 mm height and 3 mm span width shows better performance than the other combinations.

Keywords: air conditioning, computational fluid dynamics, CFD, energy recovery, heat exchangers

Procedia PDF Downloads 113
527 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

Taguchi robust design is a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources: it entails designing ideal goods by developing a product that has minimal variance in its characteristics while meeting the exact desired performance. This paper examined the concept of this approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, based on the four parameters and three levels of variation for each parameter, revealed that, with a range of 2.17, waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must be reduced to the barest minimum. After proposing -33.84 as the highest (optimum) signal-to-noise ratio to be maintained for the waste of waiting, the paper advocated the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concluded by recommending SMED in order to drastically reduce setup time, which leads to unnecessary waiting.
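For a waste like waiting, where smaller is better, the Taguchi signal-to-noise ratio is S/N = -10·log10(mean of y²), in dB; less waiting yields a higher (less negative) S/N. A small sketch with hypothetical waiting-time measurements (the values are assumptions, not the study's data):

```python
import math

def sn_smaller_the_better(ys):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

# Hypothetical waiting-time measurements (minutes) for one L9 trial setting
waiting = [45.0, 52.0, 49.0]
print(round(sn_smaller_the_better(waiting), 2))
```

Each row of the L9 array would yield one such S/N value; the parameter levels that maximize the S/N are the robust setting, which is the sense in which -33.84 is proposed as the optimum to maintain.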

Keywords: lean production system, single minute exchange of dies, signal to noise ratio, Taguchi robust design, waste

Procedia PDF Downloads 108
526 Gender-Specific Vulnerability on Climate Change and Food Security Status - A Catchment Approach on Agroforestry Systems - A Multi-Country Case Study

Authors: Zerihun Yohannes Amare, Bernhard Freyer, Ky Serge Stephane, Ouéda Adama, Blessing Mudombi, Jean Nzuma, Mekonen Getachew Abebe, Adane Tesfaye, Birtukan Atinkut Asmare, Tesfahun Asmamaw Kassie

Abstract:

The study was conducted in Ethiopia (Zege Catchment, ZC), Zimbabwe (Upper Save Catchment, USC), and Burkina Faso (Nakambe Catchment, NC). It used a quantitative approach with 180 participants, complemented by qualitative methods including 33 key informant interviews and 6 focus group discussions. Households in ZC (58%), NC (55%), and USC (40%) do not cover their household food consumption from crop production; they rely heavily on perennial cash crops rather than annual crop production. Exposure indicators in ZC (0.758), USC (0.774), and NC (0.944), and sensitivity indicators in ZC (0.849) and NC (0.937), show statistically significant and high correlation with vulnerability. In the USC, adaptive capacity (0.746) and exposure (0.774) are also statistically significant and highly correlated with vulnerability. The vulnerability level of the NC is very high (0.75; 0.85 for female and 0.65 for male participants) compared to the USC (0.66; 0.69 female, 0.61 male) and ZC (0.47; 0.34 female, 0.58 male). Female-headed households had a statistically significantly lower vulnerability index than male-headed households in ZC, while male-headed households had a statistically significantly lower index in USC and NC. One reason is that land certification in ZC (80%) is far higher than in USC (10%) and NC (8%). Agroforestry practice variables across the study catchments made statistically significant contributions to households' adaptive capacity. We conclude that agroforestry practices have substantial benefits in increasing women's adaptive capacity and reducing their vulnerability to climate change and food insecurity.

Keywords: climate change vulnerability, agroforestry, gender, food security, Sub-Saharan Africa

Procedia PDF Downloads 69
525 A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

Authors: Ashwin Belle, Bryce Benson, Mark Salamango, Fadi Islim, Rodney Daniels, Kevin Ward

Abstract:

A reliable, real-time, and non-invasive system that can identify patients at risk for hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic derived from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, and to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient into H-RRT or NH-RRT cases, and the analytic output was compared with this categorization. The prediction lead time prior to the RRT call was also calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG) based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing for increased vigilance and early interventional support to reduce the chances of continued patient deterioration.

Keywords: critical care, early warning systems, emergency medicine, heart rate variability, hemodynamic instability, rapid response team

Procedia PDF Downloads 134
524 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on reliability analysis of variable-stiffness composite laminate structures to investigate their potential structural improvement compared to conventional (straight-fiber) composite laminates. A computational framework was developed that consists of a deterministic design step followed by reliability analysis. The optimization step uses Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after applying the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance: the change of orientation between adjacent patches cannot be too large, and the maximum number of successive plies of a particular fiber orientation should not be too high. Variable-stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates; however, the laps and gaps produced when steering fibers are the most important challenges, as they affect the performance of the structures. In this study, the optimal curved fiber paths in each layer of the composite are designed first by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure under different standard deviations compared to straight-fiber composites. The random variables are the material properties and the loads on the structures. The results show that the variable-stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than the conventional composite laminates. The reason is that variable-stiffness laminates allow tailoring of the stiffness and provide the possibility of adjusting the stress and strain distributions favorably in the structure.
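The MCS step described above can be sketched in miniature: sample the random variables (material property, load), evaluate a limit-state function g(x) on a cheap surrogate, and count the fraction of failures (g < 0). The surrogate and the distributions below are illustrative assumptions, not the paper's actual response-surface model:

```python
import random

def g(e_modulus, load):
    """Illustrative limit-state function: failure when demand exceeds capacity."""
    capacity = 1.2e-3 * e_modulus   # toy stiffness-based capacity
    return capacity - load

def failure_probability(n=100_000, seed=42):
    rng = random.Random(seed)       # seeded for reproducible estimates
    failures = 0
    for _ in range(n):
        e = rng.gauss(70e3, 7e3)    # material property (e.g., MPa), random variable
        f = rng.gauss(60.0, 12.0)   # load, random variable
        if g(e, f) < 0:
            failures += 1
    return failures / n

print(failure_probability())
```

Reliability is then 1 minus this estimate; repeating the loop for larger standard deviations of the material properties is how the sensitivity comparison between variable-stiffness and straight-fiber designs would be made.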

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 498
523 Carbon-Based Electrochemical Detection of Pharmaceuticals from Water

Authors: M. Ardelean, F. Manea, A. Pop, J. Schoonman

Abstract:

The presence of pharmaceuticals in the environment, and especially in water, has gained increasing attention. They belong to an emerging class of pollutants, and for most of them legal limits have not been set, either because their impact on human health and the ecosystem has not been determined or because no advanced analytical method exists for their quantification. In this context, the development of advanced analytical methods for the quantification of pharmaceuticals in water is required. Electrochemical methods are known to hold great potential for high-performance analysis, but their performance depends directly on the electrode material and the operating technique. In this study, two types of carbon-based electrode materials, i.e., boron-doped diamond (BDD) and carbon nanofiber (CNF)-epoxy composite electrodes, were investigated through voltammetric techniques for the detection of naproxen in water. The comparative electrochemical behavior of naproxen (NPX) on both BDD and CNF electrodes was studied by cyclic voltammetry, and a well-defined peak corresponding to NPX oxidation was found for each electrode. NPX oxidation occurred on the BDD electrode at a potential of about +1.4 V/SCE (saturated calomel electrode) and at about +1.2 V/SCE on the CNF electrode. The sensitivities for NPX detection were similar for both carbon-based electrodes; thus, the CNF electrode was superior with respect to the detection potential. Differential-pulsed voltammetry (DPV) and square-wave voltammetry (SWV) techniques were exploited to improve the electroanalytical performance for NPX detection, and the best results, corresponding to a sensitivity of 9.959 µA·µM-1, were achieved using DPV. In addition, the simultaneous detection of NPX and fluoxetine, a very common antidepressant drug also present in water, was studied using the CNF electrode, and very good results were obtained. The detection potential values, which allowed a good separation of the detection signals, together with the good sensitivities, were appropriate for the simultaneous detection of both tested pharmaceuticals. These results confirm the CNF electrode as a valuable tool for the individual and simultaneous detection of pharmaceuticals in water.
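A reported sensitivity of this kind is the slope of the calibration line (oxidation peak current versus analyte concentration). A minimal least-squares sketch, with hypothetical calibration points rather than the authors' data:

```python
def calibration_sensitivity(conc_uM, current_uA):
    """Slope (µA per µM) and intercept of the least-squares calibration
    line of peak current versus analyte concentration."""
    n = len(conc_uM)
    mx = sum(conc_uM) / n
    my = sum(current_uA) / n
    sxx = sum((x - mx) ** 2 for x in conc_uM)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc_uM, current_uA))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical DPV calibration points (illustrative, not the authors' data).
conc = [1, 2, 4, 8, 16]                  # µM
curr = [10.1, 20.0, 39.9, 79.6, 159.4]   # µA, roughly 10 µA per µM
sens, _ = calibration_sensitivity(conc, curr)
```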

Keywords: boron-doped diamond electrode, carbon nanofiber-epoxy composite electrode, emerging pollutants, pharmaceuticals

Procedia PDF Downloads 264
522 The Effect of Aerobic Training and Aqueous Extract of C. monogyna (Hawthorn) on Plasma and Heart Angiogenic Mediators in Male Wistar Rats

Authors: Asieh Abbassi Daloii, Ahmad Abdi

Abstract:

Introduction: Physical inactivity increases the risk of many diseases, including atherosclerosis and its clinical manifestations such as coronary heart disease, stroke and peripheral vascular disease. Exercise, however, can act beneficially on risk factors for atherosclerosis by reducing hyperlipidemia, hypertension, obesity and plaque density, while insulin sensitivity and glucose tolerance are improved. Despite these findings, little is known about the molecular mechanisms linking exercise and atherosclerosis. The present study aims to investigate the effect of six weeks of progressive aerobic training and aqueous extract of Crataegus monogyna on vascular endothelial growth factor (VEGF) and angiopoietin-1/2 (ANG-1/2) variations in plasma and heart tissue in male Wistar rats. Methods: 30 male Wistar rats, 4-6 months old, were randomly divided into four groups: control Crataegus monogyna (N=8), training Crataegus monogyna (N=8), control saline (N=6), and training saline (N=8). The aerobic training program consisted of running on a treadmill at a speed of 34 meters per minute for 60 minutes per day. The training was conducted for six weeks, five days a week. Following each training session, both experimental and control subjects of the Crataegus monogyna groups were orally fed 0.5 mg of Crataegus monogyna extract per gram of body weight, while the normal saline group was given the same amount of normal saline solution (NS). Finally, 72 hours after the last training session, blood samples were taken from the inferior vena cava. Conclusion: It is likely that Crataegus monogyna extract, compared with aerobic training, and even the combination of training and extract, is more effective on angiogenesis.

Keywords: angiopoietin 1, 2, vascular endothelial growth factor, aerobic exercise

Procedia PDF Downloads 370
521 Insights into Archaeological Human Sample Microbiome Using 16S rRNA Gene Sequencing

Authors: Alisa Kazarina, Guntis Gerhards, Elina Petersone-Gordina, Ilva Pole, Viktorija Igumnova, Janis Kimsis, Valentina Capligina, Renate Ranka

Abstract:

The human body is inhabited by a vast number of microorganisms, collectively known as the human microbiome, and there is tremendous interest in evolutionary changes in human microbial ecology, diversity and function. The field of paleomicrobiology, the study of the ancient human microbiome, is powered by modern techniques of Next Generation Sequencing (NGS), which allow microbial genomic data to be extracted directly from the archaeological sample of interest. One of the major techniques is 16S rRNA gene sequencing, in which certain 16S rRNA gene hypervariable regions are amplified and sequenced. However, this method has limitations, including the taxonomic precision and efficacy of the different regions used. The aim of this study was to evaluate the phylogenetic sensitivity of different 16S rRNA gene hypervariable regions for microbiome studies in archaeological samples. Towards this aim, archaeological bone samples and corresponding soil samples from each burial environment were collected in medieval cemeteries in Latvia. The Ion 16S™ Metagenomics Kit targeting different 16S rRNA gene hypervariable regions was used for library construction (Ion Torrent technology). Sequenced data were analysed using appropriate bioinformatic techniques; alignment and taxonomic representation were done using the Mothur program. Sequences of the most abundant genera were further aligned to the E. coli 16S rRNA gene reference sequence using MEGA7 in order to identify the hypervariable region of the segment of interest. Our results showed that different hypervariable regions had different discriminatory power depending on the groups of microbes, as well as on the nature of the samples. On the basis of our results, we suggest that a wider range of primers can provide a more accurate recapitulation of microbial communities in archaeological samples. Acknowledgements: This work was supported by ERAF grant Nr. 1.1.1.1/16/A/101.

Keywords: 16S rRNA gene, ancient human microbiome, archaeology, bioinformatics, genomics, microbiome, molecular biology, next-generation sequencing

Procedia PDF Downloads 174
520 An Educational Program Based on Health Belief Model to Prevent Non-Alcoholic Fatty Liver Disease among Iranian Women

Authors: Babak Nemat

Abstract:

Background and Purpose: Non-alcoholic fatty liver disease is one of the most common liver disorders and, as the most important cause of death from liver disease, has unpleasant consequences and complications. The aim of this study was to investigate the effect of an educational intervention based on the health belief model to prevent non-alcoholic fatty liver disease among women. Materials and Methods: This experimental study was performed among 110 women referred to comprehensive health service centers in Malayer City, west of Iran, in 2023. Using the available sampling method, the 110 participants were divided into experimental and control groups. The data collection tool included demographic characteristics and a questionnaire based on the health belief model. In the experimental group, three one-hour training sessions were conducted in the form of pamphlets, lectures, and group discussions. Data were analyzed using SPSS software version 21 with correlation tests, paired t-tests, and independent t-tests. Results: The mean age of participants was 38.07±6.28 years, and most participants were middle-aged, married housewives with academic education, middle income, and overweight. After the educational intervention, the mean scores of the constructs, including perceived susceptibility (p=0.01), perceived severity (p=0.01), perceived benefits (p=0.01), internal (p=0.01) and external (p=0.01) cues to action, and perceived self-efficacy (p=0.01), were significantly higher in the experimental group than in the control group. The perceived-barriers score in the experimental group decreased after the training (15.2 ± 3.9 vs. 11.2 ± 3.3, p<0.01). Conclusion: The findings of the study show that the design and implementation of educational programs based on the constructs of the health belief model can be effective in preventing non-alcoholic fatty liver disease among women.
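The paired t-test used to compare pre/post scores reduces to the mean of the per-subject differences divided by its standard error. A minimal sketch with hypothetical perceived-barriers scores (not the study's raw data):

```python
import math

def paired_t(before, after):
    """Paired t statistic: mean of per-subject differences divided by the
    standard error of that mean."""
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    md = sum(d) / n
    sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
    return md / (sd / math.sqrt(n))

# Hypothetical perceived-barriers scores before/after the training sessions.
pre  = [15, 16, 14, 17, 15, 16, 14, 15]
post = [11, 12, 10, 12, 11, 12, 11, 11]
t_stat = paired_t(pre, post)  # large negative value => scores dropped
```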

Keywords: non-alcoholic fatty liver, health belief model, education, women

Procedia PDF Downloads 36
519 Concepts of Modern Design: A Study of Art and Architecture Synergies in Early 20ᵗʰ Century Europe

Authors: Stanley Russell

Abstract:

Until the end of the 19th century, European painting dealt almost exclusively with the realistic representation of objects and landscapes, as can be seen in the work of realist artists like Gustave Courbet. Architects of the day typically referenced and recreated historical precedents in their designs. The curriculum of the first architecture school in Europe, the École des Beaux-Arts, based on the study of classical buildings, had a profound effect on the profession. Painting exhibited an increasing level of abstraction from the late 19th century, with Impressionism, and the trend continued into the early 20th century, when Cubism had an explosive effect, sending shock waves through the art world that also extended into the realm of architectural design. The architect and painter Le Corbusier, with "Purism", was one of the first to integrate abstract painting and building design theory in works that were equally shocking to the architectural world. The interrelationship of the arts, including architecture, was institutionalized in the Bauhaus curriculum, which sought to find commonality between diverse art disciplines. The renowned painter and Bauhaus instructor Wassily Kandinsky was one of the first artists to make a semi-scientific analysis of the elements of "non-objective" painting while also drawing parallels between painting and architecture in his book Point and Line to Plane. The Russian Constructivists made abstract compositions with simple geometric forms and, like the De Stijl group of the Netherlands, also experimented with full-scale constructions and spatial explorations. Based on the study of historical accounts and original artworks of Impressionism, Cubism, the Bauhaus, De Stijl, and Russian Constructivism, this paper begins with a thorough explanation of the art theory and several key works from these important art movements of the late 19th and early 20th century.
Similarly, based on written histories and first-hand experience of built and drawn works, the author continues with an analysis of the theories and architectural works generated by the same groups, all of which actively pursued continuity between their art and architectural concepts. With images of specific works, the author shows how the trend toward abstraction and geometric purity in painting coincided with a similar trend in architecture that favored simple, unornamented geometries. Using examples like the Villa Savoye, the Schröder House, the Dessau Bauhaus, and unbuilt designs by the Russian architect Chernikhov, the author gives detailed examples of how the intersection of trends in art and architecture led to a unique and fruitful period of creative synergy, when the same concepts that artists used to generate paintings were also used by architects in the making of objects, space, and buildings. In conclusion, this article examines the pivotal period in art and architecture history from the late 19th to the early 20th century, when the confluence of art and architectural theory led to many painted, drawn, and built works that continue to inspire architects and artists to this day.

Keywords: modern art, architecture, design methodologies, modern architecture

Procedia PDF Downloads 108
518 Terahertz Glucose Sensors Based on Photonic Crystal Pillar Array

Authors: S. S. Sree Sanker, K. N. Madhusoodanan

Abstract:

Optical biosensors are a dominant alternative to traditional analytical methods because of their small size, simple design and high sensitivity. Photonic sensing is one of the recently advancing technologies for biosensors. It measures the change in refractive index induced by differences in molecular interactions as the concentration of the analyte changes. Glucose is an aldose monosaccharide that serves as a metabolic source in many organisms. Terahertz waves occupy the space between infrared and microwaves in the electromagnetic spectrum and are expected to be applied in various types of sensors, for detecting harmful substances in blood, cancer cells in skin and microbes in vegetables. We have designed glucose sensors using silicon-based 1D and 2D photonic crystal pillar arrays in the terahertz frequency range. The 1D photonic crystal has rectangular pillars with a height of 100 µm, a length of 1600 µm and a width of 50 µm; the array period of the crystal is 500 µm. The 2D photonic crystal has a 5×5 cylindrical pillar array with an array period of 75 µm; the height and diameter of the pillars are 160 µm and 100 µm, respectively. The two samples considered in this work are blood and glucose solution, labelled sample 1 and sample 2, respectively. The proposed sensor detects glucose concentrations in the samples from 0 to 100 mg/dL. For this, the crystal was irradiated with 0.3 to 3 THz waves. By analyzing the obtained S-parameters, the refractive index of the crystal corresponding to each glucose concentration was extracted using the parameter retrieval method. The refractive indices of both crystals decreased gradually with increasing glucose concentration in the sample: the 1D photonic crystal showed a gradual decrease in refractive index at 1 THz, and the 2D photonic crystal showed this behavior at 2 THz. The proposed sensor was simulated using CST Microwave Studio. This will enable us to develop a model that can be used to characterize a glucose sensor. The present study is expected to contribute to blood glucose monitoring.
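Full S-parameter retrieval inverts both S11 and S21; a minimal sketch of its simplest ingredient, recovering an effective index from the transmission-phase delay of a slab of known thickness (all numbers hypothetical, not taken from the simulations above):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def index_from_phase(phase_rad, freq_hz, thickness_m):
    """Effective refractive index from the transmission phase delay of a
    slab of known thickness: phi = n * k0 * d  =>  n = phi / (k0 * d).
    A simplification of full S-parameter retrieval, which also uses |S11|."""
    k0 = 2.0 * math.pi * freq_hz / C
    return phase_rad / (k0 * thickness_m)

# Hypothetical numbers: a 1 THz wave through a 160 µm silicon-like pillar.
f, d, n_true = 1.0e12, 160e-6, 3.4
phi = n_true * (2.0 * math.pi * f / C) * d   # phase such a slab would impose
n_eff = index_from_phase(phi, f, d)
```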

Keywords: CST microwave studio, glucose sensor, photonic crystal, terahertz waves

Procedia PDF Downloads 263
517 In-Situ Studies of Cyclohexane Oxidation Using Laser Raman Spectroscopy for the Refinement of Mechanism Based Kinetic Models

Authors: Christine Fräulin, Daniela Schurr, Hamed Shahidi Rad, Gerrit Waters, Günter Rinke, Roland Dittmeyer, Michael Nilles

Abstract:

The reaction mechanisms of many liquid-phase reactions in organic chemistry have not yet been sufficiently clarified. Process conditions of several hundred degrees Celsius and pressures up to ten megapascals complicate sampling and the determination of kinetic data. Spatially resolved in-situ measurements promise new insights. A non-invasive in-situ measurement technique has the advantages that no sample preparation is necessary, the sample mixture is not altered before analysis, and sampling does not interfere with the flow. Thus, the goal of our research was the development of a contact-free, spatially resolved measurement technique for kinetic studies of liquid-phase reactions under process conditions. We therefore used laser Raman spectroscopy combined with an optically transparent microchannel reactor. To demonstrate the performance of the system, we chose the oxidation of cyclohexane as a sample reaction. Cyclohexane oxidation is an economically important process: the products are intermediates for caprolactam and adipic acid, which are starting materials for polyamide 6 and 6.6 production. To maintain high selectivities of 70 to 90%, the reaction is performed in industry at a low conversion of about six percent. As Raman spectroscopy is usually very selective but not very sensitive, detecting the small product concentrations in cyclohexane oxidation is quite challenging. To meet these requirements, the optical experimental setup was optimized to determine concentrations by laser Raman spectroscopy with good detection sensitivity. With this measurement technique, spatially resolved kinetic studies of uncatalysed and homogeneously catalyzed cyclohexane oxidation were carried out to obtain details about the reaction mechanism.

Keywords: in-situ laser Raman spectroscopy, space resolved kinetic measurements, homogeneous catalysis, chemistry

Procedia PDF Downloads 316
516 LTF Expression Profiling Which is Essential for Cancer Cell Proliferation and Metastasis, Correlating with Clinical Features, as Well as Early Stages of Breast Cancer

Authors: Azar Heidarizadi, Mahdieh Salimi, Hossein Mozdarani

Abstract:

Introduction: As a complex disease, breast cancer results from several genetic and epigenetic changes. Lactoferrin, a member of the transferrin family, is reported to have a number of biological functions, including DNA synthesis, immune responses and iron transport, any of which could play a role in tumor progression. The aim of this study was to combine bioinformatics data and experimental assays to find the pattern of promoter methylation and gene expression of LTF in breast cancer, in order to study its potential role in cancer management. Material and Methods: To evaluate the methylation status of the LTF promoter, we performed MS-PCR and real-time PCR on samples from patients with breast cancer and from normal cases. 67 patient samples were included in this study, comprising tumor tissue, plasma, and adjacent normal tissue, as well as plasma from 30 normal cases and tissue from 10 breast-reduction cases. Subsequently, bioinformatics analyses using the cBioPortal database, STRING, and Genomatix were conducted to disclose the prognostic value of LTF in breast cancer progression. Results: The analysis of LTF expression showed an inverse relationship between the expression level of LTF and the stage of breast cancer tissues (p<0.01). In fact, stages 1 and 2 showed high LTF expression, while in stages 3 and 4 a significant reduction was observed (p<0.0001). LTF expression frequently alters, with decreased expression in ER⁺, PR⁺, and HER2⁺ patients (p<0.01) and increased expression in TNBC, LN¯, ER¯, and PR¯ patients (p<0.001). LTF expression is also significantly associated with metastasis and lymph node involvement (p<0.0001). The sensitivity and specificity of LTF were also determined. A negative correlation was detected between the expression level and the methylation of the LTF promoter. Conclusions: The altered expression of LTF observed in breast cancer patients could be considered a promoter of cell proliferation and metastasis even in the early stages of cancer.

Keywords: LTF, expression, methylation, breast cancer

Procedia PDF Downloads 41
515 GIS Based Spatial Modeling for Selecting New Hospital Sites Using APH, Entropy-MAUT and CRITIC-MAUT: A Study in Rural West Bengal, India

Authors: Alokananda Ghosh, Shraban Sarkar

Abstract:

The study aims to identify suitable sites for new hospitals with critical obstetric care facilities in Birbhum, one of the vulnerable and underserved districts of eastern India, considering six main criteria and 14 sub-criteria, using a GIS-based Analytic Hierarchy Process (AHP) and Multi-Attribute Utility Theory (MAUT) approach. The criteria were identified through field surveys and previous literature. After collecting expert decisions, a pairwise comparison matrix was prepared using the Saaty scale to calculate the weights through AHP. In contrast, objective weighting methods, i.e., Entropy and CRiteria Importance Through Intercriteria Correlation (CRITIC), were used to perform the MAUT. Finally, suitability maps were prepared by weighted sum analysis. Sensitivity analyses of the AHP were performed to explore the effect of the dominant criteria. Results from AHP reveal that 'maternal death in transit', followed by 'accessibility and connectivity' and 'maternal health care service (MHCS) coverage gap', were the three most important criteria, with comparatively higher weights, whereas 'accessibility and connectivity' and 'maternal death in transit' carried more weight in the Entropy and CRITIC methods, respectively. When comparing the predicted suitability classes of these three models with the layer of existing hospitals, all but Entropy-MAUT point toward the left-over underserved areas of the existing facilities. Only 43%-67% of existing hospitals fell in the moderate to lower suitability classes. Therefore, the results of the predictive models may provide valuable input for future planning.
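The AHP step described above turns a Saaty-scale pairwise comparison matrix into priority weights. A minimal sketch using the normalised-column-mean approximation, with a hypothetical 3×3 matrix (not the study's expert judgments):

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights by the normalised-column-mean method
    (a common stand-in for the principal eigenvector of the matrix)."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    norm = [[row[j] / col_sums[j] for j in range(n)] for row in pairwise]
    return [sum(row) / n for row in norm]

# Hypothetical Saaty-scale comparison of three of the study's criteria,
# e.g. (maternal death in transit, accessibility, MHCS coverage gap).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w = ahp_weights(A)  # weights sum to 1; the first criterion dominates
```

The weighted sum analysis then multiplies each criterion layer by its weight and sums, cell by cell, to produce the suitability map.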

Keywords: hospital site suitability, analytic hierarchy process, multi-attribute utility theory, entropy, criteria importance through intercriteria correlation, multi-criteria decision analysis

Procedia PDF Downloads 40
514 Analysis and Modeling of the Building’s Facades in Terms of Different Convection Coefficients

Authors: Enes Yasa, Guven Fidan

Abstract:

Building simulation tools need to better evaluate convective heat exchanges between external air and wall surfaces. Previous analyses demonstrated the significant effect of convective heat transfer coefficient values on the room energy balance. Some authors have pointed out that the large discrepancies observed between widely used building thermal models can be attributed to the different correlations used to calculate or impose the values of the convective heat transfer coefficients. Moreover, numerous researchers have made sensitivity calculations and shown that the choice of convective heat transfer coefficient values can lead to differences of 20% to 40% in energy demand. The thermal losses to the ambient from a building surface or a roof-mounted solar collector represent an important portion of the overall energy balance and depend heavily on wind-induced convection. In an effort to help designers make better use of the correlations available in the literature for the external convection coefficients due to wind, a critical discussion and a suitable tabulation are presented, on the basis of the algebraic form of the coefficients and their dependence on characteristic length and wind direction, in addition to wind speed. Many research works focused on convection heat transfer problems inside buildings have been conducted since the early eighties. In this context, a Computational Fluid Dynamics (CFD) program has been used to predict external convective heat transfer coefficients at external building surfaces. For the building facade model, the effects of wind speed and of the temperature difference between the surfaces and the external air have been analyzed, showing different heat transfer conditions and coefficients. In order to provide further information on external convective heat transfer coefficients, a numerical study is presented in this paper, using a commercial Computational Fluid Dynamics package (CFX) to predict convective heat transfer coefficients at the external building surface.
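The algebraic correlations the abstract refers to are typically simple functions of wind speed. A minimal sketch using one classic Jürges-type linear form, taken as an illustrative example rather than the correlation the authors tabulate:

```python
def h_external(wind_speed_ms):
    """External convective heat transfer coefficient, W/(m^2*K), from the
    classic Jurges-type linear wind correlation h = 5.7 + 3.8 * v (one of
    many correlations in the literature, roughly valid for v < 5 m/s)."""
    return 5.7 + 3.8 * wind_speed_ms

h3 = h_external(3.0)  # coefficient at a 3 m/s wind
```

Swapping this closed-form correlation for CFD-derived coefficients is precisely the comparison the paper's numerical study enables.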

Keywords: CFD in buildings, external convective heat transfer coefficients, building facades, thermal modelling

Procedia PDF Downloads 401
513 Analysis of Cross-Sectional and Retrograde Data on the Prevalence of Marginal Gingivitis

Authors: Ilma Robo, Saimir Heta, Nedja Hysi, Vera Ostreni

Abstract:

Introduction: Marginal gingivitis is a disease of considerable frequency among patients who present routinely for periodontal control and treatment. In fact, this disease may not have alarming symptoms and may go unnoticed by patients themselves when personal hygiene conditions are optimal. The aim of this study was to collect retrograde data on the prevalence of marginal gingivitis in the respective groups of patients, evaluated according to specific periodontal diagnostic tools. Materials and methods: The study was conducted on two patient groups. The first group comprised 34 patients examined during December 2019-January 2020, and the second group comprised 64 patients examined during 2010-2018 (in the same monthly period of each year). The bacterial plaque index, hemorrhage index, amount of gingival fluid, and presence of xerostomia and candidiasis were recorded for each patient. Results: Analysis of the collected data showed that susceptibility to marginal gingivitis has higher values according to the retrograde data than the cross-sectional data. Susceptibility to candidiasis and the occurrence of xerostomia, even in combination, as risk factors for marginal gingivitis, also show higher values in the retrograde data. Females presented a lower bacterial plaque index than males; more importantly, in females this index was also associated with a lower gingival hemorrhage index, in contrast to males. Conclusions: The cross-sectional data show a lower prevalence of marginal gingivitis than the retrograde data, based on the hemorrhage index and the bacterial plaque index together. Changes in the amount of gingival fluid produced indicate a higher prevalence of marginal gingivitis in the cross-sectional data than in the retrograde data; this reflects the increasing sophistication of data recording, which evolves over time, as well as growing professional sensitivity to this phenomenon.

Keywords: marginal gingivitis, cross-sectional, retrograde, prevalence

Procedia PDF Downloads 146
512 Mode II Fracture Toughness of Hybrid Fiber Reinforced Concrete

Authors: H. S. S Abou El-Mal, A. S. Sherbini, H. E. M. Sallam

Abstract:

Mode II fracture toughness (KIIc) of fiber reinforced concrete has been widely investigated under various testing geometries. The effects of fiber type, concrete matrix properties, and testing mechanisms have been extensively studied, but the area of hybrid fiber addition shows a lack of reported research data. In this paper, an experimental investigation of hybrid fibers embedded in a high strength concrete matrix is reported. Three different types of fibers, namely steel (S), glass (G), and polypropylene (PP), were mixed in four hybridization patterns, (S/G), (S/PP), (G/PP), and (S/G/PP), with a constant cumulative volume fraction (Vf) of 1.5%. The concrete matrix properties were kept the same for all hybrid fiber reinforced concrete patterns. In an attempt to estimate a fairly accepted value of the fracture toughness KIIc, four testing geometries and loading types were employed in this investigation: four point shear, Brazilian notched disc, double notched cube, and double edge notched specimens, chosen to avoid the limitations and sensitivity of each test regarding geometry, size effect, constraint condition, and the crack length to specimen width ratio a/w. The addition of fibers in all hybridization patterns reduced the compressive strength and increased the mode II fracture toughness in pure mode II tests. The mode II fracture toughness KIIc decreased with increasing a/w ratio for all concretes and test geometries, and was found to be sensitive to the fiber hybridization pattern. The (S/PP) hybridization pattern showed higher values than all other patterns, while (S/G/PP) showed an insignificant enhancement of KIIc. The four point shear (4PS) test setup yielded the most reliable values of the mode II fracture toughness of concrete. KIIc could not be assumed to be a true material property.

Keywords: fiber reinforced concrete, hybrid fiber, mode II fracture toughness, testing geometry

Procedia PDF Downloads 312
511 Detection of BCL2 Polymorphism in Patients with Hepatocellular Carcinoma

Authors: Mohamed Abdel-Hamid, Olfat Gamil Shaker, Doha El-Sayed Ellakwa, Eman Fathy Abdel-Maksoud

Abstract:

Introduction: Despite advances in the knowledge of the molecular virology of the hepatitis C virus (HCV), the mechanisms of hepatocellular injury in HCV infection are not completely understood. HCV infection influences the susceptibility to apoptosis, which could lead to an insufficient antiviral immune response and persistent viral infection. Aim of this study: To examine whether the BCL-2 gene polymorphism at codon 43 (+127G/A or Ala43Thr) has an impact on the development of hepatocellular carcinoma in Egyptian patients with chronic hepatitis C. Subjects and Methods: The study included three groups; group 1: 30 patients with hepatocellular carcinoma (HCC); group 2: 30 patients with HCV; group 3: 30 healthy subjects matched for age and socioeconomic status, taken as a control group. The BCL2 (Ala43Thr) gene polymorphism was evaluated by the PCR-RFLP technique for all patients and controls. Results: The 43Thr genotype was more frequent, and statistically significantly so, in HCC patients compared to the control group. This genotype of the BCL2 gene may inhibit programmed cell death, which leads to a disturbance of tissue and cell homeostasis and a reduction in immune regulation, favoring viral replication and HCV persistence. Moreover, the virus uses a variety of mechanisms to block genes participating in apoptosis. This suggests that HCV patients who carry the 43Thr genotype are more susceptible to HCC. Conclusion: The data suggest for the first time that this BCL2 polymorphism is associated with susceptibility to HCC in the Egyptian population and might be used as a molecular marker for evaluating HCC risk. The study clearly demonstrated that chronic HCV infection exhibits a deregulation of apoptosis with disease progression, which provides an insight into the pathogenesis of chronic HCV infection and may contribute to therapy.

Keywords: BCL2 gene, hepatitis C virus, hepatocellular carcinoma, sensitivity, specificity, apoptosis

Procedia PDF Downloads 491
510 Transcranial Magnetic Stimulation as a Potentiator in the Rehabilitation of Fine Motor Skills: A Literature Review

Authors: Ana Lucia Molina

Abstract:

Introduction: Fine motor skills refer to the use of the hands and the coordination of the small muscles that control the fingers. A deficiency in fine motor skills is as important as a change in global movements, since fine motor skills directly affect activities of daily living. Fine movements are involved in functions such as motor control of the extremities, sensitivity, strength and tonus of the hands. A growing interest in the effects of non-invasive neuromodulation through transcranial stimulation technologies, such as transcranial magnetic stimulation (TMS), has been observed in the scientific literature, with promising results in fine motor rehabilitation, as TMS modulates the cortical activity of the primary motor area of the hands in both hemispheres (positions C3 and C4 of the international 10-20 system). Objectives: To carry out a literature review on the effects of TMS on the cortical motor area corresponding to hand motricity. Methodology: This is a bibliographic survey carried out between October 2022 and March 2023 in PubMed, Google Scholar, LILACS and the Virtual Health Library (BVS), covering national and international databases; some books on neuromodulation were included. Results: 28 articles and 5 books were initially found; after reading the abstracts, 14 articles and 3 books, with publication dates between 2008 and 2022, were selected to compose the literature review, as they suited the purpose of this study. Conclusion: TMS has shown promising results in fine motor rehabilitation, such as improving coordination, muscle strength and range of motion of the hands, as a technique complementary to existing treatments that can thus yield more potent results for manual skills in activities of daily living. It is important to emphasize the need for more specific studies on the application of TMS for the treatment of manual disorders, describing the uniqueness of each movement.

Keywords: transcranial magnetic stimulation, fine motor skills, motor rehabilitation, non-invasive neuromodulation

Procedia PDF Downloads 61
509 Identification of Body Fluid at the Crime Scene by DNA Methylation Markers for Use in Forensic Science

Authors: Shirin Jalili, Hadi Shirzad, Mahasti Modarresi, Samaneh Nabavi, Somayeh Khanjani

Abstract:

Identifying the source tissue of biological material found at crime scenes can be highly informative in many cases. Despite their usefulness, current visual, catalytic, enzymatic and immunologic tests for presumptive and confirmatory tissue identification are applicable only to a subset of samples, can suffer from low specificity and lack of sensitivity, and are substantially affected by environmental insults. In addition, their results are operator-dependent. Recently, the possibility of discriminating body fluids using mRNA expression differences between tissues has been described, but the limited long-term stability of that molecule and the need to normalize samples for each individual are limiting factors. The use of DNA should solve these issues because of its long-term stability and its specificity to each body fluid. Cells in the human body have a unique epigenome, which includes differences in DNA methylation in gene promoters. DNA methylation, which occurs at the 5′ position of cytosine in CpG dinucleotides, has great potential for forensic identification of body fluids, because tissue-specific patterns of DNA methylation have been demonstrated and DNA is less prone to degradation than proteins or RNA. Previous studies have reported several body fluid-specific DNA methylation markers. The presence or absence of a methyl group on the 5′ carbon of the cytosine pyrimidine ring in CpG-rich regions called 'CpG islands' dictates whether the gene is expressed or silenced in a particular body fluid. Methylation patterns at tissue-specific differentially methylated regions (tDMRs) have been described as stable and specific, making them excellent markers for tissue identification. The results demonstrate that methylation-based tissue identification is more than a proof of concept: the methodology holds promise as another viable forensic DNA analysis tool for the characterization of biological materials.
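The idea of matching a sample against body fluid-specific tDMR methylation profiles can be sketched minimally as follows; the marker names, reference methylation fractions and nearest-profile rule here are purely illustrative assumptions, not values or methods from the study:

```python
# Hypothetical sketch: assigning a body fluid by comparing methylation levels
# (fraction methylated, 0-1) at illustrative tDMR markers against reference
# profiles. All names and numbers are invented for illustration.

REFERENCE_PROFILES = {
    "blood":  {"tDMR1": 0.90, "tDMR2": 0.10, "tDMR3": 0.15},
    "saliva": {"tDMR1": 0.20, "tDMR2": 0.85, "tDMR3": 0.30},
    "semen":  {"tDMR1": 0.10, "tDMR2": 0.15, "tDMR3": 0.95},
}

def classify_fluid(sample: dict) -> str:
    """Return the body fluid whose reference profile is closest (L1 distance)."""
    def distance(profile: dict) -> float:
        return sum(abs(sample[m] - profile[m]) for m in profile)
    return min(REFERENCE_PROFILES, key=lambda f: distance(REFERENCE_PROFILES[f]))

print(classify_fluid({"tDMR1": 0.88, "tDMR2": 0.12, "tDMR3": 0.20}))  # prints "blood"
```

A real assay would derive such profiles from bisulfite-sequencing data and use validated markers with statistical confidence measures rather than a simple nearest-profile match.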

Keywords: DNA methylation, forensic science, epigenome, tDMRs

Procedia PDF Downloads 413
508 Laminar Separation Bubble Prediction over an Airfoil Using Transition SST Turbulence Model on Moderate Reynolds Number

Authors: Younes El Khchine, Mohammed Sriti

Abstract:

A parametric study was conducted of the flow around the S809 wind turbine airfoil in order to better understand the characteristics and effects of the laminar separation bubble (LSB) on aerodynamic design for maximizing wind turbine efficiency. Numerical simulations were performed at low Reynolds numbers by solving the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations on a C-type structured mesh using the γ-Reθt transition turbulence model. A two-dimensional study was conducted at a chord Reynolds number of 1×10⁵ and angles of attack (AoA) between 0 and 20.15 degrees. The aerodynamic coefficients obtained at the various angles of attack were compared with XFoil results. A sensitivity study examined the effects of Reynolds number and free-stream turbulence intensity on the location and length of the laminar separation bubble and on the aerodynamic performance of the wind turbine. The results show that increasing the Reynolds number delays laminar separation on the upper surface of the airfoil and accelerates the transition process, so the turbulent reattachment point moves closer to the leading edge owing to an earlier reattachment of the turbulent shear layer; this considerably reduces the length of the separation bubble as the Reynolds number increases. Increasing the free-stream turbulence intensity shortens the separation bubble and increases the lift coefficient while having a negligible effect on the stall angle. As the AoA increased, the bubble on the suction surface of the airfoil was found to move upstream toward the leading edge, which causes earlier laminar separation.
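The structure of such a sensitivity sweep over Reynolds number and free-stream turbulence intensity can be sketched as below; the `bubble_length` function is a toy placeholder standing in for the full URANS solve, encoding only the qualitative trends the abstract reports (bubble length shrinks with both Re and turbulence intensity), not any quantitative result from the study:

```python
from itertools import product

def bubble_length(re: float, ti: float) -> float:
    """Toy surrogate for the CFD solve: an illustrative separation-bubble
    length (in chords) that decreases with Reynolds number and with
    free-stream turbulence intensity. Not a physical model."""
    return 0.3 / ((re / 1e5) * (1.0 + ti))

# Sensitivity sweep over chord Reynolds number and turbulence intensity;
# in practice each pair would launch a separate URANS simulation.
reynolds_numbers = [1e5, 2e5, 5e5]
turbulence_intensities = [0.001, 0.01, 0.05]

for re, ti in product(reynolds_numbers, turbulence_intensities):
    print(f"Re={re:.0e}, TI={ti:.3f}: bubble length ~ {bubble_length(re, ti):.3f} c")
```

In an actual study, the loop body would write a solver case file, run the γ-Reθt simulation, and post-process the skin-friction distribution to locate separation and reattachment points.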

Keywords: laminar separation bubble, turbulence intensity, S809 airfoil, transition model, Reynolds number

Procedia PDF Downloads 59