Search results for: radial basis function networks
8160 Borate Crosslinked Fracturing Fluids: Laboratory Determination of Rheology
Authors: Lalnuntluanga Hmar, Hardik Vyas
Abstract:
Hydraulic fracturing has become an essential procedure for breaking apart rock and releasing the oil or gas trapped tightly within it by pumping fracturing fluids at high pressure down into the well. To open the fracture and transport propping agent along it, proper selection of fracturing fluids is one of the most crucial components of fracturing operations, and the rheological properties of the fluids are usually considered the most important. Among various fracturing fluids, borate crosslinked fluids have proved to be highly effective. Borate, in the form of boric acid or borate ion, is most commonly used to crosslink the hydrated polymers and to produce very viscous gels that remain stable at high temperature. Guar and HPG (hydroxypropyl guar) polymers are most often used in these fluids. Borate gel rheology is known to be a function of polymer concentration, borate ion concentration, pH, and temperature. Borate crosslinking is a function of pH, which means it can be formed or reversed simply by altering the pH of the fluid system. The fluid system was prepared by mixing base polymer with water at pH values ranging between 8 and 11, and the optimum borate crosslinker efficiency was found at a pH of about 10. The rheology of the laboratory-prepared borate crosslinked fracturing fluid was determined using an Anton Paar rheometer and a Fann viscometer. The viscosity was measured at high temperatures ranging from 200°F to 250°F and at elevated pressures in order to partially simulate the downhole conditions. Rheological measurements showed that crosslinking increases the viscosity and elasticity, and thus the fluid's capability to transport propping agent.
Keywords: borate, crosslinker, Guar, Hydroxypropyl Guar (HPG), rheology
Procedia PDF Downloads 202
8159 Clara Cell Secretory Protein 16 Serum Level Decreases in Patients with Non-Smoking-Related Chronic Obstructive Pulmonary Diseases (COPD)
Authors: Lian Wu, Mervyn Merrilees
Abstract:
Chronic Obstructive Pulmonary Disease (COPD) is a worldwide problem characterized by irreversible and progressive airflow obstruction. In New Zealand, it is currently the 4th commonest cause of death, and exacerbations of COPD are a frequent cause of admission to hospital. Serum levels of Clara cell secretory protein-16 (CC-16) are believed to represent Clara cell toxicity. More recently, CC-16 has been found to be associated with smoking-related COPD. It is produced almost exclusively by non-ciliated Clara cells in the airways, and its primary function is to protect the lungs against oxidative stress and carcinogenesis. After acute exposure to cigarette smoke, serum levels of CC-16 become elevated. CC-16 is a potent natural immunosuppressor and anti-inflammatory agent. In vitro, CC-16 inhibits both monocyte and polymorphonuclear neutrophil chemotaxis and phagocytosis. CC-16 also inhibits fibroblast chemotaxis. However, the role of CC-16 in non-smoking-related COPD is still not clear. In this study, we investigated serum CC-16 levels in non-smoking-related COPD. Methods: We compared non-smoker patients with COPD (FEV1 < 60% of predicted, FEV1/FVC < 0.7, n=100) and individuals with normal lung function (FEV1 ≥ 80% of predicted and FEV1/FVC ≥ 0.7, n=80). All subjects had no smoking history. CC-16 was measured by ELISA. Results and conclusion: Serum CC-16 levels are reduced in individuals with non-smoking-related COPD compared to non-smoker controls, and there is a weak correlation with disease severity in the non-smoking-related COPD group.
Keywords: COPD, CC-16, ELISA, non-smoking-related COPD
Procedia PDF Downloads 380
8158 Effects of Earthquake Induced Debris to Pedestrian and Community Street Network Resilience
Authors: Al-Amin, Huanjun Jiang, Anayat Ali
Abstract:
Reinforced concrete (RC) frames, especially ordinary RC frames, are prone to structural failure/collapse during seismic events, producing large amounts of debris from the structures, which obstructs adjacent areas, including streets. These blocked areas severely impede post-earthquake resilience. This study uses computational simulation (FEM) to investigate the amount of debris generated by the seismic collapse of an ordinary reinforced concrete moment frame building and its effects on the adjacent pedestrian and road network. A three-story ordinary reinforced concrete frame building, primarily designed for gravity load and earthquake resistance, was selected for analysis. Sixteen different ground motions were applied and scaled up until the total collapse of the tested building to evaluate the failure mode under various seismic events. Four types of collapse direction were identified through the analysis, namely aligned (positive and negative) and skewed (positive and negative), with aligned collapse being more predominant than skewed cases. The amount and distribution of debris around the collapsed building were assessed to investigate the interaction between collapsed buildings and adjacent street networks. An interaction was established between a building that collapsed in an aligned direction and the adjacent pedestrian walkway and narrow street located in an unplanned old city. The FEM model was validated against an existing shaking table test. The presented results can be utilized to simulate the interdependency between the debris generated from the collapse of seismic-prone buildings and the resilience of street networks. These findings provide insights for better disaster planning and resilient infrastructure development in earthquake-prone regions.
Keywords: building collapse, earthquake-induced debris, ORC moment resisting frame, street network
Procedia PDF Downloads 85
8157 Cross-Linked Amyloglucosidase Aggregates: A New Carrier Free Immobilization Strategy for Continuous Saccharification of Starch
Authors: Sidra Pervez, Afsheen Aman, Shah Ali Ul Qader
Abstract:
The importance of attaining an optimum performance of an enzyme is often a question of devising an effective method for its immobilization. Cross-linked enzyme aggregates (CLEAs) are a new approach for the immobilization of enzymes using a carrier-free strategy. This method is exquisitely simple (involving precipitation of the enzyme from aqueous buffer followed by cross-linking of the resulting physical aggregates of enzyme molecules) and amenable to rapid optimization. Among many industrial enzymes, amyloglucosidase is an important amylolytic enzyme that hydrolyzes alpha (1→4) and alpha (1→6) glycosidic bonds in the starch molecule and produces glucose as the sole end product. Glucose liberated by amyloglucosidase can be used for the production of ethanol and glucose syrups. Besides this, amyloglucosidase is widely used in various food and pharmaceutical industries. For production of amyloglucosidase on a commercial scale, filamentous fungi of the genus Aspergillus are mostly used because they secrete large amounts of enzymes extracellularly. The current investigation was based on the isolation and identification of filamentous fungi from the genus Aspergillus for the production of amyloglucosidase in submerged fermentation and the optimization of cultivation parameters for starch saccharification. Natural isolates were identified as Aspergillus niger KIBGE-IB36, Aspergillus fumigatus KIBGE-IB33, Aspergillus flavus KIBGE-IB34 and Aspergillus terreus KIBGE-IB35 on a taxonomical basis and by 18S rDNA analysis, and their sequences were submitted to GenBank. Among them, Aspergillus fumigatus KIBGE-IB33 was selected on the basis of maximum enzyme production. After optimization of the fermentation conditions, the enzyme was immobilized as CLEAs. Different parameters were optimized for maximum immobilization of amyloglucosidase. Data on enzyme stability (thermal and storage) and reusability suggest the applicability of the immobilized amyloglucosidase for continuous saccharification of starch in industrial processes.
Keywords: aspergillus, immobilization, industrial processes, starch saccharification
Procedia PDF Downloads 496
8156 Diagnosis of Induction Machine Faults by DWT
Authors: Hamidreza Akbari
Abstract:
In this paper, for the detection of inclined eccentricity in an induction motor, a time-frequency analysis of the stator startup current is carried out. For this purpose, the discrete wavelet transform is used. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
Keywords: induction machine, fault, DWT, electric
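A minimal sketch of the kind of time-frequency decomposition described above is given below, using PyWavelets on a synthetic startup current; the signal, the 'db4' wavelet and the decomposition level are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of DWT-based analysis of a simulated stator startup current.
# The synthetic signal, wavelet family ('db4') and decomposition level are
# illustrative assumptions, not parameters taken from the paper.
import numpy as np
import pywt

fs = 10_000                               # sampling frequency, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)             # 2 s startup window
# Fundamental 50 Hz component with a decaying startup transient and a weak
# low-frequency component standing in for an eccentricity-related signature.
current = (np.exp(-2 * t) * 10 + 1) * np.sin(2 * np.pi * 50 * t) \
          + 0.1 * np.sin(2 * np.pi * 25 * t)

# Multi-level discrete wavelet decomposition of the startup current.
coeffs = pywt.wavedec(current, wavelet='db4', level=6)

# Energy per sub-band is a common fault indicator: deviations in specific
# bands can flag eccentricity relative to a healthy-machine baseline.
for i, c in enumerate(coeffs):
    label = 'A6' if i == 0 else f'D{7 - i}'
    print(f'{label}: energy = {np.sum(c ** 2):.2f}')
```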
Procedia PDF Downloads 350
8155 Empirical Orthogonal Functions Analysis of Hydrophysical Characteristics in the Shira Lake in Southern Siberia
Authors: Olga S. Volodko, Lidiya A. Kompaniets, Ludmila V. Gavrilova
Abstract:
The method of empirical orthogonal functions is a method for analyzing data with a complex spatial-temporal structure. It allows us to decompose the data into a finite number of modes determined by empirically finding the eigenfunctions of the data correlation matrix. The modes have different scales and can be associated with various physical processes. The empirical orthogonal function method has been widely used for the analysis of hydrophysical characteristics, for example, the analysis of sea surface temperatures in the Western North Atlantic, ocean surface currents off North Carolina, the study of tropical wave disturbances, etc. The method used in this study has been applied to the analysis of temperature and velocity measurements in saline Lake Shira (Southern Siberia, Russia). Shira is a shallow lake with a maximum depth of 25 m. Lake Shira can be considered a closed water site because it has one small river providing inflow but no outflows. The main factor that causes the motion of the fluid is variable wind flows. In summer, the lake is strongly stratified by temperature and salinity. Long-term measurements of the temperatures and currents were conducted at several points during the summers of 2014-2015. The temperature was measured with an accuracy of 0.1 ºC. The data were analyzed using the empirical orthogonal function method in the real version. The first empirical eigenmode accounts for 70-80% of the energy and can be interpreted as a temperature distribution with a thermocline. A thermocline is a thermal layer where the temperature decreases rapidly from the mixed upper layer of the lake to much colder deep water. The higher-order modes can be interpreted as oscillations induced by internal waves. The current measurements were recorded using 600 kHz and 1200 kHz Acoustic Doppler Current Profilers. These data were analyzed using the empirical orthogonal function method in the complex version. The first empirical eigenmode accounts for about 40% of the energy and corresponds to the Ekman spiral occurring in the case of a stationary homogeneous fluid. Other modes describe the effects associated with the stratification of the fluid. The second and subsequent empirical eigenmodes were associated with dynamical modes. These modes were obtained for a simplified model of an inhomogeneous three-level fluid at a water site with a flat bottom.
Keywords: Ekman spiral, empirical orthogonal functions, data analysis, stratified fluid, thermocline
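For readers unfamiliar with the technique, the real-version EOF decomposition described above amounts to an eigen-analysis of the data correlation matrix, which can be computed via an SVD of the centred data matrix. The sketch below uses random data in place of the lake measurements; all shapes are assumptions.

```python
# Minimal sketch of an empirical orthogonal function (EOF) decomposition via
# SVD, as commonly used for spatio-temporal data such as lake temperature
# records. The random data stand in for the measurements; shapes are assumed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))        # 500 time steps x 12 sensors (assumed)

X = X - X.mean(axis=0)                    # remove the temporal mean per sensor
U, s, Vt = np.linalg.svd(X, full_matrices=False)

eofs = Vt                                 # spatial modes (eigenfunctions)
pcs = U * s                               # principal components (time series)
var_frac = s ** 2 / np.sum(s ** 2)        # fraction of variance per mode

print('variance explained by mode 1: {:.1%}'.format(var_frac[0]))
```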
Procedia PDF Downloads 136
8154 Providing Reliability, Availability and Scalability Support for Quick Assist Technology Cryptography on the Cloud
Authors: Songwu Shen, Garrett Drysdale, Veerendranath Mannepalli, Qihua Dai, Yuan Wang, Yuli Chen, David Qian, Utkarsh Kakaiya
Abstract:
Hardware accelerators have been a promising solution to reduce the cost of cloud data centers. This paper investigates the QoS enhancement of the acceleration of an important datacenter workload: the webserver (or proxy), which faces high computational consumption originating from secure sockets layer (SSL) or transport layer security (TLS) processing in the cloud environment. Our study reveals that for accelerator maintenance cases (the need to upgrade the driver/firmware, or a hardware reset due to a hardware hang), we can still provide cryptography services by switching to software during the maintenance phase and then switching back to the accelerator after maintenance. The switching is seamless to server applications such as Nginx that run inside a VM on top of the server. To achieve this high-availability goal, we propose a comprehensive fallback solution based on Intel® QuickAssist Technology (QAT). This approach introduces an architecture that involves collaboration between the physical function (PF) and virtual function (VF), and collaboration among the VF, OpenSSL, and the web application Nginx. The evaluation shows that our solution can provide high reliability, availability, and scalability (RAS) of the hardware cryptography service in a 7x24x365 manner in the cloud environment.
Keywords: accelerator, cryptography service, RAS, secure sockets layer/transport layer security, SSL/TLS, virtualization fallback architecture
Procedia PDF Downloads 159
8153 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space, while trading off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation, which encodes some meaningful information into individual nodes in a network layer, affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of the input, along with the capacity of a localist representation for multiple instance selections, affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high, by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance.
Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence
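A highly simplified sketch of the described memory-sampling idea follows: the distance to the nearest stored instance serves as a familiarity (validity) check, and the number of instances sampled widens when the input is unfamiliar. The thresholds, data and sample sizes are assumptions, not the paper's localist network.

```python
# Minimal sketch of the memory-sampling idea: for a familiar input (close to
# stored instances) refer to few neighbours (higher bias); for an unfamiliar
# input widen the sample (higher variance). All thresholds are assumptions.
import numpy as np

def classify(x, memory_x, memory_y, familiar_dist=0.5, k_near=3, k_far=15):
    d = np.linalg.norm(memory_x - x, axis=1)
    familiar = d.min() < familiar_dist          # validity / familiarity check
    k = k_near if familiar else k_far           # dynamic bias-variance balance
    nearest = np.argsort(d)[:k]
    votes = memory_y[nearest]
    label = np.bincount(votes).argmax()
    confidence = np.mean(votes == label)        # conflict-based confidence
    return label, confidence, familiar

# Example: 200 stored instances with binary labels (assumed data).
rng = np.random.default_rng(0)
mem_x, mem_y = rng.standard_normal((200, 8)), rng.integers(0, 2, 200)
print(classify(rng.standard_normal(8), mem_x, mem_y))
```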
Procedia PDF Downloads 127
8152 Study on the Governance of Riverside Public Space in Mountainous Cities from the Perspective of Health and Safety
Authors: Chenxu Fang, Qikai Guan
Abstract:
Riverside public space in mountainous cities has unique scenic resources and humanistic connotations and is an indispensable place for the activities of urban residents. In recent years, with the continuous development of society and the expansion of cities, public space along rivers has been affected to a certain extent. On this basis, this study examines the riverfront space of a local section of the Jialing River in Chongqing City through the concept of health and safety. According to their actual use functions, riverfront public spaces in mountainous cities are categorized into leisure and recreational riverfront space, ecological conservation waterfront space, and composite-function waterfront space. Starting from the health and safety elements affecting the environment of riverfront public space, the influencing factors are categorized into three major classes, namely material, non-material, and social. Through field research and questionnaire collection, combined with Likert-scale analysis, the importance levels of the health and safety factors for the different types of riverfront public space in mountainous cities are clarified. We summarize the factors affecting the health and safety of mountainous riverside spaces, map their importance levels to the design of the different types of riverside space, and put forward three representative paths for the safety and health governance of mountainous riverside public space.
Keywords: health and safety, mountain city, riverfront public space, spatial governance, Chongqing Jialing River
Procedia PDF Downloads 47
8151 Novel Verticillane-Type Diterpenoid from the Formosan Soft Coral Cespitularia taeniata
Authors: Yu-Chi Lin, Yun-Sheng Lin, Chia-Ching Liaw, Ching-Yu Chen, Chien-Liang Chao, Chang-Hung Chou, Ya-Ching Shen
Abstract:
A novel diterpenoid, cespitulactam peroxide (1), was isolated from the Formosan soft coral Cespitularia taeniata. Compound 1 possesses a verticillene skeleton having a γ-lactam fused with a 1,2-dioxetane ring system. The structure of 1 was elucidated on the basis of spectroscopic analyses, especially HRMS and 2D NMR experiments.
Keywords: Cespitularia hypotentaculata, diterpenoid, cespitulactam peroxide, γ-lactam
Procedia PDF Downloads 593
8150 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow is associated with an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's laws. Implementing morphometric data to reconstruct the branching pattern and applying Murray's laws to every vessel bifurcation simultaneously lead to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for 3 cardiac cycles are presented and compared to the clinical data.
Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
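Murray's law constrains each bifurcation so that the cube of the parent radius equals the sum of the cubes of the daughter radii. A minimal sketch of propagating this rule through a synthetic bifurcating tree is shown below; the root radius, symmetric split and depth are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of applying Murray's law (r_parent^3 = r1^3 + r2^3) at each
# bifurcation to assign daughter radii in a synthetic vascular tree. The root
# radius, branching ratio and depth are illustrative assumptions.
def murray_daughters(r_parent, ratio=1.0):
    """Split a parent vessel into two daughters obeying Murray's law.

    `ratio` is the cube ratio r1^3 / r2^3; ratio=1.0 gives a symmetric split.
    """
    r1_cubed = r_parent ** 3 * ratio / (1 + ratio)
    r2_cubed = r_parent ** 3 - r1_cubed
    return r1_cubed ** (1 / 3), r2_cubed ** (1 / 3)

def build_tree(r_root=1.0, depth=4):
    """Return vessel radii per generation for a symmetric bifurcating tree."""
    generations = [[r_root]]
    for _ in range(depth):
        children = []
        for r in generations[-1]:
            children.extend(murray_daughters(r))
        generations.append(children)
    return generations

for g, radii in enumerate(build_tree()):
    print(f'generation {g}: r = {radii[0]:.3f} (root units), vessels = {len(radii)}')
```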
Procedia PDF Downloads 272
8149 Impact of Grassroot Democracy on Rural Development of Villages in the State of Haryana
Authors: Minakshi Jain, Sachin Yadav
Abstract:
Gram Panchayat is the smallest unit of democracy in India. Grassroots democracy was further strengthened by the implementation of the 73rd Constitutional Amendment Act (CAA) in 1992. To analyse the impact of grassroots democracy, three villages were selected that have representation from each section of society. The selected villages belong to the same block and district of Haryana state. The villages were selected to assess marginalized groups such as women and other backward classes. These groups are isolated and do not participate in the grassroots-level development process. Caste continues to be a relevant factor in determining rural leadership. The earlier models of Panchayati Raj failed to benefit the marginalized groups of society. The 73rd CAA advocates a uniform three-tier system of Panchayats at the district level (Zilla Panchayat), taluka/block level (Block Panchayat), and village level (Gram Panchayat). The socio-economic profile of representatives in each village is an important factor in rural development. The study will highlight the socio-economic profile of elected members at the Gram Panchayat, block, and district levels. The analysis reveals that there is a need to educate and develop the capacity and capability of the elected representatives. Training must be imparted to all of them to enable them to function as per the provisions of the act. The paper will analyse the impact of the act on rural development and then propose some measures to further strengthen the Panchayati Raj Institutions (PRIs) at the grassroots level.
Keywords: democracy, rural development, marginalized people, function
Procedia PDF Downloads 326
8148 Neural Reshaping: The Plasticity of Human Brain and Artificial Intelligence in the Learning Process
Authors: Seyed-Ali Sadegh-Zadeh, Mahboobe Bahrami, Sahar Ahmadi, Seyed-Yaser Mousavi, Hamed Atashbar, Amir M. Hajiyavand
Abstract:
This paper presents an investigation into the concept of neural reshaping, which is crucial for achieving strong artificial intelligence through the development of AI algorithms with very high plasticity. By examining the plasticity of both human and artificial neural networks, the study uncovers groundbreaking insights into how these systems adapt to new experiences and situations, ultimately highlighting the potential for creating advanced AI systems that closely mimic human intelligence. The uniqueness of this paper lies in its comprehensive analysis of the neural reshaping process in both human and artificial intelligence systems. This comparative approach enables a deeper understanding of the fundamental principles of neural plasticity, thus shedding light on the limitations and untapped potential of both human and AI learning capabilities. By emphasizing the importance of neural reshaping in the quest for strong AI, the study underscores the need for developing AI algorithms with exceptional adaptability and plasticity. The paper's findings have significant implications for the future of AI research and development. By identifying the core principles of neural reshaping, this research can guide the design of next-generation AI technologies that can enhance human and artificial intelligence alike. These advancements will be instrumental in creating a new era of AI systems with unparalleled capabilities, paving the way for improved decision-making, problem-solving, and overall cognitive performance. In conclusion, this paper makes a substantial contribution by investigating the concept of neural reshaping and its importance for achieving strong AI. Through its in-depth exploration of neural plasticity in both human and artificial neural networks, the study unveils vital insights that can inform the development of innovative AI technologies with high adaptability and potential for enhancing human and AI capabilities alike.
Keywords: neural plasticity, brain adaptation, artificial intelligence, learning, cognitive reshaping
Procedia PDF Downloads 52
8147 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is viewed as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
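A simplified, one-cluster sketch of the corrector construction described above (centring, Kaiser-rule regularization, whitening, and a separating hyperplane over the error set) is given below; the synthetic data and the threshold rule are assumptions, and the full method's pairwise-cluster step is omitted.

```python
# Simplified sketch of a shallow corrector: centre the measurement set, apply
# the Kaiser rule to the eigen-decomposition, whiten, and build one linear
# hyperplane that flags likely errors. Illustrative assumed data throughout.
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((1000, 64))        # measurements from the legacy AI
Y = rng.standard_normal((30, 64)) + 2.0    # known error cases (shifted cloud)

mu = S.mean(axis=0)
Sc, Yc = S - mu, Y - mu                    # centring

# Kaiser rule: keep principal components with eigenvalue above the mean.
cov = np.cov(Sc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
keep = vals > vals.mean()
W = vecs[:, keep] / np.sqrt(vals[keep])    # projection + whitening

Zs, Zy = Sc @ W, Yc @ W

# One hyperplane through the whitened error cluster: points whose projection
# exceeds the threshold are reported as probable errors (assumed rule).
w = Zy.mean(axis=0)
w /= np.linalg.norm(w)
threshold = (Zy @ w).min()

flags = (Zs @ w) >= threshold
print(f'{flags.sum()} of {len(S)} measurements flagged as potential errors')
```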
Procedia PDF Downloads 100
8146 Urban Stratification as a Basis for Analyzing Political Instability: Evidence from Syrian Cities
Authors: Munqeth Othman Agha
Abstract:
The historical formation of urban centres in the eastern Arab world was shaped by rapid urbanization and a sudden transformation from a pre-industrial to a post-industrial economy, coupled with uneven development, informal urban expansion, and constant surges in unemployment and poverty rates. The city was stratified accordingly as overlapping layers of division and inequality built on top of each other, creating complex horizontal and vertical divisions on economic, social, political, and ethno-sectarian bases. This was further exacerbated during the neoliberal era, which transformed the city into a sort of dual city inhabited by heterogeneous and often antagonistic social groups. Economic deprivation, combined with a growing sense of marginalization and inequality across the city, planted the seeds of the political instability that broke out in 2011. Unlike other popular uprisings that occupied central squares, as in Egypt and Tunisia, the Syrian uprising in 2011 took place mainly within inner streets and neighborhood squares, mobilizing more or less along the lines of stratification. This has emphasized the role of micro-urban and social settings in shaping mobilization and resistance tactics, which requires us to understand the way the city was stratified and to place it at the center of the city-conflict nexus analysis. This research aims to understand to what extent pre-conflict urban stratification lines played a role in determining the different trajectories of three cities' neighborhoods (Homs, Dara'a and Deir-ez-Zor). The main argument of the paper is that the way the Syrian city was stratified created various social groups within the city who enjoyed different levels of access to life chances, material resources and social status. This determines their relationship with other social groups in the city and, more importantly, their relationship with the state. The advent of a political opportunity will be perceived differently across the city's different social groups according to their perceived interests and threats, which consequently leads to either political mobilization or demobilization. Several factors, including the type of social structures, the built environment, and the state response, determine the ability of social actors to translate the repertoire of contention into collective action, or to transform from social actors into political actors. The research uses urban stratification lines as the basis for understanding the different patterns of political upheaval in urban areas while explaining why neighborhoods with different social and urban environment settings had different abilities and capacities to mobilize, resist state repression and then descend into military conflict. It particularly traces the transformation from social groups to social actors and political actors by applying the explaining-outcome process-tracing method to depict the causal mechanisms that led to the inclusion or exclusion of different neighborhoods from each stage of the uprising, namely mobilization (M1), response (M2), and control (M3).
Keywords: urban stratification, Syrian conflict, social movement, process tracing, divided city
Procedia PDF Downloads 73
8145 Impact of the Electricity Market Prices during the COVID-19 Pandemic on Energy Storage Operation
Authors: Marin Mandić, Elis Sutlović, Tonći Modrić, Luka Stanić
Abstract:
With the restructuring and deregulation of the power system, storage owners, generation companies or private producers can offer their multiple services on various power markets and earn income in different types of markets, such as the day-ahead, real-time, and ancillary services markets. During the COVID-19 pandemic, electricity prices, as well as ancillary services prices, increased significantly. The optimization of the energy storage operation was performed using a suitable model for simulating the operation of a pumped storage hydropower plant under market conditions. The objective function maximizes the income earned through energy arbitrage, regulation-up, regulation-down and spinning reserve services. The optimization technique used for solving the objective function is mixed integer linear programming (MILP). In numerical examples, the pumped storage hydropower plant operation has been optimized considering the realized hourly electricity market prices from Nord Pool for the pre-pandemic (2019) and pandemic (2020 and 2021) years. The impact of the electricity market prices during the COVID-19 pandemic on energy storage operation is shown through the analysis of income, operating hours, reserved capacity and consumed energy for each service. The results indicate the role of energy storage during significant fluctuations in electricity and services prices.
Keywords: electrical market prices, electricity market, energy storage optimization, mixed integer linear programming (MILP) optimization
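A minimal MILP sketch in the spirit of the described model is shown below, using PuLP: a binary mode variable prevents simultaneous pumping and generation, and the objective is pure energy arbitrage. Prices, capacities and efficiency are illustrative assumptions; the paper's model additionally covers regulation-up, regulation-down and spinning reserve.

```python
# Minimal MILP sketch of price-driven energy arbitrage for a pumped-storage
# plant. Prices, capacities and round-trip efficiency are assumed values.
import pulp

prices = [30, 25, 20, 40, 90, 110, 70, 45]   # EUR/MWh, assumed hourly prices
T = range(len(prices))
P_MAX, E_MAX, ETA = 100.0, 400.0, 0.75       # MW, MWh, round-trip efficiency

prob = pulp.LpProblem("storage_arbitrage", pulp.LpMaximize)
gen = pulp.LpVariable.dicts("gen", T, 0, P_MAX)          # MW generated
pump = pulp.LpVariable.dicts("pump", T, 0, P_MAX)        # MW pumped
mode = pulp.LpVariable.dicts("mode", T, cat="Binary")    # 1 = generating

# Income from selling energy minus the cost of pumping.
prob += pulp.lpSum(prices[t] * (gen[t] - pump[t]) for t in T)

for t in T:
    prob += gen[t] <= P_MAX * mode[t]                    # no simultaneous
    prob += pump[t] <= P_MAX * (1 - mode[t])             # pumping/generation
    stored = pulp.lpSum(ETA * pump[k] - gen[k] for k in T if k <= t)
    prob += stored >= 0                                  # reservoir bounds
    prob += stored <= E_MAX

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("income:", pulp.value(prob.objective))
```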
Procedia PDF Downloads 174
8144 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study
Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević
Abstract:
Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives by using theoretical descriptors computed from the chemical structure. The fundamental basis of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin layer chromatography (TLC) on silica gel impregnated with paraffin oil, using ethanol-water mobile phases (φ = 0.5-0.8, v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (3χ) in a second-degree polynomial equation. Numerical values of the correlation coefficient (r = 0.915), Fisher's value (F = 28.34) and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r²CV = 0.79, r²adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. The multivariate classification technique of hierarchical cluster analysis (HCA) was applied in order to group the molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is most frequently used for an important area of data processing, namely classification. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters, in which the compounds were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors are calculated on the basis of the number of atoms in the molecule of the investigated s-triazine derivatives.
Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors
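A minimal sketch of this workflow (a second-degree polynomial of 3χ fitted to RM0, with leave-one-out cross-validation yielding a PRESS statistic) is given below; the fourteen data points are placeholders, not the paper's measurements.

```python
# Minimal sketch of the reported QSRR workflow: fit a second-degree polynomial
# of the third-order connectivity index against RM0 and estimate predictive
# power by leave-one-out cross-validation. The data points are placeholders.
import numpy as np

chi3 = np.linspace(1.0, 4.0, 14)                       # assumed 3-chi values
rm0 = 0.3 * chi3 ** 2 - 0.5 * chi3 + 1.0 \
      + np.random.default_rng(2).normal(0, 0.1, 14)    # assumed RM0 values

coeffs = np.polyfit(chi3, rm0, deg=2)                  # quadratic QSRR model
r = np.corrcoef(np.polyval(coeffs, chi3), rm0)[0, 1]

# Leave-one-out cross-validation (PRESS statistic).
press = 0.0
for i in range(len(chi3)):
    mask = np.arange(len(chi3)) != i
    c = np.polyfit(chi3[mask], rm0[mask], deg=2)
    press += (np.polyval(c, chi3[i]) - rm0[i]) ** 2

print(f'r = {r:.3f}, PRESS = {press:.3f}')
```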
Procedia PDF Downloads 393
8143 The Effect of Substrate Temperature on the Structural, Optical, and Electrical Properties of Nano-Crystalline Tin Doped-Cadmium Telluride Thin Films for Photovoltaic Applications
Authors: Eman A. Alghamdi, A. M. Aldhafiri
Abstract:
It was found that inducing an isolated dopant level close to the middle of the bandgap, by occupying the Cd position in the CdTe lattice structure, is an efficient factor in reducing the nonradiative recombination rate and increasing the solar efficiency. Following our laboratory results, this work was carried out to determine the effect of substrate temperature on CdTe0.6Sn0.4 prepared by the thermal evaporation technique for photovoltaic applications. Various substrate temperatures (25°C, 100°C, 150°C, 200°C, 250°C and 300°C) were applied. Sn-doped CdTe thin films were deposited on glass substrates at different substrate temperatures using CdTe and SnTe powders by the thermal evaporation technique. The structural properties of the prepared samples were determined using Raman spectroscopy and X-ray diffraction. Spectroscopic ellipsometry and spectrophotometric measurements were conducted to extract the optical constants as a function of substrate temperature. The grown films show mixed hexagonal and cubic structures, and a phase change has been reported. Scanning electron microscopy (SEM) revealed that a homogeneous film with a bigger grain size was obtained at a 250°C substrate temperature. The conductivity measurements were recorded as a function of substrate temperature. The open-circuit voltage was improved by controlling the substrate temperature due to the improvement of fundamental material issues such as recombination and low carrier concentration. All the results were explained and discussed on the basis of the influences of the Sn dopant and the substrate temperature on the structural, optical and photovoltaic characteristics.
Keywords: CdTe, conductivity, photovoltaic, ellipsometry
Procedia PDF Downloads 133
8142 A Study of the Planning and Designing of the Built Environment under the Green Transit-Oriented Development
Authors: Wann-Ming Wey
Abstract:
In recent years, the problems of global climate change and natural disasters have drawn public concern and attention to environmental sustainability issues. Aside from the environmental planning efforts made for the human environment, Transit-Oriented Development (TOD) has been widely regarded as one of the future solutions for sustainable city development. In order to be more consistent with sustainable urban development, a built environment planning approach based on the concept of green TOD, which combines TOD and Green Urbanism, is adopted here. Urban development under green TOD encompasses design oriented toward environmental protection, the maximum enhancement of resources and of the efficiency of energy use, and the use of technology to construct green buildings and protected areas linked with natural ecosystems and communities. Green TOD not only provides a solution to urban traffic problems but also directs more sustainable and greener considerations toward future urban development planning and design. In this study, we use both the TOD and Green Urbanism concepts to study built environment planning and design. The Fuzzy Delphi Technique (FDT) is utilized to screen suitable criteria for green TOD. Furthermore, the Fuzzy Analytic Network Process (FANP) and Quality Function Deployment (QFD) were then developed to evaluate the criteria and prioritize the alternatives. The study results can be regarded as future guidelines for built environment planning and design under green TOD development in Taiwan.
Keywords: green TOD, built environment, fuzzy delphi technique, quality function deployment, fuzzy analytic network process
Procedia PDF Downloads 384
8141 Optimal Selling Prices for Small Sized Poultry Farmers
Authors: Hidefumi Kawakatsu, Dong Li, Kosuke Kato
Abstract:
In Japan, meat-type chickens are mainly classified into three categories: (1) Broilers, (2) Branded chickens, and (3) Jidori (free-range local traditional pedigree chickens). The Jidori chickens are certified by the Japanese Ministry of Agriculture, whilst, for the Branded chickens, there is no regulation with respect to their breed (genotype) or the methods for rearing them. It is, therefore, relatively easy for poultry farmers to introduce Branded rather than Jidori chickens. The Branded chickens are normally fed a low-calorie diet with ingredients such as herbs, which lengthens their breeding period (compared with that of the Broilers) and increases their market value. In the field of inventory management, fast-growing animals such as broilers are categorised as ameliorating items. To the best of our knowledge, there are no previous studies that have explicitly considered smaller sized poultry farmers with limited breeding areas. This study develops an inventory model for a small sized poultry farmer that produces both the Broilers (Product 1) and the Branded chickens (Product 2) with different amelioration rates. The poultry farmer's total profit per unit of time is formulated as a function of the selling prices by using a price-dependent demand function. The existence of a unique optimal selling price for each product, which maximises the total profit, is established. It has also been confirmed through numerical examples that, when the breeding area is fixed, the total profit could increase if the poultry farmer reduced the product quantity of Product 1 to introduce Product 2.
Keywords: amelioration, deterioration, small sized poultry farmers, optimal price
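With a linear price-dependent demand D(p) = a - b*p and unit cost c, the profit (p - c)*D(p) is concave in p and is maximised at p* = (a + b*c)/(2b). The sketch below illustrates this first-order condition for two products; the demand parameters and costs are assumptions, and the paper's full model additionally accounts for amelioration rates and the shared breeding area.

```python
# Minimal sketch of the pricing idea: profit-maximising price under a linear
# price-dependent demand function. All parameter values are assumed.
def optimal_price(a, b, c):
    """Profit-maximising price for demand D(p) = a - b*p and unit cost c.

    Setting d/dp [(p - c) * (a - b*p)] = 0 gives p* = (a + b*c) / (2*b).
    """
    return (a + b * c) / (2 * b)

# Two products sharing the same structure, e.g. broilers vs branded chickens.
for name, (a, b, c) in {'broiler': (500, 2.0, 80),
                        'branded': (300, 0.8, 120)}.items():
    p = optimal_price(a, b, c)
    print(f'{name}: p* = {p:.1f}, profit = {(p - c) * (a - b * p):.1f}')
```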
Procedia PDF Downloads 214
8140 Erectile Dysfunction among Bangladeshi Men with Diabetes
Authors: Shahjada Selim
Abstract:
Background: Erectile dysfunction (ED) is an important impediment to the quality of life of men. ED is approximately three times more common in diabetic than non-diabetic men, and diabetic men develop ED earlier than age-matched non-diabetic subjects. Glycemic control and other factors may contribute to the development or deterioration of ED. Aim: The aim of the study was to determine the prevalence of ED and its risk factors in type 2 diabetic (T2DM) men in Bangladesh. Methods: During 2013-2014, 3980 diabetic men aged 30-69 years were interviewed at the out-patient departments of seven diabetic centers in Dhaka by using the validated Bengali version of the questionnaire of the International Index of Erectile Function (IIEF) for evaluation of baseline erectile function (EF). The indexes indicate a very high correlation between the items, and the questionnaire is consistently reliable. Data were analyzed with the Chi-squared (χ²) test using SPSS software. P ≤ 0.05 was considered significant. Results: Out of 3790, ED was found in 2046 (53.98%) of T2DM men. The prevalence of ED increased with age, from 10.5% in men aged 30-39 years to 33.6% in those aged over 60 years (P < 0.001). In comparison with patients with reported diabetes lasting ≤ 5 years (26.4%), the prevalence of ED was lower than in those with diabetes of 6-11 years (35.3%) and of 12-30 years (42.5%, P < 0.001). ED increased significantly in those who had poor glycemic control. The prevalence of ED in patients with good, fair and poor glycemic control was 22.8%, 42.5% and 47.9%, respectively (P = 0.004). Treatment modalities (medical nutrition therapy, oral agents, insulin, and insulin plus oral agents) had a significant association with ED and its severity (P < 0.001). Conclusion: The prevalence of ED is very high among T2DM men in Bangladesh, and the burden can be reduced by improving glycemic status. Glycemic control, duration of diabetes, treatment modalities, and increasing age are associated with ED.
Keywords: erectile dysfunction, diabetes, men, Bangladesh
Procedia PDF Downloads 265
8139 The Effect of Artificial Intelligence on Digital Factory
Authors: Sherif Fayez Lewis Ghaly
Abstract:
Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of factories have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and an integral part of the factory. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a method for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The aim is to maintain the planning basis (digital factory model) for conversions within a factory. This requires the application of a methodology that reduces the deficits of existing approaches. The goal is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective way to track the numerous changes that occur in a factory building during operation. The method is preceded by a hardware and software evaluation to identify the most cost-effective and fastest variant.
Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring
Procedia PDF Downloads 28
8138 Analyze the Effect of TETRA, Terrestrial Trunked Radio, Signal on the Health of People Working in the Gas Refinery
Authors: Mohammad Bagher Heidari, Hefzollah Mohammadian
Abstract:
TETRA (Terrestrial Trunked Radio) is a digital radio communication standard, which has been implemented in several different parts of the ninth gas refinery (phase 12) by South Pars Gas Complex. Studies on possible health impacts on users under different exposure conditions are missing. Objectives: To investigate possible acute effects of electromagnetic fields (EMF) of two different levels of TETRA hand-held transmitter signals on cognitive function and well-being in healthy young males. Methods: In the present double-blind cross-over study, possible effects of short-term (2.5 h) EMF exposure to handset-like TETRA signals (450-470 MHz) were studied in 30 healthy male participants (mean ± SD: 25.4 ± 2.6 years). Individuals were tested on nine study days, on which they were exposed to three different exposure conditions (Sham, TETRA 1.5 W/kg and TETRA 10.0 W/kg) in a randomly assigned and balanced order. Participants were tested in the afternoon within a fixed timeframe. Results: Attention remained unchanged in two out of three tasks. In working memory, significant changes were observed in two out of four subtasks. Significant results were found in 5 out of 35 tested parameters, four of which led to an improvement in performance. Mood, well-being and subjective somatic complaints were not affected by TETRA exposure. Conclusions: The results of the present study do not indicate a negative impact of short-term EMF exposure to TETRA on cognitive function and well-being in healthy young men.
Keywords: TETRA (terrestrial trunked radio), electromagnetic fields (EMF), mobile telecommunication health research (MTHR), antenna
Procedia PDF Downloads 297
8137 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept so that it can detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
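A minimal sketch of the offline reduced-basis construction via proper orthogonal decomposition (SVD of a snapshot matrix) and the online projection step is given below; the snapshot data and truncation tolerance are illustrative assumptions, not the paper's FE model.

```python
# Minimal sketch of building a reduced-basis (RB) space offline from FE
# snapshots via proper orthogonal decomposition, then projecting a state
# online. Snapshot data and the energy tolerance are assumed values.
import numpy as np

rng = np.random.default_rng(3)
snapshots = rng.standard_normal((2000, 50))   # 2000 DOFs x 50 FE solutions

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1  # modes kept (assumed tolerance)
V = U[:, :r]                                  # reduced basis, 2000 x r

# Online stage: a full-order state is represented by r coefficients only,
# so sensor data can be compared against the model in near real time.
u_full = snapshots[:, 0]
u_reduced = V.T @ u_full                      # r-dimensional coordinates
u_approx = V @ u_reduced
print(f'kept {r} modes, reconstruction error = '
      f'{np.linalg.norm(u_full - u_approx) / np.linalg.norm(u_full):.2e}')
```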
Procedia PDF Downloads 99
8136 Industrial Prototype for Hydrogen Separation and Purification: Graphene Based-Materials Application
Authors: Juan Alfredo Guevara Carrio, Swamy Toolahalli Thipperudra, Riddhi Naik Dharmeshbhai, Sergio Graniero Echeverrigaray, Jose Vitorio Emiliano, Antonio Helio Castro
Abstract:
In order to advance the hydrogen economy, several industrial sectors can potentially benefit from the trillions in post-coronavirus stimulus spending. Blending hydrogen into natural gas pipeline networks has been proposed as a means of delivering it during the early market development phase, using separation and purification technologies downstream to extract the pure H₂ close to the point of end-use. This first step has been mentioned around the world as an opportunity to use existing infrastructures for immediate decarbonisation pathways. Among current technologies used to extract hydrogen from mixtures in pipelines or liquid carriers, membrane separation can achieve the highest selectivity. The most efficient approach for the separation of H₂ from other substances by membranes is offered by research on 2D layered materials, due to their exceptional physical and chemical properties. Graphene-based membranes, with their distribution of pore sizes in the nanometer and angstrom range, have shown fundamental and economic advantages over other materials. Their combination with the structure of ceramic and geopolymeric materials enabled the synthesis of nanocomposites and the fabrication of membranes with long-term stability and robustness over a relevant range of physical and chemical conditions. Versatile separation modules have been developed for hydrogen separation, whose adaptability allows their integration in industrial prototypes for applications in heavy transport, steel, and cement production, as well as in small installations at end-user stations of pipeline networks. The developed membranes and prototypes are a practical contribution to the technological challenge of supplying pure H₂ for the mentioned industries as well as for hydrogen energy-based fuel cells.
Keywords: graphene nano-composite membranes, hydrogen separation and purification, separation modules, industrial prototype
Procedia PDF Downloads 159
8135 Research on the Aeration Systems' Efficiency of a Lab-Scale Wastewater Treatment Plant
Authors: Oliver Marunțălu, Elena Elisabeta Manea, Lăcrămioara Diana Robescu, Mihai Necșoiu, Gheorghe Lăzăroiu, Dana Andreya Bondrea
Abstract:
In order to obtain efficient pollutant removal in small-scale wastewater treatment plants, uniform water flow has to be achieved. The experimental setup, designed for treating high-load wastewater (leachate), consists of two aerobic biological reactors and a lamellar settler. Both biological tanks were aerated using three different types of aeration systems: perforated pipes, membrane air diffusers and ceramic tube diffusers. The possibility of homogenizing the water mass with each of the air diffusion systems was evaluated comparatively. The oxygen concentration was determined by optical sensors with data logging. The experimental data were analyzed comparatively for all three air dispersion systems, aiming to identify the oxygen concentration variation under different operational conditions. The Oxygenation Capacity was calculated for each of the three systems and used as a performance and selection parameter. The global mass transfer coefficients were also evaluated as important tools in designing the aeration system. Even though using the tubular porous diffusers leads to a higher oxygen concentration compared to the perforated pipe system (which provides medium-sized bubbles in the aqueous solution), it does not reach the threshold of 80% oxygen saturation in less than 30 minutes. The study has shown that the optimal solution for the studied configuration was the radial air diffusers, which ensure an oxygen saturation of 80% in 20 minutes. The measured values increased when the air flow was increased.
Keywords: flow, aeration, bioreactor, oxygen concentration
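The global mass transfer coefficient KLa is typically estimated by fitting the re-aeration curve C(t) = Cs(1 - exp(-KLa*t)) to logged dissolved-oxygen data. A minimal sketch under assumed data is given below, including the time to 80% saturation used as the threshold above.

```python
# Minimal sketch of estimating the global mass transfer coefficient KLa from
# re-aeration data by fitting C(t) = Cs * (1 - exp(-KLa * t)). The synthetic
# dissolved-oxygen record below stands in for the logged sensor data.
import numpy as np
from scipy.optimize import curve_fit

def do_curve(t, cs, kla):
    return cs * (1.0 - np.exp(-kla * t))

t = np.linspace(0, 30, 60)                       # minutes
rng = np.random.default_rng(4)
c_meas = do_curve(t, 8.5, 0.12) + rng.normal(0, 0.1, t.size)  # mg/L, assumed

(cs, kla), _ = curve_fit(do_curve, t, c_meas, p0=[8.0, 0.1])
t80 = -np.log(1 - 0.8) / kla                     # time to 80 % saturation
print(f'KLa = {kla:.3f} 1/min, time to 80% saturation = {t80:.1f} min')
```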
Procedia PDF Downloads 389
8134 Screening of Plant Growth Promoting Rhizobacteria in the Rhizo- and Endosphere of Sunflower (Helianthus annuus) and Their Role in Enhancing Growth and Yield Attributing Traits and Colonization Studies
Authors: A. Majeed, M.K. Abbasi, S. Hameed, A. Imran, T. Naqqash, M. K. Hanif
Abstract:
Plant growth-promoting rhizobacteria (PGPR) are free-living soil bacteria that aggressively colonize the rhizosphere/plant roots and enhance the growth and yield of plants when applied to seeds or crops. Root-associated (endophytic and rhizospheric) PGPR were isolated from sunflower (Helianthus annuus) grown in soils collected from 16 different sites of subdivision Dhirkot, Poonch, Azad Jammu & Kashmir, Pakistan. A total of 150 bacterial isolates were isolated, purified, and screened in vitro for their plant growth promoting (PGP) characteristics. The 11 most effective isolates were selected on the basis of biochemical assays (nitrogen fixation, phosphate solubilization, growth hormone production, biocontrol assay, and carbon substrate utilization assay through gas chromatography-mass spectrometry (GC-MS), spectrophotometry, high performance liquid chromatography (HPLC), fungal and bacterial dual plate assay and BIOLOG GN2/GP2 microplate assay, respectively) and were tested on the crop under controlled and field conditions. From the inoculation assay, the most promising 4 strains (on the basis of increased root/shoot weight, root/shoot length, seed oil content, and seed yield) were then selected for colonization studies through confocal laser scanning and transmission electron microscopy. 16S rRNA gene analysis showed that these bacterial isolates belong to the Pseudomonas, Enterobacter, Azospirillum, and Citrobacter genera, and their sequences were submitted to GenBank. This study is clear evidence that such isolates have the potential for application as inoculants adapted to poor soils and local crops to minimize the use of chemical fertilizers harmful to soil and the environment.
Keywords: PGPR, nitrogen fixation, phosphate solubilization, colonization
Procedia PDF Downloads 340
8133 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke or tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. One of the differences among these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance of using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of the Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we could observe that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
Keywords: artificial neural networks, electroencephalogram signal, pattern recognition, signal processing
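A minimal sketch of generating the five input-stimulus types compared above from a single EEG epoch is given below; the synthetic epoch, sampling rate, wavelet and window sizes are illustrative assumptions, and the chosen morphological descriptors are examples rather than the study's exact parameter set.

```python
# Minimal sketch of producing the five input-stimulus types from one EEG
# epoch: raw samples, morphological descriptors, FFT spectrum, STFT
# spectrogram and wavelet features. All parameters are assumed values.
import numpy as np
from scipy import signal
from scipy.fft import rfft
import pywt

fs = 256                                          # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
epoch = np.sin(2 * np.pi * 10 * t) \
        + 0.3 * np.random.default_rng(5).standard_normal(t.size)

raw = epoch                                       # 1) raw EEG signal
morph = np.array([epoch.max() - epoch.min(),      # 2) morphological descriptors:
                  np.argmax(epoch) / fs,          #    amplitude, peak latency,
                  np.mean(np.abs(np.diff(epoch)))])   # mean slope
spectrum = np.abs(rfft(epoch))                    # 3) FFT amplitude spectrum
_, _, zxx = signal.stft(epoch, fs, nperseg=64)    # 4) STFT spectrogram
spectrogram = np.abs(zxx)
coeffs = pywt.wavedec(epoch, 'db4', level=4)      # 5) wavelet features:
wavelet_feats = np.array([np.sum(c ** 2) for c in coeffs])  # sub-band energies

for name, feat in [('raw', raw), ('morphological', morph),
                   ('FFT', spectrum), ('STFT', spectrogram.ravel()),
                   ('wavelet', wavelet_feats)]:
    print(f'{name}: {feat.size} input values')
```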
Procedia PDF Downloads 528
8132 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education
Authors: Erica Lima
Abstract:
The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impacts of blogs and social networks on translator education today. It is intended to analyze the curricula of Brazilian translation courses, contrasting them with information obtained from two social networking groups of great visibility in the area concerning the characteristics essential to becoming a successful professional. Therefore, the research takes as its main corpus a few undergraduate translation programs’ syllabuses, as well as postings from social network groups that specifically share professional opinions regarding the necessity for a translator to hold a degree in translation in order to practice the profession. To a certain extent, such comments and their corresponding responses lead to the propagation of discourses which influence the ideas that aspiring translators and recent graduates form about themselves and their undergraduate courses. The postings also show that many professionals do not have a clear position regarding translator education; while refuting the need for it, they also encourage “free” courses. It is thus observed that cyberspace constitutes, on the one hand, a place where people mobilize in defense of similar ideas; on the other hand, it embodies a place of tension and conflict, in view of the fact that there are many participants and, as in any other situation of interlocution, disagreements may arise. From the postings, aspects related to professionalism were analyzed (including discussions about regulation), as well as questions about the classic dichotomies: theory/practice; art/technique; self-education/academic training. As a partial result, a common interest in the valorization of the profession can be noted, although there is no consensus on the characteristics essential to being a good translator. It was also possible to observe that the set of socially constructed representations in the groups reflects the worldwide situation of translation courses (especially in some European countries and in the United States), which does not accurately reflect the Brazilian idiosyncrasies of the area.
Keywords: cyberspace, teaching translation, translator education, university
Procedia PDF Downloads 388
8131 Medical Complications in Diabetic Recipients after Kidney Transplantation
Authors: Hakan Duger, Alparslan Ersoy, Canan Ersoy
Abstract:
Diabetes mellitus is the most common etiology of end-stage renal disease (ESRD). Diabetic nephropathy is also the etiology of ESRD in approximately 23% of kidney transplant recipients. A successful kidney transplant improves the quality of life and reduces the mortality risk for most patients. However, patients require close follow-up after transplantation due to medical complications. Diabetes mellitus can affect patient morbidity and mortality due to the possible effects of immunosuppressive therapy on glucose metabolism. We compared the frequency of medical complications and the outcomes in diabetic and non-diabetic kidney transplant recipients. Materials and Methods: This retrospective study included 498 patients who underwent kidney transplant surgery at our center over a 10-year period. The patients were divided into two groups: diabetics (46 ± 10 years; 26 males, 16 females) and non-diabetics (39 ± 12 years; 259 males, 197 females). Medical complications, graft functions, and causes of graft loss and death were obtained from medical records. Results: There was no significant difference in recipient age, duration of dialysis, body mass index, gender, donor type, donor age, dialysis type, or histories of HBV, HCV, and coronary artery disease between the two groups. The history of hypertension was more frequent in diabetics (69% vs. 36%, p < 0.001). The ratios of hypertension (50.1% vs. 57.1%), pneumonia (21.9% vs. 20%), urinary infection (16.9% vs. 20%), transaminase elevation (11.5% vs. 20%), hyperkalemia (14.7% vs. 17.1%), hyponatremia (9.7% vs. 20%), hypotension (7.1% vs. 7.9%), hypocalcemia (1.4% vs. 0%), thrombocytopenia (8.6% vs. 8.6%), hypoglycemia (0.7% vs. 0%), and neutropenia (1.8% vs. 0%) were comparable in the non-diabetic and diabetic groups, respectively. The frequency of hyperglycaemia was higher in diabetics (8.6% vs. 54.3%, p < 0.001). After transplantation, the ratios of primary non-function (3.4% vs. 2.6%), delayed graft function (25.1% vs. 34.2%), and acute rejection (7.3% vs. 10.5%) in the non-diabetic and diabetic groups were similar, respectively. Hospitalization durations in non-diabetics and diabetics were 22.5 ± 17.5 and 18.7 ± 13 days (p = 0.094). Mean serum creatinine levels in non-diabetics and diabetics were 1.54 ± 0.74 and 1.52 ± 0.62 mg/dL at the 6th month. Forty patients experienced graft loss. The ratios of graft loss and death in the non-diabetic and diabetic groups were 8.2% vs. 7.1% and 7.1% vs. 2.6%, respectively (p > 0.05). There was no significant relationship between graft or patient survival and the development of medical complications. Conclusion: Medical complications are common in the early post-transplant period. Hyperglycaemia was frequently seen following transplantation due to the effects of immunosuppressant regimens. However, the frequency of other medical complications in diabetic patients did not differ from that in non-diabetic patients. The most important cause of death is still infection. The development of medical complications during the first 6 months did not significantly affect transplant outcomes.
Keywords: kidney transplantation, diabetes mellitus, complication, graft function
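As a hedged illustration of the kind of between-group comparison reported above, the snippet below back-calculates approximate case counts for post-transplant hyperglycaemia from the stated percentages and group sizes (8.6% of 456 non-diabetics, 54.3% of 42 diabetics) and runs a chi-square test on the resulting 2x2 table. The reconstructed counts and the use of scipy are assumptions for illustration only; the abstract does not describe the authors' actual statistical methods.

```python
"""Minimal sketch (not the authors' analysis): chi-square comparison of a
complication frequency between diabetic and non-diabetic recipients."""
from scipy.stats import chi2_contingency

non_diabetic_total, diabetic_total = 456, 42          # group sizes from the abstract
non_diabetic_cases = round(0.086 * non_diabetic_total)  # ~39 (approximation)
diabetic_cases = round(0.543 * diabetic_total)          # ~23 (approximation)

# 2x2 contingency table: [cases, non-cases] per group
table = [
    [non_diabetic_cases, non_diabetic_total - non_diabetic_cases],
    [diabetic_cases, diabetic_total - diabetic_cases],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p far below 0.001, consistent with the report
```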
Procedia PDF Downloads 330