Search results for: kernel principal component analysis

28508 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading

Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger

Abstract:

Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is therefore to experimentally investigate the creep and thermo-mechanical material behavior of improved materials and their welded joints at component scale under near-to-service operating conditions; these materials are promising for application in highly efficient and flexible future power plants. They promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility for wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints of conventional materials (ferritic and martensitic materials T24 and T92) to nickel-based alloys (A617B and HR6W) by means of membrane test panels. The temperature and external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, it focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high temperature resistant nickel-based alloys (A740H, A617B, and HR6W) by means of a thick-walled-component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used for the estimation of the axial component load in order to induce a meaningful damage evolution without causing a total component failure. Metallographic investigations after testing will provide support for understanding the damage mechanism and the influence of the thermo-mechanical load and multiaxiality on the microstructure change and on the creep and TMF strength.

Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel-alloys, high temperature resistant steels

Procedia PDF Downloads 107
28507 A Study of Fatigue Life Estimation of a Modular Unmanned Aerial Vehicle by Developing a Structural Health Monitoring System

Authors: Zain Ul Hassan, Muhammad Zain Ul Abadin, Muhammad Zubair Khan

Abstract:

Unmanned aerial vehicles (UAVs) have now become of predominant importance for various operations, and an immense amount of work is going on in this specific category. The structural stability and life of these UAVs are key factors that should be considered while deploying them on different intelligent operations, as their failure leads to the loss of sensitive real-time data and cost. This paper presents applied research on the development of a structural health monitoring system for a UAV designed and fabricated using a modular approach. Firstly, a modular UAV has been designed which allows the components of the UAV to be dismantled and reassembled without affecting the whole assembly. This novel approach makes the vehicle very sustainable and decreases its maintenance cost significantly by making it possible to replace only the part leading to failure. The SHM for the designed architecture of the UAV has then been specified as a combination of wings integrated with strain gauges, an on-board data logger, bridge circuitry and the ground station. For the purposes of this research, sensors have been attached only to the wings, these being the most load-bearing part according to the analysis performed in ANSYS. On the basis of the analysis of the load-time spectrum obtained by the data logger during flight, the fatigue life of the respective component has been predicted using the fracture mechanics techniques of the Rain Flow Method and Miner's Rule. This allows the health of a specified component to be monitored from time to time, helping to avoid failure.
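
As a rough illustration of the Miner's Rule step described above, the sketch below accumulates damage from rainflow-counted stress cycles and converts it into an estimated number of flights to failure. The Basquin-type S-N curve constants and the cycle histogram are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of Miner's-rule fatigue life estimation from counted stress cycles.
# The S-N curve constants (basquin_C, basquin_m) and the cycle histogram below are
# hypothetical placeholders, not data from the paper.
import numpy as np

def cycles_to_failure(stress_range, basquin_C=1e12, basquin_m=3.0):
    """Allowable cycles N_f for a given stress range, from a Basquin-type S-N curve."""
    return basquin_C / np.power(stress_range, basquin_m)

def miners_damage(stress_ranges, counts):
    """Cumulative damage D = sum(n_i / N_i); failure is predicted when D reaches 1."""
    n_allow = cycles_to_failure(np.asarray(stress_ranges, dtype=float))
    return float(np.sum(np.asarray(counts, dtype=float) / n_allow))

# Example: rainflow-counted histogram (stress range in MPa, occurrences per flight)
ranges = [40.0, 80.0, 120.0]
counts = [500, 60, 5]
damage_per_flight = miners_damage(ranges, counts)
print(f"Damage per flight: {damage_per_flight:.2e}, "
      f"estimated flights to failure: {1.0 / damage_per_flight:.0f}")
```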

Keywords: fracture mechanics, rain flow method, structural health monitoring system, unmanned aerial vehicle

Procedia PDF Downloads 281
28506 Evaluation of Joint Contact Forces and Muscle Forces in the Subjects with Non-Specific Low Back Pain

Authors: Mohammad Taghi Karimi, Maryam Hasan Zahraee

Abstract:

Background: Low back pain (LBP) is a common health and socioeconomic problem, especially the chronic one. The joint contact force is an important parameter during walking which increases the incidence of injury and degenerative joint disease. To the best of our knowledge, there is not enough evidence in the literature on the muscular forces and joint contact forces in subjects with low back pain. Purpose: The main hypothesis associated with this research was that the joint contact force at L4/L5 of non-specific chronic low back pain subjects is the same as that of normal subjects. Therefore, the aim of this study was to determine the difference in joint contact force between non-specific chronic low back pain and normal subjects. Method: This was an experimental-comparative study. 20 normal subjects and 20 non-specific chronic low back pain patients were recruited in this study. A Qualisys motion analysis system and a Kistler force plate were used to collect the motions and the force applied on the leg, respectively. OpenSim software was used to determine joint contact forces and muscle forces in this study. Parameters such as the force applied on the legs (pelvis), the kinematics of the hip and pelvis, muscle force peaks, the force of the trunk musculature and the joint contact force of L5/S1 were used for further analysis. Differences between the mean values of all data were measured using a two-sample t-test among the subjects. Results: The forces produced by the Semitendinosus, Biceps Femoris, and Adductor muscles were significantly different between low back pain and normal subjects. Moreover, the mean value of the braking component of the force of the knee joint increased significantly in low back pain subjects, besides a significant decrease in the mean value of the vertical component of the joint reaction force compared to the normal ones. Conclusions: The forces produced by the trunk and pelvic muscles, and the joint contact forces, differ significantly between low back pain and normal subjects. It seems that those with non-specific chronic low back pain use trunk muscles more than normal subjects to stabilize the pelvis during walking.

Keywords: low back pain, joint contact force, kinetic, muscle force

Procedia PDF Downloads 230
28505 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported for integrating a web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service behavior can be verified through type checking, which is one of the popular solutions for improving the quality of code in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML) for adding support for type checking in HTML for JSON-based REST services. STML can be used for defining the expected data types of the response from JSON-based REST services, which will be used for populating the content within the HTML elements of a web component. Although JSON has five data types, viz. string, number, boolean, object and array, STML is made to support only string, number and boolean. This is because both object and array are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number and st-boolean for string, number and boolean, respectively. All these STML annotations are used by the developer who is writing a web component, and they enable other developers to use automated type checking for ensuring the proper integration of their REST services with the same web component. Two utilities have been written for developers who are using STML-based web components. One of these utilities is used for automated type checking during the development phase. It uses the browser console for showing the error description if the integrated web service is not returning a response with the expected data type. The other utility is a Gulp-based command line utility for removing the STML attributes before going into production. This ensures the delivery of STML-free web pages in the production environment. Both of these utilities have been tested to perform type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended to introduce a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
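
The following Python sketch illustrates the type-checking idea behind STML under stated assumptions: the st-string/st-number/st-boolean attribute names are taken from the abstract, while the st-field attribute used here to bind an element to a JSON key, and the checking logic itself, are hypothetical stand-ins for the paper's browser-console utility.

```python
# Minimal sketch of STML-style type checking of a JSON REST response against
# annotated HTML elements. "st-field" is an assumed binding attribute for
# illustration only; st-string / st-number / st-boolean come from the abstract.
import json
from html.parser import HTMLParser

EXPECTED = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

class STMLParser(HTMLParser):
    """Collects {json_key: st-* annotation} from annotated elements."""
    def __init__(self):
        super().__init__()
        self.bindings = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        field = attrs.get("st-field")
        if field:
            for st_attr in EXPECTED:
                if st_attr in attrs:
                    self.bindings[field] = st_attr

def type_check(component_html, response_text):
    """Return a list of human-readable type errors (empty list = response is valid)."""
    parser = STMLParser()
    parser.feed(component_html)
    payload = json.loads(response_text)
    errors = []
    for key, st_attr in parser.bindings.items():
        value = payload.get(key)
        expected = EXPECTED[st_attr]
        # bool is a subclass of int in Python, so exclude it from st-number checks
        ok = isinstance(value, expected) and not (expected is not bool and isinstance(value, bool))
        if not ok:
            errors.append(f"{key}: expected {st_attr}, got {type(value).__name__}")
    return errors

html_doc = ('<span st-field="price" st-number></span>'
            '<span st-field="name" st-string></span>')
print(type_check(html_doc, '{"price": "12.5", "name": "Widget"}'))
# -> ['price: expected st-number, got str']
```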

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 239
28504 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
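
A minimal scikit-learn sketch of the regression comparison described above is given below; the synthetic spectra, model hyperparameters, and the ten repeated random splits are illustrative assumptions standing in for the actual NIR data set and tuning.

```python
# Minimal sketch of comparing regressors (SVMR, PLSR, ETR, PCA-NN) on NIR-like data.
# The synthetic spectra below stand in for the NIR data set, which is not public here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 256))                               # 200 spectra x 256 wavelengths (synthetic)
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=200)   # surrogate glucose level

models = {
    "SVMR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    "PLSR": PLSRegression(n_components=8),
    "ETR": ExtraTreesRegressor(n_estimators=200, random_state=0),
    "PCA-NN": make_pipeline(StandardScaler(), PCA(n_components=10),
                            MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
}

# Repeat random train/test splits, as described in the abstract, to gauge generalization.
for name, model in models.items():
    scores = []
    for seed in range(10):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        model.fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))   # coefficient of determination R^2
    print(f"{name}: mean R^2 = {np.mean(scores):.3f}")
```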

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 76
28503 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to the shoulder-surfing effect. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and reenter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different duration timers, namely 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms have been applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare the performance in user authentication: namely, Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with Gaussian Radial Basis Kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
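
The sketch below illustrates, under stated assumptions, the classifier comparison described above: four normalized features per password entry and the five algorithms named in the abstract, evaluated on synthetic placeholder data rather than the collected gesture data.

```python
# Minimal sketch of the authentication experiment: four normalized features per
# password entry, five classifiers compared. The data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# features: password score, length, speed, size; label: 1 = genuine creator, 0 = imposter
X_genuine = rng.normal(loc=[0.8, 12, 1.0, 0.5], scale=0.1, size=(200, 4))
X_imposter = rng.normal(loc=[0.6, 10, 1.4, 0.7], scale=0.2, size=(200, 4))
X = np.vstack([X_genuine, X_imposter])
y = np.array([1] * 200 + [0] * 200)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
    "SVM (RBF kernel)": SVC(kernel="rbf", gamma="scale"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)   # normalize the features before classification
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")
```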

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 91
28502 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario

Authors: J. S. Savier

Abstract:

Real power is the component of power which is converted into useful energy, whereas reactive power is the component of power which cannot be converted to useful energy but is required for the magnetization of various electrical machines. If the reactive power is compensated at the consumer end, the need for reactive power flow from generators to the load can be avoided and hence the overall power loss can be reduced. In this scenario, this paper presents a succinct method, called the JSS method, for the allocation of reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made while deriving the reactive power loss allocation method.

Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method

Procedia PDF Downloads 360
28501 Genetic Structuring of Four Tectona grandis L. F. Seed Production Areas in Southern India

Authors: P. M. Sreekanth

Abstract:

Teak (Tectona grandis L. f.) is a tree species indigenous to India and other Southeastern countries. It produces high-value timber and is easily established in plantations. Reforestation requires a constant supply of high quality seeds. Seed Production Areas (SPA) of teak are improved stands used for the collection of open-pollinated quality seeds in large quantities. Information on the genetic diversity of major teak SPAs in India is scanty. The genetic structure of four important seed production areas of Kerala State in Southern India was analyzed employing amplified fragment length polymorphism markers using ten selective primer combinations on 80 samples (4 populations × 20 trees). The study revealed that the gene diversity of the SPAs varied from 0.169 (Konni SPA) to 0.203 (Wayanad SPA). The percentage of polymorphic loci ranged from 74.42 (Parambikulam SPA) to 84.06 (Konni SPA). The mean total gene diversity index (HT) of all the four SPAs was 0.2296 ±0.02. A high proportion of genetic diversity was observed within the populations (83%), while diversity between populations was lower (17%) (GST = 0.17). Principal coordinate analysis and STRUCTURE analysis of the genotypes indicated that the pattern of clustering was in accordance with the origin and geographic location of the SPAs, indicating the specific identity of each population. A UPGMA dendrogram was prepared and showed that all the twenty samples from each of the Konni and Parambikulam SPAs clustered into two separate groups, respectively. However, five Nilambur genotypes and one Wayanad genotype intruded into the Konni cluster. The higher gene flow estimated (Nm = 2.4) reflected the inclusion of Konni-origin planting stock in the Nilambur and Wayanad plantations. Evidence for population structure investigated using 3D Principal Coordinate Analysis in FAMD software 1.30 indicated that the pattern of clustering was in accordance with the origin of the SPAs. The present study showed that assessment of genetic diversity in seed production plantations can be achieved using AFLP markers. The AFLP fingerprinting was also capable of identifying the geographical origin of planting stock and thereby revealing the occurrence of errors in genotype labeling. Molecular marker-based selective culling of genetically similar trees from a stand, so as to increase the genetic base of seed production areas, could be a new proposition to improve the quality of seeds required for raising commercial plantations of teak. The technique can also be used to assess the genetic diversity status of plus trees within provenances during their selection for raising clonal seed orchards, thereby assuring the quality of seeds available for raising future plantations.

Keywords: AFLP, genetic structure, SPA, teak

Procedia PDF Downloads 303
28500 Designing an Effective Accountability Model for Islamic Azad University Using the Qualitative Approach of Grounded Theory

Authors: Davoud Maleki, Neda Zamani

Abstract:

The present study aims at exploring an effective accountability model for Islamic Azad University using a qualitative grounded theory approach. The data of this study were obtained from semi-structured interviews with 25 professors and scholars at the Islamic Azad University of Tehran, who were selected by the theoretical sampling method. In the data analysis, the stepwise method and the analytical methods of Strauss and Corbin (1992) were used. After identifying the main component (balanced response to stakeholders' needs) and using it to bring together the categories, expressions and ideas representing the relationships between the main component and the subcomponents, the revealed components were categorized into the six dimensions of the paradigm model, with the relationships among them, including causal conditions (7 components), the main component (balanced response to stakeholders' needs), strategies (5 components), environmental conditions (5 components), intervention features (4 components), and consequences (3 components). The research findings provide an exploratory model describing the relationships between causal conditions, the main component, accountability strategies, environmental conditions, university environmental features, and consequences.

Keywords: accountability, effectiveness, Islamic Azad University, grounded theory

Procedia PDF Downloads 73
28499 Massachusetts Homeschool Policy: An Interpretive Analysis of Homeschool Regulation and Oversight

Authors: Lauren Freed

Abstract:

This research proposal outlines an examination of homeschool oversight in the Massachusetts educational system amid the backdrop of ideological differences between various parties with contributing interests. This mixed-methodology study will follow an interpretive policy research approach, involving the use of existing data, surveys, and focus groups. The aim is to capture the distinct sets of meanings, values, feelings, and beliefs of principal stakeholders, while exploring the ways in which each of them interacts with, interprets, and implements the homeschool guidelines set forth by the Massachusetts Supreme Judicial Court decision Care and Protection of Charles (1987). This analysis will identify and contextualize the attitudes, administrative choices, financial implications, and educational impacts that result from the process and practice of enacting current homeschool oversight policy in Massachusetts. The following question will guide this study: How do districts, homeschooling parents, and the Massachusetts Department of Elementary and Secondary Education (DESE) regulate, fund, collect, interpret, implement and report Massachusetts homeschool oversight policy? The resulting analysis will produce a unique and original baseline snapshot of qualitative and quantifiable point-in-time data based on the registered homeschool population in the state of Massachusetts.

Keywords: alternative education, homeschooling, home education, home schooling policy

Procedia PDF Downloads 175
28498 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved to be effective in reducing the noise and artifacts from the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) resulted in the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
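
A minimal sketch of the preprocessing stage described above is shown below: band-pass filtering of multichannel EEG followed by independent component analysis. The sampling rate, cut-off frequencies, and synthetic signals are assumptions for illustration only.

```python
# Minimal sketch of the preprocessing pipeline: band-pass filtering of multichannel
# EEG followed by ICA. Sampling rate, cut-offs and the synthetic segment are assumed.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 256.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = rng.normal(size=(len(t), 18))          # 18-channel synthetic EEG segment

def bandpass(data, low=0.5, high=40.0, fs=256.0, order=4):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=0)

filtered = bandpass(eeg, fs=fs)

# ICA separates the filtered channels into statistically independent sources;
# artifact components (eye blinks, muscle noise) can then be dropped before
# feeding features to the RNN/LSTM/CNN classifiers mentioned in the abstract.
ica = FastICA(n_components=18, random_state=0, max_iter=1000)
sources = ica.fit_transform(filtered)        # shape: (samples, components)
print(sources.shape)
```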

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 72
28497 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant

Authors: Weiam Daear, Patrick Lai, Elmar Prenner

Abstract:

The human body offers many avenues that could be used for drug delivery. The pulmonary route, in which drugs are delivered through the lungs, presents many advantages that have sparked interest in the field. These advantages include: 1) direct access to the lungs and the large surface area they provide, and 2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. It consists of cells and a monolayer of lipids and a few proteins called the lung surfactant. This monolayer consists of ~90% lipids and ~10% proteins that are produced by the alveolar epithelial cells. The two major lipid classes, phosphatidylcholine (PC) and phosphatidylglycerol (PG) of various saturations and chain lengths, represent 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli. It is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important. This is due to the advantages NPs provide with their large surface area and potential specific targeting. Therefore, studying the interaction of NPs with the lung surfactant, and whether they affect its stability, becomes very essential. The aim of this research is to develop a biomimetic model of the human lung surfactant followed by a biophysical analysis of the interaction of polymeric NPs. This biomimetic model will function as a fast initial mode of testing for whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system that contains the major PC and PG lipids. Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system. In the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is not much monolayer data on the biophysical analysis of the 16:0/16:1 lipids; therefore, more analysis will be discussed here. Biophysical techniques such as the Langmuir trough are used for stability measurements, which monitor changes to a monolayer's surface pressure upon NP interaction. Furthermore, Brewster angle microscopy (BAM) is employed to visualize changes to the lateral domain organization. Results show preferential interactions of NPs with different lipid groups that are also dependent on the monolayer fluidity. Furthermore, results show that the film stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery. It has been shown that NPs within a certain size range are safe for the pulmonary route, but little is known about the mode of interaction of these polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.

Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles

Procedia PDF Downloads 170
28496 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window size, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window size. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule. This method extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed, based on the same analysis method. As far as we know, through this research we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
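
The sketch below shows only the connectivity measure underlying the method: the imaginary part of the complex Pearson correlation coefficient, computed from analytic signals over a fixed window. The adaptive RICI window-sizing rule that is the paper's contribution is not reproduced here.

```python
# Minimal sketch of the imCPCC measure between two EEG channels over a fixed window.
# The adaptive RICI window-sizing rule from the paper is not reproduced.
import numpy as np
from scipy.signal import hilbert

def im_cpcc(x, y):
    """Imaginary part of the complex Pearson correlation of two real signals."""
    a = hilbert(x)                     # analytic signal of channel x
    b = hilbert(y)                     # analytic signal of channel y
    a = a - a.mean()
    b = b - b.mean()
    corr = np.mean(a * np.conj(b)) / (np.std(a) * np.std(b))
    return corr.imag

# Two synthetic channels: y lags x by a quarter period, which yields a non-zero
# imaginary component (a purely zero-lag coupling would give roughly 0).
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2) + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(f"imCPCC(x, y) = {im_cpcc(x, y):.3f}")
```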

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 197
28495 Yield Loss Estimation Using Multiple Drought Severity Indices

Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed

Abstract:

Drought is a natural disaster that occurs in a region due to a lack of precipitation and high temperatures over a continuous period or in a single season as a consequence of climate change. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment. Consequently, drought causes agricultural product loss, food shortages, famines, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought. Therefore, it is important to develop an agricultural drought risk and loss assessment to mitigate the drought impact on the agricultural sector. In this context, the main purpose of this study was to assess yield loss using a composite drought index in the drought-affected vineyards. In this study, the composite drought index (CDI) was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), the temperature condition index (TCI), the deviation of NDVI from the long-term mean (NDVI DEV), the normalized difference moisture index (NDMI) and the precipitation condition index (PCI). Moreover, a quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and the weighted indices were then combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index results indicated that moderate to severe droughts were observed across Kabul Province during 2016 and 2018. Moreover, the results showed that no vineyard was in extreme drought conditions; therefore, only the severe and moderate conditions were considered. According to the BRANN results, R = 0.87 and R = 0.94 in severe drought conditions and R = 0.85 and R = 0.91 in moderate drought conditions for the years 2016 and 2018, respectively. In Kabul Province, within the two drought periods, there was a significant deficit in the vineyards. According to the findings, 2018 had the highest rate of loss, almost 7 ton/ha, whereas in 2016 the loss rate was about 1.2 ton/ha. This research will support stakeholders in identifying drought-affected vineyards and support farmers during severe droughts.
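
A minimal sketch of deriving PCA-based weights for the five indices and combining them into a composite drought index is given below. The random input layers and the particular weighting convention (normalized absolute loadings of the first component) are assumptions, since the paper's exact scheme is not spelled out in the abstract.

```python
# Minimal sketch of building a composite drought index from five input indices with
# PCA-derived weights. Random data stand in for the VCI, TCI, NDVI_DEV, NDMI, PCI layers.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# rows = pixels (or vineyard parcels), columns = [VCI, TCI, NDVI_DEV, NDMI, PCI]
indices = rng.random((5000, 5))

scaled = MinMaxScaler().fit_transform(indices)      # put all indices on a common 0-1 scale
pca = PCA(n_components=1).fit(scaled)

# Use the absolute loadings of the first principal component, normalized to sum to 1,
# as weights for the linear combination (one common convention; the paper's exact
# weighting scheme may differ).
weights = np.abs(pca.components_[0])
weights = weights / weights.sum()
cdi = scaled @ weights                              # composite drought index per pixel
print("weights:", np.round(weights, 3), "CDI range:", cdi.min().round(3), cdi.max().round(3))
```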

Keywords: grapes, composite drought index, yield loss, satellite remote sensing

Procedia PDF Downloads 139
28494 On the Creep of Concrete Structures

Authors: A. Brahma

Abstract:

Analysis of the deferred deformations of concrete under sustained load shows that creep has a leading role in the deferred deformations of concrete structures. Knowledge of the creep characteristics of concrete is a necessary starting point in the design of structures for crack control. Such knowledge will enable the designer to estimate the probable deformation in pre-stressed or reinforced concrete, and the appropriate steps can be taken in the design to accommodate this movement. In this study, we propose a prediction model that involves the principal parameters acting on the deferred behaviour of concrete structures. For the estimation of the model parameters, the Levenberg-Marquardt method has proven very satisfactory. A comparison between the experimental results and the predictions of the designed model shows that it is well suited to describe the evolution of the creep of concrete structures.
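
As an illustration of the parameter-estimation step, the sketch below fits a generic power-law-type creep function to synthetic deferred-strain data with the Levenberg-Marquardt algorithm via scipy; the creep function and the data are placeholders, not the model proposed in the paper.

```python
# Minimal sketch of Levenberg-Marquardt fitting of a creep prediction model.
# The creep function and synthetic data are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def creep_strain(t, eps_inf, tau, n):
    """Deferred strain vs. time under sustained load: eps_inf * (t / (t + tau))**n."""
    return eps_inf * (t / (t + tau)) ** n

# Synthetic "measured" creep curve with noise
t_days = np.linspace(1, 1000, 60)
true = creep_strain(t_days, eps_inf=800e-6, tau=120.0, n=0.6)
measured = true + np.random.default_rng(0).normal(scale=10e-6, size=t_days.size)

# method="lm" selects the Levenberg-Marquardt algorithm for the least-squares fit
params, cov = curve_fit(creep_strain, t_days, measured,
                        p0=[500e-6, 50.0, 0.5], method="lm")
eps_inf, tau, n = params
print(f"eps_inf = {eps_inf * 1e6:.0f} microstrain, tau = {tau:.1f} days, n = {n:.2f}")
```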

Keywords: concrete structure, creep, modelling, prediction

Procedia PDF Downloads 284
28493 Effect of the Soil-Foundation Interface Condition in the Determination of the Resistance Domain of Rigid Shallow Foundations

Authors: Nivine Abbas, Sergio Lagomarsino, Serena Cattari

Abstract:

The resistance domain of a generally loaded rigid shallow foundation is normally represented as an interaction diagram limited by a failure surface in the three-dimensional (3D) load space (N, V, M), where N is the vertical centric load component, V is the horizontal load component and M is the bending moment component. Usually, this resistance domain is constructed neglecting the foundation sliding mechanism that takes place at the level of the soil-foundation interface once the applied horizontal load exceeds the interface frictional resistance of the foundation. This issue is reflected in the literature by the fact that the failure limit in the (2D) load space (N, V) is constructed as a parabola having an initial slope, at the center of the coordinate system, that in some works depends only on the soil friction angle and in other works has an empirical value. However, considering a given geometry of the foundation lying on a given soil type, the initial slope of the failure limit must change, for instance, when varying the roughness of the foundation surface at its interface with the soil. The present study discusses the effect of the soil-foundation interface condition on the construction of the resistance domain, and proposes a correction to be applied to the failure limit in order to overcome this effect.

Keywords: soil-foundation interface, sliding mechanism, soil shearing, resistance domain, rigid shallow foundation

Procedia PDF Downloads 449
28492 Application of Modal Analysis for Commissioning of a Ball Screw System

Authors: T. D. Tran, H. Schlegel, R. Neugebauer

Abstract:

Ball screws are an important component in machine tools. In mechatronic systems and machine tools, a ball screw usually has to work at high speed. Otherwise, the axial compliance of the ball screw, in combination with the inertia of the slide, the motor, the coupling and the screw, will cause an oscillation resonance, which limits the system's bandwidth and consequently influences the performance of the motion controller. In this paper, the modal analysis method is used, measuring and analysing the vibration parameters of the ball screw system to determine the dynamic characteristics of the existing structure. On the one hand, the results of this study were obtained by theoretical analysis and by modal testing of a ball screw system test station with the help of an impact hammer and, respectively, excitation by the motor. The experimental study showed the oscillating forms of the ball screw for each frequency and yielded the eigenfrequencies of the ball screw system. On the other hand, this research uses a simulation based on numerical modal analysis in order to analyse the oscillation and to find the eigenfrequencies of the ball screw system. Furthermore, model order reduction by modal reduction and also according to Guyan is carried out. On the basis of these results, a secure and also rapid commissioning of the control loops, with regard to operating at their optimal function, is targeted.

Keywords: modal analysis, ball screw, controller system, machine tools

Procedia PDF Downloads 449
28491 Phenological and Molecular Genetic Diversity Analysis among Saudi durum Wheat Landraces

Authors: Naser B. Almari, Salem S. Alghamdi, Muhammad Afzal, Mohamed Helmy El Shal

Abstract:

Wheat landraces are a rich genetic resource for boosting agronomic qualities in breeding programs while also providing diversity and unique adaptation to local environmental conditions. These genotypes have grown increasingly important in the face of recent climate change challenges. This research aimed to examine the genetic diversity of Saudi durum wheat landraces using morpho-phenological and molecular data. The principal component analysis (PCA) recorded 78.47% of the total variance, with eigenvalues of 1.064, for the first six PCs. The characters that contributed most to the diversity were the length of awns at the tip relative to the length of the ear; culm: glaucosity of the neck; flag leaf: glaucosity of the sheath; flag leaf: anthocyanin coloration of auricles; plant: frequency of plants with recurved flag leaves; ear: length; and ear: shape in profile in PC1. The wheat genotypes that contributed most to PC1 were 8, 14, 497, 650, 569, 590, 594, 598, 600, 601, and 604. The cluster analysis recorded a cophenetic correlation of 85.42 among the 22 wheat genotypes and grouped the genotypes into two main groups. Group I contains 8 genotypes, while the second group contains 12 wheat genotypes; two genotypes (13 and 497) stand alone in the dendrogram and do not group with any of the other genotypes. The second group was subdivided into two subgroups, with genotypes 14, 602, and 600 present in the second subgroup. In a further analysis, the genotypes were grouped into two main groups: the first group contains 17 genotypes, while the second group contains 3 wheat genotypes (8, 977, and 594). Genotype 602 stands alone and does not group with any wheat genotype, and genotypes 650 and 13 also stand alone within the first group. Using the Mantel test, the data recorded a significant (R² = 0.0006) correlation (phenotypic and genetic) among the 22 durum wheat genotypes.
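
A minimal sketch of this kind of diversity workflow is given below: PCA, UPGMA (average-linkage) clustering, and the cophenetic correlation, computed on a random binary marker matrix that merely stands in for the real phenotypic/SRAP scores.

```python
# Minimal sketch of a diversity workflow: PCA, UPGMA clustering and cophenetic
# correlation on a random 0/1 marker matrix standing in for the real scores.
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet, fcluster

rng = np.random.default_rng(0)
markers = rng.integers(0, 2, size=(22, 120)).astype(bool)   # 22 genotypes x 120 marker loci

# PCA on the marker matrix: variance explained by the leading components
pca = PCA(n_components=6).fit(markers)
print("variance explained by first 6 PCs:", pca.explained_variance_ratio_.sum().round(3))

# UPGMA (= average linkage) dendrogram on a Jaccard-type marker distance
dist = pdist(markers, metric="jaccard")
tree = linkage(dist, method="average")
coph_corr, _ = cophenet(tree, dist)
print("cophenetic correlation:", round(float(coph_corr), 3))

# Cut the tree into two main groups, as in the abstract
groups = fcluster(tree, t=2, criterion="maxclust")
print("group sizes:", np.bincount(groups)[1:])
```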

Keywords: durum wheat, PCA, cluster analysis, SRAP, genetic diversity

Procedia PDF Downloads 101
28490 A Moroccan Natural Solution for Treating Industrial Effluents: Evaluating the Effectiveness of Using Date Kernel Residues for Purification

Authors: Ahmed Salim, A. El Bouari, M. Tahiri, O. Tanane

Abstract:

This research aims to develop and comprehensively characterize a cost-effective activated carbon derived from date residues, with a focus on optimizing its physicochemical properties to achieve superior performance in a variety of applications. The samples were synthesized via a chemical activation process utilizing phosphoric acid (H₃PO₄) as the activating agent. Activated carbon, produced through this method, functions as a vital adsorbent for the removal of contaminants, with a specific focus on methylene blue, from industrial wastewater. This study meticulously examined the influence of various parameters, including carbonization temperature and duration, on both the combustion properties and adsorption efficiency of the resultant material. Through extensive analysis, the optimal conditions for synthesizing the activated carbon were identified as a carbonization temperature of 600°C and a duration of 2 hours. The activated carbon synthesized under optimized conditions demonstrated an exceptional carbonization yield and methylene blue adsorption efficiency of 99.71%. The produced carbon was subsequently characterized using X-ray diffraction (XRD) analysis. Its effectiveness in the adsorption of methylene blue from contaminated water was then evaluated. A comprehensive assessment of the adsorption capacity was conducted by varying parameters such as carbon dosage, contact time, initial methylene blue concentration, and pH levels.

Keywords: environmental pollution, adsorbent, activated carbon, phosphoric acid, date Kernels, pollutants, adsorption

Procedia PDF Downloads 21
28489 Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper, we propose a neural style transfer solution whereby we have created a lightweight separable-convolution-kernel-based GAN architecture (SC-GAN), which will be very useful for designing filters for mobile phone cameras and edge devices, converting any image into a 2D animated comic style like that of movies such as He-Man, Superman, and The Jungle Book. This will help 2D animation artists to create new characters from images of real-life persons without having to spend endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time needed to make 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used for camera filters capable of taking comic-style shots using a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarcity of resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which clearly makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and with just 25 extra epochs trained on a dataset of fewer than 1000 samples, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; we then use a point-wise convolution to bring the network back to the required channel number using a 1-by-1 kernel. This reduces the number of parameters substantially and makes the network extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which allows the network to take advantage of batch normalization for easier training while maintaining non-linear feature capture through the learnable parameters.
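
The sketch below shows, in PyTorch, the depthwise separable building block described above: a per-channel depthwise convolution followed by a 1×1 point-wise convolution and a batch normalization layer with learnable scale/shift parameters. The channel counts are illustrative, and the parameter comparison simply shows where the model-size savings come from.

```python
# Minimal sketch of a depthwise separable convolution block (channel counts assumed).
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        # depthwise: one 3x3 filter per input channel (groups = in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        # pointwise: 1x1 convolution mixes channels back to the required width
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch, affine=True)   # learnable scale/shift parameters
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter comparison against a standard convolution of the same shape
std = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
sep = SeparableConv2d(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(f"standard conv: {count(std)} params, separable conv: {count(sep)} params")
# roughly 73,728 vs roughly 9,000 parameters, which is where the savings come from
```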

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distilation, perceptual loss

Procedia PDF Downloads 121
28488 Design and Development of Data Mining Application for Medical Centers in Remote Areas

Authors: Grace Omowunmi Soyebi

Abstract:

Data mining is the extraction of information from a large database, which helps in predicting a trend or behavior, thereby helping management make knowledge-driven decisions. One principal problem of most hospitals in rural areas is the use of a manual file management system for keeping records. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, and the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; this delay may cause something unexpected to happen to the patient. This data mining application is to be designed using a structured system analysis and design method, which will help in a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of a computerized medical record system. This computerized system will replace the file management system and help to easily retrieve a patient's record with increased data security, provide access to clinical records for decision-making, and reduce the time within which a patient gets attended to.

Keywords: data mining, medical record system, systems programming, computing

Procedia PDF Downloads 195
28487 Simultaneous Determination of Six Characterizing/Quality Parameters of Biodiesels via 1H NMR and Multivariate Calibration

Authors: Gustavo G. Shimamoto, Matthieu Tubino

Abstract:

The characterization and quality of biodiesel samples are checked by determining several parameters. Considering the large number of analyses to be performed, as well as the disadvantages of the use of toxic solvents and waste generation, multivariate calibration is suggested to reduce the number of tests. In this work, hydrogen nuclear magnetic resonance (1H NMR) spectra were used to build multivariate models, from partial least squares (PLS) regression, in order to determine simultaneously six important characterizing and/or quality parameters of biodiesels: density at 20 ºC, kinematic viscosity at 40 ºC, iodine value, acid number, oxidative stability, and water content. Biodiesels from twelve different oil sources were used in this study: babassu, brown flaxseed, canola, corn, cottonseed, macauba almond, microalgae, palm kernel, residual frying, sesame, soybean, and sunflower. 1H NMR reflects the structures of the compounds present in biodiesel samples and showed suitable correlations with the six parameters. The PLS models were constructed with between 5 and 7 latent variables, and the obtained values of r(cal) and r(val) were greater than 0.994 and 0.989, respectively. In addition, the models were considered suitable to predict all six parameters for external samples, taking into account the analytical speed of the procedure. Thus, the alliance between 1H NMR and PLS proved appropriate for characterizing and evaluating the quality of biodiesels, significantly reducing analysis time, the consumption of reagents/solvents, and waste generation. Therefore, the proposed methods can be considered to adhere to the principles of green chemistry.
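
A minimal sketch of the multivariate calibration step is given below: a PLS model mapping 1H NMR spectra to the six parameters at once, with synthetic spectra standing in for the measured data and the number of latent variables chosen from the 5-7 range quoted above.

```python
# Minimal sketch of PLS multivariate calibration from spectra to six quality
# parameters. Synthetic spectra stand in for the measured 1H NMR data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4000))        # 120 biodiesel samples x 4000 spectral points
W = rng.normal(size=(4000, 6))
Y = X @ W * 1e-2 + rng.normal(scale=0.05, size=(120, 6))   # 6 surrogate parameters

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=6)     # number of latent variables (tuned in practice)
pls.fit(X_tr, Y_tr)
Y_hat = pls.predict(X_te)

# correlation between predicted and reference values for each of the six parameters
names = ["density", "viscosity", "iodine value",
         "acid number", "oxidative stability", "water content"]
for i, name in enumerate(names):
    r = np.corrcoef(Y_te[:, i], Y_hat[:, i])[0, 1]
    print(f"{name}: r(val) = {r:.3f}")
```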

Keywords: biodiesel, multivariate calibration, nuclear magnetic resonance, quality parameters

Procedia PDF Downloads 527
28486 Automated Resin Transfer Moulding of Carbon Phenolic Composites

Authors: Zhenyu Du, Ed Collings, James Meredith

Abstract:

The high cost of composite materials versus conventional materials remains a major barrier to uptake in the transport sector. This is exacerbated by a shortage of skilled labour which makes the labour content of a hand laid composite component (~40 % of total cost) an obvious target for reduction. Automation is a method to remove labour cost and improve quality. This work focuses on the challenges and benefits to automating the manufacturing process from raw fibre to trimmed component. It will detail the experimental work required to complete an automation cell, the control strategy used to integrate all machines and the final benefits in terms of throughput and cost.

Keywords: automation, low cost technologies, processing and manufacturing technologies, resin transfer moulding

Procedia PDF Downloads 282
28485 Ta(l)king Pictures: Development of an Educational Program (SELVEs) for Adolescents Combining Social-Emotional Learning and Photography Taking

Authors: Adi Gielgun-Katz, Alina S. Rusu

Abstract:

In the last two decades, education systems worldwide have integrated new pedagogical methods and strategies into lesson plans, such as innovative technologies, social-emotional learning (SEL), gamification, mixed learning, multiple literacies, and many others. Visual language, such as photographs, is known to transcend cultures and languages, and it is commonly used by youth to express positions and affective states on social networks. Therefore, visual language needs more educational attention as a linguistic and communicative component that can create connectedness among students and their teachers. Nowadays, when SEL is gaining more and more space and meaning in the area of academic improvement in relation to social well-being, and taking and sharing pictures is part of the everyday life of the majority of people, it becomes natural to add visual language to the SEL approach as a reinforcement strategy for connecting education to the contemporary culture and language of the youth. This article presents a program conducted in a high school class in Israel which combines the five SEL competencies with photography techniques, i.e., the Social-Emotional Learning Visual Empowerments (SELVEs) program (experimental group). Another class of students from the same institution represents the control group, which is participating in the SEL program without the photography component. The SEL component of the programs addresses skills such as troubleshooting, uncertainty, personal strengths and collaboration, accepting others, control of impulses, communication, self-perception, and conflict resolution. The aim of the study is to examine the effects of the programs on the level of the five SEL aspects in the two groups of high school students: Self-Awareness, Social Awareness, Self-Management, Responsible Decision Making, and Relationship Skills. The study presents a quantitative assessment of the SEL programs' impact on the students. The main hypothesis is that the analysis of the students' questionnaires will reveal a better understanding and improvement of the five aspects of SEL in the group of students involved in the photography-enhanced SEL program.

Keywords: social-emotional learning, photography, education program, adolescents

Procedia PDF Downloads 66
28484 Development and Performance Analysis of Multifunctional City Smart Card System

Authors: Vedat Coskun, Fahri Soylemezgiller, Busra Ozdenizci, Kerem Ok

Abstract:

In recent years, several smart card solutions for the transportation services of cities with different technical infrastructures and business models have emerged, which triggers new business and technical opportunities. In order to create a unique system, we present a novel, promising system called the Multifunctional City Smart Card System, to be used in all cities, that provides transportation and loyalty services based on the MasterCard M/Chip Advance standards. The proposed system provides a unique solution for the transportation services of large cities over the world, aiming to answer all transportation needs of citizens. In this paper, the development of the Multifunctional City Smart Card System and the system requirements are briefly described. Moreover, performance analysis results of the M/Chip Advance Compatible Validators, which are the system's most important component, are presented.

Keywords: smart card, m/chip advance standard, city transportation, performance analysis

Procedia PDF Downloads 468
28483 Escalation of Commitment and Turnover in Top Management Teams

Authors: Dmitriy V. Chulkov

Abstract:

Escalation of commitment is defined as the continuation of a project after receiving negative information about it. While the literature in management and psychology has identified various factors contributing to escalation behavior, this phenomenon has received little analysis in economics, potentially due to the apparent irrationality of escalation. In this study, we present an economic model of escalation with asymmetric information in a principal-agent setup where the agents are responsible for a project selection decision and discover the outcome of the project before the principal. Our theoretical model complements the existing literature in several respects. First, we link the incentive to escalate commitment to a project with the turnover decision by the manager. When a manager learns the outcome of the project and stops it, this reveals that a mistake was made. There is an incentive to continue failing projects and avoid admitting the mistake. This incentive is enhanced when the agent may voluntarily resign from the firm before the outcome of the failing project is revealed, and thus not bear the full extent of the reputation damage due to project failure. As long as some successful managers leave the firm for extraneous reasons, outside firms find it difficult to link failing projects with certainty to managers who left a firm. Second, we demonstrate that non-CEO managers have reputation concerns separate from those of the CEO, and thus may escalate commitment to projects they oversee when such escalation can attenuate the damage to reputation from impending project failure. Such an incentive for escalation will be present for non-CEO managers if the CEO delegates responsibility for a project to a non-CEO executive. If reputation matters for promotion to CEO, the incentive for a rising executive to escalate in order to protect reputation is distinct from that of a CEO. Third, our theoretical model is supported by empirical analysis of changes in the firm's operations, measured by the presence of discontinued operations at the time of turnover among the top four members of the top management team. Discontinued operations are indicative of the termination of failing projects at a firm. The empirical results demonstrate that, in a large dataset of over three thousand publicly traded U.S. firms for the period from 1993 to 2014, turnover by top executives significantly increases the likelihood that the firm discontinues operations. Furthermore, the type of turnover matters, as this effect is strongest when at least one non-CEO member of the top management team leaves the firm and when the CEO departure is due to a voluntary resignation and not to retirement or illness. The empirical results are consistent with the predictions of the theoretical model and suggest that escalation of commitment is primarily observed in decisions by non-CEO members of the top management team.

Keywords: discontinued operations, escalation of commitment, executive turnover, top management teams

Procedia PDF Downloads 355
28482 Development and Validation of the Response to Stressful Situations Scale in the General Population

Authors: Célia Barreto Carvalho, Carolina da Motta, Marina Sousa, Joana Cabral, Ana Luísa Carvalho, Ermelindo Peixoto

Abstract:

The aim of the current study was to develop and validate a Response to Stressful Situations Scale (RSSS) for the Portuguese population. This scale assesses the degree of stress experienced in scenarios that can constitute positive, negative, and more neutral stressors, and also describes the physiological, emotional, and behavioral reactions to those events according to their intensity. These scenarios include typical stressor scenarios relevant to patients with schizophrenia, which are currently absent from most scales, so the RSSS assesses specific risks that these stressors may pose to subjects, which may prove useful in both non-clinical and clinical populations (e.g., patients with mood or anxiety disorders, or schizophrenia). Results from Principal Components Analysis and Confirmatory Factor Analysis on two adult samples from the general population confirmed a three-factor model with good fit indices: χ²(144) = 370.211, p < 0.001; GFI = 0.928; CFI = 0.927; TLI = 0.914; RMSEA = 0.055, P(rmsea ≤ 0.005) = 0.096; PCFI = 0.781. Further analysis revealed that the RSSS is an adequate assessment tool of stress response in adults for use in further research and clinical settings, with good psychometric characteristics, adequate divergent and convergent validity, good temporal stability, and high internal consistency. A sketch of the component-extraction step is given below.
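
As a hedged illustration, not the authors' analysis, the principal-components step could be reproduced along the following lines; the file name and item layout are assumptions, and the confirmatory factor analysis itself would typically be run in dedicated SEM software.

```python
# Hedged sketch: extract three principal components from RSSS-style item
# responses with scikit-learn. File name and column layout are assumptions.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

items = pd.read_csv("rsss_items.csv")        # one row per respondent, one column per item
z = StandardScaler().fit_transform(items)    # standardize items before extraction

pca = PCA(n_components=3)                    # three-factor structure reported above
scores = pca.fit_transform(z)                # component scores per respondent

print(pca.explained_variance_ratio_)         # variance explained by each component
loadings = pd.DataFrame(pca.components_.T,
                        index=items.columns,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))                     # inspect item loadings per component
```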

Keywords: assessment, stress events, stress response, stress vulnerability

Procedia PDF Downloads 512
28481 Directivity in the Dramatherapeutic Process for People with Addictive Behaviour

Authors: Jakub Vávra, Milan Valenta, Petr Kosek

Abstract:

This article presents a perspective on the conduct of the dramatherapy process with persons with addictive behaviours, with regard to the directiveness of the process. Although dramatherapy, as one of the creative arts approaches, is rather non-directive in nature, depending on the clientele there may be a need to structure the process more and, depending on the needs of the clients, to guide it more directively. The specifics of working with people with addictive behaviours are discussed through the prism of the dramatherapeutic perspective, which comprises both a psychotherapeutic component and a component touching on expression and art, the latter being rather non-directive in nature. This theme has repeatedly emerged in practice with clients, and dramatherapists themselves have sought ways of responding to clients' demands and needs for structure and guidance within the dramatherapy process. Some of the outcomes from supervision work also guided the research. Based on this insight, two research questions were formulated. The first asks: in what ways does directivity manifest itself in the dramatherapy process? The second complements the first and asks: to which phenomena is directivity in dramatherapy linked? In relation to the research questions, data were collected using focus groups and field notes. A qualitative methodology combining content analysis and relational analysis was chosen, with an inductive coding scheme: open coding, axial coding, pattern matching, member checking, and creation of a coding scheme. In the partial research results presented here, we find recurrent patterns related to directivity in dramatherapy. Directive leadership emerges as an important element in connection with safety for the client group, with the clients' requests and the setting of the facility, and, last but not least, with the personality of the dramatherapist. Careful analysis and pattern-seeking in the research results reveal connections that cannot yet be interpreted at this stage but already provide clues to our understanding of the topic and open up further avenues for research in this area.

Keywords: dramatherapy, directivity, personal approach, aims of the dramatherapy process, safety

Procedia PDF Downloads 58
28480 Application of Local Mean Decomposition for Rolling Bearing Fault Diagnosis Based On Vibration Signals

Authors: Toufik Bensana, Slimane Mekhilef, Kamel Tadjine

Abstract:

Vibration analysis has been frequently applied in the condition monitoring and fault diagnosis of rolling element bearings. Unfortunately, the vibration signals collected from a faulty bearing are generally non-stationary and nonlinear, with strong noise interference, so it is essential to extract the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is its instantaneous amplitude (IA), and the instantaneous frequency (IF) is obtained as the derivative of the unwrapped phase of the purely frequency-modulated signal. The fault characteristic frequency of the rolling bearing can then be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in the fault detection and diagnosis of rolling element bearings. A minimal sketch of the envelope-spectrum step is given below.
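
As a hedged illustration rather than the paper's implementation, the envelope-spectrum step can be sketched as follows; a full LMD sifting procedure is omitted, the instantaneous amplitude of a simulated PF-like component is approximated here with a Hilbert transform, and all signal parameters are illustrative assumptions.

```python
# Hedged sketch: envelope-spectrum analysis of a simulated bearing-fault component.
import numpy as np
from scipy.signal import hilbert
from scipy.fft import rfft, rfftfreq

fs = 12_000                                  # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
f_fault, f_res = 105.0, 3_000.0              # assumed fault and resonance frequencies

# Simulated faulty-bearing component: a resonance carrier amplitude-modulated
# at the fault characteristic frequency, plus noise.
x = (1 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_res * t)
x = x + 0.2 * np.random.randn(t.size)

ia = np.abs(hilbert(x))                      # instantaneous amplitude (envelope)
ia = ia - ia.mean()                          # remove the DC offset before spectrum analysis

spectrum = np.abs(rfft(ia)) / t.size
freqs = rfftfreq(t.size, 1 / fs)

band = (freqs > 10) & (freqs < 500)          # search a low band for the fault frequency
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant envelope-spectrum peak near {peak:.1f} Hz")  # expected close to f_fault
```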

Keywords: fault diagnosis, condition monitoring, local mean decomposition, rolling element bearing, vibration analysis

Procedia PDF Downloads 382
28479 Purification and Pre-Crystallization of Recombinant PhoR Cytoplasmic Domain Protein from Mycobacterium Tuberculosis H37Rv

Authors: Oktira Roka Aji, Maelita R. Moeis, Ihsanawati, Ernawati A. Giri-Rachman

Abstract:

Globally, tuberculosis (TB) remains a leading cause of death. The emergence of multidrug-resistant and extensively drug-resistant strains has become a major public health concern. One potential drug target is the cytoplasmic domain of the PhoR histidine kinase, part of the two-component system (TCS) PhoR-PhoP in Mycobacterium tuberculosis (Mtb). The TCS PhoR-PhoP relays extracellular signals to control the expression of 114 virulence-associated genes in Mtb. The 3D structure of the PhoR cytoplasmic domain is needed to screen novel drugs using structure-based drug discovery. The PhoR cytoplasmic domain from Mtb H37Rv was overexpressed in E. coli BL21(DE3) and then purified using an IMAC Ni-NTA agarose his-tag affinity column and DEAE ion-exchange column chromatography. The molecular weight of the purified protein was estimated to be 37 kDa by SDS-PAGE analysis. This sample was used for pre-crystallization screening applying the sitting-drop vapor diffusion method with the Natrix (HR2-116) 48-solution crystal screen kit at 25ºC. Needle-like crystals were observed after the seventh day of incubation in test solution No. 47 (0.1 M KCl, 0.01 M MgCl2.6H2O, 0.05 M Tris-Cl pH 8.5, 30% v/v PEG 4000). Further testing is required to confirm the crystals.

Keywords: tuberculosis, two component system, histidine kinase, needle-like crystals

Procedia PDF Downloads 426