Search results for: error detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5094

804 The Role of Bone Marrow Stem Cells Transplantation in the Repair of Damaged Inner Ear in Albino Rats

Authors: Ahmed Gaber Abdel Raheem, Nashwa Ahmed Mohamed

Abstract:

Introduction: Sensorineural hearing loss (SNHL) is largely caused by degeneration of the cochlea. Therapeutic options for SNHL are limited to hearing aids and cochlear implants. The cell transplantation approach to the regeneration of hair cells has gained considerable attention because stem cells are believed to accumulate at damaged sites and have the potential to repair damaged tissues. Aim of the work: To assess the use of bone marrow transplantation in the repair of damaged inner ear hair cells in rats after the damage had been inflicted by amikacin injection. Material and Methods: Thirty albino rats were used in this study. They were divided into three groups of ten rats each. Group I served as control. Group II was given intratympanic injections of amikacin until complete loss of hearing function, assessed by distortion product otoacoustic emissions (DPOAEs) and/or auditory brainstem evoked potentials (ABR). Group III was given an intraperitoneal injection of bone marrow stem cells after complete loss of hearing caused by amikacin. Clinical assessment was done using DPOAEs and/or ABR before and after bone marrow injection. Histological assessment of the inner ear was done by light and electron microscopy, and stem cells in the inner ear were detected by immunohistochemistry. Results: Histological examination of the specimens showed promising improvement in the structure of the cochlea that may be responsible for the improvement of hearing function in rats detected by DPOAEs and/or ABR. Conclusion: Bone marrow stem cell transplantation might be useful for the treatment of SNHL.

Keywords: amikacin, hair cells, sensorineural hearing loss, stem cells

Procedia PDF Downloads 438
803 Frequency of BCR-ABL Fusion Transcript Types with Chronic Myeloid Leukemia by Multiplex Polymerase Chain Reaction in Srinagarind Hospital, Khon Kaen Thailand

Authors: Kanokon Chaicom, Chitima Sirijerachai, Kanchana Chansung, Pinsuda Klangsang, Boonpeng Palaeng, Prajuab Chaimanee, Pimjai Ananta

Abstract:

Chronic myeloid leukemia (CML) is characterized by the consistent involvement of the Philadelphia chromosome (Ph), derived from a reciprocal translocation between chromosomes 9 and 22; the main product of the t(9;22)(q34;q11) translocation is found in the leukemic clone of at least 95% of CML patients. There are two major forms of the BCR/ABL fusion gene, both involving ABL exon 2 but including different exons of the BCR gene. The transcripts b2a2 (e13a2) and b3a2 (e14a2) code for a p210 protein. Another fusion gene leads to the expression of an e1a2 transcript, which codes for a p190 protein. Other less common fusion genes are b3a3 or b2a3, which code for a p203 protein, and the e19a2 (c3a2) transcript, which codes for a p230 protein. Their frequencies vary across populations. In this study, we aimed to report the frequency of BCR-ABL fusion transcript types in CML by multiplex PCR (polymerase chain reaction) in Srinagarind Hospital, Khon Kaen, Thailand. Multiplex PCR for BCR-ABL was performed on 58 patients to detect the different types of BCR-ABL transcripts of the t(9;22). All patients examined were positive for some type of BCR/ABL rearrangement. The majority of the patients (93.10%) expressed one of the p210 BCR-ABL transcripts; the b3a2 and b2a2 transcripts were detected in 53.45% and 39.65%, respectively. The e1a2 transcript was expressed in 3.75%. Co-expression of p210/p230 was detected in 3.45%; co-expression of p210/p190 was not detected. Multiplex PCR is useful, time-saving, and reliable in the detection of BCR-ABL transcript types. The frequency of one or another rearrangement in CML varies across populations.

Keywords: chronic myeloid leukemia, BCR-ABL fusion transcript types, multiplex PCR, frequency of BCR-ABL fusion

Procedia PDF Downloads 224
802 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)

Authors: Neda Zamani

Abstract:

The present study aimed to investigate the effectiveness of training courses and their limitations using the CIPP model, with Isfahan Refinery as a case study. In terms of purpose, the study is applied research; in terms of data gathering, it is descriptive field-survey research. The population of the study included participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample included 195 participants in training courses, 30 supervisors, and 11 individuals from the training experts' group. To collect data, a researcher-designed questionnaire and a semi-structured interview were used. The content validity of the instrument was confirmed by training management experts, and reliability was established with a Cronbach's alpha of 0.92. Data were analyzed with descriptive statistics (tables, frequencies, frequency percentages, and means) and inferential statistics (Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to assess the significance of differences between the groups' opinions). Results indicated that all groups, i.e., participants, supervisors, and training experts, firmly believe in the importance of training courses; however, participants in training courses rated content, teacher, atmosphere and facilities, training process, managing process, and product as being at a relatively appropriate level. The supervisors also rated the output as being at a relatively appropriate level, but training experts rated content, teacher, and managing processes as appropriate and above average.
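
To make the inferential analysis named above concrete, here is a minimal Python sketch of the Kruskal-Wallis and Mann-Whitney tests and of a Cronbach's alpha computation; the Likert-style ratings are hypothetical placeholders, not the study's data:

```python
# Sketch of the tests named in the abstract, on hypothetical 5-point ratings.
import numpy as np
from scipy import stats

participants = np.array([4, 3, 5, 4, 2, 4, 3, 5, 4, 4])
supervisors = np.array([3, 4, 4, 3, 4, 5, 3, 4])
experts = np.array([5, 4, 5, 5, 4])

# Kruskal-Wallis across the three respondent groups
h, p_kw = stats.kruskal(participants, supervisors, experts)
# Mann-Whitney between two of the groups
u, p_mw = stats.mannwhitneyu(participants, supervisors, alternative="two-sided")
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.3f}")
print(f"Mann-Whitney U: U={u:.1f}, p={p_mw:.3f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

ratings = np.array([[4, 5, 3], [3, 4, 4], [5, 5, 4], [4, 4, 3]])
print("Cronbach's alpha:", round(cronbach_alpha(ratings), 2))
```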

Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company

Procedia PDF Downloads 47
801 Rumination Time and Reticuloruminal Temperature around Calving in Eutocic and Dystocic Dairy Cows

Authors: Levente Kovács, Fruzsina Luca Kézér, Ottó Szenci

Abstract:

Prediction of the onset of calving and recognition of difficulties at calving are of great importance in decreasing neonatal losses and reducing the risk of health problems in the early postpartum period. In this study, changes in rumination time, reticuloruminal pH and temperature were investigated in eutocic (EUT, n = 10) and dystocic (DYS, n = 8) dairy cows around parturition. Rumination time was continuously recorded using an acoustic biotelemetry system, whereas reticuloruminal pH and temperature were recorded using an indwelling, wireless data-transmitting system. The recording period lasted from 3 d before calving until 7 days in milk. For the comparison of rumination time and reticuloruminal characteristics between groups, time to return to baseline (the time interval required to return to baseline after the delivery of the calf) and area under the curve (AUC, for both the prepartum and postpartum periods) were calculated for each parameter. Rumination time decreased from baseline 28 h before calving both in EUT and DYS cows (P = 0.023 and P = 0.017, respectively). From 20 h before calving onwards it decreased further, reaching 32.4 ± 2.3 and 13.2 ± 2.0 min/4 h between 8 and 4 h before delivery in EUT and DYS cows, respectively, and then dropped below 10 and 5 min during the last 4 h before calving (P = 0.003 and P = 0.008, respectively). By 12 h after delivery, rumination time had reached 42.6 ± 2.7 and 51.0 ± 3.1 min/4 h in DYS and EUT dams, respectively; however, AUC and time to return to baseline suggested lower rumination activity in DYS cows than in EUT dams over the 168-h postpartum observational period (P = 0.012 and P = 0.002, respectively). Reticuloruminal pH decreased from baseline 56 h before calving both in EUT and DYS cows (P = 0.012 and P = 0.016, respectively) but did not differ between groups before delivery. In DYS cows, reticuloruminal temperature decreased from baseline 32 h before calving by 0.23 ± 0.02 °C (P = 0.012), whereas in EUT cows such a decrease was found only 20 h before delivery (0.48 ± 0.05 °C, P < 0.01). The AUC of reticuloruminal temperature calculated for the prepartum period was greater in EUT cows than in DYS cows (P = 0.042). During the first 4 h after calving, it decreased from 39.7 ± 0.1 to 39.0 ± 0.1 °C and from 39.8 ± 0.1 to 38.8 ± 0.1 °C in EUT and DYS cows, respectively (P < 0.01 for both groups), and returned to baseline 35.4 ± 3.4 and 37.8 ± 4.2 h after calving in EUT and DYS cows, respectively. Based on our results, continuous monitoring of changes in rumination time and reticuloruminal temperature seems promising for the early detection of cows with a higher risk of dystocia. The depressed postpartum rumination time of DYS cows highlights the importance of monitoring cows experiencing difficulties at calving.
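
As a hedged illustration of the two group-comparison measures used above (AUC and time to return to baseline), here is a minimal Python sketch; the 4-h rumination series and baseline value are hypothetical, not the study's data:

```python
# Sketch of the two summary measures: AUC of the rumination-time curve and
# the time needed to return to the pre-calving baseline. Values are illustrative.
import numpy as np

hours = np.arange(0, 48, 4)                    # h after calving
rumination = np.array([8, 15, 22, 30, 38, 43,  # min/4 h, hypothetical series
                       47, 50, 52, 53, 54, 54])
baseline = 54.0                                # assumed pre-calving mean, min/4 h

auc = np.trapz(rumination, hours)              # area under the curve
above = np.nonzero(rumination >= baseline)[0]  # first epoch back at baseline
t_return = hours[above[0]] if above.size else None
print(f"AUC = {auc:.0f}, time to return to baseline = {t_return} h")
```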

Keywords: reticuloruminal pH, reticuloruminal temperature, rumination time, dairy cows, dystocia

Procedia PDF Downloads 302
800 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
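
As a hedged sketch of the analysis chain described above (dimension reduction on the model-generated aging database, then a regression predicting remaining life), here is a minimal Python example; the data, feature count, and hyperparameters are synthetic placeholders, not the Teesmat-calibrated Newman-model outputs:

```python
# Sketch: PCA for dimension reduction followed by a random forest regressor
# predicting remaining useful life (RUL) from non-destructive aging indicators.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                           # synthetic indicators
y = 1000 - 40 * X[:, 0] + rng.normal(scale=5, size=500)  # synthetic RUL (cycles)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=5),
                      RandomForestRegressor(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
rel_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"mean relative error: {rel_err.mean():.1%}")
```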

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 54
799 Non-Destructive Testing of Carbon Fiber Reinforced Plastic by Infrared Thermography Methods

Authors: W. Swiderski

Abstract:

Composite materials are one answer to the growing demand for materials with better structural and service parameters. They also permit the deliberate tailoring of desirable properties beyond what can be reached with metals, ceramics or polymers. In recent years, composite materials have been used widely in aerospace, energy, transportation, medicine, etc. Fiber-reinforced composites, including carbon fiber, glass fiber and aramid fiber, have become major structural materials. The typical defect arising during the manufacture and operation of layered composites is delamination damage; when delamination damage spreads, it may lead to composite fracture. One of the many methods used in the non-destructive testing of composites is active infrared thermography. In active thermography, it is necessary to deliver energy to the examined sample in order to obtain significant temperature differences indicating the presence of subsurface anomalies. To detect possible defects in composite materials, different methods of thermal stimulation can be applied to the tested material; these include heating lamps, lasers, eddy currents, microwaves or ultrasound. The choice of a suitable source of thermal stimulation for the test material can have a decisive influence on whether defects are detected. Samples of multilayer carbon composites were prepared with deliberately introduced defects for comparative purposes: very thin defects of different sizes and shapes, 0.1 mm thick, made of Teflon or copper. Non-destructive testing was carried out using the following sources of thermal stimulation: a heating lamp, a flash lamp, ultrasound and eddy currents. The results are reported in the paper.

Keywords: non-destructive testing, IR thermography, composite material, thermal stimulation

Procedia PDF Downloads 241
798 The Associations between Ankle and Brachial Systolic Blood Pressures with Obesity Parameters

Authors: Matei Tudor Berceanu, Hema Viswambharan, Kirti Kain, Chew Weng Cheng

Abstract:

Background - Obesity parameters, particularly visceral obesity as measured by the waist-to-height ratio (WHtR), correlate with insulin resistance. The metabolic microvascular changes associated with insulin resistance cause increased peripheral arteriolar resistance, primarily in the lower limb vessels. We hypothesize that ankle systolic blood pressures (SBPs) are more significantly associated with visceral obesity than brachial SBPs. Methods - 1098 adults, from a cohort enriched for south Asians and for Europeans with diabetes (T2DM), were recruited from a primary care practice in West Yorkshire. Their medical histories, including T2DM and cardiovascular disease (CVD) status, were gathered from an electronic database. The brachial, dorsalis pedis, and posterior tibial SBPs were measured using a Doppler machine. Body mass index (BMI) and WHtR were calculated after measuring weight, height, and waist circumference. Linear regressions were performed between the six SBPs and both obesity parameters, after adjusting for covariates. Results - Overall, the left posterior tibial SBP (P=4.559×10⁻¹⁵) and right posterior tibial SBP (P=1.114×10⁻¹³) were the pressures most significantly associated with BMI, both in south Asians (P < 0.001) and in Europeans (P < 0.001). In south Asians, although the left (P=0.032) and right brachial SBPs (P=0.045) were associated with the WHtR, the left posterior tibial SBP (P=0.023) showed the strongest association. Conclusion - Regardless of ethnicity, ankle SBPs are more significantly associated with generalized obesity than brachial SBPs, suggesting their potential for screening for early detection of T2DM and CVD. A combination of ankle SBPs with WHtR is proposed for south Asians.
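
To illustrate the kind of covariate-adjusted linear regression reported above, here is a minimal Python sketch using statsmodels; the data frame, column names, and the choice of age and sex as covariates are hypothetical placeholders for the practice database:

```python
# Sketch: posterior tibial SBP regressed on BMI, adjusting for example covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "left_pt_sbp": [132, 145, 128, 150, 139, 160, 125, 142],  # mmHg
    "bmi":         [24.1, 29.3, 22.8, 31.0, 27.5, 33.2, 21.9, 28.4],
    "age":         [55, 63, 48, 70, 59, 66, 45, 61],
    "sex":         [0, 1, 0, 1, 1, 0, 0, 1],
})

fit = smf.ols("left_pt_sbp ~ bmi + age + sex", data=df).fit()
print(fit.summary().tables[1])   # coefficient table with P-values
```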

Keywords: ankle blood pressures, body mass index, insulin resistance, waist-to-height-ratio

Procedia PDF Downloads 123
797 Molecular Epidemiology of Anthrax in Georgia

Authors: N. G. Vepkhvadze, T. Enukidze

Abstract:

Anthrax is a potentially fatal zoonotic disease of animals and humans caused by strains of Bacillus anthracis, a spore-forming, gram-positive bacillus. It is also well recognized as a potential agent of bioterrorism. Infection in humans is extremely rare in the developed world and is generally due to contact with infected animals or contaminated animal products. Testing for this zoonotic disease in Georgia began in 1907 and is still performed routinely at the State Laboratory of Agriculture (SLA) to provide accurate information and efficient testing results. Each clinical sample is analyzed by RT-PCR and bacteriology methods; this study used real-time PCR assays for the detection of B. anthracis that rely on plasmid-encoded targets together with a chromosomal marker to correctly differentiate pathogenic strains from non-anthracis Bacillus species. During the period 2015-2022, the SLA tested 250 clinical and environmental (soil) samples from several different regions of Georgia. In total, 61 of the 250 samples were positive during this period. Based on the results, anthrax cases are mostly present in eastern Georgia, in areas with a high density of livestock, specifically in the regions of Kakheti and Kvemo Kartli. All laboratory activities are performed in accordance with international quality standards, adhering to biosafety and biosecurity rules, by qualified and experienced personnel handling pathogenic agents. Laboratory testing plays the largest role in diagnosing animals with anthrax, helping the pertinent institutions to quickly confirm a diagnosis of anthrax and to evaluate the epidemiological situation, which generates important data for further responses.

Keywords: animal disease, Bacillus anthracis, edp, laboratory molecular diagnostics

Procedia PDF Downloads 68
796 Legal Study on the Construction of Olympic and Paralympic Soft Law about Manipulation of Sports Competition

Authors: Clemence Collon, Didier Poracchia

Abstract:

The manipulation of sports competitions is a new type of sports integrity problem. While the fight against doping has become organized and institutionalized, the response to the manipulation of sports competitions is still gradually taking shape. This study aims to describe and understand how the soft Olympic and Paralympic law was gradually built. It also summarizes the legal tools for prevention, detection, and sanction developed by the international Olympic movement, and then analyzes the impact of this soft law on the law of states, in particular on French law. The study is mainly based on an analysis of the existing legal literature and of non-binding law in the international Olympic and Paralympic movement and the French National Olympic Committee. Interviews were carried out with experts from the Olympic movement or experts working on combating the manipulation of sports competitions; their answers are also used in this article. The International Olympic Committee has created a supranational legal base to fight against the manipulation of sports competitions, and this legal basis must be respected by sports organizations. The Olympic Charter, the Olympic Code of Ethics, the Olympic Movement Code on the prevention of the manipulation of sports competitions, rules and standards, the basic universal principles, manuals, and declarations have been published in this perspective. This sports soft law has influences or repercussions in each state. Many states take this new form of integrity problem into account by creating state laws or measures in favor of the fight against sports manipulation. France so far has a legal basis only for manipulation related to betting on sports competitions, through the offence of sports corruption included in the penal code, and has also created a national platform with various actors to combat this cheating. This legal study highlights the progressive construction of the sports law rules of the Olympic movement in the fight against the manipulation of sports competitions linked to sports betting and their impact on the law of states.

Keywords: integrity, law and ethics, manipulation of sports competitions, olympic, sports law

Procedia PDF Downloads 134
795 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty

Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton

Abstract:

Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration being the most widely described. We describe a "quartered head technique" (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA, in order to ensure reconstruction of leg length, offset and stem version such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out, of which 124 hips remained in the final analysis. Power analysis indicated that a minimum of 37 patients was required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double-taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was then calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. As references, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. An LLD of less than 6 mm was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers. Results: The mean absolute post-operative difference in leg length from the contralateral leg was +3.58 mm; 84% of patients (104/124) had an LLD within ±6 mm of the contralateral limb. The mean absolute post-operative difference in offset from the contralateral leg was +3.88 mm (range -15 to +9 mm, median 3 mm); 90% of patients (112/124) were within ±6 mm of the offset of the contralateral limb. No statistical difference was noted between observer measurements. Conclusion: The QHT provides a simple, inexpensive yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or the other techniques described may enable surgeons to reduce the discrepancies between the pre-operative state and the post-operative outcome even further.

Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique

Procedia PDF Downloads 65
794 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL); all the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs, to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, could, while can, will, should, could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on the different meanings show that will, would and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Besides, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve EFL textbook presentation of modal verbs so that textbooks provide not only authentic language used in natural discourse but also an appropriate design tailored to the needs of target learners.
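
As a hedged illustration of the distributional comparison described above, here is a minimal Python sketch computing the relative frequencies of the nine modal verbs in a text sample; the sample stands in for concordance output from the textbook corpus or BNCS2014:

```python
# Sketch: relative frequency profile of the nine modal verbs in a text sample.
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may",
          "might", "shall", "should", "must"]

def modal_profile(text: str) -> dict:
    """Share of each modal among all modal tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values()) or 1
    return {m: counts[m] / total for m in MODALS}

textbook_sample = "You can do it. We can try. It will rain. You should go."
print(modal_profile(textbook_sample))
```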

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 127
793 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Iran, using time series analysis over the period 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All variables in this study, except the CO2 emissions, show significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal have adverse impacts on GDP in the long term. In the short run, electricity use enhances GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by energy source in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
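
To make the econometric workflow above concrete, here is a minimal Python sketch of the ADF test, Johansen cointegration, and a VECM using statsmodels; the three annual series are synthetic stand-ins for the 1980-2010 GDP, energy, and CO2 data:

```python
# Sketch: ADF stationarity tests, Johansen cointegration, then a VECM.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=31))            # shared stochastic trend
data = pd.DataFrame({
    "gdp":  trend + rng.normal(scale=0.3, size=31),
    "coal": 0.8 * trend + rng.normal(scale=0.3, size=31),
    "co2":  0.5 * trend + rng.normal(scale=0.3, size=31),
})

for col in data:                                   # unit-root tests
    print(col, "ADF p-value:", round(adfuller(data[col])[1], 3))

johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", johansen.lr1)           # cointegration rank evidence

vecm = VECM(data, k_ar_diff=1, coint_rank=1).fit() # short- and long-run dynamics
print(vecm.summary())
```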

Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis

Procedia PDF Downloads 579
792 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing

Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti

Abstract:

Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach for gaining insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but also opens the possibility of detecting antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomics samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast short-read aligners (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle; each read is assigned to a taxon covering the most significantly hit taxa. This approach helps in balancing between sensitivity and running time. The program was tested both on experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones; furthermore, in some cases, it even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only in small abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
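
As a hedged sketch of the LCA principle named above, here is a minimal Python example: a read aligned to several taxa is assigned to the deepest taxon shared by all of its hits; the lineages are illustrative, and the pipeline's actual weighting of significant hits is not reproduced here:

```python
# Sketch of lowest-common-ancestor (LCA) taxonomic binning.
def lca(lineages: list) -> str:
    """Return the deepest taxon common to all lineages (root -> leaf order)."""
    assignment = "root"
    for level in zip(*lineages):       # walk down the taxonomy level by level
        if len(set(level)) == 1:       # all hits still agree at this level
            assignment = level[0]
        else:
            break                      # hits diverge: stop at the last agreement
    return assignment

hits = [
    ["Bacteria", "Firmicutes", "Bacillus", "Bacillus anthracis"],
    ["Bacteria", "Firmicutes", "Bacillus", "Bacillus cereus"],
]
print(lca(hits))   # -> "Bacillus"
```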

Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis

Procedia PDF Downloads 116
791 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Authors: Bahareh Golchin, Nooshin Riahi

Abstract:

One of the most significant issues to have attracted attention in recent years is recognizing sentiments and emotions in social media texts. The analysis of sentiments and emotions is intended to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and views of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks, with about 420 million active users, to extract data. On this social network, users share their information and opinions about personal issues, policies, products, events, etc. The availability of its data makes it suitable for the classification of emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage, given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional long short-term memory (BiLSTM) networks and a convolutional network. The results of this study, with an average accuracy of 93%, show the effectiveness of the proposed framework and improved accuracy compared to previous work.
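
To illustrate the architecture family described above, here is a hedged Keras sketch of two stacked bidirectional LSTM layers followed by a 1D convolutional block and a four-class softmax; the vocabulary size, sequence length, and hyperparameters are illustrative assumptions, not the paper's settings:

```python
# Sketch: BiLSTM + CNN classifier for four emotion classes.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, NUM_CLASSES = 20000, 60, 4

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),                       # token-id sequences
    layers.Embedding(VOCAB, 128),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),   # local n-gram features
    layers.GlobalMaxPooling1D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```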

Keywords: emotion classification, sentiment analysis, social networks, deep neural networks

Procedia PDF Downloads 122
790 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not, or less, related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject ID, from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
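
As a hedged sketch of just the augmented latent construction described above, here is a minimal Python example building z = [zₚ, zₙ] from a logistic-regression posterior and Gaussian noise; the normalizing flow itself (the one-to-one y ↔ z transform) is deliberately omitted, and all data and dimensions are toy assumptions:

```python
# Sketch: construct the augmented latent target z = [z_p, z_n] for AP-CDE-style
# training. z_p comes from an elementary supervised model for x; z_n is Gaussian.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 5))                 # predictors
labels = (x[:, 0] > 0).astype(int)            # toy supervision signal
y_dim = 64                                    # assumed dimension of the response y

clf = LogisticRegression().fit(x, labels)
z_p = clf.predict_log_proba(x)[:, [1]]        # low-dim supervised component
z_n = rng.normal(size=(200, y_dim - 1))       # high-dim independent Gaussian part
z = np.hstack([z_p, z_n])                     # target latent for the flow
print(z.shape)                                # (200, 64)
```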

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 75
789 Landsat Data from Pre Crop Season to Estimate the Area to Be Planted with Summer Crops

Authors: Valdir Moura, Raniele dos Anjos de Souza, Fernando Gomes de Souza, Jose Vagner da Silva, Jerry Adriani Johann

Abstract:

The estimated area of land to be planted with annual crops and its stratification by municipality are important variables in crop forecasting. Nowadays in Brazil, this information is obtained by the Brazilian Institute of Geography and Statistics (IBGE) and published in the report Assessment of the Agricultural Production. Due to the high cloud cover in the main crop growing season (October to March), it is difficult to acquire good orbital images; one alternative is therefore to work with remote sensing data from dates before the crop growing season. This work presents the use of multitemporal Landsat data gathered in July and September (before the summer growing season) in order to estimate the area of land to be planted with summer crops in an area of São Paulo State, Brazil. Geographic Information Systems (GIS) and digital image processing techniques were applied to the available data. Supervised and unsupervised classifications were used on data in digital number and reflectance formats and on the multitemporal Normalized Difference Vegetation Index (NDVI) images. The objective was to discriminate the tracts with a higher probability of being planted with summer crops. Classification accuracies were evaluated using a sampling system developed specifically for this study region, and the estimated areas were corrected using the error matrix derived from these evaluations. The classification techniques reached an excellent level of agreement according to the kappa index. The proportions of crops stratified by municipality were derived from field work during the crop growing season; these proportion coefficients were applied to the area of land to be planted with summer crops (derived from Landsat data), making it possible to derive the area of each summer crop by municipality. The discrepancies between official statistics and our results were attributed to the sampling and stratification procedures. Nevertheless, this methodology can be improved in order to provide good crop area estimates using remote sensing data, despite the cloud cover during the growing season.
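
To make two of the steps above concrete, here is a hedged Python sketch computing NDVI from red/near-infrared bands and evaluating a classification with the kappa index and an error matrix; the arrays are synthetic stand-ins for Landsat reflectance and field-sampled reference labels:

```python
# Sketch: NDVI from red/NIR reflectance, plus kappa and the error matrix.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

red = np.array([[0.10, 0.30], [0.08, 0.25]])
nir = np.array([[0.45, 0.35], [0.50, 0.28]])
ndvi = (nir - red) / (nir + red + 1e-9)        # small term avoids division by zero
print(ndvi.round(2))

reference = [1, 1, 0, 1, 0, 0, 1, 0]           # field observations (1 = summer crop)
classified = [1, 1, 0, 0, 0, 0, 1, 1]          # image classification
print("kappa:", cohen_kappa_score(reference, classified))
print(confusion_matrix(reference, classified)) # error matrix used for area correction
```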

Keywords: area intended for summer culture, estimated area planted, agriculture, Landsat, planting schedule

Procedia PDF Downloads 128
788 Cas9-Assisted Direct Cloning and Refactoring of a Silent Biosynthetic Gene Cluster

Authors: Peng Hou

Abstract:

Natural products produced by marine bacteria serve as an immense reservoir for anti-infective drugs and therapeutic agents. Nowadays, heterologous expression of gene clusters of interest has been widely adopted as an effective strategy for natural product discovery. Briefly, the heterologous expression workflow is: biosynthetic gene cluster identification, pathway construction and expression, and product detection. However, gene cluster capture using the traditional transformation-associated recombination (TAR) protocol is inefficient (0.5% positive colony rate). To make things worse, most putative new natural products are only predicted by bioinformatics analysis such as antiSMASH, and their corresponding biosynthetic pathways are either not expressed or expressed at very low levels under laboratory conditions. Those setbacks have inspired us to seek new technologies to efficiently edit and refactor biosynthetic gene clusters. Recently, two cutting-edge techniques have attracted our attention: CRISPR-Cas9 and Gibson Assembly. So far, we have pretreated Brevibacillus laterosporus strain genomic DNA with CRISPR-Cas9 nucleases that specifically generate breaks near the gene cluster of interest. This trial resulted in an increase in the efficiency of gene cluster capture (9%). Moreover, using Gibson Assembly to add or delete certain operons and tailoring enzymes regardless of end compatibility, the silent construct (~80 kb) was successfully refactored into an active one, yielding a series of expected analogs. With these novel molecular tools, we are confident that the development of a mature, high-throughput pipeline for DNA assembly, transformation, product isolation and identification is no longer a daydream for marine natural product discovery.

Keywords: biosynthesis, CRISPR-Cas9, DNA assembly, refactor, TAR cloning

Procedia PDF Downloads 260
787 Receptor-Independent Effects of Endocannabinoid Anandamide on Contractility and Electrophysiological Properties of Rat Ventricular Myocytes

Authors: Lina T. Al Kury, Oleg I. Voitychuk, Ramiz M. Ali, Sehamuddin Galadari, Keun-Hang Susan Yang, Frank Christopher Howarth, Yaroslav M. Shuba, Murat Oz

Abstract:

A role for anandamide (N-arachidonoyl ethanolamide; AEA), a major endocannabinoid, in the cardiovascular system under various pathological conditions has been reported in earlier studies. In the present work, we hypothesized that the antiarrhythmic effects reported for AEA are due to its negative inotropic effect and altered action potential (AP) characteristics. We therefore tested the effects of AEA on the contractility and electrophysiological properties of rat ventricular myocytes. Video edge detection was used to measure myocyte shortening. Intracellular Ca2+ was measured in cells loaded with the fluorescent indicator fura-2 AM. The whole-cell patch-clamp technique was employed to investigate the effect of AEA on the characteristics of APs. AEA (1 μM) caused a significant decrease in the amplitudes of electrically evoked myocyte shortening and Ca2+ transients and significantly decreased the duration of the AP. The effect of AEA on myocyte shortening and AP characteristics was not altered in the presence of pertussis toxin (PTX, 2 µg/ml for 4 h), AM251 and SR141716 (cannabinoid type 1 receptor antagonists), or AM630 and SR144528 (cannabinoid type 2 receptor antagonists). Furthermore, AEA inhibited the voltage-activated inward Na+ (INa) and Ca2+ (IL,Ca) currents, the major ionic currents shaping the APs in ventricular myocytes, in a voltage- and PTX-independent manner. Collectively, the results suggest that AEA depresses ventricular myocyte contractility by decreasing the action potential duration (APD) and inhibits the function of voltage-dependent Na+ and L-type Ca2+ channels in a manner independent of cannabinoid receptors. This mechanism may be importantly involved in the antiarrhythmic effects of anandamide.

Keywords: action potential, anandamide, cannabinoid receptor, endocannabinoid, ventricular myocytes

Procedia PDF Downloads 341
786 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and with the infrastructure. Although this interconnection of vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges comes with this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management systems of the individual vehicles and of the overall system; such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect an individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, and failures in actuators, may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Cooperative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with the aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
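
As a hedged illustration of the Kalman-filter idea above, here is a minimal Python sketch of a 1-D constant-velocity filter that simply propagates its prediction whenever a packet from the preceding vehicle is dropped; the model, noise levels, and measurement sequence are toy assumptions, not the paper's CACC design:

```python
# Sketch: Kalman filter that skips the measurement update on dropped packets.
import numpy as np

dt, q, r = 0.1, 0.01, 0.5          # time step, process and measurement noise
F = np.array([[1, dt], [0, 1]])    # state transition (position, velocity)
H = np.array([[1.0, 0.0]])         # only position is received over the network
x = np.array([0.0, 10.0])          # initial state estimate
P = np.eye(2)                      # initial covariance

def step(x, P, z=None):
    x = F @ x                                   # predict
    P = F @ P @ F.T + q * np.eye(2)
    if z is not None:                           # update only if a packet arrived
        S = H @ P @ H.T + r
        K = P @ H.T / S                         # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

for t, z in enumerate([1.0, None, 2.1, None, None, 5.2]):  # None = dropped packet
    x, P = step(x, P, z)
    print(f"t={t}: estimated position {x[0]:.2f}")
```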

Keywords: fault diagnostics, communication network, connected vehicles, packet drop out, platoon

Procedia PDF Downloads 222
785 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data-generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only dependence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that on average minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
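
As a deliberately simplified numerical illustration of the tradeoff described above, here is a Python sketch in which the total estimation error is modeled as a convergence term shrinking like 1/√n plus a non-representativeness term growing with the window length n; both terms and their weights are illustrative assumptions, not the paper's formal objects:

```python
# Sketch: choose the window length minimizing convergence error plus drift error.
import numpy as np

n = np.arange(10, 2001)                    # candidate window lengths (observations)
convergence_error = 1.0 / np.sqrt(n)       # more data -> tighter estimate
drift_error = 5e-4 * n                     # older data -> less representative
total = convergence_error + drift_error

n_opt = n[np.argmin(total)]
print(f"optimal window length: {n_opt} observations")
```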

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 130
784 Detection and Expression of Peroxidase Genes in Trichoderma harzianum KY488466 and Its Response to Crude Oil Degradation

Authors: Michael Dare Asemoloye, Segun Gbolagade Jonathan, Rafiq Ahmad, Odunayo Joseph Olawuyi, D. O. Adejoye

Abstract:

Fungi have the potential to degrade hydrocarbons through the secretion of different enzymes. Crude oil tolerance and degradation by Trichoderma harzianum were investigated in this study, along with its ability to produce the peroxidase enzymes lignin peroxidase (LiP) and manganese peroxidase (MnP). Many fungal strains were isolated from the rhizosphere of grasses growing on a crude-oil-spilled site, and the most frequent strain, based on percentage incidence, was further characterized using morphological and molecular characteristics. Molecular characterization was done through the amplification of the ribosomal RNA regions 18S (1609-1627) and 28S (287-266) using ITS1 and ITS4 primer combinations, and the strain was identified using the NCBI BLAST tool. The selected fungus was also subjected to an in-vitro tolerance test at crude oil concentrations of 5, 10, 15, 20 and 25%, with 0% serving as control. In addition, lignin peroxidase genes (lig1-6) and the manganese peroxidase gene (mnp) were detected and expressed in this strain using the RT-PCR technique, and its peroxidase-producing activity was studied in aliquots (U/ml). This strain had the highest incidence, 80%, and was registered in NCBI as Trichoderma harzianum asemoJ KY488466. The strain KY488466 responded to increasing crude oil concentration: the dose inhibition response percentage (DIRP) increased from 41.67 to 95.41 at 5 to 25% crude oil. All the peroxidase genes are present in KY488466 and were expressed, with amplicons of 900-1000 bp obtained by RT-PCR. In this strain, the lig2, lig4 and mnp genes were over-expressed, lig6 was moderately expressed, and none of the genes was under-expressed. The strain also produced 90 ± 0.87 U/ml lignin peroxidase and 120 ± 1.23 U/ml manganese peroxidase in aliquots. These results imply that KY488466 can tolerate and survive high crude oil concentrations and could be exploited for the bioremediation of oil-spilled soils, and the peroxidase enzymes produced could also be exploited for other biotechnological applications.

Keywords: crude oil, enzymes, expression, peroxidase genes, tolerance, Trichoderma harzianum

Procedia PDF Downloads 202
783 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance to the food industry in the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for the automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat was applied, the temperature rise at the heater probe was measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it takes the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange and watermelon juice. Thermal conductivity values of 0.593, 0.598 and 0.586 W/m·°C and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, measuring the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between literature standards and the averages estimated for each sample investigated with the developed instrument.
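
To make the slope-based evaluation above concrete: for an ideal line heat source, the temperature rise follows T(t) = (q / 4πk) ln(t) + C, so the conductivity is k = q / (4π × slope) where q is the power per unit length. Here is a minimal Python sketch under that assumption; the temperature series is synthetic, with q matching the stated 16.33 W/m:

```python
# Sketch: thermal conductivity from the slope of temperature rise vs ln(time).
import numpy as np

q = 16.33                                 # W/m, power input per unit length
t = np.arange(4, 244, 4.0)                # s, 4 s sampling for 240 s
k_true = 0.59                             # W/(m*K), used to build synthetic data
T = q / (4 * np.pi * k_true) * np.log(t) + 25.0   # ideal line-source response

slope, _ = np.polyfit(np.log(t), T, 1)    # fit T against ln(t)
k = q / (4 * np.pi * slope)               # invert the line-source relation
print(f"estimated thermal conductivity: {k:.3f} W/(m*K)")
```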

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 333
782 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation

Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen

Abstract:

Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. While earlier studies have frequently addressed quantitative measurements of at-risk groups' physical characteristics (e.g., their speed of travel), this paper considers the influence of at-risk groups' characteristics on their decision-making and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto summed times for common activity types on a timeline. Usually, timeline models estimate the required safe egress time (RSET) as a sum of four timespans: detection, alarm, pre-movement, and movement time, and compare this with the available safe egress time (ASET) to determine what influences the margin of safety. This paper presents a cross-sectional study for identifying the most critical items affecting RSET and people's decision-making, with the possibility of including safety knowledge regarding people with physical or cognitive functional impairments. The results will contribute to increased knowledge on considering at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset for predicting the probabilistic behavioural patterns of at-risk groups and provide the necessary components for defining a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety of a safe escape route.
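
As a worked illustration of the timeline model described above, here is a minimal Python sketch summing the four timespans into RSET and comparing it with ASET; all values are hypothetical:

```python
# Sketch: RSET as the sum of four timespans, compared with ASET.
t_detection = 30.0     # s, time until the fire is detected
t_alarm = 15.0         # s, time from detection until the alarm is raised
t_premovement = 120.0  # s, recognition and response (often longer for
                       # at-risk groups)
t_movement = 90.0      # s, travel from the hazard zone to a safe place

RSET = t_detection + t_alarm + t_premovement + t_movement
ASET = 300.0           # s, available safe egress time for this scenario

print(f"RSET = {RSET:.0f} s, margin of safety = {ASET - RSET:.0f} s")
```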

Keywords: fire safety, evacuation, decision-making, at-risk groups

Procedia PDF Downloads 86
781 Interpersonal Variation of Salivary Microbiota Using Denaturing Gradient Gel Electrophoresis

Authors: Manjula Weerasekera, Chris Sissons, Lisa Wong, Sally Anderson, Ann Holmes, Richard Cannon

Abstract:

The aim of this study was to characterize the bacterial populations and yeasts in saliva by polymerase chain reaction followed by denaturing gradient gel electrophoresis (PCR-DGGE) and to measure yeast levels by culture. PCR-DGGE was performed to identify oral bacteria and yeasts in 24 saliva samples. DNA was extracted and used to generate amplicons of the V2-V3 hypervariable region of the bacterial 16S rDNA gene using PCR, and universal primers targeting the large-subunit rDNA gene (25S-28S) of fungi were used to amplify the yeasts present in human saliva. The resulting PCR products were subjected to denaturing gradient gel electrophoresis using a universal mutation detection system. DGGE bands were extracted and sequenced using the Sanger method. A potential relationship was evaluated between groups of bacteria, identified by cluster analysis of DGGE fingerprints, and the yeast levels and their diversity. Significant interpersonal variation of the salivary microbiome was observed. Cluster and principal component analysis of the bacterial DGGE patterns yielded three significant major clusters, plus outliers. Seventeen of the 24 (71%) saliva samples were yeast-positive, with levels up to 10³ cfu/mL; predominantly C. albicans, together with six other species of yeast, was detected. The presence, amount and species of yeast showed no clear relationship to the bacterial clusters. The microbial community in saliva showed significant variation between individuals. The lack of association between yeasts and the bacterial fingerprints in saliva suggests significant person-specific ecological independence in highly complex oral biofilm systems under normal oral conditions.

Keywords: bacteria, denaturing gradient gel electrophoresis, oral biofilm, yeasts

Procedia PDF Downloads 202
780 Application of Compressed Sensing and Different Sampling Trajectories for Data Reduction of Small Animal Magnetic Resonance Image

Authors: Matheus Madureira Matos, Alexandre Rodrigues Farias

Abstract:

Magnetic Resonance Imaging (MRI) is a vital imaging technique used in both clinical and pre-clinical settings to obtain detailed anatomical and functional information. However, MRI scans can be expensive and time-consuming, and they often require the use of anesthetics to keep animals still during the imaging process. Prolonged or repeated exposure to anesthetics can have adverse effects on animals, including physiological alterations and potential toxicity, so minimizing the duration and frequency of anesthesia is crucial for the well-being of research animals. In recent years, various sampling trajectories have been investigated to reduce the number of MRI measurements, leading to shorter scanning times and minimizing the animals' exposure to the effects of anesthetics. Compressed sensing (CS) and sampling trajectories such as Cartesian, spiral, and radial have emerged as powerful tools to reduce MRI data while preserving diagnostic quality. This work aims to apply CS with Cartesian, spiral, and radial sampling trajectories to the reconstruction of abdominal MRI of mice sub-sampled at levels below that defined by the Nyquist theorem. The methodology consists of using a fully sampled reference MRI of a female C57BL/6 mouse, acquired experimentally in a 4.7 Tesla small-animal MRI scanner using spin echo pulse sequences. The image is down-sampled along Cartesian, radial, and spiral sampling paths and then reconstructed by CS. The quality of the reconstructed images is objectively assessed with three quality metrics: root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM). The use of optimized sampling trajectories and the CS technique has demonstrated the potential for a significant reduction, up to 70%, in image data acquisition. This result translates into shorter scan times, minimizing the duration and frequency of anesthesia administration and reducing the potential risks associated with it.
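
To illustrate the three quality metrics named above, here is a hedged Python sketch computing RMSE, PSNR, and SSIM with scikit-image on a reference image and a perturbed reconstruction; the images are synthetic placeholders, not the mouse MRI data:

```python
# Sketch: RMSE, PSNR, and SSIM between a reference and its reconstruction.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))
reconstruction = reference + rng.normal(scale=0.02, size=(128, 128))

rmse = np.sqrt(np.mean((reference - reconstruction) ** 2))
psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
ssim = structural_similarity(reference, reconstruction, data_range=1.0)
print(f"RMSE={rmse:.4f}, PSNR={psnr:.1f} dB, SSIM={ssim:.3f}")
```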

Keywords: compressed sensing, magnetic resonance, sampling trajectories, small animals

Procedia PDF Downloads 51
779 Synthesis of Pd@Cu Core−Shell Nanowires by Galvanic Displacement of Cu by Pd²⁺ Ions as a Modified Glassy Carbon Electrode for the Simultaneous Determination of Dihydroxybenzene Isomers Speciation

Authors: Majid Farsadrouh Rashti, Parisa Jahani, Amir Shafiee, Mehrdad Mofidi

Abstract:

The dihydroxybenzene isomers hydroquinone (HQ), catechol (CC), and resorcinol (RS) are widely recognized as important environmental pollutants due to their toxicity and low degradability in the ecological environment. Speciation of HQ, CC, and RS is very important for environmental analysis because these isomers co-exist in environmental samples and are difficult to degrade. Many analytical methods have been reported for detecting these isomers, such as spectrophotometry, fluorescence, high-performance liquid chromatography (HPLC), and electrochemical methods. Electrochemical methods offer attractive advantages such as simple and fast response, low maintenance costs, a wide linear analysis range, high efficiency, excellent selectivity, and high sensitivity. A novel glassy carbon electrode (GCE) modified with Pd@Cu/CNTs core−shell nanowires for the simultaneous determination of HQ, CC, and RS is described. A detailed investigation by field emission scanning electron microscopy and electrochemistry was performed to elucidate the preparation process and properties of the GCE/Pd/CuNWs-CNTs. The electrochemical response characteristics of the modified electrode toward HQ, CC, and RS were investigated by cyclic voltammetry, differential pulse voltammetry (DPV), and chronoamperometry. Under optimum conditions, the calibration curves were linear up to 228 µM for each analyte, with detection limits of 0.4, 0.6, and 0.8 µM for HQ, CC, and RS, respectively. The diffusion coefficients for the oxidation of HQ, CC, and RS at the modified electrode were calculated as 6.5×10⁻⁵, 1.6×10⁻⁵, and 8.5×10⁻⁵ cm² s⁻¹, respectively. DPV was used for the simultaneous determination of HQ, CC, and RS at the modified electrode, and the relative standard deviations were 2.1%, 1.9%, and 1.7% for HQ, CC, and RS, respectively. Moreover, the GCE/Pd/CuNWs-CNTs was successfully used for the determination of HQ, CC, and RS in real samples.
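
The abstract does not state how the diffusion coefficients were extracted; one standard route from the chronoamperometry it mentions is the Cottrell equation, sketched below with synthetic data and placeholder electrode parameters (n, A, and C are illustrative, not values from the paper):

```python
# Hedged sketch: diffusion coefficient from chronoamperometry via the
# Cottrell equation I(t) = n*F*A*C*sqrt(D/(pi*t)); a plot of I against
# t^(-1/2) is linear with slope n*F*A*C*sqrt(D/pi).
import numpy as np

F = 96485.0   # Faraday constant, C/mol

def cottrell_D(t, i, n, A, C):
    """Slope of I vs t^(-1/2) by least squares; returns D in cm^2/s."""
    slope, _ = np.polyfit(t ** -0.5, i, 1)
    return np.pi * (slope / (n * F * A * C)) ** 2

# Synthetic transient: n=2, A=0.03 cm^2, C=1e-7 mol/cm^3 (0.1 mM),
# generated with D = 6.5e-5 cm^2/s (the HQ value reported above).
t = np.linspace(0.1, 5.0, 50)                          # s
i = 2 * F * 0.03 * 1e-7 * np.sqrt(6.5e-5 / (np.pi * t))
print(f"recovered D = {cottrell_D(t, i, 2, 0.03, 1e-7):.2e} cm^2/s")
```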

Keywords: dihydroxybenzene isomers, galvanized copper nanowires, electrochemical sensor, palladium, speciation

Procedia PDF Downloads 118
778 Comparative Electrochemical Studies of Enzyme-Based and Enzyme-less Graphene Oxide-Based Nanocomposite as Glucose Biosensor

Authors: Chetna Tyagi, G. B. V. S. Lakshmi, Ambuj Tripathi, D. K. Avasthi

Abstract:

Graphene oxide provides a good host matrix for preparing nanocomposites due to the different functional groups attached to its edges and planes. Being biocompatible, it is used in therapeutic applications. Since enzyme-based biosensors require complicated enzyme purification procedures, high fabrication costs, and special storage conditions, enzyme-less biosensors are needed for use even in harsh environments such as high temperature or varying pH. In this work, we have prepared both enzyme-based and enzyme-less graphene oxide-based biosensors for glucose detection, using glucose oxidase as the enzyme and gold nanoparticles, respectively. These samples were characterized using X-ray diffraction, UV-visible spectroscopy, scanning electron microscopy, and transmission electron microscopy to confirm the successful synthesis of the working electrodes. Electrochemical measurements were performed for both working electrodes using a 3-electrode electrochemical cell. Cyclic voltammetry curves showed homogeneous electron transfer on the electrodes in the scan range between -0.2 V and 0.6 V. Sensing measurements were performed using differential pulse voltammetry for glucose concentrations ranging from 0.01 mM to 20 mM, and sensing toward glucose was improved in the presence of gold nanoparticles. Gold nanoparticles in the graphene oxide nanocomposite played an important role in sensing glucose in the absence of the enzyme glucose oxidase, as evident from these measurements. Selectivity was tested by measuring the current response of the working electrode toward glucose in the presence of common interfering agents such as cholesterol, ascorbic acid, citric acid, and urea. The enzyme-less working electrode also showed storage stability for up to 15 weeks, making it a suitable glucose biosensor.
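
For the calibration step, a minimal sketch of fitting a DPV peak-current-versus-concentration line and deriving a 3σ/slope detection limit is shown below; the current values are synthetic placeholders spanning the 0.01–20 mM range reported above, not measured data:

```python
# Hedged sketch: linear DPV calibration and a 3*sigma/slope detection
# limit. The peak currents are synthetic placeholders, not measurements.
import numpy as np

rng = np.random.default_rng(1)
conc = np.array([0.01, 0.1, 0.5, 1, 2, 5, 10, 20])     # mM, as in abstract
i_peak = 4.2 * conc + rng.normal(0, 0.3, conc.size)    # fake µA response

slope, intercept = np.polyfit(conc, i_peak, 1)
residuals = i_peak - (slope * conc + intercept)
sigma = residuals.std(ddof=2)            # residual std of the regression
lod = 3 * sigma / slope

print(f"sensitivity = {slope:.2f} µA/mM, LOD ≈ {lod:.3f} mM")
```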

Keywords: electrochemical, enzyme-less, glucose, gold nanoparticles, graphene oxide, nanocomposite

Procedia PDF Downloads 123
777 AI Predictive Modeling of Excited State Dynamics in OPV Materials

Authors: Pranav Gunhal, Krish Jhurani

Abstract:

This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
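
The paper does not publish its architecture; as a hedged sketch, a minimal transformer-encoder regressor mapping a tokenized molecular representation (e.g., SMILES characters) to a scalar excited-state lifetime might look like the following in PyTorch, with the tokenization and all hyperparameters being illustrative assumptions:

```python
# Hedged sketch, not the authors' model: a small transformer-encoder
# regressor from token sequences to a scalar excited-state lifetime (ps).
import torch
import torch.nn as nn

class LifetimeTransformer(nn.Module):
    def __init__(self, vocab=64, d_model=128, nhead=4, layers=4, max_len=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens):                     # tokens: (batch, seq)
        h = self.embed(tokens) + self.pos[:, : tokens.size(1)]
        h = self.encoder(h).mean(dim=1)            # mean-pool over sequence
        return self.head(h).squeeze(-1)

# One training step against TDDFT-computed lifetimes (synthetic stand-ins):
model = LifetimeTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
tokens = torch.randint(0, 64, (8, 100))          # 8 fake tokenized molecules
lifetimes = torch.rand(8)                        # fake lifetimes in ps
opt.zero_grad()
loss = nn.functional.l1_loss(model(tokens), lifetimes)  # MAE, as reported
loss.backward()
opt.step()
print(f"batch MAE: {loss.item():.3f} ps")
```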

Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling

Procedia PDF Downloads 95
776 Linear and Nonlinear Resonance of Flat Bottom Hole in an Aluminum Plate

Authors: Biaou Jean-Baptiste Kouchoro, Anissa Meziane, Philippe Micheau, Mathieu Renier, Nicolas Quaegebeur

Abstract:

Numerous experimental and numerical studies have shown the interest of local defect resonance (LDR) for the non-destructive testing of metallic and composite plates. Guided ultrasonic waves such as Lamb waves, which are increasingly used for the inspection of these flat structures, generate local resonance phenomena through their interaction with a damaged area, allowing the detection of defects. When subjected to large-amplitude motion, nonlinear behavior can predominate in the damaged area. This work presents a 2D finite element model of the local resonance of a 12 mm long and 5 mm deep flat bottom hole (FBH) in a 6 mm thick aluminum plate under the excitation induced by an incident A0 Lamb mode. The analysis of the transient response of the FBH enables the precise determination of its resonance frequencies and the associated modal deformations. A linear parametric study varying the geometrical properties of the FBH then highlights the sensitivity of the resonance frequency to the plate thickness. It is demonstrated that the resonance effect disappears when the ratio of thicknesses between the FBH and the plate is below 0.1. Finally, the nonlinear behavior of the FBH is studied by introducing geometrical nonlinearities (taking into account the nonlinear component of the strain tensor) that occur at large vibration amplitudes. Experimental analysis allows observation of the resonance effects and the nonlinear response of the FBH; the differences between these experimental results and the numerical results will be commented on. The results of this study are promising and make it possible to consider more realistic defects such as delamination in composite materials.
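
A rough analytical cross-check, not part of the paper's FEM, approximates the 1 mm ligament left under the FBH as a clamped circular plate, a textbook LDR estimate; interpreting "12 mm long" as the FBH diameter is an assumption:

```python
# Hedged back-of-envelope estimate, not the paper's FEM: fundamental
# frequency of the residual ligament modeled as a clamped circular plate,
#   f = (lambda^2 / (2*pi*a^2)) * sqrt(D / (rho*h)),  lambda^2 ~= 10.22,
#   D = E*h^3 / (12*(1 - nu^2)).
import math

E, nu, rho = 69e9, 0.33, 2700.0   # aluminum, SI units
a = 12e-3 / 2                     # FBH radius (assumes "12 mm long" = diameter)
h = 6e-3 - 5e-3                   # residual ligament: plate minus FBH depth

D = E * h**3 / (12 * (1 - nu**2))   # flexural rigidity, N*m
f = (10.22 / (2 * math.pi * a**2)) * math.sqrt(D / (rho * h))
print(f"estimated fundamental LDR frequency: {f / 1e3:.0f} kHz")   # ~70 kHz
```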

Keywords: guided waves, non-destructive testing, dynamic field testing, non-linear ultrasound/vibration

Procedia PDF Downloads 119
775 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), as well as five machine learning algorithms: Decision Tree (C4.5), Naïve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We evaluated and compared the classifiers by selecting appropriate metrics to assess classifier performance and an appropriate tool to quantify this performance. The main purpose of the study is to predict and diagnose breast cancer by applying these algorithms and to identify the most effective one with respect to the confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
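
A minimal sketch of the comparison pipeline on the same Wisconsin Diagnostic dataset (bundled with scikit-learn) is shown below; it covers four of the cited classifiers and the accuracy/precision metrics, while XGBoost, MLP, and CNN are omitted to keep the example self-contained:

```python
# Hedged sketch of the comparison pipeline, not the authors' code.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score

X, y = load_breast_cancer(return_X_y=True)          # Wisconsin Diagnostic
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:13s} acc={accuracy_score(y_te, pred):.4f}  "
          f"prec={precision_score(y_te, pred):.4f}")
```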

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 55