Search results for: quantum computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1096

256 Human Leukocyte Antigen Class 1 Phenotype Distribution and Analysis in Persons from Central Uganda with Active Tuberculosis and Latent Mycobacterium tuberculosis Infection

Authors: Helen K. Buteme, Rebecca Axelsson-Robertson, Moses L. Joloba, Henry W. Boom, Gunilla Kallenius, Markus Maeurer

Abstract:

Background: The Ugandan population is heavily affected by infectious diseases, and human leukocyte antigen (HLA) diversity plays a crucial role in the host-pathogen interaction and affects the rates of disease acquisition and outcome. The identification of HLA class 1 alleles and determining which alleles are associated with tuberculosis (TB) outcomes would help in screening individuals in TB-endemic areas for susceptibility to TB and in predicting resistance or progression to TB, which would inevitably lead to better clinical management of TB. Aims: To determine the HLA class 1 phenotype distribution in a Ugandan TB cohort and to establish the relationship between these phenotypes and active and latent TB. Methods: Blood samples were drawn from 32 HIV-negative individuals with active TB and 45 HIV-negative individuals with latent MTB infection. DNA was extracted from the blood samples, and the DNA samples were HLA typed by the polymerase chain reaction-sequence specific primer method. The allelic frequencies were determined by direct count. Results: HLA-A*02, A*01, A*74, A*30, B*15, B*58, C*07, C*03 and C*04 were the dominant phenotypes in this Ugandan cohort. There were differences in the distribution of HLA types between the individuals with active TB and the individuals with latent MTB infection (LTBI), with only the HLA-A*03 allele showing a statistically significant difference (p=0.0136). However, after false discovery rate (FDR) computation, the corresponding q-value was above the expected proportion of false discoveries (q-value 0.2176). Key findings: We identified a number of HLA class I alleles in a population from Central Uganda, which will enable us to carry out a functional characterization of CD8+ T-cell mediated immune responses to MTB. Our results also suggest that there may be a positive association between the HLA-A*03 allele and TB, implying that individuals with the HLA-A*03 allele are at a higher risk of developing active TB.
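
As an illustration of the false discovery rate step reported above (p=0.0136 losing significance at q=0.2176), here is a minimal Benjamini-Hochberg sketch in Python. The 16 p-values are hypothetical placeholders; 16 comparisons happens to reproduce the reported q-value (0.0136 × 16 = 0.2176), but the actual number of alleles tested is not stated in the abstract.

```python
import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg q-values for a 1-D array of p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                          # ascending p-values
    ranked = p[order] * m / np.arange(1, m + 1)    # p_(i) * m / i
    q = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    q = np.clip(q, 0, 1)
    out = np.empty(m)
    out[order] = q
    return out

# hypothetical per-allele p-values; only 0.0136 (HLA-A*03) comes from the abstract
pvals = [0.0136, 0.18, 0.25, 0.31, 0.38, 0.44, 0.51, 0.57,
         0.63, 0.69, 0.74, 0.80, 0.85, 0.90, 0.94, 0.98]
print("q-value for the smallest p:", round(bh_qvalues(pvals)[0], 4))
```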

Keywords: HLA, phenotype, tuberculosis, Uganda

Procedia PDF Downloads 403
255 Comparison of Modulus from Repeated Plate Load Test and Resonant Column Test for Compaction Control of Trackbed Foundation

Authors: JinWoog Lee, SeongHyeok Lee, ChanYong Choi, Yujin Lim, Hojin Cho

Abstract:

The primary function of the trackbed in a conventional railway track system is to decrease the stresses in the subgrade to an acceptable level. A properly designed trackbed layer performs this task adequately. Many design procedures have used assumed critical stiffness values of the layers and/or values obtained mostly in the field to calculate an appropriate thickness of the sublayers of the trackbed foundation. However, those stiffness values do not consider strain levels clearly and precisely in the layers. This study proposes a method of computation of stiffness that accounts for strain level in the layers of the trackbed foundation in order to provide properly selected design values of the stiffness of the layers. Shear modulus values depend on shear strain level, so the strain levels generated in the subgrade of the trackbed under wheel loading and below the plate of the Repeated Plate Bearing Test (RPBT) are investigated with the finite element analysis programs ABAQUS and PLAXIS. The strain levels generated in the subgrade from RPBT are compared to those from the resonant column (RC) test after consideration of strain levels and stress states. For comparison of the shear modulus G obtained from the RC test and the stiffness moduli Ev2 obtained from RPBT in the field, a large number of mid-size RC tests in the laboratory and RPBTs in the field were performed. It was found in this study that there is a large difference in stiffness modulus when the converted Ev2 values are compared to those from the RC test. It is verified in this study that it is necessary to use precise and increased loading steps to construct nonlinear curves from RPBT in order to obtain correct Ev2 values at proper strain levels.

Keywords: modulus, plate load test, resonant column test, trackbed foundation

Procedia PDF Downloads 495
254 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are major challenges for all types of media, especially social media. As large social networks such as Facebook and Twitter have admitted, there is a great deal of false information, fake likes, fake views, and duplicated accounts. Most information appearing on social media is doubtful and in some cases misleading, and it needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection. The aim of this technique is to choose a feature subset from the original set to improve the classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches, which works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed better classification of false information for our method. The detection performance was improved in two aspects. On the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
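
As an illustration of the four-step pipeline described above (feature similarity, clustering, per-cluster selection, SVM classification), here is a minimal sketch using scikit-learn. The placeholder data, the number of clusters, and the rule of keeping the feature closest to each cluster centre are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def cluster_based_feature_selection(X, n_clusters=20, random_state=0):
    """Cluster the feature vectors (columns of X) and keep, from each cluster,
    the feature closest to the cluster centre."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(X.T)                       # cluster features, not samples
    selected = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        dists = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])     # representative feature
    return np.array(selected)

# placeholder feature matrix: rows = news items, y: 0 = real, 1 = fake
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 200)), rng.integers(0, 2, size=500)
idx = cluster_based_feature_selection(X, n_clusters=20)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, idx], y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy on reduced feature set:", accuracy_score(y_te, clf.predict(X_te)))
```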

Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine

Procedia PDF Downloads 176
253 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back propagation networks. Results obtained revealed that, in general, the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases than the LM and Br algorithms, although learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, on average, model performance efficiencies using the coefficient of efficiency (CE) statistic were Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, respectively, for the training and validation phases. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANN for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted considering overall time expenditure and quality of the forecast as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
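
The evaluation statistics quoted above can be reproduced from observed and simulated flow series with a few lines of code. The sketch below assumes the coefficient of efficiency (CE) is the Nash-Sutcliffe form, which is a common convention in streamflow modelling; the sample arrays are placeholders.

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency (assumed form of the CE statistic)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):
    return np.mean(np.abs(np.asarray(obs, float) - np.asarray(sim, float)))

def mape(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

def msre(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.mean(((obs - sim) / obs) ** 2)

# placeholder daily streamflow values (m^3/s)
obs = np.array([12.1, 15.4, 30.2, 25.7, 18.3, 14.9])
sim = np.array([11.8, 16.0, 28.5, 26.4, 19.1, 14.2])
print("CE:", round(coefficient_of_efficiency(obs, sim), 3),
      "MAE:", round(mae(obs, sim), 3),
      "MAPE:", round(mape(obs, sim), 2),
      "MSRE:", round(msre(obs, sim), 4))
```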

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 152
252 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model

Authors: A. Clementking, C. Jothi Venkateswaran

Abstract:

Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are espoused to support natural resource planning. Optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are tainted when one water resource is merged with another. This work aimed to predict water resource distribution connectivity in accordance with water quality and sediment using an innovative proposed master-slave back-propagation neural network model. The experimental results were arrived at through collecting water quality attributes, computation of the water quality index, design and development of a neural network model to determine water quality and sediment, a master-slave back-propagation neural network model to determine variations in water quality and sediment attributes between the water resources, and a recommendation for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The result of the training dataset is given as input to the master model, and its maximum weights are assigned as input to the slave model to predict the water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of the water quality target values. The sediment level variations were also predicted within 0.01 to 0.05% of each water quality percentage. The model produced significant variations in physicochemical attribute weights. According to the predicted weight variations on the training dataset, effective recommendations are made to connect the different resources.

Keywords: master-slave back-propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining

Procedia PDF Downloads 477
251 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, the security of identification systems can easily be compromised by various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs) based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We realize that choosing the correct loss function for each model is crucial since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach to compare the generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on the unseen data across all different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with its parameter counts and mean average error rates to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
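
A minimal sketch of the loss-function/optimizer comparison described above is given below using Keras, with a small CNN standing in for AlexNet/VGGNet/ResNet and only two string-named losses; the placeholder data, the architecture, and the hyper-parameters are assumptions, and the LivDet loading, Center Loss, quantization, and pruning steps are outside the scope of this sketch.

```python
import itertools
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(96, 96, 1)):
    """Small CNN used only as a placeholder for AlexNet/VGGNet/ResNet."""
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(2, activation="softmax"),   # live vs. spoof
    ])

losses = ["categorical_crossentropy", "categorical_hinge"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

# placeholder data standing in for a LivDet 2017 subset
x = np.random.rand(64, 96, 96, 1).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)

for loss, opt in itertools.product(losses, optimizers):
    model = build_cnn()
    model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
    hist = model.fit(x, y, epochs=1, batch_size=16, verbose=0)
    print(f"{loss:>25s} + {opt:<8s} -> train acc {hist.history['accuracy'][-1]:.3f}")
```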

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
250 Free Energy Computation of A G-Quadruplex-Ligand Structure: A Classical Molecular Dynamics and Metadynamics Simulation Study

Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria

Abstract:

The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences appear at many sites of genomic DNA and can potentially form G-quadruplexes, such as those occurring at the 3'-terminus of the human telomeric DNA. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, the ligands can be used to down-regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Many G-quadruplex ligands have been proposed with a planar core to facilitate the pi-pi stacking and electrostatic interactions with the G-quartets. However, many drug candidates are unable to discriminate a G-quadruplex from a double-helix DNA structure. In this context, it is important to investigate the site topology for the interaction of a G-quadruplex with a ligand. In this work, we determine the free energy surface of a G-quadruplex-ligand complex to study the binding modes of the G-quadruplex (TG4T) with the daunomycin (DM) drug. The complex TG4T-DM is studied using classical molecular dynamics in combination with metadynamics simulations. The metadynamics simulations permit an enhanced sampling of the conformational space with a modest computational cost and yield free energy surfaces in terms of the collective variables (CVs). The free energy surfaces of TG4T-DM exhibit other local minima, indicating the presence of additional binding modes of daunomycin that are not observed in short MD simulations without the metadynamics approach. The results are compared with similar calculations on a different structure (the mutated mu-G4T-DM, where the 5' thymines on TG4T-DM have been deleted). The results should help in designing new G-quadruplex drugs and in understanding the differences in the recognition topology sites of the duplex and quadruplex DNA structures in their interaction with ligands.

Keywords: g-quadruplex, cancer, molecular dynamics, metadynamics

Procedia PDF Downloads 460
249 Effects of Computer Aided Instructional Package on Performance and Retention of Genetic Concepts amongst Secondary School Students in Niger State, Nigeria

Authors: Muhammad R. Bello, Mamman A. Wasagu, Yahya M. Kamar

Abstract:

The study investigated the effects of a computer-aided instructional package (CAIP) on performance and retention of genetic concepts among secondary school students in Niger State. A quasi-experimental research design, i.e., pre-test/post-test experimental and control groups, was adopted for the study. The population of the study was all senior secondary school three (SS3) students offering biology. A sample of 223 students was randomly drawn from six purposively selected secondary schools. The researchers' computer-aided instructional package (CAIP) on genetic concepts was used as the treatment instrument for the experimental group, while the control group was exposed to the conventional lecture method (CLM). The instrument for data collection was a Genetic Performance Test (GEPET) that had 50 multiple-choice questions, which were validated by science educators. A reliability coefficient of 0.92 was obtained for GEPET using the Pearson Product Moment Correlation (PPMC). The data collected were analyzed using the IBM SPSS Version 20 package for computation of means, standard deviations, t-tests, and analysis of covariance (ANCOVA). The ANOVA (Fcal(220) = 27.147, p < 0.05) shows that students who received instruction with CAIP outperformed the students who received instruction with CLM and also had higher retention. The findings also revealed no significant difference in performance and retention between male and female students (tcal(103) = -1.429, p > 0.05). It was recommended, amongst others, that teachers should use the computer-aided instructional package in teaching genetic concepts in order to improve students' performance and retention in the biology subject.
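
For readers unfamiliar with the analysis pipeline above (pre-test as covariate, post-test as outcome, instructional group as factor), here is a hedged sketch of an ANCOVA in Python with statsmodels; the data frame contains simulated placeholder scores, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# placeholder scores: 'group' is CAIP (experimental) or CLM (control)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["CAIP"] * 40 + ["CLM"] * 40,
    "pretest": rng.normal(40, 8, 80),
})
df["posttest"] = (0.6 * df["pretest"]
                  + np.where(df["group"] == "CAIP", 20, 12)
                  + rng.normal(0, 5, 80))

# ANCOVA: post-test scores adjusted for pre-test scores
model = ols("posttest ~ C(group) + pretest", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p for the group effect
```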

Keywords: computer aided instructional package, performance, retention, genetic concepts, senior secondary school students

Procedia PDF Downloads 362
248 Discovering New Organic Materials through Computational Methods

Authors: Lucas Viani, Benedetta Mennucci, Soo Young Park, Johannes Gierschner

Abstract:

Organic semiconductors have attracted the attention of the scientific community in the past decades due to their unique physicochemical properties, allowing new designs and alternative device fabrication methods. To date, organic electronic devices are largely based on conjugated polymers, mainly due to their easy processability. In recent years, due to moderate energy transport (ET) and charge transport (CT) efficiencies and the ill-defined nature of polymeric systems, the focus has been shifting to small conjugated molecules with well-defined chemical structure, easier control of intermolecular packing, and enhanced CT and ET properties. This has led to the synthesis of new small molecules, followed by the growth of their crystalline structure and ultimately by the device preparation. This workflow is commonly followed without a clear knowledge of the ET and CT properties related mainly to the macroscopic systems, which may lead to financial and time losses, since not all materials will deliver the properties and efficiencies demanded by the current standards. In this work, we present a theoretical workflow designed to predict the key ET properties of these new materials prior to synthesis, thus speeding up the discovery of new promising materials. It is based on quantum mechanical, hybrid, and classical methodologies, starting from a single-molecule structure, proceeding to the prediction of its packing structure, and finishing with the prediction of properties of interest such as static and averaged excitonic couplings and the exciton diffusion length.

Keywords: organic semiconductor, organic crystals, energy transport, excitonic couplings

Procedia PDF Downloads 253
247 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular Diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and considerably delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated in ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
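
A common way to quantify the R-peak detection performance discussed above is to match detected peaks to annotated peaks within a small tolerance window and report sensitivity and positive predictive value; the sketch below assumes a ±75 ms tolerance at 360 Hz, and the sample indices are placeholders rather than values from the study.

```python
import numpy as np

def evaluate_rpeaks(detected, annotated, fs=360, tol_ms=75):
    """Match detected R-peak sample indices to annotations within a tolerance window."""
    tol = int(round(tol_ms * fs / 1000.0))
    annotated = np.asarray(sorted(annotated))
    used = np.zeros(annotated.size, dtype=bool)
    tp = 0
    for d in sorted(detected):
        diffs = np.abs(annotated - d)
        j = int(np.argmin(diffs))
        if diffs[j] <= tol and not used[j]:
            used[j] = True                      # each annotation matched at most once
            tp += 1
    fp = len(detected) - tp
    fn = annotated.size - tp
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, ppv

# placeholder sample indices at fs = 360 Hz
annotated = [100, 460, 830, 1190, 1555]
detected = [102, 455, 1200, 1560, 1900]
print("sensitivity, PPV:", evaluate_rpeaks(detected, annotated))
```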

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 186
246 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technology is increasingly more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analysis work cannot be carried out without some software engineering knowledge. Here, domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning. During the whole work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data and how to manage and process it have become abilities that more and more planners and urban researchers need to possess. This paper summarizes and predicts the emergence of technologies and technological iterations that may affect urban research in the future, help discover urban problems, and support the implementation of targeted sustainable urban strategies. They are summarized into seven major domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamic domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. These seven models make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 177
245 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation

Authors: Abdal-Hafeez Alhussein

Abstract:

Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models—especially machine learning and natural language processing—are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.

Keywords: artificial intelligence, information technology, automation, scalability

Procedia PDF Downloads 17
244 Effect of Sr-Doping on Multiferroic Properties of Ca₁₋ₓSrₓMn₇O₁₂

Authors: Parul Jain, Jitendra Saha, L. C. Gupta, Satyabrata Patnaik, Ashok K. Ganguli, Ratnamala Chatterjee

Abstract:

This study shows how sensitively and drastically the multiferroic properties of CaMn₇O₁₂ get modified by isovalent Sr-doping, namely, in Ca₁₋ₓSrₓMn₇O₁₂ for x as small as 0.01 and 0.02. CaMn₇O₁₂ is a type-II multiferroic, wherein polarization is caused by magnetic spin ordering. In this report, the magnetic and ferroelectric properties of Ca₁₋ₓSrₓMn₇O₁₂ (0 ≤ x ≤ 0.1) are investigated. Samples were prepared by the wet sol-gel technique using their respective nitrates; the powders thus obtained were calcined and sintered under optimized conditions. The X-ray diffraction patterns of all samples doped with Sr concentrations in the range (0 ≤ x ≤ 10%) were found to be free from secondary phases. Magnetization versus temperature and magnetization versus field measurements were carried out using a Quantum Design SQUID magnetometer. Pyroelectric current measurements were carried out to determine the polarization in the samples. The findings of the measurements are: (i) increasing Sr-doping in the CaMn₇O₁₂ lattice, i.e., for x ≤ 0.02, increases the polarization, whereas it decreases the magnetization and the coercivity of the samples; (ii) the material with x = 0.02 exhibits a ferroelectric polarization Ps which is more than double the Ps in the undoped material, and the magnetization M is reduced to less than half of that of the pure material; remarkably, (iii) the modifications in Ps and M are reversed as x increases beyond x = 0.02, and for x = 0.10, Ps is reduced even below that of the pure sample; (iv) there is no visible change in the two magnetic transitions TN1 (90 K) and TN2 (48 K) of the pure material as a function of x. The strong simultaneous variations of Ps and M for x = 0.02 strongly suggest either a basic modification of the magnetic structure of the material, a significant change in the coupling of P and M, or possibly both.

Keywords: ferroelectric, isovalent, multiferroic, polarization, pyroelectric

Procedia PDF Downloads 462
243 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface

Authors: Suman Dutta, Manish Agrawal, C. S. Jog

Abstract:

Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in the ocean and naval engineering and wave energy harvesting. This manuscript presents a monolithic, finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid and a displacement-based Lagrangian approach has been used for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity and traction are enforced using the mortar method. In the mortar method, the constraints are enforced in a weak sense using the Lagrange multiplier method. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.

Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method

Procedia PDF Downloads 274
242 Bimetallic Silver-Platinum Core-Shell Nanoparticles Formation and Spectroscopic Analysis

Authors: Mangaka C. Matoetoe, Fredrick O. Okumu

Abstract:

Metal nanoparticles have attracted great interest in scientific research and industrial applications, owing to their unique large surface area-to-volume ratios and quantum-size effects. Supported metal nanoparticles play a pivotal role in areas such as nanoelectronics, energy storage and as catalysts for the sustainable production of fuels and chemicals. Monometallic (Ag, Pt) and silver-platinum (Ag-Pt) bimetallic (BM) nanoparticles (NPs) with a 1:1 mole ratio were prepared by reduction / co-reduction of hexachloroplatinate and silver nitrate with sodium citrate. The kinetics of the nanoparticle formation was monitored using UV-visible spectrophotometry. Transmission electron microscopy (TEM) and energy-dispersive X-ray (EDX) spectroscopy were used for size, film morphology, and elemental composition studies. Fast reduction processes were noted for Ag NPs (0.079 s-1) and Ag-Pt NPs 1:1 (0.082 s-1), with the exception of Pt NP formation (0.006 s-1). The UV-visible spectra showed characteristic peaks for Ag NPs, while the Pt NPs and Ag-Pt NPs 1:1 had no observable absorption peaks. UV-visible spectra confirmed chemical reduction resulting in the formation of NPs, while TEM images depicted a core-shell arrangement in the Ag-Pt NPs 1:1 with a particle size of 20 nm. Monometallic Ag and Pt NPs had particle sizes of 60 nm and 2.5 nm, respectively. The particle size distribution in the BM NPs was found to directly depend on the concentration of Pt NPs around the Ag core. EDX elemental composition analysis of the nanoparticle suspensions confirmed the presence of Ag and Pt in the Ag-Pt NPs 1:1. All the spectroscopic analyses confirmed the successful formation of the nanoparticles.

Keywords: kinetics, morphology, nanoparticles, platinum, silver

Procedia PDF Downloads 401
241 Numerical Simulation of Aeroelastic Influence Exerted by Kinematic and Geometrical Parameters on Oscillations' Frequencies and Phase Shift Angles in a Simulated Compressor of Gas Transmittal Unit

Authors: Liliia N. Butymova, Vladimir Y. Modorsky, Nikolai A. Shevelev

Abstract:

Prediction of vibration processes in gas transmittal units (GTUs) is an urgent problem. Despite numerous scientific publications on the problem of vibrations in general, there are not enough works concerning FSI modeling of interaction processes between several deformable blades in a gas-dynamic flow. Since it is very difficult to solve the problem in full scope, with all factors considered, a unidirectional dynamic coupled 1FSI model is suggested for use at the first stage, which would include, from symmetry considerations, two blades and which might be considered as the first stage of solving the more general bidirectional problem. The multi-processor program ANSYS CFX was chosen as the numerical computation tool. The problem was solved on the PNRPU high-capacity computer complex. At the first stage of the study, the blades were assumed to oscillate with the same frequency, although the oscillation phases could be equal or different. The non-stationary distribution of gas-dynamic forces over the blade surfaces is calculated in the course of the simulation experiment. Oscillations in the “gas-structure” dynamic system are assumed to increase if the resultant of these gas-dynamic forces is in phase with the blade oscillation, i.e., the phase shift φ = 0. If these oscillations occur with a phase shift, then the oscillations might increase or decrease, depending on the phase shift value. The most important results are as follows: the angle of phase shift in inter-blade oscillation and the gas-dynamic force depend on the flow velocity, the specific inter-blade gap, and the shaft rotation speed; a phase shift in the oscillation of adjacent blades does not always correspond to the phase shift of the gas-dynamic forces affecting the blades. Thus, it was discovered that asynchronous oscillation of the blades might cause either attenuation or intensification of oscillation. It was revealed that the clocking effect might depend not only on the mutual circumferential displacement of blade rows and the gap between the blades, but also on the nature of the blade dynamic deformation.

Keywords: aeroelasticity, ANSYS CFX, oscillation, phase shift, clocking effect, vibrations

Procedia PDF Downloads 269
240 A Neural Network for the Prediction of Contraction after Burn Injuries

Authors: Ginger Egberts, Marianne Schaaphok, Fred Vermolen, Paul van Zuijlen

Abstract:

A few years ago, a promising morphoelastic model was developed for the simulation of contraction formation after burn injuries. Contraction can lead to a serious reduction in physical mobility, like a reduction in the range-of-motion of joints. If this is the case in a healing burn wound, then this is referred to as a contracture that needs medical intervention. The morphoelastic model consists of a set of partial differential equations describing both a chemical part and a mechanical part in dermal wound healing. These equations are solved with the numerical finite element method (FEM). In this method, many calculations are required on each of the chosen elements. In general, the more elements, the more accurate the solution. However, the number of elements increases rapidly if simulations are performed in 2D and 3D. In that case, it not only takes longer before a prediction is available, the computation also becomes more expensive. It is therefore important to investigate alternative possibilities to generate the same results, based on the input parameters only. In this study, a surrogate neural network has been designed to mimic the results of the one-dimensional morphoelastic model. The neural network generates predictions quickly, is easy to implement, and there is freedom in the choice of input and output. Because a neural network requires extensive training and a data set, it is ideal that the one-dimensional FEM code generates output quickly. These feed-forward-type neural network results are very promising. Not only can the network give faster predictions, but it also has a performance of over 99%. It reports on the relative surface area of the wound/scar, the total strain energy density, and the evolutions of the densities of the chemicals and mechanics. It is, therefore, interesting to investigate the applicability of a neural network for the two- and three-dimensional morphoelastic model for contraction after burn injuries.
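
As a concrete illustration of the surrogate idea described above (map input parameters directly to FEM outputs with a feed-forward network), here is a minimal sketch using scikit-learn's MLPRegressor. The parameter count, the synthetic training data, and the two mock outputs are assumptions; in the study, the training set comes from the one-dimensional morphoelastic FEM code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder training set: each row is a vector of model input parameters,
# each target row mimics FEM outputs (e.g. relative surface area of the
# wound/scar and total strain energy density at a fixed time point).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 6))                  # scaled input parameters
y = np.column_stack([
    0.4 + 0.5 * X[:, 0] * (1.0 - X[:, 1]),                 # mock relative surface area
    0.1 + 0.3 * X[:, 2] + 0.2 * X[:, 3] ** 2,              # mock strain energy density
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)                                  # feed-forward surrogate
print("R^2 on held-out parameter sets:", round(surrogate.score(X_te, y_te), 3))
```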

Keywords: biomechanics, burns, feasibility, feed-forward NN, morphoelasticity, neural network, relative surface area wound

Procedia PDF Downloads 55
239 Synthesis and Characterization of New Thermotropic Monomers Containing Phosphorus

Authors: Diana Serbezeanu, Ionela-Daniela Carja, Tachita Vlad-Bubulac, Sergiu Sova

Abstract:

New phosphorus-containing monomers having methoxy end functional groups were prepared from methyl 4-hydroxybenzoate and two different phosphorus-containing dichlorides, namely phenyl phosphonic dichloride and phenyl dichlorophosphate. The structures of the monomers were confirmed by FTIR and NMR spectroscopy. The assignments for the 1H, 13C and 31P chemical shifts are based on 1D and 2D NMR homo- and heteronuclear correlations (H,H-COSY (Correlation Spectroscopy), H,C-HMQC (Heteronuclear Multiple Quantum Correlation) and H,C-HMBC (Heteronuclear Multiple Bond Correlation)) and 31P-13C couplings. The monomers exhibited good solubility in common organic solvents. Dimethyl sulfoxide was found to be a good solvent for growing crystals of considerable size, which were investigated by X-ray analysis. One of these two new monomers presented thermotropic liquid crystalline behaviour, as revealed by differential scanning calorimetry (DSC), polarized light microscopy (PLM) and X-ray diffraction (XRD). The transition temperature from the crystal to the liquid crystalline state (K→LC) was 143°C and from the LC to the isotropic state (LC→I) was 167°C. Upon heating, bis(4-(methoxycarbonyl)phenyl formed fine textures, difficult to ascribe to smectic or nematic phases. Upon cooling from the isotropic state, bis(4-(methoxycarbonyl)phenyl exhibited a mosaic-type texture. Small-angle X-ray diffraction (SAXS) measurements of bis(4-(methoxycarbonyl)phenyl showed two peaks, at 1.8 Å and 3.5 Å, respectively, suggesting organization at the supramolecular level.

Keywords: phosphorus-containing monomers, polarized light microscopy, structure investigation, thermotropic liquid crystalline properties

Procedia PDF Downloads 300
238 Application of a Lighting Design Method Using Mean Room Surface Exitance

Authors: Antonello Durante, James Duff, Kevin Kelly

Abstract:

The visual needs of people in modern work-based buildings are changing. Self-illuminated screens of computers, televisions, tablets and smartphones have changed the relationship between people and the lit environment. In the past, lighting design practice was primarily based on providing uniform horizontal illuminance on the working plane, but this has failed to ensure good-quality lit environments. Today's lighting standards continue to be set based upon a 100-year-old approach that, at its core, considers task illuminance of the utmost importance, with this task typically being located on a horizontal plane. An alternative method focused on appearance has been proposed, as opposed to the traditional performance-based approach. Mean Room Surface Exitance (MRSE) and Target-Ambient Illuminance Ratio (TAIR) are two new metrics proposed to assess illumination adequacy in interiors. The hypothesis is that these factors will be superior to the existing metrics used, which are led by horizontal illuminance. For the past six years, research within the Dublin Institute of Technology has examined this, with a view to determining the suitability of this approach for application in general lighting practice. Since the start of this research, a number of key findings have been produced that centre on how occupants react to various levels of MRSE. This paper provides a broad update on how this research has progressed. More specifically, this paper will: i) demonstrate how MRSE can be measured using HDR imaging technology, ii) illustrate how MRSE can be calculated using scripting and an open-source lighting computation engine, and iii) describe experimental results that demonstrate how occupants have reacted to various levels of MRSE within experimental office environments.
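
As a sketch of the kind of scripted MRSE calculation mentioned in point (ii), the code below uses one published formulation attributed to Cuttle, in which MRSE is the first reflected flux divided by the room absorption; the surface areas, reflectances, and direct illuminances are placeholder values, and the formula should be checked against the lighting computation engine actually used in the research.

```python
def mean_room_surface_exitance(surfaces):
    """Assumed Cuttle-style formulation:
    MRSE = sum(E_direct * rho * A) / sum(A * (1 - rho))."""
    first_reflected_flux = sum(s["E_direct"] * s["rho"] * s["area"] for s in surfaces)
    absorption = sum(s["area"] * (1.0 - s["rho"]) for s in surfaces)
    return first_reflected_flux / absorption

# placeholder room: areas in m^2, rho = reflectance, E_direct = direct illuminance (lx)
room = [
    {"name": "ceiling", "area": 20.0, "rho": 0.80, "E_direct": 50.0},
    {"name": "floor",   "area": 20.0, "rho": 0.20, "E_direct": 300.0},
    {"name": "walls",   "area": 54.0, "rho": 0.50, "E_direct": 120.0},
]
print("MRSE (lm/m^2):", round(mean_room_surface_exitance(room), 1))
```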

Keywords: illumination hierarchy (IH), mean room surface exitance (MRSE), perceived adequacy of illumination (PAI), target-ambient illumination ratio (TAIR)

Procedia PDF Downloads 187
237 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can be from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured by reflected radiation in the visible spectrum, the thermal images are formed from thermal radiation (infrared) that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, some image fusion algorithms based upon a multi-scale transform (MST) and a region-based selection rule with consistency verification have been proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB along with a comparative analysis to decide the optimum number of levels for MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observe several challenges with the popular image fusion methods: while the high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility, and low computation ability. So, the methods presented in this paper offer good results with minimum time complexity.
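
A minimal sketch of a multi-scale-transform fusion rule is given below in Python, using a discrete wavelet transform with an absolute-maximum rule for the detail coefficients and averaging for the approximation band; this is a common MST baseline, not necessarily the exact region-based rule with consistency verification proposed in the paper, and the random source images are placeholders for registered visible/IR frames.

```python
import numpy as np
import pywt

def dwt_fuse(visible, infrared, wavelet="db2", levels=3):
    """Fuse two registered grayscale images with a simple wavelet rule."""
    cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
    ci = pywt.wavedec2(infrared.astype(float), wavelet, level=levels)
    fused = [(cv[0] + ci[0]) / 2.0]                        # average approximation band
    for dv, di in zip(cv[1:], ci[1:]):
        bands = []
        for bv, bi in zip(dv, di):                         # horizontal/vertical/diagonal
            bands.append(np.where(np.abs(bv) >= np.abs(bi), bv, bi))  # max-abs rule
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)

# placeholder registered source images of identical size
vis = np.random.rand(256, 256)
ir = np.random.rand(256, 256)
out = dwt_fuse(vis, ir)
print("fused image shape:", out.shape)
```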

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 115
236 Automation of Savitsky's Method for Power Calculation of High Speed Vessel and Generating Empirical Formula

Authors: M. Towhidur Rahman, Nasim Zaman Piyas, M. Sadiqul Baree, Shahnewaz Ahmed

Abstract:

The design of high-speed craft has recently become one of the most active areas of naval architecture. Speed increase makes these vehicles more efficient and useful for military, economic or leisure purposes. The planing hull is designed specifically to achieve relatively high speed on the surface of the water. Speed on the water surface is closely related to the size of the vessel and the installed power. The Savitsky method was first presented in 1964 for application to non-monohedric hulls and to stepped hulls. This method is well known as a reliable alternative to CFD analysis of hull resistance. A computer program based on Savitsky’s method has been developed using MATLAB. The power of high-speed vessels has been computed in this research. At first, the program reads some principal parameters such as displacement, LCG, speed, deadrise angle, and inclination of the thrust line with respect to the keel line, and calculates the resistance of the hull using Savitsky's empirical planing equations. However, some functions used in the empirical equations are available only in graphical form, which is not suitable for automatic computation. We use a digital plotting system to extract data from the nomograms. As a result, the values of the wetted length-beam ratio and trim angle can be determined directly from the input of initial variables, which automates the power calculation without manual plotting of secondary variables such as p/b and other coefficients; the regression equations of those functions are derived using data from different charts. Finally, the trim angle, mean wetted length-beam ratio, frictional coefficient, resistance, and power are computed and compared with the results of Savitsky, and good agreement has been observed.
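
The key automation step described above (replacing a graphical nomogram with a regression) can be sketched as follows: digitized (x, y) points from one chart curve are fitted with a polynomial so the secondary variable can afterwards be evaluated directly inside the power calculation. The sample points below are placeholders, not Savitsky's actual chart data.

```python
import numpy as np

# placeholder points digitized from one nomogram curve: x = independent chart variable,
# y = the secondary variable normally read off the chart by hand
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([0.42, 0.55, 0.63, 0.69, 0.73, 0.76, 0.78])

coeffs = np.polyfit(x, y, deg=3)          # cubic regression of the digitized curve
curve = np.poly1d(coeffs)

# the fitted polynomial replaces manual chart reading inside the program
print("regression coefficients:", np.round(coeffs, 4))
print("interpolated value at x = 2.2:", round(curve(2.2), 3))
```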

Keywords: nomogram, planing hull, principal parameters, regression

Procedia PDF Downloads 405
235 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation

Authors: Sherif D. El Wakil, John Rice

Abstract:

The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since they were in good agreement, the finite element method can therefore be considered a reliable tool that can help gain better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized. It is a higher order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study involved employing the visioplasticity technique, where a circular mesh (each circle was 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
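
The stress concentration factor extracted from each load case follows directly from the maximum stress at the hole edge and the nominal (net-section) stress; a small worked sketch is shown below, where the load, geometry, and peak stress are placeholder values rather than results from the study.

```python
def stress_concentration_factor(sigma_max, load, width, hole_diameter, thickness):
    """Kt = maximum stress at the hole edge / nominal stress on the net section."""
    net_area = (width - hole_diameter) * thickness      # net cross-section at the hole
    sigma_nominal = load / net_area
    return sigma_max / sigma_nominal

# placeholder values: load in N, dimensions in mm, stresses in MPa
sigma_max = 210.0          # peak stress at the hole edge, e.g. read from the FE results
kt = stress_concentration_factor(sigma_max, load=20_000.0, width=50.0,
                                 hole_diameter=5.0, thickness=5.0)
print("nominal net-section stress (MPa):", round(20_000.0 / ((50.0 - 5.0) * 5.0), 1))
print("stress concentration factor:", round(kt, 2))
```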

Keywords: finite element method to model a slab, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity

Procedia PDF Downloads 319
234 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The usage of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied. One such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows one to check simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by mean and variance and therefore can be tested using the introduced approach. This method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The critical values table was designed from simulations. The approach was compared with the other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most of the previous works. At the moment, the expansion to the two-dimensional case has been done, and it allows up to 5 parameters to be tested jointly. Therefore, the derived technique is equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
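
The core of the procedure (an absolute-difference measure between two normal densities, with critical values obtained by simulation) can be sketched as below; the integration grid, sample size, and number of Monte Carlo replications are assumptions for illustration, and the simple density-difference statistic stands in for the paper's CDF-based transformation.

```python
import numpy as np
from scipy.stats import norm

def pdf_difference_measure(mu1, s1, mu2, s2, grid_points=2001):
    """Integrated absolute difference between two normal densities."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    x = np.linspace(lo, hi, grid_points)
    dx = x[1] - x[0]
    return np.sum(np.abs(norm.pdf(x, mu1, s1) - norm.pdf(x, mu2, s2))) * dx

def simulated_critical_value(n, alpha=0.05, reps=2000, seed=0):
    """Null distribution of the measure when both samples come from N(0, 1)."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(reps):
        a, b = rng.normal(size=n), rng.normal(size=n)
        stats.append(pdf_difference_measure(a.mean(), a.std(ddof=1),
                                            b.mean(), b.std(ddof=1)))
    return np.quantile(stats, 1.0 - alpha)

# compare two estimated (mean, sd) pairs against the simulated critical value
print("test statistic:", round(pdf_difference_measure(0.0, 1.0, 0.3, 1.2), 3))
print("5% critical value for n = 100:", round(simulated_critical_value(100), 3))
```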

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 175
233 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration

Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef

Abstract:

Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications, and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate, but must also have a fast response for a moving-picture input. The high frame rate of video in motion games or distance control can inhibit the program’s overall speed, so image processing software such as MATLAB needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final cleaned, auto-cropped image of the hand to be used for the two techniques. The first technique uses the known morphological tool of skeletonization to count the skeleton’s endpoints corresponding to fingers. The second technique uses a radial distance method, which obtains a one-dimensional hand representation, to enumerate the fingers. For both discussed methods, the different steps of the algorithms are explained. Then, a comparative study analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and responsive to a real-time video input compared to the skeletonization method. All test results were generated in MATLAB and were based on displaying a human hand in three different orientations on top of a plain-colored background. Finally, the limitations surrounding the enumeration techniques are presented.
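
A minimal sketch of the first technique (skeletonization followed by endpoint counting) is given below using scikit-image and SciPy; the thresholding, hand cropping, and the handling of non-finger endpoints (e.g. at the wrist) are simplifications of the full pre-processing pipeline described above, and the binary mask is a placeholder.

```python
import numpy as np
from skimage.morphology import skeletonize
from scipy.ndimage import convolve

def count_skeleton_endpoints(binary_hand):
    """Skeletonize a binary hand mask and count endpoints (skeleton pixels with
    exactly one 8-connected skeleton neighbour)."""
    skeleton = skeletonize(binary_hand.astype(bool))
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
    endpoints = skeleton & (neighbours == 1)
    return int(endpoints.sum())

# placeholder binary mask standing in for the cleaned, auto-cropped hand image
mask = np.zeros((200, 200), dtype=bool)
mask[50:150, 90:110] = True          # a single finger-like stroke
print("endpoints found:", count_skeleton_endpoints(mask))
```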

Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab

Procedia PDF Downloads 382
232 Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Authors: Mahmoud B. Rokaya

Abstract:

The study of opinion mining and sentiment analysis includes analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, social networks, reviews, forum discussions, microblogs, and Twitter leads to a parallel growth in the field of sentiment analysis. The field of sentiment analysis tries to develop effective tools to make it possible to capture the trends of people. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon which includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientations. Manual creation of lexicons is hard. This brings the need for adaptive automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. In the proposed method, different approaches are combined to generate lexicons from text. The proposed method classifies the tweets into five classes instead of just positive and negative classes. The sentiment classification problem is written as an optimization problem; finding optimal sentiment lexicons is the goal of the optimization process. The solution was produced based on mathematical programming approaches to find the best lexicon to classify texts. A genetic algorithm was written to find the optimal lexicon. Then, extraction of a meta-level feature was done based on the optimal lexicon. The experiments were conducted on several datasets. Results, in terms of accuracy, recall, and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets. A better understanding of the Arabic language and culture of Arab Twitter users and the sentiment orientation of words in different contexts can be achieved based on the sentiment lexicons proposed by the algorithm.
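
A compact sketch of the genetic-algorithm step (searching for lexicon word weights that maximize classification accuracy) is shown below; the placeholder corpus, the fitness function based on summed word scores binned into five classes, and the GA hyper-parameters are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_docs, n_classes = 50, 200, 5

# placeholder corpus: bag-of-words counts and class labels (0..4)
X = rng.poisson(0.3, size=(n_docs, vocab_size))
y = rng.integers(0, n_classes, size=n_docs)

def classify(weights, counts):
    """Score each document by its summed word weights, then bin into 5 classes."""
    scores = counts @ weights
    bins = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
    return np.digitize(scores, bins)

def fitness(weights):
    return np.mean(classify(weights, X) == y)          # accuracy on the training corpus

pop = rng.uniform(-1, 1, size=(40, vocab_size))        # initial candidate lexicons
for generation in range(100):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-20:]]               # selection: keep the best half
    pa = parents[rng.integers(0, 20, 20)]              # crossover between parent pairs
    pb = parents[rng.integers(0, 20, 20)]
    cut = rng.integers(1, vocab_size, size=(20, 1))
    children = np.where(np.arange(vocab_size) < cut, pa, pb)
    children = children + rng.normal(0, 0.1, children.shape)   # mutation
    pop = np.vstack([parents, children])

print("best lexicon accuracy:", round(max(fitness(ind) for ind in pop), 3))
```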

Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation

Procedia PDF Downloads 188
231 Integrated Clean Development Mechanism and Risk Management Approach for Infrastructure Transportation Project

Authors: Debasis Sarkar

Abstract:

The clean development mechanism (CDM) can act as an effective instrument for mitigating climate change. This mechanism can effectively reduce the emission of CO2 and other greenhouse gases (GHGs). Construction of a mega infrastructure project, like an underground corridor for metro rail operation, involves the consumption of a substantial quantity of concrete, which in turn consumes huge quantities of energy-intensive materials like cement and steel. This paper is an attempt to develop an integrated clean development mechanism and risk management approach for sustainable development for an underground corridor metro rail project in India during its construction phase. It was observed that about a 35% reduction in CO2 emission can be obtained by adding fly ash as a partial replacement of cement. The reduced CO2 emission quantity, about 21,646.36 MT, would result in cost savings of approximately INR 8.5 million (USD 1,29,878). But construction and operation of such infrastructure projects of the present era are subject to huge risks and uncertainties throughout all the phases of the project, thus reducing the probability of successful completion of the project within the stipulated time and cost frame. Thus, an integrated approach of combining CDM with risk management would enable the metro rail authorities to develop a sustainable risk mitigation framework to ensure greater cost and energy savings and less time and cost overrun.

Keywords: clean development mechanism (CDM), infrastructure transportation, project risk management, underground metro rail

Procedia PDF Downloads 475
230 Effect of Silver Nanoparticles on Seed Germination of Crop Plants

Authors: Zainab M. Almutairi, Amjad Alharbi

Abstract:

The use of engineered nanomaterials has increased as a result of their positive impact on many sectors of the economy, including agriculture. Silver nanoparticles (AgNPs) are now used to enhance seed germination, plant growth, and photosynthetic quantum efficiency and as antimicrobial agents to control plant diseases. In this study, we examined the effect of AgNP dosage on the seed germination of three plant species: corn (Zea mays L.), watermelon (Citrullus lanatus [Thunb.] Matsum. & Nakai) and zucchini (Cucurbita pepo L.). This experiment was designed to study the effect of AgNPs on germination percentage, germination rate, mean germination time, root length and fresh and dry weight of seedlings for the three species. Seven concentrations (0.05, 0.1, 0.5, 1, 1.5, 2, and 2.5 mg/ml) of AgNPs were examined at the seed germination stage. The three species had different dose responses to AgNPs in terms of germination parameters and the measured growth characteristics. The germination rates of the three plants were enhanced in response to AgNPs. Significant enhancement of the germination percentage values was observed after treatment of the watermelon and zucchini plants with AgNPs in comparison with untreated seeds. AgNPs showed a toxic effect on corn root elongation, whereas watermelon and zucchini seedling growth were positively affected by certain concentrations of AgNPs. This study showed that exposure to AgNPs caused both positive and negative effects on plant growth and germination.
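
The germination parameters reported above follow standard definitions; a small sketch of how germination percentage, mean germination time (MGT = Σ n·t / Σ n), and a simple germination rate index can be computed from daily counts is given below. The daily counts and seed numbers are placeholders, not data from the study.

```python
def germination_indices(daily_counts, days, seeds_sown):
    """daily_counts[i] = seeds newly germinated on days[i]."""
    total = sum(daily_counts)
    germination_pct = 100.0 * total / seeds_sown
    # mean germination time (days): sum(n_i * t_i) / sum(n_i)
    mgt = sum(n * t for n, t in zip(daily_counts, days)) / total
    # a simple germination rate index: sum(n_i / t_i)
    rate_index = sum(n / t for n, t in zip(daily_counts, days))
    return germination_pct, mgt, rate_index

# placeholder counts for one treatment (e.g. 0.5 mg/ml AgNPs), 25 seeds sown
days = [2, 3, 4, 5, 6]
counts = [3, 8, 7, 3, 1]
pct, mgt, rate = germination_indices(counts, days, seeds_sown=25)
print(f"germination %: {pct:.0f}, mean germination time: {mgt:.2f} d, rate index: {rate:.2f}")
```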

Keywords: citrullus lanatus, cucurbita pepo, seed germination, seedling growth, silver nanoparticles, zea mays

Procedia PDF Downloads 308
229 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on the one-dimensional assumption for a desktop personal computer. Hence, they are usually incapable of imaging the three-dimensional (3D) processes/variables in the subsurface of reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitation of personal computers. Presently, high-performance or integrating software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapsed large-scale data surveys from geophysical methods. Using the supercomputing capability and parallel computation algorithm, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for sub-surface imaging. E4D-MP provides capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering related efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 157
228 Robust Numerical Solution for Flow Problems

Authors: Gregor Kosec

Abstract:

A simple and robust numerical approach for solving flow problems is presented, where the involved physical fields are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated completely through local computational operations. Besides the local numerical method, the pressure-velocity coupling is also performed locally while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is simplicity, since the method can be understood as a generalized finite difference method, yet much more powerful. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities or p-adaptivity to treat obscure anomalies in the physical field. The stability versus computational complexity and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The presented methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is presented on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses. Results of all cases are also compared against published data.
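
The core idea (approximating a field over a local support with basis functions and using that local approximation to evaluate differential operators) can be illustrated with a tiny one-dimensional weighted-least-squares sketch; the monomial basis, Gaussian weight, and support size are arbitrary choices for the illustration, not the specific setup used in the paper.

```python
import numpy as np

def local_derivative(xs, us, x0, support=5):
    """Estimate du/dx at x0 from the nearest 'support' nodes with a weighted
    least-squares fit of a quadratic monomial basis (a minimal MLS-style sketch)."""
    idx = np.argsort(np.abs(xs - x0))[:support]        # local support domain
    x, u = xs[idx], us[idx]
    h = np.max(np.abs(x - x0)) + 1e-12
    w = np.exp(-((x - x0) / h) ** 2)                   # Gaussian weights on the support
    A = np.vander(x - x0, 3, increasing=True)          # basis: 1, (x-x0), (x-x0)^2
    W = np.diag(w)
    coeffs = np.linalg.solve(A.T @ W @ A, A.T @ W @ u)
    return coeffs[1]                                   # derivative of the local fit at x0

# scattered nodes and a known field u = sin(x); the exact derivative is cos(x)
rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0, 2 * np.pi, 60))
us = np.sin(xs)
x0 = np.pi / 3
print("approx:", round(local_derivative(xs, us, x0), 4), " exact:", round(np.cos(x0), 4))
```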

Keywords: fluid flow, meshless, low Pr problem, natural convection

Procedia PDF Downloads 233
227 Investigation of Polymer Solar Cells Degradation Behavior Using High Defect States Influence Over Various Polymer Absorber Layers

Authors: Azzeddine Abdelalim, Fatiha Rogti

Abstract:

The degradation phenomenon in polymer solar cells (PSCs) has not been clearly explained yet. In fact, many causes arise and influence these cells in a variety of ways, and there has been growing concern over this degradation in the photovoltaic community. One of the main variables determining PSC photovoltaic output is defect states. In this research, device modeling is carried out to analyze the multiple effects of degradation by applying high defect states (HDS) to ideal PSCs, mainly the poly(3-hexylthiophene) (P3HT) absorber layer. Besides, a comparative study is conducted between P3HT and other PSCs with a simulation program called Solar Cell Capacitance Simulator (SCAPS). The adjustments to the defect parameters in several absorber layers explain the effect of HDS on the total output properties of PSCs. The performance parameters for HDS, the quantum efficiency, and the energy band were therefore examined. This research attempts to explain the degradation process of PSCs and the causes of their low efficiency. It was found that the defects often affect PSC performance, but defect states have little effect on output when the defect density is less than 10¹⁴ cm⁻³, which gives performance values similar to those of P3HT cells when these defects are about 10¹⁹ cm⁻³. The high defect states can cause up to an 11% relative reduction in the conversion efficiency of ideal P3HT. Defect states located in the center of the band gap become more harmful. This approach addresses one of the potential degradation processes of PSCs, especially those that use fullerene-derivative acceptors.

Keywords: degradation, high defect states, polymer solar cells, SCAPS-1D

Procedia PDF Downloads 91