Search results for: normalized Laplacian eigenvalues
277 Identification of Healthy and BSR-Infected Oil Palm Trees Using Color Indices
Authors: Siti Khairunniza-Bejo, Yusnida Yusoff, Nik Salwani Nik Yusoff, Idris Abu Seman, Mohamad Izzuddin Anuar
Abstract:
Most oil palm plantations are threatened by Basal Stem Rot (BSR) disease, which causes serious economic losses. This study was conducted to identify healthy and BSR-infected oil palm trees using thirteen color indices. Multispectral and thermal cameras were used to capture 216 images of leaves taken from fronds number 1, 9, and 17. The indices used were the normalized difference vegetation index (NDVI), red (R), green (G), blue (B), near infrared (NIR), green − blue (GB), green/blue (G/B), green − red (GR), green/red (G/R), hue (H), saturation (S), intensity (I), and a thermal index (T). From this study, it can be concluded that the G index taken from frond number 9 is the best index for differentiating between healthy and BSR-infected oil palm trees. It not only gave a high correlation coefficient (R = −0.962) but also a high degree of separation between healthy and BSR-infected trees. Furthermore, the power and S models developed using the G index gave the highest R² value, 0.985.
Keywords: oil palm, image processing, disease, leaves
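As a rough illustration of the index arithmetic described above, the sketch below computes a few of the thirteen indices from co-registered band arrays; the array values, function name, and epsilon guard are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def color_indices(r, g, b, nir):
    """Compute a few of the color indices listed above from co-registered
    band arrays (floats in reflectance or digital-number units)."""
    eps = 1e-9                           # guard against division by zero
    ndvi = (nir - r) / (nir + r + eps)   # normalized difference vegetation index
    gb   = g - b                         # green minus blue
    g_b  = g / (b + eps)                 # green/blue ratio
    gr   = g - r                         # green minus red
    g_r  = g / (r + eps)                 # green/red ratio
    return {"NDVI": ndvi, "GB": gb, "G/B": g_b, "GR": gr, "G/R": g_r}

# Toy example: 2x2 pixel patches for each band
r   = np.array([[0.10, 0.12], [0.11, 0.13]])
g   = np.array([[0.20, 0.22], [0.21, 0.23]])
b   = np.array([[0.05, 0.06], [0.05, 0.07]])
nir = np.array([[0.60, 0.55], [0.58, 0.50]])
print(color_indices(r, g, b, nir)["NDVI"])
```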
Procedia PDF Downloads 498
276 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory
Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short
Abstract:
Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactors, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature, and the associated forces surrounding bond critical points, disfavors localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high-radiation environments.
Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, Rutherford backscatter channeling
Procedia PDF Downloads 174
275 Data Integration in a GIS Geographic Information System Mapping of Agriculture in Semi-Arid Region of Setif, Algeria
Authors: W. Riahi, M. L. Mansour
Abstract:
The use of data-processing tools such as geographic information systems (GIS) to contribute to space management is becoming more and more frequent. GIS allows diverse natural information relating to the same territory to be collected and analyzed. Space technologies play a crucial role in the analysis of agricultural phenomena. To this end, satellite image processing was used to classify vegetation density, and particularly agricultural areas, in Setif province by making recourse to the Normalized Difference Vegetation Index (NDVI). This step was completed by mapping the agricultural activities of the province using ArcGIS 10 software, in order to display an overall view and to perform spatial analysis of various themes, combined with one another and chosen according to their strategic importance, in different thematic maps. The resulting synthesis map showed that a geographic information system can contribute significantly to agricultural management by describing the potentialities and development opportunities of production systems and agricultural sectors.
Keywords: GIS, satellite image, agriculture, NDVI, thematic map
Procedia PDF Downloads 424
274 Oil Pollution Analysis of the Ecuadorian Rainforest Using Remote Sensing Methods
Authors: Juan Heredia, Naci Dilekli
Abstract:
The Ecuadorian Rainforest has been polluted for almost 60 years with little to no oversight, law, or regulation. The consequences have been vast environmental damage, such as pollution and deforestation, as well as sickness and the death of many people and animals. The aim of this paper is to quantify and localize the polluted zones, something that has not previously been done and that is the first step toward remediation. To approach this problem, multispectral remote sensing imagery was processed using a novel algorithm developed for this study, based on four normalized indices available in the literature. The algorithm classifies pixels as polluted or healthy. The results of this study include a new algorithm for pixel classification and a quantification of the polluted area in the selected image. The results were finally validated against ground control points found in the literature. The main conclusion of this work is that, using hyperspectral images, it is possible to identify polluted vegetation. Future work includes environmental remediation, in-situ tests, and more extensive results that would inform new policymaking.
Keywords: remote sensing, oil pollution quantification, Amazon forest, hyperspectral remote sensing
Procedia PDF Downloads 163
273 Max-Entropy Feed-Forward Clustering Neural Network
Authors: Xiaohan Bookman, Xiaoyan Zhu
Abstract:
The outputs of a non-linear feed-forward neural network are positive, and they can be treated as probabilities when normalized to sum to one. If we take the entropy-based principle into consideration, the outputs for each sample can be represented as the distribution of that sample over the different clusters. The entropy-based principle is the principle by which we can estimate an unknown distribution under some limited conditions. As this paper defines two processes in the feed-forward neural network, our limited conditions are the abstracted features of samples, which are worked out in the abstraction process; the final outputs are the probability distributions over the different clusters in the clustering process. When the entropy-based principle is incorporated into the feed-forward neural network, a clustering method is born. We have conducted experiments on six open UCI data sets, comparing against a few baselines and applying purity as the measurement. The results illustrate that our method outperforms all the other baselines, which are among the most popular clustering methods.
Keywords: feed-forward neural network, clustering, max-entropy principle, probabilistic models
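The normalization step the abstract describes can be sketched in a few lines; the raw output values below are assumed for illustration, and the entropy function is the standard Shannon entropy rather than anything specific to the paper.

```python
import numpy as np

def to_distribution(outputs):
    """Treat positive network outputs as cluster-membership probabilities
    by normalizing them to sum to one, as described in the abstract."""
    outputs = np.asarray(outputs, dtype=float)
    return outputs / outputs.sum()

def entropy(p):
    """Shannon entropy of a sample's distribution over clusters."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p = to_distribution([2.0, 0.5, 0.1])   # raw positive outputs for 3 clusters
print(p, entropy(p))                   # low entropy = confident assignment
```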
Procedia PDF Downloads 435
272 Optimal Analysis of Structures by Large Wing Panel Using FEM
Authors: Byeong-Sam Kim, Kyeongwoo Park
Abstract:
In this study, induced structural optimization is performed to compare the trade-off between wing weight and induced drag for wing panel extensions, the construction of the wing panel, and winglets. The aerostructural optimization problem consists of parameters with a strength condition and two maneuver conditions, using the residual stresses arising in panel production. The results of the kinematic motion analysis present a homogenization-based theory for 3D beams and 3D shells of the wing panel. This theory uses a kinematic description of the beam based on normalized displacement moments. The displacement of the wing is a significant design consideration, as large deflections lead to large stresses, and the increased fatigue of components causes residual stresses. The stresses in the wing panel are small compared to the yield stress of the aluminum alloy. This study describes the implementation of a large wing panel, together with an aerostructural analysis and structural parameter optimization framework that couples a three-dimensional panel method.
Keywords: wing panel, aerostructural optimization, FEM, structural analysis
Procedia PDF Downloads 591
271 Influence of ABCB1 2677G > T Single Nucleotide Polymorphism on Warfarin Maintenance Therapy among Patients with Prosthetic Heart Valve
Authors: M. G. Gopisankar, A. Surendiran, M. Hemachandren
Abstract:
The dose of warfarin required to achieve the target INR range varies among patients with prosthetic heart valves. This variation is affected by both genetic and non-genetic factors. Earlier studies have identified the roles of CYP2C9 and VKORC1 genetic polymorphisms in warfarin dose requirement. Warfarin, being a substrate for the drug transporter P-glycoprotein, coded by the ABCB1 gene, may also be influenced by its genetic polymorphisms. This study aimed to evaluate the effect of the single nucleotide polymorphism (SNP) ABCB1 2677G > T on the warfarin maintenance dose requirement in patients with a steady-state International Normalized Ratio (INR). The median dose requirement was significantly different between the genotype groups GG vs. GT (35 ± 20 vs. 42.5 ± 18, p < 0.05) and GG vs. TT (35 ± 20 vs. 41.25 ± 25, p < 0.05). There was no significant difference between GT and TT. In conclusion, patients with the variant allele require a higher weekly maintenance dose of warfarin compared to patients without the variant allele.
Keywords: warfarin pharmacogenetics, pharmacogenomics of warfarin, ABCB1 and warfarin, P-glycoprotein and warfarin
Procedia PDF Downloads 260
270 Dynamic Analysis of Transmission Line Towers
Authors: L. Srikanth, D. Neelima Satyam
Abstract:
Transmission line towers are among the important lifeline structures in the distribution of power from the source to various places for several purposes. The predominant external loads acting on these towers are wind and earthquake loads. In the present study, a tower is analyzed using the Indian Standards IS 875:1987 (wind load), IS 802:1995 (structural steel), and IS 1893:2002 (earthquake), and a dynamic analysis of the tower has been performed considering the ground motion of the 2001 Bhuj earthquake (India). The dynamic analysis was performed for a tower system consisting of two towers spaced 800 m apart, each 35 m in height. The analysis was carried out with a numerical time-stepping finite difference scheme, the central difference method, implemented in a MATLAB program developed to obtain normalized ground motion parameters, including acceleration, frequency, and velocity, which are important in designing the tower. The tower is also analyzed using response spectrum analysis.
Keywords: response spectra, dynamic analysis, central difference method, transmission tower
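A minimal sketch of the central difference time-stepping scheme, reduced here to a single-degree-of-freedom system (the study applies it to a full two-tower model in MATLAB); the mass, damping, stiffness, and load values are illustrative assumptions.

```python
import numpy as np

def central_difference(m, c, k, p, dt, u0=0.0, v0=0.0):
    """Explicit central-difference time stepping for an SDOF system
    m*u'' + c*u' + k*u = p(t); p is the load sampled every dt seconds."""
    n = len(p)
    u = np.zeros(n)
    u[0] = u0
    a0 = (p[0] - c * v0 - k * u0) / m          # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious u at t = -dt
    k_hat = m / dt**2 + c / (2 * dt)
    a_coef = k - 2 * m / dt**2
    b_coef = m / dt**2 - c / (2 * dt)
    for i in range(n - 1):
        rhs = p[i] - a_coef * u[i] - b_coef * u_prev
        u_prev, u[i + 1] = u[i], rhs / k_hat
    return u

# Toy run: SDOF idealization under a half-sine force pulse
dt = 0.01
t = np.arange(0, 2, dt)
p = np.where(t < 0.5, np.sin(np.pi * t / 0.5), 0.0) * 1e3
u = central_difference(m=1e3, c=50.0, k=4e4, p=p, dt=dt)
print(u.max())
```

Note that the explicit scheme is conditionally stable: the time step must stay below Tn/π, where Tn is the natural period of the system.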
Procedia PDF Downloads 398
269 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller-type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory
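The quantile computation described above (an upper quantile for the minimum of a jointly Gaussian vector of GIC statistics) can be sketched with SciPy in place of the R package "mvtnorm"; the mean vector and covariance matrix below are toy assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

def min_cdf(q, mean, cov):
    """P(min_i X_i <= q) for X ~ N(mean, cov), via one multivariate
    Gaussian orthant probability: 1 - P(X_i > q for all i)."""
    mean = np.asarray(mean, dtype=float)
    upper_orthant = multivariate_normal.cdf(-q * np.ones_like(mean),
                                            mean=-mean, cov=cov)
    return 1.0 - upper_orthant

def min_upper_quantile(alpha, mean, cov, lo=-50.0, hi=50.0):
    """Solve P(min X <= q) = alpha for q by root finding."""
    return brentq(lambda q: min_cdf(q, mean, cov) - alpha, lo, hi)

# Toy example: 3 candidate models with correlated GIC statistics
mean = [0.0, 0.5, 1.0]
cov = 0.5 * np.ones((3, 3)) + 0.5 * np.eye(3)
print(min_upper_quantile(0.95, mean, cov))
```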
Procedia PDF Downloads 89
268 Manufacturing of Twist-Free Surfaces by Magnetism Aided Machining Technologies
Authors: Zs. Kovács, Zs. J. Viharos, J. Kodácsy
Abstract:
As a well-known conventional finishing process, grinding is commonly used to manufacture seal mating surfaces and bearing surfaces, but it also creates twisted surfaces. Surfaces machined by turning or grinding usually have a twist structure, which can convey lubricants in the manner of a conveyor screw. To avoid this phenomenon, special techniques or machines must be used, for example start-stop turning, tangential turning, ultrasonic protection, or special tool geometries. All of these solutions are costly and difficult to use. In this paper, we describe a system and summarize the results of experimental research carried out mainly in the field of Magnetic Abrasive Polishing (MAP) and Magnetic Roller Burnishing (MRB). These technologies are simple and also green, while being able to produce twist-free surfaces. During the tests, C45 normalized steel was used as the workpiece material, machined with conventional and Wiper-geometry turning inserts on a CNC lathe. After turning, the MAP and MRB technologies can be used directly to reduce the twist of the surfaces. The evaluation was completed using advanced measuring and IT equipment.
Keywords: magnetism, finishing, polishing, roller burnishing, twist-free
Procedia PDF Downloads 576
267 Language Processing of Seniors with Alzheimer’s Disease: From the Perspective of Temporal Parameters
Authors: Lai Yi-Hsiu
Abstract:
The present paper aims to examine the language processing of Chinese-speaking seniors with Alzheimer’s disease (AD) from the perspective of temporal cues. Twenty healthy adults, 17 healthy seniors, and 13 seniors with AD in Taiwan participated in this study, telling stories based on two sets of pictures. Nine temporal cues were extracted and analyzed. Oral productions in Mandarin Chinese were compared and discussed to examine to what extent and in what ways these three groups of participants performed with significant differences. Results indicated that age effects were significant for filled pauses. Dementia effects were significant for the mean duration of pauses, empty pauses, filled pauses, lexical pauses, and the normalized mean duration of filled pauses and lexical pauses. The findings reported in the current paper help characterize the nature of language processing in seniors with or without AD, and shed light on the interactions between AD neural mechanisms and temporal parameters.
Keywords: language processing, Alzheimer’s disease, Mandarin Chinese, temporal cues
Procedia PDF Downloads 446
266 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering
Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining
Abstract:
DNA microarray technology is used to analyze thousands of gene expression profiles simultaneously, a very important task for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identify structures in gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on agglomerative hierarchical clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)
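One plausible reading of the SVD-based normalization step is a standardize-then-truncate reconstruction, sketched below; the rank, the planted bicluster, and the standardization choices are assumptions for illustration, not the authors' exact transform.

```python
import numpy as np

def svd_normalize(X, rank=1):
    """Standardize genes (rows), then keep the leading singular structure
    as a denoised/normalized expression matrix (truncated SVD)."""
    Xc = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-9)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Toy expression matrix: 6 genes x 4 conditions with a planted bicluster
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
X[:3, :2] += 3.0                      # genes 0-2 up-regulated in conditions 0-1
Xn = svd_normalize(X, rank=1)
print(np.round(Xn, 2))
```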
Procedia PDF Downloads 278
265 Rare Earth Element (REE) Geochemistry of Tepeköy Sandstones (Central Anatolia, Turkey)
Authors: Mehmet Yavuz Hüseyinca, Şuayip Küpeli
Abstract:
Sandstones from the Upper Eocene-Oligocene Tepeköy formation (a member of the Mezgit Group), exposed on the eastern edge of Tuz Gölü (Salt Lake), were analyzed for their rare earth element (REE) contents. Average concentrations of ΣREE, ΣLREE (total light rare earth elements), and ΣHREE (total heavy rare earth elements) were determined as 31.37, 26.47, and 4.55 ppm, respectively. These values are lower than those of the upper continental crust (UCC), which indicates a grain-size and/or CaO dilution effect. The chondrite-normalized REE pattern is characterized by the average ratios (La/Yb)cn = 6.20, (La/Sm)cn = 4.06, (Gd/Lu)cn = 1.10, Eu/Eu* = 0.99, and Ce/Ce* = 0.94. The lower values of ΣLREE/ΣHREE (average 5.97) and (La/Yb)cn suggest lower fractionation of the overall REE. Moreover, the (La/Sm)cn and (Gd/Lu)cn ratios define a less inclined LREE pattern and an almost flat HREE pattern compared with UCC. The near absence of a Ce anomaly (Ce/Ce*) emphasizes that the REE originated from terrigenous material. The depleted LREE and the lack of an Eu anomaly (Eu/Eu*) also suggest an undifferentiated mafic provenance for the sandstones.
Keywords: central Anatolia, provenance, rare earth elements, REE, Tepeköy sandstone
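Under one common geochemical convention, the anomaly ratios quoted above are geometric-mean interpolations of the neighboring chondrite-normalized elements; the sketch below uses illustrative values, not the Tepeköy data.

```python
import numpy as np

# Chondrite-normalized concentrations (element_cn = sample / chondrite).
# Values below are illustrative, not the Tepeköy measurements.
La_cn, Ce_cn, Pr_cn = 10.0, 9.4, 8.5
Sm_cn, Eu_cn, Gd_cn = 2.5, 2.45, 2.4
Yb_cn = 1.6

# Geometric-mean conventions for the anomalies:
Eu_anom = Eu_cn / np.sqrt(Sm_cn * Gd_cn)   # Eu/Eu*
Ce_anom = Ce_cn / np.sqrt(La_cn * Pr_cn)   # Ce/Ce*
La_Yb = La_cn / Yb_cn                      # overall REE fractionation
print(round(Eu_anom, 2), round(Ce_anom, 2), round(La_Yb, 2))
```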
Procedia PDF Downloads 475
264 Challenging Conventions: Rethinking Literature Review Beyond Citations
Authors: Hassan Younis
Abstract:
Purpose: The objective of this study is to review influential papers in the sustainability and supply chain studies domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. Subsequently, the study applies the developed framework to selected scholarly articles within the sustainability and supply chain studies domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation counts (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion, normalized within its category, and the scores were summed using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of the research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts. In response, a complete model that incorporates five essential criteria has been suggested. Through a methodical trial on selected academic articles in the field of sustainability and supply chain studies, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention. This work aims to fill a significant deficiency in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool and helping researchers, students, and practitioners find and choose influential papers. This approach aids literature reviews and study suggestions in the field, and the analysis of major trends and topics deepens our grasp of this critical study area's changing terrain. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
Keywords: supply chain management, sustainability, framework, model
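A sketch of the ACMAJ weighting is given below; the tiered scores and weights are taken from the abstract, while the mapping of the weighted 1-3 score onto a 0-1 Final Normalized Score is one plausible reading of the normalization, not the paper's exact formula.

```python
# Weights follow the percentages stated in the abstract.
WEIGHTS = {
    "avg_yearly_citations": 0.25,
    "scholarly_contribution": 0.25,
    "findings_alignment": 0.15,
    "methodological_rigor": 0.20,
    "journal_impact_factor": 0.15,
}

def final_normalized_score(scores: dict) -> float:
    """Weighted average of tiered scores (1-3), mapped onto 0-1."""
    weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return (weighted - 1.0) / 2.0   # map the 1-3 range onto 0-1

paper = {
    "avg_yearly_citations": 3,
    "scholarly_contribution": 2,
    "findings_alignment": 3,
    "methodological_rigor": 2,
    "journal_impact_factor": 1,
}
print(final_normalized_score(paper))   # 0.625 for this toy paper
```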
Procedia PDF Downloads 52
263 Myoelectric Analysis for the Assessment of Muscle Functions and Fatigue Monitoring of Upper Extremity for Stroke Patients Performing Robot-Assisted Bilateral Training
Authors: Hsiao-Lung Chan, Ching-Yi Wu, Yan-Zou Lin, Yo Chiao, Ya-Ju Chang
Abstract:
Robot-assisted bilateral arm training has been demonstrated to be useful for improving motor control in stroke patients and for saving human resources. In clinics, the efficacy of this treatment is mostly assessed by comparing functional scales before and after rehabilitation. However, most of these assessments are based on behavioral evaluation; the underlying improvement in muscle activation and coordination is unknown. Moreover, stroke patients are more prone to muscle fatigue under robot-assisted rehabilitation due to the weakness of their muscles, and this safety issue is still under-studied. In this study, EMG analysis was applied during training. Our preliminary results showed that the co-contraction index and the co-contraction area index can delineate the improved muscle coordination of the biceps brachii vs. the flexor carpi radialis. Moreover, the smoothed, normalized cycle-by-cycle median frequency of the left and right extensor carpi radialis decreased as the training progressed, implying the occurrence of muscle fatigue.
Keywords: robot-assisted rehabilitation, strokes, muscle coordination, muscle fatigue
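The median-frequency feature used here to track fatigue can be sketched as follows; the sampling rate, Welch parameters, and synthetic EMG-like signals are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    """Median frequency of an EMG segment: the frequency that splits the
    power spectrum into two halves of equal power."""
    f, pxx = welch(emg, fs=fs, nperseg=min(256, len(emg)))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

# Toy signals: a dominant tone that shifts downward with "fatigue"
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
fresh    = np.sin(2 * np.pi * 120 * t) + 0.5 * rng.normal(size=t.size)
fatigued = np.sin(2 * np.pi * 70 * t) + 0.5 * rng.normal(size=t.size)
print(median_frequency(fresh, fs), median_frequency(fatigued, fs))
```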
Procedia PDF Downloads 475
262 Solution of the Nonrelativistic Radial Wave Equation of Hydrogen Atom Using the Green's Function Approach
Authors: F. U. Rahman, R. Q. Zhang
Abstract:
This work aims to develop a systematic numerical technique that can be easily extended to the many-body problem. The Lippmann-Schwinger equation (the integral form of the Schrödinger wave equation) is solved for the nonrelativistic radial wave of the hydrogen atom using an iterative integration scheme. As the unknown wave function appears on both sides of the Lippmann-Schwinger equation, an approximate wave function is used in order to solve the equation. The Green’s function is obtained by the method of Laplace transform for the radial wave equation with the potential term excluded. Using the Lippmann-Schwinger equation, the product of the approximate wave function, the Green’s function, and the potential term is integrated iteratively. Finally, the wave function is normalized and plotted against the standard radial wave for comparison. The resulting wave function converges to the standard wave function as the number of iterations increases. Results were verified for the first fifteen states of the hydrogen atom. The method is efficient and consistent and can be applied to complex systems in the future.
Keywords: Green’s function, hydrogen atom, Lippmann-Schwinger equation, radial wave
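Schematically, the iteration described above can be written as follows, with G the Green's function, V the Coulomb potential, and u₀ the starting approximate radial function (a notational sketch of the scheme, not the paper's exact discretization):

```latex
% Iterative solution of the radial Lippmann-Schwinger equation:
% u_{n+1} is built from the previous iterate u_n, the Green's function G,
% and the potential V (for hydrogen, the Coulomb potential).
\begin{align}
  u_{n+1}(r) &= u_0(r) + \int_0^{\infty} G(r, r')\, V(r')\, u_n(r')\, \mathrm{d}r', \\
  u_{n+1}(r) &\leftarrow
  \frac{u_{n+1}(r)}{\left[\int_0^{\infty} |u_{n+1}(r')|^2 \,\mathrm{d}r'\right]^{1/2}}
  \quad \text{(normalization after each sweep)}.
\end{align}
```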
Procedia PDF Downloads 394
261 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features
Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski
Abstract:
A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed. Metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between the aerobic and anaerobic indices and classical parameters of a person’s physiological state, including blood biochemistry, glycogen concentration, and VO2max changes. During the study, 9 participants (healthy, physically active, medium-trained men and women, who trained 2-4 times per week for at least 9 weeks) completed (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from the single-lead wrist-wearable device were processed with QRS-complex detection. The aerobic index (AI) was derived as the normalized slope of the QR segment. The anaerobic index (ANI) was derived as the normalized slope of the SJ segment. Biochemical parameters, glycogen content, and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycloergometry, and BIA. Negative correlations between AI and blood markers of muscle functional status, including creatine phosphokinase (r = -0.238, p < 0.008), aspartate aminotransferase (r = -0.249, p < 0.004), and uric acid (r = -0.293, p < 0.004), were observed. ANI was also correlated with creatine phosphokinase (r = -0.265, p < 0.003), aspartate aminotransferase (r = -0.292, p < 0.001), and lactate dehydrogenase (LDH) (r = -0.190, p < 0.050). So, when the level of muscular enzymes increases during post-exercise fatigue, AI and ANI decrease. During recovery, the level of metabolites is restored, and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect the physiology of the muscles during recovery. One of the markers of an athlete’s physiological state is the ratio between testosterone and cortisol (TCR). TCR provides a relative indication of anabolic-catabolic balance and is considered to be more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r = -0.437, p < 0.001) and correctly represents post-exercise physiology. In order to reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions. A negative correlation between AI and VO2max (r = -0.342, p < 0.001) was obtained. These data, indicating a rise in VO2max during fatigue, are controversial; however, some studies have revealed increased stroke volume after training, which agrees with our findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete’s readiness for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. Negative correlations registered for ANI with glycogen (r = -0.303, p < 0.001), albumin (r = -0.205, p < 0.021), and creatinine (r = -0.268, p < 0.002) reflect the dehydration status of participants after training.
The correlations between the designed metabolic indices and physiological parameters revealed in this study can be considered sufficient evidence to use these indices for assessing the state of a person’s aerobic and anaerobic metabolic systems after training, during fatigue, recovery, and supercompensation.
Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation
Procedia PDF Downloads 115
260 A DOE Study of Ultrasound Intensified Removal of Phenol
Authors: P. R. Rahul, A. Kannan
Abstract:
Ultrasound-aided adsorption of phenol by granular activated carbon (GAC) was investigated at three frequencies: 35 kHz, 58 kHz, and 192 kHz. Other factors influencing adsorption, such as adsorbent dosage (g/L), initial phenol concentration (ppm), and stirring speed (RPM), were also considered along with the frequency variable. This study involved calorimetric measurements, which helped in determining the effect of frequency on the % removal of phenol once the power dissipated to the system was normalized. It was found that low-frequency (35 kHz) cavitation effects had a profound influence on the % removal of phenol per unit power. This study also included cavitation mapping of the ultrasonic baths, which showed that the effect of cavitation on the adsorption system is independent of the position of the vessel; hence, the vessel was placed at the center of the bath. A novel temperature control and monitoring system was used to make sure that the system remained under proper conditions during operation. From the BET studies, it was found that there was only a 5% increase in surface area, and hence it was concluded that ultrasound does not profoundly alter the equilibrium value of the adsorption system. DOE studies indicated that adsorbent dosage has a higher influence on the % removal in comparison with the other factors.
Keywords: ultrasound, adsorption, granulated activated carbon, phenol
Procedia PDF Downloads 283
259 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang
Abstract:
Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is categorized into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning algorithms, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
Keywords: bioassay, machine learning, preprocessing, virtual screen
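A hedged sketch of the four preprocessing steps using scikit-learn, as one possible concrete realization; the toy descriptor matrix, the bin count, and the ANOVA-based selector are assumptions, and the paper's own tooling is Pipeline Pilot, R, Weka, and Excel.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))              # toy chemical descriptors
y = (X[:, 0] + X[:, 3] > 0).astype(int)     # toy bioassay outcome

# Step 1: instance selection -- training / testing / validation split
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Step 2: discretization (accuracy vs. precision trade-off via bin count)
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
X_train_d = disc.fit_transform(X_train)

# Step 3: normalization to [0, 1]
scaler = MinMaxScaler()
X_train_n = scaler.fit_transform(X_train_d)

# Step 4: feature selection of key chemical attributes
selector = SelectKBest(f_classif, k=5)
X_train_s = selector.fit_transform(X_train_n, y_train)
print(X_train_s.shape, selector.get_support(indices=True))
```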
Procedia PDF Downloads 274
258 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients
Authors: Leander Van Neste, Kirk Wojno
Abstract:
The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening ‘external’ cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol, followed by paired-end sequencing on Illumina’s NovaSeq. Control samples were processed similarly: one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young, healthy men, and three men previously diagnosed with prostate cancer who had recently undergone (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics’ CellRanger software, and viable cells were subsequently identified using CellBender, removing technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and ‘control’ patients, rather than exploring differences between individual subjects, the individual Seurat objects of all 21 patients were merged into one Seurat object, per Seurat’s recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data was selected as the reference dataset, consisting of bulk RNA-seq data of sorted human immune cells. The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger number of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p < .001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to post-RP, benign, and young men, respectively (all p < .001). In addition, at 7.91%, the number of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in post-RP, benign, and young men, respectively (all p < .001). It can be postulated that actively phagocytosing cells are the hardest to classify due to their dual immune-cell and foreign-cell nature. Hence, the higher number of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that small numbers (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response
Procedia PDF Downloads 162
257 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry
Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood
Abstract:
The flow over a backward-facing step is characterized by the presence of flow separation, recirculation, and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for it being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry, or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique applied to separated flows are very difficult to find in the literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows involve numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared with those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio, and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients, and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses, and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
Keywords: ADV, experimental data, multiple Reynolds number, post-processing
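A minimal sketch of the moving block bootstrap used for the uncertainty analysis; the block length, the AR(1) toy series standing in for a velocity record, and the mean statistic are illustrative assumptions.

```python
import numpy as np

def moving_block_bootstrap(x, block_len=50, n_boot=1000, stat=np.mean, seed=0):
    """Moving block bootstrap for serially correlated data (e.g. ADV
    velocity records): resample overlapping blocks, then recompute the
    statistic to estimate its sampling uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    stats = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, size=n_blocks)
        sample = np.concatenate([x[s:s + block_len] for s in starts])[:n]
        stats[b] = stat(sample)
    return np.percentile(stats, [2.5, 97.5])   # 95% confidence interval

# Toy autocorrelated "velocity" series (AR(1) around a mean flow of 1)
rng = np.random.default_rng(1)
u = np.empty(5000)
u[0] = 0.0
eps = rng.normal(size=5000)
for i in range(1, 5000):
    u[i] = 0.9 * u[i - 1] + eps[i]
u = u * 0.05 + 1.0
print(moving_block_bootstrap(u))
```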
Procedia PDF Downloads 148
256 Determination of Verapamil Hydrochloride in the Tablet and Injection Solution by the Verapamil-Sensitive Electrode and Possibilities of Application in Pharmaceutical Analysis
Authors: Faisal A. Salih, V. V. Egorov
Abstract:
Verapamil is a drug used in medicine as a calcium channel blocker for arrhythmia, angina, and hypertension. In this study, a verapamil-selective electrode was prepared; the concentrations of the components in the membrane were as follows: PVC (32.8 wt%), O-NPhOE (66.6 wt%), and KTPClPB (0.6 wt%, or approximately 0.01 M). An inner solution containing 1 x 10⁻³ M verapamil hydrochloride was introduced, and the electrodes were conditioned overnight in a 1 x 10⁻³ M verapamil hydrochloride solution in 1 x 10⁻³ M orthophosphoric acid. These studies demonstrated that O-NPhOE and KTPClPB are the best plasticizer and ion exchanger, respectively, and that both direct potentiometry and potentiometric titration can be used for the determination of verapamil hydrochloride in tablets and injection solutions. Normalized weights of verapamil per tablet (80.4 ± 0.2, 80.7 ± 0.2, and 81.0 ± 0.4 mg) were determined by direct potentiometry and potentiometric titration. Weights of verapamil per average tablet weight, determined for the same set of tablets by direct potentiometry and potentiometric titration, were 80.4 ± 0.2 and 80.7 ± 0.2 mg, respectively. The masses of verapamil in solutions for injection, determined by direct potentiometry for two ampoules from one set, were 5.00 ± 0.015 and 5.004 ± 0.006 mg. In all cases, good reproducibility and excellent correspondence with the declared quantities were observed.
Keywords: verapamil, potentiometry, ion-selective electrode, lipophilic physiologically active amines
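Direct potentiometry of this kind typically rests on a Nernstian calibration, E = E0 + S·log10(C); the sketch below fits the calibration line on standards and reads back an unknown concentration. All potentials and concentrations are illustrative values, not the reported measurements; a slope near 59.2 mV/decade is the ideal Nernstian response for a monovalent cation such as protonated verapamil.

```python
import numpy as np

C_std = np.array([1e-5, 1e-4, 1e-3, 1e-2])      # mol/L calibration standards
E_std = np.array([-35.1, 23.8, 82.9, 142.2])    # mV, illustrative readings

# Linear fit of E vs. log10(C) gives the electrode slope and intercept
slope, e0 = np.polyfit(np.log10(C_std), E_std, 1)
print(f"slope = {slope:.1f} mV/decade, E0 = {e0:.1f} mV")

# Direct potentiometry: invert the calibration for an unknown sample
E_unknown = 100.0                                # mV, measured sample
C_unknown = 10 ** ((E_unknown - e0) / slope)
print(f"C = {C_unknown:.2e} mol/L")
```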
Procedia PDF Downloads 86
255 Geometric, Energetic and Topological Analysis of (Ethanol)₉-Water Heterodecamers
Authors: Jennifer Cuellar, Angie L. Parada, Kevin N. S. Chacon, Sol M. Mejia
Abstract:
The purification of bio-ethanol through distillation methods is an unresolved issue in the biofuel industry because of the formation of the ethanol-water azeotrope, which increases the number of steps in the purification process and consequently the production costs. Therefore, understanding the nature of the mixture at the molecular level could provide new insights for improving the current methods and/or designing new and more efficient purification methods. For that reason, the present study focuses on the evaluation and analysis of (ethanol)₉-water heterodecamers, as the systems with the minimum molecular proportion that represents the azeotropic concentration (96% m/m in ethanol). The computational modelling was carried out with B3LYP-D3/6-311++G(d,p) in Gaussian 09. Initial explorations of the potential energy surface were done through two methods, simulated annealing runs and molecular dynamics trajectories, besides intuitive structures obtained from smaller (ethanol)n-water heteroclusters, n = 7, 8 and 9. The energetic ordering of the seven stable heterodecamers identifies the most stable heterodecamer (Hdec-1) as a structure forming a bicyclic geometry with O-H···O hydrogen bonds (HBs), in which the water is a double proton donor molecule. Hdec-1 combines 1 water molecule and the same quantity of every ethanol conformer, that is, 3 trans, 3 gauche-1, and 3 gauche-2; its abundance is 89%, and its decamerization energy is -80.4 kcal/mol, i.e., 13 kcal/mol more stable than the least stable heterodecamer. Besides, as a way to understand why methanol does not form an azeotropic mixture with water, analogous systems ((ethanol)₁₀, (methanol)₁₀, and (methanol)₉-water) were optimized. Topological analysis of the electron density reveals that Hdec-1 forms 33 weak interactions in total: 11 O-H···O, 8 C-H···O, and 2 C-H···C hydrogen bonds, and 12 H···H interactions. The strength and abundance of the most unconventional interactions (H···H, C-H···O, and C-H···C) seem to explain the preference of ethanol for forming heteroclusters instead of clusters. Besides, the O-H···O HBs present a significant covalent character according to topological parameters such as the Laplacian of the electron density and the relationship between the potential and kinetic energy densities evaluated at the bond critical points, the values obtained being negative and between 1 and 2 for those two topological parameters, respectively.
Keywords: ADMP, DFT, ethanol-water azeotrope, Grimme dispersion correction, simulated annealing, weak interactions
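For reference, one standard supermolecular definition of the decamerization energy quoted above (−80.4 kcal/mol) is the cluster energy minus the energies of its isolated monomers; this formula is an assumption consistent with common practice, not a statement of the authors' exact protocol (corrections such as counterpoise or zero-point energy are not shown):

```latex
\begin{equation}
  \Delta E_{\mathrm{dec}} \;=\; E\big[(\mathrm{EtOH})_9\!\cdot\!\mathrm{H_2O}\big]
  \;-\; 9\,E[\mathrm{EtOH}] \;-\; E[\mathrm{H_2O}]
\end{equation}
```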
Procedia PDF Downloads 103
254 The Predictive Value of Serum Bilirubin in the Post-Transplant De Novo Malignancy: A Data Mining Approach
Authors: Nasim Nosoudi, Amir Zadeh, Hunter White, Joshua Conrad, Joon W. Shim
Abstract:
De novo malignancy has become one of the major causes of death after transplantation, so early cancer diagnosis and detection can drastically improve survival rates post-transplantation. Most previous work focuses on using artificial intelligence (AI) to predict transplant success or failure outcomes. In this work, we focused on predicting de novo malignancy after liver transplantation using AI. We selected patients who developed malignancy after liver transplantation with no history of malignancy pre-transplant; their donors were cancer-free as well. We analyzed 254,200 patient profiles with post-transplant malignancy from the US Organ Procurement and Transplantation Network (OPTN). Several popular data mining methods were applied to the resultant dataset to build predictive models characterizing de novo malignancy after liver transplantation. The recipient's bilirubin, creatinine, weight, gender, number of days on the transplant waiting list, Epstein-Barr virus (EBV), international normalized ratio (INR), and ascites are among the most important factors affecting de novo malignancy after liver transplantation.
Keywords: de novo malignancy, bilirubin, data mining, transplantation
Procedia PDF Downloads 105
253 The Conflict between Empowerment and Exploitation: The Hypersexualization of Women in the Media
Authors: Seung Won Park
Abstract:
Pornographic images are becoming increasingly normalized as innovations in media technology arise, the porn industry grows explosively, and transnational capitalism spreads due to government deregulation and privatization of media. As the media evolves, pornography has become more and more violent and non-consensual; this growth of ‘raunch culture’ reifies the traditional power balance between men and women in which men are dominant and women are submissive. This male domination objectifies and commodifies women, reducing them to merely sexual objects for the gratification of men. Women are exposed to pornographic images at younger and younger ages, providing unhealthy sexual role models and teaching them lessons on sexual behavior before the onset of puberty. The increasingly sexualized depiction of women in particular positions them as appropriately desirable and available to men. As a result, women are not only viewed as sexual prey but also end up treating themselves primarily as sexual objects, basing their worth on their sexuality alone. Although many scholars are aware of and have written on the great lack of agency exercised by women in these representations, the general public tends to view some of these women as being empowered rather than exploited. Scholarly discourse is constrained by the popular misconception that the construction of women’s sexuality in the media is controlled by women themselves.
Keywords: construction of gender, hypersexualization, media, objectification
Procedia PDF Downloads 296
252 Spatially Downscaling Land Surface Temperature with a Non-Linear Model
Authors: Kai Liu
Abstract:
Remote sensing-derived land surface temperature (LST) can provide an indication of the temporal and spatial patterns of surface evapotranspiration (ET). However, the spatial resolution achieved by common existing satellite products is ~1 km, which remains too coarse for ET estimation. This paper proposes a model that can disaggregate coarse-resolution (1 km) MODIS LST to a finer spatial resolution of 250 m. Our approach attempts to weaken the impacts of soil moisture and growing status on LST variations. The proposed model spatially disaggregates the coarse thermal data using a non-linear model involving the Bowen ratio, normalized difference vegetation index (NDVI), and photochemical reflectance index (PRI). This LST disaggregation model was tested on two heterogeneous landscapes, in central Iowa, USA, and the Heihe River, China, during the growing seasons. Statistical results demonstrated that our model performed better than two classical methods (DisTrad and TsHARP). Furthermore, using a surface energy balance model, it was observed that ET estimated using the disaggregated LST from our model was more accurate than that using the disaggregated LST from DisTrad and TsHARP.
Keywords: Bowen ratio, downscaling, evapotranspiration, land surface temperature
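For context, one of the classical baselines mentioned (TsHARP-style sharpening) can be sketched as a coarse-scale LST-NDVI regression applied at the fine scale plus a residual correction; the arrays and the linear form are illustrative, and the paper's own model is non-linear and also uses the Bowen ratio and PRI.

```python
import numpy as np

def tsharp_downscale(lst_coarse, ndvi_coarse, ndvi_fine, scale=4):
    """TsHARP-like sharpening: regress coarse LST on coarse NDVI, predict
    at fine resolution, and add the coarse-scale residual back."""
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    residual = lst_coarse - (a * ndvi_coarse + b)      # model error at 1 km
    residual_fine = np.kron(residual, np.ones((scale, scale)))
    return a * ndvi_fine + b + residual_fine           # fine-scale estimate

# Toy scene: 8x8 fine NDVI aggregated to 2x2 coarse pixels (factor 4,
# matching the 1 km -> 250 m ratio)
rng = np.random.default_rng(0)
ndvi_fine = np.clip(rng.normal(0.5, 0.15, size=(8, 8)), 0, 1)
ndvi_coarse = ndvi_fine.reshape(2, 4, 2, 4).mean(axis=(1, 3))
lst_coarse = 320.0 - 20.0 * ndvi_coarse + rng.normal(0, 0.3, size=(2, 2))
print(tsharp_downscale(lst_coarse, ndvi_coarse, ndvi_fine).shape)
```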
Procedia PDF Downloads 329
251 Rare Earth Doped Alkali Halide Crystals for Thermoluminescence Dosimetry Application
Authors: Pooja Seth, Shruti Aggarwal
Abstract:
Europium (Eu) doped (0.02-0.1 wt%) lithium fluoride (LiF) crystals in the form of a multicrystalline sheet were grown by the edge-defined film-fed growth (EFG) technique. The crystals were grown in an argon atmosphere using a graphite crucible and a stainless steel die. The systematic incorporation of Eu inside the host LiF lattice was confirmed by X-ray diffractometry. The thermoluminescence (TL) glow curve was recorded on annealed (AN) crystals after irradiation with a gamma dose of 15 Gy. The effect of different concentrations of Eu in enhancing the TL intensity of LiF was studied. The normalized peak height of the Eu-doped LiF crystal was nearly 12 times that of the undoped LiF crystals. The optimized concentration of Eu in LiF was found to be 0.05 wt%, at which maximum TL intensity was observed, with the main TL peak positioned at 185 °C. At higher concentrations, the TL intensity decreases due to the formation of precipitates in the form of clusters or aggregates. The nature of the energy traps in Eu-doped LiF was analysed through glow curve deconvolution. The trap depth was found to be in the range of 0.2-0.5 eV. These results showed that doping with Eu enhances the TL intensity by creating more defect sites for the capture of electrons and holes during irradiation, which might be useful for dosimetry applications.
Keywords: thermoluminescence, defects, gamma radiation, crystals
Procedia PDF Downloads 330
250 Genetically Encoded Tool with Time-Resolved Fluorescence Readout for the Calcium Concentration Measurement
Authors: Tatiana R. Simonyan, Elena A. Protasova, Anastasia V. Mamontova, Eugene G. Maksimov, Konstantin A. Lukyanov, Alexey M. Bogdanov
Abstract:
Here, we describe two variants of calcium indicators based on the GCaMP sensitive core and the BrUSLEE fluorescent protein (GCaMP-BrUSLEE and GCaMP-BrUSLEE-145). In contrast to the conventional GCaMP6-family indicators, these fluorophores are characterized by well-marked responsiveness of their fluorescence decay kinetics to external calcium concentration, both in vitro and in cellulo. Specifically, we show that purified GCaMP-BrUSLEE and GCaMP-BrUSLEE-145 exhibit three-component fluorescence decay kinetics, with the amplitude-normalized lifetime component (t3*A3) of GCaMP-BrUSLEE-145 changing four-fold (500-2000 a.u.) in response to a Ca²⁺ concentration shift in the range of 0-350 nM. Time-resolved fluorescence microscopy of live cells shows a two-fold change in the GCaMP-BrUSLEE-145 mean lifetime upon histamine-stimulated calcium release. The aforementioned Ca²⁺ dependence calls for considering GCaMP-BrUSLEE-145 as a prospective Ca²⁺ indicator with signal read-out in the time domain.
Keywords: calcium imaging, fluorescence lifetime imaging microscopy, fluorescent proteins, genetically encoded indicators
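The three-component decay analysis can be sketched as a least-squares fit of a triple-exponential model; the simulated decay, amplitudes, and lifetimes below are illustrative assumptions, not the reported photophysics.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, t1, a2, t2, a3, t3):
    """Three-component decay model I(t) = sum_i a_i * exp(-t / tau_i)."""
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

# Simulated decay (time in nanoseconds) with mild noise
t = np.linspace(0.05, 20, 400)
rng = np.random.default_rng(0)
truth = (0.5, 0.3, 0.3, 1.5, 0.2, 4.0)
y = tri_exp(t, *truth) + rng.normal(0, 0.002, t.size)

p0 = (0.4, 0.2, 0.4, 1.0, 0.2, 5.0)
popt, _ = curve_fit(tri_exp, t, y, p0=p0, bounds=(0, np.inf), maxfev=20000)
a3, tau3 = popt[4], popt[5]
print("amplitude-weighted component t3*A3 =", tau3 * a3)
```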
Procedia PDF Downloads 158
249 Influence of Wall Stiffness and Embedment Depth on Excavations Supported by Cantilever Walls
Authors: Muhammad Naseem Baig, Abdul Qudoos Khan, Jamal Ali
Abstract:
Ground deformations in deep excavations are affected by wall stiffness and pile embedment ratio. This paper presents the findings of a parametric study of a 64-ft-deep excavation in mixed stiff soil conditions supported by a cantilever pile wall. A series of finite element analyses were carried out in Plaxis 2D by varying the pile embedment ratio and wall stiffness. It was observed that maximum wall deflections decrease with increasing embedment ratio up to 1.50; however, any further increase in pile length does not improve the performance of the wall. Similarly, increasing wall stiffness reduces the wall deformations and affects the deflection pattern of the wall. The finite element analysis results are compared with field data from 25 case studies of cantilever walls. The analysis results fall within the range of normalized wall deflections of the 25 case studies. It is concluded that deep excavations can be supported by cantilever walls, provided the system stiffness is increased significantly.
Keywords: excavations, support systems, wall stiffness, cantilever walls
Procedia PDF Downloads 210
248 Shock Response Analysis of Soil-Structure Systems Induced by Near-Fault Pulses
Authors: H. Masaeli, R. Ziaei, F. Khoshnoudian
Abstract:
Shock response analysis of soil-structure systems induced by near-fault pulses is investigated. Vibration transmissibility of the soil-structure systems is evaluated using shock response spectra (SRS). Medium-to-high-rise buildings with different aspect ratios located on different soil types, as well as different foundations with respect to vertical load-bearing safety factors, are studied. Two types of mathematical near-fault pulses, i.e., forward directivity and fling step, with different pulse periods as well as pulse amplitudes, are selected as the incident ground shock. Linear versus nonlinear soil-structure interaction (SSI) conditions are considered alternately, and the corresponding results are compared. The results show that nonlinear SSI is likely to amplify the acceleration responses when subjected to long-period incident pulses with a normalized period exceeding a threshold. It is also shown that this threshold correlates with soil type, such that an increased shear-wave velocity of the underlying soil makes the threshold period decrease.
Keywords: nonlinear soil-structure interaction, shock response spectrum, near-fault ground shock, rocking isolation
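A minimal sketch of how a shock response spectrum can be computed for a pulse-like record: sweep single-degree-of-freedom oscillators over natural frequency and record the peak response; the pulse shape, damping ratio, and frequency grid are illustrative assumptions.

```python
import numpy as np

def shock_response_spectrum(ag, dt, freqs, zeta=0.05):
    """Acceleration SRS of a base-excitation record ag(t): for each natural
    frequency fn, integrate u'' + 2*zeta*wn*u' + wn^2*u = -ag (relative
    motion) with the central difference scheme and keep the peak absolute
    acceleration |2*zeta*wn*u' + wn^2*u|."""
    srs = np.empty(len(freqs))
    for j, fn in enumerate(freqs):
        wn = 2.0 * np.pi * fn
        u = u_prev = 0.0
        peak = 0.0
        for a in ag:
            u_next = ((-a - wn**2 * u + (2 * u - u_prev) / dt**2
                       + zeta * wn * u_prev / dt)
                      / (1.0 / dt**2 + zeta * wn / dt))
            v = (u_next - u_prev) / (2.0 * dt)
            peak = max(peak, abs(2.0 * zeta * wn * v + wn**2 * u))
            u_prev, u = u, u_next
        srs[j] = peak
    return srs

# Toy pulse-like ground acceleration (one full sine cycle, period Tp)
dt = 0.002
t = np.arange(0.0, 4.0, dt)
Tp = 1.0                                            # pulse period (s)
ag = np.where(t < Tp, np.sin(2 * np.pi * t / Tp), 0.0)  # m/s^2, illustrative
freqs = np.linspace(0.2, 20.0, 60)
print(shock_response_spectrum(ag, dt, freqs).max())
```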
Procedia PDF Downloads 315