Search results for: pedestrian target selection
4224 High Motivational Salient Face Distractors Slowed Target Detection: Evidence from Behavioral Studies
Authors: Rashmi Gupta
Abstract:
Rewarding stimuli capture attention involuntarily as a result of an association process that develops quickly during value learning, referred to as reward- or value-driven attentional capture. It is essential to compare reward with punishment processing to get a full picture of value-based modulation in visual attention processing. Hence, the present study manipulated both valence/value (reward as well as punishment) and motivational salience (probability of an outcome: high vs. low) together. A series of experiments was conducted, each with two phases. In phase 1, participants were required to learn to associate specific face stimuli with a high or low probability of winning or losing points. In the second phase, these conditioned stimuli served as a distractor or prime in a speeded letter search task. Faces with high versus low outcome probability, regardless of valence, slowed the search for targets (specifically targets in the left visual field), suggesting that the costs to performance on non-emotional cognitive tasks were driven only by the motivational salience (high vs. low probability) associated with the stimuli rather than by their valence (gain vs. loss). It also suggests that the processing of motivationally salient stimuli is right-hemisphere biased. Together, the results of these studies strengthen the notion that our visual attention system is more sensitive to motivational salience than to valence, an effect termed here motivation-driven attentional capture.
Keywords: attention, distractors, motivational salience, valence
Procedia PDF Downloads 220
4223 Potential Impacts of Maternal Nutrition and Selection for Residual Feed Intake on Metabolism and Fertility Parameters in Angus Bulls
Authors: Aidin Foroutan, David S. Wishart, Leluo L. Guan, Carolyn Fitzsimmons
Abstract:
Maximizing the efficiency and growth potential of beef cattle requires not only genetic selection (i.e., for residual feed intake (RFI)) but also adequate nutrition throughout all stages of growth and development. Nutrient restriction during gestation has been shown to negatively affect post-natal growth and development as well as the fertility of the offspring. This, when combined with RFI, may affect progeny traits. This study aims to investigate the impact of selection for divergent genetic potential for RFI, and of maternal nutrition during early- to mid-gestation, on bull calf traits such as fertility and muscle development using multiple ‘omics’ approaches. Comparisons were made between High-diet vs. Low-diet and between High-RFI vs. Low-RFI animals. An epigenetics experiment on semen samples identified 891 biomarkers associated with growth and development. A gene expression study on Longissimus thoracis muscle, semimembranosus muscle, liver, and testis identified 4 genes associated with muscle development and immunity, of which myocyte enhancer factor 2A (MEF2A; induces myogenesis and controls muscle differentiation) was the only differentially expressed gene identified in all four tissues. An initial metabolomics experiment on serum samples using nuclear magnetic resonance (NMR) identified 4 metabolite biomarkers related to energy and protein metabolism. Once all the biomarkers are identified, bioinformatics approaches will be used to create a database covering all the ‘omics’ data collected in this project. This database will be broadened with other information obtained from relevant literature reviews. Association analyses with these data sets will be performed to reveal key biological pathways affected by RFI and maternal nutrition. Through these association studies between the genome and metabolome, it is expected that candidate biomarker genes and metabolites for feed efficiency, fertility, and/or muscle development will be identified.
If these gene/metabolite biomarkers are validated in a larger animal population, they could potentially be used in breeding programs to select superior animals. It is also expected that this work will lead to the development of an online tool that could be used to predict future traits of interest in an animal given its measurable ‘omics’ traits.
Keywords: biomarker, maternal nutrition, omics, residual feed intake
Procedia PDF Downloads 191
4222 Quantifying the Protein-Protein Interaction between the Ion-Channel-Forming Colicin A and the Tol Proteins by Potassium Efflux in E. coli Cells
Authors: Fadilah Aleanizy
Abstract:
Colicins are a family of bacterial toxins that kill Escherichia coli and other closely related species. The mode of action of colicins involves binding to an outer membrane receptor and translocation across the cell envelope, leading to cytotoxicity through specific targets. The mechanism of colicin cytotoxicity includes a non-specific endonuclease activity or depolarization of the cytoplasmic membrane by pore-forming activity. For Group A colicins, translocation requires an interaction between the N-terminal domain of the colicin and a series of membrane-bound and periplasmic proteins known as the Tol system (TolB, TolR, TolA, TolQ, and Pal), through which the active domain must be translocated across the outer membrane. Protein-protein interactions are intrinsic to virtually every cellular process. The transient protein-protein interactions of the colicin include interactions with much more complicated assemblies during its translocation across the cellular membrane to its target. The potassium release assay detects variation in the K+ content of bacterial cells (K+in). This assay is used to measure the effect of pore-forming colicins such as ColA on an indicator organism by measuring, with a K+-selective electrode, the changes in K+ concentration in the external medium (K+out) that are caused by cell killing. One of the goals of this work is to employ a quantifiable in-vivo method to identify which Tol proteins are most implicated in the interaction with colicin A as it is translocated to its target.
Keywords: K+ efflux, Colicin A, Tol-proteins, E. coli
Procedia PDF Downloads 410
4221 Molecular Characterisation and Expression of Glutathione S-Transferase of Fasciola Gigantica
Authors: J. Adeppa, S. Samanta, O. K. Raina
Abstract:
Fasciolosis is a widespread, economically important parasitic infection throughout the world, caused by Fasciola hepatica and F. gigantica. In order to identify novel immunogens conferring significant protection against fasciolosis, research has currently focused on defined antigens, viz. glutathione S-transferase (GST), fatty acid binding protein, cathepsin-L, fluke hemoglobin, paramyosin, myosin, and the F. hepatica Kunitz-type molecule. Among these antigens, GST, which plays a crucial role in detoxification processes (i.e., the phase II defense mechanism of this parasite), has a unique position as a novel vaccine candidate and a drug target in the control of this disease. Recombinant DNA technology has become an important tool for producing antigens in large quantities and purifying them to complete homogeneity. RT-PCR was carried out using F. gigantica total RNA as template, and a 657 bp amplicon of the GST gene was obtained. A TA cloning vector was used for cloning of this gene, and the presence of the insert was confirmed by blue-white selection of recombinant colonies. Sequence analysis of the present isolate showed 99.1% sequence homology with the published sequence of the F. gigantica GST gene of cattle origin (accession no. AF112657), with six nucleotide changes at positions 72, 74, 423, 513, 549, and 627 bp, causing an overall change of 4 amino acids. The 657 bp GST gene was cloned at the BamH1 and HindIII restriction sites of the prokaryotic expression vector pPROEXHTb, in frame with six histidine residues, and expressed in E. coli DH5α. Recombinant protein was purified from the bacterial lysate under non-denaturing conditions by sonication after lysozyme treatment and subjecting the soluble fraction of the lysate to Ni-NTA affinity chromatography. Western blotting with rabbit hyper-immune serum showed immuno-reactivity with the 25 kDa recombinant GST. The recombinant protein detected both experimental and field F. gigantica infections in buffaloes by dot-ELISA. However, cross-reactivity studies on the Fasciola gigantica GST antigen are needed to evaluate the utility of this protein in the serodiagnosis of fasciolosis.
Keywords: Fasciola gigantica, Fasciola hepatica, GST, RT-PCR
Procedia PDF Downloads 186
4220 The Usage of Bridge Estimator for Hegy Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important factor for many economic time series. Some variables may contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality in seasonal macroeconomic data. There are several methods to eliminate the impacts of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which can be modelled using seasonal dummies. But if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it, and it is not suitable to use them for modeling such seasonally nonstationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary first to choose the lag length and determine any deterministic components (i.e., a constant and trend), and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators perform well at selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
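The HEGY regression that the proposed approach builds on can be sketched as follows. This is a minimal NumPy illustration of the standard quarterly HEGY transformation, not the authors' code; the Bridge penalty would then be applied when estimating this regression, replacing the conventional two-step lag-length selection.

```python
import numpy as np

def hegy_design(y):
    """Build the dependent variable and regressors of the quarterly
    HEGY test regression (a sketch of the standard Hylleberg-Engle-
    Granger-Yoo transformation; the variable names are ours).

    Regression:  d4_t = pi1*y1_{t-1} + pi2*y2_{t-1}
                        + pi3*y3_{t-2} + pi4*y3_{t-1} + lags + error
    pi1 tests the zero-frequency (long-run) root, pi2 the semiannual
    root, and (pi3, pi4) jointly test the annual-frequency root pair.
    """
    y = np.asarray(y, dtype=float)
    t = np.arange(4, len(y))                          # usable observations
    d4 = y[t] - y[t - 4]                              # (1 - L^4) y_t
    y1 = y[t - 1] + y[t - 2] + y[t - 3] + y[t - 4]    # (1+L+L^2+L^3) y_{t-1}
    y2 = -(y[t - 1] - y[t - 2] + y[t - 3] - y[t - 4])
    y3_1 = -(y[t - 1] - y[t - 3])                     # y3_{t-1}
    y3_2 = -(y[t - 2] - y[t - 4])                     # y3_{t-2}
    X = np.column_stack([y1, y2, y3_2, y3_1])
    return d4, X

# A trending but seasonally stationary toy series: y_t = t
d4, X = hegy_design(np.arange(12.0))
```

For such a linear trend, the seasonal difference is constant (4 per year) and the seasonal-frequency regressors y2 and y3 are constant as well, so only the zero-frequency regressor y1 carries the trend.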
Procedia PDF Downloads 340
4219 Schizosaccharomyces pombe, Saccharomyces cerevisiae Yeasts and Acetic Acid Bacteria in Alcoholic and Acetous Fermentations: Effect on Phenolic Acids of Kei-Apple (Dovyalis caffra L.) Vinegar
Authors: Phillip Minnaar, Neil Jolly, Louisa Beukes, Santiago Benito-Saez
Abstract:
Dovyalis caffra is a tree found on the African continent. Limited information exists on the effect of acetous fermentation on the phytochemicals of Kei-apple fruit. The phytochemical content of vinegars is derived from compounds present in the fruit from which the vinegar is made. Kei-apple fruit juice was co-inoculated with Schizosaccharomyces pombe and Saccharomyces cerevisiae to induce alcoholic fermentation (AF). Acetous fermentation followed AF, using an acetic acid bacteria (AAB) consortium as the inoculant. The juice had the lowest pH and the highest total acidity (TA); the wine had the highest pH, and the vinegars the lowest TA. Total soluble solids and L-malic acid decreased during AF and acetous fermentation. Volatile acidity (VA) concentration did not differ among vinegars. Gallic, syringic, caffeic, p-coumaric, and chlorogenic acids increased during acetous fermentation, whereas ferulic, sinapic, and protocatechuic acids decreased. Chlorogenic acid was the most abundant phenolic acid in both wines and vinegars. It is evident from this investigation that Kei-apple vinegar is a source of plant-derived phenolics, which evolve through fermentation. However, the AAB selection showed minimal performance with respect to VA production. The AAB selection for acetous fermentation should be reconsidered, and the reasons for the decrease of certain phenolic acids during acetous fermentation need to be investigated.
Keywords: acetic acid bacteria, acetous fermentation, liquid chromatography, phenolic acids
Procedia PDF Downloads 148
4218 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
Default accounts hold that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as indispensable, because all scalar implicatures have to meet the requirement of relevance in discourse. However, Katsos' experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded the violation of utterances with lexical scales as much more severe than with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, there are two questionable points about it: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test material will be transformed into literal words in DMDX, and the target sentence will be shown word by word to participants in the soundproof room in our lab. Reading times for the target parts, i.e., words containing scalar implicatures, will be recorded.
We presume that in the group with lexical scales, a standardized pragmatic mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, a scalar implicature may hardly be generated without the support of a fixed mental context of scale. Thus, whether the new input is informative or not does not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The human mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we might be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may be made in a dynamic interplay in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead the possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as members of a scale in the mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate a scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 323
4217 Readability Facing the Irreducible Otherness: Translation as a Third Dimension toward a Multilingual Higher Education
Authors: Noury Bakrim
Abstract:
From the point of view of language morphodynamics, the interpretative Readability of the text-result (the stasis) is not the external hermeneutics of its various potential reading events but the paradigmatic, semantic immanence of its dynamics. In other words, interpretative Readability articulates the potential tension between projection (the intentionality of the discursive event) and the result (Readability within the syntagmatic stasis). We then consider that translation represents much more a metalinguistic conversion of neurocognitive bilingual sub-routines and modular relations than a semantic equivalence. Furthermore, actualizing Readability (the process of rewriting a target text within a target language/genre) builds upon the descriptive level between the generative syntax/semantic form and its paradigmatic potential translatability. Translation corpora reveal the evidence of a certain focusing on the positivist stasis of the source text at the expense of its interpretative Readability. For instance, Fluchère's brilliant translation of Miller's Tropic of Cancer into French unconsciously realizes an inversion of the hierarchical relations between Life Thought and Fable: from Life Thought (fable) into Fable (Life Thought). We could likewise regard Bernard Kreiss's translation of Canetti's Die englischen Jahre (Les années anglaises) as another inversion of the historical scale, from individual history into Hegelian history. In order to describe and test both the translation process and its result, we focus on pedagogical practice, which enables various principles grounded in interpretative/actualizing Readability. Hence, establishing the analytical uttering dynamics of the source text can be widened by other practices. The reversibility test (target - source text) or the comparison with a second translation in a third language (tertium comparationis A/B and A/C) points out the evidence of an impossible event.
It therefore does not imply an idealistic/absolute uttering source but the irreducible/non-reproducible intentionality of its production event within the experience of world/discourse. The aim of this paper is to conceptualize translation as the tension between interpretative and actualizing Readability in a new approach grounded in the morphodynamics of language and translatability (mainly into French), within literary and non-literary texts, articulating theoretical and described pedagogical corpora.
Keywords: readability, translation as deverbalization, translation as conversion, tertium comparationis, uttering actualization, translation pedagogy
Procedia PDF Downloads 166
4216 Using Authentic and Instructional Materials to Support Intercultural Communicative Competence in ELT
Authors: Jana Beresova
Abstract:
The paper presents a study carried out in 2015-2016 within the national research scheme VEGA 1/0106/15, based on theoretical research and empirical verification of the concept of intercultural communicative competence. It focuses on the current conception of target-language teaching compatible with the Common European Framework of Reference for Languages: Learning, teaching, assessment. Our research revealed how the concept of intercultural communicative competence had been perceived by secondary-school teachers of English in Slovakia before they were intensively trained. The intensive workshops were based on the use of both authentic and instructional materials, with the goal of supporting interculturally oriented language teaching aimed at challenging thinking. The former conception, which supported the development of students' linguistic knowledge and the use of a target language to obtain information about the culture of the country whose language learners were learning, was expanded by a meaning-making framework that views language as a typical means by which culture is mediated. The goal of the workshops was to help English teachers better understand the concept of intercultural communicative competence, combining theory and practice optimally. The results of the study will be presented and analysed, providing particular recommendations for language teachers and suggesting some changes to the National Educational Programme from which English learners should benefit in their future studies or professional careers.
Keywords: authentic materials, English language teaching, instructional materials, intercultural communicative competence
Procedia PDF Downloads 270
4215 Triple Modulation on Wound Healing in Glaucoma Surgery Using Mitomycin C and Ologen Augmented with Anti-Vascular Endothelial Growth Factor
Authors: Reetika Sharma, Lalit Tejwani, Himanshu Shekhar, Arun Singhvi
Abstract:
Purpose: To describe a novel technique of trabeculectomy targeting triple modulation of wound healing to increase the overall success rate. Method: Ten eyes of 10 patients underwent trabeculectomy with subconjunctival mitomycin C (0.4 mg/ml for 4 minutes) application combined with Ologen implantation subconjunctivally and subsclerally. Five of these patients underwent additional phacoemulsification with intraocular lens implantation. The Ologen implant was wetted with 0.1 ml of bevacizumab. Results: All eyes achieved the target intraocular pressure (IOP), which was maintained through one year of follow-up. Two patients needed anterior chamber reformation on day two post-surgery. One patient needed cataract surgery four months after surgery and achieved target IOP on two topical antiglaucoma medicines. Conclusion: Vascular endothelial growth factor (VEGF) concentration has been shown to increase in the aqueous humor after filtration surgery. Ologen implantation helps in collagen remodelling, provides an antifibroblastic response, and acts as a spacer. Bevacizumab-augmented Ologen additionally targets the increased VEGF and helps decrease scarring. Anti-VEGF-augmented Ologen in trabeculectomy with mitomycin C (MMC) hence appears to provide encouraging short-term IOP control.
Keywords: ologen, anti-VEGF, trabeculectomy, scarring
Procedia PDF Downloads 188
4214 A pH-Activatable Nanoparticle Self-Assembly Triggered by 7-Amino Actinomycin D Demonstrating Superior Tumor Fluorescence Imaging and Anticancer Performance
Authors: Han Xiao
Abstract:
The development of nanomedicines has recently achieved several breakthroughs in the field of cancer treatment; however, the biocompatibility and targeted burst release of these medications remain a limitation, which leads to serious side effects and significantly narrows the scope of their applications. Here, the self-assembly of intermediate filament protein (IFP) peptides was triggered by the hydrophobic cationic drug 7-amino actinomycin D (7-AAD) to synthesize pH-activatable nanoparticles (NPs) that could simultaneously locate tumors and produce antitumor effects. The designed IFP peptide included a targeting peptide (arginine–glycine–aspartate), a negatively charged region, and an α-helix sequence. It was able to encapsulate 7-AAD molecules, through the formation of hydrogen bonds and hydrophobic interactions, by a one-step method. 7-AAD molecules, which have excellent near-infrared fluorescence properties, could be delivered into tumor cells in a targeted manner by the NPs and released immediately in the acidic environments of tumors and endosomes/lysosomes, ultimately inducing cytotoxicity by inserting into DNA and arresting the tumor cell cycle. It is noteworthy that tail-vein injection of the IFP/7-AAD NPs demonstrated not only high tumor-targeted imaging potential but also strong antitumor therapeutic effects in vivo. The proposed strategy may be used in the delivery of cationic antitumor drugs for precise imaging and cancer therapy.
Keywords: 7-amino actinomycin D, intermediate filament protein, nanoparticle, tumor image
Procedia PDF Downloads 138
4213 A Large Language Model-Driven Method for Automated Building Energy Model Generation
Authors: Yake Zhang, Peng Xu
Abstract:
The development of building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and the BEM library aimed at improving the efficiency of model generation. This method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and thermal performance of the building envelope. The BEM library is utilized to retrieve energy models that match the target building’s characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model, allowing for the creation of a building energy model that adapts to the user’s modeling requirements. This study enables the automatic creation of building energy models based on natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of a large language model in the field of building simulation and performance modeling.
Keywords: artificial intelligence, building energy modelling, building simulation, large language model
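The retrieval step described above can be sketched as follows. The feature schema, the library entries, and the similarity score here are illustrative assumptions, not the authors' implementation; in the proposed method, the target features would first be parsed from the user's natural-language request by the large language model.

```python
# Sketch of library retrieval by feature matching (hypothetical schema):
# numeric features are scored by relative closeness, categorical features
# (e.g. climate zone) by exact match, and the best-scoring entry is
# returned as the reference model for the LLM.

def retrieve_reference_model(target, library):
    """Return the library model whose features best match the target."""
    def score(entry):
        s = 0.0
        for key, want in target.items():
            have = entry.get(key)
            if have is None:
                continue
            if isinstance(want, (int, float)):
                # 1.0 for an exact match, decaying with relative distance
                s += 1.0 / (1.0 + abs(want - have) / max(abs(want), 1e-9))
            else:
                s += 1.0 if have == want else 0.0
        return s
    return max(library, key=score)

library = [
    {"id": "office_shanghai", "climate_zone": "hot-summer/cold-winter",
     "wwr": 0.40, "wall_u_value": 0.8},
    {"id": "office_harbin", "climate_zone": "severe-cold",
     "wwr": 0.25, "wall_u_value": 0.35},
]
# Features as the LLM might extract them from a user request
target = {"climate_zone": "hot-summer/cold-winter", "wwr": 0.45,
          "wall_u_value": 0.7}
best = retrieve_reference_model(target, library)
```

In the described pipeline, `best` would then be handed back to the large language model as reference context when generating the final energy model.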
Procedia PDF Downloads 26
4212 Effects of Plumage Colour on Measurable Attributes of Indigenous Chickens in North Central Nigeria
Authors: Joseph J. Okoh, Samuel T. Mbap, Tahir Ibrahim, Yusuf P. Mancha
Abstract:
The influence of plumage colour on measurable attributes of 6176 adult indigenous chickens of mixed sex from four states of the North Central Zone of Nigeria, namely Nasarawa, Niger, Benue, and Kogi, and the Federal Capital Territory (FCT) Abuja, was assessed. The overall average body weight of the chickens was 1.95 ± 0.03 kg. The body weights of black, white, black/white, brown, black/brown, grey, and mottled chickens were 1.87 ± 0.04, 1.94 ± 0.04, 1.95 ± 0.03, 1.93 ± 0.03, 2.01 ± 0.04, 1.96 ± 0.04, and 1.94 ± 0.14 kg, respectively. Only body length did not vary with plumage colour. The others, namely body weight and width, shank, comb, and breast lengths, breast height (p < 0.001), and beak and wing lengths (p < 0.001), varied significantly. Generally, no colour was outright superior to the others in all body measurements. However, body weight and breast height were both highest in black/brown chickens, which also had the second highest breast length. Body width and shank, beak, comb, and wing lengths were highest in grey chickens but lowest in those with white colour and its combinations. Egg quality, on the other hand, was mostly lowest in grey chickens. In selection for genetic improvement in body measurements, black/brown and grey chickens should be favoured. However, in view of the known negative relationship between body weight and egg attributes, selection in favour of grey plumage may result in chickens with poor egg attributes. Therefore, grey chickens should be selected against where egg quality is the objective.
Keywords: body weight, indigenous chicken, measurements, plumage colour
Procedia PDF Downloads 128
4211 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
Keywords: computer vision, deep learning, object detection, semiconductor
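The iterative selection loop described above can be sketched as follows. The novelty scorer here is a stand-in for the U-shaped deep network (which additionally localizes the unseen data within each image); the loop itself mirrors the described procedure: label a seed set, then repeatedly hand the labeler the images ranked most novel.

```python
# Sketch of novelty-ranked active labeling (scorer is a placeholder).

def select_next_batch(unlabeled, novelty_score, k):
    """Rank unlabeled images by novelty and return the top-k."""
    ranked = sorted(unlabeled, key=novelty_score, reverse=True)
    return ranked[:k]

def active_labeling(images, novelty_score, seed, batch, rounds):
    """Label `seed` images, then iteratively label the most novel ones."""
    labeled, unlabeled = list(images[:seed]), list(images[seed:])
    for _ in range(rounds):
        for img in select_next_batch(unlabeled, novelty_score, batch):
            unlabeled.remove(img)     # a human labels the chosen image
            labeled.append(img)
    return labeled

# Toy example: "novelty" is just a per-image number.
scores = {"a": 0.1, "b": 0.9, "c": 0.5, "d": 0.8, "e": 0.2}
order = active_labeling(list("abcde"), scores.get, seed=1, batch=2, rounds=2)
```

In practice, the scorer would be re-run (or retrained) between rounds so that examples similar to newly labeled ones stop being flagged as novel.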
Procedia PDF Downloads 136
4210 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Experience in past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, which must remain permanently functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectrum for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS) (i.e., the design spectrum for structural components). The methodology is based on experimental and numerical analysis of a database of 27 real reinforced concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings are subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) are developed for every floor in two horizontal directions, considering four different damping ratios of NSCs (2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) are studied statistically, and the methodology is proposed to generate the FDS directly from the corresponding UHS. The approach is capable of generating the FDS for any selection of floor level and NSC damping ratio. It captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems and of the higher and torsional modes of the primary structure.
These are important improvements of this approach compared to conventional methods and code recommendations. Application of the proposed approach are represented here through two real case-study buildings: one low-rise building and one medium-rise. The proposed approach can be used as practical and robust tool for seismic assessment and design of NSCs especially in existing post-disaster structures.Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design
Procedia PDF Downloads 213
4209 Optimizing Emergency Rescue Center Layouts: A Backpropagation Neural Networks-Genetic Algorithms Method
Authors: Xiyang Li, Qi Yu, Lun Zhang
Abstract:
In the face of natural disasters and other emergency situations, determining the optimal location of rescue centers is crucial for improving rescue efficiency and minimizing the impact on affected populations. This paper proposes a method that integrates genetic algorithms (GA) and backpropagation neural networks (BPNN) to address the site-selection optimization problem for emergency rescue centers. We utilize a BPNN to accurately estimate the cost of delivering supplies from rescue centers to each temporary camp. Moreover, a genetic algorithm with a special partially matched crossover (PMX) strategy is employed to ensure that the number of temporary camps assigned to each rescue center adheres to predetermined limits. Using population distribution data from the 2022 epidemic in Jiading District, Shanghai, as an experimental case, this paper verifies the effectiveness of the proposed method. The experimental results demonstrate that the proposed BPNN-GA method outperforms existing algorithms in terms of computational efficiency and optimization performance. Considering in particular the computational-resource and response-time requirements of emergency situations, the proposed method achieves rapid convergence and strong performance in the early and middle stages of the search. Future research could explore incorporating more real-world conditions and variables into the model to further improve its accuracy and applicability.
Keywords: emergency rescue centers, genetic algorithms, back-propagation neural networks, site selection optimization
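The PMX operator named in the abstract is a standard permutation crossover; the abstract does not give its implementation, so the following is a generic sketch (the fixed cut points and the small example parents are illustrative only, not taken from the paper):

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially matched crossover (PMX) for permutation chromosomes.

    The segment parent1[cut1:cut2] is copied into the child; the remaining
    positions take genes from parent2, following the mapping defined by the
    exchanged segment whenever a direct copy would duplicate a gene.
    """
    size = len(parent1)
    child = [None] * size
    # 1. Copy the crossover segment from the first parent.
    child[cut1:cut2] = parent1[cut1:cut2]
    segment = set(child[cut1:cut2])
    # 2. Mapping parent1[i] <-> parent2[i] over the segment positions.
    mapping = {parent1[i]: parent2[i] for i in range(cut1, cut2)}
    # 3. Fill the rest from the second parent, resolving conflicts.
    for i in list(range(cut1)) + list(range(cut2, size)):
        gene = parent2[i]
        while gene in segment:
            gene = mapping[gene]
        child[i] = gene
    return child

# Toy camp-assignment orderings standing in for real chromosomes.
child = pmx([1, 2, 3, 4, 5, 6, 7, 8], [3, 7, 5, 1, 6, 8, 2, 4], 3, 6)
```

PMX keeps the child a valid permutation, which is what allows the GA to respect the per-center camp limits without a repair step.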
Procedia PDF Downloads 85
4208 Life Stage Customer Segmentation by Fine-Tuning Large Language Models
Authors: Nikita Katyal, Shaurya Uppal
Abstract:
This paper tackles the significant challenge of accurately classifying customers within a retailer’s customer base. Accurate classification is essential for developing targeted marketing strategies that effectively engage each segment. To address this issue, we propose a method that utilizes Large Language Models (LLMs). By employing LLMs, we analyze the metadata associated with product purchases derived from historical data to identify key product categories that act as distinguishing factors. These categories, such as baby food, eldercare products, or family-sized packages, offer valuable insights into the likely household composition of customers, including families with babies, families with kids/teenagers, families with pets, households caring for elders, or mixed households. We assign high-confidence customers to distinct segments by integrating historical purchase behavior with LLM-powered product classification. This paper asserts that life stage segmentation can significantly enhance e-commerce businesses’ ability to target the appropriate customers with tailored products and campaigns, thereby augmenting sales and improving customer retention. Additionally, the paper details the data sources, model architecture, and evaluation metrics employed for the segmentation task.
Keywords: LLMs, segmentation, product tags, fine-tuning, target segments, marketing communication
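The downstream step of the pipeline, mapping LLM-produced product tags to life-stage segments with a confidence threshold, can be sketched as a toy rule table; the category names, segment labels, `SEGMENT_RULES`, and the `min_hits` threshold below are hypothetical stand-ins, not the paper's actual tags or model output:

```python
# Hypothetical category-to-segment rules standing in for the LLM's product tags.
SEGMENT_RULES = {
    "baby food": "family_with_baby",
    "diapers": "family_with_baby",
    "pet food": "family_with_pets",
    "eldercare": "household_with_elders",
}

def segment_customer(purchased_categories, min_hits=2):
    """Assign a life-stage segment only when enough purchases agree,
    mirroring the paper's idea of segmenting high-confidence customers."""
    votes = {}
    for cat in purchased_categories:
        seg = SEGMENT_RULES.get(cat)
        if seg:
            votes[seg] = votes.get(seg, 0) + 1
    best = max(votes, key=votes.get, default=None)
    return best if best and votes[best] >= min_hits else "unclassified"
```

A customer whose history contains both baby food and diapers would be labeled with the baby-household segment, while a single matching purchase stays unclassified.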
Procedia PDF Downloads 23
4207 Method to Find a ε-Optimal Control of Stochastic Differential Equation Driven by a Brownian Motion
Authors: Francys Souza, Alberto Ohashi, Dorival Leao
Abstract:
We present a general method for finding ε-optimal controls for non-Markovian stochastic systems described by stochastic differential equations driven by Brownian motion, a problem widely recognized as difficult to solve. The contribution lies in the development of mathematical tools for the modeling and control of non-Markovian systems, whose applicability in different areas is well known. The methodology consists in discretizing the problem through a random discretization. In this way, we transform an infinite-dimensional problem into a finite-dimensional one, and thereafter use measurable selection arguments to find a control in explicit form for the discretized problem. We then prove that the control found for the discretized problem is an ε-optimal control for the original problem. Our theory provides a concrete description of a rather general class of problems; among the principal ones, we can highlight financial problems such as portfolio control, hedging, super-hedging, pairs trading, and others. Our main contribution is therefore a tool for obtaining the ε-optimal control of non-Markovian stochastic systems explicitly. The pathwise analysis, carried out through a random discretization jointly with measurable selection arguments, provides a structure for transforming an infinite-dimensional problem into a finite-dimensional one. The theory is applied to stochastic control problems based on path-dependent stochastic differential equations, where both drift and diffusion components are controlled. With our method, we are able to exhibit the optimal control explicitly.
Keywords: dynamic programming equation, optimal control, stochastic control, stochastic differential equation
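For readers unfamiliar with the term, ε-optimality has a compact standard definition; the sketch below uses generic value-function notation (V, J, the admissible set 𝒰) which is ours, not the paper's:

```latex
% Value function: infimum of the cost over admissible controls
V(x) \;=\; \inf_{u \in \mathcal{U}} J(x, u)

% A control u^{\varepsilon} \in \mathcal{U} is called \varepsilon-optimal if
% its cost is within \varepsilon of the (possibly unattained) optimal value:
J(x, u^{\varepsilon}) \;\le\; V(x) + \varepsilon, \qquad \varepsilon > 0.
```

An ε-optimal control always exists even when the exact infimum is not attained, which is why it is the natural target for an explicit construction.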
Procedia PDF Downloads 188
4206 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis
Authors: Tefera Kebede Leyu
Abstract:
The purpose of this study was to estimate the impact of LIVES project participation on the technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by IRLI between February and April 2014 for the year 2013 (retroactive). Data on 1,905 sample households (754 in the intervention group and 1,151 in the control group) were analyzed using the STATA software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not fully efficient and operate below their potential frontiers, i.e., there is potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of a selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while the score remained more or less the same for the control group. However, the control group of farmers shows higher dispersion, as measured by the coefficient of variation, than the intervention counterparts. Among the explanatory variables, the study found that farmer’s age (a proxy for farm experience), land certification, frequency of visits to improved seed centers, farmer’s education, and row planting are important contributing factors for participation decisions and hence for the technical efficiency of farmers in the study areas.
We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers’ training centers, and adoption of modern farm technologies. Finally, we recommend further research using this methodological framework with a panel data set, to test whether technical efficiency increases or decreases with the length of time that farmers participate in development programs.
Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step
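The core of Heckman's (1979) two-step correction is the inverse Mills ratio computed from the first-stage probit of participation; the function below is a from-first-principles sketch of that term (the function and variable names are ours, not the study's, and the second-stage regression itself is omitted):

```python
import math

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z), the Heckman selection-correction term.

    In the two-step estimator, a probit model of programme participation
    yields a fitted index z_i for each household; lambda(z_i) is then added
    as an extra regressor in the second-stage (frontier) equation so that
    selectivity bias from unobservables is absorbed by its coefficient.
    """
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal phi(z)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal Phi(z)
    return pdf / cdf
```

A significant coefficient on this regressor in the second stage is exactly the evidence of selection bias the abstract reports for both groups.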
Procedia PDF Downloads 75
4205 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease
Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang
Abstract:
Alzheimer’s disease has become a major public health issue, as indicated by the growing number of Americans living with it. After decades of extensive research, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat Alzheimer’s disease. Five of these drugs were designed to treat the dementia symptoms, and only two (Aducanumab and Lecanemab) target the progression of the disease, especially the accumulation of amyloid-β plaques. However, the accelerated approvals of both Aducanumab and Lecanemab drew controversy, especially over the safety and side effects of these two drugs. There is still an urgent need for further drug discovery targeting the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes, and the product is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from that in the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain and offer potential targets for Alzheimer’s treatment. In astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-Binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into microglia, while lipoprotein receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into neurons.
Cytochrome P450 family 46 subfamily A member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to have a direct effect on the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), all of which aid in amyloid-beta production. Reducing cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in brain cholesterol regulation and, further, to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review of genes and pathways related to brain cholesterol synthesis and regulation was first conducted. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in Alzheimer’s disease progression was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds against the selected protein target, and a machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target. The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation
Procedia PDF Downloads 75
4204 The Molecule Preserve Environment: Effects of Inhibitor of the Angiotensin Converting Enzyme on Reproductive Potential and Composition Contents of the Mediterranean Flour Moth, Ephestia kuehniella Zeller
Authors: Yezli-Touiker Samira, Amrani-Kirane Leila, Soltani Mazouni Nadia
Abstract:
Due to the secondary effects of conventional insecticides on the environment, agrochemical research has turned to the discovery of novel molecules, with the aim of developing a new group of pesticides that are cheaper and less hazardous to the environment and to non-target organisms; this is the main desired outcome of the present work. The angiotensin-converting enzyme (ACE) is one such target for the development of novel insect growth regulators. Captopril, an inhibitor of ACE, was tested in vivo by topical application for its effects on the reproduction of Ephestia kuehniella Zeller (Lepidoptera: Pyralidae). The compound was diluted in acetone and applied topically to newly emerged pupae (10 µg/2 µl). The effects of this molecule were studied on the biochemistry of the ovary (amounts of nucleic acids and proteins, and qualitative analysis of the ovarian proteins) and on reproductive potential (duration of pre-oviposition, duration of oviposition, number of eggs laid, and hatching percentage). Captopril significantly reduced the amounts of ovarian proteins and nucleic acids. The electrophoresis profile revealed the absence of three bands in the treated series. The molecule also reduced the duration of the oviposition period, fecundity, and egg viability.
Keywords: environment, Ephestia kuehniella, captopril, reproduction, agrochemical research
Procedia PDF Downloads 285
4203 Synthesis and Tribological Properties of the Al-Cr-N/MoS₂ Self-Lubricating Coatings by Hybrid Magnetron Sputtering
Authors: Tie-Gang Wang, De-Qiang Meng, Yan-Mei Liu
Abstract:
Ternary AlCrN coatings are widely used to prolong cutting tool life because of their high hardness and excellent abrasion resistance. However, the friction between the workpiece and the cutter surface increases remarkably when machining difficult-to-cut materials (such as superalloys, titanium, etc.); as a result, a large amount of cutting heat is generated and cutting tool life is shortened. In this work, an appropriate amount of the solid lubricant MoS₂ was added to the AlCrN coating to reduce the friction between the tool and the workpiece. A series of Al-Cr-N/MoS₂ self-lubricating coatings with different MoS₂ contents were prepared by a compound system of high power impulse magnetron sputtering (HiPIMS) and pulsed direct current magnetron sputtering (pulsed DC). The MoS₂ content in the coatings was varied by adjusting the sputtering power of the MoS₂ target. The composition, structure, and mechanical properties of the Al-Cr-N/MoS₂ coatings were systematically evaluated by energy-dispersive spectrometry, scanning electron microscopy, X-ray photoelectron spectroscopy, X-ray diffractometry, nano-indentation, scratch testing, and ball-on-disk tribometry. The results indicated that the lubricant content plays an important role in the coating properties. When the sputtering power of the MoS₂ target was 0.1 kW, the coating possessed the highest hardness (14.1 GPa), the highest critical load (44.8 N), and the lowest wear rate (4.4×10⁻³ μm²/N).
Keywords: self-lubricating coating, Al-Cr-N/MoS₂ coating, wear rate, friction coefficient
Procedia PDF Downloads 132
4202 Ideology Shift in Political Translation
Authors: Jingsong Ma
Abstract:
In political translation, ideology plays an important role in conveying implications accurately. Ideological collisions can occur in political translation when there exist differences between the political environments embedded in translingual political texts in the source and target languages. To reach an accurate translation requires the translator to understand the ideologies implied in (and often transcending) the texts. This paper explores the conditions, procedure, and purpose of processing ideological collision and the resolution of such issues in political translation. These points are elucidated by case studies of translating English and Chinese political texts. First, there are specific political terminologies in certain political environments. These terminological peculiarities in one language are often determined by ideological elements rather than by syntactic and semantic understanding. The translation of these ideologically loaded terminologies is a process and operation consisting of understanding the ideological context, including cultural, historical, and political situations. This is explained with characteristic Chinese political terminologies and their renderings in English. Second, when the ideology in the source language fails to match the ideology in the target language, the decisions to highlight or disregard these conflicts are shaped by power relations, political engagement, social context, etc. It is thus necessary to go beyond linguistic analysis of the context by deciphering the ideology in political documents to provide a faithful or equivalent rendering of certain messages. Finally, one of the practical issues concerns equivalence in political translation: redefining the notion of faithfulness and the retention of ideological messages of the source language in translations of political texts.
To avoid distortion, the translator should be freed from the grip of literal meaning and instead delve into the functional meanings of the text.
Keywords: translation, ideology, politics, society
Procedia PDF Downloads 111
4201 Case Analysis of Bamboo Based Social Enterprises in India-Improving Profitability and Sustainability
Authors: Priyal Motwani
Abstract:
The current market for bamboo products in India is about Rs. 21,000 crores and is highly unorganised and fragmented. In this study, we have closely analysed the structure and functions of a major bamboo-craft-based organisation in Kerala, India, and elaborated on its value chain, product mix, pricing strategy, supply chain, collaborations, and competitive landscape. We have identified six major bottlenecks that are prevalent in such organisations in the Indian context, relating to their product mix, asset management, and supply chain, with the corresponding waste management and retail network. By carrying out secondary and primary research (a sample space of 5,000), the study has identified the target customers for bamboo-based products and alternative revenue streams (eco-tourism, microenterprises, training) that can boost existing revenue by 150%. We have then recommended an optimum product mix, covering premium, medium, and low-value processing, for medium-sized bamboo-based organisations, in accordance with their capacity, to maximise their revenue potential. After studying such organisations and their counterparts, the study has established an optimum retail network, considering B2B and B2C physical and online retail, to maximise sales to the target groups. On the basis of the analysis of present and future trends, our study gives recommendations to improve the revenue potential of bamboo-based organisations in India and to promote sustainability.
Keywords: bamboo, bottlenecks, optimization, product mix, retail network, value chain
Procedia PDF Downloads 217
4200 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings
Authors: A. W. J. Wong, I. H. Ibrahim
Abstract:
Increasing urbanisation has driven building designers to incorporate natural ventilation into the designs of sustainable buildings. This project utilises Computational Fluid Dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian areas of the first and fourth levels of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model. The validated simulation model was then used for coupled simulations of SIT@SP and neighbouring geometries under two wind speeds. Both steady and transient simulations were used to identify differences in results; the two agree well, with the transient simulation additionally identifying peak velocities during flow development. Under the lower wind speed, the first level was sufficiently ventilated while the fourth level was not. Under the higher wind speed, the first level experienced excessive wind velocities while the fourth level was adequately ventilated. Flow velocity on the fourth level was consistently lower than on the first level; this is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and an insufficiently ventilated fourth level. Future work for this project extends to modifying the urban geometry, improving the simulation model, evaluating other assessment metrics, and extending the area of interest to the entire building.
Keywords: buildings, CFD simulations, natural ventilation, urban airflow
Procedia PDF Downloads 221
4199 Path Planning for Unmanned Aerial Vehicles in Constrained Environments for Locust Elimination
Authors: Aadiv Shah, Hari Nair, Vedant Mittal, Alice Cheeran
Abstract:
Present-day agricultural practices such as blanket spraying not only lead to excessive usage of pesticides but also harm the overall crop yield. This paper introduces an algorithm to optimize the traversal of an unmanned aerial vehicle (UAV) in constrained environments. The proposed system focuses on the agricultural application of targeted spraying for locust elimination. Given a satellite image of a farm, target zones that are prone to locust swarm formation are detected through calculation of the normalized difference vegetation index (NDVI). This is followed by determining the optimal path for traversal of the UAV through these target zones using the proposed algorithm, in order to perform pesticide spraying in the most efficient manner possible. Unlike the classic travelling salesman problem, which involves point-to-point optimization, the proposed algorithm determines an optimal path for multiple regions, independent of their geometry. Finally, the paper explores the idea of implementing reinforcement learning to model complex environmental behaviour and make the path planning mechanism for UAVs agnostic to external environment changes. This system not only presents a solution to the enormous losses incurred due to locust attacks but also an efficient way to automate agricultural practices across the globe in order to improve farmer ergonomics.
Keywords: locust, NDVI, optimization, path planning, reinforcement learning, UAV
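The NDVI step of the pipeline uses the standard formula NDVI = (NIR − Red) / (NIR + Red); the minimal sketch below applies it per pixel and flags candidate target zones (the flattened band lists and the 0.4 threshold are illustrative assumptions, not values from the paper):

```python
def ndvi(nir, red):
    """Normalised difference vegetation index for one pixel.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1; dense green
    vegetation (a candidate locust target zone) gives values near 1.
    """
    if nir + red == 0:
        return 0.0  # guard against division by zero over no-data pixels
    return (nir - red) / (nir + red)

def target_zones(nir_band, red_band, threshold=0.4):
    """Indices of pixels whose NDVI exceeds a (hypothetical) threshold."""
    return [i for i, (n, r) in enumerate(zip(nir_band, red_band))
            if ndvi(n, r) > threshold]
```

In practice the bands come from the satellite image's near-infrared and red channels, and the flagged pixels are clustered into the regions the path planner then visits.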
Procedia PDF Downloads 251
4198 Proteome-Wide Convergent Evolution on Vocal Learning Birds Reveals Insight into cAMP-Based Learning Pathway
Authors: Chul Lee, Seoae Cho, Erich D. Jarvis, Heebal Kim
Abstract:
Vocal learning, the ability to imitate vocalizations based on auditory experience, is a homoplastic character state observed in independent lineages of animals such as songbirds, parrots, hummingbirds, and humans. With the recent expansion of avian genome data, it has become possible to perform genome-wide molecular analyses across vocal learners and vocal non-learners. We analyzed the whole genomes of human and 48 avian species, including those belonging to the three avian vocal-learning lineages, to determine whether behavioral and neural convergence is associated with molecular convergence in divergent species of vocal learners. Analyses of 8,295 orthologous genes across bird species revealed 141 genes with amino acid substitutions specific to vocal learners. Of these, 25 genes have vocal-learner-specific genetic homoplasies, and their functions are enriched for learning. Several sites in these genes are estimated to be under convergent evolution and positive selection. A potential role for a subset of these genes in vocal learning was supported by associations with gene expression profiles in vocal-learning brain regions of songbirds and with human diseases that cause language dysfunction. The key candidate gene with multiple independent lines of evidence specific to vocal learners was DRD5. Our findings suggest a cAMP-based learning pathway in avian vocal learners, indicating molecular homoplastic changes associated with a complex behavioral trait, vocal learning.
Keywords: amino acid substitutions, convergent evolution, positive selection, vocal learning
Procedia PDF Downloads 341
4197 Immersive and Non-Immersive Virtual Reality Applied to the Cervical Spine Assessment
Authors: Pawel Kiper, Alfonc Baba, Mahmoud Alhelou, Giorgia Pregnolato, Michela Agostini, Andrea Turolla
Abstract:
Impairment of cervical spine mobility is often related to pain triggered by musculoskeletal disorders or direct traumatic injuries of the spine. To date, these disorders are assessed with goniometers and inclinometers, the most popular devices used in clinical settings. Nevertheless, these technologies usually allow measurement of no more than two-dimensional range of motion (ROM) quotes in static conditions. Conversely, wide use of motion tracking systems able to measure 3 to 6 degrees of freedom dynamically, while performing standard ROM assessment, is limited by technical complexities in preparing the setup and by high costs; thus, motion tracking systems are primarily used in research. These systems are an integral part of virtual reality (VR) technologies, which can be used for measuring spine mobility. To our knowledge, the accuracy of VR measurement within virtual environments has not yet been studied. Thus, the aim of this study was to test the reliability of a protocol for the assessment of sensorimotor function of the cervical spine in a population of healthy subjects, and to compare whether using immersive or non-immersive VR for visualization affects performance. Both VR assessments consisted of the same five exercises, and a random sequence determined which environment (immersive or non-immersive) was used first. Subjects were asked to perform head rotation (right and left), flexion, extension, and lateral flexion (right and left side bending). Each movement was executed five times. Moreover, the participants were invited to perform head-reaching movements, i.e., head movements toward 8 targets placed along a circular perimeter every 45°, visualized one by one in random order. Finally, head repositioning was assessed by movement toward the same 8 targets as for reaching, followed by repositioning to the start point. Each participant thus performed 46 tasks during assessment.
The main measures were: ROM of rotation, flexion, extension, and lateral flexion, and the complete kinematics of the cervical spine (i.e., number of completed targets, time of execution (seconds), spatial length (cm), angle distance (°), and jerk). Thirty-five healthy participants (14 males and 21 females, mean age 28.4±6.47) were recruited for the cervical spine assessment with immersive and non-immersive VR environments. Comparison analysis demonstrated that head right rotation (p=0.027), extension (p=0.047), flexion (p=0.000), time (p=0.001), spatial length (p=0.004), jerk target (p=0.032), trajectory repositioning (p=0.003), and jerk target repositioning (p=0.007) were significantly better in immersive than in non-immersive VR. A regression model showed that assessment in immersive VR was influenced by height, trajectory repositioning (p<0.05), and handedness (p<0.05), whereas in non-immersive VR performance was influenced by height, jerk target (p=0.002), head extension, jerk target repositioning (p=0.002), and by age, head flexion/extension, trajectory repositioning, and weight (p=0.040). The results of this study showed higher accuracy of cervical spine assessment when executed in immersive VR. The assessment of ROM and kinematics of the cervical spine can be affected by independent and dependent variables in both immersive and non-immersive VR settings.
Keywords: virtual reality, cervical spine, motion analysis, range of motion, measurement validity
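Among the kinematic measures listed above, jerk is the third time derivative of position; the abstract does not specify which jerk statistic was used, so the following is a simplified 1-D mean-squared-jerk sketch via finite differences (the sampling interval `dt` and the example trajectory are illustrative assumptions):

```python
def jerk_metric(positions, dt):
    """Mean squared jerk of a uniformly sampled 1-D trajectory.

    Jerk is the third time derivative of position; smoother head movements
    yield lower values. Computed here with three successive first-order
    finite differences of the position samples.
    """
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return sum(j * j for j in jerk) / len(jerk)
```

For a cubic trajectory p(t) = t³ sampled at unit intervals the third derivative is the constant 6, so the metric returns 36, which is a convenient sanity check for the differencing.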
Procedia PDF Downloads 166
4196 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws’ texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated; the number of patients in each tumor stage (I-II, III, or IV) was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws’ texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used for automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
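Of the feature families named above, the GLCM is the easiest to sketch from first principles: it counts how often pairs of gray levels co-occur at a fixed pixel offset, and Haralick features such as contrast are computed from the normalized matrix. The plain-Python sketch below (the study itself used MATLAB; the toy image, offset, and level count are illustrative only) shows one offset and one feature:

```python
def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix of a quantized image for one offset.

    m[i][j] counts pixel pairs where gray level i is followed by gray
    level j at displacement (dx, dy).
    """
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for y in range(rows):
        for x in range(cols):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum over i, j of (i - j)^2 * p(i, j),
    one of the classic Haralick texture features."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))
```

Repeating this over several offsets and combining with FOS, GLRLM, and Laws' filter outputs yields the kind of 51-feature vector the study fed into SFS and the classifiers.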
Procedia PDF Downloads 326
4195 An Inverse Docking Approach for Identifying New Potential Anticancer Targets
Authors: Soujanya Pasumarthi
Abstract:
Inverse docking is a relatively new technique that has been used to identify potential receptor targets of small molecules. Our docking software package, MDock, is well suited for such an application, as it is computationally efficient while simultaneously showing adequate results in binding affinity predictions and enrichment tests. As a validation study, we present the first-stage results of an inverse docking study that seeks to identify potential direct targets of PRIMA-1. PRIMA-1 is well known for its ability to restore mutant p53’s tumor suppressor function, leading to apoptosis in several types of cancer cells. For this reason, we believe that potential direct targets of PRIMA-1 identified in silico should be experimentally screened for their ability to inhibit cancer cell growth. The highest-ranked human protein in our PRIMA-1 docking results is oxidosqualene cyclase (OSC), which is part of the cholesterol synthetic pathway. The results of two follow-up experiments treating OSC as a possible anti-cancer target are promising. We show that both PRIMA-1 and Ro 48-8071, a known potent OSC inhibitor, significantly reduce the viability of BT-474 breast cancer cells relative to normal mammary cells. In addition, like PRIMA-1, we find that Ro 48-8071 results in increased binding of mutant p53 to DNA in BT-474 cells (which highly express p53). For the first time, Ro 48-8071 is shown to be a potent agent in killing human breast cancer cells. The potential of OSC as a new target for developing anticancer therapies is worth further investigation.
Keywords: inverse docking, in silico screening, protein-ligand interactions, molecular docking
Procedia PDF Downloads 446