Search results for: artificial intelligence and genetic algorithms
1239 Advantages and Disadvantages of Socioscientific Issue Based Instruction in Science Classrooms: Pre-Service Science Teachers' Views
Authors: Aysegul Evren Yapicioglu
Abstract:
The social roles and responsibilities expected of citizens are increasing due to changing global living conditions, and science education is expected to prepare conscious and sensitive students, because today's students are the adults of the future. A precondition of this task is teacher education. Over the past decade, socioscientific issues have become one of the most important research fields. This study examines the advantages and disadvantages of socioscientific issue based instruction in the science classroom from the viewpoint of pre-service science teachers. A case study approach, one of the qualitative research designs, was used to explore their views. Fourteen pre-service science teachers took part in the instruction process. Dolphinariums, the Kyoto Protocol, genetically modified organisms, the benefits and harms of recyclable black bags, genetic tests, alternative energy sources and organ donation are examples of the socioscientific issues taught through activities in a special teaching course. Diaries and a focus group interview were used as data collection tools. In the pre-service teachers' view, the advantages of socioscientific issue based instruction in the science classroom comprise six sub-categories: multi-skilling, social awareness, development of thinking, meaningful learning, character and professional development, and contribution to scientific literacy, whereas the disadvantages of this instruction process are the challenges posed to teachers and students and the limitations of the teaching and learning process. Finally, this study helps science teachers and researchers to overcome the disadvantages and to benefit from the advantages of socioscientific issue based instruction in the science classroom.
Keywords: science education, socioscientific issues, socioscientific issue based instruction, pre-service science teacher
Procedia PDF Downloads 180
1238 Pushing the Boundary of Parallel Tractability for Ontology Materialization via Boolean Circuits
Authors: Zhangquan Zhou, Guilin Qi
Abstract:
Materialization is an important reasoning service for applications built on the Web Ontology Language (OWL). To make materialization efficient in practice, current research focuses on deciding tractability of an ontology language and designing parallel reasoning algorithms. However, some well-known large-scale ontologies, such as YAGO, have been shown to have good performance for parallel reasoning, but they are expressed in ontology languages that are not parallelly tractable, i.e., the reasoning is inherently sequential in the worst case. This motivates us to study the problem of parallel tractability of ontology materialization from a theoretical perspective. That is we aim to identify the ontologies for which materialization is parallelly tractable, i.e., in the NC complexity. Since the NC complexity is defined based on Boolean circuit that is widely used to investigate parallel computing problems, we first transform the problem of materialization to evaluation of Boolean circuits, and then study the problem of parallel tractability based on circuits. In this work, we focus on datalog rewritable ontology languages. We use Boolean circuits to identify two classes of datalog rewritable ontologies (called parallelly tractable classes) such that materialization over them is parallelly tractable. We further investigate the parallel tractability of materialization of a datalog rewritable OWL fragment DHL (Description Horn Logic). Based on the above results, we analyze real-world datasets and show that many ontologies expressed in DHL belong to the parallelly tractable classes.Keywords: ontology materialization, parallel reasoning, datalog, Boolean circuit
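To make the materialization task concrete, the following is a minimal sketch of naive forward-chaining closure over two datalog-style rules (class-hierarchy transitivity and type propagation). The rules and facts are invented for illustration; this sequential sketch does not reproduce the paper's Boolean-circuit construction or its parallel (NC) algorithms.

```python
# Toy forward-chaining materialization over datalog-style rules (illustrative only).
facts = {("subClassOf", "Dog", "Mammal"),
         ("subClassOf", "Mammal", "Animal"),
         ("type", "rex", "Dog")}

def apply_rules(facts):
    """One round of rule application; returns only newly derived facts."""
    new = set()
    # Rule 1: subClassOf is transitive.
    for (p1, a, b) in facts:
        if p1 != "subClassOf":
            continue
        for (p2, c, d) in facts:
            if p2 == "subClassOf" and c == b:
                new.add(("subClassOf", a, d))
    # Rule 2: instances propagate up the class hierarchy.
    for (p1, x, cls) in facts:
        if p1 != "type":
            continue
        for (p2, sub, sup) in facts:
            if p2 == "subClassOf" and sub == cls:
                new.add(("type", x, sup))
    return new - facts

# Iterate to a fixpoint: the materialized closure.
while True:
    delta = apply_rules(facts)
    if not delta:
        break
    facts |= delta

print(sorted(facts))
```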
Procedia PDF Downloads 271
1237 Molecular Characterization of Cysticercus tenuicolis of Slaughtered Livestock in Upper-Egypt Governorates
Authors: Mosaab A. Omara, Layla O. Elmajdoubb, Mohammad Saleh Al-Aboodyc, Ahmed ElSifyd, Ahmed O. Elkhtamd
Abstract:
The aim of this study is to present the molecular characterization of Cysticercus tenuicolis, the larval stage of Taenia hydatigena, from livestock isolates in Egypt, using amplification and sequencing of the mt-CO1 gene. We present a detailed picture of Cysticercus tenuicolis infection in ruminant animals in Upper Egypt. Cysticercus tenuicolis inhabits organs of ruminants such as the omentum, viscera, and liver. In the present study, the infection rate of Cysticercus tenuicolis was found to be 16% and 19% in sheep and goat samples, respectively. We report for the first time a larval stage of Taenia hydatigena detected in camel liver in Egypt. Cysticercus tenuicolis infection showed a higher prevalence in females than in males, and animals above 2 years of age showed a higher infection rate than younger animals. The preferred site of infection was the omentum: 70% in sheep and 68% in goat samples. The molecular characterization using the mitochondrial cytochrome c oxidase subunit 1 (CO1) gene of isolates from sheep, goats and camels corresponded to T. hydatigena. In this study, molecular characterization of T. hydatigena was carried out for the first time in Egypt. Molecular tools are of great assistance in characterizing the Cysticercus tenuicolis parasite, especially when the morphological characters cannot be distinguished, because the metacestodes are frequently confused with hydatid cyst infection, especially when these occur in the visceral organs. In the present study, Cysticercus tenuicolis showed high identity between the goat and sheep samples, while differences were found more frequently in the camel samples (10 base pairs). Clearly, molecular diagnosis of Cysticercus tenuicolis infection significantly helps to differentiate it from other such metacestodes.
Keywords: Cysticercus tenuicolis, ITS2, genetic, Qena, molecular, Taenia hydatigena
Procedia PDF Downloads 523
1236 Teaching Tools for Web Processing Services
Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr
Abstract:
Web Processing Services (WPS) are of growing interest in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was therefore developed at the University of Applied Sciences Stuttgart (HFT). It is limited to the scope of geostatistical interpolation of sample point data, where different algorithms can be used, such as IDW, Nearest Neighbor, etc. The Tools Collection aims to support understanding of the scope, definition and deployment of Web Processing Services. For example, it is necessary to characterize the input of the interpolation by the data set, the parameters for the algorithm, and the interpolation results (here a grid of interpolated values is assumed). This paper reports on first experiences with a pilot installation, which was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experience was gained with the Deegree software, one of several service suites (collections). Being written entirely in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component should be defined in terms of suitable standards; for example, the data output can be defined as a GML file. However, the choice of meaningful information pieces and user interactions is not free but is partially determined by the selected WPS processing suite.
Keywords: Deegree, interpolation, IDW, web processing service (WPS)
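As an illustration of the geostatistical interpolation the training tools target, the sketch below implements generic inverse distance weighting (IDW) of scattered sample points onto a grid. It is not the Deegree WPS interface, and the sample coordinates and values are invented.

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0):
    """Inverse Distance Weighting of scattered samples onto a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xy[:, 0] - gx[i, j], xy[:, 1] - gy[i, j])
            if np.any(d < 1e-12):           # grid node coincides with a sample point
                out[i, j] = values[np.argmin(d)]
            else:
                w = 1.0 / d ** power
                out[i, j] = np.sum(w * values) / np.sum(w)
    return out

# Hypothetical sample points (x, y) and measured values.
samples = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
measured = np.array([1.0, 3.0, 2.0])
grid = idw_grid(samples, measured, np.linspace(0, 10, 11), np.linspace(0, 8, 9))
print(grid.shape)   # (9, 11) grid of interpolated values
```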
Procedia PDF Downloads 355
1235 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children
Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco
Abstract:
Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice to consistently reduce the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children - II ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
Keywords: evolutionary computation, feature selection, classification, clustering
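A minimal sketch of the wrapper evaluation step is given below, assuming scikit-learn's GaussianMixture as the EM clusterer and a decision tree as a stand-in for C4.5. The evolutionary search (ENORA/NSGA-II) and the BASC-II data are not reproduced; the data here are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))           # stand-in for BASC-II attributes

def evaluate_subset(X, mask, n_clusters=3):
    """Two objectives for one candidate feature subset (a boolean mask)."""
    Xs = X[:, mask]
    em = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xs)
    labels = em.predict(Xs)
    log_likelihood = em.score(Xs)                       # objective 1: average log-likelihood (maximize)
    tree = DecisionTreeClassifier(random_state=0)       # C4.5-like surrogate
    accuracy = cross_val_score(tree, Xs, labels, cv=5).mean()   # objective 2 (maximize)
    return log_likelihood, accuracy

# A multi-objective EA (e.g., NSGA-II) would evolve such masks; here we score one random candidate.
mask = rng.random(20) > 0.5
mask[0] = True                            # ensure a non-empty subset
print(evaluate_subset(X, mask))
```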
Procedia PDF Downloads 370
1234 Ecological impacts of Cage Farming: A Case Study of Lake Victoria, Kenya
Authors: Mercy Chepkirui, Reuben Omondi, Paul Orina, Albert Getabu, Lewis Sitoki, Jonathan Munguti
Abstract:
Globally, the decline in capture fisheries as a result of the growing population and increasing awareness of the nutritional benefits of white meat has led to the development of aquaculture. This is anticipated to meet the increasing call for more food for the human population, which is likely to increase further by 2050. Statistics showed that more than 50% of the global future fish diet will come from aquaculture. Aquaculture began commercializing some decades ago; this is accredited to technological advancement from traditional to modern cultural systems, including cage farming. Cage farming technology has been rapidly growing since its inception in Lake Victoria, Kenya. Currently, over 6,000 cages have been set up in Kenyan waters, and this offers an excellent opportunity for recognition of Kenya’s government tactic to eliminate food insecurity and malnutrition, create employment and promote a Blue Economy. However, being an open farming enterprise is likely to emit large bulk of waste hence altering the ecosystem integrity of the lake. This is through increased chlorophyll-a pigments, alteration of the plankton community, macroinvertebrates, fish genetic pollution, transmission of fish diseases and pathogens. Cage farming further increases the nutrient loads leading to the production of harmful algal blooms, thus negatively affecting aquatic and human life. Despite the ecological transformation, cage farming provides a platform for the achievement of the Sustainable Development Goals of 2030, especially the achievement of food security and nutrition. Therefore, there is a need for Integrated Multitrophic Aquaculture as part of Blue Transformation for ecosystem monitoring.Keywords: aquaculture, ecosystem, blue economy, food security
Procedia PDF Downloads 79
1233 Time Series Forecasting (TSF) Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on the learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The hourly dataset we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window
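The look-back-window framing can be sketched as follows: a fixed-length window of past values becomes the model input and a fixed horizon the target, after which MAE and RMSE are computed. The series below is synthetic (not the Beijing dataset) and a simple persistence baseline stands in for the deep models.

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Turn a 1-D series into (X, y) pairs: look_back inputs -> horizon targets."""
    X, y = [], []
    for t in range(len(series) - look_back - horizon + 1):
        X.append(series[t:t + look_back])
        y.append(series[t + look_back:t + look_back + horizon])
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 60, 1000))               # stand-in for an hourly pollutant series
X, y = make_windows(series, look_back=24, horizon=3)    # one day in -> 3 hours out

# A trivial "persistence" baseline: repeat the last observed value across the horizon.
pred = np.repeat(X[:, -1:], y.shape[1], axis=1)
mae = np.mean(np.abs(pred - y))
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```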
Procedia PDF Downloads 154
1232 Combining Transcriptomics, Bioinformatics, Biosynthesis Networks and Chromatographic Analyses for Cotton Gossypium hirsutum L. Defense Volatiles Study
Authors: Ronald Villamar-Torres, Michael Staudt, Christopher Viot
Abstract:
Cotton Gossypium hirsutum L. is one of the most important industrial crops, producing the world leading natural textile fiber, but is very prone to arthropod attacks that reduce crop yield and quality. Cotton cultivation, therefore, makes an outstanding use of chemical pesticides. In reaction to herbivorous arthropods, cotton plants nevertheless show natural defense reactions, in particular through volatile organic compounds (VOCs) emissions. These natural defense mechanisms are nowadays underutilized but have a very high potential for cotton cultivation, and elucidating their genetic bases will help to improve their use. Simulating herbivory attacks by mechanical wounding of cotton plants in greenhouse, we studied by qPCR the changes in gene expression for genes of the terpenoids biosynthesis pathway. Differentially expressed genes corresponded to higher levels of the terpenoids biosynthesis pathway and not to enzymes synthesizing particular terpenoids. The genes were mapped on the G. hirsutum L. reference genome; their global relationships inside the general metabolic pathways and the biosynthesis of secondary metabolites were visualized with iPath2. The chromatographic profiles of VOCs emissions indicated first monoterpenes and sesquiterpenes emissions, dominantly four molecules known to be involved in plant reactions to arthropod attacks. As a result, the study permitted to identify potential key genes for the emission of volatile terpenoids by cotton plants in reaction to an arthropod attack, opening possibilities for molecular-assisted cotton breeding in benefit of smallholder cotton growers.Keywords: biosynthesis pathways, cotton, mechanisms of plant defense, terpenoids, volatile organic compounds
Procedia PDF Downloads 374
1231 Interval Bilevel Linear Fractional Programming
Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi
Abstract:
The Bilevel Programming (BP) model has been presented for a decision making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision making variables are divided between the two players and one’s choice affects the other’s benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower) where the constraint region of the upper level problem is implicitly determined by the lower level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e. they may be interval. In this paper we develop an algorithm for solving interval bilevel linear fractional programming problems. That is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are interval and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems have been derived and then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. Then we can find the best and the worst optimal values of the leader objective function by two algorithms.Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients
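A minimal sketch of the reduction used for each derived problem is shown below: a single linear fractional program is converted to a linear program via the Charnes-Cooper transformation and solved with scipy's linprog. The coefficients are invented, and the interval and bilevel structure of the full model is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (c·x + alpha) / (d·x + beta)  s.t.  A x <= b, x >= 0   (illustrative coefficients)
c, alpha = np.array([2.0, 1.0]), 0.0
d, beta = np.array([1.0, 3.0]), 1.0
A, b = np.array([[1.0, 1.0], [2.0, 1.0]]), np.array([4.0, 6.0])

# Charnes-Cooper: y = t*x, t = 1/(d·x + beta)  ->  a linear program in (y, t).
n = len(c)
obj = -np.concatenate([c, [alpha]])                  # linprog minimizes, so negate
A_ub = np.hstack([A, -b.reshape(-1, 1)])             # A y - b t <= 0
b_ub = np.zeros(A.shape[0])
A_eq = np.concatenate([d, [beta]]).reshape(1, -1)    # d·y + beta t = 1
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x_opt = y / t                                        # recover the original variables
print("optimal x:", x_opt,
      "objective:", (c @ x_opt + alpha) / (d @ x_opt + beta))
```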
Procedia PDF Downloads 446
1230 Flood Hazard and Risk Mapping to Assess Ice-Jam Flood Mitigation Measures
Authors: Karl-Erich Lindenschmidt, Apurba Das, Joel Trudell, Keanne Russell
Abstract:
In this presentation, we explore options for mitigating ice-jam flooding along the Athabasca River in western Canada. We consider not only flood hazard, expressed in this case as the probability of flood depths and extents being exceeded, but also flood risk, in which annual expected damages are calculated. Calculating flood risk allows a cost-benefit analysis to be made, so that decisions on the best mitigation options are not based solely on flood hazard but also on the costs related to flood damages and the benefits of mitigation. A river ice model is used to simulate extreme ice-jam flood events, with which scenarios are run to determine flood exposure and damages in flood-prone areas along the river. We concentrate on four mitigation options: the placement of a dike, artificial breakage of the ice cover along the river, the installation of an ice-control structure, and the construction of a reservoir. However, no mitigation option is totally failsafe. For example, dikes can still be overtopped and breached, and ice jams may still occur in areas of the river where ice covers have been artificially broken up. Hence, for all options, it is recommended that zoning of building developments away from greater flood hazard areas be upheld. Flood mitigation can have the negative effect of giving inhabitants a false sense of security that flooding may not happen again, leading to zoning policies being relaxed. (Text adapted from Lindenschmidt [2022] "Ice Destabilization Study - Phase 2", submitted to the Regional Municipality of Wood Buffalo, Alberta, Canada)
Keywords: ice jam, flood hazard, river ice modelling, flood risk
Procedia PDF Downloads 185
1229 Vehicular Speed Detection Camera System Using Video Stream
Authors: C. A. Anser Pasha
Abstract:
In this paper, a new vehicular Speed Detection Camera System (SDCS) is presented that is applicable as an alternative to traditional radars with the same or even better accuracy. The real-time measurement and analysis of various traffic parameters such as speed and number of vehicles are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS process can be divided into three successive phases. The first phase is object detection, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, which rectifies the major drawback of using only adaptive background subtraction. The second phase is object tracking, which consists of three successive operations: object segmentation, object labeling, and object center extraction. The tracking operation takes into consideration the different possible scenarios of the moving object, such as simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving while another enters the scene. The third phase is speed calculation, in which speed is derived from the number of frames consumed by the object to pass through the scene.
Keywords: radar, image processing, detection, tracking, segmentation
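The hybrid detection idea in the first phase can be sketched with OpenCV as below, combining adaptive background subtraction (MOG2) with three-frame differencing. This is a generic sketch rather than the SDCS implementation, and the video file name is hypothetical.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")                  # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

prev2, prev1 = None, None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Adaptive background subtraction mask.
    bg_mask = backsub.apply(frame)

    # Three-frame differencing mask (reduces ghosting left by background subtraction alone).
    if prev2 is not None:
        d1 = cv2.absdiff(prev1, gray)
        d2 = cv2.absdiff(gray, prev2)
        _, d1 = cv2.threshold(d1, 25, 255, cv2.THRESH_BINARY)
        _, d2 = cv2.threshold(d2, 25, 255, cv2.THRESH_BINARY)
        diff_mask = cv2.bitwise_and(d1, d2)
        motion = cv2.bitwise_and(bg_mask, diff_mask)   # hybrid foreground mask
        n_blobs, labels = cv2.connectedComponents(motion)   # candidate vehicle blobs
    prev2, prev1 = prev1, gray

cap.release()
```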
Procedia PDF Downloads 467
1228 A Review on Medical Image Registration Techniques
Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry
Abstract:
This paper discusses the current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours. A methodological analysis and synthesis of quality literature was done, providing a platform for developing a good foundation for research in this field, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enhancing fast and accurate curative treatment of patients. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and also to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.
Keywords: image registration techniques, medical images, neural networks, optimisation, transformation
Procedia PDF Downloads 178
1227 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space
Authors: Nanjiang Chen
Abstract:
In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation
Procedia PDF Downloads 46
1226 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting
Authors: Kemal Polat
Abstract:
In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) for classification of Diabetes disease dataset has been used. The aims of this study are to reduce the variance within attributes of diabetes dataset and to improve the classification accuracy of classifier algorithm transforming from non-linear separable datasets to linearly separable datasets. Pima Indians Diabetes dataset has two classes including normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of K-means clustering method and is one of most used clustering methods in data mining and machine learning applications. In this study, as the first stage, fuzzy C-means clustering process has been used for finding the centers of attributes in Pima Indians diabetes dataset and then weighted the dataset according to the ratios of the means of attributes to centers of theirs. Secondly, after weighting process, the classifier algorithms including support vector machine (SVM) and k-NN (k- nearest neighbor) classifiers have been used for classifying weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) has obtained very promising results in the classification of Pima Indians diabetes dataset.Keywords: fuzzy C-means clustering, fuzzy C-means clustering based attribute weighting, Pima Indians diabetes, SVM
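A minimal sketch of the attribute-weighting idea is given below, assuming k-means centers as a stand-in for fuzzy C-means (scikit-learn ships no FCM implementation) and synthetic data in place of the Pima Indians dataset. Each attribute is scaled by the ratio of its mean to the mean of its per-attribute cluster centers before classification.

```python
import numpy as np
from sklearn.cluster import KMeans            # stand-in for fuzzy C-means centers
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(loc=5.0, scale=2.0, size=(768, 8))   # Pima-like attribute matrix (synthetic)
y = rng.integers(0, 2, size=768)                    # 0 = normal, 1 = diabetes (synthetic labels)

def fcm_like_attribute_weights(X, n_clusters=2):
    """Per attribute: ratio of the attribute mean to the mean of its cluster centers."""
    weights = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        centers = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit(X[:, [j]]).cluster_centers_
        weights[j] = X[:, j].mean() / centers.mean()
    return weights

w = fcm_like_attribute_weights(X)
Xw = X * w                                          # weighted dataset fed to the classifier
print("SVM accuracy:", cross_val_score(SVC(), Xw, y, cv=10).mean())
```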
Procedia PDF Downloads 413
1225 From Wave-Powered Propulsion to Flight with Membrane Wings: Insights Powered by High-Fidelity Immersed Boundary Methods based FSI Simulations
Authors: Rajat Mittal, Jung Hee Seo, Jacob Turner, Harshal Raut
Abstract:
The perpetual advancement in computational capabilities, coupled with the continuous evolution of software tools and numerical algorithms, is creating novel avenues for research, exploration, and application at the nexus of computational fluid and structural mechanics. Fish leverage their remarkably flexible bodies and fins to harness energy from vortices, propelling themselves with an elegance and efficiency that captivates engineers. Bats fly with unparalleled agility and speed by using their flexible membrane wings. Wave-assisted propulsion (WAP) systems, utilizing elastically mounted hydrofoils, convert wave energy into thrust. Each of these problems involves a complex and elegant interplay between fluid dynamics and structural mechanics. Historically, investigations into such phenomena were constrained by available tools, but modern computational advancements now facilitate exploration of these multi-physics challenges with an unprecedented level of fidelity, precision, and realism. In this work, the author will discuss projects that harness the capabilities of high-fidelity sharp-interface immersed boundary methods to address a spectrum of engineering and biological challenges involving fluid-structure interaction.Keywords: immersed boundary methods, CFD, bioflight, fluid structure interaction
Procedia PDF Downloads 70
1224 Real-Time Multi-Vehicle Tracking Application at Intersections Based on Feature Selection in Combination with Color Attribution
Authors: Qiang Zhang, Xiaojian Hu
Abstract:
In multi-vehicle tracking, based on feature selection, the tracking system efficiently tracks vehicles in a video with minimal error in combination with color attribution, which focuses on presenting a simple and fast, yet accurate and robust solution to the problem such as inaccurately and untimely responses of statistics-based adaptive traffic control system in the intersection scenario. In this study, a real-time tracking system is proposed for multi-vehicle tracking in the intersection scene. Considering the complexity and application feasibility of the algorithm, in the object detection step, the detection result provided by virtual loops were post-processed and then used as the input for the tracker. For the tracker, lightweight methods were designed to extract and select features and incorporate them into the adaptive color tracking (ACT) framework. And the approbatory online feature selection algorithms are integrated on the mature ACT system with good compatibility. The proposed feature selection methods and multi-vehicle tracking method are evaluated on KITTI datasets and show efficient vehicle tracking performance when compared to the other state-of-the-art approaches in the same category. And the system performs excellently on the video sequences recorded at the intersection. Furthermore, the presented vehicle tracking system is suitable for surveillance applications.Keywords: real-time, multi-vehicle tracking, feature selection, color attribution
Procedia PDF Downloads 163
1223 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
With extensive application, the performance of unimodal biometrics systems has to face a diversity of problems such as signal and background noise, distortion, and environment differences. Therefore, multimodal biometric systems are proposed to solve the above stated problems. This paper introduces a bimodal biometric recognition system based on the extracted features of the human palm print and iris. Palm print biometric is fairly a new evolving technology that is used to identify people by their palm features. The iris is a strong competitor together with face and fingerprints for presence in multimodal recognition systems. In this research, we introduced an algorithm to the combination of the palm and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous as features of different biometric modalities are used, these features will be concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature. The proposed algorithm will be applied to the Institute of Technology of Delhi (IITD) database and its performance will be compared with various iris recognition algorithms found in the literature.Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, the Scale Invariant Feature Transform (SIFT)
Procedia PDF Downloads 235
1222 The Role of Non-Native Plant Species in Enhancing Food Security in Sub-Saharan Africa
Authors: Thabiso Michael Mokotjomela, Jasper Knight
Abstract:
Intensification of agricultural food production in sub-Saharan Africa is of paramount importance as a means of increasing the food security of communities that are already experiencing a range of environmental and socio-economic stresses. However, achieving this aim faces several challenges including ongoing climate change, increased resistance of diseases and pests, extreme environmental degradation partly due to biological invasions, land tenure and management practices, socio-economic developments of rural populations, and national population growth. In particular, non-native plant species tend to display greater adaptation capacity to environmental stress than native species that form important food resource base for human beings, thus suggesting a potential for usage to shift accordingly. Based on review of the historical benefits of non-native plant species in food production in sub-Saharan Africa, we propose that use of non-invasive, non-native plant species and/or the genetic modification of native species might be viable options for future agricultural sustainability in this region. Coupled with strategic foresight planning (e.g. use of biological control agents that suppress plant species’ invasions), the consumptive use of already-introduced non-native species might help in containment and control of possible negative environmental impacts of non-native species on native species, ecosystems and biodiversity, and soil fertility and hydrology. Use of non-native species in food production should be accompanied by low cost agroecology practices (e.g. conservation agriculture and agrobiodiversity) that may promote the gradual recovery of natural capital, ecosystem services, and promote conservation of the natural environment as well as enhance food security.Keywords: food security, invasive species, agroecology, agrobiodiversity, socio-economic stresses
Procedia PDF Downloads 369
1221 Interaction of Racial and Gender Disparities in Salivary Gland Cancer Survival in the United States: A Surveillance Epidemiology and End Results Study
Authors: Sarpong Boateng, Rohit Balasundaram, Akua Afrah Amoah
Abstract:
Introduction: Racial and Gender disparities have been found to be independently associated with Salivary Gland Cancers (SGCs) survival; however, to our best knowledge, there are no previous studies on the interplay of these social determinants on the prognosis of SGCs. The objective of this study was to examine the joint effect of race and gender on the survival of SGCs. Methods: We analyzed survival outcomes of 13,547 histologically confirmed cases of SGCs using the Surveillance Epidemiology and End Results (SEER) database (2004 to 2015). Multivariable Cox regression analysis and Kaplan-Meier curves were used to estimate hazard ratios (HR) after controlling for age, tumor characteristics, treatment type and year of diagnosis. Results: 73.5% of the participants were whites, 8.5% were blacks, 10.1% were Hispanics and 58.5% were males. Overall, males had poorer survival than females (HR = 1.16, p=0.003). In the adjusted multivariable model, there were no significant differences in survival by race. However, the interaction of gender and race was statistically significant (p=0.01) in Hispanic males. Thus, compared to White females (reference), Hispanic females had significantly better survival (HR=0.53), whiles Hispanic males had worse survival outcomes (HR=1.82) for SGCs. Conclusions: Our results show significant interactions between race and gender, with racial disparities varying across the different genders for SGCs survival. This study indicates that racial and gender differences are crucial factors to be considered in the prognostic counseling and management of patients with SGCs. Biologic factors, tumor genetic characteristics, chemotherapy, lifestyle, environmental exposures, and socioeconomic and dietary factors are potential yet proven reasons that could account for racial and gender differences in the survival of SGCs.Keywords: salivary, cancer, survival, disparity, race, gender, SEER
Procedia PDF Downloads 201
1220 Beyond Baudrillard: A Critical Intersection between Semiotics and Materialism
Authors: Francesco Piluso
Abstract:
Nowadays, to restore the deconstructive power of semiotics implies a critical analysis of neoliberal ideology, and, even more critically, a confrontation with materialist perspective. The theoretical path of Jean Baudrillard is crucial to understand the ambivalence of this intersection. A semiotic critique of Baudrillard’s work, through tools of both structuralism and interpretative semiotics, has the aim to give materialism a new consistent semiotic approach and vice-versa. According to Baudrillard, the commodity form is characterized by the same abstract and systemic logic of the sign-form, in which the production of the signified (use-value) is a mere ideological mean for the reproduction of the signifiers-chain (exchange-value). Nevertheless, this parallelism is broken by the author himself: if the use-value is deconstructed in its relative logic, the signified and the referent, both as discrete and positive elements, are collapsed on the same plane at the shadows of the signified forms. These divergent considerations lead Baudrillard to the same crucial point: the dismissal of the material world, replaced by the hyperreality as reproduction of a semiotic (genetic) Code. The stress on the concept of form, as an epistemological and semiotic tool to analyse the construction of values in the consumer society, has led to the Code as its ontological drift. In other words, Baudrillard seems to enclose consumer society (and reality) in this immanent and self-fetishized world of signs–an ideological perspective that mystifies the gravity of the material relationships between Northern-Western World and Third World. The notion of Encyclopaedia by Umberto Eco is the key to overturn the relationship of immanence/transcendence between the Code and the economic political of the sign, by understanding the former as an ideological plane within the encyclopedia itself. Therefore, rather than building semiotic (hyper)realities, semiotics has to deal with materialism in terms of material relationships of power which are mystified and reproduced through such ideological ontologies of signs.Keywords: Baudrillard, Code, Eco, Encyclopaedia, epistemology vs. ontology, semiotics vs. materialism
Procedia PDF Downloads 163
1219 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform
Authors: David Jurado, Carlos Ávila
Abstract:
Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications that are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and circle Hough transform. This provides the geometrical features needed for an automatic mask design which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to the low sensitivity to noisy images and low contrast mammographies.Keywords: breast cancer, segmentation, X-ray imaging, hough transform, image analysis
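The circle-Hough step for isolating small, high-contrast candidate objects can be sketched with OpenCV as follows. The file name is hypothetical, and the parameter values are illustrative assumptions that would need tuning on real mammograms rather than being the authors' settings.

```python
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Local contrast enhancement before edge extraction.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enh = clahe.apply(blur)

# Circle Hough transform: small radii target microcalcification-sized objects.
circles = cv2.HoughCircles(enh, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=120, param2=15, minRadius=1, maxRadius=10)

# Build a region-of-interest mask from the detected circles.
mask = np.zeros_like(img)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)

# Simple statistical features of the masked regions of interest.
roi_pixels = img[mask == 255]
print("candidates:", 0 if circles is None else circles.shape[1],
      "mean ROI intensity:", float(roi_pixels.mean()) if roi_pixels.size else None)
```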
Procedia PDF Downloads 83
1218 Comparative Analysis of Classification Methods in Determining Non-Active Student Characteristics in Indonesia Open University
Authors: Dewi Juliah Ratnaningsih, Imas Sukaesih Sitanggang
Abstract:
Classification is one of the data mining techniques that aims to discover a model from training data that distinguishes records into the appropriate category or class. Data mining classification methods can be applied in education, for example, to determine the classification of non-active students at Indonesia Open University. This paper presents a comparison of three classification methods: Naïve Bayes, Bagging, and C4.5. The criteria used to evaluate the performance of the three classification methods are stratified cross-validation, confusion matrix, the value of the area under the ROC curve (AUC), recall, precision, and F-measure. The data used for this paper are from the non-active Indonesia Open University students in the registration periods 2004.1 to 2012.2. For the target analysis, non-active students were divided into 3 groups: C1, C2, and C3. The data analyzed comprise 4,173 students. Results of the study show: (1) the Bagging method gave a higher degree of classification accuracy than Naïve Bayes and C4.5, (2) the Bagging classification accuracy rate is 82.99%, while those of Naïve Bayes and C4.5 are 80.04% and 82.74%, respectively, (3) the classification tree produced by the Bagging method has a large number of nodes, so it is quite difficult to use in decision making, (4) the classification of non-active Indonesia Open University student characteristics therefore uses the C4.5 algorithm, and (5) based on the C4.5 algorithm, there are 5 interesting rules which can describe the characteristics of non-active Indonesia Open University students.
Keywords: comparative analysis, data mining, classification, Bagging, Naïve Bayes, C4.5, non-active students, Indonesia Open University
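A sketch of the comparison protocol using scikit-learn is shown below, with a decision tree standing in for C4.5 and synthetic records standing in for the registration data. The evaluation mirrors the criteria listed above (stratified cross-validation, accuracy, F-measure, AUC).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic stand-in for 4,173 student records in 3 target groups (C1, C2, C3).
X, y = make_classification(n_samples=4173, n_classes=3, n_informative=6, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=cv,
                            scoring=["accuracy", "f1_macro", "roc_auc_ovr"])
    print(name,
          "acc=%.3f" % scores["test_accuracy"].mean(),
          "F1=%.3f" % scores["test_f1_macro"].mean(),
          "AUC=%.3f" % scores["test_roc_auc_ovr"].mean())
```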
Procedia PDF Downloads 315
1217 Crop Classification using Unmanned Aerial Vehicle Images
Authors: Iqra Yaseen
Abstract:
Image processing, one of the well-known areas of computer science and engineering in the context of computer vision, has been essential to automation. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture. Many applications have been developed using this approach for crop identification and classification, land and disease detection, and for measuring other crop parameters. Vegetation localization is the basis of performing these tasks, as it helps to identify the area where the crop is present. The productivity of the agriculture industry can be increased via image processing based upon Unmanned Aerial Vehicle (UAV) and satellite photography. In this paper, we apply machine learning techniques such as Convolutional Neural Networks, deep learning, image processing, classification, and You Only Look Once (YOLO) to a UAV imaging dataset to divide the crops into distinct groups and choose the best way to use them.
Keywords: image processing, UAV, YOLO, CNN, deep learning, classification
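A minimal PyTorch sketch of a CNN tile classifier of the kind applied to UAV imagery is shown below. The tile size and the number of crop classes are assumptions, and the YOLO detection stage is not reproduced here.

```python
import torch
import torch.nn as nn

class CropCNN(nn.Module):
    """Tiny CNN for classifying UAV image tiles into crop types."""
    def __init__(self, n_classes=4):                  # class count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CropCNN()
tiles = torch.randn(8, 3, 64, 64)                     # a batch of 64x64 UAV tiles (random here)
logits = model(tiles)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()                                       # one illustrative training step
print(logits.shape)                                   # torch.Size([8, 4])
```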
Procedia PDF Downloads 107
1216 Modeling of Surface Roughness in Hard Turning of DIN 1.2210 Cold Work Tool Steel with Ceramic Tools
Authors: Mehmet Erdi Korkmaz, Mustafa Günay
Abstract:
Nowadays, grinding is frequently replaced with hard turning for reducing set up time and higher accuracy. This paper focused on mathematical modeling of average surface roughness (Ra) in hard turning of AISI L2 grade (DIN 1.2210) cold work tool steel with ceramic tools. The steel was hardened to 60±1 HRC after the heat treatment process. Cutting speed, feed rate, depth of cut and tool nose radius was chosen as the cutting conditions. The uncoated ceramic cutting tools were used in the machining experiments. The machining experiments were performed according to Taguchi L27 orthogonal array on CNC lathe. Ra values were calculated by averaging three roughness values obtained from three different points of machined surface. The influences of cutting conditions on surface roughness were evaluated as statistical and experimental. The analysis of variance (ANOVA) with 95% confidence level was applied for statistical analysis of experimental results. Finally, mathematical models were developed using the artificial neural networks (ANN). ANOVA results show that feed rate is the dominant factor affecting surface roughness, followed by tool nose radius and cutting speed.Keywords: ANN, hard turning, DIN 1.2210, surface roughness, Taguchi method
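The ANN modeling step can be sketched as below, mapping cutting speed, feed rate, depth of cut and tool nose radius to Ra with a small scikit-learn network. The rows are synthetic stand-ins for the Taguchi L27 measurements, not the paper's experimental data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: cutting speed, feed rate, depth of cut, nose radius (27 synthetic L27-like rows).
X = rng.uniform([80, 0.05, 0.1, 0.4], [200, 0.25, 0.6, 1.2], size=(27, 4))
# Toy response in which feed rate dominates Ra, as the ANOVA in the paper indicates.
Ra = 4.5 * X[:, 1] + 0.002 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(0, 0.02, 27)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
print("R^2 (3-fold CV):", cross_val_score(ann, X, Ra, cv=3).mean())
ann.fit(X, Ra)
print("predicted Ra:", ann.predict([[150, 0.12, 0.3, 0.8]]))
```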
Procedia PDF Downloads 371
1215 Counter-Terrorism Policies in the Wider Black Sea Region: Evaluating the Robustness of Constantza Port under Potential Terror Attacks
Authors: A. V. Popa, C. Barna, V. Mihalache
Abstract:
Being the largest port at the Black Sea and functioning as a civil and military nodal point between Europe and Asia, Constantza Port has become a potential target on the terrorist international agenda. The authors use qualitative research based on both face-to-face and online semi-structured interviews with relevant stakeholders (top decision-makers in the Romanian Naval Authority, Romanian Maritime Training Centre, National Company "Maritime Ports Administration" and military staff) in order to detect potential vulnerabilities which might be exploited by terrorists in the case of Constantza Port. Likewise, this will enable bringing together the experts’ opinions on potential mitigation measures. Subsequently, this paper formulates various counter-terrorism policies to enhance the robustness of Constantza Port under potential terror attacks and connects them with the attributions in the field of critical infrastructure protection conferred by the law to the lead national authority for preventing and countering terrorism, namely the Romanian Intelligence Service. Extending the national counterterrorism efforts to an international level, the authors propose the establishment – among the experts of the NATO member states of the Wider Black Sea Region – of a platform for the exchange of know-how and best practices in the field of critical infrastructure protection.Keywords: Constantza Port, counter-terrorism policies, critical infrastructure protection, security, Wider Black Sea Region
Procedia PDF Downloads 295
1214 Antibacterial Evaluation, in Silico ADME and QSAR Studies of Some Benzimidazole Derivatives
Authors: Strahinja Kovačević, Lidija Jevrić, Miloš Kuzmanović, Sanja Podunavac-Kuzmanović
Abstract:
In this paper, various derivatives of benzimidazole have been evaluated against Gram-negative bacteria Escherichia coli. For all investigated compounds the minimum inhibitory concentration (MIC) was determined. Quantitative structure-activity relationships (QSAR) attempts to find consistent relationships between the variations in the values of molecular properties and the biological activity for a series of compounds so that these rules can be used to evaluate new chemical entities. The correlation between MIC and some absorption, distribution, metabolism and excretion (ADME) parameters was investigated, and the mathematical models for predicting the antibacterial activity of this class of compounds were developed. The quality of the multiple linear regression (MLR) models was validated by the leave-one-out (LOO) technique, as well as by the calculation of the statistical parameters for the developed models and the results are discussed on the basis of the statistical data. The results of this study indicate that ADME parameters have a significant effect on the antibacterial activity of this class of compounds. Principal component analysis (PCA) and agglomerative hierarchical clustering algorithms (HCA) confirmed that the investigated molecules can be classified into groups on the basis of the ADME parameters: Madin-Darby Canine Kidney cell permeability (MDCK), Plasma protein binding (PPB%), human intestinal absorption (HIA%) and human colon carcinoma cell permeability (Caco-2).Keywords: benzimidazoles, QSAR, ADME, in silico
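A minimal sketch of the MLR/LOO workflow is given below: a linear model relates ADME descriptors (MDCK, PPB%, HIA%, Caco-2) to the activity, and leave-one-out cross-validation yields a Q² statistic. All descriptor and response values here are synthetic, not the paper's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
# Columns: MDCK permeability, PPB%, HIA%, Caco-2 permeability (synthetic, standardized values).
X = rng.normal(size=(20, 4))
log_MIC = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 20)   # toy response

mlr = LinearRegression().fit(X, log_MIC)
pred_loo = cross_val_predict(LinearRegression(), X, log_MIC, cv=LeaveOneOut())

press = np.sum((log_MIC - pred_loo) ** 2)
q2 = 1 - press / np.sum((log_MIC - log_MIC.mean()) ** 2)   # LOO cross-validated Q^2
print("R^2 =", mlr.score(X, log_MIC), " Q^2(LOO) =", q2)
```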
Procedia PDF Downloads 375
1213 Considerations in Pregnancy Followed by Obesity Surgery
Authors: Maryam Nazari, Atefeh Ghanbari, Saghar Noorinia
Abstract:
Obesity, as an abnormal or excessive accumulation of fat, is caused by genetic, behavioral and environmental factors. Recently, obesity surgeries, such as bariatric surgery, as the last measure to control obesity, have attracted experts and society, especially women, attention, so knowing the possible complications of this major surgery and their control in reproductive age is of particular importance due to its effects on pregnancy outcomes. Bariatric surgery reduces the risk of diabetes and high blood pressure associated with pregnancy, premature birth, macrosomia, stillbirth and dumping syndrome. Although in the first months after surgery, nausea and vomiting caused by changes in intra-abdominal pressure are associated with an increased risk of malabsorption of micronutrients such as folic acid, iron, vitamin B1, D, calcium, selenium and phosphorus and finally, fetal growth disorder. Moreover, serum levels of micronutrients such as vitamin D, calcium, and iron in mothers who used to have bariatric surgery and their babies have been shown to be lower than in mothers without a history of bariatric surgery. Moreover, vitamin A deficiency is shown to be more widespread in pregnancies after bariatric surgery, which leads to visual problems in newborns and premature delivery. However, complications such as the duration of hospitalization of newborns in the NICU, disease rate in the first 28 days of life and congenital anomalies are not significantly different in babies born to mothers undergoing bariatric surgery compared to the control group. In spite of the vast advantages following obesity surgeries, due to the catabolic conditions and severe weight loss followed by such major intervention and the probability of nutrients malnutrition in a pregnant woman and her baby, after having surgery, at least 12 to 18 months should be considered to get pregnant as a recovery period. In addition, taking essential supplements before and at least 6 months after this approach is recommended.Keywords: bariatric surgery, pregnancy, malnutrition, vitamin and mineral deficiency
Procedia PDF Downloads 93
1212 Hydrologic Impacts of Climate Change and Urbanization on Quetta Watershed, Pakistan
Authors: Malik Muhammad Akhtar, Tanzeel Khan
Abstract:
Various natural and anthropogenic factors are affecting recharge processes in urban areas due to intense urban expansion; land-use/landcover change (LULC) and climate considerably influence the ecosystem functions. In Quetta, a terrible transformation of LULC has occurred due to an increase in human population and rapid urbanization over the past years; according to the Pakistan Bureau of Statistics, the increase of population from 252,577 in 1972 to 2,275,699 in 2017 shows an abrupt rise which in turn has affected the aquifer recharge capability, vegetation, and precipitation at Quetta. This study focuses on the influence of population growth and LULC on groundwater table level by employing multi-temporal, multispectral satellite data during the selected years, i.e. 2014, 2017, and 2020. The results of land classification showed that barren land had shown a considerable decrease, whereas the urban area has increased over time from 152.4sq/km in 2014 to 195.5sq/km in 2017 to 283.3sq/km in 2020, whereas surface-water area coverage has increased since 2014 because of construction of few dams around the valley. Rapid urbanization stresses limited hydrology resources, and this needs to be addressed to conserve/sustain the resources through educating the local community, awareness regarding water use and climate change, and supporting artificial recharge of the aquifers.Keywords: climate changes, urbanization, GIS, land use, Quetta, watershed
Procedia PDF Downloads 124
1211 Functional Connectivity Signatures of Polygenic Depression Risk in Youth
Authors: Louise Moles, Steve Riley, Sarah D. Lichenstein, Marzieh Babaeianjelodar, Robert Kohler, Annie Cheng, Corey Horien Abigail Greene, Wenjing Luo, Jonathan Ahern, Bohan Xu, Yize Zhao, Chun Chieh Fan, R. Todd Constable, Sarah W. Yip
Abstract:
Background: Risks for depression are myriad and include both genetic and brain-based factors. However, relationships between these systems are poorly understood, limiting understanding of disease etiology, particularly at the developmental level. Methods: We use a data-driven machine learning approach connectome-based predictive modeling (CPM) to identify functional connectivity signatures associated with polygenic risk scores for depression (DEP-PRS) among youth from the Adolescent Brain and Cognitive Development (ABCD) study across diverse brain states, i.e., during resting state, during affective working memory, during response inhibition, during reward processing. Results: Using 10-fold cross-validation with 100 iterations and permutation testing, CPM identified connectivity signatures of DEP-PRS across all examined brain states (rho’s=0.20-0.27, p’s<.001). Across brain states, DEP-PRS was positively predicted by increased connectivity between frontoparietal and salience networks, increased motor-sensory network connectivity, decreased salience to subcortical connectivity, and decreased subcortical to motor-sensory connectivity. Subsampling analyses demonstrated that model accuracies were robust across random subsamples of N’s=1,000, N’s=500, and N’s=250 but became unstable at N’s=100. Conclusions: These data, for the first time, identify neural networks of polygenic depression risk in a large sample of youth before the onset of significant clinical impairment. Identified networks may be considered potential treatment targets or vulnerability markers for depression risk.Keywords: genetics, functional connectivity, pre-adolescents, depression
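The core of connectome-based predictive modeling can be sketched as follows: within each cross-validation fold, edges are selected by their correlation with the target (here DEP-PRS), summed into positive- and negative-network strength scores, and fed to a linear model. The connectivity matrices and PRS values below are synthetic, not ABCD data.

```python
import numpy as np
from scipy import stats
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n_subj, n_edges = 500, 1000
edges = rng.normal(size=(n_subj, n_edges))                # vectorized connectomes (synthetic)
prs = edges[:, :20].sum(axis=1) * 0.05 + rng.normal(size=n_subj)   # toy DEP-PRS

preds = np.empty(n_subj)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(edges):
    # Edge selection on the training fold only.
    rp = [stats.pearsonr(edges[train, j], prs[train]) for j in range(n_edges)]
    r = np.array([x[0] for x in rp])
    p = np.array([x[1] for x in rp])
    pos, neg = (p < 0.01) & (r > 0), (p < 0.01) & (r < 0)
    # Network strength = sum of selected positive edges minus selected negative edges.
    strength = edges[:, pos].sum(axis=1) - edges[:, neg].sum(axis=1)
    slope, intercept = np.polyfit(strength[train], prs[train], 1)
    preds[test] = slope * strength[test] + intercept

print("cross-validated rho:", round(stats.spearmanr(preds, prs).correlation, 3))
```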
Procedia PDF Downloads 58
1210 Hardware in the Loop Platform for Virtual Commissioning: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Ana Maria Macarulla
Abstract:
Hydraulic-press commissioning consumes a great amount of man-hours, due to the fact that it takes place several miles away from where it has been designed. This factor became exacerbated due to control designers’ lack of knowledge about which will be the final controller gains before they start working with it. Virtual commissioning has been postulated as an optimal solution to deal with this lack of knowledge. Here, a case study is presented in which a controller is set up against a real-time model based on a hydraulic-press. The press model is designed following manufacturer specifications and it is embedded in a real-time simulator. This methodology ensures that the model achieves similar responses as the real machine that would be placed on the industry. A deterministic communication protocol is in charge of the bidirectional information transmission between the real-time model and the controller. This platform allows the engineer to test and verify the final control responses with exactly the same hardware that is going to be installed in the hydraulic-press, in other words, realize a virtual commissioning of the electro-hydraulic actuator. The Hardware in the Loop (HiL) platform validates in laboratory conditions and harmless for the machine the control algorithms designed, which allows embedding them afterwards in the industrial environment without further modifications.Keywords: deterministic communication protocol, electro-hydraulic actuator, hardware in the loop, real-time, virtual commissioning
Procedia PDF Downloads 143