Search results for: large graph
7148 Characterization of the GntR Family Transcriptional Regulator Rv0792c: A Potential Drug Target for Mycobacterium tuberculosis
Authors: Thanusha D. Abeywickrama, Inoka C. Perera, Genji Kurisu
Abstract:
Tuberculosis, considered the ninth leading cause of death worldwide, is caused by a single infectious agent, M. tuberculosis, and the drug-resistant nature of this bacterium is a continuing threat to the world. Preventive TB treatment is therefore expanding, and this study was designed to analyze the regulatory mechanism of the GntR transcriptional regulator gene Rv0792c, which lies between several genes coding for hypothetical proteins, a monooxygenase, and an oxidoreductase. The gene encoding Rv0792c was cloned into pET28a, and the expressed protein was purified to near homogeneity by nickel affinity chromatography. It was previously reported that the protein binds within the intergenic region (BS region) between the Rv0792c gene and the monooxygenase gene (Rv0793). Three protein molecules bound within the BS region, suggesting tight control of the monooxygenase as well as of its own gene. Since the monooxygenase plays a key role in metabolism, this gene may have a global regulatory role. The natural ligand for this regulator is still under investigation. To characterize the structure of the Rv0792 protein, a circular dichroism (CD) spectrum was recorded to determine its secondary structure elements. The CD data at room temperature indicated 17.4% helix, 21.8% antiparallel sheet, 5.1% parallel sheet, 12.3% turn, and 43.5% other. Differential scanning calorimetry (DSC) was conducted to assess the thermal stability of Rv0792; the melting temperature of the protein is 57.2 ± 0.6 °C. The best fit of the heat capacity (Cp) versus temperature curve was obtained for a non-two-state model, which indicates that the folding of Rv0792 occurs through stable intermediates. The peak area (∆HCal) and peak shape (∆HVant) enthalpies were calculated from the curve, and the ratio ∆HCal/∆HVant was close to 0.5, suggesting a dimeric nature of the protein.
Keywords: CD spectrum, DSC analysis, GntR transcriptional regulator, protein structure
Procedia PDF Downloads 223
7147 Study of Virus/es Threatening Large Cardamom Cultivation in Sikkim and Darjeeling Hills of Northeast India
Authors: Dharmendra Pratap
Abstract:
Large cardamom (Amomum subulatum), family Zingiberaceae, is an aromatic spice crop with rich medicinal value. Large cardamom is as synonymous with Sikkim as tea is with Darjeeling: Sikkim alone contributes up to 88% of India's large cardamom production, and India is the world leader, producing over 50% of the global yield. However, the production of large cardamom has declined by almost half over the last two decades. The economic losses have been attributed to two viral diseases, namely chirke and foorkey. Chirke disease is characterized by light and dark green streaks on leaves; the affected leaves exhibit streak mosaic, which gradually coalesces, turns brown, and eventually dries up. Foorkey disease is characterized by excessive sprouting and the formation of bushy dwarf clumps at the base of mother plants that gradually die. In our surveys in the Sikkim–Darjeeling hill area during 2012-14, 40-45% of plants were found to be affected with foorkey disease and 10-15% with chirke. Mechanical and aphid transmission studies showed banana to be an alternate host for both diseases. For molecular identification, total genomic DNA and RNA were isolated from infected leaf tissues and subjected to rolling circle amplification (RCA) and RT-PCR, respectively. The DNA concatamers produced in the RCA reaction were monomerized with different restriction enzymes, and the bands corresponding to ~1 kb genomes were purified and cloned at the respective sites. Nucleotide sequencing revealed the association of a nanovirus with the foorkey disease of large cardamom: DNA1 showed 74% identity with the replicase gene of FBNYV, DNA2 showed 77% identity with the NSP gene of BBTV, and DNA3 showed 74% identity with the CP gene of BBTV. The findings suggest the presence of a new nanovirus species associated with foorkey disease of large cardamom in the Sikkim and Darjeeling hills. Details of its epidemiology and other factors will be discussed.
Keywords: RCA, nanovirus, large cardamom, molecular virology and microbiology
Procedia PDF Downloads 494
7146 Graph-Based Semantical Extractive Text Analysis
Authors: Mina Samizadeh
Abstract:
In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data necessitates effective computational tools to explore it, which has led to a growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing text so that a higher level of understanding can be reached in a shorter time. The two main tasks for this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words in a text, which familiarizes us with its general topic. In text summarization, we are interested in producing a short text that includes the important information in the document. The TextRank algorithm, an unsupervised learning method that extends PageRank (the base algorithm of the Google search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. TextRank can automatically extract the important parts of a text (keywords or sentences) and return them as a result; however, it neglects the semantic similarity between the different parts. In this work, we improve the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis
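A rough illustration of the semantically weighted TextRank idea described above, applied to sentence ranking, is sketched below in Python. The embedding function, the cosine-similarity edge weights, and the use of networkx's PageRank are assumptions made for the example; the abstract does not specify the authors' implementation.

# Sketch: TextRank-style sentence ranking with semantic edge weights (assumed setup).
import numpy as np
import networkx as nx

def rank_sentences(sentences, embed):
    """embed(sentence) -> vector; any sentence-embedding model can be plugged in."""
    vectors = [embed(s) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            # semantic similarity replaces the plain word-overlap weight of classic TextRank
            sim = float(np.dot(vectors[i], vectors[j]) /
                        (np.linalg.norm(vectors[i]) * np.linalg.norm(vectors[j]) + 1e-12))
            if sim > 0.0:
                graph.add_edge(i, j, weight=sim)
    scores = nx.pagerank(graph, weight="weight")   # PageRank over the weighted sentence graph
    return sorted(range(len(sentences)), key=scores.get, reverse=True)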
Procedia PDF Downloads 72
7145 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all of the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
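As a toy illustration of the preferential-attachment mechanism named in the keywords, the sketch below uses the classic Barabási-Albert growth rule; it is an example of the mechanism only, not necessarily the authors' probabilistic web model.

# Sketch: preferential-attachment graph growth; new nodes attach to existing nodes
# with probability proportional to degree, producing a power-law degree distribution.
import random
import networkx as nx

def preferential_attachment(n, m, seed=0):
    random.seed(seed)
    g = nx.complete_graph(m)                            # small fully connected seed graph
    pool = [v for v in g for _ in range(g.degree(v))]   # nodes repeated by degree
    for new_node in range(m, n):
        targets = set()
        while len(targets) < m:                         # pick m distinct degree-biased targets
            targets.add(random.choice(pool))
        for t in targets:
            g.add_edge(new_node, t)
            pool.extend([new_node, t])                  # keeps the pool degree-proportional
    return g

web_like = preferential_attachment(n=10000, m=3)
print(nx.transitivity(web_like))                        # clustering coefficient of the generated graph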
Procedia PDF Downloads 272
7144 Large Panel Technology Apartments of Yesterday and Today: Quality Aspects
Authors: Barbara Gronostajska
Abstract:
Currently, housing conditions in buildings executed in large panel technology are deteriorating. The article presents modernization solutions implemented through a variety of architectural interventions (adding balconies and staircases, connecting apartments), which yield very intriguing results that meet the needs and expectations of modern society.
Keywords: housing estate, apartments, flats, modernization, plate blocks
Procedia PDF Downloads 482
7143 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students’ learning habits and styles enhances their understanding of the students’ learning backgrounds, allows teachers to provide better support for their students, and therefore has high potential to improve teaching quality and learning, especially in mathematics-related courses. The aim of this research is to collect students’ data using online surveys, to analyze student factors using learning analytics and educational data mining, and to discover the characteristics of students at risk of falling behind in their studies based on their previous academic backgrounds and the collected data. In this paper, we use correlation-based distance methods and mutual information for measuring relationships between student factors. We then develop a factor network using the minimum spanning tree method and consider, as further study, analyzing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to rank and select the appropriate subsets of features and yield effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding possible under-performers at the beginning of the first semester and giving them special attention in order to support their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
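A minimal sketch of the correlation-based distance and minimum-spanning-tree construction described above follows; the distance transform sqrt(2(1 - r)) and the toy factor names are assumptions, since the abstract does not state which correlation-to-distance mapping the authors used.

# Sketch: factor network via correlation-based distances and a minimum spanning tree.
import numpy as np
import networkx as nx

def factor_mst(data, factor_names):
    corr = np.corrcoef(data, rowvar=False)              # factor-by-factor correlation matrix
    dist = np.sqrt(2.0 * (1.0 - corr))                  # assumed correlation-based distance
    g = nx.Graph()
    n = len(factor_names)
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(factor_names[i], factor_names[j], weight=float(dist[i, j]))
    return nx.minimum_spanning_tree(g, weight="weight")

# usage: rows are students, columns are survey/background factors (names are hypothetical)
rng = np.random.default_rng(0)
survey = rng.normal(size=(200, 5))
tree = factor_mst(survey, ["gpa", "study_hours", "attendance", "sleep", "anxiety"])
print(sorted(tree.edges(data="weight")))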
Procedia PDF Downloads 131
7142 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others in their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering- and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods such as DeepSynergy and DeepDDS on diverse datasets such as Oniel and Almanac. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
Keywords: drug synergy, clustering, prediction, machine learning, deep learning
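The Markov clustering step on the per-cell-line drug graph could be sketched roughly as below; the expansion power, the inflation parameter, and the conversion of synergy scores into non-negative edge weights are all assumptions, since the abstract does not report them.

# Sketch: Markov clustering (MCL) of a drug-drug synergy graph for one cell line.
# Inflation=2 is an illustrative default, not a value from the paper.
import numpy as np

def mcl(adjacency, inflation=2.0, max_iter=100, tol=1e-6):
    m = adjacency + np.eye(len(adjacency))               # add self-loops
    m = m / m.sum(axis=0, keepdims=True)                 # column-stochastic matrix
    for _ in range(max_iter):
        expanded = m @ m                                  # expansion: two-step random walk
        inflated = expanded ** inflation                  # inflation: boost strong flows
        inflated /= inflated.sum(axis=0, keepdims=True)
        converged = np.abs(inflated - m).max() < tol
        m = inflated
        if converged:
            break
    clusters = set()
    for row in m:                                         # attractor rows define clusters
        members = tuple(np.nonzero(row > 1e-6)[0])
        if members:
            clusters.add(members)
    return [list(c) for c in clusters]

# synergy scores shifted/clipped to non-negative weights (assumed preprocessing)
synergy = np.array([[0, 12, 1], [12, 0, 2], [1, 2, 0]], dtype=float)
print(mcl(np.clip(synergy, 0, None)))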
Procedia PDF Downloads 81
7141 REDUCER: An Architectural Design Pattern for Reducing Large and Noisy Data Sets
Authors: Apkar Salatian
Abstract:
To relieve the burden of reasoning on a point-to-point basis, in many domains there is a need to reduce large and noisy data sets into trends for qualitative reasoning. In this paper we propose and describe a new architectural design pattern called REDUCER for reducing large and noisy data sets, which can be tailored for particular situations. REDUCER consists of two consecutive processes: Filter, which takes the original data and removes outliers, inconsistencies, or noise; and Compression, which takes the filtered data and derives trends in the data. In this seminal article, we also show how REDUCER has successfully been applied to three different case studies.
Keywords: design pattern, filtering, compression, architectural design
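The two stages could be realized in many ways; one minimal sketch, assuming a median filter for the Filter stage and fixed-size linear trend segments for the Compression stage (both are illustrative choices, not the pattern's prescribed implementations), is:

# Sketch: REDUCER-style pipeline - Filter (noise/outlier removal) then Compression
# (trend derivation). Median filtering and fixed-size linear segments are assumptions.
import numpy as np

def filter_stage(series, window=5):
    """Remove spikes by replacing each point with the median of its neighbourhood."""
    padded = np.pad(series, window // 2, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(series))])

def compression_stage(series, segment=20):
    """Summarise the filtered data as a list of (start_index, slope) trend segments."""
    trends = []
    for start in range(0, len(series), segment):
        chunk = series[start:start + segment]
        if len(chunk) < 2:
            continue
        slope = np.polyfit(np.arange(len(chunk)), chunk, 1)[0]
        trends.append((start, float(slope)))
    return trends

noisy = np.sin(np.linspace(0, 6, 200)) + np.random.default_rng(1).normal(0, 0.3, 200)
print(compression_stage(filter_stage(noisy)))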
Procedia PDF Downloads 213
7140 Public Transportation Demand and Policy in Kabul, Afghanistan
Authors: Ahmad Samim Ranjbar, Shoshi Mizokami
Abstract:
Kabul is the heart of political, commercial, cultural, educational, and social life in Afghanistan and is the fifth fastest growing city in the world. Since the establishment of the new government in 2001, the lack of adequate employment opportunities and basic utility services in remote provinces has prompted people to move to Kabul and other urban areas. From 2001 to the present, the population has increased rapidly, and because of low incomes most residents tend to use public transport, especially buses. However, no proper bus system exists in Kabul city: because of the wars from 1992 to 2001, Kabul suffered damage and destruction of its transportation facilities, including pavements, sidewalks, traffic circles, drainage systems, traffic signs and signals, trolleybuses, and almost all of the public transit buses (e.g., Millie Bus). This research is a primary and very important phase of work on Kabul city transportation and, especially, an initial and important step toward using large buses in Kabul city. The main purpose of this research is to estimate the demand of Kabul city residents for public transport (large buses) and compare it with the actual supply from the government. The findings show that the demand of Kabul city residents for public transport (large buses) exceeds the supply from the government, meaning that the current public transportation (large buses) is not sufficient to serve the people of Kabul city. It is worth mentioning that, according to this research, there is no need to build new roads or exclusive busways; the research proposes that the government invest in public transportation and increase the number of large buses in order to handle the current demand for public transport.
Keywords: transportation, planning, public transport, large bus, Kabul, Afghanistan
Procedia PDF Downloads 299
7139 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information-flow (e.g., block-diagram or signal-flow graph) paradigm for the design and simulation of microelectromechanical (MEMS)-based systems makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal-flow modeling. Moreover, models of fundamental components acting as building blocks (e.g., gap-varying MEMS capacitor structures) depend not only on the component but also on the specific excitation mode (e.g., voltage or charge actuation). In contrast, the energy-flow modeling paradigm in terms of generalized across-through variables offers an acausal perspective, separating clearly the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation, and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which have the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy-flow system models. Test bench simulations of the library models compare favorably with both analytical results and more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
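For the pull-in voltage computation mentioned above, the closed-form result for an ideal parallel-plate electrostatic actuator with a linear spring is a standard reference point against which large-signal models are often checked. The small sketch below uses that textbook formula; the numerical parameter values are arbitrary examples, not taken from the paper's library.

# Sketch: analytical pull-in voltage of an ideal parallel-plate MEMS actuator,
# V_pi = sqrt(8*k*g0^3 / (27*eps0*A)); useful as a sanity check for large-signal models.
import math

EPS0 = 8.854e-12                 # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """k: spring stiffness (N/m), gap: initial gap g0 (m), area: electrode area (m^2)."""
    return math.sqrt(8.0 * k * gap**3 / (27.0 * EPS0 * area))

# arbitrary example values for a gap-varying capacitor structure
print(pull_in_voltage(k=2.5, gap=2e-6, area=100e-6 * 100e-6))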
Procedia PDF Downloads 137
7138 Frequent Item Set Mining for Big Data Using MapReduce Framework
Authors: Tamanna Jethava, Rahul Joshi
Abstract:
Frequent item sets play an essential role in many data mining tasks that try to find interesting patterns in databases. Typically, a frequent item set is a set of items that frequently appear together in a transaction dataset. Several mining algorithms are used for frequent item set mining, yet most do not scale to the type of data we are presented with today, so-called "big data". Big data is a collection of large data sets. Our approach is to perform frequent item set mining over large datasets in a scalable and speedy way. It works with MapReduce, along with HDFS, to find frequent item sets from big data on a large cluster. This paper focuses on using a pre-processing and mining algorithm as a hybrid approach for big data over the Hadoop platform.
Keywords: frequent item set mining, big data, Hadoop, MapReduce
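A minimal in-process sketch of the map and reduce phases for counting candidate item sets is shown below; it stands in for the distributed Hadoop job the abstract describes, and the candidate size (pairs) and support threshold are assumptions.

# Sketch: MapReduce-style counting of frequent item pairs from transactions.
# In a real Hadoop job the map and reduce functions would run distributed over HDFS;
# here they are chained in-process purely to illustrate the data flow.
from itertools import combinations
from collections import defaultdict

def map_phase(transaction):
    # emit (itemset, 1) for every candidate pair in one transaction
    return [(pair, 1) for pair in combinations(sorted(set(transaction)), 2)]

def reduce_phase(emitted, min_support):
    counts = defaultdict(int)
    for key, value in emitted:
        counts[key] += value                      # shuffle + sum per key
    return {k: v for k, v in counts.items() if v >= min_support}

transactions = [["bread", "milk"], ["bread", "milk", "eggs"], ["milk", "eggs"]]
emitted = [kv for t in transactions for kv in map_phase(t)]
print(reduce_phase(emitted, min_support=2))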
Procedia PDF Downloads 439
7137 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction
Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage
Abstract:
Vehicular traffic events have highly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these spatial and temporal interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN)-based traffic prediction models have been extensively utilized due to their capability of capturing non-Euclidean spatial correlation very effectively. However, most existing GNN-based traffic prediction models have limitations in learning complex and dynamic spatial and temporal patterns because of the following missing factors. First, most GNN-based traffic prediction models have used static distance, or sometimes haversine distance, between spatially separated traffic observations to estimate spatial correlation. Second, most GNN-based traffic prediction models have not incorporated environmental events that have a major impact on normal traffic states. Finally, most GNN-based models do not use an attention mechanism to focus on only the important traffic observations. The objective of this paper is to study and make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the gaps mentioned above, our prediction model uses the real-time driving distance between sensors to build a distance matrix, or spatial adjacency matrix, and capture spatial correlation. In addition, our prediction model considers the effect of six types of weather conditions and has an attention mechanism in both spatial and temporal data aggregation. Our prediction model efficiently captures the spatial and temporal correlation between traffic events; it relies on the graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) plus attention layers and is called GAT-BILSTMA.
Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention
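One common way to turn pairwise driving distances between sensors into a weighted spatial adjacency matrix is a thresholded Gaussian kernel; the sketch below uses that construction purely as an illustration, since the abstract only states that real-time driving distances are used, and the kernel, bandwidth, and threshold are assumptions.

# Sketch: weighted spatial adjacency matrix from driving distances between sensors.
import numpy as np

def adjacency_from_distances(dist, threshold=0.1):
    """dist[i, j] = driving distance between sensor i and sensor j (e.g., km)."""
    sigma = dist[dist > 0].std()                      # kernel bandwidth taken from the data
    weights = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
    weights[weights < threshold] = 0.0                # sparsify weak connections
    np.fill_diagonal(weights, 0.0)
    return weights

driving_km = np.array([[0.0, 2.5, 7.0],
                       [2.6, 0.0, 4.2],
                       [7.1, 4.3, 0.0]])
print(adjacency_from_distances(driving_km))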
Procedia PDF Downloads 73
7136 A New Approach for Assertions Processing during Assertion-Based Software Testing
Authors: Ali M. Alakeel
Abstract:
Assertion-based software testing has been shown to be a promising tool for generating test cases that reveal program faults. Because the number of assertions may be very large for industry-size programs, one of the main concerns about the applicability of assertion-based testing is the amount of search time required to explore a large number of assertions. This paper presents a new approach for assertion exploration during the process of assertion-based software testing. Our initial experiments with the proposed approach show that the performance of assertion-based testing may be improved, therefore making this approach more efficient when applied to programs with a large number of assertions.
Keywords: software testing, assertion-based testing, program assertions, generating test
Procedia PDF Downloads 462
7135 Identifying Coloring in Graphs with Twins
Authors: Souad Slimani, Sylvain Gravier, Simon Schmidt
Abstract:
Recently, several vertex-identifying notions were introduced (identifying coloring, lid-coloring, ...); these notions were inspired by identifying codes. All of them, as well as the original identifying codes, are based on separating two vertices according to some conditions on their closed neighborhoods. Therefore, twins cannot be identified, so most known results focus on twin-free graphs. Here, we show how twins can modify the optimal value of vertex-identifying parameters for identifying coloring and locally identifying coloring.
Keywords: identifying coloring, locally identifying coloring, twins, separating
Procedia PDF Downloads 148
7134 A Computational Framework for Decoding Hierarchical Interlocking Structures with SL Blocks
Authors: Yuxi Liu, Boris Belousov, Mehrzad Esmaeili Charkhab, Oliver Tessmann
Abstract:
This paper presents a computational solution for designing reconfigurable interlocking structures that are fully assembled with SL Blocks. Formed by S-shaped and L-shaped tetracubes, SL Block is a specific type of interlocking puzzle. Analogous to molecular self-assembly, the aggregation of SL blocks will build a reversible hierarchical and discrete system where a single module can be numerously replicated to compose semi-interlocking components that further align, wrap, and braid around each other to form complex high-order aggregations. These aggregations can be disassembled and reassembled, responding dynamically to design inputs and changes with a unique capacity for reconfiguration. To use these aggregations as architectural structures, we developed computational tools that automate the configuration of SL blocks based on architectural design objectives. There are three critical phases in our work. First, we revisit the hierarchy of the SL block system and devise a top-down-type design strategy. From this, we propose two key questions: 1) How to translate 3D polyominoes into SL block assembly? 2) How to decompose the desired voxelized shapes into a set of 3D polyominoes with interlocking joints? These two questions can be considered the Hamiltonian path problem and the 3D polyomino tiling problem. Then, we derive our solution to each of them based on two methods. The first method is to construct the optimal closed path from an undirected graph built from the voxelized shape and translate the node sequence of the resulting path into the assembly sequence of SL blocks. The second approach describes interlocking relationships of 3D polyominoes as a joint connection graph. Lastly, we formulate the desired shapes and leverage our methods to achieve their reconfiguration within different levels. We show that our computational strategy will facilitate the efficient design of hierarchical interlocking structures with a self-replicating geometric module.
Keywords: computational design, SL-blocks, 3D polyomino puzzle, combinatorial problem
Procedia PDF Downloads 130
7133 Model Order Reduction of Continuous LTI Large Descriptor System Using LRCF-ADI and Square Root Balanced Truncation
Authors: Mohammad Sahadet Hossain, Shamsil Arifeen, Mehrab Hossian Likhon
Abstract:
In this paper, we analyze a linear time-invariant (LTI) descriptor system of large dimension. Since such systems are difficult to simulate, compute, and store, we attempt to reduce this large system using Low-Rank Cholesky-Factored Alternating Directions Implicit (LRCF-ADI) iteration followed by square-root balanced truncation. LRCF-ADI solves the dual Lyapunov equations of the large system and gives low-rank Cholesky factors of the Gramians as the solution. Using these Cholesky factors, we compute the Hankel singular values via singular value decomposition. Then, by applying square-root balanced truncation, the reduced system is obtained. The Bode plots of the original and lower-order systems are used to show that the magnitude and phase responses are the same for both systems.
Keywords: low-rank cholesky factor alternating directions implicit iteration, LTI Descriptor system, Lyapunov equations, Square-root balanced truncation
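Given low-rank Cholesky factors of the two Gramians, the square-root balanced truncation step can be sketched as below for the standard state-space case (E = I); the descriptor case additionally involves the E matrix, and the factors themselves would come from an LRCF-ADI solver, which is not shown here.

# Sketch: square-root balanced truncation from low-rank Gramian factors.
# P ~ Zc @ Zc.T (controllability), Q ~ Zo @ Zo.T (observability), standard case E = I.
import numpy as np

def square_root_balanced_truncation(A, B, C, Zc, Zo, r):
    U, s, Vt = np.linalg.svd(Zo.T @ Zc, full_matrices=False)   # s: Hankel singular values
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r, :].T
    T = Zc @ Vr @ np.diag(sr ** -0.5)        # right projection matrix
    W = Zo @ Ur @ np.diag(sr ** -0.5)        # left projection matrix
    Ar = W.T @ A @ T                         # reduced-order system matrices
    Br = W.T @ B
    Cr = C @ T
    return Ar, Br, Cr, s                     # s is also used to choose the reduced order r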
Procedia PDF Downloads 419
7132 Association of Leptin Gene T3469C Polymorphism on Reproductive Performance of Purebred Sows
Authors: Mariedel Autriz, Angel Lambio, Renato Vega, Severino Capitan, Rita Laude
Abstract:
The study was conducted to associate genetic polymorphism of the leptin gene T3469C with reproductive performance in purebred sows. DNA was isolated from hair follicles of 29 Landrace and 24 Large White sows. Amplification of the leptin gene was followed by HinfI digestion to determine the base at the T3469C site. Electrophoresis of the digestion products revealed that there were 25 Landrace and 15 Large White sows with the TT genotype, while there were 3 Landrace and 6 Large White with TC, and 1 Landrace and 3 Large White with CC. Significant genotype associations were observed for total litter size born and total born alive. Significant breed differences, on the other hand, were observed for gestation length and average birth weight. A significant breed-by-genotype interaction was observed for litter size total born and litter size born alive.
Keywords: genetic polymorphism, leptin, swine, T3469C
Procedia PDF Downloads 419
7131 Thermal Network Model for a Large Scale AC Induction Motor
Authors: Sushil Kumar, M. Dakshina Murty
Abstract:
Thermal network modelling has proven to be an important tool for the thermal analysis of electrical machines. This article investigates a numerical thermal network model and the experimental performance of a large-scale AC motor. Experimental temperatures were measured using RTDs in the stator and compared with the numerical data. The thermal network model fairly predicts the temperature of various components inside the large-scale AC motor. The results for the stator winding temperature are compared with the experimental results, which are in close agreement, with an accuracy of 6-10%. This method of predicting hot spots within AC motors can readily be used by motor designers for estimating the thermal hot spots of the machine.
Keywords: AC motor, thermal network, heat transfer, modelling
Procedia PDF Downloads 327
7130 A Theoretical Model for Pattern Extraction in Large Datasets
Authors: Muhammad Usman
Abstract:
Pattern extraction has been used in the past to extract hidden and interesting patterns from large datasets. Recently, advancements have been made in these techniques by providing the ability of multi-level mining, effective dimension reduction, and advanced evaluation and visualization support. This paper focuses on reviewing the current techniques in the literature on the basis of these parameters. The literature review suggests that most of the techniques which provide multi-level mining and dimension reduction do not handle mixed-type data during the process. Patterns are not extracted using advanced algorithms for large datasets. Moreover, the evaluation of patterns is not done using advanced measures which are suited for high-dimensional data. Techniques which provide visualization support are unable to handle a large number of rules in a small space. We present a theoretical model to handle these issues. The implementation of the model is beyond the scope of this paper.
Keywords: association rule mining, data mining, data warehouses, visualization of association rules
Procedia PDF Downloads 224
7129 Survey on Arabic Sentiment Analysis in Twitter
Authors: Sarah O. Alhumoud, Mawaheb I. Altuwaijri, Tarfa M. Albuhairi, Wejdan M. Alohaideb
Abstract:
Large-scale data stream analysis has become one of the important business and research priorities lately. Social networks like Twitter and other micro-blogging platforms hold an enormous amount of data that is large in volume, velocity, and variety. Extracting valuable information and trends from these data would aid in better understanding and decision-making. Multiple analysis techniques have been deployed for English content; meanwhile, one of the languages that produces a large amount of data over social networks and is least analyzed is Arabic. This paper is a survey of the research efforts to analyze Arabic content on Twitter, focusing on the tools and methods used to extract the sentiments of the Arabic content on Twitter.
Keywords: big data, social networks, sentiment analysis, twitter
Procedia PDF Downloads 579
7128 A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems
Authors: Tsegay Giday Woldu, Haibin Zhang, Xin Zhang, Yemane Hailu Fissuh
Abstract:
It is well known that the nonlinear conjugate gradient method is one of the most widely used first-order methods for solving large-scale unconstrained smooth optimization problems. Because of their low memory requirements, attractive theoretical features, practical computational efficiency, and nice convergence properties, nonlinear conjugate gradient methods have a special role in solving large-scale unconstrained optimization problems, which have important applications in the practical and scientific world. However, nonlinear conjugate gradient methods use restricted information about the curvature of the objective function and are likely to be less efficient and robust than some second-order algorithms. To overcome these drawbacks, a new modified nonlinear conjugate gradient method is presented. The noticeable features of our work are that the new search direction possesses the sufficient descent property independently of any line search and that it belongs to a trust region. Under mild assumptions and the standard Wolfe line search technique, the global convergence property of the proposed algorithm is established. Furthermore, to test the practical computational performance of our new algorithm, numerical experiments are provided and implemented on a set of large-dimensional unconstrained problems. The numerical results show that the proposed algorithm is efficient and robust compared with other similar algorithms.
Keywords: conjugate gradient method, global convergence, large scale optimization, sufficient descent property
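The abstract does not give the modified search direction, so the sketch below shows only a generic nonlinear conjugate gradient loop (a Polak-Ribiere-plus beta with a simple backtracking Armijo line search, both assumptions) as a reference point for the class of methods being discussed; it is not the authors' modified method.

# Sketch: generic nonlinear conjugate gradient iteration (PR+ beta, backtracking line
# search). This is NOT the paper's modified method, only an illustration of the family.
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                                    # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        step = 1.0
        while f(x + step * d) > f(x) + 1e-4 * step * g @ d:   # Armijo backtracking
            step *= 0.5
        x_new = x + step * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)        # Polak-Ribiere-plus coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

print(nonlinear_cg(lambda v: (v ** 2).sum(), lambda v: 2 * v, np.ones(5)))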
Procedia PDF Downloads 208
7127 Large-scale Foraging Behaviour of Free-ranging Goats: Influence of Herd Size, Landscape Quality and Season
Authors: Manqhai Kraai, Adrian M. Shrader, Peter F. Scogings
Abstract:
For animals living in herds, competition between group members increases as herd size increases. The intensity of this competition is likely greater across poor-quality landscapes and during the dry season. In contrast to wild herbivores, herd size in domestic livestock is determined by their owners. This raises the question: how do domestic livestock, like goats, reduce competition for food within these defined herds? To explore this question, the large-scale foraging behaviour of both small (12 to 28 individuals) and large (42 to 83 individuals) herds of free-ranging goats was recorded in Tugela Ferry, KwaZulu-Natal, South Africa. The study was conducted on three different landscapes that varied in both food quality and availability, during the wet and dry seasons of 2013-2014. The goats were housed in kraals overnight and let out in the mornings to forage unattended; thus, foraging decisions were made by the goats and not by herders. The large-scale foraging behaviours of interest were (i) total distance travelled by goats while foraging, (ii) distance travelled before starting to feed, (iii) travel speed, and (iv) feeding duration. These were recorded using Garmin Foretrex 401 GPS devices harnessed to two goats per herd. Irrespective of season, there was no difference in the total distance travelled by the different-sized herds across the different-quality landscapes. However, both small and large herds started feeding farther from the kraal in the dry than in the wet season. Despite this, there was no significant seasonal difference in the total amount of time the herds spent feeding across the different landscapes. Finally, both small and large herds increased their travel speed across all the landscapes in the dry season, but large herds travelled faster than small herds. This increase was likely to maximise the time that large herds could spend feeding in good areas. Ultimately, these results indicate that both small and large herds were affected by declines in food quality and quantity during the dry season. However, as large herds made greater behavioural adjustments compared to smaller herds (i.e., feeding farther away from the kraal and travelling faster), it appears that they were more affected by the seasonal increases in intra-herd competition.
Keywords: distance, feeding duration, food availability, food quality, travel speed
Procedia PDF Downloads 126
7126 Design of Cylindrical Crawler Robot Inspired by Amoeba Locomotion
Authors: Jun-ya Nagase
Abstract:
Recently, the need for colonoscopy has been increasing because of the rise of colonic disorders, including cancer of the colon. However, current colonoscopy depends strongly on the doctor's skill. Therefore, a large intestine endoscope with high safety that does not depend on the techniques of a doctor is required. In this research, we aim to develop a novel large intestine endoscope that can realize safe insertion without specific techniques. Wheel-movement robots, snake-like robots, and earthworm-like robots are all described in the relevant literature as endoscope robots that are currently studied. Among them, the tracked crawler robot can travel by traversing uneven ground flexibly with a crawler belt attached firmly to the ground surface. Although conventional crawler robots have high efficiency and/or high ground-covering ability, they require a comparatively large space to move. In this study, a small cylindrical crawler robot inspired by amoeba locomotion, which does not need a large space to move and which has high ground-covering ability, is proposed. In addition, we developed a prototype of the large intestine endoscope using the proposed crawler mechanism. Experiments have demonstrated smooth operation and forward movement of the robot on application of voltage to the motor. This paper reports the structure, drive mechanism, prototype, and experimental evaluation.
Keywords: tracked-crawler, endoscopic robot, narrow path, amoeba locomotion
Procedia PDF Downloads 384
7125 Influence of Displacement Amplitude and Vertical Load on the Horizontal Dynamic and Static Behavior of Helical Wire Rope Isolators
Authors: Nicolò Vaiana, Mariacristina Spizzuoco, Giorgio Serino
Abstract:
In this paper, the results of experimental tests performed on a Helical Wire Rope Isolator (HWRI) are presented in order to describe the dynamic and static behavior of the selected metal device in three different displacement ranges, namely the small, relatively large, and large displacement ranges, with and without the effect of a vertical load. A testing machine, which allows applying horizontal displacement or load histories to the tested bearing under a constant vertical load, was adopted to perform the dynamic and static tests. According to the experimental results, the dynamic behavior of the tested device depends on the applied displacement amplitude. Indeed, the HWRI displays a softening and a hardening stiffness at small and relatively large displacements, respectively, and a stronger nonlinear stiffening behavior at large displacements. Furthermore, the experimental tests reveal that the application of a vertical load yields a more flexible device with higher damping properties, and that the applied vertical load affects the dynamic response of the metal device much less at large displacements. Finally, a decrease in the static-to-dynamic effective stiffness ratio with increasing displacement amplitude was observed.
Keywords: base isolation, earthquake engineering, experimental hysteresis loops, wire rope isolators
Procedia PDF Downloads 434
7124 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a huge cost in money and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates a meaningful prediction. As expected, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, and it showed agreement of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. In contrast, we implemented a graph signal clustering approach based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
Procedia PDF Downloads 152
7123 An Interpolation Tool for Data Transfer in Two-Dimensional Ice Accretion Problems
Authors: Marta Cordero-Gracia, Mariola Gomez, Olivier Blesbois, Marina Carrion
Abstract:
One of the difficulties in icing simulations arises for extended periods of exposure, when very large ice shapes are created. As well as being large, they can have complex shapes, such as a double horn. For icing simulations, these configurations are currently computed in several steps: the icing step is stopped when the ice shapes become too large, at which point a new mesh has to be created to allow further CFD and ice growth simulations to be performed. This can be very costly and is a limiting factor in the simulations that can be performed. A way to avoid costly human intervention in the re-meshing step of multistep icing computations is to use mesh deformation instead of re-meshing. The aim of the present work is to apply an interpolation method based on Radial Basis Functions (RBF) to transfer deformations from the surface mesh to the volume mesh. This deformation tool has been developed specifically for icing problems: it is able to deal with localized, sharp, and large deformations, unlike the tools traditionally used for smoother wing deformations. This tool will be presented along with validation on typical two-dimensional icing shapes.
Keywords: ice accretion, interpolation, mesh deformation, radial basis functions
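The surface-to-volume transfer can be illustrated with a bare-bones RBF interpolation sketch; the Gaussian kernel and its fixed shape parameter are assumptions (icing tools may use other radial functions), and no polynomial term or greedy point selection is included.

# Sketch: RBF interpolation of known surface-node displacements onto volume-mesh nodes.
# A Gaussian kernel with a fixed shape parameter is an illustrative choice only.
import numpy as np

def rbf_deform(surface_pts, surface_disp, volume_pts, epsilon=1.0):
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-epsilon * d2)               # Gaussian radial basis function
    phi = kernel(surface_pts, surface_pts)         # interpolation matrix on surface nodes
    weights = np.linalg.solve(phi, surface_disp)   # one weight set per displacement component
    return kernel(volume_pts, surface_pts) @ weights

surface = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
disp = np.array([[0.0, 0.1], [0.0, -0.1], [0.0, 0.3]])    # 2D displacements at surface nodes
volume = np.array([[0.5, 0.2], [0.25, 0.1]])
print(rbf_deform(surface, disp, volume))           # interpolated volume-node displacements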
Procedia PDF Downloads 314
7122 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the modeling of the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR), and thereby DoA estimation methods rely heavily on generalization, which requires establishing a large number of training data sets. Hence, we have comparatively presented two different optimization models for DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) radial basis function (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to achieve high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation in order to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
Procedia PDF Downloads 101
7121 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are some of the key factors of successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require the teams to exercise divergent and convergent thinking simultaneously, and there needs to be a good balance. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams for group creativity, relational conflicts (discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a latent semantic analysis (LSA) algorithm to analyze the content of communication, and a semantic similarity analysis yields a social network graph that depicts the communication among team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between the members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), obtained from the AM using network density, yield network graphs and network metrics like centrality. The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie strength between the members; stronger tie strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction defined are the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with the member and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated to the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form, and the results show a good correlation.
Keywords: team dynamics, team communication, team interactions, social network analysis, sna, new product development, latent semantic analysis, LSA, NPD teams
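A compressed sketch of part of the pipeline described above, LSA on the email texts, pairwise semantic similarity as tie strength, and a density-based dichotomized adjacency matrix, is given below; pooling each member's emails into one document, the TF-IDF/LSA dimensionality, and the density cutoff are assumptions, since the abstract does not give those details.

# Sketch: member-to-member tie strengths from email content via LSA and cosine similarity.
# Assumes member_emails[i] is the concatenated text of all emails written by member i.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def team_adjacency(member_emails, n_topics=50, density=0.3):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(member_emails)
    n_comp = min(n_topics, tfidf.shape[1] - 1, len(member_emails) - 1)   # keep below vocab size
    lsa = TruncatedSVD(n_components=n_comp).fit_transform(tfidf)         # LSA semantic space
    am = cosine_similarity(lsa)                       # semantic similarity as tie strength (AM)
    np.fill_diagonal(am, 0.0)
    cutoff = np.quantile(am[am > 0], 1.0 - density)   # dichotomize by target network density
    dam = (am >= cutoff).astype(int)                  # dichotomized adjacency matrix (DAM)
    return am, dam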
Procedia PDF Downloads 71
7120 Using Automated Agents to Facilitate Instructions in a Large Online Course
Authors: David M Gilstrap
Abstract:
In an online course with a large enrollment, the potential exists for the instructor to become overburdened with responding to students' emails, which consequently decreases the instructor's efficiency in teaching the course. Repetition of instructions is an effective way of reducing confusion among students, which in turn increases their efficiency as well. World of Turf is the largest online course at Michigan State University, which employs Brightspace as its learning management system (LMS) software. Recently, the LMS upgraded its capabilities to utilize agents, which are auto-generated email notifications to students based on certain criteria. Agents are additional tools that can enhance course design. They can be run on demand or according to a schedule, can be timed to effectively remind students of approaching deadlines, and can include reinforced instructions in the content of the generated emails. In a large online course, even a small percentage of students who either do not read or do not comprehend the course syllabus, or do not notice instructions on course pages, can generate numerous emails to the instructor, often near assignment deadlines. Utilizing agents to decrease the number of emails from students has enabled the instructor to efficiently instruct more than one thousand students per semester without any graduate teaching assistants.
Keywords: agents, Brightspace, large enrollment, learning management system, repetition of instructions
Procedia PDF Downloads 203
7119 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large-scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large-scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large-scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing, and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the aforementioned techniques.
Keywords: data locality, data-centric computing, large scale infrastructures, cloud computing
Procedia PDF Downloads 260