Search results for: Graph Library
725 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue of origin of these DNA fragments from plasma can enable faster, more accurate disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which incurs substantial cost and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, which predicts the most probable open chromatin regions from whole genome sequencing data. Our method applies signal processing to sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering.
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions from human blood samples in the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and human gene TSS regions obtained from refTSS, finding agreement of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variation. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, which has some restrictions, such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and reliance on multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
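The pipeline described above (count normalization, DFT features per window, a correlation graph over windows, then clustering) can be sketched in miniature. The window size, correlation threshold, and greedy clustering pass below are illustrative assumptions only; the authors' actual implementation uses graph cut optimization by linear programming:

```python
import numpy as np

def dft_features(depth, window=8):
    # slice the depth track into fixed-size windows, normalize counts,
    # and keep the DFT magnitude spectrum of each window
    n = len(depth) // window
    segs = depth[:n * window].reshape(n, window)
    segs = segs / (segs.sum(axis=1, keepdims=True) + 1e-9)  # count normalization
    return np.abs(np.fft.rfft(segs, axis=1))

def correlation_cluster(feats, thresh=0.9):
    # greedy pass: a window joins the first cluster whose seed window it
    # correlates with above `thresh` (a stand-in for the LP-based graph cut)
    corr = np.corrcoef(feats)
    labels = -np.ones(len(feats), dtype=int)
    k = 0
    for i in range(len(feats)):
        if labels[i] == -1:
            labels[i] = k
            for j in range(i + 1, len(feats)):
                if labels[j] == -1 and corr[i, j] > thresh:
                    labels[j] = k
            k += 1
    return labels

# demo: a flat coverage regime followed by a periodic (nucleosome-like) one
depth = np.concatenate([np.tile([1.0] * 8, 4), np.tile([0.0, 10.0] * 4, 4)])
labels = correlation_cluster(dft_features(depth))
```

The two coverage regimes end up in different clusters because their DFT magnitude spectra correlate strongly within a regime and weakly across regimes.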
Procedia PDF Downloads 152
724 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are key factors in successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require the teams to exercise divergent and convergent thinking simultaneously, and there needs to be a good balance between them. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams for group creativity, relational conflicts (discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm, Latent Semantic Analysis (LSA), to analyze the content of communication, and a semantic similarity analysis to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between the members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), based on network density, yield network graphs and network metrics like centrality.
The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie-strength between the members: stronger tie-strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction are the Central Member Pattern (CMP), the Subgroup and Aloof member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team-dynamics implication in terms of the conflict level in the team. For instance, the Isolate Member Pattern clearly points to a near breakdown in communication with the member, and hence a possibly high conflict level, whereas the Subgroup and Aloof member Pattern points to a non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated to the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form; the results show a good correlation.
Keywords: team dynamics, team communication, team interactions, social network analysis, SNA, new product development, latent semantic analysis, LSA, NPD teams
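The LSA-to-adjacency-matrix step can be illustrated on a toy member-term matrix. The "emails", vocabulary, rank-2 truncation, and the simple strength centrality below are invented for illustration and do not reproduce the authors' pipeline:

```python
import numpy as np

# toy "emails" aggregated per team member (names and wording are illustrative)
docs = {
    "ana":  "design review concept sketch prototype",
    "ben":  "design review prototype test schedule",
    "carl": "budget travel lunch",
}

# bag-of-words member-term matrix
vocab = sorted({w for d in docs.values() for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs.values()], float)

# LSA: rank-2 truncated SVD projects members into a latent semantic space
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :2] * s[:2]

# cosine similarity of latent coordinates -> weighted adjacency matrix
norms = np.linalg.norm(Z, axis=1, keepdims=True)
A = (Z / norms) @ (Z / norms).T
np.fill_diagonal(A, 0.0)

# simple strength centrality: row sums of the adjacency matrix
centrality = A.sum(axis=1)
```

Here "ana" and "ben" share design vocabulary and come out strongly tied, while "carl" writes about unrelated topics and receives near-zero tie-strength, which is the kind of signal the pattern classification above builds on.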
Procedia PDF Downloads 71
723 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System
Authors: June-Jei Kuo, Yi-Chuan Hsieh
Abstract:
Because of the rapid growth of information technology, more and more libraries are introducing new information retrieval systems to enhance the user experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few studies have examined their usability from the users' perspective. The aims of this study are to understand the scenarios of information retrieval system use and to learn why users are willing to continue using a web-scale discovery system, in order to improve such systems and promote the use of university libraries. Besides questionnaires, observations, and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate the system quality, information quality, service quality, use, user satisfaction, flow, and continued use of the web-scale discovery system by students of National Chung Hsing University. The results are analyzed through descriptive statistics and structural equation modeling using AMOS. The results reveal that, in the web-scale discovery system, users' evaluations of system quality, information quality, and service quality are positively related to use and satisfaction; however, service quality only affects user satisfaction. User satisfaction and flow show a significant impact on continued use. Moreover, user satisfaction has a significant impact on user flow. According to the results of this study, academic libraries are recommended to maintain the stability of the information retrieval system, improve the quality of the information content, and strengthen the relationship between subject librarians and students.
Meanwhile, the system provider is required to improve the system user interface, minimize the number of layers at the system level, strengthen data accuracy and relevance, modify the sorting criteria of the data, and support an auto-correct function. Finally, establishing better communication with librarians is recommended for all users.
Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library
Procedia PDF Downloads 104
722 Marriage Domination and Divorce Domination in Graphs
Authors: Mark L. Caay, Rodolfo E. Maza
Abstract:
In this paper, the authors define two new variants of domination in graphs: marriage domination and divorce domination. A subset S ⊆ V(G) is said to be a marriage dominating set of G if for every e ∈ E(G), there exists a u ∈ S such that u is an end vertex of e. A marriage dominating set S ⊆ V(G) is said to be a divorce dominating set of G if G\S is a disconnected graph. In this study, the authors present conditions on graphs under which marriage domination and divorce domination take place, and under which the two sets coincide. Furthermore, the authors give necessary and sufficient conditions for marriage domination to avoid divorce.
Keywords: domination, decomposition, marriage domination, divorce domination, marriage theorem
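Reading the definition as requiring an end vertex of every edge to lie in S, both properties can be checked mechanically. This is a hedged sketch with plain adjacency lists and a depth-first search, not the authors' characterization:

```python
def is_marriage_dominating(vertices, edges, S):
    # every edge must have at least one end vertex inside S
    return all(u in S or v in S for u, v in edges)

def is_divorce_dominating(vertices, edges, S):
    # a marriage dominating set whose removal disconnects the rest of the graph
    if not is_marriage_dominating(vertices, edges, S):
        return False
    rest = [v for v in vertices if v not in S]
    if len(rest) < 2:
        return False
    adj = {v: set() for v in rest}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    # traverse G \ S from one remaining vertex
    seen, stack = {rest[0]}, [rest[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) < len(rest)

# demo on the path 1-2-3-4-5
V = [1, 2, 3, 4, 5]
E = [(1, 2), (2, 3), (3, 4), (4, 5)]
```

On this path, S = {2, 4} covers every edge and leaves {1, 3, 5} with no edges, so it is both marriage and divorce dominating.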
Procedia PDF Downloads 25
721 Constructing a Probabilistic Ontology from DBLP Data
Authors: Emna Hlel, Salma Jamousi, Abdelmajid Ben Hamadou
Abstract:
Every model for knowledge representation intended to model real-world applications must be able to cope with the effects of uncertain phenomena. One of the main defects of classical ontologies is their inability to represent and reason with uncertainty. To remedy this defect, we propose a method to construct a probabilistic ontology, integrating uncertain information into an ontology that models a set of basic DBLP (Digital Bibliography & Library Project) publications using a probabilistic model.
Keywords: classical ontology, probabilistic ontology, uncertainty, Bayesian network
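A minimal illustration of the probabilistic machinery involved: a two-node Bayesian network with made-up probabilities (not the authors' DBLP model) showing how an uncertain fact is marginalized and then updated by Bayes' rule:

```python
# hypothetical network: R = "the DBLP record is reliable",
# C = "a fact extracted from it is correct"; numbers are illustrative only
p_r = 0.9                                  # prior P(R = true)
p_c_given = {True: 0.95, False: 0.40}      # CPT: P(C = true | R)

# marginalization: P(C) = sum over r of P(C | R = r) * P(R = r)
p_c = p_c_given[True] * p_r + p_c_given[False] * (1 - p_r)

# Bayes' rule: posterior P(R = true | C = true)
posterior = p_c_given[True] * p_r / p_c
```

Observing a correct fact raises the belief that the record is reliable from 0.90 to about 0.955, which is the kind of update a probabilistic ontology supports and a classical one cannot express.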
Procedia PDF Downloads 348
720 Analyzing the Street Pattern Characteristics on Young People’s Choice to Walk or Not: A Study Based on Accelerometer and Global Positioning Systems Data
Authors: Ebru Cubukcu, Gozde Eksioglu Cetintahra, Burcin Hepguzel Hatip, Mert Cubukcu
Abstract:
Obesity and overweight cause serious health problems. Public and private organizations aim to encourage walking in various ways in order to cope with the problem of obesity and overweight. This study aims to understand how the spatial characteristics of the urban street pattern, its connectivity and complexity, influence young people’s choice to walk or not. 185 public university students in Izmir, the third largest city in Turkey, participated in the study. Each participant wore an accelerometer and a global positioning (GPS) device for a week. The accelerometer records the intensity of the participant’s activity at a specified time interval, and the GPS device records the locations of those activities. Combining the two datasets, activity maps are derived. These maps are then used to differentiate the participants’ walking trips from their motor vehicle trips. Given that, the frequencies of walking and motor vehicle trips are calculated at the street segment level, and the street segments are then categorized into two groups: ‘preferred by pedestrians’ and ‘preferred by motor vehicles’. Graph theory-based accessibility indices are calculated to quantify the spatial characteristics of the streets in the sample. Six different indices are used: (I) edge density, (II) edge sinuosity, (III) eta index, (IV) node density, (V) order of a node, and (VI) beta index. T-tests show that the index values for the ‘preferred by pedestrians’ and ‘preferred by motor vehicles’ segments are significantly different. The findings indicate that the spatial characteristics of the street network have a measurable effect on young people’s choice to walk or not. Policy implications are discussed. This study is funded by the Scientific and Technological Research Council of Turkey, Project No: 116K358.
Keywords: graph theory, walkability, accessibility, street network
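Several of the listed indices have standard graph-theoretic formulas (e.g. the beta index is the ratio of edge count to node count). This sketch computes a few of them on an invented toy street network; the exact definitions used in the paper may differ:

```python
import math

# toy street network: node coordinates in km, edges between node ids
nodes = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
area_km2 = 1.0  # study area, assumed

def length(e):
    (x1, y1), (x2, y2) = nodes[e[0]], nodes[e[1]]
    return math.hypot(x2 - x1, y2 - y1)

total_len = sum(length(e) for e in edges)
beta = len(edges) / len(nodes)            # beta index: edges per node
node_density = len(nodes) / area_km2      # nodes per unit area
edge_density = total_len / area_km2       # network length per unit area
eta = total_len / len(edges)              # eta index: average edge length
order = {v: sum(v in e for e in edges) for v in nodes}  # order (degree) of a node
```

Higher beta and node density generally signal a better-connected, finer-grained street pattern, which is the property the t-tests above compare across the two segment groups.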
Procedia PDF Downloads 228
719 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy cells from normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak clustering belonging. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
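The shared-nearest-neighbor construction and the removal of weakly attached vertices can be sketched as follows. The mutual-kNN rule and singleton-component noise flagging below are deliberate simplifications of the modularity-based procedure the abstract describes:

```python
import numpy as np

def snn_graph(X, k=2):
    # mutual k-nearest-neighbour graph, weighted by shared neighbours:
    # an edge exists only if i and j are in each other's kNN lists
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    knn = [set(np.argsort(row)[:k]) for row in d]
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if i in knn[j] and j in knn[i]:
                W[i, j] = W[j, i] = 1 + len(knn[i] & knn[j])
    return W

def components(W):
    # connected components of the SNN graph; singleton components are
    # treated here as noise candidates (cells that fit no cluster well)
    n = len(W)
    label, comp = [-1] * n, 0
    for s in range(n):
        if label[s] == -1:
            stack, label[s] = [s], comp
            while stack:
                u = stack.pop()
                for v in range(n):
                    if W[u, v] > 0 and label[v] == -1:
                        label[v] = comp
                        stack.append(v)
            comp += 1
    return label

# demo: two tight "cell type" clusters plus one noisy outlier cell
X = np.array([[0, 0], [1, 0], [0, 1.1],
              [10, 10], [11, 10], [10, 11.1],
              [30, 0.0]])
lab = components(snn_graph(X))
```

The outlier points at its nearest cluster, but no cluster member points back, so it gains no mutual edges and falls out as its own component.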
Procedia PDF Downloads 114
718 Effect of Serine/Threonine Kinases on Autophagy Mechanism
Authors: Ozlem Oral, Seval Kilic, Ozlem Yedier, Serap Dokmeci, Devrim Gozuacik
Abstract:
Autophagy is a degradation pathway that is activated under stress conditions. It digests macromolecules, such as abnormal proteins and long-lived organelles, by engulfing them and subsequently delivering the cargo to lysosomes. Members of the phospholipid-dependent serine/threonine kinase family are involved in many signaling pathways that are necessary for the regulation of cellular metabolic activation. Previous studies implicate serine/threonine kinases in the mechanisms of many diseases, depending on the signaling pathway that is activated and/or inactivated. Data indicate that the signaling pathways activated by serine/threonine kinases are also involved in activation of the autophagy mechanism. However, information about the effect of serine/threonine kinases on the autophagy mechanism, and the roles of these effects in disease formation, is limited. In this study, we investigated the effect of activated serine/threonine kinases on the autophagic pathway. We performed a commonly used autophagy assay, GFP-LC3 dot formation, and using microscopy analyses we evaluated promotion and/or inhibition of autophagy in serine/threonine kinase-overexpressing fibroblasts as well as cancer cells. In addition, we carried out confocal microscopy analyses and examined autophagic flux by utilizing the differential pH sensitivities of RFP and GFP in the mRFP-GFP-LC3 probe. Based on shRNA-library screening, we identified autophagy-related proteins affected by serine/threonine kinases. We further studied the involvement of serine/threonine kinases in the molecular mechanism of the newly identified autophagy proteins and found that the autophagic pathway is indirectly controlled by serine/threonine kinases via specific autophagic proteins. Our data indicate a molecular connection between two critical cellular mechanisms, which have important roles in the formation of many disease pathologies, particularly cancer.
This project is supported by TUBITAK-1001-Scientific and Technological Research Projects Funding Program, Project No: 114Z836.
Keywords: autophagy, GFP-LC3 dot formation assay, serine/threonine kinases, shRNA-library screening
Procedia PDF Downloads 292
717 Hamiltonian Paths and Cycles Passing through Prescribed Edges in the Balanced Hypercubes
Authors: Dongqin Cheng
Abstract:
The n-dimensional balanced hypercube BHn (n ≥ 1) has been proved to be a bipartite graph. Let P be a set of edges whose induced subgraph consists of pairwise vertex-disjoint paths, and let u and v be any two vertices from different partite sets of V(BHn). In this paper, we prove that if |P| ≤ 2n − 2 and the subgraph induced by P has neither u nor v as an internal vertex, or has both u and v as end-vertices, then BHn contains a Hamiltonian path joining u and v passing through P. As a corollary, if |P| ≤ 2n − 1, then BHn contains a Hamiltonian cycle passing through P.
Keywords: interconnection network, balanced hypercube, Hamiltonian cycle, prescribed edges
Procedia PDF Downloads 205
716 A Molecular Dynamic Simulation Study to Explore Role of Chain Length in Predicting Useful Characteristic Properties of Commodity and Engineering Polymers
Authors: Lokesh Soni, Sushanta Kumar Sethi, Gaurav Manik
Abstract:
This work uses molecular simulations to create equilibrated structures of a range of commercially used polymers. Equilibrated structures generated for polyvinyl acetate (isotactic), polyvinyl alcohol (atactic), polystyrene, polyethylene, polyamide 66, polydimethylsiloxane, polycarbonate, poly(ethylene oxide), polyamide 12, natural rubber, polyurethane, polycarbonate (bisphenol-A), and poly(ethylene terephthalate) are employed to estimate the chain length that correctly predicts the chain parameters and properties. Further, the equilibrated structures are used to predict properties such as density, solubility parameter, cohesive energy density, surface energy, and the Flory-Huggins interaction parameter. The simulated densities for polyvinyl acetate, polyvinyl alcohol, polystyrene, polypropylene, and polycarbonate, 1.15 g/cm3, 1.125 g/cm3, 1.02 g/cm3, 0.84 g/cm3, and 1.223 g/cm3, respectively, are in good agreement with available literature estimates. However, the critical numbers of repeating units, or the degrees of polymerization after which the solubility parameter saturates, are 15, 20, 25, 10, and 20, respectively. This indicates that such properties, which dictate the miscibility of two or more polymers in their blends, are strongly dependent on the chosen polymer and its characteristic properties. An attempt has been made to correlate such properties with polymer characteristics like the Kuhn length, free volume, and the energy term, which play a vital role in predicting the mentioned properties. These results help us to screen and propose a useful library that may be used by research groups to estimate polymer properties using molecular simulations of chains with the predicted critical lengths.
The library shall help obviate the need for researchers to spend effort finding the critical chain length needed for simulating the mentioned polymer properties.
Keywords: Kuhn length, Flory-Huggins interaction parameter, cohesive energy density, free volume
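The solubility parameter and the Flory-Huggins interaction parameter mentioned above are linked by standard relations: the Hildebrand parameter is δ = sqrt(CED), and a common estimate is χ ≈ V(δ1 − δ2)²/(RT). The numeric inputs in this sketch are hypothetical, not the simulated values from the study:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def solubility_parameter(ced_j_per_cm3):
    # Hildebrand solubility parameter: delta = sqrt(CED);
    # note (J/cm^3)^0.5 is numerically identical to MPa^0.5
    return math.sqrt(ced_j_per_cm3)

def flory_huggins_chi(delta1, delta2, v_ref_cm3, T=298.15):
    # chi ~ V_ref * (delta1 - delta2)^2 / (R T); deltas in MPa^0.5,
    # V_ref in cm^3/mol; 1 MPa * cm^3 = 1 J, so units cancel cleanly
    return v_ref_cm3 * (delta1 - delta2) ** 2 / (R * T)

# demo with illustrative (not simulated) deltas and a 100 cm^3/mol reference volume
chi_example = flory_huggins_chi(18.6, 19.1, 100.0)
```

A small δ mismatch gives χ well below the critical value for a symmetric blend, which is why converged solubility parameters (past the critical chain length reported above) matter for miscibility screening.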
Procedia PDF Downloads 195
715 Hypergraph for System of Systems Modeling
Authors: Haffaf Hafid
Abstract:
Hypergraphs, after being used to model the structural organization of Systems of Systems (SoS) at the macroscopic level, have recently been generalized as a powerful representation at different stages of complex system modelling. In this paper, we first describe different applications of hypergraph theory and, step by step, introduce multilevel modeling of SoS by integrating Constraint Programming Languages (CSP) dealing with an engineering system reconfiguration strategy. As an application, we present an ACT terminal controlled by a set of Intelligent Automated Vehicles.
Keywords: hypergraph model, structural analysis, bipartite graph, monitoring, system of systems, reconfiguration analysis, hypernetwork
Procedia PDF Downloads 489
714 The Development of Iranian Theatrical Performance through the Integration of Narrative Elements from Western Drama
Authors: Azadeh Abbasikangevari
Abstract:
Background and Objectives: Theatre and performance are two separate themes. What is presented in Iran as performance comprises ritual and traditional forms of play. Iranian performance has its roots in myth and ritual. Drama is essentially a Western phenomenon that gradually entered Iran and influenced Iranian performance. Theatre is based on antagonism (axis) and protagonism (anti-axis), while performance has a monotonous and steady motion. The elements of Iranian performance include the field, performance on the stage, and magnification in performance, all of which are based on narration. This type of narration has been present in Iranian modern drama. The objective of this study was to analyze drama structure according to narration elements through a comparison between Western theatre and Iranian performance, and to determine the structural differences in the type of narrative. Materials and Methods: In this study, the elements of drama were analyzed using the library method among the available library resources. The review of the literature included research articles and textbooks focusing on Iranian plays, as well as books and articles covering narrative and drama elements. Data were analyzed using a comparative-descriptive method. Results: Examining different kinds of Iranian performances showed that narrative has always been a characteristic feature of Iranian plays. Iranians have narrated stories and myths and have had a particular skill for oral literature. Over time, they slowly introduced narrative culture into their art, where this element is the most important structural element in Iran's dramatic art. Narration in traditional Iranian plays, such as Ta'ziyeh and Naghali, was oral; consequently, it was slowly forgotten and excluded from written theatrical texts.
Since drama entered Iran in its Western form, plays written by Iranian authors have been influenced by the narrative elements existing in Western plays. Conclusions: The narrative element has undoubtedly had an impact on modern and contemporary Iranian drama. Therefore, the element of narration is an integral part of the traditional Iranian play structure.
Keywords: drama methodology, Iranian performance, Iranian modern drama, narration
Procedia PDF Downloads 133
713 Geological Structure Identification in the Semilir Formation: Correlating Geological and Geophysical (Very Low Frequency) Data for Disaster Zonation with Current Density Parameters and Geological Surface Information
Authors: E. M. Rifqi Wilda Pradana, Bagus Bayu Prabowo, Meida Riski Pujiyati, Efraim Maykhel Hagana Ginting, Virgiawan Arya Hangga Reksa
Abstract:
The VLF (Very Low Frequency) method is an electromagnetic method that uses low frequencies between 10 and 30 kHz, which results in fairly deep penetration. In this study, the VLF method was used for zonation of disaster-prone areas by identifying geological structures in the form of faults. Data acquisition was carried out in the Trimulyo Region, Jetis District, Bantul Regency, Special Region of Yogyakarta, Indonesia, with 8 measurement paths. This study uses wave transmitters from Japan and Australia to obtain Tilt and Elipt values that can be used to create RAE (Rapat Arus Ekuivalen, or equivalent current density) sections, which identify areas that are easily crossed by electric current. These sections indicate the existence of geological structures in the form of faults in the study area, characterized by high RAE values. Processing the VLF data yields a Tilt vs. Elipt graph and a Moving Average (MA) Tilt vs. MA Elipt graph for each path; these show a fluctuating pattern and no intersections. Data processing uses Matlab software. Areas with low RAE values of 0%-6% indicate media with low conductivity and high resistivity, interpreted as the sandstone, claystone, and tuff lithologies that are part of the Semilir Formation, whereas high RAE values of 10%-16% indicate media with high conductivity and low resistivity, interpreted as a fault zone filled with fluid. The existence of the fault zone is supported by the discovery of a normal fault on the surface with strike N55°W and dip 63°E at coordinates X = 433256 and Y = 9127722, so that residents' activities in the zone, such as housing and mining, can be avoided to reduce the risk of natural disasters.
Keywords: current density, faults, very low frequency, zonation
Procedia PDF Downloads 175
712 Implementation of a Serializer to Represent PHP Objects in the Extensible Markup Language
Authors: Lidia N. Hernández-Piña, Carlos R. Jaimez-González
Abstract:
Interoperability in distributed systems is an important feature that refers to the communication of two applications written in different programming languages. This paper presents a serializer and a de-serializer of PHP objects to and from XML, which is an independent library written in the PHP programming language. The XML generated by this serializer is independent of the programming language, and can be used by other existing Web Objects in XML (WOX) serializers and de-serializers, which allow interoperability with other object-oriented programming languages.
Keywords: interoperability, PHP object serialization, PHP to XML, web objects in XML, WOX
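To illustrate the idea of language-neutral object serialization, one can reflect an object's attributes into an XML tree. This sketch is in Python rather than PHP and does not reproduce the actual WOX wire format; the element and attribute names are invented:

```python
import xml.etree.ElementTree as ET

def serialize(obj, name="object"):
    # reflect a scalar, list, dict, or plain object into an XML element tree,
    # a minimal stand-in for what a WOX-style serializer does
    el = ET.Element(name)
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        el.set("type", type(obj).__name__)
        el.text = "" if obj is None else str(obj)
    elif isinstance(obj, (list, tuple)):
        el.set("type", "list")
        for item in obj:
            el.append(serialize(item, "item"))
    elif isinstance(obj, dict):
        el.set("type", "map")
        for k, v in obj.items():
            el.append(serialize(v, str(k)))
    else:
        # ordinary object: recurse over its attribute dictionary
        el.set("type", obj.__class__.__name__)
        for k, v in vars(obj).items():
            el.append(serialize(v, k))
    return el

class Book:
    def __init__(self, title, year):
        self.title, self.year = title, year

xml_text = ET.tostring(serialize(Book("Graphs", 2024), "book"), encoding="unicode")
```

Because the output records only names, types, and values, a de-serializer in any object-oriented language can rebuild an equivalent object, which is the interoperability point the abstract makes.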
Procedia PDF Downloads 237
711 Hosoya Polynomials of Zero-Divisor Graphs
Authors: Abdul Jalil M. Khalaf, Esraa M. Kadhim
Abstract:
The Hosoya polynomial of a graph G is a graphical invariant polynomial whose first derivative at x = 1 equals the Wiener index, and from whose second derivative at x = 1 the Hyper-Wiener index is obtained. In this paper we study the Hosoya polynomial of zero-divisor graphs.
Keywords: Hosoya polynomial, Wiener index, Hyper-Wiener index, zero-divisor graphs
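The Hosoya polynomial is H(G, x) = Σ_k d(G, k) x^k, where d(G, k) counts unordered vertex pairs at distance k, so its coefficients can be collected by BFS from every vertex. This sketch verifies W(G) = H'(G, 1) on a small path graph (a generic graph, not a zero-divisor graph):

```python
from collections import Counter, deque

def hosoya_coefficients(adj):
    # d(G, k) for k >= 1, gathered by BFS from every vertex;
    # each unordered pair is reached twice, hence the final // 2
    counts = Counter()
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for d in dist.values():
            if d > 0:
                counts[d] += 1
    return {k: c // 2 for k, c in counts.items()}

def wiener_from_hosoya(coeffs):
    # W(G) = H'(G, 1) = sum over k of k * d(G, k)
    return sum(k * c for k, c in coeffs.items())

# demo: the path graph P4 on vertices 1-2-3-4
P4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

For P4 the coefficients are d(1) = 3, d(2) = 2, d(3) = 1, giving H(x) = 3x + 2x² + x³ and W = 3 + 4 + 3 = 10.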
Procedia PDF Downloads 531
710 Product Development in a Company
Authors: Giorgi Methodishvili, Iuliia Methodishvili
Abstract:
In this paper, a product development algorithm is used to determine the optimal management of financial resources in a company. Aspects of financial management considered include the initial investment, the examination of all possible ways to solve the problem, and the optimal rotation length of profit. The software for the given problems is based on a greedy algorithm. The obtained model and program enable us to define the optimal version of management of the relevant financial flows by using a visual diagram at each level of investment.
Keywords: management, software, optimal, greedy algorithm, graph-diagram
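As a hedged sketch of how a greedy algorithm might allocate a fixed investment budget: the option names and figures below are invented, the selection rule (profit-to-cost ratio) is a common greedy heuristic rather than the paper's algorithm, and greedy choices are not guaranteed optimal in general:

```python
def greedy_allocation(budget, options):
    # options: (name, cost, expected_profit) tuples;
    # pick greedily by descending profit/cost ratio while budget remains
    chosen, remaining = [], budget
    for name, cost, profit in sorted(options, key=lambda o: o[2] / o[1],
                                     reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen, budget - remaining

# hypothetical investment options for one product-development cycle
options = [("tooling", 40, 100), ("marketing", 30, 90), ("packaging", 20, 30)]
plan, spent = greedy_allocation(80, options)
```

With a budget of 80, the greedy pass funds "marketing" (ratio 3.0) and "tooling" (ratio 2.5) and skips "packaging", spending 70 in total.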
Procedia PDF Downloads 56
709 Perceptions of Academic Staff on the Influences of Librarians and Working Colleagues Towards the Awareness and Use of Electronic Databases in Umaru Musa Yar’adua University, Katsina
Authors: Lawal Kado
Abstract:
This paper investigates the perceptions of academic staff at Umaru Musa Yar’adua University regarding the influences of librarians and working colleagues on the awareness and use of electronic databases. The study aims to provide insights into the effectiveness of these influences and suggest strategies to improve the usage of electronic databases. Research aim: The aim of this study is to determine the perceptions of academic staff on the influence of librarians and working colleagues towards the awareness and use of electronic databases in Umaru Musa Yar’adua University, Katsina. Methodology: The study adopts a quantitative method and survey research design. The survey questionnaire is distributed to 110 respondents selected through simple random sampling from a population of 523 academic staff. The collected data is analyzed using the Statistical Package for Social Sciences (SPSS) version 23. Findings: The study reveals a high level of general awareness of electronic databases in the university, largely influenced by librarians and colleagues. Librarians have played a crucial role in making academic staff aware of the available databases. The sources of information for awareness include colleagues, social media, e-mails from the library, and internet searching. Theoretical importance: This study contributes to the literature by examining the perceptions of academic staff, which can inform policymakers and stakeholders in developing strategies to maximize the use of electronic databases. Data collection and analysis procedures: The data is collected through a survey questionnaire that utilizes the Likert scaling technique. The closed-ended questions are analyzed using SPSS 23. Question addressed: The paper addresses the question of how librarians and working colleagues influence the awareness and use of electronic databases among academic staff. 
Conclusion: The study concludes that the influence of librarians and working colleagues significantly contributes to the awareness and use of electronic databases among academic staff. The paper recommends the establishment of dedicated departments or units for marketing library resources to further promote the usage of electronic databases.
Keywords: awareness, electronic databases, academic staff, unified theory of acceptance and use of technology, social influence
Procedia PDF Downloads 92
708 Learning Traffic Anomalies from Generative Models on Real-Time Observations
Authors: Fotis I. Giasemis, Alexandros Sopasakis
Abstract:
This study focuses on detecting traffic anomalies using generative models applied to real-time observations. By integrating a Graph Neural Network with an attention-based mechanism within the Spatiotemporal Generative Adversarial Network framework, we enhance the capture of both spatial and temporal dependencies in traffic data. Leveraging minute-by-minute observations from cameras distributed across Gothenburg, our approach provides a more detailed and precise anomaly detection system, effectively capturing the complex topology and dynamics of urban traffic networks.
Keywords: traffic, anomaly detection, GNN, GAN
Procedia PDF Downloads 12
707 SIPINA Induction Graph Method for Seismic Risk Prediction
Authors: B. Selma
Abstract:
The aim of this study is to test the feasibility of the SIPINA method to predict the harmfulness parameters controlling the seismic response. The approach developed takes into consideration both the focal depth and the peak ground acceleration. The parameter to determine is the displacement. The data used for training this method and for the nonlinear seismic analysis are described and applied to a class of damage models for some typical structures of the existing urban infrastructure of Jassy, Romania. The results obtained indicate an influence of the focal depth and the peak ground acceleration on the displacement.
Keywords: SIPINA algorithm, seism, focal depth, peak ground acceleration, displacement
Procedia PDF Downloads 314
706 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than ∼65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
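The Minimum Error Variance estimator described above takes both the redshift estimate and its error from the redshift-dependent probability density function itself. A minimal sketch, with an assumed discrete grid and a toy posterior (not CADIS data):

```python
def mev_redshift(z_grid, pdf):
    """Mean redshift and its error (std. dev.) under a discrete posterior."""
    norm = sum(pdf)
    p = [v / norm for v in pdf]          # normalize the PDF
    z_est = sum(z * w for z, w in zip(z_grid, p))
    var = sum((z - z_est) ** 2 * w for z, w in zip(z_grid, p))
    return z_est, var ** 0.5

# Toy posterior peaked near z = 0.5 (illustrative values only)
z_grid = [0.40, 0.45, 0.50, 0.55, 0.60]
pdf = [0.05, 0.20, 0.50, 0.20, 0.05]
z_est, sigma_z = mev_redshift(z_grid, pdf)
```

A sharply peaked posterior yields a small σz, a broad or multi-modal one a large σz, which is exactly why the error tracks the density function itself.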
Procedia PDF Downloads 80
705 A Newspapers Expectations Indicator from Web Scraping
Authors: Pilar Rey del Castillo
Abstract:
This document describes the building of an average indicator of the general sentiments about the future expressed in the newspapers in Spain. The raw data are collected through the scraping of the Digital Periodical and Newspaper Library website. Basic tools of natural language processing are then applied to the collected information to evaluate the sentiment strength of each word in the texts using a polarized dictionary. The last step consists of summarizing these sentiments to produce daily indices. The results are a first insight into the applicability of these techniques to produce periodic sentiment indicators.
Keywords: natural language processing, periodic indicator, sentiment analysis, web scraping
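A minimal sketch of the last two steps, scoring words against a polarized dictionary and averaging into a daily index; the dictionary entries and headlines below are invented for illustration:

```python
def daily_index(texts, polarity):
    """Average polarity of dictionary words over one day's texts."""
    scores = [polarity[w]
              for text in texts
              for w in text.lower().split()
              if w in polarity]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical polarized dictionary and one day's headlines
polarity = {"growth": 1, "recovery": 1, "crisis": -1, "unemployment": -1}
headlines = ["Economic growth beats forecasts", "Unemployment crisis deepens"]
index = daily_index(headlines, polarity)  # negative: pessimism outweighs
```

A production version would also strip punctuation and handle negation, which simple whitespace tokenization ignores.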
Procedia PDF Downloads 133
704 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China’s Energy Policies
Authors: Philipp Galkin
Abstract:
Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc. would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking. There may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two energy policy research limitations. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded into the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China. However, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document.
This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors.
Keywords: China, energy policy, policy analysis, policy database
Procedia PDF Downloads 323
703 A Lightweight Blockchain: Enhancing Internet of Things Driven Smart Buildings Scalability and Access Control Using Intelligent Direct Acyclic Graph Architecture and Smart Contracts
Authors: Syed Irfan Raza Naqvi, Zheng Jiangbin, Ahmad Moshin, Pervez Akhter
Abstract:
Currently, IoT systems depend on a centralized client-server architecture that causes various scalability and privacy vulnerabilities. Distributed ledger technology (DLT) introduces a set of opportunities for the IoT, leading to practical ideas for existing components at all levels of existing architectures. Blockchain Technology (BCT), as in Bitcoin (BTC) and Ethereum, appears to be one approach to solving several IoT problems and offers multiple possibilities. However, IoT devices are resource-constrained, with insufficient capacity and computational headroom to process blockchain consensus mechanisms; the existing challenges of traditional BCT for the IoT are poor scalability, energy efficiency, and transaction fees. IOTA is a distributed ledger based on a Directed Acyclic Graph (DAG) that ensures M2M micro-transactions are free of charge. IOTA has the potential to address existing IoT-related difficulties such as infrastructure scalability, privacy, and access control mechanisms. We propose an architecture, SLDBI: A Scalable, Lightweight DAG-based Blockchain Design for Intelligent IoT Systems, which adapts the DAG-based Tangle and implements a lightweight message data model to address the IoT limitations. It enables the smooth integration of new IoT devices into a variety of apps. SLDBI enables comprehensive access control, energy efficiency, and scalability in IoT ecosystems by utilizing the Masked Authentication Message (MAM) protocol and the IOTA Smart Contract Protocol (ISCP). Furthermore, we suggest performing the proof-of-work (PoW) computation on the full node in an energy-efficient way. Experiments have been carried out to show the capability of the Tangle to achieve better scalability while maintaining energy efficiency.
The findings show user access control management at granular levels and ensure scaling up to massive networks with thousands of IoT nodes, such as Smart Connected Buildings (SCBDs).
Keywords: blockchain, IoT, direct acyclic graph, scalability, access control, architecture, smart contract, smart connected buildings
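The fee-free structure rests on the Tangle: each new transaction attaches to the DAG by approving up to two earlier unapproved transactions (tips). A minimal sketch of that attachment rule, with invented transaction names and uniform random tip selection:

```python
import random

def attach(tangle, tips, new_tx, rng):
    """Attach new_tx to the DAG by approving up to two randomly chosen tips."""
    approved = rng.sample(sorted(tips), min(2, len(tips)))
    tangle[new_tx] = approved          # edges: new_tx -> approved transactions
    tips.difference_update(approved)   # approved transactions stop being tips
    tips.add(new_tx)

rng = random.Random(0)                 # fixed seed for reproducibility
tangle = {"genesis": []}               # adjacency: tx -> list of approved txs
tips = {"genesis"}
for i in range(5):
    attach(tangle, tips, f"tx{i}", rng)
```

In IOTA itself, tip selection is weighted by a random walk over the Tangle and each attachment carries a small proof of work; both are omitted here.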
Procedia PDF Downloads 123
702 Recognition and Protection of Indigenous Society in Indonesia
Authors: Triyanto, Rima Vien Permata Hartanto
Abstract:
Indonesia is a legal state. The consequence of this status is the recognition and protection of the existence of indigenous peoples. This paper aims to describe the dynamics of legal recognition and protection for indigenous peoples within the framework of Indonesian law. This paper is library research based on the literature. The result states that although the constitution has normatively recognized the existence of indigenous peoples and their traditional rights, in reality, not all rights are recognized and protected. The protection and recognition of indigenous peoples need to be strengthened.
Keywords: indigenous peoples, customary law, state law, state of law
Procedia PDF Downloads 330
701 Introduction to Transversal Pendant Domination in Graphs
Authors: Nayaka S.R., Putta Swamy, Purushothama S.
Abstract:
Let G=(V, E) be a graph. A dominating set S in G is a pendant dominating set if ⟨S⟩ contains a pendant vertex. A pendant dominating set of G which intersects every minimum pendant dominating set in G is called a transversal pendant dominating set. The minimum cardinality of a transversal pendant dominating set is called the transversal pendant domination number of G, denoted by γ_tp(G). In this paper, we begin to study this parameter. We calculate γ_tp(G) for some families of graphs. Furthermore, some bounds and relations with other domination parameters are obtained for γ_tp(G).
Keywords: dominating set, pendant dominating set, pendant domination number, transversal pendant dominating set, transversal pendant domination number
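The definitions can be checked by brute force on a small graph. The sketch below (not from the paper) enumerates the minimum pendant dominating sets of the path P4; {1, 2} turns out to be the only one, so a transversal pendant dominating set of P4 must contain 1 or 2:

```python
from itertools import combinations

def is_dominating(graph, s):
    """Every vertex lies in S or has a neighbor in S."""
    return all(v in s or graph[v] & s for v in graph)

def has_pendant(graph, s):
    """The induced subgraph <S> contains a vertex of degree exactly 1."""
    return any(len(graph[v] & s) == 1 for v in s)

def min_pendant_dominating_sets(graph):
    """All minimum-cardinality pendant dominating sets, by exhaustive search."""
    verts = sorted(graph)
    for k in range(1, len(verts) + 1):
        found = [set(c) for c in combinations(verts, k)
                 if is_dominating(graph, set(c)) and has_pendant(graph, set(c))]
        if found:
            return found
    return []

# Path P4 on vertices 0-1-2-3, adjacency stored as neighbor sets
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
mins = min_pendant_dominating_sets(p4)  # [{1, 2}]
```

Exhaustive search is exponential in |V|, so this only illustrates the definitions; the paper's interest is in closed-form values and bounds for γ_tp(G).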
Procedia PDF Downloads 182
700 Behaviour of an RC Circuit near Extreme Point
Authors: Tribhuvan N. Soorya
Abstract:
Charging and discharging of a capacitor through a resistor can be shown as an exponential curve. Theoretically, it takes infinite time to fully charge or discharge a capacitor. The flow of charge is due to electrons having a finite and fixed value of charge. If we carefully examine the charging and discharging process after several time constants, the points on the q vs. t graph become discrete and the curve becomes discontinuous. Moreover, for all practical purposes a capacitor with charge (q0-e) can be taken as fully charged, as this introduces an error of less than one part per million. Similar is the case for the discharge of a capacitor, where a capacitor retaining only the last electron (charge e) can be taken as fully discharged. With this, we can estimate the finite value of time for fully charging and discharging a capacitor.
Keywords: charging, discharging, RC circuit, capacitor
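The estimate can be made concrete: starting from q(t) = q0(1 - e^(-t/RC)), the remaining charge deficit q0·e^(-t/RC) falls to one electron charge at t = RC·ln(q0/e), a finite time. A short sketch with illustrative component values (the R, C, and q0 below are assumptions, not from the paper):

```python
import math

E = 1.602e-19  # electron charge, coulombs

def time_to_last_electron(R, C, q0):
    """Time at which the remaining charge deficit equals one electron charge."""
    return R * C * math.log(q0 / E)

R, C = 1.0e3, 1.0e-6       # 1 kOhm, 1 uF  ->  time constant RC = 1 ms
q0 = 1.0e-6                # full charge of 1 uC
t = time_to_last_electron(R, C, q0)
tau_count = t / (R * C)    # roughly 29.5 time constants, i.e. finite
```

The same expression applies to discharge, where q(t) = q0·e^(-t/RC) itself drops to one electron charge.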
Procedia PDF Downloads 443
699 Consumer Protection Law For Users Mobile Commerce as a Global Effort to Improve Business in Indonesia
Authors: Rina Arum Prastyanti
Abstract:
Information technology has changed the ways of transacting and enabled new opportunities in business transactions. The problems faced by consumers of m-commerce include, among others: the consumer will have difficulty accessing full information about the products on offer and the forms of transactions, given the small screen and limited storage capacity; the need to protect children from various forms of excess supply and usage; errors in accessing and disseminating personal data; and more complex problems concerning agreements and dispute resolution that can protect consumers and assure the security of personal data. No less important are the risks to payment and personal payment information, which also demand a solution. The purposes of this study are 1) to describe the phenomenon of the use of mobile commerce in Indonesia; 2) to determine the form of legal protection for consumers using mobile commerce; 3) to identify the right type of law so as to provide legal protection for consumers using mobile commerce. This research is descriptive qualitative research with primary and secondary data sources, and it is normative legal research. Data were collected through library research. The analysis technique used is deductive analysis. Growing mobile technology and more affordable prices, as well as lower rates due to provider competition, have increased the number of mobile users; Indonesia ranks fourth in the world in mobile phone users, with the number of mobile phones estimated at around 250.1 million against a population of 237,556,363. The Indonesian form of legal protection in the use of mobile commerce is still only a part of Law No. 11 of 2008 on Information and Electronic Transactions, and until now there is no rule of law that specifically regulates mobile commerce.
The legal protection model that can be applied to protect consumers of mobile commerce should ensure that consumers get information about the potential security and privacy challenges they may face in m-commerce and the measures that can be used to limit the risk; encourage the development of security measures and built-in security features; encourage mobile operators to implement data security policies and measures to prevent unauthorized transactions; and provide methods of redress, appropriate in both timeliness and effectiveness, when consumers suffer financial loss.
Keywords: mobile commerce, legal protection, consumer, effectiveness
Procedia PDF Downloads 365
698 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business
Authors: Kritchakhris Na-Wattanaprasert
Abstract:
The objective of this research is to design and develop a prototype of a key performance indicator system that is suitable for warehouse management in a case study and its user requirements. In this study, we design a prototype key performance indicator (KPI) system for a warehouse case study of a furniture business by a methodology with the following steps: identify the scope of the research and study related papers, gather the necessary data and user requirements, develop key performance indicators based on the balanced scorecard, design the program and database for the key performance indicators, code the program and set the database relationships, and finally test and debug each module. This study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. Microsoft SQL Server 2010 is used to create the system database. In regard to the visual programming language, Microsoft Visual C# 2010 is chosen as the graphical user interface development tool. This system consists of six main menus: menu login, menu main data, menu financial perspective, menu customer perspective, menu internal perspective, and menu learning and growth perspective. Each menu consists of key performance indicator forms. Each form contains a data import section, a data input section, a data search and edit section, and a report section. The system generates outputs in 5 main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report. The user selects the condition of the report and the time period. As the system has been developed and tested, we find that it is one way of judging the extent to which warehouse objectives have been achieved. Moreover, it encourages warehouse functions to proceed more efficiently. In order to be useful for other industries, this system can be adjusted appropriately.
To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should review the target values and set better-suited targets periodically as the situation fluctuates in the future; the warehouse should likewise review the key performance indicators and set better-suited key performance indicators periodically as the situation fluctuates in the future, to increase competitiveness and take advantage of new opportunities.
Keywords: key performance indicator, warehouse management, warehouse operation, logistics management
Procedia PDF Downloads 432
697 Utilizing Literature Review and Shared Decision-Making to Support a Patient Make the Decision: A Case Study of Virtual Reality for Postoperative Pain
Authors: Pei-Ru Yang, Yu-Chen Lin, Jia-Min Wu
Abstract:
Background: A 58-year-old man with a history of osteoporosis and diabetes presented with chronic pain in his left knee due to severe knee joint degeneration. Knee replacement surgery was recommended by the doctor, but the patient suffered from low pain tolerance and wondered whether virtual reality could relieve acute postoperative wound pain. Methods: We used the PICO (patient, intervention, comparison, and outcome) approach to generate indexed keywords and searched for systematic review articles from 2017 to 2021 in the Cochrane Library, PubMed, and ClinicalKey databases. Results: The initial literature results included 38 articles: 12 Cochrane Library articles and 26 PubMed articles. One article was selected for further analysis after removing duplicates and off-topic articles. The eight trials included in this article were published between 2013 and 2019 and recruited a total of 723 participants. The studies, conducted in India, Lebanon, Iran, South Korea, Spain, and China, included adults who underwent hemorrhoidectomy, dental surgery, craniotomy or spine surgery, episiotomy repair, and knee surgery, with mean ages ranging from 24.1 ± 4.1 to 73.3 ± 6.5 years. Virtual reality is an emerging non-drug postoperative analgesia method. The findings showed that pain scores were reduced by a mean of 1.48 points (95% CI: -2.02 to -0.95, p-value < 0.0001) in minor surgery and 0.32 points in major surgery (95% CI: -0.53 to -0.11, p-value < 0.03), and overall postoperative satisfaction improved. Discussion: Postoperative pain is a common clinical problem in surgical patients. Research has confirmed that virtual reality can create an immersive interactive environment, engage patients, and effectively relieve postoperative pain. However, virtual reality requires the purchase of hardware, software, and other related computer equipment, and its high cost is a disadvantage.
We selected the best literature based on the clinical question to answer the patient's question and used shared decision making (SDM) to help the patient make decisions based on the clinical situation after knee replacement surgery, improving the quality of patient-centered care.
Keywords: knee replacement surgery, postoperative pain, shared decision making, virtual reality
Procedia PDF Downloads 69
696 University Clusters Using ICT for Teaching and Learning
Authors: M. Roberts Masillamani
Abstract:
There is a phenomenal difference with regard to the teaching methodology adopted at urban and rural colleges. However, bright and talented students may come from rural backgrounds too, and there is a huge dearth of digitization in rural areas and lesser developed countries. Today's students need new skills to compete and be successful in the future. Education should be a combination of practical, intellectual, and social skills. What does this mean for rural classrooms, and how can it be achieved? Rural colleges are not able to hire the best resources, since the best teachers aim to move towards the city. If city-grade facilities were provided everywhere, there would be no rural area in this sense. This is possible by forming university clusters (UC). A university cluster is a group of renowned and accredited universities coming together to bridge this dearth. The UC will deliver live lectures and allow students from remote areas to actively participate in the classroom. This paper tries to present a plan of action for providing a better live classroom teaching and learning system from the city to rural areas and lesser developed countries. This paper, titled "University Clusters using ICT for Teaching and Learning", presents a concept of opening live digital classroom windows for rural colleges, where resources are not available, thus reducing the digital divide. This is different from podcasting a lecture, distance learning, or e-learning. The live lecture can be streamed through digital equipment to another classroom. The rural students can collaborate with their peers, receive critiques, be assessed, collect information, and acquire different techniques in the assessment and learning process. This system will benefit rural students and teachers and develop socio-economic status. It will also increase the degree of confidence of rural students and teachers, thus bringing the concept of 'Train the Trainee' into reality.
An educational university cloud with remote infrastructure facilities (RIF) will be built for each cluster for the above program. The users may be informed about the available lecture schedules through the RIF service. An RIF with an educational cloud can be set up by the universities under one cluster. This paper also discusses university clusters and the methodology to be adopted, as well as some extended features such as tutorial classes, library grids, remote laboratory login, and research and development.
Keywords: lesser developed countries, digital divide, digital learning, education, e-learning, ICT, library grids, live classroom windows, RIF, rural, university clusters and urban
Procedia PDF Downloads 474