Search results for: complete tripartite graph
2654 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications
Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui
Abstract:
Over the last years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution, accompanied by an increase in user demands in terms of latency and computational power. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by models of computation, for instance the synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify the C code generation, for a multi-core platform, of an application modeled in Simulink. To overcome this problem, we propose a workflow that performs an automatic transformation from the Simulink model to the SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. This workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterward, we compared our results to those of this tool, using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of the number of cores and latency: if Preesm needs m processors and latency L, our workflow needs fewer processors and a latency L' < L.
Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow
Procedia PDF Downloads 268
2653 Synchrotron Radiation and Inverse Compton Scattering in Astrophysical Plasma
Authors: S. S. Sathiesh
Abstract:
The aim of this project is to study the synchrotron and inverse Compton scattering radiation mechanisms. We discuss the spectral energy distribution of both theoretically. A Fortran 90 program was written to plot the power-law spectrum of synchrotron radiation. The importance of the power-law spectrum is discussed, together with how its physical parameters can be inferred from model fitting. We also discuss how to infer the physical parameters from the theoretically drawn graph: we show how one can infer B (the magnetic field of the source), γmin, γmax, and the spectral indices (p1, p2) while fitting the curve to the observed data.
Keywords: blazars/quasars, beaming, synchrotron radiation, synchrotron self-Compton, inverse Compton scattering, Mrk 421
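A minimal Python sketch of the kind of broken power-law synchrotron spectrum plot the abstract describes (the authors used Fortran 90; the break frequency and spectral indices below are illustrative values, not the paper's):

```python
import numpy as np
import matplotlib.pyplot as plt

nu = np.logspace(8, 20, 500)       # frequency grid [Hz]
nu_b, p1, p2 = 1e14, 0.7, 1.5      # assumed break frequency and spectral indices

# broken power law: flux ~ nu^-p1 below the break, nu^-p2 above it
flux = np.where(nu < nu_b, (nu / nu_b) ** (-p1), (nu / nu_b) ** (-p2))

plt.loglog(nu, flux)
plt.xlabel("frequency [Hz]")
plt.ylabel("relative flux")
plt.title("Broken power-law synchrotron spectrum (illustrative)")
plt.show()
```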
Procedia PDF Downloads 413
2652 Implant Operation Guiding Device for Dental Surgeons
Authors: Daniel Hyun
Abstract:
Dental implants are among the top three reasons dentists are sued for malpractice, usually due to implant complications arising from the angle of the implant set during surgery. At present, surgeons usually use a 3D-printed navigator customized for the patient's teeth; however, these cannot be reused for other patients and take time to produce. Therefore, I made a guiding device to assist the surgeon in implant operations. The surgeon inputs the objective of the operation, and the device constantly checks whether the surgery is heading towards the objective within the set range, informing the surgeon via an LED. We tested the prototypes' consistency and accuracy by checking the graph, the average standard deviation, and the average change of the calculated angles. The accuracy of performance was also obtained by running the device and checking the outputs. My first prototype used the accelerometer and gyroscope of the Arduino MPU6050 sensor; it produced an unstable graph, a standard deviation of 0.0295, an average change of 0.25, and 66.6% performance accuracy. The second prototype used only the gyroscope; it produced a stable graph, a standard deviation of 0.0062, an average change of 0.075, and 100% performance accuracy, indicating that the accelerometer degraded the functionality of the device. Using only the gyroscope allowed the device to measure the orientations of separate axes without them affecting each other, and also increased the stability and accuracy of the measurements.
Keywords: implant, guide, accelerometer, gyroscope, handpiece
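A minimal sketch of the gyroscope-only angle tracking the abstract describes, written in Python for clarity (the real device runs on an Arduino with an MPU6050): integrate the angular rate over time and compare the running angle with the surgeon's target. The sensor stub, target angle, and tolerance are hypothetical.

```python
import time

TARGET_DEG = 30.0     # assumed objective angle set by the surgeon
TOLERANCE_DEG = 2.0   # assumed acceptable deviation

def read_gyro_rate_dps():
    """Placeholder for the MPU6050 gyroscope read (degrees per second)."""
    return 0.5

angle = 0.0
last = time.time()
for _ in range(100):
    now = time.time()
    angle += read_gyro_rate_dps() * (now - last)       # integrate rate -> angle
    last = now
    led_on = abs(angle - TARGET_DEG) <= TOLERANCE_DEG  # would drive the LED
    time.sleep(0.01)
print(f"final angle: {angle:.2f} deg, within range: {led_on}")
```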
Procedia PDF Downloads 43
2651 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph
Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn
Abstract:
Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images, has been widely applied to classification and prediction problems in various fields. In this study, we apply the CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to apply the CNN as a binary classifier that predicts the stock market direction (up or down) using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at a chart and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into windows of 5, 10, 15, and 20 days. It then creates graphs for each interval in step 2. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, it optimizes the hyperparameters of the trained model using the validation dataset. To validate our model, we will apply it to the prediction of the KOSPI200 over 1,986 days in eight years (from 2009 to 2016). The experimental dataset will include 14 technical indicators, such as CCI, Momentum, and ROC, and the daily closing price of the KOSPI200 of the Korean stock market.
Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction
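A minimal sketch of the proposed classifier: a CNN that takes a rendered price-chart image and outputs an up/down prediction. The architecture and the 64x64 image size are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn

class ChartCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)   # two classes: up, down

    def forward(self, x):                 # x: (batch, 1, 64, 64) chart images
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ChartCNN()
logits = model(torch.randn(8, 1, 64, 64))  # 8 dummy grayscale chart images
print(logits.shape)                        # torch.Size([8, 2])
```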
Procedia PDF Downloads 425
2650 Managing Cognitive Load in Accounting: An Analysis of Three Instructional Designs in Financial Accounting
Authors: Seedwell Sithole
Abstract:
One of the persistent problems in accounting education is how to effectively support students' learning. A promising approach to this issue is to investigate the extent to which learning is determined by the design of instructional material. This study examines the academic performance of students using three instructional designs in financial accounting. Students' performance scores and reported mental effort ratings were used to determine instructional effectiveness. The findings of this study show that accounting students prefer graph and text designs that are integrated. The results suggest that spatially separated graph and text presentations in accounting should be reorganized to align with the requirements of human cognitive architecture.
Keywords: accounting, cognitive load, education, instructional preferences, students
Procedia PDF Downloads 150
2649 CTHTC: A Convolution-Backed Transformer Architecture for Temporal Knowledge Graph Embedding with Periodicity Recognition
Authors: Xinyuan Chen, Mohd Nizam Husen, Zhongmei Zhou, Gongde Guo, Wei Gao
Abstract:
Temporal Knowledge Graph Completion (TKGC) has attracted increasing attention for its enormous value; however, existing models lack the capability to capture local interactions and global dependencies simultaneously along with evolutionary dynamics, while the latest achievements in convolutions and Transformers have not been employed in this area. Moreover, periodic patterns in TKGs have not been fully explored either. To this end, a multi-stage hybrid architecture with convolution-backed Transformers is introduced to TKGC tasks for the first time, combined with the Hawkes process to model evolving event sequences in a continuous-time domain. In addition, seasonal-trend decomposition is adopted to identify periodic patterns. Experiments on six public datasets are conducted to verify the model's effectiveness against state-of-the-art (SOTA) methods. An extensive ablation study is carried out to evaluate architecture variants as well as the contributions of individual components, paving the way for further potential exploitation. Besides a complexity analysis, input sensitivity and safety challenges are also thoroughly discussed for comprehensiveness, with novel methods.
Keywords: temporal knowledge graph completion, convolution, transformer, Hawkes process, periodicity
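The continuous-time event modeling mentioned above typically relies on the Hawkes conditional intensity; assuming the standard exponential-kernel form (the paper's exact parameterization is not given here):

$$ \lambda(t) = \mu + \sum_{t_i < t} \alpha\, e^{-\beta (t - t_i)}, $$

where $\mu$ is the base event rate, $\alpha$ the excitation added by each past event $t_i$, and $\beta$ the rate at which that excitation decays.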
Procedia PDF Downloads 78
2648 Programmed Speech to Text Summarization Using Graph-Based Algorithm
Authors: Hamsini Pulugurtha, P. V. S. L. Jagadamba
Abstract:
Programmed speech-to-text and text summarization using graph-based algorithms can be utilized in meetings to obtain a short description of the meeting for future reference. The system performs signature verification using a Siamese neural network to confirm the identity of the user, and converts the user-provided audio recording, which is in English, into English text using the speech recognition package available in Python. Often only a summary of the meeting is required; text summarization provides the solution. The transcript is therefore summarized using natural language processing approaches, for example, unsupervised extractive text summarization algorithms.
Keywords: Siamese neural network, English speech, English text, natural language processing, unsupervised extractive text summarization
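A minimal sketch of the transcription step using the speech recognition package the abstract refers to; the audio file name is hypothetical, and the Siamese-network signature check and summarization steps are omitted.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:    # user-provided English audio
    audio = recognizer.record(source)          # read the whole file

text = recognizer.recognize_google(audio)      # speech -> English text
print(text)
```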
Procedia PDF Downloads 217
2647 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on the corresponding complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP using frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results showed that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances. Moreover, the maximum and minimum degrees of the vertices in the sparse graphs do not differ much. Thus, the computation time of methods resolving the TSP on these sparse graphs will be greatly reduced.
Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem
Procedia PDF Downloads 233
2646 Ultraviolet Visible Spectroscopy Analysis on Transformer Oil by Correlating It with Various Oil Parameters
Authors: Rajnish Shrivastava, Y. R. Sood, Priti Pundir, Rahul Srivastava
Abstract:
The power transformer is one of the most important devices used in a power station. Due to various impending faults, ageing, etc., its life is reduced, so it becomes necessary to diagnose the oil for fault analysis. Due to chemical, electrical, thermal, and mechanical stress, the insulating material in the power transformer degrades. It is important to regularly assess the condition of the oil and the remaining life of the power transformer. In this paper, the UV-VIS absorption graph area is correlated with the moisture content, flash point, interfacial tension (IFT), and density of transformer oil, since the UV-VIS absorption graph area varies with the variation of these transformer oil parameters. By obtaining the correlation among the different oil parameters with respect to the UV-VIS absorption area, the decay contents of transformer oil can be predicted.
Keywords: breakdown voltage (BDV), interfacial tension (IFT), moisture content, ultraviolet-visible spectroscopy (UV-VIS)
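A minimal sketch of the kind of correlation the abstract describes: fitting a linear relation between the UV-VIS absorption graph area and one oil parameter (moisture content here). All data values are made up for illustration.

```python
import numpy as np

area = np.array([12.1, 14.8, 17.3, 20.5, 23.9])         # hypothetical absorption areas
moisture_ppm = np.array([8.0, 11.5, 14.2, 18.8, 22.1])  # hypothetical moisture [ppm]

slope, intercept = np.polyfit(area, moisture_ppm, 1)    # least-squares line
r = np.corrcoef(area, moisture_ppm)[0, 1]               # correlation coefficient
print(f"moisture ~ {slope:.3f} * area + {intercept:.3f}  (r = {r:.3f})")
```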
Procedia PDF Downloads 642
2645 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The traveling salesman problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations of this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the traveling salesman problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including its special forms, which are Feynman and Pauli X. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be reduced further. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover's algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
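For reference, with N log₂ K qubits the oracle acts on a search space of K^N candidate edge encodings; if M of them are valid Hamiltonian cycles cheaper than the current bound, the standard Grover analysis (not specific to this paper) gives the optimal number of oracle iterations as

$$ r \approx \left\lfloor \frac{\pi}{4} \sqrt{\frac{K^{N}}{M}} \right\rfloor. $$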
Procedia PDF Downloads 190
2644 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion
Authors: Ali Kazemi
Abstract:
In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by significant volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that correctly reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators, including interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN algorithm is adept at learning the relational patterns among financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting daily price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework.
Our findings promise to revolutionize investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting
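A minimal sketch of a GCN-to-LSTM fusion model of the kind described: a graph convolution mixes information across related assets at each time step, and an LSTM models the resulting sequence. The layer sizes, the identity adjacency placeholder, and the single-output head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class GCNLSTM(nn.Module):
    def __init__(self, n_assets, n_feats, hidden=32):
        super().__init__()
        self.gcn = nn.Linear(n_feats, hidden)             # one GCN layer
        self.lstm = nn.LSTM(n_assets * hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                  # e.g., next-day direction score

    def forward(self, x, a_hat):
        # x: (batch, time, assets, feats); a_hat: (assets, assets) normalized adjacency
        h = torch.relu(a_hat @ self.gcn(x))               # spatial mixing per time step
        out, _ = self.lstm(h.flatten(2))                  # temporal modeling
        return self.head(out[:, -1])

model = GCNLSTM(n_assets=10, n_feats=5)
x = torch.randn(4, 30, 10, 5)     # 4 samples, 30 trading days, 10 assets, 5 features
a_hat = torch.eye(10)             # placeholder market graph
print(model(x, a_hat).shape)      # torch.Size([4, 1])
```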
Procedia PDF Downloads 65
2643 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
One of the currently popular clustering techniques is Spectral Clustering (SC), because of its advantages over conventional approaches such as hierarchical clustering and k-means. However, one of the disadvantages of SC is that it is time consuming, since it requires computing eigenvectors. A number of attempts have been proposed to overcome this disadvantage, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Among PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome PIC's inter-class collision problem with the same efficiency as PIC. In this paper, we develop Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG, and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and shorter computation time than the other compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
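A minimal sketch of the basic power iteration clustering step that the deflation-based variants build on: repeatedly apply the row-normalized affinity matrix to a vector until it converges to a pseudo-eigenvector, then cluster its one-dimensional entries (a crude median split here). The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
w = np.exp(-d2)                        # affinity matrix
np.fill_diagonal(w, 0.0)
p = w / w.sum(axis=1, keepdims=True)   # row-normalized transition matrix

v = rng.random(len(x))
for _ in range(50):                    # power iteration
    v_new = p @ v
    v_new /= np.abs(v_new).sum()       # L1 normalization keeps the scale stable
    if np.abs(v_new - v).max() < 1e-9:
        break
    v = v_new

labels = (v > np.median(v)).astype(int)  # 2-way split of the 1-D embedding
print(labels)
```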
Procedia PDF Downloads 189
2642 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that they do not, in some cases, separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on certain genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with a weak clustering belonging. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
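A minimal sketch of the core idea: build a shared-nearest-neighbor (SNN) similarity graph over cells, detect communities by modularity optimization, and flag weakly attached vertices as candidate noise. The data are synthetic, and the weak-attachment rule is a simplified stand-in for the paper's cluster-level evaluation metrics.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
cells = rng.normal(size=(60, 10))                  # 60 cells x 10 genes

k = 8
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(cells)
_, idx = nbrs.kneighbors(cells)
neigh = [set(map(int, row[1:])) for row in idx]    # drop the self-neighbor

g = nx.Graph()
g.add_nodes_from(range(len(cells)))
for i in range(len(cells)):
    for j in neigh[i]:
        shared = len(neigh[i] & neigh[j])          # SNN edge weight
        if shared > 0:
            g.add_edge(i, j, weight=shared / k)

communities = greedy_modularity_communities(g, weight="weight")
for i in range(len(cells)):
    comm = next(c for c in communities if i in c)
    inside = sum(g[i][j]["weight"] for j in g[i] if j in comm)
    total = sum(g[i][j]["weight"] for j in g[i]) or 1.0
    if inside / total < 0.5:                       # weak clustering belonging
        print(f"cell {i}: candidate noise")
```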
Procedia PDF Downloads 112
2641 Utilization of Complete Feed Based on Ammoniated Corn Waste on Bali Cattle Performance
Authors: Elihasridas, Rusmana Wijaya Setia Ninggrat
Abstract:
This research aims to study the utilization of an ammoniated corn waste complete ration as a substitute for a natural grass basal ration in Bali cattle. Four treatments (complete feed rations consisting of: R1 = 40% natural grass + 60% concentrate (control), R2 = 50% natural grass + 50% concentrate, R3 = 60% natural grass + 40% concentrate, and R4 = 40% ammoniated corn waste + 60% concentrate) were employed in this experiment, arranged in a Latin square design. Observed variables included dry matter intake (DMI), average daily gain, and feed conversion. Data were analyzed using the analysis of variance for a 4 x 4 Latin square design. The DMI for R1 was 7.15 kg/day, which was significantly (P < 0.05) higher than for R2 (6.32 kg/day) and R3 (6.07 kg/day), but was not significantly different (P > 0.05) from R4 (7.01 kg/day). The average daily gain for R1 (0.75 kg/day) was significantly (P < 0.05) higher than for R2 (0.66 kg/day) and R3 (0.61 kg/day), but was not significantly different (P > 0.05) from R4 (0.74 kg/day). Feed conversion was not significantly affected (P > 0.05) by the ration. It was concluded that the ammoniated corn waste complete ration (40% ammoniated corn waste + 60% concentrate) could be utilized as a substitute for the natural grass basal ration.
Keywords: ammoniated corn waste, Bali cattle, complete feed, daily gain
Procedia PDF Downloads 205
2640 NSBS: Design of a Network Storage Backup System
Authors: Xinyan Zhang, Zhipeng Tan, Shan Fan
Abstract:
The first layer of defense against data loss is backup data. This paper implements an agent-based network backup system built on a tripartite construction of backup agents, server-storage agents, and server-backup agents, and realizes snapshots and hierarchical indexing in the NSBS. It separates control commands from the data flow and balances the system load, thereby improving the efficiency of system backup and recovery. The test results show that the agent-based network backup system can effectively improve task-based concurrency, reasonably allocate network bandwidth, incur a smaller backup performance loss, and improve data recovery efficiency by 20%.
Keywords: agent, network backup system, three architecture model, NSBS
Procedia PDF Downloads 459
2639 Encapsulation of Volatile Citronella Essential oil by Coacervation: Efficiency and Release Kinetic Study
Authors: Rafeqah Raslan, Mastura AbdManaf, Junaidah Jai, Istikamah Subuki, Ana Najwa Mustapa
Abstract:
Volatile citronella essential oil was encapsulated by simple coacervation and by complex coacervation using gum Arabic and gelatin as wall materials, with glutaraldehyde as the crosslinking agent. A citronella standard calibration graph was developed, with R² equal to 0.9523, for the accurate determination of encapsulation efficiency and the release study. The release kinetics were analyzed based on Fick's law of diffusion for polymeric systems, and a linear graph of the log of fractional release against the log of time was constructed to determine the release rate constant k and the diffusional exponent n. Both coacervation methods in the present study produced encapsulation efficiencies of around 94%. The capsule morphology analysis supported the release kinetic mechanisms of the capsules produced by both coacervation processes.
Keywords: simple coacervation, complex coacervation, encapsulation efficiency, release kinetic study
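The log-log linearization referred to above is, assuming the standard Korsmeyer-Peppas form of the power-law release model:

$$ \frac{M_t}{M_\infty} = k\,t^{n} \quad\Longrightarrow\quad \log\frac{M_t}{M_\infty} = \log k + n \log t, $$

where $M_t/M_\infty$ is the fraction of oil released at time $t$, $k$ is the release rate constant, and $n$ is the diffusional exponent read off as the slope of the log-log graph.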
Procedia PDF Downloads 316
2638 Stress Concentration Trend for Combined Loading Conditions
Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo
Abstract:
Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component, and they create larger stresses within the part. This maximum stress is difficult to determine, as it occurs directly at the point of minimum area, and strain gauges have yet to be developed that can analyze stresses over such minute areas. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in determining the maximum stress a part can withstand. These graphs were developed from historical data yielded by experimentation. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA finite element analysis software. The results of this analysis will be validated through further testing: the 3D-modeled parts will be subjected to further finite element analysis using Patran-Nastran software, and the finite element models will then be verified by testing physical specimens on a tensile testing machine. Once the data are validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
Keywords: stress concentration, finite element analysis, finite element models, combined loading
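The relationship the abstract relies on is the standard definition of the stress concentration factor (not specific to this project):

$$ \sigma_{\max} = K_t\,\sigma_{\mathrm{nom}}, $$

where $K_t$ is the dimensionless geometric factor read from the graph and $\sigma_{\mathrm{nom}}$ is the nominal (average) stress found analytically or experimentally.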
Procedia PDF Downloads 443
2637 A Forbidden-Minor Characterization for the Class of Co-Graphic Matroids Which Yield the Graphic Element-Splitting Matroids
Authors: Prashant Malavadkar, Santosh Dhotre, Maruti Shikare
Abstract:
The n-point splitting operation on graphs is used, together with some further operations, to characterize 4-connected graphs. The element splitting operation on binary matroids is a natural generalization of the notion of the n-point splitting operation on graphs. The element splitting operation on a graphic (cographic) matroid may not yield a graphic (cographic) matroid; characterizations of the graphic (cographic) matroids whose element splitting matroids are graphic (cographic) are known. In this paper, we give a necessary and sufficient condition for a cographic matroid to yield a graphic matroid under the element splitting operation. In fact, we prove that the element splitting operation, by any pair of elements, on a cographic matroid yields a graphic matroid if and only if it has no minor isomorphic to M(K4), where K4 is the complete graph on 4 vertices.
Keywords: binary matroids, splitting, element splitting, forbidden minor
Procedia PDF Downloads 276
2636 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is not especially complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards the solution for a problem is the primary objective in the initial stages. The optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A 'logic' that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on the project face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, know better the consequences and causes of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, acquires substantial knowledge, and begins to transfer it efficiently, the individuals in charge of the project along with the managers focus more on the optimized solutions rather than the traditional ones, to minimize the required resources. Hence, as time progresses, the authorities prioritize the attainment of the required logic at a lower amount of dedicated resources. For empirical analysis of the stated theory, leaders and key figures in organizations were surveyed for their ideas on the appropriate logic required for tackling a problem. Key pointers spotted in successfully implemented solutions were noted from the analysis of the responses, and a metric for measuring logic was developed. A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear: the required logic is attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher while the resources deployed are comparatively lower. Hence, the difference between consecutively plotted 'resources' decreases, and as a result the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment the difference ideally keeps decreasing while the logic attained through the solution keeps increasing. Even if the resource investment is higher, the managers and authorities ideally make sure that the investment is being made in proportionally high logic for a larger problem; that is, ideally the slope of the graph increases with the plotting of each point.
Keywords: decision-making, leadership, logic, strategic management
Procedia PDF Downloads 108
2635 Human Posture Estimation Based on Multiple Viewpoints
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse this multi-view information to obtain a set of high-confidence human key points. We used these as the input for a Spatio-Temporal Graph Convolution network (ST-GCN). ST-GCN is a deep learning model for processing spatio-temporal data that can effectively capture spatio-temporal relationships in video sequences. By using the MvP algorithm to fuse multi-view information and feeding it into the spatio-temporal graph convolution model, this study provides an effective method to improve the accuracy of human posture estimation and provides strong support for further research and application in related fields.
Keywords: multi-view, pose estimation, ST-GCN, joint fusion
Procedia PDF Downloads 70
2634 A Combinatorial Representation for the Invariant Measure of Diffusion Processes on Metric Graphs
Authors: Michele Aleandri, Matteo Colangeli, Davide Gabrielli
Abstract:
We study a generalization to a continuous setting of the classical Markov chain tree theorem. In particular, we consider an irreducible diffusion process on a metric graph. The unique invariant measure has an atomic component on the vertices and an absolutely continuous part on the edges. We show that the corresponding density at x can be represented by a normalized superposition of the weights associated to metric arborescences oriented toward the point x. A metric arborescence is a metric tree oriented towards its root. The weight of each oriented metric arborescence is obtained as the product of the exponentials of integrals of the form ∫ 2b/σ², where b is the drift and σ² is the diffusion coefficient, along the oriented edges, times a weight for each node determined by the local orientation of the arborescence around the node, times the inverse of the diffusion coefficient at x. The metric arborescences are obtained by cutting the original metric graph along some edges.
Keywords: diffusion processes, metric graphs, invariant measure, reversibility
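For a single edge (the classical one-dimensional case) the density being generalized is, up to normalization, the standard invariant density of $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$:

$$ \rho(x) \;\propto\; \frac{1}{\sigma^{2}(x)} \exp\!\left( \int^{x} \frac{2b(y)}{\sigma^{2}(y)}\, dy \right); $$

the combinatorial representation in the abstract superposes such exponential edge weights over all metric arborescences oriented toward x.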
Procedia PDF Downloads 172
2633 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories can be difficult, given the limited information about the nature of an attack. It is even more difficult when attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have some limitations in identifying malicious and anomalous traffic. Moreover, IDSs only point out suspicious events; they do not show how the events relate to each other or which event possibly caused another event to happen. Because of this, it is important to investigate new methods capable of tracking attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks by leveraging observable malicious behaviors to detect the most probable attack events that can cause another event to occur in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the internet network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be obtained at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, more than 85% of causal pairs have an average time difference between the causal and effect events, in both computed and observed data, within 5 minutes. This result can be used as a preventive measure against future attacks. Although the forecast horizon may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
Procedia PDF Downloads 156
2632 Efficient Heuristic Algorithm to Speed Up Graphcut in Gpu for Image Stitching
Authors: Tai Nguyen, Minh Bui, Huong Ninh, Tu Nguyen, Hai Tran
Abstract:
The GraphCut algorithm has been widely utilized to solve various types of computer vision problems. Its expensive computational cost has encouraged many researchers to improve the speed of the algorithm. Recent works proposed schemes that work on parallel computing platforms such as CUDA. However, the problem of low convergence speed prevents the usage of GraphCut in real-time applications. In this paper, we propose a global suppression heuristic to boost the convergence process of the algorithm. A parallel implementation of the GraphCut algorithm on CUDA designed for the image stitching problem is introduced. Our method achieves up to a 3× speedup on a graph of size 80 × 480 compared to the best sequential GraphCut algorithm, while producing satisfactory stitched images suitable for panorama applications. Our source code will soon be available for further research.
Keywords: CUDA, graph cut, image stitching, texture synthesis, maxflow/mincut algorithm
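A minimal sketch of the maxflow/mincut step at the heart of GraphCut-based stitching: find the minimum cut separating pixels assigned to image A from pixels assigned to image B. The tiny four-pixel graph and its capacities are made up; the paper's CUDA implementation and global suppression heuristic are not shown.

```python
import networkx as nx

g = nx.DiGraph()
# terminal edges: source "A" and sink "B" stand for the two source images
for pixel, cap_a, cap_b in [("p0", 9, 1), ("p1", 5, 4), ("p2", 4, 5), ("p3", 1, 9)]:
    g.add_edge("A", pixel, capacity=cap_a)
    g.add_edge(pixel, "B", capacity=cap_b)
# neighborhood edges: cost of running the seam between adjacent pixels
for u, v, cap in [("p0", "p1", 3), ("p1", "p2", 2), ("p2", "p3", 3)]:
    g.add_edge(u, v, capacity=cap)
    g.add_edge(v, u, capacity=cap)

cut_value, (side_a, side_b) = nx.minimum_cut(g, "A", "B")
print(cut_value, sorted(side_a), sorted(side_b))
```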
Procedia PDF Downloads 131
2631 Weak Mutually Unbiased Bases versus Mutually Unbiased Bases in Terms of T-Designs
Authors: Mohamed Shalaby, Yasser Kamal, Negm Shawky
Abstract:
Mutually unbiased bases (MUBs) play an important role in the field of quantum computation and information. A complete set of these bases can be constructed when the system dimension is a power of a prime. Constructing such a complete set in composite dimensions is still an open problem. Recently, the concept of weak mutually unbiased bases (WMUBs) in composite dimensions was introduced; a complete set of such bases can be constructed by combining the MUBs in each subsystem. In this paper, we present a comparative study between MUBs and WMUBs in the context of complex projective t-designs. Explicit proofs are presented.
Keywords: complex projective t-design, finite quantum systems, mutually unbiased bases, weak mutually unbiased bases
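For reference, two orthonormal bases $\{|e_i\rangle\}$ and $\{|f_j\rangle\}$ of a d-dimensional Hilbert space are mutually unbiased when (standard definition, not specific to this paper)

$$ |\langle e_i | f_j \rangle|^{2} = \frac{1}{d} \quad \text{for all } i, j; $$

weak MUBs relax this condition on the overlap values in composite dimensions.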
Procedia PDF Downloads 448
2630 On the Basis Number and the Minimum Cycle Bases of the Wreath Product of Paths with Wheels
Authors: M. M. M. Jaradat
Abstract:
For a given graph G, the set Ԑ of all subsets of E(G) forms an |E(G)|-dimensional vector space over Z₂ with vector addition X ⊕ Y = (X\Y) ∪ (Y\X) and scalar multiplication 1·X = X and 0·X = ∅ for all X, Y ∈ Ԑ. The cycle space, C(G), of a graph G is the vector subspace of (Ԑ, ⊕, ·) spanned by the cycles of G. Traditionally there have been two notions of minimality among bases of C(G). First, a basis B of G is called d-fold if each edge of G occurs in at most d cycles of the basis B. The basis number, b(G), of G is the least non-negative integer d such that C(G) has a d-fold basis; a required basis of C(G) is a basis in which each edge of G belongs to at most b(G) elements of B. Second, a basis B is called a minimum cycle basis (MCB) if its total length Σ_{B∈B} |B| is minimum among all bases of C(G). The lexicographic product GρH has the vertex set V(GρH) = V(G) × V(H) and the edge set E(GρH) = {(u1, v1)(u2, v2) | u1 = u2 and v1v2 ∈ E(H), or u1u2 ∈ E(G) and there is α ∈ Aut(H) such that α(v1) = v2}. In this work, a construction of a minimum cycle basis for the wreath product of wheels with paths is presented. Also, the length of the longest cycle of a minimum cycle basis is determined. Moreover, the basis number for the wreath product of the same is investigated.
Keywords: cycle space, minimum cycle basis, basis number, wreath product
Procedia PDF Downloads 280
2629 Message Passing Neural Network (MPNN) Approach to Multiphase Diffusion in Reservoirs for Well Interconnection Assessments
Authors: Margarita Mayoral-Villa, J. Klapp, L. Di G. Sigalotti, J. E. V. Guzmán
Abstract:
Automated learning techniques are widely applied in the energy sector to address challenging problems from a practical point of view. To this end, we discuss the implementation of a Message Passing Neural Network (MPNN) within a Graph Neural Network (GNN) to leverage the neighborhood of a set of nodes during the aggregation process. This approach enables the characterization of multiphase diffusion processes in the reservoir, such that the flow paths underlying the interconnections between multiple wells may be inferred from previously available data on flow rates and bottomhole pressures. The results thus obtained compare favorably with the predictions produced by the reduced-order Capacitance-Resistance Models (CRM) and suggest the potential of MPNNs to enhance the robustness of the forecasts while improving the computational efficiency.
Keywords: multiphase diffusion, message passing neural network, well interconnection, interwell connectivity, graph neural network, capacitance-resistance models
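A minimal sketch of one message-passing/aggregation step of the kind an MPNN performs: each well (node) updates its state from the sum of messages sent by its graph neighbors. The graph, features, and weights are synthetic; the paper's actual architecture is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wells, n_feats = 5, 3
h = rng.normal(size=(n_wells, n_feats))           # node states (rate/pressure features)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # assumed well interconnections

w_msg = rng.normal(size=(n_feats, n_feats))       # message function weights
w_upd = rng.normal(size=(2 * n_feats, n_feats))   # update function weights

agg = np.zeros_like(h)
for i, j in edges:                                # aggregate messages both ways
    agg[j] += h[i] @ w_msg
    agg[i] += h[j] @ w_msg

h_new = np.tanh(np.concatenate([h, agg], axis=1) @ w_upd)  # node update
print(h_new.shape)                                # (5, 3)
```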
Procedia PDF Downloads 149
2628 An Owen Value for Cooperative Games with Pairwise a Priori Incompatibilities
Authors: Jose M. Gallardo, Nieves Jimenez, Andres Jimenez-Losada, Esperanza Lebron
Abstract:
A game with a priori incompatibilities is a triple (N, v, g) where (N, v) is a cooperative game and (N, g) is a graph that establishes initial incompatibilities between some players. In these games, the negotiation has two stages. In the first stage, players can only negotiate with others with whom they are compatible. In the second stage, the grand coalition will be formed. We introduce a value for these games. Given a game with a priori incompatibilities (N, v, g), we consider the family of coalitions without incompatibility relations among their players. This family is a normal set system, or coalition configuration, Ig. Therefore, we can assign to each game with a priori incompatibilities (N, v, g) a game with coalition configuration (N, v, Ig). Now, in order to obtain a payoff vector for (N, v, g), it suffices to calculate a payoff vector for (N, v, Ig). To this end, we apply a value for games with coalition configuration. In our case, we use the dual configuration value, which has been studied in the literature. With this method, we obtain a value for games with a priori incompatibilities, which is called the Owen value for a priori incompatibilities. We provide a characterization of this value.
Keywords: cooperative game, game with coalition configuration, graph, independent set, Owen value, Shapley value
Procedia PDF Downloads 131
2627 Research on Dynamic Practical Byzantine Fault Tolerance Consensus Algorithm
Authors: Cao Xiaopeng, Shi Linkai
Abstract:
The Practical Byzantine Fault Tolerance (PBFT) algorithm does not add nodes dynamically, which limits its practical application. In order to add nodes dynamically, the Dynamic Practical Byzantine Fault Tolerance (DPBFT) algorithm was proposed. First, a new node sends request information to the other nodes in the network, and the nodes in the network decide on its identity and request. Then the nodes in the network reverse-connect to the new node and send it the block information of the current network, and the new node updates its information. Finally, the new node participates in the next round of consensus, changes the view, and selects the master node. This paper abstracts the decisions of the nodes as an undirected connected graph, and the final consistency of the graph is used to prove that the proposed algorithm can adapt to the network dynamically. Compared with the PBFT algorithm, DPBFT has better fault tolerance and lower network bandwidth usage.
Keywords: practical byzantine, fault tolerance, blockchain, consensus algorithm, consistency analysis
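A minimal sketch of the join flow described above: the new node broadcasts a request, existing nodes decide on it, reverse-connect, and send the current block information before the node enters the next consensus round. The classes, the identity check, and the two-thirds rule are illustrative assumptions; the real DPBFT message formats and view-change logic are not reproduced.

```python
class Node:
    def __init__(self, node_id, chain):
        self.node_id = node_id
        self.chain = list(chain)    # block information of the current network
        self.peers = set()

    def decide_join(self, request):
        return request.get("valid_identity", False)   # identity/request check

def join_network(new_node, network):
    request = {"node_id": new_node.node_id, "valid_identity": True}
    approvals = [n for n in network if n.decide_join(request)]
    if 3 * len(approvals) <= 2 * len(network):        # need > 2/3 agreement
        return False
    for n in approvals:                               # reverse connection
        n.peers.add(new_node.node_id)
        new_node.peers.add(n.node_id)
    new_node.chain = list(approvals[0].chain)         # update block information
    network.append(new_node)                          # joins next consensus round
    return True

network = [Node(i, ["genesis"]) for i in range(4)]
print(join_network(Node(4, []), network), len(network))  # True 5
```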
Procedia PDF Downloads 130
2626 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KGs) and their relation to Graph Embeddings (GEs) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra's algorithm). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next-node/link prediction. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLMs), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
Procedia PDF Downloads 68
2625 Paralysis from an Ear Infection: A Severe Case of Otitis Externa Leading to Acute Complete Cervical Cord Syndrome
Authors: Rachael Collins, George Lafford
Abstract:
We report the case of a generally fit and well 54-year-old gentleman who presented with a two-day history of worsening left-sided otorrhea, headache, neck stiffness, vomiting, and pyrexia on the background of a seven-week history of otitis externa (OE). His condition progressed dramatically as he developed symptoms consistent with acute complete cervical cord syndrome, with radiological evidence of skull base osteomyelitis, parapharyngeal, retropharyngeal, and paravertebral abscesses, and a sigmoid sinus thrombus. Ultimately he made a significant, although not complete, recovery. This case is unique in demonstrating how OE can develop into a potentially life-threatening condition. It emphasizes the importance of early diagnosis and treatment of OE and the recognition of 'red flag' symptoms, and highlights the importance of a multidisciplinary team (MDT) approach when managing complex complications of OE.
Keywords: ENT, neurology, otology, MDT
Procedia PDF Downloads 149