Search results for: approximate graph matching
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1282


532 The Failure and Energy Mechanism of Rock-Like Material with Single Flaw

Authors: Yu Chen

Abstract:

This paper investigates the influence of flaws on the failure process of rock-like material under uniaxial compression. In the laboratory, uniaxial compression tests of intact specimens and a series of specimens containing a single flaw were conducted. The inclination angles of the flaws are 0°, 15°, 30°, 45°, 60°, 75° and 90°. Based on the laboratory tests, the corresponding numerical simulation models were built and loaded in PFC2D. After analysing the crack initiation, failure modes, deformation field, and energy mechanism for both the laboratory tests and the numerical simulation, it can be concluded that the influence of a flaw on the failure process is determined by its inclination. The characteristic stresses generally increase with rising flaw angle. Tensile cracks develop from gentle flaws (α ≤ 30°), and shear cracks develop from the other flaws. The propagation of cracks changes during the failure process, and the failure mode of a specimen corresponds to the orientation of the flaw. A flaw has a significant influence on the transverse deformation field at the middle of the specimen, except for the 75° and 90° flaw samples. The input energy, strain energy and dissipation energy of the specimens show approximately increasing trends with rising flaw angle, and the specimens present large differences in energy distribution.

Keywords: failure pattern, particle deformation field, energy mechanism, PFC

Procedia PDF Downloads 210
531 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data

Authors: Adarsh Shroff

Abstract:

Big data is a collection of datasets so large and complex that it becomes difficult to process using database management tools. Operations such as search, analysis, and visualization on big data are performed using data mining, the process of extracting patterns or knowledge from large data sets. As the data evolve, mining results become stale and obsolete over time. Incremental processing is a promising approach to refreshing mining results: it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also the more sophisticated iterative computation widely used in data mining applications, and incorporates a set of novel techniques to reduce I/O overhead for accessing preserved fine-grain computation states. To optimize the mining results, we evaluate i2MapReduce using a one-step algorithm and three iterative algorithms with diverse computation characteristics for efficient mining.
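
A toy, single-process analogue of the key-value pair level incremental idea described above (this is only a sketch of the concept, not the i2MapReduce implementation): preserved per-key counts are updated from the delta of a changed input split instead of recomputing the whole job from scratch.

```python
# Illustrative sketch only: a toy analogue of key-value pair level
# incremental processing. Preserved per-key word counts are updated from the
# changed input split rather than recomputed from scratch.
from collections import Counter

def map_split(text):
    """Map phase: emit (word, count) pairs for one input split."""
    return Counter(text.split())

def incremental_update(saved_state, old_split, new_split):
    """Re-map only the changed split and apply the delta to the saved state."""
    delta = Counter(map_split(new_split))
    delta.subtract(map_split(old_split))
    saved_state.update(delta)          # reduce phase applied to deltas only
    return +saved_state                # drop keys whose count fell to zero

# Initial run over all splits.
splits = ["big data mining", "graph mining on big data"]
state = Counter()
for s in splits:
    state.update(map_split(s))

# One split changes; only it is reprocessed.
state = incremental_update(state, splits[1], "graph mining on evolving data")
print(state)
```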

Keywords: big data, map reduce, incremental processing, iterative computation

Procedia PDF Downloads 346
530 Improving System Performance through User's Resource Access Patterns

Authors: K. C. Wong

Abstract:

This paper demonstrates a number of examples in the hope of shedding some light on the possibility of designing future operating systems in a more adaptation-based manner. A modern operating system, we conceive, should possess the capability of 'learning' in such a way that it can dynamically adjust its services and behavior according to the current status of the environment in which it operates. In other words, a modern operating system should play a more proactive role while providing system services to users. As such, a modern operating system is expected to create a computing environment in which its users are provided with system services that better match their dynamically changing needs. The examples demonstrated in this paper show that users' resource access patterns 'learned' and determined during a session can be utilized to improve system performance and hence to provide users with a better and more effective computing environment. The paper also discusses how to use the frequency, the continuity, and the duration of resource accesses in a session to quantitatively measure and determine users' resource access patterns for the examples shown in the paper.
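
A minimal sketch of how the three measures named above (frequency, continuity, duration) could be computed for one resource in a session; the event format of (timestamp, resource) pairs is an assumption made for illustration, not the paper's data model.

```python
# Minimal sketch (not the paper's implementation): quantify a user's access
# pattern for one resource in a session via frequency, continuity (longest
# run of consecutive accesses), and duration (time span of accesses).
from itertools import groupby

def access_pattern(session, resource):
    frequency = sum(1 for _, r in session if r == resource)
    # Continuity: length of the longest run of consecutive accesses.
    runs = [len(list(g)) for key, g in groupby(r for _, r in session) if key == resource]
    continuity = max(runs, default=0)
    # Duration: time between the first and last access to the resource.
    times = [t for t, r in session if r == resource]
    duration = (max(times) - min(times)) if times else 0.0
    return frequency, continuity, duration

session = [(0.0, "disk"), (0.4, "disk"), (0.9, "net"), (1.3, "disk"), (2.0, "disk")]
print(access_pattern(session, "disk"))   # (4, 2, 2.0)
```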

Keywords: adaptation-based systems, operating systems, resource access patterns, system performance

Procedia PDF Downloads 134
529 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of a time series based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detection of a gradual change in mean. The algorithm is then applied to and tested on a randomly generated time series with a gradual linear trend in mean to demonstrate its usefulness.
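
For orientation, a minimal standard one-sided CUSUM chart applied to a series with a gradual linear drift is sketched below; it is the benchmark procedure mentioned above, not the authors' modified MCUSUM (whose likelihood-ratio statistic targets the drift directly), and the reference value k and threshold h are illustrative choices.

```python
# Minimal sketch of a standard one-sided CUSUM chart for an upward shift in
# mean. The reference value k and decision threshold h are illustrative.
import numpy as np

def cusum_upper(x, mu0, sigma, k=0.5, h=5.0):
    """Return the alarm index (or None) and the CUSUM path."""
    s, path = 0.0, []
    for t, xt in enumerate(x):
        z = (xt - mu0) / sigma
        s = max(0.0, s + z - k)        # accumulate standardized exceedances
        path.append(s)
        if s > h:
            return t, np.array(path)   # first time the statistic crosses h
    return None, np.array(path)

rng = np.random.default_rng(0)
drift = np.concatenate([np.zeros(150), 0.02 * np.arange(150)])   # gradual drift
x = rng.normal(0.0, 1.0, 300) + drift
alarm, _ = cusum_upper(x, mu0=0.0, sigma=1.0)
print("alarm at index:", alarm)
```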

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 293
528 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model known as the Signal-to-Interference-plus-Noise Ratio (SINR) model. The main issue is to compute schedules with the minimum number of timeslots, that is, minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, our result is the first for the data collection problem with the bounded-sized message model under both interference models.
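
For reference, the textbook form of the SINR constraint assumed by the physical interference model is given below; the abstract does not state its exact parameters, so this is the standard formulation rather than the authors' specific instance.

```latex
% A transmission from u to v in timeslot t succeeds iff
\[
\mathrm{SINR}(u,v) \;=\;
\frac{P / d(u,v)^{\alpha}}
     {N \;+\; \sum_{w \in S_t \setminus \{u\}} P / d(w,v)^{\alpha}}
\;\ge\; \beta ,
\]
% where P is the transmission power, d(.,.) the distance, \alpha the
% path-loss exponent, N the ambient noise, S_t the set of concurrent
% senders, and \beta the SINR threshold.
```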

Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks

Procedia PDF Downloads 217
527 Simulation of Pedestrian Service Time at Different Delay Times

Authors: Imran Badshah

Abstract:

Pedestrian service time reflects the performance of a facility, and it is a key parameter for analyzing the capability of the facilities provided to serve pedestrians. The level of service (LOS) for pedestrians mainly depends on pedestrian time and safety. The time a pedestrian spends obtaining a service is mainly influenced by the number of available service points and the time each pedestrian takes to receive a service, which is called the delay time. In this paper, we analyze the simulated pedestrian service time under different delay times. A simulation is performed in AnyLogic by developing a model that reflects the real scenario of pedestrian services such as ticket machine gates at rail stations, airports, shopping malls, and cinema halls. The simulated pedestrian time is determined for various delay values. The simulated results show how pedestrian time changes with the delay pattern. The histogram and time plot of the model give the mean, maximum and minimum values of the pedestrian time. This study helps us examine the behavior of pedestrian time at various service facilities such as subway stations, airports, shopping malls, and cinema halls.
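
A toy Monte Carlo sketch of the same idea is shown below as a stand-in for the AnyLogic agent-based model: pedestrians arrive at a bank of gates and the time in the system is recorded for different delay values; the arrival rate, number of gates, and delay values are illustrative assumptions, not the paper's inputs.

```python
# Toy sketch: pedestrians arriving at a bank of service gates, each service
# taking a fixed delay; the mean and maximum time in the system are reported
# for several delay values. All numerical values are illustrative.
import random, statistics

def simulate(n_pedestrians=2000, n_gates=3, arrival_rate=1.0, delay=2.5, seed=1):
    random.seed(seed)
    gate_free_at = [0.0] * n_gates             # time at which each gate is free
    clock, times_in_system = 0.0, []
    for _ in range(n_pedestrians):
        clock += random.expovariate(arrival_rate)       # next arrival
        g = min(range(n_gates), key=gate_free_at.__getitem__)
        start = max(clock, gate_free_at[g])             # wait if gate is busy
        finish = start + delay                          # fixed service delay
        gate_free_at[g] = finish
        times_in_system.append(finish - clock)          # wait + service
    return statistics.mean(times_in_system), max(times_in_system)

for d in (1.5, 2.5, 4.0):                               # vary the delay time
    print(d, simulate(delay=d))
```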

Keywords: agent-based simulation, anylogic model, pedestrian behavior, time delay

Procedia PDF Downloads 206
526 A Greedy Alignment Algorithm Supporting Medication Reconciliation

Authors: David Tresner-Kirsch

Abstract:

Reconciling patient medication lists from multiple sources is a critical task supporting the safe delivery of patient care. Manual reconciliation is a time-consuming and error-prone process, and recently attempts have been made to develop efficiency- and safety-oriented automated support for professionals performing the task. An important capability of any such support system is automated alignment – finding which medications from a list correspond to which medications from a different source, regardless of misspellings, naming differences (e.g. brand name vs. generic), or changes in treatment (e.g. switching a patient from one antidepressant class to another). This work describes a new algorithmic solution to this alignment task, using a greedy matching approach based on string similarity, edit distances, concept extraction and normalization, and synonym search derived from the RxNorm nomenclature. The accuracy of this algorithm was evaluated against a gold-standard corpus of 681 medication records; this evaluation found that the algorithm predicted alignments with 99% precision and 91% recall. This performance is sufficient to support decision support applications for medication reconciliation.
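
A minimal sketch of the greedy alignment step alone is given below: candidate pairs are scored by a string-similarity measure and accepted best-first. difflib is used here as a stand-in scorer; the paper's system additionally uses edit distances, concept extraction and normalization, and RxNorm synonym search, which are omitted, and the medication strings are placeholders.

```python
# Sketch of greedy best-first alignment of two medication lists by string
# similarity (difflib as a stand-in for the paper's full scoring pipeline).
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def greedy_align(list_a, list_b, threshold=0.6):
    scored = sorted(
        ((similarity(a, b), i, j) for i, a in enumerate(list_a)
                                  for j, b in enumerate(list_b)),
        reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for score, i, j in scored:                 # accept best pairs first
        if score < threshold:
            break
        if i not in used_a and j not in used_b:
            pairs.append((list_a[i], list_b[j], round(score, 2)))
            used_a.add(i); used_b.add(j)
    return pairs

home_meds = ["lisinopril 10 mg", "atorvastatin 20mg", "sertraline 50 mg"]
hospital_meds = ["Lisinopril 10mg tablet", "Atorvastatin 20 mg", "citalopram 20 mg"]
print(greedy_align(home_meds, hospital_meds))
```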

Keywords: clinical decision support, medication reconciliation, natural language processing, RxNorm

Procedia PDF Downloads 282
525 A Study of Using Different Printed Circuit Board Design Methods on Ethernet Signals

Authors: Bahattin Kanal, Nursel Akçam

Abstract:

Data transmission size and frequency requirements are increasing rapidly in electronic communication protocols. Increasing data transmission speeds have made the design of printed circuit boards much more important. It is important to carefully examine the requirements and perform analyses before and after the design of a digital electronic circuit board. This paper delves into impedance matching techniques, signal trace routing considerations, and the impact of layer stacking on signal performance. It extensively explores techniques for minimizing crosstalk and interference, presenting a holistic perspective on design strategies to optimize the quality of high-speed signals. Through a comprehensive review of these design methodologies, this study aims to provide insights into achieving reliable and high-performance printed circuit board layouts for these signals. In this study, the effect of different design methods on Ethernet signals was examined in terms of S-parameters. The HyperLynx software tool from Siemens was used for the analyses.

Keywords: HyperLynx, printed circuit board, S-parameters, Ethernet

Procedia PDF Downloads 31
524 Temperature Dependence of Relative Permittivity: A Measurement Technique Using Split Ring Resonators

Authors: Sreedevi P. Chakyar, Jolly Andrews, V. P. Joseph

Abstract:

A compact method for measuring the relative permittivity of a dielectric material at different temperatures using a single circular Split Ring Resonator (SRR) metamaterial unit working as a test probe is presented in this paper. The dielectric constant of a material depends on its temperature, and the LC resonance of the SRR depends on its dielectric environment. Hence, the temperature of the dielectric material in contact with the resonator influences its resonant frequency. A single SRR placed between transmitting and receiving probes connected to a Vector Network Analyser (VNA) is used as the test probe. The dependence of the resonant frequency of the SRR on temperature between 30 °C and 60 °C is analysed. The relative permittivity 'ε' of a test sample at different temperatures is extracted from a calibration graph drawn between the relative permittivity of samples of known dielectric constant and their corresponding resonant frequencies. This method is found to be an easy and efficient technique for analysing the temperature-dependent permittivity of different materials.
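
The calibration step can be sketched as a simple linear fit through (resonant frequency, permittivity) points for reference samples, from which an unknown sample's permittivity is read off its measured resonance; the numerical values below are made-up placeholders, not the paper's measurements.

```python
# Hedged sketch of the calibration-graph idea: fit a line through reference
# points and invert it for an unknown sample. All values are placeholders.
import numpy as np

ref_eps  = np.array([1.0, 2.1, 3.8, 4.6])         # known dielectric constants
ref_freq = np.array([2.45, 2.39, 2.31, 2.27])     # matching SRR resonances (GHz)

slope, intercept = np.polyfit(ref_freq, ref_eps, deg=1)   # calibration line

def permittivity_from_resonance(f_ghz):
    """Estimate relative permittivity from a measured resonant frequency."""
    return slope * f_ghz + intercept

print(round(permittivity_from_resonance(2.35), 2))
```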

Keywords: metamaterials, negative permeability, permittivity measurement techniques, split ring resonators, temperature dependent dielectric constant

Procedia PDF Downloads 408
523 Performance Analysis of Elliptic Curve Cryptography Using Onion Routing to Enhance the Privacy and Anonymity in Grid Computing

Authors: H. Parveen Begam, M. A. Maluk Mohamed

Abstract:

Grid computing is an environment that allows the sharing and coordinated use of diverse resources in dynamic, heterogeneous and distributed environments using Virtual Organizations (VO). Security is a critical issue due to the open nature of the wireless channels in grid computing, which requires three fundamental services: authentication, authorization, and encryption. Privacy and anonymity are considered important factors when communicating over a publicly spanned network like the web. To ensure a high level of security, we explored an extension of onion routing, used with dynamic token exchange along with protection of the privacy and anonymity of individual identities. To improve the performance of encrypting the layers, elliptic curve cryptography is used. Compared to traditional cryptosystems like RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptosystem) offers equivalent security with smaller key sizes, which results in faster computations, lower power consumption, and memory and bandwidth savings. This paper presents an estimation of the performance improvements of onion routing using ECC as well as a comparison graph of the performance of RSA and ECC.

Keywords: grid computing, privacy, anonymity, onion routing, ECC, RSA

Procedia PDF Downloads 395
522 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate with other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling between multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm, and D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid static obstacles and walking pedestrians within a safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work under GPS-denied conditions. CATE relies on its GPS for its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily become degraded or blocked on campus due to high-rise buildings or trees; the UKF can also help here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
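
A generic A* sketch over a small spatial graph (vertices = landmarks with coordinates, edges = walkable paths) is given below to illustrate the default planner named above; the landmark coordinates and edge costs are placeholders, not the campus map.

```python
# Generic A* over a toy spatial graph with a Euclidean (admissible) heuristic.
import heapq, math

coords = {"A": (0, 0), "B": (2, 0), "C": (2, 2), "D": (4, 2)}
edges = {"A": {"B": 2.0}, "B": {"A": 2.0, "C": 2.0},
         "C": {"B": 2.0, "D": 2.0}, "D": {"C": 2.0}}

def heuristic(u, v):                      # straight-line distance estimate
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in edges[node].items():
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("A", "D"))                   # (['A', 'B', 'C', 'D'], 6.0)
```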

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 141
521 Student Records Management System Using Smart Cards and Biometric Technology for Educational Institutions

Authors: Patrick O. Bobbie, Prince S. Attrams

Abstract:

In recent times, the rapid change in new technologies has transformed the way records are handled in educational institutions. There is also a need for reliable and easy access to these records, resulting in increased productivity in organizations. In academic institutions, such benefits help in quality assessments, institutional performance, and the assessment of teaching and evaluation methods. Students in educational institutions benefit the most when advanced technologies are deployed for accessing records. This research paper discusses the use of biometric technologies coupled with smartcard technologies to provide a unique way of identifying students and matching their data to financial records in order to grant them access to restricted areas such as examination halls. The system developed in this paper has an identity verification component as part of its main functionality. A systematic software development cycle of analysis, design, coding, testing and support was used. The system provides a secure way of verifying a student's identity and real-time verification of financial records. An advanced prototype version of the system has been developed for testing purposes.

Keywords: biometrics, smartcards, identity-verification, fingerprints

Procedia PDF Downloads 416
520 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems in machine learning and deep learning has shot up, but even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as present work on Quantum Machine Learning (QML) promises lower memory consumption and reduced model parameters. However, it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work aims to explore Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.
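
As orientation only, the core idea of a VQC acting as a Q-function can be sketched with a plain numpy statevector simulation of two qubits: state features are angle-encoded, a trainable rotation layer and a CNOT are applied, and the measured expectation values play the role of Q-values. This is a hand-rolled toy, not the authors' architecture or a real quantum backend.

```python
# Toy 2-qubit statevector simulation of a parameterised circuit whose <Z>
# expectation values serve as Q-values for two actions.
import numpy as np

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def q_values(state_features, weights):
    """Angle-encode 2 features, apply a trainable RY layer and a CNOT, and
    return <Z> on each qubit as the Q-values of 2 actions."""
    psi = np.zeros(4); psi[0] = 1.0                                     # |00>
    psi = np.kron(ry(state_features[0]), ry(state_features[1])) @ psi   # encoding
    psi = np.kron(ry(weights[0]), ry(weights[1])) @ psi                 # variational layer
    psi = CNOT @ psi                                                    # entangle
    probs = np.abs(psi) ** 2                           # |00>, |01>, |10>, |11>
    z0 = probs[0] + probs[1] - probs[2] - probs[3]     # <Z> on qubit 0
    z1 = probs[0] - probs[1] + probs[2] - probs[3]     # <Z> on qubit 1
    return np.array([z0, z1])

print(q_values(np.array([0.3, 1.1]), np.array([0.5, -0.2])))
```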

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

Procedia PDF Downloads 128
519 Solution of Singularly Perturbed Differential Difference Equations Using Liouville Green Transformation

Authors: Y. N. Reddy

Abstract:

The class of differential-difference equations which have the characteristics of both classes, i.e., delay/advance and singularly perturbed behaviour, is known as singularly perturbed differential-difference equations. The expressions 'positive shift' and 'negative shift' are also used for 'advance' and 'delay', respectively. In general, an ordinary differential equation in which the highest order derivative is multiplied by a small positive parameter and which contains at least one delay/advance term is known as a singularly perturbed differential-difference equation. Such equations arise in the modelling of various practical phenomena in bioscience, engineering, and control theory, specifically in variational problems, in describing the human pupil-light reflex, in a variety of models for physiological processes or diseases, and in first exit time problems in modelling the expected time for the generation of action potentials in nerve cells by random synaptic inputs in dendrites. In this paper, we envisage the use of the Liouville-Green transformation to find the solution of singularly perturbed differential-difference equations. First, using a Taylor series, the given singularly perturbed differential-difference equation is approximated by an asymptotically equivalent singular perturbation problem. Then the Liouville-Green transformation is applied to obtain the solution. Several model examples are solved, and the results are compared with other methods. It is observed that the present method gives better approximate solutions.
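
The Taylor-series step described above can be illustrated on a generic second-order problem with a small delay δ (this is an illustrative form, not necessarily the paper's specific equation class):

```latex
\[
\varepsilon\, y''(x) + a(x)\, y(x-\delta) + b(x)\, y(x) = f(x),
\qquad
y(x-\delta) \approx y(x) - \delta\, y'(x),
\]
\[
\Longrightarrow\quad
\varepsilon\, y''(x) - \delta\, a(x)\, y'(x) + \bigl(a(x)+b(x)\bigr)\, y(x)
\approx f(x),
\]
% an asymptotically equivalent singular perturbation problem to which the
% Liouville-Green (WKB-type) transformation can then be applied.
```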

Keywords: difference equations, differential equations, singular perturbations, boundary layer

Procedia PDF Downloads 195
518 Dynamics of the Coupled Fitzhugh-Rinzel Neurons

Authors: Sanjeev Kumar Sharma, Arnab Mondal, Ranjit Kumar Upadhyay

Abstract:

Excitable cells often produce different oscillatory activities that help us to understand the transmission and processing of signals in the neural system. We consider a FitzHugh-Rinzel (FH-R) model and study the different dynamics of the model by treating the parameter c as the predominant parameter. The model exhibits different types of neuronal responses such as regular spiking, mixed-mode bursting oscillations (MMBOs), elliptic bursting, etc. Based on the bifurcation diagram, we consider three regimes (MMBOs, elliptic bursting, and the quiescent state). An analytical treatment of the occurrence of the supercritical Hopf bifurcation is presented. Further, we extend our study to a network of a hundred neurons by considering bi-directional synaptic coupling between them. In this article, we investigate the alternation of spiking propagation and bursting phenomena in uncoupled and coupled FH-R neurons. We show that a complete graph of heterogeneous desynchronized neurons can exhibit different types of bursting oscillations for certain coupling strengths. For higher coupling strengths, all the neurons in the network show complete synchronization.
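
A single uncoupled FH-R neuron can be integrated with SciPy as sketched below (the 100-neuron coupled network of the abstract is omitted); the parameter values are typical choices from the literature, not necessarily the ones used in the paper, and c is the parameter varied in the study.

```python
# Sketch of one FitzHugh-Rinzel neuron integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

def fhr(t, state, I=0.3125, a=0.7, b=0.8, c=-0.775, d=1.0, delta=0.08, mu=0.0001):
    v, w, y = state
    dv = v - v**3 / 3.0 - w + y + I     # fast membrane variable
    dw = delta * (a + v - b * w)        # recovery variable
    dy = mu * (c - v - d * y)           # slow subsystem driving bursting
    return [dv, dw, dy]

sol = solve_ivp(fhr, (0.0, 3000.0), [-1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 3000.0, 30000), rtol=1e-8)
print("last membrane potential values:", sol.y[0][-3:])
```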

Keywords: excitable neuron model, spiking-bursting, stability and bifurcation, synchronization networks

Procedia PDF Downloads 122
517 Object Detection Based on Plane Segmentation and Features Matching for a Service Robot

Authors: António J. R. Neves, Rui Garcia, Paulo Dias, Alina Trifan

Abstract:

With the aging of the world population and the continuous growth in technology, service robots are nowadays more and more explored as alternatives to caregivers or personal assistants for elderly or disabled people. Any service robot should be capable of interacting with its human companion, receiving commands, navigating through the environment, either known or unknown, and recognizing objects. This paper proposes an approach for object recognition based on the use of depth information and color images for a service robot. We present a study on two of the most used methods for object detection, where 3D data is used to detect the position of the objects to be classified that are found on horizontal surfaces. Since most of the objects of interest accessible to service robots lie on such surfaces, the proposed 3D segmentation reduces the processing time and simplifies the scene for object recognition. The first approach for object recognition is based on color histograms, while the second is based on the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot.
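
A sketch of the feature-descriptor half of such a pipeline is shown below using OpenCV: SIFT keypoints matched with a brute-force matcher and Lowe's ratio test. The image paths are placeholders, the depth-based plane segmentation step is not shown, and SURF is omitted because it sits in OpenCV's non-free module.

```python
# SIFT keypoint matching between a reference object and a (segmented) scene.
import cv2

model = cv2.imread("object_model.png", cv2.IMREAD_GRAYSCALE)     # reference object
scene = cv2.imread("tabletop_scene.png", cv2.IMREAD_GRAYSCALE)   # segmented surface

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(model, None)
kp2, des2 = sift.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)

good = [m for m, n in knn if m.distance < 0.75 * n.distance]      # Lowe's ratio test
print(f"{len(good)} good matches out of {len(knn)} candidates")
```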

Keywords: object detection, feature, descriptors, SIFT, SURF, depth images, service robots

Procedia PDF Downloads 541
516 Framework for Detecting External Plagiarism from Monolingual Documents: Use of Shallow NLP and N-Gram Frequency Comparison

Authors: Saugata Bose, Ritambhra Korpal

Abstract:

The internet has increased copy-paste scenarios amongst students as well as researchers, leading to different levels of plagiarized documents. For this reason, much research has focused on detecting plagiarism automatically. In this paper, an initiative is discussed in which Natural Language Processing (NLP) techniques as well as supervised machine learning algorithms have been combined to detect plagiarized texts. The major emphasis is on constructing a framework which successfully detects external plagiarism from monolingual texts. For successful detection of plagiarism, an n-gram frequency comparison approach has been implemented to construct the model framework. The framework is based on 120 characteristics which have been extracted while pre-processing the documents using an NLP approach. Afterwards, filter metrics have been applied to select the most relevant characteristics, and then a supervised classification learning algorithm has been used to classify the documents into four levels of plagiarism. A confusion matrix was built to estimate the false positives and false negatives. Our plagiarism framework achieved a very high accuracy score.
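
One containment-style n-gram frequency feature of the kind described above can be sketched as follows; it is a single illustrative characteristic, whereas the paper's framework extracts 120 such characteristics and feeds them to a supervised classifier.

```python
# Fraction of the suspicious document's word n-grams that also occur in a
# source document (a simple containment measure).
from collections import Counter

def word_ngrams(text, n=3):
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def containment(suspicious, source, n=3):
    s, d = word_ngrams(suspicious, n), word_ngrams(source, n)
    overlap = sum((s & d).values())              # multiset intersection
    return overlap / max(sum(s.values()), 1)

src = "plagiarism detection compares n gram frequencies between documents"
sus = "this system compares n gram frequencies between documents to find copying"
print(round(containment(sus, src), 3))
```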

Keywords: lexical matching, shallow NLP, supervised machine learning algorithm, word n-gram

Procedia PDF Downloads 355
515 Comparative Study of Wear and Friction Behavior of Tricalcium Phosphate-Fluorapatite Bioceramic

Authors: Rym Taktak, Achwek Elghazel, Jamel Bouaziz

Abstract:

In the present work, we explored the tribological behavior of tricalcium phosphate-fluorapatite (β-TCP-Fap) bioceramic, which has attracted considerable attention for orthopedic and dental applications. The approximate Fap-βTCP compositions were, respectively, {13.26 wt%, 86.74 wt%}, {19.9 wt%, 80.1 wt%}, {26.52 wt%, 73.48 wt%}, {33.16 wt%, 66.84 wt%} and {40 wt%, 60 wt%}. The effects of fluorapatite additives on friction and wear behavior were studied and discussed. The wear tests were conducted using a pin-on-disk tribometer at room temperature under dry conditions with a constant sliding speed of 0.063 m/s and three loads: 3, 5 and 8 N. The wear rate and friction coefficient of β-TCP with different additive amounts were compared. Alumina ball specimens were used as the pin, and flat-surface β-TCP-Fap specimens served as the antagonist counterface. The results show a large difference between the wear rate of the β-TCP samples and that of the β-TCP-Fap composites for all applied normal forces. This result shows the beneficial effect of fluorapatite on the tribological behavior of β-TCP. Moreover, we note that the β-TCP-26% Fap specimens exhibit, under dry conditions, a lower friction coefficient and a smaller wear rate than the other biocomposites. Thereby, the friction and wear behavior is influenced by the addition of fluorapatite, the applied normal force, and the sliding velocity. To extend the understanding of the wear process, the surface topography of the β-TCP-26% Fap specimens and the wear track obtained during the wear tests were studied using a surface profilometer, optical microscopy, and scanning electron microscopy.

Keywords: alumina, bioceramic, friction and wear test, tricalcium phosphate

Procedia PDF Downloads 231
514 Enhancing the Recruitment Process through Machine Learning: An Automated CV Screening System

Authors: Kaoutar Ben Azzou, Hanaa Talei

Abstract:

Human resources is an important department in every organization, as it manages the life cycle of employees from recruitment and training to retirement or termination of contracts. The recruitment process starts with a job opening, followed by the selection of the best-fit candidates from all applicants. Matching the best profile to a job position requires manually looking through many CVs, which takes hours of work and can sometimes lead to choosing a less-than-best profile. The work presented in this paper aims at reducing the workload of HR personnel by automating the preliminary stages of the candidate screening process, thereby fostering a more streamlined recruitment workflow. This tool introduces an automated system designed to help with the recruitment process by scanning candidates' CVs, extracting pertinent features, and employing machine learning algorithms to decide the most fitting job profile for each candidate. Our work employs natural language processing (NLP) techniques to identify and extract key features, such as education, work experience, and skills, from the unstructured text extracted from a CV. Subsequently, the system utilizes these features to match candidates with job profiles, leveraging the power of classification algorithms.
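
A hedged sketch of the classification step is given below using TF-IDF features and a linear classifier over raw CV text; the snippets and profile labels are toy examples, and the actual system first extracts structured features (education, experience, skills) with NLP before classification.

```python
# Toy TF-IDF + logistic regression pipeline assigning a job profile to a CV.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cv_texts = [
    "python machine learning pandas data analysis",
    "java spring microservices backend development",
    "recruitment payroll employee relations hr",
    "deep learning tensorflow computer vision research",
]
job_profiles = ["data scientist", "backend engineer", "hr officer", "data scientist"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(cv_texts, job_profiles)

print(model.predict(["statistics python scikit-learn modelling"]))
```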

Keywords: automated recruitment, candidate screening, machine learning, human resources management

Procedia PDF Downloads 54
513 Taylor’s Law and Relationship between Life Expectancy at Birth and Variance in Age at Death in Period Life Table

Authors: David A. Swanson, Lucky M. Tedrow

Abstract:

Taylor's Law is a widely observed empirical pattern that relates variances to means in sets of non-negative measurements via an approximate power function, and it has found application to human mortality. This study adds to this research by showing that Taylor's Law leads to a model that reasonably describes the relationship between life expectancy at birth (e0, which is also equal to the mean age at death in a life table) and the variance in age at death in seven World Bank regional life tables measured at two points in time, 1970 and 2000. Using as a benchmark a non-random sample of four Japanese female life tables covering the period from 1950 to 2004, the study finds that the simple linear model provides reasonably accurate estimates of the variance in age at death in a life table from e0, where the latter ranges from 60.9 to 85.59 years. Employing 2017 life tables from the Human Mortality Database, the simple linear model is used to provide estimates of the variance in age at death for six countries, three of which have high e0 values and three of which have lower e0 values. The paper provides a substantive interpretation of Taylor's Law relative to e0 and concludes by arguing that reasonably accurate estimates of the variance in age at death in a period life table can be calculated using this approach, which also can be used where e0 itself is estimated rather than generated through the construction of a life table, a useful feature of the model.
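
For orientation, Taylor's Law in its power form and its log-linear equivalent can be written as follows (illustrative notation only; the abstract does not reproduce the exact functional form fitted in the paper):

```latex
\[
\sigma^{2} = a\,\mu^{\,b}
\qquad\Longleftrightarrow\qquad
\log \sigma^{2} = \log a + b\,\log \mu ,
\]
% with \mu = e_{0}, the mean age at death in the life table, so that a simple
% linear model yields an estimate of the variance in age at death from e_{0}.
```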

Keywords: empirical pattern, mean age at death in a life table, mean age of a stationary population, stationary population

Procedia PDF Downloads 328
512 A Deep Learning-Based Pedestrian Trajectory Prediction Algorithm

Authors: Haozhe Xiang

Abstract:

With the rise of the Internet of Things era, intelligent products are gradually integrating into people's lives. Pedestrian trajectory prediction has become a key issue, crucial for the motion path planning of intelligent agents such as autonomous vehicles, robots, and drones. In the current technological context, deep learning technology is becoming increasingly sophisticated and is gradually replacing traditional models. Pedestrian trajectory prediction algorithms combining neural networks and attention mechanisms have significantly improved prediction accuracy. Based on in-depth research on deep learning and pedestrian trajectory prediction algorithms, this article focuses on physical environment modeling and on learning the time dependence of historical trajectories. At the same time, social interaction between pedestrians and scene interaction between pedestrians and the environment are handled. An improved pedestrian trajectory prediction algorithm is proposed by analyzing existing model architectures. With the help of these improvements, acceptable predicted trajectories were successfully obtained, and experiments on public datasets have demonstrated the algorithm's effectiveness.

Keywords: deep learning, graph convolutional network, attention mechanism, LSTM

Procedia PDF Downloads 64
511 Travel Planning in Public Transport Networks Applying the Algorithm A* for Metropolitan District of Quito

Authors: M. Fernanda Salgado, Alfonso Tierra, Wilbert Aguilar

Abstract:

The present project consists of applying the informed search algorithm A star (A*) to solve traveler problems on urban public transportation routes. The digitization of the information made it possible to identify 26% of the total routes registered within the Metropolitan District of Quito. To validate this information, field data were collected on travel times and compared with the times estimated by the program; the difference between them was not greater than 2 minutes 20 seconds. We validated the A* algorithm against the Dijkstra algorithm by comparing node vectors based on the public transport stops; the validation was established through a Student's t-test. We then verified that the times estimated by the program using the A* algorithm are similar to those recorded in the field. Furthermore, we reviewed the performance of both algorithms by generating iterations in each. Finally, with these iterations, a hypothesis test was carried out again with a Student's t-test, where it was concluded that the iterations of the baseline Dijkstra algorithm are greater in number than those generated by the A* algorithm.
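
The validation step described above can be sketched as a paired Student's t-test between field travel times and planner-estimated times; the six time pairs below are made-up placeholders, not the study's measurements.

```python
# Paired t-test between observed and estimated travel times (SciPy).
from scipy import stats

observed_min  = [14.5, 22.0, 9.8, 31.2, 18.4, 12.1]   # field travel times (min)
estimated_min = [13.9, 23.1, 10.5, 30.0, 19.2, 11.4]  # A*-estimated times (min)

t_stat, p_value = stats.ttest_rel(observed_min, estimated_min)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # large p -> no significant difference
```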

Keywords: algorithm A*, graph, mobility, public transport, travel planning, routes

Procedia PDF Downloads 237
510 An Evaluation Model for Automatic Map Generalization

Authors: Quynhan Tran, Hong Fan, Quockhanh Pham

Abstract:

Automatic map generalization is a well-known problem in cartography, and the development of map generalization research has accompanied the development of cartography. Traditional maps are plotted manually by cartographic experts. This paper studies the non-scale automated generalization of residential polygons and house marker symbols and proposes a methodology to evaluate the resulting maps based on the minimal spanning tree. In this paper, the minimal spanning tree before and after map generalization is compared to evaluate whether the generalization result maintains the geographical distribution of features. The minimal spanning tree in vector format is first converted into a raster format with a grid size of 2 mm (distance on the map). The number of matching grid cells before and after map generalization and the ratio of overlapping cells to the total number of cells are calculated. Evaluation experiments are conducted to verify the results. The experiments show that this methodology can give an objective evaluation of the feature distribution and give specialists a hand when they visually evaluate the resulting maps of non-scale automated generalization.

Keywords: automatic cartography generalization, evaluation model, geographic feature distribution, minimal spanning tree

Procedia PDF Downloads 633
509 An Approximate Formula for Calculating the Fundamental Mode Period of Vibration of Practical Building

Authors: Abdul Hakim Chikho

Abstract:

Most international codes allow the use of an equivalent lateral load method for designing practical buildings to withstand earthquake actions. This method requires calculating an approximation to the fundamental mode period of vibration of these buildings. Several empirical equations have been suggested for calculating approximations to the fundamental periods of different types of structures. Most of these equations are known to provide only a crude approximation to the required fundamental periods, and repeating the calculation with a more accurate formula is usually required. In this paper, a new formula to calculate a satisfactory approximation of the fundamental period of a practical building is proposed. This formula takes into account the mass and the stiffness of the building; therefore, it is more logical than the conventional empirical equations. In order to verify the accuracy of the proposed formula, several examples have been solved, in which the fundamental mode periods of several framed buildings were calculated using the proposed formula and the conventional empirical equations. Comparing the obtained results with those obtained from a dynamic computer analysis has shown that the proposed formula provides a more accurate estimation of the fundamental periods of practical buildings. Since the proposed method is simple to use and requires only minimal computing effort, it is believed to be ideally suited for design purposes.
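
The abstract does not reproduce the proposed formula, so only as orientation: any mass-and-stiffness based period estimate generalises the single-degree-of-freedom relation, for example via the well-known Rayleigh quotient used in seismic design practice.

```latex
\[
T = 2\pi\sqrt{\frac{m}{k}}
\qquad\text{and, for a multi-storey frame,}\qquad
T \approx 2\pi\sqrt{\frac{\sum_{i} w_{i}\,\delta_{i}^{2}}{g \sum_{i} f_{i}\,\delta_{i}}},
\]
% where w_i are the storey weights, f_i the applied lateral forces, and
% \delta_i the resulting storey deflections.
```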

Keywords: earthquake, fundamental mode period, design, building

Procedia PDF Downloads 281
508 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and the generalized PC type estimators, especially for panels with N almost as large as T.
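
For reference, the ordinary PC estimator that the paper takes as its benchmark can be sketched as below for a T x N panel with r common factors; the CnPC estimator itself (a constrained/regularised variant of this problem) is not reproduced here because its exact construction is not given in the abstract.

```python
# Standard principal-components factor estimator: factors are sqrt(T) times
# the top-r eigenvectors of X X' / (N T); loadings follow as Lambda = X'F / T.
import numpy as np

def pc_factors(X, r):
    T, N = X.shape
    eigval, eigvec = np.linalg.eigh(X @ X.T / (N * T))
    F = np.sqrt(T) * eigvec[:, np.argsort(eigval)[::-1][:r]]   # top-r eigenvectors
    Lam = X.T @ F / T
    return F, Lam

rng = np.random.default_rng(0)
T, N, r = 100, 200, 2
F0 = rng.normal(size=(T, r)); L0 = rng.normal(size=(N, r))
X = F0 @ L0.T + rng.normal(size=(T, N))                        # factor panel + noise
F_hat, _ = pc_factors(X, r)
print(F_hat.shape)                                             # (100, 2)
```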

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 146
507 Understanding Post-Displacement Earnings Losses: The Role of Wealth Inequality

Authors: M. Bartal

Abstract:

A large body of empirical evidence points to sizable lifetime earnings losses associated with the displacement of tenured workers. The causes of these losses are still not well understood. Existing explanations are heavily based on human capital depreciation during non-employment spells. In this paper, a new avenue is explored. Evidence on the role of household liquidity constraints in accounting for the persistence of post-displacement earnings losses is provided based on SIPP data. Then, a directed search and matching model with endogenous human capital and wealth accumulation is introduced. The model is computationally tractable thanks to its block-recursive structure and highlights a non-trivial, yet intuitive, interaction between wealth and human capital. Constrained workers tend to accept jobs with little firm-sponsored training because the latter are (endogenously) easier to find. This new channel provides a plausible explanation for why young (highly constrained) workers suffer persistent scars after displacement. Finally, the model is calibrated on US data to show that the interplay between wealth and human capital is crucial to replicate the observed lifecycle pattern of earnings losses. JEL: E21, E24, J24, J63.

Keywords: directed search, human capital accumulation, job displacement, wealth accumulation

Procedia PDF Downloads 203
506 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method

Authors: Anung Style Bukhori, Ani Dijah Rahajoe

Abstract:

Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address the issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying the data using RapidMiner, a powerful data analysis platform. The analysis process involves splitting the data to train and test the classification model. First, we collected and prepared a poverty dataset that includes various factors such as education, employment, and health. The experimental results indicate that the Naïve Bayes classification model can provide accurate predictions regarding the risk of poverty. The use of RapidMiner in the analysis process offers flexibility and efficiency in evaluating the model's performance. The classification produces several values that serve as the standard for classifying poverty data in Indonesia using Naïve Bayes. The accuracy obtained is 40.26%, with a recall of 35.94% for the moderate class, 63.16% for the high class, and 38.03% for the low class. The precision for the moderate class is 58.97%, for the high class 17.39%, and for the low class 58.70%.
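
The same workflow can be sketched with scikit-learn in place of RapidMiner: split the data, train a Gaussian Naïve Bayes model, and report per-class precision and recall. The feature matrix below is randomly generated for illustration, not the Indonesian poverty dataset.

```python
# Toy Gaussian Naive Bayes classification with a train/test split and a
# per-class precision/recall report.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 3))                        # e.g. education, employment, health
y = rng.choice(["low", "moderate", "high"], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```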

Keywords: poverty, classification, naïve bayes, Indonesia

Procedia PDF Downloads 50
505 Predictive Factors of Prognosis in Acute Stroke Patients Receiving Traditional Chinese Medicine Therapy: A Retrospective Study

Authors: Shaoyi Lu

Abstract:

Background: Traditional Chinese medicine has been used to treat stroke, which is a major cause of morbidity and mortality. There is, however, no clear agreement about the optimal timing, population, efficacy, and predictive prognostic factors of traditional Chinese medicine supplemental therapy. Method: In this study, we used a retrospective analysis with data collected from stroke patients in the Stroke Registry In Chang Gung Healthcare System (SRICHS). Stroke patients who received a traditional Chinese medicine consultation in the neurology ward of Keelung Chang Gung Memorial Hospital from Jan 2010 to Dec 2014 were enrolled. Clinical profiles including the neurologic deficit, activities of daily living, and other basic characteristics were analyzed. Through propensity score matching, we compared the NIHSS and Barthel index before and after hospitalization, applied subgroup analysis, and adjusted by a multivariate regression method. Results: In total, 115 stroke patients were enrolled, with 23 in the experiment group and 92 in the control group. The most important factors for prognosis prediction were the scores on the National Institutes of Health Stroke Scale and the Barthel index right before hospitalization. Traditional Chinese medicine intervention had no statistically significant influence on the neurological deficit of acute stroke patients, and a mildly negative influence on the daily activity performance of acute hemorrhagic stroke patients. Conclusion: The efficacy of traditional Chinese medicine as a supplemental therapy for acute stroke patients was controversial. The reason for this phenomenon might be complex and requires more research to comprehend.
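
A hedged sketch of the propensity score matching step alone is shown below on toy data (not the registry records): each patient's probability of receiving the intervention is estimated from baseline covariates with logistic regression, and each treated patient is then matched to the control with the nearest score.

```python
# Toy propensity score matching: logistic-regression scores plus greedy
# nearest-neighbour 1:1 matching of treated patients to controls.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 115
covariates = np.column_stack([rng.normal(65, 12, n),    # age
                              rng.integers(0, 43, n)])  # baseline NIHSS
treated = rng.choice([0, 1], size=n, p=[0.8, 0.2])      # roughly a 92 vs 23 split

ps = LogisticRegression(max_iter=1000).fit(covariates, treated).predict_proba(covariates)[:, 1]

available = set(np.flatnonzero(treated == 0))           # unmatched controls
matches = {}
for t in np.flatnonzero(treated == 1):
    best = min(available, key=lambda c: abs(ps[c] - ps[t]))   # nearest propensity score
    matches[t] = best
    available.remove(best)
print(len(matches), "treated patients matched to controls")
```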

Keywords: traditional Chinese medicine, complementary and alternative medicine, stroke, acupuncture

Procedia PDF Downloads 358
504 Therapeutic Journey towards Self: Developing Positivity with Indications of Cluster B and C Personality Traits

Authors: Shweta Jha, Nandita Chaube

Abstract:

The concept of self has a major role to play in the study of personality, which drives the current study in its present form. This is the case of Miss S, a 17-year-old Hindu girl, currently in the eleventh standard, with no family history of mental illness but with a past history of inability to manage relationships, multiple emotional and sexual relationships, repeated self-harming behaviour, and sexual abuse over a period of 2 months at the age of 10 years. She presents with a psychiatric history of one episode of dissociative fall followed by a stressful event, which left the patient with many psychological disturbances matching the criteria of Cluster B and C traits. The current episode was precipitated by the relationship failure; the predisposing factors are her personality traits and poor social and family support. Considering the patient's aspiration for positivity and the demands of the therapy, ventilation sessions were carried out, which made her capable of understanding and dealing with her negative emotions, strengthened the mother-child bond, helped her maintain meaningful and healthy relationships, and increased her problem-solving ability and adaptive coping skills, making her feel more positive and accepting towards herself, her family members, and others.

Keywords: cluster B and C traits, personality, therapy, self

Procedia PDF Downloads 283
503 Solution to Riemann Hypothesis Critical Strip Zone Using Non-Linear Complex Variable Functions

Authors: Manojkumar Sabanayagam

Abstract:

The Riemann hypothesis is an unsolved millennium problem, and the search for a solution to the Riemann hypothesis involves studying the pattern of prime number distribution. The scope of this paper is to identify a solution for the critical strip and the critical line axis, which contains the non-trivial zero solutions, using complex plane functions. The Riemann graphical plot is constructed using a linear complex variable function (X+iY) and is applicable only when X>1. However, the investigation shows that complex variable behavior has two zones. The first zone is the transformation zone, where the definition of the complex plane should use a non-linear variable; this is the critical strip zone in the graph (X=0 to 1). The second zone is the transformed zone (X>1), conventionally defined using linear variables. This paper deals with the non-linear function in the transformation zone, derived using a cosine and sinusoidal time lag with respect to the imaginary unit 'i'. The alternate complex variable (Cosθ + i Sinθ) is used to understand the variables in the critical strip zone. It is concluded that the non-trivial zeros present at the real part 0.5 arise because the linear function is not the correct approach in the critical strip. This paper provides a solution to Riemann's hypothesis.

Keywords: Riemann hypothesis, critical strip, complex plane, transformation zone

Procedia PDF Downloads 203