Search results for: estimation algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3766


1996 Urea Amperometric Biosensor Based on Entrapment Immobilization of Urease onto a Nanostructured Polypyrrole and Multi-Walled Carbon Nanotube

Authors: Hamide Amani, Afshin FarahBakhsh, Iman Farahbakhsh

Abstract:

In this paper, an amperometric biosensor based on surface-modified polypyrrole (PPy) has been developed for the quantitative estimation of urea in aqueous solutions. Urease (Urs) was incorporated into a bipolymeric substrate consisting of PPy by entrapment in the polymeric matrix; PPy acts as the amperometric transducer in these biosensors. To increase the membrane conductivity, multi-walled carbon nanotubes (MWCNT) were added to the PPy solution. The MWCNT entrapped in the PPy film and the bipolymer layers were prepared for the construction of Pt/PPy/MWCNT/Urs. Two different configurations of working electrodes were evaluated to investigate the potential use of the modified membranes in biosensors. The evaluation suggested that the second configuration, composed of an electrode-mediator-(pyrrole and multi-walled carbon nanotube) structure and enzyme, is the best candidate for biosensor applications.

Keywords: urea biosensor, polypyrrole, multi-walled carbon nanotube, urease

Procedia PDF Downloads 316
1995 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry

Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin

Abstract:

Flammability analysis of extruded polystyrene (XPS) has become crucial due to its use as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS samples from the local market, one red and one grey, were obtained from thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. The experiments showed that the red XPS released more heat than the grey XPS and that both materials exhibited two mass-loss stages. Correspondingly, the kinetic parameters for the red XPS were higher than those for the grey XPS. A comparative evaluation of activation energies from MCC and TG showed an insignificant degree of deviation, signifying equivalent apparent activation energies from both methods. However, when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared, different activation energy profiles emerged as a result of the different chemical pathways.

Keywords: flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis

Procedia PDF Downloads 171
1994 An Improved Genetic Algorithm for Traveling Salesman Problem with Precedence Constraint

Authors: M. F. F. Ab Rashid, A. N. Mohd Rose, N. M. Z. Nik Mohamed, W. S. Wan Harun, S. A. Che Ghani

Abstract:

The traveling salesman problem with precedence constraints (TSPPC) is one of the most complex problems in combinatorial optimization. Existing algorithms for the TSPPC require large computation times to find the optimal solution. The purpose of this paper is to present an efficient genetic algorithm that reaches the optimal solution in fewer generations and less iteration time. Unlike existing algorithms, which encode a priority factor as the chromosome, the proposed algorithm encodes the solution sequence directly as the chromosome. As a result, the proposed algorithm is capable of finding the optimal solution in a smaller number of generations and less iteration time compared to existing algorithms.
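A direct sequence encoding must respect the precedence constraints. The sketch below is an illustration of that idea, not the authors' implementation: it samples a precedence-feasible city sequence that could serve as such a chromosome, together with the tour-length fitness it would be scored by.

```python
import random

def random_feasible_sequence(n, precedence, rng):
    """Sample a random visit sequence of cities 0..n-1 consistent with
    precedence pairs (a, b), meaning city a must precede city b."""
    preds = {c: set() for c in range(n)}
    for a, b in precedence:
        preds[b].add(a)
    visited, seq, remaining = set(), [], set(range(n))
    while remaining:
        ready = [c for c in remaining if preds[c] <= visited]
        c = rng.choice(ready)          # random topological extension
        seq.append(c)
        visited.add(c)
        remaining.remove(c)
    return seq

def tour_length(seq, dist):
    """Fitness of a sequence chromosome: total open-tour length."""
    return sum(dist[seq[i]][seq[i + 1]] for i in range(len(seq) - 1))
```

Crossover and mutation operators would then have to repair offspring back into this feasible-sequence form.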

Keywords: traveling salesman problem, sequencing, genetic algorithm, precedence constraint

Procedia PDF Downloads 551
1993 Stochastic Modeling of Secretion Dynamics in Inner Hair Cells of the Auditory Pathway

Authors: Jessica A. Soto-Bear, Virginia González-Vélez, Norma Castañeda-Villa, Amparo Gil

Abstract:

Glutamate release at the cochlear inner hair cell (IHC) ribbon synapse is a fundamental step in transferring sound information along the auditory pathway. Otoferlin is the calcium sensor in the IHC, and its activity has been related to many auditory disorders. In order to simulate the secretion dynamics occurring in the IHC on a timescale of a few milliseconds and with high spatial resolution, we propose an active-zone model solved with Monte Carlo algorithms. We included models for buffered calcium diffusion, calcium-binding schemes for vesicle fusion, and L-type voltage-gated calcium channels. Our results indicate that calcium influx and calcium binding govern IHC secretion as a function of voltage depolarization, which in turn means that the IHC response depends on sound intensity.

Keywords: inner hair cells, Monte Carlo algorithm, Otoferlin, secretion

Procedia PDF Downloads 214
1992 Spatio-Temporal Analysis and Mapping of Malaria in Thailand

Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit

Abstract:

This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling (MCMC). A conditional autoregressive (CAR) model is assumed to represent the spatial effects, and the temporal correlation is represented through the covariance matrix of the random effects. The quarterly malaria data were extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The results show that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The top 5 highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model performs better than the GLMM with spatial effects but without temporal terms.

Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation

Procedia PDF Downloads 446
1991 Resource-Constrained Heterogeneous Workflow Scheduling Algorithms in Heterogeneous Computing Clusters

Authors: Lei Wang, Jiahao Zhou

Abstract:

The development of heterogeneous computing clusters provides a strong computing-power guarantee for large-scale workflows (e.g., scientific computing, artificial intelligence (AI), etc.). However, the tasks within large-scale workflows have also gradually become heterogeneous due to their different demands on computing resources, which adds a task resource-restriction constraint to the workflow scheduling problem on heterogeneous computing platforms. In this paper, we propose a heterogeneity-constrained minimum-makespan scheduling algorithm based on a greedy strategy, which provides an efficient solution to the heterogeneous workflow scheduling problem on a heterogeneous platform. We test the effectiveness of the proposed scheduling algorithm by randomly generating heterogeneous workflows on a heterogeneous computing platform, and the experiments show that our method improves on state-of-the-art methods by 15.2%.
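The abstract does not give the algorithm's details, but a greedy minimum-makespan heuristic of the kind described can be sketched as follows: at each step, among the ready tasks, schedule the task/machine pair (restricted to the machines each task is allowed to use) that finishes earliest. All names and the workflow format here are illustrative assumptions, not the authors' code.

```python
def greedy_schedule(tasks, deps, cost, allowed):
    """tasks: task ids; deps: {task: set of predecessors};
    cost[task][machine]: runtime; allowed[task]: permitted machines."""
    finish = {}                # task -> finish time
    machine_free = {}          # machine -> time it becomes free
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done and deps.get(t, set()) <= done]
        best = None
        for t in ready:
            for m in allowed[t]:
                start = max(machine_free.get(m, 0.0),
                            max((finish[p] for p in deps.get(t, set())),
                                default=0.0))
                f = start + cost[t][m]
                # Greedy choice: the ready pair that finishes earliest.
                if best is None or f < best[0]:
                    best = (f, t, m)
        f, t, m = best
        finish[t], machine_free[m] = f, f
        done.add(t)
        order.append((t, m))
    return order, max(finish.values())
```

On a two-task chain where machine 0 is faster for both tasks, the sketch keeps both tasks on machine 0 and reports the resulting makespan.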

Keywords: heterogeneous computing, workflow scheduling, constrained resources, minimal makespan

Procedia PDF Downloads 9
1990 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydropower generation, municipal supplies and, not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, however, suffers severe and frequent flooding due to heavy monsoon rainfall and releases from the upstream reservoirs. Therefore, an effective flood management study is required to understand in depth the nature and extent of the flood, waterlogging, and erosion-related problems, the affected area, and the damages in the Lower Damodar region, by conducting mathematical model studies. A design flood or discharge is needed as input to the model in order to obtain several scenarios from the simulation runs; the ultimate aim is to arrive at a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from areas inundated due to drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This approach, however, is limited by the availability of a long record of peak flood data for determining the type of probability density function correctly. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated with five candidate distributions, i.e., generalized extreme value, extreme value type I, Pearson type III, log-Pearson, and normal.
The annual peak discharge series is available at Durgapur barrage for the period 1979 to 2013 (35 years) and is subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15, and 25 years at Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for these floods to facilitate the mathematical model studies that determine the depth and extent of inundation. The null hypothesis that the distributions fit the data is checked at 95% confidence with a goodness-of-fit test, i.e., the chi-square test. The test reveals that all five distributions show a good fit on the sample population, and the hypothesis is therefore accepted. However, there is considerable variation in the estimated frequency floods, so it is considered prudent to average the results of the five distributions for the required frequencies. The inundated area computed with this design flood matches past data well.
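As an illustration of the frequency analysis step (the study itself averages five distributions), the extreme value type I (Gumbel) case can be fitted by the method of moments and queried for a T-year design flood. The peak series in the usage note is invented, not the Durgapur record.

```python
import math
import statistics

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def gumbel_fit(peaks):
    """Method-of-moments fit of the extreme value type I distribution."""
    mean = statistics.mean(peaks)
    std = statistics.stdev(peaks)
    scale = math.sqrt(6.0) * std / math.pi
    loc = mean - EULER_GAMMA * scale
    return loc, scale

def design_flood(peaks, return_period):
    """Discharge exceeded on average once every `return_period` years."""
    loc, scale = gumbel_fit(peaks)
    p = 1.0 - 1.0 / return_period          # non-exceedance probability
    return loc - scale * math.log(-math.log(p))
```

For example, `design_flood(annual_peaks, 25)` gives the 25-year design discharge under the Gumbel assumption; the other four candidate distributions would be fitted and averaged analogously.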

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 195
1989 Vibration Based Structural Health Monitoring of Connections in Offshore Wind Turbines

Authors: Cristobal García

Abstract:

The visual inspection of bolted joints in wind turbines is dangerous, expensive, and impractical: the platform cannot be accessed by workboat in certain sea-state conditions, and transporting maintenance technicians to offshore platforms located far from the coast is costly, especially if helicopters are involved. Consequently, wind turbine operators need simpler and less demanding techniques for assessing bolt tightening. Vibration-based structural health monitoring is one of the oldest and most widely used means of monitoring the health of onshore and offshore wind turbines. The core of this work is to find out whether the modal parameters can be used efficiently as key performance indicators (KPIs) for the assessment of joint bolts in a 1:50 scale tower of a floating offshore wind turbine (12 MW). A non-destructive vibration test is used to extract the vibration signals of the towers with different damage statuses. The procedure can be summarized in three consecutive steps. First, an artificial excitation is introduced by means of a commercial shaker mounted on the top of the tower. Second, the vibration signals of the towers are recorded for 8 s at a sampling rate of 20 kHz using an array of commercial accelerometers (Endevco, 44A16-1032). Third, the natural frequencies, damping, and overall vibration mode shapes are calculated using the software Siemens LMS 16A. Experiments show that the natural frequencies, damping, and mode shapes of the tower depend directly on the fixing conditions of the towers, and therefore, variations in these parameters are a good indicator for the estimation of the static axial force acting in the bolt.
Thus, the proposed vibration-based structural method can potentially be used as a diagnostic tool to evaluate the tightening torques of bolted joints, with the advantages of being an economical, straightforward, and multidisciplinary approach that operation and maintenance technicians can apply to different typologies of connections. In conclusion, TSI, in collaboration with the consortium of the FIBREGY project, is conducting innovative research in which vibrations are utilized to estimate the tightening torque of a 1:50 scale steel-based tower prototype. The findings of this research, carried out in the context of FIBREGY, have multiple implications for the assessment of bolted joint integrity in many types of connections, such as tower-to-nacelle, modular, tower-to-column, and tube-to-tube. The EU-funded FIBREGY project (H2020, grant number 952966), with an overall budget of 8 million Euros, will evaluate the feasibility of the design and construction of a new generation of marine renewable energy platforms using lightweight FRP materials in certain structural elements (e.g., tower, floating platform). The FIBREGY consortium is composed of 11 partners specialized in the offshore renewable energy sector.

Keywords: SHM, vibrations, connections, floating offshore platform

Procedia PDF Downloads 114
1988 Adaptive Online Object Tracking via Positive and Negative Models Matching

Authors: Shaomei Li, Yawen Wang, Chao Gao

Abstract:

To mitigate the tracking drift that often occurs in adaptive tracking, an algorithm based on the fusion of tracking and detection is proposed in this paper. Firstly, object tracking is posed as a binary classification problem and modeled by partial least squares (PLS) analysis. Secondly, the object is tracked frame by frame via particle filtering. Thirdly, tracking reliability is validated by matching against both positive and negative models. Finally, when drift occurs, the object is relocated based on SIFT feature matching and voting, and the object appearance model is updated at the same time. The algorithm can not only sense tracking drift but also relocate the object whenever needed. Experimental results demonstrate that this algorithm outperforms state-of-the-art algorithms on many challenging sequences.

Keywords: object tracking, tracking drift, partial least squares analysis, positive and negative models matching

Procedia PDF Downloads 521
1987 Hardware for Genetic Algorithm

Authors: Fariborz Ahmadi, Reza Tati

Abstract:

A genetic algorithm is a soft computing method that works on a set of solutions. These solutions are called chromosomes, and the best one is the solution of the problem. The main problem with this algorithm is that, after some generations, it may reproduce chromosomes that were already produced in earlier generations, which reduces the convergence speed. From another perspective, most genetic algorithms are implemented in software, and less work has been done on hardware implementations. Our work implements a genetic algorithm in hardware that does not reproduce chromosomes from previous generations. Most of the genetic operators are implemented without producing repeated chromosomes, so genetic diversity is preserved. This preserved diversity means the algorithm not only avoids converging to a local optimum but can also reach the global optimum. The proposed approach is, unsurprisingly, much faster than software implementations, and evaluation results show that it is also faster than other hardware implementations.
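The key idea, never re-admitting a chromosome seen in an earlier generation, can be sketched in software as a GA whose variation operators retry until they produce an unseen individual. This is a simplified one-max illustration of the principle, not the hardware design.

```python
import random

def ga_no_repeats(fitness, n_bits, pop_size, generations, rng):
    seen = set()

    def fresh(make):
        """Retry an operator until it yields an unseen chromosome."""
        for _ in range(100):                 # bounded number of retries
            c = make()
            if c not in seen:
                seen.add(c)
                return c
        return c                             # give up and accept a repeat

    def random_chromosome():
        return tuple(rng.randint(0, 1) for _ in range(n_bits))

    pop = [fresh(random_chromosome) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]

        def cross_and_mutate():
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(n_bits)        # light mutation keeps diversity
            child[i] ^= 1
            return tuple(child)

        pop = elite + [fresh(cross_and_mutate)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

In hardware, the `seen` set would correspond to a memory of produced chromosomes consulted by the operators, rather than a retry loop.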

Keywords: hardware, genetic algorithm, computer science, engineering

Procedia PDF Downloads 498
1986 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP using frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that resolve the TSP on these sparse graphs will be greatly reduced.

Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem

Procedia PDF Downloads 226
1985 Influence of Major Axis on the Aerodynamic Characteristics of Elliptical Section

Authors: K. B. Rajasekarababu, J. Karthik, G. Vinayagamurthy

Abstract:

This paper explains the influence of the major axis on the aerodynamic characteristics of an elliptical section. Many engineering applications, such as offshore structures, bridge piers, civil structures, and pipelines, can be modelled as circular cylinders; but for flow over complex bodies such as submarines, elliptical wings, fuselages, missiles, and rotor blades, parameters such as the axis ratio can influence the flow characteristics of the wake and the nature of separation. The influence of the major axis on the flow characteristics of elliptical sections is examined both experimentally and computationally in this study. For this research, four elliptical models with varying major axes (AR = 1, 4, 6, 10) are analysed. Experimental work has been conducted in a subsonic wind tunnel. Furthermore, the flow characteristics of the elliptical models are predicted with the k-ε turbulence model using commercial CFD packages, with a pressure-based transient solver and standard wall conditions. The analysis can be extended to the estimation and comparison of drag coefficients and to fatigue analysis of elliptical sections.

Keywords: elliptical section, major axis, aerodynamic characteristics, k-ε turbulence model

Procedia PDF Downloads 425
1984 Using Classifiers to Predict Student Outcome at Higher Institute of Telecommunication

Authors: Fuad M. Alkoot

Abstract:

We aim to highlight the benefits of classifier systems, especially in supporting educational management decisions. The paper uses classifiers in an educational application where an outcome is predicted from input parameters that represent various conditions at the institute. We present a classifier system designed using a limited training set with data for only one semester. The resulting system reproduces previously known outcomes accurately. It is also tested on new input parameters, representing variations of the input conditions, to see what outcome it predicts. Given the expected outcome for each new input, we find that the system predicts the correct outcome. Experiments were conducted on one semester of data from two departments only, Switching and Mathematics. Future work on other departments, with larger training sets and wider input variations, will show additional benefits of classifier systems in supporting management decisions at an educational institute.
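The abstract does not name the classifier design, so as a stand-in, the workflow (train on one semester's records, then predict the outcome for new input conditions) can be sketched with a 1-nearest-neighbour rule; the feature encoding and labels below are invented for illustration.

```python
import math

def predict_outcome(train, query):
    """1-nearest-neighbour prediction.
    train: list of (feature_vector, outcome); query: feature_vector."""
    _features, outcome = min(train,
                             key=lambda row: math.dist(row[0], query))
    return outcome
```

A real deployment would encode the institute's input conditions (department, cohort size, etc.) as the feature vectors and use a properly validated classifier in place of this toy rule.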

Keywords: machine learning, pattern recognition, classifier design, educational management, outcome estimation

Procedia PDF Downloads 273
1983 Sync Consensus Algorithm: Trying to Reach an Agreement at Full Speed

Authors: Yuri Zinchenko

Abstract:

Recently, distributed storage systems have been used more and more in various aspects of everyday life. They provide necessary properties such as scalability, fault tolerance, and durability. At the same time, storage that is not only reliable but also fast remains one of the most pressing issues in this area. That brings us to the consensus algorithm, one of the most important components, with a great impact on the functionality of a distributed system. This paper is the result of an analysis of several well-known consensus algorithms, such as Paxos and Raft. The algorithm it offers, called Sync, promotes, but does not insist on, simultaneous writing to the nodes (which positively affects the overall writing speed) and tries to minimize the system's inactive time. This allows nodes to reach agreement on the system state in a shorter period, which is a critical factor for distributed systems. When developing Sync, a lot of attention was also paid to criteria such as simplicity and intuitiveness, whose importance is difficult to overestimate.

Keywords: sync, consensus algorithm, distributed system, leader-based, synchronization

Procedia PDF Downloads 54
1982 Evaluation of Quasi-Newton Strategy for Algorithmic Acceleration

Authors: T. Martini, J. M. Martínez

Abstract:

An algorithmic acceleration strategy based on quasi-Newton (or secant) methods is presented to address the practical problem of accelerating the convergence of the Newton-Lagrange method in the case of convergence to critical multipliers. Since the Newton-Lagrange iteration converges locally at a linear rate, it is natural to conjecture that quasi-Newton methods, based on the so-called secant equation and some minimal variation principle, could converge superlinearly, thus restoring the convergence properties of Newton's method. The strategy can also be applied to accelerate the convergence of algorithms for fixed-point problems. Computational experience is reported, illustrating the efficiency of this strategy for solving fixed-point problems with a linear convergence rate.
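In one dimension, the flavour of such secant-based acceleration can be illustrated with Aitken/Steffensen extrapolation, which turns a linearly convergent fixed-point iteration x_{k+1} = g(x_k) into a much faster one. This is only an analogue of the strategy, not the authors' multiplier method.

```python
def steffensen(g, x0, tol=1e-12, max_iter=50):
    """Accelerate the fixed-point iteration x <- g(x) by Aitken
    extrapolation, which cancels the leading linear-error term."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-300:        # iteration has stagnated/converged
            return x2
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For an affine map such as g(x) = x/2 + 1 (linear convergence rate 1/2), one extrapolation step lands exactly on the fixed point x = 2.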

Keywords: algorithmic acceleration, fixed-point problems, nonlinear programming, quasi-Newton method

Procedia PDF Downloads 482
1981 A Hybrid Tabu Search Algorithm for the Multi-Objective Job Shop Scheduling Problems

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid tabu search (TS) algorithm is suggested for multi-objective job shop scheduling problems (MO-JSSPs). The algorithm integrates several shifting-bottleneck-based neighborhood structures with the Giffler & Thompson algorithm, which improves the efficiency of the search. Diversification and intensification are provided by applying local and global left-shift algorithms and by creating new semi-active, active, and non-delay schedules. The suggested algorithm is tested on MO-JSSP benchmarks from the literature based on the Pareto optimality concept, and different performance criteria are used to evaluate the multi-objective algorithm. The proposed algorithm is able to find the Pareto solutions of the test problems in a shorter time than other algorithms from the literature.
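A generic tabu search skeleton conveys the control flow such hybrids build on; the pairwise-swap neighborhood and simple aspiration rule below are placeholders for the shifting-bottleneck structures the paper actually uses, and a single cost function stands in for the multi-objective evaluation.

```python
import random

def tabu_search(cost, init, iters, tenure, rng):
    """Minimize `cost` over permutations by sampled pairwise swaps,
    with a tabu list on recent moves and a best-solution aspiration."""
    current, best = list(init), list(init)
    tabu = {}                                # move -> expiry iteration
    for it in range(iters):
        best_move, best_neighbor = None, None
        for _ in range(20):                  # sample candidate swaps
            i, j = sorted(rng.sample(range(len(current)), 2))
            neighbor = list(current)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            is_tabu = tabu.get((i, j), -1) >= it
            # Aspiration: a tabu move is allowed if it beats the best.
            if is_tabu and cost(neighbor) >= cost(best):
                continue
            if best_neighbor is None or cost(neighbor) < cost(best_neighbor):
                best_move, best_neighbor = (i, j), neighbor
        if best_neighbor is None:
            continue                         # all sampled moves were tabu
        current = best_neighbor
        tabu[best_move] = it + tenure
        if cost(current) < cost(best):
            best = list(current)
    return best
```

Accepting the best non-tabu neighbor even when it worsens the cost is what lets the search escape local optima, while the tenure keeps it from cycling.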

Keywords: tabu search, heuristics, job shop scheduling, multi-objective optimization, Pareto optimality

Procedia PDF Downloads 438
1980 Research of the Three-Dimensional Visualization Geological Modeling of Mine Based on Surpac

Authors: Honggang Qu, Yong Xu, Rongmei Liu, Zhenji Gao, Bin Wang

Abstract:

Today's mining industry is advancing gradually toward digital and visual methods. Three-dimensional visualization geological modeling of mines is the digital characterization of mineral deposits and one of the key technologies of digital mining. Three-dimensional geological modeling combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis, and graphic visualization in a computerized three-dimensional environment, and it is used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac, the difference between two estimation methods, the inverse distance power method and ordinary kriging, is studied, and the ore body volume and reserves are simulated and calculated with both methods. Compared with the actual mine reserves, the results are relatively accurate, providing a scientific basis for mine resource assessment, reserve calculation, mining design, and so on.
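Of the two estimators compared above, the inverse distance power method is the simpler and can be sketched directly; ordinary kriging would replace these distance-based weights with covariance-derived ones obtained from a variogram model. The sample layout below is a made-up toy, not the iron mine data.

```python
import math

def idw_estimate(samples, point, power=2.0):
    """Inverse distance power estimate of a grade at `point`.
    samples: list of ((x, y, z), grade); point: (x, y, z)."""
    num = den = 0.0
    for loc, grade in samples:
        d = math.dist(loc, point)
        if d == 0.0:
            return grade               # exact hit on a sample location
        w = 1.0 / d ** power           # weight decays with distance
        num += w * grade
        den += w
    return num / den
```

Applying such an estimator to every block in the model and summing block grade times block volume yields the reserve figures the two methods are compared on.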

Keywords: three-dimensional geological modeling, geological database, geostatistics, block model

Procedia PDF Downloads 72
1979 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people's quality of life. To realize these advantages, the city of Seoul has constructed an integrated transit system including both subways and buses; as a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the central objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. As a statistical approach to estimating subway ridership at the station level, many previous studies relied on ordinary least squares regression, but few considered the endogeneity issues that can arise in a subway ridership prediction model. This study focuses on both the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership, and it could ultimately contribute to more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, and the temporal scope is twenty-four hours in one-hour panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared to a subway-only network topology. Also, a non-recursive approach, three-stage least squares, was applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, the centrality measures showed elasticities of 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner: predicted bus ridership and predicted subway ridership are statistically significant in the OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
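One of the topology measures above, closeness centrality, is straightforward to compute on an unweighted transit graph with a breadth-first search. The adjacency list in the usage note is a toy network, not the Seoul data.

```python
from collections import deque

def closeness(adj, node):
    """Closeness centrality = (n - 1) / sum of shortest-path lengths
    from `node`, computed by breadth-first search on an adjacency list."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

On a three-stop line a-b-c, the middle stop b has closeness 1.0 and the end stops 2/3, which matches the intuition that central stations reach the rest of the network in fewer hops.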

Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership

Procedia PDF Downloads 174
1978 Citation Analysis of New Zealand Court Decisions

Authors: Tobias Milz, L. Macpherson, Varvara Vetrova

Abstract:

The law is a fundamental pillar of human societies, as it shapes, controls, and governs how humans conduct business, behave, and interact with each other. Recent advances in computer-assisted technologies such as NLP, data science, and AI are creating opportunities to support the practice, research, and study of this pervasive domain. It is therefore not surprising that there has been an increase in investment in supporting technologies for the legal industry (also known as "legal tech" or "law tech") over the last decade. A sub-discipline of particular appeal is assisted legal research: supporting law researchers and practitioners in retrieving information from the vast amount of ever-growing legal documentation is of natural interest to the legal research community. One tool that has been in use for this purpose since the early nineteenth century is legal citation indexing. Among other use cases, citation indexes provided an effective means to discover new precedent cases. Nowadays, computer-assisted network analysis tools allow new and more efficient ways to reveal the "hidden" information that is conveyed through citation behavior. Unfortunately, access to openly available legal data is still lacking in New Zealand, and access to such networks is only commercially available via providers such as LexisNexis. Consequently, there is a need to create, analyze, and provide a legal citation network with sufficient data to support legal research tasks. This paper describes the development and analysis of a legal citation network for New Zealand containing over 300,000 decisions from 125 different courts across all areas of law and jurisdiction. Using Python, the authors assembled web crawlers, scrapers, and an OCR pipeline to collect court decisions from openly available sources such as NZLII and convert them into uniform, machine-readable text. This facilitated the use of regular expressions to identify references to other court decisions within the decision text.
The data was then imported into a graph database (Neo4j), with the courts and their respective cases represented as nodes and the extracted citations as links. Furthermore, additional links between the courts of connected cases were added to indicate indirect citations between courts. Neo4j, as a graph database, allows efficient querying and the use of network algorithms such as PageRank to reveal the most influential or most cited courts and court decisions over time. This paper shows that the in-degree distribution of the New Zealand legal citation network resembles a power-law distribution, which indicates possible scale-free behavior of the network. This is in line with findings for the citation networks of the U.S. Supreme Court, Austria, and Germany. The authors provide the database as an openly available data source to support further legal research. The decision texts can be exported from the database for NLP-related legal research, while the network can be used for in-depth analysis. For example, users of the database can restrict the network algorithms and metrics to specific courts, filtering the results to the area of law of interest.
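The PageRank step mentioned above (run in the paper via Neo4j) can be reproduced on a toy citation graph with plain power iteration; the graph in the usage note is invented, not the New Zealand data.

```python
def pagerank(links, damping=0.85, iters=100):
    """PageRank by power iteration. links: {node: list of cited nodes}."""
    nodes = set(links) | {v for out in links.values() for v in out}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links.get(u, [])
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:                      # dangling node: spread rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank
```

In a graph where decisions `a` and `b` both cite `c`, decision `c` ends up with the highest rank, mirroring how the full network surfaces influential precedents.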

Keywords: case citation network, citation analysis, network analysis, Neo4j

Procedia PDF Downloads 99
1977 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju, Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal; it reflects how well the information in the signal can be understood. Background noise in the environment can degrade the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The algorithm selects a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. Its performance is evaluated with the short-time objective intelligibility (STOI) measure, and the results are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results revealed that the proposed scheme outperformed the competing methods.
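A one-step Haar transform and a frame-wise level rule give a minimal sketch of the idea; the variance threshold below is an illustrative stand-in for the paper's signal-dominant/noise-dominant criterion, not its exact rule:

```python
import statistics

def haar_step(x):
    """One Haar analysis step: half-band approximation and detail coefficients."""
    a = [(x[i] + x[i + 1]) / 2 ** 0.5 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 ** 0.5 for i in range(0, len(x) - 1, 2)]
    return a, d

def frame_level(frame, noise_var, max_level=3):
    """Illustrative rule: decompose low-variance (noise-dominant) frames
    more deeply than high-variance (signal-dominant) ones."""
    return 1 if statistics.pvariance(frame) > noise_var else max_level

# A constant frame has zero detail energy, as expected.
a, d = haar_step([1.0, 1.0, 1.0, 1.0])
```

Deeper decomposition of a frame simply repeats `haar_step` on the approximation coefficients down to the level chosen for that frame.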

Keywords: discrete wavelet transform, speech intelligibility, STOI, standard deviation

Procedia PDF Downloads 142
1976 Sentiment Analysis of Consumers’ Perceptions on Social Media about the Main Mobile Providers in Jamaica

Authors: Sherrene Bogle, Verlia Bogle, Tyrone Anderson

Abstract:

In recent years, organizations have become increasingly interested in analyzing social media as a means of gaining meaningful feedback about their products and services. An aspect-based sentiment analysis approach is used to predict the sentiment of Twitter datasets for Digicel and Lime, the main mobile companies in Jamaica, using supervised learning classification techniques. The results indicate an average accuracy of 82.2 percent in classifying tweets across three separate classification algorithms, against the purported baseline of 70 percent, and an average root mean squared error of 0.31. These results indicate that analyzing sentiment on social media to gain customer feedback can be a viable solution for mobile companies looking to improve business performance.
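The two reported figures, classification accuracy and root mean squared error, are straightforward to compute; a minimal sketch with toy labels (not the actual Twitter data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error over numeric labels."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

# Hypothetical sentiment labels: 1 = positive, 0 = negative.
truth = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 0]
acc = accuracy(truth, pred)  # 4 of 5 correct
err = rmse(truth, pred)
```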

Keywords: machine learning, sentiment analysis, social media, supervised learning

Procedia PDF Downloads 433
1975 Early Detection of Type 2 Diabetes Using the K-Nearest Neighbor Algorithm

Authors: Ng Liang Shen, Ngahzaifa Abdul Ghani

Abstract:

This research aimed to develop an early warning system for pre-diabetics and diabetics by analyzing simple and easily determinable signs and symptoms of diabetes among people living in Malaysia, using Particle Swarm Optimized Artificial Neural Networks. With the skyrocketing prevalence of Type 2 diabetes in Malaysia, the system can be used to encourage affected people to seek further medical attention, to prevent the onset of diabetes or to start managing it early enough to avoid the associated complications. The study sought to find the best predictive variables of Type 2 Diabetes Mellitus, developed a system to diagnose diabetes from those variables using Artificial Neural Networks, and tested the accuracy of the diagnoses generated by the machine learning algorithms at both early and advanced stages of the disease.
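Although the body of the abstract describes particle-swarm-optimized neural networks, the k-nearest-neighbor vote named in the title can be sketched in a few lines; the glucose/BMI features and labels below are hypothetical, not the study's data:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); majority vote among the
    k nearest neighbours by Euclidean distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical records: (glucose mg/dL, BMI) -> label
train = [((180, 35), "diabetic"), ((170, 33), "diabetic"),
         ((95, 22), "healthy"), ((100, 24), "healthy"), ((105, 23), "healthy")]
label = knn_predict(train, (175, 34), k=3)
```

In practice the features would be standardized first so that no single sign or symptom dominates the distance.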

Keywords: diabetes diagnosis, artificial neural networks, artificial intelligence, soft computing, medical diagnosis

Procedia PDF Downloads 327
1974 Investment Adjustments to Exchange Rate Fluctuations: Evidence from Manufacturing Firms in Tunisia

Authors: Mourad Zmami, Oussema BenSalha

Abstract:

The current research aims to assess empirically the reaction of private investment to exchange rate fluctuations in Tunisia, using a sample of 548 firms operating in manufacturing industries between 1997 and 2002. The micro-econometric model we estimate is based on an accelerator-profit investment specification augmented by two variables that measure the variation and the volatility of exchange rates. Estimates using the system GMM method reveal that the effects of exchange rate depreciation on investment are negative, since depreciation increases the cost of imported capital goods. Turning to exchange rate volatility, as measured by a GARCH(1,1) model, our findings assign a significant role to exchange rate uncertainty in explaining the sluggishness of private investment in Tunisia in the full sample of firms. Further estimations based on various subsamples indicate that the elasticity of investment with respect to exchange rate volatility depends on firm-specific characteristics such as size and ownership structure.
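The GARCH(1,1) volatility measure follows a simple conditional variance recursion; a minimal sketch with hypothetical parameter values, using raw returns in place of fitted residuals:

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at the unconditional variance
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

# Toy exchange-rate returns and illustrative (not estimated) parameters.
sig2 = garch11_variance([0.0, 0.1, -0.2, 0.0], omega=0.01, alpha=0.1, beta=0.8)
```

In the paper's setting, the fitted conditional variance series would then enter the investment equation as the uncertainty regressor.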

Keywords: investment, exchange rate volatility, manufacturing firms, system GMM, Tunisia

Procedia PDF Downloads 402
1973 PredictionSCMS: The Implementation of an AI-Powered Supply Chain Management System

Authors: Ioannis Andrianakis, Vasileios Gkatas, Nikos Eleftheriadis, Alexios Ellinidis, Ermioni Avramidou

Abstract:

The paper discusses the main aspects involved in the development of a supply chain management system, using the newly developed PredictionSCMS software as the basis for the discussion. The discussion focuses on three topics. The first is demand forecasting, where we present the predictive algorithms implemented and discuss related concepts such as the calculation of the safety stock, the effect of out-of-stock days, etc. The second topic concerns the design of a supply chain, where the core parameters involved in the process are given, together with a methodology for incorporating these parameters into a meaningful order creation strategy. Finally, the paper discusses some critical events that can happen during the operation of a supply chain management system and how the developed software notifies the end user of their occurrence.
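Safety stock is commonly computed with the textbook z·σ·√L rule; the sketch below assumes that formulation and uses illustrative figures, not PredictionSCMS internals:

```python
def safety_stock(z, demand_std, lead_time_days):
    """Textbook rule: SS = z * sigma_daily_demand * sqrt(lead_time)."""
    return z * demand_std * lead_time_days ** 0.5

def reorder_point(avg_daily_demand, lead_time_days, ss):
    """Reorder when stock falls to expected lead-time demand plus safety stock."""
    return avg_daily_demand * lead_time_days + ss

# Illustrative figures: z = 1.65 targets roughly a 95% service level.
ss = safety_stock(z=1.65, demand_std=20.0, lead_time_days=4)
rop = reorder_point(avg_daily_demand=100.0, lead_time_days=4, ss=ss)
```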

Keywords: demand forecasting, machine learning, risk management, supply chain design

Procedia PDF Downloads 81
1972 Financial Inclusion and Modernization: Secure Energy Performance in Shanghai Cooperation Organization

Authors: Shama Urooj

Abstract:

The present work investigates the relationship among financial inclusion, modernization, and energy performance in SCO member countries during the years 2011–2021. PCA is used to create composite indexes of financial inclusion, modernization, and energy performance. We use panel regression models that are both reliable and heteroscedasticity-consistent to examine the relationship among the variables. The findings indicate that financial inclusion (FI) and modernization, along with increased FDI, all appear to contribute to energy performance in the SCO member countries. However, per capita GDP has a negative impact on energy performance. These results are unbiased and consistent with the robust results obtained by applying different econometric models. Feasible Generalized Least Squares (FGLS) estimation is also used to check the uniformity of the main model results. This research concludes that there has been no policy coherence in SCO member countries regarding the coordination of growing financial inclusion and modernization for energy sustainability in recent years. In order to improve energy performance alongside modern development, policies regarding financial inclusion and modernization need to be integrated at both the national and international levels.
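A PCA composite index amounts to projecting standardized indicators onto the first principal component; a two-indicator sketch with made-up series (the paper's indexes combine more indicators):

```python
import math
import statistics

def first_pc_index(x, y):
    """Composite index: project two standardized indicators onto the leading
    eigenvector of their 2x2 covariance matrix."""
    zx = [(v - statistics.mean(x)) / statistics.pstdev(x) for v in x]
    zy = [(v - statistics.mean(y)) / statistics.pstdev(y) for v in y]
    a, b = statistics.pvariance(zx), statistics.pvariance(zy)
    c = sum(p * q for p, q in zip(zx, zy)) / len(zx)
    theta = 0.5 * math.atan2(2 * c, a - b)  # leading-eigenvector angle for a 2x2
    wx, wy = math.cos(theta), math.sin(theta)
    return [wx * p + wy * q for p, q in zip(zx, zy)]

# Two made-up, perfectly correlated indicator series.
idx = first_pc_index([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

With more than two indicators the projection is the same idea, with the eigenvector obtained numerically rather than from the closed-form 2x2 angle.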

Keywords: financial inclusion, energy performance, modernization, technological development, SCO

Procedia PDF Downloads 68
1971 Analysis of Wall Deformation of the Arterial Plaque Models: Effects of Viscoelasticity

Authors: Eun Kyung Kim, Kyehan Rhee

Abstract:

The viscoelastic wall properties of arterial plaques change as the disease progresses, and estimation of wall viscoelasticity can provide a valuable assessment tool for plaque rupture prediction. The cross section of a stenotic coronary artery was modeled based on an IVUS image, and finite element analysis was performed to obtain the wall deformation under pulsatile pressure. The effects of the viscoelastic parameters of the plaque on luminal diameter variations were explored. The results showed that a decrease in the viscous effect reduced the phase angle between the pressure and displacement waveforms, and that the phase angle was dependent on the viscoelastic properties of the wall. Because the viscous effect of tissue components can be identified using the phase angle difference, wall deformation waveform analysis may be applied to predict changes in plaque wall composition and the progression of vascular wall disease.
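The phase angle between pressure and displacement waveforms can be estimated from their fundamental DFT coefficients; a minimal sketch with synthetic waveforms and a hypothetical 0.3 rad lag:

```python
import cmath
import math

def phase_lag(pressure, displacement):
    """Phase difference (rad) between the fundamental harmonics of two
    waveforms sampled over one full period."""
    def fundamental(x):
        n = len(x)
        return sum(v * cmath.exp(-2j * math.pi * k / n) for k, v in enumerate(x))
    return cmath.phase(fundamental(pressure) / fundamental(displacement))

n = 64
p = [math.sin(2 * math.pi * k / n) for k in range(n)]        # pressure
d = [math.sin(2 * math.pi * k / n - 0.3) for k in range(n)]  # lags by 0.3 rad
lag = phase_lag(p, d)
```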

Keywords: atherosclerotic plaque, diameter variation, finite element method, viscoelasticity

Procedia PDF Downloads 212
1970 Verifying the Performance of the Argon-41 Monitoring System from Fluorine-18 Production for Medical Applications

Authors: Nicole Virgili, Romolo Remetti

Abstract:

The aim of this work is to characterize, from a radiation protection point of view, the emission into the environment of air contaminated by argon-41. In this research, 41Ar is produced by a TR19PET cyclotron, operated at 19 MeV and installed at the 'A. Gemelli' University Hospital, Rome, Italy, for fluorine-18 production. The production rate of 41Ar has been calculated on the basis of the scheduled operation cycles of the cyclotron and by utilizing proper production algorithms. Extensive Monte Carlo calculations, carried out with the MCNP code, then made it possible to determine the absolute detection efficiency for 41Ar gamma rays of a Geiger-Muller detector placed in the terminal part of the chimney. The results showed unsatisfactory detection efficiency values and the need to integrate the detection system with more efficient detectors.
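The buildup of 41Ar activity under constant production follows the standard relation A(t) = P(1 − e^(−λt)); a sketch with a hypothetical production rate (the half-life used is approximate):

```python
import math

def activity(production_rate, half_life_s, t_s):
    """A(t) = P * (1 - exp(-lambda * t)) for a constant production rate P (atoms/s),
    giving the activity in Bq after irradiation time t."""
    lam = math.log(2) / half_life_s
    return production_rate * (1.0 - math.exp(-lam * t_s))

HALF_LIFE_AR41 = 110 * 60  # seconds; 41Ar half-life is roughly 110 minutes
a_2h = activity(1e6, HALF_LIFE_AR41, 2 * 3600)  # hypothetical P = 1e6 atoms/s
```

Multiplying the released activity by the detector's absolute efficiency (from the MCNP calculation) then predicts the expected count rate at the chimney.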

Keywords: cyclotron, Geiger-Muller detector, MCNPX, argon-41, emission of radioactive gas, detection efficiency determination

Procedia PDF Downloads 144
1969 Resource Allocation Scheme for IEEE 802.16 Networks

Authors: Elmabruk Laias

Abstract:

The IEEE 802.16 standard provides QoS (Quality of Service) for applications such as Voice over IP, video streaming, and high-bandwidth file transfer. In an IEEE 802.16 broadband wireless access system, a WiMAX TDD frame contains one downlink subframe and one uplink subframe. The capacity allocated to each subframe is a system parameter that should be determined based on the expected traffic conditions; hence, a proper resource allocation scheme for packet transmissions is imperative. In this paper, we present a new resource allocation scheme, called additional bandwidth yielding (ABY), to improve the transmission efficiency of an IEEE 802.16-based network. The proposed scheme can be adopted alongside existing scheduling algorithms and the multi-priority scheme without any changes to them. The experimental results show that ABY significantly improves packet queuing delay, especially for service flows of higher-priority classes.
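A strict-priority grant loop conveys how subframe capacity can be divided among service flows; this is an illustrative sketch, not the proposed ABY scheme itself:

```python
def allocate(capacity, requests):
    """Grant bandwidth strictly by priority class (0 = highest).
    requests: list of (flow_name, demand, priority)."""
    grants = {}
    remaining = capacity
    for flow, demand, priority in sorted(requests, key=lambda r: r[2]):
        grants[flow] = min(demand, remaining)  # partial grant once capacity runs out
        remaining -= grants[flow]
    return grants

# Hypothetical uplink subframe of 100 slots shared by three service flows.
grants = allocate(100, [("voip", 30, 0), ("video", 50, 1), ("ftp", 60, 2)])
```

A yielding scheme in the spirit of ABY would additionally let lower-priority classes reclaim capacity that higher-priority flows leave unused.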

Keywords: IEEE 802.16, WiMAX, OFDMA, resource allocation, uplink-downlink mapping

Procedia PDF Downloads 469
1968 An Automatic Feature Extraction Technique for 2D Punch Shapes

Authors: Awais Ahmad Khan, Emad Abouel Nasr, H. M. A. Hussein, Abdulrahman Al-Ahmari

Abstract:

Sheet-metal parts have been widely applied in the electronics, communication, and mechanical industries in recent decades; however, advances in sheet-metal part design and manufacturing still lag behind the increasing importance of sheet-metal parts in modern industry. This paper presents a methodology for the automatic extraction of some common 2D internal sheet-metal features. The features used in this study are taken from the Unipunch™ catalogue. The extraction process starts with data extraction from a STEP file using an object-oriented approach; with the application of suitable algorithms and rules, all features contained in the catalogue are then automatically extracted. Since the extracted features include geometry and engineering information, they are effective for downstream applications such as feature rebuilding and process planning.
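The first stage, pulling entity instances out of the STEP file, can be sketched with a regular expression; the excerpt below is a toy fragment, not an actual part file:

```python
import re

# Toy excerpt of a STEP file; real files come from the CAD export.
step_text = """
#10=CARTESIAN_POINT('',(0.,0.,0.));
#11=CARTESIAN_POINT('',(25.4,0.,0.));
#12=CIRCLE('',#20,12.7);
"""

# Pull entity id, type, and raw parameter list from each instance line.
entity = re.compile(r"#(\d+)\s*=\s*([A-Z_0-9]+)\((.*)\);")
entities = {int(m.group(1)): (m.group(2), m.group(3))
            for m in entity.finditer(step_text)}
```

The feature-recognition rules then operate on these typed entities, e.g. matching a `CIRCLE` of a catalogue radius to a round punch shape.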

Keywords: feature extraction, internal features, punch shapes, sheet metal

Procedia PDF Downloads 607
1967 Real-Time Nonintrusive Heart Rate Measurement: Comparative Case Study of LED Sensorics' Accuracy and Benefits in Heart Monitoring

Authors: Goran Begović

Abstract:

In recent years, many researchers have focused on non-intrusive measuring methods for human biosignals. These methods provide solutions for everyday use, whether for health monitoring or for fine-tuning a workout routine. One of the biggest issues with such solutions is that sensor accuracy is highly variable due to many factors, such as ambient light and skin color diversity. That is why we wanted to explore the outcomes under those kinds of circumstances in order to find the most suitable algorithm(s) for extracting heart rate (HR) information. The optimization of such algorithms can enable wider, cheaper, and safer home health monitoring, reducing the need to visit medical professionals to have heart irregularities observed. In this study, we explored the accuracy of infrared (IR), red, and green LED sensors in a controlled environment and compared the results with a medically accurate ECG monitoring device.
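A crude mean-crossing counter illustrates how HR can be read off an LED (PPG) waveform; real pipelines use proper filtering and peak detection, and the 50 Hz sampling rate here is hypothetical:

```python
import math

def heart_rate_bpm(signal, fs):
    """Count upward crossings of the signal mean as beats; a crude
    stand-in for real PPG peak detection."""
    mean = sum(signal) / len(signal)
    beats = sum(1 for a, b in zip(signal, signal[1:]) if a < mean <= b)
    minutes = len(signal) / fs / 60.0
    return beats / minutes

fs = 50  # hypothetical LED sensor sampling rate (Hz)
# Synthetic 1.2 Hz pulse waveform over 10 seconds (1.2 Hz -> 72 bpm).
ppg = [-math.cos(2 * math.pi * 1.2 * k / fs) for k in range(10 * fs)]
bpm = heart_rate_bpm(ppg, fs)
```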

Keywords: data science, ECG, heart rate, holter monitor, LED sensors

Procedia PDF Downloads 121