Search results for: algorithms and data structure
31792 A Graph Library Development Based on the Service-Oriented Architecture: Used for Representation of the Biological Systems in the Computer Algorithms
Authors: Mehrshad Khosraviani, Sepehr Najjarpour
Abstract:
Considering the use of graph-based approaches in systems and synthetic biology, and the various types of graphs they employ, a comprehensive graph library based on the three-tier architecture (3TA) was previously introduced for full representation of biological systems. Despite having proposed that 3TA-based graph library, the following three reasons motivated us to redesign it on the service-oriented architecture (SOA): (1) Maintaining the accuracy of the data related to an input graph (its edges, vertices, topology, etc.) without involving the end user: in the case of 3TA, the library files are available to end users, so they may be used incorrectly, and invalid graph data may consequently be passed to the computer algorithms. With SOA, the graph registration operation is specified as a service that encapsulates the library files; in other words, all control operations needed for the registration of valid data become the responsibility of the services. (2) Partitioning the library product into different parts: with 3TA, the library was provided as a whole, whereas here the product can be divided into smaller ones, such as an AND/OR graph drawing service, each provided individually. As a result, the end user can add any part of the library product to a project instead of all features. (3) Reduction of complexity: whereas with 3TA several other libraries had to be added to connect to the database, in the SOA-based graph library the provision of the needed library resources is entrusted to the services themselves, so the end user of the graph library is not exposed to its complexity. In the end, to make the library easier to control in the system and to restrict end users from accessing the files, the service-oriented architecture (SOA) was preferred over the three-tier architecture (3TA), and the previously proposed graph library was redeveloped on it.
Keywords: Bio-Design Automation, Biological System, Graph Library, Service-Oriented Architecture, Systems and Synthetic Biology
Procedia PDF Downloads 315
31791 HPPDFIM-HD: Transaction Distortion and Connected Perturbation Approach for Hierarchical Privacy Preserving Distributed Frequent Itemset Mining over Horizontally-Partitioned Dataset
Authors: Fuad Ali Mohammed Al-Yarimi
Abstract:
Many algorithms have been proposed to provide privacy preservation in data mining. These protocols are based on two main approaches: the perturbation approach and the cryptographic approach. The first is based on perturbation of the valuable information, while the second uses cryptographic techniques. The perturbation approach is much more efficient but with reduced accuracy, while the cryptographic approach can provide solutions with perfect accuracy; however, it is a much slower method and requires considerable computation and communication overhead. In this paper, a new scalable protocol is proposed that combines the advantages of perturbation and distortion with the cryptographic approach to perform privacy-preserving distributed frequent itemset mining on horizontally partitioned data. Both the privacy and performance characteristics of the proposed protocol are studied empirically.
Keywords: anonymity data, data mining, distributed frequent itemset mining, gaussian perturbation, perturbation approach, privacy preserving data mining
Procedia PDF Downloads 508
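The abstract leaves the perturbation details to the paper itself; as a minimal sketch of the perturbation side of such a protocol, the snippet below adds Gaussian noise to each site's local itemset support counts before they are shared, assuming a simple additive-noise scheme (function and item names are illustrative, not the paper's protocol):

```python
import numpy as np

def perturb_supports(local_supports, sigma=2.0, seed=0):
    """Add Gaussian noise to a site's local itemset support counts
    before sharing them, hiding exact values while keeping the
    aggregate roughly accurate (a sketch, not the paper's protocol)."""
    rng = np.random.default_rng(seed)
    return {itemset: max(0.0, count + rng.normal(0.0, sigma))
            for itemset, count in local_supports.items()}

# Example: two horizontally partitioned sites report perturbed counts;
# the miner sums them and applies a global minimum-support threshold.
site_a = perturb_supports({("bread", "milk"): 40, ("beer",): 12}, seed=1)
site_b = perturb_supports({("bread", "milk"): 25, ("beer",): 30}, seed=2)
total = {k: site_a.get(k, 0) + site_b.get(k, 0) for k in set(site_a) | set(site_b)}
frequent = {k: v for k, v in total.items() if v >= 50}  # min support = 50
print(frequent)
```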
31790 Elephant Herding Optimization for Service Selection in QoS-Aware Web Service Composition
Authors: Samia Sadouki Chibani, Abdelkamel Tari
Abstract:
Web service composition combines available services to provide new functionality. Given the number of available services with similar functionalities and different non-functional aspects (QoS), the problem of finding a QoS-optimal web service composition is an optimization problem in the NP-hard class, so an optimal solution cannot be found by exact algorithms within a reasonable time. In this paper, a bio-inspired meta-heuristic is presented to address QoS-aware web service composition; it is based on the Elephant Herding Optimization (EHO) algorithm, which is inspired by the herding behavior of elephant groups. EHO is characterized by a process of dividing and combining the population into sub-populations (clans); this process allows the exchange of information between local searches to move toward a global optimum and avoids the early stagnation in a local optimum that affects other evolutionary algorithms. Compared with PSO, the results of the experimental evaluation show that our proposition significantly outperforms the existing algorithm, with better fitness values and faster convergence.
Keywords: bio-inspired algorithms, elephant herding optimization, QoS optimization, web service composition
Procedia PDF Downloads 331
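EHO's divide-and-combine structure can be summarized in a few lines. A minimal sketch, assuming the standard EHO update rules (clan members move toward their clan's best member, the matriarch; the matriarch moves toward the clan centre; the worst member of each clan is replaced by a random individual):

```python
import numpy as np

def eho_step(population, fitness, n_clans, alpha=0.5, beta=0.1, rng=None):
    """One EHO generation: split the population into clans, move each
    member toward its clan's best (matriarch), move the matriarch toward
    the clan centre, and replace each clan's worst member at random."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness)                        # minimization
    clans = np.array_split(population[order], n_clans)
    new_clans = []
    for clan in clans:
        matriarch, centre = clan[0], clan.mean(axis=0)
        moved = clan + alpha * rng.random(clan.shape) * (matriarch - clan)
        moved[0] = beta * centre                       # matriarch update
        moved[-1] = rng.uniform(-1, 1, clan.shape[1])  # separation: worst leaves
        new_clans.append(moved)
    return np.vstack(new_clans)

pop = np.random.uniform(-1, 1, (20, 5))
fit = (pop ** 2).sum(axis=1)                           # sphere objective as stand-in
pop = eho_step(pop, fit, n_clans=4)
```

In the service-selection setting, each position vector would encode a candidate service per abstract task, and the fitness would aggregate the composition's QoS.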
31789 Fuzzy Wavelet Model to Forecast the Exchange Rate of IDR/USD
Authors: Tri Wijayanti Septiarini, Agus Maman Abadi, Muhammad Rifki Taufik
Abstract:
The IDR/USD exchange rate can serve as an indicator for analyzing the Indonesian economy. The exchange rate is an important factor because it has a large effect on the Indonesian economy overall, so analysis of exchange rate data is needed. The IDR/USD exchange rate is decomposed into frequency and time components, which can help the government monitor the Indonesian economy. This method is very effective for identifying such cases, gives highly accurate results, and has a simple structure. In this paper, the exchange rate data used are weekly data from December 17, 2010 until November 11, 2014.
Keywords: the exchange rate, fuzzy mamdani, discrete wavelet transforms, fuzzy wavelet
Procedia PDF Downloads 577
31788 Spectrum Assignment Algorithms in Optical Networks with Protection
Authors: Qusay Alghazali, Tibor Cinkler, Abdulhalim Fayad
Abstract:
In modern optical networks, flex-grid spectrum usage is the most widespread, where higher bit rate streams get larger spectrum slices while lower bit rate traffic streams get smaller spectrum slices. In practice, under ITU-T recommendation G.694.1, spectrum slices of 50, 75, and 100 GHz are used with a central frequency at 193.1 THz. However, when these spectrum slices are not sufficient, multiple spectrum slices can be used, either adjacent to one another or anywhere in the optical spectrum. In this paper, we propose an analysis of the spectrum assignment problem and compare different algorithms for spectrum assignment with and without protection. As a reference for comparison, we conclude that Integer Linear Programming (ILP) provides the global optimum in all cases. The most scalable algorithm is the greedy one, which yields results within acceptable ranges even for larger network instances. The algorithms' benchmark was implemented using the LEMON C++ optimization library, and the simulation runs are evaluated based on the minimum number of spectrum slices assigned to lightpaths and on execution time.
Keywords: spectrum assignment, integer linear programming, greedy algorithm, international telecommunication union, library for efficient modeling and optimization in networks
Procedia PDF Downloads 175
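As an illustration of the greedy family of algorithms the authors benchmark, a first-fit spectrum slice assignment can be sketched as follows (the slice sizes and the link model are simplified assumptions, not the paper's exact setup):

```python
def first_fit_assign(demands, links_per_path, n_slices, n_links):
    """Greedily assign each lightpath the lowest-indexed run of
    contiguous free spectrum slices on every link of its path."""
    used = [[False] * n_slices for _ in range(n_links)]
    assignment = {}
    for demand_id, (path, width) in demands.items():
        links = links_per_path[path]
        for start in range(n_slices - width + 1):
            block = range(start, start + width)
            if all(not used[l][s] for l in links for s in block):
                for l in links:
                    for s in block:
                        used[l][s] = True
                assignment[demand_id] = (start, width)
                break
        else:
            assignment[demand_id] = None  # demand blocked
    return assignment

paths = {"A-B": [0], "B-C": [1], "A-C": [0, 1]}
demands = {1: ("A-C", 2), 2: ("A-B", 1), 3: ("B-C", 3)}
print(first_fit_assign(demands, paths, n_slices=8, n_links=2))
```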
31787 Detection and Classification of Mammogram Images Using Principle Component Analysis and Lazy Classifiers
Authors: Rajkumar Kolangarakandy
Abstract:
Feature extraction and selection is the primary part of any mammogram classification algorithm. The choice of features, attributes, or measurements has an important influence on any classification system. Discrete Wavelet Transform (DWT) coefficients are among the prominent features for representing images in the frequency domain. The features obtained after decomposition of mammogram images using wavelet transforms have high dimension, and even though they are high-dimensional, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transform coefficients and lazy classifiers is proposed. The classification is accomplished at two levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are further classified as benign or malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL, and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimension reduction.
Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL
Procedia PDF Downloads 336
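A minimal sketch of the feature pipeline described above, assuming PyWavelets for the DWT and scikit-learn for PCA; KStar, IBL, and LWL are Weka classifiers, so a k-nearest-neighbours model stands in here as the closest instance-based (lazy) analogue, and the wavelet, level, and component count are illustrative choices:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier  # lazy, IBL-style stand-in

def dwt_features(image, wavelet="db4", level=2):
    """Flatten all sub-band coefficients of a 2-level 2D DWT into one vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr.ravel()

rng = np.random.default_rng(0)
rois = rng.random((60, 64, 64))                 # stand-in for mammogram ROIs
labels = rng.integers(0, 2, 60)                 # 0 = normal, 1 = abnormal
X = np.array([dwt_features(r) for r in rois])
X_red = PCA(n_components=20).fit_transform(X)   # decorrelate and reduce dimension
clf = KNeighborsClassifier(n_neighbors=3).fit(X_red, labels)
print(clf.score(X_red, labels))
```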
31786 Dynamics Analyses of Swing Structure Subject to Rotational Forces
Authors: Buntheng Chhorn, WooYoung Jung
Abstract:
Large-scale swings have been used in entertainment and performance, especially in the circus, for a very long time. To increase the safety of this type of structure, a thorough analysis of displacement and bearing stress was performed for an extreme condition in which a full-cycle swing occurs. Different masses, ranging from 40 kg to 220 kg, and different velocities were applied to the swing. Then, based on the solution of the differential dynamics equation, the swing velocity response to harmonic force was obtained. Moreover, the resistance capacity was estimated based on the ACI steel structure design guide. Subsequently, numerical analysis was performed in ABAQUS to obtain the stress on each frame of the swing. Finally, the analysis shows that expansion of the swing structure frame section was required for masses greater than 150 kg.
Keywords: swing structure, displacement, bearing stress, dynamic loads response, finite element analysis
Procedia PDF Downloads 378
31785 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms
Authors: Alper Akin, Ibrahim Aydogdu
Abstract:
This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift, and the other structural constraints are derived from the LRFD-AISC specification. In this study, a hybrid algorithm using the teaching-learning based optimization (TLBO) and harmony search (HS) algorithms is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to the metaheuristic techniques of numerical optimization and have been efficient tools for solving discrete programming problems. Using these two algorithms in collaboration creates a more powerful tool in which each mitigates the other's weaknesses. To demonstrate the powerful performance of the presented hybrid algorithm, the optimum design of a large-scale steel building is presented, and the results are compared to previously obtained results available in the literature.
Keywords: optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame
Procedia PDF Downloads 548
31784 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach
Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller
Abstract:
Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. But only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency between the drying and milling processes on the one hand and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows a quick and sufficiently exact determination of crucial process parameters.
Keywords: calibration model, celitement, cementitious material, NIR spectroscopy
Procedia PDF Downloads 503
31783 A Hybrid Pareto-Based Swarm Optimization Algorithm for the Multi-Objective Flexible Job Shop Scheduling Problems
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a new hybrid particle swarm optimization algorithm is proposed for the multi-objective flexible job shop scheduling problem, which is a very important and hard combinatorial problem. The Pareto approach is used for solving the multi-objective problem. Several new local search heuristics are integrated into an algorithm based on the critical block concept to enhance the performance of the algorithm. The algorithm is compared with recently published multi-objective algorithms on benchmarks selected from the literature. Several metrics are used for quantifying performance and comparing the achieved solutions. The algorithms are also compared based on the weighted summation of objectives approach. The proposed algorithm can find Pareto solutions more efficiently than the compared algorithms, in less computational time.
Keywords: swarm-based optimization, local search, Pareto optimality, flexible job shop scheduling, multi-objective optimization
Procedia PDF Downloads 373
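The Pareto bookkeeping at the heart of any such algorithm is compact. A minimal sketch of dominance checking and non-dominated filtering for minimization objectives (the example objective vectors are illustrative, e.g., makespan, total workload, critical machine workload):

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (minimization) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# objective vectors: (makespan, total workload, critical machine workload)
candidates = [(12, 40, 9), (11, 42, 9), (12, 39, 10), (13, 45, 11)]
print(pareto_front(candidates))  # (13, 45, 11) is dominated by (12, 40, 9)
```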
31782 Applying Laser Scanning and Digital Photogrammetry for Developing an Archaeological Model Structure for Old Castle in Germany
Authors: Bara' Al-Mistarehi
Abstract:
Documentation and assessment of the conservation state of an archaeological structure is a significant procedure in any management plan. However, it has always been a challenge to do this with a low-cost and safe methodology, and it is also a time-demanding procedure; therefore, a low-cost, efficient methodology for documenting the state of a structure is needed. Within the scope of this research, this paper applies digital photogrammetry and laser scanning to one of the most significant structures in Germany, the Old Castle (German: Altes Schloss). The site is well known for its unique features; however, the castle suffers from serious deterioration threats because of the environmental conditions and the absence of continuous monitoring, maintenance, and repair plans. Digital photogrammetry is a generally accepted technique for the collection of 3D representations of the environment. For this reason, this image-based technique has been extensively used to produce high-quality 3D models of heritage sites and historical buildings for documentation and presentation purposes. Additionally, terrestrial laser scanners are used, which directly measure 3D surface coordinates based on the run-time of reflected light pulses. These systems feature high data acquisition rates, good accuracy, and high spatial data density. Despite the potential of each single approach, in this research work the maximum benefit is expected from a combination of data from both digital cameras and terrestrial laser scanners. Within the paper, the usage, application, and advantages of the technique are investigated in terms of building a highly realistic textured 3D model of parts of the Old Castle. The model will be used as a tool for diagnosing the conservation state of the castle and as a means of monitoring future changes.
Keywords: Digital photogrammetry, Terrestrial laser scanners, 3D textured model, archaeological structure
Procedia PDF Downloads 186
31781 Quantum Entangled States and Image Processing
Authors: Sanjay Singh, Sushil Kumar, Rashmi Jain
Abstract:
Quantum computing is a new trend in computational theory, and a quantum mechanical system has several useful properties, such as entanglement. We aim to store information concerning the structure and content of a simple image in a quantum system. Consider an array of n qubits which we propose to use as our memory storage. In recent years, classical processing has been extended to quantum image processing. Quantum image processing is an elegant approach to overcoming the problems of its classical counterparts. Image storage, retrieval, and processing on quantum machines is an emerging area. Although quantum machines do not yet exist in physical reality, theoretical algorithms developed based on quantum entangled states give new insights into processing classical images in the quantum domain. In the present work, we give a brief overview of how entangled states can be useful for quantum image storage and retrieval. We discuss the properties of tripartite Greenberger-Horne-Zeilinger (GHZ) and W states and their usefulness for storing shapes that consist of three vertices. We also propose techniques to store shapes having more than three vertices.
Keywords: Greenberger-Horne-Zeilinger, image storage and retrieval, quantum entanglement, W states
Procedia PDF Downloads 311
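For reference, the tripartite GHZ and W states discussed above have the standard forms

```latex
\begin{align}
  |\mathrm{GHZ}\rangle &= \tfrac{1}{\sqrt{2}}\bigl(|000\rangle + |111\rangle\bigr), \\
  |W\rangle            &= \tfrac{1}{\sqrt{3}}\bigl(|001\rangle + |010\rangle + |100\rangle\bigr),
\end{align}
```

which suggests why three-qubit entangled states map naturally onto shapes with three vertices: one qubit can be associated with each vertex, and the permutation symmetry of the W state mirrors the interchangeability of the vertices.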
31780 Learning Algorithms for Fuzzy Inference Systems Composed of Double- and Single-Input Rule Modules
Authors: Hirofumi Miyajima, Kazuya Kishida, Noritaka Shigei, Hiromi Miyajima
Abstract:
Most self-tuning fuzzy systems, which are automatically constructed from learning data, are based on the steepest descent method (SDM). However, this approach often requires a long convergence time and gets stuck in shallow local minima. One solution is to use fuzzy rule modules with a small number of inputs, such as DIRMs (Double-Input Rule Modules) and SIRMs (Single-Input Rule Modules). In this paper, we consider a (generalized) DIRMs model composed of double- and single-input rule modules. Further, in order to reduce the redundant modules in the (generalized) DIRMs model, pruning and generative learning algorithms for the model are suggested. To show their effectiveness, numerical simulations for function approximation, Box-Jenkins, and obstacle avoidance problems are performed.
Keywords: Box-Jenkins's problem, double-input rule module, fuzzy inference model, obstacle avoidance, single-input rule module
Procedia PDF Downloads 355
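As a minimal sketch of SIRMs-type inference (each rule module sees a single input, and module outputs are blended with importance weights; the membership functions, rule consequents, and weights below are illustrative, and in a self-tuning system SDM would adjust them from the learning data):

```python
import numpy as np

def gauss(x, centers, sigmas):
    """Gaussian membership grades of scalar x in each fuzzy set."""
    return np.exp(-0.5 * ((x - centers) / sigmas) ** 2)

def sirms_infer(x, modules, importance):
    """SIRMs inference: each single-input module computes the weighted
    average of its rule consequents; the final output is the
    importance-weighted sum of the module outputs."""
    y = 0.0
    for (idx, centers, sigmas, consequents), h in zip(modules, importance):
        mu = gauss(x[idx], centers, sigmas)            # rule firing strengths
        y += h * np.dot(mu, consequents) / mu.sum()    # module output
    return y

# two inputs, one SIRM per input, three rules each (centers, widths, consequents)
modules = [
    (0, np.array([-1.0, 0.0, 1.0]), np.full(3, 0.5), np.array([-1.0, 0.0, 1.0])),
    (1, np.array([-1.0, 0.0, 1.0]), np.full(3, 0.5), np.array([1.0, 0.0, -1.0])),
]
importance = np.array([0.7, 0.3])
print(sirms_infer(np.array([0.2, -0.4]), modules, importance))
```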
31779 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College
Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa
Abstract:
This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by the work on small wind turbine design by David Wood. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient of power profile of the turbine. Our approach improves upon the traditional blade design methods in that it lets us dispense with assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high glide ratio airfoil designs, without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted in the form of classroom research activities to enhance Calculus sequence instruction at LaGCC.
Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling
Procedia PDF Downloads 235
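The abstract does not name the parametric family selected for the wind-speed model; the two-parameter Weibull distribution is the conventional choice in wind resource assessment, so a sketch of the estimation step under that assumption might look like this (SciPy's fit is maximum likelihood; fixing the location at zero gives the standard two-parameter form):

```python
import numpy as np
from scipy import stats

# stand-in for correlated on-site / airport wind-speed series (m/s)
rng = np.random.default_rng(42)
wind_speeds = stats.weibull_min.rvs(2.0, loc=0, scale=6.5, size=2000,
                                    random_state=rng)

# maximum-likelihood fit of the two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(wind_speeds, floc=0)
print(f"shape k = {shape:.2f}, scale c = {scale:.2f} m/s")

# mean wind power density is proportional to E[v^3]
mean_cubed = stats.weibull_min.moment(3, shape, loc=loc, scale=scale)
print(f"E[v^3] = {mean_cubed:.1f} (m/s)^3")
```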
31778 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence
Authors: Sogand Barghi
Abstract:
The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting
Procedia PDF Downloads 77
31777 Structural Damage Detection via Incomplete Model Data Using Output Data Only
Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining very efficient tools to detect damage in structures at an early state. In the past decades, a subject that has received considerable attention in the literature is damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location in an incomplete structural system using output data only. The method indicates the damage based on free vibration test data by using the "Two-Point Condensation (TPC) technique". This method creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimization of the equation of motion using the measured test data, and they are compared with the original (undamaged) stiffness matrices. Large percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element. Two cases are considered; the method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures.
Keywords: damage detection, optimization, signals processing, structural health monitoring, two points–condensation
Procedia PDF Downloads 367
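The damage-flagging step of the TPC technique reduces to an element-wise comparison of condensed stiffness matrices. A minimal sketch, assuming the optimization step has already produced the current 2-DOF matrices (the values are illustrative):

```python
import numpy as np

def damage_indicator(k_ref, k_cur, threshold=0.10):
    """Relative change between the undamaged (k_ref) and current (k_cur)
    condensed stiffness matrices; entries changing by more than
    `threshold` point to the damage location."""
    change = np.abs(k_cur - k_ref) / np.abs(k_ref)
    return change, change > threshold

# condensed 2-DOF stiffness matrices for one element pair (N/m, illustrative)
k_undamaged = np.array([[2.0e7, -1.0e7], [-1.0e7, 2.0e7]])
k_current   = np.array([[1.6e7, -0.9e7], [-0.9e7, 1.95e7]])
change, flags = damage_indicator(k_undamaged, k_current)
print(np.round(change * 100, 1))  # percentage change per coefficient
print(flags)                      # True where change exceeds 10%
```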
31776 Criteria for Assessing Prostate Structure after Proton Radiotherapy for Prostate Cancer
Authors: Kuplevatsky V., Kuplevatskay, Cherkashin M., Berezina N.
Abstract:
After 6 months, differentiation of the gland structure was disrupted due to edema in 100% of cases, and 20% retained signs of a tumor according to DWI/ADC data. By 12 months, a reduction in the size of the gland was seen in 100% of cases, and no diffusion restriction was observed in any case. The study after 18 months showed no significant changes in all (100%) patients. In the study 24 months after treatment, the size of the gland was stable in all cases (within ±5%), with a diffuse decrease in T2WI signal from the peripheral zones and no signs of diffusion restriction in 100%. After 30 months, signs of recovery of adenomatous changes in the transition zone were revealed in 85% of cases. After 36 and 42 months, restoration of organ differentiation was observed in 93% of patients. By the 48th month, 4 patients clinically showed signs of biochemical relapse, and according to the MRI data, signs of local relapse were revealed. After 48 months, there were signs of restoration of organ differentiation, which allowed the use of the PI-RADS criteria. The study after 54 months showed no changes compared to the previous control. At 60 months after treatment, 97% of patients showed restoration of differentiation of the gland structure, which allows the organ to be evaluated according to the PI-RADS criteria. Conclusions: Restoration of the structure of the prostate gland began 24 months after proton radiation therapy; the PI-RADS criteria can be fully applied 48 months after treatment. Control studies every 6 months without clinical signs of relapse are not advisable. Local control of the prostate tumor after proton radiation therapy was achieved in 95% of patients during the entire follow-up period (60 months).
Keywords: proton therapy, prostate cancer, MRI imaging, PI-RADS
Procedia PDF Downloads 106
31775 Blind Super-Resolution Reconstruction Based on PSF Estimation
Authors: Osama A. Omer, Amal Hamed
Abstract:
Successful blind image super-resolution algorithms require exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often have the disadvantages of slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimation of the PSF that yields the optimum restored image quality. The estimation of the PSF is performed by the knife-edge method, implemented by measuring the spreading of edges in the reproduced HR image itself during the reconstruction process. The proposed image reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method can outperform previous work robustly and efficiently.
Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm
Procedia PDF Downloads 368
31774 Bus Transit Demand Modeling and Fare Structure Analysis of Kabul City
Authors: Ramin Mirzada, Takuya Maruyama
Abstract:
Kabul is the heart of political, commercial, cultural, educational, and social life in Afghanistan and the fifth fastest growing city in the world. Low income inclines most Kabul residents to use public transport, especially buses, although there is no proper bus system, and no proper fare structure exists in Kabul city due to the wars. From 1992 to 2001, during the civil wars, Kabul suffered damage and destruction of its transportation facilities, including pavements, sidewalks, traffic circles, drainage systems, traffic signs and signals, trolleybuses, and almost all of the public transport system (e.g., the Millie Bus). This research is mainly focused on Kabul city's transportation system. The data used in this research were gathered by the Japan International Cooperation Agency (JICA) in 2008 and are used to find the demand and fare structure; additionally, a survey was done in 2016 to find the satisfaction level of Kabul residents with the fare structure. The aim of this research is to observe the demand for large buses, compare it to the actual supply from the government, and analyze the current fare structure, comparing it with the proposed (distance-based) fare structure, which has already been analyzed. The outcome of this research shows that the demand of Kabul city residents for public transport (large buses) exceeds the current supply, so the current public transportation (large buses) is not sufficient to serve public transport in Kabul city; worth mentioning, in order to overcome this problem, there is no need to build new roads or exclusive ways for buses. This research proposes that the government change the fare from a fixed fare to a distance-based fare, invest in public transportation, and increase the number of large buses so that the current demand for public transport is met.
Keywords: transportation, planning, public transport, large buses, Kabul, Afghanistan
Procedia PDF Downloads 317
31773 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs), given source characteristics, source-to-site distance, and local site condition, for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the exceedance probability of damage for pre-defined limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intense numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
Procedia PDF Downloads 109
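As a minimal sketch of the ground-motion-model step with one of the alternatives mentioned (Random Forest), mapping magnitude, distance, and a site-condition proxy to log-PGA; the data here are a synthetic stand-in, not the study's records:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
mag  = rng.uniform(3.0, 7.5, n)          # moment magnitude
dist = rng.uniform(5.0, 200.0, n)        # source-to-site distance (km)
vs30 = rng.uniform(180.0, 760.0, n)      # site-condition proxy (m/s)
# synthetic attenuation relation plus noise, standing in for recorded PGA
log_pga = 1.1 * mag - 1.8 * np.log(dist) - 0.002 * vs30 + rng.normal(0, 0.3, n)

X = np.column_stack([mag, dist, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, random_state=0)
gmm = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out records:", round(gmm.score(X_te, y_te), 3))
```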
31772 Gamification Using Stochastic Processes: Engage Children to Have Healthy Habits
Authors: Andre M. Carvalho, Pedro Sebastiao
Abstract:
This article is based on a dissertation that intends to analyze and intelligently model algorithms based on stochastic processes for a gamification application applied to marketing. Gamification is used in our daily lives to engage us in performing certain actions in order to achieve goals and gain rewards. This strategy is an increasingly adopted way to encourage and retain customers through game elements. The application of gamification here aims to encourage children between 6 and 10 years of age to have healthy habits, and the purpose is to serve as a model for use in marketing. This application was developed in Unity; we implemented intelligent algorithms based on stochastic processes, web services to respond to all requests of the application, a back-office website to manage the application, and the database. A behavioral analysis of the use of game elements and stochastic processes in children's motivation was done. Applying algorithms based on stochastic processes to game elements is very important to promote cooperation and to ensure fair and friendly competition between users, which consequently stimulates the users' interest and their involvement in the application and organization.
Keywords: engage, games, gamification, randomness, stochastic processes
Procedia PDF Downloads 334
31771 Predicting National Football League (NFL) Match with Score-Based System
Authors: Marcho Setiawan Handok, Samuel S. Lemma, Abdoulaye Fofana, Naseef Mansoor
Abstract:
This paper proposes a method to predict the outcome of National Football League matches using data from 2019 to 2022 and compares it with other popular models. The model uses open-source statistical data for each team, such as passing yards, rushing yards, fumbles lost, and scoring. Each statistic has an offensive and a defensive side. For instance, a data set of anticipated values for a specific matchup is created by comparing the offensive passing yards obtained by one team to the defensive passing yards given up by the opposition. We evaluated the model's performance by contrasting its results with those of established prediction algorithms. This research uses a neural network to predict the score of a National Football League match and then predicts the winner of the game.
Keywords: game prediction, NFL, football, artificial neural network
Procedia PDF Downloads 89
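The offense-versus-defense pairing described above can be sketched as a simple feature constructor; the stat names, the averaging rule, and the numbers are illustrative assumptions, and the resulting vectors would feed the neural network:

```python
def matchup_features(team, opponent):
    """Pair each of a team's offensive per-game stats with the
    opponent's corresponding defensive (allowed) stat, averaging the
    two as the anticipated value for this matchup."""
    feats = {}
    for stat in ("pass_yds", "rush_yds", "points"):
        offense = team[f"off_{stat}"]
        defense_allowed = opponent[f"def_{stat}"]
        feats[stat] = (offense + defense_allowed) / 2.0
    return feats

chiefs = {"off_pass_yds": 297.0, "off_rush_yds": 118.0, "off_points": 29.2}
bills  = {"def_pass_yds": 211.0, "def_rush_yds": 106.0, "def_points": 17.9}
print(matchup_features(chiefs, bills))
# {'pass_yds': 254.0, 'rush_yds': 112.0, 'points': 23.55}
```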
31770 Dissecting Big Trajectory Data to Analyse Road Network Travel Efficiency
Authors: Rania Alshikhe, Vinita Jindal
Abstract:
Digital innovation has played a crucial role in managing smart transportation. For this, big trajectory data collected from traveling vehicles, such as taxis, through installed global positioning system (GPS)-enabled devices can be utilized. It offers an unprecedented opportunity to trace the movements of vehicles at fine spatiotemporal granularity. This paper aims to explore big trajectory data to measure the travel efficiency of road networks using the proposed statistical travel efficiency measure (STEM) across an entire city. Further, it identifies the causes of low travel efficiency using the proposed least-squares approximation network-based causality exploration (LANCE). Finally, the resulting data analysis reveals the causes of low travel efficiency, along with the road segments that need to be optimized to improve traffic conditions and thus minimize the average travel time from a given point A to point B in the road network. The obtained results show that our proposed approach outperforms the baseline algorithms for measuring the travel efficiency of the road network.
Keywords: GPS trajectory, road network, taxi trips, digital map, big data, STEM, LANCE
Procedia PDF Downloads 159
31769 Spatio-Temporal Data Mining with Association Rules for Lake Van
Authors: Tolga Aydin, M. Fatih Alaeddinoğlu
Abstract:
Throughout history, people have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the well-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make the Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease as the amount of water it holds changes. In this study, evaporation, humidity, lake altitude, amount of rainfall, and temperature parameters recorded in the Lake Van region over the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying the possible reasons for overflows and underflows may be used to alert experts to take precautions and make the necessary investments.
Keywords: apriori algorithm, association rules, data mining, spatio-temporal data
Procedia PDF Downloads 377
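A minimal sketch of the Apriori core on discretized spatio-temporal records, where each transaction holds binned parameter values plus a spatial tag (the bins and tags are illustrative):

```python
from itertools import combinations  # noqa: F401  (handy for rule extraction)

def apriori(transactions, min_support):
    """Level-wise frequent itemset mining: count k-itemsets, keep those
    meeting min_support, and grow candidates from the survivors."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, k_sets = {}, items
    while k_sets:
        counts = {c: sum(c <= t for t in transactions) for c in k_sets}
        survivors = {c for c, n in counts.items() if n >= min_support}
        frequent.update({c: counts[c] for c in survivors})
        k_sets = {a | b for a in survivors for b in survivors
                  if len(a | b) == len(a) + 1}
    return frequent

records = [  # discretized observations for the Lake Van region
    {"rain=high", "evap=low", "level=overflow", "zone=coast"},
    {"rain=high", "evap=low", "level=overflow", "zone=coast"},
    {"rain=low", "evap=high", "level=underflow", "zone=coast"},
]
for itemset, n in sorted(apriori(records, 2).items(), key=lambda kv: -kv[1]):
    print(set(itemset), n)
```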
31768 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection
Authors: Ashkan Zakaryazad, Ekrem Duman
Abstract:
A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks transactions in terms of the probability of being fraudulent. In fact, this approach is often criticized, because firms do not care about fraud probability but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model building step. The artificial neural network proposed in this study works on the basis of profit maximization instead of minimizing the prediction error. Moreover, some studies have shown that the back-propagation algorithm, similar to other gradient-based algorithms, usually gets trapped in local optima, and swarm-based algorithms are more successful in this respect. In this study, we train our profit-maximization ANN using Migrating Birds Optimization (MBO), which was recently introduced to the literature.
Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent
Procedia PDF Downloads 478
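The profit-oriented objective can be sketched directly: instead of a squared-error loss, score the network by the net profit its alerts generate. The amounts, costs, and alert budget below are illustrative assumptions; in the study, a score of this kind is what MBO would maximize while searching over network weights:

```python
import numpy as np

def expected_profit(y_true, y_score, amounts, inspection_cost=10.0, top_k=100):
    """Profit of inspecting the top_k highest-scored transactions:
    a caught fraud saves its amount; every inspection costs a fee."""
    ranked = np.argsort(-y_score)[:top_k]
    saved = amounts[ranked][y_true[ranked] == 1].sum()
    return saved - inspection_cost * top_k

rng = np.random.default_rng(7)
n = 5000
y_true  = (rng.random(n) < 0.02).astype(int)       # ~2% fraudulent
amounts = rng.exponential(200.0, n)                # transaction amounts
y_score = 0.7 * y_true + 0.3 * rng.random(n)       # stand-in model scores
print("net profit of top-100 alerts:",
      round(expected_profit(y_true, y_score, amounts), 2))
```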
31767 Sensor Registration in Multi-Static Sonar Fusion Detection
Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin
Abstract:
In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors, including the distance error and angle error of each sonar in detection, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target; the target position detected by each sonar is given in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate the sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to get the observation value. A MATLAB simulation is carried out in an underwater acoustic environment, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by target distribution, but an increase in random noise slows down its convergence rate. The LS method is an improvement over the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem
Procedia PDF Downloads 174
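A minimal sketch of the LS registration step under an additive bias model: each sonar's range and bearing measurements of common reference targets are compared with fused reference values, and the constant offsets are estimated by least squares (with a design matrix of ones, the LS estimate reduces to the mean residual):

```python
import numpy as np

def estimate_bias(measured, reference):
    """Least-squares estimate of a constant additive bias from repeated
    measurements of known reference values: solve argmin ||A b - r||^2
    with A a column of ones, i.e. the mean of the residuals."""
    residuals = measured - reference
    A = np.ones((len(residuals), 1))
    bias, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return float(bias[0])

rng = np.random.default_rng(3)
true_range_bias, true_angle_bias = 25.0, np.deg2rad(1.5)
ref_r = rng.uniform(500, 3000, 200)          # fused reference ranges (m)
ref_a = rng.uniform(-np.pi, np.pi, 200)      # fused reference bearings (rad)
meas_r = ref_r + true_range_bias + rng.normal(0, 5.0, 200)
meas_a = ref_a + true_angle_bias + rng.normal(0, 0.01, 200)
print("range bias ≈", round(estimate_bias(meas_r, ref_r), 1), "m")
print("angle bias ≈", round(np.rad2deg(estimate_bias(meas_a, ref_a)), 2), "deg")
```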
31766 Seismic Fragility of Weir Structure Considering Aging Degradation of Concrete Material
Authors: HoYoung Son, DongHoon Shin, WooYoung Jung
Abstract:
This study presents a seismic fragility framework for a concrete weir structure subjected to strong seismic ground motions; in particular, the concrete aging condition of the weir structure was taken into account. In order to understand the influence of concrete aging on the weir structure, analytical seismic fragilities of the weir structure were derived for pre- and post-deterioration of the concrete using probabilistic risk assessment. The performance of the concrete weir structure after five years was assumed for the concrete aging or deterioration, and for this condition the elastic modulus was simply reduced by about one-tenth compared with the initial condition of the weir structure. A 2D nonlinear finite element analysis considering the deterioration of the concrete in the weir structure was performed using the ABAQUS platform, a commercial structural analysis program. The seismic fragility analysis showed that the simplified concrete degradation resulted in an increase of almost 45% in the probability of failure at Limit State 3, in comparison to the initial construction stage.
Keywords: weir, FEM, concrete, fragility, aging
Procedia PDF Downloads 486
31765 Application of Regularized Low-Rank Matrix Factorization in Personalized Targeting
Authors: Kourosh Modarresi
Abstract:
The Netflix problem brought the topic of “Recommendation Systems” into the mainstream of computer science, mathematics, and statistics. Though much progress has been made, the available algorithms do not obtain satisfactory results; their success rate is rarely above 5%. This work is based on the belief that the main challenge is to come up with “scalable personalization” models. This paper uses an adaptive regularization of inverse singular value decomposition (SVD) that applies adaptive penalization to the singular vectors. The results show far better matching for recommender systems when compared to those from the state-of-the-art models in the industry.
Keywords: convex optimization, LASSO, regression, recommender systems, singular value decomposition, low rank approximation
Procedia PDF Downloads 460
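The abstract does not spell out the adaptive penalization; a minimal sketch in the same spirit soft-thresholds the singular values of a (pre-filled) ratings matrix, with a LASSO-like penalty that shrinks smaller, noisier components more strongly:

```python
import numpy as np

def adaptive_svd_complete(R, penalty=0.1):
    """Low-rank reconstruction of a ratings matrix: SVD, then adaptive
    soft-thresholding of singular values (stronger shrinkage on smaller
    ones), then re-composition. Missing entries should be pre-filled,
    e.g. with column means."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    weights = s.max() / np.maximum(s, 1e-12)     # small s -> large penalty weight
    s_shrunk = np.maximum(s - penalty * weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
true = rng.random((8, 2)) @ rng.random((2, 6))   # rank-2 user x item matrix
noisy = true + rng.normal(0, 0.05, true.shape)
approx = adaptive_svd_complete(noisy, penalty=0.1)
print("reconstruction error:", round(np.linalg.norm(approx - true), 3))
```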
31764 Seismic Behavior and Loss Assessment of High–Rise Buildings with Light Gauge Steel–Concrete Hybrid Structure
Authors: Bing Lu, Shuang Li, Hongyuan Zhou
Abstract:
The steel–concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel–concrete hybrid structure, combining a light gauge steel structure with a concrete hybrid structure, is a new type of steel–concrete hybrid structure that possesses some advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated using finite element software. The three concrete hybrid structures are the reinforced concrete column–steel beam (RC‒S) hybrid structure, the concrete-filled steel tube column–steel beam (CFST‒S) hybrid structure, and the tubed concrete column–steel beam (TC‒S) hybrid structure. Nonlinear time-history analysis of the three high-rise buildings under 80 earthquakes was carried out. The simulations indicated that the seismic performance of the three high-rise buildings was superior: under extremely rare earthquakes, the maximum inter-story drifts of the three buildings are significantly lower than 1/50. The inter-story drift and floor acceleration of the building with the CFST‒S hybrid structure were larger than those of the building with the RC‒S hybrid structure and smaller than those of the building with the TC‒S hybrid structure. Then, based on the time-history analysis results, the post-earthquake repair cost ratio and repair time of the three buildings were predicted using the economic performance analysis method proposed in the FEMA P-58 report. Under frequent, basic, and rare earthquakes, the repair cost ratios and repair times of the three buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the building with the TC‒S hybrid structure were the largest among the three. Owing to the advantages of the CFST‒S hybrid structure, it could be extensively employed in high-rise buildings subjected to earthquake excitations.
Keywords: seismic behavior, loss assessment, light gauge steel–concrete hybrid structure, high–rise building, time–history analysis
Procedia PDF Downloads 192
31763 Analyze and Visualize Eye-Tracking Data
Authors: Aymen Sekhri, Emmanuel Kwabena Frimpong, Bolaji Mubarak Ayeyemi, Aleksi Hirvonen, Matias Hirvonen, Tedros Tesfay Andemichael
Abstract:
Fixation identification, which involves isolating and identifying fixations and saccades in eye-tracking protocols, is an important aspect of eye-movement data processing that can have a big impact on higher-level analyses. However, fixation identification techniques are frequently discussed informally and rarely compared in any meaningful way. In this work, we implement fixation detection and analysis with two state-of-the-art algorithms. The first is the velocity-threshold fixation algorithm, which identifies fixations based on a threshold value; the second, for eye movement detection, is U'n'Eye, a deep neural network algorithm. The goal of this project is to analyze and visualize eye-tracking data from a provided eye-gaze dataset. The data were collected in a scenario in which individuals were shown photos and asked whether or not they recognized them. The results of the two fixation detection approaches are contrasted and visualized in this paper.
Keywords: human-computer interaction, eye-tracking, CNN, fixations, saccades
Procedia PDF Downloads 140
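A minimal sketch of the first of the two algorithms, velocity-threshold identification (I-VT): compute sample-to-sample gaze velocity, label samples below the threshold as fixation samples, and report sufficiently long consecutive runs as fixations (the threshold, sampling rate, and minimum duration are illustrative):

```python
import numpy as np

def ivt_fixations(x, y, hz=250.0, vel_threshold=50.0, min_dur=0.06):
    """I-VT: samples whose velocity falls below vel_threshold (units/s)
    are fixation samples; consecutive runs longer than min_dur seconds
    are reported as (t_start, t_end, centroid_x, centroid_y)."""
    vx, vy = np.gradient(x) * hz, np.gradient(y) * hz
    speed = np.hypot(vx, vy)
    is_fix = speed < vel_threshold
    fixations, start = [], None
    for i, f in enumerate(np.append(is_fix, False)):  # sentinel ends last run
        if f and start is None:
            start = i
        elif not f and start is not None:
            if (i - start) / hz >= min_dur:
                fixations.append((start / hz, i / hz,
                                  x[start:i].mean(), y[start:i].mean()))
            start = None
    return fixations

rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 250.0)
x = np.where(t < 0.5, 100.0, 300.0) + rng.normal(0, 0.05, t.size)
y = np.full(t.size, 200.0) + rng.normal(0, 0.05, t.size)
print(ivt_fixations(x, y))  # two fixations separated by one saccade
```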