Search results for: Models of information systems


1020 Influence of Driving Strategy on Power and Fuel Consumption of Lightweight PEM Fuel Cell Vehicle Powertrain

Authors: Suhadiyana Hanapi, Alhassan Salami Tijani, W. A. N Wan Mohamed

Abstract:

In this paper, a prototype PEM fuel cell vehicle integrated with a 1 kW air-blowing proton exchange membrane fuel cell (PEMFC) stack as the main power source has been developed for a lightweight cruising vehicle. The test vehicle is equipped with a PEM fuel cell system that provides electric power to a brushed DC motor. This vehicle was designed to compete with industrial lightweight vehicles, with the target of consuming the least amount of energy while delivering high performance. It is well established in the literature that individual variations in driving style have a significant impact on vehicle energy efficiency. The primary aim of this study was to assess the power and fuel consumption of a hydrogen fuel cell vehicle operating under three different driving techniques (i.e., 25 km/h constant speed, 22-28 km/h speed range, 20-30 km/h speed range). The goal is to develop the best driving strategy to maximize performance and minimize fuel consumption for the vehicle system. The relationship between power demand and hydrogen consumption is also discussed. All the techniques can be evaluated and compared on broadly similar terms. An automatic intelligent controller was used for driving the prototype fuel cell vehicle over different obstacles while maintaining all systems at maximum efficiency. The results showed that the 25 km/h constant speed was the optimal driving strategy, with the lowest fuel consumption.
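
As a purely illustrative sketch of how the power-to-fuel relationship discussed above can be compared across driving strategies, the Python fragment below integrates a logged power-demand trace into hydrogen mass. The stack efficiency, the hydrogen lower heating value and the example traces are assumptions for illustration, not values or code from the paper.

# Illustrative sketch (not the authors' model): estimating hydrogen use from a
# logged power-demand trace for different driving strategies. Stack efficiency
# and the hydrogen lower heating value (~120 MJ/kg) are assumed values.
import numpy as np

H2_LHV_J_PER_KG = 120e6   # assumed lower heating value of hydrogen
STACK_EFFICIENCY = 0.45   # assumed average PEMFC system efficiency

def hydrogen_consumed(power_w, dt_s=1.0):
    """Integrate electrical power demand (W) over time into hydrogen mass (g)."""
    electrical_energy = np.trapz(power_w, dx=dt_s)           # J at the bus
    chemical_energy = electrical_energy / STACK_EFFICIENCY   # J of H2 required
    return 1000.0 * chemical_energy / H2_LHV_J_PER_KG        # grams of H2

# Hypothetical traces for a constant-speed and an oscillating-speed strategy
t = np.arange(0, 600)                                    # 10-minute run, 1 s steps
constant_speed = np.full_like(t, 550.0, dtype=float)     # W, steady cruise
oscillating = 550.0 + 250.0 * np.sin(2 * np.pi * t / 60.0)  # W, 22-28 km/h style

for name, trace in [("constant 25 km/h", constant_speed), ("oscillating", oscillating)]:
    print(f"{name}: {hydrogen_consumed(trace):.1f} g H2")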

Keywords: Prototype fuel cell electric vehicles, energy efficiency, control/driving technique, fuel economy.

1019 Prototype of Business Directory for Micro, Small and Medium Enterprises Using Google Maps API and Multimedia

Authors: Suselo Thomas, Suyoto, Dwiandiyanta B. Yudi

Abstract:

This paper explains a prototype of a business directory for micro-scale businesses and small and medium enterprises (SMEs), the third phase of the research. The third phase is the software development phase, in which the SME business directory model developed earlier is used to create prototype SME business directory software. In the fourth phase, implementation, the developed units are tested to obtain input from potential users. The fifth phase is the testing phase, which determines the strengths and weaknesses of the software that has been developed. The result of this phase is software that is both online (web-based) and multimedia-based. The business directory, if implemented, will facilitate and optimize SMEs' access to suppliers and to marketing. The business directory is equipped with geocoding, so the location of each SME can be easily viewed on a map. The map is constructed using the functionality of the web-based Google Maps API. The information is presented in multimedia form, which can be more interesting and interactive. The methodology used to achieve the goal consists of observation, interviews, and modeling and classifying the business directory for SMEs.
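
The sketch below illustrates the geocoding step only: resolving an SME address to coordinates via the public Google Maps Geocoding API before a marker is placed on the map. The API key, the example business and the surrounding data structure are placeholders; this is not the authors' implementation, which is web- and multimedia-based.

# Sketch (assumption, not the authors' code): geocoding SME addresses with the
# Google Maps Geocoding API so each business can be placed as a map marker.
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
API_KEY = "YOUR_API_KEY"  # placeholder

def geocode(address):
    """Return (lat, lng) for an address, or None if no result is found."""
    resp = requests.get(GEOCODE_URL, params={"address": address, "key": API_KEY})
    results = resp.json().get("results", [])
    if not results:
        return None
    loc = results[0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

sme_directory = [
    {"name": "Example Batik Workshop", "address": "Jalan Malioboro, Yogyakarta"},  # hypothetical entry
]
for sme in sme_directory:
    sme["coords"] = geocode(sme["address"])  # these coordinates feed the Maps API marker
    print(sme)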

Keywords: Business directories, SMEs, Google Maps API, Multimedia, Prototype.

1018 Product Features Extraction from Opinions According to Time

Authors: Kamal Amarouche, Houda Benbrahim, Ismail Kassou

Abstract:

Nowadays, e-commerce shopping websites have experienced noticeable growth and have gained consumers' trust. After purchasing a product, many consumers share comments in which opinions about the given product are usually embedded. Research on the automatic management of opinions, which gives suggestions to potential consumers and portrays an image of the product to manufacturers, has been growing recently. Just after a product is launched in the market, the reviews generated around it do not usually contain helpful information, only generic opinions about the product (e.g., telephone: great phone...), since the product is still in its launching phase. Over time, the product matures and consumers perceive the advantages and disadvantages of each specific product feature, so they generate comments that contain their sentiments about these features. In this paper, we present an unsupervised method to extract the different product features hidden in the opinions which influence its purchase; the method combines Time Weighting (TW), which depends on the time at which opinions were expressed, with Term Frequency-Inverse Document Frequency (TF-IDF). We conduct several experiments using two different datasets about cell phones and hotels. The results show the effectiveness of our automatic feature extraction, as well as its domain-independent character.
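
A minimal sketch of the TW + TF-IDF combination is given below. The abstract does not specify the form of the time weight, so an exponential decay on review age is assumed here purely for illustration, and the toy reviews and ages are invented.

# Sketch of combining TF-IDF with a time weight (TW). The exponential-decay
# form of TW used here is an assumption; the paper only states that TW depends
# on when each opinion was expressed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

reviews = [
    "great phone",                                   # early, generic opinion
    "battery life is short but camera is sharp",
    "screen scratches easily, battery drains fast",
]
ages_in_days = np.array([300, 60, 10])               # hypothetical review ages

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(reviews).toarray()

half_life = 90.0
time_weight = 0.5 ** (ages_in_days / half_life)      # newer reviews count more

# Candidate feature score: time-weighted TF-IDF summed over reviews
scores = (tfidf * time_weight[:, None]).sum(axis=0)
terms = vectorizer.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda x: -x[1])[:5]:
    print(f"{term:25s} {score:.3f}")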

Keywords: Opinion mining, product feature extraction, sentiment analysis, SentiWordNet.

1017 Effectiveness of Cellular Phone with Active RFID Tag for Evacuation - The Case of Evacuation from the Underground Shopping Mall of Tenjin

Authors: Masatora Daito, Noriyuki Tanida

Abstract:

Underground shopping malls pose structural problems for fire evacuation. People also sometimes lose their sense of direction and of the current time inside the mall. If emergencies such as terrorist attacks or gas explosions happen, they have to get out quickly. Under such circumstances, the inside of the mall poses a high risk to life. In this research, the authors propose a way for people to leave an underground shopping mall quickly. If the narrow exits can be located using active RFID (Radio Frequency Identification) tags and cellular phones, people can evacuate as soon as possible. To verify this hypothesis, the authors designed a model and carried out an agent-based simulation, treating the Tenjin mall in Fukuoka Prefecture, Japan, as a case study. The simulation result is that pedestrians using active RFID tags and cellular phones spent less time on evacuation. Even when the diffusion of RFID tags and cellular phones was not complete, the effectiveness in reducing evacuation time could still be shown.

Keywords: Evacuation, active RFID tag and cellular phone, underground shopping mall, agent-based simulation.

1016 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time

Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma

Abstract:

Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion have imposed a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies and mobile data services to manage traffic flow, reduce congestion and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The traffic sign classification problem needs to be solved, as it is a major step towards building semi-autonomous and autonomous driving systems. This work focuses on implementing an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier using the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than relying on hand-crafted features, our model addresses the concern of an exploding number of parameters and employs data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
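
For orientation, a minimal Keras sketch of a GTSRB-style 43-class CNN with on-the-fly augmentation is shown below. The layer sizes, input resolution and training settings are illustrative assumptions, not the architecture reported in the paper.

# Minimal CNN sketch for GTSRB-style 43-class sign classification (illustrative
# layer sizes, not the authors' exact architecture or training schedule).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 43  # GTSRB has 43 sign classes

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.RandomRotation(0.05),           # light data augmentation
    layers.RandomTranslation(0.1, 0.1),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)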

Keywords: Multiclass classification, convolutional neural network, OpenCV, data augmentation.

1015 Just-In-Time for Reducing Inventory Costs throughout a Supply Chain: A Case Study

Authors: Faraj Farhat El Dabee, Rajab Abdullah Hokoma

Abstract:

Supply Chain Management (SCM) is the integration of manufacturer, transporter and customer to form one seamless chain that allows a smooth flow of raw materials, information and products throughout the entire network, helping to minimize all related efforts and costs. The main objective of this paper is to develop a model that can accept a specified number of spare parts within the supply chain and simulate its inventory operations throughout all stages in order to minimize the inventory holding costs, base stock and safety stock, and to find the optimum inventory levels, thereby suggesting a way forward for adapting some factors of Just-In-Time to minimize the inventory costs throughout the entire supply chain. The model has been developed using Microsoft Excel and Visual Basic in order to study inventory allocations in any network of the supply chain. The applicability and reproducibility of this model were tested by comparing the actual system implemented in the case study with the results of the developed model. The findings showed that the total inventory costs of the developed model are about 50% less than the actual costs of the inventory items within the case study.
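
The paper's model was built in Microsoft Excel and Visual Basic; the Python fragment below re-sketches the general idea of simulating one supply chain stage under a base-stock policy with small, frequent JIT-style replenishments. All demand and cost figures are invented for illustration and the multi-stage structure of the case study is not reproduced.

# Re-sketch of a base-stock spare-parts simulation for a single stage; the
# paper's actual Excel/Visual Basic model and data are not reproduced here.
import random

def simulate_stage(base_stock, holding_cost, demand_rate, periods=365, seed=1):
    random.seed(seed)
    on_hand, total_holding, stockouts = base_stock, 0.0, 0
    for _ in range(periods):
        demand = random.randint(0, 2 * demand_rate)   # toy demand process
        on_hand -= demand
        if on_hand < 0:
            stockouts += -on_hand
            on_hand = 0
        on_hand += demand_rate                         # JIT-style small, frequent replenishment
        on_hand = min(on_hand, base_stock)             # never exceed the base-stock level
        total_holding += holding_cost * on_hand
    return total_holding, stockouts

for base_stock in (10, 20, 40):
    cost, misses = simulate_stage(base_stock, holding_cost=0.5, demand_rate=5)
    print(f"base stock {base_stock:3d}: holding cost {cost:8.1f}, unmet demand {misses}")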

Keywords: Holding Costs, Inventory, JIT, Modeling, SCM

1014 General Purpose Graphic Processing Units Based Real Time Video Tracking System

Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai

Abstract:

Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground. However, pixel-level processing of local regions consumes a considerable amount of computational time and memory space in traditional approaches. In our approach, we have explored the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. The Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernels are influenced by local regions and are updated by the inter-frame variations of these corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.
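
As a CPU-side reference for the kind of adaptive-GMM background model described, the sketch below uses OpenCV's MOG2 subtractor. The GPGPU kernels and the locally weighted update rule of the paper are not reproduced; the video path and parameter values are placeholders.

# CPU reference sketch of adaptive-GMM background subtraction with OpenCV MOG2;
# the paper's GPGPU implementation and local weighting scheme are not shown.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # placeholder input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)          # per-pixel GMM update + classification
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()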

Keywords: Connected components, Embrace threads, Local weighted kernel, Structuring element.

1013 A Bi-Objective Stochastic Mathematical Model for Agricultural Supply Chain Network

Authors: Mohammad Mahdi Paydar, Armin Cheraghalipour, Mostafa Hajiaghaei-Keshteli

Abstract:

Nowadays, in advanced countries, agriculture, as one of the most significant sectors of the economy, plays an important role in political and economic independence. Due to farmers' lack of information about product demand and the lack of proper planning for harvest time, a considerable amount of produce is spoiled annually. In this paper, we attempt to improve these unfavorable conditions by designing an effective supply chain network that minimizes the total costs of agricultural products along with minimizing shortages at demand points. To validate the proposed model, a stochastic optimization approach using the branch-and-bound solver of the LINGO software is utilized. Furthermore, to gather the data for the parameters, a case study in Mazandaran province, in the north of Iran, has been applied. Using the ɛ-constraint approach, a Pareto front is obtained, one of its Pareto solutions is selected as the best solution, and the related results of this solution are explained. Finally, conclusions and suggestions for future research are presented.
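
A toy illustration of the ɛ-constraint idea on a two-objective problem (cost vs. shortage) is sketched below with scipy's LP solver: one objective is minimized while the other is bounded by ɛ, and sweeping ɛ traces a Pareto front. The coefficients are made up; the paper's stochastic model and its LINGO solution are not reproduced.

# Toy epsilon-constraint sweep: minimize shipping cost while bounding shortage
# by epsilon; coefficients are invented and unrelated to the case-study data.
import numpy as np
from scipy.optimize import linprog

demand = 100.0
cost_per_unit, capacity = 4.0, 80.0

pareto = []
for eps in np.linspace(0, 40, 5):              # allowed shortage levels
    # variables x = [shipped, shortage]; shipped + shortage >= demand, shortage <= eps
    res = linprog(c=[cost_per_unit, 0.0],
                  A_ub=[[-1.0, -1.0], [0.0, 1.0]],
                  b_ub=[-demand, eps],
                  bounds=[(0, capacity), (0, None)])
    if res.success:                            # small eps values may be infeasible
        pareto.append((res.x[1], res.fun))     # (shortage, cost) point on the front

for shortage, cost in pareto:
    print(f"shortage {shortage:5.1f}  ->  cost {cost:7.1f}")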

Keywords: Perishable products, stochastic optimization, agricultural supply chain, ɛ-constraint.

1012 Wavelet Enhanced CCA for Minimization of Ocular and Muscle Artifacts in EEG

Authors: B. S. Raghavendra, D. Narayana Dutt

Abstract:

Electroencephalogram (EEG) recordings are often contaminated with ocular and muscle artifacts. In this paper, canonical correlation analysis (CCA) is used as a blind source separation (BSS) technique (BSS-CCA) to decompose the artifact-contaminated EEG into component signals. We combine the BSS-CCA technique with a wavelet filtering approach to minimize both ocular and muscle artifacts simultaneously, and refer to the proposed method as wavelet-enhanced BSS-CCA. In this approach, after careful visual inspection, the muscle artifact components are discarded and the ocular artifact components are subjected to wavelet filtering to retain high-frequency cerebral information, and then clean EEG is reconstructed. The performance of the proposed wavelet-enhanced BSS-CCA method is tested on real EEG recordings contaminated with ocular and muscle artifacts, for which the power spectral density is used as a quantitative measure. Our results suggest that the proposed hybrid approach minimizes ocular and muscle artifacts effectively, while minimally affecting the underlying cerebral activity in the EEG recordings.
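
A common way to realize the BSS-CCA step is to run CCA between the multichannel EEG and a one-sample delayed copy of itself, so that the extracted sources are ordered by autocorrelation; selected sources can then be wavelet-thresholded before reconstruction. The schematic sketch below follows that recipe with sklearn and PyWavelets on synthetic data; the channel count, thresholds and the choice of which components to filter are illustrative, not the authors' settings.

# Schematic BSS-CCA sketch: CCA between the EEG and its one-sample delayed copy,
# wavelet-threshold a chosen source, then reconstruct. Parameters are illustrative.
import numpy as np
import pywt
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 5000))          # 19 channels x samples (synthetic stand-in)

X = eeg[:, 1:].T                               # samples x channels
Y = eeg[:, :-1].T                              # one-sample delayed copy
cca = CCA(n_components=19, max_iter=1000)
U, _ = cca.fit_transform(X, Y)                 # sources ordered by autocorrelation

def soft_wavelet_filter(component, wavelet="sym4", level=5):
    """Soft-threshold the detail coefficients of one source."""
    coeffs = pywt.wavedec(component, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(component)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(component)]

# Example: wavelet-filter the first (most autocorrelated, ocular-like) source;
# low-autocorrelation muscle-like sources could instead be zeroed out.
U[:, 0] = soft_wavelet_filter(U[:, 0])
cleaned = cca.inverse_transform(U).T           # back to channels x samples
print(cleaned.shape)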

Keywords: Blind source separation, Canonical correlation analysis, Electroencephalogram, Muscle artifact, Ocular artifact, Power spectrum, Wavelet threshold.

1011 Numerical Study of Oxygen Enrichment on NO Pollution Spread in a Combustion Chamber

Authors: Zohreh Orshesh

Abstract:

In this study, a 3D combustion chamber was simulated using FLUENT 6.32. The aim is to obtain detailed information on the combustion characteristics and nitrogen oxide formation in the furnace, and on the effect of oxygen enrichment on the combustion process. Oxygen-enriched combustion is an effective way to reduce emissions. This paper analyzes NO emission, including thermal NO and prompt NO. The air-to-fuel flow rate ratio is varied as 1.3, 3.2 and 5.1, and the oxygen-enriched flow rates are 28, 54 and 68 L/min. The 3D Reynolds Averaged Navier-Stokes (RANS) equations with the standard k-ε turbulence model are solved together by the FLUENT 6.32 software. A first-order upwind scheme is used to discretize the governing equations, and the SIMPLE algorithm is used for pressure-velocity coupling. Results show that for AF = 1.3, increasing the oxygen flow rate reduces NO emissions. Moreover, under a fixed oxygen enrichment condition, increasing the air-to-fuel ratio increases the temperature peak, but not the NO emission rate. As a result, oxygen enrichment can reduce the NO emission in this kind of furnace at low air-to-fuel ratios.

Keywords: Combustion chamber, Oxygen enrichment, Reynolds Averaged Navier-Stokes, NO emission

1010 A Combined Conventional and Differential Evolution Method for Model Order Reduction

Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil

Abstract:

In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the combined advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are employed: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansions. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. In the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM in which both the numerator and denominator polynomials are obtained by the conventional method, to show its superiority.
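
The evolutionary step alone can be sketched as below: with the reduced numerator fixed, scipy's differential evolution searches second-order denominator coefficients that minimize the ISE between step responses. The fourth-order example system, the fixed numerator and the search bounds are invented for illustration, and the Mihailov/CFE conventional step is not shown.

# Sketch of the DE step only: search reduced-denominator coefficients to
# minimize the ISE between full and reduced step responses (example system
# and bounds are invented; not the paper's numerical example).
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

t = np.linspace(0, 10, 1000)
_, y_full = signal.step(signal.TransferFunction([24], [1, 10, 35, 50, 24]), T=t)

num_reduced = [1.0]                                   # fixed reduced numerator (illustrative)

def ise(d):
    """Integral squared error between the full and reduced step responses."""
    _, y_red = signal.step(signal.TransferFunction(num_reduced, [1.0, d[0], d[1]]), T=t)
    return np.trapz((y_full - y_red) ** 2, t)

result = differential_evolution(ise, bounds=[(0.1, 20.0), (0.1, 20.0)], seed=1, tol=1e-8)
print("reduced denominator: s^2 + %.3f s + %.3f, ISE = %.2e"
      % (result.x[0], result.x[1], result.fun))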

Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.

1009 Dynamic Anonymity

Authors: Emin Islam Tatlı, Dirk Stegemann, Stefan Lucks

Abstract:

Encryption protects communication partners from disclosure of their secret messages but cannot prevent traffic analysis and the leakage of information about "who communicates with whom". In the presence of collaborating adversaries, this linkability of actions can endanger anonymity. However, reliably providing anonymity is crucial in many applications. Especially in context-aware mobile business, where mobile users equipped with PDAs request and receive services from service providers, providing anonymous communication is mission-critical and challenging at the same time. Firstly, the limited performance of mobile devices does not allow for heavy use of the expensive public-key operations which are commonly used in anonymity protocols. Moreover, the demands for security depend on the application (e.g., mobile dating vs. a pizza delivery service), and different users (e.g., a celebrity vs. an ordinary person) may even require different security levels for the same application. Considering both the hardware limitations of mobile devices and the different sensitivities of users, we propose an anonymity framework that is dynamically configurable according to user and application preferences. Our framework is based on Chaum's mix-net. We explain the proposed framework, its configuration parameters for the dynamic behavior, and the algorithm to enforce dynamic anonymity.

Keywords: Anonymity, context-awareness, mix-net, mobile business, policy management

1008 Faster FPGA Routing Solution using DNA Computing

Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon

Abstract:

There are many classical algorithms for finding routings in FPGAs, but using DNA computing the routes can be found efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of other classical algorithms used for solving routing in FPGAs. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. Many research accomplishments have shown that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper, we propose a two-tier approach to the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing. A satisfying assignment for a particular route results in a valid routing, and the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and indicate the applicability of DNA computing to the FPGA routing problem.

Keywords: FPGA, Routing, DNA Computing.

1007 Steady State Power Flow Calculations with STATCOM under Load Increase Scenario and Line Contingencies

Authors: A. S. Telang, P. P. Bedekar

Abstract:

Flexible AC transmission system (FACTS) controllers play an important role in controlling line power flow and in improving the voltage profile of the power system network. They can be used to increase the reliability and efficiency of transmission and distribution systems. The modeling of these FACTS controllers in power flow calculations has become a challenging research problem. This paper presents a simple and systematic approach for steady-state power flow calculations of a power system with a STATCOM (Static Synchronous Compensator). It shows how a STATCOM can be systematically incorporated into conventional power flow calculations. The main contribution of this paper is to investigate this approach for two special conditions, i.e., a load increase pattern incorporating load change (active, reactive, and both active and reactive) at all load buses simultaneously, and line contingencies under such load change. Such an investigation is relevant for determining a strategy for the optimal placement of a STATCOM to enhance voltage stability. The performance has been evaluated on many standard IEEE test systems. The results for the standard IEEE 30-bus test system are presented here.

Keywords: Load flow analysis, Newton-Raphson (N-R) power flow, Flexible AC transmission system, FACTS, Static synchronous compensator, STATCOM, voltage profile.

1006 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
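
A toy Monte-Carlo PF calculation for a planar sliding block illustrates the basic idea of propagating input variability into P(FS < 1). The geometry and the parameter statistics below are invented for illustration and are unrelated to the case study.

# Toy Monte-Carlo probability-of-failure sketch for a planar sliding block:
# FS = (c*A + W*cos(a)*tan(phi)) / (W*sin(a)); all values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

weight = 5000.0            # kN, block weight (deterministic here)
area = 50.0                # m^2, sliding surface area
alpha = np.radians(35.0)   # sliding plane dip

cohesion = rng.normal(loc=30.0, scale=8.0, size=n)           # kPa, assumed distribution
phi = np.radians(rng.normal(loc=32.0, scale=3.0, size=n))    # friction angle, assumed

resisting = cohesion * area + weight * np.cos(alpha) * np.tan(phi)
driving = weight * np.sin(alpha)
fs = resisting / driving

print(f"mean FS = {fs.mean():.2f}")
print(f"PF = P(FS < 1) = {np.mean(fs < 1.0):.3%}")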

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

1005 The Impact of the Knowledge-Sharing Factors on Improving Decision-Making at Sultan Qaboos University Libraries

Authors: Aseela Alhinaai, Suliman Abdullah, Adil Albusaidi

Abstract:

Knowledge has been considered an important asset in private and public organizations. It is utilized in the library sector to run different technical services and administrative operations. This study aims to identify the impact of the knowledge-sharing factors (technology, collaboration, management support) on improving decision-making at Sultan Qaboos University Libraries. The study used a quantitative method with a questionnaire instrument to measure the impact of technology, collaboration, and management support on the knowledge sharing that leads to improved decision-making. The study population is the Sultan Qaboos University (SQU) libraries (Main Library, Medical Library, College of Economic and Political Science Library, and Art Library). The results showed that management support, collaboration, and technology use have a positive impact on the knowledge-sharing process, and that knowledge sharing positively affects the decision-making process.

Keywords: Knowledge sharing, decision making, information technology, management support, collaboration, Sultan Qaboos University.

1004 Space Vector Pulse Width Modulation Technique Based Design and Simulation of a Three-Phase Voltage Source Converter Systems

Authors: Farhan Beg

Abstract:

A Space Vector based Pulse Width Modulation (SVPWM) control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor of the PWM rectifier, leading to low losses. MATLAB/Simulink is used as the simulation platform, and a Simulink model is presented in the paper. The results show that the proposed model demonstrates better performance and properties than the traditional SPWM method and drastically improves the dynamic performance of the closed loop. In this modulation scheme, a sine signal is the reference waveform and a triangle waveform is the carrier. When the sine signal is larger than the triangle signal, the pulse output goes high; when the triangle signal is higher than the sine signal, the pulse output goes low. The SPWM output is changed by varying the modulation index and the carrier frequency used in the system to produce more pulses. The more pulses produced, the lower the harmonic content of the output voltage and the higher the resolution.
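
The sine-triangle comparison described above can be sketched numerically as below: the gate pulse is high whenever the sinusoidal reference exceeds the triangular carrier. The fundamental frequency, carrier frequency and modulation index are placeholder values, and this sketch covers only the carrier comparison, not the full converter model.

# Numeric sketch of sine-triangle PWM gating; frequencies and modulation index
# are placeholders, not values from the paper's Simulink model.
import numpy as np
from scipy import signal

f_ref, f_carrier, m_index = 50.0, 1500.0, 0.8
t = np.linspace(0, 0.04, 40_000)                       # two fundamental cycles

reference = m_index * np.sin(2 * np.pi * f_ref * t)    # sinusoidal reference
carrier = signal.sawtooth(2 * np.pi * f_carrier * t, width=0.5)  # triangular carrier

gate = (reference > carrier).astype(float)             # PWM gating signal

print(f"average duty cycle over the window: {gate.mean():.3f}")
# Raising f_carrier yields more, narrower pulses per cycle, pushing harmonics
# to higher frequencies where they are easier to filter.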

Keywords: Power Factor, SVPWM, PWM rectifier, SPWM.

1003 Software Reliability Prediction Model Analysis

Authors: L. Mirtskhulava, M. Khunjgurua, N. Lomineishvili, K. Bakuria

Abstract:

Software reliability prediction gives a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. We focus on a software reliability model in this article, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures, but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence, which consists of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.

Keywords: Exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability.

1002 Analyzing the Market Growth in API Economy Using Time-Evolving Model

Authors: Hiroki Yoshikai, Shin’ichi Arakawa, Tetsuya Takine, Masayuki Murata

Abstract:

The API (Application Programming Interface) economy is expected to create new value by converting corporate services such as information processing and data provision into APIs and using these APIs to connect services. Understanding the dynamics of an API economy market under the strategies of its participants is crucial to fully maximizing the value of the API economy. To capture the behavior of a market in which the number of participants changes over time, we present a time-evolving market model for a platform in which API providers, who provide APIs to service providers, participate in addition to service providers and consumers. We then use the market model to clarify the role API providers play in expanding market participation and forming ecosystems. The results show that the platform with API providers increased the number of market participants by 67% and decreased the cost of developing services by 25% compared to the platform without API providers. Furthermore, during the expansion phase of the market, the profits of participants are found to be mostly the same when 70% of the revenue from consumers is distributed to service providers and API providers. It is also found that, when the market is mature, the profits of the service providers and API providers decrease significantly due to competition between them, while the profit of the platform increases.

Keywords: API Economy, ecosystem, platform, API providers.

1001 Ethnobotany and Distribution of Wild Edible Tubers in Pulau Redang and Nearby Islands of Terengganu, Malaysia

Authors: M. Nashriyah, M. Y. Nur Athiqah, H. Syahril Amin, N. Norhayati, A. W. Mohamad Azhar, M. Khairil

Abstract:

An ethnobotanical study was conducted to document local knowledge and the potential of wild edible tubers that have been reported and sighted, and to investigate and record their distribution in Pulau Redang and the nearby islands of Terengganu, Malaysia. Information was gathered from 42 villagers using a semi-structured questionnaire. These respondents were selected randomly and no appointment was made prior to the visits. For distribution, the locations of wild edible tubers were recorded using the Global Positioning System (GPS). The wild edible tubers recorded were ubi gadung, ubi toyo, ubi kasu, ubi jaga, ubi seratus and ubi kertas. Dioscorea, commonly known as yam, is reported to be one of the major food sources worldwide. The majority of villagers used Dioscorea hispida Dennst., or ubi gadung, in many ways in their lives, such as for food, medicinal purposes and fish poison. The villagers identified ubi gadung by its morphological characteristics, which include leaf shape, stem and the color of the tuber's flesh.

Keywords: Ethnobotany, distribution, wild edible tubers, Dioscorea hispida Dennst., ubi gadung

1000 A Review on Medical Image Registration Techniques

Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry

Abstract:

This paper discusses current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours. A methodological analysis and synthesis of quality literature was carried out, providing a platform for developing a good foundation for research in this field, which is crucial to understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enhancing fast and accurate curative treatment of patients. The literature review aims to provide a solid theoretical foundation for research endeavours in image registration techniques; developing such a foundation is possible through a methodological analysis and synthesis of existing contributions. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.

Keywords: Image registration techniques, medical images, neural networks, optimisation, transformation.

999 Sorting Primitives and Genome Rearrangement in Bioinformatics: A Unified Perspective

Authors: Swapnoneel Roy, Minhazur Rahman, Ashok Kumar Thakur

Abstract:

Bioinformatics and computational biology involve the use of techniques from applied mathematics, informatics, statistics, computer science, artificial intelligence, chemistry, and biochemistry to solve biological problems, usually at the molecular level. Research in computational biology often overlaps with systems biology. Major research efforts in the field include sequence alignment, gene finding, genome assembly, protein structure alignment, protein structure prediction, prediction of gene expression and protein-protein interactions, and the modeling of evolution. Various global rearrangements of permutations, such as reversals and transpositions, have recently become of interest because of their applications in computational molecular biology. A reversal is an operation that reverses the order of a substring of a permutation. A transposition is an operation that swaps two adjacent substrings of a permutation. The problem of determining the smallest number of reversals required to transform a given permutation into the identity permutation is called sorting by reversals. Similar problems can be defined for transpositions and other global rearrangements. In this work, we perform a study of some genome rearrangement primitives. We show how a genome is modelled by a permutation, introduce some of the existing primitives and the lower and upper bounds on them, and then provide a comparison of the introduced primitives.
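
To make the reversal primitive concrete, the sketch below places element i in position i with one reversal at a time, sorting any permutation in at most n-1 reversals. This is only an upper-bound construction for illustration; it does not compute the minimum reversal distance, which is the harder problem studied in the literature.

# Naive illustration of the reversal primitive (upper-bound construction, not
# the minimum reversal distance).
def reverse_segment(perm, i, j):
    """Reverse the substring perm[i..j] (inclusive)."""
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

def simple_reversal_sort(perm):
    perm = list(perm)
    reversals = []
    for i in range(len(perm)):
        j = perm.index(i + 1)            # where the value (i+1) currently sits
        if j != i:
            perm = reverse_segment(perm, i, j)
            reversals.append((i, j))
    return perm, reversals

sorted_perm, ops = simple_reversal_sort([3, 1, 6, 5, 2, 4])
print(sorted_perm)                        # [1, 2, 3, 4, 5, 6]
print(f"{len(ops)} reversals used: {ops}")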

Keywords: Sorting Primitives, Genome Rearrangements, Transpositions, Block Interchanges, Strip Exchanges.

998 Super Resolution Blind Reconstruction of Low Resolution Images using Wavelets based Fusion

Authors: Liyakathunisa, V. K. Ananthashayana

Abstract:

Crucial information barely visible to the human eye is often embedded in a series of low-resolution images taken of the same scene. Super resolution reconstruction is the process of combining several low-resolution images into a single higher-resolution image. The ideal algorithm should be fast and should add sharpness and detail, both at edges and in regions, without adding artifacts. In this paper, we propose a super resolution blind reconstruction technique for linearly degraded images. In the proposed technique, the algorithm is divided into three parts: image registration, wavelet-based fusion, and image restoration. Three low-resolution images are considered, which may be sub-pixel shifted, rotated, blurred or noisy; the sub-pixel shifted images are registered using an affine transformation model, a wavelet-based fusion is performed, and the noise is removed using soft thresholding. The proposed technique reduces blocking artifacts, smooths the edges, and is also able to restore high-frequency details in an image. Our technique is efficient and computationally fast, with a clear prospect of real-time implementation.
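
The fusion and denoising steps can be sketched as below: a 2-D DWT of already-registered frames, coefficient fusion, soft thresholding of the detail bands, and an inverse DWT. The max-magnitude fusion rule, the wavelet, and the universal threshold used here are assumptions for illustration; the registration step and the paper's exact fusion rule are not reproduced.

# Sketch of wavelet fusion + soft-threshold denoising of registered frames
# (fusion rule and threshold are illustrative, not the authors' exact choices).
import numpy as np
import pywt

def fuse_frames(frames, wavelet="db2"):
    decomps = [pywt.dwt2(f, wavelet) for f in frames]

    def pick_max(arrays):
        stacked = np.stack(arrays)
        idx = np.abs(stacked).argmax(axis=0)
        return np.take_along_axis(stacked, idx[None, ...], axis=0)[0]

    approx = np.mean([a for a, _ in decomps], axis=0)          # average approximation bands
    details = tuple(pick_max([d[k] for _, d in decomps])       # max-magnitude detail bands
                    for k in range(3))
    sigma = np.median(np.abs(details[0])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(frames[0].size))
    details = tuple(pywt.threshold(d, thr, mode="soft") for d in details)
    return pywt.idwt2((approx, details), wavelet)

frames = [np.random.rand(64, 64) for _ in range(3)]            # stand-ins for registered LR frames
print(fuse_frames(frames).shape)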

Keywords: Affine Transforms, Denoising, DWT, Fusion, Image registration.

997 Emerging Wireless Standards - WiFi, ZigBee and WiMAX

Authors: Bhavneet Sidhu, Hardeep Singh, Amit Chhabra

Abstract:

The world of wireless telecommunications is rapidly evolving. Technologies under research and development promise to deliver more services to more users in less time. This paper presents the emerging technologies helping wireless systems grow from where we are today into our visions of the future. This paper will cover the applications and characteristics of emerging wireless technologies: Wireless Local Area Networks (WiFi-802.11n), Wireless Personal Area Networks (ZigBee) and Wireless Metropolitan Area Networks (WiMAX). The purpose of this paper is to explain the impending 802.11n standard and how it will enable WLANs to support emerging media-rich applications. The paper will also detail how 802.11n compares with existing WLAN standards and offer strategies for users considering higher-bandwidth alternatives. The emerging IEEE 802.15.4 (ZigBee) standard aims to provide low data rate wireless communications with high-precision ranging and localization, by employing UWB technologies for a low-power and low cost solution. WiMAX (Worldwide Interoperability for Microwave Access) is a standard for wireless data transmission covering a range similar to cellular phone towers. With high performance in both distance and throughput, WiMAX technology could be a boon to current Internet providers seeking to become the leader of next generation wireless Internet access. This paper also explores how these emerging technologies differ from one another.

Keywords: MIMO technology, WiFi, WiMAX, ZigBee.

996 Automatic Motion Trajectory Analysis for Dual Human Interaction Using Video Sequences

Authors: Yuan-Hsiang Chang, Pin-Chi Lin, Li-Der Jeng

Abstract:

Advances in image and video processing techniques have enabled the development of intelligent video surveillance systems. This study aimed to automatically detect moving human objects and to analyze events of dual human interaction in a surveillance scene. Our system was developed in four major steps: image preprocessing, human object detection, human object tracking, and motion trajectory analysis. Adaptive background subtraction and image processing techniques were used to detect and track moving human objects. To solve the occlusion problem during interaction, the Kalman filter was used to retain a complete trajectory for each human object. Finally, the motion trajectory analysis was developed to distinguish between interaction and non-interaction events based on derivatives of the trajectories related to the speed of the moving objects. Using a database of 60 video sequences, our system achieved a classification accuracy of 80% for interaction events and 95% for non-interaction events. In summary, we have explored the idea of a system for the automatic classification of interaction and non-interaction events using surveillance cameras. Ultimately, this system could be incorporated in an intelligent surveillance system for the detection and/or classification of abnormal or criminal events (e.g., theft, snatching, fighting, etc.).
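
The occlusion-bridging role of the Kalman filter can be illustrated with a constant-velocity filter on one object's image-plane centroid: prediction continues every frame, and the update is applied only when a detection is available. The noise covariances and the measurement sequence below are placeholders, not the study's tuned values.

# Constant-velocity Kalman filter for one object's centroid; prediction runs
# every frame, the update only when the detector sees the object (placeholders).
import numpy as np

dt = 1.0                                    # one frame
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 4.0 * np.eye(2)                         # measurement noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])          # initial state
P = np.eye(4)

measurements = [np.array([1.1, 0.4]), None, np.array([3.2, 1.6])]  # None = occluded frame
for z in measurements:
    x, P = F @ x, F @ P @ F.T + Q           # predict
    if z is not None:                        # update only with a detection
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    print(np.round(x[:2], 2))                # estimated centroid each frame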

Keywords: Motion detection, motion tracking, trajectory analysis, video surveillance.

995 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this is inefficient in handling robots with various geometric and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space comes from the degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and the interactions between them are used. The second way uses the clearance function of simulation software, where the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered within the scope of this study. The 4-D configuration space of the double-turret system is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the 4-D configuration space obtained, the 4-D path planning problem is realized as 2-D + 2-D and a sample path planning task is carried out using the A* algorithm. The accuracy of the configuration space is then proved using the obtained paths on the simulation model of the double-turret system.
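
A compact grid A* of the kind used for the 2-D + 2-D planning step is sketched below; the occupancy grid is a toy stand-in for a configuration-space slice (1 = colliding configuration), not data from the turret model.

# Compact grid A* sketch; the grid is a toy stand-in for a C-space slice.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    g = {start: 0}
    parent = {}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[node] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    parent[(nr, nc)] = node
                    h = abs(nr - goal[0]) + abs(nc - goal[1])   # Manhattan heuristic
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],      # 1 = colliding configuration
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))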

Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.

994 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

Deep Reinforcement Learning (DRL) algorithms can scale to previously intractable problems. The automation of profit generation in the stock market is possible using DRL by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. This work presents a DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and achieve a Sharpe ratio of 2.68 on the test dataset. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of DRL over other types of machine learning in financial markets and proves its credibility and advantages for strategic decision-making.
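
As a worked note on the evaluation metric (the 2.68 figure above), the snippet below computes an annualized Sharpe ratio from a daily return series. The return series and risk-free rate are placeholders; the TD3 agent and trading environment themselves are not reproduced.

# Worked example of the annualized Sharpe ratio used as the evaluation metric;
# the daily returns and risk-free rate are placeholders.
import numpy as np

rng = np.random.default_rng(7)
daily_returns = rng.normal(loc=0.0008, scale=0.01, size=252)   # one trading year, hypothetical
risk_free_daily = 0.02 / 252                                    # assumed annual risk-free rate

excess = daily_returns - risk_free_daily
sharpe_annualized = np.sqrt(252) * excess.mean() / excess.std(ddof=1)
print(f"annualized Sharpe ratio: {sharpe_annualized:.2f}")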

Keywords: Autonomous agent, deep reinforcement learning, MDP, sentiment analysis, stock market, technical indicators, twin delayed deep deterministic policy gradient.

993 Probabilistic Method of Wind Generation Placement for Congestion Management

Authors: S. Z. Moussavi, A. Badri, F. Rastegar Kashkooli

Abstract:

Wind farms (WFs) with high levels of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for investors. There is also a possibility of congestion relief from newly installed WFs, which should be taken into account by the ISO when proposing locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce the burden on congested lines. Since wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulations (MCS). In order to reduce computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the optimal WF placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, an N-1 contingency analysis is employed to find the optimal size of the WF by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.

Keywords: Probabilistic optimal power flow, Wind power, Point estimate methods, Congestion management

992 A Fuzzy Approach to Liver Tumor Segmentation with Zernike Moments

Authors: Abder-Rahman Ali, Antoine Vacavant, Manuel Grand-Brochier, Adélaïde Albouy-Kissi, Jean-Yves Boire

Abstract:

In this paper, we present a new segmentation approach for liver lesions in regions of interest within MRI (Magnetic Resonance Imaging). This approach, based on a two-cluster Fuzzy C-Means methodology, considers the parameter variable compactness to handle uncertainty. Fine boundaries are detected by a local recursive merging of ambiguous pixels with a sequential forward floating selection using Zernike moments. The method has been tested on both synthetic and real images. When applied to synthetic images, the proposed approach provides good performance: the segmentations obtained are accurate, their shape is consistent with the ground truth, and the extracted information is reliable. The results obtained on MR images confirm these observations. Our approach allows, even for difficult cases of MR images, a segmentation with good performance in terms of accuracy and shape, which implies that the geometry of the tumor is preserved for further clinical activities (such as automatic extraction of pharmacokinetic properties, lesion characterization, etc.).
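
The core two-cluster fuzzy C-means step can be sketched on pixel intensities as below. The variable compactness parameter, the Zernike-moment boundary refinement and the SFFS step described above are not included; the intensity distributions are synthetic stand-ins.

# Minimal two-cluster fuzzy C-means on pixel intensities only (the paper's
# compactness parameter, Zernike moments and SFFS refinement are not shown).
import numpy as np

def fuzzy_cmeans(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(n_clusters), size=len(values))    # initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ values) / um.sum(axis=0)              # weighted cluster centers
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)                # membership update
    return u, centers

pixels = np.concatenate([np.random.normal(60, 10, 500),         # stand-in: background intensities
                         np.random.normal(140, 15, 200)])       # stand-in: lesion intensities
memberships, centers = fuzzy_cmeans(pixels)
labels = memberships.argmax(axis=1)
print("cluster centers:", np.round(centers, 1))
print("pixels assigned to each cluster:", np.bincount(labels))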

Keywords: Defuzzification, floating search, fuzzy clustering, Zernike moments.

991 From F2F to Online Sessions: Changing Pattern of Instructions in Open and Distance Learning in India

Authors: Subramaniam Chandran

Abstract:

This paper presents an assessment study conducted among distance learners in India. Open and distance learning systems have traveled a long way since their inception, and their journey has witnessed the evolution and adoption of different generations of technology. This study focuses on distance learners in India. The sample for this study was drawn from the mass enrollment in Tamil Nadu, a southern state of India. Learners were chosen from dual-mode universities, private universities, Tamil Nadu Open University and IGNOU. The main focus of the study is to examine the coverage and appropriation of student support services and learning aids. It explores two aspects: the facilities available, and the awareness and use of such services. These include self-learning materials, face-to-face counseling, multimedia learning materials, websites, e-learning, radio and television services, etc. While exploring the students' perspective on these learning aspects, it is important to understand the perspectives of the teachers. Two different interests are visible among the teachers: the majority of teachers support face-to-face counseling, while the young teachers favour online learning and multimedia support in teaching. Though awareness is somewhat high, actual participation in online learning is very low. This is due to inadequate infrastructure as well as the traditional attitudes of the teachers. Still, face-to-face sessions remain more popular than online sessions.

Keywords: Face-to-face session, online session, distance learning, multimedia
