Search results for: graphical decision models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3949


2749 Proposal of Design Method in the Semi-Acausal System Model

Authors: Junji Kaneko, Shigeyuki Haruyama, Ken Kaminishi, Tadayuki Kyoutani, Siti Ruhana Omar, Oke Oktavianty

Abstract:

This study addresses a method for defining value and function in the manufacturing sector. In the ongoing discussion about the present state of modeling methods, the definition of 1D-CAE has so far remained ambiguous and not conceptual. Across all physical fields, these methods are defined through the formulation of differential algebraic equations that apply only time derivation and simulation. We propose a semi-acausal modeling concept and a differential algebraic equation method as a new modeling approach, whose efficiency has been verified by comparing the numerical analysis results of the semi-acausal modeling calculation with FEM theory calculations.

Keywords: System Model, Physical Models, Empirical Models, Conservation Law, Differential Algebraic Equation, Object-Oriented.

2748 Coloured Reconfigurable Nets for Code Mobility Modeling

Authors: Kahloul Laid, Chaoui Allaoua

Abstract:

Code mobility technologies attract more and more developers and consumers. Numerous domains are concerned, many platforms have been developed, and interesting applications have been realized. However, developing good software products requires modeling, analysis, and proving steps, and the choice of models and modeling languages is critical in these steps. Formal tools are powerful in the analysis and proving steps; however, the limitations of classical modeling languages in modeling mobility call for new models. The objective of this paper is to provide a specific formalism, "Coloured Reconfigurable Nets", and to show how it is adequate for modeling different kinds of code mobility.

Keywords: Code mobility, modelling mobility, labelled reconfigurable nets, Coloured reconfigurable nets, mobile code design paradigms.

2747 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach on a subset of the LivDet 2017 database to compare generalization power. Note that the same subset of LivDet is used for training and testing across all models, so that generalization performance on unseen data can be compared across all of them. The best CNN (AlexNet) with the appropriate loss function and optimizer achieves more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with their parameters and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. A practical anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to the final model.
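To make the loss-function and optimizer comparison concrete, the sketch below (an illustration, not the authors' code) sweeps a small Keras CNN over several of the loss/optimizer combinations named in the abstract; the stand-in architecture, image size, and data arguments are assumptions, and Center Loss is omitted because it has no built-in Keras implementation.

```python
# Hypothetical sketch: sweeping loss functions and optimizers for a small
# fingerprint anti-spoofing CNN (live vs. spoof). Data loading is assumed.
import tensorflow as tf

def build_cnn(input_shape=(96, 96, 1)):
    # Minimal stand-in CNN; the paper uses AlexNet/VGGNet/ResNet instead.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),  # live / spoof
    ])

losses = {
    "cross_entropy": tf.keras.losses.CategoricalCrossentropy(),
    "hinge": tf.keras.losses.CategoricalHinge(),
    "cosine": tf.keras.losses.CosineSimilarity(),
}
optimizers = {
    "adam": tf.keras.optimizers.Adam,
    "sgd": tf.keras.optimizers.SGD,
    "rmsprop": tf.keras.optimizers.RMSprop,
    "adadelta": tf.keras.optimizers.Adadelta,
    "adagrad": tf.keras.optimizers.Adagrad,
    "nadam": tf.keras.optimizers.Nadam,
}

def sweep(x_train, y_train, x_val, y_val):
    """Train one model per (loss, optimizer) pair and record validation accuracy."""
    results = {}
    for lname, loss in losses.items():
        for oname, opt_cls in optimizers.items():
            model = build_cnn()
            model.compile(optimizer=opt_cls(), loss=loss, metrics=["accuracy"])
            hist = model.fit(x_train, y_train, epochs=5, batch_size=32,
                             validation_data=(x_val, y_val), verbose=0)
            results[(lname, oname)] = hist.history["val_accuracy"][-1]
    return results
```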

Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.

2746 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare

Abstract:

Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of the LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it is shown that the proposed concatenation and decoding scheme can considerably improve the error-floor performance with minimal rate loss.
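As a concrete illustration of one of the techniques mentioned, bit-flipping, the sketch below implements a generic Gallager-style bit-flipping decoder for an arbitrary binary parity-check matrix; it is a simplified stand-in for didactic purposes, not the paper's concatenated BCJR/SPA decoder, and the small Hamming-code example matrix is an assumption.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=50):
    """Gallager-style bit-flipping decoding for a binary parity-check matrix H
    and a hard-decision received word r (a simplified sketch)."""
    x = r.copy() % 2
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            return x, True                      # all parity checks satisfied
        # count failed checks touching each bit and flip the worst offenders
        failed_counts = H.T.dot(syndrome)
        worst = failed_counts.max()
        x[failed_counts == worst] ^= 1
    return x, False

# Toy example: (7,4) Hamming code parity-check matrix (assumed for the demo).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)   # all-zero codeword ...
received[2] ^= 1                    # ... hit by a single bit error
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)
```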

Keywords: Concatenated coding, low-density parity-check codes, array code, error floors.

2745 Stability Analysis of Two-delay Differential Equation for Parkinson's Disease Models with Positive Feedback

Authors: M. A. Sohaly, M. A. Elfouly

Abstract:

Parkinson's disease (PD) is a heterogeneous movement disorder that often appears in the elderly. PD is induced by a loss of dopamine secretion, and some drugs increase the secretion of dopamine. In this paper, we study the stability of PD models formulated as nonlinear delay differential equations. After a period of taking drugs, the drugs act as positive feedback and increase the patients' tremors; the differential equation then has positive coefficients, and the system is unstable under these conditions. We present a set of suggested modifications to make the system more compatible with the biodynamic system. Although a set of numerical examples is given, this paper is concerned with the mathematical analysis, and no clinical data have been used.
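The following sketch illustrates the kind of model discussed: a scalar two-delay differential equation x'(t) = a·x(t-τ1) + b·x(t-τ2), integrated with a simple fixed-step Euler scheme. The coefficients, delays, and constant history are illustrative assumptions; positive coefficients produce the unbounded (unstable) growth described in the abstract, while negative coefficients with small delays decay.

```python
import numpy as np

def simulate_two_delay(a, b, tau1, tau2, t_end=50.0, dt=0.01, history=1.0):
    """Euler integration of x'(t) = a*x(t - tau1) + b*x(t - tau2)
    with a constant history x(t) = history for t <= 0 (a minimal sketch)."""
    n = int(t_end / dt)
    d1, d2 = int(tau1 / dt), int(tau2 / dt)
    x = np.empty(n + 1)
    x[0] = history

    def delayed(i, d):
        # value of x at time (i - d)*dt, falling back to the constant history
        return history if i - d < 0 else x[i - d]

    for i in range(n):
        x[i + 1] = x[i] + dt * (a * delayed(i, d1) + b * delayed(i, d2))
    return np.linspace(0.0, t_end, n + 1), x

# Positive coefficients (positive feedback): the solution grows without bound.
t, x_unstable = simulate_two_delay(a=0.3, b=0.2, tau1=1.0, tau2=2.0)
# Negative coefficients with small delays: the solution decays toward zero.
t, x_stable = simulate_two_delay(a=-0.3, b=-0.2, tau1=0.5, tau2=1.0)
print(abs(x_unstable[-1]), abs(x_stable[-1]))
```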

Keywords: Parkinson's disease, stability, simulation, two delay differential equation.

2744 Non-parametric Linear Technique for Measuring the Efficiency of Winter Road Maintenance in the Arctic Area

Authors: Mahshid Hatamzad, Geanette Polanco

Abstract:

Improving the performance of Winter Road Maintenance (WRM) can increase traffic safety and reduce costs as well as environmental impacts. This study evaluates the efficiency of a WRM technique, salting, in the Arctic area using Data Envelopment Analysis (DEA), a non-parametric linear method for measuring the efficiency of decision-making units (DMUs) that handles multiple inputs and multiple outputs simultaneously when their associated weights are not known. Here, roads are considered the DMUs whose efficiency must be determined. The three input variables are traffic flow, road area, and WRM cost. The two output variables are the level of safety on the roads and the environmental impact of WRM; the latter is also treated as an uncontrollable factor in a second scenario. The results rank the DMUs from the most efficient WRM to the least efficient one. This information provides decision makers with technical support and suggested improvements for inefficient WRM, in order to achieve cost-effective WRM and safe road transportation during wintertime in the Arctic areas.
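For readers unfamiliar with DEA, the sketch below solves the classic input-oriented CCR multiplier model for each DMU with scipy; the tiny input/output matrices are placeholder data, not the study's road dataset, and the variable names are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiencies(X, Y):
    """Input-oriented CCR (multiplier form) efficiency for each DMU.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). A minimal sketch."""
    n, m = X.shape
    _, s = Y.shape
    scores = []
    for o in range(n):
        # decision variables: [u (s output weights), v (m input weights)]
        c = np.concatenate([-Y[o], np.zeros(m)])       # maximize u.y_o
        A_eq = [np.concatenate([np.zeros(s), X[o]])]   # v.x_o = 1
        b_eq = [1.0]
        A_ub = np.hstack([Y, -X])                      # u.y_j - v.x_j <= 0 for all j
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (s + m), method="highs")
        scores.append(-res.fun)
    return np.array(scores)

# Toy data: 4 roads (DMUs), inputs = [traffic flow, road area, WRM cost],
# outputs = [safety level, environmental score]; illustrative numbers only.
X = np.array([[100, 5, 20], [80, 4, 25], [120, 6, 18], [90, 5, 30]], dtype=float)
Y = np.array([[0.9, 0.7], [0.8, 0.8], [0.95, 0.6], [0.7, 0.9]], dtype=float)
print(ccr_efficiencies(X, Y))   # a score of 1.0 indicates an efficient DMU
```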

Keywords: DEA, environmental impacts, risk and safety, WRM.

2743 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration are Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when the sample size is small. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides more conservative results. That is, the Small Data Method provided slightly lower than expected bridge condition ratings compared with the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, it is desirable to use more data for better results.
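As a brief illustration of the Markov-chain approach (not the paper's calibrated matrices), the sketch below propagates a bridge condition-rating distribution forward in time with an assumed yearly transition probability matrix and reports the expected rating at five-year intervals.

```python
import numpy as np

# Hypothetical 4-state condition ratings (4 = good ... 1 = poor) and an
# assumed yearly transition matrix; real matrices are estimated from data.
ratings = np.array([4, 3, 2, 1])
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])   # each row sums to 1

state = np.array([1.0, 0.0, 0.0, 0.0])      # start in the best condition
for year in range(0, 31, 5):
    expected_rating = state @ ratings
    print(f"year {year:2d}: expected condition rating = {expected_rating:.2f}")
    for _ in range(5):
        state = state @ P                   # advance the distribution 5 years
```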

Keywords: Concrete bridges, deterioration, Markov chains, probability matrix.

2742 Application of Build-up and Wash-off Models for an East-Australian Catchment

Authors: Iqbal Hossain, Monzur Alam Imteaz, Mohammed Iqbal Hossain

Abstract:

Estimation of stormwater pollutants is a prerequisite for the protection and improvement of the aquatic environment and for appropriate management options. The usual practice for stormwater quality prediction is water quality modeling. However, the accuracy of the models' predictions depends on the proper estimation of model parameters. This paper presents the estimation of model parameters for a catchment water quality model developed for the continuous simulation of stormwater pollutants from a catchment to the catchment outlet. The model is capable of simulating the accumulation and transportation of the stormwater pollutants suspended solids (SS), total nitrogen (TN) and total phosphorus (TP) from a particular catchment. Rainfall and water quality data were collected for the Hotham Creek Catchment (HTCC), Gold Coast, Australia. Runoff calculations from the developed model were compared with the calculated discharges from the widely used hydrological models WBNM and DRAINS. Based on the measured water quality data, the model's water quality parameters were calibrated for the above-mentioned catchment. The calibrated parameters are expected to be helpful for best management practices (BMPs) in the region. Sensitivity analyses of the estimated parameters were performed to assess the impacts of the model parameters on overall model estimations of runoff water quality.
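As an illustrative sketch of the general build-up/wash-off formulation that such catchment models use (the exponential forms, coefficients, and synthetic rainfall below are common textbook-style assumptions, not the paper's calibrated parameters), see:

```python
import numpy as np

def simulate_buildup_washoff(rain, dt_h=1.0, b_max=50.0, k_b=0.08, k_w=0.18):
    """Exponential pollutant build-up during dry steps and first-order wash-off
    during wet steps (a generic sketch; units and coefficients are assumed).
    rain: rainfall intensity per time step (mm/h)."""
    load = 0.0          # pollutant mass currently stored on the catchment surface
    washed = []
    for r in rain:
        if r <= 0.0:
            # dry step: build-up toward the maximum surface load b_max
            load += (b_max - load) * (1.0 - np.exp(-k_b * dt_h))
            washed.append(0.0)
        else:
            # wet step: wash-off proportional to available load and rain intensity
            w = load * (1.0 - np.exp(-k_w * r * dt_h))
            load -= w
            washed.append(w)
    return np.array(washed)

# 48 hourly steps: two dry days followed by a 6-hour storm (illustrative data).
rain = np.concatenate([np.zeros(42), np.array([2.0, 5.0, 8.0, 5.0, 2.0, 1.0])])
print(simulate_buildup_washoff(rain).round(2))
```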

Keywords: Calibration, Model Parameters, Suspended Solids, Total Nitrogen, Total Phosphorus.

2741 An Innovative Approach to Improve Skills of Students in Qatar University Spending in Virtual Class through Learning Management System

Authors: Mohammad Shahid Jamil, Mohamed Chabi

Abstract:

In this study, students' learning and satisfaction were investigated in one of the courses offered in the Qatar University Foundation Program. An innovative teaching methodology was applied that emphasizes enhancing students' thinking, decision-making, and problem-solving skills. Some interesting results were found which could be used for further improvement of the teaching methodology. In Fall 2012, the Foundation Program Math department at Qatar University started implementing new ways of teaching Math by introducing MyMathLab (MML) as an innovative interactive tool, in addition to the use of Blackboard to support standard teaching, such as the Discussion Board in the virtual class, to engage students outside the classroom and to enhance the independent, active learning that promotes students' critical thinking, decision-making, and problem-solving skills throughout the learning process.

Keywords: Blackboard, MyMathLab, study plan, discussion board, critical thinking, active and independent learning, problem solving.

2740 Further Investigation of α+12C and α+16O Elastic Scattering

Authors: Sh. Hamada

Abstract:

The current work aims to study the rainbow-like structure observed in the elastic scattering of alpha particles on both 12C and 16O nuclei. We reanalyzed the experimental elastic scattering angular distribution data for the α+12C and α+16O nuclear systems at different energies using both the optical model and double folding potentials of different interaction models: CDM3Y1, DDM3Y1, CDM3Y6 and BDM3Y1. The potential created by the BDM3Y1 interaction model has the shallowest depth, which reflects the necessity of using a higher renormalization factor (Nr). Both the optical model and the double folding potentials of the different interaction models fairly reproduce the experimental data.

Keywords: Nuclear rainbow, elastic scattering, optical model, double folding, density distribution.

2739 Effect of Atmospheric Turbulence on Acquisition Time of Ground to Deep Space Optical Communication System

Authors: Hemani Kaushal, V. K. Jain, Subrat Kar

Abstract:

The performance of ground-to-deep-space optical communication systems is degraded by distortion of the beam as it propagates through the turbulent atmosphere. Turbulence causes fluctuations in the intensity of the received signal, which ultimately affects the acquisition time required to acquire and locate the spaceborne target using a narrow laser beam. In this paper, the performance of a free-space optical (FSO) communication system in atmospheric turbulence is analyzed in terms of acquisition time for coherent and non-coherent modulation schemes. Numerical results presented in graphical and tabular forms show that the acquisition time increases with the turbulence level for both classes of schemes. BPSK has the lowest acquisition time among all the schemes. Among the non-coherent schemes, M-PPM performs better than the others; as M increases, the acquisition time becomes lower, but at the cost of increased system complexity.

Keywords: Atmospheric Turbulence, Acquisition Time, Binary Phase Shift Keying (BPSK), Free-Space Optical (FSO) Communication System, M-ary Pulse Position Modulation (M-PPM), Coherent/Non-coherent Modulation Schemes.

2738 Towards Development of Solution for Business Process-Oriented Data Analysis

Authors: M. Klimavicius

Abstract:

This paper proposes a modeling methodology for the development of a data analysis solution. The author introduces an approach to address data warehousing issues at the enterprise level. The methodology covers the requirements elicitation and analysis stage as well as the initial design of the data warehouse. The paper reviews an extended business process model which satisfies the needs of data warehouse development. The author considers the use of business process models necessary, as they reflect both enterprise information systems and business functions, which are important for data analysis. The described approach divides development into three steps with different levels of model elaboration and makes it possible to gather requirements and present them to business users in an easy manner.

Keywords: Data warehouse, data analysis, business process management.

2737 A Robust Al-Hawalees Gaming Automation using Minimax and BPNN Decision

Authors: Ahmad Sharieh, R Bremananth

Abstract:

Artificial-intelligence-based gaming is an interesting topic in state-of-the-art technology. This paper presents an automation of a traditional Omani game called Al-Hawalees. Its related issues are resolved and implemented using an artificial intelligence approach. The minimax procedure is incorporated to generate the diverse moves of the online game. As the number of moves increases, the time complexity increases proportionally. In order to tackle the time and space complexities, we employ a back-propagation neural network (BPNN), trained offline, to decide the resources required to fulfill the automation of the game. We utilize Levenberg-Marquardt training in order to obtain a rapid response during gameplay. A set of optimal moves is determined by the online back-propagation training combined with alpha-beta pruning. The results and analyses reveal that the proposed scheme can easily be incorporated in the online scenario with one player against the system.
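A generic minimax search with alpha-beta pruning, the classic procedure referred to in the abstract, is sketched below. The Al-Hawalees-specific move generation and evaluation are left as assumed callback functions; in the paper's setting, a trained BPNN could play the role of the evaluation function.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate, is_terminal):
    """Minimax search with alpha-beta pruning (a generic sketch).
    The four callbacks encapsulate the game rules, which are assumed here
    rather than implemented."""
    if depth == 0 or is_terminal(state):
        return evaluate(state), None

    best_move = None
    if maximizing:
        value = -math.inf
        for move in legal_moves(state):
            child_value, _ = alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, False,
                                       legal_moves, apply_move, evaluate, is_terminal)
            if child_value > value:
                value, best_move = child_value, move
            alpha = max(alpha, value)
            if alpha >= beta:
                break                      # beta cut-off: opponent avoids this branch
        return value, best_move
    else:
        value = math.inf
        for move in legal_moves(state):
            child_value, _ = alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, True,
                                       legal_moves, apply_move, evaluate, is_terminal)
            if child_value < value:
                value, best_move = child_value, move
            beta = min(beta, value)
            if alpha >= beta:
                break                      # alpha cut-off
        return value, best_move
```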

Keywords: Artificial neural network, back propagation gaming, Levenberg-Marquardt, minimax procedure.

2736 Prediction on Housing Price Based on Deep Learning

Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang

Abstract:

In order to study the impact of various factors on housing prices, we propose building different prediction models based on deep learning over existing real estate data, in order to more accurately predict the housing price or its future trend. Considering that the factors affecting the housing price vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate. We built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison with these models. The second category is time series models. Based on deep learning, we proposed an LSTM-1 model purely with regard to the time series, and then implemented and compared the LSTM model and the Auto-Regressive and Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing has been conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best model was identified, which is of great significance to the evaluation and prediction of housing prices in the real estate industry.
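To illustrate the time-series branch of the study, the sketch below builds sliding windows over a univariate price series and fits a small Keras LSTM; the window length, layer sizes, and the synthetic series are assumptions and do not reproduce the paper's Beijing dataset or final architecture.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D price series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# Synthetic monthly price index standing in for the second-hand housing data.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, 240)) + 100.0
series = (series - series.mean()) / series.std()          # normalize

X, y = make_windows(series)
split = int(0.8 * len(X))                                  # chronological split

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, batch_size=16, verbose=0)
print("test MSE:", model.evaluate(X[split:], y[split:], verbose=0))
```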

Keywords: Deep learning, convolutional neural network, LSTM, housing prediction.

2735 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
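A minimal sketch of the Bayesian update at the heart of such a system is given below: each camera detection updates a posterior over the candidate targets. In this toy version the likelihoods and the 0.9 decision threshold are assumed placeholder values rather than quantities derived from the paper's PeMS-based road graph.

```python
import numpy as np

def update_posterior(prior, likelihoods):
    """One Bayesian update step: posterior = normalize(prior * likelihood)."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Three hypothetical high-value targets with a uniform prior.
targets = ["target_A", "target_B", "target_C"]
posterior = np.full(len(targets), 1.0 / len(targets))

# Assumed likelihoods P(detection at observed node | intended target) for a
# sequence of detections; in the real system these come from the travel model.
detection_likelihoods = [
    np.array([0.5, 0.3, 0.2]),
    np.array([0.6, 0.3, 0.1]),
    np.array([0.7, 0.2, 0.1]),
]

for step, lik in enumerate(detection_likelihoods, 1):
    posterior = update_posterior(posterior, lik)
    print(f"after detection {step}:", dict(zip(targets, posterior.round(3))))
    if posterior.max() > 0.9:       # assumed confidence threshold for reporting
        print("report", targets[int(posterior.argmax())], "to the operator")
        break
```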

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

2734 Seismic Performance of Slopes Subjected to Earthquake Mainshock Aftershock Sequences

Authors: Alisha Khanal, Gokhan Saygili

Abstract:

It is commonly observed that aftershocks follow a mainshock. Aftershocks continue over a period of time with decreasing frequency, and typically there is not sufficient time for repair and retrofit between a mainshock-aftershock sequence. Usually, aftershocks are smaller in magnitude; however, aftershock ground motion characteristics such as intensity and duration can be greater than those of the mainshock due to changes in the earthquake mechanism and location with respect to the site. The seismic performance of slopes is typically evaluated based on the sliding displacement predicted to occur along a critical sliding surface. Various empirical models are available that predict sliding displacement as a function of seismic loading parameters, ground motion parameters, and site parameters, but these models do not include aftershocks. The seismic risk associated with post-mainshock slopes ('damaged slopes') subjected to aftershocks is significant. This paper extends the empirical sliding displacement models for flexible slopes to earthquake mainshock-aftershock sequences (a multi-hazard approach). A dataset was developed using 144 pairs of as-recorded mainshock-aftershock sequences from the Pacific Earthquake Engineering Research Center (PEER) database. The results reveal that the combination of mainshock and aftershock increases the seismic demand on slopes relative to the mainshock alone; thus, seismic risks are underestimated if aftershocks are neglected.
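For context, sliding displacement of the kind predicted by such empirical models is classically computed with a rigid-block (Newmark) integration of the portion of the ground acceleration exceeding the slope's yield acceleration. The one-directional sketch below, driven by synthetic mainshock and aftershock pulses, is only an illustration of that concept and not the paper's regression model or its PEER records.

```python
import numpy as np

def newmark_displacement(acc, dt, a_yield):
    """One-directional rigid-block Newmark sliding displacement (a sketch).
    acc: ground acceleration history (m/s^2), a_yield: yield acceleration (m/s^2)."""
    v = 0.0      # relative sliding velocity (m/s)
    d = 0.0      # cumulative sliding displacement (m)
    for a in acc:
        if v > 0.0 or a > a_yield:
            v += (a - a_yield) * dt       # excess acceleration drives sliding
            v = max(v, 0.0)               # sliding stops when velocity reaches zero
            d += v * dt
    return d

# Synthetic mainshock followed by an aftershock pulse (illustrative only).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
mainshock = 3.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.3 * t)
aftershock = (2.0 * np.sin(2 * np.pi * 1.5 * (t - 10.0))
              * np.exp(-0.5 * (t - 10.0)) * (t > 10.0))

d_main = newmark_displacement(mainshock, dt, a_yield=1.0)
d_seq = newmark_displacement(mainshock + aftershock, dt, a_yield=1.0)
print(f"mainshock only: {d_main:.3f} m, mainshock + aftershock: {d_seq:.3f} m")
```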

Keywords: Seismic slope stability, sliding displacement, mainshock, aftershock, landslide, earthquake.

2733 Combining the Description Features of UMLRT and CSP+T Specifications Applied to a Complete Design of Real-Time Systems

Authors: Kawtar Benghazi Akhlaki, Manuel I. Capel-Tuñón

Abstract:

UML is a collection of notations for capturing a software system specification. These notations have a specific syntax defined by the Object Management Group (OMG), but many of their constructs only present informal semantics. They are primarily graphical, with textual annotation. The inadequacies of standard UML as a vehicle for the complete specification and implementation of real-time embedded systems have led to a variety of competing and complementary proposals. The Real-Time UML profile (UML-RT), developed and standardized by the OMG, defines a unified framework to express the time, scheduling and performance aspects of a system. We present in this paper a framework approach aimed at deriving a complete specification of a real-time system. We therefore combine two methods: a semi-formal one, UML-RT, which allows the visual modeling of a real-time system, and a formal one, CSP+T, which is a design language that includes the specification of real-time requirements. To show the applicability of the approach, a correct design of a real-time system with hard real-time constraints is obtained by applying a set of mapping rules.

Keywords: CSP+T, formal software specification, process algebras, real-time systems, unified modeling language.

2732 The Proof of Analogous Results for Martingales and Partial Differential Equations Options Price Valuation Formulas Using Stochastic Differential Equation Models in Finance

Authors: H. D. Ibrahim, H. C. Chinwenyi, A. H. Usman

Abstract:

Valuing derivatives (options, futures, swaps, forwards, etc.) is no easy task in financial mathematics. This problem can be effectively resolved in finance by two methods, martingales and partial differential equations (PDEs), each yielding its own option price valuation formula. This paper examines two different stochastic financial models, the Constant Elasticity of Variance (CEV) model and the Black-Karasinski term structure model. Assuming their respective option price valuation formulas, we prove the analogy between the martingale and PDE option price valuation formulas for the two Stochastic Differential Equation (SDE) models. This is accomplished by applying the Girsanov theorem to define an Equivalent Martingale Measure (EMM) and by using the Feynman-Kac theorem. The results show a systematic proof of the analogy between the two (martingale and PDE) option price valuation formulas, beginning with the martingale option price formula and arriving back at the Black-Scholes parabolic PDE, and vice versa.
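For reference, the standard form of the equivalence the paper proves can be written as follows (a textbook statement under Black-Scholes dynamics, given here only to fix notation; the paper works with the CEV and Black-Karasinski dynamics instead):

```latex
% Risk-neutral (martingale) pricing under an EMM Q obtained via the Girsanov theorem:
V(t,S_t) = e^{-r(T-t)}\,\mathbb{E}^{\mathbb{Q}}\!\left[\Phi(S_T)\mid \mathcal{F}_t\right],
\qquad dS_t = r S_t\,dt + \sigma S_t\,dW_t^{\mathbb{Q}} .

% By the Feynman-Kac theorem, the same price solves the Black-Scholes parabolic PDE:
\frac{\partial V}{\partial t} + rS\frac{\partial V}{\partial S}
+ \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} - rV = 0,
\qquad V(T,S) = \Phi(S).
```

Here Φ denotes the payoff, e.g. Φ(S) = max(K - S, 0) for the European put mentioned in the keywords.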

Keywords: Option price valuation, Martingales, Partial Differential Equations, PDEs, Equivalent Martingale Measure, Girsanov Theorem, Feynman-Kac Theorem, European Put Option.

2731 Creation of Economic and Social Value by Social Entrepreneurship for Sustainable Development

Authors: Ahaskar Pandey, Gaurav Mukherjee, Sushil Kumar

Abstract:

The ever-growing sentiment of environmentalism across the globe has made many people think along green lines, but most such ideas halt short of implementation because of short-term economic viability issues with the concept of going green. In this paper we have tried to amalgamate the green concept with social entrepreneurship for solving a variety of issues faced by society today, while ensuring that short-term economic viability does not act as a deterrent. The paper comes up with three sustainable models of social entrepreneurship which tackle a wide assortment of issues such as nutrition, land, pollution and employment problems. The models fall under the following heads:
- Spirulina cultivation: This model addresses nutrition, land and employment issues. It deals with the cultivation of a blue-green alga called Spirulina, which can be used as a very nutritious food. The implementation of this model would also bring employment to the poor people of the area.
- Biocomposites: This model comes up with various avenues in which biocomposites can be used in an economically sustainable manner. It deals with environmental concerns and addresses the depletion of natural resources.
- Packaging material from empty fruit bunches (EFB) of oil palm: This model deals with air and land pollution. It is intended as a substitute for packaging materials made from Styrofoam and plastics, which are non-biodegradable. It takes care of biodegradability and land pollution issues, and it also reduces air pollution because the empty fruit bunches are not incinerated.
All three models are sustainable and do not further deplete natural resources. This paper explains each of the models in detail, deals with the operational/manufacturing procedures and cost analysis, and throws light on the benefits derived and sustainability aspects.

Keywords: Biodegradable, Pollution, Social entrepreneurship, Sustainability.

2730 Fast and Efficient On-Chip Interconnection Modeling for High Speed VLSI Systems

Authors: A.R. Aswatha, T. Basavaraju, S. Sandeep Kumar

Abstract:

Timing-driven physical design, synthesis, and optimization tools need efficient closed-form delay models for estimating the delay associated with each net in an integrated circuit (IC) design. The total number of nets in a modern IC design has increased dramatically and now exceeds millions, so efficient modeling of interconnections is needed for high-speed ICs. This paper presents closed-form expressions for RC and RLC interconnection trees in current-mode signaling, which can be implemented in a VLSI design tool. These analytical model expressions can be used for accurate calculation of delay after the design clock tree has been laid out and the design is fully routed. Evaluation of these analytical models is several orders of magnitude faster than simulation using SPICE.
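As a simple example of the closed-form style of delay estimation discussed (though not the paper's current-mode expressions, which are not reproduced in the abstract), the sketch below computes the classic Elmore delay for a small, assumed RC interconnect tree.

```python
# Elmore delay for an RC tree: for each sink i, sum over every node k of
# R(shared path root->i and root->k) * C_k. A generic illustration only.

# Node description: name -> (parent, branch resistance in ohms, node capacitance in farads)
tree = {
    "n1": ("root", 50.0, 10e-15),
    "n2": ("n1", 30.0, 15e-15),
    "n3": ("n1", 40.0, 12e-15),   # second branch off n1
}

def path_to_root(node):
    path = []
    while node != "root":
        path.append(node)
        node = tree[node][0]
    return path

def elmore_delay(sink):
    sink_path = set(path_to_root(sink))
    delay = 0.0
    for node, (_, _, cap) in tree.items():
        # resistance of the path segment shared by root->sink and root->node
        shared_r = sum(tree[n][1] for n in path_to_root(node) if n in sink_path)
        delay += shared_r * cap
    return delay

for sink in ("n2", "n3"):
    print(f"Elmore delay to {sink}: {elmore_delay(sink) * 1e12:.2f} ps")
```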

Keywords: IC design, RC/RLC Interconnection, VLSI Systems.

2729 Neural Network Control of a Biped Robot Model with Composite Adaptation Law

Authors: Ahmad Forouzantabar

Abstract:

This paper presents a novel neural network controller with a composite adaptation law to improve the trajectory tracking of biped robots compared with a classical controller. The biped model has 5 links and 6 degrees of freedom and is actuated by Pleated Pneumatic Artificial Muscles, which have a very high power-to-weight ratio and a large stroke compared to similar actuators. The proposed controller employs a stable neural network to approximate unknown nonlinear functions in the robot dynamics, thereby overcoming some limitations of conventional controllers such as PD or adaptive controllers and guaranteeing good performance. This NN controller significantly improves the accuracy requirements by retaining the basic PD/PID loop but adding an inner adaptive loop that allows the controller to learn unknown parameters such as the friction coefficient, therefore improving tracking accuracy. Simulation results, together with graphical simulation in virtual reality, show that the NN controller's tracking performance is considerably better than that of the PD controller.

Keywords: Biped robot, Neural network, Pleated Pneumatic Artificial Muscle, Composite adaptation law.

2728 Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks

Authors: Tadashi Watanabe, Jinya Katsuyama, Akihiro Mano

Abstract:

The estimation of leak flow rates through narrow cracks in structures is important for nuclear reactor safety, since the leak flow could be detected before the occurrence of a loss-of-coolant accident. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, and the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature but are largely affected by the roughness of the crack surface.

Keywords: Crack, critical flow, leak, roughness.

2727 Effect of Teaching Games for Understanding Approach on Students' Cognitive Learning Outcome

Authors: Malathi Balakrishnan, Shabeshan Rengasamy, Mohd Salleh Aman

Abstract:

The study investigated the effects of the Teaching Games for Understanding (TGfU) approach on students' cognitive learning outcome. The study was a quasi-experimental, non-equivalent pretest-posttest control group design in which 10-year-old primary school students (n=72) were randomly assigned to an experimental and a control group. The experimental group was taught with the TGfU approach and the control group with the traditional skill approach to the handball game. The Game Performance Assessment Instrument (GPAI) was used to measure students' tactical understanding and decision making in 3 versus 3 handball game situations. Analysis of covariance (ANCOVA) was used to analyze the data. The results reveal a significant difference between the TGfU approach group and the traditional skill approach group on the post-test score (F(1, 69) = 248.83, p < .05). The findings of this study suggest the importance of the TGfU approach for improving primary students' tactical understanding and decision making in the handball game.

Keywords: Constructivism, learning outcome, tactical understanding, and Teaching Game for Understanding (TGfU)

2726 Extraction of Symbolic Rules from Artificial Neural Networks

Authors: S. M. Kamruzzaman, Md. Monirul Islam

Abstract:

Although backpropagation ANNs generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees can. In many applications, it is desirable to extract knowledge from trained ANNs so that users can gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods. The extracted rules are comparable with those of other methods in terms of the number of rules, the average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, diabetes, and season classification, demonstrate the effectiveness of the proposed approach with good generalization ability.

Keywords: Backpropagation, clustering algorithm, constructive algorithm, continuous activation function, pruning algorithm, rule extraction algorithm, symbolic rules.

2725 Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Authors: Toshitaka Higashino, Naoki Wakamiya

Abstract:

Human beings perform a task by perceiving information from outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyses from multiple perspectives. Among these, we focused on the Model Human Processor (MHP). However, it was built on psychological experiments, and thus its relation to brain activity has so far been unclear. To verify the validity of the MHP and to propose our own model from a neuroscience viewpoint, EEG (electroencephalography) measurements are performed during the experiments in this study. More specifically, experiments were first conducted in which Latin alphabet characters were used as visual stimuli. In addition to response time, ERPs (event-related potentials) such as N100 and P300 were measured using EEG. By comparing the cycle time predicted by the MHP and the latency of the ERPs, it was found that N100, related to the perception of stimuli, appeared at the end of the perceptual processor. Furthermore, an additional experiment revealed that P300, related to decision making, appeared during the response decision process, not at its end. Second, these findings were confirmed by experiments using Japanese Hiragana characters, i.e., Japan's own phonetic symbols. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite this difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involved similar information processing in the brain. Based on these results, our model was proposed, which reflects response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of N100; the cognitive processor, from N100 to P300; and the decision-action processor, from P300 to the response. Using our model, an application system which reflects brain activity can be established.

Keywords: Brain activity, EEG, information processing model, model human processor.

2724 Developing a Statistical Model for Electromagnetic Environment for Mobile Wireless Networks

Authors: C. Temaneh Nyah

Abstract:

The analysis of the electromagnetic environment using deterministic mathematical models is limited by the impossibility of analyzing a large number of interacting network stations with a priori unknown parameters, which is characteristic, for example, of mobile wireless communication networks. One of the tasks of the tools used in the design, planning and optimization of mobile wireless networks is to simulate the electromagnetic environment using mathematical modeling methods, including computer experiments, and to estimate its effect on radio communication devices. This paper proposes the development of a statistical model of the electromagnetic environment of a mobile wireless communication network by describing the parameters and factors affecting it, including the propagation channel, and their statistical models.

Keywords: Electromagnetic Environment, Statistical model, Wireless communication network.

2723 Platform-as-a-Service Sticky Policies for Privacy Classification in the Cloud

Authors: Maha Shamseddine, Amjad Nusayr, Wassim Itani

Abstract:

In this paper, we present a Platform-as-a-Service (PaaS) model for controlling the privacy enforcement mechanisms applied to user data when stored and processed in Cloud data centers. The proposed architecture consists of establishing user-configurable ‘sticky’ policies on the Graphical User Interface (GUI) data-bound components during the application development phase to specify the details of privacy enforcement on the contents of these components. Various privacy classification classes on the data components are formally defined to give the user full control over the degree and scope of privacy enforcement, including the type of execution containers used to process the data in the Cloud. This not only enhances the privacy awareness of the developed Cloud services, but also results in major performance and energy-efficiency savings, because the privacy mechanisms are applied solely to sensitive data units and not to all the user content. The proposed design is implemented in a real PaaS cloud computing environment on the Microsoft Azure platform.

Keywords: Privacy enforcement, Platform-as-a-Service privacy awareness, cloud computing privacy.

2722 Prioritization Method in the Fuzzy Analytic Network Process by Fuzzy Preferences Programming Method

Authors: Tarifa S. Almulhim, Ludmil Mikhailov, Dong-Ling Xu

Abstract:

In this paper, a method for deriving a group priority vector in the Fuzzy Analytic Network Process (FANP) is proposed. By introducing importance weights for multiple decision makers (DMs) based on their experience, the Fuzzy Preferences Programming Method (FPP) is extended to a fuzzy group prioritization problem in the FANP. Additionally, fuzzy pairwise comparison judgments are used rather than exact numerical assessments in order to model the uncertainty and imprecision in the DMs' judgments, and the fuzzy group prioritization problem is then transformed into a fuzzy non-linear programming optimization problem that maximizes group satisfaction. Unlike the known fuzzy prioritization techniques, the new method proposed in this paper can easily derive crisp weights from an incomplete and inconsistent fuzzy set of comparison judgments and does not require additional aggregation procedures. Detailed numerical examples are used to illustrate the implementation of our approach and to compare it with the latest fuzzy prioritization methods.
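For orientation, the crisp single-decision-maker FPP formulation that such group extensions build on can be sketched as a small nonlinear program: maximize the consistency degree λ subject to the membership constraints of the triangular fuzzy judgments (l, m, u). The constraint form and the example judgments below are an assumed illustration based on the standard FPP method, not the group extension proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Triangular fuzzy pairwise judgments a_ij = (l, m, u) for the upper triangle
# of a 3-criteria comparison matrix (illustrative values only).
judgments = {(0, 1): (1.5, 2.0, 2.5),
             (0, 2): (3.0, 4.0, 5.0),
             (1, 2): (1.0, 2.0, 3.0)}
n = 3

def neg_lambda(z):
    return -z[0]                      # maximize the consistency degree lambda

def constraints():
    cons = []
    for (i, j), (l, m, u) in judgments.items():
        # Membership conditions of w_i / w_j in (l, m, u), linearized in w
        # (assumed standard FPP form):
        #   (m - l) * lam * w_j - w_i + l * w_j <= 0
        #   (u - m) * lam * w_j + w_i - u * w_j <= 0
        cons.append({"type": "ineq",
                     "fun": lambda z, i=i, j=j, l=l, m=m:
                         -((m - l) * z[0] * z[1 + j] - z[1 + i] + l * z[1 + j])})
        cons.append({"type": "ineq",
                     "fun": lambda z, i=i, j=j, m=m, u=u:
                         -((u - m) * z[0] * z[1 + j] + z[1 + i] - u * z[1 + j])})
    cons.append({"type": "eq", "fun": lambda z: z[1:].sum() - 1.0})  # weights sum to 1
    return cons

z0 = np.concatenate([[0.5], np.full(n, 1.0 / n)])   # variables: [lambda, w_1..w_n]
bounds = [(None, None)] + [(1e-6, 1.0)] * n
res = minimize(neg_lambda, z0, bounds=bounds, constraints=constraints(),
               method="SLSQP")
lam, weights = res.x[0], res.x[1:]
print("consistency degree:", round(lam, 3), "crisp weights:", weights.round(3))
```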

Keywords: Fuzzy Analytic Network Process (FANP), Fuzzy Non-linear Programming, Fuzzy Preferences Programming Method (FPP), Multiple Criteria Decision-Making (MCDM), Triangular Fuzzy Number.

2721 Implementation of the Outputs of Computer Simulation to Support Decision-Making Processes

Authors: Jiří Barta

Abstract:

At the present time, awareness, education, computer simulation and information systems protection are very serious and relevant topics. The article deals with the perspectives and possibilities of incorporating emergency or natural hazard threats into a system developed for communication among members of crisis management staffs. The Czech Hydro-Meteorological Institute, with its System of Integrated Warning Service, represents the largest usable base of information. National information systems are connected to foreign systems, especially to the flood emergency systems of neighboring countries and to the systems of the European Union and of international organizations of which the Czech Republic is a member. Using the outputs of particular information systems and computer simulations on a single communication interface of the information system for communication among crisis management staff, and establishing site interoperability in the network, will lead to time savings in decision-making processes when dealing with extraordinary events and crisis situations. Faster management of an extraordinary event or a crisis situation will bring positive effects and minimize the impact of negative effects on the environment.

Keywords: Computer simulation, communication, continuity, critical infrastructure, information systems, safety.

2720 Detecting Earnings Management via Statistical and Neural Network Techniques

Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie

Abstract:

Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to answer the following question: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction? In approaching this question, a Linear Regression (LR) model was compared with two neural networks, a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the statistical results showed that the precision of the GRNN did not exhibit a significant difference in comparison with the MLP. In addition, the mean square errors of the MLP and GRNN showed a significant difference with respect to the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management using neural network techniques rather than linear regression models.
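A minimal sketch of the kind of comparison performed (linear regression versus a neural network fitted to the same predictors and scored by mean squared error) is shown below with scikit-learn; the synthetic data stand in for the TSE dataset, and GRNN is omitted because it has no standard scikit-learn implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in: predictors with a mildly nonlinear relation to the
# earnings-management proxy (e.g. discretionary accruals).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.3 * np.tanh(X[:, 2] * X[:, 3])
     + rng.normal(scale=0.1, size=500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

for name, model in [("Linear Regression", lr), ("MLP", mlp)]:
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.4f}")
```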

Keywords: Earnings management, generalized regression neural networks, linear regression, multi-layer perceptron, Tehran stock exchange.
