Search results for: exponential functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2840

2450 Vibration Measurements of Single-Lap Cantilevered SPR Beams

Authors: Xiaocong He

Abstract:

Self-piercing riveting (SPR) is a new high-speed mechanical fastening technique suitable for point joining of dissimilar sheet materials, as well as coated and pre-painted sheet materials. Mechanical structures assembled by SPR are expected to possess a high damping capacity. In this study, experimental measurement techniques are proposed for predicting the vibration behavior of single-lap cantilevered SPR beams. Free and forced vibration behavior of the beams was measured using the LMS CADA-X experimental modal analysis software and the LMS-DIFA Scadas II data acquisition hardware, and the frequency response functions of SPR beams with different numbers of rivets were compared. The main goal of the paper is to provide a basic measurement method for further research on vibration-based non-destructive damage detection in single-lap cantilevered SPR beams.
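
For context, a frequency response function can be estimated from measured excitation and response signals with the standard H1 estimator, H1(f) = Pxy(f)/Pxx(f). The sketch below is a generic SciPy illustration with assumed signals and sampling parameters, not the LMS CADA-X / Scadas II workflow used in the paper.

```python
# Illustrative FRF estimation (not the LMS toolchain): H1(f) = Pxy(f) / Pxx(f).
import numpy as np
from scipy import signal

fs = 2048                                     # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
excitation = np.random.randn(t.size)          # stand-in for the shaker input
# Stand-in for the beam response: a lightly damped mode near 120 Hz plus noise.
sos = signal.butter(2, [115, 125], btype="bandpass", fs=fs, output="sos")
response = signal.sosfilt(sos, excitation) + 0.01 * np.random.randn(t.size)

f, Pxy = signal.csd(excitation, response, fs=fs, nperseg=1024)   # cross-spectrum
_, Pxx = signal.welch(excitation, fs=fs, nperseg=1024)           # input auto-spectrum
frf = Pxy / Pxx            # complex FRF; compare magnitude/phase per rivet count
print(f[np.argmax(np.abs(frf))])   # rough estimate of the dominant resonance
```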

Keywords: self-piercing riveting, dynamic response, experimental measurement, frequency response functions

Procedia PDF Downloads 429
2449 Dual-Rail Logic Unit in Double Pass Transistor Logic

Authors: Hamdi Belgacem, Fradi Aymen

Abstract:

In this paper, we present a low-power, low-cost differential logic unit (LU). The proposed LU receives dual-rail inputs and generates dual-rail outputs. The proposed circuit can be used in the Arithmetic and Logic Unit (ALU) of a processor, and it can also be dedicated to self-checking applications based on the dual duplication code. Four logic functions, as well as their inverses, are implemented within a single logic unit. The hardware overhead for the implementation of the proposed LU is lower than that required for a standard LU implemented in a standard CMOS logic style. This new implementation is attractive, as fewer transistors are required to implement important logic functions: the proposed differential logic unit can perform 8 Boolean logical operations using only 16 transistors. Spice simulations using a 32 nm technology were used to evaluate the performance of the proposed circuit and to prove its acceptable electrical behaviour.
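
To illustrate the dual-rail idea behaviorally (not at the transistor level of double pass transistor logic), every signal travels as a pair (s, s̄), so each implemented function yields its inverse for free by swapping the rails. This is how 4 functions give 8 operations:

```python
# Behavioral sketch of dual-rail encoding; function and rail names are invented.
def dr(bit):                 # encode a Boolean as a dual-rail pair (s, s_bar)
    return (bit, 1 - bit)

def dr_not(a):               # inversion costs nothing: swap the two rails
    return (a[1], a[0])

def dr_and(a, b):
    v = a[0] & b[0]
    return (v, 1 - v)

def dr_or(a, b):
    v = a[0] | b[0]
    return (v, 1 - v)

def dr_xor(a, b):
    v = a[0] ^ b[0]
    return (v, 1 - v)

# 8 operations from 4 functions plus their free inverses (NAND, NOR, XNOR, ...).
a, b = dr(1), dr(0)
print(dr_and(a, b), dr_not(dr_and(a, b)))    # AND and NAND of the same inputs
```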

Keywords: differential logic unit, double pass transistor logic, low power CMOS design, low cost CMOS design

Procedia PDF Downloads 452
2448 Flocking Swarm of Robots Using Artificial Innate Immune System

Authors: Muneeb Ahmad, Ali Raza

Abstract:

A computational method inspired by the immune system (IS) is presented, leveraging the robustness, fault tolerance, scalability, and adaptability that the IS shares with swarm intelligence. This method aims to showcase flocking behaviors in a swarm of robots (SR). The innate part of the IS offers a variety of reactive and probabilistic cell functions, alongside its self-regulation mechanism, which have been translated to enable swarming behaviors. Although the research focuses specifically on flocking behaviors in a variety of simulated environments, using e-puck robots in a physics-based simulator (CoppeliaSim), the artificial innate immune system (AIIS) can exhibit other swarm behaviors as well. The effectiveness of the immuno-inspired approach has been established through extensive experimentation on scalability and adaptability, using standard swarm benchmarks as well as the immunological regulatory functions (i.e., Dendritic Cells' Maturity and Inflammation). The AIIS-based approach has proved to be a scalable and adaptive solution for emulating the flocking behavior of SR.
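
The abstract does not give the AIIS control rules, so the sketch below shows only a generic flocking baseline (boids-style cohesion, separation, and alignment) of the kind such immuno-inspired controllers are benchmarked against; all gains and radii are assumed values.

```python
# Generic flocking baseline, not the paper's AIIS controller.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (20, 2))        # 20 robots in a 10 x 10 arena
vel = rng.uniform(-1, 1, (20, 2))

def step(pos, vel, r=2.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < r) & (d > 0)          # neighbours within sensing radius r
        if nbr.any():
            cohesion = pos[nbr].mean(axis=0) - pos[i]
            separation = (pos[i] - pos[nbr]).sum(axis=0)
            alignment = vel[nbr].mean(axis=0) - vel[i]
            new_vel[i] += dt * (0.5 * cohesion + 0.3 * separation + 0.4 * alignment)
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.where(speed > 1.5, new_vel * 1.5 / speed, new_vel)  # cap speed
    return pos + dt * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
```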

Keywords: artificial innate immune system, flocking swarm, immune system, swarm intelligence

Procedia PDF Downloads 104
2447 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. Alternatively, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. The Generalised Pareto distribution is widely used to mathematically approximate the tail of the empirical distribution, although, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull and the Exponential distribution, the latter a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto, despite the existence of practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a certain threshold u, and references that treat it as a secondary choice for significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of which statistical models to apply when characterizing exceedances of wave data. The Generalised Pareto, the two-parameter Weibull, and the Exponential distribution are compared for different values of the threshold u, using real wave data obtained from four buoys along the Irish coast. Results show that applying statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed, and the respective conclusions are drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, due to its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
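
The comparison the abstract describes can be sketched as follows: exceedances of H_s over a threshold u are extracted and fitted with the three candidate distributions, which are then ranked by a likelihood-based criterion. The data below are synthetic stand-ins, not the Irish buoy records used in the paper.

```python
# Hedged sketch of the POT comparison: GPD vs. 2-parameter Weibull vs.
# Exponential on exceedances over u, ranked by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hs = stats.weibull_min.rvs(1.5, scale=2.0, size=5000, random_state=rng)  # fake Hs
u = np.quantile(hs, 0.95)              # a candidate threshold
exc = hs[hs > u] - u                   # exceedances over u

fits = {
    "genpareto":   (stats.genpareto,   stats.genpareto.fit(exc, floc=0)),
    "weibull2p":   (stats.weibull_min, stats.weibull_min.fit(exc, floc=0)),
    "exponential": (stats.expon,       stats.expon.fit(exc, floc=0)),
}
for name, (dist, params) in fits.items():
    ll = np.sum(dist.logpdf(exc, *params))
    k = len(params) - 1                # location was fixed at zero
    print(f"{name}: AIC = {2 * k - 2 * ll:.1f}")
```

Repeating this for several values of u reproduces the kind of threshold-sensitivity study described above.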

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 272
2446 A Study of Evolutional Control Systems

Authors: Ti-Jun Xiao, Zhe Xu

Abstract:

Controllability is one of the fundamental issues in control systems. In this paper, we study the controllability of second-order evolutional control systems in Hilbert spaces with memory and boundary controls, which model the dynamic behavior of some viscoelastic materials. By transferring the control problem into a moment problem and showing the Riesz property of a family of functions related to Cauchy problems for some integrodifferential equations, we obtain a general boundary controllability theorem for these second-order evolutional control systems. This controllability theorem is applicable to various concrete 1D viscoelastic systems and recovers some previous related results. It is worth noting that Riesz sequences can be used for numerical computation of the control functions, and the identification of new Riesz sequences is of independent interest for basis-function theory. Moreover, using the Riesz sequences, we obtain the existence and uniqueness of (weak) solutions to these second-order evolutional control systems in Hilbert spaces. Finally, we derive the exact boundary controllability of a viscoelastic beam equation as an application of our abstract theorem.

Keywords: evolutional control system, controllability, boundary control, existence and uniqueness

Procedia PDF Downloads 222
2445 Anisotropic Approach for Discontinuity Preserving in Optical Flow Estimation

Authors: Pushpendra Kumar, Sanjeev Kumar, R. Balasubramanian

Abstract:

Estimation of optical flow from a sequence of images using variational methods is one of the most successful approaches. Discontinuity between different motions is one of the challenging problems in flow estimation. In this paper, we design a new anisotropic diffusion operator, which is able to provide smooth flow over a region while efficiently preserving discontinuities in the optical flow. This operator is designed on the basis of the intensity differences of the pixels and an isotropic operator, combined using an exponential function, and the combination is used to control the propagation of flow. Experimental results on different datasets verify the robustness and accuracy of the algorithm and also validate the effect of the anisotropic operator in preserving discontinuities.
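
A minimal sketch of an exponential, intensity-driven diffusivity of the kind described above: smoothing is suppressed where pixel differences are large, so motion discontinuities are preserved. The exact operator in the paper may differ; kappa is an assumed contrast parameter.

```python
import numpy as np

def diffusivity(image, kappa=10.0):
    gx = np.diff(image, axis=1, append=image[:, -1:])   # horizontal differences
    gy = np.diff(image, axis=0, append=image[-1:, :])   # vertical differences
    grad2 = gx ** 2 + gy ** 2
    return np.exp(-grad2 / kappa ** 2)   # ~1 in flat regions, ~0 at edges

img = np.zeros((8, 8)); img[:, 4:] = 100.0   # a synthetic step edge
w = diffusivity(img)
print(w[0, 2], w[0, 3])   # interior weight ~1, weight at the edge ~0
```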

Keywords: optical flow, variational methods, computer vision, anisotropic operator

Procedia PDF Downloads 873
2444 Finding the Elastic Field in an Arbitrary Anisotropic Media by Implementing Accurate Generalized Gaussian Quadrature Solution

Authors: Hossein Kabir, Amir Hossein Hassanpour Mati-Kolaie

Abstract:

In the current study, the elastic field in an anisotropic elastic medium is determined by implementing a general semi-analytical method. In this methodology, the displacement field is computed as a finite sum of functions with unknown coefficients. These functions exactly satisfy both the homogeneous and inhomogeneous boundary conditions of the proposed medium. The unknown coefficients are determined by applying the principle of minimum potential energy, and the numerical integration is performed using the Generalized Gaussian Quadrature solution. With the aid of the calculated coefficients, the displacement field, as well as the other parameters of the elastic field, can be obtained. Finally, a comparison of a previous analytical method with the current semi-analytical method demonstrates the efficacy of the present methodology.
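
The mechanics of quadrature-based integration can be illustrated with standard Gauss-Legendre rules, a stand-in for the Generalized Gaussian Quadrature used in the paper (generalized rules target non-polynomial or singular integrands, but the node-and-weight machinery is the same):

```python
import numpy as np

def gauss_integrate(f, a, b, n=8):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Example: a strain-energy-like integrand; the exact value of
# the integral of x*exp(x) over [0, 1] is exactly 1.
print(gauss_integrate(lambda x: x * np.exp(x), 0.0, 1.0))
```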

Keywords: anisotropic elastic media, semi-analytical method, elastic field, generalized gaussian quadrature solution

Procedia PDF Downloads 321
2443 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that has been used advantageously to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to modulate the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, in which the number of neuron units in the last layer is increased. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the number of neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the need for computers with large memory.
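
The abstract does not specify the coding scheme, so the sketch below is only a loose stand-in: the same one-dimensional function is fitted once with a single output neuron and once with a widened output layer (here emulated by predicting the target at several shifted points), using scikit-learn rather than the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(0, 2 * np.pi, 400).reshape(-1, 1)
y = np.sin(x).ravel()

# Conventional mapping: one output neuron.
net1 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net1.fit(x, y)

# Widened output layer: each sample also predicts neighbouring target values
# (an assumed emulation of the increased last-layer coding).
Y = np.column_stack([np.sin(x).ravel(),
                     np.sin(x + 0.1).ravel(),
                     np.sin(x - 0.1).ravel()])
net3 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net3.fit(x, Y)

print(net1.score(x, y), net3.score(x, Y))   # compare the quality of the fits
```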

Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories

Procedia PDF Downloads 283
2442 Secure Cryptographic Operations on SIM Card for Mobile Financial Services

Authors: Kerem Ok, Serafettin Senturk, Serdar Aktas, Cem Cevikbas

Abstract:

Mobile technology is very popular nowadays, and it provides a digital world where users can experience many value-added services. Service providers are eager to offer diverse value-added services such as digital identity and mobile financial services. In this context, the security of data storage on smartphones and the security of communication between the smartphone and the service provider are critical for the success of these services. In order to provide the required security functions, the SIM card is one acceptable alternative: since SIM cards include a Secure Element, they are able to store sensitive data, create cryptographically secure keys, and encrypt and decrypt data. In this paper, we design and implement a SIM and smartphone framework that uses a SIM card for secure key generation, key storage, data encryption, data decryption, and digital signing for mobile financial services. Our framework shows that the SIM card can be used as a controlled Secure Element to provide the required security functions for popular e-services such as mobile financial services.
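
For orientation, the cryptographic operations listed above (key generation, signing, symmetric encryption) look as follows with the Python "cryptography" package. On an actual SIM these run inside the Secure Element and are reached over APDUs; this desktop sketch only illustrates the equivalent flow, and the payload is invented.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.fernet import Fernet

# Key generation and digital signing (e.g., authorizing a transaction).
private_key = ec.generate_private_key(ec.SECP256R1())
payload = b"transfer 100 EUR to account X"        # hypothetical message
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, payload, ec.ECDSA(hashes.SHA256()))

# Symmetric encryption/decryption of data at rest.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"sensitive account data")
assert Fernet(key).decrypt(token) == b"sensitive account data"
```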

Keywords: SIM card, mobile financial services, cryptography, secure data storage

Procedia PDF Downloads 312
2441 Artificial Intelligence Methodology for Liquid Propellant Engine Design Optimization

Authors: Hassan Naseh, Javad Roozgard

Abstract:

This paper presents a methodology based on Artificial Intelligence (AI) applied to Liquid Propellant Engine (LPE) optimization. The AI methodology utilizes the Adaptive Neuro-Fuzzy Inference System (ANFIS). In this methodology, the objective is to achieve maximum performance (specific impulse). The independent design variables in the ANFIS modeling are combustion chamber pressure and temperature and the oxidizer-to-fuel ratio; the output of the modeling is the specific impulse, which can be combined with other objective functions in LPE design optimization. To this end, the LPE's parameters have been modeled in the ANFIS methodology by generating the fuzzy inference system structure using grid partitioning, subtractive clustering, and Fuzzy C-Means (FCM) clustering, for both inference types (Mamdani and Sugeno) and various types of membership functions. The final comparison of optimization results shows the accuracy and processing run time of the Gaussian ANFIS methodology relative to all other methods.
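
A minimal Sugeno-type inference step of the kind ANFIS learns can be sketched as follows; the rules, Gaussian membership parameters, and linear consequents here are invented for illustration, not taken from the paper.

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function centered at c with width sigma."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sugeno_isp(pressure, of_ratio):
    # Two toy rules: firing strength = product of antecedent memberships.
    w1 = gauss_mf(pressure, 60.0, 15.0) * gauss_mf(of_ratio, 2.2, 0.4)
    w2 = gauss_mf(pressure, 120.0, 25.0) * gauss_mf(of_ratio, 2.8, 0.4)
    # Linear consequents: per-rule Isp estimate (invented coefficients).
    f1 = 250.0 + 0.3 * pressure + 20.0 * of_ratio
    f2 = 280.0 + 0.2 * pressure + 15.0 * of_ratio
    return (w1 * f1 + w2 * f2) / (w1 + w2)   # weighted-average defuzzification

print(sugeno_isp(pressure=90.0, of_ratio=2.5))   # specific impulse estimate
```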

Keywords: ANFIS methodology, artificial intelligence, liquid propellant engine, optimization

Procedia PDF Downloads 587
2440 Dynamic Reroute Modeling for Emergency Evacuation: Case Study of Brunswick City, Germany

Authors: Yun-Pang Flötteröd, Jakob Erdmann

Abstract:

Human behavior during evacuations is quite complex. One of the critical behaviors affecting the efficiency of evacuation is route choice, so the respective simulation modeling work needs to function properly. In this paper, Simulation of Urban Mobility's (SUMO) current dynamic route modeling during evacuation, i.e., the rerouting functions, is examined with a real case study, and the consistency of the simulation results with reality is checked as well. Four influencing factors are considered in order to analyze possible traffic impacts during the evacuation and to examine the rerouting functions in SUMO: (1) the time to receive information, (2) the probability of canceling a trip, (3) the probability of using navigation equipment, and (4) the rerouting and information-updating period. Furthermore, some behavioral characteristics of the case study are analyzed with use of the corresponding detector data and applied in the simulation. The experiment results show that the dynamic route modeling in SUMO can deal with the proposed scenarios properly. Some issues and functional needs related to route choice are discussed, and further improvements are suggested.
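
As an illustration of the rerouting mechanism, the sketch below drives SUMO through its TraCI Python API and periodically reroutes "informed" vehicles on current travel times. The configuration file name, update period, and information probability are placeholder assumptions, not the study's calibrated values.

```python
import random
import traci

traci.start(["sumo", "-c", "evacuation.sumocfg"])   # hypothetical config file
REROUTE_PERIOD = 60      # rerouting/information-updating period in seconds
P_INFORMED = 0.7         # probability that a driver receives routing information

informed = set()
step = 0
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    for vid in traci.simulation.getDepartedIDList():
        if random.random() < P_INFORMED:
            informed.add(vid)                        # this driver gets updates
    if step % REROUTE_PERIOD == 0:
        for vid in traci.vehicle.getIDList():
            if vid in informed:
                traci.vehicle.rerouteTraveltime(vid) # reroute on current travel times
    step += 1
traci.close()
```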

Keywords: evacuation, microscopic traffic simulation, rerouting, SUMO

Procedia PDF Downloads 194
2439 Pragmatic Discoursal Study of Hedging Constructions in English Language

Authors: Mohammed Hussein Ahmed, Bahar Mohammed Kareem

Abstract:

This study is concerned with a pragmatic discoursal study of hedging constructions in the English language. A hedge is a mitigating word used to lessen the impact of the utterance produced by the speaker; hedges can be adverbs, adjectives, or verbs, and sometimes consist of clauses. The study aims at finding out the extent to which speakers and participants in discourse use hedging constructions during their conversations. It also aims at finding out whether or not there are any significant differences in the types, functions, and frequency of the hedging constructions employed by males and females. It is hypothesized that hedging constructions are more frequent in English discourse than in other languages due to its formality, and that the frequency of the types and functions is influenced by the gender of the participants. To achieve the aims of the study, two types of procedures have been followed: theoretical and practical. The theoretical procedure consists of presenting a theoretical background of the hedging topic, including its definitions, etymology, and theories. The practical procedure consists of selecting a sample of texts and analyzing them according to an adopted model. A number of conclusions will be drawn based on the findings of the study.

Keywords: hedging, pragmatics, politeness, theoretical

Procedia PDF Downloads 587
2438 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shape beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm has an underestimation problem: estimation errors in the covariance matrix cause beam distortion, so that the output pattern cannot form a dot-shape beam, and main-lobe deviation and high sidelobe levels also appear. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace, and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their dispersion and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shape beam with limited snapshots, reduce the sidelobe level, and improve the robustness of beamforming.
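
One plausible reading of the exponential eigenvalue correction is sketched below in NumPy: decompose the sample covariance, raise the small noise-subspace eigenvalues toward the noise floor with an exponential correction index, and form minimum-variance weights w = R⁻¹a / (aᴴR⁻¹a). The correction rule, subspace split, and index value are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 12                                # sensors, limited snapshots (K ~ N)
a = np.exp(1j * np.pi * np.arange(N) * np.sin(0.3))   # ULA steering vector
X = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
R = X @ X.conj().T / K                       # poorly estimated covariance

vals, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
D = max(1, int(np.sum(vals > vals.mean()))) # crude signal-subspace dimension
noise = vals[:-D]
corrected = noise.max() * (noise / noise.max()) ** 0.3   # exponential index 0.3
vals_c = np.concatenate([corrected, vals[-D:]])
R_c = (vecs * vals_c) @ vecs.conj().T        # rebuilt, better-conditioned covariance

Rinv_a = np.linalg.solve(R_c, a)
w = Rinv_a / (a.conj() @ Rinv_a)             # minimum-variance weights
print(np.abs(w.conj() @ a))                  # distortionless response toward a (= 1)
```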

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 130
2437 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area

Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz

Abstract:

The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and this growth is expected to continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modellers and planners, and the development of a reliable strategic transport model depends significantly on the input data and on calibrating the model parameters to reflect the existing situation. Trip distribution, the second step in four-step modelling (FSM), is complex due to its behavioral nature. The gravity model is the most common method for trip distribution. The spatial separation between origin and destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people's behavior in choosing their destinations based on the distance, time, and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area, as they simulate trip distances, and they should therefore be calibrated for any particular strategic transport model to correctly reflect trip behavior within the modelling area. This paper reviews the most common deterrence functions and proposes a calibrated deterrence function for work trips within the Perth metropolitan area, based on information obtained from the latest available household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model using EMME software has been developed for the Perth metropolitan area to assist with the analysis and findings.
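
A doubly-constrained gravity model with the common candidate deterrence functions can be sketched as follows; the zone totals, cost matrix, and parameter values are invented, not the Perth model's calibrated figures.

```python
import numpy as np

# Three common deterrence-function forms.
def exponential(c, beta=0.1):          return np.exp(-beta * c)
def power(c, alpha=1.5):               return c ** (-alpha)
def combined(c, alpha=0.5, beta=0.05): return c ** alpha * np.exp(-beta * c)

O = np.array([500.0, 300.0, 200.0])    # trip productions by origin zone
D = np.array([400.0, 400.0, 200.0])    # trip attractions by destination zone
C = np.array([[5.0, 15.0, 30.0],
              [15.0, 5.0, 20.0],
              [30.0, 20.0, 5.0]])      # travel cost (e.g., minutes)

def gravity(O, D, C, f, iters=50):
    """T_ij = A_i O_i B_j D_j f(c_ij), balanced so row/column totals match."""
    A, B = np.ones_like(O), np.ones_like(D)
    F = f(C)
    for _ in range(iters):             # Furness balancing iterations
        A = 1.0 / (F @ (B * D))
        B = 1.0 / (F.T @ (A * O))
    return (A * O)[:, None] * (B * D)[None, :] * F

T = gravity(O, D, C, exponential)
print(T.round(1), T.sum(axis=1), T.sum(axis=0))   # totals reproduce O and D
```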

Keywords: deterrence function, four-step modelling, origin destination, transport model

Procedia PDF Downloads 168
2436 Effects of Oral L-Carnitine on Liver Functions after Transarterial Chemoembolization in Hepatocellular Carcinoma Patients

Authors: Ali Kassem, Aly Taha, Abeer Hassan, Kazuhide Higuchi

Abstract:

Introduction: Transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) is usually followed by hepatic dysfunction that limits its efficacy. L-carnitine has recently been studied as a hepatoprotective agent. Our aim is to evaluate the effects of L-carnitine against the deterioration of liver functions after TACE. Method: 53 patients with intermediate-stage HCC were assigned to two groups: an L-carnitine group (26 patients), who received L-carnitine 300 mg tablets twice daily from 2 weeks before to 12 weeks after TACE, and a control group (27 patients) without L-carnitine therapy. 28 of the studied patients received branched-chain amino acid (BCAA) granules. Results: There were significant differences between the L-carnitine and control groups in mean serum albumin change from baseline to 1 week and 4 weeks after TACE (p < 0.05). L-carnitine maintained the Child-Pugh score at 1 week after TACE and exhibited improvement at 4 weeks after TACE (p < 0.01 vs. 1 week after TACE). The control group showed significant Child-Pugh score deterioration from baseline to 1 week after TACE (p < 0.05) and 12 weeks after TACE (p < 0.05). There were significant differences between the L-carnitine and control groups in mean Child-Pugh score change from baseline to 4 weeks (p < 0.05) and 12 weeks after TACE (p < 0.05). L-carnitine displayed improvement in prothrombin time (PT) from baseline to 1 week, 4 weeks (p < 0.05), and 12 weeks after TACE, whereas PT in the control group remained below baseline throughout all follow-up intervals. Total bilirubin in the L-carnitine group decreased at 1 week post-TACE, while in the control group it significantly increased at 1 week (p = 0.01). ALT and C-reactive protein elevations were suppressed at 1 week after TACE in the L-carnitine group. The hepatoprotective effects of L-carnitine were enhanced by the concomitant use of branched-chain amino acids. Conclusion: L-carnitine and BCAA combination therapy offers a novel supportive strategy after TACE in HCC patients.

Keywords: hepatocellular carcinoma, L-carnitine, liver functions, trans-arterial embolization

Procedia PDF Downloads 155
2435 A Hazard Rate Function for the Time of Ruin

Authors: Sule Sahin, Basak Bulut Karageyik

Abstract:

This paper introduces a hazard rate function for the time of ruin, used to calculate the conditional probability of ruin over very small intervals. We call this function the force of ruin (FoR). We obtain the expected time of ruin and the conditional expected time of ruin from the exact finite-time ruin probability with exponential claim amounts. We then introduce the FoR, which gives the conditional probability of ruin given that ruin has not occurred by time t. We analyse the behavior of the FoR function for different initial surpluses over a specific time interval. We also obtain the FoR under an excess of loss reinsurance arrangement and examine the effect of reinsurance on the FoR.
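
In hazard-rate terms, if ψ(t) denotes the finite-time ruin probability, the force of ruin is h(t) = ψ'(t) / (1 − ψ(t)), the ruin intensity conditional on survival to t. A numerical sketch on a grid, with an invented placeholder ψ rather than the paper's exact exponential-claims formula:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
psi = 0.4 * (1.0 - np.exp(-0.3 * t))     # toy finite-time ruin probability curve
h = np.gradient(psi, t) / (1.0 - psi)    # force of ruin h(t) on the grid

print(h[0], h[500], h[-1])               # FoR at t = 0, 5, 10
```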

Keywords: conditional time of ruin, finite time ruin probability, force of ruin, reinsurance

Procedia PDF Downloads 405
2434 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements

Authors: Mohamad Molavi Nojumi, Xiaodong Wang

Abstract:

In this research, the fracture behaviour of linear elastic isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual and thermal stresses. Ceramic/metal composites are a main type of FGM. Ceramic materials are brittle, so there is a high possibility of cracks arising during fabrication or in-service loading; in addition, damage analysis is necessary for a safe and efficient design. The FEM is a powerful numerical tool for analyzing complicated problems and is therefore used to investigate the fracture behaviour of FGMs. Here, an accurate 9-node biquadratic quadrilateral graded element is proposed in which the influence of the variation of material properties is considered at the element level. The stiffness matrix of the graded elements is obtained using the principle of minimum potential energy. The implementation of graded elements avoids the artificial sudden jump of material properties that arises when traditional finite elements are used to model FGMs. Numerical results are verified against existing solutions. Different numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates. In these simulations, the material variation is assumed to occur in directions perpendicular and parallel to the crack line, and linear and exponential functions are used to model the material gradient, as these are the forms most discussed in the literature. Various crack lengths are also considered. A major difference between the fracture behaviour of FGMs and that of homogeneous materials is the loss of material symmetry. For example, when the material gradation direction is normal to the crack line, coupled mode I and mode II fracture exists even under mode I loading, originating from the shear induced in the model. Therefore, proper modelling of the material variation must be considered to capture the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is revealed that graded elements provide higher accuracy with less effort in comparison with conventional homogeneous elements.
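
The core idea of a graded element, evaluating the material property at the integration points instead of holding it constant, can be shown in a simplified 1D analogue (a two-node bar with exponentially graded modulus); the paper's 9-node biquadratic element applies the same idea in 2D.

```python
import numpy as np

def graded_bar_stiffness(L=1.0, A=1.0, E0=1.0, beta=2.0, ngauss=4):
    """Stiffness of a 2-node bar with E(x) = E0*exp(beta*x/L), via Gauss quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(ngauss)
    k = np.zeros((2, 2))
    B = np.array([-1.0 / L, 1.0 / L])          # strain-displacement row (constant)
    for xi, w in zip(nodes, weights):
        x = 0.5 * L * (xi + 1.0)               # map Gauss point to [0, L]
        E = E0 * np.exp(beta * x / L)          # graded modulus at this point
        k += w * (0.5 * L) * E * A * np.outer(B, B)
    return k

print(graded_bar_stiffness())
# For comparison: a homogeneous element using the end-point average modulus.
print(np.array([[1.0, -1.0], [-1.0, 1.0]]) * (1.0 + np.exp(2.0)) / 2.0)
```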

Keywords: finite element, fracture mechanics, functionally graded materials, graded element

Procedia PDF Downloads 174
2433 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be directly derived from a Markov model for a given bridge element group, although this is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in the model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at the network level accurately but also to capture the model uncertainties within a given confidence interval.
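
The Bayesian estimation step can be illustrated with a generic Metropolis-Hastings sampler for Weibull shape/scale parameters on synthetic duration data; the paper's actual likelihood, which links Weibull functions to condition-state percentages, is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = stats.weibull_min.rvs(2.0, scale=12.0, size=200, random_state=rng)

def log_post(shape, scale):
    """Log-posterior with a flat prior on the positive quadrant."""
    if shape <= 0 or scale <= 0:
        return -np.inf
    return np.sum(stats.weibull_min.logpdf(data, shape, scale=scale))

theta = np.array([1.0, 10.0])                  # initial (shape, scale)
lp = log_post(*theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.1, 0.5]) # Gaussian random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis-Hastings acceptance
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

print(np.mean(samples[1000:], axis=0))         # posterior mean after burn-in
```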

Keywords: bridge deterioration modelling, modified weibull approach, MCMC, metropolis-hastings algorithm, bayesian approach, Markov deterioration models

Procedia PDF Downloads 727
2432 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process

Authors: Hong-Ming Chen

Abstract:

This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on the time t. The Brownian motion and the jump uncertainty are represented by integral functions: a piecewise constant function w(t) and a point function θ(t), respectively. It is shown that the interest rate function and the yield function of the Hull and White model with jumps can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
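>
For reference, the underlying dynamics being approximated are those of a Hull-White short rate with jumps, dr = (θ(t) − a·r) dt + σ dW + dJ. The sketch below is a standard Euler simulation of such a path under assumed parameters; the paper itself replaces the stochastic terms with deterministic perturbations inside a semi-infinite program, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma, lam, jump_sd = 0.1, 0.01, 0.5, 0.005   # assumed model parameters
theta = lambda t: 0.03 * a                        # flat long-run drift target
T, n = 5.0, 1000
dt = T / n

r = np.empty(n + 1)
r[0] = 0.02
for i in range(n):
    t = i * dt
    # Compound Poisson jump: occurs with probability lam*dt per step.
    jump = rng.normal(0.0, jump_sd) if rng.random() < lam * dt else 0.0
    r[i + 1] = (r[i] + (theta(t) - a * r[i]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal() + jump)

print(r.mean(), r.min(), r.max())   # path statistics of the short rate
```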

Keywords: optimization, interest rate model, jump process, deterministic

Procedia PDF Downloads 161
2431 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers used. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach and compare generalization power; the subset is the same across all training and testing for each model, so the performance of the different models on unseen data can be compared in terms of generalization. The best CNN (AlexNet), with the appropriate loss function and optimizer, results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also examines the models' parameter counts and mean average error rates to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. A practical anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance; for our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to the final model.
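
The experimental grid the abstract describes amounts to training the same backbone under different loss/optimizer pairs and comparing one held-out evaluation. A PyTorch sketch with placeholder data, a toy model, and a hinge-style stand-in for the paper's exact loss set:

```python
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64),
                         nn.ReLU(), nn.Linear(64, 2))   # live vs. spoof

x = torch.randn(256, 1, 32, 32)          # stand-in fingerprint patches
y = torch.randint(0, 2, (256,))

losses = {"cross_entropy": nn.CrossEntropyLoss(),
          "hinge": nn.MultiMarginLoss()}               # multi-class hinge
optims = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
          "rmsprop": torch.optim.RMSprop, "nadam": torch.optim.NAdam}

for lname, criterion in losses.items():
    for oname, opt_cls in optims.items():
        model = make_model()
        opt = opt_cls(model.parameters(), lr=1e-3)
        for _ in range(20):                             # short training loop
            opt.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            opt.step()
        acc = (model(x).argmax(1) == y).float().mean().item()
        print(f"{lname}/{oname}: train acc {acc:.2f}")
```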

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
2430 An E-Retailing System Architecture Based on Cloud Computing

Authors: Chanchai Supaartagorn

Abstract:

E-retailing is the sale of goods that takes place over the Internet, which has shrunk the entire world. Worldwide, e-retailing is growing at an exponential rate in the Americas, Europe, and Asia. However, e-retailing requires expensive investments, such as hardware, software, and security systems. Cloud computing technology is internet-based computing for the management and delivery of applications and services, and cloud-based e-retailing application models allow enterprises to lower their costs through the effective implementation of e-retailing activities. In this paper, we describe the concept of cloud computing and present an architecture for cloud computing that incorporates the features of e-retailing. In addition, we propose a strategy for implementing cloud computing with e-retailing. Finally, we explain the benefits of the architecture.

Keywords: architecture, cloud computing, e-retailing, internet-based

Procedia PDF Downloads 396
2429 Morphostructural Characterization of Zinc and Manganese Nano-Oxides

Authors: Adriana-Gabriela Plaiasu, Catalin Marian Ducu

Abstract:

Interest in the unique properties associated with materials having structures on the nanometer scale has been increasing at an exponential rate over the last decade. Among functional mineral compounds such as perovskite (CaTiO3), rutile (TiO2), CaF2, spinel (MgAl2O4), and wurtzite (ZnS), zincite (ZnO) and cupric oxide (CuO) have been used in numerous applications such as catalysis, semiconductors, batteries, gas sensors, biosensors, field transistors, and medicine. Solar Physical Vapor Deposition (SPVD), presented in this paper as the elaboration method, is an original process for preparing nanopowders that works under concentrated sunlight in 2 kW solar furnaces. The influence of the synthesis parameters on the chemical and microstructural characteristics of the synthesized zinc and manganese oxide nanophases has been systematically studied using XRD, TEM, and SEM.

Keywords: characterization, morphological, nano-oxides, structural

Procedia PDF Downloads 278
2428 Key Frame Based Video Summarization via Dependency Optimization

Authors: Janya Sainui

Abstract:

With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function, maximizing the coverage of the entire video content while minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed approach, which produces video summaries with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
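
The coverage-versus-redundancy trade-off can be sketched with a greedy selector; a simple histogram similarity stands in below for the quadratic mutual information dependency measure used in the paper, and the trade-off weight is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((120, 64))                  # 120 frames as feature histograms

def sim(a, b):
    """Histogram-intersection similarity, a stand-in for the QMI measure."""
    return np.minimum(a, b).sum() / max(a.sum(), 1e-12)

def select_keyframes(frames, k=5, lam=0.5):
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(frames)):
            if i in chosen:
                continue
            coverage = np.mean([sim(frames[i], f) for f in frames])
            redundancy = max((sim(frames[i], frames[j]) for j in chosen), default=0.0)
            score = coverage - lam * redundancy  # reward coverage, punish overlap
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return sorted(chosen)

print(select_keyframes(frames))
```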

Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information

Procedia PDF Downloads 266
2427 Path Integrals and Effective Field Theory of Large Scale Structure

Authors: Revant Nayar

Abstract:

In this work, we recast the equations describing large scale structure, and by extension all nonlinear fluids, in the path integral formalism. We first calculate the well-known two- and three-point functions using the Schwinger-Keldysh formalism, commonly used to perturbatively solve path integrals in non-equilibrium systems. We then include EFT corrections due to pressure, viscosity, and noise as effects on the time-dependent propagator. We are able to express results for arbitrary two- and three-point correlation functions in LSS in terms of differential operators acting on a triple-K master integral. We also, for the first time, obtain analytical results for more general initial conditions deviating from the usual power law P ∝ kⁿ by introducing a mass scale in the initial conditions. This robust field-theoretic formalism empowers us with tools from strongly coupled QFT, such as the OPE and holographic duals, to study the strongly non-linear regime of LSS and turbulent fluid dynamics. These could be used to capture fully the strongly non-linear dynamics of fluids and move towards solving the open problem of classical turbulence.

Keywords: quantum field theory, cosmology, effective field theory, renormalisation

Procedia PDF Downloads 135
2426 Self-denigration in Doctoral Defense Sessions: Scale Development and Validation

Authors: Alireza Jalilifar, Nadia Mayahi

Abstract:

The dissertation defense, as a complicated, conflict-prone context, entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumes 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. The confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants' values, ideas, and practices, adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and illuminates further research on self-denigration in other contexts.

Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense

Procedia PDF Downloads 137
2425 Performance Evaluation of Content Based Image Retrieval Using Indexed Views

Authors: Tahir Iqbal, Mumtaz Ali, Syed Wajahat Kareem, Muhammad Harris

Abstract:

Digital information is expanding at an exponential rate in our lives. Information residing online and offline is stored in huge repositories relating to every aspect of our lives, and getting the required information is the task of retrieval systems. Content based image retrieval (CBIR) is a retrieval system that retrieves the required information from repositories on the basis of the contents of images. Time is a critical factor in retrieval systems, and using indexed views with a CBIR system improves the time efficiency of the retrieved results.

Keywords: content based image retrieval (CBIR), indexed view, color, image retrieval, cross correlation

Procedia PDF Downloads 470
2424 A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching

Authors: Yuan Zheng

Abstract:

3D model-based vehicle matching provides a new way to perform vehicle recognition, localization, and tracking. Its key is to construct an evaluation function, also called a fitness function, to measure the degree of vehicle matching. Existing fitness functions often perform poorly when clutter and occlusion are present in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting pixel gradient information as well as silhouette information. In view of the discrepancy between the 3D vehicle model and the real vehicle, a weighting strategy is introduced to treat the fitting of the model's wireframes differently. Additionally, a normalization operation for the model's projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to cluttered backgrounds and partial occlusion.
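
A schematic version of such a fitness function combines a local gradient term evaluated on the projected wireframe pixels with a global silhouette-overlap term; the per-wireframe weights, the mixing coefficient, and the normalization by projection length below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def fitness(grad_mag, wire_pixels, wire_weights, model_mask, det_mask, mu=0.5):
    # Local term: weighted mean gradient magnitude on the wireframe pixels,
    # normalized by the total weight so larger projections are not favoured.
    rows, cols = wire_pixels[:, 0], wire_pixels[:, 1]
    local = np.sum(wire_weights * grad_mag[rows, cols]) / np.sum(wire_weights)
    # Global term: intersection-over-union of model and detected silhouettes.
    inter = np.logical_and(model_mask, det_mask).sum()
    union = np.logical_or(model_mask, det_mask).sum()
    silhouette = inter / max(union, 1)
    return mu * local + (1.0 - mu) * silhouette

grad = np.random.rand(48, 64)                       # stand-in gradient image
pix = np.array([[10, 12], [10, 13], [11, 14]])      # projected wireframe pixels
wts = np.array([1.0, 1.0, 0.5])                     # down-weight uncertain edges
mask_m = np.zeros((48, 64), bool); mask_m[8:20, 10:30] = True
mask_d = np.zeros((48, 64), bool); mask_d[9:21, 11:31] = True
print(fitness(grad, pix, wts, mask_m, mask_d))
```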

Keywords: 3D-2D matching, fitness function, 3D vehicle model, local image gradient, silhouette information

Procedia PDF Downloads 399
2423 An Interpolation Tool for Data Transfer in Two-Dimensional Ice Accretion Problems

Authors: Marta Cordero-Gracia, Mariola Gomez, Olivier Blesbois, Marina Carrion

Abstract:

One of the difficulties in icing simulations arises for extended periods of exposure, when very large ice shapes are created. As well as being large, these shapes can be complex, such as a double horn. In icing simulations, such configurations are currently computed in several steps: the icing step is stopped when the ice shapes become too large, at which point a new mesh has to be created to allow further CFD and ice growth simulations to be performed. This can be very costly and is a limiting factor in the simulations that can be performed. A way to avoid the costly human intervention in the re-meshing step of multistep icing computation is to use mesh deformation instead of re-meshing. The aim of the present work is to apply an interpolation method based on Radial Basis Functions (RBF) to transfer deformations from the surface mesh to the volume mesh. This deformation tool has been developed specifically for icing problems and is able to deal with localized, sharp, and large deformations, unlike the tools traditionally used for smoother wing deformations. The tool is presented along with its validation on typical two-dimensional icing shapes.
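
The surface-to-volume transfer can be sketched with SciPy's RBF interpolator: displacements prescribed on surface nodes are interpolated smoothly to the interior volume nodes. The kernel choice and the synthetic geometry below are illustrative; the production tool's kernel and point-selection strategy are not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
surface = rng.random((200, 2))            # 2D surface (boundary) node coordinates
# Prescribed surface displacements, e.g., from ice growth on an airfoil.
disp = np.column_stack([0.05 * np.sin(4 * surface[:, 0]),
                        0.05 * np.cos(4 * surface[:, 1])])

volume = rng.random((2000, 2))            # interior volume node coordinates
rbf = RBFInterpolator(surface, disp, kernel="thin_plate_spline")
volume_disp = rbf(volume)                 # smooth deformation of the volume mesh

deformed = volume + volume_disp
print(volume_disp.shape, np.abs(volume_disp).max())
```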

Keywords: ice accretion, interpolation, mesh deformation, radial basis functions

Procedia PDF Downloads 313
2422 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography

Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun

Abstract:

A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. A number of methods were adopted in the study: an extensive review of related works was done in order to identify the factors associated with child pornography, after which the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables alongside the risk of the occurrence of child pornography, based on inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox available in MATLAB Release 2016. The results of the study showed that there were 4 categories of risk factors required for assessing the risk of a suspect committing child pornography offenses, and that triangular membership functions with 2 and 3 labels were used to formulate the risk factors, according to the number of labels assigned to each. Five fuzzy logic models were formulated, such that the first 4 assess the impact of each category on child pornography, while the last one takes the outputs of the 4 models as the inputs required for assessing the overall risk of child pornography. The following conclusion was made: factors related to the personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of suspects are required for the assessment of the risk of child pornography crimes. Using the values of the identified risk factors selected for this study, the risk of child pornography can easily be assessed in order to determine the likelihood of a suspect perpetrating the crime.
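
The triangular membership functions mentioned above are straightforward to express in plain NumPy (the study itself used MATLAB's Fuzzy Logic Toolbox); the label names and breakpoints below are illustrative, not the validated model's parameters.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership: rises from a to a peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# A 3-label risk factor (low / mild / high) on a normalized [0, 1] scale.
labels = {"low": (-0.5, 0.0, 0.5), "mild": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

score = np.array([0.62])                 # a suspect's normalized factor score
for name, (a, b, c) in labels.items():
    print(name, trimf(score, a, b, c))   # degree of membership in each label
```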

Keywords: fuzzy, membership functions, pornography, risk factors

Procedia PDF Downloads 129
2421 Trainability of Executive Functions during Preschool Age: Analysis of Inhibition of 5-Year-Old Children

Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga

Abstract:

Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to cognitive processes. By controlling, modulating, and coordinating sub-processes, they make it possible to achieve superordinate goals. Major components include working memory, inhibition, and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability at preschool age. Methodology: This quasi-experimental study with pre- and post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, the children from the intervention group participated in special games that train executive functions (i.e., changing the rules of the game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen Flanker Task was employed in order to analyze the participants' inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always the fish placed in the central position among five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show more improvement in the two parameters than children from the control group. Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). The comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1%, post: 95.9%, p<.01), whereas no significant improvement was found for the control group (pre: 88.8%, post: 92.9%). Conversely, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9%, post: 83.5%), while the control group revealed a significant difference (pre: 68.9%, post: 80.3%, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group's low pre-test scores. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.

Keywords: executive functions, flanker task, inhibition, preschool children

Procedia PDF Downloads 253