Search results for: Loss given default.
983 Simultaneous Term Structure Estimation of Hazard and Loss Given Default with a Statistical Model using Credit Rating and Financial Information
Authors: Tomohiro Ando, Satoshi Yamashita
Abstract:
The objective of this study is to propose a statistical modeling method which enables simultaneous term structure estimation of the risk-free interest rate, hazard and loss given default, incorporating characteristics of the bond-issuing company such as credit rating and financial information. A reduced-form model is used for this purpose. Statistical techniques such as spline estimation and the Bayesian information criterion are employed for parameter estimation and model selection. An empirical analysis is conducted using Japanese bond market data. The results of the empirical analysis confirm the usefulness of the proposed method.
Keywords: Empirical Bayes, Hazard term structure, Loss given default.
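As a rough illustration of the reduced-form setup described above, the sketch below (Python, with a hypothetical piecewise-constant hazard curve and a fixed loss given default, not the authors' spline/Bayesian estimator) prices a defaultable coupon bond: the hazard term structure enters through survival probabilities and the LGD through the recovery leg.

    import numpy as np

    # Hypothetical term structures on a semiannual grid (illustrative values only).
    times = np.arange(0.5, 5.5, 0.5)                  # payment dates in years
    r = np.full_like(times, 0.01)                     # risk-free short rate
    h = np.interp(times, [0.5, 5.0], [0.005, 0.02])   # hazard rate rising with maturity
    lgd = 0.6                                         # loss given default (1 - recovery rate)
    coupon, face = 0.02 / 2, 1.0                      # semiannual coupon on a 2% bond, unit face

    # Survival probability S(t) = exp(-integral of h), discount factor D(t) = exp(-integral of r).
    dt = np.diff(np.concatenate(([0.0], times)))
    surv = np.exp(-np.cumsum(h * dt))
    disc = np.exp(-np.cumsum(r * dt))

    # Price = discounted coupons and face while surviving + recovery paid on default.
    default_prob = np.concatenate(([1.0], surv[:-1])) - surv   # P(default in each period)
    price = np.sum(disc * surv * coupon) + disc[-1] * surv[-1] * face \
            + np.sum(disc * default_prob * (1.0 - lgd) * face)
    print(f"model price of the defaultable bond: {price:.4f}")

In the paper itself the hazard and LGD curves are estimated jointly from market data rather than fixed as above.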
982 Join and Meet Block Based Default Definite Decision Rule Mining from IDT and an Incremental Algorithm
Authors: Chen Wu, Jingyu Yang
Abstract:
Using maximal consistent blocks of the tolerance relation on the universe of an incomplete decision table, the concepts of join block and meet block are introduced and studied. Other blocks of an object, such as the tolerance class, tolerant kernel and compatible kernel, are also discussed. Upper and lower approximations based on those blocks are defined as well. Default definite decision rules acquired from an incomplete decision table are proposed in the paper. An incremental algorithm to update default definite decision rules is suggested for effective mining from an incomplete decision table to which data are appended. Through an example, we demonstrate how default definite decision rules based on maximal consistent blocks, join blocks and meet blocks are acquired, and how optimization is done with the support of the discernibility matrix and discernibility function in the incomplete decision table.
Keywords: rough set, incomplete decision table, maximal consistent block, default definite decision rule, join and meet block.
981 Cash Flow Optimization on Synthetic CDOs
Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet
Abstract:
Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge to optimize the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios, in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated on a single bought or sold tranche but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the default correlation and the maturities. The Gaussian model used is not realistic in crisis situations. Moreover, the present system does not handle buying or selling a portion of a tranche, only the whole tranche. Nevertheless, the work provides the investor with relevant elements on what to buy and sell, and when.
Keywords: Synthetic Collateralized Debt Obligation (CDO), Credit Default Swap (CDS), Cash Flow Optimization, Probability of Default, Default Correlation, Strategies, Simulation, Simplex.
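A minimal sketch of the one-factor Gaussian default model mentioned above, followed by a toy linear-programming stand-in for the simplex step (illustrative parameters only; the actual tranche cash-flow structure is not reproduced):

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n_names, n_sims, p, rho = 100, 20_000, 0.02, 0.3   # portfolio size, paths, PD, default correlation

    # One-factor Gaussian copula: X_i = sqrt(rho)*M + sqrt(1-rho)*Z_i, default if X_i < Phi^-1(p).
    M = rng.standard_normal((n_sims, 1))
    Z = rng.standard_normal((n_sims, n_names))
    defaults = (np.sqrt(rho) * M + np.sqrt(1 - rho) * Z < norm.ppf(p)).sum(axis=1)
    print("mean / 99th percentile number of defaults:", defaults.mean(), np.percentile(defaults, 99))

    # Toy simplex step: choose tranche weights w maximizing expected cash flow c @ w
    # under a unit budget (linprog minimizes, so the objective is negated).
    c = np.array([0.05, 0.03, 0.01])   # hypothetical expected cash flows per tranche
    res = linprog(-c, A_ub=[[1, 1, 1]], b_ub=[1], bounds=[(0, 1)] * 3, method="highs")
    print("optimal tranche weights:", res.x)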
980 Enhancement Approaches for Supporting Default Hierarchies Formation for Robot Behaviors
Authors: Saeed Mohammed Baneamoon, Rosalina Abdul Salam
Abstract:
Robotics is an important area of artificial intelligence that aims at improving the performance of robots and making them more efficient and effective in choosing the correct behavior. In this paper, a distributed learning classifier system is used to design a simulated control system for a robot performing complex behaviors. A set of enhanced approaches that support the formation of default hierarchies is suggested, and the approaches are compared with each other in order to make the simulated robot more effective in mapping inputs to the correct output behavior.
Keywords: Learning Classifier System, Default Hierarchies, Robot Behaviors.
979 Measured versus Default Interstate Traffic Data in New Mexico, USA
Authors: M. A. Hasan, M. R. Islam, R. A. Tarefder
Abstract:
This study investigates how site-specific traffic data differ from the default values of the Mechanistic-Empirical Pavement Design software. Two Weigh-in-Motion (WIM) stations were installed on Interstate-40 (I-40) and Interstate-25 (I-25) to develop site-specific data. A computer program named WIM Data Analysis Software (WIMDAS) was developed using Microsoft C# (.NET) for quality checking and processing of raw WIM data. A complete year of data, from November 2013 to October 2014, was analyzed using the developed program. After that, the vehicle class distribution, directional distribution, lane distribution, monthly adjustment factors, hourly distribution, axle load spectra, average number of axles per vehicle, axle spacing, lateral wander distribution, and wheelbase distribution were calculated. A comparative study was then carried out between the measured data and the AASHTOWare default values. It was found that the measured general traffic inputs for I-40 and I-25 differ significantly from the default values.
Keywords: AASHTOWare, Traffic, Weigh-in-Motion, Axle Load Distribution.
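Traffic inputs of the kind listed above can be tabulated directly from cleaned WIM records. The sketch below (Python/pandas, with made-up column names and a tiny synthetic record set, not the WIMDAS program) shows how a vehicle class distribution and monthly adjustment factors might be computed.

    import pandas as pd

    # Hypothetical cleaned WIM records: one row per vehicle passage.
    wim = pd.DataFrame({
        "timestamp": pd.to_datetime(["2013-11-02 08:15", "2013-12-05 14:40",
                                     "2014-03-11 09:05", "2014-03-11 17:20"]),
        "vehicle_class": [9, 5, 9, 9],          # FHWA vehicle class
        "direction": ["EB", "EB", "WB", "EB"],
        "lane": [1, 2, 1, 1],
    })

    # Vehicle class distribution (percent of vehicles by class).
    class_dist = wim["vehicle_class"].value_counts(normalize=True).mul(100).round(1)
    print("vehicle class distribution (%):\n", class_dist)

    # Monthly adjustment factor = monthly average daily traffic / annual average daily traffic.
    daily = wim.set_index("timestamp").resample("D").size()
    monthly_avg_daily = daily.groupby(daily.index.to_period("M")).mean()
    maf = (monthly_avg_daily / daily.mean()).round(2)
    print("monthly adjustment factors:\n", maf)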
978 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We observe that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy, parameter counts and mean average error rates in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.
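For reference, the sketch below gives plain NumPy versions of three of the loss functions named above (cross-entropy, hinge and cosine proximity). It is only an illustration of the quantities being compared, not the authors' training code; Center Loss is omitted because it requires learned per-class centers.

    import numpy as np

    def cross_entropy(y_true, y_pred, eps=1e-12):
        """Categorical cross-entropy for one-hot targets and softmax outputs."""
        return -np.mean(np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=1))

    def hinge(y_true, y_pred):
        """Hinge loss with targets and scores in {-1, +1} space."""
        return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred))

    def cosine_proximity(y_true, y_pred, eps=1e-12):
        """Negative cosine similarity between target and prediction vectors."""
        num = np.sum(y_true * y_pred, axis=1)
        den = np.linalg.norm(y_true, axis=1) * np.linalg.norm(y_pred, axis=1) + eps
        return -np.mean(num / den)

    # Tiny example: two samples, two classes (live vs. spoof).
    y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
    y_pred = np.array([[0.8, 0.2], [0.3, 0.7]])
    print(cross_entropy(y_true, y_pred), hinge(2 * y_true - 1, 2 * y_pred - 1), cosine_proximity(y_true, y_pred))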
977 Optimal Allocation Between Subprime Structured Mortgage Products and Treasuries
Authors: M. P. Mulaudzi, M. A. Petersen, J. Mukuddem-Petersen, I. M. Schoeman, B. de Waal, J. M. Manale
Abstract:
This conference paper discusses a risk allocation problem for subprime investing banks involving investment in subprime structured mortgage products (SMPs) and Treasuries. In order to solve this problem, we develop a Lévy process-based model of jump diffusion type for investment choice in subprime SMPs and Treasuries. This model incorporates subprime SMP losses for which credit default insurance in the form of credit default swaps (CDSs) can be purchased. In essence, we solve a mean swap-at-risk (SaR) optimization problem for investment which determines optimal allocation between SMPs and Treasuries subject to credit risk protection via CDSs. In this regard, SaR is indicative of how much protection investors must purchase from swap protection sellers in order to cover possible losses from SMP default. Here, SaR is defined in terms of value-at-risk (VaR). Finally, we provide an analysis of the aforementioned optimization problem and its connections with the subprime mortgage crisis (SMC).
Keywords: Investors, Jump Diffusion Process, Structured Mortgage Products, Treasuries, Credit Risk, Credit Default Swaps, Tranching Risk, Counterparty Risk, Value-at-Risk, Swaps-at-Risk, Subprime Mortgage Crisis.
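To make the value-at-risk building block concrete, the sketch below simulates a simple Merton-style jump-diffusion return for a hypothetical SMP position and reads VaR off the simulated loss quantile. All parameters are invented, and the swap-at-risk optimization itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sims, horizon = 100_000, 1.0                      # one-year horizon
    mu, sigma = 0.04, 0.25                              # drift and diffusion volatility (hypothetical)
    jump_rate, jump_mean, jump_std = 0.5, -0.10, 0.15   # Poisson jump intensity and jump-size law

    # Merton-style jump diffusion: log-return = (mu - sigma^2/2)*T + sigma*sqrt(T)*Z + sum of jumps.
    n_jumps = rng.poisson(jump_rate * horizon, n_sims)
    jump_sum = rng.normal(jump_mean * n_jumps, jump_std * np.sqrt(n_jumps))
    log_ret = (mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * rng.standard_normal(n_sims) + jump_sum
    loss = 1.0 - np.exp(log_ret)                        # loss per unit invested in the SMP

    var_99 = np.quantile(loss, 0.99)
    print(f"99% one-year VaR per unit notional: {var_99:.3f}")
    # A SaR-style reading: CDS protection should at least cover this quantile of SMP losses.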
976 Vector Control Using Series Iron Loss Model of Induction Motors and Power Loss Minimization
Authors: Kheldoun Aissa, Khodja Djalal Eddine
Abstract:
The iron loss is a source of detuning in vector-controlled induction motor drives if the classical rotor vector controller is used for decoupling. In that case, the field orientation is not satisfied and the output torque does not track the reference torque mostly used by Loss Model Controllers (LMCs). In addition, this component of loss, among others, may be excessive if the vector-controlled induction motor is driving light loads. In this paper, the series iron loss model is used to develop a vector controller immune to the iron loss effect, and then an LMC to minimize the total power loss using the torque generated by the speed controller.
Keywords: Field Oriented Controller, Induction Motor, Loss Model Controller, Series Iron Loss.
975 Loss of P16/INK4A Protein Expression is a Common Abnormality in Hodgkin's Lymphoma
Authors: Fawzi Irshaid, Fatiha Dilmi, Khaled Tarawneh, Raji Hadeth, Adnan Jaran, Ahad Al-Khatib
Abstract:
P16/INK4A is a tumor suppressor protein that plays a critical role in cell cycle regulation. Loss of P16 protein expression has been implicated in the pathogenesis of many cancers, including lymphoma. Therefore, we sought to investigate whether loss of P16 protein expression is associated with lymphoma and/or any specific lymphoma subtype (Hodgkin's lymphoma (HL) and non-Hodgkin's lymphoma (NHL)). Fifty-five lymphoma cases, consisting of 30 cases of HL and 25 cases of NHL with an age range of 3 to 78 years, were examined for loss of P16 by an immunohistochemical technique using a specific antibody against P16. In total, P16 loss was seen in 33% of all lymphoma cases. P16 loss was identified in 47.7% of HL cases; in contrast, only 16% of NHL cases showed loss of P16. Loss of P16 was seen in 67% of HL patients aged 50 years or older, whereas P16 loss was found in only 42% of HL patients younger than 50 years. P16 loss in HL was somewhat higher in males (55%) than in females (30%). Among the HL subtypes, P16 loss was found in all cases of lymphocyte depletion, lymphocyte predominance and unclassified cases, whereas P16 loss was seen in 39% of mixed cellularity and 29% of nodular sclerosis cases. In low-grade NHL patients, P16 loss was seen in approximately one third of cases, whereas no or very rare P16 loss was found in intermediate- and high-grade cases. P16 loss did not show any correlation with the age or gender of NHL patients. In conclusion, the high rate of P16 loss seen in our study suggests that loss of P16 expression plays a critical role in the pathogenesis of lymphoma, particularly HL.
Keywords: B-cells, immunostaining, P16 protein, Reed-Sternberg cells, tumors.
974 Transmission Loss Allocation via Loss Function Decomposition and Current Projection Concept
Authors: M.R. Ebrahimi, Z. Ghofrani, M. Ehsan
Abstract:
One of the major problems in liberalized power markets is loss allocation. In this paper, a different method for allocating transmission losses to pool market participants is proposed. The proposed method is fundamentally based on the decomposition of the loss function and the current projection concept. The method has been implemented and tested on several networks, and one sample is summarized in the paper. The results show that the method is comprehensive and fair in allocating the energy losses of a power market to its participants.
Keywords: Transmission loss, loss allocation, current projection concept, loss function decomposition.
973 Italian Central Guarantee Fund: An Analysis of the Guaranteed SMEs’ Default Risk
Authors: M. C. Arcuri, L. Gai, F. Ielasi
Abstract:
The Italian Central Guarantee Fund (CGF) has the purpose of facilitating Small and Medium-sized Enterprises (SMEs)’ access to credit. The aim of the paper is to study the evaluation method adopted by the CGF with regard to SMEs requiring its intervention. This is even more important in light of the recent CGF reform. We analyse an initial sample of more than 500,000 guarantees from 2012 to 2018. We distinguish between a counter-guarantee delivered to a mutual guarantee institution and a guarantee directly delivered to a bank. We investigate the impact of variables related to the operations and the SMEs on the Altman Z’’-score and on the score consistent with the CGF methodology. We verify that the type of intervention affects the scores and that the initial condition changes with the new assessment criteria.
Keywords: Banks, default risk, Italian Guarantee Fund, mutual guarantee institutions.
972 Using Vulnerability to Reduce False Positive Rate in Intrusion Detection Systems
Authors: Nadjah Chergui, Narhimene Boustia
Abstract:
Intrusion Detection Systems (IDSs) are an essential tool for network security infrastructure. However, IDSs have a serious problem: they generate a massive number of alerts, most of them false positives, which can hide true alerts and leave the analyst confused when trying to identify the right alerts for reporting true attacks. The purpose of this paper is to present a formal model for a correlation engine that reduces false positive alerts based on vulnerability contextual information. To that end, we propose a model based on the non-monotonic description logic JClassicδє, augmented with a default (δ) and an exception (є) operator, which allows dynamic inference according to contextual information.
Keywords: Context, exception, default, IDS, non-monotonic description logic JClassicδє, vulnerability.
971 Modeling Default Probabilities of the Chosen Czech Banks in the Time of the Financial Crisis
Authors: Petr Gurný
Abstract:
One of the most important tasks in risk management is the correct determination of the probability of default (PD) of particular financial subjects. In this paper, the possibility of determining a financial institution’s PD by means of credit-scoring models is discussed. The paper is divided into two parts. The first part is devoted to the estimation of three different models (based on linear discriminant analysis, logit regression and probit regression) from a sample of almost three hundred US commercial banks. Afterwards, these models are compared and verified on a control sample with a view to choosing the best one. The second part of the paper is aimed at the application of the chosen model to a portfolio of three key Czech banks to estimate their present financial stability. However, it is no less important to be able to estimate the evolution of PD in the future. For this reason, the second task in this paper is to estimate the probability distribution of the future PD for the Czech banks. The values of particular indicators are sampled randomly and the PDs’ distribution is estimated, while it is assumed that the indicators are distributed according to a multidimensional subordinated Lévy model (the Variance Gamma model and the Normal Inverse Gaussian model, in particular). Although the obtained results show that all banks are relatively healthy, there is still a high chance that “a financial crisis” will occur, at least in terms of probability. This is indicated by the estimation of various quantiles of the estimated distributions. Finally, it should be noted that the applicability of the estimated model (with respect to the data used) is limited to the recessionary phase of the financial market.
Keywords: Credit-scoring Models, Multidimensional Subordinated Lévy Model, Probability of Default.
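A hedged sketch of the credit-scoring step described above: a logit model mapping bank financial indicators to a probability of default. The data and feature names below are synthetic placeholders; the paper's discriminant and probit variants and the subordinated Lévy simulation are not reproduced.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Synthetic bank indicators: capital ratio, ROA, NPL ratio (purely illustrative).
    n = 300
    X = np.column_stack([rng.normal(0.12, 0.03, n), rng.normal(0.01, 0.01, n), rng.normal(0.05, 0.02, n)])
    # Synthetic default flag: weaker capital and higher NPLs raise the default odds.
    logit = -2.0 - 25 * (X[:, 0] - 0.12) - 80 * X[:, 1] + 30 * (X[:, 2] - 0.05)
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = LogisticRegression().fit(X, y)

    # PD for a hypothetical bank with given indicator values.
    bank = np.array([[0.10, 0.005, 0.08]])
    print(f"estimated probability of default: {model.predict_proba(bank)[0, 1]:.3f}")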
970 Quantification of Methane Emissions from Solid Waste in Oman Using IPCC Default Methodology
Authors: Wajeeha A. Qazi, Mohammed-Hasham Azam, Umais A. Mehmood, Ghithaa A. Al-Mufragi, Noor-Alhuda Alrawahi, Mohammed F. M. Abushammala
Abstract:
Municipal Solid Waste (MSW) disposed of in landfill sites decomposes under anaerobic conditions and produces gases which mainly contain carbon dioxide (CO2) and methane (CH4). Methane has a global warming potential 25 times that of CO2 and can potentially affect human life and the environment. Thus, this research aims to determine MSW generation and the annual CH4 emissions from the generated waste in Oman over the years 1971-2030. The estimation of total waste generation was performed using existing models, while the CH4 emissions were estimated using the Intergovernmental Panel on Climate Change (IPCC) default method. It is found that total MSW generation in Oman might reach 3,089 Gg in the year 2030, producing approximately 85 Gg of CH4 emissions in that year.
Keywords: Methane, emissions, landfills, solid waste.
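For orientation, the IPCC default (mass-balance) method referenced above estimates landfill methane for a given year roughly as CH4 = (MSW_T x MSW_F x MCF x DOC x DOC_F x F x 16/12 - R) x (1 - OX). The sketch below implements this expression; the parameter values are generic illustrations, not the ones used in the study, so the printed figure will not match the 85 Gg quoted above.

    def ipcc_default_ch4(msw_t, msw_f=1.0, mcf=0.6, doc=0.15, doc_f=0.77, f=0.5, r=0.0, ox=0.0):
        """IPCC default (Tier 1) landfill CH4 estimate in Gg, given MSW generated in Gg.

        msw_t: total MSW generated; msw_f: fraction sent to landfills; mcf: methane correction
        factor; doc: degradable organic carbon fraction; doc_f: fraction of DOC dissimilated;
        f: CH4 fraction of landfill gas; r: recovered CH4; ox: oxidation factor.
        The default parameter values here are illustrative, not those of the study.
        """
        return (msw_t * msw_f * mcf * doc * doc_f * f * 16.0 / 12.0 - r) * (1.0 - ox)

    # Example with the 2030 MSW generation figure quoted above; the result depends strongly
    # on the chosen country-specific parameters.
    print(f"CH4 estimate for 3,089 Gg of MSW: {ipcc_default_ch4(3089):.0f} Gg")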
969 Economic Loss due to Ganoderma Disease in Oil Palm
Authors: K. Assis, K. P. Chong, A. S. Idris, C. M. Ho
Abstract:
Oil palm, or Elaeis guineensis, is considered the golden crop in Malaysia, but the oil palm industry in this country is now facing the most devastating disease, Ganoderma basal stem rot. The objective of this paper is to analyze the economic loss due to this disease. Three commercial oil palm sites were selected for collecting the data required for the economic analysis. The yield parameter used to measure the loss was the total weight of fresh fruit bunches over six months. The predictors include disease severity, change in disease severity, number of infected neighbouring palms, age of palm, planting generation, topography, and first-order interaction variables. The yield loss estimation model was identified using a backward elimination based regression method. Diagnostic checking was conducted on the residuals of the best yield loss model. The mean absolute percentage error (MAPE) was used to measure the forecast performance of the model. The best yield loss model was then used to estimate the economic loss using the current monthly price of fresh fruit bunches at the mill gate.
Keywords: Ganoderma, oil palm, regression model, yield loss, economic loss.
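The model-selection loop described above, backward elimination on an ordinary least squares yield-loss regression followed by a MAPE check, can be sketched as below with statsmodels on synthetic data. Predictor names, coefficients and the significance threshold are placeholders, not the study's.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 200
    X = pd.DataFrame({
        "severity": rng.uniform(0, 4, n),
        "infected_neighbors": rng.poisson(2, n),
        "palm_age": rng.uniform(5, 25, n),
        "noise": rng.normal(size=n),            # irrelevant predictor, expected to be eliminated
    })
    yield_ffb = 500 - 60 * X["severity"] - 10 * X["infected_neighbors"] + 2 * X["palm_age"] + rng.normal(0, 20, n)

    def backward_eliminate(X, y, alpha=0.05):
        """Drop the least significant predictor until all p-values are below alpha."""
        cols = list(X.columns)
        while cols:
            model = sm.OLS(y, sm.add_constant(X[cols])).fit()
            pvals = model.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] <= alpha:
                return model, cols
            cols.remove(worst)
        raise ValueError("all predictors were eliminated")

    model, kept = backward_eliminate(X, yield_ffb)
    pred = model.predict(sm.add_constant(X[kept]))
    mape = np.mean(np.abs((yield_ffb - pred) / yield_ffb)) * 100
    print("kept predictors:", kept, f"| MAPE = {mape:.1f}%")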
968 Modeling Salam Contract for Profit and Loss Sharing
Authors: Dchieche Amina, Aboulaich Rajae
Abstract:
Profit and loss sharing suggests an equitable sharing of risks and profits between the parties involved in a financial transaction. Salam is a contract in which advance payment is made for goods to be delivered at a future date. The purpose of this work is to price a new contract for profit and loss sharing based on the Salam contract, using Khiyar Al Ghabn, which is an agreement of choice in case of misrepresented facts.
Keywords: Islamic finance, Shariah compliance, profit and loss sharing, derivatives, risks, hedging, Salam contract.
967 Numerical Investigation on the Progressive Collapse Resistance of an RC Building with Brick Infills under Column Loss
Authors: Meng-Hao Tsai, Tsuei-Chiang Huang
Abstract:
Interior brick-infill partitions are usually considered as non-structural components, and only their weight is accounted for in practical structural design. In this study, their effect on the progressive collapse resistance of an RC building subjected to sudden column loss is investigated. Three notional column loss conditions with four different brick-infill locations are considered. Column-loss response analyses of the RC building with and without brick infills are carried out. The analysis results indicate that the collapse resistance is only slightly influenced by the brick infills due to their brittle failure characteristics. Even so, they may help to reduce the inelastic displacement response under column loss. For practical engineering, it is reasonably conservative to consider only the weight of brick-infill partitions in the structural analysis.
Keywords: Progressive collapse, column loss, brick-infill partition, compression strut.
966 Estimation of Bayesian Sample Size for Binomial Proportions Using Areas P-tolerance with Lowest Posterior Loss
Authors: H. Bevrani, N. Najafi
Abstract:
This paper uses p-tolerance with the lowest posterior loss, a quadratic loss function, the average length criterion, the average coverage criterion, and the worst outcome criterion for computing the sample size needed to estimate a proportion in the binomial probability function with a Beta prior distribution. The proposed methodology is examined, and its effectiveness is shown.
Keywords: Bayesian inference, Beta-binomial distribution, LPL criteria, quadratic loss function.
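As a hedged illustration of the average length criterion mentioned above: with a Beta(a, b) prior, the posterior after x successes in n trials is Beta(a + x, b + n - x), and one can search for the smallest n whose expected posterior interval length meets a target. The sketch below uses equal-tailed intervals, a uniform prior and an arbitrary target length, rather than the paper's p-tolerance regions with lowest posterior loss.

    import numpy as np
    from scipy import stats

    def expected_interval_length(n, a=1.0, b=1.0, level=0.95, n_draws=2000, seed=4):
        """Average posterior equal-tailed interval length over data sets drawn from the prior predictive."""
        rng = np.random.default_rng(seed)
        theta = rng.beta(a, b, n_draws)                # draws from the Beta prior
        x = rng.binomial(n, theta)                     # prior-predictive data sets
        lo = stats.beta.ppf((1 - level) / 2, a + x, b + n - x)
        hi = stats.beta.ppf(1 - (1 - level) / 2, a + x, b + n - x)
        return np.mean(hi - lo)

    target_length = 0.2
    n = 1
    while expected_interval_length(n) > target_length:
        n += 1
    print("smallest n meeting the average length criterion:", n)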
965 Investigation of Heat Loss in Ethanol-Water Distillation Column with Direct Vapour Recompression Heat Pump
Authors: Christopher C. Enweremadu, Hilary L. Rutto
Abstract:
Vapour recompression systems have been used to reduce energy consumption and improve the energy effectiveness of distillation columns. However, the effects of certain parameters have not been taken into consideration. One such parameter is the column heat loss, which has either been assumed to be a certain percentage of the reboiler heat transfer or neglected. The purpose of this study was to evaluate the heat loss from an ethanol-water vapour recompression distillation column with a pressure increase across the compressor (VRCAS), and to compare the results obtained, and their effect on some parameters, with a similar system (VRCCS) in which the column heat loss is assumed or neglected. The results show that the heat loss evaluated was higher than that obtained for the VRCCS column. The results also show that an increase in heat loss could have a significant effect on the total energy consumption, the reboiler heat transfer, the number of trays and the energy effectiveness of the column.
Keywords: Compressor, distillation column, heat loss, vapour recompression.
964 Exergy Analysis of a Cogeneration Plant
Authors: Derya Burcu Ozkan, Onur Kiziler, Duriye Bilge
Abstract:
Cogeneration may be defined as a system which produces electricity and recovers the thermal value of the exhaust gases simultaneously. The examination is based on the data of an active cogeneration plant. This study aims to determine which component of the system should be revised first to raise the efficiency and decrease the loss of exergy. For this purpose, the second law analysis of thermodynamics is applied to each component in order to account for the effects of environmental conditions and to take the quality of energy into consideration as well as its quantity. The exergy balance equations are produced and the exergy loss is calculated for each component. Exergy losses of 44.44% in the heat exchanger, 29.59% in the combustion chamber, 18.68% in the steam boiler, 5.25% in the gas turbine and 2.03% in the compressor are calculated.
Keywords: Cogeneration, Exergy loss, Second law analysis.
963 GA based Optimal Sizing and Placement of Distributed Generation for Loss Minimization
Authors: Deependra Singh, Devender Singh, K. S. Verma
Abstract:
This paper addresses a novel technique for the placement of distributed generation (DG) in electric power systems. A GA based approach for sizing and placement of DG, aimed at minimizing system power loss under different loading conditions, is explained. Minimal system power loss is obtained under voltage and line loading constraints. The proposed strategy is applied to power distribution systems and its effectiveness is verified through simulation results on 16-bus, 37-bus and 75-bus test systems.
Keywords: Distributed generation (DG), Genetic algorithms (GA), optimal sizing and placement, Power loss.
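A minimal genetic-algorithm skeleton in the spirit of the approach above: each chromosome encodes a candidate DG bus and size, and the fitness is a power-loss value. The loss function here is a stand-in quadratic surrogate rather than a load-flow calculation, and all parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    n_bus, pop_size, n_gen = 16, 40, 60

    def power_loss(bus, size_mw):
        """Surrogate loss (MW); in practice a load flow would be run for each candidate."""
        best_bus, best_size = 9, 2.5             # hypothetical optimum of the surrogate
        return 0.8 + 0.02 * abs(bus - best_bus) + 0.05 * (size_mw - best_size) ** 2

    # Chromosome = (bus index, DG size in MW).
    pop = np.column_stack([rng.integers(1, n_bus + 1, pop_size), rng.uniform(0.5, 5.0, pop_size)])

    for _ in range(n_gen):
        fitness = np.array([power_loss(int(b), s) for b, s in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]          # selection: keep the best half
        partners = parents[rng.permutation(len(parents))]
        children = np.column_stack([parents[:, 0], partners[:, 1]])  # crossover: mix bus and size genes
        children[:, 1] += rng.normal(0, 0.2, len(children))          # mutation on DG size
        flip = rng.random(len(children)) < 0.1
        children[flip, 0] = rng.integers(1, n_bus + 1, flip.sum())   # mutation on DG bus
        children[:, 1] = np.clip(children[:, 1], 0.5, 5.0)
        pop = np.vstack([parents, children])

    best = pop[np.argmin([power_loss(int(b), s) for b, s in pop])]
    print(f"best placement: bus {int(best[0])}, size {best[1]:.2f} MW")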
962 Method of Intelligent Fault Diagnosis of Preload Loss for Single Nut Ball Screws through the Sensed Vibration Signals
Authors: Yi-Cheng Huang, Yan-Chen Shin
Abstract:
This paper proposes a method for diagnosing ball screw preload loss through the Hilbert-Huang Transform (HHT) and a Multiscale Entropy (MSE) process. The proposed method can diagnose ball screw preload loss from vibration signals while the machine tool is in operation. Ball screws with maximum dynamic preloads of 2%, 4%, and 6% were predesigned, manufactured, and tested experimentally. Signal patterns are discussed and revealed using Empirical Mode Decomposition (EMD) with the Hilbert spectrum. Different preload features are extracted and discriminated using HHT. The development of irregularity in a ball screw with preload loss is determined and abstracted using MSE, based on complexity perception. Experimental results show that the proposed method can predict the status of ball screw preload loss. Smart sensing of the health of the ball screw is also possible, based on a comparative evaluation of MSE through the signal processing and pattern matching of EMD/HHT. The diagnosis method is both prognostically effective in detecting preload loss and convenient to use.
Keywords: Empirical Mode Decomposition, Hilbert-Huang Transform, Multiscale Entropy, Preload Loss, Single-nut Ball Screw.
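The multiscale entropy computation used above can be sketched directly: coarse-grain the vibration signal at several scales and compute the sample entropy at each scale, so that changes in complexity across scales flag growing irregularity. The code below is a compact NumPy version with illustrative parameters and a synthetic signal, not the authors' implementation, and the EMD/HHT stage is omitted.

    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Sample entropy SampEn(m, r) of a 1-D signal (simple O(N^2) reference implementation)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)

        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            return (dist <= r).sum() - len(templates)   # exclude self-matches

        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, max_scale=5, m=2, r_factor=0.2):
        """Coarse-grain the signal at scales 1..max_scale and compute SampEn at each scale."""
        mse = []
        for tau in range(1, max_scale + 1):
            n = len(x) // tau
            coarse = np.asarray(x[: n * tau]).reshape(n, tau).mean(axis=1)
            mse.append(sample_entropy(coarse, m, r_factor))
        return np.array(mse)

    # Example: a noisy sinusoid standing in for a ball screw vibration record.
    t = np.linspace(0, 1, 1000)
    signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(6).standard_normal(t.size)
    print("MSE curve over scales 1-5:", np.round(multiscale_entropy(signal), 3))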
961 DHT-LMS Algorithm for Sensorineural Loss Patients
Authors: Sunitha S. L., V. Udayashankara
Abstract:
Hearing impairment is the number one chronic disability affecting many people in the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Hartley Transform Power Normalized Least Mean Square (DHT-LMS) algorithm to improve the SNR and the convergence behaviour of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers and has the convenient property of being its own inverse. It can be used effectively for noise cancellation with less convergence time. The simulated results show superior characteristics, with an SNR improvement of at least 9 dB for an input SNR of zero dB and a faster convergence rate (eigenvalue ratio 12) compared to the time domain method and DFT-LMS.
Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.
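A hedged sketch of the adaptive noise cancellation idea behind the approach above: a basic power-normalized LMS filter removing correlated background noise from a noisy speech-like signal. The discrete Hartley transform appears only as a small helper computed from the FFT (real part minus imaginary part); the full DHT-domain update of the paper is not reproduced.

    import numpy as np

    def dht(x):
        """Discrete Hartley transform via the FFT: H(k) = Re{X(k)} - Im{X(k)} (self-inverse up to a factor N)."""
        X = np.fft.fft(x)
        return X.real - X.imag

    def nlms_cancel(primary, reference, n_taps=16, mu=0.5, eps=1e-8):
        """Normalized LMS: adapt a filter on the noise reference to cancel noise in the primary input."""
        w = np.zeros(n_taps)
        out = np.zeros_like(primary)
        for n in range(n_taps - 1, len(primary)):
            x = reference[n - n_taps + 1:n + 1][::-1]
            e = primary[n] - w @ x                 # error = estimate of the clean signal
            w += mu * e * x / (x @ x + eps)        # power-normalized weight update
            out[n] = e
        return out

    rng = np.random.default_rng(7)
    t = np.arange(8000) / 8000
    speech = np.sin(2 * np.pi * 300 * t) * (t % 0.25 < 0.15)             # crude speech-like tone bursts
    noise_ref = rng.standard_normal(t.size)
    noisy = speech + np.convolve(noise_ref, [0.6, 0.3, 0.1])[: t.size]   # reference noise coloured into the primary

    cleaned = nlms_cancel(noisy, noise_ref)
    snr = lambda s, x: 10 * np.log10(np.sum(s ** 2) / np.sum((x - s) ** 2))
    print(f"SNR before: {snr(speech, noisy):.1f} dB, after: {snr(speech, cleaned):.1f} dB")
    print("DHT self-inverse check:", np.allclose(dht(dht(np.arange(4.0))) / 4, np.arange(4.0)))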
960 Effective Relay Communication for Scalable Video Transmission
Authors: Jung Ah Park, Zhijie Zhao, Doug Young Suh, Joern Ostermann
Abstract:
In this paper, we propose an effective relay communication scheme for layered video transmission as an alternative that makes the most of limited resources in a wireless communication network where loss often occurs. Relaying brings stable multimedia services to end clients, compared to multiple description coding (MDC). In addition, retransmission from the relay device to the end client of only the parity data of one or more video layers, produced by a channel coder, is paramount to robustness in loss situations. Using these methods in resource-constrained environments, such as real-time user created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment, and minimal services remain possible. The mathematical analysis shows that the proposed method reduces the GOP loss rate compared to MDC and to a raptor code without relay. The GOP loss rate is about zero, while MDC and the raptor code without relay have GOP loss rates of 36% and 70%, respectively, at a 10% frame loss rate.
Keywords: Relay communication, Multiple Description Coding, Scalable Video Coding.
959 On the Network Packet Loss Tolerance of SVM Based Activity Recognition
Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir
Abstract:
In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss using 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on the reliability and on multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM based classification algorithm has not been studied before.
Keywords: Activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss.
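A small sketch of the evaluation idea above: train an SVM on windowed 3D-acceleration features and measure accuracy as an increasing fraction of test samples is randomly dropped (here replaced by zero readings). The data are synthetic stand-ins for the four activities; this is not the study's data set or feature pipeline.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(8)

    # Synthetic windowed features per activity: mean and standard deviation of each acceleration axis.
    n_per_class = 200
    means = {"sitting": [0.2, 0.0, 0.98], "lying": [0.98, 0.0, 0.2],
             "walking": [0.0, 0.0, 1.0], "standing": [0.0, 0.0, 1.0]}
    stds = {"sitting": 0.05, "lying": 0.05, "walking": 0.5, "standing": 0.08}
    X, y = [], []
    for label, mean in means.items():
        feats = np.column_stack([
            rng.normal(mean, 0.02, (n_per_class, 3)),                   # per-window mean of each axis
            np.abs(rng.normal(stds[label], 0.02, (n_per_class, 3))),    # per-window std of each axis
        ])
        X.append(feats)
        y.extend([label] * n_per_class)
    X, y = np.vstack(X), np.array(y)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)

    for loss_rate in [0.0, 0.2, 0.4, 0.6]:
        X_lossy = X_te.copy()
        dropped = rng.random(len(X_lossy)) < loss_rate
        X_lossy[dropped] = 0.0                        # lost windows arrive as empty readings
        print(f"data loss {loss_rate:.0%}: accuracy = {clf.score(X_lossy, y_te):.2f}")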
958 A Practical Scheme for Transmission Loss Allocation to Generators and Loads in Restructured Power Systems
Authors: M.R. Ebrahimi, M. Ehsan
Abstract:
This paper presents a practical scheme that can be used for allocating transmission losses to generators and loads. In this scheme, the share of a generator or load in the current through a branch is first determined using the modified Z-bus matrix. Then the current components are decomposed and the branch loss allocation is obtained. A motivation of the proposed scheme is to improve the results of the Z-bus method and to reach a fairer allocation. The proposed scheme has been implemented and tested on several networks. To achieve practical and applicable results, the proposed scheme is simulated and compared on the 400 kV transmission network of the Khorasan region in Iran and on the standard IEEE 14-bus network. The results show that the proposed scheme is comprehensive and fair in allocating the energy losses of a power market to its participants.
Keywords: Transmission Loss, Loss Allocation, Z-bus Modified Matrix, Current Components Decomposition, Restructured Power Systems.
957 Optimal Transmission Network Usage and Loss Allocation Using Matrices Methodology and Cooperative Game Theory
Authors: Baseem Khan, Ganga Agnihotri
Abstract:
Restructuring of the electricity supply industry introduced many issues such as transmission pricing, transmission loss allocation and congestion management. Many methodologies and algorithms have been proposed for addressing these issues. In this paper, a power flow tracing based method is proposed which uses a matrices methodology for transmission usage and loss allocation for generators and demands. This method provides loss allocation in a direct way because all the computation has already been done for the usage allocation. The proposed method is simple and easy to implement in a large power system. Furthermore, it is computationally light because it requires matrix inversion only a single time. After the usage and loss allocation, cooperative game theory is applied to the results to find efficient economic signals. The Nucleolus and Shapley value approaches are used for the optimal allocation of the results. Results are shown for the IEEE 6-bus and IEEE 14-bus systems.
Keywords: Modified Kirchhoff Matrix, Power flow tracing, Transmission Pricing, Transmission Loss Allocation.
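The cooperative-game step described above can be illustrated with a direct Shapley value computation over all orderings of the participants; the characteristic function below (the transmission loss attributable to each coalition of users) is a made-up three-player example, not a power flow tracing result.

    from itertools import permutations

    # Hypothetical characteristic function: transmission loss (MW) caused by each coalition of users.
    loss = {
        frozenset(): 0.0,
        frozenset({"G1"}): 2.0, frozenset({"G2"}): 3.0, frozenset({"L1"}): 4.0,
        frozenset({"G1", "G2"}): 4.5, frozenset({"G1", "L1"}): 5.5, frozenset({"G2", "L1"}): 6.5,
        frozenset({"G1", "G2", "L1"}): 8.0,
    }
    players = ["G1", "G2", "L1"]

    # Shapley value: average marginal contribution of each player over all join orders.
    shapley = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            shapley[p] += loss[coalition | {p}] - loss[coalition]
            coalition = coalition | {p}
    shapley = {p: round(v / len(orders), 3) for p, v in shapley.items()}
    print("Shapley loss allocation (MW):", shapley)
    # The allocations sum to the grand-coalition loss of 8.0 MW, the efficiency property.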
956 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames
Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim
Abstract:
Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the social and economic impacts resulting from seismic structural damage. In accordance with these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to the one adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a given structure using functions depending on the structural capacity, seismic demand, structural response and the probability of exceeding damage state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment resisting frame selected as a prototype building. The analysis results show that the probability of extensive structural damage in the prototype building is expected to be 0.01% in a year.
Keywords: Expected annual loss, Loss estimation, RC structure, Fragility analysis.
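The annual loss probability construction described above can be sketched numerically: a lognormal fragility curve P(damage | PGA) is integrated against the slope of a seismic hazard curve (annual frequency of exceedance versus PGA). The parameters below are illustrative placeholders, not those of the prototype frame.

    import numpy as np
    from scipy.stats import lognorm

    # Illustrative fragility for the extensive damage state: lognormal in PGA, median 0.6 g, beta 0.5.
    median_pga, beta = 0.6, 0.5
    fragility = lambda a: lognorm.cdf(a, s=beta, scale=median_pga)

    # Illustrative hazard curve: annual frequency of exceeding PGA a, lambda(a) = k0 * a**(-k).
    k0, k = 1e-4, 2.5
    hazard = lambda a: k0 * a ** (-k)

    # Annual probability of damage ~ integral of P(damage | a) * |d lambda / d a| da.
    a = np.linspace(0.05, 2.0, 2000)
    occurrence_density = -np.gradient(hazard(a), a)     # annual rate density of motions with intensity a
    annual_prob = np.sum(fragility(a) * occurrence_density) * (a[1] - a[0])
    print(f"annual probability of extensive damage: {annual_prob:.2e}")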
955 Delay and Packet Loss Analysis for Handovers between MANETs and NEMO Networks
Authors: Jirawat Thaenthong, Steven Gordon
Abstract:
MANEMO is the integration of Network Mobility (NEMO) and Mobile Ad Hoc Networks (MANET). A MANEMO node has an interface to both a MANET and a NEMO network, and therefore should choose the optimal interface for packet delivery; however, such a handover between interfaces will introduce packet loss. We define the steps necessary for a MANEMO handover, using Mobile IP and NEMO to signal the new binding to the relevant Home Agent(s). The handover steps aim to minimize the packet loss by avoiding waiting for Duplicate Address Detection and Neighbour Unreachability Detection. We present expressions for handover delay and packet loss, and then use numerical examples to evaluate a MANEMO handover. The analysis shows how the packet loss depends on the level of nesting within NEMO, the delay between Home Agents and the load on the MANET, and hence can be used to develop optimal MANEMO handover algorithms.
Keywords: IP mobility, handover, MANET, network mobility.
954 Dual Band Microstrip Patch Antenna for IEEE802.11b Application
Authors: Biplab Bag
Abstract:
In this paper, the design of a coaxially fed single layer rectangular microstrip patch antenna for IEEE 802.11b applications is presented. The proposed antenna is designed on an FR4_epoxy substrate having a permittivity of about 4.4 and a loss tangent of 0.013. The characteristics of the substrate are specified, and the performance of the modeled antenna is evaluated using the HFSS v.11 EM simulator from Ansoft. The proposed antenna achieves dual resonance in the bands 1.57 GHz-1.68 GHz (with 30 MHz bandwidth) and 2.25 GHz-2.55 GHz (with 40 MHz bandwidth). The simulation results for the frequency response, radiation pattern, return loss, VSWR and input impedance are presented in appropriate tables and graphs.
Keywords: Microstrip, Radiation Pattern, Return Loss, Loss Tangent, VSWR.
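For context, the standard transmission-line design equations for a rectangular patch (width from the substrate permittivity, effective permittivity, fringing length extension, then resonant length) are sketched below for the 2.4 GHz band on FR4. This is generic textbook sizing under an assumed substrate thickness, not the dual-band geometry of the paper.

    import math

    c = 3e8                      # speed of light, m/s
    f0 = 2.45e9                  # target resonant frequency, Hz (IEEE 802.11b band)
    eps_r, h = 4.4, 1.6e-3       # FR4 permittivity and an assumed substrate thickness of 1.6 mm

    # Patch width for efficient radiation.
    W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))
    # Effective permittivity accounting for fringing fields.
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    # Length extension due to fringing, then the physical patch length.
    dL = 0.412 * h * (eps_eff + 0.3) * (W / h + 0.264) / ((eps_eff - 0.258) * (W / h + 0.8))
    L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL

    print(f"patch width  W = {W * 1000:.1f} mm")
    print(f"patch length L = {L * 1000:.1f} mm (eps_eff = {eps_eff:.2f})")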