Search results for: discrete Fourier analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27718

27478 Blockchain’s Feasibility in Military Data Networks

Authors: Brenden M. Shutt, Lubjana Beshaj, Paul L. Goethals, Ambrose Kam

Abstract:

Communication security is of particular interest to military data networks. A relatively novel approach to network security is blockchain, a cryptographically secured distributed ledger with a decentralized consensus mechanism for data transaction processing. Recent advances in blockchain technology have proposed new techniques for both data validation and trust management, as well as different frameworks for managing dataflow. The purpose of this work is to test the feasibility of different blockchain architectures as applied to military command and control networks. Various architectures are tested through discrete-event simulation, and feasibility is determined by a blockchain design’s ability to maintain long-term stable performance at industry standards of throughput, network latency, and security. This work proposes a consortium blockchain architecture with a computationally inexpensive consensus mechanism, one that leverages a Proof-of-Identity (PoI) concept and a reputation management mechanism.
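
The throughput/latency trade-off examined in the abstract can be explored with a simple discrete-event simulation. The sketch below is a minimal illustration, not the authors' model: it assumes the SimPy library, and the arrival rate, validation time, and validator count are hypothetical placeholders. It models transactions queuing at a pool of validators performing a cheap PoI-style check and reports mean confirmation latency and throughput.

```python
import random
import simpy

RNG = random.Random(42)
ARRIVAL_RATE = 50.0      # transactions per second (hypothetical)
VALIDATION_TIME = 0.05   # mean seconds per cheap PoI-style validation (hypothetical)
N_VALIDATORS = 4
SIM_TIME = 60.0          # simulated seconds

latencies = []

def transaction(env, validators):
    """A single transaction: wait for a free validator, then get validated."""
    submitted = env.now
    with validators.request() as req:
        yield req
        yield env.timeout(RNG.expovariate(1.0 / VALIDATION_TIME))
    latencies.append(env.now - submitted)

def generator(env, validators):
    """Poisson arrivals of transactions."""
    while True:
        yield env.timeout(RNG.expovariate(ARRIVAL_RATE))
        env.process(transaction(env, validators))

env = simpy.Environment()
validators = simpy.Resource(env, capacity=N_VALIDATORS)
env.process(generator(env, validators))
env.run(until=SIM_TIME)

print(f"confirmed: {len(latencies)} tx, "
      f"throughput: {len(latencies) / SIM_TIME:.1f} tx/s, "
      f"mean latency: {1000 * sum(latencies) / len(latencies):.1f} ms")
```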

Keywords: blockchain, consensus mechanism, discrete-event simulation, fog computing

Procedia PDF Downloads 106
27477 Conventional Synthesis and Characterization of Zirconium Molybdate, Nd2Zr3(MoO4)9

Authors: G. Çelik Gül, F. Kurtuluş

Abstract:

Complex metal oxides containing rare earths have drawn much attention due to physical, chemical, and optical properties that make them useful in many areas, such as non-linear optical materials and ion exchangers. We have carried out a systematic study to obtain a rare-earth-containing zirconium molybdate compound, characterize it, investigate its crystal system, and calculate its unit cell parameters. After the successful synthesis of Nd2Zr3(MoO4)9, a member of the family of complex oxides containing rare earth metals, X-ray diffraction (XRD), High Score Plus/Rietveld refinement analysis, and Fourier transform infrared spectroscopy (FTIR) were carried out to determine the crystal structure. Morphological properties and elemental composition were determined by scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) analysis. Thermal properties were observed via thermogravimetric-differential thermal analysis (TG/DTA).

Keywords: Nd₂Zr₃(MoO₄)₉, powder x-ray diffraction, solid state synthesis, zirconium molybdates

Procedia PDF Downloads 366
27476 Petri Net Modeling and Simulation of a Call-Taxi System

Authors: T. Godwin

Abstract:

A call-taxi system is a type of taxi service where a taxi can be requested through a phone call or mobile app. The schematic functioning of a call-taxi system is modeled using a Petri net, which provides the necessary conditions for a taxi to be assigned by a dispatcher to pick up a customer, as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems. It uses tokens on a directed bipartite multigraph to simulate the activities of a system. The Petri net model is translated into a simulation model, and the call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri-net-based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.
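
As a rough illustration of the token-game semantics the abstract relies on, the following sketch fires enabled transitions of a tiny call-taxi net: a dispatcher assigns an idle taxi to a waiting call, and the customer later releases the taxi. This is not the authors' model; the place and transition names and the initial marking are hypothetical.

```python
# Minimal Petri net: places hold token counts, a transition fires when all of
# its input places carry enough tokens. Names are illustrative only.
places = {"idle_taxi": 3, "waiting_call": 2, "busy_taxi": 0, "served": 0}

transitions = {
    "assign":  {"in": {"idle_taxi": 1, "waiting_call": 1}, "out": {"busy_taxi": 1}},
    "release": {"in": {"busy_taxi": 1},                    "out": {"idle_taxi": 1, "served": 1}},
}

def enabled(name):
    return all(places[p] >= n for p, n in transitions[name]["in"].items())

def fire(name):
    for p, n in transitions[name]["in"].items():
        places[p] -= n
    for p, n in transitions[name]["out"].items():
        places[p] += n

# Simple token game: fire any enabled transition until none remains enabled.
step = 0
while True:
    fireable = [t for t in transitions if enabled(t)]
    if not fireable:
        break
    fire(fireable[0])
    step += 1
    print(f"step {step}: fired '{fireable[0]}', marking = {places}")
```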

Keywords: call-taxi, discrete event system, petri net, simulation modeling

Procedia PDF Downloads 399
27475 Experimental and Numerical Analysis of Mustafa Paşa Mosque in Skopje

Authors: Ozden Saygili, Eser Cakti

Abstract:

The masonry building stock in Istanbul and other cities of Turkey is exposed to significant earthquake hazard. Determination of the safety of masonry structures against earthquakes is a complex challenge. This study deals with experimental tests and non-linear dynamic analysis of masonry structures modeled through the discrete element method. A 1:10 scale model of Mustafa Paşa Mosque was constructed, and data were obtained from the sensors on it during its testing on the shake table. The results were used in the calibration/validation of the numerical model created on the basis of the 1:10 scale model built for shake table testing. A 3D distinct element model was developed that represents the linear and nonlinear behavior of the shake table model as closely as possible during the experimental tests. Results of the numerical analyses were compared with those from the experimental program and discussed.

Keywords: dynamic analysis, non-linear modeling, shake table tests, masonry

Procedia PDF Downloads 388
27474 Active Surface Tracking Algorithm for All-Fiber Common-Path Fourier-Domain Optical Coherence Tomography

Authors: Bang Young Kim, Sang Hoon Park, Chul Gyu Song

Abstract:

A conventional optical coherence tomography (OCT) system has limited imaging depth, around 1-2 mm, and suffers from unwanted noise such as speckle noise. A motorized-stage-based OCT system using a common-path Fourier-domain optical coherence tomography (CP-FD-OCT) configuration provides enhanced imaging depth and less noise, allowing these limitations to be overcome. Using this OCT system, OCT images were obtained from an onion, and their subsurface structure was observed. The images obtained using the developed motorized-stage-based system showed greater imaging depth than the conventional system, owing to its accurate real-time depth tracking. Consequently, the developed CP-FD-OCT system and algorithms have good potential for the further development of endoscopic OCT for microsurgery.
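
A hedged sketch of the depth-tracking idea: in FD-OCT the A-scan is obtained as the Fourier transform of the spectral interferogram, and the sample surface can be tracked by locating its peak and feeding the error back to the stage/reference position. The synthetic signal model, proportional gain, and target depth below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N = 2048                     # spectral samples per A-scan
k = np.linspace(0, 1, N)     # normalized wavenumber axis
true_depth_px = 300          # synthetic "surface" position in depth pixels
stage_offset = 0.0           # current stage/reference correction (pixels)
GAIN = 0.5                   # proportional gain of the tracking loop (illustrative)
TARGET = 256                 # depth pixel at which the surface should be kept

for scan in range(10):
    # Synthetic spectral interferogram: one reflector plus noise.
    fringe_freq = true_depth_px - stage_offset
    spectrum = 1 + 0.5 * np.cos(2 * np.pi * fringe_freq * k) + 0.05 * np.random.randn(N)

    # A-scan = magnitude of the FFT of the spectrum (DC component suppressed).
    ascan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
    surface_px = int(np.argmax(ascan))

    # Proportional correction keeps the surface near the target depth.
    error = surface_px - TARGET
    stage_offset += GAIN * error
    print(f"scan {scan}: surface at {surface_px} px, stage offset -> {stage_offset:.1f}")
```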

Keywords: common-path OCT, FD-OCT, OCT, tracking algorithm

Procedia PDF Downloads 354
27473 Investigating Selected Traditional African Medicinal Plants for Anti-fibrotic Potential: Identification and Characterization of Bioactive Compounds Through Fourier-Transform Infrared Spectroscopy and Gas Chromatography-Mass Spectrometry Analysis

Authors: G. V. Manzane, S. J. Modise

Abstract:

Uterine fibroids, also known as leiomyomas or myomas, are non-cancerous growths that develop in the muscular wall of the uterus during the reproductive years. The causes of uterine fibroids include hormonal, genetic, growth-factor, and extracellular-matrix factors. Common symptoms of uterine fibroids include heavy and prolonged menstrual bleeding, which can lead to a high risk of anemia, lower abdominal pain, pelvic pressure, infertility, and pregnancy loss. The growth of this tumor is a concern because of its negative impact on women’s health and the increase in their economic burden. Traditional medicinal plants have long been used in Africa for their potential therapeutic effects against various ailments. In this study, we aimed to identify and characterize bioactive compounds from selected African medicinal plants with potential anti-fibrotic properties using Fourier-transform infrared spectroscopy (FTIR) and gas chromatography-mass spectrometry (GCMS) analysis. Two medicinal plant species known for their traditional use in fibrosis-related conditions were selected for investigation. Aqueous extracts were prepared from the plant materials, and FTIR analysis was conducted to determine the functional groups present in the extracts. GCMS analysis was performed to identify the chemical constituents of the extracts. The FTIR analysis revealed the presence of various functional groups, such as phenols, flavonoids, terpenoids, and alkaloids, known for their potential therapeutic activities. These functional groups are associated with antioxidant, anti-inflammatory, and anti-fibrotic properties. The GCMS analysis identified several bioactive compounds, including flavonoids, alkaloids, terpenoids, and phenolic compounds, which are known for their pharmacological activities. The discovery of bioactive compounds with anti-fibrotic effects in African medicinal plants opens up promising avenues for further research and development of potential treatments for fibrosis. This suggests the potential of these plants as a valuable source of novel therapeutic agents for treating fibrosis-related conditions. In conclusion, our study identified and characterized bioactive compounds from selected African medicinal plants using FTIR and GCMS analysis. The presence of compounds with known anti-fibrotic properties suggests that these plants hold promise as a potential source of natural products for the development of novel anti-fibrotic therapies.

Keywords: uterine fibroids, african medicinal plants, bioactive compounds, identification and characterization

Procedia PDF Downloads 59
27472 A Hybrid Watermarking Model Based on Frequency of Occurrence

Authors: Hamza A. A. Al-Sewadi, Adnan H. M. Al-Helali, Samaa A. K. Khamis

Abstract:

Ownership proofs of multimedia such as text, image, audio, or video files can be achieved by embedding a watermark in them. This is done by introducing modifications into these files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications may be in the time domain, the frequency domain, or both. This paper presents a procedure for watermarking that mixes amplitude modulation with a frequency-transformation histogram; namely, a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison of the results with those of other techniques such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) has shown enhanced efficiency in terms of ease and performance. It has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise. This method would prove a very useful technique for copyright protection and ownership judgment.

Keywords: authentication, copyright protection, information hiding, ownership, watermarking

Procedia PDF Downloads 541
27471 Soil Macronutrients Sensing for Precision Agriculture Purpose Using Fourier Transform Infrared Spectroscopy

Authors: Hossein Navid, Maryam Adeli Khadem, Shahin Oustan, Mahmoud Zareie

Abstract:

Among the nutrients needed by plants, three elements, nitrate, phosphorus, and potassium, are the most important. The objective of this research was to measure the amounts of these nutrients in soil using Fourier transform infrared spectroscopy in the range of 400-4000 cm⁻¹. Soil samples of different soil types (sandy, clay, and loam) were collected from different areas of East Azerbaijan. Three types of fertilizers used in conventional farming (urea, triple superphosphate, and potassium sulphate) were used for soil treatment. Each specimen was divided into two categories: the first group was used in the laboratory (direct measurement) to determine nitrate, phosphorus, and potassium by the colorimetric, Olsen, and ammonium acetate methods; the second group was used for the infrared spectrometric measurements. For the spectrometry, a small amount of each soil sample was mixed with KBr and pressed into a small pellet. For the tests, the pellets were placed in the center of the infrared spectrometer and spectra were obtained. Data analysis was done using MINITAB and PLSR software. The data obtained from the spectrometric method were compared with the amounts of soil nutrients obtained from the direct measurements using EXCEL software. There was good agreement between these two data series. For nitrate, phosphorus, and potassium, R² was 79.5%, 92.0%, and 81.9%, respectively. The results also showed that the MIR (mid-infrared) range is appropriate for determining the amounts of soil nitrate and potassium and can be used in future research to obtain detailed maps of agricultural land.
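
The regression step described above, relating MIR spectra to laboratory nutrient values by partial least squares, can be reproduced in outline as follows. This is a generic sketch with synthetic spectra; the matrix shapes, component count, and band positions are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for FTIR spectra: 120 soil samples x 1800 wavenumbers (400-4000 cm^-1).
n_samples, n_wavenumbers = 120, 1800
X = rng.normal(size=(n_samples, n_wavenumbers))
# Synthetic nutrient contents (nitrate, phosphorus, potassium) loosely tied to a few bands.
Y = X[:, [100, 700, 1500]] @ np.diag([2.0, 1.5, 1.0]) + rng.normal(scale=0.3, size=(n_samples, 3))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)   # the number of latent variables is a tuning choice
pls.fit(X_tr, Y_tr)
Y_pred = pls.predict(X_te)

for i, name in enumerate(["nitrate", "phosphorus", "potassium"]):
    print(f"{name}: R^2 = {r2_score(Y_te[:, i], Y_pred[:, i]):.3f}")
```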

Keywords: nitrate, phosphorus, potassium, soil nutrients, spectroscopy

Procedia PDF Downloads 365
27470 Frequency of Occurrence Hybrid Watermarking Scheme

Authors: Hamza A. Ali, Adnan H. M. Al-Helali

Abstract:

Generally, a watermark is information that identifies the ownership of multimedia (text, image, audio, or video files). Watermarking is achieved by introducing modifications into these files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications are made according to a secret key in a descriptive model that may be in the time domain, the frequency domain, or both. This paper presents a procedure for watermarking that mixes amplitude modulation with a frequency-transformation histogram; namely, a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison of the results with those of other techniques such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) has shown enhanced efficiency in terms of ease and performance. It has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise. This method would prove a very useful technique for copyright protection and ownership judgment.

Keywords: watermarking, ownership, copyright protection, steganography, information hiding, authentication

Procedia PDF Downloads 346
27469 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions, including those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT of the suspicious image and thereby leave valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all four sub-bands. The extracted features are not only a more accurate representation of image regions but also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared to related CMIF detection algorithms. The experimental results show high detection rates.
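
To make the feature-extraction step concrete, the sketch below computes a single-level DWT of a synthetic grayscale image, keeps all four sub-bands as block features, and reduces their dimension with PCA. It is a simplified outline of the pipeline only: the radix sort and SATS verification stages are omitted, and the wavelet, block size, and component count are arbitrary choices rather than the paper's.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
image = rng.random((256, 256))          # stand-in for the suspicious grayscale image

# Single-level DWT: approximation plus horizontal/vertical/diagonal detail sub-bands.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# Stack all four sub-bands so no frequency information is discarded.
subbands = np.stack([cA, cH, cV, cD])   # shape (4, 128, 128)

# Slide an 8x8 window over the sub-band stack and flatten each block into a feature vector.
B = 8
features = []
for r in range(subbands.shape[1] - B + 1):
    for c in range(subbands.shape[2] - B + 1):
        features.append(subbands[:, r:r + B, c:c + B].ravel())
features = np.asarray(features)          # (num_blocks, 4 * B * B)

# PCA reduces the feature dimension before the blocks would be sorted and compared.
reduced = PCA(n_components=16).fit_transform(features)
print(features.shape, "->", reduced.shape)
```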

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 201
27468 Image Compression Based on Regression SVM and Biorthogonal Wavelets

Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane

Abstract:

In this paper, we propose an effective method for image compression based on support vector machine regression (SVR) with three different kernels and the biorthogonal 2D discrete wavelet transform. SVM regression can learn dependencies from training data and use fewer training points (support vectors) to represent the original data and eliminate redundancy. A biorthogonal wavelet is used to transform the image, and the acquired coefficients are then trained with SVMs using different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratios (PSNR) and compression ratios of several test images compressed with our algorithm using the different kernels are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
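
A minimal sketch of the core idea: approximate a wavelet sub-band with support vector regression so that only the support vectors and weights would need to be coded. The biorthogonal wavelet name, kernel settings, and crude reduction measure below are illustrative assumptions, and the run-length/arithmetic coding stage is not shown.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

rng = np.random.default_rng(2)
image = rng.random((64, 64))                       # stand-in for a test image

# Biorthogonal 2D DWT; here only the LL (approximation) sub-band is approximated with SVR.
cA, details = pywt.dwt2(image, "bior4.4")
coeffs = cA.ravel()
positions = np.arange(coeffs.size).reshape(-1, 1)  # coefficient index as the regressor input

for kernel in ("rbf", "poly", "linear"):           # rbf corresponds to the Gaussian kernel
    svr = SVR(kernel=kernel, C=10.0, epsilon=0.01)
    svr.fit(positions, coeffs)
    approx = svr.predict(positions)
    mse = np.mean((approx - coeffs) ** 2)
    ratio = coeffs.size / max(len(svr.support_), 1)  # coefficients per retained support vector
    print(f"{kernel:6s}: support vectors = {len(svr.support_):4d}, "
          f"MSE = {mse:.4f}, rough reduction = {ratio:.1f}x")
```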

Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM Kernels, run-length, arithmetic coding

Procedia PDF Downloads 352
27467 The Kinks, the Solitons, and the Shocks in Series Connected Discrete Josephson Transmission Lines

Authors: Eugene Kogan

Abstract:

We analytically study localized running waves in discrete Josephson transmission lines (JTL) constructed from Josephson junctions (JJ) and capacitors. The quasi-continuum approximation reduces the calculation of the running wave properties to the problem of equilibrium of an elastic rod in a potential field. Making additional approximations, we reduce the problem to the motion of a fictitious Newtonian particle in a potential well. We show that there exist running waves in the form of supersonic kinks and solitons and calculate their velocities and profiles. We show that nonstationary smooth waves, which are small perturbations on a homogeneous non-zero background, are described by the Korteweg-de Vries equation, and those on a zero background by the modified Korteweg-de Vries equation. We also study the effect of dissipation on the running waves in the JTL and find that in the presence of resistors shunting the JJ and/or in series with the ground capacitors, the only possible stationary running waves are shock waves, whose profiles are also found.
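
For orientation, the equations the abstract alludes to are shown here in their standard normalization; the paper's own scalings and coefficients may differ. Small nonstationary perturbations on a non-zero background obey the Korteweg-de Vries equation, and those on a zero background the modified Korteweg-de Vries equation:

```latex
\begin{align}
  % Korteweg-de Vries equation (perturbations on a non-zero background), standard form:
  u_t + 6\,u\,u_x + u_{xxx} &= 0, \\
  % modified Korteweg-de Vries equation (perturbations on a zero background), standard form:
  v_t \pm 6\,v^{2}\,v_x + v_{xxx} &= 0 .
\end{align}
```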

Keywords: Josephson transmission line, shocks, solitary waves, nonlinear waves

Procedia PDF Downloads 88
27466 Suitability Evaluation of Human Settlements Using a Global Sensitivity Analysis Method: A Case Study of China

Authors: Feifei Wu, Pius Babuna, Xiaohua Yang

Abstract:

The suitability evaluation of human settlements over time and space is essential to track potential challenges towards suitable human settlements and to provide references for policy-makers. This study established a theoretical framework of human settlements based on the nature, human, economy, society, and residence subsystems. Evaluation indicators were determined with consideration of the coupling effect among subsystems. Based on the extended Fourier amplitude sensitivity test algorithm, a global sensitivity analysis that considers the coupling effect among indicators was used to determine the weights of the indicators. Human settlement suitability was evaluated at both the subsystem and comprehensive system levels in 30 provinces of China between 2000 and 2016. The findings were as follows: (1) human settlement suitability index (HSSI) values increased significantly in all 30 provinces from 2000 to 2016. Among the five subsystems, the suitability index of the residence subsystem exhibited the fastest growth, followed by the society and economy subsystems. (2) HSSI in eastern provinces with a developed economy was higher than that in western provinces with an underdeveloped economy. In contrast, the growth rate of HSSI in eastern provinces was significantly higher than that in western provinces. (3) The inter-provincial difference in HSSI decreased from 2000 to 2016. Among the subsystems, it decreased for the residence subsystem, whereas it increased for the economy subsystem. (4) The suitability of the natural subsystem has become a limiting factor for the improvement of human settlement suitability, especially in economically developed provinces such as Beijing, Shanghai, and Guangdong. The results can help support decision-making and policy for improving the quality of human settlements in a broad nature, human, economy, society, and residence context.
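
The weighting step based on the extended Fourier amplitude sensitivity test can be illustrated with a generic eFAST run. The sketch below assumes the SALib library and uses a toy three-indicator model, so the problem definition, bounds, sample size, and aggregation function are placeholders, not the study's indicator system.

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

# Toy "suitability" model with three hypothetical indicators.
problem = {
    "num_vars": 3,
    "names": ["nature", "economy", "residence"],
    "bounds": [[0.0, 1.0]] * 3,
}

X = fast_sampler.sample(problem, 1000)                  # eFAST sampling design
Y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] ** 2  # placeholder aggregation

Si = fast.analyze(problem, Y)                           # first-order and total-order indices
weights = np.array(Si["S1"]) / np.sum(Si["S1"])         # normalize first-order indices as weights
for name, w in zip(problem["names"], weights):
    print(f"{name:10s} weight ~ {w:.2f}")
```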

Keywords: human settlements, suitability evaluation, extended fourier amplitude, human settlement suitability

Procedia PDF Downloads 47
27465 A Novel Integration of Berth Allocation, Quay Cranes and Trucks Scheduling Problems in Container Terminals

Authors: M. Moharami Gargari, S. Javdani Zamani, A. Mohammadnejad, S. Abuali

Abstract:

As maritime container transport is developing fast, the need arises for efficient operations at container terminals. One of the most important determinants of container handling efficiency is the productivity of the quay cranes and internal transportation vehicles, which are responsible for transporting containers during the unloading and loading operations of container vessels. For this reason, this paper presents an integrated mathematical model formulation for discrete berths with quay cranes and internal transportation vehicles. These problems have received increasing attention in the literature, and the present paper deals with the integration of these interrelated problems. A new mixed integer linear formulation is developed for the Berth Allocation Problem (BAP), the Quay Crane Assignment and Scheduling Problem (QCASP), and Internal Transportation Scheduling (ITS), which accounts for crane and truck positioning conditions.

Keywords: discrete berths, container terminal, truck scheduling, dynamic vessel arrival

Procedia PDF Downloads 368
27464 The Influence of Contact Models on Discrete Element Modeling of the Ballast Layer Subjected to Cyclic Loading

Authors: Peyman Aela, Lu Zong, Guoqing Jing

Abstract:

Recently, there has been growing interest in the numerical modeling of ballast railway tracks. A commonly used mechanistic modeling approach for ballast is the discrete element method (DEM). Up to now, the effects of the contact model on ballast particle behavior have not been precisely examined. In this regard, selecting the appropriate contact model is mainly associated with the particle characteristics and the loading condition. Since ballast is a cohesionless material, different contact models, including the linear spring, Hertz-Mindlin, and hysteretic models, could be used to calculate particle-particle or wall-particle contact forces. Moreover, the simulation of a dynamic test is vital to investigate the effect of damping parameters on the ballast deformation. In this study, ballast box tests were simulated by DEM to examine the influence of different contact models on the mechanical behavior of the ballast layer under cyclic loading. This paper shows how the contact model can affect the deformation and damping of a ballast layer subjected to cyclic loading in a ballast box.

Keywords: ballast, contact model, cyclic loading, DEM

Procedia PDF Downloads 152
27463 Donoho-Stark’s and Hardy’s Uncertainty Principles for the Short-Time Quaternion Offset Linear Canonical Transform

Authors: Mohammad Younus Bhat

Abstract:

The quaternion offset linear canonical transform (QOLCT), which is a time-shifted and frequency-modulated version of the quaternion linear canonical transform (QLCT), provides a more general framework for most existing signal processing tools. For the generalized QOLCT, the classical Heisenberg and Lieb uncertainty principles have been studied recently. In this paper, we first define the short-time quaternion offset linear canonical transform (ST-QOLCT) and derive its relationship with the quaternion Fourier transform (QFT). The crux of the paper lies in the generalization of several well-known uncertainty principles for the ST-QOLCT, including Donoho-Stark’s uncertainty principle, Hardy’s uncertainty principle, Beurling’s uncertainty principle, and the logarithmic uncertainty principle.
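
For orientation, the classical Euclidean form of Donoho-Stark's principle, which the paper generalizes to the ST-QOLCT setting (the quaternionic version carries additional transform parameters and constants not shown here), states that if a signal is ε_T-concentrated on a set T and its Fourier transform is ε_Ω-concentrated on a set Ω, then

```latex
\[
  |T|\,|\Omega| \;\ge\; \bigl(1 - \varepsilon_T - \varepsilon_\Omega\bigr)^{2}.
\]
```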

Keywords: Quaternion Fourier transform, Quaternion offset linear canonical transform, short-time quaternion offset linear canonical transform, uncertainty principle

Procedia PDF Downloads 160
27462 Finite Element and Split Bregman Methods for Solving a Family of Optimal Control Problem with Partial Differential Equation Constraint

Authors: Mahmoud Lot

Abstract:

In this article, we discuss the solution of an elliptic optimal control problem. First, by using the finite element method, we obtain the discrete form of the problem. The resulting discrete problem is a large-scale constrained optimization problem, and solving it with traditional methods is difficult and requires a lot of CPU time and memory. The split Bregman method converts the constrained problem into an unconstrained one and hence saves time and memory. We therefore use the split Bregman method for solving this problem, and examples show the speed and accuracy of split Bregman methods for solving these types of problems. We also use the SQP method to solve the examples and compare it with the split Bregman method.
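
For concreteness, a hedged sketch of the generic split Bregman iteration for a problem of the form min_u ||Φ(u)||₁ + H(u); the paper's specific discretized control problem and penalty parameter are not reproduced here. The constraint d = Φ(u) is relaxed and enforced through Bregman updates:

```latex
\begin{align}
  (u^{k+1}, d^{k+1}) &= \arg\min_{u,\,d}\; \lVert d \rVert_{1} + H(u)
      + \tfrac{\lambda}{2}\,\bigl\lVert d - \Phi(u) - b^{k} \bigr\rVert_{2}^{2}, \\
  b^{k+1} &= b^{k} + \Phi\bigl(u^{k+1}\bigr) - d^{k+1},
\end{align}
```

where the joint minimization is itself split into a u-subproblem and a d-subproblem, the latter solved in closed form by shrinkage (soft thresholding).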

Keywords: Split Bregman Method, optimal control with elliptic partial differential equation constraint, finite element method

Procedia PDF Downloads 115
27461 Characterizing the Geometry of Envy Human Behaviour Using Game Theory Model with Two Types of Homogeneous Players

Authors: A. S. Mousa, R. I. Rajab, A. A. Pinto

Abstract:

An envy behavioral game-theoretical model with two types of homogeneous players is considered in this paper. The strategy space of each type of player is a discrete set with only two alternatives. The preferences of each type of player are given by a discrete utility function. All envy strategies that form Nash equilibria and the corresponding envy Nash domains for each type of player have been characterized. We use geometry to construct two-dimensional envy tilings, where the horizontal axis reflects the preference of players of type one, while the vertical axis reflects the preference of players of type two. The influence of the envy behavior parameters on the Cartesian position of the equilibria has been studied, and in each envy tiling we determine the envy Nash equilibria. We observe that there are 1024 combinatorial classes of envy tilings generated from envy chromosomes: 256 of them are structurally stable, while 768 exhibit bifurcation. Finally, some conditions for the disparate envy Nash equilibria are stated.

Keywords: game theory, Nash equilibrium, envy Nash behavior, geometric tilings, bifurcation thresholds

Procedia PDF Downloads 184
27460 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment

Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati

Abstract:

This paper presents the development of an event-based discrete event simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the backward recovery (BR) mechanism as a fault recovery solution within the existing time/utility function/utility accrual (TUF/UA) scheduling domain for a multiprocessor environment. The BR mechanism attempts to take faulty tasks back to their initial safe state and then proceeds to re-execute the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance system that can nullify the erroneous effect needs to be developed. Current TUF/UA scheduling algorithms use the abortion recovery mechanism and simply abort the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA scheduling domain for multiprocessor environments has considered transient faults and implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives the set of parameters, events, and performance metrics according to a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utilities, making it reliable and efficient for real-time applications in a multiprocessor scheduling environment.

Keywords: real-time system (RTS), time utility function/ utility accrual (TUF/UA) scheduling, backward recovery mechanism, multiprocessor, discrete event simulation (DES)

Procedia PDF Downloads 279
27459 A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering

Authors: Takanori Tanaka, Daisuke Kitao, Daisuke Ikeda

Abstract:

The aurora is an attractive phenomenon, but it is difficult to understand its whole mechanism. A data-intensive science approach might be effective for elucidating such a difficult phenomenon. To do that, we need labeled data showing when auroras have appeared and of what types. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse auroras while others include no aurora. The proposed system retrieves images that are similar to a query image by using a popular image recognition method. Using 300 all-sky images obtained at Tromso, Norway, we evaluate two image recognition methods with and without our original color filtering method. The best performance is achieved when SIFT with the color filtering is used; its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras.
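
A minimal sketch of the retrieval step with SIFT local features, assuming OpenCV is available. The authors' color filtering and evaluation protocol are not reproduced, and the similarity score below (a simple count of ratio-test matches) is an assumption rather than the paper's measure.

```python
import cv2

def sift_similarity(query_path: str, candidate_path: str) -> int:
    """Count ratio-test matches between SIFT descriptors of two all-sky images."""
    sift = cv2.SIFT_create()
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return len(good)

# Usage sketch: rank candidate images by descending similarity to the query image.
# scores = {path: sift_similarity("query.png", path) for path in candidate_paths}
```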

Keywords: data-intensive science, image classification, content-based image retrieval, aurora

Procedia PDF Downloads 423
27458 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is, in fact, more efficient to calculate the transformation of the distribution function in the Fourier domain instead and inverting back to the real domain can be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills the niche in literature, to the best of our knowledge, of accurate numerical methods for risk allocation but may also serve as a much faster alternative to the Monte Carlo simulation method for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are tested to be significantly superior to the MC simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as in the case of Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even to other risk types than credit risk.
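
For reference, the core recovery step of the COS method that the approach builds on, shown here in its standard one-dimensional form; the paper's adjustments for the factor-copula setting are not reproduced. A density f with characteristic function φ is approximated on a truncation interval [a, b] by

```latex
\[
  f(x) \;\approx\; \frac{2}{b-a}\,{\sum_{k=0}^{N-1}}{}' \,
      \mathrm{Re}\!\left\{ \varphi\!\left(\frac{k\pi}{b-a}\right)
      e^{-\,i\,\frac{k\pi a}{b-a}} \right\}
      \cos\!\left(k\pi\,\frac{x-a}{b-a}\right),
\]
```

where the prime indicates that the first term of the sum is weighted by one half; the cumulative distribution function and the risk metrics derived from it then follow by integrating the cosine terms analytically.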

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 119
27457 Image Compression on Region of Interest Based on SPIHT Algorithm

Authors: Sudeepti Dayal, Neelesh Gupta

Abstract:

Image compression is utilized to reduce the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space. It also reduces the time necessary for images to be transferred. Storage of medical images is a heavily researched area in the current scenario. To store a medical image, the image is divided into two parts: regions of interest and non-regions of interest. The best way to store an image is to compress it in such a way that no important information is lost. Compression can be done in two ways, namely lossy and lossless compression. Within these, several compression algorithms are applied. In this paper, two transforms are used: the discrete cosine transform, applied to the non-regions of interest (lossy), and the discrete wavelet transform, applied to the regions of interest (lossless). The paper introduces the SPIHT (set partitioning in hierarchical trees) algorithm, which is applied to the wavelet coefficients to obtain a good compression ratio so that an image can be stored efficiently.
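
A hedged sketch of the splitting idea described above: the region of interest follows a wavelet path kept essentially lossless, while the background follows a lossy DCT path with coefficient truncation. The rectangular ROI, wavelet choice, and truncation threshold are illustrative, and the SPIHT bit-plane coder itself is not reimplemented here.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
image = rng.random((128, 128))            # stand-in for a medical image
roi = (slice(32, 96), slice(32, 96))      # hypothetical rectangular region of interest

# Lossy path for the background: 2D DCT, keep only the large coefficients.
background = image.copy()
background[roi] = 0
coeffs = dctn(background, norm="ortho")
coeffs[np.abs(coeffs) < 0.1] = 0          # crude truncation in place of quantization
background_rec = idctn(coeffs, norm="ortho")

# Near-lossless path for the ROI: wavelet decomposition kept in full
# (a real codec would hand these coefficients to SPIHT for progressive coding).
roi_coeffs = pywt.wavedec2(image[roi], "bior4.4", level=3)
roi_rec = pywt.waverec2(roi_coeffs, "bior4.4")

reconstructed = background_rec
reconstructed[roi] = roi_rec[:64, :64]    # trim any padding from the wavelet reconstruction
print("ROI max error:", np.max(np.abs(reconstructed[roi] - image[roi])))
print("background kept coeffs:", int(np.count_nonzero(coeffs)), "/", coeffs.size)
```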

Keywords: Compression ratio, DWT, SPIHT, DCT

Procedia PDF Downloads 324
27456 An Investigation of the Fracture Behavior of Model MgO-C Refractories Using the Discrete Element Method

Authors: Júlia Cristina Bonaldo, Christophe L. Martin, Martiniano Piccico, Keith Beale, Roop Kishore, Severine Romero-Baivier

Abstract:

Refractory composite materials employed in steel casting applications are prone to cracking and material damage because of the very high operating temperature (thermal shock) and mismatched properties of the constituent phases. The fracture behavior of a model MgO-C composite refractory is investigated to quantify and characterize its thermal shock resistance, employing a cold crushing test and Brazilian test with fractographic analysis. The discrete element method (DEM) is used to generate numerical refractory composites. The composite in DEM is represented by an assembly of bonded particle clusters forming perfectly spherical aggregates and single spherical particles. For the stresses to converge with a low standard deviation and a minimum number of particles to allow reasonable CPU calculation time, representative volume element (RVE) numerical packings are created with various numbers of particles. Key microscopic properties are calibrated sequentially by comparing stress-strain curves from crushing experimental data. Comparing simulations with experiments also allows for the evaluation of crack propagation, fracture energy, and strength. The crack propagation during Brazilian experimental tests is monitored with digital image correlation (DIC). Simulations and experiments reveal three distinct types of fracture. The crack may spread throughout the aggregate, at the aggregate-matrix interface, or throughout the matrix.

Keywords: refractory composite, fracture mechanics, crack propagation, DEM

Procedia PDF Downloads 46
27455 A Two-Dimensional Problem of a Micropolar Thermoelastic Medium under the Effect of Laser Irradiation and Distributed Sources

Authors: Devinder Singh, Rajneesh Kumar, Arvind Kumar

Abstract:

The present investigation deals with the deformation of a micropolar generalized thermoelastic solid subjected to thermo-mechanical loading due to a thermal laser pulse. Laplace and Fourier transform techniques are used to solve the problem. Thermo-mechanical laser interactions are taken as distributed sources to describe the application of the approach. Closed-form expressions of the normal stress, tangential stress, couple stress, and temperature are obtained in the transformed domain. A numerical inversion technique for the Laplace and Fourier transforms has been applied to obtain the resulting quantities in the physical domain after developing a computer program. The normal stress, tangential stress, couple stress, and temperature are depicted graphically to show the effect of the relaxation times. Some particular cases of interest are deduced from the present investigation.

Keywords: pulse laser, integral transform, thermoelastic, boundary value problem

Procedia PDF Downloads 583
27454 Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process

Authors: E. Hovad, J. H. Walther, P. Larsen, J. Thorborg, J. H. Hattel

Abstract:

In sand casting of metal parts for the automotive industry, such as brake disks and engine blocks, the molten metal is poured into a sand mold to obtain its final shape. The DISAMATIC molding process is a way to construct these sand molds for the casting of steel parts, and in the present work numerical simulations of this process are presented. During the process, green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mould. The sand flow is modelled with the discrete element method (DEM), and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the DEM parameters needed: Poisson's ratio, Young's modulus, the rolling friction coefficient, the sliding friction coefficient, and the coefficient of restitution (COR). Young's modulus and Poisson's ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient will be found from experiments, and the rolling resistance is investigated by comparing with observations of how the green sand interacts with the chamber wall during the experiments, with the DEM simulations calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footage of the DISAMATIC process. Energy dissipation will be investigated in these simulations for different particle sizes and coefficients of restitution, where scaling laws will be considered to relate the energy dissipation to these parameters. Finally, the parameter values found are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.

Keywords: discrete element method, physical properties of materials, calibration, granular flow

Procedia PDF Downloads 457
27453 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by large inertia forces; the elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and link weight and the geometrical, physical, and kinematic characteristics are determined. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, so that the loads are found at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vector in the calculated cross-sections, and they also allow defining the physical characteristics of the element, i.e., the compliance matrix of the discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements, which leads to a discrete model of the elastic calculation of the links of rod mechanisms. The discrete models of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also obtained in this work, as well as the equilibrium equations of the pin and rigid joints expressed through the required parameters of the internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in the links of mechanisms whose structure is statically determinate. For the determination of internal forces in statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, by means of the developed technique, programs are written in the MAPLE18 system, and animations of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structures are obtained, showing on the links the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces, depending on the kinematic characteristics of the links.

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 192
27452 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure with a variety of different topology, material, and dimensional alternatives is generated. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex, and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), in this way gradually refining the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, where a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality due to the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 12
27451 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed

Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan

Abstract:

The air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, bubbling behaviors, etc. Although numerous studies have been published, the fluidization features, and especially the dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense media play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, to what extent the different dense media mix and to what extent the two-component particulate mixture affects the fluidization performance and quality have remained open questions. The proposed work attempts to reveal the underlying mechanisms of the generation and evolution of the two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, the movement and evolution of dual dense media in an air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted, and electrical capacitance tomography was employed to investigate the distribution of the two-component mixture in the experiments. The underlying mechanisms involved in two-component particulate fluidization are expected to be demonstrated through the analysis and comparison of the simulation and experimental results.

Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling

Procedia PDF Downloads 352
27450 Creation and Annihilation of Spacetime Elements

Authors: Dnyanesh P. Mathur, Gregory L. Slater

Abstract:

Gravitation and the expansion of the universe at a large scale are generally regarded as two completely distinct phenomena. Yet, in general relativity theory, they both manifest as 'curvature' of spacetime. We propose a hypothesis that treats these two 'curvature-producing' phenomena as aspects of an underlying process. This process treats spacetime itself as composed of discrete units (Plancktons) and is 'dynamic' in the sense that these elements of spacetime are continually being both created and annihilated. It is these two complementary processes of Planckton creation and Planckton annihilation that manifest themselves as 'cosmic expansion' on the one hand and as 'gravitational attraction' on the other. The Planckton hypothesis treats spacetime as a perfect fluid in the same manner as the co-moving frame of reference of the Friedmann equations and the Gullstrand-Painlevé metric; i.e., the Planckton hypothesis replaces 'curvature' of spacetime with the 'flow' of Plancktons (spacetime). Here we discuss how this perspective may allow a unified description of both cosmological and gravitational acceleration, as well as provide a mechanism for inducing an irreducible action at every point, associated with the creation and annihilation of Plancktons, which could be identified with the zero-point energy.

Keywords: discrete spacetime, spacetime flow, zero point energy, plancktons

Procedia PDF Downloads 81
27449 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks

Authors: Amal Khalifa, Nicolas Vana Santos

Abstract:

Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image inside a cover image of the same size. Two CNNs are trained back-to-back to merge the discrete wavelet transform (DWT) of both colored images and eventually be able to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.

Keywords: deep learning, steganography, image, discrete wavelet transform, fusion

Procedia PDF Downloads 43