Search results for: fractional Ornstein-Uhlenbeck process

14893 Edge Detection in Low Contrast Images

Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh K. Pandey

Abstract:

The edges of low-contrast images are not clearly distinguishable to the human eye, so edges and boundaries are difficult to locate. The present work proposes a new edge-detection approach for low-contrast images. A Chebyshev-polynomial-based fractional-order filter is used to preprocess the input image, and the Laplacian of Gaussian (LoG) method is then applied to the preprocessed image for edge detection. The algorithm has been tested on two test images.
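
As a rough illustration of the pipeline's second stage, the sketch below runs Laplacian-of-Gaussian edge detection with zero-crossing extraction; the Chebyshev-polynomial fractional-order prefilter is specific to the paper, so a generic contrast stretch stands in for the preprocessing step.

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0, threshold=0.0):
    """Detect edges as zero crossings of the Laplacian of Gaussian."""
    # Placeholder preprocessing: stretch the low-contrast input to [0, 1].
    img = (image - image.min()) / (np.ptp(image) + 1e-12)
    log = ndimage.gaussian_laplace(img, sigma=sigma)
    # A pixel is an edge if the LoG response changes sign across it.
    edges = np.zeros_like(log, dtype=bool)
    edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return edges & (np.abs(log) > threshold)

edges = log_edges(np.random.rand(64, 64), sigma=2.0)
```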

Keywords: low-contrast image, fractional order differentiator, Laplacian of Gaussian (LoG) method, Chebyshev polynomial

Procedia PDF Downloads 593
14892 Chebyshev Wavelets and Applications

Authors: Emanuel Guariglia

Abstract:

In this paper we deal with Chebyshev wavelets. We analyze their properties by computing their Fourier transform. Moreover, we discuss the differential properties of Chebyshev wavelets, expressed by the connection coefficients (also called refinable integrals), which are given by finite series in terms of the Kronecker delta. We also treat the p-order derivative of Chebyshev wavelets and compute its Fourier transform. Finally, we expand the mother wavelet in a Taylor series, with applications both in fractional calculus and fractal geometry.
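
For concreteness, the sketch below evaluates a first-kind Chebyshev wavelet numerically. The dyadic construction ψ_{n,m}(t) = 2^(k/2) T̃_m(2^k t − 2n + 1) and the orthonormalizing weights used here are the standard ones from the literature, assumed rather than taken from this paper.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def chebyshev_wavelet(t, n, m, k):
    """Evaluate psi_{n,m} at points t for level k (n = 1 .. 2**(k-1))."""
    x = 2.0 ** k * t - 2 * n + 1            # map the dyadic support onto [-1, 1]
    support = (x >= -1) & (x <= 1)
    coeffs = np.zeros(m + 1); coeffs[m] = 1.0   # select T_m
    # Orthonormalization under the Chebyshev weight (standard convention).
    scale = np.sqrt(1.0 / np.pi) if m == 0 else np.sqrt(2.0 / np.pi)
    return np.where(support, 2.0 ** (k / 2) * scale * chebval(x, coeffs), 0.0)

t = np.linspace(0, 1, 1000)
psi = chebyshev_wavelet(t, n=1, m=3, k=2)   # supported on [0, 1/2] for k = 2
```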

Keywords: Chebyshev wavelets, Fourier transform, connection coefficients, Taylor series, local fractional derivative, Cantor set

Procedia PDF Downloads 91
14891 Assessing the Use of Fractional Radiofrequency for the Improvement of Skin Texture in Asian Patients

Authors: Mandy W. M. Chan, Samantha Y. N. Shek, Chi K. Yeung, Taro Kono, Henry H. L. Chan

Abstract:

Fractional radiofrequency devices have been shown to improve skin texture, including smoothness, rhytides, brightness, and atrophic acne scars, by increasing dermal thickness, dermal collagen content, and dermal fibrillin content. The objective of the study is to assess the efficacy and adverse effects of this device on Asian patients with skin textural changes. In this study, 20 Chinese patients (ranging from 21 to 60 years old) with irregularities of skin texture, rhytides, and acne scars were recruited. Patients received six treatments at 2-4 week intervals. Treatment was initiated with the maximum energy tolerated and was adjustable during treatment if patients felt excessive discomfort. A total of two passes were delivered at each session. Physician assessment and standardized photographs were taken at baseline, at all treatment visits, and at one, two, and six months after the final treatment. Of the 20 recruited patients, 17 completed the study according to the study protocol: one patient withdrew after the first treatment due to a reaction to local anesthesia, and two patients were lost to follow-up. At the six-month follow-up, 71% of the patients were satisfied and 24% were very satisfied, while the treating physician reported various degrees of improvement based on the global assessment scale in 60% of the subjects. Anticipated side effects including erythema, edema, pinpoint bleeding, scab formation, and flare of acne were recorded, but no serious adverse effects were noted. In conclusion, the use of fractional radiofrequency improves skin texture and appears to be safe in Asian patients. No long-term serious adverse effect was noted.

Keywords: Asian, fractional radiofrequency, skin, texture

Procedia PDF Downloads 120
14890 Fractional Residue Number System

Authors: Parisa Khoshvaght, Mehdi Hosseinzadeh

Abstract:

During the past few years, the Residue Number System (RNS) has been receiving considerable interest due to its parallel and fault-tolerant properties. This system is a useful tool for Digital Signal Processing (DSP) since it can support parallel, carry-free, high-speed, and low-power arithmetic. One of the drawbacks of the Residue Number System is its handling of fractional numbers: the corresponding circuit is very hard to realize in conventional CMOS technology. In this paper, we propose a method in which the number of transistors is significantly reduced and the related delay is greatly diminished. We first use this method to solve the problem for a single fractional decimal digit; the proposition can then be extended to generalize the idea. Another advantage of this method is its independence from the choice of moduli.
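
As background, here is a minimal integer-RNS sketch showing carry-free channel-wise arithmetic and Chinese-Remainder-Theorem decoding; the paper's fractional extension and its one-hot circuit realization are not reproduced.

```python
from math import prod

MODULI = (7, 8, 9)  # pairwise coprime moduli; dynamic range M = 504

def to_rns(x, moduli=MODULI):
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Decode residues back to an integer via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Py 3.8+)
    return x % M

a, b = 123, 45
# RNS addition is carry-free: add channel-wise, reduce per modulus.
s = tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
assert from_rns(s) == (a + b) % prod(MODULI)
```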

Keywords: computer arithmetic, residue number system, number system, one-hot, VLSI

Procedia PDF Downloads 471
14889 Existence of Minimal and Maximal Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez-Camus

Abstract:

This work proves the existence of at least one minimal and one maximal mild solution to the Cauchy problem for a fractional evolution equation of neutral type involving a general kernel. The equation features an operator A generating a resolvent family and an integral resolvent family on a Banach space X, together with a kernel belonging to a large class that covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Kuratowski measure of noncompactness, fixed-point theorems of Darbo type, and an iterative method of lower and upper solutions based on an order in X induced by a normal cone P. Initially, the equation is a Cauchy problem involving a fractional derivative in the Caputo sense. The equivalent integral version is then formulated; defining a convenient functional via the theory of resolvent families and verifying the hypotheses of the Darbo-type fixed-point theorem yields the existence of a mild solution to the initial problem. Furthermore, the existence of minimal and maximal mild solutions is proved through an iterative method of lower and upper solutions, using the Ascoli-Arzelà theorem and Gronwall's inequality. Finally, the case of the derivative in the Caputo sense is recovered.
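
For orientation, the objects involved can be written as follows, using the standard definition of the Caputo derivative of order α ∈ (0, 1); the neutral-type problem below is a generic model form, not the paper's exact equation.

```latex
% Caputo derivative of order 0 < alpha < 1 (standard definition):
\[
  {}^{C}\!D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)}
  \int_0^t (t-s)^{-\alpha}\, u'(s)\, \mathrm{d}s .
\]
% A generic time-fractional neutral-type Cauchy problem on a Banach space X:
\[
  {}^{C}\!D_t^{\alpha}\bigl[u(t)-g(t,u(t))\bigr] = A\,u(t) + f(t,u(t)),
  \qquad u(0) = u_0 \in X .
\]
```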

Keywords: fractional evolution equations, Volterra integral equations, minimal and maximal mild solutions, neutral type equations, non-local in time equations

Procedia PDF Downloads 145
14888 Secure Image Encryption via Enhanced Fractional Order Chaotic Map

Authors: Ismail Haddad, Djamel Herbadji, Aissa Belmeguenai, Selma Boumerdassi

Abstract:

In this paper, we provide a novel approach for image encryption that employs the Fibonacci matrix and an enhanced fractional order chaotic map. The enhanced map overcomes the drawbacks of the classical map, especially the limited chaotic range and non-uniform distribution of chaotic sequences, resulting in a larger encryption key space. As a result, this strategy improves the encryption system's security. Our experimental results demonstrate that our proposed algorithm effectively encrypts grayscale images with exceptional efficiency. Furthermore, our technique is resistant to a wide range of potential attacks, including statistical and entropy attacks.
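
To illustrate the general pattern of chaotic-map stream encryption, the sketch below uses the classical logistic map as the keystream source; the paper's enhanced fractional-order map and Fibonacci-matrix stage are not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def logistic_keystream(n, x0=0.654321, r=3.99, burn_in=1000):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale image
ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks                        # XOR stream encryption
assert np.array_equal(cipher ^ ks, img)  # decryption restores the image
```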

Keywords: image encryption, logistic map, Fibonacci matrix, grayscale images

Procedia PDF Downloads 276
14887 Ideological Framing in Television News: The Case of “Settlement Process”

Authors: Mete Kazaz, Birol Gülnar

Abstract:

Television news has gained a new dimension in terms of ideological approaches as a result of such factors as globalization, cross-monopolization, and the presence of international companies, and certain strategies have been developed at the production, presentation, and distribution stages of news. In this study, television news about a process called the "settlement process" was investigated. In this framework, news about the settlement process on the TV channels TRT 1, ATV, FOX TV, NTV, HABERTÜRK, TRT HABER and STV was investigated using the content analysis method in terms of the strategies of ideology construction, attitude towards the party in power, attitude towards parties in opposition, and attitude towards the BDP (Peace and Democracy Party) and Imrali (the island where Abdullah Ocalan, head of the PKK, is kept). First, the aforementioned TV channels were selected randomly from 3 groups in order to reveal the representational capacity of commercial, news, and public channels. The study covers 557 news items broadcast in the main news bulletins between the dates of 15 March 2013 and 15 March 2013. While there was a positive attitude towards the government in a sizable portion of the news about the settlement process (63.6%), the attitude of 25.3% of the news was impartial towards the government and 11.3% had a negative attitude. On the other hand, there was a negative attitude towards the opposition in a considerable portion of the news about the settlement process (56.1%). The attitude of 35.9% of the news towards the opposition was impartial, whereas 8.0% had a positive attitude. While 34.9% of the news about the settlement process used the legitimization strategy from among the ideology construction strategies, 22.8% used the unification strategy, 15.7% the reification strategy, 15.6% the fractional strategy, and 11% the concealment/mystification strategy.

Keywords: attitude, ideological framing, television news, social sciences

Procedia PDF Downloads 324
14886 High Accuracy Analytic Approximation for Special Functions Applied to Bessel Functions J₀(x) and Its Zeros

Authors: Fernando Maass, Pablo Martin, Jorge Olivares

Abstract:

The Bessel function J₀(x) is very important in electrodynamics and physics, as are its zeros. In this work, a method to obtain high-accuracy approximations is presented through an application to that function. In most applications of this function, the values of the zeros are very important. Analytic approximations for this function have been obtained that are valid for all positive values of the variable x and have high accuracy for the function as well as for its zeros. The approximation is determined by the simultaneous use of the power series and the asymptotic expansion. The structure of the approximation is a combination of two rational functions with elementary functions such as trigonometric functions and fractional powers. As in the Padé method, rational functions are used, but here they are combined with elementary functions such as fractional powers and hyperbolic or trigonometric functions. The reason is that the power series of the exact function is used together with the asymptotic expansion, which usually includes fractional powers, trigonometric functions, and other types of elementary functions. The approximation must be a bridge between both expansions, and this cannot be accomplished using rational functions alone. In the simplest approximation, using 4 parameters, the maximum absolute error is less than 0.006, at x ∼ 4.9. In this case the maximum relative error for the zeros is less than 0.003, attained at the second zero, and that value decreases rapidly for the other zeros. The same kind of behaviour holds for the relative error of the maxima and minima of the function. Approximations with higher accuracy and more parameters will also be shown. All the approximations are valid for any positive value of x, and they can be calculated easily.
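
The two expansions that the bridging approximation must reconcile are standard; a numerical sketch of both follows (the paper's 4-parameter bridging formula itself is not reproduced here).

```python
import numpy as np
from math import factorial

def j0_series(x, terms=20):
    """Power series: J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    x = np.asarray(x, dtype=float)
    return sum((-1) ** k * (x / 2) ** (2 * k) / factorial(k) ** 2
               for k in range(terms))

def j0_asymptotic(x):
    """Leading asymptotic term: J0(x) ~ sqrt(2/(pi x)) * cos(x - pi/4)."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(2 / (np.pi * x)) * np.cos(x - np.pi / 4)

x = np.array([0.5, 2.0, 5.0, 10.0])
print(j0_series(x))      # accurate for small and moderate x
print(j0_asymptotic(x))  # accurate for large x; degrades near the origin
```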

Keywords: analytic approximations, asymptotic approximations, Bessel functions, quasirational approximations

Procedia PDF Downloads 222
14885 Robust Diagnosis of Electro-Mechanical Actuators: Bond Graph LFT Approach

Authors: A. Boulanoir, B. Ould Bouamama, A. Debiane, N. Achour

Abstract:

The paper deals with robust fault detection and isolation (FDI) with respect to parameter uncertainties, based on the linear fractional transformation (LFT) form of the bond graph. The innovative interest of the proposed methodology is the use of a single representation for the systematic generation of robust analytical redundancy relations and adaptive residual thresholds for sensitivity analysis. Furthermore, the parameter uncertainties are introduced graphically in the bond graph model. The methodology, applied to the nonlinear industrial Electro-Mechanical Actuators (EMA) used in avionic systems, first determined the structural monitorability (which components can be monitored) for a given instrumentation architecture without any need for complex calculation, and secondly produced robust fault indicators for online supervision.

Keywords: bond graph (BG), electro mechanical actuators (EMA), fault detection and isolation (FDI), linear fractional transformation (LFT), mechatronic systems, parameter uncertainties, avionic system

Procedia PDF Downloads 326
14884 Multi-Criteria Goal Programming Model for Sustainable Development of India

Authors: Irfan Ali, Srikant Gupta, Aquil Ahmed

Abstract:

Every country needs sustainable development (SD) for its economic growth, achieved by forming suitable policies and initiative programs for the development of the country's different sectors. This paper comprises the modeling and optimization of different sectors of India, which together form a multi-criteria model. We develop a fractional goal programming (FGP) model that helps provide an efficient allocation of resources while simultaneously achieving sustainability goals for gross domestic product (GDP), electricity consumption (EC), and greenhouse gas (GHG) emissions by the year 2030. Also, a weighted FGP model is presented to obtain varying solutions according to the priorities set by the policy maker for achieving future goals of GDP growth, EC, and GHG emissions. The presented models provide useful insight to decision makers for implementing strategies in the different sectors.
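
A minimal weighted goal-programming sketch, assuming a toy two-sector allocation with made-up coefficients: a GDP goal to meet or exceed, a GHG goal not to exceed, and deviational variables minimized with policy weights (the paper's fractional objectives and actual data are not reproduced).

```python
from scipy.optimize import linprog

w_gdp, w_ghg = 0.7, 0.3   # policy-maker priority weights (assumed)
# Decision variables: [x1, x2, n_gdp, p_gdp, n_ghg, p_ghg], where n/p are
# under- and over-achievement of each goal.
c = [0, 0, w_gdp, 0, 0, w_ghg]     # penalize GDP shortfall and GHG excess
A_eq = [[3.0, 2.0, 1.0, -1.0, 0.0,  0.0],   # GDP goal: 3x1 + 2x2 + n - p = 12
        [1.0, 2.0, 0.0,  0.0, 1.0, -1.0]]   # GHG goal:  x1 + 2x2 + n - p = 6
b_eq = [12.0, 6.0]
A_ub = [[1.0, 1.0, 0.0, 0.0, 0.0, 0.0]]     # shared resource: x1 + x2 <= 5
b_ub = [5.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
print(res.x)   # sector allocations and residual goal deviations
```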

Keywords: sustainable and economic development, multi-objective fractional programming, fuzzy goal programming, weighted fuzzy goal programming

Procedia PDF Downloads 196
14883 Optimized and Secured Digital Watermarking Using Fuzzy Entropy, Bezier Curve and Visual Cryptography

Authors: R. Rama Kishore, Sunesh

Abstract:

Recent developments in the use of the internet for different purposes create a great threat to the copyright protection of digital images. Digital watermarking can be used to address the problem. This paper presents a detailed review of the different watermarking techniques and the latest trends in the field of secure, robust, and imperceptible watermarking. It also discusses the different optimization techniques used in the field of watermarking to improve the robustness and imperceptibility of a method. Different measures for evaluating the performance of a watermarking algorithm are discussed. Finally, this paper proposes a watermarking algorithm using (2, 2)-share visual cryptography and a Bezier-curve-based algorithm to improve the security of the watermark. The proposed method uses a fractional transformation to improve the robustness of the copyright protection, and the algorithm is optimized using fuzzy entropy for better results.
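
A minimal sketch of (2, 2) visual cryptography on a binary watermark: each secret pixel expands into complementary 2x2 subpixel blocks, and stacking (OR-ing) the two shares reveals the secret. The Bezier-curve and fuzzy-entropy stages of the proposed scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]

def make_shares(secret):
    """secret: 2-D array of 0/1 watermark bits -> two 2x-expanded shares."""
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*i:2*i+2, 2*j:2*j+2] = p
            # White pixel: identical blocks; black pixel: complementary blocks.
            s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
    return s1, s2

secret = rng.integers(0, 2, (8, 8))
s1, s2 = make_shares(secret)
stacked = s1 | s2   # black secret pixels become fully black 2x2 blocks
```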

Keywords: digital watermarking, fractional transform, visual cryptography, Bezier curve, fuzzy entropy

Procedia PDF Downloads 336
14882 Predictors and 3-Year Outcomes of Compromised Left Circumflex Coronary Artery After Left Main Crossover Stenting

Authors: Hameed Ullah, Karim Elakabawi, Han KE, Najeeb Ullah, Habib Ullah, Sardar Ali Shah, Hamad Haider Khan, Muhammad Asad Khan, Ning Guo, Zuyi Yuan

Abstract:

Background: Predictors of decreased fractional flow reserve (FFR) at the left circumflex coronary artery (LCx) after left main (LM) crossover stenting are still lacking. The objectives of the present study were to identify the predictors of low FFR at the LCx and the possible treatment strategies for the compromised LCx, together with their long-term outcomes. Methods: A total of 563 patients were included out of 1,974 patients admitted to our hospital from February 2015 to November 2020 with significant distal LM bifurcation lesions. The enrolled patients underwent single-stent crossover PCI under IVUS guidance, with further LCx intervention as indicated by the measured FFR. Results: The included patients showed angiographically significant LCx ostial involvement after LM stenting, but only 116 (20.6%) patients had FFR <0.8. The 3-year composite MACE rates were comparable between the high and low FFR groups (16.8% vs. 15.5%, respectively; P=0.744). In a multivariable analysis, a low FFR in the LCx was associated with the post-stenting minimal lumen area (MLA) of the LCx (OR: 0.032, P<0.001), post-stenting LCx plaque burden (OR: 1.166, P<0.001), post-stenting LM MLA (OR: 0.821, P=0.038), and pre-stenting LCx MLA (OR: 0.371, P=0.044). In patients with low FFR, management of the compromised LCx with a drug-eluting balloon (DEB) had the lowest 3-year MACE rate (8.1%) as compared with either kissing balloon inflation (KBI) (17.5%) or stenting (20.5%), P=0.299. Conclusion: FFR-guided LCx intervention can avoid unnecessary LCx intervention. The post-stenting predictors of low FFR include the post-stenting MLA and plaque burden of the LCx and the MV stent length. The 3-year MACE rates were comparable between high-FFR patients and patients who had low FFR and were adequately managed.

Keywords: fractional flow reserve, left main stem, percutaneous coronary interventions, intravascular ultrasound

Procedia PDF Downloads 12
14881 Simulation to Detect Virtual Fractional Flow Reserve in Coronary Artery Idealized Models

Authors: Nabila Jaman, K. E. Hoque, S. Sawall, M. Ferdows

Abstract:

Coronary artery disease (CAD) is one of the most lethal cardiovascular diseases. Coronary artery stenosis and bifurcation angles closely interact in myocardial infarction. We use computer-aided design models coupled with computational hemodynamics (CHD) simulation to detect several types of coronary artery stenosis at different locations in an idealized model and to identify the virtual fractional flow reserve (vFFR). The vFFR provides information about the severity of stenosis in the computational models. Another goal is to imitate patient-specific computed tomography coronary artery angiography models when constructing our idealized models with different left anterior descending (LAD) and left circumflex (LCx) bifurcation angles. Further, we analyze whether the bifurcation angles have an impact on the development of narrowing in the coronary arteries. The numerical simulation provides CHD parameters such as wall shear stress (WSS), velocity magnitude, and pressure gradient (PGD) that yield information about the stenosis condition in the computational domain.

Keywords: CAD, CHD, vFFR, bifurcation angles, coronary stenosis

Procedia PDF Downloads 135
14880 Solid State Fermentation Process Development for Trichoderma asperellum Using Inert Support in a Fixed Bed Fermenter

Authors: Mauricio Cruz, Andrés Díaz García, Martha Isabel Gómez, Juan Carlos Serrato Bermúdez

Abstract:

The disadvantages of using natural substrates in SSF processes are well recognized and are mainly associated with gradual decomposition of the substrate, formation of agglomerates, and decreased bed porosity, generating limitations in mass and heat transfer. Additionally, in several cases, materials with high agricultural value such as sour milk, beets, rice, beans, and corn have been used. Thus, the use of economical inert supports (natural or synthetic) in combination with a nutrient suspension for the production of biocontrol microorganisms is a good alternative in SSF processes, but it requires further study in the fields of modeling and optimization. Therefore, the aim of this work is to compare the performance of two inert supports, a synthetic one (polyurethane foam) and a natural one (rice husk), identifying the factors that have the largest effects on the productivity of T. asperellum Th204 and on the maximum specific growth rate in a PROPHYTA L05® fixed bed bioreactor. For this, the six factors C:N ratio, temperature, inoculation rate, bed height, air moisture content, and airflow were evaluated using a fractional design. The factors C:N ratio and airflow were identified as significant for the productivity (expressed as conidia/dry substrate·h). The polyurethane foam showed a higher maximum specific growth rate (0.1631 h⁻¹) and productivities of 3.89×10⁷ conidia/dry substrate·h compared to rice husk (2.83×10⁶) and a natural rice-based substrate (8.87×10⁶) used as control. Finally, a quadratic model was generated and validated, obtaining productivities higher than 3.0×10⁷ conidia/dry substrate·h with airflow at 0.9 m³/h and C:N ratio at 18.1.
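
A minimal sketch of a two-level fractional factorial screening layout for the six factors above, assuming a 2^(6-2) design with generators E = ABC and F = BCD (16 runs instead of 64); the study's actual design and level settings are not reproduced.

```python
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=4)))  # A, B, C, D
A, B, C, D = base.T
E = A * B * C        # generator: confounds factor E with the ABC interaction
F = B * C * D        # generator: confounds factor F with the BCD interaction
design = np.column_stack([A, B, C, D, E, F])

factors = ["C:N ratio", "temperature", "inoculation rate",
           "bed height", "air moisture", "airflow"]
print(f"{len(design)} runs of coded levels (-1 = low, +1 = high)")
print(dict(zip(factors, design[0])))  # factor settings for the first run
```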

Keywords: bioprocess, scale up, fractional design, C:N ratio, air flow

Procedia PDF Downloads 482
14879 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method

Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent

Abstract:

A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
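
One standard way to generate FBM, sketched below, is Cholesky factorization of the exact FBM covariance; it is O(n³), so it suits short profiles, while spectral methods scale better for large grids. The paper's parameter choices are not reproduced.

```python
import numpy as np

def fbm_1d(n, hurst=0.7, seed=0):
    """Sample an fBm path B_H at t = 1..n (B_H(0) = 0 is prepended)."""
    t = np.arange(1, n + 1, dtype=float)
    s, u = np.meshgrid(t, t)
    # Exact covariance: E[B_H(s) B_H(u)] = (s^2H + u^2H - |s-u|^2H) / 2
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov)
    path = L @ np.random.default_rng(seed).standard_normal(n)
    return np.concatenate([[0.0], path])

# A synthetic bed profile: higher H gives a smoother, more persistent bed.
bed = fbm_1d(256, hurst=0.8)
```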

Keywords: bed topography, FBM, LBM, shallow water, simulations

Procedia PDF Downloads 72
14878 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem

Abstract:

The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals were predicted using two different approaches. In the first approach, a local linear wavelet neural network (LLWNN) model is developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity (G-GARCH) process is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using the wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
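
A minimal sketch of the long-memory filter behind a 1-factor GARMA model: the Gegenbauer weights ψ_j in (1 − 2uB + B²)^(−d) = Σ_j ψ_j B^j, computed with the standard three-term recurrence. Estimation of (u, d) and the G-GARCH/LLWNN variance models are beyond this sketch.

```python
import numpy as np

def gegenbauer_weights(d, u, n):
    """First n coefficients of (1 - 2uB + B^2)^(-d); long memory requires
    0 < d < 1/2 and |u| <= 1."""
    psi = np.zeros(n)
    psi[0] = 1.0
    if n > 1:
        psi[1] = 2.0 * d * u
    for j in range(2, n):
        psi[j] = (2 * u * (j + d - 1) * psi[j - 1]
                  - (j + 2 * d - 2) * psi[j - 2]) / j
    return psi

# Simulate a 1-factor GARMA series by filtering white noise with psi;
# u = cos(2*pi/12) places the spectral peak at a monthly-type cycle.
rng = np.random.default_rng(1)
psi = gegenbauer_weights(d=0.3, u=np.cos(2 * np.pi / 12), n=500)
x = np.convolve(rng.standard_normal(2000), psi)[:2000]
```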

Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting

Procedia PDF Downloads 203
14877 Volatility and Stylized Facts

Authors: Kalai Lamia, Jilani Faouzi

Abstract:

Measuring and controlling risk is one of the most attractive issues in finance. With the persistence of uncontrolled and erratic stock movements, volatility is perceived as a barometer of daily fluctuations, so an objective measure of this variable is needed to control risks and cover those considered most important. Non-linear autoregressive modeling is our first evaluation approach. In particular, we test for the presence of "persistence" in the conditional variance and for a degree of leverage effect. To address the problem of "asymmetry" in volatility, the retained specifications point to the importance of stock reactions in response to news. The effects of shocks on volatility also highlight the need to study the long-term behaviour of the conditional variance of stock returns and establish the presence of long memory and long-run dependence in the time series. We note that the fractionally integrated autoregressive model can represent time series that show long-term conditional variance, thanks to fractional integration parameters. To capture the dynamics that govern the time series, a comparative study of the results of the different models allows for a better understanding of the volatility structure of the Tunisian stock market, with the aim of accurately predicting fluctuation risks.

Keywords: volatility asymmetry, clustering, stylized facts, leverage effect

Procedia PDF Downloads 275
14876 Forecasting Performance Comparison of Autoregressive Fractionally Integrated Moving Average and Jordan Recurrent Neural Network Models on the Turbidity of Stream Flows

Authors: Daniel Fulus Fom, Gau Patrick Damulak

Abstract:

In this study, Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Jordan Recurrent Neural Network (JRNN) models were employed to compare forecasting performance on the daily turbidity flow of White Clay Creek (WCC). The two methods were applied to the log-difference series of the daily turbidity flow series of WCC. The error measures employed to investigate the forecasting performance of the ARFIMA and JRNN models are the Root Mean Square Error (RMSE) and the Mean Absolute Error (MAE). The investigation revealed that the forecasting performance of the JRNN technique is better than that of the ARFIMA technique in the mean square error sense. The results of the ARFIMA and JRNN models were obtained by simulating the models in MATLAB version 8.03. The significance of using the log-difference series rather than the difference series is that the log-difference series stabilizes the turbidity flow series better than the difference series for both the ARFIMA and JRNN models.
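
A minimal sketch of the preprocessing step named above: the log-difference transform applied to a strictly positive turbidity series, and its inverse for mapping forecasts back to the original scale.

```python
import numpy as np

def log_diff(series):
    """y_t = log(x_t) - log(x_{t-1}); stabilizes variance before modeling."""
    x = np.asarray(series, dtype=float)
    return np.diff(np.log(x))

def undo_log_diff(y, x0):
    """Rebuild levels from log differences, given the initial level x0."""
    return x0 * np.exp(np.cumsum(y))

turbidity = np.array([12.0, 15.5, 14.2, 20.1, 18.7])  # illustrative values
y = log_diff(turbidity)
assert np.allclose(undo_log_diff(y, turbidity[0]), turbidity[1:])
```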

Keywords: autoregressive, mean absolute error, neural network, root mean square error

Procedia PDF Downloads 244
14875 Fractional, Component and Morphological Composition of Ambient Air Dust in the Areas of Mining Industry

Authors: S.V. Kleyn, S.Yu. Zagorodnov, A.A. Kokoulina

Abstract:

Technogenic emissions of a mining and processing complex are characterized by a high content of chemical components and solid dust particles. However, each industrial enterprise and the surrounding area have features that require refinement and parameterization. Numerous studies have shown the negative impact of fine dust PM10 and PM2.5 on health, as well as the possibility of absorption of toxic components, including heavy metals, by dust particles. The target of the study was the quantitative assessment of the fractional and particle-size composition of ambient air dust in the area impacted by a primary magnesium production complex. We also aimed to describe the morphology of the dust particles. Study methods: To identify the dust emission sources, an analysis of the production process was carried out. The particulate composition of the emissions was measured using a Microtrac S3500 laser particle analyzer (covered particle-size range: 20 nm to 2000 µm). Particle morphology and the component composition were established by electron microscopy with a high-resolution scanning microscope (magnification 5 to 300,000 times) with an X-ray fluorescence device (HITACHI S3400N). The chemical composition was identified by X-ray analysis of the samples using a Shimadzu XRD-700 X-ray diffractometer. The dust pollution level was determined using model calculations of the dispersion of emissions in the atmosphere. The calculations were verified by instrumental studies. Results of the study: The results demonstrated that the dust emissions of different technical processes are heterogeneous and their fractional structure is complicated. The percentage of particle sizes up to 2.5 µm inclusive ranged from 0.00 to 56.70%; particle sizes up to 10 µm inclusive, from 0.00 to 85.60%; and particle sizes greater than 10 µm, from 14.40% to 100.00%. During microscopy, the presence of nanoscale particles was detected. The studied dust particles are of round, irregular, cubic, and integral shapes. The composition of the dust includes magnesium, sodium, potassium, calcium, iron, and chlorine. On the basis of the obtained results, model calculations of the dispersion of dust emissions were performed and the distribution areas of fine dust PM10 and PM2.5 were established. It was found that emissions of the fine fractions PM10 and PM2.5 are dispersed over large distances, beyond the boundary of the industrial site of the enterprise. The population living near the enterprise is exposed to the risk of diseases associated with dust exposure. The data were transferred to the economic entity to support decisions on measures to minimize the risks. Exposure and health-risk indicators are used to provide targeted medical and preventive care to citizens living in the area negatively impacted by the facility.

Keywords: dust emissions, exposure assessment, PM10, PM2.5

Procedia PDF Downloads 237
14874 Fructooligosaccharide Prebiotics: Optimization of Different Cultivation Parameters on Their Microbial Production

Authors: Elsayed Ahmed Elsayed, Azza Noor El-Deen, Mohamed A. Farid, Mohamed A. Wadaan

Abstract:

Recently, great attention has been paid to the use of dietary carbohydrates as prebiotic functional foods. Among the new commercially available products, fructooligosaccharides (FOS), which are microbially produced from sucrose, have attracted special interest due to their valuable properties and thus have great economic potential for the sugar industry. They are non-cariogenic sweeteners of low caloric value, as they are not hydrolyzed by gastro-intestinal enzymes; they selectively promote the growth of bifidobacteria in the colon, helping to eliminate microbial species harmful to human and animal health and preventing colon cancer. FOS has also been found to reduce cholesterol, phospholipid, and triglyceride levels in blood. FOS is mainly produced by microbial fructosyltransferase (FTase) enzymes. The present work outlines bioprocess optimization of the different cultivation parameters affecting the production of FTase by Penicillium aurantiogriseum AUMC 5605. The optimization involves both traditional and fractional factorial design approaches. Additionally, the production process is compared under batch and fed-batch conditions. Finally, the optimized process conditions are applied to 5-L stirred tank bioreactor cultivations.

Keywords: prebiotics, fructooligosaccharides, optimization, cultivation

Procedia PDF Downloads 361
14873 Polymer Flooding: Chemical Enhanced Oil Recovery Technique

Authors: Abhinav Bajpayee, Shubham Damke, Rupal Ranjan, Neha Bharti

Abstract:

Polymer flooding is a dramatic improvement on water flooding and is quickly becoming one of the leading enhanced oil recovery (EOR) technologies used for improving oil recovery. With increasing energy demand and depleting oil reserves, EOR techniques are becoming increasingly significant. Since most oil fields have already begun water flooding, this chemical EOR technique can be implemented using fewer resources than any other EOR technique. Polymer increases the viscosity of the injected water, thus reducing water mobility and achieving a more stable displacement. Polymer flooding increases the injection viscosity, as field experience has revealed. While the injection of a polymer solution improves reservoir conformance, the beneficial effect ceases as soon as one attempts to push the polymer solution with water. It is the most commonly applied chemical EOR technique because of its higher success rate. In polymer flooding, a water-soluble polymer such as polyacrylamide is added to the water in the water flood. This increases the viscosity of the water towards that of a gel, greatly improving the efficiency of the water flood. It also improves the vertical and areal sweep efficiency as a consequence of improving the water/oil mobility ratio. Polymer flooding plays an important role in oil exploitation, but around 60 million tons of wastewater are produced per day together with the extracted oil. Therefore, the treatment and reuse of this wastewater become significant; this can be carried out by electrodialysis technology, which can not only decrease environmental pollution but also achieve closed-circuit reuse of polymer flooding wastewater during crude oil extraction. There are three potential ways in which a polymer flood can make the oil recovery process more efficient: (1) through the effects of polymers on fractional flow, (2) by decreasing the water/oil mobility ratio, and (3) by diverting injected water from zones that have been swept. It has also been suggested that the viscoelastic behavior of polymers can improve displacement efficiency. Polymer flooding may also have an economic impact because less water is injected and produced compared with water flooding. In future, we need to focus on developing polymers that can be used in reservoirs of high temperature and high salinity, applying polymer flooding in different reservoir conditions, and combining polymers with other processes (e.g., surfactant/polymer flooding).
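
A minimal sketch of the water fractional-flow function from Buckley-Leverett theory, with simple Corey-type relative permeabilities (assumed parameters), showing how raising water viscosity with polymer lowers the water/oil mobility ratio and shifts fractional flow in favor of oil displacement.

```python
import numpy as np

def fractional_flow(sw, mu_w, mu_o, swc=0.2, sor=0.2, nw=2.0, no=2.0):
    """f_w = 1 / (1 + (k_ro * mu_w) / (k_rw * mu_o)) at water saturation sw."""
    s = np.clip((sw - swc) / (1 - swc - sor), 0.0, 1.0)  # normalized saturation
    krw, kro = s ** nw, (1 - s) ** no                    # Corey curves
    with np.errstate(divide="ignore", invalid="ignore"):
        fw = 1.0 / (1.0 + (kro / mu_o) * (mu_w / krw))
    return np.nan_to_num(fw)

sw = np.linspace(0.2, 0.8, 7)
print(fractional_flow(sw, mu_w=0.5, mu_o=5.0))   # plain water flood
print(fractional_flow(sw, mu_w=10.0, mu_o=5.0))  # polymer-thickened water
```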

Keywords: fractional flow, polymer, viscosity, water/oil mobility ratio

Procedia PDF Downloads 360
14872 Object-Centric Process Mining Using Process Cubes

Authors: Anahita Farhang Ghahfarokhi, Alessandro Berti, Wil M.P. van der Aalst

Abstract:

Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes. Process cubes organize event data using different dimensions. Each cell contains a set of events that can be used as an input to apply process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package) are intertwined. Object-centric process mining is a new branch of process mining addressing multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework that supports process cube operations such as slice and dice on object-centric event logs. To facilitate the comparison, the framework is integrated with several object-centric process discovery approaches.
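
A minimal sketch of the slice and dice operations on a flat event table with pandas; the column names and the handling of a second case notion are illustrative assumptions, not the proposed framework's actual API.

```python
import pandas as pd

events = pd.DataFrame({
    "order":    ["o1", "o1", "o2", "o2", "o2"],
    "item":     ["i1", "i2", "i3", "i3", "i4"],   # a second case notion
    "activity": ["create", "pack", "create", "pack", "ship"],
    "region":   ["EU", "EU", "US", "US", "US"],
    "month":    ["Jan", "Feb", "Jan", "Jan", "Feb"],
})

# Slice: fix one dimension to a single value (dropping that dimension).
eu_cell = events[events["region"] == "EU"]

# Dice: restrict several dimensions to value subsets, keeping the cube shape.
diced = events[events["region"].isin(["US"]) & events["month"].isin(["Jan"])]

# Each remaining cell's event set can feed a discovery algorithm per notion.
for (region, month), cell in events.groupby(["region", "month"]):
    print(region, month, "->", len(cell), "events")
```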

Keywords: multidimensional process mining, multi-perspective business processes, OLAP, process cubes, process discovery, process mining

Procedia PDF Downloads 224
14871 The Application of the Taguchi Method to Optimize Pellet Quality in Broiler Feeds

Authors: Reza Vakili

Abstract:

The aim of this experiment was to optimize the effects of moisture, production rate, grain particle size, and steam conditioning temperature on pellet quality in broiler feed using the Taguchi method, and a 4³ fractional factorial arrangement was conducted. Different combinations of production rate, steam conditioning temperature, particle size, and moisture content were tested. Sampling was done during the production process, and the pellet durability index (PDI) and hardness were then evaluated for broiler grower and finisher feeds. There was a significant effect of the processing parameters on PDI and hardness. Based on the results of this experiment, the Taguchi method can be used to find the best combination of factors for optimal pellet quality.
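
A minimal sketch of the Taguchi signal-to-noise analysis used to rank factor settings, with the larger-the-better criterion that fits a quality response such as PDI; the response values below are illustrative only, not the study's data.

```python
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 * log10(mean(1 / y^2)) for replicated responses y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Replicated PDI measurements (%) at two moisture levels (made-up numbers).
print(sn_larger_is_better([88.1, 87.6, 88.4]))  # moisture level 1
print(sn_larger_is_better([91.2, 90.8, 91.5]))  # moisture level 2: higher S/N wins
```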

Keywords: broiler, feed physical quality, hardness, processing parameters, PDI

Procedia PDF Downloads 144
14870 Production Process of Coconut-Shell Product in Amphawa District

Authors: Wannee Sutthachaidee

Abstract:

This study of the production process of coconut-shell products in Amphawa, Samutsongkram Province aims to examine the pattern of the process, focusing on its 3 main stages: the inbound logistics process, the production process, and the outbound process. There were 4 main results from the study. Firstly, most manufacturers of coconut-shell products are single-owner businesses, the quantity of finished product is quite low, and the main labor group is local people. Secondly, the production process can be divided into 4 stages: pre-production, production, packaging, and distribution. Thirdly, problems may arise in each of the 3 logistics processes, but the process with the most problems is the production process, because it needs skilled labor and the quantity of labor does not match the demand from customers. Lastly, factors that affect the production of coconut-shell products can be found at almost every stage, such as production design, packaging design, supply sourcing, and distribution management.

Keywords: production process, coconut-shell product, Amphawa District, inbound logistics process

Procedia PDF Downloads 490
14869 Development, Optimization, and Validation of a Synchronous Fluorescence Spectroscopic Method with Multivariate Calibration for the Determination of Amlodipine and Olmesartan Implementing: Experimental Design

Authors: Noha Ibrahim, Eman S. Elzanfaly, Said A. Hassan, Ahmed E. El Gendy

Abstract:

Objectives: The purpose of the study is to develop a sensitive synchronous spectrofluorimetric method with multivariate calibration after studying and optimizing the different variables affecting the native fluorescence intensity of amlodipine and olmesartan, implementing an experimental design approach. Method: In the first step, a fractional factorial design was used to screen the independent factors affecting the intensity of both drugs. The objective of the second step was to optimize the method performance using a central composite face-centred (CCF) design. The optimal experimental conditions obtained from this study were a temperature of 15 ± 0.5 °C, a solvent of 0.05 N HCl and methanol (90:10, v/v), Δλ of 42 nm, and the addition of 1.48% surfactant, providing a sensitive measurement of amlodipine and olmesartan. The resolution of the binary mixture with multivariate calibration was accomplished mainly by using a partial least squares (PLS) model. Results: The recovery percentages for amlodipine besylate and olmesartan in tablet dosage form were found to be 102 ± 0.24% and 99.56 ± 0.10%, respectively. Conclusion: The method is valid according to International Conference on Harmonization (ICH) guidelines, proving linear over ranges of 200-300 and 500-1500 ng mL⁻¹ for amlodipine and olmesartan, respectively. The method was successfully applied to estimate amlodipine besylate and olmesartan in bulk powder and pharmaceutical preparations.
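
A minimal sketch of the multivariate-calibration step with scikit-learn's PLS regression; synthetic two-component spectra stand in for the measured synchronous fluorescence data of the amlodipine/olmesartan mixtures, and all band positions and concentrations are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(300, 500, 120)
band1 = np.exp(-((wavelengths - 360) / 15) ** 2)   # pseudo-spectrum, drug 1
band2 = np.exp(-((wavelengths - 430) / 20) ** 2)   # pseudo-spectrum, drug 2

C = rng.uniform(0.2, 1.5, size=(25, 2))            # calibration concentrations
X = C @ np.vstack([band1, band2]) + 0.01 * rng.standard_normal((25, 120))

pls = PLSRegression(n_components=2).fit(X, C)      # calibrate on the mixtures
unknown = 0.8 * band1 + 1.1 * band2
print(pls.predict(unknown.reshape(1, -1)))         # ~[0.8, 1.1]
```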

Keywords: amlodipine, central composite face-centred design, experimental design, fractional factorial design, multivariate calibration, olmesartan

Procedia PDF Downloads 121
14868 Numerical Simulation of the Fractional Flow Reserve in the Coronary Artery with Serial Stenoses of Varying Configuration

Authors: Mariia Timofeeva, Andrew Ooi, Eric K. W. Poon, Peter Barlis

Abstract:

Atherosclerotic plaque build-up, commonly known as stenosis, limits blood flow and hence the oxygen and nutrient supply to the heart muscle. Thus, assessment of its severity is of great interest to health professionals. Numerically simulated fractional flow reserve (FFR) has proved to be well correlated with invasively measured FFR, which is used for physiological assessment of the severity of coronary stenosis. Atherosclerosis may impact the diseased artery in several locations, causing serial stenoses, which are a complicated subset of coronary artery disease that requires careful treatment planning. However, the hemodynamics of serial sequential stenoses in coronary arteries has not been extensively studied. The hemodynamics of serial stenoses is complex because the stenoses in the series interact and affect the flow through each other. To address this, serial stenoses in a 3.4 mm left anterior descending (LAD) artery are examined in this study. Two diameter stenoses (DS) are considered: 30 and 50 percent of the reference diameter. Serial stenoses configurations are divided into three groups based on the order of the stenoses in the series, the spacing between them, and the deviation of the stenoses' symmetry (eccentricity). A patient-specific pulsatile waveform is used in the simulations. Blood flow within the stenotic artery is assumed to be laminar, Newtonian, and incompressible. Results for the FFR are reported. Based on the simulation results, it can be deduced that a larger drop in pressure (a smaller value of the FFR) is expected when the percentage of the second stenosis in the series is larger. Varying the distance between the stenoses affects the location of the maximum drop in pressure, while the minimal FFR in the artery remains unchanged. Eccentric serial stenoses are characterized by a noticeably larger decrease in pressure through the stenoses and by the development of chaotic flow downstream of the stenoses. The largest drop in pressure (about a 4% difference compared to the axisymmetric case) is obtained for serial stenoses in which both stenoses are highly eccentric, with the centerlines deflected to different sides of the LAD. In conclusion, varying the configuration of sequential serial stenoses results in a different distribution of FFR through the LAD. The results presented in this study provide insight into the clinical assessment of the severity of coronary serial stenoses, which is shown to depend on the relative position of the stenoses and the deviation of the stenoses' symmetry.

Keywords: computational fluid dynamics, coronary artery, fractional flow reserve, serial stenoses

Procedia PDF Downloads 162
14867 Effect of Tool Size and Cavity Depth on Response Characteristics during Electric Discharge Machining on Superalloy Metal - An Experimental Investigation

Authors: Sudhanshu Kumar

Abstract:

The electrical discharge machining (EDM) process is one of the most applicable machining processes for removing material from hard-to-machine materials such as superalloys. The EDM process converts electrical energy into sparks that erode the metal in the presence of a dielectric medium. In the present investigation, the superalloy Inconel 718 was selected as the workpiece and electrolytic copper as the tool electrode. An attempt has been made to understand the effect of tool size with varying cavity depth during hole drilling by EDM. For a systematic investigation, tool size in terms of tool diameter and cavity depth, along with other important electrical parameters, namely peak current, pulse-on time, and servo voltage, were varied at three different values, and the experiments were designed using the fractional factorial (Taguchi) method. Each experiment was repeated twice under the same conditions in order to understand the variability within the experiments. The effect of variations in the parameters has been evaluated in terms of material removal rate, tool wear rate, and surface roughness. Results reveal that a change in tool diameter during machining affects the response characteristics significantly: the larger tool diameter yielded 13% more material removal rate than the smaller tool diameter. The analysis of the effect of variation in cavity depth is notable: there is no significant effect of cavity depth on material removal rate, tool wear rate, or surface quality. This indicates that experiments to analyze the effects of other parameters can be performed even at smaller cavity depths, which can reduce the cost and time of the experiments. Further, statistical analysis has been carried out to identify interaction effects between the parameters.

Keywords: EDM, Inconel 718, material removal rate, roughness, tool wear, tool size

Procedia PDF Downloads 178
14866 The Volume–Volatility Relationship Conditional to Market Efficiency

Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese

Abstract:

The relation between stock price volatility and trading volume represents a controversial issue that has received remarkable attention over the past decades. An extensive literature shows a positive relation between price volatility and trading volume in financial markets, but the causal relationship that originates this association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst-Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such models mostly have in common the fact that they locally behave as a fractional Brownian motion, in the sense that their local regularity at time t₀ (measured by the local Hurst-Hölder exponent in a neighborhood of t₀) equals the exponent of a fractional Brownian motion of parameter H(t₀). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst-Hölder dynamical exponent as a measure of market efficiency. This allows us to measure, at any time t, the market's departures from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets: using data for the S&P 500 index from 1978 to 2017, we find on the one hand that when efficiency is not accounted for, a positive contemporaneous volume-volatility relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
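
For intuition, the sketch below estimates a single global Hurst exponent by rescaled-range (R/S) analysis on a return series; the paper's pointwise Hurst-Hölder exponent of an mBm would instead be estimated on local windows, which this simple global estimator does not do.

```python
import numpy as np

def hurst_rs(returns, min_chunk=8):
    """Fit log(R/S) ~ H * log(n) over dyadic chunk sizes n."""
    returns = np.asarray(returns, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(returns) // 2:
        chunks = returns[: len(returns) // n * n].reshape(-1, n)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)     # range of cumulative deviations
        S = chunks.std(axis=1, ddof=1)            # chunk standard deviation
        rs.append(np.mean(R / S))
        sizes.append(n)
        n *= 2
    H, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return H

rng = np.random.default_rng(2)
print(hurst_rs(rng.standard_normal(4096)))  # ~0.5 for an efficient (iid) series
```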

Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent

Procedia PDF Downloads 54
14865 Robust Diagnosability of PEMFC Based on Bond Graph LFT

Authors: Ould Bouamama, M. Bressel, D. Hissel, M. Hilairet

Abstract:

The fuel cell (FC) is one of the best alternatives to fossil energy. Recently, the fuel cell research community has shown considerable interest in diagnosis with a view to ensuring safety, security, and availability when faults occur in the process. The difficulty of model-based FC diagnosis is that the model is complex, because several kinds of energy are coupled, and the numerical values of the parameters are not always known or are uncertain. The present paper deals with the use of one tool, the Linear Fractional Transformation bond graph, not only for uncertain modelling but also for monitorability analysis (the ability to detect and isolate faults) and the formal generation of robust fault indicators with respect to parameter uncertainties. The developed theory, applied to a nonlinear FC system, has proved its efficiency.

Keywords: bond graph, fuel cell, fault detection and isolation (FDI), robust diagnosis, structural analysis

Procedia PDF Downloads 340
14864 A Study on Unix Process Crash Based on Efficient Process Management Method

Authors: Guo Haonan, Chen Peiyu, Zhao Hanyu, Burra Venkata Durga Kumar

Abstract:

Unix and Unix-like operating systems are widely used due to their high stability, but they are constrained by the parent-child process structure: a child process depends on its parent, so the crash of a single process may cause the entire process group, or even the entire system, to fail. Another cause of unexpected process termination is a system administrator inadvertently closing the terminal or pseudo-terminal from which the application was launched. This paper analyzes the reasons for these problems and proposes two solutions.
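
As one conventional remedy for the terminal-closure failure mode described above, the sketch below shows the classical Unix double-fork daemonization, which detaches a process from its controlling terminal and session so that closing the launching (pseudo-)terminal no longer kills it; whether this matches the paper's two proposed solutions is an assumption.

```python
import os
import sys

def daemonize():
    if os.fork() > 0:          # first fork: parent exits, child continues
        sys.exit(0)
    os.setsid()                # new session: drop the controlling terminal
    if os.fork() > 0:          # second fork: can never reacquire a terminal
        sys.exit(0)
    os.chdir("/")              # avoid pinning a mount point
    os.umask(0)
    # Redirect standard streams so writes cannot hit a closed terminal.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)

# Usage: call daemonize(), then enter the long-lived service loop.
```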

Keywords: process management, daemon, login-bash and non-login bash, process group

Procedia PDF Downloads 105