Search results for: Exponential
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 207

27 Increase of Organization in Complex Systems

Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan

Abstract:

Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems. The need for such a measure is increasingly felt across all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in Physics, a new measure for the quantity of organization and rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied here to a model system - the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and through constraint and curvature minimization, with an attractor: the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design, and predict the future behavior of complex systems, to achieve the highest rates of self-organization and improve their quality. It can be applied to complex systems in Physics, Chemistry, Biology, Ecology, Economics, cities, network theory, and other fields.
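
As an illustration of the double-exponential trend described above, the following sketch fits q(t) = a*exp(b*exp(c*t)) to an organization-versus-time series with SciPy. The data points and starting values are hypothetical placeholders, not the paper's measurements.

```python
# Illustrative sketch: fitting a double-exponential trend q(t) = a * exp(b * exp(c * t)),
# as the abstract reports for CPU organization over time. All data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, a, b, c):
    """Double-exponential growth: q(t) = a * exp(b * exp(c * t))."""
    return a * np.exp(b * np.exp(c * t))

t = np.linspace(0, 40, 9)  # years since first CPU generation (hypothetical)
q = np.array([1.1, 1.4, 2.0, 3.2, 6.0, 13.0, 35.0, 120.0, 600.0])  # organization units (hypothetical)

# Starting values matter: double exponentials are stiff, so begin near plausible values.
params, _ = curve_fit(double_exponential, t, q, p0=(1.0, 0.1, 0.1), maxfev=10000)
print("fitted a, b, c:", params)
```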

Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.

26 Flow-Through Supercritical Installation for Producing Biodiesel Fuel

Authors: Y. A. Shapovalov, F. M. Gumerov, M. K. Nauryzbaev, S. V. Mazanov, R. A. Usmanov, A. V. Klinov, L. K. Safiullina, S. A. Soshin

Abstract:

A flow-through installation was created and manufactured for the transesterification of fatty acid triglycerides and the production of biodiesel fuel under supercritical fluid conditions. Transesterification of rapeseed oil with ethanol was carried out while varying two parameters, temperature and the alcohol/oil ratio, at a constant pressure of 19 MPa. The kinetics of the yield of fatty acid ethyl esters (FAEE) was determined in the temperature range of 320-380 °C at alcohol/oil molar ratios of 6:1-20:1. The FAEE content was determined by correlating it with the kinematic viscosity of the resulting biodiesel fuel. The maximum FAEE yield (about 90%) was obtained within 30 min at an ethanol/oil molar ratio of 12:1 and a temperature of 380 °C. To study the transesterification of the triglycerides, a kinetic model of an isothermal flow reactor was used, and the reaction order realized in the flow reactor was determined. The first order of the reaction was confirmed by data on the FAEE conversion during the reaction at different temperatures and molar ratios of the initial reagents (ethanol/oil). Using the Arrhenius equation, the values of the effective transesterification reaction rate constants were calculated at different reaction temperatures. In addition, based on the experimental data, the activation energy and the pre-exponential factor of the transesterification reaction were determined.
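
The Arrhenius step described above can be sketched as follows: given effective first-order rate constants at several temperatures, ln k is linear in 1/T, and the slope and intercept yield the activation energy and the pre-exponential factor. The rate constants below are hypothetical placeholders, not the paper's data.

```python
# Arrhenius analysis sketch: ln k = ln A - Ea / (R * T) is linear in 1/T.
import numpy as np

R = 8.314                            # J/(mol*K), universal gas constant
T = np.array([593.0, 623.0, 653.0])  # K (320, 350, 380 degrees C)
k = np.array([0.010, 0.028, 0.070])  # 1/min, hypothetical effective rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                      # activation energy, J/mol
A = np.exp(intercept)                # pre-exponential factor, 1/min
print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3e} 1/min")
```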

Keywords: Biodiesel, fatty acid esters, supercritical fluid technology, transesterification.

25 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams

Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha

Abstract:

The polymer foil used in the manufacture of laminated glass members behaves in a viscoelastic, temperature-dependent manner. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation by Reissner, and adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of a laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle due to Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units.
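
A minimal sketch of the two interlayer ingredients named above, the WLF shift factor and the generalized Maxwell (Prony series) relaxation modulus; all constants are illustrative assumptions, not the paper's calibrated values.

```python
# WLF time-temperature shift and Prony-series relaxation modulus (illustrative).
import numpy as np

def wlf_shift(T, T_ref=20.0, C1=12.6, C2=80.0):
    """log10 a_T = -C1*(T - T_ref) / (C2 + T - T_ref); C1, C2 are assumed values."""
    return 10.0 ** (-C1 * (T - T_ref) / (C2 + T - T_ref))

def relaxation_modulus(t, G_inf, G_i, tau_i):
    """Generalized Maxwell model: G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    t = np.atleast_1d(t)[:, None]
    return G_inf + np.sum(G_i * np.exp(-t / tau_i), axis=1)

# Hypothetical 3-term Prony series for a polymer interlayer (MPa, seconds).
G_inf, G_i, tau_i = 0.2, np.array([1.5, 0.8, 0.3]), np.array([0.1, 10.0, 1000.0])

# Response at 40 degrees C: evaluate G at the reduced time t / a_T.
t = np.logspace(-2, 4, 7)
a_T = wlf_shift(40.0)
print(relaxation_modulus(t / a_T, G_inf, G_i, tau_i))
```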

Keywords: Laminated glass, finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, Williams-Landel-Ferry equation, Newton method.

24 Thermal Analysis of Extrusion Process in Plastic Making

Authors: S. K. Fasogbon, T. M. Oladosu, O. S. Osasuyi

Abstract:

Plastic extrusion has been an important plastic production process since the 19th century. In the plastic extrusion process, wide variation in temperature along the extrudate usually leads to scrap formation on the side of finished products. To avoid this situation, the temperature distribution along the extrudate must be well understood. This work developed an analytical model that predicts the temperature distribution over the billet (the polymer melt) along the extrudate during the extrusion process, with the limitation that the model does not cover biopolymers such as DNA. The model was solved and simulated. Results for two different plastic materials (polyvinylchloride and polycarbonate) were generated using a self-developed MATLAB code and commercially developed software (ANSYS) and were ultimately compared. It was observed that heat flows from the die entry, where the billet enters, down to the die exit. The plots indicate a natural exponential decay of temperature with time and along the die length: the temperature is 413 K and 474 K for polyvinylchloride and polycarbonate respectively at the entry, and 299.3 K and 328.8 K at the exit, for a surrounding temperature of 298 K. The extrusion model was validated by comparing the MATLAB code simulation with the commercially available ANSYS simulation, and the results favourably agree. This work concludes that the developed mathematical model and the self-generated MATLAB code are reliable tools for predicting the temperature distribution along the extrudate in the plastic extrusion process.
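
Using the entry/exit temperatures and the ambient temperature reported above, a lumped exponential-decay profile T(x) = T_inf + (T_0 - T_inf) * exp(-m*x) can be checked directly; the die length below is an assumed placeholder, since the abstract does not give one.

```python
# Back-of-envelope check of the exponential decay reported in the abstract.
import numpy as np

T_inf = 298.0  # K, surrounding temperature (from the abstract)
L = 1.0        # m, assumed die length (not given in the abstract)

for name, T0, TL in [("PVC", 413.0, 299.3), ("polycarbonate", 474.0, 328.8)]:
    # Solve T(L) = T_inf + (T0 - T_inf) * exp(-m * L) for the decay constant m.
    m = -np.log((TL - T_inf) / (T0 - T_inf)) / L
    # Temperature at the die midpoint under the same model:
    T_mid = T_inf + (T0 - T_inf) * np.exp(-m * L / 2)
    print(f"{name}: m = {m:.2f} 1/m, T(L/2) = {T_mid:.1f} K")
```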

Keywords: ANSYS, extrusion process, MATLAB, plastic making, thermal analysis.

23 Study of Human Upper Arm Girth during Elbow Isokinetic Contractions Based on a Smart Circumferential Measuring System

Authors: Xi Wang, Xiaoming Tao, Raymond C. H. So

Abstract:

As one of the more convenient and noninvasive sensing approaches, automatic limb girth measurement has been applied to detect the intention behind human motion from muscle deformation. Its sensing validity has been demonstrated by preliminary research but still needs more fundamental study, especially for kinetic contraction modes. Based on novel fabric strain sensors, a soft and smart limb girth measurement system was developed by the authors' group, which can measure the limb girth in motion. Experiments were carried out on elbow isometric flexion and elbow isokinetic flexion (biceps' isokinetic contractions) at 90°/s, 60°/s, and 120°/s for 10 subjects (2 canoeists and 8 ordinary people). After removal of the natural circumferential increments due to elbow position, the joint torque was found not to be uniformly sensitive to the limb circumferential strains; its sensitivity declines as the elbow joint angle rises, regardless of the angular speed. Moreover, the maximum joint torque was found to be an exponential function of the joint's angular speed. This research contributes to the application of automatic limb girth measurement during kinetic contractions and is useful for predicting the contraction level of voluntary skeletal muscles.

Keywords: Fabric strain sensor, muscle deformation, isokinetic contraction, joint torque, limb girth strain.

22 All Types of Base Pair Substitutions Induced by γ-Rays in Haploid and Diploid Yeast Cells

Authors: Natalia Koltovaya, Nadezhda Zhuchkina, Ksenia Lyubimova

Abstract:

We study the biological effects induced by ionizing radiation in view of therapeutic exposure and the idea of space flights beyond Earth's magnetosphere. In particular, we examine the differences in base pair substitution induction by ionizing radiation between model haploid and diploid yeast Saccharomyces cerevisiae cells. Such mutations are difficult to study in higher eukaryotic systems. In our research, we have used a collection of six isogenic trp5-strains and 14 isogenic haploid and diploid cyc1-strains that are specific markers of all possible base-pair substitutions. These strains differ from each other only in single base substitutions within codon-50 of the trp5 gene or codon-22 of the cyc1 gene. Different mutation spectra were obtained for the two haploid genetic assays (trp5 and cyc1), and different mutation spectra were obtained for the same cyc1 genetic system in cells of different ploidy, haploid and diploid. The dose dependence was a linear function in haploid cells and exponential in diploid cells. We suggest that the differences between haploid yeast strains reflect the dependence on sequence context, while the differences between haploid and diploid strains reflect different molecular mechanisms of mutation.
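
The contrast between the two dose-response shapes reported above can be sketched by fitting both models to the same data and comparing residuals; the doses and mutant frequencies below are hypothetical placeholders.

```python
# Linear vs. exponential dose-response comparison on hypothetical data.
import numpy as np

dose = np.array([0.0, 10.0, 20.0, 40.0, 80.0])  # Gy, hypothetical
freq = np.array([1.0, 1.6, 2.4, 5.9, 33.0])     # mutants per 10^7 cells, hypothetical

# Linear model: f = a*d + b, fitted by least squares.
a, b = np.polyfit(dose, freq, 1)
sse_lin = np.sum((a * dose + b - freq) ** 2)

# Exponential model: f = f0 * exp(c*d), fitted as a line in log space.
c, log_f0 = np.polyfit(dose, np.log(freq), 1)
sse_exp = np.sum((np.exp(log_f0) * np.exp(c * dose) - freq) ** 2)

print(f"linear SSE = {sse_lin:.1f}, exponential SSE = {sse_exp:.1f}")
```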

Keywords: Base pair substitutions, γ-rays, haploid and diploid cells, yeast Saccharomyces cerevisiae.

21 Understanding Innovation by Analyzing the Pillars of the Global Competitiveness Index

Authors: Ujjwala Bhand, Mridula Goel

Abstract:

The Global Competitiveness Index (GCI), prepared by the World Economic Forum, has become a benchmark for studying the competitiveness of countries and for understanding the factors that enable competitiveness. Innovation is a key pillar of competitiveness and has the unique property of enabling exponential economic growth. This paper analyzes how the pillars comprising the Global Competitiveness Index affect innovation and whether GDP growth can directly affect innovation outcomes for a country. The key objective of the study is to identify areas on which governments of developing countries can focus policies and programs to improve their country's innovativeness. We have compiled a panel data set for the top innovating countries and the large emerging economies known as BRICS, from 2007-08 to 2014-15, in order to find the significant factors that affect innovation. The results of the regression analysis suggest that governments should make policies to improve labor market efficiency, establish sophisticated business networks, provide basic health and primary education to their people, and strengthen the quality of higher education and training services in the economy. The achievements of smaller economies in innovation suggest that concerted efforts by governments can counter any size-related disadvantage, and in fact can provide greater flexibility and speed in encouraging innovation.
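
A hedged sketch of the kind of regression described above: innovation scores regressed on other GCI pillars by pooled OLS. The file name, column names, and specification are assumptions; the study's exact panel estimator is not given in the abstract.

```python
# Pooled OLS sketch over a country-year panel of GCI pillar scores (assumed layout).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per country-year with pillar scores (hypothetical file).
df = pd.read_csv("gci_panel_2007_2015.csv")

model = smf.ols(
    "innovation ~ labor_market_efficiency + business_sophistication"
    " + health_primary_education + higher_education_training + gdp_growth",
    data=df,
).fit()
print(model.summary())
```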

Keywords: Innovation, Global Competitiveness Index, BRICS, economic growth.

20 Enhancing Performance of Bluetooth Piconets Using Priority Scheduling and Exponential Back-Off Mechanism

Authors: Dharmendra Chourishi “Maitraya”, Sridevi Seshadri

Abstract:

Bluetooth is a personal wireless communication technology being applied in many scenarios. It is an emerging standard for short-range, low-cost, low-power wireless access technology. Existing MAC (Medium Access Control) scheduling schemes only provide best-effort service for all master-slave connections. It is very challenging to provide QoS (Quality of Service) support for different connections due to the master-driven TDD (Time Division Duplex) scheme, and no available solution supports both the delay and bandwidth guarantees required by real-time applications. This paper addresses the issue of how to enhance QoS support in a Bluetooth piconet. The Bluetooth specification proposes a Round Robin scheduler as a possible solution for scheduling transmissions in a piconet. We propose an algorithm which reduces bandwidth waste and enhances the efficiency of the network. We define token counters to estimate the traffic of real-time slaves. To increase bandwidth utilization, a back-off mechanism is then presented for best-effort slaves to decrease the frequency of polling idle slaves. Simulation results demonstrate that our scheme achieves better performance than Round Robin scheduling.
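
The back-off idea described above can be sketched as follows: a best-effort slave that repeatedly has nothing to send is polled exponentially less often, up to a cap. The doubling policy and parameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Exponential back-off for polling an idle best-effort slave (illustrative sketch).

class BackoffPoller:
    def __init__(self, max_interval=32):
        self.interval = 1            # poll every slot initially
        self.max_interval = max_interval
        self.countdown = 0

    def should_poll(self):
        """Called once per master polling slot; True when this slave is due."""
        if self.countdown > 0:
            self.countdown -= 1
            return False
        return True

    def report(self, had_data):
        """Update the back-off state after a poll."""
        if had_data:
            self.interval = 1        # reset: the slave is active again
        else:
            self.interval = min(self.interval * 2, self.max_interval)
        self.countdown = self.interval - 1

# Example: a persistently idle slave is polled on slots 0, 2, 6, 14, 30.
poller, polled = BackoffPoller(), []
for slot in range(40):
    if poller.should_poll():
        polled.append(slot)
        poller.report(had_data=False)
print(polled)
```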

Keywords: Piconet, Medium Access Control, Polling algorithm, Scheduling, QoS, Time Division Duplex (TDD).

19 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases affecting humankind, exposing millions of people to serious health risks every year. The diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system comprises three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an "indirect approach" is used for fuzzy system modeling, implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieves an accuracy of approximately 90.9%. Since the process of diagnosis faces vagueness and uncertainty in the final decision, the imprecise knowledge is then managed using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.
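
A minimal sketch of the core interval Type-II notion used above: a Gaussian membership function with an uncertain mean produces an upper and a lower membership grade, and the interval between them carries the uncertainty a Type-I system cannot express. Parameter values are illustrative only.

```python
# Interval Type-II Gaussian membership function with an uncertain mean in [m1, m2].
import numpy as np

def it2_gaussian(x, m1, m2, sigma):
    """Return (lower, upper) membership grades for inputs x."""
    def g(x, m):
        return np.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper MF: 1 inside [m1, m2], nearest-mean Gaussian outside.
    upper = np.where(x < m1, g(x, m1), np.where(x > m2, g(x, m2), 1.0))
    # Lower MF: the smaller of the two boundary Gaussians.
    lower = np.minimum(g(x, m1), g(x, m2))
    return lower, upper

x = np.linspace(0, 10, 5)
lo, up = it2_gaussian(x, m1=4.0, m2=6.0, sigma=1.5)
for xi, l, u in zip(x, lo, up):
    print(f"x = {xi:4.1f}: membership in [{l:.3f}, {u:.3f}]")
```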

Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.

18 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices strains the limited available frequencies or spectrum bands, since spectrum is a natural resource that cannot be expanded, while licensed frequencies sit idle most of the time. Cognitive radio is one solution to these problems: a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security: because CRNs use radio frequencies as the transmission medium, they share the same issues as other wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between them. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Queueing network models with preemptive resume and preemptive repeat identical priority are applied to measure the impact of security on performance in CRNs with or without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
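
A sketch of the GE distribution mentioned above, in its usual two-moment queueing form: an inter-event time is zero (a batch) with probability 1 - tau and exponential otherwise, with tau = 2 / (C2 + 1) for a squared coefficient of variation C2 >= 1. Parameters are illustrative.

```python
# Sampling GE-distributed (bursty) inter-event times with a given mean and SCV.
import numpy as np

def sample_ge(mean, scv, size, rng):
    """Draw GE inter-event times: zero w.p. 1 - tau, else exponential."""
    tau = 2.0 / (scv + 1.0)
    rate = tau / mean                  # rate of the exponential branch
    exp_times = rng.exponential(1.0 / rate, size)
    is_batch = rng.random(size) > tau  # zero inter-event time => batch arrival
    return np.where(is_batch, 0.0, exp_times)

rng = np.random.default_rng(7)
t = sample_ge(mean=1.0, scv=4.0, size=100_000, rng=rng)
print("sample mean:", t.mean())                 # ~1.0
print("sample SCV :", t.var() / t.mean() ** 2)  # ~4.0
```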

Keywords: Cloud platforms, cognitive radio networks, GE-type distribution, performance vs. security.

17 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of fading noise are not analytically tractable and pose a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, noise is multiplicative and is referred to as stochastically local fading. Many analytical investigations of multiplicative noise invoke exponential or Gamma statistics. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multidiversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
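
The layering named above can be sketched numerically: a lognormal intensity drives a Poisson number of scatterers (a doubly stochastic, or Cox, process), each marked with a Rayleigh amplitude. This simplified sketch omits the Rician marks and the Nakagami line-of-sight term of the full model.

```python
# Received power from a lognormally modulated Poisson field of Rayleigh scatterers.
import numpy as np

rng = np.random.default_rng(42)

def local_fading_power(mu_log, sigma_log, rayleigh_scale, n_trials):
    intensity = rng.lognormal(mu_log, sigma_log, n_trials)  # random mean count
    counts = rng.poisson(intensity)                         # scatterers per trial
    powers = np.empty(n_trials)
    for i, n in enumerate(counts):
        amps = rng.rayleigh(rayleigh_scale, n)              # marks: path amplitudes
        phases = rng.uniform(0.0, 2.0 * np.pi, n)
        field = np.sum(amps * np.exp(1j * phases))          # coherent sum of paths
        powers[i] = np.abs(field) ** 2
    return powers

p = local_fading_power(mu_log=1.5, sigma_log=0.5, rayleigh_scale=1.0, n_trials=2000)
print("mean power:", p.mean())
```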

Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.

16 A Computational Stochastic Modeling Formalism for Biological Networks

Authors: Werner Sandmann, Verena Wolf

Abstract:

Stochastic models of biological networks are well established in systems biology, where the computational treatment of such models often focuses on the solution of the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations that are directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation, such as a stochastic Petri net. A serious problem then arises from the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from state space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models, a compact mathematical formalism that circumvents state space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes, and the implementation coded in a programming language. Moreover, the compact model representation provides the opportunity to apply non-simulative solution techniques while preserving the possible use of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
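
For the enzyme-catalyzed substrate conversion mentioned above, the transition classes and a stochastic simulation of the underlying Markov chain (Gillespie's direct method) can be sketched as follows; rate constants and initial counts are illustrative placeholders.

```python
# Gillespie direct-method simulation of E + S <-> ES -> E + P.
import numpy as np

rng = np.random.default_rng(1)

# State: molecule counts [E, S, ES, P]; three transition classes.
state = np.array([100, 300, 0, 0])
changes = np.array([[-1, -1,  1, 0],   # E + S -> ES
                    [ 1,  1, -1, 0],   # ES -> E + S
                    [ 1,  0, -1, 1]])  # ES -> E + P
k = np.array([0.01, 0.1, 0.5])         # stochastic rate constants (assumed)

t, t_end = 0.0, 10.0
while t < t_end:
    e, s, es, p = state
    propensities = np.array([k[0] * e * s, k[1] * es, k[2] * es])
    total = propensities.sum()
    if total == 0.0:                   # no reaction can fire; absorbing state
        break
    t += rng.exponential(1.0 / total)  # time to the next reaction
    reaction = rng.choice(3, p=propensities / total)
    state += changes[reaction]

print("final counts [E, S, ES, P]:", state)
```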

Keywords: Computational Modeling, Biological Networks, Stochastic Models, Markov Chains, Transition Class Models.

15 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers, as well as organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained with a Mach-Zehnder interferometer using a beam from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half-maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. The average of the FWHMs of group A shows nearly linear behavior, so the exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential trend as the temperature rises from 373 K to 393 K. Student's t-test results show, at the 95% level (α = 0.05), a variation in the molecular composition of the two samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
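
The FWHM measurement used above can be sketched as follows: given a sampled intensity profile from an interferogram, locate the two half-maximum crossings by linear interpolation. The profile below is synthetic.

```python
# Full width at half maximum of a single-peaked sampled profile.
import numpy as np

def fwhm(x, y):
    half = y.max() / 2.0
    above = y >= half
    i = np.argmax(above)                     # first sample at/above half max
    j = len(y) - np.argmax(above[::-1]) - 1  # last sample at/above half max
    # Linear interpolation at the two half-max crossings.
    x_left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    x_right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return x_right - x_left

x = np.linspace(-5, 5, 401)
y = np.exp(-0.5 * (x / 1.2) ** 2)  # synthetic Gaussian fringe profile, sigma = 1.2
print("FWHM:", fwhm(x, y))         # ~2.355 * 1.2 = 2.83
```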

Keywords: Food industry, interferometric, oils, quality control.

14 The Effect of Magnetite Particle Size on Methane Production by Fresh and Degassed Anaerobic Sludge

Authors: E. Al-Essa, R. Bello-Mendoza, D. G. Wareham

Abstract:

Anaerobic batch experiments were conducted to investigate the effect of magnetite supplementation (7 mM) on methane production from digested sludge in two different microbial growth phases, namely fresh sludge (exponential growth phase) and degassed sludge (endogenous decay phase). Three particle sizes were assessed: small (50-150 nm), medium (168-490 nm), and large (800 nm - 4.5 µm). Results show that, in the case of the fresh sludge, magnetite significantly enhanced the methane production rate (up to 32%) and reduced the lag phase (by 15%-41%) as compared to the control, regardless of the particle size used. However, the cumulative methane produced at the end of the incubation was comparable in all treatment and control bottles. In the case of the degassed sludge, only the medium-sized magnetite particles significantly increased the methane production rate (12% higher) as compared to the control. Small and large particles had little effect on the methane production rate but did result in an extended lag phase, which led to significantly lower cumulative methane production at the end of the incubation period. These results suggest that magnetite produces a clear and positive effect on methane production only when an active and balanced microbial community is present in the anaerobic digester. It is concluded that (i) the effect of magnetite particle size on increasing the methane production rate and reducing lag phase duration is strongly influenced by the initial metabolic state of the microbial consortium, and (ii) particle size positively affects methane production when it is within the nanometer range.
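
Rate and lag-phase figures like those above are typically obtained by fitting a cumulative methane curve; the modified Gompertz model below is one commonly used form (an assumption here, since the abstract does not name the model). All data are synthetic.

```python
# Modified Gompertz fit for cumulative methane production (illustrative model choice).
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    # P: methane potential (mL), Rm: max production rate (mL/d), lam: lag (d).
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

t = np.linspace(0, 30, 16)               # days
methane = gompertz(t, 250.0, 25.0, 3.0)  # synthetic "measurements"
methane += np.random.default_rng(0).normal(0.0, 3.0, t.size)

(P, Rm, lam), _ = curve_fit(gompertz, t, methane, p0=(200.0, 20.0, 2.0))
print(f"P = {P:.0f} mL, Rm = {Rm:.1f} mL/d, lag = {lam:.1f} d")
```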

Keywords: Anaerobic digestion, iron oxide (Fe3O4), methanogenesis, nanoparticle.

13 Advantages of Large Strands in Precast/Prestressed Concrete Highway Application

Authors: Amin Akhnoukh

Abstract:

The objective of this research is to investigate the advantages of using large-diameter 0.7 inch prestressing strands in pretensioning applications. The advantages of large-diameter strands are most pronounced in heavy construction applications. Bridges and tunnels are subjected to higher daily traffic and an exponential increase in truck weights, which raises the demand for higher structural capacity. In this research, precast prestressed I-girders were considered as a case study. Flexure capacities of girders fabricated using 0.7 inch strands and different concrete strengths were calculated and compared to the capacities of 0.6 inch strand girders fabricated using equivalent concrete strengths. The effect of bridge deck concrete strength on composite deck-girder section capacity was investigated due to its possible effect on final section capacity. Finally, the bridge cross-sections of girders designed using regular 0.6 inch strands and large-diameter 0.7 inch strands were compared. The findings showed that the structural advantages of 0.7 inch strands allow for fewer bridge girders, reduced material quantities, and lighter members. These advantages are maximized when high-strength concrete (HSC) is used in girder fabrication and concrete of at least 5 ksi compressive strength is used for the bridge decks. The use of 0.7 inch strands in the bridge industry can partially contribute to improving bridge conditions, minimizing construction cost, and reducing project construction duration.

Keywords: 0.7 inch strands, I-girders, pretension, flexure capacity.

12 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Beyond yielding overly conservative results, numerical methods that require extensive computational effort or copious user parameters hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work is the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires a critical volume in which the crack exists. To understand the effectiveness of this technique, this study investigated components with different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the milder stress transitions of a large circular notch. The proposed method has been shown to be a worthwhile consideration.
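
A generic sketch of the weighted stress-gradient idea described above: the stress profile ahead of the notch is integrated against a weighting function. The exponential weight form and all constants here are illustrative assumptions, not the paper's calibrated method.

```python
# Stress-intensity estimate from a weighted integral of the notch stress field.
import numpy as np

def sif_weighted(x, sigma, a, L):
    """K estimate: integrate sigma(x) against a normalized exponential weight."""
    w = np.exp(-x / L) / L
    f = sigma * w
    sigma_eff = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoidal rule
    return sigma_eff * np.sqrt(np.pi * a)                    # scaled like K = sigma_eff*sqrt(pi*a)

x = np.linspace(0.0, 5e-3, 500)      # m, distance ahead of the notch
sigma = 300e6 * np.exp(-x / 0.8e-3)  # Pa, hypothetical decaying stress field
print("K =", sif_weighted(x, sigma, a=1e-3, L=0.5e-3) / 1e6, "MPa*sqrt(m)")
```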

Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.

11 Novel Adaptive Channel Equalization Algorithms by Statistical Sampling

Authors: János Levendovszky, András Oláh

Abstract:

In this paper, novel statistical sampling based equalization techniques and CNN based detection are proposed to increase the spectral efficiency of multiuser communication systems over fading channels. Multiuser communication combined with selective fading can result in interference which severely deteriorates the quality of service in wireless data transmission (e.g. CDMA in mobile communication). The paper introduces new equalization methods that combat interference by minimizing the Bit Error Rate (BER) as a function of the equalizer coefficients. This provides higher performance than traditional Minimum Mean Square Error equalization. Since the calculation of BER as a function of the equalizer coefficients is of exponential complexity, statistical sampling methods are proposed to approximate the gradient, which yields fast equalization and superior performance over the traditional algorithms. Efficient estimation of the gradient is achieved by using stratified sampling and the Li-Silvester bounds. A simple mechanism is derived to identify the dominant samples in real time, for the sake of efficient estimation. The equalizer weights are adapted recursively by minimizing the estimated BER. The near-optimal performance of the new algorithms is also demonstrated by extensive simulations. The paper also develops a Cellular Neural Network (CNN) based approach to detection, in which fast quadratic optimization is carried out by the CNN, whereas the task of the equalizer is to ensure the required template structure (sparseness) for the CNN. The performance of this method has also been analyzed by simulations.
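
The statistical sampling idea above can be illustrated in miniature: estimating an expectation (here a toy error probability standing in for a BER evaluation) by stratified sampling rather than plain Monte Carlo, which reduces estimator variance for the same sample budget. The toy indicator is an illustrative stand-in, not the paper's estimator.

```python
# Plain Monte Carlo vs. stratified sampling for a toy error-probability estimate.
import numpy as np

rng = np.random.default_rng(3)

def error_indicator(u):
    """Toy stand-in: an 'error' occurs when the sampled variate exceeds 0.9."""
    return (u > 0.9).astype(float)

n = 10_000
# Plain Monte Carlo over U(0, 1).
plain = error_indicator(rng.random(n)).mean()

# Stratified: split U(0, 1) into equal strata and sample uniformly within each.
strata = 100
u = (np.arange(strata)[:, None] + rng.random((strata, n // strata))) / strata
stratified = error_indicator(u).mean()

print(f"plain MC: {plain:.4f}, stratified: {stratified:.4f}, exact: 0.1000")
```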

Keywords: Cellular Neural Network, channel equalization, communication over fading channels, multiuser communication, spectral efficiency, statistical sampling.

10 Simulation of Organic Matter Variability on a Sugarbeet Field Using the Computer Based Geostatistical Methods

Authors: M. Rüstü Karaman, Tekin Susam, Fatih Er, Servet Yaprak, Osman Karkacıer

Abstract:

Computer-based geostatistical methods can offer effective data analysis for agricultural areas by using vectorial data and their objective information. These methods help to detect spatial changes across different locations of large agricultural lands, which leads to effective fertilization for optimal yield with reduced environmental pollution. In this study, topsoil (0-20 cm) and subsoil (20-40 cm) samples were taken from a sugar beet field on a 20 x 20 m grid. Plant samples were also collected from the same plots. Physical and chemical analyses of these samples were made by routine methods. According to the derived coefficients of variation, topsoil organic matter (OM) varied more than subsoil OM; the highest C.V. value, 17.79%, was found for topsoil OM. The data were analyzed comparatively using kriging methods, which are widely used in geostatistics. Several interpolation methods (ordinary, simple, and universal kriging) and semivariogram models (spherical, exponential, and Gaussian) were tested in order to choose the most suitable ones. The average standard deviations of values estimated by the simple kriging interpolation method were less than the average standard deviations of measured values (topsoil OM ± 0.48, N ± 0.37, subsoil OM ± 0.18). The most suitable combination was simple kriging with the exponential semivariogram model for topsoil, and simple kriging with the spherical semivariogram model for subsoil. The results also show that these computer-based geostatistical methods should be tested and calibrated for different experimental conditions and semivariogram models.
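
The three semivariogram models compared above have standard textbook forms, sketched below as functions of lag distance given a nugget, sill, and range; the parameter values are illustrative.

```python
# Spherical, exponential, and Gaussian semivariogram models (standard forms).
import numpy as np

def spherical(h, nugget, sill, rng_):
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, inside, sill)

def exponential(h, nugget, sill, rng_):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * np.asarray(h) / rng_))

def gaussian(h, nugget, sill, rng_):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (np.asarray(h) / rng_) ** 2))

h = np.array([0.0, 20.0, 40.0, 80.0, 160.0])  # lag distances, m (20 m grid)
for model in (spherical, exponential, gaussian):
    print(model.__name__, model(h, nugget=0.05, sill=0.30, rng_=100.0))
```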

Keywords: Geostatistic, kriging, organic matter, sugarbeet.

9 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy

Authors: Walenty Oniszczuk

Abstract:

The use of buffer thresholds, blocking, and adequate service strategies are well-known techniques for traffic congestion control in computer networks. This motivates the study of series queues with blocking, feedback (service under the Head-of-Line (HoL) priority discipline), and finite-capacity buffers with thresholds. In this paper, the external traffic is modelled using a Poisson process, and the service times are modelled using exponential distributions. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. The network behaves as follows. A task that finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. Otherwise, a "no two priority services in succession" procedure is applied to fed-back tasks, preventing a possible overflow in the first buffer. Using an open Markovian queueing schema with blocking, priority feedback service, and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate: it is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by calculation of the main measures of effectiveness. Consequently, efficient expressions of low computational cost are determined. Based on numerical experiments and the collected results, we conclude that the proposed model with blocking, feedback, and thresholds can provide accurate performance estimates of networks linked in series.
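
A generic sketch of the solution strategy described above: assemble the generator matrix Q of a finite Markov chain from the state graph, then solve the steady-state equations pi*Q = 0 with sum(pi) = 1. The tiny three-state chain below is a stand-in for the paper's two-dimensional state graph.

```python
# Steady-state probabilities of a small continuous-time Markov chain.
import numpy as np

# Hypothetical generator for a 3-state chain (rows sum to zero).
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])

# Replace one balance equation with the normalization condition sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(len(Q))])
b = np.zeros(len(Q))
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("steady-state probabilities:", pi)  # use pi to derive measures of effectiveness
```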

Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-base networks.

8 Forecast of the Small Wind Turbines Sales with Replacement Purchases and with or without Account of Price Changes

Authors: V. Churkin, M. Lopatin

Abstract:

The purpose of the paper is to estimate the US small wind turbines market potential and to forecast small wind turbine sales in the US. The forecasting method is based on the Bass model and the generalized Bass model of innovation diffusion under replacement purchases. An exponential distribution is used to model replacement purchases; its single parameter is determined by the average lifetime of small wind turbines. The identification of the model parameters is based on nonlinear regression analysis of the annual sales statistics published by the American Wind Energy Association (AWEA) from 2001 to 2012. The estimate of the US average market potential of small wind turbines (for adoption purchases), without accounting for price changes, is 57080 (confidence interval from 49294 to 64866 at P = 0.95) for an average turbine lifetime of 15 years, and 62402 (confidence interval from 54154 to 70648 at P = 0.95) for an average lifetime of 20 years. In the first case the explained variance is 90.7%, in the second 91.8%. The effect of wind turbine price changes on sales was estimated using the generalized Bass model, which required a price forecast; for this, a polynomial regression function based on the Berkeley Lab statistics was used. The estimate of the US average market potential of small wind turbines (for adoption purchases) in that case is 42542 (confidence interval from 32863 to 52221 at P = 0.95) for an average lifetime of 15 years, and 47426 (confidence interval from 36092 to 58760 at P = 0.95) for an average lifetime of 20 years. The explained variance is 95.3% in both cases.
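
The plain Bass model underlying the forecast above can be sketched as follows, with cumulative adoptions N(t) = m(1 - e^(-(p+q)t)) / (1 + (q/p)e^(-(p+q)t)). The annual sales below are hypothetical placeholders, not the AWEA series, and replacement purchases are omitted for brevity.

```python
# Fitting the Bass diffusion model to hypothetical cumulative sales data.
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

years = np.arange(1, 13)                              # 2001..2012 as t = 1..12
sales = np.array([900, 1700, 2400, 3100, 3700, 4200,  # hypothetical annual sales
                  4600, 4900, 5000, 4900, 4600, 4200])
cumulative = np.cumsum(sales)

params, _ = curve_fit(bass_cumulative, years, cumulative,
                      p0=(60000, 0.01, 0.4), maxfev=20000)
m, p, q = params
print(f"market potential m = {m:.0f}, p = {p:.4f}, q = {q:.3f}")
```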

Keywords: Bass model, generalized Bass model, replacement purchases, sales forecasting of innovations, statistics of sales of small wind turbines in the United States.

7 Determination of Cd, Zn, K, pH, TNV, Organic Material and Electrical Conductivity (EC) Distribution in Agricultural Soils Using Geostatistics and GIS (Case Study: South-Western Natanz, Iran)

Authors: Abbas Hani, Seyed Ali Hoseini Abari

Abstract:

Soil chemical and physical properties play important roles in environmental quality, agricultural sustainability, and human health. The objective of this research is to determine the spatial distribution patterns of Cd, Zn, K, pH, TNV, organic material, and electrical conductivity (EC) in agricultural soils of the Natanz region in Esfehan province. In this study, geostatistical and non-geostatistical methods were used to predict the spatial distribution of these parameters. 64 composite soil samples were taken at 0-20 cm depth. The study area is located in the south-western Natanz agricultural lands, with an area of 21660 hectares. The spatial distributions of Cd, Zn, K, pH, TNV, organic material, and EC were determined using geostatistics and a geographic information system. Results showed that the Cd, pH, TNV, and K data had normal distributions, while the Zn, OC, and EC data did not. Kriging, Inverse Distance Weighting (IDW), Local Polynomial Interpolation (LPI), and Radial Basis Function (RBF) methods were used for interpolation. Trend analysis showed that organic carbon had no trend in either the north-south or east-west direction, while K and TNV had second-degree trends. Error measures including the mean absolute error (MAE), mean squared error (MSE), and mean bias error (MBE) were used. Ordinary kriging (exponential model), LPI, RBF, and IDW were chosen as the best methods for interpolating the soil parameters. Prediction maps produced by disjunctive kriging showed an intensive shortage of organic matter across the whole study area, and more than 63.4 percent of the study area had a shortage of K.
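
The IDW interpolation compared above admits a compact sketch: a predicted value is the distance-weighted average of sampled values with weights 1/d^p. Coordinates and values below are illustrative placeholders.

```python
# Inverse distance weighting (IDW) interpolation at a single target point.
import numpy as np

def idw(xy_samples, values, xy_target, power=2.0):
    d = np.linalg.norm(xy_samples - xy_target, axis=1)
    if np.any(d == 0.0):             # target coincides with a sample point
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
ec = np.array([1.2, 2.5, 0.8, 3.1])  # hypothetical EC values, dS/m
print("EC at (40, 60):", idw(xy, ec, np.array([40.0, 60.0])))
```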

Keywords: Electrical conductivity, Geostatistics, Geographical Information System, TNV

6 An Overview of the Islamic Banking Development in the United Kingdom, Malaysia, Saudi Arabia, Iran, Nigeria, Kenya and Uganda

Authors: Pradeep Kulshrestha, Maulana Ayoub Ali

Abstract:

The penetration of Islamic banking products and services has grown at an exponential rate in many parts of the world. Many factors have contributed to this growth, including, but not limited to, the rapid growth in the number of Muslims who are uncomfortable with conventional ways of banking, the interest and high interest rates charged by conventional banks and financial institutions, and the financial inclusion campaigns conducted in many countries. The system faces legal challenges, which open the research door for practitioners and academics to find solutions to those challenges. This paper investigates the development of the Islamic banking system in the United Kingdom (UK), Saudi Arabia, Malaysia, Iran, Kenya, Nigeria, and Uganda, in order to understand the modalities employed to run an Islamic banking system in these countries. The methodology employed in this research is doctrinal: legislation, policies, and other legal instruments have been carefully studied and analysed. In addition, papers from academic journals, books, and financial reports have been analysed in depth for the purpose of enriching the paper and arriving at tangible results. The paper found that, in Asia, Malaysia has created the smoothest legal platform for an Islamic banking system to work properly. The United Kingdom has worked hard to accommodate the banking system without affecting conventional banking methods and without favouring the operations of Islamic banks, and it also strives to make the UK an Islamic banking and finance hub in Europe. The entire banking system in Iran is Islamic, while Nigeria has undergone several legal reforms to suit the Islamic banking system. Kenya and Uganda are at a different pace in making the Islamic banking system work alongside conventional banking.

Keywords: Shariah, Islamic banking, law, alternative banking.

5 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a consistent segment of the world's population lives in urban areas, and this proportion will vastly increase in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional, analogue techniques of data collection. This paper investigates the prospects of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. Collecting new layers of information would enhance planners' capability to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in greater depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
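
One common way to build a composite index of this kind is sketched below: standardize each indicator to z-scores, orient them so that larger means more discomfort, and combine with weights. The indicators, weights, and values are illustrative assumptions, not the study's data for Rome.

```python
# Weighted z-score composite index over hypothetical urban-zone indicators.
import numpy as np

# Rows: urban zones; columns: unemployment rate, rent-to-income ratio,
# NEET share (hypothetical indicators for residents aged 18-29).
X = np.array([[14.0, 0.42, 16.0],
              [ 9.0, 0.35, 11.0],
              [22.0, 0.55, 24.0],
              [12.0, 0.30, 13.0]])
weights = np.array([0.4, 0.3, 0.3])       # assumed weighting

z = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score per indicator
index = z @ weights                       # weighted composite score
print("youth economic discomfort index by zone:", np.round(index, 2))
```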

Keywords: Data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index.

4 Conflation Methodology Applied to Flood Recovery

Authors: E. L. Suarez, D. E. Meeroff, Y. Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed that combines the probability of recovering from a severe flooding event with the probability of community performance during a nuisance event. The consolidated model, the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The conflation is defined as the single distribution resulting from the normalized product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of those traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, ignoring the additional information provided by each distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
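
Conflation as defined above can be checked numerically: normalize the product of the input densities. For two normal inputs the result is again normal with a precision-weighted mean, which is why conflation favors the lower-variance source; the recovery-time figures below are illustrative.

```python
# Numeric conflation of two recovery-time distributions (illustrative values).
import numpy as np

t = np.linspace(0.0, 60.0, 6001)  # recovery time, days
dt = t[1] - t[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

severe = normal_pdf(t, 30.0, 10.0)   # recovery from a severe event (assumed)
nuisance = normal_pdf(t, 12.0, 3.0)  # recovery from nuisance flooding (assumed)

conflated = severe * nuisance
conflated /= conflated.sum() * dt    # normalize the product to a valid PDF

mean = (t * conflated).sum() * dt
# Closed form for two normals: a precision-weighted average of the means.
closed = (30.0 / 10.0**2 + 12.0 / 3.0**2) / (1.0 / 10.0**2 + 1.0 / 3.0**2)
print(f"conflated mean = {mean:.2f} days (closed form: {closed:.2f})")
```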

Keywords: Community resilience, conflation, flood risk, nuisance flooding.

3 An Overview of Some High Order and Multi-Level Finite Difference Schemes in Computational Aeroacoustics

Authors: Appanah Rao Appadu, Muhammad Zaid Dauhoo

Abstract:

In this paper, we have combined several spatial derivatives with the optimised time derivative proposed by Tam and Webb in order to approximate the linear advection equation, ∂u/∂t + ∂f/∂x = 0. These spatial derivatives are: a standard 7-point 6th-order central difference scheme (ST7), a standard 9-point 8th-order central difference scheme (ST9), and optimised schemes designed by Tam and Webb, Lockard et al., Zingg et al., Zhuang and Chen, and Bogey and Bailly. These seven spatial derivatives have thus been coupled with the optimised time derivative to obtain seven different finite-difference schemes to approximate the linear advection equation. We have analysed the variation of the modified wavenumber and the group velocity, each with respect to the exact wavenumber, for each spatial derivative. The problems considered are the 1-D propagation of a boxcar function, the propagation of an initial disturbance consisting of a sine and Gaussian function, and the propagation of a Gaussian profile. It is known that the choice of the cfl number affects the quality of results in terms of dissipation and dispersion characteristics. Based on the numerical experiments solved and the numerical methods used to approximate the linear advection equation, it is observed in this work that the quality of results depends on the choice of the cfl number, even for optimised numerical methods. The errors from the numerical results have been quantified into dispersion and dissipation using a technique devised by Takacs. Also, the quantity Exponential Error for Low Dispersion and Low Dissipation (EELDLD) has been computed from the numerical results. Moreover, based on this work, it has been found that the EELDLD can be used as a measure of the total error; in particular, the total error is a minimum when the EELDLD is a minimum.
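
The modified-wavenumber analysis used above can be sketched for the ST7 scheme: a central difference with antisymmetric coefficients a_j resolves kh as 2 * sum_j a_j sin(j*kh) instead of kh itself; optimised schemes change only the coefficients.

```python
# Modified wavenumber of the standard 7-point 6th-order central difference (ST7).
import numpy as np

a = np.array([3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0])  # ST7 coefficients, j = 1, 2, 3

kh = np.linspace(0.01, np.pi, 8)
kh_modified = 2.0 * sum(aj * np.sin((j + 1) * kh) for j, aj in enumerate(a))

for x, xm in zip(kh, kh_modified):
    print(f"kh = {x:.3f}  modified kh = {xm:.3f}  error = {abs(x - xm):.2e}")
```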

Keywords: Optimised time derivative, dissipation, dispersion, cfl number. Nomenclature: k: time step; h: spatial step; β: advection velocity; r = kβ/h: cfl/Courant number; θ = wh; w: exact wavenumber; n: time level; RPE: relative phase error per unit time step; AFM: modulus of amplification factor.

2 Application of Unstructured Mesh Modeling in Evolving SGE of an Airport at the Confluence of Multiple Rivers in a Macro Tidal Region

Authors: A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Like other developing countries such as China, Malaysia, and Korea, India is developing its infrastructure in the form of roads, railways, airports, and waterborne facilities at an exponential rate. Mumbai, the financial epicenter of India, is overcrowded, and to relieve the pressure of congestion, the Navi Mumbai suburb is being developed on the east bank of Thane creek near Mumbai. Due to limited space at the existing Mumbai airports (domestic and international) to cater for future air traffic demand, the government proposes to build a new international airport near Panvel in Navi Mumbai. Considering the precedent of the extreme rainfall of 26th July 2005 and the nearby townships in the low-lying area where the new airport is proposed, it is essential to study this complex confluence area hydrodynamically, under both tidal and extreme events (predicted discharge hydrographs), to avoid inundation of the surroundings due to the proposed airport reclamation (1160 hectares) and to determine the safe grade elevation (SGE). Model studies were conducted using an unstructured mesh to simulate the Panvel estuarine area (93 km2), with calibration and validation against hydraulic field measurements, to determine the maximum water levels around the airport for various extreme hydrodynamic events, namely the simultaneous occurrence of the highest tide from the Arabian Sea and peak flood discharges (Probable Maximum Precipitation and 26th July 2005) from the five rivers, the Gadhi, Kalundri, Taloja, Kasadi, and Ulwe, meeting at the proposed airport area. The studies revealed that: (a) the Ulwe River flowing beneath the proposed airport needs to be diverted, and the proposed 120 m wide Ulwe diversion channel, with a wider base width of 200 m at the SH-54 bridge on the Ulwe River, along with the removal of the existing bund in Moha Creek, is inevitable to keep the SGE of the airport to a minimum; (b) a clear waterway of 80 m at the SH-54 bridge (Ulwe River) and 120 m at the Amra Marg bridge near Moha Creek is also essential for the Ulwe diversion; and (c) river bank protection works on the right bank of the Gadhi River between the NH-4B and SH-54 bridges, as well as upstream of the Ulwe River diversion channel, are essential to avoid inundation of low-lying areas. The predicted maximum water levels around the airport keep the SGE to a minimum of 11 m with respect to the chart datum of Ulwe Bundar, so the development is not only technologically and economically feasible but also sustainable. Unstructured mesh modeling is a promising tool for simulating complex extreme hydrodynamic events and provides a reliable solution for evolving the optimal SGE of an airport.

Keywords: Airport, hydrodynamics, hydrographs, safe grade elevation, tides.

1 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened by the increase of effective stress/load from completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, a reliable study tool is needed to understand the mechanism of sanding. One method of studying sand production is the widely recognized Discrete Element Method (DEM) code Particle Flow Code (PFC3D), which represents sand as individual granular elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow to understand the sand production behavior of a weak sandstone. An isotropic triaxial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond models of PFC3D, using a solid cylindrical model of 10 m height and 10 m diameter. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the triaxial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network comprised every four particles connected with tetrahedral flow pipes around a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validating the flow rate-permeability relationship to verify Darcy's law. The effect of model size scaling on sanding was also investigated using a model of 4 m height and 2 m diameter. The parallel bond model successfully calibrated the sample's strength of 4.4 MPa, showing a sharp peak strength before strain-softening, similar to the behavior of real cemented sandstones. The relationship appears to increase exponentially for the larger model, but follows a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, the flow rate has a linear relationship with permeability at constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds and the greater the sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
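
The Darcy's-law check mentioned above is direct: at constant pressure head, the flow rate is linear in permeability, Q = k*A*dP / (mu*L). The values below are illustrative placeholders, not the DEM model's calibrated properties, although the sample dimensions echo the 4 m by 2 m scaled model.

```python
# Darcy's law: flow rate vs. permeability at a constant pressure difference.
import numpy as np

mu = 1.0e-3             # Pa*s, fluid viscosity (water-like)
A = np.pi * (1.0) ** 2  # m^2, cross-section of a 2 m diameter sample
L = 4.0                 # m, sample height
dP = 2.0e5              # Pa, constant pressure difference (assumed)

for k in [1e-13, 2e-13, 4e-13, 8e-13]:  # permeability, m^2
    Q = k * A * dP / (mu * L)
    print(f"k = {k:.0e} m^2 -> Q = {Q:.3e} m^3/s")  # Q scales linearly with k
```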

Keywords: Discrete Element Method, fluid flow, parametric study, sand production/bonds failure.
