Search results for: mathematical proportions
566 Achieving the Elevated Nitritation for Autotrophic/Heterotrophic Denitritation in CSTR by Treating Livestock Wastewater
Authors: Hammad Khan, Wookeun Bae
Abstract:
The objective of this study was to achieve, optimize and control highly loaded and efficient nitrite production suitable for autotrophic and heterotrophic denitritation. A lab-scale CSTR for partial and full nitritation was operated to treat livestock manure digester liquor having an ammonium concentration of ~2000 mg-NH4+-N/L and biodegradable contents of ~0.8 g-COD/L. The experiments were performed at 30°C, pH 8.0, DO 1.5 mg/L and SRT ranging from 7-20 days. After 125 days of operation, >95% nitrite buildup was observed at an ammonium loading rate of ~3.2 kg-NH4+-N/m3-day, with almost complete ammonium conversion. On increasing the loading rate further (i.e., from 3.2 to 6.2 kg-NH4+-N/m3-day), the stability of the system remained unaffected. On decreasing the pH from 8 to 7.5 and further to 7.2, the removal rate could be easily controlled at 95%, 75% and even 50%. Results demonstrated that nitritation stability and the desired removal rates are controlled by a balance of simultaneous inhibition by FA and FNA, the pH effect and DO limitation. These parameters proved effective even for producing an appropriate influent for anammox. In addition, a mathematical model, identified through the occurring biological reactions, is proposed to optimize the full and partial nitritation process. The proposed model presents the relationship between pH, ammonium and produced nitrite for full and partial nitritation under varying concentrations of DO and simultaneous inhibition by FA and FNA.
Keywords: stable nitritation, high loading, autotrophic denitritation, heterotrophic denitritation
Procedia PDF Downloads 274
565 Decision-Making using Fuzzy Linguistic Hypersoft Set Topology
Authors: Muhammad Saqlain, Poom Kumam
Abstract:
Language, being an abstract system and a creative act, is quite complicated, as its meaning varies depending on the context. The context is determined by the empirical knowledge of a person, which is derived from observation and experience. When attributes are further subdivided, decision-making challenges may entail both quantitative and qualitative factors. However, because there is no norm for putting a numerical value on language, existing approaches cannot carry out the operations of linguistic knowledge. Assigning mathematical values (fuzzy, intuitionistic, and neutrosophic) to any decision-making problem without considering any rule of linguistic knowledge is ambiguous and inaccurate. Thus, this paper aims to provide a generic model for these issues. This paper provides the linguistic set structure of the fuzzy linguistic hypersoft set (FLHSS) to solve decision-making issues. We have proposed definitions of basic operations such as AND, OR, NOT, complement and negation, along with a topology, examples, and properties. Secondly, operational laws for the fuzzy linguistic hypersoft set have been proposed to deal with decision-making issues. Implementing the proposed aggregate operators and operational laws makes it possible to convert linguistic quantifiers into numerical values. This will increase the accuracy and precision of the fuzzy hypersoft set structure in dealing with decision-making issues.
Keywords: linguistic quantifiers, aggregate operators, multi-criteria decision making (MCDM), fuzzy topology
Procedia PDF Downloads 97
564 Non-Parametric Changepoint Approximation for Road Devices
Authors: Loïc Warscotte, Jehan Boreux
Abstract:
The scientific literature on changepoint detection is vast. Today, many methods are available to detect abrupt changes or slight drift in a signal, based on CUSUM or EWMA charts, for example. However, these methods rely on strong assumptions, such as the stationarity of the underlying stochastic process, or even independent, Gaussian-distributed noise at each time step. Recently, breakthrough research on locally stationary processes has widened the class of studied stochastic processes, with almost no assumptions on the signals and the nature of the changepoint. Despite the accurate description of the mathematical aspects, this methodology quickly suffers from impractical time and space complexity for signals with high-rate data collection, if the characteristics of the process are completely unknown. In this paper, we therefore address the problem of making this theory usable for our purpose, which is monitoring a high-speed weigh-in-motion system (HS-WIM) towards direct enforcement without supervision. To this end, we first compute bounded approximations of the initial detection theory. Secondly, these approximating bounds are empirically validated by generating many independent long-run stochastic processes. Abrupt changes and drift are both tested. Finally, this relaxed methodology is tested on real signals coming from an HS-WIM device in Belgium, collected over several months.
Keywords: changepoint, weigh-in-motion, process, non-parametric
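For reference, the classical one-sided CUSUM recursion mentioned above as a baseline takes the standard textbook form below (not taken from this paper), with in-control mean μ₀, drift allowance k and decision threshold h:

```latex
S_0 = 0, \qquad
S_t = \max\bigl(0,\; S_{t-1} + x_t - \mu_0 - k\bigr), \qquad
\text{signal a change at the first } t \text{ with } S_t > h .
```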
Procedia PDF Downloads 78
563 Estimating View-Through Ad Attribution from User Surveys Using Convex Optimization
Authors: Yuhan Lin, Rohan Kekatpure, Cassidy Yeung
Abstract:
In Digital Marketing, robust quantification of View-through attribution (VTA) is necessary for evaluating channel effectiveness. VTA occurs when a product purchase is aided by an Ad but without an explicit click (e.g. a TV ad). A lack of a tracking mechanism makes VTA estimation challenging. Most prevalent VTA estimation techniques rely on post-purchase in-product user surveys. User surveys enable the calculation of channel multipliers, which are the ratio of the view-attributed to the click-attributed purchases of each marketing channel. Channel multipliers thus provide a way to estimate the unknown VTA for a channel from its known click attribution. In this work, we use Convex Optimization to compute channel multipliers in a way that enables a mathematical encoding of the expected channel behavior. Large fluctuations in channel attributions often result from overfitting the calculations to user surveys. Casting channel attribution as a Convex Optimization problem allows an introduction of constraints that limit such fluctuations. The result of our study is a distribution of channel multipliers across the entire marketing funnel, with important implications for marketing spend optimization. Our technique can be broadly applied to estimate Ad effectiveness in a privacy-centric world that increasingly limits user tracking.
Keywords: digital marketing, survey analysis, operational research, convex optimization, channel attribution
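A minimal sketch of how such a constrained multiplier fit can be posed with an off-the-shelf convex solver is given below; the number of channels, survey estimates, click counts, bound and smoothing weight are illustrative assumptions, not the paper's data or exact formulation.

```python
import cvxpy as cp
import numpy as np

# Hypothetical example: 4 channels with survey-derived multiplier estimates
# and click-attributed purchases (all numbers are assumptions).
survey_multipliers = np.array([2.1, 3.5, 1.2, 4.8])
clicks = np.array([1000.0, 400.0, 2500.0, 150.0])

m = cp.Variable(4, nonneg=True)   # channel multipliers to be fitted
# Stay close to the survey estimates while penalizing large channel-to-channel
# jumps, one simple way to limit fluctuations caused by overfitting to surveys.
objective = cp.Minimize(cp.sum_squares(m - survey_multipliers)
                        + 0.5 * cp.sum_squares(cp.diff(m)))
constraints = [m <= 6.0]          # assumed upper bound on any multiplier
cp.Problem(objective, constraints).solve()

vta = m.value * clicks            # view-through attribution per channel
print(dict(multipliers=m.value.round(2), vta=vta.round(0)))
```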
Procedia PDF Downloads 199
562 Visualizing the Commercial Activity of a City by Analyzing the Data Information in Layers
Authors: Taras Agryzkov, Jose L. Oliver, Leandro Tortosa, Jose Vicent
Abstract:
This paper aims to demonstrate how network models can be used to understand and deal with some aspects of urban complexity. As is well known, the Theory of Architecture and Urbanism has for decades been using intellectual tools based on the 'sciences of complexity' as a strategy to propose theoretical approaches about cities and about architecture. In this sense, it is possible to find a vast literature in which, for instance, network theory is used as an instrument to understand very diverse questions about cities: from their commercial activity to their heritage condition. The contribution of this research consists in adding one step of complexity to this process: instead of working with one single primal graph as is usually done, we show how new network models arise from the consideration of two different primal graphs interacting in two layers. When we model an urban network through a mathematical structure like a graph, the city is usually represented by a set of nodes and edges that reproduce its topology, with the data generated or extracted from the city embedded in it. All this information is normally displayed in a single layer. Here, we propose to separate the information into two layers so that we can evaluate the interaction between them. Besides, both layers may be composed of structures that do not have to coincide: from this bi-layer system, groups of interactions emerge, suggesting reflections and, in consequence, possible actions.
Keywords: graphs, mathematics, networks, urban studies
Procedia PDF Downloads 180
561 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images
Authors: Shahriar Farzam, Maryam Rastgarpour
Abstract:
Image denoising plays an extremely important role in digital image processing. Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a method for image contrast enhancement for cone beam CT (CBCT) images based on fast discrete curvelet transforms (FDCT) that work through the Unequally Spaced Fast Fourier Transform (USFFT). These transforms return a table of curvelet transform coefficients indexed by a scale parameter, an orientation and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies a two-dimensional mathematical transform, namely the FDCT through the unequally spaced fast Fourier transform, to the input image and then applies thresholding on the curvelet coefficients to enhance the CBCT images. Consequently, applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to the existing ones in terms of Peak Signal to Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
Keywords: curvelet transform, CBCT, image enhancement, image denoising
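A minimal sketch of the coefficient-thresholding step described above is shown below; it assumes a generic list of transform-coefficient arrays and a MAD-based noise estimate, and the forward/inverse FDCT-USFFT as well as the paper's exact enhancement rule are not reproduced.

```python
import numpy as np

def hard_threshold(coeff_arrays, k=3.0):
    """Zero out transform coefficients below k times a robust noise estimate.

    coeff_arrays: list of NumPy arrays of curvelet-domain coefficients (one per
    scale/orientation); the MAD-based sigma and the factor k are assumptions.
    """
    out = []
    for c in coeff_arrays:
        sigma = np.median(np.abs(c)) / 0.6745          # robust (MAD) noise estimate
        out.append(np.where(np.abs(c) >= k * sigma, c, 0.0))
    return out
```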
Procedia PDF Downloads 300
560 Heuristic Methods for the Capacitated Location-Allocation Problem with Stochastic Demand
Authors: Salinee Thumronglaohapun
Abstract:
The proper number and appropriate locations of service centers can save cost, raise revenue and gain more satisfaction from customers. Service centers are costly to establish and difficult to relocate. In long-term planning periods, several factors may affect the service. One of the most critical factors is the uncertain demand of customers. The opened service centers need to be capable of serving customers and making a profit even though the demand changes in each period. In this work, the capacitated location-allocation problem with stochastic demand is considered. A mathematical model is formulated to determine suitable locations of service centers and their allocation to maximize total profit for multiple planning periods. Two heuristic methods, a local search and a genetic algorithm, are used to solve this problem. For the local search, five different probabilities of choosing each type of move are applied. For the genetic algorithm, three different replacement strategies are considered. The results of applying each method to solve numerical examples are compared. Both methods reach the same best-found solution in most examples, but the genetic algorithm provides better solutions in some cases.
Keywords: location-allocation problem, stochastic demand, local search, genetic algorithm
Procedia PDF Downloads 124
559 Mathematical Model for Flow and Sediment Yield Estimation on Tel River Basin, India
Authors: Santosh Kumar Biswal, Ramakar Jha
Abstract:
Soil erosion is a slow and continuous process and one of the prominent issues across the world, leading to many serious problems such as loss of soil fertility, loss of soil structure, poor internal drainage and sedimentation deposits. In this paper, remote sensing and GIS-based methods have been applied for the determination of soil erosion and sediment yield. The Tel River basin, which is the second largest tributary of the river Mahanadi, lying between latitudes 19° 15' 32.4"N and 20° 45' 0"N and longitudes 82° 3' 36"E and 84° 18' 18"E, was chosen for the present study. The catchment was discretized into approximately homogeneous sub-areas (grid cells) to overcome the catchment heterogeneity. The gross soil erosion in each cell was computed using the Universal Soil Loss Equation (USLE). The various parameters of the USLE were determined as functions of land topography, soil texture, land use/land cover, rainfall erosivity and crop management and practice in the watershed. The concept of transport-limited accumulation was formulated and the transport capacity maps were generated. The gross soil erosion was routed to the catchment outlet. This study can help in recognizing critical erosion-prone areas of the study basin so that suitable control measures can be implemented.
Keywords: Universal Soil Loss Equation (USLE), GIS, land use, sediment yield
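For reference, the Universal Soil Loss Equation used for the per-cell gross erosion takes the standard form below (factor symbols follow the usual USLE notation; the paper's calibrated factor values are not reproduced here):

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
```

where A is the average annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the slope length-steepness factor, C the cover-management factor and P the support practice factor.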
Procedia PDF Downloads 308
558 Effect of Pre-Aging and Aging Parameters on Mechanical Behavior of Be-Treated 7075 Aluminum Alloys: Experimental Correlation using Minitab Software
Authors: M. Tash, S. Alkahtani
Abstract:
The present study was undertaken to investigate the effect of pre-aging and aging parameters (time and temperature) on the mechanical properties of Al-Mg-Zn (7075) alloys. Ultimate tensile strength, 0.5% offset yield strength and % elongation measurements were carried out on specimens prepared from cast and heat-treated 7075 alloys. Duplex aging treatments were carried out on the as-solution-treated (SHT) specimens (pre-aged at different times and temperatures followed by high-temperature aging). A statistical design of experiments (DOE) approach using a fractional factorial design was applied to determine the influence of the controlling variables of the pre-aging and aging treatment parameters, and any interactions between them, on the mechanical properties of 7075 alloys. Mathematical models are developed to relate the alloy ultimate tensile strength, yield strength and % elongation to the different pre-aging and aging parameters, i.e. pre-aging temperature (PAT, °C), pre-aging time (PAt, h), aging temperature (AT, °C) and aging time (At, h), in order to acquire an understanding of the effects of these variables and their interactions on the mechanical properties of Be-treated 7075 alloys.
Keywords: aging heat treatment, tensile properties, Be-treated cast Al-Mg-Zn (7075) alloys, experimental correlation
Procedia PDF Downloads 275
557 Finite Elemental Simulation of the Combined Process of Asymmetric Rolling and Plastic Bending
Authors: A. Pesin, D. Pustovoytov, M. Sverdlik
Abstract:
Traditionally, the need for items representing large bodies of rotation (e.g., shrouds of various process units: a converter, a mixer, a scrubber, a steel ladle, etc.) is satisfied by manufacturing them at engineering enterprises. At these enterprises, large parts of bodies of rotation are made on stamping units or bending and forming machines. At Nosov Magnitogorsk State Technical University, in alliance with JSC "Magnitogorsk Metal and Steel Works", a technology for producing such items based on a combination of asymmetric rolling and plastic bending under the conditions of a plate mill was suggested and implemented. In this paper, based on finite element mathematical simulation, the technology of the combined process of asymmetric rolling and plastic bending has been improved. It is shown that, to obtain the same curvature along the entire length of the metal sheet, it is necessary to introduce additional speed asymmetry when rolling the front and trailing ends of the sheet. Production of large bodies of rotation at the 4500 mill of JSC "Magnitogorsk Metal and Steel Works" showed good convergence between the theoretical and experimental values of the curvature of the metal. The economic effect obtained exceeded 1.0 million dollars.
Keywords: asymmetric rolling, plastic bending, combined process, FEM
Procedia PDF Downloads 320
556 Dynamical Relation of Poisson Spike Trains in Hodgkin-Huxley Neural Ion Current Model and Formation of Non-Canonical Bases, Islands, and Analog Bases in DNA, mRNA, and RNA at or near the Transcription
Authors: Michael Fundator
Abstract:
A groundbreaking application of biomathematical and biochemical research on neural network processes to the formation of non-canonical bases, islands, and analog bases in DNA and mRNA at or near the transcription, which contradicts the long-anticipated statistical assumptions for the distribution of bases and analog base compounds, is implemented through the apparatus of statistical and stochastic methods with the addition of quantum principles, where the usual transience of the Poisson spike train becomes a very instrumental tool for finding even almost periodical types of solutions to the Fokker-Planck stochastic differential equation. The present article develops new multidimensional methods of finding solutions to stochastic differential equations based on a more rigorous approach to the mathematical apparatus through the Kolmogorov-Chentsov continuity theorem, which allows stochastic processes with jumps, under certain conditions, to have a γ-Hölder continuous modification that is used as a basis for finding analogous parallels in the dynamics of neural networks and the formation of analog bases and transcription in DNA.
Keywords: Fokker-Planck stochastic differential equation, Kolmogorov-Chentsov continuity theorem, neural networks, translation and transcription
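For reference, the Kolmogorov-Chentsov continuity criterion invoked above states, in its standard form (the jump-process extension used by the author is not reproduced here), that if a process X satisfies

```latex
\mathbb{E}\bigl[\,|X_t - X_s|^{\alpha}\,\bigr] \;\le\; C\,|t-s|^{1+\beta}
\qquad \text{for some constants } \alpha, \beta, C > 0,
```

then X admits a modification whose sample paths are locally γ-Hölder continuous for every γ in (0, β/α).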
Procedia PDF Downloads 406
555 Investigation of Corrosion of Steel Buried in Unsaturated Soil in the Presence of Cathodic Protection: The Modified Voltammetry Technique
Authors: Mandlenkosi G. R. Mahlobo, Peter A. Olubambi, Philippe Refait
Abstract:
The aim of this study was to use voltammetry as a method to understand the behaviour of steel in unsaturated soil in the presence of cathodic protection (CP). Three carbon steel coupons were buried in artificial soil wetted to 65-70% of saturation for 37 days. All three coupons were left at open circuit potential (OCP) for the first seven days in the unsaturated soil before CP, which was then applied to only two of the three coupons at the protection potential -0.8 V vs Cu/CuSO₄ for the remaining 30 days of the experiment. Voltammetry was performed weekly on the coupon without CP, while electrochemical impedance spectroscopy (EIS) was performed daily to monitor and correct the applied CP potential for the ohmic drop. Voltammetry was finally performed on the last day on the coupons under CP. All the voltammograms were modeled with mathematical equations in order to compute the electrochemical parameters and subsequently deduce the corrosion rate of the steel coupons. For the coupon without CP, the corrosion rate was determined to be 300 µm/y. For the coupons under CP, the residual corrosion rate under CP was estimated at 12 µm/y, while the corrosion rate of the coupons after interruption of CP was estimated at 25 µm/y. This showed that CP was efficient due to two effects: a direct effect from the decreased potential and an induced effect associated with the increased interfacial pH that promoted the formation of a protective layer on the steel surface.
Keywords: carbon steel, cathodic protection, voltammetry, unsaturated soil, Raman spectroscopy
Procedia PDF Downloads 62
554 Effect of Atmospheric Turbulence on Hybrid FSO/RF Link Availability under Qatar's Harsh Climate
Authors: Abir Touati, Syed Jawad Hussain, Farid Touati, Ammar Bouallegue
Abstract:
Although there has been growing interest in hybrid free-space optical and radio frequency (FSO/RF) communication systems, the current literature is limited to results obtained in moderate or cold environments. In this paper, using a soft switching approach, we investigate the effect of weather inhomogeneities on the strength of turbulence, and hence on the channel refractive index, under Qatar's harsh environment, and their influence on hybrid FSO/RF availability. In this approach, either the FSO link, the RF link, both simultaneously, or neither may be active. Based on the soft switching approach and a finite state Markov chain (FSMC) process, we model the channel fading for the two links and derive a mathematical expression for the outage probability of the hybrid system. Then, we evaluate the behavior of the hybrid FSO/RF under hazy and harsh weather. Results show that FSO/RF soft switching renders the system outage probability lower than that of each link individually. A soft switching algorithm is being implemented on FPGAs using a Raptor code interfaced to the two terminals of a 1 Gbps/100 Mbps FSO/RF hybrid system, the first to be implemented in the region. Experimental results are compared to the above simulation results.
Keywords: atmospheric turbulence, haze, hybrid FSO/RF, outage probability, refractive index
Procedia PDF Downloads 419
553 Prediction of Concrete Hydration Behavior and Cracking Tendency Based on Electrical Resistivity Measurement, Cracking Test and ANSYS Simulation
Authors: Samaila Muazu Bawa
Abstract:
The hydration process, crack potential and setting time of concrete grades C30, C40 and C50 were separately monitored using a non-contact electrical resistivity apparatus, a plastic ring mould and the penetration resistance method, respectively. The results show the highest resistivity for C30 at the beginning, until the acceleration point is reached, when C50 accelerates and overtakes the others; this period corresponds to its final setting time range. From the resistivity derivative curve, the hydration process can be divided into dissolution, induction, acceleration and deceleration periods. The restrained shrinkage crack and setting time tests demonstrated the earliest cracking and setting time for C50; therefore, this method conveniently and rapidly determines the concrete's crack potential. The inflection time (ti) and the final setting time (tf) were obtained and used, together with the cracking time, to develop mathematical models for the prediction of the concrete's cracking age for the range considered. Finally, ANSYS numerical simulations support the experimental findings in terms of the earliest cracking age of C50 and the crack location, with the highest stress concentration always beneath the artificially introduced expansion joint of C50.
Keywords: concrete hydration, electrical resistivity, restrained shrinkage crack, ANSYS simulation
Procedia PDF Downloads 240
552 Predictive Modelling Approaches in Food Processing and Safety
Authors: Amandeep Sharma, Digvaijay Verma, Ruplal Choudhary
Abstract:
Food processing is an activity carried out across the globe that helps in the better handling of agricultural produce, including dairy, meat, and fish. The operations carried out in the food industry include raw material quality and authenticity checks; sorting and grading; processing into various products using thermal treatments such as heating, freezing, and chilling; packaging; and storage at the appropriate temperature to maximize the shelf life of the products. All this is done to safeguard the food products and to ensure their distribution up to the consumer. Approaches to developing predictive models based on mathematical or statistical tools, or on empirical model development, have been reported for various milk processing activities, including plant maintenance and wastage. Recently, AI has become the key factor of the fourth industrial revolution. AI plays a vital role in the food industry, not only in quality and food security but also in other areas such as manufacturing, packaging, and cleaning. A new conceptual model was developed which shows that a smaller sample size would be required, as only spectra would be needed to predict the other values; this leads to savings on raw materials and chemicals otherwise used for experimentation during research and new product development. It would be a futuristic approach if these tools could be further combined with mobile phones, through software development, for real-time application in the field for quality checks and traceability of the product.
Keywords: predictive modelling, ANN, AI, food
Procedia PDF Downloads 82
551 Hohmann Transfer and Bi-Elliptic Hohmann Transfer in TRAPPIST-1 System
Authors: Jorge L. Nisperuza, Wilson Sandoval, Edward. A. Gil, Johan A. Jimenez
Abstract:
In orbital mechanics, an active research topic is the calculation of interplanetary trajectories that are efficient in terms of energy and time. In this sense, this work concerns the calculation of the orbital elements for sending interplanetary probes within the extrasolar system TRAPPIST-1. Specifically, using the mathematical expressions for the circular and elliptical trajectory parameters, together with expressions for the flight time and the velocity increments between orbits, the orbital parameters and the plots of the Hohmann and bi-elliptic Hohmann transfer trajectories are obtained for sending a probe from the innermost planet to each of the other planets of the studied system. The relationship between the velocity increments and the relationship between the flight times for the two transfer types are found. The results show that, for all cases under consideration, the Hohmann transfer has the lowest energy and time cost, a result in agreement with the theory associated with Hohmann and bi-elliptic Hohmann transfers. Savings in the velocity increment of up to 87% were found, occurring for the transfer between the two innermost planets, whereas the time of flight increases by a factor of up to 6.6 if the bi-elliptic transfer is used, in the case of sending a probe from the innermost planet to the outermost one.
Keywords: bi-elliptic Hohmann transfer, exoplanet, extrasolar system, Hohmann transfer, TRAPPIST-1
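For reference, the standard two-impulse Hohmann transfer between coplanar circular orbits of radii r1 < r2 around a body of gravitational parameter μ gives the textbook velocity increments and transfer time below (generic expressions, not specific values from the paper):

```latex
\Delta v_1 = \sqrt{\frac{\mu}{r_1}}\left(\sqrt{\frac{2 r_2}{r_1 + r_2}} - 1\right),\qquad
\Delta v_2 = \sqrt{\frac{\mu}{r_2}}\left(1 - \sqrt{\frac{2 r_1}{r_1 + r_2}}\right),\qquad
t_H = \pi \sqrt{\frac{(r_1 + r_2)^3}{8\mu}} .
```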
Procedia PDF Downloads 192
550 Chairussyuhur Arman, Totti Tjiptosumirat, Muhammad Gunawan, Mastur, Joko Priyono, Baiq Tri Ratna Erawati
Authors: Maria M. Giannakou, Athanasios K. Ziliaskopoulos
Abstract:
Transmission pipelines carrying natural gas are often routed through populated cities, industrial and environmentally sensitive areas. While the need for these networks is unquestionable, there are serious concerns about the risk these lifeline networks pose to the people, their habitat and the critical infrastructures, especially in view of natural disasters such as earthquakes. This work presents an Integrated Pipeline Risk Management (IPRM) methodology for assessing the hazard associated with a natural gas pipeline failure due to natural or manmade disasters. IPRM aims to optimize the allocation of the available resources to countermeasures in order to minimize the impacts of pipeline failure on humans, the environment, the infrastructure and economic activity. A knapsack mathematical programming formulation is introduced that optimally selects the proper mitigation policies based on the estimated cost-benefit ratios. The proposed model is demonstrated with a small numerical example. The vulnerability analysis of these pipelines and the quantification of consequences from such failures can be useful for natural gas industries in deciding which mitigation measures to implement on existing pipeline networks at minimum cost and an acceptable level of hazard.
Keywords: cost benefit analysis, knapsack problem, natural gas distribution network, risk management, risk mitigation
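A generic 0/1 knapsack formulation of the kind described above, written here with illustrative notation rather than the paper's exact model, selects mitigation measures i with estimated benefit b_i and cost c_i under a budget B:

```latex
\max_{x}\; \sum_{i=1}^{n} b_i\, x_i
\quad \text{s.t.} \quad \sum_{i=1}^{n} c_i\, x_i \le B,
\qquad x_i \in \{0, 1\},\; i = 1,\dots,n .
```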
Procedia PDF Downloads 295
549 Lab Bench for Synthetic Aperture Radar Imaging System
Authors: Karthiyayini Nagarajan, P. V. Ramakrishna
Abstract:
Radar imaging techniques provide extensive applications in the field of remote sensing, notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward an effective and realizable signal generation and processing scheme for SAR images. The major units in the system include a camera, a signal generation unit, a signal processing unit and a display screen. The real radio channel is replaced by its mathematical model, based on an optical image, to calculate a reflected signal model in real time. Signal generation realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.
Keywords: synthetic aperture radar, radio reflection model, lab bench, imaging engineering
Procedia PDF Downloads 497
548 Stability Analysis of Tumor-Immune Fractional Order Model
Authors: Sadia Arshad, Yifa Tang, Dumitru Baleanu
Abstract:
A fractional order mathematical model is proposed that incorporates CD8+ cells, natural killer cells, cytokines and tumor cells. The tumor cell growth in the absence of an immune response is modeled by a logistic law, as it is the simplest form whose predictions also agree with the experimental data. Natural killer (NK) cells are our first line of defense. NK cells directly kill tumor cells through several mechanisms, including the release of cytoplasmic granules containing perforin and granzyme and the expression of tumor necrosis factor (TNF) family members. The effect of the NK cells on the tumor cell population is expressed with a product term. A rational form is used to describe the interaction between CD8+ cells and tumor cells. A number of cytokines are produced by NK cells, including tumor necrosis factor (TNF), IFN, and interleukin (IL-10). The source term for cytokines is modeled by a Michaelis-Menten form to capture the saturated effects of the immune response. The stability of the equilibrium points is discussed for biologically significant values of the bifurcation parameters. We studied the treatment of the fractional order system by investigating analytical conditions for tumor eradication. Numerical simulations are presented to illustrate the analytical results.
Keywords: cancer model, fractional calculus, numerical simulations, stability analysis
Procedia PDF Downloads 315
547 Design and Implementation of a Lab Bench for Synthetic Aperture Radar Imaging System
Authors: Karthiyayini Nagarajan, P. V. RamaKrishna
Abstract:
Radar imaging techniques provide extensive applications in the field of remote sensing, notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward an effective and realizable signal generation and processing scheme for SAR images. The major units in the system include a camera, a signal generation unit, a signal processing unit and a display screen. The real radio channel is replaced by its mathematical model, based on an optical image, to calculate a reflected signal model in real time. Signal generation realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.
Keywords: synthetic aperture radar, radio reflection model, lab bench
Procedia PDF Downloads 468
546 Aircraft Automatic Collision Avoidance Using Spiral Geometric Approach
Authors: M. Orefice, V. Di Vito
Abstract:
This paper provides a description of a collision avoidance algorithm that has been developed starting from the mathematical modeling of the flight of insects, in terms of spiral and conchospiral geometric paths. It is able to calculate a proper avoidance manoeuvre aimed at preventing the infringement of a predefined distance threshold between the ownship and the considered intruder, while minimizing the ownship trajectory deviation from the original path and complying with the aircraft performance limitations and dynamic constraints. The algorithm is designed to be suitable for real-time applications, so that it can be considered for implementation in the most recent airborne automatic collision avoidance systems using the traffic data received through an ADS-B IN device. The presented approach is able to take into account the rules of the air, since specifically designed decision-making logic, based on the encounter geometry, selects the direction of the calculated collision avoidance manoeuvre so as to comply with the rules of the air, for instance the fundamental right-of-way rule. In the paper, the proposed collision avoidance algorithm is presented and its preliminary design and software implementation are described. The applicability of this method has been proved through preliminary simulation tests performed in a 2D environment considering single-intruder encounter geometries, as reported and discussed in the paper.
Keywords: ADS-B based application, collision avoidance, RPAS, spiral geometry
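For illustration, a classical planar member of the spiral family referenced above is the logarithmic spiral, written in polar coordinates as

```latex
r(\theta) = a\, e^{b\theta}, \qquad a, b > 0,
```

with the conchospiral being a three-dimensional extension that wraps such a spiral onto a cone; this generic form is given only for orientation and is not the paper's specific avoidance path.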
Procedia PDF Downloads 241
545 Effect of pH-Dependent Surface Charge on the Electroosmotic Flow through Nanochannel
Authors: Partha P. Gopmandal, Somnath Bhattacharyya, Naren Bag
Abstract:
In this article, we have studied the effect of pH-regulated surface charge on the electroosmotic flow (EOF) through a nanochannel filled with a binary symmetric electrolyte solution. The channel wall possesses either an acidic or a basic functional group. Going beyond the widely employed Debye-Huckel linearization, we develop a mathematical model based on the Nernst-Planck equations for the charged species, the Poisson equation for the induced potential, and the Stokes equations for fluid flow. A finite-volume-based numerical algorithm is adopted to study the effect of key parameters on the EOF. We have computed the coupled governing equations through the finite volume method, and our results are found to be in good agreement with the analytical solution obtained from the corresponding linear model based on the low-surface-charge condition or a strong electrolyte solution. The influence of the surface charge density, the reaction constant of the functional groups, the bulk pH, and the concentration of the electrolyte solution on the overall flow rate is studied extensively. We find that the effect of surface charge diminishes with the increase in electrolyte concentration. In addition, for a strong electrolyte, the surface charge becomes independent of pH due to the complete dissociation of the functional groups.
Keywords: electroosmosis, finite volume method, functional group, surface charge
Procedia PDF Downloads 419
544 Analysis of Translational Ship Oscillations in a Realistic Environment
Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting
Abstract:
To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized and applied on board to measure the ship's oscillating motions. As observations, the three-axis accelerations and three-axis rotational rates provided by the sensor are used. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method eliminates sensor noise and other unwanted errors. The results are not only roll and pitch, but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated. These agree well with the measurements after processing with the suggested method.
Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation
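A minimal, generic sketch of one predict/update cycle of the extended Kalman filter referenced above is given below; the state, the process and measurement functions and the noise covariances are left abstract, since the paper's ship-specific model is not reproduced here.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One generic EKF cycle: x, P = state mean/covariance; z = measurement.

    f, h are the (nonlinear) process and measurement functions; F_jac, H_jac
    return their Jacobians; Q, R are process and measurement noise covariances.
    """
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```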
Procedia PDF Downloads 522
543 Scheduling of Cross-Docking Center: An Auction-Based Algorithm
Authors: Eldho Paul, Brijesh Paul
Abstract:
This work proposes an auction-mechanism-based solution methodology for the optimum scheduling of trucks in a cross-docking centre. The cross-docking centre is an important element of a lean supply chain. It reduces the amount of storage and transportation costs in the distribution system compared to an ordinary warehouse. Better scheduling of trucks in a cross-docking centre is the best way to reduce storage and transportation costs. Auction mechanisms are commonly used for the allocation of limited resources in different real-life applications. Here, we try to schedule inbound trucks by integrating an auction mechanism with the functioning of a cross-docking centre. A mathematical model is developed for the optimal scheduling of inbound trucks based on the auction methodology. The determination of exact solutions for problems involving a large number of trucks was found to be computationally difficult, and hence a genetic-algorithm-based heuristic methodology is proposed in this work. A comparative study of exact and heuristic solutions is done using five classes of data sets. It is observed from the study that the auction-based mechanism is capable of providing good solutions to the scheduling problem in cross-docking centres.
Keywords: auction mechanism, cross-docking centre, genetic algorithm, scheduling of trucks
Procedia PDF Downloads 412
542 A Mathematical Based Prediction of the Forming Limit of Thin-Walled Sheet Metals
Authors: Masoud Ghermezi
Abstract:
Studying sheet metals is one of the most important research areas in the field of metal forming due to their extensive applications in the aerospace industries. A useful method for determining the forming limit of these materials, and consequently preventing the rupture of sheet metals during the forming process, is the use of the forming limit curve (FLC). In addition to specifying the forming limit, this curve also delineates a boundary for the allowed values of strain in sheet metal forming; these characteristics of the FLC, along with its accuracy of computation and wide range of applications, have made this curve the basis of research in the present paper. This study presents a new model that not only agrees with the results obtained from the above-mentioned theory but also eliminates its shortcomings. In this model, as in the M-K theory, a thin sheet with an inhomogeneity in the form of a gradual thickness reduction following a sinusoidal function is chosen and subjected to two-dimensional stress. Through analytical evaluation, a governing differential equation is ultimately obtained. The numerical solution of this equation for the range of positive strains (stretched region) yields results that agree with the results obtained from the M-K theory. Also, the solution of this equation for the range of negative strains (tension region) completes the FLC curve. The findings obtained by applying this equation to two alloys with hardening exponents of 0.4 and 0.24 indicate the validity of the presented equation.
Keywords: sheet metal, metal forming, forming limit curve (FLC), M-K theory
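For context, the Marciniak-Kuczynski (M-K) approach referenced above characterizes the inhomogeneity by an imperfection factor relating the initial thickness of the grooved region B to that of the uniform region A; a sinusoidal thickness profile of the kind described can, purely for illustration, be written as

```latex
f_0 = \frac{t_0^{B}}{t_0^{A}} < 1,
\qquad
t^{B}(x) = t^{A}\bigl[\,1 - \delta \cos\!\bigl(2\pi x / \lambda\bigr)\bigr],
```

where the imperfection amplitude δ and wavelength λ are assumed notation, not quantities taken from the paper.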
Procedia PDF Downloads 364
541 Software Tool Design for Heavy Oil Upgrading by Hydrogen Donor Addition in a Hydrodynamic Cavitation Process
Authors: Munoz A. Tatiana, Solano R. Brandon, Montes C. Juan, Cierco G. Javier
Abstract:
Hydrodynamic cavitation is a process in which the energy released by fluids during phase changes is used. From this energy, local temperatures greater than 5000 °C are obtained, at which thermal cracking of the fluid molecules takes place. The process applied to heavy oil affects variables such as viscosity, density, and composition, which constitutes an important improvement in the quality of the crude oil. In this study, a software tool was designed by integrating mathematical models of mixing, cavitation, kinetics, and the reactor, which allows modeling the changes in density, viscosity, and composition of a heavy crude oil as the fluid passes through a hydrodynamic cavitation reactor. In order to evaluate the viability of this technique in industry, a heavy oil of 18° API gravity was simulated using naphtha as a hydrogen donor at concentrations of 1, 2 and 5 vol%; the simulation results showed API gravity increases of 0.77, 1.21 and 1.93°, respectively, and viscosity reductions of 9.9, 12.9 and 15.8%. The results obtained provide a favorable outlook for this technological development, an appropriate view of the innovative knowledge generated by this technique, and the technical-economic opportunity it offers to the sector related to heavy crude oil, which includes the largest share of world oil production.
Keywords: hydrodynamic cavitation, thermal cracking, hydrogen donor, heavy oil upgrading, simulator
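For reference, API gravity is related to the specific gravity (SG at 60 °F) by the standard definition below; for the 18° API feed above this corresponds to SG ≈ 0.946, a back-of-the-envelope check rather than a value reported in the paper:

```latex
\text{API} = \frac{141.5}{\text{SG}} - 131.5
\qquad\Longrightarrow\qquad
\text{SG} = \frac{141.5}{\text{API} + 131.5} = \frac{141.5}{18 + 131.5} \approx 0.946 .
```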
Procedia PDF Downloads 150
540 Estimation of Transition and Emission Probabilities
Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi
Abstract:
Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction. It is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters. Any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. To estimate these model parameters, many algorithms are available, of which the most popular is the EM algorithm, or Expectation-Maximization algorithm. These model parameters are estimated with the use of protein datasets like RS126 by using the Bayesian probabilistic method (the data set being categorical). This paper can then be extended to comparing the efficiency of the EM algorithm with that of other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Further, this paper provides scope for using these parameters for predicting the secondary structure of proteins using machine learning techniques like neural networks and fuzzy logic. The ultimate objective will be to obtain accuracy greater than previously achieved.
Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics
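For reference, the transition and emission probabilities of the title are typically re-estimated with the Baum-Welch (EM) updates below, written in standard hidden Markov model notation where γ and ξ are the usual posterior state and state-pair probabilities; this is the textbook form, not necessarily the exact scheme of the paper:

```latex
\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)},
\qquad
\hat{b}_j(k) = \frac{\sum_{t=1}^{T} \gamma_t(j)\,\mathbf{1}[o_t = k]}{\sum_{t=1}^{T} \gamma_t(j)} .
```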
Procedia PDF Downloads 480
539 Smart Polymeric Nanoparticles Loaded with Vincristine Sulfate for Applications in Breast Cancer Drug Delivery in MDA-MB 231 and MCF7 Cell Lines
Authors: Reynaldo Esquivel, Pedro Hernandez, Aaron Martinez-Higareda, Sergio Tena-Cano, Enrique Alvarez-Ramos, Armando Lucero-Acuna
Abstract:
Stimuli-responsive nanomaterials play an essential role in the loading, transport and proper distribution of anti-cancer compounds in the cellular surroundings. Outstanding properties such as the lower critical solution temperature (LCST), hydrolytic cleavage and the protonation/deprotonation cycle govern the release and delivery mechanisms of payloads. In this contribution, we experimentally determine the loading efficiency and release of the antineoplastic Vincristine sulfate for PNIPAM-interpenetrated-chitosan (PIntC) nanoparticles. Structural analysis was performed by Fourier transform infrared spectroscopy (FT-IR) and proton nuclear magnetic resonance (1H NMR). ζ-potential (ζ) and hydrodynamic diameter (DH) measurements were monitored by electrophoretic mobility (EM) and dynamic light scattering (DLS), respectively. Mathematical analysis of the release pharmacokinetics reveals a three-phase model above the LCST, while a monophasic Vincristine release model was observed at 32 °C. Cytotoxicity assays reveal a noticeable enhancement of Vincristine effectiveness at low drug concentration on HeLa cervix cancer and MDA-MB-231 breast cancer cells.
Keywords: nanoparticles, vincristine, drug delivery, PNIPAM
Procedia PDF Downloads 156
538 Multiscale Model of Blast Explosion Human Injury Biomechanics
Authors: Raj K. Gupta, X. Gary Tan, Andrzej Przekwas
Abstract:
Bomb blasts from Improvised Explosive Devices (IEDs) account for the vast majority of terrorist attacks worldwide. Injuries caused by IEDs result from a combination of the primary blast wave, penetrating fragments, and human body accelerations and impacts. This paper presents a multiscale computational model of coupled blast physics, whole human body biodynamics and injury biomechanics of sensitive organs. The disparity of the involved space and time scales is used to conduct sequential modeling of an IED explosion event, CFD simulation of blast loads on the human body and FEM modeling of body biodynamics and injury biomechanics. The paper presents simulation results for blast-induced brain injury, coupling macro-scale brain biomechanics and the micro-scale response of sensitive neuro-axonal structures. Validation results on animal models and physical surrogates are discussed. Results of our model can be used to 'replicate' field blast loadings in laboratory-controlled experiments using animal models and in vitro neuro-cultures.
Keywords: blast waves, improvised explosive devices, injury biomechanics, mathematical models, traumatic brain injury
Procedia PDF Downloads 249
537 Comparison of Unit Hydrograph Models to Simulate Flood Events at the Field Scale
Authors: Imene Skhakhfa, Lahbaci Ouerdachi
Abstract:
To ensure the overall coherence of simulated results, it is necessary to develop a robust validation process. In many applications, one is no longer content to calibrate and validate the model only against the hydrograph measured at the outlet; instead, one tries to better simulate the functioning of the watershed in space. The calibration is therefore also assessed against other variables, such as water level measurements at intermediate stations or groundwater levels. As part of this work, we limit ourselves to modeling floods of short duration, for which the process of evapotranspiration is negligible. The main parameters identifying the models are related to the unit hydrograph (UH) method. Three different models were tested: SNYDER, CLARK and SCS. These models differ in their mathematical structure and in the parameters to be calibrated, while the hydrological data are the same: the initial water content and precipitation. The models are compared on the basis of their performance in terms of six objective criteria: three global criteria and three criteria representing volume, peak flow, and the mean square error. The first type of criteria gives more weight to strong events, whereas the second considers all events to be of equal weight. The results show that the calibrated parameter values are dependent and also highlight the problems associated with the simulation of low-flow events and intermittent precipitation.
Keywords: model calibration, intensity, runoff, hydrograph
Procedia PDF Downloads 486