Search results for: testing simulation
2324 Typology of the Physico-Chemical Quality of the Water of the Touggourt Area, Case: Aquifers of the Intercalary Continental and the Terminal Complex, S-E Algeria
Authors: Habes Sameh, Bettahar Asma, Nezli Imad Eddine
Abstract:
The region of Touggourt is situated in the southern part of Algeria and receives important quantities of water extracted from fossil groundwater (the Intercalary Continental and the Terminal Complex). The mineralization of the Terminal Complex waters lies between 3 and 6.5 g/l, and that of the Intercalary Continental waters between 1.8 and 8.7 g/l, which constitutes an obstacle to their use. To highlight the origins of this mineralization, we used hydrochemical tools. The chemical analyses at our disposal were processed with the software "Statistica", which allowed us to carry out a principal component analysis (PCA, French: ACP). For the TC, the analysis showed a competition between sodium- or magnesium-chloride waters and calcium-bicarbonate waters rich in potassium, while for the IC there is a competition between sodium- or calcium-chloride waters and magnesium-sulphate waters. The thermodynamic simulation showed saturation indices that do not exceed zero for the waters of the two aquifers (TC and IC), indicating an undersaturation of the waters with respect to the minerals and highlighting the influence of the outcropping geological formations on water quality. These waters nevertheless remain acceptable for irrigation but must be treated before human consumption.
Keywords: ACP, intercalary, continental, mineralization, SI, Terminal Complex
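The principal component analysis mentioned above was carried out in Statistica on real hydrochemical analyses; purely as an illustration of the technique, a minimal PCA sketch in Python on synthetic stand-in data (the ion columns and all values are hypothetical, not the paper's data):

```python
import numpy as np

# Synthetic stand-in for a table of hydrochemical analyses:
# rows = water samples, columns = ion concentrations (hypothetical values).
rng = np.random.default_rng(0)
samples = rng.normal(loc=[150.0, 40.0, 25.0, 300.0],
                     scale=[30.0, 10.0, 5.0, 60.0],
                     size=(50, 4))  # e.g. Na+, Mg2+, K+, Cl- in mg/l

# Standardize, i.e. do the PCA on the correlation structure, as is usual
# for variables measured in different units/ranges.
z = (samples - samples.mean(axis=0)) / samples.std(axis=0)

# Eigendecomposition of the correlation matrix of the standardized data.
corr = np.cov(z, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]           # sort components by variance
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()          # variance share per component
scores = z @ eigvec                        # sample coordinates on the PCs
```

The `explained` vector is what lets one decide how many components describe the competition between water facies.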
Procedia PDF Downloads 528
2323 Application of the Bionic Wavelet Transform and Psycho-Acoustic Model for Speech Compression
Authors: Chafik Barnoussi, Mourad Talbi, Adnane Cherif
Abstract:
In this paper, we propose a new speech compression system based on the Bionic Wavelet Transform (BWT) combined with a psychoacoustic model. It is a modified version of a compression system that uses MDCT (Modified Discrete Cosine Transform) filter banks of 32 filters each together with the psychoacoustic model. The modification consists in replacing the MDCT filter-bank outputs by the bionic wavelet coefficients obtained by applying the BWT to the speech signal to be compressed. The two methods are evaluated and compared by counting bits before and after compression. Tested on different speech signals, the simulation results show that the proposed technique outperforms the MDCT-based one in terms of compressed file size. In terms of SNR, PSNR, and NRMSE, the output speech signals of the proposed system are of acceptable quality, and in terms of PESQ and intelligibility, the proposed technique yields reconstructed speech signals of good quality.
Keywords: speech compression, bionic wavelet transform, filter banks, psychoacoustic model
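The abstract compares coefficient budgets before and after compression. The BWT itself is adaptive and is not reproduced here; a plain one-level Haar transform with hard thresholding serves as a hedged stand-in for the idea of coding fewer wavelet coefficients (signal, threshold, and counts are illustrative only):

```python
import numpy as np

def haar_forward(x):
    # One-level Haar transform: approximation (a) and detail (d) coefficients.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Toy "speech" frame: a slowly varying tone plus a small high-frequency part.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
frame = np.sin(2 * np.pi * 5 * t) + 0.01 * np.cos(2 * np.pi * 90 * t)

a, d = haar_forward(frame)
d_compressed = np.where(np.abs(d) > 0.05, d, 0.0)  # drop small details

# Crude proxy for "bits before / bits after": nonzero coefficients to code
# (a real coder would quantize and entropy-code them).
nonzero_before = len(a) + np.count_nonzero(d)
nonzero_after = len(a) + np.count_nonzero(d_compressed)
reconstructed = haar_inverse(a, d_compressed)
```

Without thresholding the transform is perfectly invertible; thresholding trades a small reconstruction error for fewer coefficients to code.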
Procedia PDF Downloads 384
2322 A Robust Model Predictive Control for a Photovoltaic Pumping System Subject to Actuator Saturation Nonlinearity and Parameter Uncertainties: A Linear Matrix Inequality Approach
Authors: Sofiane Bououden, Ilyes Boulkaibet
Abstract:
In this paper, a robust model predictive controller (RMPC) is designed for an uncertain nonlinear system under actuator saturation, to control a DC-DC buck converter in a PV pumping application subject to actuator saturation and parameter uncertainties. The considered nonlinear system contains a linear constant part perturbed by an additive state-dependent nonlinear term. Based on the saturating-actuator property, an appropriate linear feedback control law is constructed and used to minimize an infinite-horizon cost function within the framework of linear matrix inequalities. The proposed approach successfully solves the optimization problem and stabilizes the nonlinear plant. Furthermore, sufficient conditions for the existence of the proposed controller guarantee the robust stability of the system in the presence of polytopic uncertainties. Simulation results demonstrate the efficiency of the proposed control scheme.
Keywords: PV pumping system, DC-DC buck converter, robust model predictive controller, nonlinear system, actuator saturation, linear matrix inequality
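The full RMPC synthesis requires an SDP solver for the LMIs and is not reproduced here; as a minimal related building block, the discrete Lyapunov equation that certifies closed-loop stability can be solved with plain linear algebra (the matrix A below is a hypothetical stable closed loop, not the converter model):

```python
import numpy as np

# Hypothetical stable closed-loop matrix A (e.g. a sampled buck converter
# under state feedback) and a positive-definite weight Q.
A = np.array([[0.7, 0.2],
              [-0.1, 0.5]])
Q = np.eye(2)

# Solve the discrete Lyapunov equation A^T P A - P = -Q by vectorization:
# vec(A^T P A) = (A^T kron A^T) vec(P) for row-major vec.
n = A.shape[0]
K = np.kron(A.T, A.T) - np.eye(n * n)
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
```

A positive-definite solution P confirms that V(x) = xᵀPx decreases along the closed-loop trajectories, which is the Lyapunov certificate that the LMI formulation generalizes to uncertain, saturated plants.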
Procedia PDF Downloads 181
2321 Numerical Simulation of Unsteady Natural Convective Nanofluid Flow within a Trapezoidal Enclosure Using Meshfree Method
Authors: S. Nandal, R. Bhargava
Abstract:
The paper presents a numerical study of the unsteady magneto-hydrodynamic natural convection flow of nanofluids within a symmetrical wavy-walled trapezoidal enclosure, whose length and height are both equal to L. A two-phase nanofluid model is employed. The governing equations of the nanofluid flow, along with the boundary conditions, are non-dimensionalized and solved using a meshfree technique, the element-free Galerkin method (EFGM), which does not require a predefined mesh for discretization of the domain. The bottom wavy wall of the enclosure is defined by a cosine function. The effects of various parameters, namely the time t, the amplitude of the bottom wavy wall a, the Brownian motion parameter Nb, and the thermophoresis parameter Nt, are examined on the rates of heat and mass transfer to visualize the cooling and heating effects. Such problems have important applications in heat exchangers and solar collectors, as wavy-walled enclosures enhance heat transfer compared with flat-walled enclosures.
Keywords: heat transfer, meshfree methods, nanofluid, trapezoidal enclosure
Procedia PDF Downloads 158
2320 Analysis of Fuel Efficiency in Heavy Construction Compaction Machine and Factors Affecting Fuel Efficiency
Authors: Amey Kulkarni, Paavan Shetty, Amol Patil, B. Rajiv
Abstract:
Fuel efficiency plays a very important role in the overall performance of an automobile. In this paper, the fuel efficiency of a heavy construction compaction machine is studied. Fuel-consumption trials are performed to obtain the fuel consumed while the compactor performs a defined set of actions. Heavy construction machines are usually put to work in locations where refilling the fuel tank is not easy, and they consume fuel at a greater rate than a passenger automobile, so a fuel-efficient machine is important for long working hours; fuel efficiency is also a key point in determining the future scope of the product. A heavy construction compaction machine operates in five major modes: traveling, static working, high-frequency low-amplitude compaction, low-frequency high-amplitude compaction, and low idle. Fuel-consumption readings at engine speeds of 1950 rpm, 2000 rpm, and 2350 rpm are taken using a differential fuel-flow meter and analyzed, and the optimum RPM setting that satisfies both the fuel-efficiency and engine-performance criteria is selected. Other factors, such as rear-end gears, intake and exhaust restriction of the engine, vehicle operating techniques, air drag, tribological aspects, and tires, are also considered. Testing the compactor at different combinations of engine RPM while accounting for these factors led to a significant improvement in the fuel efficiency of the compactor.
Keywords: differential fuel flow meter, engine RPM, fuel efficiency, heavy construction compaction machine
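As a sketch of how a differential fuel-flow meter reading translates into a fuel-efficiency figure: the meter reports supply minus return flow, and integrating that differential over the trial gives litres consumed. All numbers below are hypothetical, not from the trials:

```python
import numpy as np

# Hypothetical log from a differential fuel-flow meter during a 30-minute
# compaction trial at a fixed engine RPM: supply minus return flow, in L/h.
t_hours = np.arange(31) / 60.0        # one reading per minute, 0 .. 0.5 h
diff_flow = np.full(31, 4.2)          # assumed constant 4.2 L/h burn rate

# Trapezoidal integration of the differential flow gives litres consumed.
fuel_used = np.sum((diff_flow[1:] + diff_flow[:-1]) / 2.0 * np.diff(t_hours))

# Relating fuel to useful work, e.g. an assumed coverage rate of 1200 m^2/h.
area_covered = 1200.0 * 0.5           # m^2 compacted in the half-hour trial
efficiency = area_covered / fuel_used # m^2 compacted per litre of fuel
```

Repeating this at 1950, 2000, and 2350 rpm and comparing the resulting figures is the comparison the abstract describes.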
Procedia PDF Downloads 291
2319 Modelling a Distribution Network with a Hybrid Solar-Hydro Power Plant in Rural Cameroon
Authors: Contimi Kenfack Mouafo, Sebastian Klick
Abstract:
In the rural and remote areas of Cameroon, access to electricity is very limited, since most of the population is not connected to the main utility grid. Throughout the country, efforts are underway not only to expand the utility grid to these regions but also to provide reliable off-grid access to electricity. The Cameroonian company Solahydrowatt is currently working on the design and planning of one of the first hybrid solar-hydropower plants of Cameroon in Fotetsa, in the western region of the country, to provide the population with reliable access to electricity. This paper models and proposes a design for the low-voltage network fed by the hybrid solar-hydropower plant in Fotetsa. The modelling takes into consideration the voltage compliance of the distribution network, the maximum loading of operating equipment, and, most importantly, the ability of the network to operate as an off-grid system. The resulting distribution network not only complies with the Cameroonian voltage-deviation standard, but is also capable of being operated as a stand-alone network independent of the main utility grid.
Keywords: Cameroon, rural electrification, hybrid solar-hydro, off-grid electricity supply, network simulation
Procedia PDF Downloads 124
2318 Estimation of Missing Values in Aggregate Level Spatial Data
Authors: Amitha Puranik, V. S. Binu, Seena Biju
Abstract:
Missing data is a common problem in spatial analysis, especially at the aggregate level. Missingness can occur in the covariates, in the response variable, or in both at a given location. Many missing-data techniques are available to estimate missing values, but not all of them can be applied to spatial data, since such data are autocorrelated. Hence there is a need for a method that estimates the missing values of both the response variable and the covariates in spatial data while taking the spatial autocorrelation into account. The present study aims to develop a model to estimate missing data points at the aggregate level by accounting for (a) the spatial autocorrelation of the response variable, (b) the spatial autocorrelation of the covariates, and (c) the correlation between the covariates and the response variable. Estimating the missing values of spatial data requires a model that explicitly accounts for the spatial autocorrelation; the proposed model does so and additionally utilizes the correlations that exist between covariates, within covariates, and between the response variable and the covariates. Precise estimation of the missing data points will increase the precision of the estimated effects of the independent variables on the response variable in spatial regression analysis.
Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis
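The proposed joint model is not reproduced here; purely as an illustration of the underlying idea of borrowing strength from spatially autocorrelated neighbors, a minimal inverse-distance-weighted imputation on toy aggregate-level data (coordinates and rates are hypothetical):

```python
import numpy as np

def idw_impute(coords, values, power=2.0):
    """Fill NaNs with an inverse-distance-weighted mean of observed sites."""
    values = values.astype(float).copy()
    obs = ~np.isnan(values)
    for i in np.where(np.isnan(values))[0]:
        d = np.linalg.norm(coords[obs] - coords[i], axis=1)
        w = 1.0 / np.maximum(d, 1e-12) ** power   # nearer sites weigh more
        values[i] = np.sum(w * values[obs]) / np.sum(w)
    return values

# Toy aggregate data: 5 district centroids, the central one with a missing rate.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
rates = np.array([10.0, 12.0, 11.0, 13.0, np.nan])
filled = idw_impute(coords, rates)
```

Unlike the paper's model, this sketch ignores covariates entirely; it only shows why spatial proximity carries information that generic (non-spatial) imputation discards.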
Procedia PDF Downloads 382
2317 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM; therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning the global partitioning and element-connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing on a local machine. It extracts from the mesh file the minimum amount of information the DSEM code requires to start in parallel and stores it in pre-files, packing integer-type information in a stream-binary format that is portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node a single MPI rank reads the data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution at every time step; for this, the code can make use of either its own matrix math library or BLAS implementations such as Intel MKL or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak-scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
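The exact DSEM pre-file layout is not public; a minimal sketch of the stream-binary idea, packing a per-node integer startup record portably with Python's struct module (the record fields, file name, and values are hypothetical):

```python
import os
import struct
import tempfile

# Hypothetical per-node startup record: a partition id followed by the
# global element ids owned by that node, packed as little-endian 32-bit
# integers so the file is portable between machines.
element_ids = [17, 42, 109, 3000]
partition_id = 7

record = struct.pack("<ii", partition_id, len(element_ids))
record += struct.pack("<%di" % len(element_ids), *element_ids)

path = os.path.join(tempfile.mkdtemp(), "node0007.pre")
with open(path, "wb") as f:
    f.write(record)

# Reader side (what the designated rank on a node would do at startup):
with open(path, "rb") as f:
    pid, count = struct.unpack("<ii", f.read(8))
    ids = list(struct.unpack("<%di" % count, f.read(4 * count)))
```

Fixing the byte order and width up front is what makes such files readable regardless of which cluster generated them, which is the portability property the abstract describes.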
Procedia PDF Downloads 133
2316 Development of a Drive Cycle Based Control Strategy for the KIIRA-EV SMACK Hybrid
Authors: Richard Madanda, Paul Isaac Musasizi, Sandy Stevens Tickodri-Togboa, Doreen Orishaba, Victor Tumwine
Abstract:
New vehicle concepts targeting specific geographical markets are designed to satisfy a unique set of road and load requirements. The KIIRA-EV SMACK (KES) hybrid vehicle is designed in Uganda for the East African market. The engine and generator added to the KES electric power train serve both as a range extender and as a power assist. In this paper, the design considerations taken to achieve proper management of the on-board power from the batteries and the engine-generator, based on the specific drive cycle, are presented. To harness the fuel-efficiency benefits of the power train, a control philosophy operating the engine and generator in their most efficient speed-torque and speed-power regions is presented. Using a model developed in MATLAB with Simulink and Stateflow, preliminary results show that the steady-state response of the vehicle is satisfactory for a hypothetical drive cycle mimicking the expected drive conditions in city and highway traffic.
Keywords: control strategy, drive cycle, hybrid vehicle, simulation
Procedia PDF Downloads 380
2315 Self-Tuning-Filter and Fuzzy Logic Control for Shunt Active Power Filter
Authors: Kaddari Faiza, Mazari Benyounes, Mihoub Youcef, Safa Ahmed
Abstract:
Active filtering of electric power has now become a mature technology for compensating the reactive power and harmonics caused by the proliferation of power-electronics devices used for industrial, commercial, and residential purposes. The aim of this study is to enhance power quality by improving the performance of the shunt active power filter in harmonic mitigation, so as to obtain sinusoidal source currents with very weak ripples. A power-circuit configuration and a control scheme for the shunt active power filter are described, with an improved compensation method that uses a self-tuning filter for harmonic identification and fuzzy logic control to generate the reference current. Simulation results (using MATLAB/Simulink) illustrate the compensation characteristics of the proposed control strategy. Analysis of these results proves the feasibility and effectiveness of this method for improving power quality, and also shows the performance of the fuzzy logic control, which provides flexibility, high precision, and fast response. The total harmonic distortion (THD %) in the simulations was found to be within the limits imposed by the IEEE 519-1992 harmonic standard.
Keywords: Active Power Filter (APF), Self-Tuning Filter (STF), fuzzy logic control, hysteresis-band control
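The THD figure checked against IEEE 519-1992 can be computed from an FFT of the compensated source current; a minimal sketch on a synthetic current with assumed residual 3rd and 5th harmonics (the waveform is illustrative, not a simulation output):

```python
import numpy as np

fs = 10000.0                       # sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)      # exactly ten periods of a 50 Hz fundamental
# Hypothetical compensated source current: fundamental + small residuals.
i_s = (1.00 * np.sin(2 * np.pi * 50 * t)
       + 0.05 * np.sin(2 * np.pi * 150 * t)    # residual 3rd harmonic
       + 0.03 * np.sin(2 * np.pi * 250 * t))   # residual 5th harmonic

spectrum = np.abs(np.fft.rfft(i_s)) * 2 / len(t)   # one-sided amplitudes
k1 = int(50 * len(t) / fs)                         # FFT bin of the fundamental
harmonics = spectrum[2 * k1 :: k1]                 # bins of the 2nd, 3rd, ...
thd = np.sqrt(np.sum(harmonics ** 2)) / spectrum[k1]
```

Here THD = sqrt(0.05² + 0.03²)/1 ≈ 5.83%, i.e., a current this clean would comply with the 519-1992 limits for typical systems; windowing over an integer number of fundamental periods is what makes the harmonic bins exact.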
Procedia PDF Downloads 738
2314 The Experiment and Simulation Analysis of the Effect of CO₂ and Steam Addition on Syngas Composition of Natural Gas Non-Catalyst Partial Oxidation
Authors: Zhenghua Dai, Jianliang Xu, Fuchen Wang
Abstract:
Non-catalytic partial oxidation technology has been widely used to produce syngas by reforming hydrocarbons, both gaseous (natural gas, shale gas, refinery gas, coalbed gas, coke oven gas, pyrolysis gas, etc.) and liquid (residual oil, asphalt, deoiled asphalt, biomass oil, etc.). For natural gas non-catalytic partial oxidation, the H₂/CO (v/v) of the syngas is about 1.8, which agrees well with the requirements of Fischer-Tropsch (FT) synthesis. For other processes, however, such as carbonylation and glycol synthesis, the H₂/CO (v/v) should be close to 1 and 2, respectively. The syngas composition of non-catalytic partial oxidation must therefore be adjusted to satisfy the requirements of different chemical syntheses, which calls for a multi-reforming method based on CO₂ and steam addition. A hot model of natural gas non-catalytic partial oxidation was established, and the effects of the O₂/CH₄ ratio, steam, and CO₂ on the syngas composition were studied. The experimental results indicate that the addition of CO₂ and steam into the reformer can be applied to adjust the syngas H₂/CO ratio. A reactor network model (RN model) was established according to the flow partition of the industrial reformer and GRI-Mech 3.0; the RN model results agree well with the industrial data, and the effects of steam and CO₂ on the syngas composition were studied with the RN model.
Keywords: non-catalyst partial oxidation, natural gas, H₂/CO, CO₂ and H₂O addition, multi-reforming method
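Why CO₂ addition lowers the H₂/CO ratio can be seen from idealized stoichiometry alone; this back-of-the-envelope sketch ignores equilibrium, steam, and side reactions, so it is not the hot-model or RN-model calculation:

```python
# Idealized reforming stoichiometry:
#   partial oxidation: CH4 + 1/2 O2 -> CO + 2 H2    (H2/CO = 2)
#   dry reforming:     CH4 + CO2    -> 2 CO + 2 H2  (H2/CO = 1)
# Routing a fraction f of the CH4 through dry reforming and (1 - f) through
# partial oxidation gives H2/CO = 2 / (1 + f), so f = 2/r - 1 for a target r.

def co2_fraction_for_ratio(r):
    """Fraction of CH4 to dry-reform for a target H2/CO ratio r in [1, 2]."""
    if not 1.0 <= r <= 2.0:
        raise ValueError("reachable H2/CO ratios are between 1 and 2")
    return 2.0 / r - 1.0

f = co2_fraction_for_ratio(1.8)   # target H2/CO for FT synthesis
h2 = 2.0                          # mol H2 per mol CH4, same for both routes
co = (1 - f) * 1.0 + f * 2.0      # mol CO per mol CH4 after blending
```

In this idealization, about 11% of the methane routed through CO₂ reforming is enough to move H₂/CO from 2 toward the 1.8 quoted above; reaching ratios above 2 (e.g. for glycol synthesis) requires steam addition instead, via the water-gas shift.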
Procedia PDF Downloads 212
2313 Emergence of Ciprofloxacin Intermediate Susceptible Salmonella Typhi in India
Authors: Meenakshi Chaudhary, V .S. Randhawa, M. Jais, R. Dutta
Abstract:
Introduction: An outbreak of multidrug-resistant S. Typhi (i.e., resistant to chloramphenicol, ampicillin, and trimethoprim-sulfamethoxazole) occurred in the 1990s in India, peaked in 1992-93, and resulted in the change of the drug of choice for enteric fever from chloramphenicol to ciprofloxacin. Currently, an emergence of ciprofloxacin intermediate-susceptible S. Typhi isolates, which appears to be chromosomally mediated, is being reported in the region. Methodology: Six hundred sixty-four strains, representative of the north, central, and south zones of India, were randomly selected from the period January 2008-December 2011 at the National Salmonella Phage Typing Centre, LHMC, New Delhi. All isolates were subjected to serotyping, biotyping, and phage typing, and then to antimicrobial susceptibility testing by the CLSI disk diffusion technique against ciprofloxacin, cefotaxime, ampicillin, chloramphenicol, trimethoprim-sulfamethoxazole, and tetracycline. Subsequently, the MIC of the isolates was determined by E-test (AB-Biodisc). Results: More than 80% of the tested strains had intermediate susceptibility to ciprofloxacin. The E-test revealed the MIC (ciprofloxacin) of these strains to be in the range of 0.12 to 0.5 µg/ml. Sixty-nine percent of the ciprofloxacin intermediate-susceptible strains belonged to phage type E1, and fourteen percent of these were Vi-negative, i.e., they could not be typed by the phage-typing scheme of Craigie and Yen. All the strains remained susceptible to cefotaxime. Conclusion: The predominant isolation of intermediate-susceptible S. Typhi strains from India would alter the recommendations for empiric treatment of enteric fever in the region: an alternative to low-cost ciprofloxacin will have to be sought, or an increased dosage and/or duration of ciprofloxacin will have to be recommended. The reasons for the increasing percentage of intermediate-susceptible S. Typhi strains are not clear but may be attributed partly to the revision of the CLSI guidelines in 2013.
Keywords: salmonella typhi, decreased ciprofloxacin susceptibility, ciprofloxacin, minimum inhibitory concentration
Procedia PDF Downloads 322
2312 Sulfide Removal from Liquid Using Biofilm on Packed Bed of Salak Fruit Seeds
Authors: Retno Ambarwati Sigit Lestari, Wahyudi Budi Sediawan, Sarto Sarto
Abstract:
This study focused on the removal of sulfide from a liquid solution using a biofilm on a packed bed of salak fruit seeds. The biofilter operation of 444 hours consisted of six phases, each lasting approximately 72 to 82 hours and run at various inlet concentrations and flow rates. The highest removal efficiency was 92.01%, at the end of phase 7, at an inlet concentration of 60 ppm and a flow rate of 30 mL min-1. A mathematical model of sulfide removal was proposed to describe the operation of the biofilter; the model can be applied to describe the removal of sulfide from liquid using a packed-bed biofilter. The simulation yields the values of the process parameters: the maximum specific growth rate is 4.15E-8 s-1, the saturation constant is 9.1E-8 g cm-3, the liquid mass-transfer coefficient is 0.5 cm s-1, Henry's constant is 0.007, and the ratio of the mass of microorganisms grown to the mass of sulfide consumed is 30. In the early stage of the process, the maximum specific growth rate is 4.0E-8 s-1.
Keywords: biofilm, packed bed, removal, sulfide, salak fruit seeds
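Using the Monod-type parameters reported above, a minimal Euler integration sketches growth-limited sulfide uptake; the biomass level, the ppm-to-g/cm³ conversion, and the time step are assumptions for illustration, not values from the study:

```python
import numpy as np

# Parameter values reported in the abstract, units as stated there.
mu_max = 4.15e-8     # maximum specific growth rate, 1/s
Ks = 9.1e-8          # saturation constant, g/cm^3
yield_ratio = 30.0   # mass of microorganisms grown per mass of sulfide consumed

# Assumed for illustration (not from the abstract): inlet sulfide converted
# from 60 ppm assuming a liquid density of ~1 g/cm^3, and a biomass level.
S = 60e-6            # sulfide concentration, g/cm^3
X = 1e-6             # biofilm biomass concentration, g/cm^3
dt = 60.0            # Euler time step, s

history = [S]
for _ in range(10000):                    # roughly a week of operation
    mu = mu_max * S / (Ks + S)            # Monod specific growth rate
    S = max(S - (mu * X / yield_ratio) * dt, 0.0)
    history.append(S)
```

With S far above Ks, growth runs near mu_max and the uptake is essentially linear in time; the full biofilter model additionally couples this kinetics to liquid-film mass transfer (the 0.5 cm s⁻¹ coefficient) and gas-liquid partitioning (the Henry's constant).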
Procedia PDF Downloads 194
2311 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimates. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology; specifically, it is demonstrated how our proposed approaches can be practically used on financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
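The realized range estimator at the core of the approach can be illustrated on a simulated jump-free diffusion; this sketch uses Parkinson's 4 ln 2 scaling on per-interval high-low ranges, while the jump thresholding and the GQARCH fit of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

sigma = 0.01            # true daily volatility of the log-price (assumed)
n_intervals = 250       # intra-day sampling intervals
steps = 300             # fine steps per interval to read off high/low

# Simulate one day of a driftless log-price path.
dt = 1.0 / (n_intervals * steps)
increments = sigma * np.sqrt(dt) * rng.standard_normal(n_intervals * steps)
logp = np.cumsum(increments).reshape(n_intervals, steps)

# Per-interval log high minus log low, i.e., ln(H/L).
ranges = logp.max(axis=1) - logp.min(axis=1)

# Parkinson's realized range-based estimator of the integrated variance.
rv_range = np.sum(ranges ** 2) / (4.0 * np.log(2.0))
```

Because each range uses every price inside its interval, this estimator is several times more efficient than the sum of squared interval returns, which is the intra-day information the abstract refers to; discrete sampling biases it slightly downward.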
Procedia PDF Downloads 158
2310 Modeling of Power Network by ATP-Draw for Lightning Stroke Studies
Authors: John Morales, Armando Guzman
Abstract:
Protection relay algorithms play a crucial role in electric power system stability, and it is clear that lightning strokes produce the major percentage of faults and outages of transmission lines (TLs) and distribution feeders (DFs). In this context, it is imperative to develop novel protection relay algorithms. To reach this aim, however, electric power system (EPS) networks have to be simulated as realistically as possible, especially the lightning phenomena and the EPS elements that affect their behavior, such as direct and indirect lightning, insulator strings, overhead lines, and soil ionization. Researchers have nevertheless proposed new protection relay algorithms considering only common faults not produced by lightning strokes, omitting these phenomena, which are imperative for the behavior of transmission line protection relays. Based on the above, this paper presents the possibilities of using the Alternative Transients Program ATP-Draw for the modeling and simulation of models for lightning stroke studies, especially for protection relays, developed through the Transient Analysis of Control Systems (TACS) and the MODELS language of ATP-Draw.
Keywords: back-flashover, faults, flashover, lightning stroke, modeling of lightning, outages, protection relays
Procedia PDF Downloads 316
2309 Numerical Investigation of Flow Characteristics inside the External Gear Pump Using Urea Liquid Medium
Authors: Kumaresh Selvakumar, Man Young Kim
Abstract:
In a selective catalytic reduction (SCR) unit, the injection system is provided with a dedicated dosing pump to govern the urea injection. The urea-based operating liquid from the AdBlue tank links up directly with the dosing pump unit, which furnishes the appropriately high pressure for examining the flow characteristics inside the liquid pump. This work aims at demonstrating the importance of the external gear pump in providing the pertinent high pressure and the respective mass flow rate for each rotation. Numerical simulations are conducted using the immersed solid method for a better understanding of the unsteady flow characteristics within the pump. Parametric analyses have been carried out for the gear speed and mass flow rate to find the behavior of the pressure fluctuations. In the simulation results, the outlet pressure reaches its maximum magnitude as the rotational speed increases, and the fluctuations grow higher.
Keywords: AdBlue tank, external gear pump, immersed solid method, selective catalytic reduction
Procedia PDF Downloads 280
2308 DMBR-Net: Deep Multiple-Resolution Bilateral Networks for Real-Time and Accurate Semantic Segmentation
Authors: Pengfei Meng, Shuangcheng Jia, Qian Li
Abstract:
We propose a real-time, high-precision semantic segmentation network based on a multi-resolution feature fusion module, an auxiliary feature extraction module, an upsampling module, and an atrous spatial pyramid pooling (ASPP) module. We designed a feature fusion structure that integrates sufficient features of different resolutions. We also studied the effect of side-branch structures on the network and, based on our findings, used a side-branch auxiliary feature extraction layer to improve the effectiveness of the network. We further designed an upsampling module that gives better results than the original one. In addition, we reconsidered the locations and number of atrous spatial pyramid pooling (ASPP) modules and modified the network structure according to the experimental results to further improve its effectiveness. The network presented in this paper takes the backbone of BiSeNetV2 as a basic network, on which we built an improved structure; we name it the Deep Multiple-Resolution Bilateral Network for real-time segmentation, referred to as DMBR-Net. In experimental testing, the proposed DMBR-Net achieved 81.2% mIoU at 119 FPS on the Cityscapes validation dataset, 80.7% mIoU at 109 FPS on the CamVid test dataset, and 29.9% mIoU at 78 FPS on the COCO-Stuff test dataset. Compared with all lightweight real-time semantic segmentation networks, our network achieves the highest accuracy at an appropriate speed.
Keywords: multi-resolution feature fusion, atrous convolution, bilateral networks, pyramid pooling
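The mIoU figures quoted above follow the standard definition via the confusion matrix; a minimal sketch (the four-pixel, two-class example is purely illustrative):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean intersection-over-union computed from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, g in zip(pred.ravel(), gt.ravel()):
        cm[g, p] += 1
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)        # guard against empty classes
    return iou.mean(), iou

# Tiny hypothetical example: four pixels, two classes.
pred = np.array([0, 0, 1, 1])
gt   = np.array([0, 1, 1, 1])
miou, per_class = mean_iou(pred, gt, 2)
```

Here class 0 scores IoU 1/2 and class 1 scores 2/3, so mIoU = 7/12; averaging per class rather than per pixel is what keeps rare classes from being swamped in benchmark scores.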
Procedia PDF Downloads 150
2307 Optimal Scheduling of Load and Operational Strategy of a Load Aggregator to Maximize Profit with PEVs
Authors: Md. Shafiullah, Ali T. Al-Awami
Abstract:
This project proposes the optimal scheduling of the imported power of a load aggregator, utilizing EVs to maximize its profit. With the increase of renewable energy resources, the electricity price in a competitive market becomes more uncertain; on the other hand, with the penetration of renewable distributed generators in the distribution network, the predicted load of a load aggregator also becomes uncertain in real time. Although both load and price are uncertain, the storage capacity of EVs can make the operation of the load aggregator flexible. The load aggregator (LA) submits its offer to the day-ahead market based on the predicted loads and the optimized use of its EVs so as to maximize its profit, and in real-time operation it likewise uses its energy storage capacity in a profit-maximizing way. In this project, the load aggregator's profit-maximization algorithm is formulated, and the optimization problem is solved with the help of CVX. Since in real-time operation the forecasted loads differ from the actual loads, the mismatches are settled in the real-time balancing market. Simulation results compare the profit of a load aggregator with a hypothetical group of 1000 EVs and without EVs.
Keywords: CVX, electricity market, load aggregator, load and price uncertainties, profit maximization, real time balancing operation
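The paper's formulation is solved with CVX in MATLAB and is not reproduced here; a much-simplified, deterministic analogue of the day-ahead scheduling idea can be written as a linear program in SciPy (prices, loads, and the aggregate EV limits below are all hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

T = 6
price = np.array([0.05, 0.04, 0.06, 0.12, 0.15, 0.10])   # $/kWh, assumed
load  = np.array([2.0, 2.0, 3.0, 5.0, 6.0, 4.0])          # kWh per hour
E, Pmax = 8.0, 4.0   # assumed aggregate EV energy and power limits

# Decision vector x = [charge(T), discharge(T)], all nonnegative.
cost = np.concatenate([price, -price])    # grid cost of charging / avoided cost
L = np.tril(np.ones((T, T)))              # cumulative-sum operator
A_ub = np.vstack([
    np.hstack([L, -L]),                   # state of charge stays <= E
    np.hstack([-L, L]),                   # state of charge stays >= 0
    np.hstack([-np.eye(T), np.eye(T)]),   # grid import stays nonnegative
])
b_ub = np.concatenate([E * np.ones(T), np.zeros(T), load])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, Pmax)] * (2 * T))
best_cost = res.fun + price @ load        # total day-ahead purchase cost
```

The optimizer charges the fleet in the cheap early hours and discharges into the expensive peak, cutting the purchase cost below the no-storage baseline; the paper's full problem adds price/load uncertainty and real-time balancing settlement on top of this skeleton.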
Procedia PDF Downloads 416
2306 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting
Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi
Abstract:
An ad hoc network consists of wireless mobile equipment that connects without any infrastructure. The best way to form a hierarchical structure in such a network is clustering, and various clustering methods can form more or less stable clusters depending on the nodes' mobility. In this research, we propose an algorithm that allocates a weight to each node based on factors such as link stability and power-depletion rate. According to the weights allocated in the first phase, a cellular learning automaton picks out, in the second phase, the nodes that are candidates for becoming cluster heads. In the third phase, the learning automaton selects the cluster-head nodes and the member nodes and forms the clusters. The automaton thus learns from its environment and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm, we used OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease the network overhead by reducing the update rate.
Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power
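The two-level learning-automaton machinery is not reproduced here; a minimal sketch of the weight-allocation phase and a greedy cluster-head election on a toy topology (the weights, the factor split, and the graph are hypothetical):

```python
# A minimal sketch of weight-based cluster-head election: the highest-weight
# uncovered node becomes a head and its one-hop neighbors join its cluster.
def elect_cluster_heads(weights, adjacency):
    heads, covered, member_of = [], set(), {}
    for node in sorted(weights, key=weights.get, reverse=True):
        if node in covered:
            continue
        heads.append(node)
        covered.add(node)
        member_of[node] = node
        for nb in adjacency[node]:
            if nb not in covered:
                covered.add(nb)
                member_of[nb] = node
    return heads, member_of

# Hypothetical per-node factors: link stability and power-drain rate.
w_stability, w_power = 0.6, 0.4
stability = {"a": 0.9, "b": 0.4, "c": 0.8, "d": 0.3, "e": 0.7}
drain     = {"a": 0.2, "b": 0.5, "c": 0.1, "d": 0.6, "e": 0.3}
weights = {n: w_stability * stability[n] + w_power * (1 - drain[n])
           for n in stability}

adjacency = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"],
             "d": ["c", "e"], "e": ["d"]}
heads, member_of = elect_cluster_heads(weights, adjacency)
```

This greedy pass yields non-adjacent heads covering every node; the paper's contribution is to let a learning automaton adapt such decisions over time instead of deciding them in one static pass.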
Procedia PDF Downloads 411
2305 Symbol Synchronization and Resource Reuse Schemes for Layered Video Multicast Service in Long Term Evolution Networks
Authors: Chung-Nan Lee, Sheng-Wei Chu, You-Chiun Wang
Abstract:
LTE (Long Term Evolution) employs the eMBMS (evolved Multimedia Broadcast/Multicast Service) protocol to deliver video streams to a multicast group of users. However, it requires all multicast members to receive the video stream at the same transmission rate, which degrades the overall service quality when some users encounter bad channel conditions. To overcome this problem, this paper provides two efficient resource-allocation schemes for such LTE networks. The symbol synchronization (S2) scheme assumes that the macro and pico eNodeBs use the same frequency channel to deliver the video stream to all users; it then adopts a multicast transmission index to guarantee fairness among users. The resource reuse (R2) scheme, on the other hand, allows eNodeBs to transmit data on different frequency channels and, by introducing the concept of frequency reuse, further improves the overall service quality. Extensive simulation results show that the S2 and R2 schemes improve fairness by around 50% and video quality by around 14%, respectively, compared with the common maximum-throughput method.
Keywords: LTE networks, multicast, resource allocation, layered video
Procedia PDF Downloads 389
2304 A Bivariate Inverse Generalized Exponential Distribution and Its Applications in Dependent Competing Risks Model
Authors: Fatemah A. Alqallaf, Debasis Kundu
Abstract:
The aim of this paper is to introduce a bivariate inverse generalized exponential distribution that has a singular component. The proposed bivariate distribution can be used when the marginals have heavy-tailed distributions and non-monotone hazard functions. Due to the presence of the singular component, it can be used quite effectively when there are ties in the data. Since it has four parameters, it is a very flexible bivariate distribution that can be applied to a variety of bivariate data sets. Several dependency properties and dependency measures have been obtained. The maximum likelihood estimators cannot be obtained in closed form, as this involves solving a four-dimensional optimization problem. To avoid that, we propose an EM algorithm that requires solving only one non-linear equation at each E-step, making its implementation very straightforward in practice. Extensive simulation experiments have been performed, and one data set has been analyzed to show that the proposed bivariate inverse generalized exponential distribution is suitable for modeling dependent competing risks data.
Keywords: Block and Basu bivariate distributions, competing risks, EM algorithm, Marshall-Olkin bivariate exponential distribution, maximum likelihood estimators
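The singular component that produces ties can be sketched with a Marshall-Olkin-type construction (cited in the keywords): a shared shock hits both components, so X = Y with positive probability. Plain exponential latent variables are used below purely for simplicity; the paper's model replaces them with inverse generalized exponential marginals, so this is an assumption-laden sketch of the mechanism, not the proposed distribution.

```python
# Sketch of how a common shock creates a singular component (ties).
# Exponential latent variables are an assumption; the paper uses
# inverse generalized exponential marginals.

import random

def marshall_olkin_pair(l1, l2, l3, rng):
    u1 = rng.expovariate(l1)
    u2 = rng.expovariate(l2)
    u3 = rng.expovariate(l3)  # common shock shared by both components
    return min(u1, u3), min(u2, u3)

rng = random.Random(0)
pairs = [marshall_olkin_pair(1.0, 1.0, 1.0, rng) for _ in range(10000)]
tie_fraction = sum(x == y for x, y in pairs) / len(pairs)
print(tie_fraction)  # close to the theoretical l3/(l1+l2+l3) = 1/3
```

Ties occur exactly when the shared shock u3 is smaller than both individual shocks, which is why a continuous bivariate density alone cannot model such data.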
Procedia PDF Downloads 143
2303 Signed Language Phonological Awareness: Building Deaf Children's Vocabulary in Signed and Written Language
Authors: Lynn Mcquarrie, Charlotte Enns
Abstract:
The goal of this project was to develop a visually based, signed language phonological awareness training program and to pilot the intervention with signing deaf children (ages 6-10 years, grades 1-4) who were beginning readers, in order to assess the effects of systematic, explicit American Sign Language (ASL) phonological instruction on both ASL vocabulary and English print vocabulary learning. Growing evidence that signing learners draw on visually based signed language phonological knowledge (homologous to the sound-based phonological level of spoken language processing) when reading underscores the critical need for further research on innovative reading instruction for visual language learners. Multiple single-case studies using a multiple-probe design across content (i.e., sign and print targets incorporating specific ASL phonological parameters, namely handshapes) were implemented to examine whether a functional relationship existed between instruction and acquisition of these skills. The results indicated that for all cases, representing a variety of language abilities, the visually based phonological teaching approach was exceptionally powerful in helping children build their sign and print vocabularies. Although intervention/teaching studies have been essential in testing hypotheses about the spoken language phonological processes supporting hearing children's reading development, there are no parallel intervention/teaching studies exploring hypotheses about the signed language phonological processes supporting deaf children's reading development. This study begins to provide the evidence needed to pursue innovative teaching strategies that build on the strengths of visual learners.
Keywords: American Sign Language phonological awareness, dual language strategies, vocabulary learning, word reading
Procedia PDF Downloads 333
2302 Vaccine Development for Newcastle Disease Virus in Poultry
Authors: Muhammad Asif Rasheed
Abstract:
Newcastle disease virus (NDV), an avian orthoavulavirus, is the causative agent of Newcastle disease (ND) and can even cause epidemics when the disease is not controlled. Several vaccines based on attenuated and inactivated viruses have previously been reported, but they lose effectiveness over time due to ongoing changes in the viral genome. We therefore aimed to develop an effective multi-epitope vaccine against the haemagglutinin-neuraminidase (HN) protein of 26 NDV strains from Pakistan through modern immunoinformatic approaches. A vaccine chimera was constructed by combining T-cell and B-cell epitopes with appropriate linkers and an adjuvant. The designed vaccine was highly immunogenic, non-allergenic, and antigenic; the potential 3D structure of the multi-epitope vaccine was therefore constructed, refined, and validated. A molecular docking study of the multi-epitope vaccine candidate with the chicken Toll-like receptor 4 indicated successful binding. An in silico immunological simulation was used to evaluate the candidate vaccine's ability to elicit an effective immune response. According to these computational studies, the proposed multi-epitope vaccine is physically stable and may induce immune responses, suggesting it is a strong candidate against the 26 Newcastle disease virus strains from Pakistan. A wet-lab study is under way to confirm the results.
Keywords: epitopes, Newcastle disease virus, paramyxovirus, vaccine
Procedia PDF Downloads 120
2301 Determination of Unsaturated Soil Permeability Based on Geometric Factor Development of Constant Discharge Model
Authors: A. Rifa’i, Y. Takeshita, M. Komatsu
Abstract:
After the Yogyakarta earthquake in 2006, the main problem in the first yard of Prambanan Temple was ponding after rainfall. To solve this problem, the soil must be characterized, in particular its permeability coefficient (k) in both saturated and unsaturated conditions. A more accurate and efficient field testing procedure is required to obtain permeability data that represent field conditions. One field method for determining the permeability coefficient is the constant discharge procedure. The procedure requires adjustments, especially to the value of the geometric factor (F), to improve the resulting permeability coefficient. The value of k is then correlated with the volumetric water content (θ) from unsaturated up to saturated conditions. In the constant discharge model, a constant flow is supplied to a permeameter tube and infiltrates into the ground until the water level in the tube becomes constant. This constant water level is highly dependent on the tube dimensions: every tube geometry has a shape factor, called the geometric factor, that affects the test result. The geometric factor is defined by the shape and radius of the tube. In this research, the geometric factor parameters were modified using an empty material tube method, so that the geometric factor changes. The saturation level was monitored with a soil moisture sensor, and the field test results were compared with laboratory tests for validation. Field and laboratory results for the empty tube material method differ by an average of 3.33 × 10⁻⁴ cm/s. The test results showed that the modified geometric factor provides more accurate data, and the improved constant discharge procedure yields more relevant results.
Keywords: constant discharge, geometric factor, permeability coefficient, unsaturated soils
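The role of the geometric factor can be illustrated with the standard constant-head relation k = Q / (F · H), where Q is the steady discharge, H the constant head, and F the shape factor of the tube. The formula is a textbook relation and the sample values below are invented for illustration; they are not the paper's calibrated model.

```python
# Hedged sketch: how the geometric (shape) factor F scales the computed
# permeability coefficient in a constant-discharge test. Values are
# illustrative assumptions, not data from the study.

def permeability(Q_cm3_per_s, F_cm, H_cm):
    """Permeability coefficient k (cm/s) from a constant-head relation k = Q/(F*H)."""
    return Q_cm3_per_s / (F_cm * H_cm)

# Same discharge and head, two candidate geometric factors:
Q, H = 0.05, 30.0           # cm^3/s, cm
for F in (25.0, 50.0):      # cm; depends on tube shape and radius
    print(F, permeability(Q, F, H))
```

Because k is inversely proportional to F, a poorly chosen geometric factor biases every permeability estimate by the same multiplicative error, which is why the study calibrates F against laboratory results.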
Procedia PDF Downloads 294
2300 Studying Growth as a Pursuit of Disseminating Social Impact: A Conceptual Study
Authors: Saila Tykkyläinen
Abstract:
The purpose of this study is to pave the way for a more focused accumulation of knowledge on social enterprise growth. The body of research touching upon the phenomenon is somewhat fragmented. In an effort to create a solid common ground, this study draws from the theoretical starting points and guidelines developed within small firm growth research. By analyzing their use in the social enterprise growth literature, the study offers insights into whether proven theories and concepts from the small firm context could be applied more systematically when investigating the growth of social enterprises. Towards this end, the main findings from social enterprise growth research are classified under the three research streams on growth: one focuses on factors of growth, another investigates growth as a process, and the third is interested in outcomes of growth. During the analysis, special attention is paid to how the social mission of the company and the pursuit of augmenting its social impact are dealt with in those lines of research. The next step is to scrutinize and discuss some of the central building blocks of growth research, namely the unit of analysis, the conceptualization of a firm, and the operationalization of growth, in relation to social enterprise studies. It appears that the social enterprise growth literature stresses the significance of the 'social' element both as a main driver and as a principal outcome of growth. As for the growth process, this emphasis is manifested in a special interest in strategies and models tailored to disseminate social impact beyond organizational limits. Consequently, this study promotes more frequent use of the business activity as the unit of analysis in the social enterprise context. Most of the time, it is their products, services, or programs with which social enterprises and entrepreneurs aim to create impact; thus the focus should be placed on activities rather than on organizations.
The study also seeks to contribute back to small firm growth research. Even though the recommendation to treat business activities as a unit of analysis stems from there, it is all too rarely used. Social entrepreneurship makes a good case for testing and developing the approach further.
Keywords: conceptual study, growth, scaling, social enterprise
Procedia PDF Downloads 315
2299 Experimental and Numerical Analysis of the Effects of Ball-End Milling Process upon Residual Stresses and Cutting Forces
Authors: Belkacem Chebil Sonia, Bensalem Wacef
Abstract:
The majority of ball-end milling models include only the influence of cutting parameters (cutting speed, feed rate, depth of cut), and most works study this influence on cutting force alone. This study therefore proposes an accurate model of the ball-end milling process that also includes the influence of tool-workpiece inclination. In addition, the residual stresses resulting from the thermo-mechanical loading of the workpiece are characterized, and the influence of tool-workpiece inclination and cutting parameters on the residual stress distribution is studied. To predict cutting forces and residual stresses during a milling operation, a thermo-mechanical three-dimensional numerical model of ball-end milling was developed. A companion campaign of experimental ball-end milling tests was carried out on a 5-axis machining center to measure the cutting forces and characterize the residual stresses. The simulation results are compared with the experiments to validate the finite element model and subsequently identify the optimum inclination angle and cutting parameters.
Keywords: ball end milling, cutting forces, cutting parameters, residual stress, tool-workpiece inclination
Procedia PDF Downloads 308
2298 Design and Implementation of Image Super-Resolution for Myocardial Image
Authors: M. V. Chidananda Murthy, M. Z. Kurian, H. S. Guruprasad
Abstract:
Super-resolution is the technique of intelligently upscaling images while avoiding artifacts or blurring, and it deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution obtains a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality for scaled-down images in the image domain, its effect on Fourier-based techniques remains unknown. Super-resolution substantially improves the spatial resolution of patient LGE images by sharpening the edges of the heart and the scar. This paper investigates the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. First, a training phase pairs low-resolution and high-resolution images to obtain a dictionary. In the test phase, patches are extracted and the difference between the high-resolution image and an interpolated version of the low-resolution image is computed. A simulated image is then obtained by convolving the dictionary image with the extracted patches. Finally, the super-resolved image is obtained by combining the fused image with the difference between the high-resolution and interpolated images. Super-resolution reduces image errors and improves image quality.
Keywords: image dictionary creation, image super-resolution, LGE images, patch extraction
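The reconstruction step at the heart of this kind of pipeline, an interpolated upscaling plus a predicted high-frequency difference, can be sketched on a toy 1-D signal. The linear interpolation and the hand-written difference values below stand in for the paper's bicubic 2-D interpolation and dictionary lookup; they are assumptions for illustration only.

```python
# Hedged sketch of dictionary-style super-resolution: output =
# interpolated low-resolution signal + predicted high-frequency residual.
# 1-D linear interpolation and the toy residual are assumptions.

def upscale_linear(signal, factor):
    """1-D linear interpolation upscaling (stand-in for 2-D bicubic)."""
    out = []
    for i in range(len(signal) - 1):
        for j in range(factor):
            t = j / factor
            out.append((1 - t) * signal[i] + t * signal[i + 1])
    out.append(signal[-1])
    return out

def reconstruct(low_res, predicted_difference, factor=2):
    """Super-resolved signal = interpolation + predicted high frequencies."""
    base = upscale_linear(low_res, factor)
    return [b + d for b, d in zip(base, predicted_difference)]

low = [0.0, 2.0, 4.0]
diff = [0.0, 0.5, 0.0, -0.5, 0.0]  # e.g. looked up from the learned dictionary
print(reconstruct(low, diff))  # → [0.0, 1.5, 2.0, 2.5, 4.0]
```

The interpolation alone would give the smooth ramp [0, 1, 2, 3, 4]; the learned residual restores the sharp detail that plain interpolation blurs away, which is the edge-sharpening effect described for the LGE images.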
Procedia PDF Downloads 375
2297 Preparation and Characterization of Calcium Phosphate Cement
Authors: W. Thepsuwan, N. Monmaturapoj
Abstract:
Calcium phosphate cement (CPC) is one of the most attractive bioceramics due to its moldability and its ability to fill complicated bony cavities or small dental defects. In this study, CPCs were produced from equimolar (1/1) mixtures of tetracalcium phosphate (TTCP, Ca4O(PO4)2) and dicalcium phosphate anhydrous (DCPA, CaHPO4) with aqueous solutions of acetic acid (C2H4O2) or disodium hydrogen phosphate dihydrate (Na2HPO4.2H2O), combined with sodium alginate to improve moldability. The concentrations of the aqueous solutions and of sodium alginate were varied to investigate their effects on the properties of the cements. The cement paste was prepared by mixing cement powder (P) with aqueous solution (L) at a P/L ratio of 1.0 g / 0.35 ml. X-ray diffraction (XRD) was used to analyze the phase formation of the cements, and the setting times and compressive strength of the set CPCs were measured using a Gilmore apparatus and a universal testing machine, respectively. The results showed that CPCs could be produced with both the basic (Na2HPO4.2H2O) and the acidic (C2H4O2) solution. XRD showed the precipitation of hydroxyapatite in all cement samples, with no change in phase formation among cements using different concentrations of the Na2HPO4.2H2O solution. With increasing concentration of the acidic solution, the samples contained less hydroxyapatite and more dicalcium phosphate dihydrate, which led to a shorter setting time. Samples with sodium alginate exhibited higher crystallization of hydroxyapatite than those without, resulting in a shorter setting time in the basic solution but a longer setting time in the acidic solution. The strongest cement was attained from samples using the acidic solution with sodium alginate; however, its strength was still lower than that obtained with the basic solution.
Keywords: calcium phosphate cements, TTCP, DCPA, hydroxyapatite, properties
Procedia PDF Downloads 390
2296 Effect of Tilt Angle of Herringbone Microstructures on Enhancement of Heat and Mass Transfer
Authors: Nathan Estrada, Fangjun Shu, Yanxing Wang
Abstract:
The heat and mass transfer characteristics of a simple shear flow over a surface covered with staggered herringbone structures are numerically investigated using the lattice Boltzmann method. The focus is on the effect of the ridge angle of the structures on the enhancement of heat and mass transfer. In the simulation, temperature and mass concentration are modeled as a passive scalar released from the moving top wall and absorbed at the structured bottom wall. The Reynolds number is fixed at 100, and two Prandtl or Schmidt numbers, 1 and 10, are considered. The results show that advective scalar transport plays a more important role at larger Schmidt numbers. Fluid with higher scalar concentration travels downward into the grooves at the backward groove tips, and fluid with lower scalar concentration travels upward at the forward groove tips. Different tilt angles result in different flow advection in the wall-normal direction and thus different heat and mass transport efficiencies. The maximum enhancement is achieved at an angle between 15° and 30°. The mechanism of heat and mass transfer is analyzed in detail.
Keywords: fluid mechanics, heat and mass transfer, microfluidics, staggered herringbone mixer
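The observation that advection matters more at the larger Schmidt number follows from the textbook Peclet number relation Pe = Re · Sc (the ratio of advective to diffusive scalar transport). This is a standard dimensional-analysis relation, not the paper's own analysis:

```python
# Textbook relation: at fixed Re = 100, raising Sc from 1 to 10
# raises the mass-transfer Peclet number tenfold, so advective
# transport dominates diffusion more strongly.

def peclet(reynolds, schmidt):
    """Mass-transfer Peclet number Pe = Re * Sc."""
    return reynolds * schmidt

Re = 100
for Sc in (1, 10):
    print(Sc, peclet(Re, Sc))  # → 1 100, then 10 1000
```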
Procedia PDF Downloads 112
2295 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on the Resource-Constrained Project Scheduling Problem (RCPSP) and necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions, i.e., modular and computationally efficient methods that produce feasible schedules. To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, task, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can easily be integrated with other optimization problems, with existing industrial tools, and with the unique constraints required by a use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the logical relationships between tasks, also integrates several project-specific constraints for outage management, such as handling resource incompatibilities, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effectiveness matches the nature of the problem, where several scenarios (30-40 simulations) are run before the schedules are finalized.
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This raises the challenge of merging job shop scheduling with the RCPSP under location constraints. Our solution allows the automation of the production tasks while considering the expected production rate; the simulation algorithm manages the use and movement of resources and products so as to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex, hard constraints, such as Finish-Start precedence relationships (successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and the requirement that "cyclic" resources be in a specific state to perform tasks (each resource can take several possible states, only one at a time, and a task can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the number of state changes of cyclic resources coupled with makespan minimization, and it handles 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast, feasible, modular approach to various industrial scheduling problems that has been validated by domain experts and is compatible with existing industrial tools. This approach can be further enhanced by applying machine learning techniques to historically repeated tasks to gain insights for delay-risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
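The greedy time-stepping heuristic described above can be sketched in miniature: at each time step, tasks whose predecessors are done are scored by a dynamic cost function and started if the resource capacity allows. The scoring rule (prefer longer tasks) and the single renewable resource are assumptions for illustration; the industrial solution handles many more constraint types.

```python
# Hedged sketch of a greedy RCPSP heuristic with a per-step situational
# assessment. The "longest task first" score is an illustrative stand-in
# for the paper's dynamic cost function.

def greedy_schedule(tasks, capacity):
    """tasks: {name: (duration, demand, predecessors)}; one renewable resource."""
    t, running, done, start = 0, {}, set(), {}
    while len(done) < len(tasks):
        # retire tasks that finished by time t
        for name in [n for n, end in running.items() if end <= t]:
            done.add(name)
            del running[name]
        used = sum(tasks[n][1] for n in running)
        # situational assessment: which tasks are ready right now?
        ready = [n for n in tasks
                 if n not in done and n not in running and tasks[n][2] <= done]
        # dynamic score: prefer longer tasks at this step
        for name in sorted(ready, key=lambda n: -tasks[n][0]):
            dur, demand, _ = tasks[name]
            if used + demand <= capacity:
                start[name], running[name] = t, t + dur
                used += demand
        t += 1
    makespan = max(start[n] + tasks[n][0] for n in tasks)
    return start, makespan

tasks = {"a": (3, 2, set()), "b": (2, 2, set()), "c": (2, 1, {"a"})}
print(greedy_schedule(tasks, capacity=3))  # → ({'a': 0, 'b': 3, 'c': 3}, 5)
```

Because every decision is local to one time step, the heuristic scales to tens of thousands of tasks; the price is that it offers no optimality guarantee, which is acceptable when many scenarios must be simulated quickly.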
Procedia PDF Downloads 199