Search results for: large eddy simulation
3900 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs
Authors: Muhammad Yasir Wadood, Fatemeh Babaeian
Abstract:
With the development of ultra-wideband (UWB) systems, there is a high demand for UWB filters with low insertion loss, wide bandwidth, and a planar structure that is compatible with the other components of the UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure of the filter. Especially in the higher frequency band, any misalignment of the drilled via hole with the microstrip stubs causes large errors in the measurement results compared to the desired results. Moreover, in such high-frequency designs, the line widths of the stubs are very narrow, so highly precise small via holes are required, which increases the fabrication cost significantly and introduces a risk of fabrication errors. To combat this issue, this paper proposes a via-less UWB microstrip filter designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are 1) replacement of each via hole with a quarter-wavelength open-circuit stub to avoid the complexity of manufacturing, 2) use of a bend structure to reduce unwanted coupling effects, and 3) minimisation of the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) is designed and fabricated, and the promising simulation and measurement results are presented in this paper. The selected substrate for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in industrial projects. The compact size of the proposed filter is highly beneficial for applications that require very miniature hardware.
Keywords: Band-pass filters, inter-digital filter, microstrip, via-less.
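As a rough illustration of the via-replacement idea, the sketch below estimates the physical length of a quarter-wavelength open-circuit stub. The mid-band frequency is taken as the centre of the reported 3.9-6.6 GHz passband, and the effective permittivity is an assumed value for a microstrip line on RO4003; the paper's actual stub dimensions would come from full-wave simulation.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_stub_length(f_hz: float, eps_eff: float) -> float:
    """Physical length of a quarter-wavelength stub at frequency f_hz
    on a line with effective relative permittivity eps_eff."""
    guided_wavelength = C0 / (f_hz * math.sqrt(eps_eff))
    return guided_wavelength / 4.0

# Assumed values (not taken from the paper): mid-band frequency of the
# 3.9-6.6 GHz passband and a rough effective permittivity for a microstrip
# line on RO4003 (nominal eps_r around 3.55, eps_eff somewhat lower).
f_mid = 0.5 * (3.9e9 + 6.6e9)   # 5.25 GHz
eps_eff_guess = 2.8

length_m = quarter_wave_stub_length(f_mid, eps_eff_guess)
print(f"Quarter-wave open-circuit stub at {f_mid/1e9:.2f} GHz: "
      f"{length_m*1e3:.2f} mm (eps_eff = {eps_eff_guess})")
```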
3899 Analysis of Motor Cycle Helmet under Static and Dynamic Loading
Authors: V. C. Sathish Gandhi, R. Kumaravelan, S. Ramesh, M. Venkatesan, M. Ponraj
Abstract:
Each year, nearly nine hundred people die of head injuries and over fifty thousand are severely injured because they were not wearing helmets. In motorcycle accidents, the human head is exposed to heavy impact loading beyond its natural protection. In this work, an attempt has been made to analyze the helmet using all the standard data. The simulation software ANSYS is used to analyze the helmet under different conditions: bottom fixed with load on the top surface, bottom fixed with load on the top line, side fixed with load on the opposite surface, side fixed with load on the opposite line, and dynamic analysis. A maximum force of 19.5 kN is applied to the helmet to study the model under static and dynamic conditions. The static simulation has been carried out for parameters such as total deformation, strain energy, and von Mises stress for the different cases. The dynamic analysis has been performed for total deformation and equivalent elastic strain. The results show that these values are concentrated in the retention portion of the helmet. They have been compared with the standard experimental data proposed by the BIS and are well within the acceptable limits.
Keywords: Helmet, Deformation, Strain energy, Equivalent elastic strain.
3898 Kinetic Theory Based CFD Modeling of Particulate Flows in Horizontal Pipes
Authors: Pandaba Patro, Brundaban Patro
Abstract:
The numerical simulation of fully developed gas-solid flow in a horizontal pipe is performed using the Eulerian-Eulerian approach, also known as two-fluid modeling, in which both phases are treated as interpenetrating continua. The solid-phase stresses are modeled using the kinetic theory of granular flow (KTGF). The computed velocity profiles and pressure drop are compared with experimental data. We observe that the convection and diffusion terms in the granular temperature equation cannot be neglected in the simulation of gas-solid flow along a horizontal pipe. Particle-wall collisions and lift also play an important role in Eulerian modeling. We also investigated the effect of flow parameters such as gas velocity, particle properties, and particle loading on the predicted pressure drop in different pipe diameters. The pressure drop increases with gas velocity and particle loading. The gas velocity has the same effect (proportional to U²) on the predicted pressure drop as in single-phase flow. With respect to particle diameter, the pressure drop first increases, reaches a peak, and then decreases; the peak is a strong function of the pipe bore.
Keywords: CFD, Eulerian modeling, gas solid flow, KTGF.
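The U² dependence mentioned above is the familiar single-phase scaling of the Darcy-Weisbach pressure gradient. The minimal sketch below only illustrates that scaling with an assumed constant friction factor, air density, and pipe bore; it is not the two-fluid KTGF model of the paper.

```python
def pressure_drop_per_meter(rho, U, D, f=0.02):
    """Darcy-Weisbach pressure gradient dP/dx = f * rho * U**2 / (2*D);
    with an approximately constant friction factor f it scales with U**2."""
    return f * rho * U ** 2 / (2.0 * D)

# Assumed air density and pipe bore (illustrative only, not the paper's cases).
rho_air, D = 1.2, 0.05
for U in (5.0, 10.0, 20.0):
    print(f"U = {U:5.1f} m/s  ->  dP/dx = "
          f"{pressure_drop_per_meter(rho_air, U, D):7.1f} Pa/m")
```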
3897 Correlation between Capacitance and Dissipation Factor used for Assessment of Stator Insulation
Authors: José Luis Oslinger, Luis Carlos Castro
Abstract:
Measurements of the capacitance C and dissipation factor tan δ of the stator insulation system provide useful information about internal defects within the insulation. The index k is defined as the proportionality constant between the high-voltage changes of capacitance (ΔC) and of dissipation factor (Δtan δ). ΔC and Δtan δ values were highly correlated when small flat defects were present within the insulation, and that correlation was lost in the presence of large narrow defects such as electrical treeing. The discrimination between small and large defects is made by resorting to partial discharge (PD) phase-angle analysis. For validation of the results, C and tan δ measurements were carried out on a 15 MVA, 4160 V steam-turbine turbogenerator installed in a sugar mill. In addition, laboratory test results obtained by other authors were analyzed jointly. In those laboratory tests, model coil bars subjected to thermal cycling became highly degraded, the ΔC and Δtan δ values were not correlated, and thus the index k could not be calculated.
Keywords: Aging, capacitance, dissipation factor, electrical treeing, insulation condition, partial discharge.
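A minimal sketch of how the index k could be extracted as the slope of a least-squares fit between the ΔC and Δtan δ series, with the correlation coefficient used to flag the large-defect case in which k is not computed. The measurement values and the correlation threshold are made-up assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical high-voltage measurement series (not from the paper):
# change of capacitance (pF) and of dissipation factor at each voltage step.
delta_C    = np.array([0.5, 1.1, 1.6, 2.2, 2.9])                 # pF
delta_tand = np.array([0.8e-3, 1.7e-3, 2.5e-3, 3.3e-3, 4.4e-3])

r = np.corrcoef(delta_C, delta_tand)[0, 1]    # correlation between the series
if abs(r) > 0.9:                              # threshold is an assumption
    k, intercept = np.polyfit(delta_C, delta_tand, 1)
    print(f"correlated (r = {r:.3f}), k = {k:.3e} per pF")
else:
    print(f"weak correlation (r = {r:.3f}); k not computed "
          f"(large narrow defect such as treeing suspected)")
```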
3896 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4
Authors: F. Abubaker, F. Tortorici, M. Capogni, C. Sutera, V. Bellini
Abstract:
This project concerns the detection efficiency of the portable Triple-to-Double Coincidence Ratio (TDCR) system at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure beta-emitting or pure electron-capture radionuclides. The dependence of the detection efficiency of the TDCR, simulated with the Geant4 Monte Carlo code, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitting radionuclides such as ³H and ¹⁴C, for which this dependence is relevant. The results achieved in this analysis can be used for selecting the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with the available ones measured by the ENEA portable TDCR detector for some pure beta-emitting radionuclides. This analysis improved the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits in many applications in the field of nuclear medicine and in the nuclear energy industry.
Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter.
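The Birks factor enters through the ionization-quenching law dL/dE = S / (1 + kB·dE/dx), which is why the simulated efficiency for low-energy beta emitters is sensitive to kB. The sketch below only integrates that law over a toy stopping-power curve to show the trend with kB; the stopping-power model and the kB values are illustrative assumptions, not the Geant4 simulation of the paper.

```python
import numpy as np

def birks_light_yield(E_keV, dEdx_func, kB, S=1.0, n=2000):
    """Total scintillation light from an electron of initial energy E_keV,
    from Birks' law dL/dE = S / (1 + kB * dE/dx), integrated over energy."""
    E = np.linspace(1e-3, E_keV, n)
    dEdx = dEdx_func(E)                      # keV per (mg/cm^2), toy model
    dLdE = S / (1.0 + kB * dEdx)
    return np.trapz(dLdE, E)

# Toy stopping-power model (decreasing with energy); not real tabulated data.
toy_dEdx = lambda E: 10.0 / (E + 1.0) + 0.2

for kB in (0.0, 0.0075, 0.012):              # kB in (mg/cm^2)/keV, illustrative
    L = birks_light_yield(18.6, toy_dEdx, kB)   # 18.6 keV: 3H beta endpoint
    print(f"kB = {kB:.4f}: relative light output = {L:.2f}")
```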
3895 Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm
Authors: Jehad A. H. Hammad, Nur'Aini binti Abdul Rashid
Abstract:
With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been categorized into three groups: length-based index algorithms, transformation-based algorithms, and mixed-technique algorithms. In this research, we focus on the transformation-based methods. We embed the N-gram method into the transformation-based method to build an inverted index table. We then apply parallel methods to speed up the index-building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the N-gram size is 5 or 6. The results of the parallel N-gram transformation algorithm indicate that the use of parallel programming with large datasets is promising and can be improved further.
Keywords: Biological sequence, Database index, N-gram indexing, Parallel computing, Sequence retrieval.
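A minimal, serial sketch of the core idea: an inverted index that maps every length-n substring (n-gram) of the protein sequences to its occurrences, so that a query fragment can be answered by n-gram lookup instead of scanning the collection. The toy sequences are illustrative, and the paper's parallel construction is not reproduced here.

```python
from collections import defaultdict

def build_ngram_index(sequences, n=5):
    """Inverted index: each length-n substring of a protein sequence maps
    to the list of (sequence id, position) occurrences."""
    index = defaultdict(list)
    for seq_id, seq in sequences.items():
        for pos in range(len(seq) - n + 1):
            index[seq[pos:pos + n]].append((seq_id, pos))
    return index

def query(index, fragment, n=5):
    """Candidate sequence ids sharing at least one n-gram with the fragment."""
    hits = set()
    for pos in range(len(fragment) - n + 1):
        for seq_id, _ in index.get(fragment[pos:pos + n], ()):
            hits.add(seq_id)
    return hits

# Toy protein sequences (illustrative, not a real genomic database).
db = {"P1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
      "P2": "MKVLAYIAKQRHFSRQLEGGLIEVQAPILSRVG"}
idx = build_ngram_index(db, n=5)
print(query(idx, "YIAKQRQ", n=5))
```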
3894 The Impacts of Local Decision Making on Customisation Process Speed across Distributed Boundaries: A Case Study
Authors: A. M. Qahtani, G. B. Wills, A. M. Gravell
Abstract:
Communicating and managing customers' requirements in software development projects plays a vital role in the software development process. While this is difficult to do locally, it is even more difficult to communicate these requirements across distributed boundaries and to convey them to multiple distributed customers. This paper discusses the communication of multiple distributed customers' requirements in the context of customised software products. The main purpose is to understand the challenges of communicating and managing customisation requirements across distributed boundaries. We propose a model for Communicating Customisation Requirements of Multi-Clients in a Distributed Domain (CCRD). We then evaluate that model by presenting the findings of a case study conducted with a company running customisation projects for 18 distributed customers, and compare the outputs of the real case process with the outputs of the CCRD model using simulation methods. Our conjecture is that the CCRD model can reduce the challenge of communicating requirements across distributed organisational boundaries, as well as the delays in decision making and in the overall customisation process.
Keywords: Customisation Software Products, Global Software Engineering, Local Decision Making, Requirement Engineering, Simulation Model.
3893 Towards an Enhanced Stochastic Simulation Model for Risk Analysis in Highway Construction
Authors: Anshu Manik, William G. Buttlar, Kasthurirangan Gopalakrishnan
Abstract:
Over the years, there has been a growing trend towards quality-based specifications in highway construction. In many Quality Control/Quality Assurance (QC/QA) specifications, the contractor is primarily responsible for quality control of the process, whereas the highway agency is responsible for acceptance testing of the product. A cooperative investigation was conducted in Illinois over several years to develop a prototype End-Result Specification (ERS) for asphalt pavement construction. The final characteristics of the product are stipulated in the ERS, and the contractor is given considerable freedom in achieving those characteristics. The risk for the contractor or agency depends on how the acceptance limits and processes are specified. Stochastic simulation models are very useful in estimating and analyzing payment risk in ERS systems, and they form an integral part of Illinois' prototype ERS system. This paper describes the development of an innovative methodology to estimate the variability components in in-situ density, air voids, and asphalt content data from ERS projects. The information gained from this is crucial for simulating ERS projects to estimate and analyze the payment risks associated with asphalt pavement construction. However, these methods require at least two parties to conduct tests on all the split samples obtained according to the sampling scheme prescribed in the present ERS implemented in Illinois.
Keywords: Asphalt pavement, risk analysis, stochastic simulation, QC/QA.
3892 The Influence of Zeolitic Spent Refinery Admixture on the Rheological and Technological Properties of Steel Fiber Reinforced Self-Compacting Concrete
Authors: Ž. Rudžionis, P. Grigaliūnas, D. Vaičiukynienė
Abstract:
In planning this experimental work to investigate the effect of zeolitic waste on the rheological and technological properties of self-compacting fiber reinforced concrete, we intended to draw attention to the environmental factor. Large amounts of zeolitic waste, as a secondary raw material, are not used properly, and much of it is collected without a clear view of its future usage. The principal aim of this work is to show, by experimental research methods, that the zeolitic waste admixture has a positive effect on the stability, flowability, and other properties of self-compacting fiber reinforced concrete mixes. In addition, research on cement and zeolitic waste mortars was carried out to clarify the effect of the zeolitic waste on the properties of the cement paste and stone. Preliminary studies indicate that the zeolitic waste exhibits clear pozzolanic behavior, does not deteriorate the concrete properties, and in some cases ensures positive rheological and mechanical characteristics of self-compacting concrete mixes.
Keywords: Self-compacting concrete, steel fiber reinforced concrete, zeolitic waste, rheological properties of concrete, slump flow.
3891 Three-Dimensional Simulation of Free Electron Laser with Prebunching and Efficiency Enhancement
Authors: M. Chitsazi, B. Maraghechi, M. H. Rouhani
Abstract:
A three-dimensional simulation of harmonic up-generation in a free electron laser amplifier operating simultaneously with a cold and relativistic electron beam is presented in the steady-state regime, where the slippage of the electromagnetic wave with respect to the electron beam is ignored. By using the slowly varying envelope approximation and applying the source-dependent expansion to the wave equations, the electromagnetic fields are represented in terms of Hermite-Gaussian modes, which are well suited to the planar wiggler configuration. The electron dynamics is described by the fully three-dimensional Lorentz force equation in the presence of the realistic planar magnetostatic wiggler and the electromagnetic fields. A set of coupled nonlinear first-order differential equations is derived and solved numerically. The fundamental and third-harmonic radiation of the beam is considered. In addition to a uniform beam, a prebunched electron beam has also been studied: the effect of a sinusoidal distribution of electron entry times on the evolution of the radiation is compared with that of a uniform distribution. It is shown that prebunching reduces the saturation length substantially. For efficiency enhancement, the wiggler amplitude is set to decrease linearly when the third-harmonic radiation saturates. The optimum starting point and slope of the wiggler-amplitude tapering are found by successive runs of the code.
Keywords: Free electron laser, Prebunching, Undulator, Wiggler.
3890 Web Search Engine Based Naming Procedure for Independent Topic
Authors: Takahiro Nishigaki, Takashi Onoda
Abstract:
In recent years, the amount of document data has been increasing with the spread of the Internet, and many methods have been studied for extracting topics from large document collections. We previously proposed Independent Topic Analysis (ITA), a method that uses Independent Component Analysis to extract topics that are independent of each other from large document data such as newspaper articles. A topic extracted by ITA is represented by a set of words. However, such a set of words can be quite different from the topic the user imagines. For example, the top five words with high independence of one topic are: Topic1 = {"scor", "game", "lead", "quarter", "rebound"}. This topic is considered to represent "SPORTS", but the topic name "SPORTS" has to be attached by the user, since ITA cannot name topics. Therefore, in this research, we propose a method to obtain topic names that are easy for people to understand by applying a web search engine to the sets of words given by Independent Topic Analysis. In particular, we search for a set of topical words and take the title of the homepage in the search results as the topic name. We also apply the proposed method to real data and verify its effectiveness.
Keywords: Independent topic analysis, topic extraction, topic naming, web search engine.
3889 Dam Operation Management Criteria during Floods: Case Study of Dez Dam in Southwest Iran
Authors: Ali Heidari
Abstract:
This paper presents principles for improving flood mitigation operation in multipurpose dams and maximizing reservoir performance during flood occurrence, with a focus on the real-time operation of gated spillways. The operation criteria include the safety of the dam during flood management, minimizing the downstream flood risk by decreasing the flood hazard, and fulfilling water supply and the other purposes of dam operation over mid- and long-term horizons. The parameters deemed important include flood inflow, outlet capacity restrictions, downstream flood inundation damages, the economic revenue of dam operation, and environmental and sedimentation restrictions. A simulation model was used to determine the real-time release of the Dez Dam, located on the Dez River in southwest Iran, considering the gate regulation curves for the gated spillway. The results of the simulation model show that there is room to improve the current procedures used in the real-time operation of the dam, particularly by using gate regulation curves and the results of an early flood forecasting system. The Dez Dam operation data show that in one of the best flood control records, 17% of the total active volume and flood control pool of the reservoir was not used to decrease the downstream flood hazard, despite the availability of a flood forecasting system.
Keywords: Dam operation, flood control criteria, Dez Dam, Iran.
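The bookkeeping behind any real-time release simulation of this kind is a level-pool mass balance, S_new = S + (I - O)·Δt, with the release constrained by outlet capacity and a downstream safe discharge. The sketch below is a heavily simplified illustration of that balance under an assumed release rule and made-up hydrograph and reservoir constants; it does not reproduce the Dez Dam gate regulation curves.

```python
def route_flood(inflow, S0, S_max, Q_outlet_max, Q_safe, dt=3600.0):
    """Level-pool mass-balance routing with a simple release rule:
    pass the inflow up to the downstream safe discharge, and exceed it
    only when the reservoir would otherwise overtop."""
    S, releases = S0, []
    for I in inflow:                              # inflow in m^3/s per step
        Q = min(I, Q_safe, Q_outlet_max)          # target release
        S_next = S + (I - Q) * dt
        if S_next > S_max:                        # avoid overtopping
            Q = min(Q_outlet_max, Q + (S_next - S_max) / dt)
            S_next = S + (I - Q) * dt
        S = max(S_next, 0.0)
        releases.append(round(Q, 1))
    return releases

# Illustrative flood hydrograph and reservoir constants (not Dez Dam data).
hydrograph = [500, 1500, 3500, 5000, 4000, 2500, 1200, 800]
print(route_flood(hydrograph, S0=2.0e9, S_max=2.2e9,
                  Q_outlet_max=6000, Q_safe=2000))
```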
3888 Groundwater Contamination due to Bhalaswa Landfill Site in New Delhi
Authors: Bharat Jhamnani, S. K. Singh
Abstract:
Sampling and analysis of leachate from the Bhalaswa landfill and of groundwater samples from nearby locations clearly indicated the likely contamination of groundwater by landfill leachate. The results of simulation studies carried out for the migration of chloride from the landfill show that the simulated values are in consonance with the observed chloride concentrations in the vicinity of the landfill facility. The solid waste disposal system presently practiced in Delhi consists of merely dumping the wastes generated at three locations (Bhalaswa, Ghazipur, and Okhla) without proper care for the protection of the surrounding environment. The Bhalaswa landfill site in Delhi, which is operated as a dump site, is expected to become a cause of serious groundwater pollution in its vicinity. The leachate from the Bhalaswa landfill was found to have high concentrations of chlorides, as well as of DOC and COD. The present study was undertaken to determine the likely concentrations of the principal contaminants in the groundwater over a period of time due to the discharge of such contaminants from landfill leachate to the underlying groundwater. The observed concentration of chlorides in the groundwater within a 75 m radius of the landfill facility was found to be in consonance with the simulated chloride concentration obtained from a one-dimensional transport model with a finite mass of contaminant source. The governing equation of contaminant transport, involving advection and diffusion-dispersion, was solved in MATLAB 7.0 using the finite difference method.
Keywords: Groundwater, landfill, leachate, solid waste.
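A minimal explicit finite-difference sketch of the 1-D advection-dispersion equation, dC/dt = D d²C/dx² - v dC/dx, of the kind used for the chloride migration. The seepage velocity, dispersion coefficient, grid, and finite-mass source below are illustrative assumptions rather than the study's calibrated values, and the original work was done in MATLAB rather than Python.

```python
import numpy as np

def advect_disperse(C0, v, D, dx, dt, n_steps):
    """Explicit upwind finite-difference solution of
    dC/dt = D d2C/dx2 - v dC/dx (1-D advection-dispersion, v > 0)."""
    assert D * dt / dx**2 <= 0.5 and v * dt / dx <= 1.0, "stability limits"
    C = C0.copy()
    for _ in range(n_steps):
        diff = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
        adv  = -v * (C - np.roll(C, 1)) / dx          # upwind difference
        C = C + dt * (diff + adv)
        C[0], C[-1] = 0.0, 0.0                        # far-field boundaries
    return C

# Illustrative parameters (metres and days), not the Bhalaswa values.
nx, dx, dt = 200, 1.0, 0.05
C = np.zeros(nx); C[5] = 1000.0                       # finite-mass chloride slug
profile = advect_disperse(C, v=0.2, D=0.5, dx=dx, dt=dt, n_steps=2000)
print(f"peak at x = {profile.argmax()*dx:.0f} m, C_max = {profile.max():.1f}")
```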
3887 Scaling up Detection Rates and Reducing False Positives in Intrusion Detection using NBTree
Authors: Dewan Md. Farid, Nguyen Huu Hoa, Jerome Darmont, Nouria Harbi, Mohammad Zahidur Rahman
Abstract:
In this paper, we present a new learning algorithm for anomaly-based network intrusion detection using an improved self-adaptive naïve Bayesian tree (NBTree), which induces a hybrid of a decision tree and naïve Bayesian classifiers. The proposed approach scales up balanced detection for different attack types and keeps false positives at an acceptable level in intrusion detection. On complex and dynamic large intrusion detection datasets, the detection accuracy of the naïve Bayesian classifier does not scale up as well as that of a decision tree. It has been shown in other problem domains that the naïve Bayesian tree improves classification rates on large datasets. In a naïve Bayesian tree, the internal nodes split the data as in a regular decision tree, but the leaves contain naïve Bayesian classifiers. The experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that this new approach scales up the detection rates for different attack types and reduces false positives in network intrusion detection.
Keywords: Detection rates, false positives, network intrusion detection, naïve Bayesian tree.
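The two figures the abstract reports, detection rate and false positives, are simple functions of the per-class confusion counts. A small sketch with hypothetical counts (not the paper's KDD99 results) is shown below.

```python
def detection_metrics(tp, fn, fp, tn):
    """Detection rate (recall on attack connections) and false positive rate
    (normal connections wrongly flagged), as used in KDD99-style evaluation."""
    detection_rate = tp / (tp + fn)
    false_positive_rate = fp / (fp + tn)
    return detection_rate, false_positive_rate

# Hypothetical counts for one attack class (illustrative only).
dr, fpr = detection_metrics(tp=9500, fn=500, fp=120, tn=59880)
print(f"detection rate = {dr:.2%}, false positive rate = {fpr:.2%}")
```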
3886 Development of Map of Gridded Basin Flash Flood Potential Index: GBFFPI Map of QuangNam, QuangNgai, DaNang, Hue Provinces
Authors: Le Xuan Cau
Abstract:
Flash floods occur over short rainfall intervals, from 1 hour to 12 hours, in small and medium basins. Flash floods typically have two characteristics: a large water flow and a high flow velocity. A flash flood occurs at a hill valley site (a strip of lowland terrain) in a catchment with a large enough contributing area, a steep basin slope, and heavy rainfall. The risk of flash floods is determined through the Gridded Basin Flash Flood Potential Index (GBFFPI). The Flash Flood Potential Index (FFPI) is determined from a terrain slope flash flood index, a soil erosion flash flood index, a land cover flash flood index, a land use flash flood index, and a rainfall flash flood index. In determining the GBFFPI, each cell of the map is considered the outlet of a water accumulation basin, and the GBFFPI of the cell is the basin-averaged FFPI of the corresponding water accumulation basin. Based on GIS, a tool to compute the GBFFPI is developed using the ArcObjects SDK for .NET. The GBFFPI maps are built in two types: GBFFPI including the rainfall flash flood index (for real-time flash flood warning) or GBFFPI excluding the rainfall flash flood index. The GBFFPI tool can be used to identify high flash flood potential sites in a large region as quickly as possible. The GBFFPI improves on the conventional FFPI: its advantage is that it takes the basin response (the interaction of cells) into account and identifies true flash flood sites (strips of lowland terrain) more reliably, whereas the conventional FFPI considers each cell in isolation and does not account for the interaction between cells. The GBFFPI map of QuangNam, QuangNgai, DaNang, and Hue is built and exported to Google Earth, and the obtained map demonstrates the scientific basis of the GBFFPI.
Keywords: ArcObjects SDK for .NET, Basin average value of FFPI, Gridded basin flash flood potential index, GBFFPI map.
3885 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments with rules specified in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world datasets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived in the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
Keywords: Rule induction, decision table, missing data, noise.
3884 A Hybrid Nature Inspired Algorithm for Generating Optimal Query Plan
Authors: R. Gomathi, D. Sharmila
Abstract:
The use of Semantic Web technology grows day by day with the rapid growth in the number of web pages. Many standard formats are available for storing semantic web data, the most popular being the Resource Description Framework (RDF). Querying large RDF graphs becomes a tedious procedure as the amount of data increases greatly, and query optimization therefore becomes an issue: choosing the best query plan reduces the query execution time. To address this problem, nature-inspired algorithms can be used as an alternative to traditional query optimization techniques. In this research, the optimal query plan is generated by the proposed SAPSO algorithm, a hybrid of the Simulated Annealing (SA) and Particle Swarm Optimization (PSO) algorithms. The proposed SAPSO algorithm is able to find locally optimal results while avoiding the problem of becoming trapped in local minima. Experiments were performed on different datasets by changing the number of predicates and the amount of data. The proposed algorithm gives improved results compared to existing algorithms in terms of query execution time.
Keywords: Semantic web, RDF, Query optimization, Nature inspired algorithms, PSO, SA.
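To give a flavour of metaheuristic query-plan search, the sketch below applies only the simulated-annealing half of such a hybrid to the ordering of triple patterns, using a synthetic selectivity-based cost model. The selectivities, cost model, and annealing schedule are assumptions for illustration; this is not the paper's SAPSO algorithm.

```python
import math, random

random.seed(1)

# Synthetic per-predicate selectivities for a toy RDF query (assumed values).
selectivity = {"p1": 0.50, "p2": 0.05, "p3": 0.20, "p4": 0.01, "p5": 0.10}

def plan_cost(order, base_rows=1e6):
    """Toy cost model: sum of intermediate result sizes of a left-deep join order."""
    rows, cost = base_rows, 0.0
    for pred in order:
        rows *= selectivity[pred]
        cost += rows
    return cost

def simulated_annealing(preds, T0=1e5, cooling=0.95, iters=2000):
    current = preds[:]; random.shuffle(current)
    best, T = current[:], T0
    for _ in range(iters):
        cand = current[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]                 # swap two predicates
        delta = plan_cost(cand) - plan_cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand                                  # SA acceptance rule
            if plan_cost(current) < plan_cost(best):
                best = current[:]
        T *= cooling
    return best

order = simulated_annealing(list(selectivity))
print("chosen join order:", order, "cost =", round(plan_cost(order)))
```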
3883 Kinematic Hardening Parameters Identification with Respect to Objective Function
Authors: Marina Franulovic, Robert Basan, Bozidar Krizan
Abstract:
Constitutive modeling of material behavior is becoming increasingly important for the prediction of possible failures in highly loaded engineering components and, consequently, for the optimization of their design. In order to account for the large number of phenomena that occur in the material during operation, such as the kinematic hardening effect in the low cycle fatigue behavior of steels, complex nonlinear material models are used ever more frequently, despite the complexity of determining their parameters. As a method for determining these parameters, the genetic algorithm is a good choice because of its capability to provide a very good approximation of the solution in systems with a large number of unknown variables. For the application of the genetic algorithm to parameter identification, an inverse analysis must first be defined; it is used as a tool to fine-tune the calculated stress-strain values against the experimental ones. In order to choose a proper objective function for the inverse analysis from among existing and newly developed functions, research is performed to investigate the influence of the objective function on material behavior modeling.
Keywords: Genetic algorithm, kinematic hardening, material model, objective function.
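A minimal sketch of the inverse-analysis loop: a small genetic algorithm fits the parameters of a nonlinear hardening curve to stress-strain data by minimizing a sum-of-squared-errors objective. The Voce-type curve, the GA settings, and the synthetic "experimental" data are illustrative assumptions, not the paper's kinematic hardening model or objective functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def voce(eps, sigma0, Q, b):
    """Voce-type hardening curve sigma(eps) = sigma0 + Q*(1 - exp(-b*eps))."""
    return sigma0 + Q * (1.0 - np.exp(-b * eps))

# Synthetic "experimental" data generated from known parameters plus noise.
eps_exp = np.linspace(0.0, 0.05, 30)
sig_exp = voce(eps_exp, 300.0, 150.0, 80.0) + rng.normal(0.0, 2.0, eps_exp.size)

def objective(p):
    """Sum of squared errors between model and experimental stresses."""
    return np.sum((voce(eps_exp, *p) - sig_exp) ** 2)

def genetic_fit(bounds, pop_size=60, generations=200, mut_scale=0.1):
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fitness = np.array([objective(ind) for ind in pop])
        parents = pop[np.argsort(fitness)][: pop_size // 2]   # truncation selection
        a = parents[rng.integers(0, len(parents), pop_size)]
        b = parents[rng.integers(0, len(parents), pop_size)]
        w = rng.random((pop_size, len(bounds)))
        children = w * a + (1 - w) * b                        # blend crossover
        children += rng.normal(0.0, mut_scale * (hi - lo), children.shape)  # mutation
        pop = np.clip(children, lo, hi)
        pop[0] = parents[0]                                   # elitism
    return min(pop, key=objective)

best = genetic_fit([(100.0, 600.0), (10.0, 400.0), (1.0, 200.0)])
print("identified (sigma0, Q, b):", np.round(best, 2))
```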
3882 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants
Authors: Malinwo Estone Ayikpa
Abstract:
Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors, and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, the integration of PV on distribution networks has increased over the last years to the extent of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining the active power generation and reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by the primal-dual interior point method. Several simulation cases have been carried out by varying the size and location of the PV systems, and the results give a detailed view of the impact of distributed PV generation on distribution systems.
Keywords: Distribution system, losses, photovoltaic generation, primal-dual interior point method, reactive power control.
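The "available capacity" constraint on inverter reactive power follows from the apparent-power rating: with rating S and active output P, the reactive range is ±sqrt(S² - P²), possibly tightened by a minimum power factor. The sketch below illustrates that limit for an assumed 100 kVA inverter; the ratings and power-factor limit are not from the paper.

```python
import math

def reactive_capability(S_rating_kva, P_kw, pf_min=None):
    """Available reactive power (kvar) of a PV inverter producing P_kw,
    limited by the apparent-power rating and, optionally, a minimum power factor."""
    if P_kw > S_rating_kva:
        return 0.0
    q_max = math.sqrt(S_rating_kva**2 - P_kw**2)
    if pf_min is not None and P_kw > 0.0:
        q_pf = P_kw * math.tan(math.acos(pf_min))   # Q limit from the PF constraint
        q_max = min(q_max, q_pf)
    return q_max

# Illustrative 100 kVA inverter at different irradiance levels.
for P in (0.0, 40.0, 80.0, 100.0):
    q = reactive_capability(100.0, P, pf_min=0.9)
    print(f"P = {P:5.1f} kW -> Q available = +/-{q:6.1f} kvar")
```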
3881 Impact of Extended Enterprise Resource Planning in the Context of Cloud Computing on Industries and Organizations
Authors: Gholamreza Momenzadeh, Forough Nematolahi
Abstract:
The Extended Enterprise Resource Planning (ERPII) system usually requires massive amounts of storage space, powerful servers, and large upfront and ongoing investments to purchase and manage the software and the related hardware, which are not affordable for many organizations. In recent decades, organizations have preferred to adapt their business structures to new technologies in order to remain competitive in the world economy. Cloud computing, one of the tools of information technology (IT), is a modern approach that represents the next-generation application architecture. Cloud computing also has advantages that reduce costs in many ways, such as lower upfront costs for all computing infrastructure and lower costs of maintenance and support. On the other hand, traditional ERPII cannot cope with huge amounts of data and with the relations between organizations. In this study, based on a literature review, ERPII is investigated in the context of cloud computing, where organizations can operate more efficiently. In this setting, ERPII can respond to organizations' needs regarding large amounts of data and inter-organizational relations.
Keywords: Extended enterprise resource planning, cloud computing, business process, enterprise information integration.
3880 Influence of Atmospheric Physical Effects on Static Behavior of Building Plate Components Made of Fiber-Cement-Based Materials
Authors: Jindrich J. Melcher, Marcela Karmazínová
Abstract:
The paper presents brief information on particular results of an experimental study focused on the behavior of structural plate components made of fiber-cement-based materials, used in building construction and exposed to the atmospheric physical effects caused by weather changes in the summer period. Weather changes, represented mainly by temperature and rain, also cause changes in the temperature and moisture of the investigated structural components. This can affect their static behavior, that is, the stresses and deformations, which have been monitored as the main outputs of the tests performed. The experimental verification is based on simulating the influence of temperature and rain using a defined procedure of warming and water sprinkling corresponding to the weather conditions during the summer period in the South Moravian region of the Czech Republic, for which the application of these structural components is mainly planned. Two types of components have been tested: (i) glass-fiber-concrete panels used for building façades and (ii) fiber-cement slabs used mainly for claddings, but also as part of floor structures, as lost shuttering, and so on.
Keywords: Atmospheric physical effect, building component, experiment, fiber-cement, glass-fiber-concrete, simulation, static behavior, test, warming, water sprinkling, weather.
3879 Thermal Analysis of Extrusion Process in Plastic Making
Authors: S. K. Fasogbon, T. M. Oladosu, O. S. Osasuyi
Abstract:
Plastic extrusion has been an important plastic production process since the 19th century. In the plastic extrusion process, however, wide variations in temperature along the extrudate usually lead to scrap formation on the sides of finished products. To avoid this situation, there is a need to deeply understand the temperature distribution along the extrudate. This work developed an analytical model that predicts the temperature distribution over the billet (the polymer melt) along the extrudate during the extrusion process, with the limitation that the model does not cover biopolymers such as DNA. The model was solved and simulated, and results for two different plastic materials (polyvinyl chloride and polycarbonate) obtained using a self-developed MATLAB code and commercially developed software (ANSYS) were generated and compared. It was observed that heat is transferred from the entry of the billet into the die down to its end. The plots indicate a natural exponential decay of temperature with time and along the die length, with the temperature being 413 K and 474 K for polyvinyl chloride and polycarbonate, respectively, at the entry and 299.3 K and 328.8 K at the exit, when the surrounding temperature was 298 K. The extrusion model was validated by comparing the MATLAB code simulation with the commercially available ANSYS simulation, and the results agree favourably. This work concludes that the developed mathematical model and the self-generated MATLAB code are reliable tools for predicting the temperature distribution along the extrudate in the plastic extrusion process.
Keywords: ANSYS, extrusion process, MATLAB, plastic making, thermal analysis.
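The exponential decay reported above can be written as T(x) = T_amb + (T_in - T_amb)·exp(-x/λ). The sketch below back-calculates λ from the quoted polyvinyl chloride entry (413 K) and exit (299.3 K) temperatures at a 298 K surrounding; the die length and the lumped exponential form are assumptions made only for illustration, and the original analysis was implemented in MATLAB.

```python
import math

T_amb, T_in, T_out = 298.0, 413.0, 299.3     # K, values quoted in the abstract (PVC)
L_die = 0.5                                   # m, assumed die length (not from the paper)

# Decay constant that reproduces the reported exit temperature.
lam = L_die / math.log((T_in - T_amb) / (T_out - T_amb))

def T(x):
    """Lumped exponential temperature profile T(x) along the die."""
    return T_amb + (T_in - T_amb) * math.exp(-x / lam)

for x in (0.0, 0.1, 0.25, 0.5):
    print(f"x = {x:4.2f} m : T = {T(x):6.1f} K")
```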
3878 A Review on the Potential of Electric Vehicles in Reducing World CO2 Footprints
Authors: S. Alotaibi, S. Omer, Y. Su
Abstract:
Conventional Internal Combustion Engine (ICE) based vehicles are a threat to the environment, as they account for a large proportion of the overall greenhouse gas (GHG) emissions in the world. Hence, these vehicles need to be replaced with more environmentally friendly ones. Electric Vehicles (EVs) are promising technologies that offer both human comfort (less noise and pollution) and reduced (or zero) emissions of GHGs. In this paper, different types of EVs are reviewed and their advantages and disadvantages are identified. It is found that, in terms of fuel economy, Plug-in Hybrid EVs (PHEVs) have the best fuel economy, followed by Hybrid EVs (HEVs) and ICE vehicles. Since Battery EVs (BEVs) do not use any fuel, their fuel economy is estimated as a price per kilometer. Similarly, in terms of GHG emissions, BEVs are the most environmentally friendly, since they do not result in any emissions, while HEVs and PHEVs produce fewer emissions than conventional ICE based vehicles. Fuel Cell EVs (FCEVs) are also zero-emission vehicles, but they have large costs associated with them. Finally, if the electricity is provided by renewable energy technologies through the grid connection, then BEVs can be considered zero-emission vehicles.
Keywords: Electric vehicle, fuel cell electric vehicle, hybrid electric vehicle, internal combustion engine.
3877 Analysis and Design of Inductive Power Transfer Systems for Automotive Battery Charging Applications
Authors: Wahab Ali Shah, Junjia He
Abstract:
Transferring electrical power without any wiring has been a dream since the late 19th century. There were some early advances in this area connected with microwave systems; recently, however, the subject has become very attractive due to practical systems. There are low-power applications, such as charging the batteries of contactless toothbrushes or implanted devices, and higher-power applications, such as charging the batteries of electric automobiles or buses. In the first group of applications, the operating frequencies are in the microwave range, while the frequency is lower in high-power applications; in the latter, the concept is also called inductive power transfer. The aim of the paper is to give an overview of inductive power transfer for electric vehicles, with a special focus on coil design and power converter simulation for static charging. Coil design is one of the most critical tasks and is very important for efficient and safe power transfer. Power converters are used on both sides of the system: the converter on the primary side generates a high-frequency voltage to excite the primary coil, while the converter on the secondary side rectifies the voltage transferred from the primary to charge the battery. In this paper, an inductive power transfer system is studied. Inductive power transfer is a promising technology with several possible applications. The operating principles of these systems are explained, and the components of the system are described. Finally, a single-phase 2 kW system was simulated and the results are presented. The work presented in this paper is just an introduction to the concept. A reformed compensation network based on the traditional inductor-capacitor-inductor (LCL) topology is proposed to achieve a robust response to the large coupling variations that are common in dynamic wireless charging applications. In future work, this type of compensation should be studied, and different compensation topologies should be compared for the same power level.
Keywords: Coil design, contactless charging, electrical automobiles, inductive power transfer, operating frequency.
3876 Analysis of Residual Stresses and Angular Distortion in Stiffened Cylindrical Shell Fillet Welds Using Finite Element Method
Authors: M. R. Daneshgar, S. E. Habibi, E. Daneshgar, A. Daneshgar
Abstract:
In this paper, a two-dimensional method is developed to simulate the fillet welds in a stiffened cylindrical shell using the finite element method. The stiffener material is aluminum 2519. A thermo-elasto-plastic analysis is used to analyze the thermo-mechanical behavior. Due to the high heat flux rate of the welding process, two uncoupled thermal and mechanical analyses are carried out instead of a single coupled thermo-mechanical simulation. In order to investigate the effects of the welding procedure, two different welding techniques are examined, and the resulting residual stresses and distortions due to the different welding procedures are obtained. Furthermore, this study employs the element birth and death technique to simulate the variation of the weld filler with time in the fillet welds. The obtained results are in good agreement with published experimental and three-dimensional numerical simulation results; therefore, the proposed 2D modeling technique can effectively reproduce the corresponding results of 3D models. Finally, by inspection of the obtained residual hoop and transverse stresses and angular distortions, a proper welding procedure is suggested.
Keywords: Stiffened cylindrical shell, fillet welds, residual stress, angular distortion, finite element method.
3875 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation Based Approach
Authors: Sujoy Das, M. M. Ghosh
Abstract:
The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it; the resultant fluid is termed a "nanofluid". A theoretical model for estimating the thermal conductivity of a nanofluid is proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which the nanoparticles repeatedly collide with the heat source. During each collision, rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pick-up of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles, but also on their elastic and other physical properties. After the collision, the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding base fluid within 2-10 ms. The Brownian motion and the associated temperature variation of the nanoparticles have been modeled by stochastic analysis. The repeated occurrence of these events by the suspended nanoparticles contributes significantly to the characteristic thermal conductivity of nanofluids, which has been estimated by the present model for an ethylene-glycol-based nanofluid containing Cu nanoparticles of sizes ranging from 8 to 20 nm with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.
Keywords: Brownian dynamics, Molecular dynamics, Nanofluid, Thermal conductivity.
3874 Satellite Imagery Classification Based on Deep Convolution Network
Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu
Abstract:
Satellite imagery classification is a challenging problem with many practical applications. In this paper, we design a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large scale variance in satellite images, we introduce the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we propose a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results of the proposed hyper-parameter search method show that it guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we build our DCNN models and evaluate their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
Keywords: Satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization.
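A minimal PyTorch sketch of an inception-style building block: parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch whose outputs are concatenated along the channel axis. The channel counts and input size are arbitrary; this is not the paper's exact architecture or its genetic hyper-parameter search.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1, 3x3, 5x5 convolutions and a 3x3 max-pool branch,
    concatenated along the channel axis, as in the inception building block."""
    def __init__(self, in_ch, c1, c3, c5, cp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, cp, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Example: a batch of 3-channel 64x64 satellite patches.
block = InceptionBlock(in_ch=3, c1=16, c3=32, c5=8, cp=8)
out = block(torch.randn(4, 3, 64, 64))
print(out.shape)    # torch.Size([4, 64, 64, 64]); 16+32+8+8 output channels
```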
3873 Analysis of Codebook Based Channel Feedback Techniques for MIMO-OFDM Systems
Authors: Muhammad Rehan Khalid, Ahmed Farhan Hanif, Adnan Ahmed Khan
Abstract:
This paper investigates the performance of a Multiple-Input Multiple-Output (MIMO) feedback system combined with Orthogonal Frequency Division Multiplexing (OFDM). Two types of codebook-based channel feedback techniques are used in this work. The first feedback technique uses a combination of both the long-term and short-term channel state information (CSI) at the transmitter, whereas the second technique uses only the short-term CSI. The long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread; moreover, it leads to significant data rates with improved bit error performance compared to links having only a single antenna at both the transmitter and receiver. The effectiveness of these techniques has been demonstrated through the simulation of a MIMO-OFDM feedback system, and the results have been evaluated for 4x4 MIMO channels. The simulation results indicate the benefits of the MIMO-OFDM channel feedback system over one without OFDM: a performance gain of about 3 dB is observed for the MIMO-OFDM feedback system compared to the one without OFDM. Hence, MIMO-OFDM is an attractive approach for future high-speed wireless communication systems.
Keywords: MIMO systems, OFDM, Codebooks, Channel Feedback.
3872 Weighted Data Replication Strategy for Data Grid Considering Economic Approach
Authors: N. Mansouri, A. Asadi
Abstract:
A Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW) is proposed, which is an enhanced version of the Latest Access Largest Weight strategy. Replication should be used wisely, however, because the storage capacity of each Grid site is limited; it is therefore important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on the response time, which is determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results obtained with OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.
Keywords: Data grid, data replication, simulation, replica selection, replica placement.
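A minimal sketch of the response-time scoring described above for replica selection: the estimated response time of each candidate site is the sum of the network transfer time, the storage access latency, and the wait behind queued requests, and the site with the smallest estimate is chosen. The site attributes are made-up, and ELALW's exact weighting (including the distance term) is not reproduced.

```python
def estimated_response_time(site, file_size_mb):
    """Estimated response time (s) for fetching a replica from a site:
    network transfer + storage access latency + wait behind queued requests."""
    transfer = file_size_mb * 8.0 / site["bandwidth_mbps"]    # MB -> Mb
    queue_wait = site["queued_requests"] * site["mean_service_s"]
    return transfer + site["storage_latency_s"] + queue_wait

def select_best_replica(sites, file_size_mb):
    return min(sites, key=lambda s: estimated_response_time(s, file_size_mb))

# Hypothetical sites holding the requested replica (illustrative numbers).
sites = [
    {"name": "A", "bandwidth_mbps": 100, "storage_latency_s": 0.5,
     "queued_requests": 4, "mean_service_s": 2.0},
    {"name": "B", "bandwidth_mbps": 40,  "storage_latency_s": 0.2,
     "queued_requests": 0, "mean_service_s": 2.0},
    {"name": "C", "bandwidth_mbps": 200, "storage_latency_s": 1.0,
     "queued_requests": 10, "mean_service_s": 2.0},
]
best = select_best_replica(sites, file_size_mb=500)
print("fetch replica from site", best["name"])
```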
3871 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment
Authors: U. Yerlikaya, R. T. Balkan
Abstract:
In this research, a method is developed to obtain high-dimensional configuration spaces for path planning problems. Typically, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space, whose number of dimensions equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way uses the clearance function of simulation software, in which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered within the scope of this study, and its 4-D configuration space is obtained in these two ways. The difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, the 4-D path planning problem was decomposed as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then verified using the obtained paths on the simulation model of the double-turret system.
Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.
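A minimal A* sketch on a 2-D occupancy grid, the kind of search that could be run on each 2-D slice of the decomposed configuration space. The grid, the 4-connected moves, and the Manhattan heuristic are illustrative choices, not the paper's turret model.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D occupancy grid (0 = free, 1 = blocked) with 4-connected
    moves and a Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                                   # already expanded
        came_from[node] = parent
        if node == goal:                               # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]] == 0
                    and g + 1 < g_cost.get(nb, float("inf"))):
                g_cost[nb] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nb), g + 1, nb, node))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```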