Search results for: iterative calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1572

1452 Revolutionizing Gaming Setup Design: Utilizing Generative and Iterative Methods for Prop and Environment Design, Transforming the Landscape of Game Development Through Automation and Innovation

Authors: Rashmi Malik, Videep Mishra

Abstract:

The practice of generative design has become a transformative approach for efficiently generating multiple iterations of any design project. The conventional way of modeling game elements is very time-consuming and requires skilled artists. Traditionally, a 3D modeling tool such as 3ds Max or Blender is used to create the game library, and each asset takes its stipulated time to model. This study focuses on using generative design tools to increase efficiency in game development at the prop and environment generation stage. This involves procedural level generation and customized, regulated or randomized asset generation. The paper presents a system design approach using generative tools such as Grasshopper (visual scripting) and other scripting tools to automate the modeling of the game library. A single script enables the generation of multiple products, thus creating a system that lets designers and artists customize props and environments. The main goal is to measure the efficacy of the automated system in creating a wide variety of game elements, thereby reducing the need for manual content creation, and to integrate it into the workflow of AAA and indie games.
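
As a concrete illustration of the "one script, many products" idea, the following Python sketch emits a reproducible library of prop variants from a single parametric generator; the prop type, parameter names and ranges are invented for illustration and are not taken from the paper.

```python
import random

def generate_crate(seed, size_range=(0.8, 1.6), plank_counts=(4, 8)):
    """Generate one randomized crate prop as a parameter dictionary.

    A real pipeline would hand these parameters to a modeling backend
    (e.g., Blender's bpy API or a Grasshopper definition); the output is
    kept abstract here so the sketch stays tool-agnostic.
    """
    rng = random.Random(seed)
    return {
        "width": round(rng.uniform(*size_range), 2),
        "height": round(rng.uniform(*size_range), 2),
        "planks": rng.randint(*plank_counts),
        "weathering": round(rng.random(), 2),  # 0 = new, 1 = heavily worn
    }

# One script, many products: a reproducible library of 100 crate variants.
library = [generate_crate(seed) for seed in range(100)]
print(library[0])
```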

Keywords: iterative game design, generative design, gaming asset automation, generative game design

Procedia PDF Downloads 40
1451 Impact of the Operation and Infrastructure Parameters on the Railway Track Capacity

Authors: Martin Kendra, Jaroslav Mašek, Juraj Čamaj, Matej Babin

Abstract:

Railway transport is considered one of the most environmentally friendly modes of transport. With freight transport predicted to increase, some lines already face problems meeting the demanded capacity. Track capacity could be increased by structural adjustments to the infrastructure. This contribution shows how travel time can be minimized and track capacity increased by changing some of the basic infrastructure and operation parameters, for example, the minimal curve radius of the track, the number of tracks, or the usable track length at stations. Calculation of the necessary parameter changes is based on the fundamental physical laws applied to train movement, while calculation of the occupation time depends on changes in how traffic between stations is controlled.
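
The fundamental physical law behind the curve-radius parameter is the lateral-acceleration balance on a canted curve, v²/R ≤ g·h/s + a_q. A minimal Python sketch, with illustrative cant and acceleration values that are not taken from the contribution:

```python
import math

G = 9.81          # gravitational acceleration [m/s^2]
GAUGE = 1.5       # effective rail spacing for the cant calculation [m] (assumed)

def max_curve_speed(radius_m, cant_m=0.15, unbalanced_acc=0.65):
    """Maximum speed on a curve [m/s] from the lateral-acceleration balance:
    v^2 / R <= g * cant / gauge + permissible unbalanced acceleration.
    Parameter values are illustrative, not taken from the paper."""
    a_total = G * cant_m / GAUGE + unbalanced_acc
    return math.sqrt(a_total * radius_m)

for r in (300, 600, 1200):  # candidate minimal curve radii [m]
    v = max_curve_speed(r)
    print(f"R = {r:>5} m -> v_max = {v * 3.6:5.1f} km/h")
```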

Keywords: curve radius, maximum curve speed, track mass capacity, reconstruction

Procedia PDF Downloads 317
1450 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most exist only as software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in C and run on a PC. The execution times of the reduct calculation in hardware and in software were compared, and the results show an increase in the speed of data processing.
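
A software reference of the two-stage algorithm is easy to state; the sketch below mirrors the structure described above (singleton cells of the discernibility matrix give the core, then the most frequent attributes are added greedily) in plain Python, with a toy decision table:

```python
from itertools import combinations
from collections import Counter

def discernibility(table, attrs, decision):
    """Yield, for each object pair with different decisions, the set of
    condition attributes whose values differ (one discernibility-matrix cell)."""
    for (x, dx), (y, dy) in combinations(zip(table, decision), 2):
        if dx != dy:
            yield {a for a in attrs if x[a] != y[a]}

def two_stage_superreduct(table, attrs, decision):
    cells = [c for c in discernibility(table, attrs, decision) if c]
    # Stage 1: the core = attributes appearing as singleton cells
    # (in hardware, the comparators plus the 'singleton detector' block).
    core = {next(iter(c)) for c in cells if len(c) == 1}
    reduct = set(core)
    # Stage 2: greedily add the most frequent attribute among cells
    # not yet covered, until every object pair is discerned.
    uncovered = [c for c in cells if not (c & reduct)]
    while uncovered:
        freq = Counter(a for c in uncovered for a in c)
        reduct.add(freq.most_common(1)[0][0])
        uncovered = [c for c in uncovered if not (c & reduct)]
    return core, reduct  # the superreduct may still contain removable attributes

# Toy decision table: objects as dicts of condition attribute values.
table = [{"a": 0, "b": 1, "c": 0}, {"a": 1, "b": 1, "c": 0},
         {"a": 0, "b": 0, "c": 1}, {"a": 1, "b": 0, "c": 1}]
decision = [0, 1, 0, 1]
print(two_stage_superreduct(table, ["a", "b", "c"], decision))
```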

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 189
1449 An Improved Parallel Algorithm of Decision Tree

Authors: Jiameng Wang, Yunfei Yin, Xiyu Deng

Abstract:

Parallel optimization is an important current research topic in data mining. Taking the parallelization of Classification and Regression Trees (CART) as an example, this paper proposes a parallel data mining algorithm based on SSP-OGini-PCCP. Aiming at the problem of choosing the best CART segmentation point, this paper designs an S-SP model without data association; and in order to calculate the Gini index efficiently, a parallel OGini calculation method is designed. In addition, in order to improve the efficiency of the pruning algorithm, a synchronous PCCP pruning strategy is proposed. The optimal segmentation calculation, the Gini index calculation, and the pruning algorithm, all important components of parallel data mining, are studied in depth. Data mining methods based on SSP-OGini-PCCP are tested on a distributed cluster simulation system built on Spark. Experimental results show that this method increases the search efficiency of the best segmentation point by an average of 89%, the search efficiency of the Gini segmentation index by 3853%, and the pruning efficiency by 146% on average; and as the size of the data set increases, the performance of the algorithm remains stable, which meets the requirements of contemporary massive data processing.
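
For reference, the quantity being parallelized can be shown in a few lines: the (serial) weighted Gini index of a candidate segmentation point, which CART minimizes over all thresholds. This is only the underlying definition, not the paper's parallel OGini scheme:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset: 1 - sum(p_k^2)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def split_gini(values, labels, threshold):
    """Weighted Gini index of a binary split on a numeric attribute,
    the quantity CART minimizes when choosing a segmentation point."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

values = [2.0, 3.5, 1.0, 4.2, 3.9]
labels = [0, 1, 0, 1, 1]
# Candidate thresholds: every distinct value except the maximum.
best = min(sorted(set(values))[:-1], key=lambda t: split_gini(values, labels, t))
print("best threshold:", best)
```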

Keywords: classification, Gini index, parallel data mining, pruning ahead

Procedia PDF Downloads 100
1448 Endometrioma Ethanol Sclerotherapy

Authors: Lamia Bensissaid

Abstract:

Goals: Endometriosis affects 6 to 10% of women of childbearing age, and 17 to 44% of them have ovarian endometriomas. Medical and surgical treatments represent the two therapeutic axes, with which medically assisted reproduction can be combined. Laparoscopic intraperitoneal ovarian cystectomy is described by learned societies (CNGOF, ESHRE, NICE) as the reference technique in the management of endometriomas. However, it leads to a significant short-term reduction in the AMH level and the number of antral follicles, especially in cases of bilateral cystectomy, large cyst size, or cystectomy after recurrence. Often, the disease is at an advanced stage in patients who have already undergone several operations. Most have adhesions, which increase the risk of surgical complications and of suboptimal resection, and therefore of recurrence of the cyst. These findings have led to a change of opinion in favor of a conservative approach. Sclerotherapy is an old technique that acts by fibrinoid necrosis; it consists of injecting a sclerosing agent into the cyst cavity. Results: Recurrence was less than 15% over a 12-month follow-up; these rates are comparable to those of surgery. Sclerotherapy does not seem to have a negative impact on the ovarian reserve, but this has not been sufficiently evaluated. It offers an advantage in IVF pregnancy rates compared to cystectomy, particularly in cases of recurrent endometriomas. Its further advantages are that it can be done on an outpatient basis; it is inexpensive; it avoids surgery that is sometimes difficult and repeated; it allows an increase in pregnancy rates and the preservation of the ovarian reserve compared to repeat surgery; and it is of great interest in cases of bilateral endometriomas (kissing ovaries) or recurrent endometriomas. Conclusions: Ethanol sclerotherapy could be a good alternative to surgery.

Keywords: endometrioma, sclerotherapy, infertility, ethanol

Procedia PDF Downloads 31
1447 Iterative Replanning of Diesel Generator and Energy Storage System for Stable Operation of an Isolated Microgrid

Authors: Jiin Jeong, Taekwang Kim, Kwang Ryel Ryu

Abstract:

The target microgrid in this paper is isolated from the large central power system and is assumed to consist of wind generators, photovoltaic power generators, an energy storage system (ESS), a diesel power generator, the community load, and a dump load. The operation of such a microgrid can be hazardous because of the uncertain prediction of power supply and demand, and especially because of the high fluctuation of the output from the wind generators. In this paper, we propose an iterative replanning method for determining the appropriate level of diesel generation and the charging/discharging cycles of the ESS for the upcoming one-hour horizon. To cope with the uncertainty of the supply and demand estimates, the one-hour plan is rebuilt repeatedly at a regular interval of one minute by rolling the one-hour horizon. Since the plan must be built with a sufficiently large safety margin to avoid any possible blackout, some energy waste through the dump load is inevitable. In our approach, the level of the safety margin is optimized through learning from past experience. The simulation experiments show that our method, combined with margin optimization, can reduce the dump load compared to the method without such optimization.
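
A minimal sketch of the rolling-horizon structure (not the authors' planner, whose optimization and learning components are more elaborate): every minute a fresh 60-step plan is built from the latest forecasts, but only its first step is executed.

```python
import random

def plan_next_hour(load_fc, renew_fc, soc, margin):
    """Greedy placeholder plan: for each of the next 60 minutes, cover the
    forecast net demand plus the safety margin first from the ESS, then
    from diesel. Only the rolling structure is illustrated here."""
    plan, s = [], soc
    for load, renew in zip(load_fc, renew_fc):
        need = max(0.0, load - renew + margin)   # power to be covered [kW]
        ess = min(need, s * 60.0)                # power the ESS can supply for one minute
        diesel = need - ess
        s -= ess / 60.0                          # energy drawn from storage [kWh]
        plan.append((diesel, ess))
    return plan

soc, margin = 20.0, 2.0        # kWh stored, kW safety margin (assumed values)
total_diesel = 0.0
for minute in range(180):      # simulate three hours of rolling replanning
    load_fc = [30 + random.uniform(-5, 5) for _ in range(60)]    # stand-in forecasts
    renew_fc = [15 + random.uniform(-10, 10) for _ in range(60)]
    plan = plan_next_hour(load_fc, renew_fc, soc, margin)
    diesel, ess = plan[0]      # execute only the first one-minute step
    soc = max(0.0, soc - ess / 60.0)
    total_diesel += diesel / 60.0
# In the paper, the margin itself is additionally tuned by learning from
# past dump-load waste; that feedback loop is omitted here.
print(f"diesel energy over 3 h: {total_diesel:.1f} kWh")
```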

Keywords: microgrid, operation planning, power efficiency optimization, supply and demand prediction

Procedia PDF Downloads 411
1446 The Improvement of Environmental Protection through Motor Vehicle Noise Abatement

Authors: Z. Jovanovic, Z. Masonicic, S. Dragutinovic, Z. Sakota

Abstract:

In this paper, a methodology for noise reduction of motor vehicles in use is presented. The methodology relies on a synergic model of noise generation as a function of time. An arbitrary number of motor vehicle noise sources act in concert to yield the overall noise level of the motor vehicle. The number of noise sources participating in the overall noise level is subject to the constraint that the acoustic potential of each noise source under consideration can be calculated; this is the prerequisite for the calculation of the acoustic potential of the whole vehicle. The recast form of the pertinent set of equations describing the synergic model is laid down and solved by the Gauss method. A number of results emerged, and some of them, i.e., those from the application of the model to an MDD FAP Priboj motor vehicle in use, are particularly elucidated.
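
The basic way several sources "act in concert" is energetic summation of their levels; the synergic model adds time dependence on top of this. A short Python sketch of the underlying combination rule, with invented source levels:

```python
import math

def combined_level(levels_db):
    """Energetically combine individual source levels L_i [dB] into the
    overall level: L = 10 * log10(sum(10^(L_i / 10)))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Example: engine, intake, exhaust and tyre noise of one vehicle (assumed values).
sources = [78.0, 72.0, 75.0, 70.0]
print(f"overall level: {combined_level(sources):.1f} dB")
```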

Keywords: noise abatement, MV noise sources, noise source identification, muffler

Procedia PDF Downloads 415
1445 Modern State of the Universal Modeling for Centrifugal Compressors

Authors: Y. Galerkin, K. Soldatova, A. Drozdov

Abstract:

The 6th version of the Universal Modeling method for centrifugal compressor stage calculation is described. The new mathematical model was identified; as a result of the identification, a uniform set of empirical coefficients was obtained. The efficiency definition error is 0.86% at the design point, and 1.22% over five flow rate points (excluding the point of maximum flow rate). Several variants of stages with 3D impellers, designed by the 6th version program and by quasi-three-dimensional calculation programs, were compared in terms of their gas-dynamic performance with CFD (NUMECA FINE/Turbo). The performance comparison demonstrated the validity of the general design principles and leads to some design recommendations.

Keywords: compressor design, loss model, performance prediction, test data, model stages, flow rate coefficient, work coefficient

Procedia PDF Downloads 386
1444 Estimation of Pressure Loss Coefficients in Combining Flows Using Artificial Neural Networks

Authors: Shahzad Yousaf, Imran Shafi

Abstract:

This paper presents a new method for the calculation of pressure loss coefficients in tee junctions by use of an artificial neural network (ANN). Geometry and flow parameters are fed into the ANN as inputs for the purpose of training the network. The efficacy of the network is demonstrated by comparing the experimental and the ANN-calculated pressure loss coefficients for combining flows in a tee junction. Reynolds numbers ranging from 200 to 14000 and discharge ratios varying from minimum to maximum flow were used for the calculation of the pressure loss coefficients. The coefficients calculated using the ANN are compared to models from the literature used for junction flows. The results achieved after the application of the ANN agree reasonably well with the experimental values.
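
A hedged sketch of the workflow, using scikit-learn's MLPRegressor in place of whatever network the authors trained, and synthetic data standing in for the experimental values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Inputs: Reynolds number and discharge ratio (geometry held fixed here);
# target: pressure loss coefficient K. The trend below is an assumed
# placeholder for the experimental data used in the paper.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(200, 14000, 500),     # Reynolds number
                     rng.uniform(0.0, 1.0, 500)])      # discharge ratio q
K = 0.5 + 0.8 * X[:, 1] ** 2 + 50.0 / np.sqrt(X[:, 0])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, K)
print(model.predict([[5000.0, 0.5]]))  # K estimate at Re = 5000, q = 0.5
```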

Keywords: artificial neural networks, combining flow, pressure loss coefficients, solar collector tee junctions

Procedia PDF Downloads 360
1443 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems

Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information, like the operating performance of a rubber component, is desired. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself exhibits non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linearity further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts led to the study of FEM-based calculation processes. This type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and possibilities for time reduction are presented.

Keywords: rubber bumper, data acquisition, finite element analysis, support vector regression

Procedia PDF Downloads 448
1442 Theoretical Analysis of Photoassisted Field Emission near the Metal Surface Using Transfer Hamiltonian Method

Authors: Rosangliana Chawngthu, Ramkumar K. Thapa

Abstract:

A model calculation of the photoassisted field emission current (PFEC) using the transfer Hamiltonian method is presented here. Photons are incident on the surface of a metal such that the photon energy is usually less than the work function of the metal under investigation. The incident radiation photoexcites electrons to a final state which lies below the vacuum level, so the electrons remain confined within the metal surface. A strong static electric field is then applied to the surface of the metal, which causes the photoexcited electrons to tunnel through the surface potential barrier into the vacuum region and constitutes a considerable current, called the photoassisted field emission current. The incident radiation, usually a laser beam, causes the transition of electrons from the initial state to the final state, and the matrix element for this transition is written down. For the calculation of the PFEC, the transfer Hamiltonian method is used. The initial-state wavefunction is calculated using the Kronig-Penney potential model, and the effect of the matrix element is also studied. An appropriate dielectric model for the surface region of the metal is used for the evaluation of the vector potential. A FORTRAN program is used for the calculation of the PFEC. The results will be checked against experimental data and other theoretical results.
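
For the initial-state calculation, the Kronig-Penney model yields the textbook dispersion relation cos(ka) = cos(αa) + P·sin(αa)/(αa); the allowed energy bands of the initial states can be located numerically as below (scaled units and an illustrative barrier strength, not the paper's parameters):

```python
import numpy as np

# Kronig-Penney dispersion in the delta-barrier limit:
#   cos(k a) = cos(alpha a) + P * sin(alpha a) / (alpha a),
# with alpha = sqrt(2 m E) / hbar. Energies are allowed where the
# right-hand side lies in [-1, 1]. Units are scaled (a = 1, 2m/hbar^2 = 1).
P = 3.0                                   # barrier strength (assumed)
E = np.linspace(1e-6, 60.0, 20000)
alpha = np.sqrt(E)
rhs = np.cos(alpha) + P * np.sin(alpha) / alpha
allowed = np.abs(rhs) <= 1.0

# Band edges are where the allowed/forbidden flag flips.
edges = np.flatnonzero(np.diff(allowed.astype(int)))
print("band edges (scaled energy):", np.round(E[edges[:6]], 2))
```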

Keywords: photoassisted field emission, transfer Hamiltonian, vector potential, wavefunction

Procedia PDF Downloads 189
1441 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study

Authors: Ajit Brindhaban

Abstract:

Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study to assess the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest and abdomen were performed using standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation measured in regions of interest created in the lung, bone, and soft tissue regions of the phantom. The paired t-test and one-way ANOVA were used to compare the CNR of FBP images with that of IR images, at the p = 0.05 level. The lowest IR strength that produced the highest CNR was identified. In the second stage, scans of the head were performed with mA(s) values decreased in proportion to the increase in CNR relative to the standard FBP protocol, and the CNR values were compared using the paired t-test at the p = 0.05 level. Results: Images reconstructed using the IR technique had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all strengths of IR. The CNR increased with increasing IR strength up to strength 3 in the head and chest images; increases beyond this strength were insignificant. In abdomen images, the CNR continued to increase up to strength 5. The results also indicated that IR techniques improve the CNR by a factor of up to 1.5. Based on the CNR values of IR images at strength 3 and the CNR values of FBP images, a reduction in mA(s) of about 20% was identified. The head images acquired at 20% reduced mA(s) and reconstructed using IR at strength 3 had a similar CNR to FBP images at standard mA(s). In the head scans of the phantom used in this study, it was thus demonstrated that a similar CNR can be achieved even when the mA(s) is reduced by about 20%, if the IR technique with strength 3 is used for reconstruction. Conclusions: The IR technique produced better image quality at all strengths of IR in comparison to FBP. The IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as the FBP technique.
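
For reference, the CNR figure of merit used throughout the study can be computed as below; the exact ROI definition may differ from the one in the paper, and the CT numbers are synthetic:

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio from two regions of interest:
    |mean(ROI) - mean(background)| / std(background).
    Definitions vary slightly in the literature; this is one common form."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

# Synthetic CT numbers (HU) for a soft-tissue ROI and its background.
rng = np.random.default_rng(1)
roi = rng.normal(60.0, 8.0, 200)           # FBP-like noise level
background = rng.normal(30.0, 8.0, 200)
print(f"CNR (FBP-like):     {cnr(roi, background):.2f}")

roi_ir = rng.normal(60.0, 5.0, 200)        # IR lowers noise, raising the CNR
background_ir = rng.normal(30.0, 5.0, 200)
print(f"CNR (IR-like, s=3): {cnr(roi_ir, background_ir):.2f}")
```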

Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging

Procedia PDF Downloads 123
1440 Overhead Lines Induced Transient Overvoltage Analysis Using Finite Difference Time Domain Method

Authors: Abdi Ammar, Ouazir Youcef, Laissaoui Abdelmalek

Abstract:

In this work, an approach based on transmission line theory is presented. It is exploited for the calculation of overvoltages created by direct lightning strikes on a guard cable of an overhead high-voltage line. First, we show the theoretical developments leading to the propagation equation, its discretization by the finite-difference time-domain (FDTD) method, and the resulting linear algebraic equations, followed by the calculation of the linear parameters of the line. The second step consists of solving the transmission line system of equations by the FDTD method. This enables us to determine the spatio-temporal evolution of the induced overvoltage.
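
A minimal sketch of the FDTD step for the lossless telegrapher's equations, with a Gaussian source standing in for the lightning current and illustrative line parameters (the paper's model includes losses and the actual line geometry):

```python
import numpy as np

# 1D FDTD for the lossless telegrapher's equations
#   dv/dx = -L di/dt,   di/dx = -C dv/dt
L, C = 1.6e-6, 11.1e-12          # per-metre line parameters (illustrative)
dx = 10.0                        # spatial step [m]
v_c = 1.0 / np.sqrt(L * C)       # propagation speed on the line
dt = dx / v_c                    # "magic" time step at the Courant limit
N = 500

v = np.zeros(N)                  # node voltages
i = np.zeros(N - 1)              # branch currents (staggered grid)
for n in range(400):
    i -= dt / (L * dx) * (v[1:] - v[:-1])             # update currents
    v[1:-1] -= dt / (C * dx) * (i[1:] - i[:-1])       # update voltages
    v[N // 2] += np.exp(-((n - 60) / 20.0) ** 2)      # injected surge
print("peak overvoltage [a.u.]:", v.max())
```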

Keywords: lightning surge, transient overvoltage, eddy current, FDTD, electromagnetic compatibility, ground wire

Procedia PDF Downloads 51
1439 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible; on the other hand, the computational efficiency and manageability of the models should be as high as possible to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow the application of the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations: an additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations. However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two additional recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing Austrian bridges in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with those obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow assessing the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
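
For orientation, the normative additional damping factor is a closed-form function of the span L; the expression below is this editor's reading of the EN 1991-2 formula for spans under 30 m and should be verified against the code text before any design use:

```python
def additional_damping_en1991(span_m):
    """Additional damping Delta-zeta [%] as given, to the best of this
    sketch's knowledge, in EN 1991-2 for spans L < 30 m. Verify against
    the published code before relying on the coefficients."""
    L = span_m
    num = 0.0187 * L - 0.00064 * L ** 2
    den = 1.0 - 0.0441 * L - 0.0044 * L ** 2 + 0.000255 * L ** 3
    return num / den

for L in (5, 10, 15, 20):
    print(f"L = {L:2d} m -> additional damping = {additional_damping_en1991(L):.3f} %")
```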

Keywords: additional damping method, bridge dynamics, high-speed railway traffic, vehicle-bridge interaction

Procedia PDF Downloads 140
1438 A Fast Calculation Approach for Position Identification in a Distance Space

Authors: Dawei Cai, Yuya Tokuda

Abstract:

The market for location-based services (LBS) is expanding. The acquisition of the physical location is the fundamental basis for LBS. GPS, the de facto standard for outdoor localization, does not work well in indoor environments because walls and ceilings block its signals. Many techniques have been developed to acquire highly accurate localization in indoor environments. A triangulation approach is often used to identify the location, but a heavy and complex computation is necessary to calculate the location from the distances between the object and several source points. This computation is also time- and power-consuming, which is unfavorable for a mobile device that needs a long operating life on battery. To provide a low-power-consumption approach for mobile devices, this paper presents a fast calculation approach that identifies the location of an object without solving a system of simultaneous quadratic equations online. In our approach, we divide the location identification into two parts, one offline and the other online. In the offline mode, we run a mapping process that maps the location area to a distance space and find a simple formula that can be used online to identify the location of the object with very light computation. The characteristic of the approach is a good tradeoff between accuracy and computational load; therefore, it can be used in smartphones and other mobile devices that need a long working time. To show the performance, some simulation results are also provided in the paper.
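
One way to realise the offline/online split (not necessarily the paper's exact mapping formula) is to precompute a grid of distance vectors offline and reduce the online step to a nearest-neighbour lookup in distance space:

```python
import numpy as np

# Offline stage: sample the service area on a grid and precompute the
# distance vector from each grid point to the fixed signal sources.
sources = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # beacon positions (assumed)
xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
grid = np.column_stack([xs.ravel(), ys.ravel()])
table = np.linalg.norm(grid[:, None, :] - sources[None, :, :], axis=2)

def locate(measured_distances):
    """Online stage: a single nearest-neighbour lookup in distance space
    replaces solving simultaneous quadratic equations."""
    idx = np.argmin(np.linalg.norm(table - measured_distances, axis=1))
    return grid[idx]

true_pos = np.array([3.2, 7.7])
measured = np.linalg.norm(sources - true_pos, axis=1) + 0.05   # noisy ranges
print("estimated position:", locate(measured))
```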

Keywords: indoor localization, location based service, triangulation, fast calculation, mobile device

Procedia PDF Downloads 147
1437 Artificial Intelligence for Generative Modelling

Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta

Abstract:

As technology advances towards greater availability of high computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the use of generative design with artificial intelligence to build better models that adapt operations like selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, but the intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and comes up with multiple solutions, iterating towards a sturdy design with the most optimal parameters for the given constraints, saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry of designs that have evolved in nature over millions of years. The computer uses parametric models to generate newer models in an iterative approach and uses cloud computing to store these iterative designs. The latter part of the paper compares generative design with the topology optimization technology previously used to generate CAD models. Finally, the paper shows the performance of the algorithms and how they help in designing resource-efficient models.
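
The selection/mutation/crossover loop the abstract refers to can be sketched generically; the code below is a plain genetic algorithm on a toy objective, not any specific CAD tool's engine:

```python
import random

def evolve(fitness, n_params, pop_size=40, generations=60, mutation_rate=0.1):
    """Minimal generational GA with tournament selection, one-point
    crossover and Gaussian mutation."""
    pop = [[random.uniform(0, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:2]                           # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = (max(random.sample(scored, 3), key=fitness)   # tournaments
                    for _ in range(2))
            cut = random.randrange(1, n_params)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [g + random.gauss(0, 0.05) if random.random() < mutation_rate
                     else g for g in child]        # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective standing in for a CAD fitness (e.g., a stiff-but-light proxy).
best = evolve(lambda p: -(p[0] - 0.7) ** 2 - (p[1] - 0.3) ** 2, n_params=2)
print("best parameters:", [round(g, 3) for g in best])
```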

Keywords: genetic algorithm, bio mimicry, generative modeling, non-dominant techniques

Procedia PDF Downloads 118
1436 3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems

Authors: Ahmed Fradi

Abstract:

Nowadays, recent technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data have led to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (computer-aided design), 3D printing, etc. On the other hand, industry currently benefits from powerful modeling tools enabling designers to easily and quickly produce 3D models. This great ease of acquisition and modeling makes it possible to create large 3D model databases, which then become difficult to navigate. Therefore, the indexing of 3D objects appears as a necessary and promising solution to manage this type of data, extract model information, retrieve an existing model, or calculate the similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods for similarity calculation of 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models which is suitable for our case study and based on some of the previously studied methods. Our proposed approach is finally illustrated by an implementation and evaluated in a professional context.
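
As an example of the shape-based similarity methods surveyed in such work, the classic D2 shape distribution (a histogram of random point-pair distances) with an L1 comparison can be sketched as follows; the descriptor parameters are arbitrary choices:

```python
import numpy as np

def d2_descriptor(vertices, n_pairs=20000, bins=32, rng=np.random.default_rng(0)):
    """D2 shape distribution (after Osada et al.): histogram of distances
    between random point pairs, normalized by the maximum for scale
    invariance. One classic shape-based descriptor."""
    i = rng.integers(0, len(vertices), n_pairs)
    j = rng.integers(0, len(vertices), n_pairs)
    d = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def dissimilarity(desc_a, desc_b):
    """L1 distance between descriptors; 0 means identical distributions."""
    return float(np.abs(desc_a - desc_b).sum())

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
slab = cube * np.array([2.0, 1.0, 0.2])    # a stretched, flattened variant
print("cube vs. slab:", dissimilarity(d2_descriptor(cube), d2_descriptor(slab)))
```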

Keywords: CAD, 3D object retrieval, shape based retrieval, similarity calculation

Procedia PDF Downloads 235
1435 The Aspect of the Digital Formation in the Solar Community as One Prototype to Find the Algorithmic Sustainable Conditions in the Global Environment

Authors: Kunihisa Kakumoto

Abstract:

Purpose: Environmental problems are now posed on a global scale. Sprawl beyond natural limits should be forecast in advance in an algorithmic way, so that the conditions of our social life can be protected within those natural limits. Sustainable conditions on the globe must therefore be found that keep the balance between the capacity of nature and the demands of our social lives. The amount of water on the earth is limited, so sustainable conditions depend strongly on water capacity. The amount of water can be considered in relation to the area of green planting, because a certain volume of water can be obtained in forests where green planting is preserved; the sustainable conditions for water can thus be related to the green planting area. The reduction of CO₂ by green planting is also possible. Possible Measures and Methods: The concept of the solar community as one prototype has been introduced in technical papers at many previous international conferences. An algorithmic trial calculation based on this concept can be carried out; the concept of the solar community itself is based on data collected from a solar model house. From the algorithmic results for the prototype, simulations on the global scale can be performed as converted algorithmic results. This algorithmic study simulates the amount of water, also in relation to the green planting area. Additionally, the CO₂ emissions of the solar community and the CO₂ reduction by green planting can be calculated. On the basis of these calculations for the solar community, sustainable conditions for the globe can be simulated as converted results in an algorithmic way. The digital formation of the solar community is also considered in this context. Conclusion: To find sustainable conditions for the globe, the solar community has been taken as one prototype. The role of water is very important because the capacity of the water supply is very limited; at present, however, the cycle of the social community is not designed around this natural mechanism. The simulative calculation of this study is bounded by the limitation of the total water supply. Following this process, the total capacity of the water supply and the supportable population and areas can be determined by algorithmic calculation. To keep enough water, the green planting areas are very important; the planting area is also very important for keeping the CO₂ balance. The balance between CO₂ emission and reduction in the solar community can be found by simulative calculation, and the green planting area and the total amount of water required for sustainable conditions can thus be recognized by algorithmic simulative calculation. The example of one prototype can be in balance: the activity of social life must stay within the capacity of the natural mechanism, and the capacity of the natural environment in our world is very limited.

Keywords: the solar community, the sustainable condition, the natural limitation, the algorithmic calculation

Procedia PDF Downloads 72
1434 Digital Joint Equivalent Channel Hybrid Precoding for Millimeterwave Massive Multiple Input Multiple Output Systems

Authors: Linyu Wang, Mingjun Zhu, Jianhong Xiang, Hanyu Jiang

Abstract:

Aiming at the problem that the spectral efficiency of hybrid precoding (HP) is too low in current millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems, this paper proposes a digital joint equivalent channel hybrid precoding algorithm based on the iteration of the digital encoding matrix. First, the objective function is expanded to obtain the relation equation, and the pseudo-inverse iterative function of the analog encoder is derived using the pseudo-inverse method, which solves the problem of the greatly increased amount of computation caused by the rank deficiency of the digital encoding matrix and reduces the overall complexity of hybrid precoding. Secondly, the analog coding matrix and the millimeter-wave sparse channel matrix are combined into an equivalent channel, which is then subjected to Singular Value Decomposition (SVD) to obtain the digital coding matrix, after which the derived pseudo-inverse iterative function is used to iteratively regenerate the analog encoding matrix. The simulation results show that the proposed algorithm improves the system spectral efficiency by 10-20% compared with other algorithms, and its stability is also improved.
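
A generic sketch combining the two ingredients named above, SVD of the equivalent channel H·F_RF for the digital matrix and a pseudo-inverse-plus-projection update for the constant-modulus analog matrix, is given below; it is an illustrative alternating scheme, not the authors' exact derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Nrf, Ns = 64, 16, 4, 4           # tx/rx antennas, RF chains, streams (assumed)

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :Ns]              # unconstrained digital-only precoder

F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))   # analog: phase-only
for _ in range(10):                      # alternate digital / analog updates
    # Digital precoder from the SVD of the equivalent channel H @ F_rf.
    _, _, Vh_eq = np.linalg.svd(H @ F_rf)
    F_bb = Vh_eq.conj().T[:, :Ns]
    # Analog update: match F_rf @ F_bb to F_opt via the pseudo-inverse,
    # then re-project onto the constant-modulus (phase-only) constraint.
    F_rf = np.exp(1j * np.angle(F_opt @ np.linalg.pinv(F_bb)))

F_bb *= np.sqrt(Ns) / np.linalg.norm(F_rf @ F_bb, "fro")    # total power constraint
print(np.round(np.linalg.svd(H @ F_rf @ F_bb, compute_uv=False), 2))
```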

Keywords: mmWave, massive MIMO, hybrid precoding, singular value decomposition, equivalent channel

Procedia PDF Downloads 66
1433 Modelling Railway Noise Over Large Areas, Assisted by GIS

Authors: Conrad Weber

Abstract:

The modelling of railway noise over large project areas can be very time-consuming in terms of preparing the noise models and calculation time. An open-source GIS program has been utilised to assist with the modelling of operational noise levels for 675 km of railway corridor. A range of GIS algorithms were utilised to break the noise model area into manageable calculation sizes. GIS was utilised to prepare and filter a range of noise modelling inputs, including building files, land uses and ground terrain. A spreadsheet was utilised to manage the accuracy of key input parameters, including train speeds, train types, curve corrections, bridge corrections and engine notch settings. GIS was utilised to present the final noise modelling results. This paper explains the noise modelling process and how the spreadsheet and GIS were utilised to model this massive project accurately and efficiently.

Keywords: noise, modeling, GIS, rail

Procedia PDF Downloads 90
1432 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are explained first, and then the drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds as high as 200 or 300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots by classic underwater hydrodynamic techniques. However, to exceed 50 knots and reach speeds of about 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. When the torpedo moves in a cavitation envelope created by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, such torpedoes can be termed supercavitating torpedoes. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force calculation is performed for the disk-shaped cavitator with numerical techniques and compared against empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), the drag coefficient (C_D), the drag force (D), and the cavity wall velocity (U) are considered.
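
For a disk cavitator under natural cavitation, a commonly used empirical relation is C_D = C_D0(1 + σ) with C_D0 ≈ 0.82; the sketch below applies it with assumed ambient conditions and is indicative only:

```python
import math

RHO = 1025.0                    # sea-water density [kg/m^3]
P_AMB = 101325.0                # ambient pressure at shallow depth [Pa] (assumed)
P_CAVITY = 2340.0               # water vapour pressure near 20 degC [Pa]

def disk_cavitator_drag(speed_ms, disk_diameter_m, cd0=0.82):
    """Drag of a disk cavitator under natural supercavitation, using the
    classical empirical relation C_D = C_D0 * (1 + sigma); the
    coefficients are indicative only."""
    q = 0.5 * RHO * speed_ms ** 2                      # dynamic pressure
    sigma = (P_AMB - P_CAVITY) / q                     # cavitation number
    area = math.pi * disk_diameter_m ** 2 / 4.0
    return sigma, q * area * cd0 * (1.0 + sigma)

for knots in (100, 200, 300):
    v = knots * 0.5144
    sigma, drag = disk_cavitator_drag(v, disk_diameter_m=0.05)
    print(f"{knots:3d} kn: sigma = {sigma:.4f}, cavitator drag = {drag / 1000:.1f} kN")
```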

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 147
1431 A Gradient Orientation Based Efficient Linear Interpolation Method

Authors: S. Khan, A. Khan, Abdul R. Soomrani, Raja F. Zafar, A. Waqas, G. Akbar

Abstract:

This paper proposes a low-complexity image interpolation method. Image interpolation is used to convert a low-resolution video/image into a high-resolution video/image. The objective of a good interpolation method is to upscale an image in such a way that it provides good edge preservation at very low complexity, so that real-time processing of video frames becomes possible. However, low-complexity methods tend to provide real-time interpolation at the cost of blurring, jagging and other artifacts due to errors in slope calculation. Non-linear methods, on the other hand, provide better edge preservation, but at the cost of high complexity, and hence they can be considered very far from real-time interpolation. The proposed method is a linear method that uses gradient orientation for slope calculation, unlike conventional linear methods that use the contrast of nearby pixels. Prewitt edge detection is applied to separate uniform regions from edges. Simple line averaging is applied to unknown pixels in uniform regions, whereas unknown edge pixels are interpolated after calculating slopes from the gradient orientations of neighboring known edge pixels. As a post-processing step, a bilateral filter is applied to the interpolated edge regions in order to enhance the interpolated edges.
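
A strongly simplified sketch of the idea: Prewitt gradients classify each 2x2 neighbourhood, uniform regions get line averaging, and edge centres are averaged along the diagonal closest to the edge direction (the paper's slope tracing is finer-grained than this two-way choice):

```python
import numpy as np
from scipy.ndimage import prewitt

def upscale2x(img, edge_thresh=30.0):
    """2x upscaling sketch: uniform pixels get plain averaging, edge
    pixels are averaged along the diagonal implied by the gradient
    orientation. A simplification of the paper's scheme."""
    gx, gy = prewitt(img, axis=1), prewitt(img, axis=0)
    mag = np.hypot(gx, gy)
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), img.dtype)
    out[::2, ::2] = img                              # keep known pixels
    for y in range(h - 1):
        for x in range(w - 1):
            block = img[y:y + 2, x:x + 2]
            if mag[y, x] < edge_thresh:              # uniform region
                out[2 * y + 1, 2 * x + 1] = block.mean()
            elif gx[y, x] * gy[y, x] > 0:            # gradient near +45 deg:
                out[2 * y + 1, 2 * x + 1] = (block[0, 1] + block[1, 0]) / 2
            else:                                    # gradient near -45 deg
                out[2 * y + 1, 2 * x + 1] = (block[0, 0] + block[1, 1]) / 2
    # Remaining in-between pixels: simple line averaging of known pixels.
    out[1::2, ::2] = (out[:-1:2, ::2] + out[2::2, ::2]) / 2
    out[::2, 1::2] = (out[::2, :-1:2] + out[::2, 2::2]) / 2
    return out

img = np.tile(np.linspace(0, 255, 8), (8, 1))        # toy horizontal ramp
print(upscale2x(img).shape)
```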

Keywords: edge detection, gradient orientation, image upscaling, linear interpolation, slope tracing

Procedia PDF Downloads 232
1430 Shock Compressibility of Iron Alloys Calculated in the Framework of Quantum-Statistical Models

Authors: Maxim A. Kadatskiy, Konstantin V. Khishchenko

Abstract:

Iron alloys are widespread components in various types of structural materials that are exposed to intensive thermal and mechanical loads. Various quantum-statistical cell models with the self-consistent field approximation can be used to predict the behavior of these materials under extreme conditions. The application of these models becomes more valid the higher the temperature and the density of the matter. Results of Hugoniot calculations for iron alloys in the framework of three quantum-statistical models (the Thomas-Fermi model, the Thomas-Fermi model with quantum and exchange corrections, and the Hartree-Fock-Slater model) are presented. The results of the quantum-statistical calculations are compared with results from other reliable models and with available experimental data. Good agreement between the calculated results and the experimental data at terapascal pressures is revealed. Advantages and disadvantages of this approach are shown.
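
Whatever model supplies the equation of state, the Hugoniot itself follows from the Rankine-Hugoniot energy condition E − E₀ = ½(P + P₀)(V₀ − V). A sketch with a simple gamma-law EOS standing in for the quantum-statistical models:

```python
from scipy.optimize import brentq

def hugoniot_pressure(v, v0, p0, energy, e0):
    """Solve the Rankine-Hugoniot energy condition
        E(P, V) - E0 = 0.5 * (P + P0) * (V0 - V)
    for the pressure P at a given specific volume V. Any equation of
    state E(P, V) can be plugged in; in the paper's setting a
    quantum-statistical model would supply it."""
    f = lambda p: energy(p, v) - e0 - 0.5 * (p + p0) * (v0 - v)
    return brentq(f, p0, 1e6 * p0)

gamma = 5.0 / 3.0
energy = lambda p, v: p * v / (gamma - 1.0)    # gamma-law EOS (assumed stand-in)
v0, p0 = 1.0, 1.0                              # scaled initial state
e0 = energy(p0, v0)

for v in (0.8, 0.5, 0.27):                     # increasing compression
    print(f"V/V0 = {v:.2f} -> P/P0 = {hugoniot_pressure(v, v0, p0, energy, e0):.1f}")
```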

Keywords: alloy, Hugoniot, iron, terapascal pressure

Procedia PDF Downloads 313
1429 Calculation of Turbojet Performances for the Ideal Case

Authors: S. Bennoud, S. Hocine, H. Slme

Abstract:

Developments in turbine cooling technology play an important role in increasing the thermal efficiency and power output of recent gas turbines, in particular turbojets. Advanced turbojets operate at high temperatures to improve thermal efficiency and power output; these temperatures are far above the permissible metal temperatures. Therefore, there is a critical need to cool the blades in order to give them a maximum service life and safe operation. The main objective of this work is to calculate the turbojet performances, together with the turbine blade cooling. The developed application enables the calculation of turbojet performances at different altitudes in order to find a point of optimal use, making it possible to maintain the turbine blades at an acceptable maximum temperature and to limit the local temperature variations in order to guarantee their integrity over the whole lifespan of the engine.
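
For the ideal case, the cycle calculation reduces to a few isentropic relations; the sketch below evaluates the same engine at two ISA altitudes, with assumed cycle parameters, to illustrate the altitude study described above:

```python
# Ideal turbojet cycle: isentropic ram compression and compressor,
# constant-pressure combustion, turbine work balancing compressor work.
GAMMA = 1.4                      # ratio of specific heats for air

def ideal_turbojet(T0, mach, pi_c, t4):
    """Station total temperatures and the ideal thermal efficiency; T4 is
    the turbine-inlet temperature that blade cooling must sustain.
    All cycle parameters here are assumed, not taken from the paper."""
    t_t0 = T0 * (1 + (GAMMA - 1) / 2 * mach ** 2)    # ram total temperature
    t_t3 = t_t0 * pi_c ** ((GAMMA - 1) / GAMMA)      # after the compressor
    t_t5 = t4 - (t_t3 - t_t0)                        # turbine exit (work balance)
    eta_th = 1.0 - 1.0 / pi_c ** ((GAMMA - 1) / GAMMA)
    return t_t3, t_t5, eta_th

# Same engine evaluated at two altitudes (ISA ambient temperatures).
for alt, T0 in ((0, 288.15), (11000, 216.65)):
    t3, t5, eta = ideal_turbojet(T0, mach=0.8, pi_c=12.0, t4=1600.0)
    print(f"{alt:>5} m: T_t3 = {t3:6.1f} K, T_t5 = {t5:6.1f} K, eta_th = {eta:.3f}")
```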

Keywords: brayton cycle, turbine blades cooling, turbojet cycle, turbojet performances

Procedia PDF Downloads 197
1428 Kinoform Optimisation Using Gerchberg- Saxton Iterative Algorithm

Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin

Abstract:

Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created using different techniques, such as the detour-phase technique or direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. The disadvantage of this technique is that the reconstructed image often has poor quality, due to the limited dynamic range that can be recorded using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique: the phase of the original wavefront is recorded, but the amplitude is constrained to be constant. The original object does not need to exist physically, so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object, and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by this method.
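
The GS iteration itself is compact enough to state in full; the following sketch alternates between enforcing uniform amplitude in the kinoform plane and the target amplitude in the Fourier plane:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=100, seed=0):
    """Gerchberg-Saxton: iterate between the kinoform plane (uniform
    amplitude, free phase) and the Fourier plane (target amplitude,
    free phase) to find a phase-only hologram."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        field = np.exp(1j * phase)                   # enforce unit amplitude
        far = np.fft.fft2(field)                     # propagate to Fourier plane
        far = target_amplitude * np.exp(1j * np.angle(far))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(far))          # back-propagate, keep phase
    return phase

# Toy target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
kinoform = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * kinoform)))
corr = np.corrcoef(recon.ravel(), target.ravel())[0, 1]
print("correlation with target:", round(float(corr), 3))
```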

Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform

Procedia PDF Downloads 495
1427 Increase of Energy Efficiency by Means of Application of Active Bearings

Authors: Alexander Babin, Leonid Savin

Abstract:

In the present paper, increasing the energy efficiency of a thrust hybrid bearing with a central feeding chamber is considered. A mathematical model was developed to determine the pressure distribution and the reaction forces, based on the Reynolds equation and the equations for the static characteristics. The boundary problem of the pressure distribution calculation was solved using the finite-difference method. For various types of lubricants, geometries and operational characteristics, the axial gaps that provide the minimal friction coefficient can be determined. The next part of the study considers the application of servovalves to maintain the desired position of the rotor. The paper features the calculation results and an analysis of the influence of the operational and geometric parameters on the energy efficiency of mechatronic fluid-film bearings.
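
For a constant axial gap and steady conditions, the Reynolds equation for the pad pressure reduces to Laplace's equation, which keeps a finite-difference sketch short; the paper's model is more general:

```python
import numpy as np

# Finite-difference pressure field on a flat thrust pad with a central
# feeding chamber; the constant-gap, steady case reduces the Reynolds
# equation to Laplace's equation for the pressure.
N = 81
p = np.zeros((N, N))
chamber = slice(N // 2 - 8, N // 2 + 9)   # central feeding chamber region
P_SUPPLY = 1.0                            # scaled chamber pressure (assumed)

for _ in range(5000):                     # Jacobi-style relaxation
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                            p[1:-1, 2:] + p[1:-1, :-2])
    p[chamber, chamber] = P_SUPPLY        # feeding-chamber boundary condition
    p[0, :] = p[-1, :] = p[:, 0] = p[:, -1] = 0.0   # ambient at pad edges

load = p.sum()                            # reaction force ~ integral of pressure
print(f"scaled load-carrying capacity: {load:.1f}")
```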

Keywords: active bearings, energy efficiency, mathematical model, mechatronics, thrust multipad bearing

Procedia PDF Downloads 260
1426 Lattice Network Model for Calculation of Eddy Current Losses in a Solid Permanent Magnet

Authors: Jan Schmidt, Pierre Köhring

Abstract:

Permanently excited machines are built with magnets made of highly energetic magnetic materials. Inherently, the permanent magnets warm up while the machine is operating. With increasing temperature, the electromotive force and hence the efficiency decrease. The reasons for this are slot harmonics and distorted armature currents arising from frequency inverter operation. To avoid demagnetization of the permanent magnets, it is necessary to ensure that they do not heat up excessively. Demagnetization of permanent magnets is irreversible, and a breakdown of the electrical machine is then inevitable. For the design of an electrical machine, knowledge of the heating behavior of the permanent magnet under operating conditions is of crucial importance. Therefore, a calculation model is presented with which the machine designer can easily calculate the eddy current losses in the magnetic material.

Keywords: analytical model, eddy current, losses, lattice network, permanent magnet

Procedia PDF Downloads 391
1425 Effective Slab Width for Beam-End Flexural Strength of Composite Frames with Circular-Section Columns

Authors: Jizhi Zhao, Qiliang Zhou, Muxuan Tao

Abstract:

The calculation of the ultimate loading capacity of composite frame beams is an important step in the design of composite frame structural systems. Currently, the plastic limit theory is mainly used for this calculation in the codes adopted by many countries; however, the effective slab width recommended in most codes is based on the elastic theory, which does not accurately reflect the complex stress mechanism at the beam-column joints in the ultimate loading state. Therefore, the authors’ research group put forward the Compression-on-Column-Face mechanism and Tension-on-Transverse-Beam mechanism to explain the mechanism in the ultimate loading state. Formulae are derived for calculating the effective slab width in composite frames with rectangular/square-section columns under ultimate lateral loading. Moreover, this paper discusses the calculation method of the effective slab width for the beam-end flexural strength of composite frames with circular-section columns. The proposed design formula is suitable for exterior and interior joints. Finally, this paper compares the proposed formulae with available formulae in other literature, current design codes, and experimental results, providing the most accurate results to predict the effective slab width and ultimate loading capacity.

Keywords: composite frame structure, effective slab width, circular-section column, design formulae, ultimate loading capacity

Procedia PDF Downloads 102
1424 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia

Authors: Suzana Ramli, Wardah Tahir

Abstract:

Estimation of the surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and appropriate tool for simulating hydrological models due to its realistic treatment of topography. The paper discusses the calculation of the surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables the map intersection of soil type and land use, which produces the curve number map. The results show good correlation between simulated and observed values, with R² greater than 0.7. Acceptable performance on statistical measures, namely the mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
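
The Curve Number method itself is a closed-form calculation once CN is known from the soil/land-use intersection; a sketch with an assumed composite CN:

```python
def scs_runoff_depth(p_mm, cn):
    """SCS Curve Number runoff depth Q [mm] for storm rainfall P [mm]:
        S = 25400 / CN - 254,   Ia = 0.2 * S,
        Q = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    return (p_mm - ia) ** 2 / (p_mm - ia + s) if p_mm > ia else 0.0

# In the GIS workflow, CN comes from intersecting soil-type and land-use
# maps; here a composite CN for an urbanized basin is simply assumed.
for p in (25.0, 50.0, 100.0):
    print(f"P = {p:5.1f} mm -> Q = {scs_runoff_depth(p, cn=80.0):5.1f} mm")
```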

Keywords: surface runoff, geographic information system, curve number method, environment

Procedia PDF Downloads 247
1423 Application of Double Side Approach Method on Super Elliptical Winkler Plate

Authors: Hsiang-Wen Tang, Cheng-Ying Lo

Abstract:

In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates is the motivation for this study, and we use the double side approach method to solve this problem because of its superior ability to treat problems with complex boundary shapes efficiently. The double side approach method has the advantages of high accuracy, an easy calculation procedure and a low required calculation load. Most important of all, it can give the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.

Keywords: super elliptical Winkler plate, double side approach method, error bound, mechanics

Procedia PDF Downloads 325