Search results for: pinch point analysis.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9971

7601 A Parametric Study: Frame Analysis Method for Masonry Arch Bridges

Authors: M. E. Rahman, D. Sujan, V. Pakrashi, P. Fanning

Abstract:

The behaviour of masonry arch bridges is widely considered difficult to predict because the condition of a given bridge is rarely known in detail. Established assessment methods for masonry arch bridges are MEXE, ARCHIE, RING and the frame analysis method. The material properties of the masonry and fill material are extremely difficult to determine accurately. Consequently, it is necessary to examine the effect of the load dispersal angle through the fill material, the effect of variations in the stiffness of the masonry, and the tensile and compressive strengths of the masonry-mortar continuum. It is also important to understand the effect of the fill material on the load dispersal angle in order to determine their influence on assessment ratings. In this paper, a series of parametric studies is carried out to examine the sensitivity of assessment ratings to the various sets of input data required by the frame analysis method.

Keywords: Arch Bridge, Frame Analysis Method, Masonry

PDF Downloads: 2187
7600 Functional Near Infrared Spectroscope for Cognition Brain Tasks by Wavelets Analysis and Neural Networks

Authors: Truong Quang Dang Khoa, Masahiro Nakagawa

Abstract:

Research on brain-computer interfaces (BCI) has increased in recent years. The functional near infrared spectroscope (fNIRs) is one of the latest technologies, using light in the near-infrared range to measure brain activity. Because near-infrared technology allows the design of safe, portable, wearable, non-invasive and wireless monitoring systems, fNIRs monitoring of brain hemodynamics can be valuable in helping to understand brain tasks. In this paper, we present results of fNIRs signal analysis indicating that there exist distinct patterns of hemodynamic responses that distinguish brain tasks, a step toward developing a BCI. We applied two different mathematical tools: wavelet analysis for preprocessing, as signal filters and for feature extraction, and neural networks as a classification module for cognitive brain tasks. We also compare the approach with other methods; our proposal performs better, with an average classification accuracy of 99.9%.

Keywords: functional near infrared spectroscope (fNIRs), brain-computer interface (BCI), wavelets, neural networks, brain activity, neuroimaging.

PDF Downloads: 2037
7599 Uncertainty Analysis of a Hardware in Loop Setup for Testing Products Related to Building Technology

Authors: Balasundaram Prasaant, Ploix Stephane, Delinchant Benoit, Muresan Cristian

Abstract:

Hardware-in-the-loop (HIL) testing is used to test and validate products, particularly in building technology, where products must above all be tested for their efficiency. The test rig in the HIL simulator may contribute uncertainties to the measured efficiency, including both physical and scenario-based uncertainties. In this paper, a simple uncertainty analysis framework for an HIL setup is presented, considering only the physical uncertainties. The HIL setup is modeled entirely in Dymola. The uncertainty sources are identified from available knowledge of the components and from expert knowledge. Monte Carlo simulation is used for the propagation of uncertainty, as it is reliable and easy to use. The article shows how an HIL setup can be modeled and how uncertainty propagation can be performed on it; such an approach is not common in building energy analysis.
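The Monte Carlo propagation step described above can be sketched as follows. The efficiency model, parameter names, and distributions below are hypothetical stand-ins for the authors' Dymola model, chosen only to illustrate the procedure.

```python
import random
import statistics

def efficiency(supply_temp_c, flow_rate_kg_s, heat_loss_coeff):
    """Hypothetical efficiency model standing in for the Dymola HIL model:
    heat delivered to a 40 degC return loop minus rig losses to 20 degC ambient."""
    delivered = flow_rate_kg_s * 4186.0 * (supply_temp_c - 40.0)
    losses = heat_loss_coeff * (supply_temp_c - 20.0)
    return (delivered - losses) / delivered

random.seed(1)
samples = []
for _ in range(10_000):
    # Sample each uncertain physical input from its assumed distribution.
    t = random.gauss(60.0, 1.5)    # supply-temperature sensor uncertainty
    q = random.gauss(0.20, 0.01)   # flow-meter uncertainty (kg/s)
    u = random.uniform(8.0, 12.0)  # poorly known heat-loss coefficient (W/K)
    samples.append(efficiency(t, q, u))

mean_eff = statistics.mean(samples)
stdev_eff = statistics.stdev(samples)
```

The spread of `samples` then quantifies how much of the measured efficiency's variability is attributable to the rig's physical uncertainties.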

Keywords: Energy in Buildings, Hardware in Loop, Modelica (Dymola), Monte Carlo Simulation, Uncertainty Propagation.

PDF Downloads: 575
7598 Connected Vertex Cover in 2-Connected Planar Graph with Maximum Degree 4 is NP-complete

Authors: Priyadarsini P. L. K, Hemalatha T.

Abstract:

This paper proves that the problem of finding a connected vertex cover in a 2-connected planar graph (CVC-2) with maximum degree 4 is NP-complete. The motivation for proving this result is to give a shorter and simpler proof of the NP-completeness of the TRA-MLC (Top Right Access point Minimum-Length Corridor) problem [1], by finding a reduction from CVC-2. TRA-MLC has many applications in laying optical fibre cables for data communication and electrical wiring in floor plans. The problem of finding a connected vertex cover in any planar graph (CVC) with maximum degree 4 is NP-complete [2]. We first show that CVC-2 belongs to NP, and then we find a polynomial reduction from CVC to CVC-2. Let a graph G0 and an integer K form an instance of CVC, where G0 is a planar graph and K is an upper bound on the size of the connected vertex cover in G0. We construct a 2-connected planar graph, say G, by identifying the blocks and cut vertices of G0, and then finding the planar representation of all the blocks of G0, leading to a plane graph G1. We replace the cut vertices with cycles in such a way that the resultant graph G is a 2-connected planar graph with maximum degree 4. We consider L = K - 2t + 3 * sum_{i=1..t} d_i, where t is the number of cut vertices in G1 and d_i is the number of blocks for which the i-th cut vertex is common. We prove that G has a connected vertex cover of size at most L if and only if G0 has a connected vertex cover of size at most K.

Keywords: NP-complete, 2-Connected planar graph, block, cut vertex

PDF Downloads: 2004
7597 Optimal Combination for Modal Pushover Analysis by Using Genetic Algorithm

Authors: K. Shakeri, M. Mohebbi

Abstract:

In order to consider the effects of higher modes in pushover analysis, several multi-modal pushover procedures have been presented in recent years. In these methods, the responses of the considered modes are combined by the square-root-of-sum-of-squares (SRSS) rule, although the application of elastic modal combination rules in the inelastic range is no longer valid. In this research, the feasibility of defining an efficient alternative combination method is investigated. Two steel moment-frame buildings, denoted SAC-9 and SAC-20, are considered under ten earthquake records. The nonlinear responses of the structures are estimated by the direct algebraic combination of the weighted responses of the separate modes. The weight of each mode is defined so that the combined response has minimum error with respect to the nonlinear time-history analysis. The genetic algorithm (GA) is used to minimize the error and optimize the weight factors. The optimal factors obtained for each mode in the different cases are compared to identify a single set of appropriate weight factors for each mode across all cases.
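The weighted algebraic combination at the heart of this procedure can be illustrated with a toy example. The modal responses, the target value, and the coarse grid search standing in for the GA are all invented for illustration.

```python
import itertools

# Hypothetical peak responses of three modes, and a "target" value standing
# in for the nonlinear time-history result (all numbers are invented).
modal = [0.80, 0.35, 0.15]
target = 1.05

def srss(responses):
    """Elastic square-root-of-sum-of-squares combination rule."""
    return sum(r * r for r in responses) ** 0.5

def weighted(responses, weights):
    """Direct algebraic combination of weighted modal responses."""
    return sum(w * r for w, r in zip(weights, responses))

# An exhaustive search over a coarse weight grid stands in for the GA.
best_w, best_err = None, float("inf")
grid = [i / 10 for i in range(16)]
for w in itertools.product(grid, repeat=3):
    err = abs(weighted(modal, w) - target)
    if err < best_err:
        best_w, best_err = w, err
```

With these numbers, the optimized weights reproduce the target far more closely than SRSS; the real procedure repeats this fit over structural responses for many records and then looks for weights that work across all cases.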

Keywords: Genetic Algorithm, Modal Pushover, Optimal weight.

PDF Downloads: 1804
7596 The Behavior of Dam Foundation Reinforced by Stone Columns: Case Study of Kissir Dam-Jijel

Authors: Toufik Karech, Abderahmen Benseghir, Tayeb Bouzid

Abstract:

This work presents a 2D numerical simulation of an earth dam to assess the behavior of its foundation after treatment by stone columns. This treatment aims to improve the bearing capacity, increase the mechanical properties of the soil, accelerate consolidation, reduce settlements and eliminate the liquefaction phenomenon in case of seismic excitation. To evaluate the pore pressures, the position of the phreatic line and the flow network were defined, and a seepage analysis was performed with the software MIDAS Soil Works. The consolidation calculation is performed through a simulation of the actual construction stages of the dam. These analyses were performed using the Mohr-Coulomb soil model, and the results are compared with the actual measurements of settlement gauges installed in the dam. An analysis of the bearing capacity was conducted to show the role of stone columns in improving the bearing capacity of the foundation.

Keywords: Earth dam, dam foundation, numerical simulation, stone columns, seepage analysis, consolidation, bearing capacity.

PDF Downloads: 1131
7595 Fragility Analysis of Weir Structure Subjected to Flooding Water Damage

Authors: Oh Hyeon Jeon, WooYoung Jung

Abstract:

In this study, seepage analysis was performed for the water-level difference between the upstream and downstream sides of a weir structure, for the safety evaluation of the structure against flooding. The Monte Carlo simulation method was employed, considering the probability distribution of the adjacent ground parameter, i.e., the permeability coefficient of the weir foundation. The weir structure was modeled using a commercially available finite element program (ABAQUS). Based on this model, the characteristics of water seepage during flooding were determined at each water level, with consideration of the uncertainty of the corresponding permeability coefficient. Subsequently, a fragility function was constructed from the responses of the numerical analysis; this fragility function can be used to determine the weaknesses of a weir structure subjected to flooding disaster. It can also serve as reference data for comprehensively predicting the probability of failure and the degree of damage of a weir structure.
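The fragility construction can be sketched as follows. The seepage response, the critical exit gradient, and the permeability distribution are illustrative assumptions standing in for the ABAQUS model, not values from the study.

```python
import math
import random

def exit_gradient(head_m, permeability_m_s):
    """Hypothetical stand-in for the ABAQUS seepage response: the exit
    hydraulic gradient grows with head and mildly with permeability."""
    return 0.1 * head_m * (1.0 + 0.3 * math.log10(permeability_m_s / 1e-6))

CRITICAL_GRADIENT = 1.0  # illustrative piping criterion

random.seed(0)

def fragility(head_m, n=20_000):
    """P(exit gradient exceeds the critical value) at one water level."""
    exceed = 0
    for _ in range(n):
        # Lognormal permeability: median 1e-6 m/s, log10-std 0.5 (assumed).
        k = 10.0 ** random.gauss(-6.0, 0.5)
        if exit_gradient(head_m, k) > CRITICAL_GRADIENT:
            exceed += 1
    return exceed / n

curve = {head: fragility(head) for head in (6.0, 10.0, 14.0)}
```

Plotting `curve` against the water level gives the fragility function; in the study, each sample would instead be a finite element seepage run.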

Keywords: Weir structure, seepage, flood disaster, fragility, probabilistic risk assessment, Monte Carlo Simulation, permeability coefficient.

PDF Downloads: 1162
7594 Comparison of GSA, SA and PSO Based Intelligent Controllers for Path Planning of Mobile Robot in Unknown Environment

Authors: P. K. Panigrahi, Saradindu Ghosh, Dayal R. Parhi

Abstract:

Nowadays, autonomous mobile robots have found applications in diverse fields. An autonomous robot system must be able to behave in an intelligent manner to deal with complex and changing environments. This work evaluates the performance of path planning and navigation of an autonomous mobile robot using Gravitational Search Algorithm (GSA), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) based intelligent controllers in an unstructured environment. The approach finds not only a valid collision-free path but also an optimal one. The main aim of the work is to minimize the length of the path and the duration of travel from a starting point to a target while moving in an unknown environment with obstacles, without collision. Finally, a comparison is made between the three controllers; the path length and travel time achieved by the robot using GSA are found to be better than those of the SA- and PSO-based controllers for the same task.

Keywords: Autonomous Mobile Robot, Gravitational Search Algorithm, Particle Swarm Optimization, Simulated Annealing Algorithm.

PDF Downloads: 3120
7593 Service Business Model Canvas: A Boundary Object Operating as a Business Development Tool

Authors: Taru Hakanen, Mervi Murtonen

Abstract:

This study aims to increase understanding of the transition of business models in servitization. The significance of service in all business has increased dramatically during the past decades. Service-dominant logic (SDL) describes this change in the economy and questions the goods-dominant logic (GDL) on which business has primarily been based in the past. The business model canvas is one of the most cited and used tools for defining and developing business models. The starting point of this paper lies in the notion that the traditional business model canvas is inherently goods-oriented and best suited to product-based business. However, the basic differences between goods and services necessitate changes in business model representations when proceeding in servitization. Therefore, new knowledge is needed on how the conception of the business model, and the business model canvas as its representation, should be altered in servitized firms in order to better serve business developers and interfirm co-creation. Compared to products, services are intangible and are co-produced between the supplier and the customer. Value is always co-created in interaction between a supplier and a customer, and customer experience primarily depends on how well this interaction succeeds; the role of service experience is therefore even stronger in service business than in product business. This paper provides business model developers with a service business model canvas, which takes into account the intangible, interactive, and relational nature of service. The study employs a design science approach that contributes to theory development via design artifacts, utilizing qualitative data gathered in workshops with ten companies from various industries. In particular, key differences between GDL- and SDL-based business models are identified when an industrial firm proceeds in servitization.
As a result of the study, an updated version of the business model canvas is provided, based on service-dominant logic. The service business model canvas ensures a stronger customer focus and includes aspects salient for services, such as interaction between companies, service co-production, and customer experience. It can be used for the analysis and development of a company's current service business model or for designing a new business model. It facilitates customer-focused new service design and service development, aids in the identification of development needs, and helps create a common view of the business model. The service business model canvas can therefore be regarded as a boundary object, which facilitates a common understanding of the business model among the several actors involved. The study contributes to the business model and service business development disciplines by providing a managerial tool for practitioners in service development. It also provides research insight into how servitization challenges companies' business models.

Keywords: Boundary object, business model canvas, managerial tool, service-dominant logic.

PDF Downloads: 3351
7592 Evaluation and Analysis of the Secure E-Voting Authentication Preparation Scheme

Authors: Nidal F. Shilbayeh, Reem A. Al-Saidi, Ahmed H. Alsswey

Abstract:

In this paper, we present an evaluation and analysis of the E-Voting Authentication Preparation Scheme (EV-APS). EV-APS applies modified security aspects that enhance the security measures and strengthen the protection, confidentiality, non-repudiation and authentication requirements. These modified security aspects include the Kerberos authentication protocol, the PVID scheme, responder certificate validation, and the converted Ferguson e-cash protocol. The authentication and privacy requirements have been evaluated and proved: authentication guarantees that only eligible and authorized voters are permitted to vote, and privacy guarantees that all votes are kept secret. An evaluation and analysis of several of these security requirements is given. The modified aspects help filter unauthorized votes from the counter buffer by ensuring that only authorized voters are permitted to vote.

Keywords: E-Voting preparation stage, blind signature protocol, nonce-based authentication scheme, Kerberos authentication protocol, pseudo voter identity (PVID) scheme.

PDF Downloads: 1618
7591 The Effect of Confinement Shapes on Over-Reinforced HSC Beams

Authors: Ross Jeffry, Muhammad N. S. Hadi

Abstract:

High strength concrete (HSC) provides high strength but lower ductility than normal strength concrete, and this low ductility limits the benefit of using HSC in building safe structures. On the other hand, when designing reinforced concrete beams, designers have to limit the amount of tensile reinforcement to prevent brittle failure of the concrete, so the full potential of the steel reinforcement cannot be achieved. This paper presents the idea of confining the concrete in the compression zone so that the HSC is in a state of triaxial compression, which leads to improvements in strength and ductility. Five HSC beams were cast and tested. The cross section of the beams was 200 x 300 mm, with a length of 4 m and a clear span of 3.6 m, subjected to four-point loading, with emphasis placed on the midspan deflection. The first beam served as a reference beam. The remaining beams had different tensile reinforcement, and the confinement shapes were varied to gauge their effectiveness in improving the strength and ductility of the beams. The compressive strength of the concrete was 85 MPa; the tensile strength of the steel was 500 MPa, and that of the stirrups and helices was 250 MPa. The results of testing the five beams proved that placing helices of different diameters in the compression zone of reinforced concrete beams improves their strength and ductility.

Keywords: Confinement, ductility, high strength concrete, reinforced concrete beam.

PDF Downloads: 2239
7590 Analysis of Possible Causes of Fukushima Disaster

Authors: Abid Hossain Khan, Syam Hasan, M. A. R. Sarkar

Abstract:

The Fukushima disaster is one of the most publicly exposed accidents at a nuclear facility and has changed the outlook of people towards nuclear power. Some have used it as an example to portray nuclear energy as an unsafe source, while others have tried to find the real reasons behind the accident. Many papers have tried to shed light on the possible causes, some purely based on assumptions and others relying on rigorous data analysis. To the best of our knowledge, none of these works can say with absolute certainty that a single prominent reason paved the way to this unexpected incident. This paper attempts to compile the apparent reasons behind the Fukushima disaster and tries to analyze them and identify the most likely one.

Keywords: Fuel meltdown, Fukushima disaster, manmade calamity, nuclear facility, tsunami.

PDF Downloads: 2184
7589 Visualization of Quantitative Thresholds in Stocks

Authors: Siddhant Sahu, P. James Daniel Paul

Abstract:

Technical analysis, comprising various technical indicators, is a holistic way of representing the price movement of stocks in the market. Various forms of indicators have evolved from the primitive ones of past decades. There have been many attempts to introduce volume as a major determinant of strong patterns in market forecasting; the law of demand defines the relationship between volume and price, and most traders are familiar with the volume game. Adding the time dimension to the law of demand provides a different visualization of the theory. In doing so, it was found that there are different thresholds in the market for different companies, and these thresholds have a significant influence on price. This article attempts to determine these thresholds for companies using three-dimensional graphs, for portfolio optimization. It also emphasizes the importance of volume as a key factor in predicting strong price movements and bullish and bearish markets. It uses a comprehensive data set of major companies which form a large share of the Indian automotive sector, used here as an illustration.

Keywords: Technical Analysis, Expert System, Law of demand, Stocks, Portfolio Analysis, Indian Automotive Sector.

PDF Downloads: 2087
7588 A Differential Calculus Based Image Steganography with Crossover

Authors: Srilekha Mukherjee, Subha Ash, Goutam Sanyal

Abstract:

Information security plays a major role in uplifting the standard of secured communication via global media. In this paper, we suggest a technique of encryption followed by insertion before transmission, implementing two different concepts to carry out these tasks. We use the two-point crossover technique of the genetic algorithm to facilitate the encryption process. For each of the uniquely identified rows of pixels, different mathematical methodologies are applied for several condition checks, in order to figure out all the parent pixels on which we perform the crossover operation. This is done by selecting two crossover points within the pixels, thereby producing the newly encrypted child pixels and hence the encrypted cover image. In the next stage, the first- and second-order derivative operators are evaluated to increase security and robustness. The last stage ensures reapplication of the crossover procedure to form the final stego-image. The complexity of the system as a whole is huge, thereby dissuading third-party interference. The embedding capacity is also very high, so a larger amount of secret image information can be hidden. The imperceptibility of the obtained stego-image clearly proves the proficiency of this approach.
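The two-point crossover step can be sketched as follows. The pixel rows and crossover points are invented for illustration, and the paper's condition checks for selecting parent pixels are omitted.

```python
import random

def two_point_crossover(parent_a, parent_b, p1, p2):
    """Swap the pixel segment between the two crossover points."""
    child_a = parent_a[:p1] + parent_b[p1:p2] + parent_a[p2:]
    child_b = parent_b[:p1] + parent_a[p1:p2] + parent_b[p2:]
    return child_a, child_b

random.seed(7)
row_a = [10, 20, 30, 40, 50, 60]  # one row of cover-image pixel values
row_b = [15, 25, 35, 45, 55, 65]  # its paired parent row

p1, p2 = sorted(random.sample(range(1, len(row_a)), 2))
enc_a, enc_b = two_point_crossover(row_a, row_b, p1, p2)

# The swap is its own inverse: crossing over again with the same points
# restores the rows, which is what makes recovery possible at the receiver.
dec_a, dec_b = two_point_crossover(enc_a, enc_b, p1, p2)
```
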

Keywords: Steganography, Crossover, Differential Calculus, Peak Signal to Noise Ratio, Cross-correlation Coefficient.

PDF Downloads: 1397
7587 Knowledge Based Wear Particle Analysis

Authors: Mohammad S. Laghari, Qurban A. Memon, Gulzar A. Khuwaja

Abstract:

This paper describes a knowledge based system for the analysis of microscopic wear particles. Wear particles contained in lubricating oil carry important information concerning machine condition, in particular the state of wear. Experts (tribologists) in the field extract this information to monitor the operation of the machine and ensure safety, efficiency, quality, productivity, and economy of operation. This procedure is not always objective, and it can also be expensive. The aim is to classify these particles according to their morphological attributes of size, shape, edge detail, thickness ratio, color, and texture, and by using this classification to predict wear failure modes in engines and other machinery. The attribute knowledge links human expertise to the devised Knowledge Based Wear Particle Analysis System (KBWPAS). The system provides an automated and systematic approach to wear particle identification which is linked directly to the wear processes and modes that occur in machinery. This brings consistency to wear judgment and prediction, which leads to standardization and less dependence on tribologists.

Keywords: Computer vision, knowledge based systems, morphology, wear particles.

PDF Downloads: 1744
7586 First-Principle Investigation of the Electronic Band Structure and Dielectric Response Function of ZnIn2Se4 and ZnIn2Te4

Authors: Nnamdi N. Omehe, Chibuzo Emeruwa

Abstract:

ZnIn2Se4 and ZnIn2Te4 are vacancy defect materials whose properties have been investigated within the Density Functional Theory (DFT) framework. The pseudopotential method, in conjunction with the LDA+U technique and the Projector Augmented Wave (PAW) method, was used to calculate the electronic band structure, the total density of states, and the partial density of states, while a norm-conserving pseudopotential was used to calculate the dielectric response function with a scissors shift. Both ZnIn2Se4 and ZnIn2Te4 are predicted to be semiconductors, with energy band gaps of 1.66 eV and 1.33 eV respectively, and both have a direct energy band gap at the gamma point of high symmetry. The topmost valence subband of ZnIn2Se4 and ZnIn2Te4 has an energy width of 5.7 eV and 6.0 eV respectively. The partial density of states (PDOS) calculations show that for ZnIn2Se4 the top of the valence band is dominated by the Se-4p orbital, while the bottom of the conduction band is composed of In-5p, In-5s, and Zn-4s states. The PDOS for ZnIn2Te4 shows that the top of the valence band consists mostly of Te-5p states, while its conduction band bottom is composed mainly of Zn-4s, Te-5p, Te-5s, and In-5s states. The dielectric response function calculation yielded a static dielectric constant ε(0) of 11.9 and 36 for ZnIn2Se4 and ZnIn2Te4 respectively.

Keywords: Optoelectronic, Dielectric Response Function, LDA+U, band structure calculation.

PDF Downloads: 108
7585 Daemon-Based Distributed Deadlock Detection and Resolution

Authors: Z. RahimAlipour, A. T. Haghighat

Abstract:

Detecting deadlock is one of the important problems in distributed systems, and different solutions have been proposed for it. Among the many deadlock detection algorithms, edge-chasing has been the most widely used. In an edge-chasing algorithm, a special message called a probe is made and sent along dependency edges; when the initiator of a probe receives the probe back, the existence of a deadlock is revealed. But these algorithms are not problem-free. One of the problems associated with them is that they cannot detect some deadlocks and may even identify false deadlocks. A key point not mentioned in the literature is how a process that is waiting to obtain its required resources, and whose execution has been blocked, can actually respond to probe messages in the system. The question of which process should be victimized in order to achieve better performance when multiple cycles pass through one single process has also received little attention. In this paper, one of the basic concepts of the operating system, the daemon, is used to solve these problems. The proposed algorithm sends probe messages to the mandatory daemons and collects enough information to effectively identify and resolve multi-cycle deadlocks in distributed systems.
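The core probe idea can be sketched in a centralized form as follows. In a real distributed system the probes are messages exchanged between sites; the wait-for graph here is invented for illustration, and the paper's daemon-based bookkeeping is not modeled.

```python
# Wait-for graph: each process maps to the processes it waits on
# (an invented example: 1 -> 2 -> 3 -> 1 forms a deadlock cycle).
wait_for = {1: [2], 2: [3], 3: [1, 4], 4: []}

def edge_chasing(initiator, graph):
    """Chase probes along wait-for edges; a probe that returns to its
    initiator reveals a cycle, i.e. a deadlock."""
    seen = set()
    stack = list(graph.get(initiator, []))
    while stack:
        holder = stack.pop()
        if holder == initiator:
            return True  # probe came back to its sender
        if holder not in seen:
            seen.add(holder)
            stack.extend(graph.get(holder, []))
    return False
```

Process 1 sits on the 1 -> 2 -> 3 -> 1 cycle, so its probe returns; process 4 waits on nothing and is not deadlocked.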

Keywords: Distributed system, distributed deadlock detection and resolution, daemon, false deadlock.

PDF Downloads: 1936
7584 Physicochemical Characterization of Waste from Vegetal Extracts Industry for Use as Briquettes

Authors: Maíra O. Palm, Cintia Marangoni, Ozair Souza, Noeli Sellin

Abstract:

Wastes from a vegetal extracts industry (cocoa, oak, guarana and mate) were characterized by particle size, proximate and ultimate analysis, lignocellulosic fractions, higher heating value, thermal analysis (thermogravimetric analysis, TGA, and differential thermal analysis, DTA) and energy density, to evaluate their potential as biomass in the form of briquettes for power generation. All wastes presented particle sizes adequate for briquette production. The wastes showed high moisture content, requiring prior drying for use as briquettes. Cocoa and oak wastes had the highest volatile matter contents, with maximum mass loss at 310 ºC and 450 ºC, respectively. The solvents used in the aroma extraction process influenced the moisture content of the wastes, which was higher for mate because water was used as the solvent. All wastes showed insignificant mass loss above 565 °C, hence resulting in low ash content. High carbon and hydrogen contents and low sulfur and nitrogen contents were observed, ensuring low generation of sulfur and nitrogen oxides. Mate and cocoa exhibited the highest carbon and lignin contents and heating values. The dried wastes had high heating values, from 17.1 MJ/kg to 20.8 MJ/kg. The results indicate the energy potential of these wastes for use as fuel in power generation.

Keywords: Agro-industrial waste, biomass, briquettes, combustion.

PDF Downloads: 1038
7583 Selecting an Advanced Creep Model or a Sophisticated Time-Integration? A New Approach by Means of Sensitivity Analysis

Authors: Holger Keitel

Abstract:

The prediction of long-term deformations of concrete and reinforced concrete structures has been a field of extensive research, and several different creep models have been developed so far. Most of these models were developed for constant concrete stresses; thus, in the case of varying stresses, a superposition principle or a time-integration method is necessary. Nowadays, when modeling concrete creep, the engineering focus is rather on applying sophisticated time-integration methods than on choosing the more appropriate creep model. For this reason, this paper presents a method to quantify the uncertainties of creep prediction originating from the selection of the creep model or of the time-integration method. By adapting variance-based global sensitivity analysis, a methodology is developed to quantify the influence of creep model selection and of the choice of time-integration method. Applying the developed method, general recommendations on how to model creep behavior under varying stresses are given.
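Variance-based sensitivity analysis of discrete modelling choices can be sketched as follows. The response function and the relative influence of the two choices are made-up illustrations of the technique, not results from the paper.

```python
import random
import statistics

random.seed(3)

def long_term_deflection(model_choice, integration_choice):
    """Toy response: predicted deflection as a function of two discrete
    modelling decisions (all numbers are invented for illustration)."""
    model_effect = {0: 1.00, 1: 1.25, 2: 1.40}[model_choice]
    integration_effect = {0: 0.00, 1: 0.05}[integration_choice]
    return model_effect + integration_effect + random.gauss(0.0, 0.02)

# Brute-force first-order (Sobol-style) sensitivity indices.
choices = [(random.randrange(3), random.randrange(2)) for _ in range(30_000)]
ys = [long_term_deflection(m, i) for m, i in choices]
mean_y = statistics.mean(ys)
var_y = statistics.pvariance(ys)

def first_order(position):
    """Variance of the conditional mean of y given one factor, over Var(y)."""
    groups = {}
    for factors, y in zip(choices, ys):
        groups.setdefault(factors[position], []).append(y)
    return sum(
        (len(g) / len(ys)) * (statistics.mean(g) - mean_y) ** 2
        for g in groups.values()
    ) / var_y

s_model = first_order(0)        # influence of the creep-model selection
s_integration = first_order(1)  # influence of the time-integration choice
```

With these invented numbers the model choice dominates; the paper's point is precisely that such indices reveal which modelling decision deserves the engineering attention.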

Keywords: Concrete creep models, time-integration methods, sensitivity analysis, prediction uncertainty.

PDF Downloads: 1538
7582 Measurement and Analysis of Temperature Effects on Box Girders of Continuous Rigid Frame Bridges

Authors: Bugao Wang, Weifeng Wang, Xianwei Zeng

Abstract:

Research on the general rules of temperature-field variation and its effects on bridges during construction is necessary. This paper investigated these rules and their effects on a bridge using on-site measurement and computational analysis, with Guanyinsha Bridge as a case study. The temperature field was simulated in the analyses, and the effects of boundary conditions such as solar radiation and wind speed, and of model parameters such as the heat factor and specific heat, on the temperature field were investigated. Recommended values for these parameters are proposed. The simulated temperature field matches the measured observations with high accuracy, and the stresses and deflections of the bridge computed with the simulated temperature field match the measured values as well. In conclusion, the temperature effect analysis of a reinforced concrete box girder can be conducted directly from reliable weather data for the area concerned.

Keywords: continuous rigid frame bridge, temperature effect analysis, temperature field, temperature field simulation

PDF Downloads: 2581
7581 A Numerical Approach for Static and Dynamic Analysis of Deformable Journal Bearings

Authors: D. Benasciutti, M. Gallina, M. Gh. Munteanu, F. Flumian

Abstract:

This paper presents a numerical approach for the static and dynamic analysis of hydrodynamic radial journal bearings. In the first part, the effect of shaft and housing deformability on the pressure distribution within the oil film is investigated. An iterative algorithm that couples the Reynolds equation with a plane finite element (FE) structural model is solved, with viscosity-to-pressure dependency (Vogel-Barus equation) also included. The deformed lubrication gap and the overall stress state are obtained. Numerical results are presented with reference to a typical journal bearing configuration at two different inlet oil temperatures. The results show the great influence of the structural deformation of the bearing components on the oil pressure distribution, compared with results for ideally rigid components. In the second part, a numerical approach based on the perturbation method is used to compute the stiffness and damping matrices that characterize the dynamic behavior of the journal bearing.

Keywords: Journal bearing, finite elements, deformation, dynamic analysis

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2037
7580 Quantifying and Adjusting the Effects of Publication Bias in Continuous Meta-Analysis

Authors: N.R.N. Idris

Abstract:

This study uses simulated meta-analyses to assess the effects of publication bias on meta-analysis estimates and to evaluate the efficacy of the trim and fill method in adjusting for these biases. The estimated effect sizes and standard errors were evaluated in terms of statistical bias and coverage probability. The results demonstrate that if publication bias is not adjusted for, it can lead to up to 40% bias in the treatment effect estimates. Utilization of the trim and fill method can reduce the bias in the overall estimate by more than half. The method is optimal in the presence of moderate underlying bias but has minimal effect in the presence of low or severe publication bias. Additionally, the trim and fill method improves the coverage probability by more than half when subjected to the same level of publication bias as the unadjusted data. The method, however, tends to produce false positive results and will incorrectly adjust the data for publication bias up to 45% of the time. Nonetheless, the bias introduced into the estimates by this adjustment is minimal.
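
The mechanism being adjusted for can be sketched with a small simulation: generate a complete literature around a known effect, "publish" only the significant studies, and compare the inverse-variance pooled estimates. This illustrates the bias itself, not the trim and fill algorithm, and the effect size and variance ranges are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def fixed_effect(y, v):
    """Inverse-variance weighted pooled estimate and its standard error."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w), np.sqrt(1.0 / np.sum(w))

true_effect = 0.3
n_studies = 200
v = rng.uniform(0.01, 0.2, n_studies)       # within-study variances
y = rng.normal(true_effect, np.sqrt(v))     # observed study effects

# Publication bias: only studies with z > 1.96 ("significant") appear.
published = y / np.sqrt(v) > 1.96

full_est, _ = fixed_effect(y, v)
biased_est, _ = fixed_effect(y[published], v[published])
print(f"complete literature: {full_est:.3f}")
print(f"published subset:    {biased_est:.3f}")
```

The published subset systematically overestimates the true effect; trim and fill works in the opposite direction, imputing the suppressed studies from the asymmetry of the funnel plot.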

Keywords: Publication bias, Trim and Fill method, percentage relative bias, coverage probability

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1558
7579 Searching the Efficient Frontier for the Coherent Covering Location Problem

Authors: Felipe Azocar Simonet, Luis Acosta Espejo

Abstract:

In this article, we seek an approximation of the efficient frontier for the bi-objective coherent covering location problem with two levels of hierarchy (CCLP). We present the mathematical formulation of the model used. Supported and unsupported efficient solutions are obtained by solving the bi-objective combinatorial problem with the weights method, using a Lagrangian heuristic. Subsequently, the results are validated through DEA analysis with the GEM index (global efficiency measurement).
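
The weights method mentioned above can be sketched on a toy bi-objective problem: sweep the scalarization weight and collect every minimizer. The candidate `points` below are hypothetical objective pairs, not an instance of the CCLP; note that unsupported efficient points are never optimal for any weighted sum, which is why the paper needs additional machinery to find them.

```python
import numpy as np

# Hypothetical candidate plans, each scored on two objectives to be
# minimized (e.g. uncovered demand, opening cost).
points = np.array([[1, 9], [2, 6], [4, 4], [6, 3], [9, 1], [5, 8], [7, 7]])

def supported_efficient(points, n_weights=101):
    """Weights method: minimize w*f1 + (1-w)*f2 over a weight sweep.

    Returns indices of the supported efficient points (those on the
    convex hull of the Pareto front)."""
    found = set()
    for w in np.linspace(0.0, 1.0, n_weights):
        scores = w * points[:, 0] + (1 - w) * points[:, 1]
        found.add(int(np.argmin(scores)))
    return sorted(found)

print(supported_efficient(points))
```

Here point `[6, 3]` is Pareto-efficient but lies above the hull of the front, so no weight ever selects it: an unsupported efficient solution.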

Keywords: Coherent covering location problem, efficient frontier, Lagrangian relaxation, data envelopment analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 809
7578 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach

Authors: Elias K. Maragos, Petros E. Maravelakis

Abstract:

In Dynamic Data Envelopment Analysis (DDEA), which is a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as most researchers accept, there are outputs produced by a DMU to be used as inputs at a future time; those outputs are known as intermediates. The common DDEA models do not take into account the shape of the distribution of the input, output, or intermediate data, assuming that the distribution of their virtual value does not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.

Keywords: Data envelopment analysis, Dynamic DEA, Piecewise linear inputs, Piecewise linear outputs.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 656
7577 Study on Characterization of Tuncbilek Fly Ash

Authors: A.S. Kipcak, N. Baran Acarali, S. Kolemen, N. Tugrul, E. Moroydor Derun, S. Piskin

Abstract:

Fly ash is one of the residues generated in combustion and comprises the fine particles that rise with the flue gases; ash which does not rise is termed bottom ash [1]. In our country, it is expected that up to 50 million tons of waste ash will be generated per year by 2020. Waste released from thermal power plants is known to cause very significant problems. Fly ash, however, can be utilized as an adsorbent material. The purpose of this study is to investigate the possibility of using Tuncbilek fly ash as a low-cost adsorbent for heavy metal adsorption. First of all, Tuncbilek fly ash was characterized; for this purpose, analyses such as sieve analysis, XRD, XRF, SEM, and FT-IR were performed.

Keywords: Fly ash, heavy metal, sieve, adsorbent

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1853
7576 Multiple Job Shop-Scheduling using Hybrid Heuristic Algorithm

Authors: R. A. Mahdavinejad

Abstract:

In this paper, multi-processor job shop scheduling problems are solved by a heuristic algorithm based on a hybrid of priority dispatching rules within an ant colony optimization algorithm. The objective is to minimize the makespan, i.e. the total completion time, while the simultaneous presence of various kinds of pheromones is allowed. Using a suitable hybrid of priority dispatching rules improves the process of finding the best solution. The ant colony optimization algorithm not only enhances the capability of the proposed algorithm, but also decreases the total working time by reducing setup times and modifying the production line, so that similar work shares the same production lines. Another advantage of this algorithm is that similar (though not identical) machines can be considered, so these machines are able to process a job with different processing and setup times. To evaluate this capability, a number of test problems are solved and the associated results analyzed. The results show a significant decrease in throughput time, and that the algorithm is able to recognize the bottleneck machine and to schedule jobs efficiently.
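
A single priority dispatching rule, shortest processing time (SPT), can be sketched as a greedy list scheduler. The paper's contribution lies in hybridizing several such rules inside an ant colony search, which this minimal example (with a made-up two-job instance) does not attempt:

```python
def spt_schedule(jobs):
    """Greedy list scheduling with the Shortest-Processing-Time rule.

    jobs: list of operation lists [(machine, duration), ...], each to be
    processed in order. Returns the resulting makespan.
    """
    n_machines = 1 + max(m for ops in jobs for m, _ in ops)
    next_op = [0] * len(jobs)            # index of each job's next operation
    job_ready = [0.0] * len(jobs)        # time at which each job is free
    mach_ready = [0.0] * n_machines      # time at which each machine is free
    remaining = sum(len(ops) for ops in jobs)
    while remaining:
        # among jobs with work left, dispatch the shortest next operation
        candidates = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        j = min(candidates, key=lambda j: jobs[j][next_op[j]][1])
        m, d = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[m])
        job_ready[j] = mach_ready[m] = start + d
        next_op[j] += 1
        remaining -= 1
    return max(job_ready)

# two jobs on two machines: job 0 visits M0 then M1, job 1 the reverse
jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]
print(spt_schedule(jobs))  # prints 7.0
```

In an ACO hybrid, the dispatch choice on each step would be biased by pheromone trails rather than fixed by a single rule.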

Keywords: Job shops scheduling, Priority dispatching rules, Makespan, Hybrid heuristic algorithm.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1669
7575 Coordination between SC and SVC for Voltage Stability Improvement

Authors: Ali Reza Rajabi, Shahab Rashnoei, Mojtaba Hakimzadeh, Amir Habibi

Abstract:

At any point in time, a power system operating condition should be stable, meeting various operational criteria, and it should also be secure in the event of any credible contingency. Present-day power systems are being operated closer to their stability limits due to economic and environmental constraints. Maintaining stable and secure operation of a power system is therefore a very important and challenging issue. Voltage instability has been given much attention by power system researchers and planners in recent years, and is regarded as one of the major sources of power system insecurity. Voltage instability phenomena are those in which the receiving-end voltage decreases well below its normal value and does not recover even after the action of restoring mechanisms such as VAR compensators, or continues to oscillate for lack of damping against disturbances. The reactive power limit of a power system is one of the major causes of voltage instability. This paper investigates the effects of coordinating series capacitors (SC) with static VAR compensators (SVC) on the steady-state voltage stability of a power system. The influence of the series capacitor on the SVC controller parameters and on the ratings required to stabilize load voltages at certain values is also highlighted.

Keywords: Static VAR Compensator (SVC), Series Capacitor (SC), voltage stability, reactive power.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1966
7574 Multi Switched Split Vector Quantization of Narrowband Speech Signals

Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha

Abstract:

Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), which is a hybrid of the multi, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity, and memory requirements of MSSVQ are compared to those of split vector quantization (SVQ), multi stage vector quantization (MSVQ), and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity, and lower memory requirements than all of the above product code vector quantization techniques. Computational complexity is measured in floating point operations (flops), and memory requirements are measured in floats.
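
The "split" building block of these product code quantizers can be sketched as follows: the vector is partitioned, and each part is quantized against its own codebook, trading a small distortion penalty for exponentially smaller codebook storage. The random codebooks and test vector below are placeholders for trained LSF codebooks:

```python
import numpy as np

rng = np.random.default_rng(0)

def split_vq(x, codebooks):
    """Quantize x part by part: each codebook handles one slice of x.

    codebooks: list of (codebook_size, part_dim) arrays whose part_dims
    sum to len(x). Returns (chosen indices, reconstruction).
    """
    idx, parts, start = [], [], 0
    for cb in codebooks:
        part = x[start:start + cb.shape[1]]
        k = int(np.argmin(np.sum((cb - part) ** 2, axis=1)))  # nearest codeword
        idx.append(k)
        parts.append(cb[k])
        start += cb.shape[1]
    return idx, np.concatenate(parts)

# A 10-dimensional vector split into two 5-dimensional halves, each with
# an 8-entry codebook: 3 + 3 index bits, but only 16 stored codewords
# instead of the 64 a joint 6-bit codebook would need.
codebooks = [rng.normal(size=(8, 5)), rng.normal(size=(8, 5))]
x = rng.normal(size=10)
idx, x_hat = split_vq(x, codebooks)
print(idx, float(np.sum((x - x_hat) ** 2)))
```

The "switched" and "multi stage" layers then select among several such split quantizers and refine the residual in further stages, respectively.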

Keywords: Linear predictive Coding, Multi stage vectorquantization, Switched Split vector quantization, Split vectorquantization, Line Spectral Frequencies (LSF).

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1673
7573 A Formulation of the Latent Class Vector Model for Pairwise Data

Authors: Tomoya Okubo, Kuninori Nakamura, Shin-ichi Mayekawa

Abstract:

In this research, a latent class vector model for pairwise data is formulated. Compared to the basic vector model, this model yields consistent estimates of the parameters, since the number of parameters to be estimated does not increase with the number of subjects. The results of the analysis reveal that the model is stable and can classify each subject into the latent classes representing the typical scales used by those subjects.

Keywords: finite mixture models, latent class analysis, Thurstone's paired comparison method, vector model

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1216
7572 Spatial Correlation Analysis between Climate Factors and Plant Production in Asia

Authors: Yukiyo Yamamoto, Jun Furuya, Shintaro Kobayashi

Abstract:

Using 1 km grid datasets representing monthly mean precipitation, monthly mean temperature, and dry matter production (DMP), we considered the regional plant production ability in Southeast and South Asia, and employed pixel-by-pixel correlation analysis to assess the strength of the relation between climate factors and plant production. While annual DMP in South Asia was generally below 2,000 kg, in most of Southeast Asia it exceeded 2,500 - 3,000 kg. This suggests that plant production in Southeast Asia is superior to that in South Asia; however, rain-use efficiency (RUE), the dry matter production per 1 mm of precipitation, was higher in the inland Indochina Peninsula and India than in the islands of Southeast Asia. The correlation analysis between climate factors and DMP showed that most of the Indochina Peninsula had negative correlation coefficients between DMP and both precipitation and temperature, whereas the Malay Peninsula and the islands showed negative correlation with precipitation and positive correlation with temperature, and most of India, which dominates South Asia, showed positive correlation with precipitation and negative correlation with temperature. In addition, areas where the absolute correlation coefficient exceeded 0.8 were regarded as "susceptible" to climate factors, and areas where it was below 0.2 as "insusceptible". Based on this discrimination, a map indicating the expected impacts of climate change was produced.
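
The pixel-by-pixel correlation analysis described above reduces to computing a Pearson coefficient at every grid cell across the time axis. The sketch below uses synthetic data in place of the 1 km precipitation and DMP grids, with the paper's |0.8| and |0.2| thresholds:

```python
import numpy as np

def pixelwise_corr(a, b):
    """Pearson correlation at every pixel between two (time, y, x) stacks."""
    am = a - a.mean(axis=0)
    bm = b - b.mean(axis=0)
    cov = (am * bm).mean(axis=0)
    return cov / (a.std(axis=0) * b.std(axis=0))

rng = np.random.default_rng(1)
t, h, w = 24, 4, 4                                # 24 monthly layers, 4x4 grid
precip = rng.gamma(2.0, 50.0, (t, h, w))          # synthetic rainfall (mm)
dmp = 5.0 * precip + rng.normal(0, 30, (t, h, w)) # DMP tracking rainfall

r = pixelwise_corr(precip, dmp)
susceptible = np.abs(r) > 0.8                     # strongly climate-driven
insusceptible = np.abs(r) < 0.2
print(r.shape, int(susceptible.sum()))
```

On real grids the sign of `r` at each pixel, not just its magnitude, carries the regional contrasts reported above.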

Keywords: Asia, correlation analysis, plant production, precipitation, temperature.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1450