Search results for: core structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3031


271 A Study on Cement-Based Composite Containing Polypropylene Fibers and Finely Ground Glass Exposed to Elevated Temperatures

Authors: O. Alidoust, I. Sadrinejad, M. A. Ahmadi

Abstract:

High strength concrete has been used in situations where it may be exposed to elevated temperatures. Numerous authors have shown the significant contribution of polypropylene fibers to the spalling resistance of high strength concrete. When a cement-based composite reinforced with polypropylene fibers is heated above about 170 °C, the fibers readily melt and volatilize, creating additional porosity and small channels in the matrix that weaken the structure and lower the strength. This investigation examines the mechanical properties of mortar incorporating polypropylene fibers exposed to high temperature. The effects of different pozzolans on the strength behaviour of samples at elevated temperature were also studied. For this purpose, specimens were produced by partial replacement of cement with finely ground glass, silica fume and rice husk ash as highly reactive pozzolans. The replacement level was 10% by weight of cement, chosen to reveal the effect of pozzolans as a partial cement replacement on the mechanical properties of the mortars. Mixtures with 0%, 0.5%, 1% and 1.5% polypropylene fibers were cast and tested for compressive and flexural strength in accordance with ASTM standards. The specimens were then heated to 300 °C and 600 °C, and the mechanical properties of the heated samples were tested. The mechanical tests showed a significant reduction in compressive strength, which could be due to melting of the polypropylene fibers. The pozzolans, however, improved the mechanical properties of the samples.

Keywords: Mechanical properties, compressive strength, Flexural strength, pozzolanic behavior.

270 The Effect of Compost Addition on Chemical and Nitrogen Characteristics, Respiration Activity and Biomass Production in Prepared Reclamation Substrates

Authors: L. Plošek, F. Nsanganwimana, B. Pourrut, J. Elbl, J. Hynšt, A. Kintl, D. Kubná, J. Záhora

Abstract:

Land degradation is of concern in many countries. Increasingly, people must address the problems associated with the degradation of soil properties caused by human activity. Organic soil amendments such as compost are being examined more and more for their potential use in soil restoration and in preventing soil erosion. In the Czech Republic, compost is most often used to improve soil structure and increase the content of soil organic matter. Land reclamation and restoration is one of the ways to put industrially produced compost to use, because Czech farmers are not willing to use compost as an organic fertilizer. The most common use of reclamation substrates in the Czech Republic is the rehabilitation of landfills and contaminated sites.

This paper deals with the influence of reclamation substrates (RS) with different proportions of compost and sand on selected soil properties: chemical characteristics, nitrogen bioavailability, leaching of mineral nitrogen, respiration activity and plant biomass production. Chemical properties vary in proportion to the amounts of compost and sand added relative to the control variant (topsoil). The largest differences between the variants were recorded in the leaching of mineral nitrogen (ranging from 1.36 mg dm-3 in variant C to 9.09 mg dm-3). The addition of compost to soil improves conditions for plant growth in comparison with soil alone. However, too high a compost dose may have adverse effects on plant growth, and a high proportion of compost increases the leaching of mineral N. Therefore, a mixture of 70% soil, 10% compost and 20% sand may be recommended as the optimal composition of the RS.

Keywords: Biomass, Compost, Reclamation, Respiration.

269 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. A probability-based damage detection (PBDD) procedure built on model updating is therefore presented in this paper, in which a one-stage, model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that accounts for the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm inspired by the movement of ideal gas molecules, which disperse rapidly in all directions and fill the available space; this behaviour stems from the high speed of the molecules and their collisions with one another and with the surrounding barriers. In the IGMM algorithm, an initial population of gas molecules is randomly generated, and the governing equations for molecular velocity and collisions are used to search for optimal solutions. In this paper, an enhanced version of IGMM is developed that removes variables which remain unchanged after a specified number of iterations. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.
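The abstract does not give the IGMM velocity and collision equations, so the following Python sketch uses a generic random-perturbation population search only to illustrate the two ideas it does describe: a noisy modal-residual objective and the enhancement of freezing variables that stop changing after a specified number of iterations. The toy 2-DOF model, population size and step rule are assumptions, not the authors' algorithm.

```python
import numpy as np

# Illustrative sketch only: a population search over candidate stiffness-reduction factors,
# with an objective that compares noisy "measured" frequencies against model frequencies.
rng = np.random.default_rng(0)

def model_frequencies(damage):                 # toy 2-DOF model: damage scales stiffnesses
    k = np.array([2.0, 1.0]) * (1.0 - damage)  # reduced stiffness
    return np.sqrt(k)                          # surrogate "frequencies"

true_damage = np.array([0.3, 0.0])
measured = model_frequencies(true_damage) + rng.normal(0, 0.005, 2)  # measurement noise

def objective(damage):
    return np.sum((model_frequencies(damage) - measured) ** 2)

pop = rng.uniform(0, 0.5, size=(30, 2))        # initial population of candidate damage vectors
frozen = np.zeros(2, dtype=bool)
for it in range(200):
    fitness = np.array([objective(p) for p in pop])
    best = pop[fitness.argmin()]
    step = 0.05 * rng.standard_normal(pop.shape)
    step[:, frozen] = 0.0                      # enhanced step: skip frozen variables
    pop = np.clip(0.7 * pop + 0.3 * best + step, 0.0, 0.5)
    if it == 100:                              # after a specified iteration count,
        frozen = pop.std(axis=0) < 1e-3        # freeze variables that no longer change

print("identified damage:", np.round(best, 3), "true:", true_damage)
```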

Keywords: Enhanced ideal gas molecular movement, ideal gas molecular movement, model updating method, probability-based damage detection, uncertainty quantification.

268 A Study on the Performance Characteristics of Variable Valve for Reverse Continuous Damper

Authors: Se Kyung Oh, Young Hwan Yoon, Ary Bachtiar Krishna

Abstract:

Nowadays, a passenger car suspension must meet high performance criteria with light weight, low cost, and low energy consumption. A pilot-controlled proportional valve is designed and analyzed to obtain a small rate of pressure change after blow-off, and a reverse damping mechanism is adopted to obtain a fast damper response. The reverse continuous variable damper is designed as an HS-SH damper, which offers good body control with reduced input force transferred from the tire compared with any other type of suspension system. The damper structure is designed so that the rebound and compression damping forces can be tuned independently, with the variable valve placed externally. The rate of pressure change with respect to the flow rate after blow-off becomes smoother as the fixed orifice size increases, which means that the blow-off slope is controllable through the fixed orifice size. Damping forces are measured while varying the solenoid current at different piston velocities to confirm a maximum hysteresis of 20 N, linearity, and the range of the damping force. The damping force range is wide and continuous and is controlled by the spool opening, a scheme commonly adopted in proportional valves. The reverse continuous variable damper developed in this study is expected to be used in semi-active suspension systems in passenger cars once its performance and design simplicity are confirmed through a real-car test.

Keywords: Blow-off, damping force, pilot-controlled proportional valve, reverse continuous damper.

267 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work we use the discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behaviour of thin-walled beam structures. Thin-walled beams are widely used in modern engineering, both in large-scale structures (e.g., aeronautical structures) and in nano-structures (e.g., nanotubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to use computational mechanics to simulate the dynamics numerically. With numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time Proper Orthogonal Decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of basic thin-walled structures. These structures are ideal as a basis for a systematic study of coupled dynamics in structures of complex geometry.
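A minimal sketch of the POD step on synthetic data (not the paper's finite element results): each column of a snapshot matrix holds the discretized state at one time step, and the singular value decomposition yields the POD modes and their energy ranking, which is one way to gauge how strongly the fields of motion are coupled.

```python
import numpy as np

# Assemble a synthetic snapshot matrix: rows are spatial DOFs, columns are time steps.
n_dof, n_steps = 200, 400
t = np.linspace(0.0, 1.0, n_steps)
x = np.linspace(0.0, 1.0, n_dof)[:, None]

# Two spatial shapes sharing time content, mimicking a coupled response (assumed data).
snapshots = (np.sin(np.pi * x) * np.cos(2 * np.pi * 5 * t)
             + 0.3 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * 5 * t + 0.4))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = s**2 / np.sum(s**2)                 # relative energy of each POD mode
print("energy captured by first 3 POD modes:", np.round(energy[:3], 4))
# U[:, k] is the k-th POD mode shape; Vt[k, :] its time coefficient history.
```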

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.

266 Systems Engineering Management Using Transdisciplinary Quality System Development Lifecycle Model

Authors: Mohamed Asaad Abdelrazek, Amir Taher El-Sheikh, M. Zayan, A.M. Elhady

Abstract:

The successful realization of complex systems depends not only on the technology issues and the process for implementing them, but on the management issues as well. Managing the systems development lifecycle requires technical management, which is provided by systems engineering management. Systems engineering management comprises many activities, the three major ones being development phasing, the systems engineering process and lifecycle integration, and these activities are performed across the system development lifecycle. Due to the ever-increasing complexity of systems, as well as the difficulty of managing and tracking the development activities, new ways of carrying out systems engineering management activities are required. This paper presents a systematic approach used as a design management tool applied across systems engineering management roles. In this approach, the Transdisciplinary System Development Lifecycle (TSDL) Model has been modified and integrated with Quality Function Deployment (QFD); the resulting approach is named the Transdisciplinary Quality System Development Lifecycle (TQSDL) Model. QFD translates the voice of the customer (VOC) into measurable technical characteristics. The modified TSDL model is based on Axiomatic Design developed by Suh, which is applicable to all designs: products, processes, systems and organizations. The TQSDL model aims to provide a robust structure and systematic thinking to support the implementation of systems engineering management roles. This approach ensures that customer requirements are fulfilled and that all the systems engineering manager's roles and activities are satisfied.

Keywords: Axiomatic design, quality function deployment, systems engineering management, system development lifecycle.

265 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of a nation's economy. Hence, ensuring that the network stays resilient at all times, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is a leading open-source, NoSQL, native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model was developed using the Neo4j application, and the most critical and optimal nodes and paths in the network were obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that the Neo4j application can be a suitable tool for studying the important nodes and the critical paths of a major congested metropolitan area.
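A conceptual stand-in, not the paper's Neo4j/Graph Data Science calls: the same centrality ideas demonstrated with NetworkX on a tiny toy road graph with invented nodes and weights. Node betweenness ranks influential intersections, edge betweenness highlights critical segments, and shortest_simple_paths plays the role of a k-shortest-paths enumeration.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([          # (from, to, travel-time weight); all values illustrative
    ("LA", "Anaheim", 35), ("LA", "Pasadena", 20), ("Pasadena", "Riverside", 55),
    ("Anaheim", "Riverside", 45), ("Riverside", "SanBernardino", 25),
    ("Anaheim", "Irvine", 20), ("Irvine", "Riverside", 60),
])

node_bc = nx.betweenness_centrality(G, weight="weight")      # influence of each node
edge_bc = nx.edge_betweenness_centrality(G, weight="weight") # criticality of each segment
k_paths = list(nx.shortest_simple_paths(G, "LA", "SanBernardino", weight="weight"))[:3]

print("most influential node:", max(node_bc, key=node_bc.get))
print("most critical edge:", max(edge_bc, key=edge_bc.get))
print("3 shortest LA -> SanBernardino paths:", k_paths)
```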

Keywords: Transportation network, critical path, connectivity reliability, network model, Neo4J application, optimal path, edge betweenness centrality index, node betweenness centrality index, Yen’s k-shortest paths.

264 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level

Authors: Pedro M. Abreu, Bruno R. Mendes

Abstract:

The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare over the past few years. Co-payments lead to a rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis, of hospital practices in particular and co-payment strategies in general, was carried out across all European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard. This allows the conclusion that the German models (total scores of 68.2% and 63.6% for the two selected co-payments) can achieve greater compliance and effectiveness, the English models (total score of 50%) can be more accessible, and the French models (total score of 50%) can be better suited to the socio-economic and legal framework. Other European models did not show the same quality and/or performance and so were not taken as a standard for the future design of co-payment strategies. In this sense, co-payments can be seen as a strategy not only to moderate the consumption of healthcare products and services, but especially to improve them, as well as a strategy to increase the value that the end-user assigns to these services and products, such as medicines.

Keywords: Clinical pharmacy, co-payments, healthcare, medicines.

263 Meanings and Construction: Evolution of Inheriting the Traditions in Chinese Modern Architecture in the 1980s

Authors: Wei Wang

Abstract:

Queli Hotel, Xixi Scenery Spot Reception and Square Pagoda Garden are three important landmarks of localized Chinese modern architecture (LCMA) in the 1980s design context of "Inheriting the Traditions in Modern Architecture." As the most representative cases of LCMA in the 1980s, they interpret the traditions of the Chinese garden and the imperial roof from different perspectives. Based on the research texts, conceptual drawings, construction drawings and site investigation, this paper extracts two groups of prominent contradictions in practice ("Pattern-Material-Structure" and "Type-Topography-Body") for keyword-based analysis, comparing and examining the different choices and balances made by the architects. On this basis, the paper argues that the ideographic form derived from macro-narrative and the innovative investigation of construction form a pair of inevitable contradictions that must be handled and coordinated in these practices. The collision of these contradictions under specific conditions results in three cognitive attitudes and practical strategies towards tradition: formal symbolism, spatial abstraction and construction-based narrative. These differentiated approaches to localization and Chineseness reflect the various professional ideologies and value standpoints present during the transition of the Chinese architecture discipline in the 1980s. The great variety found in this particular circumstance suggests tremendous potential and possibilities for future LCMA.

Keywords: Construction, Meaning, Queli Hotel, Square Pagoda Garden, Tradition, Xixi Scenery Spot Reception.

262 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material

Authors: S. Boria

Abstract:

In recent years it has become possible to improve the crashworthiness of an automotive body structure from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, optimization methods that are based on statistical techniques and use estimated models, called meta-models, are spreading quickly. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various meta-modeling techniques, the Kriging method appears to be excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Such a meta-model was therefore used in this work to improve the structural optimization of a composite bumper for a racing car subjected to frontal impact. The specific energy absorption is the objective function to be maximized, and the geometrical parameters, subject to some design constraints, are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through a domain reduction strategy. With the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
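A minimal sketch of the meta-model idea on assumed, synthetic data: a Kriging (Gaussian process) surrogate is fitted to a small DOE of crash responses and then queried cheaply to locate a design that maximizes specific energy absorption (SEA). The design variables, bounds and response function below are invented; nothing here reproduces the LS-DYNA/LS-OPT workflow used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
doe = rng.uniform([1.5, 40.0], [4.0, 80.0], size=(12, 2))     # [wall thickness mm, taper deg]

def fe_sea(x):                                                # stand-in for an FE crash run
    t, a = x[:, 0], x[:, 1]
    return 20 * t - 2 * t**2 + 0.1 * a - 0.001 * (a - 60) ** 2

sea = fe_sea(doe)                                             # responses at the DOE points

# Fit the Kriging surrogate and search it on a dense grid instead of rerunning the FE model.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 10.0]),
                                     normalize_y=True).fit(doe, sea)

grid = np.column_stack([g.ravel() for g in
                        np.meshgrid(np.linspace(1.5, 4.0, 50), np.linspace(40, 80, 50))])
pred = surrogate.predict(grid)
best = grid[pred.argmax()]
print("predicted optimum (thickness mm, taper deg):", np.round(best, 2))
```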

Keywords: Composite material, crashworthiness, finite element analysis, optimization.

261 Performance Analysis of Digital Signal Processors Using SMV Benchmark

Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang

Abstract:

Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs of diverse applications, a wide variety of DSP processors based on different architectures, ranging from the traditional to VLIW, have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. In order to select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers, and performance data are also essential for the designers of DSP processors to improve their designs. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seems to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from recent third-generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. The weighted arithmetic mean of clock cycles and the arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, processor architecture features and compiler optimization. The extensive experimental data gathered, analyzed, and presented in this paper should help DSP processor and compiler designers meet their specific design goals.
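A minimal sketch of the two summary metrics named in the abstract, on made-up numbers: a weighted arithmetic mean of kernel clock cycles (weights reflecting how often each SMV kernel would run) and a plain arithmetic mean of code size. The kernel names, weights and counts are illustrative, not measurements from the paper.

```python
# Illustrative per-kernel measurements (cycles, bytes) and execution-frequency weights.
cycles = {"lpc_analysis": 120_000, "pitch_search": 310_000, "codebook_search": 540_000}
code_size = {"lpc_analysis": 2_400, "pitch_search": 3_100, "codebook_search": 5_800}
weights = {"lpc_analysis": 0.2, "pitch_search": 0.3, "codebook_search": 0.5}

weighted_mean_cycles = sum(weights[k] * cycles[k] for k in cycles) / sum(weights.values())
mean_code_size = sum(code_size.values()) / len(code_size)

print(f"weighted mean cycles: {weighted_mean_cycles:,.0f}")
print(f"mean code size (bytes): {mean_code_size:,.0f}")
```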

Keywords: digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.

260 Undergraduates Learning Preferences: A Comparison of Science, Technology and Social Science Academic Disciplines in Relations to Teaching Designs and Strategies

Authors: Salina Budin, Shaira Ismail

Abstract:

Students learn effectively in a learning environment with a teaching approach that matches their learning preferences. The main objective of the study is to examine the learning preferences of students in the Science and Technology (S&T) and Social Science (SS) fields of study at the Universiti Teknologi Mara (UiTM), Pulau Pinang. The measurement instrument is based on the Dunn and Dunn Learning Styles model, which measures five elements of learning style: environmental, sociological, emotional, physiological and psychological. Questionnaires were distributed among undergraduates in the Faculty of Mechanical Engineering and the Faculty of Business Management. The respondents comprise 131 diploma students of the Faculty of Mechanical Engineering and 111 degree students of the Faculty of Business Management. The results indicate that S&T and SS students share similar learning preferences with respect to the environmental aspect, emotional preferences, motivation level, learning responsibility, persistence in learning and learning structure. Most of the S&T students were found to be analytical learners, whereas the majority of SS students were global learners. Both S&T and SS students were found to be visual learners who prefer active mobility in a relaxed and enjoyable mode with some light refreshments during the learning process, and who exhibit reflective characteristics in learning. The S&T students are considered left-brain dominant, whereas the SS students are right-brain dominant. The findings highlight that both categories of students exhibit similar learning preferences except for psychological preferences.

Keywords: Learning preferences, Dunn and Dunn learning style, teaching approach, science and technology, social science.

259 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many settings frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset, so checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to its use, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of a GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desired Type-I error for models estimated using PQL2, but failed for almost all combinations under MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
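A minimal sketch of how Type-I error and power are typically summarised from a simulation study of this kind (the actual multilevel models were fitted in MLwiN, which is not reproduced here): p-values of the GOF test are collected over many simulated datasets, and the rejection rate at the nominal level estimates the Type-I error under a true null and the power under a misspecified model. The p-value distributions below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, n_sim = 0.05, 5000

p_null = rng.uniform(0.0, 1.0, n_sim)   # a well-calibrated test under the null is uniform
p_alt = rng.beta(0.4, 1.0, n_sim)       # p-values skewed toward 0 under misspecification

type_i_error = np.mean(p_null < alpha)  # should stay close to alpha
power = np.mean(p_alt < alpha)

print(f"empirical Type-I error: {type_i_error:.3f} (nominal {alpha})")
print(f"empirical power: {power:.3f}")
```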

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

258 Application Reliability Method for Concrete Dams

Authors: Mustapha Kamel Mihoubi, Mohamed Essadik Kerkar

Abstract:

Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of engineering works, including when assessing the stability of large structures against major risks in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering; in our case, level 2 methods based on the study of the limit state. The probability of failure is thus estimated by analytical methods of the first-order risk method (FORM) and second-order risk method (SORM) type. For comparison, a level 3 method was used, which carries out a full analysis of the problem and involves integrating the probability density function of the random variables over the safety domain using the Monte Carlo simulation method. Taking into account the change in stress under the normal, exceptional and extreme load combinations acting on the dam, the calculations provided acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; the shear forces then induce sliding that threatens the reliability of the structure through intolerable failure probability values, especially when uplift increases due to a hypothetical failure of the drainage system.
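A minimal sketch of the level 3 (Monte Carlo) step with assumed, illustrative random variables: a sliding limit state g = resisting effect minus driving effect is sampled many times, and the failure probability is the fraction of samples with g < 0. The dam-specific load combinations, uplift model and the FORM/SORM computations from the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

friction = rng.normal(0.75, 0.08, n)      # tan(phi) at the dam-foundation interface (assumed)
weight = rng.normal(180.0, 9.0, n)        # effective vertical force, MN (assumed)
uplift = rng.normal(40.0, 8.0, n)         # uplift force, MN (assumed)
shear = rng.normal(70.0, 10.0, n)         # driving horizontal force, MN (assumed)

g = friction * (weight - uplift) - shear  # sliding limit state: failure when g < 0
pf = np.mean(g < 0)
beta = np.mean(g) / np.std(g)             # crude Cornell reliability index for comparison

print(f"estimated sliding failure probability: {pf:.2e} (Cornell beta ~ {beta:.2f})")
```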

Keywords: Dam, failure, limit-state, Monte Carlo simulation, reliability, probability, simulation, sliding, Taylor.

257 Petro-Mineralogical Studies of Phosphorite Deposit of Sallopat Block of Banswara District, Rajasthan, India

Authors: K. F. Khan, Samsuddin Khan

Abstract:

The Paleoproterozoic phosphorite deposit of the Sallopat block of Banswara district, Rajasthan, belongs to the Kalinjara Formation of the Lunavada Group of the Aravalli Supergroup. The phosphorites occur as massive, brecciated, laminated and stromatolitic varieties associated with calcareous quartzite, interbedded dolomite and multi-coloured chert. The phosphorites show alternating brown and grey concentric rims composed of phosphate, calcite and quartz minerals. Petro-mineralogical studies of the phosphorite samples using a petrological microscope, XRD, FEG-SEM and EDX reveal that apatite-(CaF) and apatite-(CaOH) are the phosphate minerals, intermixed with minor amounts of carbonate material. Sporadic uniform tiny granules of partially anisotropic apatite-(CaF), along with dolomite, calcite, quartz, muscovite, zeolite and other gangue minerals, have been observed, together with replacement of the phosphate material by quartz and carbonate. The presence of microbial filaments of organic matter and the alternating concentric rims of the stromatolitic structure may suggest that deposition of the phosphate took place in shallow marine oxidizing conditions, leading to the formation of phosphorite layers as primary biogenic precipitates through bacterial or algal activity. The different forms and textures of the phosphate minerals may be due to environmental vicissitudes at the time of deposition, followed by replacement processes and biogenic activity.

Keywords: Petro-mineralogy, phosphorites, sallopat, apatite.

256 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analyzed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary are based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region is modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realized via a two-way coupling between the blood flow modelled by LES and the deforming vessel. Information on the flow pressure and the wall motion is exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, and the boundary condition at the current time step depends on previous solutions. The fluctuation of the velocity in the post-stenotic region is analyzed in the study; the axial velocity at the normalized position Z=0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary is also examined, and in particular the wall displacements at systole and diastole are compared. The negative displacement at the stenosis indicates a collapse at the maximum velocity and during the deceleration phase.

Keywords: Large Eddy Simulation, Fluid Structural Interaction, Constricted Artery, Computational Fluid Dynamics.

255 The Role of Internal Function of Organization for The Successful Implementation of Good Corporate Governance

Authors: Aries Susanty

Abstract:

The inability to implement the principles of good corporate governance (GCG), as demonstrated in the surveys, is due to a number of constraints which can be classified into three groups: internal constraints, external constraints, and constraints arising from the ownership structure. The internal constraints are related to the functioning of several elements of the company. As a business organization, a corporation is unable to achieve its goal of successfully implementing GCG principles if it is not supported by its internal elements and functions. Two of these internal elements are the ethical work climate and the leadership style of top management. To test the correlation between the internal functioning of an organization (in this case the ethical work climate and transformational leadership) and the successful implementation of GCG principles, this study proposes two hypotheses that are empirically tested on thirty surveyed organizations, eleven of which are state-owned companies and nineteen of which are private companies. These thirty corporations are listed on the Jakarta Stock Exchange, and all state-owned companies in the sample have been privatized. The research showed that the internal functioning of an organization supports the successful implementation of GCG principles. Specifically, it shows that (i) the ethical work climate has a significant positive correlation with the successful implementation of the social awareness principle (one of the GCG principles), and (ii) only in the state-owned companies does transformational leadership have a significant positive effect on forming the ethical work climate.

Keywords: Good Corporate Governance Principles, Ethical Work Climate, Transformational Leadership

254 Development of a Wall Climbing Robotic Ground Penetrating Radar System for Inspection of Vertical Concrete Structures

Authors: Md Omar Faruq Howlader, Tariq Pervez Sattar, Sandra Dudley

Abstract:

This paper describes the design process of a 200 MHz ground penetrating radar (GPR) and a battery-powered, vertical concrete surface climbing mobile robot. The key design feature is a miniaturized 200 MHz dipole antenna using additional radiating arms, a procedure that achieves a 40% reduction in length compared to a conventional antenna. The antenna set is mounted in front of the robot using a servo mechanism for folding and unfolding. The robot's adhesion mechanism for climbing the reinforced concrete wall is based on neodymium permanent magnets arranged in a unique combination to concentrate and maximize the magnetic flux and provide sufficient adhesion force for carrying the GPR. The experiments demonstrated the robot's capability to climb a reinforced concrete wall while carrying the attached prototype GPR system and to perform floor-to-wall transitions and vice versa. The developed GPR's performance is validated by its capability to detect and localize an aluminium sheet and a reinforcement bar (rebar) of 12 mm diameter buried under a test rig built of wood to mimic the concrete structure environment. The present robotic GPR system proves the feasibility of undertaking inspection procedures on large concrete structures in hazardous environments that may not be accessible to human inspectors.

Keywords: Climbing robot, dipole antenna, Ground Penetrating Radar (GPR), mobile robots, robotic GPR.

253 Dual-Actuated Vibration Isolation Technology for a Rotary System’s Position Control on a Vibrating Frame: Disturbance Rejection and Active Damping

Authors: Kamand Bagherian, Nariman Niknejad

Abstract:

A vibration isolation technology for precise position control of a rotary system powered by two permanent magnet DC (PMDC) motors is proposed, where this system is mounted on an oscillatory frame. To achieve vibration isolation for this system, active damping and disturbance rejection (ADDR) technology is presented which introduces a cooperation of a main and an auxiliary PMDC, controlled by discrete-time sliding mode control (DTSMC) based schemes. The controller of the main actuator tracks a desired position and the auxiliary actuator simultaneously isolates the induced vibration, as its controller follows a torque trend. To determine this torque trend, a combination of two algorithms is introduced by the ADDR technology. The first torque-trend producing algorithm rejects the disturbance by counteracting the perturbation, estimated using a model-based observer. The second torque trend applies active variable damping to minimize the oscillation of the output shaft. In this practice, the presented technology is implemented on a rotary system with a pendulum attached, mounted on a linear actuator simulating an oscillation-transmitting structure. In addition, the obtained results illustrate the functionality of the proposed technology.

Keywords: Vibration isolation, position control, discrete-time nonlinear controller, active damping, disturbance tracking algorithm, oscillation transmitting support, stability robustness.

252 Promotion of Growth and Modulation of As- Induced Stress Ethylene in Maize by As- Tolerant ACC Deaminase Producing Bacteria

Authors: Charlotte C. Shagol, Tongmin Sa

Abstract:

One of the major pollutants in the environment is arsenic (As). Due to the toxic effects of As on all organisms, its remediation is necessary. Conventional technologies used in the remediation of As-contaminated soils are expensive and may even compromise the structure of the soil. An attractive alternative is phytoremediation, the use of plants that can take the contaminant up into their tissues. Plant growth promoting bacteria (PGPB) are known to enhance the growth of plants through several mechanisms such as phytohormone production, phosphate solubilization, siderophore production and 1-aminocyclopropane-1-carboxylate (ACC) deaminase production, an essential trait that aids plants especially under stress conditions such as As stress. Twenty-one bacteria were isolated from As-contaminated soils in the vicinity of the Janghang smelter in Chungnam Province, South Korea. These exhibited high tolerance to arsenite (As III), arsenate (As V), or both. Most of these isolates possess several plant growth promoting traits which can potentially be exploited to increase phytoremediation efficiency. Among the identified isolates is Pseudomonas sp. JS1215, which produces ACC deaminase, indole acetic acid (IAA) and siderophores, and is also able to solubilize phosphate. Inoculation with JS1215 significantly enhanced root and shoot length and biomass accumulation of maize under normal conditions. In the presence of As, particularly at lower As levels, inoculation with JS1215 slightly increased root length and biomass. Ethylene increased with increasing As concentration but was reduced by JS1215 inoculation. JS1215 can be a potential bioinoculant for increasing phytoremediation efficiency.

Keywords: As-tolerant bacteria, plant growth promoting bacteria, As stress, phytoremediation.

251 Combined Sewer Overflow forecasting with Feed-forward Back-propagation Artificial Neural Network

Authors: Achela K. Fernando, Xiujuan Zhang, Peter F. Kinley

Abstract:

A feed-forward, back-propagation artificial neural network (ANN) model has been used to forecast the occurrence of wastewater overflows in a combined sewerage reticulation system. This approach was tested to evaluate its applicability as an alternative to the common practice of developing a complete conceptual, mathematical hydrological-hydraulic model of the sewerage system to enable such forecasts. The ANN approach obviates the need for an a-priori understanding and mathematical representation of the underlying hydrological-hydraulic phenomena, instead learning the characteristics of a sewer overflow from historical data. The performance of the standard feed-forward, back-propagation-of-error algorithm was enhanced by a modified data normalizing technique that enabled the ANN model to extrapolate into territory unseen in the training data. The algorithm and the data normalizing method are presented along with the ANN model output results, which indicate good accuracy in the forecasted sewer overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data; the ability of the ANN to forecast the overflow rates without the antecedent flow rates (as is the case with traditional conceptual reticulation models) was found to be quite poor.
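A minimal sketch of the normalization idea, not necessarily the authors' exact technique (the abstract does not spell it out): one common variant maps the training range onto an inner band such as [0.1, 0.9] rather than the full [0, 1], leaving headroom so that a sigmoid output beyond 0.9 can still be de-normalized to a flow larger than anything seen in training, i.e. the model can extrapolate. The flow values are illustrative.

```python
import numpy as np

LOW, HIGH = 0.1, 0.9

def fit_scaler(x):
    return float(np.min(x)), float(np.max(x))

def normalize(x, xmin, xmax):
    return LOW + (HIGH - LOW) * (x - xmin) / (xmax - xmin)

def denormalize(y, xmin, xmax):
    return xmin + (y - LOW) * (xmax - xmin) / (HIGH - LOW)

train_flows = np.array([0.0, 12.0, 30.0, 55.0, 80.0])     # illustrative overflow rates, L/s
xmin, xmax = fit_scaler(train_flows)
targets = normalize(train_flows, xmin, xmax)               # training targets lie in [0.1, 0.9]
print("training targets:", np.round(targets, 2))

# A sigmoid output neuron can still produce values between 0.9 and 1.0, so a prediction of
# 0.95 de-normalizes to a flow above the largest training value, i.e. extrapolation:
print(f"network output 0.95 -> {denormalize(0.95, xmin, xmax):.1f} L/s (training max 80.0)")
```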

Keywords: Artificial Neural Networks, Back-propagation learning, Combined sewer overflows, Forecasting.

250 Rural – Urban Partnership for Balanced Spatial Development in Latvia

Authors: Zane Bulderberga

Abstract:

The spatial dimension of development planning is becoming more topical in the 21st century as a result of changes in population structure. Sustainable spatial development focuses on identifying and using territorial advantages to foster the harmonized development of the entire country, reducing the negative effects of population concentration and increasing availability and mobility. EU and national development planning documents identify polycentrism as the main tool for balanced spatial development, including the concentration of investment in growth centres. If mutual cooperation among growth centres, as well as urban–rural cooperation, is not fostered, territorial differences can deepen and create unbalanced development.

The aim of the research is to evaluate urban–rural interaction and to elaborate spatial development scenarios within the framework of Latvian regional policy. The monographic, comparison and abstract–logical methods, together with synthesis and analysis, are used to study the theoretical aspects of the research, collecting the ideas of scientists from different countries, concepts and regulations, and to create a meaningful scientific discussion. The Analytic Hierarchy Process (AHP) is used to define further scenarios of spatial development in Latvia.
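A minimal sketch of the AHP step with an assumed 3x3 pairwise comparison matrix (the experts' actual judgements are not reproduced): the principal eigenvector of the matrix gives the priority weights of the alternative scenarios, and the consistency ratio checks whether the judgements are acceptably consistent.

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],        # scenario 1 vs 2 vs 3 on Saaty's 1-9 scale (assumed)
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                              # priority vector of the scenarios

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)                  # consistency index
cr = ci / 0.58                                        # Saaty's random index for n = 3 is 0.58
print("priorities:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```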

Experts from various institutions recognized urban–rural interaction and cooperation as an essential tool for development. The most important factors for balanced spatial development in Latvia are the availability of public transportation and the improvement of service availability. Evaluating the three alternative scenarios, it was concluded that urban–rural partnership will ensure balanced development in the Latvian regions.

Keywords: Rural – urban interaction, rural – urban cooperation, spatial development, AHP.

249 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data

Authors: M. Yilmaz, I. Yilmaz, M. Uysal

Abstract:

The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behaviour of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth onto the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, the free-air and Bouguer gravity anomalies, are calculated from recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) in order to determine the best-fitting global model for the study area at a regional scale in Turkey. The smallest SD (13.63 mGal) and RMSE (15.71 mGal) for the free-air gravity anomaly residuals were obtained with EGM2008. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the smallest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
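A minimal sketch of the comparison statistics on made-up residuals (not the paper's data): residuals are terrestrial minus model-based gravity anomalies at the benchmark points, and each global model is ranked by the standard deviation and RMSE of its residuals.

```python
import numpy as np

rng = np.random.default_rng(3)
residuals = {                                      # mGal, illustrative only
    "EGM2008": rng.normal(1.0, 14.0, 300),
    "EIGEN6C4": rng.normal(0.5, 12.5, 300),
    "GECO": rng.normal(2.0, 15.5, 300),
}

for model, r in residuals.items():
    sd = np.std(r, ddof=1)                         # spread of the residuals
    rmse = np.sqrt(np.mean(r**2))                  # penalizes bias as well as spread
    print(f"{model:9s} SD = {sd:5.2f} mGal  RMSE = {rmse:5.2f} mGal")
```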

Keywords: Free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity.

248 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Most of the data in the field are well structured and available in numerical or categorical formats which can be used for experiments directly. At the opposite end of the spectrum, however, lies a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes, and procedural notes, which are written in human narrative form and have neither a relational model nor a standard grammatical structure. An important step in using these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string-matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance against MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
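A conceptual sketch only, not the actual Q-Map implementation: concepts from a small curated vocabulary (standing in for UMLS-derived sources) are indexed by their first token, and a sliding scan over the note looks up candidate phrases. The vocabulary entries and concept codes below are invented for illustration.

```python
from collections import defaultdict

vocabulary = {                                 # phrase -> concept code (illustrative)
    "type 2 diabetes mellitus": "C0011860",
    "chest pain": "C0008031",
    "metformin": "C0025598",
}

index = defaultdict(list)                      # first token -> candidate concept phrases
for phrase in vocabulary:
    index[phrase.split()[0]].append(phrase.split())

def extract(note):
    tokens = note.lower().replace(".", " ").replace(",", " ").split()
    hits = []
    for i, tok in enumerate(tokens):
        for candidate in index.get(tok, []):   # only phrases sharing the first token
            if tokens[i:i + len(candidate)] == candidate:
                hits.append((" ".join(candidate), vocabulary[" ".join(candidate)]))
    return hits

note = "Patient with type 2 diabetes mellitus presents with chest pain, continued on metformin."
print(extract(note))
```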

Keywords: Information retrieval (IR), unified medical language system (UMLS), Syntax Based Analysis, natural language processing (NLP), medical informatics.

247 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez

Abstract:

With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations and seems able to fulfil future spectrum demands, and it is also one of the most important features of Long Term Evolution-Advanced (LTE-Advanced). To meet the upcoming International Mobile Telecommunications-Advanced (IMT-Advanced) requirements (a 1 Gb/s peak data rate), the CA scheme was introduced by 3GPP; it sustains a high data rate using aggregated frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signalling techniques, and the challenges of the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. A performance evaluation in macro-cellular scenarios using a simulation approach is also presented, showing the benefits of applying CA and low-complexity multi-band schedulers to service quality and system capacity. It is concluded that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for cell radii longer than 1800 m (at a PLR threshold of 2%).

Keywords: Component carrier, carrier aggregation, LTE-Advanced, scheduling, spectrum management.

246 User-Perceived Quality Factors for Certification Model of Web-Based System

Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh

Abstract:

One of the most essential issues for software products is maintaining their relevancy to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspects of software products to overcome these problems, and previous software quality assessment models and metrics have been introduced, each with strengths and limitations. In order to enhance the assurance of and confidence in software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies carried out in collaboration with several agencies in Malaysia, the requirement for a user-based software certification approach was identified and demanded. The emergence of social network applications, new development approaches such as agile methods, and the many other varieties of software on the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through a brainstorming approach involving researchers, users and experts in this area. This new paradigm in software quality assessment is the main focus of our research. This paper discusses the classification of users in web-based software system assessment and the associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the constraints and improving the application of the software certification model in the future.

Keywords: Software certification model, user centric approach, software quality factors, metrics and measurements, web-based system.

245 Development of High Strength Self Curing Concrete Using Super Absorbing Polymer

Authors: K. Bala Subramanian, A. Siva, S. Swaminathan, Arul. M. G. Ajin

Abstract:

Concrete is an essential building material which is widely used in the construction industry all over the world due to its compressive strength. Curing of concrete plays a vital role in durability and other performance requirements, and improper curing can easily affect concrete performance and durability. In areas where water is scarce, or where structures are not accessible to humans, external curing cannot be performed, so internal curing is used instead. Internal (or self-) curing plays a major role in developing the concrete pore structure and microstructure; the concept is to enhance the hydration process and maintain a uniform temperature. The evaporation of water from the concrete is reduced by the self-curing agent, a super absorbing polymer (SAP), thereby increasing the water retention capacity of the concrete. The research work was carried out to reduce water, the prime material used for concrete in the construction industry. Self-curing reduces the evaporation of water from concrete and increases its water retention capacity compared with conventional concrete, and proper self- (or internal) curing increases the strength, durability and performance of concrete. A super absorbing polymer (SAP) was used as the internal curing agent. In this study, the SAP content was varied from 0.2% to 0.4% in different grades of high-strength concrete. In the experiments, replacement of cement by silica fume at 5%, 10% and 15% was also studied. It was found that a 10% silica fume replacement gives higher strength and durability than the other levels.

Keywords: Compressive strength, high strength concrete, rapid chloride permeability, super absorbing polymer.

244 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network-structure-based methods able to detect price manipulation on the Tehran Stock Exchange; the principal goal of the present study is to offer a model for estimating such price manipulation. To do so, using a separation method, a sample of 397 companies listed on the Tehran Stock Exchange was selected, and information on their prices and trading volumes during the years 2001 to 2009 was collected. By performing the runs test, skewness test and duration correlation test, the selected companies were divided into two sets: manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and the trading volumes of the manipulated companies, the starting date of the price manipulation was identified. Then, using a logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, information clarity, P/E ratio and stock liquidity one year prior to the price manipulation, models for forecasting the price manipulation of stocks of companies listed on the Tehran Stock Exchange were designed. Finally, the forecasting power of the models was studied using the test-set data. The forecasting power on the test set was 92.1% for the logit model, 94.1% for the artificial neural network and 90.2% for the multiple discriminant analysis model; therefore, all three models have high power to forecast price manipulation, and there is no considerable difference among their forecasting power.
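A minimal sketch of a logit-style classifier on synthetic data (the Tehran Stock Exchange dataset is not reproduced here): the features mirror those named in the abstract, company size, information clarity, P/E ratio and liquidity one year before the manipulation, and the label marks manipulated versus non-manipulated firms. All numbers are simulated and do not reflect the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 397
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),     # company size (assumed scale)
    rng.uniform(0.0, 1.0, n),       # information clarity score
    rng.normal(12.0, 4.0, n),       # P/E ratio
    rng.uniform(0.0, 1.0, n),       # stock liquidity
])
# Synthetic ground truth: manipulation probability depends on the four features.
logit = -2.0 + 0.002 * X[:, 0] - 1.5 * X[:, 1] + 0.05 * X[:, 2] - 1.0 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"test-set accuracy: {clf.score(X_te, y_te):.3f}")
```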

Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity

243 Prediction of Optimum Cutting Parameters to obtain Desired Surface in Finish Pass end Milling of Aluminium Alloy with Carbide Tool using Artificial Neural Network

Authors: Anjan Kumar Kakati, M. Chandrasekaran, Amitava Mandal, Amit Kumar Singh

Abstract:

End milling process is one of the common metal cutting operations used for machining parts in the manufacturing industry. It is usually performed at the final stage of manufacturing a product, and the surface roughness of the produced job plays an important role. In general, surface roughness affects the wear resistance, ductility, and tensile and fatigue strength of machined parts and cannot be neglected in design. In the present work, an experimental investigation of the end milling of an aluminium alloy with a carbide tool is carried out, and the effects of the different cutting parameters on the response are studied with three-dimensional surface plots. An artificial neural network (ANN) is used to establish the relationship between the surface roughness and the input cutting parameters (spindle speed, feed, and depth of cut). The MATLAB ANN toolbox, based on the feed-forward back-propagation algorithm, is used for modeling. A 3-12-1 network structure with the minimum average prediction error was found to be the best network architecture for predicting the surface roughness value, and the network predicts surface roughness well for unseen data. For the desired surface finish of the component to be produced, many different combinations of cutting parameters are available; the optimum cutting parameters for obtaining the desired surface finish while maximizing tool life are predicted. The methodology is demonstrated, a number of problems are solved, and the algorithm is coded in MATLAB®.
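A minimal sketch of the 3-12-1 idea on synthetic data (the experimental measurements are not reproduced): three inputs, spindle speed, feed and depth of cut, feed a single hidden layer of 12 neurons predicting one output, the surface roughness Ra. The paper used the MATLAB ANN toolbox; scikit-learn's MLPRegressor is used here only as a stand-in with the same layer sizes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(11)
n = 200
speed = rng.uniform(1000, 4000, n)       # spindle speed, rpm
feed = rng.uniform(0.05, 0.3, n)         # feed, mm/tooth
doc = rng.uniform(0.2, 2.0, n)           # depth of cut, mm

# Toy roughness model with noise, standing in for measured Ra values.
ra = 0.8 + 4.0 * feed + 0.3 * doc - 0.0001 * speed + rng.normal(0, 0.05, n)
X = np.column_stack([speed, feed, doc])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0))
model.fit(X, ra)

unseen = np.array([[2500, 0.12, 1.0]])   # spindle speed, feed, depth of cut
print(f"predicted Ra for unseen parameters: {model.predict(unseen)[0]:.3f} um")
```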

Keywords: End milling, Surface roughness, Neural networks.

242 Effect of Organic-waste Compost Addition on Leaching of Mineral Nitrogen from Arable Land and Plant Production

Authors: Jakub Elbl, Lukas Plošek, Antonín Kintl, Jaroslav Záhora, Jitka Přichystalová, Jaroslav Hynšt

Abstract:

The application of compost in agriculture is desirable worldwide. In the Czech Republic, compost is most often used to improve soil structure and increase the content of soil organic matter, but the effects of compost addition on the fate of mineral nitrogen are only scarcely described. This paper deals with the possibility of using a combined application of compost with mineral and organic fertilizers to reduce the leaching of mineral nitrogen from arable land. To demonstrate the effect of compost addition on the leaching of mineral nitrogen, we performed a pot experiment. Lactuca sativa L. was used as the model crop and cultivated for 35 days in a climate chamber in thoroughly homogenized arable soil. Ten variants of the experiment were prepared: two control variants (pure arable soil and arable soil with added compost), four variants with different doses of mineral and organic fertilizers, and four variants with the same doses of mineral and organic fertilizers plus compost. The greatest decrease in mineral nitrogen leaching, about 417% in comparison with the control variant, was observed with the simultaneous application of soluble humic substances and compost to the soil samples. The application of these organic compounds also supported microbial activity and nitrogen immobilization, documented by the highest soil respiration and the highest value of the index of nitrogen availability. The plant biomass production after this application was not the highest, due to microbial competition for the nutrients in the soil, but it was 24% higher than in the control variant. To support these promising results, the experiment should be repeated under field conditions.

Keywords: Nitrogen, Compost, Salad, Arable land.
