Search results for: octree compression techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2902

1852 The Applicability of the Zipper Strut to Seismic Rehabilitation of Steel Structures

Authors: G. R. Nouri, H. Imani Kalesar, Zahra Ameli

Abstract:

Chevron frames (inverted-V-braced or V-braced frames) have seismic disadvantages: they exhibit poor force redistribution capability, and the compression brace buckles early. Researchers have developed new design provisions to increase both the ductility and the lateral resistance of these structures in seismic areas. One of these methods is the addition of zipper columns, as proposed by Khatib et al. (1988) [2]. Zipper columns are vertical members connecting the intersection points of the braces above the first floor. In this paper, the applicability of the suspended zipper system to the seismic rehabilitation of steel structures is investigated. The models are 3-, 6-, 9-, and 12-story inverted-V-braced frames, and it is assumed that these structures must be rehabilitated; zipper columns are used for the rehabilitation. The results show that the suspended zipper system is effective for the 3-, 6-, and 9-story inverted-V-braced frames, increasing the lateral resistance of the structure up to the life safety level, but it does not perform well for high-rise buildings (such as the 12-story frame). To solve this problem, the braced bay can be composed of small "units" over the height of the entire structure, each of which is a zipper-braced bay of a few stories. By using this method, the lateral resistance of the 12-story inverted-V-braced frame is also increased up to the life safety level.

Keywords: chevron-braced frames, suspended zipper frames, zipper frames, zipper columns

1851 Surface Topography Assessment Techniques based on an In-process Monitoring Approach of Tool Wear and Cutting Force Signature

Authors: A. M. Alaskari, S. E. Oraby

Abstract:

The quality of a machined surface is becoming more and more important to justify the increasing demands of sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the associated indeterministic characteristics of the different elements of the system: tool, machine, workpart, and cutting parameters. However, one of the most influential sources affecting surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals using robust nonlinear time-dependent regression models. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed, and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. In the first few seconds of cutting, the expected and well-known trend of the effect of the cutting parameters is observed: surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes on either the tool nose or its flank areas. Moreover, roughness appears to vary as the wear attitude transfers from one mode to another; in general, it improves as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found to be reasonably sensitive to both the progressive and the random modes of tool edge deformation. While the frictional force components, feeding and radial, are informative regarding progressive wear modes, the vertical (power) component is a more representative carrier of system instability resulting from the edge's random deformation.
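
To make the modeling idea concrete, here is a minimal sketch of a nonlinear, time-dependent regression relating a force signal and cutting time to roughness; the model form, data, and starting coefficients are illustrative assumptions, not the authors' fitted model.

```python
# A minimal sketch: fit roughness Ra as a nonlinear function of cutting time
# and a measured feed-force signal. All numbers here are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def roughness_model(X, a, b, c, d):
    """Ra as a nonlinear function of cutting time t (s) and feed force Ff (N)."""
    t, Ff = X
    return a + b * Ff**c + d * np.sqrt(t)

# Hypothetical samples: cutting time (s), feed-force level (N), roughness (um)
t  = np.array([10, 60, 120, 240, 480, 900], dtype=float)
Ff = np.array([110, 130, 150, 185, 240, 320], dtype=float)
Ra = np.array([1.1, 1.3, 1.5, 1.9, 2.6, 3.4])

params, _ = curve_fit(roughness_model, (t, Ff), Ra, p0=[0.5, 0.01, 1.0, 0.01])
print("fitted coefficients:", params)
print("predicted Ra at t=600 s, Ff=260 N:",
      roughness_model((600.0, 260.0), *params))
```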

Keywords: Dynamic force signals, surface roughness (finish), tool wear and deformation, tool wear modes (nose, flank)

1850 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work, we use the Discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical data obtained from finite element simulations. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When numerical computational techniques are used, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-domain Proper Orthogonal Decomposition transform is a powerful tool for processing such databases, and it will be used here to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
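
As an illustration of the transform itself, the following is a minimal sketch of discrete POD applied to a snapshot matrix (rows: spatial degrees of freedom, columns: time snapshots); the synthetic two-mode field stands in for real FE output.

```python
# A minimal POD sketch via the SVD of a mean-centered snapshot matrix.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 200, 80
x = np.linspace(0, 1, n_dof)[:, None]
t = np.linspace(0, 10, n_snap)[None, :]
# Synthetic two-mode field standing in for FE simulation output
U = np.sin(np.pi * x) * np.cos(2 * t) + 0.3 * np.sin(3 * np.pi * x) * np.cos(5 * t)
U += 0.01 * rng.standard_normal(U.shape)

Uc = U - U.mean(axis=1, keepdims=True)
# POD modes = left singular vectors; modal energies = squared singular values
Phi, s, _ = np.linalg.svd(Uc, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first 3 modes:", energy[:3].round(4))
a1 = Phi[:, 0] @ Uc   # time coefficients of the dominant mode
```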

Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.

1849 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ships and the implementation of lightweight materials have given a strong impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method for complex boundary conditions that are difficult to analyze by existing Ship Classification Societies' rules. During operation, all ships experience complex loading conditions; these loads are generally categorized into thermal loads, linear static loads, and dynamic and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can likewise be performed with the FE method, which helps in assessing the dynamic stability of the ship. The FE method has produced improved techniques for calculating natural frequencies and mode shapes of a ship structure, so that resonance can be avoided both globally and locally. Over the past few years, there has been substantial progress towards ideal design in the ship industry by solving complex engineering problems with the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
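
For the natural-frequency step, a minimal sketch of the underlying generalized eigenproblem K·φ = ω²·M·φ is given below; the 2-DOF stiffness and mass matrices are illustrative, not a real ship model.

```python
# A minimal sketch: natural frequencies and mode shapes from assembled
# FE stiffness (K) and mass (M) matrices via the generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 2.0e6, -1.0e6],
              [-1.0e6,  1.0e6]])   # stiffness (N/m), illustrative
M = np.array([[1500.0,    0.0],
              [   0.0, 1000.0]])   # lumped mass (kg), illustrative

w2, modes = eigh(K, M)             # eigenvalues are omega^2
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies (Hz):", freqs_hz.round(2))
print("mode shapes (columns):\n", modes.round(3))
```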

Keywords: Dynamic analysis, finite element methods, ship structure, vibration analysis.

1848 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth are becoming increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Among the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture, and other hydrological variables of interest.
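
A minimal sketch of the statistical-downscaling core, assuming a simple multiple linear regression between standardized GCM predictors and a station-scale predictand; the data below are synthetic placeholders, not observed records.

```python
# A minimal sketch: fit predictand = f(GCM predictors), then project.
import numpy as np

rng = np.random.default_rng(1)
n = 365
# Hypothetical daily GCM predictors: MSL pressure (hPa), 850 hPa specific
# humidity (g/kg), near-surface temperature (K)
X = np.column_stack([rng.normal(1013, 8, n),
                     rng.normal(8, 2, n),
                     rng.normal(288, 5, n)])
# Synthetic station-scale predictand (e.g., a rainfall proxy)
y = 0.4 * X[:, 1] + 0.1 * (X[:, 2] - 288) + rng.normal(0, 0.5, n)

Xs = (X - X.mean(0)) / X.std(0)        # standardize predictors
A = np.column_stack([np.ones(n), Xs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("regression coefficients:", coef.round(3))

# Downscaled projection for one GCM scenario day:
x_new = (np.array([1008.0, 11.0, 291.0]) - X.mean(0)) / X.std(0)
print("projected predictand:", coef[0] + x_new @ coef[1:])
```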

Keywords: Climate Change, Downscaling, GCM, RCM.

1847 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri

Abstract:

The world has entered the 21st century: the technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available. High-resolution images are therefore needed in order to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e., edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly beats nearest-neighbour (NN), bilinear (BL), and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC, and the others.
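
For reference, a minimal sketch of the bilinear (BL) baseline that the comparison uses; the proposed adaptive, shape-based method itself is not reproduced here.

```python
# A minimal sketch of bilinear magnification: each output pixel is a
# weighted average of the four surrounding input pixels.
import numpy as np

def bilinear_magnify(img, s):
    h, w = img.shape
    H, W = int(h * s), int(w * s)
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_magnify(img, 2).shape)   # (8, 8)
```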

Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.

1846 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has yielded more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. The erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate with velocity varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the dominating parameter compared with solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

1845 Teacher Training Course: Conflict Resolution through Mediation

Authors: Csilla M. Szabó

Abstract:

In Hungary, society has changed a great deal over the past 25 years, and these changes can be detected in educational situations as well. The number and the intensity of conflicts have increased in most fields of life, including schools, and teachers find it difficult to handle school conflicts. What is more, the new net generation, Generation Z, has values and behavioural patterns different from those of the previous one, which can generate more serious conflicts at school, especially with teachers who were socialized mainly in a traditional teacher-student relationship. In Hungary, bill CCIV of 2011 declared the foundation of Institutes of Teacher Training in higher education institutions. One of the tasks of these Institutes is to survey the competences and needs of teachers working in public education and to provide further training and services for them according to their needs and requirements; this work is supported by the Social Renewal Operative Programme 4.1.2.B. The professors of a college carried out a questionnaire and surveyed the needs and requirements of teachers working in the region. Based on the results, the professors of the Institute of Teacher Training decided to meet these requirements and to launch short further training courses for teachers in spring 2015. One of the courses focuses on school conflict management through mediation. The aim of the pilot course is to provide conflict management techniques for teachers and to present different mediation techniques to them. The theoretical part of the course (5 hours) will enable participants to understand the main points and the advantages of mediation, while the practical part (10 hours) will involve teachers in role plays to learn how to cope with conflict situations by applying mediation. We hope that if conflicts can be reduced, school atmosphere will be influenced in a positive way and the teaching-learning process will become more successful and effective.

Keywords: Conflict resolution, generation Z, mediation, teacher training.

1844 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that the repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In order to assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, exploiting only the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half-maximum (FWHM) was measured using the Amcap and ImageJ software. The resulting FWHMs were organized into three groups. It was observed that the average of the FWHMs of group A shows almost linear behavior; therefore, it is probable that the exposure time is not relevant when the oil is kept at a constant temperature. Group B exhibits a slightly exponential trend when the temperature rises between 373 K and 393 K. Results of the t-Student test show, at a 95% confidence level (0.05), the existence of a variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
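
A minimal sketch of the FWHM measurement step, assuming a 1-D intensity profile extracted from an interferogram; the Gaussian fringe profile below is synthetic.

```python
# A minimal sketch: FWHM of a 1-D intensity profile, with linear
# interpolation at the two half-maximum crossings.
import numpy as np

x = np.linspace(-5, 5, 1001)              # position (arbitrary units)
I = np.exp(-x**2 / (2 * 1.2**2))          # synthetic Gaussian fringe profile

half = I.max() / 2.0
above = np.where(I >= half)[0]

def crossing(i0, i1):
    """x where the profile crosses half-maximum between samples i0 and i1."""
    return x[i0] + (half - I[i0]) * (x[i1] - x[i0]) / (I[i1] - I[i0])

left = crossing(above[0] - 1, above[0])
right = crossing(above[-1], above[-1] + 1)
print("FWHM:", round(right - left, 3), "expected:", round(2.355 * 1.2, 3))
```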

Keywords: Food industry, interferometric, oils, quality control.

1843 Discrete Element Modeling of the Effect of Particle Shape on Creep Behavior of Rockfills

Authors: Yunjia Wang, Zhihong Zhao, Erxiang Song

Abstract:

Rockfills are widely used in civil engineering, such as in dams, railways, and airport foundations in mountain areas. A significant long-term post-construction settlement may affect the serviceability or even the safety of rockfill infrastructures. The creep behavior of rockfills is influenced by a number of factors, such as particle size, strength and shape, water condition, and stress level. However, the effect of particle shape on rockfill creep remains poorly understood and deserves careful investigation. The particle-based discrete element method (DEM) was used to simulate the creep behavior of rockfills under different boundary conditions. Both angular and rounded particles were considered in this numerical study in order to investigate the influence of particle shape. The preliminary results showed that angular particles experience more breakage and larger creep strains under one-dimensional compression than rounded particles. On the contrary, larger creep strains were observed in the rounded specimens in the direct shear test. The mechanism responsible for this difference is that the possibility of key particles existing is higher in rounded assemblies than in angular ones. These simulations demonstrate that the influence of particle shape on the creep behavior of rockfills can be properly simulated by DEM, and DEM simulation may facilitate our understanding of the deformation properties of rockfill materials.

Keywords: Rockfills, creep behavior, particle crushing, discrete element method, boundary conditions.

1842 Some Mechanical Properties of Cement Stabilized Malaysian Soft Clay

Authors: Meei-Hoan Ho, Chee-Ming Chan

Abstract:

Soft clays are defined as cohesive soils whose water content is higher than their liquid limit. Soil-cement mixing is therefore adopted to improve the ground conditions by enhancing the strength and deformation characteristics of the soft clays. For the above-mentioned reasons, a series of laboratory tests was carried out to study some fundamental mechanical properties of cement-stabilized soft clay. The test specimens were prepared by varying the proportion of ordinary Portland cement in the soft clay sample retrieved from the test site of RECESS (Research Centre for Soft Soil). Comparisons were made between homogeneous and columnar-system specimens by relating the effects of cement stabilization at 0, 5, and 10% cement content and curing for 3, 28, and 56 days. The mechanical properties examined included one-dimensional compressibility and undrained shear strength; for both, homogeneous and columnar-system specimens were prepared to examine the effect of different cement contents and curing periods on the stabilized soil. The one-dimensional compressibility test was conducted using an oedometer, while a direct shear box was used for measuring the undrained shear strength. The higher the cement content, the greater the enhancement of the yield stress and the decrease of the compression index; the cement content of a specimen is a more influential parameter than the curing period.

Keywords: Soft soil, oedometer, direct shear box, cement-stabilised column.

1841 Critical Approach to Define the Architectural Structure of a Health Prototype in a Rural Area of Brazil

Authors: Domenico Chizzoniti, Monica Moscatelli, Letizia Cattani, Luca Preis

Abstract:

A primary healthcare facility in developing countries should be a multifunctional space able to respond to different requirements: flexibility, modularity, aggregation, and reversibility. These basic features can be better satisfied if applied to an architectural artifact that complies with the typological, figurative, and constructive aspects of the context in which it is located. The purpose of this paper is therefore to identify a procedure that can define, through a critical approach, the figurative aspects of the architectural structure of a health prototype for the marginal areas of developing countries. The application context is the rural area of the Northeast of Bahia, Brazil. The prototype is to be located in the rural district of Quingoma, in the municipality of Lauro de Freitas, a particular place where there is still a cultural fusion of black and indigenous populations. Based on a historical analysis of settlement strategies and architectural structures in spaces of public interest or collective use, this paper provides a procedure able to identify the categories and rules underlying the typological and figurative aspects, in order to detect significant and generalizable elements, as well as materials and construction techniques typically adopted in the rural areas of Brazil. The object of this work is therefore not only the recovery of certain constructive approaches but also the development of a procedure that integrates the requirements of the primary healthcare prototype with the surrounding economic, social, cultural, settlement, and figurative conditions.

Keywords: Architectural typology, Developing countries, Local construction techniques, Primary health care.

1840 Distributed Cost-Based Scheduling in Cloud Computing Environment

Authors: Rupali, Anil Kumar Jaiswal

Abstract:

Cloud computing can be defined as one of the prominent technologies that lets a user change, configure, and access services online. It is a computing paradigm that helps in saving the cost and time of a user; in practice, cloud computing is used in various fields such as education, health, and banking. Cloud computing is an internet-dependent technology, so it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling plays a vital role in a cloud computing environment: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work; CloudSim 3.0.3 is utilized to model the tasks and the distributed scheduling methods. This research discusses job scheduling for a distributed processing environment and, by exploring this issue, shows that it works with minimum time and less cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
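
A minimal sketch of the Round Robin baseline policy used for comparison: jobs are assigned to VMs in cyclic order and per-VM busy time and cost are tallied. The MIPS ratings, job lengths, and cost rate below are illustrative assumptions, not CloudSim output.

```python
# A minimal Round Robin scheduling sketch with a simple cost tally.
from itertools import cycle

vms = [{"id": i, "mips": mips, "busy": 0.0}
       for i, mips in enumerate([500, 1000, 2000])]   # hypothetical VM speeds
jobs = [20000, 45000, 9000, 60000, 30000, 15000]      # job lengths (MI)
COST_PER_SEC = 0.002                                  # hypothetical $/VM-second

rr = cycle(vms)
for length in jobs:
    vm = next(rr)                       # cyclic assignment, ignoring load
    vm["busy"] += length / vm["mips"]   # execution time on that VM

for vm in vms:
    print(f"VM{vm['id']}: busy {vm['busy']:.1f}s, "
          f"cost ${vm['busy'] * COST_PER_SEC:.4f}")
print("makespan estimate:", max(vm["busy"] for vm in vms), "s")
```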

Keywords: Physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model.

1839 Heat Forging Analysis Method on Blank Consisting of Two Metals

Authors: Takashi Ueda, Shinichi Enoki

Abstract:

Forged parts are used in automobiles because they have high strength and can be pressed into complicated shapes. If it were possible to manufacture hollow forged parts, vehicle weight could be reduced; however, hollow forged parts are currently confined to axisymmetric shapes, and hollow forged parts pressed into complicated shapes are desired. Therefore, we forge a blank in which an aluminum alloy is inserted into stainless steel. Afterwards, the aluminum alloy can be melted away by exploiting the difference in melting points, providing complex forged parts of reduced weight. It is necessary to establish a heat forging analysis method for a blank consisting of stainless steel and aluminum alloy, because this forging differs from conventional forging and the technology has not been confirmed. In this study, we compare forging experiments with numerical analysis from the viewpoint of forming load and shape after forming, and establish how to set the material temperatures of the two metals and the material properties of stainless steel in the analysis method. The temperature difference between the stainless steel and the aluminum alloy was obtained by experiment, and the material properties of stainless steel under forging conditions were obtained by compression tests. Numerical analysis using the measured temperature difference of the two metals and the experimentally obtained material properties of stainless steel was then compared with the forging experiment. Since the numerical results agreed with the experimental results, a forging analysis method for a blank consisting of two metals was established.

Keywords: Forging, lightweight, analysis, hollow.

1838 An Approach to Image Extraction and Accurate Skin Detection from Web Pages

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Tarek Abd-El-Hafeez

Abstract:

This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, using the BandObject control, we built a toolbar named 'Filter Tool Bar (FTB)' by modifying the Pavel Zolnikov implementation. The Yahoo! team provides the Yahoo! SDK API, which also supports image search and is really useful. In the proposed system, we introduce three new methods for extracting images from web pages (after loading the web page by using the proposed FTB, before loading the web page physically from the localhost, and before loading the web page from any server). These methods overcome the drawback of the regular-expressions method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with the detection of the skin color regions of the extracted images. We studied two well-known skin color detection techniques: the first is based on the RGB color space and the second on the YUV and YIQ color spaces. We modified the second technique to overcome its failure on images with complex backgrounds by using the saturation parameter, obtaining more accurate skin detection results. The performance evaluation of the proposed system in extracting images before and after loading the web page from the localhost or any server, in terms of the number of extracted images, is presented. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
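
A minimal sketch of an explicit RGB-rule skin classifier of the family the paper evaluates; these particular thresholds are the widely quoted uniform-daylight rule and are assumed here for illustration, not taken from the paper.

```python
# A minimal RGB-rule skin detector: a pixel is "skin" if it satisfies a
# set of channel inequalities (uniform-daylight thresholds assumed).
import numpy as np

def skin_mask_rgb(img):
    """img: (H, W, 3) uint8 RGB array -> boolean skin mask."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    spread = img.max(axis=-1).astype(int) - img.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

pixel = np.array([[[210, 140, 120]]], dtype=np.uint8)   # a typical skin tone
print(skin_mask_rgb(pixel))                  # [[ True]]
print("skin pixels:", int(skin_mask_rgb(pixel).sum()))
```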

Keywords: Browser Helper Object, Color spaces, Image and URL extraction, Skin detection, Web Browser events.

1837 Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques

Authors: Mrinmoy Dhara, Vivek K. Sengar, Shovan L. Chattoraj, Soumiya Bhattacharjee

Abstract:

Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain, and digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level 1R) datasets have been processed to generate different Band Ratio Composites (BRCs). ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays, and host rocks. The ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps were produced using the Spectral Angle Mapper (SAM) method. The results were validated against the geological map of the area, which shows positive agreement with the image processing outputs. This study thus concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies.
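
A minimal sketch of the SAM decision rule, which assigns each pixel spectrum to the end member with the smallest spectral angle; the five-band spectra below are made-up placeholders, not library spectra.

```python
# A minimal Spectral Angle Mapper (SAM) sketch: classify a pixel by the
# smallest angle between its spectrum and each end-member spectrum.
import numpy as np

def spectral_angle(p, e):
    """Angle (radians) between pixel spectrum p and end-member e."""
    c = np.dot(p, e) / (np.linalg.norm(p) * np.linalg.norm(e))
    return np.arccos(np.clip(c, -1.0, 1.0))

endmembers = {                                  # hypothetical 5-band spectra
    "clay":   np.array([0.30, 0.45, 0.50, 0.35, 0.20]),
    "gossan": np.array([0.15, 0.25, 0.40, 0.45, 0.50]),
}
pixel = np.array([0.28, 0.42, 0.49, 0.36, 0.22])

angles = {name: spectral_angle(pixel, e) for name, e in endmembers.items()}
label = min(angles, key=angles.get)             # smallest angle wins
print(angles, "->", label)                      # (subject to a threshold)
```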

Keywords: Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Hyperion, band ratios, alteration zones, Spectral Angle Mapper.

1836 Real-time Haptic Modeling and Simulation for Prosthetic Insertion

Authors: Catherine A. Todd, Fazel Naghdy

Abstract:

In this work, a surgical simulator is produced which enables a training otologist to conduct a virtual, real-time prosthetic insertion. The simulator provides the Ear, Nose and Throat surgeon with real-time visual and haptic responses during virtual cochlear implantation into a 3D model of the human Scala Tympani (ST). The parametric model is derived from measured data as published in the literature and accounts for human morphological variance, such as differences in cochlear shape, enabling patient-specific pre-operative assessment. The haptic modeling techniques use real physical data and insertion force measurements to develop a force model which mimics the physical behavior of an implant as it collides with the ST walls during insertion. Output force profiles acquired from the insertion studies conducted in this work are used to validate the haptic model. The simulator provides the user with real-time quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this study may also be of use to implant manufacturers for design enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods for anatomical modeling and haptic algorithm development, with focus on simulator design, development, optimization, and validation. The techniques may be transferable to other medical applications that involve prosthetic device insertions where the user's vision is obstructed.

Keywords: Haptic modeling, medical device insertion, real-time visualization of prosthetic implantation, surgical simulation.

1835 Interest of the Sequences Pseudo Noises Codes of Different Lengths for the Reduction from the Interference between Users of CDMA Network

Authors: Nerguè Kassahan Kone, Souleymane Oumtanaga

Abstract:

The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Contrary to IS-95 or cdmaOne systems (spread spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is designed for voice communication and data transmission; the downlink is particularly important because of the asymmetry of data demand, i.e., more downloading towards the mobiles than towards the base station. Moreover, UMTS uses orthogonal spreading with a variable spreading factor (OVSF, Orthogonal Variable Spreading Factor) in the downlink. This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of the other users. In the current UMTS standard, two techniques were proposed to increase downlink performance: transmit antenna diversity and space-time codes. These two techniques combat only fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce the interference between users and the impact of coloured noise and narrowband interference. In this context, where the users have synchronized long codes with variable spreading factors and the mobile is unaware of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.
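
A minimal sketch of OVSF code generation by the standard code-tree recursion, in which each code c of length N spawns [c, c] and [c, -c] at length 2N; codes at the same spreading factor are mutually orthogonal.

```python
# A minimal OVSF code-tree sketch.
import numpy as np

def ovsf_codes(sf):
    """Return all OVSF codes (rows) for spreading factor sf (power of two)."""
    codes = np.array([[1]])
    while codes.shape[1] < sf:
        codes = np.vstack([np.hstack([codes,  codes]),
                           np.hstack([codes, -codes])])
    return codes

c8 = ovsf_codes(8)
print(c8)
# Codes at the same SF are mutually orthogonal:
print("cross-correlations:\n", c8 @ c8.T)   # 8 * identity for SF = 8
```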

Keywords: DS-CDMA, multiple access interference, signal-to-interference-plus-noise ratio.

1834 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)

Authors: Ahmad Kayvani Fard, Yehia Manawi

Abstract:

Qatar’s primary source of fresh water is seawater desalination. Amongst the major processes that are commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs, allied with the maintenance and stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant: damage to marine ecosystems, large land use, and the discharge of tons of greenhouse gases, giving a large carbon footprint. A less energy-consuming technique based on membrane separation, which is being pursued to reduce both the carbon footprint and operating costs, is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted increasing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared with other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate since only water vapor is transferred, the possibility of utilizing low-grade or waste heat from the oil and gas industries to heat the feed up to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating cost. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. The objective of this study is to analyze the characteristics and morphology of a membrane suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare the DCMD distillate quality with that of other desalination techniques and standards. The SEM analysis showed that the PTFE membrane used in the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. The study of the effect of feed salinity and temperature on distillate water quality, based on ICP and IC analysis, showed that for any salinity and different feed temperatures (up to 70°C) the electrical conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection. DCMD proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e., 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
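
A minimal sketch of the DCMD driving force, estimating the trans-membrane water vapour pressure difference with the Antoine equation and a linear flux model J = Bm·ΔP; the membrane coefficient Bm is an assumed order of magnitude, not a measured value from this study.

```python
# A minimal DCMD driving-force sketch using the Antoine equation for water
# (a standard textbook constant set: T in deg C, P in mmHg, valid ~1-100 C).
import math

def p_sat_mmHg(T_c):
    A, B, C = 8.07131, 1730.63, 233.426
    return 10 ** (A - B / (C + T_c))

T_feed, T_permeate = 70.0, 25.0        # membrane-surface temperatures (C)
dP_mmHg = p_sat_mmHg(T_feed) - p_sat_mmHg(T_permeate)
dP_Pa = dP_mmHg * 133.322
print(f"vapour pressure difference: {dP_mmHg:.1f} mmHg ({dP_Pa:.0f} Pa)")

Bm = 3e-7                              # assumed membrane coefficient, kg/(m2 s Pa)
print(f"estimated flux: {Bm * dP_Pa * 3600:.2f} kg/m2/h")
```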

Keywords: Membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation (DCMD).

1833 Evaluation of Efficient CSI Based Channel Feedback Techniques for Adaptive MIMO-OFDM Systems

Authors: Muhammad Rehan Khalid, Muhammad Haroon Siddiqui, Danish Ilyas

Abstract:

This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted relative to the channel characteristics. Two types of codebook-based channel feedback techniques are used in this work: the long-term and short-term CSI at the transmitter is used for efficient channel utilization. OFDM is a powerful technique employed in communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves to be robust against delay spread. Moreover, it leads to significant data rates with improved bit error performance over links having only a single antenna at both the transmitter and receiver. The coded modulation increases the effective transmit power relative to uncoded variable-rate, variable-power MQAM for the MIMO-OFDM feedback system. Hence the proposed arrangement becomes an attractive approach for achieving enhanced spectral efficiency and improved error rate performance for next-generation high-speed wireless communication systems.

Keywords: Adaptive Coded Modulation, MQAM, MIMO, OFDM, Codebooks, Feedback.

1832 Calibration of the Discrete Element Method Using a Large Shear Box

Authors: Corné J. Coetzee, Etienne Horn

Abstract:

One of the main challenges in using the Discrete Element Method (DEM) is to specify the correct input parameter values. In general, the models are sensitive to the input parameter values, and accurate results can only be achieved if the correct values are specified. For the linear contact model, micro-parameters such as the particle density, stiffness, and coefficient of friction, as well as the particle size and shape distributions, are required. There is a need for a procedure to accurately calibrate these parameters before any attempt can be made to accurately model a complete bulk materials handling system. Since DEM is often used to model applications in the mining and quarrying industries, a calibration procedure was developed for materials that consist of relatively large (up to 40 mm in size) particles. A coarse crushed aggregate was used as the test material. Using a specially designed large shear box with a diameter of 590 mm, the confined Young's modulus (bulk stiffness) and internal friction angle of the material were measured by means of the confined compression test and the direct shear test respectively. DEM models of the experimental setup were developed, and the input parameter values were varied iteratively until a close correlation between the experimental and numerical results was achieved. The calibration process was validated by modelling the pull-out of an anchor from a bed of material; the model results compared well with the experimental measurements.

Keywords: Discrete Element Method (DEM), calibration, shear box, anchor pull-out.

1831 The Role of Vibro-Stone Column for Enhancing the Soft Soil Properties

Authors: Mohsen Ramezan Shirazi, Orod Zarrin, Komeil Valipourian

Abstract:

This study investigated the behavior of soft soils improved through the vibro replacement technique, considering their settlements and consolidation rates, as well as the applicability of this technique to various types of soils and to settlement and bearing capacity calculations.

Keywords: Bearing capacity, expansive clay, stone columns, vibro techniques.

1830 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed in an illegal way, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authorship problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying; the watermarks need to be robust to attacks and maintain data quality. In this paper we discuss two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with each approach separately in the embedding process for the cover image transformation. Both PSO and GA rely on the correlation coefficient to detect the high-energy coefficients of the original image for the watermark bits and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In these experiments, the PSO approach obtained better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach obtained PSNR equal to 50.5 and MSE equal to 0.0048 when using a population size of 100, 150 iterations, and 3×3 blocks. According to the results, we note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
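
A minimal sketch of the DWT-domain embedding step that both optimizers build on; here the coefficient locations and the embedding strength are fixed by hand, whereas in the paper PSO/GA select them. Requires PyWavelets (pywt); the image and watermark are random placeholders.

```python
# A minimal DWT watermark-embedding sketch: hide bits in selected detail
# coefficients, then invert the transform.
import numpy as np
import pywt

rng = np.random.default_rng(42)
image = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in cover image
bits = rng.integers(0, 2, 16)                           # watermark bits

cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# Embed in the 16 highest-magnitude diagonal-detail coefficients
# (choosing these locations/strengths is the optimizer's job in the paper).
alpha = 4.0
flat = cD.ravel()
idx = np.argsort(np.abs(flat))[-16:]
flat[idx] += alpha * (2 * bits - 1)     # +alpha for bit 1, -alpha for bit 0

watermarked = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print("MSE after embedding:", round(float(np.mean((watermarked - image) ** 2)), 4))
```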

Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.

1829 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is valid only if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen the non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
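
A minimal sketch of the pre-screening idea, assuming Hoyer's sparseness measure applied to each measurement's amplitude spectrum (the paper's exact index may differ): peaky, oscillatory spectra score near 1, flat noisy spectra much lower.

```python
# A minimal spectral-sparseness screening sketch.
import numpy as np

def hoyer_sparseness(v):
    """Hoyer's measure in [0, 1]: 1 = single spike, 0 = perfectly flat."""
    v = np.abs(v)
    n = v.size
    return (np.sqrt(n) - v.sum() / np.linalg.norm(v)) / (np.sqrt(n) - 1)

rng = np.random.default_rng(3)
t = np.arange(2048) / 100.0
oscillatory = np.sin(2 * np.pi * 0.8 * t) + 0.1 * rng.standard_normal(t.size)
noisy = rng.standard_normal(t.size)

for name, sig in [("oscillatory", oscillatory), ("noisy", noisy)]:
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    print(name, "sparseness:", round(hoyer_sparseness(spec), 3))
```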

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

1828 Performance Analysis of MC-SS for the Indoor BPLC Systems

Authors: Justinian Anatory

Abstract:

Power-line networks are a promising infrastructure for the provision of broadband services to end users. However, network performance is affected by stochastic channel changes due to load impedances, the number of branches, and branched line lengths. It has been proposed that multi-carrier modulation techniques such as orthogonal frequency division multiplexing (OFDM), Multi-Carrier Spread Spectrum (MC-SS), and wavelet OFDM can be used in such an environment. This paper investigates the performance of different indoor power-line network topologies using the MC-SS modulation scheme. It is observed that when a branch is added in the link between the sending and receiving ends of an indoor channel, an average power loss of 2.5 dB is found; when the branch is added at a node, the average power loss is 1 dB. Additionally, when the terminal impedances of the branch change from the line characteristic impedance to either higher or lower values, the channel performance improves tremendously. For example, changing the terminal load from the characteristic impedance (85 Ω) to 5 Ω decreases the signal-to-noise ratio (SNR) required to attain the same performance from 37 dB to 24 dB. Likewise, changing the terminal load from the channel characteristic impedance (85 Ω) to a much higher impedance (1600 Ω) decreases the SNR required to maintain the same performance from 37 dB to 23 dB. The results lead to the conclusion that MC-SS performs better than OFDM techniques in all aspects, especially when the channel is terminated in either higher or lower impedances.

Keywords: Communication channel model, broadband power-line communication, branched network, OFDM, delay spread, MC-SS, impulsive noise, load impedance.

1827 Transcritical CO2 Heat Pump Simulation Model and Validation for Simultaneous Cooling and Heating

Authors: Jahar Sarkar

Abstract:

In the present study, a steady-state simulation model has been developed to evaluate the system performance of a transcritical carbon dioxide heat pump for simultaneous water cooling and heating. Both the evaporator model (including the two-phase and superheated zones) and the gas cooler model consider the highly variable heat transfer characteristics of CO2 and the pressure drop. The numerical simulation model of the transcritical CO2 heat pump has been validated against test data obtained from experiments on the heat pump prototype. Comparison between the test results and the model prediction for the variation of system COP with compressor discharge pressure shows modest agreement, with a maximum deviation of 15% and fairly similar trends. Comparisons for other operating parameters also show fairly similar deviations between the test results and the model predictions. Finally, simulation results are presented to study the effects of operating parameters such as the heat exchanger fluid inlet temperature, discharge pressure, and compressor speed on the system performance of the CO2 heat pump, suitable for a dairy plant where simultaneous cooling at 4 °C and heating at 73 °C are required. The results show that the good heat transfer properties of CO2 in both the two-phase and supercritical regions, together with an efficient compression process, contribute substantially to the high system COPs.

Keywords: CO2 heat pump, dairy system, experiment, simulation model, validation.

1826 A Data Hiding Model with High Security Features Combining Finite State Machines and PMM method

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

Recent years have witnessed the rapid development of the Internet and telecommunication techniques, and information security is becoming more and more important. Applications such as covert communication and copyright protection stimulate research into information hiding techniques. Traditionally, encryption is used to realize communication security; however, important information is no longer protected once it is decoded. Steganography is the art and science of communicating in a way which hides the existence of the communication: important information is first hidden in host data, such as a digital image, video, or audio, and then transmitted secretly to the receiver. In this paper, a data hiding model with high security features, combining cryptography based on a finite-state sequential machine with image-based steganography for communicating information more securely between two locations, is proposed. The authors incorporate the idea of a secret key for authentication at both ends in order to achieve a high level of security. Before the embedding operation, the secret information is encrypted with the help of the finite-state sequential machine and segmented into different parts. The cover image is also segmented into different objects through normalized cut. Each part of the encoded secret information is embedded, with the help of a novel image steganographic method (the Pixel Mapping Method, PMM), in different cuts of the cover image to form different stego objects. Finally, the stego image is formed by combining the different stego objects and is transmitted to the receiver. At the receiving end, the opposite processes run to recover the original secret message.
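
A minimal sketch of the finite-state (Mealy) encoding step, where the output bit depends on the current state and the input bit, and a shared secret key seeds the start state; the transition table is illustrative, not the authors' machine.

```python
# A minimal Mealy-machine encoding sketch for the cryptographic stage.
SECRET_KEY = 0b1      # shared start state (an assumed key convention)

# state -> {input_bit: (next_state, output_bit)}; outputs are distinct per
# state, so the machine is invertible given the same key/start state.
MEALY = {
    0: {0: (0, 0), 1: (1, 1)},
    1: {0: (1, 1), 1: (0, 0)},
}

def encode(bits, start):
    state, out = start, []
    for b in bits:
        state, o = MEALY[state][b]
        out.append(o)
    return out

message = [1, 0, 1, 1, 0, 0, 1]
cipher = encode(message, SECRET_KEY)
print("encoded:", cipher)
# The receiver runs the inverse machine with the same key before extracting
# the hidden payload from the stego image.
```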

Keywords: Cover image, finite-state sequential machine, Mealy machine, Pixel Mapping Method (PMM), stego image, normalized cut (NCUT).

1825 Component Based Framework for Authoring and Multimedia Training in Mathematics

Authors: Ion Smeureanu, Marian Dardala, Adriana Reveiu

Abstract:

The new programming technologies allow for the creation of components which can be automatically or manually assembled to reach a new experience in knowledge understanding and mastering, or in acquiring skills for a specific knowledge area. The project proposes an interactive framework that permits the creation, combination, and utilization of components that are specific to mathematical training in high schools. The main objectives of the framework are:
• authoring lessons by the teacher or the students; all they need are simple operating skills for Equation Editor (or something similar, or LaTeX); the rest are just drag & drop operations, inserting data into a grid, or navigating through menus;
• allowing sonorous presentations of mathematical texts and solving hints (more easily understood by the students);
• offering graphical representations of a mathematical function edited in Equation Editor;
• storing learning objects in a database;
• storing predefined lessons (efficient for expressions and commands, the rest being calculations; this allows a high compression);
• viewing and/or modifying predefined lessons, according to the curricula.
The whole framework is focused on a mathematical expression mini-compiler, which stores code that is later used for different purposes (tables, graphics, and optimisations). Regarding programming technologies, a Visual C# .NET implementation is proposed. New and innovative digital learning objects for mathematics will be developed, capable of interpreting, contextualizing, and reacting depending on the architecture in which they are assembled.
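
A minimal sketch of the kind of expression mini-compiler the framework centres on, with Python standing in for the proposed C#: the expression is compiled once and the compiled form is reused, e.g. for tabulating values for a graph. The whitelist of functions is an assumption.

```python
# A minimal expression mini-compiler sketch: parse once, evaluate many times.
import ast
import math

def compile_expression(expr):
    """Compile 'f(x)' text into a callable, exposing only math functions."""
    tree = ast.parse(expr, mode="eval")
    code = compile(tree, "<lesson>", "eval")
    allowed = {name: getattr(math, name)
               for name in ("sin", "cos", "sqrt", "exp", "log")}
    return lambda x: eval(code, {"__builtins__": {}}, {**allowed, "x": x})

f = compile_expression("sin(x) + 0.5 * x")
# Reuse the compiled expression, e.g. to tabulate points for a graph:
table = [(x / 10.0, f(x / 10.0)) for x in range(0, 31, 10)]
print(table)
```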

Keywords: Adaptor, automatic assembly, learning component, user control.

1824 Influence of Stacking Sequence and Temperature on Buckling Resistance of GFRP Infill Panel

Authors: Viriyavudh Sim, SeungHyun Kim, JungKyu Choi, WooYoung Jung

Abstract:

Glass Fiber Reinforced Polymer (GFRP) offers a major advance in energy dissipation when used as the infill material for the seismic retrofitting of steel frames; a basic PMC infill wall system consists of two GFRP laminates surrounding a foam core. This paper presents a numerical analysis of the buckling resistance of GFRP sandwich infill panel systems under the influence of environmental temperature and the stacking sequence of the laminate skin. The mode of failure under in-plane compression is studied by means of numerical analysis on the ABAQUS platform. The parameters considered in this study are the contact length between infill and frame, the laminate stacking sequence of the GFRP skin, and the variation of mechanical properties with increasing temperature. The analysis covers four cases of simple stacking sequences over a range of temperatures. The results showed that both temperature and stacking sequence alter the performance of the entire panel system. A rise in temperature results in a decrease of the panel's strength, owing to the polymeric nature of the material. The contact length also affects the performance of the infill panel. Furthermore, the laminate stiffness can be modified by the orientation of the laminate, which can increase the infill panel strength. Hence, the optimal performance of the entire panel system can be obtained by comparing different cases of stacking sequence.

Keywords: Buckling resistance, GFRP infill panel, stacking sequence, temperature dependent.

1823 Implementation of the Quality Management System and Development of Organizational Learning: Case of Three Small and Medium-Sized Enterprises in Morocco

Authors: Abdelghani Boudiaf

Abstract:

The profusion of studies relating to the concept of organizational learning shows the importance that has been given to this concept in the management sciences. A few years ago, companies leaned towards ISO 9001 certification, which requires the implementation of a quality management system (QMS). In order for this objective to be achieved, companies must have a set of skills, which pushes them to develop learning through continuous training. The results of empirical research have shown that implementing a QMS in a company promotes the development of learning, and several types of learning are developed in this process. Given that skills development is normative in the context of the quality approach, companies are obliged to qualify and improve the skills of their human resources. Continuous training is the keystone for developing the necessary learning. To carry out continuous training, companies need to be able to identify their real needs by developing training plans based on well-defined engineering. The training process obviously goes through several stages: initially, training has a general aspect, that is, it focuses on topics and actions of a general nature; subsequently, it is carried out in a more targeted and precise way to accompany the evolution of the QMS and to support the changes decided upon each time (change of working method, change of practices, change of objectives, change of mentality, etc.). To address our research question, we opted for a qualitative research method. It should be noted that the case study method crosses several data collection techniques to explain and understand a phenomenon. Three companies were studied as part of this research work using different data collection techniques related to this method.

Keywords: Changing mentalities, continuous training, organizational learning, quality management system, skills development.
