Search results for: rational numbers.
63 Foot Anthropometry of Primary School Children in the South of Thailand
Authors: S. Rawangwong, J. Chatthong, W. Boonchouytan
Abstract:
The objective of the research was to study the foot anthropometry of children aged 7-12 years in the South of Thailand. Thirty-three dimensions were measured on 305 male and 295 female subjects across three age ranges (7-12 years old). The instrumentation consisted of four types: anthropometer, digital vernier caliper, digital height gauge and measuring tape. The mean values and standard deviations of age, height, and weight of the male subjects were 9.52 (±1.70) years, 137.80 (±11.55) cm, and 37.57 (±11.65) kg; the corresponding values for the female subjects were 9.53 (±1.70) years, 137.88 (±11.55) cm, and 34.90 (±11.57) kg respectively. Comparison of the 33 measured anthropometric dimensions between male and female subjects showed significant sex differences in size in almost all dimensions (p<0.05). The sizes and proportions of male students aged 11-12 years in the South of Thailand were also compared with those of Thai boys aged 11-12 years reported in the Thai industrial standards (Ministry of Industry, Phase 4, A.D. 2000-2001); it was concluded that male students in the South of Thailand differ significantly in size and proportion from that standard (p<0.05). All of the feet studied were classified into 4 categories according to the ratios of diagonal foot breadth to the maximum foot length and heel breadth to the foot breadth: short but thick, small but long, small, and large. The numbers of male feet classified in these categories were 86, 64, 40, and 115 persons or 28.20, 20.98, 13.11, and 37.70% respectively. For the female feet, the same values were 46, 59, 81, and 109 persons or 15.59, 20.00, 27.46, and 36.95% respectively.
Keywords: Ergonomics, foot anthropometry, male and female, primary school children
62 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties through the reservoir. A good estimation of reservoir heterogeneity, which is defined as the quality of variation in rock properties with location in a reservoir or formation, can better help in modeling the reservoir and thus can offer a better understanding of the behavior of that reservoir. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties because of the similarities of the environments in which different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: Dykstra-Parsons permeability variations (V) and the Lorenz coefficient (L), which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum engineering from that point of view. In this paper, we correlated the statistics-based Lorenz method to a petroleum concept, i.e. the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in southern Iran and discussed each, separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
Keywords: Carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L).
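The Lorenz coefficient referred to in this abstract can be computed directly from layered permeability and porosity data. The following Python sketch (with purely hypothetical layer values, not data from the paper) illustrates the usual construction of normalised cumulative flow capacity versus storage capacity; a coefficient of 0 corresponds to the homogeneous straight-line case the authors derive.

```python
import numpy as np

def lorenz_coefficient(k, phi, h):
    """Lorenz coefficient from layer permeability k, porosity phi and thickness h.

    Layers are sorted by decreasing k/phi, the cumulative flow capacity (k*h)
    is accumulated against cumulative storage capacity (phi*h), and L is twice
    the area between that curve and the 45-degree homogeneous line.
    """
    order = np.argsort(k / phi)[::-1]                 # most conductive layers first
    flow = np.cumsum(k[order] * h[order])
    storage = np.cumsum(phi[order] * h[order])
    F = np.insert(flow / flow[-1], 0, 0.0)            # normalised cumulative flow capacity
    C = np.insert(storage / storage[-1], 0, 0.0)      # normalised cumulative storage capacity
    area_under_curve = np.trapz(F, C)
    return 2.0 * (area_under_curve - 0.5)             # 0 = homogeneous, -> 1 = very heterogeneous

# Hypothetical layer data (md, fraction, m), purely for illustration
k = np.array([250.0, 80.0, 15.0, 400.0, 5.0])
phi = np.array([0.22, 0.18, 0.12, 0.25, 0.10])
h = np.array([2.0, 1.5, 3.0, 1.0, 2.5])
print(f"Lorenz coefficient L = {lorenz_coefficient(k, phi, h):.3f}")
```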
61 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy to improve the recognition accuracy on faces that are affected by nonuniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property allows the goal of lighting-invariant face recognition to be achieved. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in the order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed implementation of the region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in the decreasing order of discriminating capability using a criterion function, which is the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
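The criterion function described above, the ratio of between-class to within-class variance evaluated in the PCA domain, can be sketched as follows. This is a generic illustration on toy data, not the authors' implementation; the feature matrix here simply stands in for PCA coefficients of the merged phase congruency sub-regions.

```python
import numpy as np

def fisher_ratio(feature_values, labels):
    """Ratio of between-class to within-class variance for one feature column."""
    classes = np.unique(labels)
    overall_mean = feature_values.mean()
    n = len(feature_values)
    between = sum(np.sum(labels == c) * (feature_values[labels == c].mean() - overall_mean) ** 2
                  for c in classes) / n
    within = sum(np.sum(labels == c) * feature_values[labels == c].var()
                 for c in classes) / n
    return between / within if within > 0 else np.inf

def rank_features(X, labels):
    """Return feature indices sorted by decreasing discriminating capability."""
    scores = np.array([fisher_ratio(X[:, j], labels) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores

# Toy example: 6 samples, 4 features (standing in for PCA coefficients)
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y = np.array([0, 0, 0, 1, 1, 1])
X[:3, 2] += 3.0                      # make feature 2 clearly discriminative
order, scores = rank_features(X, y)
print("feature ranking:", order, "scores:", np.round(scores, 2))
```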
60 A Study on the Effectiveness of Alternative Commercial Ventilation Inlets That Improve Energy Efficiency of Building Ventilation Systems
Authors: Brian Considine, Aonghus McNabola, John Gallagher, Prashant Kumar
Abstract:
Passive air pollution control devices known as aspiration efficiency reducers (AER) have been developed using aspiration efficiency (AE) concepts. Their purpose is to reduce the concentration of particulate matter (PM) drawn into a building air handling unit (AHU) through alterations in the inlet design, thereby improving energy consumption. In this paper an examination is conducted into the effect of installing a deflector system around an AER-AHU inlet for both forward- and rear-facing orientations relative to the wind. The results of the study found that these deflectors are an effective passive control method for reducing AE at various ambient wind speeds over a range of microparticles of varying diameter. The deflector system was found to induce a large wake zone at low ambient wind speeds for a rear-facing AER-AHU, resulting in significantly lower AE in comparison to the case without deflectors. As the wind speed increased, both cases contained a wake zone but had much lower concentration gradients with the deflectors. For the forward-facing models, the deflector system at low ambient wind speed was preferred at higher Stokes numbers but there was negligible difference as the Stokes number decreased. Similarly, there was no significant difference at higher wind speeds across the Stokes number range tested. The results demonstrate that a deflector system is a viable passive control method for the reduction of ventilation energy consumption.
Keywords: Aspiration efficiency, energy, particulate matter, ventilation.
59 Innovative Design Considerations for Adaptive Spacecraft
Authors: K. Parandhama Gowd
Abstract:
Space technologies have changed the way we live in present-day society and manage many aspects of our daily affairs through remote sensing, navigation and communications. Further, defense and military usage of spacecraft has increased tremendously along with civilian purposes. The number of satellites deployed in space in Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary Orbit (GEO) has gone up. The dependency on remote sensing and operational capabilities will almost invariably be exploited more and more in future. Every country is acquiring spacecraft in one way or another for its daily needs, and spacecraft numbers are likely to increase significantly and create spacecraft traffic problems. The aim of this research paper is to propose innovative design concepts for adaptive spacecraft. The main idea here is to improve existing methods of spacecraft design and development so as to further improve design considerations for futuristic adaptive spacecraft with inbuilt features for automatic adaptability and self-protection. In other words, the innovative design considerations proposed here are to have future spacecraft with self-organizing capabilities for orbital control and protection from anti-satellite weapons (ASAT). Here, an attempt is made to propose the design and development of futuristic spacecraft for 2030 and beyond, when tremendous advancements in VLSI, miniaturization, and nano antenna array technologies, including nanotechnologies, are expected.
Keywords: Satellites, low earth orbit, medium earth orbit, geostationary earth orbit, self-organizing control system, anti-satellite weapons, orbital control, radar warning receiver, missile warning receiver, laser warning receiver, attitude and orbit control systems, command and data handling.
58 Crash Severity Modeling in Urban Highways Using Backward Regression Method
Authors: F. Rezaie Moghaddam, T. Rezaie Moghaddam, M. Pasbani Khiavi, M. Ali Ghorbani
Abstract:
Identifying and classifying intersections according to severity is very important for the implementation of safety-related countermeasures, and effective models are needed to compare and assess severity. Highway safety organizations have considered intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of crashes with high severities still occur on highways. Investigation of the factors that influence crashes enables engineers to carry out calculations in order to reduce crash severity. Previous studies lacked a model capable of simultaneously illustrating the influence of human factors, road, vehicle, weather conditions and traffic features, including traffic volume and flow speed, on crash severity. Thus, this paper is aimed at developing models to illustrate the simultaneous influence of these variables on crash severity in urban highways. The models represented in this study have been developed using binary logit models. SPSS software has been used to calibrate the models. It must be mentioned that the backward regression method in SPSS was used to identify the significant variables in the model. Considering the obtained results, it can be concluded that the main factors increasing crash severity in urban highways are driver age, movement with reverse gear, technical defects of the vehicle, vehicle collisions with motorcycles and bicycles, collisions with bridges, frontal impact collisions, frontal-lateral collisions and multi-vehicle crashes.
Keywords: Backward regression, crash severity, speed, urban highways.
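As a rough illustration of the backward regression idea described above, the sketch below fits a binary logit model and repeatedly drops the least significant predictor. The data are synthetic and the variable names only echo factors mentioned in the abstract; this is not the authors' SPSS workflow.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_logit(X, y, alpha=0.05):
    """Backward elimination: repeatedly drop the least significant predictor."""
    cols = list(X.columns)
    while cols:
        model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=False)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model, cols
        cols.remove(worst)
    return None, []

# Hypothetical crash data purely for illustration (1 = severe crash)
rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),
    "reverse_gear": rng.integers(0, 2, n),
    "technical_defect": rng.integers(0, 2, n),
    "frontal_impact": rng.integers(0, 2, n),
})
logit_p = -3 + 0.03 * X["driver_age"] + 1.2 * X["frontal_impact"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model, kept = backward_logit(X, y)
print("variables retained:", kept)
```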
57 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization
Authors: V. H. Mankar, T. S. Das, S. K. Sarkar
Abstract:
In this paper, we have proposed a novel blind watermarking architecture towards its hardware implementation in VLSI. In order to facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number in spite of their excellent resiliency against signal impairments. This is because of the computational cost and complexity associated with their different filter banks and lifting techniques. The concept of cellular automata theory has been incorporated in order to form a new transform domain technique, i.e. the Cellular Automata Transform (CAT). Since CA provide spreading sequences having very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual blocks of the algorithm using CA provide better results than some other methods, irrespective of the hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator, and CA ECC are the requisite blocks that are developed not only to meet the reliable hardware requirements but also for the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the existing allocated bandwidth in the data transmission channel in a more efficient manner.
Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.
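A cellular automaton can serve as the PN sequence generator mentioned above because simple local rules produce bit streams with low cross-correlation. The sketch below uses the classic rule 30 automaton as a generic example; it is not the authors' specific CA configuration.

```python
import numpy as np

def ca_pn_sequence(seed, rule=30, steps=64):
    """Generate a pseudorandom bit stream from a 1D binary cellular automaton.

    Each step applies the given Wolfram rule to the whole register (periodic
    boundary) and emits the centre cell, a classic CA-based PN generator.
    """
    rule_bits = [(rule >> i) & 1 for i in range(8)]
    cells = np.array(seed, dtype=np.uint8)
    centre = len(cells) // 2
    out = []
    for _ in range(steps):
        left, right = np.roll(cells, 1), np.roll(cells, -1)
        idx = (left << 2) | (cells << 1) | right          # 3-bit neighbourhood code
        cells = np.array([rule_bits[i] for i in idx], dtype=np.uint8)
        out.append(int(cells[centre]))
    return out

seed = [0] * 31
seed[15] = 1                      # single-seed initial state
bits = ca_pn_sequence(seed)
print("".join(map(str, bits)))
```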
56 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196
Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek
Abstract:
It has been established that microRNAs (miRNAs) play an important role in gene expression by post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will render more insight for miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction for zebrafish. This algorithm is high-throughput but brings many false positives (noise). Since validation of a large set of targets through laboratory experiments is very time consuming, computational methods for miRNA target validation should be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the miRanda-predicted target pool. This is achieved by using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. It was found that validated targets do not necessarily associate with the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics to validated target genes and therefore represent high-confidence target candidates.
Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.
55 Optimization of a Bioremediation Strategy for an Urban Stream of Matanza-Riachuelo Basin
Authors: María D. Groppa, Andrea Trentini, Myriam Zawoznik, Roxana Bigi, Carlos Nadra, Patricia L. Marconi
Abstract:
In the present work, a remediation bioprocess based on the use of a local isolate of the microalga Chlorella vulgaris immobilized in alginate beads is proposed. This process was shown to be effective for the reduction of several chemical and microbial contaminants present in Cildáñez stream, a water course that is part of the Matanza-Riachuelo Basin (Buenos Aires, Argentina). The bioprocess, involving the culture of the microalga in autotrophic conditions in a stirred-tank bioreactor supplied with a marine propeller for 6 days, allowed a significant reduction of Escherichia coli and total coliform numbers (over 95%), as well as of ammoniacal nitrogen (96%), nitrates (86%), nitrites (98%), and total phosphorus (53%) contents. Pb content was also significantly diminished after the bioprocess (95%). Standardized cytotoxicity tests using Allium cepa seeds and Cildáñez water pre- and post-remediation were also performed. The germination rate and mitotic index of onion seeds imbibed in Cildáñez water subjected to the bioprocess were similar to those observed in seeds imbibed in distilled water and significantly superior to those registered when untreated Cildáñez water was used for imbibition. Our results demonstrate the potential of this simple and cost-effective technology to remove urban-water contaminants, offering as an additional advantage the possibility of an easy biomass recovery, which may become a source of alternative energy.
Keywords: Bioreactor, bioremediation, Chlorella vulgaris, Matanza-Riachuelo basin, microalgae.
54 Understanding Grip Choice and Comfort Whilst Hoovering
Authors: S. R. Kamat, A. Yoxall, C. Craig, M. J. Carré, J. Rowson
Abstract:
The hand is one of the essential parts of the body for carrying out Activities of Daily Living (ADLs). Individuals use their hands and fingers in everyday activities in both the workplace and the home. Hand-intensive tasks require diverse and sometimes extreme levels of exertion, depending on the action, movement or manipulation involved. The authors have undertaken several studies looking at grip choice and comfort. It is hoped that providing an improved understanding of discomfort during ADLs will aid in the design of consumer products. Previous work by the authors outlined a methodology for calculating pain frequency and pain level for a range of tasks. In an online survey undertaken by the authors with regard to manipulating objects during everyday tasks, tasks involving gripping were seen to produce the highest levels of pain and discomfort. Questioning of the participants showed that cleaning tasks were the ADLs that produced the highest levels of discomfort, with women feeling higher levels of discomfort than men. This paper looks at the methodology for calculating pain frequency and pain level with particular regard to gripping activities. This methodology shows that activities such as mopping, sweeping and hoovering have the highest pain frequency and pain level, at 3112.5 occurrences per month, while the pain level per person doing this action was 0.78. The study then uses thin-film force sensors to analyze the force distribution in the hand whilst hoovering and compares this for differing grip styles and genders. Women were seen to have more of their hand under a higher pressure than men when undertaking hoovering. This suggests that women may feel greater discomfort than men since their hand is at a higher pressure more of the time.
Keywords: hoovering, grip, pain
53 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis
Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic
Abstract:
What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social medium for political communication has created opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country. This has led to an increasing body of research on the topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendency. Accordingly, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. This is based on analyzing the users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. In order to do this, two different strategies for the sentiment analysis are proposed: one is based on Positive and Negative words Matching (PNM) and the second one is based on a Neural Networks Strategy (NNS). The complete TAS strategy has been performed in a Big-Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy in analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy to obtain the global trend in a political context made up of multiple parties with an error lower than 23%.
Keywords: Political tendency, prediction, sentiment analysis, Twitter.
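The Positive and Negative words Matching (PNM) idea can be illustrated with a few lines of Python. The lexicons and party hashtags below are invented for illustration and are far smaller than anything used in practice; the neural-network variant (NNS) is not shown.

```python
# Minimal PNM sketch: count lexicon hits in a tweet, classify it, aggregate per party.
POSITIVE = {"great", "good", "support", "win", "excellent"}
NEGATIVE = {"bad", "corrupt", "fail", "lies", "worst"}
PARTY_TAGS = {"#partyA": "Party A", "#partyB": "Party B"}   # hypothetical hashtags

def tweet_sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def aggregate(tweets):
    """Per-party support = positive minus negative tweets mentioning that party."""
    totals = {p: 0 for p in PARTY_TAGS.values()}
    for text in tweets:
        s = tweet_sentiment(text)
        for tag, party in PARTY_TAGS.items():
            if tag in text.lower():
                totals[party] += 1 if s == "positive" else -1 if s == "negative" else 0
    return totals

tweets = [
    "Great debate tonight, full support for #partyA",
    "#partyB policies are a fail and their leader lies",
    "Attending the #partyA rally later today",
]
print({t: tweet_sentiment(t) for t in tweets})
print(aggregate(tweets))
```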
52 Experimental Investigation of Heat Transfer and Flow of Nano Fluids in Horizontal Circular Tube
Authors: Abdulhassan Abd. K, Sattar Al-Jabair, Khalid Sultan
Abstract:
We have measured the pressure drop and convective heat transfer coefficient of water-based Al (25 nm), Al2O3 (30 nm) and CuO (50 nm) nanofluids flowing through a uniformly heated circular tube in the fully developed laminar flow regime. The experimental results show that the nanofluid friction factor data are in good agreement with the analytical prediction from the Darcy equation for single-phase flow. The experimental results were reduced to the form of Reynolds, Rayleigh and Nusselt numbers, and they show how the local Nusselt number and temperature are distributed along the non-dimensional axial distance from the tube entry. The study established that the nanofluids behave as Newtonian fluids, based on the linear relationship between shear stress and shear rate, for three series of nanofluids (Al, Al2O3 and CuO in water) with concentrations ranging from 0.25 to 2.5 vol.%. In addition, four properties of the nanofluids, including viscosity, specific heat and density, were measured experimentally so as to check the validity of the property equations developed by researchers in this area; the difference between the measured values and those equations did not exceed 3.5%. The study also demonstrated that the increases in the heat transfer coefficient for the three nanofluids (Al, Al2O3, and CuO in water) were 45%, 32%, and 25% respectively with insulation and 36%, 23%, and 19% without insulation, showing that the insulated case gives the better increase in heat transfer. Three types of nanoparticles were used, one metallic nanoparticle and two oxide nanoparticles, in order to establish which gives the best increase in heat transfer.
Keywords: Newtonian, NUR factor, Brownian motion
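The single-phase baseline that the friction factor data were compared against follows directly from the laminar Darcy relation f = 64/Re and the Darcy-Weisbach pressure drop. A minimal sketch, with assumed illustrative property values rather than the measured nanofluid properties:

```python
def reynolds(rho, u, d, mu):
    """Pipe-flow Reynolds number."""
    return rho * u * d / mu

def darcy_friction_laminar(re):
    """Darcy friction factor for fully developed laminar pipe flow (f = 64/Re)."""
    return 64.0 / re

def pressure_drop(f, length, d, rho, u):
    """Darcy-Weisbach pressure drop over a straight tube."""
    return f * (length / d) * 0.5 * rho * u ** 2

# Illustrative (assumed) properties for a dilute water-based nanofluid
rho, mu = 1030.0, 0.0011         # kg/m^3, Pa.s
d, length, u = 0.004, 1.0, 0.25  # m, m, m/s

re = reynolds(rho, u, d, mu)
f = darcy_friction_laminar(re)
print(f"Re = {re:.0f}, f = {f:.4f}, dp = {pressure_drop(f, length, d, rho, u):.1f} Pa")
```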
51 The Use of Mobile Phone as Enhancement to Mark Multiple Choice Objectives English Grammar and Literature Examination: An Exploratory Case Study of Preliminary National Diploma Students, Abdu Gusau Polytechnic, Talata Mafara, Zamfara State, Nigeria
Authors: T. Abdulkadir
Abstract:
Most often, the marking and assessment of multiple choice examinations have been regarded by many as a cumbersome and herculean task to accomplish manually in Nigeria. This is usually in obvious nexus to the fact that mass numbers of candidates are known to take the same examination simultaneously. Eventually, marking such a mammoth number of booklets daunts even the fastest paid examiners, who often undertake the job with the resulting consequences of stress and boredom. This paper explores the evolution of the approach, as well as its set aim to envision and transform marking the multiple choice objectives-type examination into a thing of creative recreation, or perhaps a more relaxing activity, via the use of the mobile phone. A more “pragmatic” method was employed to achieve this work, rather than a formal “in-depth research” based approach, due to the “novelty” of the mobile-smartphone e-marking scheme discovery. Moreover, being an evolutionary scheme, no recent academic work sharing the same topic concept as the ‘use of a cell phone as an e-marking technique’ was found online; thus the dearth of even miscellaneous citations in this work. Anticipated future advancements are what steered the motive of this paper, which laid the fundamental proposition. The paper introduces for the first time the concept of mobile-smartphone e-marking, the steps to achieve it, as well as the merits and demerits of the technique, all spelt out in the subsequent pages.
Keywords: Cell phone, e-marking scheme, mobile phone, mobile-smart phone, multiple choice objectives, smartphone.
50 Application of Fuzzy Logic Approach for an Aircraft Model with and without Winglet
Authors: Altab Hossain, Ataur Rahman, Jakir Hossen, A.K.M. P. Iqbal, SK. Hasan
Abstract:
The measurement of aerodynamic forces and moments acting on an aircraft model is important for the development of wind tunnel measurement technology to predict the performance of the full-scale vehicle. The potential of an aircraft model with and without winglet, and its aerodynamic characteristics with NACA wing No. 65-3-218, have been studied using a subsonic wind tunnel with a 1 m × 1 m rectangular test section, 2.5 m long, at the Aerodynamics Laboratory, Faculty of Engineering, University Putra Malaysia. Focusing on analyzing the aerodynamic characteristics of the aircraft model, two main issues are studied in this paper. First, a six-component wind tunnel external balance is used for measuring lift, drag and pitching moment. Secondly, tests are conducted on the aircraft model with and without winglet in two configurations at Reynolds numbers 1.7×10⁵, 2.1×10⁵, and 2.5×10⁵ for different angles of attack. The fuzzy logic approach is found to be efficient for the representation, manipulation and utilization of aerodynamic characteristics. Therefore, the primary purpose of this work was to investigate the relationship between lift and drag coefficients, with free-stream velocities and angles of attack, and to illustrate how fuzzy logic might play an important role in the study of the lift aerodynamic characteristics of an aircraft model with the addition of certain winglet configurations. Results of the developed fuzzy logic were compared with the experimental results. For the lift coefficient analysis, the means of the actual and predicted values were 0.62 and 0.60 respectively. The correlation between actual and predicted values (from the FLS model) of the lift coefficient at different angles of attack was found to be 0.99. The mean relative error of actual and predicted values was found to be 5.18% for the velocity of 26.36 m/s, which is less than the acceptable limit (10%). The goodness of fit of the predicted values was 0.95, which is close to 1.0.
Keywords: Wind tunnel; Winglet; Lift coefficient; Fuzzy logic.
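A minimal Mamdani-style fuzzy inference sketch of the kind of mapping described above, from angle of attack and free-stream velocity to lift coefficient, is shown below. All membership breakpoints and rules are assumptions chosen for illustration, not the tuned fuzzy logic system (FLS) of the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with vertices a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed membership breakpoints (illustrative only)
aoa_sets = {"low": (-4, 0, 8), "med": (4, 8, 12), "high": (8, 16, 20)}     # angle of attack, deg
vel_sets = {"slow": (15, 20, 30), "fast": (25, 35, 40)}                    # velocity, m/s
cl_sets = {"small": (0.0, 0.2, 0.5), "medium": (0.3, 0.6, 0.9), "large": (0.7, 1.0, 1.3)}

# Assumed Mamdani rule base: (aoa label, velocity label) -> lift-coefficient label
rules = {
    ("low", "slow"): "small", ("low", "fast"): "small",
    ("med", "slow"): "medium", ("med", "fast"): "medium",
    ("high", "slow"): "medium", ("high", "fast"): "large",
}

def predict_cl(aoa, vel):
    """Min for rule firing strength, max aggregation, centroid defuzzification."""
    x = np.linspace(0.0, 1.2, 601)
    aggregated = np.zeros_like(x)
    for (a_lbl, v_lbl), out_lbl in rules.items():
        strength = min(tri(aoa, *aoa_sets[a_lbl]), tri(vel, *vel_sets[v_lbl]))
        aggregated = np.maximum(aggregated, np.minimum(strength, tri(x, *cl_sets[out_lbl])))
    if aggregated.sum() == 0:
        return float("nan")
    return float(np.sum(x * aggregated) / np.sum(aggregated))

print(f"Predicted CL at 10 deg, 26.36 m/s: {predict_cl(10.0, 26.36):.2f}")
```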
49 Application of Pulse Doubling in Star-Connected Autotransformer Based 12-Pulse AC-DC Converter for Power Quality Improvement
Authors: Rohollah Abdollahi, Alireza Jalilian
Abstract:
This paper presents a pulse doubling technique in a 12-pulse ac-dc converter which supplies direct torque controlled induction motor drives (DTCIMDs) in order to have better power quality conditions at the point of common coupling. The proposed technique increases the number of rectification pulses without significant changes in the installation and yields harmonic reduction on both the ac and dc sides. The 12-pulse rectified output voltage is accomplished via two paralleled six-pulse ac-dc converters, each of them consisting of a three-phase diode bridge rectifier. An autotransformer is designed to supply the rectifiers. The design procedure of the magnetics is such that it makes them suitable for retrofit applications where a six-pulse diode bridge rectifier is being utilized. Independent operation of the paralleled diode-bridge rectifiers, i.e. the dc-ripple re-injection methodology, requires a Zero Sequence Blocking Transformer (ZSBT). Finally, a tapped interphase reactor is connected at the output of the ZSBT to double the pulse number of the output voltage up to 24 pulses. The aforementioned structure improves power quality criteria at the ac mains and makes them consistent with the IEEE-519 standard requirements for varying loads. Furthermore, near unity power factor is obtained for a wide range of DTCIMD operation. A comparison is made between the 6-pulse, 12-pulse, and proposed converters from the viewpoint of power quality indices. Results show that the input current total harmonic distortion (THD) is less than 5% for the proposed topology at various loads.
Keywords: AC-DC converter, star-connected autotransformer, power quality, 24-pulse rectifier, pulse doubling, direct torque controlled induction motor drive (DTCIMD).
48 A Modern Review of the Spintronic Technology: Fundamentals, Materials, Devices, Circuits, Challenges, and Current Research Trends
Authors: Muhibul Haque Bhuyan
Abstract:
Spintronics, also termed spin electronics or spin transport electronics, is a new kind of technology which exploits the two fundamental degrees of freedom of electrons, their spin state and their charge state, to enhance the operational speed of data storage and the transfer efficiency of the device. Thus, it seems an encouraging technology to combat most of the prevailing complications in orthodox electron-based devices. This novel technology possesses the capacity to mix semiconductor microelectronics and magnetic devices' functionalities into one integrated circuit. Traditional semiconductor microelectronic devices use only the electronic charge to process information based on binary numbers, 0 and 1. Due to the incessant shrinking of transistor size, we are reaching the final limit of 1 nm or so. At this stage, fabrication and other device operational processes will become challenging as quantum effects come into play. In this situation, we should find an alternative future technology, and spintronics may be such a technology to transfer and store information. This review article provides a detailed discussion of spintronic technology: fundamentals, materials, devices, circuits, challenges, and current research trends. At first, the fundamentals of spintronics technology are discussed. Then the types, properties, and other issues of spintronic materials are presented. After that, fabrication and working principles, as well as application areas and advantages/disadvantages of spintronic devices and circuits, are explained. Finally, the current challenges, current research areas, and prospects of spintronic technology are highlighted. This is a new paradigm of electronic cum magnetic devices built on the charge and spin of electrons. Modern engineering and technological advances in the search for new materials for this technology give us hope that this will be a very promising technology in the upcoming days.
Keywords: Spintronic technology, spin, charge, magnetic devices, spintronic devices, spintronic materials.
47 The Effect of the Side-Weir Crest Height to Scour in Clay-Sand Mixed Sediments
Authors: F. Ayça Varol Saraçoğlu, Hayrullah Ağaçcıoğlu
Abstract:
Experimental studies to investigate the depth of scour were conducted at a side-weir intersection located in the 180° curved flume of the Hydraulic Laboratory of Yıldız Technical University, Istanbul, Turkey. Side weirs were located at the middle of the straight part of the main channel. Three different lengths (25, 40 and 50 cm) and three different weir crest heights (7, 10 and 12 cm) of the side weir were placed at the side-weir station. There is no scour when the material is only kaolin. Therefore, the cohesive bed was prepared by properly mixing clay material (kaolin) with 31% sand in all experiments. Following a 24 h consolidation time, in order to observe the effect of flow intensity on the scour depth, experiments were carried out for five different upstream Froude numbers in the range of 0.33-0.81. As a result of this study, the relation between scour depth and upstream flow intensity as a function of time has been established. The longitudinal velocities decreased along the side weir towards the downstream due to the overflow over the side weirs. At the beginning, the scour depth increases rapidly with time and then asymptotically approaches constant values in all experiments for all side-weir dimensions, as in non-cohesive sediment. Thus, the scour depth reached equilibrium conditions. The time to equilibrium depends on the approach flow intensity and the dimensions of the side weirs. For different heights of the weir crest, dimensionless scour depths increased with increasing upstream Froude number. Equilibrium scour depths formed with the 7 cm side-weir crest height were higher than those with the 12 cm side-weir crest height. This means that when the side-weir crest height increased, equilibrium scour depths decreased. Although the upstream side of the scour hole is almost vertical, the downstream side of the hole is inclined.
Keywords: Clay-sand mixed sediments, scour, side weir.
46 Improving Production Traits for El-Salam and Mandarah Chicken Strains by Crossing II-Estimation of Crossbreeding Effects on Egg Production and Egg Quality Traits
Authors: Ayman E. Taha, Fawzy A. Abd El-Ghany
Abstract:
A crossbreeding experiment was carried out between two Egyptian strains of chickens, namely Mandarah (MM) and El-Salam (SS). The two purebred strains and their reciprocal crosses (MS and SM) were used to estimate the effect of crossing on egg laying and egg quality parameters, direct additive and maternal additive effects, as well as heterosis and direct heterosis percentages for the studied traits. Results revealed that the SM cross recorded the highest significant averages for most egg production traits, including body weight at sexual maturity (BW1), egg numbers at the first 90 days, 42 weeks and 65 weeks of age (EN1, EN2 and EN3, respectively), egg weight at 90 days and 42 weeks of age (EW1 and EW2), egg mass at 90 days, 42 weeks and 65 weeks of age (EM1, EM2 and EM3, respectively), feed conversion ratio for egg production at 90 days, 42 weeks and 65 weeks of age (FCR1, FCR2 and FCR3, respectively), and fertility and commercial hatchability percentages. Moreover, the SM line reached the age of sexual maturity (ASM) and the period to the first ten eggs (Pf10 egg) at an earlier age than the other lines. On the other hand, crossing did not notably improve egg quality parameters. Estimates and percentages of the direct additive effect (GI) were negative for most of the studied traits except for EN1, EN2, EN3, FCR3, fertility, and scientific and commercial hatchability percentages, which were positive. Estimates and percentages of maternal heterosis (Gm) were positive for all the studied egg production traits, except for BW2, BW3, ASM, Pf10, FCR1, FCR2, FCR3 and scientific hatchability, which were negative. Also, positive estimates and percentages of heterosis were recorded for most egg production and egg quality traits. It was concluded that using the SS strain as a sire line and the MM strain as a dam line results in the best new commercial egg line (SM), which is of great interest for poultry breeders in Egypt.
Keywords: Mandarah and El-Salam chickens, Crossing, Egg production, Egg quality, Crossbreeding components.
45 Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process
Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.
Abstract:
In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for the surface hardness, while the ball diameter and the number of tool passes are found to be significant for the surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings for surface hardness and surface roughness is calculated as 0.16% and 1.58% respectively. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, which results in an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, which results in a decrease in the surface roughness of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, and this is in correlation with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, micro Vickers hardness tester, optical microscopy and an X-ray diffractometer are used to characterize the modified surface layer.
Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.
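The second-order response surface model underlying this kind of study can be fitted by ordinary least squares once a design matrix with linear, interaction and squared terms is built. The sketch below uses randomly generated coded factor settings and a synthetic response in place of the actual central composite design runs.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Full second-order model: intercept, linear, two-factor interaction and squared terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Hypothetical coded settings (force, ball diameter, initial roughness, passes)
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(20, 4))
hardness = 260 + 25 * X[:, 0] + 15 * X[:, 1] - 8 * X[:, 0] ** 2 + rng.normal(0, 2, 20)

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, hardness, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((hardness - pred) ** 2) / np.sum((hardness - hardness.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 2))
print("R^2 =", round(r2, 3))
```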
44 Optical Flow Technique for Supersonic Jet Measurements
Authors: H. D. Lim, Jie Wu, T. H. New, Shengxian Shi
Abstract:
This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed with the supersonic jet operated in cold mode, a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As implementation of the optical flow technique to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation test offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Despite these challenges, however, this supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer.
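For reference, the Horn-Schunck scheme adapted by the authors iterates between local flow averaging and a brightness-constancy correction. The following NumPy sketch is the textbook formulation applied to a synthetic pair of frames; it is not the adapted algorithm or the Schlieren data of the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Classic Horn-Schunck estimate of the dense flow (u, v) between two frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Spatial and temporal derivatives (Horn & Schunck averaging kernels)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # Kernel for the local flow average used in the iterative update
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * common
        v = v_avg - Iy * common
    return u, v

# Synthetic test: a bright blob shifted by one pixel between two frames
frame1 = np.zeros((64, 64))
frame1[28:36, 28:36] = 1.0
frame2 = np.roll(frame1, shift=1, axis=1)
u, v = horn_schunck(frame1, frame2, alpha=1.0, n_iter=200)
print("mean |u| over the displaced blob:", round(abs(u[28:36, 28:36].mean()), 2))
```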
43 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach
Authors: Yusuf Garba Baba
Abstract:
The construction industry and the housing subsector are fraught with risks that have the potential of negatively impacting on the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks have been managed. The recent paradigm shift by the subsector to the use of a formal risk management approach, in contrast to hitherto developed rules of thumb, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: review of literature for similar projects, expert consultation and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed in carrying out the relative prioritization assessment of the risk factors using computer-based Analytical Hierarchical Process (AHP) software. The results show that 19 out of the 50 risks significantly impact on housing development projects. The study concludes that although a significant number of risk factors have been identified as having relevance to and impact on housing construction projects, the economic risk group and, in particular, ‘changes in demand for houses’ is prioritised by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption and use of the combination of the multi-technique identification framework and the AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
Keywords: Risk identification, risk assessment, analytical hierarchical process, multi-criteria decision.
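The AHP step behind the prioritization can be reduced to extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency ratio. The matrix below compares four hypothetical risk groups purely for illustration; it is not survey data from this study.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_priorities(A):
    """Priority vector (principal eigenvector) and consistency ratio of a pairwise matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    lam_max = vals[k].real
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)                 # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0        # consistency ratio (should be < 0.1)
    return w, cr

# Hypothetical pairwise comparisons of four risk groups (economic, technical, ...)
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 3.0, 2.0],
    [1/5, 1/3, 1.0, 1/2],
    [1/4, 1/2, 2.0, 1.0],
])
w, cr = ahp_priorities(A)
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```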
42 Six Sigma Solutions and its Benefit-Cost Ratio for Quality Improvement
Authors: S. Homrossukon, A. Anurathapunt
Abstract:
This is an applied research study presenting the improvement of production quality using six sigma solutions and an analysis of the benefit-cost ratio. The case of interest is the production of concrete tiles. This production had faced the problem of a high proportion of nonconforming products from inappropriate surface coating and had low process capability based on the strength property of the tile. Surface coating and tile strength are the characteristics most critical to the quality of this product. The improvements followed the five stages of six sigma solutions. After the improvement, the production yield was improved to 80%, as the target required, and the proportion of defective products from the coating process was remarkably reduced from 29.40% to 4.09%. The process capability based on the strength quality was increased from 0.87 to 1.08, as customer oriented. The improvement was able to save material losses of 3.24 million baht, or 0.11 million dollars. The benefits from the improvement were analyzed from (1) the reduction in the number of nonconforming tiles, valued at factory price, from the surface coating improvement and (2) the materials saved from the increase in process capability. The benefit-cost ratio of the overall improvement was as high as 7.03. The investment was not yet profitable during the define, measure, analyze and early improve stages, after which the ratio kept increasing. This was because there are no direct benefits in the define, measure, and analyze stages of six sigma, since these three stages mainly determine the cause of the problem and its effects rather than improve the process. The benefit-cost ratio starts to exist in the improve stage and grows thereafter. Within each stage, the individual benefit-cost ratio was much higher than the cumulative one, as costs had accumulated since the first stage of six sigma. Consideration of the benefit-cost ratio during the improvement project helps in making decisions for cost saving of similar activities during the improvement and for new projects. In conclusion, the determination of benefit-cost ratio behavior throughout the six sigma implementation period provides useful data for managing quality improvement for optimal effectiveness. This is an additional outcome from the regular proceeding of six sigma.
Keywords: Six Sigma Solutions, Process Improvement, Quality Management, Benefit Cost Ratio
41 The Application of Line Balancing Technique and Simulation Program to Increase Productivity in Hard Disk Drive Components
Authors: Alonggot Limcharoen, Jintana Wannarat, Vorawat Panich
Abstract:
This study aims to investigate the balancing of the number of operators (line balancing technique) in the production line of hard disk drive components in order to increase efficiency. At present, the trend of using hard disk drives has continuously declined, leading to limits in a company's revenue potential. It is important to improve and develop the production process to create market share and to be able to compete with competitors on higher value and quality. Therefore, an effective tool is needed to support such matters. In this research, the Arena program was applied to analyze the results both before and after the improvement, and the proposed changes were verified in the model before proceeding with the real process. There were 14 work stations with 35 operators altogether in the RA production process where this study was conducted. In the actual process, the average production time was 84.03 seconds per product piece (from timing 30 observations at each work station), along with a rating assessment implementing the Westinghouse principles; this gave a rating of 123%, under an assumption of a 5% allowance time. Consequently, the standard time was 108.53 seconds per piece. The takt time, calculated as the available working time in one day divided by customer demand, was 3.66 seconds per piece. From these values, the proper number of operators was 30 people, which meant five operators should be eliminated in order to improve the efficiency of the production process. After that, a production model was created from the actual process using the Arena program to confirm model reliability; the outputs from the simulation were compared with the original (actual) process, and this comparison indicated that the model output was reliable. Then, worker numbers and their job responsibilities were remodeled in the Arena program. Lastly, the efficiency of the production process was enhanced from 70.82% to 82.63%, in line with the target.
Keywords: Hard disk drive, line balancing, simulation, Arena program.
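The operator count quoted above follows from standard line balancing arithmetic. The short check below reproduces the figures in the abstract; since the daily demand and available working time are not stated, the takt time of 3.66 s is simply taken as given.

```python
import math

observed_time = 84.03   # s per piece, average from the time study
rating = 1.23           # Westinghouse performance rating (123%)
allowance = 0.05        # 5% allowance

standard_time = observed_time * rating * (1 + allowance)
takt_time = 3.66        # s per piece, available working time / daily demand (given)

operators = standard_time / takt_time
print(f"standard time = {standard_time:.2f} s per piece")      # ~108.5 s, as in the abstract
print(f"required operators = {operators:.2f} -> {math.ceil(operators)}")
```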
40 Psychological Impact of Radiation Versus Its Physiological Effects: Radiation Workers’ Perspective in Medical Centers
Authors: Muhammad Waqar, Touqir Ahmad Afridi, Quratulain Soomro
Abstract:
Radiation is a ghost causing unimaginable physical damage, but its harm is not inevitable. The panic created by previously reported worst-case scenarios, i.e. Three Mile Island, Fukushima and Chernobyl, has adversely affected the attitude of radiation workers towards the profession. The psychological effect of radiation-related catastrophes creates an invisible barrier that reduces the efficiency of radiation workers. Careful handling and proper monitoring of radiation decrease its hazards and show that the psychological impairment caused by radiation is many fold more adverse than its physiological damage. Thermoluminescent dosimeter (TLD) badges with unique identity numbers were provided to 36 radiation workers for a period of one year (2021). TLDs were read quarterly, and doses were recorded for every radiation worker. Annual doses were recorded and compared with national and international standards. Moreover, the period in which an individual worker would be expected to reach the one-year limit of 20 mSv was also calculated. The highest radiation dose for a radiation worker in 2021 was found to be 3.2 mSv, which was 16% of the permissible annual dose limit. The average occupational radiation doses ranged from 1.0 mSv to 3.20 mSv. 64% of the employees did not exceed 10% of the annual limit, receiving less than 2 mSv. The least time for reaching 20 mSv was found to be 6.25 years, for the hot-lab technician. As a whole, the 20 mSv completion period ranged from 6.25 to 20 years. We concluded that the annual occupational radiation doses were well within the permissible limits of the Pakistan Nuclear Regulatory Authority (PNRA) and the International Commission on Radiological Protection (ICRP). The fear of radiation is unnecessary; it creates reluctance among workers towards performing their assigned duties and is also not favorable for the institute. It must be abolished through education and training sessions.
Keywords: TLD, thermoluminescent dosimeter, psychological impact, radiation dose, annual dose limit, PNRA, ICRP, IAEA.
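The dose figures above reduce to simple arithmetic against the 20 mSv annual limit. A minimal check using the reported extreme values:

```python
ANNUAL_LIMIT_MSV = 20.0            # occupational annual dose limit used in the study

def fraction_of_limit(annual_dose_msv):
    """Annual dose as a percentage of the 20 mSv limit."""
    return 100.0 * annual_dose_msv / ANNUAL_LIMIT_MSV

def years_to_reach_limit(annual_dose_msv):
    """Years a worker would need, at the current rate, to accumulate one year's limit."""
    return ANNUAL_LIMIT_MSV / annual_dose_msv

for dose in (3.2, 1.0):            # highest and lowest annual doses reported above
    print(f"{dose:.1f} mSv/yr -> {fraction_of_limit(dose):.0f}% of limit, "
          f"{years_to_reach_limit(dose):.2f} years to reach 20 mSv")
```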
39 Determinants of the Income of Household Level Coir Yarn Labourers in Sri Lanka
Authors: G. H. B. Dilhari, A. A. D. T. Saparamadu
Abstract:
Sri Lanka is one of the prominent countries for coir production. Coir is one of the by-products of the coconut, and the coir industry is considered to be one of the traditional industries in Sri Lanka. Because of the inherent nature of the coir industry, labourers play a significant role in the coir production process. The study analyzed the determinants of the income of household level coir yarn labourers. The study was conducted in the Kumarakanda Grama Niladhari division. Simple random sampling was used to generate a sample of 100 household level coir yarn labourers, and a structured questionnaire, personal interviews, and discussions were used to gather the required data. The obtained data were statistically analyzed by using the Statistical Package for Social Science (SPSS) software. Mann-Whitney U and Kruskal-Wallis tests were performed for mean comparison. The findings revealed that the household level coir yarn industry is dominated by female workers, and it was identified that fewer workers have engaged in this industry as their main occupation. In addition to that, elderly participation in the industry is higher than younger participation, and most workers have engaged in the industry as a source of extra income. Level of education, method of engagement, satisfaction, engagement in the industry by the next generation, support from the government, method of government support, working hours per day, being employed in the industry as a main job, number of completed units per day, suffering from job-related diseases and type of disease were related to the income level of household level coir yarn labourers. The recommendations for the industry to flourish in future include technological transformation for coir yarn production, strengthening the raw material base and regulating the raw material supply, the introduction of new technologies, markets and training programmes, the establishment of a labourers' association, the initiation of micro credit schemes and better consideration of job-related diseases.
Keywords: Coir, Income, Sri Lanka.
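The Mann-Whitney U and Kruskal-Wallis comparisons reported above are straightforward to reproduce with SciPy. The income figures below are invented for illustration; they are not the survey data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(3)

# Hypothetical monthly incomes grouped by whether coir yarn work is the main job
income_main_job = rng.normal(15000, 3000, 40)
income_extra = rng.normal(11000, 3000, 60)
u_stat, p_u = mannwhitneyu(income_main_job, income_extra, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_u:.4f}")

# Three hypothetical education levels compared with Kruskal-Wallis
primary = rng.normal(10000, 2500, 30)
secondary = rng.normal(12000, 2500, 40)
tertiary = rng.normal(14000, 2500, 30)
h_stat, p_h = kruskal(primary, secondary, tertiary)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_h:.4f}")
```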
38 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.- Pomel) in Tomato Crop
Authors: G. Disciglio, F. Lops, A. Carlucci, G. Gatta, A. Tarantino, E. Tarantino
Abstract:
Phelipanche ramosa is the most damaging obligate flowering parasitic weed on a wide range of cultivated plant species. The semi-arid regions of the world are considered the main centers of this parasitic plant, which causes heavy infestations. This is due to its production of high numbers of seeds (up to 200,000) that remain viable for extended periods (up to 20 years). In this study, 13 treatments for the control of Phelipanche were carried out, which included agronomic, chemical, and biological treatments and the use of resistant plant methods. In 2014, a trial was performed at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy), on processing tomato (cv ‘Docet’) grown in pots filled with soil taken from a field that was heavily infested by P. ramosa. The tomato seedlings were transplanted on May 8, 2014, into a sandy-clay soil (USDA). A randomized block design with 3 replicates (pots) was adopted. During the growing cycle of the tomato, at 70, 75, 81 and 88 days after transplantation, the number of P. ramosa shoots emerged in each pot was determined. The tomato fruit were harvested on August 8, 2014, and the quantitative and qualitative parameters were determined. All of the data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and means were compared using Tukey's test. The data show that none of the treatments studied provided complete control of Phelipanche. However, the virulence of the attacks was mitigated by some of the treatments tried: radicon biostimulant, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone, and the resistant tomato genotype. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the “seed bank” of the parasite in the soil.
Keywords: Control methods, Phelipanche ramosa, tomato crop.
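The ANOVA and Tukey comparison of treatments described above can be sketched as follows. The shoot counts are invented for illustration (the study used JMP, not Python), and only three of the thirteen treatments are mocked up.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
# Hypothetical numbers of emerged P. ramosa shoots per pot (3 replicate pots each)
untreated = rng.poisson(12, 3)
biostimulant = rng.poisson(7, 3)
sulfur = rng.poisson(5, 3)

f_stat, p_val = f_oneway(untreated, biostimulant, sulfur)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate([untreated, biostimulant, sulfur])
groups = ["untreated"] * 3 + ["biostimulant"] * 3 + ["sulfur"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```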
37 Protective Effect of Melissa officinalis L. against Malathion Toxicity and Reproductive Impairment in Male Rats
Authors: M. M. Seif, F. A. Khalil, A. A. K. Abou Arab, A. S. Abdel- Aziz, M. A. Abou Donia, Sh. R. Mohamed
Abstract:
Malathion (ML) is a well-known pesticide commonly used in many agricultural and non-agricultural processes. Its toxicity has been attributed primarily to the accumulation of acetylcholine (ACh) at nerve junctions, due to the inhibition of acetylcholinesterase (AChE). The aim of the current research was to study the protective effect of melissa plant extract against reproductive impairment induced by malathion in 32 male albino rats. The animals were divided into four groups (8 in each) that were given malathion (27 mg/kg; 1/50 of the LD50 for an oral dose) and/or Melissa officinalis (MO) extract (200 mg/kg/day) by the gavage technique. The sperm count, sperm motility, sperm morphology, and FSH, LH, and testosterone levels were determined in testes homogenate at the end of the experiment. It is worth reporting that rats treated with melissa extract did not show a significant difference when compared with the control group, while rats given malathion alone had significantly lower sperm counts and sperm motility and significantly higher abnormal sperm numbers than the untreated control rats, as well as significantly lower serum FSH, LH, and testosterone levels compared with the control group. Administration of melissa extract restored all of the mentioned parameters towards the control values, and the melissa extract had a strong positive protective effect against malathion toxicity. The results of the biological parameters were confirmed by the histological examination of rat testes, which indicated that both the control and melissa groups showed normal seminiferous tubules, while the testicular tissues of the malathion group had necrosis, edema in the seminiferous tubules and degeneration of the spermatogonial cells lining the seminiferous tubules with incomplete spermatogenesis. The use of melissa against malathion improved the histological picture, showing normal seminiferous tubules with complete spermatogenesis, and almost no histopathological changes could be noted.
Keywords: Malathion, Melissa officinalis L., Reproductive toxicity, Rats.
Enabling Factors towards Safety Improvement for Industrialised Building System (IBS)
Authors: Nasyairi Mat Nasir, Zulhabri Ismail, Faridah Ismail, Sharifah Nur Aina Syed Alwee, Masnizan Che Mat
Abstract:
The utilisation of the Industrialised Building System (IBS) in the construction industry leads to safer site conditions, since a minimum number of workers is required on-site, and it promotes timely material delivery, systematic component storage, and the reduction of construction material and waste. These matters are promoted in the Construction Industry Master Plan (CIMP 2006-2015). However, the enabling factors of IBS that foster a safer working environment are not well defined; on that basis, this research was conducted. The purpose of this paper is to discuss and identify the relevant factors towards safety improvement for IBS. A quantitative study by way of questionnaire surveys was conducted with 314 construction companies. The target group was Grade 5 to Grade 7 contractors registered with the Construction Industry Development Board (CIDB) which specialise in IBS. The findings disclosed seven factors linked to the safety improvement of IBS construction sites in Malaysia: historical, economic, psychological, technical, procedural, organisational, and environmental factors. The psychological factor, which includes self-awareness and the influence of workmates' behaviour, ranked as the highest and most crucial factor contributing to a safer IBS construction site. It was followed by the organisational factor, where the project management style encourages safety efforts. Among the procedural factors, training was found to be one of the significant factors in improving the safety culture of IBS construction sites. Another important finding, which formed part of the environmental factor, was the storage of IBS components, for which proper planning of the layout would contribute to a safer site condition. To conclude, in order to improve the safety of IBS construction sites, well-trained and skilled workers are required for IBS projects; thus, proper training is essential and should be emphasised.
Keywords: Enabling Factors, Industrialised Building System, Safety Improvement.
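Ranking survey-derived factors, as described above, is commonly done by computing a mean score or relative importance index (RII) per factor from Likert-scale responses. The sketch below is a minimal illustration of such a ranking; the response values are invented placeholders and the RII formula is a common convention, not necessarily the method used in the study.

```python
# Minimal sketch: ranking enabling factors from Likert-scale survey responses.
# Response values are invented placeholders; the RII formula is a common
# convention and is not claimed to be the study's own scoring method.

# responses[factor] = list of ratings on a 1-5 Likert scale
responses = {
    "psychological":  [5, 5, 4, 5, 4],
    "organisational": [4, 5, 4, 4, 4],
    "procedural":     [4, 4, 4, 3, 4],
    "environmental":  [3, 4, 4, 3, 4],
}

def rii(ratings, max_scale=5):
    """Relative importance index: sum of ratings / (max scale * number of respondents)."""
    return sum(ratings) / (max_scale * len(ratings))

ranking = sorted(responses, key=lambda f: rii(responses[f]), reverse=True)
for factor in ranking:
    print(f"{factor:15s} RII = {rii(responses[factor]):.3f}")
```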
Using Non-Linear Programming Techniques in Determination of the Most Probable Slip Surface in 3D Slopes
Authors: M. M. Toufigh, A. R. Ahangarasr, A. Ouria
Abstract:
Among the many different methods used for optimizing engineering problems, mathematical (numerical) optimization techniques are very important, because they can easily be applied and are consistent with most engineering problems. Many studies have been carried out on the stability analysis of three-dimensional (3D) slopes, the related probable slip surfaces, and the determination of factors of safety, but in most of them the force equilibrium equations, as in simplified 2D methods, are considered in only two directions. In other words, to reduce the mathematical calculations and for simplification, the force equilibrium equation in the third direction is omitted. This point is considered in only a few previous studies, and most of them give only a factor of safety without making enough effort to find the most probable slip surface. In this study, the shapes of the slip surfaces are modeled and the safety factors are calculated considering the force equilibrium equations in all three directions; the moment equilibrium equation is also satisfied in the slip direction, and the shape of the most probable slip surface is determined using non-linear programming techniques. The model used in this study is a 3D model composed of three upper surfaces that can cover all defined and probable slip surfaces. The meshing process is carried out in such a way that all elements are prismatic with quadrilateral cross sections, and the safety factor is defined on the quadrilateral surface at the base of each element, which forms part of the whole slip surface. The method used to find the most probable slip surface is the non-linear programming method, in which the objective function to be optimized is the factor of safety, a function of the soil properties and the coordinates of the nodes on the probable slip surface. The main reason for using the non-linear programming method in this research is its quick convergence to the desired solution. The final results show good compatibility with the previously used classical and 2D methods, as well as a reasonable convergence speed.
Keywords: Non-linear programming, numerical optimization, slope stability, 3D analysis.
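As a rough illustration of the non-linear programming idea described above, the sketch below minimizes a factor-of-safety objective over the coordinates of the nodes describing a candidate slip surface, using a general-purpose optimizer. The toy_fs function and the geometry are invented stand-ins; the study's actual objective is built from full 3D force and moment equilibrium, which is not reproduced here.

```python
# Minimal sketch of the non-linear programming idea: treat the factor of safety
# as an objective function of slip-surface node coordinates and minimize it.
# toy_fs below is an invented stand-in, NOT the study's equilibrium-based FS.
import numpy as np
from scipy.optimize import minimize

n_nodes = 10
x = np.linspace(0.0, 20.0, n_nodes)          # horizontal positions of surface nodes (fixed)

def toy_fs(depths):
    """Invented smooth function standing in for the factor of safety.
    A real implementation would compute FS from 3D force/moment equilibrium."""
    curvature = np.sum(np.diff(depths, 2) ** 2)       # penalize jagged surfaces
    depth_term = np.sum((depths - 5.0) ** 2) / n_nodes
    return 1.0 + 0.05 * depth_term + 0.5 * curvature

d0 = np.full(n_nodes, 3.0)                            # initial guess for node depths
bounds = [(0.5, 10.0)] * n_nodes                      # keep the surface inside the slope body
result = minimize(toy_fs, d0, method="L-BFGS-B", bounds=bounds)

print("Minimum factor of safety (toy):", round(result.fun, 3))
print("Critical surface depths:", np.round(result.x, 2))
```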
Research of the Load Bearing Capacity of Inserts Embedded in CFRP under Different Loading Conditions
Authors: F. Pottmeyer, M. Weispfenning, K. A. Weidenmann
Abstract:
Continuous carbon fiber reinforced plastics (CFRP) exhibit a high application potential for lightweight structures due to their outstanding specific mechanical properties. Embedded metal elements, so-called inserts, can be used to join structural CFRP parts. Drilling of the components to be joined can be avoided by using inserts; in consequence, no bearing stress is anticipated. This is a distinctive benefit of embedded inserts, since continuous CFRP have low shear and bearing strength. This paper investigates the load bearing capacity after pre-induced damage from impact tests and thermal cycling. In addition, the mechanical properties during dynamic high-speed pull-out testing were characterized under different loading velocities. It has been shown that the load bearing capacity increases by up to 100% at very high velocities (15 m/s) in comparison with quasi-static loading conditions (1.5 mm/min). Residual strength measurements identified the influence of thermal loading and pre-induced mechanical damage; for both, the residual strength was evaluated afterwards by quasi-static pull-out tests. Taking DIN EN 6038 into account, a large decrease in force occurs at an impact energy of 16 J, with significant damage to the laminate. Lower impact energies of 6 J, 9 J, and 12 J do not decrease the measured residual strength, although the laminate is visibly damaged, as indicated by cracks on the rear side. To evaluate the influence of thermal loading, the specimens were placed in a climate chamber and exposed to various numbers of temperature cycles. One cycle took 1.5 hours, from -40 °C to +80 °C. It could be shown that as few as 10 temperature cycles decrease the load bearing capacity by up to 20%. Further reduction of the residual strength with an increasing number of thermal cycles was not observed, which implies that the maximum damage to the composite is already induced after 10 temperature cycles.
Keywords: Composite, joining, inserts, dynamic loading, thermal loading, residual strength, impact.
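The reported changes (roughly +100% load bearing capacity at 15 m/s versus quasi-static loading, and about a 20% drop after 10 thermal cycles) reduce to simple strength-retention ratios. The short sketch below shows how such ratios would be computed from measured pull-out forces; the force values are invented placeholders, not the paper's measurements.

```python
# Minimal sketch: relative change of pull-out force versus a quasi-static reference.
# All force values are hypothetical placeholders for illustration only.

def change_percent(reference_force, measured_force):
    """Relative change of a measured pull-out force versus a reference, in percent."""
    return 100.0 * (measured_force - reference_force) / reference_force

quasi_static = 4.0       # kN, hypothetical quasi-static pull-out force
high_speed = 8.0         # kN, hypothetical force at 15 m/s loading velocity
after_cycles = 3.2       # kN, hypothetical residual strength after 10 thermal cycles

print(f"Rate effect:      {change_percent(quasi_static, high_speed):+.0f}%")
print(f"Thermal cycling:  {change_percent(quasi_static, after_cycles):+.0f}%")
```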