Search results for: reduction factor.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3018

108 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random-effect model for time-to-event data, in which the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were drawn from the records of patients with liver cirrhosis who were scheduled for liver transplantation at Imam Khomeini Hospital in Iran and were followed up for at least seven years. To determine the factors affecting cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied, and parametric models, namely the Exponential and Weibull distributions, were considered for the survival time. Data analysis was performed using R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The mean age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second most frequent cause. Overall, the mean seven-year survival was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull survival models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions appears desirable.
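
For reference, the shared-frailty hazard described in this abstract can be written as follows. This is the standard formulation of a gamma frailty model with a parametric (Exponential or Weibull) baseline hazard, not the authors' exact notation.

% Gamma frailty model: the frailty z_i multiplies the baseline hazard h_0(t).
% x_i are the concomitant variables (age, serum bilirubin, serum albumin, encephalopathy).
\[
  h_i(t \mid z_i) = z_i \, h_0(t) \, \exp(\beta^{\top} x_i), \qquad
  z_i \sim \mathrm{Gamma}\!\left(\tfrac{1}{\theta}, \tfrac{1}{\theta}\right),
  \quad \mathbb{E}[z_i] = 1, \; \operatorname{Var}(z_i) = \theta .
\]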

Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.

107 Maize Tolerance to Natural and Artificial Infestation with Diabrotica virgifera virgifera Eggs

Authors: Snežana T. Tanasković, Sonja M. Gvozdenac, Branka D. Popović, Vesna M. Đurović, Matthias Erb

Abstract:

Western corn rootworm (WCR, Diabrotica virgifera virgifera, Coleoptera: Chrysomelidae) is economically the most important pest of maize worldwide. The natural WCR population is already very abundant in Serbian fields and keeps increasing each year. Tolerance is recognized by larger root size and greater root regrowth; severe larval injury causes a lack of compensatory regrowth and leads to reduced plant growth and yield. The aim of this research was to evaluate the tolerance of the commercial Serbian maize hybrid NS 640 under natural WCR infestation and under artificial infestation, and to obtain information about its tolerance to WCR larval feeding in two consecutive years. Field experiments were conducted in 2015 and 2016 in Bečej (Vojvodina province, Serbia). In the experimental field, 96 plants were selected, marked and arranged in 48 pairs, each consisting of two plants. The first plant of each pair was artificially infested with 4 mL of WCR egg suspension in agar (550 eggs plant-1) in the root zone (D plant); the second served as a control (C plant) and was injected with 4 mL of distilled water in the root zone. The experimental field was inspected weekly. Hybrid tolerance was assessed based on root injury level and root mass. Root injury was rated using the Node-Injury Scale 1-6 during the last field inspection (September-October). Comparing root injuries on D and C plants in 2015, more severe damage was recorded on D plants (12 plants rated 5 and 17 plants rated 6) than on C plants (2 plants rated 5 and 8 plants rated 6). Also, the highest number of plants with healthy roots (rate 1) was registered in the control (25 plants), while only 4 D plants were rated at injury level 1. In 2016, root injuries caused by WCR larvae on D and C plants did not differ significantly. The reason is the difference in climatic conditions between the years: 2015 was extremely dry and more suitable for WCR larval development and movement in the soil than 2016, so more severe damage appeared on the artificially infested (D) plants. Root mass was strongly correlated with the level of root injury but did not differ significantly between D and C plants in either year.

Keywords: D. v. virgifera, maize, root injury, tolerance.

106 Submicron Laser-Induced Dot, Ripple and Wrinkle Structures and Their Applications

Authors: P. Slepicka, N. Slepickova Kasalkova, I. Michaljanicova, O. Nedela, Z. Kolska, V. Svorcik

Abstract:

Polymers exposed to laser or plasma treatment, or modified by wet methods that enable the introduction of nanoparticles or biologically active species such as amino acids, may find many applications as biocompatible or anti-bacterial materials; conversely, they can be used to decrease the number of cells on the treated surface, which opens applications in single-cell units. For the experiments, two types of materials were chosen: polyethersulphone (PES) as a representative of non-biodegradable polymers and polyhydroxybutyrate (PHB) as a biodegradable material. Exposure of a solid substrate to a laser well below the ablation threshold can lead to the formation of various surface structures. The ripples have a period roughly comparable to the wavelength of the incident laser radiation, and their dimensions depend on many factors, such as the chemical composition of the polymer substrate, the laser wavelength and the angle of incidence. Biopolymers, by contrast, may significantly change their surface roughness and thus influence cell compatibility. The focus was on the surface treatment of PES and PHB by a pulsed excimer KrF laser with a wavelength of 248 nm. The changes in physicochemical properties, surface morphology, surface chemistry and ablation of the exposed polymers were studied for both PES and PHB. Several analytical methods, including atomic force microscopy, gravimetry, scanning electron microscopy and others, were used for the analysis of the treated surface. It was found that the combination of certain input parameters leads not only to the formation of an optimal narrow pattern but also to the combination of a ripple and a wrinkle-like structure, which could be an optimal candidate for cell attachment. The interactions of different types of cells with the laser-exposed surface were studied, and laser treatment was found to be a major factor in the change of wettability/contact angle. The combination of optimal laser energy and pulse number was used to construct a surface with an anti-cellular response. With this simple laser treatment, we were able to prepare a biopolymer surface with higher roughness and thus significantly influence the growth area of different types of cells (U-2 OS cells).

Keywords: Polymer treatment, laser, periodic pattern, cell response.

105 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate using computational fluid dynamics (CFD) simulations that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on time-changing skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method uses as inputs only the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to pressure sensor drifts, which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves by a factor of 8 the accuracy of the standard PVT retrievals using look-up tables with tabulated data from the National Institute of Standards and Technology.
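
As a rough illustration of the kind of retrieval a PVT method performs (this is not the authors' Improved PVT algorithm, whose details are not given in the abstract), the stored mass can be estimated from tank volume, pressure and temperature through a real-gas equation of state. The compressibility lookup below is a placeholder assumption; real retrievals interpolate tabulated NIST data.

# Minimal PVT mass-retrieval sketch (illustrative only, not the Improved PVT method).
R_XE = 63.3  # specific gas constant of xenon, J/(kg*K)

def compressibility(p_bar, t_kelvin):
    """Placeholder for a Z(P, T) lookup, e.g. interpolated from NIST tables."""
    return 1.0  # crude ideal-gas assumption, for illustration only

def pvt_mass(volume_m3, p_bar, t_celsius):
    """Estimate stored propellant mass from externally measured pressure and temperature."""
    p_pa = p_bar * 1e5
    t_k = t_celsius + 273.15
    return p_pa * volume_m3 / (compressibility(p_bar, t_k) * R_XE * t_k)

# Example with illustrative numbers: a 50 L tank at 100 bar and 40 degrees C.
print(round(pvt_mass(0.050, 100.0, 40.0), 2), "kg")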

Keywords: Electric propulsion, mass gauging, propellant, PVT, xenon.

104 Study of the Effects of Conventional and Low Input Production Systems on the Energy Efficiency of Silybum marianum L.

Authors: M. Haj Seyed Hadi, M. Darzi, E. Sharifi Ashoorabadi

Abstract:

Medicinal plants are among the most suitable crops for ecological production systems because of their role in human health and because of the aim of sustainable agriculture to improve ecosystem efficiency and product quality. Calculations include the energy output (energy content of the seed) and the energy inputs (consumption of fertilizers, pesticides, labor, machines, fuel and electricity). The ratio of output energy to input energy is called the output/input energy ratio, or energy efficiency. One way to quantify essential aspects of agricultural development is the energy flow method, and the output/input energy ratio has been proposed as the most comprehensive single indicator for pursuing the objective of sustainability. Silybum marianum L. is one of the most important medicinal plants in Iran and plays an effective role in the health of Iran's growing population. The objective of this investigation was to determine the energy efficiency of conventional and low input production systems of milk thistle. The investigation was carried out in the springs of 2005-2007 at the Rangelands Research Station in the Hamand-Damavand region of Iran. The experiment was laid out as a split-split plot based on a randomized complete block design with three replications. Treatments were two production systems (conventional and low input) in the main plots, three planting times (25 March, 4 April and 14 April) in the sub-plots and two seed types (improved and native Khoozestan) in the sub-sub plots. Results showed that the energy efficiency of the conventional production system, because of higher inputs and lower seed yield, was lower than that of the low input production system. Seed yield was 1199.5 and 1888 kg/ha in the conventional and low input systems, respectively. Total energy inputs and outputs for the conventional system were 10068544.5 and 7060515.9 kcal, respectively; for the low input system they were 9533885.6 and 11113191.8 kcal. The energy efficiency for seed production in the conventional and low input systems was therefore 0.7 and 1.16, respectively, so the energy efficiency of milk thistle seed production in the conventional system was 39.6 percent lower than in the low input system. Higher energy efficiency was also found for the earlier planting time (25 March) and the native Khoozestan seed.
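
The efficiency figures quoted above follow directly from the reported energy totals; a short check using only the numbers given in the abstract:

energy = {
    "conventional": {"input_kcal": 10068544.5, "output_kcal": 7060515.9},
    "low_input":    {"input_kcal": 9533885.6,  "output_kcal": 11113191.8},
}
for system, e in energy.items():
    # energy efficiency = output/input energy ratio
    print(system, round(e["output_kcal"] / e["input_kcal"], 3))
# conventional -> 0.701, low_input -> 1.166 (reported in the abstract as 0.7 and 1.16)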

Keywords: energy efficiency, milk thistle, production system

103 Comparative Analysis of Chemical Composition and Biological Activities of Ajuga genevensis L. in in vitro Culture and Intact Plants

Authors: Naira Sahakyan, Margarit Petrosyan, Armen Trchounian

Abstract:

One of the tasks of contemporary biotechnology, pharmacology and other fields of human activity is to obtain biologically active substances from plants. These substances are essential in the treatment of many diseases due to their high therapeutic value and lack of visible side effects. However, the possibility of obtaining such metabolites is sometimes limited by the decline of wild-growing plants, which is why plant cell cultures are of great interest as alternative sources of biologically active substances. Besides, under monitored cultivation it is possible to obtain substances that are not synthesized by plants in nature. An isolated culture of Ajuga genevensis with high growth activity and regeneration ability was obtained using MS nutrient medium. The agar-diffusion method showed that aqueous extracts of the callus culture had high antimicrobial activity towards various gram-positive (Bacillus subtilis A1WT; B. mesentericus WDCM 1873; Staphylococcus aureus WDCM 5233; Staph. citreus WT) and gram-negative (Escherichia coli WKPM M-17; Salmonella typhimurium TA 100) microorganisms. The broth dilution method revealed that the minimal and half-maximal inhibitory concentrations against E. coli corresponded to extract concentrations of 70 μg/mL and 140 μg/mL, respectively. According to photochemiluminescent analysis, callus tissue extracts of leaf and root origin showed higher antioxidant activity than the same quantity of A. genevensis intact plant extract. The A. genevensis intact plant and callus culture extracts showed no cytotoxic effect on the K-562 suspension cell line of human chronic myeloid leukemia. GC-MS analysis showed deep differences between the qualitative and quantitative composition of the callus culture and intact plant extracts. Hexacosane (11.17%), n-hexadecanoic acid (9.33%) and 2-methoxy-4-vinylphenol (4.28%) were the main components of the intact plant extracts, while 10-methylnonadecane (57.0%), methoxyacetic acid 2-tetradecyl ester (17.75%) and 1-bromopentadecane (14.55%) were the main components of the A. genevensis callus culture extracts. The data obtained indicate that callus culture of A. genevensis can be used as an alternative source of biologically active substances.

Keywords: Ajuga genevensis, antibacterial activity, antioxidant activity, callus cultures.

102 Behavioral Mapping and Post-Occupancy Evaluation of Meeting-Point Design in an International Airport

Authors: Meng-Cong Zheng, Yu-Sheng Chen

Abstract:

Meeting is a pervasive kind of interaction that often occurs between arriving passengers and the people picking them up. However, the meeting point set up at Taoyuan International Airport is too far from the arrival exit, often causing passengers to stop and search near the exit, and when the number of people waiting increases at rush hour, this often results in chaos in the waiting area. This study tried to identify the key factors that help passengers and pick-ups find each other quickly, and then implemented several design proposals, based on behavioral mapping and post-occupancy evaluation, to improve the meeting behavior of passengers and pick-ups and enhance their meeting efficiency in unfamiliar environments. The research site is the reception hall of the second terminal of Taoyuan International Airport. Behavioral observation and mapping were carried out on the entry of inbound passengers into the welcome space, including the distribution of the crowd leaning against the separation wall in the waiting area, the meeting behavior, and the interaction between inbound passengers and pick-ups. The space planning and signage design were then revised, and post-occupancy evaluation was used to verify the effectiveness of the space plan and signage design. This study found that passengers ignore the existing meeting-point designs, which are placed on distant pillars at both ends. The position of the information screen affects the area where pick-ups linger, causing them to block the passengers' paths. Pick-ups prefer to wait where it is easy to watch incoming passengers and where it is closest to the mode of transport they will take when leaving. Large groups tend to gather next to landmarks, while smaller groups spread over a wide waiting area in the lobby. The location of the meeting point chosen by the pick-ups is related to the incoming passenger's line of sight. Finally, this study proposes an improved meeting-point design that places traffic information within it, so that most passengers can see the traffic information as they enter the country. The pick-up desk was also redesigned to improve the efficiency of passenger meetings.

Keywords: Meeting point design, post-occupancy evaluation, behavioral mapping, international airport.

101 An Induction Motor Drive System with Intelligent Supervisory Control for Water Networks Including Storage Tank

Authors: O. S. Ebrahim, K. O. Shawky, M. A. Badr, P. K. Jain

Abstract:

This paper describes an efficient, low-cost, high-availability induction motor (IM) drive system with intelligent supervisory control for water distribution networks including a storage tank. To increase operational efficiency and reduce cost, the IM drive system includes a main pumping unit and an auxiliary voltage source inverter (VSI) fed unit. The main unit comprises a smart star/delta starter, a regenerative fluid clutch, a switched VAR compensator, and a hysteresis liquid-level controller. A three-state energy saving mode (ESM) is defined at no load, and a logic algorithm is developed for best energy cost reduction. To reduce voltage sag, the supervisory controller operates the switched VAR compensator upon motor starting. To provide a smart star/delta starter at low cost, a method based on current sensing is developed for interlocking, malfunction detection, and life-cycle counting, and is used to synthesize an improved fuzzy logic (FL) based availability assessment scheme. Furthermore, a recurrent neural network (RNN) full state estimator is proposed to provide a sensor fault-tolerant algorithm for the feedback control. The auxiliary unit operates at low flow rates and improves the system's efficiency and flexibility for distributed generation during islanding mode. Compared with a doubly-fed IM, the proposed system ensures 30% working throughput under main motor/pump fault conditions, higher efficiency, and a marginal cost difference, which is critically important in the case of water networks. Theoretical analysis, computer simulations, a cost study, and an efficiency evaluation using timely cascaded energy-conservative systems are performed on an IM experimental setup to demonstrate the validity and effectiveness of the proposed drive and control.
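
A hysteresis liquid-level controller of the kind mentioned above can be sketched as a simple two-threshold state machine. The tank thresholds and the boolean pump command below are assumptions for illustration, not details taken from the paper.

# Minimal hysteresis (on/off) level controller sketch for a storage tank.
class HysteresisLevelController:
    def __init__(self, low_level_m=2.0, high_level_m=4.5):
        self.low = low_level_m    # start pumping below this level
        self.high = high_level_m  # stop pumping above this level
        self.pump_on = False

    def update(self, level_m):
        # Between the two thresholds the previous command is held (hysteresis band).
        if level_m <= self.low:
            self.pump_on = True
        elif level_m >= self.high:
            self.pump_on = False
        return self.pump_on

ctrl = HysteresisLevelController()
for level in (4.8, 3.0, 1.9, 3.0, 4.6):
    print(level, ctrl.update(level))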

Keywords: Artificial Neural Network, ANN, Availability Assessment, Cloud Computing, Energy Saving, Induction Machine, IM, Supervisory Control, Fuzzy Logic, FL, Pumped Storage.

100 Records of Lepidopteron Borers (Lepidoptera) on Stored Seeds of Indian Himalayan Conifers

Authors: Pawan Kumar, Pitamber Singh Negi

Abstract:

Many regeneration failures in conifers are attributed to heavy insect attack and to pathogens during the period of seed formation and under storage conditions. Conifer berry and seed insects occur throughout the known range of the hosts and also limit the production of seed for nursery stock; on occasion, entire seed crops are lost to insect attack. The berries and seeds of both species studied here have been found to be infested with insects. Recently, heavy damage to the berries and seeds of Juniper and Chilgoza pine was observed in the field as well as under storage conditions, reducing the viability of the seeds to germinate. Both species are under great threat, and their regeneration is very low. Due to the lack of adequate literature, a study of the damage potential of seed insects was urgently required to establish the exact status of the insect pests attacking the seeds and berries of both species and to develop pest management practices against them, to be evaluated in the nursery, as these species form the major vegetation of their distribution zones. A six-year study on the management of insect pests of Chilgoza seeds revealed that the seeds of this species are prone to insect pests, mainly borers. During the present investigations, it was recorded that the cones of Chilgoza pine are heavily attacked in natural conditions only by Dioryctria abietella (Lepidoptera: Pyralidae), but the seeds, which are economically important, are heavily infested (sometimes up to 100% damage was recorded) by the borer Plodia interpunctella (Lepidoptera: Pyralidae), which is, to the authors' best knowledge, recorded for the first time infesting stored Chilgoza seeds. Similarly, Juniper berries and seeds were heavily attacked only by a single borer, Homaloxestis cholopis (Lepidoptera: Lecithoceridae), reported here as a new record both in the natural habitat and under storage conditions. During the present investigation, details of the insect pest attack on Juniper and Chilgoza pine seeds and berries were recorded, and suitable management practices were developed to contain the attacks.

Keywords: Borer, conifer, cones, chilgoza pine, lepidoptera, juniper, management, seed.

99 Role of Oxidative DNA Damage in Pathogenesis of Diabetic Neuropathy

Authors: Ireneusz Majsterek, Anna Merecz, Agnieszka Sliwinska, Marcin Kosmalski, Jacek Kasznicki, Jozef Drzewoski

Abstract:

Oxidative stress is considered a cause of the onset and progression of type 2 diabetes mellitus (T2DM) and its complications, including neuropathy. It is a deleterious process that can be an important mediator of damage to cell structures: proteins, lipids and DNA. Data suggest that in patients with diabetes and diabetic neuropathy DNA repair is impaired, which prevents effective removal of lesions. Objective: The aim of our study was to evaluate the association of the hOGG1 (326 Ser/Cys) and XRCC1 (194 Arg/Trp, 399 Arg/Gln) gene polymorphisms, whose proteins are involved in the base excision repair (BER) pathway, with DNA repair efficiency in patients with type 2 diabetes and diabetic neuropathy compared to healthy subjects. Genotypes were determined by PCR-RFLP analysis in 385 subjects, including 117 with type 2 diabetes, 56 with diabetic neuropathy and 212 with normal glucose metabolism. The polymorphisms studied were codon 326 of hOGG1 and codons 194 and 399 of XRCC1. The comet assay was carried out using peripheral blood lymphocytes from the patients and controls. This test enabled the evaluation of DNA damage in cells exposed to hydrogen peroxide alone and in combination with endonuclease III (Nth). The polymorphism results were statistically examined by calculating odds ratios (OR) and their 95% confidence intervals (95% CI) using χ2 tests. Our data indicate that patients with type 2 diabetes mellitus (including those with neuropathy) had a higher frequency of the XRCC1 399Arg/Gln polymorphism in the homozygous form (GG) (OR: 1.85 [95% CI: 1.07-3.22], P=0.3) and also an increased frequency of the 399Gln (G) allele (OR: 1.38 [95% CI: 1.03-1.83], P=0.3). No relation was found between the other polymorphisms and an increased risk of diabetes or diabetic neuropathy. In T2DM patients complicated by neuropathy, repair of oxidative DNA damage induced by hydrogen peroxide was less efficient both in the presence and in the absence of the Nth enzyme. The results of our study suggest that the XRCC1 399 Arg/Gln polymorphism is a significant risk factor for T2DM in the Polish population. The data obtained suggest that the decreased efficiency of DNA repair in cells from patients with diabetes and neuropathy may be associated with oxidative stress. Additionally, patients with neuropathy are characterized by even greater sensitivity to oxidative damage than patients with diabetes, which suggests the participation of free radicals in the pathogenesis of neuropathy.
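
As background for the association statistics quoted above, the odds ratio and its 95% confidence interval for a genotype in a case-control design can be computed from a 2x2 table as follows. The counts in the example are invented for illustration and are not the study's data.

import math

def odds_ratio_ci(case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed, z=1.96):
    """Odds ratio with a 95% CI (Woolf's logit method) from 2x2 case-control counts."""
    a, b, c, d = case_exposed, case_unexposed, ctrl_exposed, ctrl_unexposed
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return round(or_, 2), round(lo, 2), round(hi, 2)

# Hypothetical counts (not from the study): GG carriers vs. non-carriers in cases and controls.
print(odds_ratio_ci(40, 77, 45, 167))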

Keywords: Diabetic neuropathy, oxidative stress, gene polymorphisms, oxidative DNA damage.

98 Polymeric Sustained Biodegradable Patch Formulation for Wound Healing

Authors: Abhay Asthana, Gyati Shilakari Asthana

Abstract:

Patient compliance and stability, in combination with controlled drug delivery and biocompatibility, form the core features of the present research and development of a sustained biodegradable patch formulation intended for wound healing. The aim was to impart sustained degradation, a sterile formulation, significant folding endurance, elasticity, biodegradability, bio-acceptability and strength. The optimized formulation comprised the polymers hydroxypropyl methyl cellulose, ethylcellulose and gelatin, citric acid-PEG-citric acid (CPEGC) triblock dendrimers, and active curcumin. The polymeric mixture was dissolved in geometric order in a suitable medium with continuous stirring under ambient conditions. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio to obtain a homogeneous dispersion. The dispersion was sonicated at an optimal frequency for a given time and then cast into a patch. All steps were carried out under strict aseptic conditions. Formulations within the acceptable working range were selected based on thickness, uniformity of drug content, smooth texture, flexibility and brittleness. The patch, kept on stability testing on butter paper in a sterile pack, displayed a folding endurance in the range of 20 to 23 folds without any evidence of cracking in the optimized formulation at room temperature (RT, 24 ± 2 °C). The patch displayed acceptable parameters after stability studies conducted under refrigerated conditions (8 ± 0.2 °C) and at RT (24 ± 2 °C) for up to 90 days. Further, no significant changes were observed in critical parameters such as elasticity, biodegradability, drug release and drug content during stability studies conducted at RT (24 ± 2 °C) for 45 and 90 days. The drug content was in the range of 95 to 102%, the moisture content did not exceed 19.2%, and the patch passed the content uniformity test. The percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate, with a correlation factor R2 > 0.9. The biodegradable patch formulation developed shows promising results in terms of stability and release profiles.

Keywords: Sustained biodegradation, wound healing, polymeric patch, stability.

97 Modelling for Roof Failure Analysis in an Underground Cave

Authors: M. Belén Prendes-Gero, Celestino González-Nicieza, M. Inmaculada Alvarez-Fernández

Abstract:

Roof collapse remains one of the most frequent problems in mines in all countries, even now. There are many reasons that may cause a roof to collapse, namely the stresses induced by the mining process, lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work is the result of the analysis of an accident that occurred in the "Mary" coal exploitation located in northern Spain, in which the roof of a crossing of galleries excavated to exploit the "Morena" layer, 700 m deep, collapsed. The paper collects the work done by the forensic team to determine the causes of the incident, together with its conclusions and recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality. For this work, the three-dimensional finite difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated from a slide along the layer wall, due to the large roof span present at the accident location, and was probably triggered by an insufficient protection pillar. The results made it possible to establish corrective measures to avoid future risks, for example the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports better suited to these conditions, in which the significant deformations may discourage the use of rigid supports such as shotcrete. Finally, a seismic monitoring grid was proposed as a predictive system. Its effectiveness was tested during the investigation period using three monitoring units, which detected new, although smaller, incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations, which are a further risk factor to be analysed in the near future.

Keywords: Forensic analysis, hypothesis modelling, roof failure, seismic monitoring.

96 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports

Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer

Abstract:

The vast majority of piping vibration problems in the oil and gas industry are provoked by the process flow characteristics, which are basically related to the fluid properties, the type of service and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response mechanism of the pipework itself. Where possible, the first option is to try to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities the approach of changing process parameters might not always be convenient, as it could lead to a reduction of production rates or require the shutdown of the system. That impediment might lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process always favours the elimination of, or considerably reduces, the level of vibration experienced by the piping system. Tightening the clearances at the supports (ideally to zero gap) and adding new static supports are typical ways of increasing the natural frequency of the piping system. However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible to implement at all, as the piping layout may limit the addition of supports due to thermal expansion/contraction requirements. In these cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. Therefore, when correctly selected and installed, viscous damper supports can have a significant effect on the response of the piping system over a wide range of frequencies. Viscous dampers cannot be used to support sustained, static loads. This paper presents, through a real case example, a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, it is possible to resolve piping vibration problems by adding new viscous damper supports to the system, and the same methodology can be applied to similar vibration issues.
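
For context on why stiffening shifts the response, the natural frequency of an equivalent single-degree-of-freedom pipe span scales as in the textbook relation below (general background, not a formula quoted from the paper):

\[
  f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}},
\]

so increasing the effective support stiffness k (tighter clearances, added static supports) raises f_n above the dominant excitation frequency, whereas a viscous damper instead adds a velocity-proportional force F = c·(dx/dt) that dissipates vibration energy over a wide frequency band without restraining slow thermal movement.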

Keywords: dynamic analysis, flow induced vibration, piping supports, turbulent flow, slug flow, viscous damper

95 Enhanced-Delivery Overlay Multicasting Scheme by Optimizing Bandwidth and Latency Discrepancy Ratios

Authors: Omar F. Hamad, T. Marwala

Abstract:

With optimized bandwidth and latency discrepancy ratios, Node Gain Scores (NGSs) are determined and used as a basis for shaping the max-heap overlay. The NGSs, determined as the respective bandwidth-latency products, govern the construction of max-heap-form overlays. Each NGS is earned as a synergy of the discrepancy ratio of the bandwidth requested with respect to the estimated available bandwidth, and the latency discrepancy ratio between the node and the source node. The tree leads to enhanced-delivery overlay multicasting, increasing packet delivery that could otherwise be hindered by induced packet loss occurring in schemes that do not consider the synergy of these parameters when placing nodes on the overlays. The NGS is a function of four main parameters: the estimated available bandwidth, Ba; the individual node's requested bandwidth, Br; the proposed node latency to its prospective parent, Lp; and the suggested best latency as advised by the source node, Lb. The bandwidth discrepancy ratio (BDR) and latency discrepancy ratio (LDR) carry weights of α and (1000 - α), respectively, with an arbitrarily chosen α between 0 and 1000, to ensure that the NGS values, used as node IDs, maintain a good possibility of uniqueness and balance the relative importance of the BDR and the LDR. A max-heap-form tree is constructed under the assumption that all nodes possess an NGS lower than that of the source node. To maintain load balance, children are evenly distributed among each level's siblings: a node cannot accept a second child, and so on, until all of its siblings able to do so have already acquired the same number of children, proceeding logically from left to right in the conceptual overlay tree. Records of the pair-wise approximate available bandwidths, as measured by a pathChirp scheme at individual nodes, are maintained. Evaluations comparing the scheme with other schemes, namely Bandwidth Aware multicaSt architecturE (BASE), Tree Building Control Protocol (TBCP), and Host Multicast Tree Protocol (HMTP), have been conducted. The new scheme generally performs better in terms of the trade-off between packet delivery ratio, link stress, control overhead, and end-to-end delays.
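
A minimal sketch of how such node gain scores might be computed and used to keep a max-heap overlay ordered, assuming the weighted combination of discrepancy ratios implied by the abstract (the exact formula and parameter values are not given there, so the expressions below are illustrative):

import heapq

ALPHA = 600  # arbitrary weight in [0, 1000], per the abstract's weighting scheme

def node_gain_score(ba, br, lp, lb, alpha=ALPHA):
    """Illustrative NGS: weighted combination of bandwidth and latency discrepancy ratios."""
    bdr = ba / br  # estimated available bandwidth vs. requested bandwidth
    ldr = lb / lp  # suggested best latency vs. proposed latency to the prospective parent
    return alpha * bdr + (1000 - alpha) * ldr

# Python's heapq is a min-heap, so scores are negated to pop the highest-NGS node first.
heap = []
for node, (ba, br, lp, lb) in {"n1": (8.0, 2.0, 40.0, 20.0),
                               "n2": (3.0, 2.0, 25.0, 20.0)}.items():
    heapq.heappush(heap, (-node_gain_score(ba, br, lp, lb), node))

print(heapq.heappop(heap))  # the node with the largest NGS is placed nearest the root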

Keywords: Overlay multicast, Available bandwidth, Max-heap-form overlay, Induced packet loss, Bandwidth-latency product, Node Gain Score (NGS).

94 Adaptive WiFi Fingerprinting for Location Approximation

Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan

Abstract:

WiFi has become an essential and widely used technology, popular due to its convenience for mobile devices; this is especially true for Internet users worldwide who rely on WiFi connections. Many location-based services available nowadays use Wireless Fidelity (WiFi) signal fingerprinting; a common example gaining popularity is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, the inconsistency of WiFi signals makes the estimate differ at different time intervals, so an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Because of these factors, this work reduces the signal noise and estimates location using the Nearest Neighbour method based on past signal activity, increasing the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching, and acts as the server-side support for the client-side application's decisions. Numerous previous works have adopted methods of collecting signal strengths into a repository over the years, but these were mostly static. This work highlights proposed solutions for adaptively matching the received signal to the data in the repository, so that location estimation can be done more accurately. Adaptive updating allows the latest location fingerprint to be stored in the repository; furthermore, any redundant location fingerprints are removed and only the updated version of each fingerprint is kept. How the user's location can be estimated is detailed further in the proposed solution section. After studying previous works, the Artificial Neural Network was found to be the most feasible method for updating the repository and making it adaptive; its function is to perform pattern matching of the WiFi signal against the existing data in the repository.
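
A minimal sketch of the nearest-neighbour step described above, assuming a small fingerprint repository keyed by access-point RSSI values (the access-point names, locations and RSSI figures are invented for illustration):

import math

# Hypothetical repository: location label -> mean RSSI (dBm) per access point.
repository = {
    "lobby":    {"ap1": -45, "ap2": -70, "ap3": -80},
    "corridor": {"ap1": -60, "ap2": -55, "ap3": -75},
    "lab":      {"ap1": -75, "ap2": -65, "ap3": -50},
}

def nearest_neighbour(observed, repo):
    """Return the stored location whose fingerprint has the smallest Euclidean RSSI distance."""
    def distance(fingerprint):
        return math.sqrt(sum((observed.get(ap, -100) - rssi) ** 2
                             for ap, rssi in fingerprint.items()))
    return min(repo, key=lambda loc: distance(repo[loc]))

print(nearest_neighbour({"ap1": -58, "ap2": -57, "ap3": -74}, repository))  # -> corridor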

Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.

93 A Novel and Green Approach to Produce Nano-Porous Materials Zeolite A and MCM-41 from Coal Fly Ash and Their Applications in Environmental Protection

Authors: K. S. Hui, K. N. Hui, Seong Kon Lee

Abstract:

Zeolite A and MCM-41 have extensive applications in basic science, petrochemical science, energy conservation/storage, medicine, chemical sensing, air purification, environmentally benign composite structures and waste remediation. However, the use of zeolite A and MCM-41 in these areas, especially environmental remediation, is restricted by prohibitive production costs. Efficient recycling of, and resource recovery from, coal fly ash has been a major topic of current international research interest, aimed at achieving sustainable development of human society from the viewpoints of energy, economy, and environmental strategy. This project reports original, novel, green and fast methods to produce nano-porous zeolite A and MCM-41 materials from coal fly ash. For zeolite A, the novel production method allows the total production time to be halved while maintaining a high degree of crystallinity of zeolite A within a narrower particle size distribution. For MCM-41, this remarkably green approach, being an environmentally friendly process that reduces the generation of toxic waste, can produce pure and long-range-ordered MCM-41 materials from coal fly ash. The approach took 24 h at 25 °C to produce 9 g of MCM-41 material from 30 g of coal fly ash, which is the shortest time and lowest reaction temperature required to produce pure and ordered MCM-41 materials (having the largest internal surface area) compared to the values reported in the literature. Performance evaluations of the produced zeolite A and MCM-41 materials in wastewater treatment and air pollution control are reported. The residual fly ash was also converted to zeolite Na-P1, which showed good performance in the removal of multiple metal ions from wastewater. In wastewater treatment, compared to commercial-grade zeolite A, the adsorbents produced from coal fly ash were effective in removing multiple heavy metal ions from water and could be alternative materials for wastewater treatment. In methane emission abatement, the zeolite A produced from coal fly ash achieved a methane removal efficiency similar to that of zeolite A prepared from pure chemicals. This report provides guidance for the production of zeolite A and MCM-41 from coal fly ash by a cost-effective approach, which opens up potential applications of these materials in the environmental industry. Finally, the environmental and economic aspects of producing zeolite A and MCM-41 from coal fly ash are discussed.

Keywords: Metal ions, waste water, methane, volatile organic compounds

92 Emerging VC Industry: Do Market Expectations Play the Most Important Role in Project Selection? Evidence on Russian Data

Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova

Abstract:

Venture capital is becoming an increasingly advanced and effective source of financing for innovation projects, which involve a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and creating the prosperity of the modern economy. In Russia, many of the necessary preconditions for an effective venture investment system are in place: a network of public institutions for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, the current system does not show the necessary level of efficiency in practice, which can largely be explained by the absence of an accurate plan of action to form the national venture model and by the lack of experience of successful venture deals with profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry using the example of the IT sector in Russia. The choice of sector is based on the fact that this segment is the main driver of venture capital market growth in Russia and that the necessary data exist. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the previous (first) round investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of the first-round investment on the volume of second-round venture investment. The most important determinant of the value of the second-round investment is thus the value of the first-round investment, which means that the most competitive start-up teams on the Russian market are those that can attract more money at the start, and that target market growth is not a factor of crucial importance. This supports the point of view that VC in Russia is driven by endogenous factors and not by exogenous ones based on global market growth.
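
A minimal sketch of the regression specification implied above, with the second-round amount regressed on the first-round amount and a reputable-investor dummy; the variable names and sample figures are invented for illustration and do not reproduce the paper's data or exact specification:

import numpy as np

# Hypothetical deals: first-round size (USD million), reputable-investor dummy, second-round size.
first_round  = np.array([0.5, 1.0, 2.0, 0.8, 3.0, 1.5])
reputable    = np.array([0,   1,   1,   0,   1,   0  ])
second_round = np.array([1.0, 3.5, 6.0, 1.4, 9.5, 2.8])

# OLS: second_round = b0 + b1 * first_round + b2 * reputable
X = np.column_stack([np.ones_like(first_round), first_round, reputable])
coef, *_ = np.linalg.lstsq(X, second_round, rcond=None)
print(dict(zip(["intercept", "first_round", "reputable_investor"], coef.round(3))))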

Keywords: Venture industry, venture investment, determinants of the venture sector development, IT-sector.

91 International Financial Crises and the Political Economy of Financial Reforms in Turkey: 1994-2009

Authors: Birgül Şakar

Abstract:

This study evaluates, in chronological order, the formation of international financial crises and the political factors behind economic crises in Turkey. Relevant studies in the international arena and work conducted in Turkey are assessed from the literature. The main purpose of the study is to examine in detail the linkage between the crises and political stability in Turkey, and Turkey's position in this regard. The introduction is followed, in the second part of the study, by a literature survey on models explaining the causes and results of crises. In the third part, the formation of the world financial crises is studied. In the fourth part, the financial crises in Turkey in 1994, 2000, 2001 and 2008 are reviewed and their political causes are analysed. The last part of the study presents the results and recommendations. Political administrations have laid the grounds for economic crises in Turkey. In this study, the emergence of economic crises in Turkey and the developments after the crises are chronologically examined, and an explanation is offered as to the cause-and-effect relationship between the political administration and the economic equilibrium of the country. Economic crises can be characterized as follows: high prices of consumables, high interest rates, current account deficits, budget deficits, structural defects in government finance, rising inflation combined with fixed exchange rate practices, rising government debt, declining savings rates and increased dependency on foreign capital. When a country enters crisis conditions at a time when the exchange value of its national currency is rising, speculative financial movements and shrinking foreign currency reserves arise from expectations of devaluation and from foreign investors' resistance to financing the national debt, and a financial risk occurs. During and immediately after the February 2001 crisis, devaluation and loss of value occurred in Turkey's stock market. The effects of the crisis on the real economy, while the country changed over to a floating exchange rate system in the midst of this crisis, are discussed in this study. The policies administered included financial reforms, such as the restructuring of the banking system, and these reforms were followed by the provision of foreign financial support. There have been winners and losers in the imbalance of income distribution, which has recently become more evident in Turkey's fragile economy.

Keywords: Economics, marketing crisis, financial reforms, political economy

90 High Efficiency Solar Thermal Collectors Utilization in Process Heat: A Case Study of Textile Finishing Industry

Authors: Gökçen A. Çiftçioğlu, M. A. Neşet Kadırgan, Figen Kadırgan

Abstract:

Solar energy, since it is available every day, is seen as one of the most valuable renewable energy resources, and it should be used efficiently in various applications. The best-known applications of solar energy are water and space heating. High-efficiency solar collectors need appropriate selective surfaces to absorb the heat. The selective surfaces (Selektif-Sera) used in this study are applied to flat collectors and are produced by a cost-effective roll-to-roll coating of nano nickel layers developed at Selektif Teknoloji Co. Inc. The efficiency of flat collectors using Selektif-Sera absorbers was calculated in collaboration with the Institute for Solar Technik Rapperswil, Switzerland. The main cause of high energy consumption in industry is low-temperature processes, and there is considerable research effort to minimize energy use through renewable energy sources such as solar energy. A feasibility study is presented on the potential of solar thermal energy utilization in the textile industry using these solar collectors. For the feasibility calculations presented in this study, a textile dyeing and finishing factory located in Kahramanmaras was selected, since geographic location is an important factor: Kahramanmaras is located in the southeast of Turkey and thus has great potential for long hours of solar illumination. It was observed that the collector area is limited by the available area in the factory; thus, a hybrid heat generation system (lignite/solar thermal) was preferred in the calculations of this study to be more realistic. In the feasibility work, the calculations took into account the preheating process, in which well water is heated from 15 °C to 30-40 °C using hot process water in heat exchangers; the preheated water is then heated again by the high-efficiency solar collectors. An economic comparison between lignite use and solar thermal collector use was made to determine the optimal system that can be used efficiently, and the optimum design of the solar thermal system was studied as a function of the optimum collector area. It was found that the solar thermal system is more economical and efficient than using lignite alone. The return on investment time is calculated as 5.15 years.

Keywords: Solar energy, heating, solar heating.

89 Development of a Basic Robot System for Medical and Nursing Care for Patients with Glaucoma

Authors: Naoto Suzuki

Abstract:

Medical methods to completely cure glaucoma are yet to be developed; therefore, ophthalmologists manage patients mainly to delay disease progression. Patients with glaucoma are mainly elderly individuals. In elderly people's homes, equipment that can provide medical treatment and care can relieve their families of the burden of care. To enable elderly people with glaucoma to live by themselves as much as possible, we developed a support robot with five functions: elderly care, ophthalmological examination, trip assistance in the neighborhood, medical treatment, and data referral to a hospital. The medical and nursing care robot should approach from within the visual field that the patient can see, at a speed suitable for their eyesight, because the robot would be dangerous if it approached from the part of the visual field that they cannot see. We experimentally developed a robot that brings a white cane to elderly people with glaucoma. The base of the robot is a carriage (a Megarover 1.1) with two infrared sensors; the robot moves along a white line on the floor using the infrared sensors and has a special arm, which does not use electricity and can scoop up the block attached to the white cane. We also developed a direction detector comprising a charge-coupled device camera (SVR41ResucueHD; Sun Mechatronics), goggles (MG-277MLF; Midori Anzen Co. Ltd.), and biconvex lenses with a focal length of 25 mm (Edmund Co.). Some young people were photographed using the direction detector worn on their faces. Image processing was performed using Scilab 6.1.0 and Image Processing and Computer Vision Toolbox 4.1.2. To measure a person's line of vision, we calculated the center of gravity of the iris using five processes: reduction, trimming, binarization or gray scale, edge extraction, and Hough transform. We compared the binarization and gray scale processes, and the binarization process was the better of the two. For edge extraction, we compared five methods: Sobel, Prewitt, Laplacian of Gaussian, fast Fourier transform, and Canny; the Canny method was the optimal extraction method. We then performed the Hough transform to search for the main coordinates along the iris's edge and found that it could calculate the center point of the iris.
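
A minimal sketch of the image-processing pipeline described above (reduce, crop, binarize, extract edges, locate the iris with a circular Hough transform), written here in Python with OpenCV rather than the Scilab toolbox actually used in the study; the file name, crop window and threshold/Hough parameters are illustrative assumptions:

import cv2
import numpy as np

# Load an eye image (hypothetical file), downscale it and crop to the eye region.
img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
small = cv2.resize(img, None, fx=0.5, fy=0.5)
roi = small[40:200, 60:260]  # illustrative crop of the eye region

# Binarization (reported in the abstract to work better than plain gray scale).
_, binary = cv2.threshold(roi, 60, 255, cv2.THRESH_BINARY)

# Circular Hough transform to locate the iris centre; OpenCV's HOUGH_GRADIENT
# performs Canny edge extraction internally as part of the voting step.
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=15, maxRadius=60)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print("iris centre:", (x, y), "radius:", r)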

Keywords: Glaucoma, support robot, elderly people, Hough transform, direction detector, line of vision.

88 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube

Authors: Cathal Merz, Gareth O’Donnell

Abstract:

Coil reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device; it is inserted into the patient via the femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure replaces the need for open surgery but is limited by the reduction in blood vessel lumen diameter and the increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with ever thinner walls so that treatment can be delivered deeper into the body and other devices can pass through their inner diameter. This introduces significant stresses in the device materials, which has resulted in an observed increase in the breaking of the proximal segment of the device into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to analyze this problem accurately. The aim of the current work is to address this gap in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding the development of better performing, next-generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), t-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. The major findings of this study are an insight into the suitability of buckling and three-point bend tests for measuring the effect of varying processing factors on the device's performance, and guidelines for interpreting the output data from the test methods. The findings are of significant interest for verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout.

Keywords: Buckling, coil reinforced thin-walled tubes, fracture, test method.

87 An Overview of Some High Order and Multi-Level Finite Difference Schemes in Computational Aeroacoustics

Authors: Appanah Rao Appadu, Muhammad Zaid Dauhoo

Abstract:

In this paper, we have combined several spatial derivatives with the optimised time derivative proposed by Tam and Webb in order to approximate the linear advection equation, ∂u/∂t + ∂f/∂x = 0. The spatial derivatives are as follows: a standard 7-point 6th-order central difference scheme (ST7), a standard 9-point 8th-order central difference scheme (ST9), and optimised schemes designed by Tam and Webb, Lockard et al., Zingg et al., Zhuang and Chen, and Bogey and Bailly. These seven different spatial derivatives have thus been coupled with the optimised time derivative to obtain seven different finite-difference schemes to approximate the linear advection equation. We have analysed the variation of the modified wavenumber and of the group velocity, both with respect to the exact wavenumber, for each spatial derivative. The problems considered are the 1-D propagation of a boxcar function, the propagation of an initial disturbance consisting of a sine and a Gaussian function, and the propagation of a Gaussian profile. It is known that the choice of the cfl number affects the quality of results in terms of dissipation and dispersion characteristics. Based on the numerical experiments solved and the numerical methods used to approximate the linear advection equation, it is observed in this work that the quality of results depends on the choice of the cfl number, even for optimised numerical methods. The errors in the numerical results have been quantified into dispersion and dissipation using a technique devised by Takacs. Also, the quantity Exponential Error for Low Dispersion and Low Dissipation (eeldld) has been computed from the numerical results. Moreover, based on this work, it has been found that eeldld can be used as a measure of the total error; in particular, the total error is a minimum when eeldld is a minimum.
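
As an illustration of the modified-wavenumber analysis mentioned above, the sketch below evaluates the modified wavenumber of the standard 7-point 6th-order central difference (ST7), using the standard coefficients for that stencil; the optimised comparison schemes from the paper are not reproduced here.

import numpy as np

# Standard 7-point, 6th-order central-difference coefficients for the first derivative.
a = {1: 3/4, 2: -3/20, 3: 1/60}

def modified_wavenumber(kh):
    """Modified (numerical) wavenumber k*h of the ST7 scheme for exact wavenumber kh."""
    return 2 * sum(aj * np.sin(j * kh) for j, aj in a.items())

for kh in np.linspace(0.0, np.pi, 5):
    print(f"kh = {kh:5.3f}   k*h = {modified_wavenumber(kh):5.3f}")
# The departure of k*h from kh at high wavenumbers is what produces dispersion error.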

Keywords: Optimised time derivative, dissipation, dispersion, cfl number. Nomenclature: k: time step, h: spatial step, β: advection velocity, r = kβ/h: cfl/Courant number, θ = wh: exact wavenumber, n: time level, RPE: relative phase error per unit time step, AFM: modulus of amplification factor.

86 The Gravitational Impact of the Sun and the Moon on Heavy Mineral Deposits and Dust Particles in Low Gravity Regions of the Earth

Authors: T. B. Karu Jayasundara

Abstract:

The Earth's gravity is not uniform. Satellite imagery of the Earth's surface from NASA reveals a number of gravity anomaly regions all over the globe. As the Moon orbits the Earth, its gravity has a major physical influence on a number of regions on the Earth, a change most visible in the tides, which raise and lower sea levels in coastal regions. During high tide, the gravitational pull of the Moon acts against the Earth's gravity so that the total gravitational intensity is reduced; it is reduced further in the low gravity regions of the Earth. This reduction in gravity helps keep suspended particles, such as dust in the atmosphere and sand grains in sea water, in suspension for longer, and dramatic differences can be seen in the floating dust in low gravity regions compared with other regions. The above phenomena can be demonstrated experimentally. The experiments have to be carried out in high and low gravity regions of the Earth during high and low tide so that the final results can be compared. One such experiment uses a water-filled cylinder about 80 cm tall, a few particles of the same density and diameter (about 1 mm), and a stopwatch. The selected particles were dropped from the surface of the water in the cylinder, and the time taken for them to reach the bottom was measured with the stopwatch; the times of high and low tide can be obtained from tide charts published by regional government authorities. The results show that the particle settlement time is shorter at low tide and longer at high tide. For dust particles in air, samples can be collected on cellulose ester membrane filters using a vacuum pump; the dust on the filters can be used to prepare slides according to the NOHSC method, and the particles counted with a phase contrast microscope. These results show that the concentration of dust is high at high tide and low at low tide. As a consequence, at high tide heavy minerals accumulate in placer deposits in higher concentrations, and dust particles remain in the atmosphere for longer in low gravity regions. These conditions are most markedly exhibited in the lowest low gravity region of the Earth, mainly over India, Sri Lanka and the middle part of the Indian Ocean. The biggest heavy mineral placer deposits are found in the coastal regions of India and Sri Lanka, and heavy dust loads are found in the atmosphere of India, particularly in the Delhi region.

Keywords: Dust particles, high and low tides, heavy minerals, low gravity.

85 Tensile and Fracture Properties of Cast and Forged Composite Synthesized by Addition of in-situ Generated Al3Ti-Al2O3 Particles to Magnesium

Authors: H. M. Nanjundaswamy, S. K. Nath, S. Ray

Abstract:

TiO2 particles have been added to molten aluminium to produce an aluminium-based cast Al/Al3Ti-Al2O3 composite, which has then been added to molten magnesium to synthesize a magnesium-based cast Mg-Al/Al3Ti-Al2O3 composite. The nominal compositions, in terms of Mg, Al, and TiO2 contents, of the magnesium-based composites are Mg-9Al-0.6TiO2, Mg-9Al-0.8TiO2, Mg-9Al-1.0TiO2 and Mg-9Al-1.2TiO2, designated respectively as MA6T, MA8T, MA10T and MA12T. The microstructure of the cast magnesium-based composite shows greyish rods of the intermetallic Al3Ti inherited from the aluminium-based composite, but on hot forging these rods break into smaller lengths, decreasing the average aspect ratio (length to diameter) from 7.5 to 3.0, and cavities appear between the broken segments. The β-phase in the cast microstructure, Mg17Al12, dissolves during heating prior to forging and re-precipitates as relatively finer particles on cooling; the amount of β-phase also decreases on forging as segregation is removed. In both the cast and forged composites, the Brinell hardness increases rapidly with increasing TiO2 addition, and the hardness is higher in the forged composites by about 80 BHN. With increasing TiO2 addition, the yield strength of the magnesium-based cast composite decreases progressively, although it remains marginally higher than that of cast Mg-9 wt. pct. Al, designated as the MA alloy. The ultimate tensile strength (UTS) of the cast composites, however, decreases with increasing particle content, possibly indicating early crack initiation in the brittle inter-dendritic region and easy crack propagation along the particle interfaces. In the forged composites, both yield strength and UTS improve significantly with increasing TiO2 addition and also exceed the values observed in their cast counterparts, but they fall again at the highest addition. It may also be noted that, as in the forged MA alloy, incomplete recovery of the forging strain increases the strength of the matrix in the composites, and the ductility decreases in both the forged alloy and the composites. The initiation fracture toughness, JIC, decreases drastically in the cast composites compared with the MA alloy due to the presence of the intermetallic Al3Ti and Al2O3 particles. There is a drastic reduction of JIC on forging in both the alloy and the composites, possibly due to incomplete recovery of the forging strain in both, as well as to the breaking of the Al3Ti rods and the voids between their broken segments in the composites. The ratio of tearing modulus to elastic modulus is higher in the cast composites and increases with increasing TiO2 addition; on forging, this ratio decreases comparatively more in the cast MA alloy than in the forged composites.

Keywords: Composite, fracture toughness, forging, tensile properties.

84 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network

Authors: K. Kalaikumar, E. Baburaj

Abstract:

One of the problems to be addressed in wireless sensor networks is cross-layer communication. A cross-layer architecture shares information across layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol can maintain effective functionality, such as route selection, in a changing sensor network environment. However, time slot assignment and the duration of neighbour route selection have not previously been handled at the cross layer. Time-varying physical layer communication across layers causes a high traffic load in the sensor network, and although this load can be reduced with a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial step is to discover routes in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub layers; this step considers MAC layer operation with dynamic route neighbour table discovery. The discovered route path for packet communication then employs the Broad Route Distributed Time Slot Assignment method on the cross-layered sensor network, where Broad Route means time slotting over route paths of varying length. During packet communication, packet transmissions are spread over different times and varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to characterize the performance of the sensor network communication structure; its main task is to measure the power level of each communication at the MAC sub layer, and the minimized power level helps reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as the power level during packet communication, neighbour route discovery time, and packet propagation speed.
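As a rough illustration of the Rayleigh fading component, the sketch below draws a complex Gaussian channel gain and combines it with a simple distance-based path loss to estimate the received power on one hop; the path-loss exponent and parameter names are assumptions made for illustration and do not reproduce the C-LNRD model itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def rayleigh_received_power(tx_power_mw, distance_m, path_loss_exp=3.0, n_samples=1000):
    """Received power samples over a Rayleigh-faded link.

    The complex channel gain h is circularly-symmetric Gaussian, so the power
    gain |h|^2 is exponentially distributed with unit mean; the mean received
    power follows a simple distance^(-alpha) path-loss law. Parameter names
    and the path-loss exponent are illustrative assumptions.
    """
    h = (rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    fading_gain = np.abs(h) ** 2
    return tx_power_mw * fading_gain * distance_m ** (-path_loss_exp)

# Average received power (mW) on a 25 m hop at 1 mW transmit power.
print(rayleigh_received_power(1.0, 25.0).mean())
```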

Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment

83 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: N. Mirzaei Varzeghani, M. Saffarzadeh, A. Naderan, A. Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities available for reaching it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, trip attraction for business and non-business trips was studied using the survey data and a linear regression model. Lower travel costs and a higher share of passengers aged 55 and older using this airport, among other factors, are essential for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of travel, the locations generating the most trips to the airport were identified: District 2 was the highest trip-generating district in Tehran, with 23 trips, and online taxi was the most popular mode from that location, with 12 trips. Then, the variables significant to the split among, and the behavior of, the travel modes used to access the airport were investigated for all systems. In this respect, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It has also been shown that improving public transportation travel times reduces the market share of private transportation, including taxis. Based on the responses of personal and semi-public vehicle users, passengers' willingness to reach the airport via public transportation was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. A binary choice model showed that business travelers and people who had already driven to the airport were the least likely to switch.
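To illustrate the type of binary model referred to above, here is a minimal sketch of a binary logistic regression on synthetic survey-style records; the variable names, coefficients and data are hypothetical and do not reproduce the study's questionnaire or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative (synthetic) survey records; variable names are assumptions,
# not the questionnaire items used in the study.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "business_trip":   rng.integers(0, 2, n),        # 1 = business traveller
    "access_time_min": rng.uniform(20, 90, n),       # door-to-airport travel time
    "extra_luggage":   rng.integers(0, 2, n),        # more than one suitcase per person
})
# Synthetic response: probability of being willing to switch to public transport.
logit = 2.0 - 0.5 - 1.2 * df.business_trip - 0.02 * df.access_time_min - 0.8 * df.extra_luggage
df["would_switch"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("would_switch ~ business_trip + access_time_min + extra_luggage",
                  data=df).fit(disp=False)
print(model.params)   # negative coefficients indicate lower switching propensity
```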

Keywords: Multimodal transportation, travel behavior, demand modeling, statistical models.

82 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It supports two main tasks: displaying results by coloring items according to their class or a feature value, and forensics, by giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, whereby all neighbors in high dimensional space cannot be represented correctly in low dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high to low dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this is costly, as the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. This support-based embedding process can be repeated more than once, using the newly obtained embedding as the next support, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and the memory requirement is only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, enabling the monitoring of the dynamics of high-dimensional datasets.
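A simplified way to approximate the idea of reusing a support embedding is sketched below: new points are initialised at the 2-D coordinates of their nearest support points and t-SNE is then run from that initialisation. This relies on scikit-learn's array initialisation rather than the paper's two-cost optimisation, so it is only an approximation of the approach, not the authors' implementation.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
support = rng.normal(size=(300, 20))       # subset used for the reference embedding
new_batch = rng.normal(size=(300, 20))     # later snapshot of the dataset

# Reference embedding built once on the support subset.
support_emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(support)

# Initialise every new point at the 2-D position of its nearest support point,
# so clusters start where the reference embedding placed them, then let t-SNE
# refine the layout.  This mimics the "preserve cluster positions" idea in a
# simplified way; the paper's own cost function is not reproduced here.
nn = NearestNeighbors(n_neighbors=1).fit(support)
_, idx = nn.kneighbors(new_batch)
init_positions = support_emb[idx[:, 0]]

new_emb = TSNE(n_components=2, init=init_positions, random_state=0).fit_transform(new_batch)
print(new_emb.shape)
```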

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

81 Influence of Sire Breed, Protein Supplementation and Gender on Wool Spinning Fineness in First-Cross Merino Lambs

Authors: A. E. O. Malau-Aduli, B. W. B. Holman, P. A. Lane

Abstract:

Our objectives were to evaluate the effects of sire breed, type of protein supplement, level of supplementation and sex on wool spinning fineness (SF), its correlations with other wool characteristics and its prediction accuracy in F1 Merino crossbred lambs. Texel, Coopworth, White Suffolk, East Friesian and Dorset rams were mated with 500 purebred Merino dams at a ratio of 1:100 in separate paddocks within a single management system. The F1 progeny were raised on ryegrass pasture until weaning, before forty lambs were randomly allocated to treatments in a 5 x 2 x 2 x 2 factorial experimental design representing 5 sire breeds, 2 supplementary feeds (canola or lupins), 2 levels of supplementation (1% or 2% of liveweight) and sex (wethers or ewes). Lambs were supplemented for six weeks after an initial three weeks of adjustment; wool was sampled at the commencement and conclusion of the feeding trial and analyzed for SF, mean fibre diameter (FD), coefficient of variation (CV), standard deviation, comfort factor (CF), fibre curvature (CURV), and clean fleece yield. Data were analyzed using mixed linear model procedures with sire fitted as a random effect, and sire breed, sex, supplementary feed type, level of supplementation and their second-order interactions as fixed effects. Sire breed (P<0.001), sex (P<0.004), sire breed x level of supplementation (P<0.004), and sire breed x sex (P<0.019) interactions significantly influenced SF. SF ranged from 22.7 ± 0.2 μm in White Suffolk-sired lambs to 25.1 ± 0.2 μm in East Friesian crossbred lambs, and ewes had higher SF than wethers. There were significant (P<0.001) correlations between SF and FD (0.93), CV (0.40), CF (-0.94) and CURV (-0.12). Its strong relationship with other wool quality traits enabled accurate predictions explaining up to about 93% of the observed variation. The interactions between sire breed genetics and nutrition will affect the choices that dual-purpose sheep producers make when selecting sire breeds and protein supplementary feed levels to achieve optimal wool spinning fineness at the farmgate level, and will help selective breeding programs better account for SF and its interactions with other wool characteristics.
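The analysis strategy described above (sire as a random effect; breed, sex, feed type and level as fixed effects) could be sketched as follows on synthetic data; the records, sire identifiers and effect sizes are invented for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic records mimicking the 5 x 2 x 2 x 2 design; values are illustrative only.
rng = np.random.default_rng(3)
breeds = ["Texel", "Coopworth", "WhiteSuffolk", "EastFriesian", "Dorset"]
n = 200
df = pd.DataFrame({
    "sire_breed": rng.choice(breeds, n),
    "feed":       rng.choice(["canola", "lupins"], n),
    "level":      rng.choice(["1pct", "2pct"], n),
    "sex":        rng.choice(["ewe", "wether"], n),
    "sire":       rng.choice([f"sire_{i}" for i in range(10)], n),  # random-effect grouping
})
# Wool spinning fineness (microns) with invented breed and sex effects plus noise.
df["SF"] = 23 + (df.sire_breed == "EastFriesian") * 2 + (df.sex == "ewe") * 0.5 \
           + rng.normal(0, 0.8, n)

# Sire fitted as a random effect; breed, sex, feed and level (plus one
# second-order interaction) as fixed effects, mirroring the analysis strategy.
model = smf.mixedlm("SF ~ sire_breed * level + sex + feed", df, groups=df["sire"]).fit()
print(model.summary())
```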

Keywords: Merino crossbred sheep, protein supplementation, sire breed, wool quality, wool spinning fineness

80 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress

Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin

Abstract:

Stress causes deleterious effects at the physical, psychological and organizational levels, which highlights the need for effective coping strategies to deal with it. Several coping models exist, but they do not integrate the different strategies in a coherent way, nor do they take into account recent research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model makes it possible to understand under what circumstances the strategies are effective or ineffective and to learn how to use them more wisely. It includes specific strategies for controllable situations (Modification of the Situation and Resignation-Disempowerment), specific strategies for non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called general strategies (Wellbeing and Avoidance). This study presents the process of development and validation of an instrument measuring coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content; of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used to validate the questionnaire. Concerning the reliability of the instrument, the inter-rater agreement index (Krippendorff's alpha) and the internal consistency coefficients (Cronbach's alpha) are satisfactory. To evaluate construct validity, a confirmatory factor analysis using MPlus supports a six-factor model, and the results of this analysis also suggest that this configuration is superior to alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses suggest that the instrument has good psychometric qualities and demonstrate the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies for dealing with stress and thus prevent mental health issues.
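For the internal consistency check mentioned above, a minimal sketch of a Cronbach's alpha computation is shown below; the item scores are synthetic and the subscale name is hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item scores (rows = respondents, columns = items).

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
    """
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative 5-point Likert responses for three items of one hypothetical
# subscale (e.g. "Modification of the Situation"); not the study's actual data.
rng = np.random.default_rng(7)
latent = rng.normal(size=300)
scale = pd.DataFrame({f"item{j}": np.clip(np.round(3 + latent + rng.normal(0, 0.7, 300)), 1, 5)
                      for j in range(1, 4)})
print(round(cronbach_alpha(scale), 2))
```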

Keywords: Acceptance, coping strategies, measurement instrument, questionnaire, stress, validation process.

79 Simulation of the Visco-Elasto-Plastic Deformation Behaviour of Short Glass Fibre Reinforced Polyphthalamides

Authors: V. Keim, J. Spachtholz, J. Hammer

Abstract:

The importance of fibre reinforced plastics continues to increase due to their excellent mechanical properties and low material and manufacturing costs, combined with significant weight reduction. Today, components are usually designed and analyzed numerically using finite element methods (FEM) to avoid expensive laboratory tests. These programs are based on material models that include material-specific deformation characteristics. In this research project, material models for short glass fibre reinforced plastics are presented to simulate the visco-elasto-plastic deformation behaviour. Prior to modelling, specimens of the material EMS Grivory HTV-5H1, consisting of a polyphthalamide matrix reinforced by 50 wt.-% of short glass fibres, are characterized experimentally in terms of the highly time-dependent deformation behaviour of the matrix material. To minimize the experimental effort, the cyclic deformation behaviour under tensile and compressive loading (R = −1) is characterized by isothermal complex low cycle fatigue (CLCF) tests. By combining cycles at two strain amplitudes, strain rates spanning three orders of magnitude, and relaxation intervals into one experiment, the visco-elastic deformation is characterized. To identify the visco-plastic deformation, monotonic tensile tests, either displacement-controlled or strain-controlled (CERT), are compared. All relevant modelling parameters for this complex superposition of simultaneously varying mechanical loadings are quantified by these experiments. Subsequently, two different material models are compared with respect to their accuracy in describing the visco-elasto-plastic deformation behaviour. First, an extended 12-parameter model based on Chaboche (EVP-KV2) is used to model cyclic visco-elasto-plasticity at two time scales. The parameters of the model, which assumes a total separation of elastic and plastic deformation, are obtained by computational optimization using a genetic algorithm, an evolutionary algorithm driven by a fitness function. Second, the 12-parameter visco-elasto-plastic material model by Launay is used; this model contains a different type of flow function, based on defining the visco-plastic deformation as part of the overall deformation. The accuracy of the models is verified by corresponding experimental LCF testing.
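As a much-simplified illustration of parameter identification by an evolutionary algorithm, the sketch below fits a three-parameter Voce hardening curve to synthetic data with differential evolution; the real EVP-KV2 and Launay models have 12 parameters and a fitness function defined over the full cyclic experiments, so this is only a schematic of the identification step.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Simplified one-dimensional hardening model (Voce form), standing in for the
# much richer visco-elasto-plastic models discussed in the abstract.
def voce(params, eps_p):
    sigma0, Q, b = params
    return sigma0 + Q * (1.0 - np.exp(-b * eps_p))

# Synthetic "measured" monotonic hardening curve (plastic strain vs. stress, MPa).
eps_p = np.linspace(0.0, 0.02, 50)
true_params = (80.0, 60.0, 300.0)                       # sigma0, Q, b used to generate data
stress_meas = voce(true_params, eps_p) + np.random.default_rng(5).normal(0, 1.0, eps_p.size)

# Fitness function: sum of squared deviations between model and "experiment",
# minimised by an evolutionary (differential evolution) search, analogous to
# the genetic-algorithm identification used for the model parameters.
def fitness(params):
    return np.sum((voce(params, eps_p) - stress_meas) ** 2)

result = differential_evolution(fitness, bounds=[(0, 200), (0, 200), (1, 1000)], seed=0)
print(result.x)   # identified sigma0, Q, b
```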

Keywords: Complex low cycle fatigue, material modelling, short glass fibre reinforced polyphthalamides, visco-elasto-plastic deformation.
