Search results for: Uncertainty Propagation.
46 Teaching Translation in Brazilian Universities: A Study about the Possible Impacts of Translators’ Comments on the Cyberspace about Translator Education
Authors: Erica Lima
Abstract:
The objective of this paper is to discuss relevant points about teaching translation in Brazilian universities and the possible impacts of blogs and social networks on translator education today. It analyzes the curricula of Brazilian translation courses, contrasting them with information obtained from two highly visible social networking groups in the area concerning the essential characteristics of a successful professional. The research therefore has, as its main corpus, a few undergraduate translation programs’ syllabuses, as well as postings in social network groups that specifically share professional opinions on whether a translator needs a degree in translation to practice the profession. To a certain extent, such comments and their responses propagate discourses which influence the ideas that aspiring translators and recent graduates form about themselves and their undergraduate courses. The postings also show that many professionals do not have a clear position regarding translator education: while refuting it, they also encourage “free” courses. Cyberspace thus constitutes, on the one hand, a place where people mobilize in defense of similar ideas; on the other hand, it embodies a place of tension and conflict, since there are many participants and, as in any other situation of interlocution, disagreements may arise. From the postings, aspects related to professionalism were analyzed (including discussions about regulation), as well as questions about the classic dichotomies: theory/practice; art/technique; self-education/academic training. As a partial result, a common interest in the valorization of the profession can be mentioned, although there is no consensus on the essential characteristics of a good translator. It was also possible to observe that the set of socially constructed representations in the group reflects the worldwide situation of translation courses (especially in some European countries and in the United States), which does not accurately reflect the Brazilian idiosyncrasies of the area.
Keywords: Cyberspace, teaching translation, translator education, university.
45 Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding
Authors: K. Anitha Sheela, J. Tarun Kumar
Abstract:
HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release 99 DSCH. To date, the HSDPA system has used turbo coding, which is the best coding technique for approaching the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, this paper proposes to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate. In this paper, the LDPC encoder is implemented using a sparse parity-check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that in LDPC coding the BER drops sharply as the number of iterations increases, with only a small increase in Eb/No, which is not possible in turbo coding. The same BER was also achieved using fewer iterations; hence, the latency and receiver complexity decreased for LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.
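The iterative decoding behaviour described above can be illustrated with a minimal sum-product (belief propagation) decoder. The Python sketch below uses a toy 4x8 parity-check matrix, BPSK over AWGN, and an assumed Eb/No of 2 dB; none of these are the actual HSDPA code parameters.

```python
import numpy as np

# Toy (8,4) parity-check matrix (rows = checks, cols = bits); illustrative only.
# The all-zero word is a valid codeword of any linear code, so we transmit that.
H = np.array([[1, 1, 0, 1, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1, 1, 0],
              [1, 0, 0, 0, 0, 1, 1, 1]])

def decode_bp(llr_ch, H, max_iter=20):
    """Sum-product (belief propagation) decoding from channel LLRs."""
    m, n = H.shape
    M = H * llr_ch                          # variable-to-check messages
    for it in range(1, max_iter + 1):
        E = np.zeros(H.shape)               # check-to-variable messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            t = np.tanh(M[i, idx] / 2.0)
            for k, j in enumerate(idx):     # tanh rule, excluding the j-th message
                prod = np.prod(np.delete(t, k))
                E[i, j] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        L = llr_ch + E.sum(axis=0)          # total LLR per bit
        x_hat = (L < 0).astype(int)
        if not np.any((H @ x_hat) % 2):     # all parity checks satisfied: stop early
            return x_hat, it
        M = H * (L - E)                     # extrinsic variable-to-check update
    return x_hat, max_iter

rng = np.random.default_rng(0)
EbNo_dB, rate = 2.0, 0.5                    # assumed operating point
sigma = np.sqrt(1.0 / (2 * rate * 10 ** (EbNo_dB / 10)))
y = 1.0 + sigma * rng.normal(size=H.shape[1])   # bit 0 -> +1 under BPSK
x_hat, iters = decode_bp(2 * y / sigma**2, H)
print("decoded bits:", x_hat, "| iterations used:", iters, "| bit errors:", x_hat.sum())
```

Because the extrinsic messages sharpen at each pass, the syndrome check usually succeeds after only a few iterations at moderate Eb/No, which is the early-stopping, low-latency behaviour the abstract reports.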
44 A Post Keynesian Environmental Macroeconomic Model for Agricultural Water Sustainability under Climate Change in the Murray-Darling Basin, Australia
Authors: Ke Zhao, Colin Richardson, Jerry Courvisanos, John Crawford
Abstract:
Climate change has profound consequences for the agriculture of south-eastern Australia and its climate-induced water shortage in the Murray-Darling Basin. Post Keynesian Economics (PKE) macro-dynamics, along with Kaleckian investment and growth theory, are used to develop an ecological-economic system dynamics model of this complex nonlinear river basin system. The Murray-Darling Basin Simulation Model (MDB-SM) uses the principles of PKE to incorporate the fundamental uncertainty of the economic behaviors of farmers regarding the investments they make and the climate change they face, particularly as regards water ecosystem services. MDB-SM provides a framework for macroeconomic policies, especially for long-term fiscal policy and for policy directed at the sustainability of agricultural water, as measured by socio-economic well-being considerations, which include sustainable consumption and investment in the river basin. The model can also reproduce other ecological and economic aspects and, for certain parameters and initial values, exhibit endogenous business cycles and ecological sustainability with realistic characteristics. Most importantly, MDB-SM provides a platform for the analysis of alternative economic policy scenarios. These results reveal the importance of understanding water ecosystem adaptation under climate change by integrating a PKE macroeconomic analytical framework with the system dynamics modelling approach. Once parameterised and supplied with historical initial values, MDB-SM should prove to be a practical tool to provide alternative long-term policy simulations of agricultural water and socio-economic well-being.
Keywords: Agricultural water, Macroeconomic dynamics, Modeling, Investment dynamics, Sustainability, Unemployment, Economics, Keynesian, Kaleckian.
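MDB-SM itself is not publicly available, so the sketch below only illustrates the core Kaleckian feedback such a system dynamics model builds on: investment demand responds to capacity utilisation, saving responds to profits, and a climate-driven water constraint scales the investment response. All functional forms and parameter values here are assumptions for illustration.

```python
import numpy as np

# Illustrative neo-Kaleckian dynamics: utilisation u adjusts to excess demand,
# with investment g_i = water * (alpha + beta*u) and saving g_s = s * pi * u.
alpha, beta = 0.10, 0.20      # animal spirits, accelerator (assumed)
s, pi = 0.90, 0.35            # saving rate out of profits, profit share (assumed)
phi = 0.5                     # utilisation adjustment speed (assumed)
T = 60
u = np.empty(T); u[0] = 0.80
water = np.ones(T); water[20:] = 0.8   # 20% water-availability shock from t = 20

for t in range(1, T):
    g_i = water[t] * (alpha + beta * u[t-1])   # water-constrained investment demand
    g_s = s * pi * u[t-1]                      # saving supply
    u[t] = u[t-1] + phi * (g_i - g_s)          # excess demand raises utilisation

print(f"utilisation before shock: {u[19]:.3f}, trajectory end after shock: {u[-1]:.3f}")
```

With these assumed values, the Keynesian stability condition (s*pi greater than the water-scaled accelerator) holds, so utilisation converges to a lower level after the water shock, the kind of scenario comparison the abstract describes.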
43 Seismic Fragility Assessment of Strongback Steel Braced Frames Subjected to Near-Field Earthquakes
Authors: Mohammadreza Salek Faramarzi, Touraj Taghikhany
Abstract:
In this paper, seismic fragility assessment of a recently developed hybrid structural system, known as the strongback system (SBS), is investigated. In this system, an elastic vertical truss is formed to mitigate the occurrence of the soft-story mechanism and improve the distribution of story drifts over the height of the structure. The strengthened members of the braced span are designed to remain substantially elastic during levels of excitation where soft-story mechanisms are likely to occur, and to impose a nearly uniform story drift distribution. Due to the distinctive characteristics of near-field ground motions, it is necessary to study the effect of these records on the seismic performance of the SBS. To this end, a set of 56 near-field ground motion records suggested by the FEMA P695 methodology is used. For fragility assessment, nonlinear dynamic analyses are carried out in OpenSEES based on the recommended procedure in the HAZUS technical manual. Four damage states, including slight, moderate, extensive, and complete damage (collapse), are considered. To evaluate each damage state, inter-story drift ratio and floor acceleration are implemented as engineering demand parameters. Further, to extend the evaluation of the collapse state of the system, a different collapse criterion suggested in FEMA P695 is applied. It is concluded that the SBS can significantly increase the collapse capacity and consequently decrease the collapse risk of the structure during its lifetime. Comparing the observed mean annual frequency (MAF) of exceedance of each damage state against the allowable values presented in performance-based design methods, it is found that using the elastic vertical truss improves the structural response effectively.
Keywords: Strongback System, Near-fault, Seismic fragility, Uncertainty, IDA, Probabilistic performance assessment.
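A common way to turn such nonlinear dynamic analysis results into a fragility curve is maximum-likelihood fitting of a lognormal CDF to the exceedance counts at each intensity level. The sketch below assumes synthetic counts for the 56 records; the intensity levels and counts are illustrative, not the study's results.

```python
import numpy as np
from scipy import optimize, stats

# Synthetic IDA-style data: at each Sa level, n_exc of the 56 records exceeded
# the damage state (all counts assumed for illustration).
im = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5])   # Sa (g)
n_gm = np.full(im.size, 56)
n_exc = np.array([1, 5, 14, 27, 39, 48, 54])

def neg_log_lik(params):
    theta, beta = params                              # median capacity, dispersion
    if theta <= 0 or beta <= 0:
        return np.inf
    p = stats.norm.cdf(np.log(im / theta) / beta)     # lognormal fragility curve
    return -np.sum(stats.binom.logpmf(n_exc, n_gm, p))

res = optimize.minimize(neg_log_lik, x0=[0.8, 0.4], method="Nelder-Mead")
theta, beta = res.x
print(f"median capacity = {theta:.2f} g, dispersion = {beta:.2f}")
print(f"P(damage state exceeded | Sa = 1.0 g) = "
      f"{stats.norm.cdf(np.log(1.0 / theta) / beta):.2f}")
```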
42 Reinforcement of Calcium Phosphate Cement with E-Glass Fibre
Authors: Sudip Dasgupta, Debosmita Pani, Kanchan Maji
Abstract:
Calcium phosphate cement (CPC), due to its high bioactivity and optimum bioresorbability, shows excellent bone regeneration capability. Nevertheless, it has limited applications as a bone implant because of its macro-porous microstructure, which causes poor mechanical strength. The reinforcement of apatitic CPCs with a biocompatible glass fibre phase is an attractive area of research for improving their mechanical strength. Here, we study the setting behaviour of Si-doped and un-doped alpha tricalcium phosphate (α-TCP) based CPC and its reinforcement with the addition of E-glass fibre. α-TCP powders were prepared by solid-state sintering of CaCO3 and CaHPO4, and tetraethyl orthosilicate (TEOS) was used as the silicon source to synthesize the Si-doped α-TCP powders. Both the initial and final setting times of the developed cement were delayed by Si addition. Crystalline phases of HA (JCPDS 9-432), α-TCP (JCPDS 29-359) and β-TCP (JCPDS 9-169) were detected in the X-ray diffraction (XRD) pattern after immersion of CPC in simulated body fluid (SBF) for 0 hours to 10 days. As Si incorporation in the crystal lattice stabilized the TCP phase, Si-doped CPC showed a slightly slower rate of conversion into the HA phase compared to un-doped CPC. The SEM image of the microstructure of hardened CPC showed a lower HA grain size in un-doped CPC because of its premature setting and faster hydrolysis in SBF compared to Si-doped CPC. Premature setting generated micro- and macro-porosity in the un-doped CPC structure, which resulted in lower mechanical strength than in Si-doped CPC. It was found that the addition of 10 wt% E-glass fibre into Si-doped α-TCP increased the average DTS of CPC from 8 MPa to 15 MPa, as the fibres could resist the propagation of cracks by deflecting the crack tip. Our study shows that biocompatible E-glass fibre in optimum proportion in a CPC matrix can enhance the mechanical strength of CPC without affecting its biocompatibility.
Keywords: Calcium phosphate cement, biocompatibility, e-glass fibre, diametral tensile strength.
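The DTS values quoted above come from the standard diametral (Brazilian) compression formula sigma = 2P/(pi*D*t). A small worked check follows, with the specimen dimensions and failure load assumed for illustration, not taken from the paper.

```python
import math

def diametral_tensile_strength(load_n, diameter_mm, thickness_mm):
    """DTS (MPa) of a cylindrical specimen loaded across its diameter:
    sigma = 2P / (pi * D * t), with P in N and dimensions in mm."""
    return 2.0 * load_n / (math.pi * diameter_mm * thickness_mm)

# Illustrative specimen: 12 mm diameter, 6 mm thick, failing at 1700 N (assumed)
print(f"DTS = {diametral_tensile_strength(1700.0, 12.0, 6.0):.1f} MPa")
```

With these assumed numbers, 2 x 1700 N over a 12 mm x 6 mm disc gives about 15 MPa, the order of the fibre-reinforced strength reported.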
41 Multiple Criteria Decision Making Analysis for Selecting and Evaluating Fighter Aircraft
Authors: C. Ardil, A. M. Pashaev, R. A. Sadiqov, P. Abdullayev
Abstract:
In this paper, a multiple criteria decision making analysis (MCDMA) technique is presented for ranking and selecting a set of determined alternatives - fighter aircraft - which are associated with a set of decision factors. In fighter aircraft design, conflicting decision criteria, disciplines, and technologies are always involved in the design process, and MCDMA techniques can help to deal with such situations effectively and to make wise design decisions. MCDMA theory is a systematic mathematical approach for dealing with problems which contain uncertainties in decision making. The feasibility and contribution of applying MCDMA to fighter aircraft selection analysis are explored. In this study, an integrated framework incorporating MCDMA in fighter aircraft analysis is established using the entropy objective weighting method. An improved integrated MCDMA method is utilized to aggregate the multiple decision criteria into one composite figure of merit, which serves as an objective function in the decision process; it is thus demonstrated that a suitable MCDMA method provides an effective objective function for the decision analysis. Considering that the inherent uncertainties and the weighting factors have crucial impacts on the fighter aircraft evaluation, seven fighter aircraft models for the multiple design criteria in terms of the weighting factors are constructed. The proposed MCDMA model is based on an integrated entropy index procedure and additive MCDMA theory, and its applicability to the fighter aircraft selection problem is considered. The constructed model can provide an efficient decision analysis approach for uncertainty assessment of the decision problem. Consequently, the fighter aircraft alternatives are ranked based on their final evaluation scores, and a sensitivity analysis is conducted.
Keywords: Fighter Aircraft, Fighter Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA
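The entropy objective weighting step named in the abstract can be sketched compactly: criteria whose values differ more across alternatives carry more information and receive larger weights, and an additive weighted sum then yields the composite figure of merit. The decision matrix below is an assumed benefit-type example, not the study's data.

```python
import numpy as np

# Rows = fighter alternatives, cols = criteria (all benefit-type; values assumed)
X = np.array([[2100.0, 7.5, 0.85, 1200.0],
              [1900.0, 8.0, 0.80, 1500.0],
              [2450.0, 6.5, 0.90,  900.0],
              [2000.0, 9.0, 0.70, 1600.0]])

P = X / X.sum(axis=0)                      # column-wise normalisation
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)       # entropy of each criterion
w = (1 - E) / (1 - E).sum()                # objective entropy weights

R = X / X.max(axis=0)                      # linear normalisation for scoring
scores = R @ w                             # additive composite figure of merit
rank = np.argsort(-scores)
print("entropy weights:", np.round(w, 3))
print("ranking (best first):", rank + 1, "scores:", np.round(scores[rank], 3))
```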
40 Microstructure and Corrosion Behavior of Laser Welded Magnesium Alloys with Silver Nanoparticles
Authors: M. Ishak, K. Yamasaki, K. Maekawa
Abstract:
Magnesium alloys have gained increased attention in recent years in the automotive, electronics, and medical industries. This is because magnesium alloys have better properties than aluminum alloys and steels in respect of their low density and high strength-to-weight ratio. However, the main problems in magnesium alloy welding are crack formation and the appearance of porosity during solidification. This paper proposes a unique technique to weld two thin sheets of AZ31B magnesium alloy using a paste containing Ag nanoparticles. The paste, containing Ag nanoparticles of 5 nm average diameter and an organic solvent, was used to coat the surface of an AZ31B thin sheet. The coated sheet was heated at 100 °C for 60 s to evaporate the solvent. The dried sheet was set as the lower AZ31B sheet on the jig, and then lap fillet welding was carried out using a pulsed Nd:YAG laser in a closed box filled with argon gas. The characteristics of the microstructure and the corrosion behavior of the joints were analyzed by optical microscopy (OM), energy dispersive spectrometry (EDS), electron probe micro-analysis (EPMA), scanning electron microscopy (SEM), and an immersion corrosion test. The experimental results show that the wrought AZ31B magnesium alloy can be joined successfully using Ag nanoparticles. The Ag nanoparticle insert promotes grain refinement, narrows the HAZ width, and widens the bond width compared to welds without an insert. The corrosion rate of AZ31B welded with Ag nanoparticles was reduced by up to 44% compared to the base metal. The improved corrosion resistance of AZ31B welded with Ag nanoparticles is due to the finer grains and the larger grain boundary area with high Al content; the β-phase Mg17Al12 could serve as an effective barrier and suppress further propagation of corrosion. Furthermore, the Ag distribution in the fusion zone provides much finer grains and may stabilize the magnesium solid solution, making it less soluble or less anodic in aqueous environments.
Keywords: Laser welding, magnesium alloys, nanoparticles, mechanical property
39 Genetic Programming: Principles, Applications and Opportunities for Hydrological Modelling
Authors: Oluwaseun K. Oyebode, Josiah A. Adeyemo
Abstract:
Hydrological modelling plays a crucial role in the planning and management of water resources, most especially in water-stressed regions where the need to effectively manage the available water resources is of critical importance. However, due to the complex, nonlinear and dynamic behaviour of hydro-climatic interactions, achieving reliable modelling of water resource systems and accurate projection of hydrological parameters are extremely challenging. Although a significant number of modelling techniques (process-based and data-driven) have been developed and adopted in that regard, the field of hydrological modelling is still considered one that has progressed sluggishly over the past decades. This is largely a result of the degree of uncertainty identified in the methodologies and results of the techniques adopted. In recent times, evolutionary computation (EC) techniques have been developed and introduced in response to the search for efficient and reliable means of providing accurate solutions to hydrological problems. This paper presents a comprehensive review of the underlying principles, methodological needs and applications of a promising evolutionary computation modelling technique, genetic programming (GP). It examines the specific characteristics of the technique which make it suitable for solving hydrological modelling problems. It discusses the opportunities inherent in the application of GP in water-related studies such as rainfall estimation, rainfall-runoff modelling, streamflow forecasting, sediment transport modelling, water quality modelling and groundwater modelling, among others. Furthermore, the means by which such opportunities could be harnessed in the near future are discussed. In all, a case for the full embrace of GP and its variants in hydrological modelling studies is made, so as to put in place strategies that would translate into meaningful progress in the modelling of water resource systems and positively influence decision-making by relevant stakeholders.
Keywords: Computational modelling, evolutionary algorithms, genetic programming, hydrological modelling.
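As a concrete picture of what GP does in a rainfall-runoff setting, the sketch below evolves expression trees over rainfall and temperature toward a hidden toy relation, using elitism and subtree crossover only; the function and terminal sets, the target relation, and all run settings are assumptions for illustration.

```python
import math
import operator
import random

random.seed(1)

def pdiv(a, b):                       # protected division keeps trees evaluable
    return a / b if abs(b) > 1e-6 else 1.0

FUNCS = [operator.add, operator.sub, operator.mul, pdiv]
TERMS = ["r", "t", 1.0, 2.0]          # rainfall, temperature, constants

def rand_tree(depth):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(FUNCS), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(node, r, t):
    if node == "r":
        return r
    if node == "t":
        return t
    if isinstance(node, float):
        return node
    f, a, b = node
    return f(evaluate(a, r, t), evaluate(b, r, t))

def paths(tree, p=()):                # every node position in the tree
    yield p
    if isinstance(tree, tuple):
        yield from paths(tree[1], p + (1,))
        yield from paths(tree[2], p + (2,))

def get(tree, p):
    for i in p:
        tree = tree[i]
    return tree

def put(tree, p, sub):                # rebuild the tree with sub grafted at p
    if not p:
        return sub
    lst = list(tree)
    lst[p[0]] = put(tree[p[0]], p[1:], sub)
    return tuple(lst)

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

# Toy "observed" runoff from a hidden target relation q = 0.6*r - 0.1*t
data = [(r, t, 0.6 * r - 0.1 * t) for r in range(10) for t in (5, 15, 25)]

def fitness(tree):                    # RMSE against the toy observations
    se = 0.0
    for r, t, q in data:
        d = evaluate(tree, r, t) - q
        se += d * d
    rmse = (se / len(data)) ** 0.5
    return rmse if math.isfinite(rmse) else float("inf")

pop = [rand_tree(3) for _ in range(60)]
for gen in range(30):
    pop.sort(key=fitness)
    nxt = pop[:10]                                     # elitism
    while len(nxt) < 60:                               # subtree crossover
        p1, p2 = random.sample(pop[:30], 2)
        child = put(p1, random.choice(list(paths(p1))),
                    get(p2, random.choice(list(paths(p2)))))
        nxt.append(child if size(child) <= 50 else rand_tree(3))  # bloat cap
    pop = nxt
print("best RMSE:", round(fitness(min(pop, key=fitness)), 3))
```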
38 Assessment of Path Loss Prediction Models for Wireless Propagation Channels at L-Band Frequency over Different Micro-Cellular Environments of Ekiti State, Southwestern Nigeria
Authors: C. I. Abiodun, S. O. Azi, J. S. Ojo, P. Akinyemi
Abstract:
The design of accurate and reliable mobile communication systems depends largely on the suitability of path loss prediction methods and the adaptability of those methods to the various environments of interest. In this research, results on the adaptability of radio channel behavior are presented based on practical measurements carried out in the 1800 MHz frequency band. The measurements were carried out in typical urban, suburban and rural environments in Ekiti State, in the southwestern part of Nigeria. A total of seven base stations of the MTN GSM service located in the studied environments were monitored. Path loss and break-point distances were deduced from the measured received signal strength (RSS), and a practical path loss model is proposed based on the deduced break-point distances. The proposed two-slope model, a regression line, and four existing path loss models were compared with the measured path loss values. The standard deviation of each model with respect to the measured path loss was estimated for each base station. The proposed model and the regression line exhibited the lowest standard deviations, followed by the COST 231-Hata model, when compared with the Erceg, Ericsson and SUI models. Generally, the proposed two-slope model shows the closest agreement with the measured values, with mean errors of 2 to 6 dB. These results show that either the proposed two-slope model or the COST 231-Hata model may be used to predict path loss values for mobile micro-cell coverage in the considered environments. Information from this work will be useful for the link design of microwave band wireless access systems in the region.
Keywords: Break-point distances, path loss models, path loss exponent, received signal strength.
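A two-slope (break-point) model of the kind proposed applies one path loss exponent up to the break-point distance and a steeper one beyond it. In the sketch below, the 37.6 dB reference loss is free-space loss at 1 m for 1800 MHz; the exponents and the break point are assumed values, not the measured Ekiti parameters.

```python
import numpy as np

def two_slope_path_loss(d, pl_d0, n1, n2, d0=1.0, d_bp=300.0):
    """Two-slope (break-point) model: exponent n1 up to d_bp, n2 beyond.
    Distances in metres; returns path loss in dB."""
    d = np.asarray(d, dtype=float)
    pl = pl_d0 + 10 * n1 * np.log10(d / d0)
    pl_bp = pl_d0 + 10 * n1 * np.log10(d_bp / d0)      # loss at the break point
    beyond = d > d_bp
    pl[beyond] = pl_bp + 10 * n2 * np.log10(d[beyond] / d_bp)
    return pl

# 37.6 dB = free-space loss at 1 m, 1800 MHz; n1, n2, d_bp assumed for illustration
d = np.array([50, 100, 300, 600, 1000, 2000])
print("path loss (dB):", np.round(two_slope_path_loss(d, 37.6, 2.0, 3.8), 1))
```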
37 T Cell Immunity Profile in Pediatric Obesity and Asthma
Authors: Mustafa M. Donma, Erkut Karasu, Burcu Ozdilek, Burhan Turgut, Birol Topcu, Burcin Nalbantoglu, Orkide Donma
Abstract:
The mechanisms underlying the association between obesity and asthma may be related to a decreased immunological tolerance induced by a defective function of regulatory T cells (Tregs). The aim of this study is to establish the potential link between these diseases and CD4+ CD25+ FoxP3+ Tregs as well as T helper cells (Ths) in children. This is a prospective case-control study. Obese (n=40), asthmatic (n=40), asthmatic obese (n=40) and healthy children (n=40), without any acute or chronic diseases, were included in this study. Obese children were evaluated according to WHO criteria, and asthmatic patients were chosen based on GINA criteria. Parents were asked to fill out the questionnaire, and informed consent forms were obtained. Blood samples were marked with CD4+, CD25+ and FoxP3+ in order to determine Tregs and Ths by the flow cytometric method. Statistical analyses were performed, with p≤0.05 chosen as the significance threshold. Tregs, which exhibit an anti-inflammatory nature, were significantly lower in the obese (0.16%; p≤0.001), asthmatic (0.25%; p≤0.01) and asthmatic obese (0.29%; p≤0.05) groups than in the control group (0.38%). Ths were counted higher in the asthma group than in the control (p≤0.01) and obese (p≤0.001) groups. T cell immunity plays important roles in the pathogeneses of obesity and asthma. The decreased numbers of Tregs found in obese, asthmatic and asthmatic obese children may help to elucidate some questions in the pathophysiology of these diseases. For HOMA-IR levels, no significant difference was noted between the control and obese groups, but statistically higher values were found for obese asthmatics. The values obtained in all groups were below the critical cut-off points. This finding makes the statistically significant difference observed between the Tregs of the obese, asthmatic, obese asthmatic and control groups much more valuable. These findings will be useful in the diagnosis and treatment of these disorders, and future studies are needed. The production and propagation of Tregs may be promising in alternative asthma and obesity treatments.
Keywords: Asthma, flow cytometry, pediatric obesity, T cells.
36 The Analysis of Regulation on Sustainability in Financial Sector in Lithuania
Authors: D. Kubiliute
Abstract:
The Republic of Lithuania is known as a trusted location for global business institutions, and it attracts investors with its competitive environment for financial service providers. Along with the aspiration to offer a strong, results-oriented and innovation-driven environment for financial service providers, Lithuanian regulatory authorities consistently implement the European Union's high regulatory standards for financial activities, including sustainability-related disclosures. Since the European Union directed its policy towards the transition to a climate-neutral, green, competitive and inclusive economy, additional regulatory requirements for financial market participants have been adopted: disclosure of sustainable activities, transparency, prevention of greenwashing, and others. The financial sector is one of the key factors influencing the implementation of sustainability objectives in European Union policies and mitigating the negative effects of climate change; public funds are not enough to make a significant impact on sustainable investments, therefore directing public and private capital to green projects may help to finance the necessary changes. The topic of the study is original and has not yet been widely analyzed in Lithuanian legal discourse. Quantitative and qualitative methodologies and logical, systematic and critical analysis principles are used; the aim of this study is to reveal the problems of implementing sustainability regulation in the Lithuanian financial sector. Additional regulatory requirements could cause serious changes in financial business operations: additional funds, employees and time have to be dedicated for companies to implement these regulations. A lack of knowledge and data on how to implement the new regulatory requirements for sustainability reporting causes a lot of uncertainty for financial market participants, and for some companies it might even be an essential point in terms of business continuity. It is considered that the supervisory authorities should find a balance between financial market needs and legal regulation.
Keywords: Financial, market participant, legal, regulation, sustainability.
35 Concept for Knowledge out of Sri Lankan Non-State Sector: Performances of Higher Educational Institutes and Successes of Its Sector
Authors: S. Jeyarajan
Abstract:
A concept of knowledge is derived from a study of successful competition among Sri Lankan non-state higher educational institutes. The concept is developed from knowledge management practices collected from reputed literature, such as Emerald Insight, and from the non-state higher education sector itself. A test was conducted to reveal the existence of these practices in Sri Lankan non-state higher education institutes and the reasons behind them. The unavailability of comparable studies and the uncertainty about the number of participants available for data collection in the Sri Lankan context led to the selection of a qualitative research method, which used attributes of the Delphi method to manage such uncertainty. Data were collected under the dramaturgical method, which contributes to the efficient use of the Delphi method, and were gathered systematically through perspective and modified snowball sampling techniques. Grounded theory was selected as the data analysis technique and was conducted in intermixed discourses to manage the different perspectives in the data. The analysis reveals agreement between the resulting grounded theories and the findings of a foreign study, even though the present study was conducted as qualitative research and the foreign study as quantitative research; the present study thus widens the discovery made in the foreign study. Further, having discovered the reasons behind the existence of these practices, the present results offer a concept for knowledge from the Sri Lankan non-state sector for managing higher educational institutes successfully.
Keywords: Adherence of snowball sampling into perspective sampling, Delphi method in qualitative method, grounded theory development in intermix discourses of analysis, knowledge management for success of higher educational institutes.
34 Mapping the Core Processes and Identifying Actors along with Their Roles, Functions and Linkages in Trout Value Chain in Kashmir, India
Authors: Stanzin Gawa, Nalini Ranjan Kumar, Gohar Bilal Wani, Vinay Maruti Hatte, A. Vinay
Abstract:
Rainbow trout (Oncorhynchus mykiss) and brown trout (Salmo trutta fario) are the two trout species which, once introduced by the British into the waters of Kashmir, have adapted well to the favorable climatic conditions. Cold-water fisheries are one of the emerging sectors in the Kashmir valley, and trout holds an important place in Jammu and Kashmir fisheries. Realizing the immense potential of trout culture in the Kashmir region, the state fisheries department started privatizing trout culture under the centrally funded RKVY scheme, which since 2009-10 has provided an 80 percent subsidy for raceway construction and the supply of feed and seed for the first year; at present there are 362 private trout farms. To cater to the growing demand for trout in the valley, it is important to understand the bottlenecks faced in the propagation of trout culture. Value chain analysis provides a generic framework for understanding the various activities and processes, and mapping and studying linkages is the first step in any value chain analysis. In Kashmir, it was found that trout hatcheries play a crucial role in ensuring a continuous supply of trout seed in the valley. Feed is the most limiting factor in trout culture, and farmers incur high costs in purchasing feed and transporting it from the feed mill to the farm. The lack of aqua clinics in the Kashmir valley also needs to be addressed. Brood stock maintenance, breeding and seed production, technical assistance to private farmers, and extension services have to be strengthened, and a healthier environment for new entrepreneurs needs to be developed. It was found that trout farmers do not avail themselves of credit facilities, as there is no well-defined credit scheme for fisheries in the state. The study showed weak institutional linkages. Research and development should focus more on applied science rather than basic science.
Keywords: Trout, Kashmir, value chain, linkages, culture.
33 Multistage Condition Monitoring System of Aircraft Gas Turbine Engine
Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev
Abstract:
Research shows that the application of probability-statistical methods, especially at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information has fuzzy, limited and uncertain properties, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and nonlinear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research into the changes in skewness and kurtosis coefficient values shows that the distributions of GTE work parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Research into the changes in correlation coefficient values also shows their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is offered for model choice. When the information is sufficient, a recurrent algorithm for aviation GTE technical condition identification (using Hard Computing technology), based on measurements of the input and output parameters of the multiple linear and nonlinear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)), is offered. The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics
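The recursive least squares identification mentioned in the abstract can be sketched on a toy linear model. The regressors (a constant, shaft speed, fuel flow) and the synthetic exhaust-temperature data below are assumptions for illustration, not engine data.

```python
import numpy as np

def rls_identify(X, y, lam=0.99, delta=1000.0):
    """Recursive least squares for y ~ X @ theta with forgetting factor lam.
    Returns the parameter trajectory (one row per processed measurement)."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                     # large initial covariance
    hist = []
    for x, yk in zip(X, y):
        Px = P @ x
        k = Px / (lam + x @ Px)               # gain vector
        theta = theta + k * (yk - x @ theta)  # innovation update
        P = (P - np.outer(k, Px)) / lam
        hist.append(theta.copy())
    return np.array(hist)

# Toy GTE-style regression: exhaust temperature vs. speed and fuel flow (synthetic)
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200),
                     rng.uniform(0.7, 1.0, 200),     # normalised shaft speed
                     rng.uniform(0.4, 0.9, 200)])    # normalised fuel flow
true_theta = np.array([550.0, 180.0, 90.0])
y = X @ true_theta + rng.normal(0, 2.0, 200)         # measurement noise
est = rls_identify(X, y)
print("final estimate:", np.round(est[-1], 1), "| true:", true_theta)
```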
32 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: S. Golmohammadi, M. Noorian Bidgoli
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and for determining support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required: the Rock Quality Designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and Stress Reduction Factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters affecting the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the whole system. In this research, an attempt has been made to determine the most effective (key) parameters among the six rock mass parameters of the Q-system, using the Rock Engineering System (RES) method, to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which the degree of cause and effect of a system's parameters can be determined by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is essentially a statistical analysis of the data, determining the correlation coefficients between them, so the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters, the SRF and Jr parameters have the maximum and minimum impact on the system, respectively, while the RQD and Jw parameters are the most and least affected by the system, respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
Keywords: Q-system, Rock Engineering System, statistical analysis, rock mass, tunnel.
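The RES cause-effect bookkeeping works as follows: with an interaction matrix M whose entry (i, j) codes how strongly parameter i influences parameter j, the row sum C is the parameter's "cause", the column sum E its "effect", and C+E its interactive intensity. The matrix values below are assumed so that the sketch reproduces the ordering reported in the abstract; they are not the Azad Dam coding.

```python
import numpy as np

# Illustrative 6x6 interaction matrix for the Q-system parameters
# (RQD, Jn, Jr, Ja, Jw, SRF); entry [i][j] codes the influence of parameter i
# on parameter j. All values are assumed for the sketch.
params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
M = np.array([[0, 2, 1, 1, 1, 2],
              [3, 0, 1, 1, 1, 2],
              [2, 1, 0, 1, 0, 1],
              [2, 1, 2, 0, 1, 2],
              [2, 1, 1, 2, 0, 2],
              [3, 2, 2, 2, 1, 0]])

C = M.sum(axis=1)            # cause: how strongly a parameter acts on the system
E = M.sum(axis=0)            # effect: how strongly the system acts on it
w = (C + E) / (C + E).sum()  # interactive-intensity weight of each parameter

for p, c, e, wi in zip(params, C, E, w):
    print(f"{p:>4}: C={c:2d}  E={e:2d}  C+E={c+e:2d}  weight={wi:.3f}")
```

With these assumed entries, SRF has the largest C (strongest cause), Jr the smallest, RQD the largest E (most affected), and Jw the smallest, matching the ordering in the abstract.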
31 STLF Based on Optimized Neural Network Using PSO
Authors: H. Shayeghi, H. A. Shayanfar, G. Azimi
Abstract:
The quality of short-term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to a trial-and-error approach. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for performing an important task in this process, i.e. optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimal large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a random optimization method based on swarm intelligence with a powerful global optimization ability. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be easily optimized. The proposed method is applied to STLF of the local utility. Data are clustered due to the differences in their characteristics, and special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed method optimized by PSO can quicken the learning speed of the network and improve the forecasting precision compared with the conventional back propagation (BP) method. Moreover, it is not only simple to calculate, but also practical and effective. It provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Keywords: Large Neural Network, Short-Term Load Forecasting, Particle Swarm Optimization.
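A global-best PSO training a small feed-forward network can be sketched directly: each particle is a flattened weight vector, and the fitness is the network's forecasting error. The toy daily load curve, the network size, and the PSO constants below are assumptions for illustration, not the utility data or the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy one-day load curve: predict load from a (sin, cos) encoding of the hour
hours = np.arange(0, 24, 0.5)
X = np.column_stack([np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])
y = 900 + 250 * np.sin(2 * np.pi * (hours - 7) / 24) + rng.normal(0, 10, hours.size)
ym, ys = y.mean(), y.std()
yn = (y - ym) / ys                          # normalised target

H = 6                                       # hidden units (assumed)
n_w = 2 * H + H + H + 1                     # W1, b1, W2, b2 flattened

def mse(w):                                 # fitness of one particle (weight vector)
    W1, b1 = w[:2 * H].reshape(2, H), w[2 * H:3 * H]
    W2, b2 = w[3 * H:4 * H], w[4 * H]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - yn) ** 2)

# Plain global-best PSO over the flattened network weights
n_p, iters, inertia, c1, c2 = 40, 400, 0.72, 1.49, 1.49
pos = rng.normal(0, 1, (n_p, n_w))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_p, n_w))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"PSO-trained network RMSE: {mse(gbest) ** 0.5 * ys:.1f} load units")
```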
30 Bee Parameter Determination via Weighted Centroid Modified Simplex and Constrained Response Surface Optimisation Methods
Authors: P. Luangpaiboon
Abstract:
Various intelligences and inspirations have been adopted into the iterative searching process called meta-heuristics, which intelligently performs exploration and exploitation in the solution domain space, aiming to efficiently seek near-optimal solutions. In this work, the bee algorithm, inspired by the natural foraging behaviour of honey bees, was adapted to find near-optimal solutions to a transportation management problem, dynamic multi-zone dispatching. This problem must cope with uncertainty and changing customers' demand. In striving to remain competitive, a transportation system should therefore be flexible in order to cope with changes in customers' demand in terms of inbound and outbound goods and technological innovations. To maintain a higher service level at lower management cost via the minimal imbalance scenario, rearrangement penalties of the areas in each zone, including time periods, are also included. However, the performance of the algorithm depends on appropriate parameter settings, which need to be determined and analysed before its implementation. The BEE parameters are determined through the linear constrained response surface optimisation method (LCRSOM) and the weighted centroid modified simplex method (WCMSM). Experimental results were analysed in terms of the best solutions found so far, the mean and standard deviation of the imbalance values, and the convergence of the solutions obtained. It was found that the results obtained from the LCRSOM were better than those using the WCMSM; however, the average execution time of experimental runs using the LCRSOM was longer than with the WCMSM. Finally, a recommendation of proper level settings of the BEE parameters for some selected problem sizes is given as a guideline for future applications.
Keywords: Meta-heuristic, Bee Algorithm, Dynamic Multi-Zone Dispatching, Linear Constrained Response Surface Optimisation Method, Weighted Centroid Modified Simplex Method
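The bee algorithm's division of labour, with many recruits around elite sites, fewer around other selected sites, and fresh scouts elsewhere, is exactly what its parameters (n, m, e, nep, nsp, ngh) control. The sketch below runs it on a stand-in quadratic cost in place of the dispatching imbalance objective; all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(x):                 # stand-in for the dispatching imbalance cost
    return np.sum(x ** 2)

dim, lo, hi = 5, -10.0, 10.0
n, m, e = 30, 8, 3                # scouts, selected sites, elite sites
nep, nsp, ngh = 10, 5, 1.0        # recruits for elite/other sites, patch radius

sites = rng.uniform(lo, hi, (n, dim))
for it in range(100):
    sites = sites[np.argsort([objective(s) for s in sites])]
    new_sites = []
    for i in range(m):                          # local (exploitation) search
        recruits = nep if i < e else nsp        # more bees for elite sites
        patch = sites[i] + rng.uniform(-ngh, ngh, (recruits, dim))
        patch = np.clip(patch, lo, hi)
        new_sites.append(min(list(patch) + [sites[i]], key=objective))
    scouts = rng.uniform(lo, hi, (n - m, dim))  # global (exploration) search
    sites = np.vstack([new_sites, scouts])
    ngh *= 0.97                                 # shrink patches over time

best = min(sites, key=objective)
print("best cost found:", round(objective(best), 6))
```

Tuning runs like the paper's parameter study amount to repeating this loop over a grid or designed set of (n, m, e, nep, nsp, ngh) values and comparing the best, mean and standard deviation of the final costs.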
29 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling
Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal
Abstract:
Datasets or collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper aims to present a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of such data is a notoriously tedious, time-consuming process and, in addition, requires experts in the area, who are mostly unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using well-known measures (Accuracy, Hamming Loss, Micro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods experimented with, while the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
Keywords: Benchmark collection, program educational objectives, student outcomes, ABET, accreditation, machine learning, supervised multi-label classification, text mining.
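Of the benchmarked techniques, Binary Relevance is the simplest: one independent binary classifier per label. The sketch below runs it with scikit-learn on generated stand-in data and reports the measures named above; the actual PEO collection is not reproduced here, so everything about the data is assumed.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Stand-in multi-label data: documents (PEO statements) x outcome labels
X, Y = make_multilabel_classification(n_samples=600, n_features=80,
                                      n_classes=7, n_labels=2, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

# Binary Relevance: one independent binary classifier per outcome label
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
Y_hat = clf.predict(X_te)

print("subset accuracy:", round(accuracy_score(Y_te, Y_hat), 3))
print("Hamming loss   :", round(hamming_loss(Y_te, Y_hat), 3))
print("Micro-F1       :", round(f1_score(Y_te, Y_hat, average="micro"), 3))
print("Macro-F1       :", round(f1_score(Y_te, Y_hat, average="macro"), 3))
```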
28 Aircraft Gas Turbine Engines Technical Condition Identification System
Authors: A. M. Pashayev, C. Ardil, D. D. Askerov, R. A. Sadiqov, P. S. Abdullayev
Abstract:
This paper shows that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information has fuzzy, limited and uncertain properties, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and nonlinear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. Thus, for more adequate modelling of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research into the changes in skewness and kurtosis coefficient values shows that the distributions of GTE work parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters allows the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Research into the changes in correlation coefficient values also shows their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is offered for model choice. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of Fuzzy Multiple Regression is considered. When the information is sufficient, a recurrent algorithm for aviation GTE technical condition identification (using Hard Computing technology), based on measurements of the input and output parameters of the multiple linear and nonlinear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)), is offered. The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
Keywords: Gas turbine engines, neural networks, fuzzy logic, fuzzy statistics.
27 Parameters of Main Stage of Discharge between Artificial Charged Aerosol Cloud and Ground in Presence of Model Hydrometeor Arrays
Authors: D. S. Zhuravkova, A. G. Temnikov, O. S. Belova, L. L. Chernensky, T. K. Gerastenok, I. Y. Kalugina, N. Y. Lysov, A. V. Orlov
Abstract:
Investigation of discharges from artificial charged water aerosol clouds in the presence of arrays of model hydrometeors could provide new data on the peculiarities of return stroke formation between a thundercloud and the ground when large volumes of hail particles participate in lightning discharge initiation and propagation. Artificial charged water aerosol clouds of negative or positive polarity with a potential of up to one million volts have been used. Hail has been simulated by groups of conductive model hydrometeors of different forms. The parameters of the impulse current of the main stage of the discharge between the artificial positively or negatively charged water aerosol cloud and the ground in the presence of a model hydrometeor array, and of its corresponding electromagnetic radiation, have been determined. It was established that the parameters of the model hydrometeor array influence the parameters of the main stage of the discharge between the artificial thundercloud cell and the ground. The maximal values of the main stage current impulse parameters and of the electromagnetic radiation registered by the plate antennas were found for the array of model hydrometeors of cylinder-of-revolution form for the negatively charged aerosol cloud, and for the array of hydrometeors of plate rhombus form for the positively charged aerosol cloud. It was found that the parameters of the main stage of the discharge in the presence of model hydrometeor arrays of the considered forms depend on the polarity of the artificial charged aerosol cloud. On average, across all investigated forms of model hydrometeor arrays, the amplitude and current rise of the main stage current impulse and the amplitude of the corresponding electromagnetic radiation for the artificial charged aerosol cloud of positive polarity were 1.1-1.9 times higher than for the charged aerosol cloud of negative polarity. Thus, the results could indicate a possibly more important role of large volumes of hail arrays in the thundercloud in determining the parameters of the return stroke for positive lightning.
Keywords: Main stage of discharge, hydrometeor form, lightning parameters, negative and positive artificial charged aerosol cloud.
26 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the issue of cross-layer communication. A cross-layer architecture shares information across the layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changing sensor network environment. However, time slot assignment and neighbour route selection time duration for the cross layer have not been addressed. Time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network, and although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial process is to discover routes in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sublayers; this process considers MAC layer operation with dynamic route neighbour table discovery. The discovered route paths for packet communication then employ a Broad Route Distributed Time Slot Assignment method on the cross-layered sensor network system, where Broad Route means time slotting over varying lengths of the route paths. During packet communication in this sensor network, packet transmissions are adjusted over different times with varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to identify the performance of the sensor network communication structure. The main task of the Rayleigh fading model is to measure the power level of each communication under the MAC sublayer; the minimized power level helps to reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as power, packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment
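A flat Rayleigh fading model of the kind used for the power-level measurement can be sketched in a few lines: the channel gain is a zero-mean complex Gaussian with unit mean power, so the received power is exponentially distributed. The transmit power and outage threshold below are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

def rayleigh_rx_power(tx_power_mw, n_samples):
    """Received power under flat Rayleigh fading: Pt * |h|^2, where
    h = (X + jY)/sqrt(2) with X, Y ~ N(0, 1), so E[|h|^2] = 1."""
    h = (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)) / np.sqrt(2)
    return tx_power_mw * np.abs(h) ** 2

# Illustrative link check for a sensor node transmitting at 1 mW (assumed)
p_rx = rayleigh_rx_power(1.0, 100_000)
print(f"mean received power: {p_rx.mean():.3f} mW (exponentially distributed)")
print(f"P(power < 0.1 mW)  : {(p_rx < 0.1).mean():.3f}  (theory: {1 - np.exp(-0.1):.3f})")
```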
25 Effect of Non-Metallic Inclusion from the Continuous Casting Process on the Multi-Stage Forging Process and the Tensile Strength of the Bolt: A Case Study
Authors: Tomasz Dubiel, Tadeusz Balawender, Mirosław Osetek
Abstract:
The paper presents the influence of non-metallic inclusions on the multi-stage forging process and the mechanical properties of a dodecagon socket bolt used in the automotive industry. The detected metallurgical defect was so large that it directly influenced the mechanical properties of the bolt and resulted in failure to meet the requirements of its mechanical property class. In order to assess the defect, X-ray and metallographic examinations of the defective bolt were performed, showing an exogenous non-metallic inclusion. The size of the defect in cross-section was 0.531 mm in width and 1.523 mm in length, and the defect was continuous along the entire axis of the bolt. In the analysis, a finite element method (FEM) simulation of the multi-stage forging process was designed, taking into account a non-metallic inclusion parallel to the sample axis, reflecting the studied case. The process of defect propagation due to material upset in the head area was analyzed, with particular attention to the final forging stage, which shapes the dodecagonal socket and fills the flange area. The defect was observed to significantly reduce the effective cross-section as a result of its expansion perpendicular to the axis of the bolt. The mechanical properties of products with and without the defect were analyzed. In the first step, the hardness test confirmed that the required value for mechanical class 8.8 was obtained for both bolt types. In the second step, the bolts were subjected to a static tensile test. The bolts without the defect gave a positive result, while all 10 bolts with the defect gave a negative result, achieving a tensile strength below the requirements. The tensile strength tests were confirmed by the metallographic tests and by an FEM simulation with the inclusion spreading perpendicularly in the area of the head. The bolts were damaged directly under the bolt head, which is inconsistent with the requirements of ISO 898-1. It has been shown that non-metallic inclusions oriented along the axis of the bolt can directly cause loss of functionality, and these defects should be detected even before assembly into the machine element.
Keywords: continuous casting, multi-stage forging, non-metallic inclusion, upset bolt head
24 Climate Related Financial Risk for Automobile Industry and Impact to Financial Institutions
Authors: S. Mahalakshmi, B. Senthil Arasu
Abstract:
As per recent changes in global policy, climate-related changes and the impacts they cause across every sector are viewed as green swan events: in essence, climate-related changes can happen often and lead to risk and a lot of uncertainty, but need to be mitigated instead of being treated as black swan events. This raises the question of how this risk can be computed, so that financial institutions can plan to mitigate it. Climate-related changes impact all risk types: credit risk, market risk, operational risk, liquidity risk, reputational risk and others. The models required to compute this risk have to consider the different industrial needs of the counterparty, as well as the contributing factors, be they different risk drivers, different transmission channels, different approaches, or the granularity of the available data. This suggests that climate-related change, though it affects Pillar I risks, should be treated as a Pillar II risk. It has to be modeled specifically based on the financial institution's actual exposure to different industries, instead of generalizing the risk charge, and it will have to be considered as additional capital to be held by the financial institution on top of its Pillar I risks as well as its existing Pillar II risks. In this paper, we present a risk assessment framework to model and assess climate change risks for both credit and market risk. The framework helps in assessing different scenarios and how different transition risks affect the risk associated with different parties. This research paper delves into the increasing concentration of greenhouse gases, which in turn causes global warming. It then considers various scenarios in which different risk drivers impact the credit and market risk of an institution, by understanding the transmission channels and also considering the transition risk. The paper then focuses on an industry that is rapidly being disrupted: the automobile industry. The paper uses the framework to show how climate changes and changes to the relevant policies have impacted an entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also focuses on the climate risk calculation for Pillar II capital, and why it makes sense for a bank to maintain this in addition to its regular Pillar I and Pillar II capital.
Keywords: Capital calculation, climate risk, credit risk, pillar II risk, scenario modeling.
23 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation
Authors: Constantin Z. Leshan
Abstract:
Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes which curve the spacetime; if the concentration of holes increases, it leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case where space consists of holes only, the distance between every two points is equal to zero and time stops; outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties alone. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls'; simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in the discontinuous spacetime because it contains impassable holes. Spacetime 'boils' continually due to the appearance of vacuum holes. For teleportation to be possible, a body must be sent outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality or special relativity, due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. Vacuum holes can be created experimentally using the method proposed by Descartes: a body must be removed from a vessel without permitting another body to occupy this volume.
Keywords: Border of the universe, causality violation, perfect isolation, quantum jumps.
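The 'random jumps' claim above is purely probabilistic, so its plausibility can be explored with a toy Monte Carlo simulation. The sketch below is not part of the author's theory; it simply estimates how many uniform random relocations inside a search volume are needed, on average, before a ship lands within a capture radius of some habitable target. All parameters (volume size, target count, capture radius) are invented for illustration.

import random
import math

# Toy model: uniform random teleportation jumps inside a cubic search
# volume containing a few "habitable" target points. We count how many
# jumps it takes to land within a capture radius of any target.
# All parameters are illustrative assumptions, not physical estimates.
SIDE = 1000.0       # side of the search cube (arbitrary units)
N_TARGETS = 5       # assumed number of habitable planets in the volume
RADIUS = 50.0       # capture radius around a target
TRIALS = 200        # independent colonization attempts to average over

def random_point():
    return tuple(random.uniform(0.0, SIDE) for _ in range(3))

def near_any_target(p, targets):
    return any(math.dist(p, t) <= RADIUS for t in targets)

def jumps_until_arrival(targets, max_jumps=1_000_000):
    for jump in range(1, max_jumps + 1):
        if near_any_target(random_point(), targets):
            return jump
    return max_jumps  # cap runaway attempts

targets = [random_point() for _ in range(N_TARGETS)]
mean_jumps = sum(jumps_until_arrival(targets) for _ in range(TRIALS)) / TRIALS

# Each jump succeeds with probability p ~ N * (4/3)*pi*r^3 / side^3,
# so the expected jump count is roughly 1/p (geometric distribution).
p = N_TARGETS * (4.0 / 3.0) * math.pi * RADIUS ** 3 / SIDE ** 3
print(f"simulated mean jumps: {mean_jumps:.0f}, geometric estimate: {1 / p:.0f}")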
22 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for the automatic calibration of a hydrologic model was developed in the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement, with the MIKE URBAN results lying within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in the predictions can be obtained; in contrast, MIKE URBAN provides just a point estimate. Based on the results of the analysis, the developed ABC framework performs well for automatic calibration.
Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.
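ABC rejection sampling, the core of the approach described above, is straightforward to sketch. The toy example below calibrates two parameters of a deliberately simple rainfall-runoff stand-in (not the paper's time-area model or MIKE URBAN): draw parameters from priors, simulate, keep draws whose simulated output lies within a tolerance of the observations, and read the retained draws as an approximate posterior. The priors, tolerance and model form are illustrative assumptions.

import random

# Toy ABC rejection sampler for a two-parameter runoff stand-in.
# Runoff = loss-adjusted rainfall scaled by a reduction factor; this is
# an illustrative surrogate, not the time-area model used in the paper.
RAIN = [0.0, 5.0, 12.0, 8.0, 3.0, 0.0]          # rainfall depths (mm)

def simulate(initial_loss, reduction_factor):
    return [max(r - initial_loss, 0.0) * reduction_factor for r in RAIN]

# Synthetic "observed" runoff from assumed true parameters plus noise.
random.seed(1)
OBSERVED = [q + random.gauss(0.0, 0.2) for q in simulate(1.5, 0.6)]

def distance(sim, obs):
    # RMSE between simulated and observed runoff series.
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

accepted = []
TOLERANCE = 0.5
for _ in range(100_000):
    # Draw candidate parameters from uniform priors (assumed ranges).
    il = random.uniform(0.0, 5.0)        # initial loss (mm)
    rf = random.uniform(0.0, 1.0)        # reduction factor (-)
    if distance(simulate(il, rf), OBSERVED) <= TOLERANCE:
        accepted.append((il, rf))

# The accepted draws approximate the posterior; their spread quantifies
# the parameter uncertainty that a point-estimate calibration would hide.
ils, rfs = zip(*accepted)
print(f"{len(accepted)} accepted; initial loss ~ {sum(ils)/len(ils):.2f}, "
      f"reduction factor ~ {sum(rfs)/len(rfs):.2f}")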
21 Evaluating the Small-Strain Mechanical Properties of Cement-Treated Clayey Soils Based on the Confining Pressure
Authors: M. A. Putera, N. Yasufuku, A. Alowaisy, R. Ishikura, J. G. Hussary, A. Rifa’i
Abstract:
Indonesia’s government has planned a project for a high-speed railway connecting the capital city Jakarta with Surabaya, a distance of about 700 km. The route is planned to be constructed above a lowland soil region. The lowland soil region comprises cohesive soil with high water content and a high compressibility index, which leads to settlement problems. Among the variety of railway track structures, the ballastless track has been used effectively to reduce settlement; it provides a lightweight structure and minimizes workspace. However, deploying this thin-layer structure above a lowland area brings several problems, such as a lack of bearing capacity and deflection under traffic loading. It is therefore necessary to combine it with ground improvement to ensure acceptable settlement behavior on the clayey soil. Given the assured strength increment and the short working period, cement-treated soil was adopted as the substructure of the railway track. Mechanical properties in the field are commonly evaluated using the plate load test and the cone penetration test. However, observing the increment of mechanical properties involves uncertainty, especially when evaluating cement-treated soil in the substructure. The current quality control of cement-treated soils is established by laboratory tests. Moreover, small-strain device measurements in the laboratory can produce more reliable results that are close to field measurements. The aims of this research are to show the intercorrelation of confining pressure with the initial Young’s modulus (E0), Poisson’s ratio (υ0) and shear modulus (G0) within small-strain ranges, and to investigate the discrepancies between these parameters. The experimental results confirmed that the intercorrelation between cement content and confining pressure follows a power function. In addition, discrepancies were observed at higher cement ratios, but not at low mixing ratios.
Keywords: Cement content, confining pressure, high-speed railway, small strain ranges.
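For readers unfamiliar with the small-strain parameters named above, they are linked by isotropic elasticity, G0 = E0 / (2(1 + υ0)), and the abstract reports a power-function dependence on confining pressure, i.e. G0 ≈ A·p^n. The sketch below fits such a power law to made-up (pressure, G0) pairs by linear regression in log-log space; the data values and fitted constants are illustrative assumptions, not the paper's measurements.

import math

def shear_modulus(e0, nu0):
    """Isotropic elasticity: G0 = E0 / (2 * (1 + nu0))."""
    return e0 / (2.0 * (1.0 + nu0))

# Illustrative (confining pressure kPa, G0 MPa) pairs -- invented data
# shaped like the power-function trend reported in the abstract.
DATA = [(50, 180.0), (100, 250.0), (200, 345.0), (400, 480.0)]

# Fit G0 = A * p**n via least squares on log G0 = log A + n * log p.
xs = [math.log(p) for p, _ in DATA]
ys = [math.log(g) for _, g in DATA]
n = len(DATA)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
A = math.exp(intercept)
print(f"G0 ~ {A:.1f} * p^{slope:.2f}")
print(f"G0 from E0=600 MPa, nu0=0.25: {shear_modulus(600.0, 0.25):.0f} MPa")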
20 Developing Improvements to Multi-Hazard Risk Assessments
Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson
Abstract:
This paper outlines approaches for assessing multi-hazard risks. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with a preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so people may confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards, for example natural hazards triggering technological disasters (Natech) such as fire, explosion or toxic release. Multi-hazards have a more destructive impact on urban areas than any one hazard alone. In addition, climate change is creating links between different disasters, for example by causing landslide dams and debris flows, leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper aims to review the multi-hazard assessment methods published to date and to categorize the strengths and drawbacks of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments. To assess multi-hazard risk assessments, the current methods were first described; next, their drawbacks were outlined; finally, the improvements made to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the necessity for more precise risk assessment. It should be noted that ignoring or only partially considering multi-hazards in risk assessment will lead to risks being overestimated or overlooked in resilience and recovery action management.
Keywords: Cascading hazards, multi-hazard, risk assessment, risk reduction.
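One common way to move beyond single-hazard assessment, and a rough illustration of the interaction problem named above, is a hazard interaction matrix: entry (i, j) holds the conditional probability that hazard j is triggered given that hazard i occurs, so cascade probabilities can be propagated from an initiating event. The hazards and probabilities below are invented for illustration; they are not from the reviewed literature or the Napier case study.

# Hedged sketch: propagating cascade probabilities through a hazard
# interaction matrix. TRIGGER[a][b] is an assumed probability that
# hazard b is triggered given hazard a occurs; values are illustrative.
TRIGGER = {
    "earthquake": {"landslide": 0.30, "fire": 0.10, "flood": 0.05},
    "landslide":  {"flood": 0.20},          # e.g. landslide dam failure
    "flood":      {"toxic_release": 0.05},  # e.g. flooded industrial site
    "fire":       {"toxic_release": 0.10},
}

def cascade_probabilities(initiating_hazard):
    """Approximate probability of each hazard in the cascade. Each
    hazard is expanded downstream only once, using its first-path
    probability; later paths update its own probability but not its
    downstream contributions -- a deliberate simplification."""
    probs = {initiating_hazard: 1.0}
    frontier = [(initiating_hazard, 1.0)]
    while frontier:
        hazard, p = frontier.pop()
        for nxt, cond in TRIGGER.get(hazard, {}).items():
            contribution = p * cond
            if nxt not in probs:
                probs[nxt] = contribution
                frontier.append((nxt, contribution))
            else:
                # Combine independent trigger paths: 1 - (1-p1)(1-p2).
                probs[nxt] = 1.0 - (1.0 - probs[nxt]) * (1.0 - contribution)
    return probs

for hazard, p in sorted(cascade_probabilities("earthquake").items()):
    print(f"{hazard}: {p:.3f}")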
19 Tensile and Fracture Properties of Cast and Forged Composite Synthesized by Addition of in-situ Generated Al3Ti-Al2O3 Particles to Magnesium
Authors: H. M. Nanjundaswamy, S. K. Nath, S. Ray
Abstract:
TiO2 particles have been added to molten aluminium to produce an aluminium-based cast Al/Al3Ti-Al2O3 composite, which has then been added to molten magnesium to synthesize a magnesium-based cast Mg-Al/Al3Ti-Al2O3 composite. The nominal compositions in terms of Mg, Al and TiO2 contents in the magnesium-based composites are Mg-9Al-0.6TiO2, Mg-9Al-0.8TiO2, Mg-9Al-1.0TiO2 and Mg-9Al-1.2TiO2, designated respectively as MA6T, MA8T, MA10T and MA12T. The microstructure of the cast magnesium-based composite shows grayish rods of the intermetallic Al3Ti, inherited from the aluminium-based composite, but on hot forging these rods break into smaller lengths, decreasing the average aspect ratio (length to diameter) from 7.5 to 3.0. There are also cavities between the broken segments of the rods. The β-phase (Mg17Al12) in the cast microstructure dissolves during heating prior to forging and re-precipitates as relatively finer particles on cooling. The amount of β-phase also decreases on forging as segregation is removed. In both the cast and forged composites, the Brinell hardness increases rapidly with increasing TiO2 addition, but the hardness is higher in the forged composites by about 80 BHN. With higher levels of TiO2 addition in the magnesium-based cast composite, yield strength decreases progressively, though there is a marginal increase in yield strength over that of the cast Mg-9 wt. pct. Al alloy, designated as the MA alloy. However, the ultimate tensile strength (UTS) of the cast composites decreases with increasing particle content, indicating possibly early crack initiation in the brittle inter-dendritic region and easy crack propagation through the particle interfaces. In the forged composites, there is a significant improvement in both yield strength and UTS with increasing TiO2 addition, and also over those observed in their cast counterparts, but at higher additions both decrease. It may also be noted that, as in the forged MA alloy, incomplete recovery of the forging strain increases the strength of the matrix in the composites, while the ductility decreases in both the forged alloy and the composites. The initiation fracture toughness, JIC, decreases drastically in the cast composites compared to the MA alloy due to the presence of the intermetallic Al3Ti and Al2O3 particles. There is a drastic reduction of JIC on forging, both in the alloy and in the composites, possibly due to incomplete recovery of the forging strain as well as the breaking of the Al3Ti rods and the voids between their broken segments in the composites. The ratio of tearing modulus to elastic modulus is higher in the cast composites and increases with increasing TiO2 addition. On forging, this ratio decreases comparatively more for the cast MA alloy than for the composites.
Keywords: Composite, fracture toughness, forging, tensile properties.
18 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio D. Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (e.g. bacteria or yeasts), as in the production of antibiotics. Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property in the considered application context. In particular, a robust schedule will not collapse immediately when a culture of microorganisms has to be thrown away due to microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (e.g. makespan, lateness) should change little if at all. In this research work we formulated a constraint programming optimization (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto the available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints; in particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where a and b are input parameters of the COP model. An (a, b) super-solution is one in which, if at most a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning new values to these variables (i.e. the completion times of backup culture tasks) and to at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from a real-life pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: Constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries.
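A minimal flavour of such a scheduling model can be given with a constraint programming sketch. The example below uses Google OR-Tools CP-SAT (not necessarily the solver used by the authors) to schedule a few tasks with precedences and due dates on a single resource, minimizing makespan. It does not implement (a, b) super-solutions, which require a dedicated reformulation; it only shows the base model that such robustness constraints would extend. Task data are invented.

from ortools.sat.python import cp_model

# Minimal single-resource batch scheduling sketch (illustrative data).
# TASKS: name -> (duration, due date); PRECEDENCES: (before, after).
TASKS = {"culture": (5, 10), "harvest": (2, 12), "purify": (4, 20)}
PRECEDENCES = [("culture", "harvest"), ("harvest", "purify")]
HORIZON = sum(d for d, _ in TASKS.values()) * 2

model = cp_model.CpModel()
starts, ends, intervals = {}, {}, {}
for name, (duration, due) in TASKS.items():
    starts[name] = model.NewIntVar(0, HORIZON, f"start_{name}")
    ends[name] = model.NewIntVar(0, HORIZON, f"end_{name}")
    intervals[name] = model.NewIntervalVar(
        starts[name], duration, ends[name], f"iv_{name}")
    model.Add(ends[name] <= due)              # due-date (time) constraint

for before, after in PRECEDENCES:
    model.Add(starts[after] >= ends[before])  # precedence constraint

model.AddNoOverlap(intervals.values())        # single-resource capacity

makespan = model.NewIntVar(0, HORIZON, "makespan")
model.AddMaxEquality(makespan, ends.values())
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in TASKS:
        print(name, solver.Value(starts[name]), solver.Value(ends[name]))
    print("makespan:", solver.Value(makespan))

A super-solution variant would add backup culture tasks and constrain how many completion times may shift on repair, in line with the (a, b) definition above.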
17 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of depositional origin, while secondary porosity results from chemical reactivity through diagenetic processes. Understanding the carbonate pore system is an integral part of hydrocarbon exploration. However, current porosity classification schemes are of limited use in adequately predicting the petrophysical properties of reservoirs with various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlations. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin), mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity accounts for 40 to 50% of the total porosity. It has been shown that these Miocene carbonates contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons; neglecting its impact can increase the uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, applying the existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, the classification can be improved by including pore types and pore structures, dividing them into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
Keywords: Carbonate reservoirs, microporosity, overview of porosity classification, reservoir characterization.
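The macro/micro split described above reduces to simple arithmetic: point counting gives the macroporosity fraction as pore hits divided by total grid points, and microporosity follows by subtracting that fraction from the total porosity measured on the plug. The sketch below implements this bookkeeping with invented counts; the numbers are illustrative, not the study's data.

# Hedged sketch of the macro/micro porosity split via point counting.
# Counts and the plug porosity below are invented for illustration.
def macroporosity_fraction(pore_hits, total_points):
    """Point-count estimate: fraction of grid points landing in
    visible (macro) pores on the thin-section images."""
    return pore_hits / total_points

def microporosity(total_porosity, macro_fraction):
    """Microporosity = total (plug-measured) porosity minus the
    point-counted macroporosity."""
    return max(total_porosity - macro_fraction, 0.0)

# Example: a 300-point grid per image, pooled over the 32 images of a
# thin section (assumed pooled counts and core-plug porosity).
pore_hits, total_points = 1250, 9600
total_porosity = 0.24

macro = macroporosity_fraction(pore_hits, total_points)
micro = microporosity(total_porosity, macro)
print(f"macroporosity: {macro:.3f}, microporosity: {micro:.3f} "
      f"({micro / total_porosity:.0%} of total)")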