Search results for: Performance Parameters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8641

301 Inquiry on the Improvement of Teaching Quality in the Classroom with Meta-Teaching Skills

Authors: Shahlan Surat, Saemah Rahman, Saadiah Kummin

Abstract:

When teachers reflect on and evaluate whether their teaching methods actually have an impact on students' learning, they adjust their practices accordingly. This inevitably improves their students' learning and performance. The meta-teaching approach can invigorate teaching and create a passion for it, and thus helps to increase commitment to and love for the teaching profession. This study was conducted to determine the level of metacognitive thinking of teachers in the process of teaching and learning in the classroom. Teachers' metacognitive thinking includes the use of metacognitive knowledge, which consists of three types: declarative, procedural, and conditional. The teachers' ability to plan, monitor, and evaluate the teaching process was also determined. The study was conducted on 377 graduate teachers in the Klang Valley, Malaysia, selected by the stratified sampling method. The instrument was a 24-item metacognitive teaching inventory, InKePMG (Teacher Indicators of Effectiveness Meta-Teaching). The results showed high mean scores for two components of metacognitive knowledge, declarative (mean = 4.16) and conditional (mean = 4.11), whereas the mean for procedural knowledge was moderately high (4.00). Similarly, among the regulation skills, monitoring (mean = 4.11) and evaluating (mean = 4.00) scored high, while planning (mean = 4.00) scored moderately high among the teachers. In conclusion, this study shows that planning and procedural knowledge are important elements in improving the quality of teaching in the classroom. The researchers therefore recommend that further studies focus on training programs for teachers in metacognitive skills and on developing creative thinking among teachers.

Keywords: Metacognitive thinking skills, procedural knowledge, conditional knowledge, declarative knowledge, meta-teaching, regulation of cognition.

300 Triple Intercell Bar for Electrometallurgical Processes: A Design to Increase PV Energy Utilization

Authors: Eduardo P. Wiechmann, Jorge A. Henríquez, Pablo E. Aqueveque, Luis G. Muñoz

Abstract:

PV energy prices are declining rapidly. To take advantage of those prices and lower the carbon footprint, operational practices must be modified. This undoubtedly challenges the electrowinning practice of operating at constant current throughout the day. This work presents a technology that contributes modulation capacity to the electrode current distribution system, raising the daytime DC current and lowering it at night. The system is a triple intercell bar that operates in current-source mode. The design is a capping-board-free dogbone type of bar that ensures operation free of short circuits, allows hot-swappable repairs, and improves current balance. This current-source system eliminates the resetting currents circulating in equipotential bars. Twin auxiliary connectors are added to the main connectors, providing secure current paths to bypass faulty or impaired contacts. All conductive elements of the system are positioned over a baseboard, offering a large heat-sink area to the facility ventilation; the system therefore runs at a lower temperature than a conventional busbar. Of these attributes, the cathode current balance stands out as paramount for day/night modulation and the use of photovoltaic energy. A design based on a 3D finite element method model predicting electric and thermal performance under various industrial scenarios is presented, together with preliminary results obtained with industrial prototypes in an electrowinning facility.

Keywords: Electrowinning, intercell bars, PV energy, current modulation.

299 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy

Authors: P. Selva, B. Lorrain, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard

Abstract:

Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and performances of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.

Keywords: Cruciform specimen, multiaxial fatigue, Nickel-based superalloy.

298 In-situ Chemical Oxidation of Residual TCE by Permanganate in Epikarst

Authors: Nihat Hakan Akyol, Irfan Yolcubal

Abstract:

In-situ chemical oxidation (ISCO) has been widely used for source zone remediation of Dense Nonaqueous Phase Liquids (DNAPLs) in subsurface environments. DNAPL source zones in karst aquifers are generally located in the epikarst, where the DNAPL mass is trapped either in the karst soil or at the regolith contact with the carbonate bedrock. This study investigates the performance of potassium permanganate in oxidizing residual trichloroethylene (TCE) found in such environments. Batch and flow-cell experiments were conducted to determine the kinetics and the mass removal rate of TCE. pH change, Cl⁻ production, and TCE and MnO₄⁻ depletion were monitored routinely during the experiments. Nonreactive tracer tests were also conducted prior to and after the oxidation process to determine the influence of oxidation on flow conditions. The results show that the oxidant consumption rate of the calcareous epikarst soil was significant, with an oxidant demand of 20 g KMnO₄/kg soil. The oxidation rate of residual TCE (1.26×10⁻³ s⁻¹) exceeded the oxidant consumption rate of the soil (2.54–2.92×10⁻⁴ s⁻¹) only at high oxidant concentrations (> 40 mM KMnO₄). The half-life of TCE oxidation ranged from 7.9 to 10.7 min. Although a highly significant fraction of the residual TCE mass in the system was destroyed by permanganate oxidation, the TCE concentration in the effluent remained above its MCL. Flow interruption tests indicate that the efficiency of ISCO was limited by the rate of TCE dissolution and the rate-limited desorption of TCE. The residence time and the initial concentration of the oxidant in the source zone also controlled the efficiency of ISCO in the epikarst.
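
Since the reported rate constants are first-order, the quoted half-lives can be checked directly from t½ = ln 2 / k. The short Python sketch below is an illustrative calculation based on the abstract's rate constants, not part of the study itself:

```python
import math

k_tce = 1.26e-3                  # TCE oxidation rate constant, 1/s (from the abstract)
k_soil = (2.54e-4, 2.92e-4)      # soil oxidant consumption range, 1/s

# First-order half-life: t_1/2 = ln(2) / k
print(f"TCE oxidation half-life: {math.log(2) / k_tce / 60:.1f} min")   # ~9.2 min,
# which falls inside the reported 7.9-10.7 min range
for k in k_soil:
    print(f"soil oxidant consumption half-life: {math.log(2) / k / 60:.1f} min")
```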

Keywords: Epikarst, in-situ chemical oxidation, permanganate.

297 Job Satisfaction of Midwives Working in Labor Ward of the Lady Dufferin Hospital: A Cross-Sectional Study

Authors: B. Muhammadani

Abstract:

The health workforce is a fundamental component of a health system and plays a significant role in delivering effective health care services. However, a crucial shortage of skilled personnel makes staff prone to working in stressful conditions. In spite of excessively high workload and burnout among the staff, little attention is given to their job satisfaction, which has serious implications for productivity and for the staff's effective performance in achieving organizational goals. Therefore, this study explores the job satisfaction of midwives working in the labor ward of the Lady Dufferin Hospital, Karachi. A cross-sectional survey was conducted: the short version of the Minnesota Job Satisfaction Questionnaire was administered to a convenience sample of 22 midwives to gather information on their job satisfaction. The results demonstrated that the midwives were overall satisfied with their job, although the level of satisfaction differed across positions within the midwifery cadre. The head of midwives was highly satisfied compared with the midwifery staff working under her supervision, while the satisfaction of the team leaders fell between that of the head and the staff. Similar trends were observed for both intrinsic and extrinsic job satisfaction. Evidence on these issues is essential and useful, as it helps explore the attitudes of individuals towards work, which has direct implications for access to quality care services. Strategic interventions are required at the organizational level to provide motivators and satisfiers to health workers for work-related satisfaction and enhanced motivation.

Keywords: Health workforce, job satisfaction, motivation, workload, burnout, midwives, health system.

296 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction

Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz

Abstract:

This paper proposes new enhancements to nonlinear anisotropic diffusion methods that greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge-enhancing (EE) and coherence-enhancing (CE) diffusion. The new enhancement methods were tested on various ultrasound images, including phantom and clinical images, to determine the amount of speckle reduction, edge enhancement, and coherence enhancement. Scatterer density weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), the adaptive weighted median filter (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
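
As a rough illustration of weighting the diffusivity by a local image property in addition to the gradient, the sketch below runs a scalar Perona-Malik-style diffusion whose edge-stopping function is damped by a supplied scatterer-density map. It is a simplified stand-in for the paper's tensor-based EE/CE formulations, and the exact density-weighting form is an assumption:

```python
import numpy as np

def diffusivity(grad_mag, density, k=0.02, alpha=1.0):
    """Edge-stopping function damped by a local scatterer-density map.

    Sketch of the idea only: diffusion is suppressed where estimated
    scatterer density is high (features) and strong where it is low
    (fully developed speckle)."""
    g_edge = np.exp(-(grad_mag / k) ** 2)    # classic Perona-Malik stopping term
    g_density = np.exp(-alpha * density)     # hypothetical density damping
    return g_edge * g_density

def sdw_diffusion(img, density, n_iter=50, dt=0.15, k=0.02):
    u = img.astype(float).copy()             # img assumed scaled to [0, 1]
    for _ in range(n_iter):
        # differences toward the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (diffusivity(np.abs(dn), density, k) * dn +
                   diffusivity(np.abs(ds), density, k) * ds +
                   diffusivity(np.abs(de), density, k) * de +
                   diffusivity(np.abs(dw), density, k) * dw)
    return u

img = np.random.rand(64, 64)        # stand-in for a log-compressed ultrasound image
density = np.random.rand(64, 64)    # stand-in for an estimated scatterer-density map
out = sdw_diffusion(img, density)
```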

Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.

295 Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub-characteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub-characteristics. Random numbers are generated to establish the decision maker's importance weights for each sub-characteristic, and again to establish the decision matrix of the decision maker's final scores for each software product against each sub-characteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were applied to the multiple criteria decision analysis method of preference analysis for reference ideal solution (PARIS), followed by a comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
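
The random generation of importance weights and decision-matrix scores described above can be sketched as follows; the sub-characteristic count and the final weighted-sum aggregation are illustrative placeholders (the paper's PARIS method ranks alternatives by distance to a reference ideal solution instead):

```python
import numpy as np

rng = np.random.default_rng(42)
n_subchars, n_products = 31, 5   # counts chosen for illustration only

# Random importance weights, normalised to sum to 1 (as the abstract describes)
w = rng.random(n_subchars)
w /= w.sum()

# Random decision matrix: each product's score against each sub-characteristic
scores = rng.uniform(1, 10, size=(n_products, n_subchars))

# Simple weighted-sum aggregation as a stand-in to show the data flow;
# PARIS itself uses reference-ideal distances rather than this sum
overall = scores @ w
ranking = np.argsort(-overall)
print("product ranking (best first):", ranking)
```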

Keywords: ISO/IEC 25010 quality model, multiple criteria decision making, multiple criteria decision making analysis, MCDMA, PARIS, software product quality evaluation model, software product quality evaluation, software evaluation, software selection, software.

294 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view knowledge at different levels of detail, and to focus attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR is a standard production rule augmented with generality and specificity information, of the form: Decision If <condition>, Generality <general information>, Specificity <specific information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding, and, based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
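
For reference, rule quantification with Dempster-Shafer theory rests on Dempster's rule of combination; a minimal sketch over mass functions keyed by sets of hypotheses is given below (the hypothesis names and masses are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses: intersect focal elements,
    accumulate conflict, and renormalise by (1 - conflict)."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# toy evidence from two sources (illustrative values)
m1 = {frozenset({"flu"}): 0.6, frozenset({"flu", "cold"}): 0.4}
m2 = {frozenset({"cold"}): 0.3, frozenset({"flu", "cold"}): 0.7}
print(dempster_combine(m1, m2))
```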

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

293 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead; energy use can thus be reduced by improving cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air, but liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger, and full immersion of the microelectronics. This study quantifies the improvements in heat transfer specifically for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the fixed enclosure filled with dielectric liquid, and forced convection of the water that is pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink at the bottom of the microelectronics enclosure.

Keywords: CPU location, data centre cooling, heat sink in enclosures, Immersed microelectronics, turbulent natural convection in enclosures.

292 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises

Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska

Abstract:

Research background: Problems related to occupational health and a decreasing level of safety occur commonly in the construction industry. An important factor in occupational safety in the construction industry is scaffold use: all scaffolds used in construction, renovation, and demolition shall be erected, dismantled, and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. Therefore, it is crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the way the assessment is done, and its reliability, are critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking risk-assessment decisions, may show a lack of reliability in checking the results of the decisions taken. Purpose of the article: The aim was to indicate crucial parameters that could be modeled with a Risk Assessment Model (RAM) to improve building enterprise productivity and/or development potential and safety climate. The developed RAM could be of benefit for predicting high-risk construction activities and thus preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, within which over 450 individual research cycles were carried out. The research trials included variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in staff behaviors and habits. A genetic modeling tool was used to develop the RAM. Findings and value added: Common types of trades, accidents, and accident causes were explored, in addition to suitable risk assessment methods and criteria. We found that the initial worker stress level is a more direct predictor of the unsafe chain leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.

Keywords: Civil engineering, occupational health, productivity, safety climate.

291 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh

Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter

Abstract:

The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, generating great concern due to widespread environmental degradation. Energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change, and its recent increasing trend is elevating the temperature profile of built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the Urban Heat Island (UHI) effect in developing cities around the world; increasing the amount of urban vegetation cover can be a useful solution for reducing UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center undergoing rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI by type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information System (GIS), remote sensing (RS), and regression analysis. A land cover map is prepared through interactive supervised classification of remotely sensed data from a Landsat ETM+ image, along with NDVI differencing, using ArcGIS; LST and NDVI values are extracted from the same image. The regression analysis between LST and NDVI indicates that, within the study area, UHI is directly correlated with LST and negatively correlated with NDVI: surface temperature decreases with increasing vegetation cover, along with a reduction in UHI intensity. Moreover, there are noticeable differences in the LST-NDVI relationship depending on the type of LULC; in other words, depending on the type of land use, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions, as well as suggesting suitable actions for mitigating UHI intensity within the study area.
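
The LST-NDVI regression step can be illustrated in a few lines of Python; the sample values below are hypothetical stand-ins for pixel values extracted from a Landsat ETM+ image:

```python
import numpy as np

# Hypothetical per-pixel samples (NDVI, LST in deg C)
ndvi = np.array([0.05, 0.12, 0.20, 0.35, 0.48, 0.60, 0.72])
lst = np.array([38.1, 36.4, 34.9, 32.2, 30.5, 28.8, 27.1])

# Ordinary least squares fit: LST = a * NDVI + b
a, b = np.polyfit(ndvi, lst, 1)
r = np.corrcoef(ndvi, lst)[0, 1]
print(f"slope={a:.2f} degC per NDVI unit, intercept={b:.2f}, r={r:.3f}")
# A negative slope (a < 0) reproduces the reported inverse LST-NDVI relation
```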

Keywords: Land cover change, land surface temperature, normalized difference vegetation index, urban heat island.

290 Resolving a Piping Vibration Problem by Installing Viscous Damper Supports

Authors: Carlos Herrera Sierralta, Husain M. Muslim, Meshal T. Alsaiari, Daniel Fischer

Abstract:

The vast majority of piping vibration problems in the oil and gas industry are provoked by the process flow characteristics, which are basically related to the fluid properties, the type of service, and its different operational scenarios. In general, the corrective actions recommended for flow-induced vibration in piping systems can be grouped into two major areas: those which affect the excitation mechanisms, typically associated with process variables, and those which affect the response mechanism of the pipework itself. Where possible, the first option is to solve the flow-induced problem from the excitation mechanism perspective. However, in producing facilities the approach of changing process parameters might not always be convenient, as it could reduce production rates or require a shutdown of the system. That impediment might lead to the second option, which is to modify the response of the piping system to the excitation generated by the process flow. In principle, shifting the natural frequency of the system well above the frequency inherent to the process always favours eliminating, or at least considerably reducing, the level of vibration experienced by the piping system. Tightening up the clearances at the supports (ideally to zero gap) and adding new static supports are typical ways of increasing the natural frequency of a piping system. However, stiffening the piping system alone may not be sufficient to resolve the vibration problem, and in some cases it might not be feasible at all, as the available piping layout can limit the addition of supports due to thermal expansion/contraction requirements. In these cases, viscous damper supports can be recommended, as these devices allow relatively large quasi-static movement of the piping while providing sufficient capability to dissipate the vibration. Therefore, when correctly selected and installed, viscous damper supports can significantly affect the response of the piping system over a wide range of frequencies. Note, however, that viscous dampers cannot be used to support sustained, static loads. Using a real case example, this paper presents a methodology for selecting viscous damper supports via a dynamic analysis model. By implementing this methodology, piping vibration problems can be resolved by adding new viscous damper supports to the system, and the same methodology can be used to resolve similar vibration issues.

Keywords: Dynamic analysis, flow-induced vibration, piping supports, turbulent flow, slug flow, viscous damper.

289 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities

Authors: Yu-Cheng Chou, Po Ting Lin

Abstract:

The network delivery of commodities is an important design problem in our daily lives and in many transportation applications. Delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network; the system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; and (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability of the MSFN with respect to randomly distributed path attributes and to find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated, and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It is shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method; comparisons between the original and the reduced networks show that the proposed method is very efficient.

Keywords: Multiple commodities, multi-state flow network (MSFN), time constraints, worst-case reliability (WCR).

288 Impact of Interface Soil Layer on Groundwater Aquifer Behaviour

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

The geological environment from which groundwater is extracted is the most important element affecting the behaviour of a groundwater aquifer. As groundwater is a vital resource worldwide, the parameters that affect this source must be known accurately so that the conceptualized mathematical models are acceptable over the broadest ranges. Groundwater models have therefore recently become an effective and efficient tool to investigate groundwater aquifer behaviour. A groundwater aquifer may contain aquitards, aquicludes, or interfaces within its geological formations. Aquitards and aquicludes have geological formations that force modellers to include them within conceptualized groundwater models, while interfaces are commonly neglected from the conceptualization process because modellers believe the interface has no effect on aquifer behaviour. The current research highlights the impact of an interface existing in a real unconfined groundwater aquifer called Dibdibba, located in Al-Najaf City, Iraq, where the Euphrates River passes through the eastern part of the city. The Dibdibba groundwater aquifer consists of two types of soil layers separated by an interface soil layer. A groundwater model was built for Al-Najaf City to explore the impact of this interface, and calibration was performed using the PEST 'Parameter ESTimation' approach to obtain the best Dibdibba groundwater model. When the soil interface is conceptualized, the results show that the groundwater tables are significantly affected by the interface, with dry areas of 56.24 km² and 6.16 km² appearing in the upper and lower layers of the aquifer, respectively, and the Euphrates River leaking 7359 m³/day into the groundwater aquifer. When the soil interface is neglected, these results change: the dry area becomes 0.16 km² and the river leakage 6334 m³/day. In addition, the conceptualized models (with and without the interface) respond differently to changes in the recharge rates applied to the aquifer in the uncertainty analysis test. The Dibdibba aquifer shows a slight deficit in the amount of water supplied under the current pumping scheme, and the Euphrates River suffers from the stresses applied to the aquifer. Ultimately, this study shows a crucial need to represent the interface soil layer in model conceptualization so that the predicted current and future behaviours are reliable enough for decision purposes.

Keywords: Al-Najaf City, groundwater aquifer behaviour, groundwater modelling, interface soil layer, Visual MODFLOW.

287 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In the first phase of both procedures, a local minimum of the function is found using the gradient approach, coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary of, and within, a hypercube volume encompassing the result from the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search continues and is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those of the popular MS Excel Solver provided within the MS Office suite and with other published results.
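
A minimal sketch of the second-phase 3^n enumeration is shown below; the function names, signature, and toy problem are illustrative, not the authors' implementation:

```python
import itertools
import numpy as np

def enumerate_3n(x_local, step, objective, feasible):
    """Examine the 3^n integer combinations around a continuous local
    minimum: each coordinate is shifted down, kept, or shifted up."""
    best_x, best_f = None, float("inf")
    for offsets in itertools.product((-1, 0, 1), repeat=len(x_local)):
        cand = np.floor(x_local) + np.array(offsets) * step
        if feasible(cand):
            f = objective(cand)
            if f < best_f:
                best_x, best_f = cand, f
    return best_x, best_f

# toy problem: minimise (x-2.3)^2 + (y-3.7)^2 subject to x + y <= 7
obj = lambda v: (v[0] - 2.3) ** 2 + (v[1] - 3.7) ** 2
feas = lambda v: v[0] + v[1] <= 7
print(enumerate_3n(np.array([2.3, 3.7]), 1, obj, feas))  # -> integer optimum (2, 4)
```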

Keywords: Constrained integer problems, enumerative search algorithm, heuristic algorithm, tunneling algorithm.

286 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets; the best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.

Keywords: Lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record.

285 Overview Studies of High Strength Self-Consolidating Concrete

Authors: Raya Harkouss, Bilal Hamad

Abstract:

Self-Consolidating Concrete (SCC) is considered a relatively new technology, created as an effective solution to problems associated with low-quality consolidation. A SCC mix is defined as successful if it flows freely and cohesively without the intervention of mechanical compaction. The construction industry shows a strong tendency to use SCC in many contemporary projects to benefit from the various advantages offered by this technology.

At this point, a main question is raised regarding the effect of enhanced fluidity of SCC on the structural behavior of high strength self-consolidating reinforced concrete.

A three-phase research program was conducted at the American University of Beirut (AUB) to address this concern. The first two phases consisted of comparative studies on concrete and mortar mixes prepared with a second-generation Sulphonated Naphthalene-based superplasticizer (SNF) or a third-generation Polycarboxylate Ether-based superplasticizer (PCE). The third phase of the research program investigates and compares the structural performance of high-strength reinforced concrete beam specimens prepared with the two generations of superplasticizers, which formed the only variable between the concrete mixes. The beams were designed to test and exhibit flexure, shear, or bond splitting failure.

The outcomes of the experimental work revealed comparable resistance of beam specimens cast using self-consolidating concrete and conventional vibrated concrete (VC). The dissimilarities in the experimental values between the SCC and the control VC beams were minimal, leading to the conclusion that the high consistency of SCC has little effect on the flexural, shear, and bond strengths of concrete members.

Keywords: Self-consolidating concrete (SCC), high-strength concrete, concrete admixtures, mechanical properties of hardened SCC, structural behavior of reinforced concrete beams.

284 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center, and different machine learning models were applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed, and a temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) prediction of mortality serves as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, due to the sparsity of the temporal information it needs; however, FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.

Keywords: Machine learning, liver cirrhosis, subgraph mining, supervised learning.

283 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions, and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that allows lighting-invariant face recognition. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented: smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
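
The criterion function used for ordering feature sets, the ratio of between-class to within-class variance, can be sketched as follows (data and dimensions are invented for illustration):

```python
import numpy as np

def fisher_criterion(features, labels):
    """Per-feature ratio of between-class to within-class variance,
    used here to rank features by discriminating capability."""
    classes = np.unique(labels)
    grand_mean = features.mean(axis=0)
    s_b = np.zeros(features.shape[1])
    s_w = np.zeros(features.shape[1])
    for c in classes:
        fc = features[labels == c]
        s_b += len(fc) * (fc.mean(axis=0) - grand_mean) ** 2
        s_w += ((fc - fc.mean(axis=0)) ** 2).sum(axis=0)
    return s_b / (s_w + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = np.repeat([0, 1, 2], 20)
X[y == 1, 0] += 3.0           # make feature 0 strongly discriminative
order = np.argsort(-fisher_criterion(X, y))
print("features by decreasing discriminating capability:", order)
```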

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

282 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication

Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa

Abstract:

Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments, and their main effective factor is the hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks; other studies show that signing the last packet reduces the sender's delay. To our knowledge, no study has shown which is better in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of receiver delay. We discuss and evaluate the performance of our proposed scheme against schemes that sign the last packet of the block.
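
To make the amortization idea concrete, the sketch below signs only the first packet of a block whose packets are linked by a single backward hash chain. It is a simplified stand-in for MCF's multiple connected chains, and it assumes the third-party `cryptography` package for the Ed25519 signature:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_block(payloads):
    """Build the hash chain back-to-front: each packet carries the hash of
    its successor, so the first packet's hash covers the whole block and
    one signature over it amortizes across all packets."""
    chained, next_hash = [], b""
    for p in reversed([bytes(p) for p in payloads]):
        pkt = p + next_hash
        next_hash = hashlib.sha256(pkt).digest()
        chained.append(pkt)
    chained.reverse()
    key = Ed25519PrivateKey.generate()
    signature = key.sign(next_hash)   # single signature for the block
    return key.public_key(), signature, chained

pub, sig, block = make_block([b"pkt%d" % i for i in range(4)])
pub.verify(sig, hashlib.sha256(block[0]).digest())  # raises if tampered
```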

Keywords: Multicast stream authentication, hash chain construction, signature amortization, authentication probability.

281 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm

Authors: Yesubai Rubavathi Charles, Ravi Ramraj

Abstract:

In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming is proposed. An opponent color histogram, which provides invariance to shadow, shading, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current concern in image retrieval is reducing the semantic gap between the user's preference and low-level features; to address it, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve the user's preferred images. Extensive comparative experiments evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses existing systems in terms of precision and recall, achieving an average precision of 88.2% on COIL-100 and 76.3% on Corel-1000, and an average recall of 69.9% on COIL-100 and 76.3% on Corel-1000. Thus, the experimental results confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
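
A minimal sketch of the opponent color histogram extraction is given below; it uses the standard O1/O2/O3 transform, and the bin count is an assumption, since the abstract does not fix one:

```python
import numpy as np

def opponent_histogram(rgb, bins=8):
    """Opponent colour transform followed by a joint 3D histogram;
    the binning is illustrative, not the paper's exact choice."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    hist, _ = np.histogramdd(
        np.stack([o1, o2, o3], axis=-1).reshape(-1, 3), bins=bins)
    return hist / hist.sum()   # normalised colour feature

img = np.random.randint(0, 256, (64, 64, 3))   # stand-in for a database image
h = opponent_histogram(img)
print(h.shape)  # (8, 8, 8)
```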

Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.

280 The Effect of Cow Reproductive Traits on Lifetime Productivity and Longevity

Authors: Lāsma Cielava, Daina Jonkus, Līga Paura

Abstract:

The age at first calving (AFC) is one of the most important factors with a significant impact on cow productivity in different lactations and over the whole life. A belated AFC leads to reduced reproductive performance and is one of the main reasons for reduced longevity. Cows that calved in the period 2001-2007 and finished at least four lactations in this time were included in the database. Data were obtained from 68841 crossbred Holstein Black and White (HM), crossbred Latvian Brown (LB), and Latvian Brown genetic resources (LBGR) cows, distributed into four groups depending on age at first calving. The longest lifespan was found for LBGR cows, but they were also characterized by the lowest lifetime milk yield and milk yield per life day. HM breed cows had the shortest lifespan, but over a lifespan of 2862.2 days they produced on average 37916.4 kg of milk, corresponding to 13.2 kg of milk per life day. HM breed cows were also characterized by longer calving intervals (CI) in the first four lactations, whereas LBGR cows had the shortest CI in the study group. Age at first calving significantly affected the length of the CI in different lactations (p<0.05). HM cows that first calved at an age >30 months had, in the fourth lactation, the longest CI of all study groups (421.4 days). The LBGR cows were characterized by the shortest CI, with a slight increase in the second and third lactations. Age at first calving also had a significant impact on the cows' age at each calving: cows with an age at first calving <24 months (on average 580.5 days) were 2156.7 days (5.9 years) old at the fifth calving, while cows with an age at first calving >30 months (932.6 days) were 2560.9 days (7.3 years) old at the fifth calving.

Keywords: Age at first calving, calving interval, longevity, milk yield.

279 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication, implemented for up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm, and the mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly lower area-delay product (ADP) than the existing block implementation, and by adopting the retiming technique the hardware cost is reduced further. The filter architecture is designed with a 90 nm technology library and implemented with the Cadence EDA tool. The synthesized results show better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, evaluated both with and without retiming for different combinations of the circuit; the proposed structure achieves more than half the power reduction, with and without retiming, compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and up to 70% less power by applying the retiming technique, and the CSA method achieves 57% and up to 77% less power, compared to the previously proposed design.
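
The recursion underlying all of these multiplier variants is classic Karatsuba, which replaces four half-width multiplications with three; a software sketch of the recursion (not the paper's hardware architecture) is shown below:

```python
def karatsuba(x, y, n_bits):
    """Classic recursive Karatsuba multiplication: three half-size
    multiplies (high*high, low*low, and one cross term) instead of four."""
    if n_bits <= 8:                     # base case: direct multiply
        return x * y
    half = n_bits // 2
    mask = (1 << half) - 1
    xh, xl = x >> half, x & mask
    yh, yl = y >> half, y & mask
    p_hh = karatsuba(xh, yh, half)
    p_ll = karatsuba(xl, yl, half)
    # (xh+xl)(yh+yl) - p_hh - p_ll recovers the two middle partial products
    p_mid = karatsuba(xh + xl, yh + yl, half + 1) - p_hh - p_ll
    return (p_hh << (2 * half)) + (p_mid << half) + p_ll

assert karatsuba(0xDEAD, 0xBEEF, 16) == 0xDEAD * 0xBEEF
```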

Keywords: Carry save adder, Karatsuba multiplication, mid-range Karatsuba multiplication, modified FFA, transposed filter, retiming.

278 An Autonomous Collaborative Forecasting System Implementation – The First Step towards Successful CPFR System

Authors: Chi-Fang Huang, Yun-Shiow Chen, Yun-Kung Chung

Abstract:

In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial efficiency and effectiveness improvements in industry and business. In this paper, Back-Propagation neural Networks (BPNs) are combined in modules to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances sufficient product supply against the necessary customer demand in a Supply and Demand Chain (SDC). Several classical standard BPNs are grouped and exploited for the easy implementation of the proposed modular ANN framework, based on the topology of an SDC. Each individual BPN is applied as a modular tool to forecast the SKU (Stock-Keeping Unit) levels managed and supervised at a POS (point of sale), a wholesaler, and a manufacturer in the SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using numerous datasets from the simulated SDC. The experimental results showed that a complex CF problem can be divided into a group of simpler sub-problems based on the individual trading partners distributed over the SDC, and that the SKU forecasting accuracy was satisfactory when the system's forecasts were compared to the original simulated SDC data. The primary task of implementing autonomous CF involves the study of supervised ANN learning methodology, which aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.

Keywords: CPFR, artificial neural networks, global logistics, supply and demand chain.

277 Research on the Aeration Systems’ Efficiency of a Lab-Scale Wastewater Treatment Plant

Authors: Oliver Marunțălu, Elena Elisabeta Manea, Lăcrămioara Diana Robescu, Mihai Necșoiu, Gheorghe Lăzăroiu, Dana Andreya Bondrea

Abstract:

In order to obtain efficient pollutant removal in small-scale wastewater treatment plants, uniform water flow has to be achieved. The experimental setup, designed for treating high-load wastewater (leachate), consists of two aerobic biological reactors and a lamellar settler. Both biological tanks were aerated using three different types of aeration systems: perforated pipes, membrane air diffusers, and ceramic tube diffusers. The possibility of homogenizing the water mass with each of the air diffusion systems was evaluated comparatively. The oxygen concentration was determined by optical sensors with data logging, and the experimental data were analyzed comparatively for all three air dispersion systems, aiming to identify the oxygen concentration variation under different operational conditions. The oxygenation capacity was calculated for each of the three systems and used as a performance and selection parameter, and the global mass transfer coefficients were also evaluated as important tools in designing the aeration system. Even though the tubular porous diffusers lead to a higher oxygen concentration than the perforated pipe system (which produces medium-sized bubbles in the aqueous solution), they do not reach the threshold of 80% oxygen saturation in less than 30 minutes. The study has shown that the optimal solution for the studied configuration was the radial air diffusers, which ensure 80% oxygen saturation in 20 minutes. An increase in the values was identified when the air flow was increased.
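
The re-oxygenation behaviour behind these numbers follows the usual clean-water model C(t)/Cs = 1 − exp(−kLa·t), so a global mass-transfer coefficient can be back-calculated from the "80% saturation in 20 minutes" figure; the sketch below is an illustrative calculation, not the authors' procedure:

```python
import math

def time_to_saturation(kla_per_min, fraction=0.8):
    """Solve C(t)/Cs = 1 - exp(-kLa*t) for the time to reach a given
    saturation fraction."""
    return -math.log(1 - fraction) / kla_per_min

# Implied global mass-transfer coefficient for the radial diffusers,
# back-calculated from 80% saturation in 20 minutes
kla = -math.log(1 - 0.8) / 20          # ~0.080 1/min
print(f"implied kLa = {kla:.3f} 1/min")
print(f"time to 95% saturation at this kLa: {time_to_saturation(kla, 0.95):.1f} min")
```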

Keywords: Flow, aeration, bioreactor, oxygen concentration.

276 Analysis of Supply Side Factors Affecting Bank Financing of Non-Oil Exports in Nigeria

Authors: Sama’ila Idi Ningi, Abubakar Yusuf Dutse

Abstract:

The banking sector poses many problems in Nigeria in general and for the non-oil export sector in particular. The banks' lack of effectiveness in handling small, medium, or long-term credit risk (lack of training of loan officers, lack of information on borrowers, and absence of a reliable credit registry) results in non-oil exporters being burdened with high requirements, such as up to three years of financial statements, enough collateral to cover both the loan principal and interest (including a cash deposit that may be up to 30% of the loan's net present value), and provision of every detail of the international trade transaction in question. These problems triggered this research. Consequently, information on bank financing of non-oil exports was collected from 100 respondents from the 20 Deposit Money Banks (DMBs) in Nigeria, and the data were analysed using descriptive statistics, correlation, and regression. It was found that Nigerian banks participate in the financing of non-oil exports. Despite their participation, the interest rate for credit extended to non-oil exports is usually high, ranging between 15-20%. Small and medium-sized non-oil export businesses lack the credit history for banks to judge them as reputable, and banks consider the non-oil export sector very risky for investment. The banks actually grant less credit than the exporters require, leaving them underfunded. Banks also grant a very low volume of foreign currency loans, and the unfavourable exchange rate of the Naira against the Dollar and other currencies makes the importation of inputs costly, which negatively impacts non-oil export performance in Nigeria.

Keywords: Supply Side Factors, Bank Financing, Non-Oil Exports.

275 An Evaluation of the Feasibility of Several Industrial Wastes and Natural Materials as Precursors for the Production of Alkali Activated Materials

Authors: O. Alelweet, S. Pavia

Abstract:

In order to face the compelling environmental problems currently affecting the planet, the construction industry needs to adapt. It is widely acknowledged that there is a need for durable, high-performance, low-greenhouse-gas-emission binders that can be used as an alternative to Portland cement (PC) to lower the environmental impact of construction. Alkali-activated materials (AAMs) are considered a more sustainable alternative to PC materials. The binders of AAMs result from the reaction of an alkali metal source with a silicate powder, or precursor, which can be a calcium silicate or an aluminosilicate-rich material. This paper evaluates the particle size, specific surface area, chemical and mineral composition, and amorphousness of silicate materials (mostly industrial wastes produced locally in Ireland and Saudi Arabia) in order to develop alkali-activated binders that can replace PC resources in specific applications. These materials include recycled ceramic brick, bauxite, illitic clay, fly ash, and metallurgical slag. According to the results, the wastes are reactive and comply with building standard requirements. The study also evidenced that the reactivity of the Saudi bauxite (which contains significant kaolinite) can be enhanced by thermal activation, and that the high calcium content of the slag will promote reaction, which should be possible with low-alkalinity activators. The wastes evidenced variable water demands, which will be taken into account when mixing with the activators. Finally, further research is proposed to determine the reactive fraction of the clay-based precursors.

Keywords: Reactivity, water demand, alkali-activated materials, brick, bauxite, illitic clay, fly ash, slag.

274 Online Information Seeking: A Review of the Literature in the Health Domain

Authors: Sharifah Sumayyah Engku Alwi, Masrah Azrifah Azmi Murad

Abstract:

The development of information technology and the Internet has been transforming the healthcare industry. The Internet is continuously accessed to seek health information from a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality and trusted health information poses a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate and up to date. Thus, this paper aims to provide an insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept of information seeking. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. The study also aims to analyze the effect of age, gender, and health status as moderators of the factors that influence online health information seeking, i.e., trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world, at the same time producing a conceptual framework. A final survey will then be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that, at the end of this study, a novel framework that can improve online health information seeking will be developed. Finally, this paper concludes with some suggestions on the models and frameworks that could improve online health information seeking.

Keywords: Information behavior, information seeking, online health information, technology acceptance model, the theory of planned behavior, UTAUT.

273 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or multiple Graphics Processing Units (multi-GPU clusters); for example, most attempts to implement the classical conjugate gradient method at best maintained the same solution time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and the other GPUs for the matrix-vector product, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communications between the parallel parts of the algorithms; additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU allows scalability when solving large linear systems. The algorithm is implemented with the combined use of the MPI, OpenMP, and CUDA technologies. We show that an almost optimal speed-up on 8 CPUs/2 GPUs may be reached (relative to a one-GPU execution), and the parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, compared to one GPU.
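
The single-synchronization structure of pipelined CG is visible even in a serial sketch of its recurrences: the two dot products and the matrix-vector product of each iteration are adjacent and mutually independent, so on a cluster they can be fused into one reduction and overlapped. The NumPy version below is unpreconditioned and single-node, purely to show the recurrences:

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-10, max_iter=500):
    """Unpreconditioned pipelined CG (Ghysels-Vanroose recurrences)."""
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z = s = p = np.zeros_like(b)
    alpha = gamma_old = 1.0
    for i in range(max_iter):
        gamma = r @ r          # | the two reductions and the mat-vec below
        delta = w @ r          # | are independent: on a cluster they form
        n = A @ w              # | the single fused synchronization point
        if i == 0:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = n + beta * z       # recurrence for A*(search direction)
        s = w + beta * s       # recurrence for A*p
        p = r + beta * p       # search direction
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z      # keeps w = A @ r without an extra mat-vec
        gamma_old = gamma
        if np.sqrt(gamma) < tol:
            break
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)            # symmetric positive definite test matrix
b = rng.normal(size=50)
x = pipelined_cg(A, b)
print("residual:", np.linalg.norm(b - A @ x))
```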

Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.

272 Applications of AUSM+ Scheme on Subsonic, Supersonic and Hypersonic Flows Fields

Authors: Muhammad Yamin Younis, Muhammad Amjad Sohail, Tawfiqur Rahman, Zaka Muhammad, Saifur Rahman Bakaul

Abstract:

The performance of Advection Upstream Splitting Method (AUSM) schemes is evaluated against experimental flow fields at different Mach numbers, and the results are compared with experimental data for subsonic, supersonic, and hypersonic flow fields. The turbulence model used here is the SST model by Menter. The numerical predictions include the lift, drag, and pitching moment coefficients at different Mach numbers and angles of attack. This work describes a computational study undertaken to compute the aerodynamic characteristics of different air vehicle configurations using a structured Navier-Stokes computational technique. The CFD code is based on an upwind scheme for the convective fluxes. CFD results for the GLC305 airfoil and a cone-cylinder tail-finned missile, calculated with the above-mentioned turbulence model, are compared with the available data. Wide ranges of Mach number, from subsonic to hypersonic speeds, are simulated and the results compared. When the computation uses the viscous turbulence model, the above-mentioned coefficients show very good agreement with the experimental values. The AUSM scheme is very efficient in regions of very high pressure gradients, such as shock waves and discontinuities, and the AUSM versions simulate all types of flows, from low subsonic to hypersonic, without oscillations.
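
The heart of AUSM+ is its polynomial splitting of the interface Mach number and pressure; a sketch of the standard split functions with their consistency checks is given below (the full flux assembly for the Euler equations is omitted):

```python
import numpy as np

ALPHA, BETA = 3.0 / 16.0, 1.0 / 8.0   # standard AUSM+ coefficients

def split_mach(M):
    """AUSM+ fourth-degree polynomial Mach splittings (Liou, 1996)."""
    if abs(M) >= 1.0:
        return 0.5 * (M + abs(M)), 0.5 * (M - abs(M))
    m_plus = 0.25 * (M + 1.0) ** 2 + BETA * (M * M - 1.0) ** 2
    m_minus = -0.25 * (M - 1.0) ** 2 - BETA * (M * M - 1.0) ** 2
    return m_plus, m_minus

def split_pressure(M):
    """Matching fifth-degree pressure splittings."""
    if abs(M) >= 1.0:
        return 0.5 * (1.0 + np.sign(M)), 0.5 * (1.0 - np.sign(M))
    p_plus = 0.25 * (M + 1.0) ** 2 * (2.0 - M) + ALPHA * M * (M * M - 1.0) ** 2
    p_minus = 0.25 * (M - 1.0) ** 2 * (2.0 + M) - ALPHA * M * (M * M - 1.0) ** 2
    return p_plus, p_minus

# consistency checks: the splittings must sum to M and to 1, respectively
for M in (-2.0, -0.5, 0.0, 0.3, 1.5):
    mp, mm = split_mach(M)
    pp, pm = split_pressure(M)
    assert abs(mp + mm - M) < 1e-12 and abs(pp + pm - 1.0) < 1e-12
```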

Keywords: Subsonic, supersonic, hypersonic, AUSM+, drag coefficient, lift coefficient, pitching moment coefficient, pressure coefficient, turbulent flow.
