Search results for: real time and embedded systems.

485 Feasibility Study for a Castor Oil Extraction Plant in South Africa

Authors: Mohamed Belaid, Edison Muzenda, Getrude Mitilene, Mansoor Mollagee

Abstract:

A feasibility study for the design and construction of a pilot plant for the extraction of castor oil in South Africa was conducted. The study emphasized the four critical aspects of project feasibility analysis, namely the technical, financial, market and managerial aspects. The technical aspect involved research on existing oil extraction technologies, namely mechanical pressing and solvent extraction, as well as assessment of the proposed production site for both short- and long-term viability of the project. The site is on the outskirts of Nkomazi village in the Mpumalanga province, where connections for water and electricity are currently underway. The potential raw material supply is considered reliable, since the province is known for its commercial farming. The managerial aspect was evaluated based on the fact that the current producer of castor oil will be fully involved in the project while receiving training and technical assistance from Sasol Technology, the TSC and SEDA. The market and financial aspects were evaluated, and the project was considered financially viable with a Net Present Value (NPV) of R2 731 687 and an Internal Rate of Return (IRR) of 18% at an annual interest rate of 10.5%. The payback time is 6 years for analysis over the first 10 years, with a net income of R1 971 000 in the first year. The project was thus found to be feasible with a high chance of success while contributing to socio-economic development. It was recommended that laboratory tests be conducted to establish the process kinetics that would be used in the initial design of the plant.
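
For readers unfamiliar with the financial indicators quoted above, the sketch below shows how NPV, IRR and payback time can be computed from a projected cash-flow series. The cash-flow figures (including the initial outlay) are hypothetical placeholders, not the project's actual data.

```python
# Minimal sketch of the feasibility indicators (NPV, IRR, payback time).
# The cash flows below are hypothetical placeholders, not the project's data.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return found by bisection on the NPV sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cash_flows):
    """First year in which the cumulative cash flow turns non-negative."""
    cumulative = 0.0
    for year, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return year
    return None  # not recovered within the horizon

flows = [-10_000_000] + [1_971_000] * 10  # hypothetical outlay + 10 years of income
print(npv(0.105, flows), irr(flows), payback_years(flows))
```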

Keywords: Mechanical pressing, Net Present Value, Oil extraction, Project feasibility, Solvent extraction

484 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography that reconstructs an image of human soft tissue using near-infrared light. It comprises two steps, a forward model and an inverse model. The forward model describes the light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, scattering coefficient and optical flux are processed with the standard regularization technique known as Levenberg-Marquardt regularization. Reconstruction algorithms such as the Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) methods are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE) and CPU time for reconstructing images are analyzed to assess performance.
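
The Levenberg-Marquardt regularization referred to above damps the Gauss-Newton update of a nonlinear least-squares fit. The sketch below shows a generic Levenberg-Marquardt iteration on a toy exponential-decay problem; it is only an orientation aid, since the DOT inverse problem would use the photon-transport forward model and its Jacobian instead.

```python
import numpy as np

# Generic Levenberg-Marquardt iteration on a toy nonlinear least-squares problem.
# The residual function is illustrative, not the DOT forward model.

def residuals(p, x, y):
    return y - p[0] * np.exp(-p[1] * x)

def jacobian(p, x):
    # Jacobian of the residuals with respect to the parameters p = [a, b].
    J = np.empty((x.size, 2))
    J[:, 0] = -np.exp(-p[1] * x)
    J[:, 1] = p[0] * x * np.exp(-p[1] * x)
    return J

def levenberg_marquardt(p, x, y, lam=1e-2, n_iter=50):
    for _ in range(n_iter):
        r = residuals(p, x, y)
        J = jacobian(p, x)
        # Damped normal equations: (J^T J + lam * I) delta = J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p_new = p - delta
        if np.sum(residuals(p_new, x, y) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.7   # accept the step, relax damping
        else:
            lam *= 2.0                   # reject the step, increase damping
    return p

x = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-1.3 * x) + 0.01 * np.random.default_rng(0).standard_normal(40)
print(levenberg_marquardt(np.array([1.0, 1.0]), x, y))
```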

Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.

483 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems especially under wet conditions and high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure.

The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.

Promising findings showed the possibility of using image analysis in lieu of friction and texture measurements, which are labor-intensive and inherently variable. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates provided solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with British Pendulum Number (BPN), Polish Value (PV) and Mean Texture Depth (MTD) values.
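
As an illustration of the kind of processing involved, the sketch below thresholds a grayscale surface image to estimate an exposed-aggregate area fraction and correlates it with friction readings. The threshold value, the synthetic images and the example BPN array are hypothetical, not the study's data or its exact image-analysis procedure.

```python
import numpy as np

# Illustrative sketch: estimate an exposed-aggregate area fraction from a grayscale
# surface image by intensity thresholding, then correlate it with friction readings.
# The threshold and the example arrays are hypothetical, not the study's data.

def exposed_aggregate_fraction(gray_image, threshold=0.6):
    """Fraction of pixels brighter than the threshold (assumed to be aggregate)."""
    return float(np.mean(gray_image > threshold))

rng = np.random.default_rng(1)
images = [rng.random((256, 256)) ** k for k in (1.0, 1.5, 2.0, 2.5, 3.0)]  # synthetic surfaces
area_fractions = np.array([exposed_aggregate_fraction(img) for img in images])

bpn_readings = np.array([62.0, 58.0, 55.0, 51.0, 48.0])  # hypothetical British Pendulum Numbers
correlation = np.corrcoef(area_fractions, bpn_readings)[0, 1]
print(area_fractions, correlation)
```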

Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.

482 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study on the West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being directed to the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on the prediction of wind speed, which by its nature is highly stochastic. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results showed a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing the signals in wind speed forecasting models.
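
To make the role of the vector components concrete, the sketch below fits a Gaussian process regressor on synthetic wind data with and without the u/v components as extra predictors and compares RMSE. It is a simplified single-fidelity stand-in for the deep multi-fidelity GPR and EWT pre-processing used in the study; the data, kernel and split are illustrative choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

# Simplified stand-in for the study's pipeline: compare GPR trained on time alone
# versus time plus wind vector components (u, v) as extra predictors. The data are
# synthetic; the study used RUNE measurements and a deep multi-fidelity GPR.

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 300)
u = 3 + np.sin(0.8 * t) + 0.2 * rng.standard_normal(t.size)
v = 1 + np.cos(0.5 * t) + 0.2 * rng.standard_normal(t.size)
speed = np.hypot(u, v)

train, test = slice(0, 240), slice(240, 300)
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)

for name, X in [("time only", t[:, None]), ("time + u,v", np.column_stack([t, u, v]))]:
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[train], speed[train])
    rmse = mean_squared_error(speed[test], gpr.predict(X[test])) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```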

Keywords: Data fusion, Gaussian process regression, signal denoising, temporal extrapolation.

481 An Analysis of Collapse Mechanism of Thin-Walled Circular Tubes Subjected to Bending

Authors: Somya Poonaya, Chawalit Thinvongpituk, Umphisak Teeboonma

Abstract:

Circular tubes have been widely used as structural members in engineering applications, and their collapse behavior has therefore been studied for many decades with a focus on energy absorption characteristics. In order to predict the collapse behavior of such members, one could rely on finite element codes or experiments. These tools are helpful and highly accurate, but costly and require extensive running time. An approximate model of the tube collapse mechanism is therefore an alternative for the early design stage. This paper aims to develop a closed-form solution for a thin-walled circular tube subjected to bending. It extends the model of Elchalakani et al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the rate of energy dissipation of the rolling hinge in the circumferential direction. The 3-D geometrical collapse mechanism was analyzed by adding oblique hinge lines along the longitudinal direction of the tube within the plastically deforming zone. The model is based on the principle of energy rate conservation: the rate of internal energy dissipation is calculated for each hinge line, defined in terms of the velocity field. Inextensional deformation and perfectly plastic material behavior were assumed in the derivation of the deformation energy rate. The analytical results were compared with experimental results obtained from a number of tubes with various D/t ratios, and good agreement between analysis and experiment was achieved.
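
In a hinge-line formulation of this kind, the energy-rate balance can be written in the generic form

$$ M\,\dot{\theta} \;=\; \sum_{k}\int_{L_k} m_p\,\dot{\theta}_k\,\mathrm{d}l, \qquad m_p = \frac{\sigma_0 t^2}{4}, $$

where $M$ is the applied bending moment, $\dot{\theta}$ the global rotation rate, $\dot{\theta}_k$ the relative rotation rate across hinge line $k$ of length $L_k$, $t$ the wall thickness and $\sigma_0$ the flow stress. This is only the generic statement of energy rate conservation for a hinge-line mechanism, not the paper's full expression, which additionally includes the rolling-hinge and oblique-hinge contributions.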

Keywords: Bending, Circular tube, Energy, Mechanism.

480 Modified Energy and Link Failure Recovery Routing Algorithm for Wireless Sensor Network

Authors: M. Jayekumar, V. Nagarajan

Abstract:

Wireless sensor networks find roles in environmental monitoring, industrial applications, surveillance, health monitoring and other supervisory applications. Sensing devices form the basic operational units of the network; they are battery powered with a limited lifetime. A sensor node spends its limited energy on transmission, reception, routing and sensing, and frequent energy utilization for these processes degrades the network lifetime. To enhance energy efficiency and network lifetime, we propose a modified energy optimization and post-failure node recovery method, the Energy-Link Failure Recovery Routing (E-LFRR) algorithm. In E-LFRR, two phases, namely a Monitored Transmission phase and a Replaced Transmission phase, are devised to combat worst-case link failure conditions. In the Monitored Transmission phase, the Actuator Node monitors and identifies suitable nodes for shortest-path transmission. The Replaced Transmission phase removes energy-draining nodes from the active link at an early stage and replaces them with new nodes that have sufficient energy. Simulation results illustrate that this combined methodology reduces overhead, energy consumption and delay, and maintains a considerable number of alive nodes, thereby enhancing network performance.
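
A minimal sketch of the replacement idea is given below, assuming each node reports a residual-energy value: the active shortest path is monitored, and any intermediate node whose energy falls below a threshold is swapped for a stronger neighbour. The topology, threshold and helper names are illustrative, not the E-LFRR specification.

```python
import heapq

# Illustrative sketch: monitor residual energy along the active shortest path and swap
# out any node below a threshold for a neighbouring node with more energy.
# Topology, threshold and helper names are hypothetical, not the E-LFRR specification.

graph = {  # node -> neighbours
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}
energy = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.8, "E": 0.9}  # normalised residual energy

def shortest_path(graph, src, dst):
    """Plain Dijkstra with unit edge costs (hop count)."""
    dist, prev, queue = {src: 0}, {}, [(0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        for nb in graph[node]:
            if d + 1 < dist.get(nb, float("inf")):
                dist[nb], prev[nb] = d + 1, node
                heapq.heappush(queue, (d + 1, nb))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def replace_draining_nodes(path, threshold=0.3):
    """Swap intermediate nodes whose energy is below the threshold for a stronger neighbour."""
    repaired = list(path)
    for i, node in enumerate(path[1:-1], start=1):
        if energy[node] < threshold:
            candidates = [nb for nb in graph[path[i - 1]]
                          if nb in graph[path[i + 1]] and energy[nb] >= threshold]
            if candidates:
                repaired[i] = max(candidates, key=energy.get)
    return repaired

active = shortest_path(graph, "A", "E")       # e.g. ['A', 'B', 'D', 'E']
print(active, "->", replace_draining_nodes(active))
```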

Keywords: Actuator node, energy efficient routing, energy hole, link failure recovery, link utilization, wireless sensor network.

479 Coupling Heat and Mass Transfer for Hydrogen-Assisted Self-Ignition Behaviors of Propane-Air Mixtures in Catalytic Micro-Channels

Authors: Junjie Chen, Deguang Xu

Abstract:

Transient simulations of the hydrogen-assisted self-ignition of propane-air mixtures were carried out in platinum-coated micro-channels from ambient cold-start conditions, using a two-dimensional model with reduced-order reaction schemes, heat conduction in the solid walls, convection and surface radiation heat transfer. The self-ignition behavior of the hydrogen-propane mixed fuel is analyzed and compared with the heated-feed case. Simulations indicate that hydrogen can successfully cause self-ignition of propane-air mixtures in catalytic micro-channels with a 0.2 mm gap size, eliminating the need for startup devices. The minimum hydrogen composition for propane self-ignition is found to be in the range of 0.8-2.8% (on a molar basis) and increases with increasing wall thermal conductivity and with decreasing inlet velocity or propane composition. A higher propane-air ratio results in earlier ignition. The ignition characteristics of hydrogen-assisted propane qualitatively resemble those of the selective inlet feed preheating mode. The transient response of the mixed hydrogen-propane fuel reveals sequential ignition of propane followed by hydrogen. Front-end propane ignition is observed in all cases. Low wall thermal conductivities cause earlier ignition of the mixed hydrogen-propane fuel, subsequently resulting in low exit temperatures. The transient-state behavior of this micro-scale system is described, and the startup time and minimization of hydrogen usage are discussed.

Keywords: Micro-combustion, Self-ignition, Hydrogen addition, Heat transfer, Catalytic combustion, Transient simulation.

478 Evaluating Emission Reduction Due to a Proposed Light Rail Service: A Micro-Level Analysis

Authors: Saeid Eshghi, Neeraj Saxena, Abdulmajeed Alsultan

Abstract:

Carbon dioxide (CO2), alongside other gas emissions in the atmosphere, causes a greenhouse effect, resulting in an increase of the average temperature of the planet. Transportation vehicles are among the main contributors of CO2 emissions, and stationary vehicles with running engines produce more emissions than moving ones. Intersections with traffic lights that force vehicles to remain stationary for a period of time therefore produce more CO2 pollution than other parts of the road. This paper focuses on analyzing the CO2 produced by the traffic flow at the Anzac Parade Road - Barker Street intersection in Sydney, Australia, before and after the implementation of light rail transit (LRT). The data were gathered during the construction phase of the LRT by counting the number of vehicles on each path of the intersection for 15 minutes during the evening rush hour over one week (6-7 pm, July 04-31, 2018) and then multiplying by 4 to obtain the hourly flow of vehicles. For analyzing the data, the microscopic simulation software VISSIM was used. The traffic flow was modelled in three stages: before implementation of the light rail, during the construction phase, and after implementation. Finally, the traffic results were input into another software package, EnViVer, to calculate the amount of CO2 emitted during one hour. The results showed that after the implementation of the light rail, CO2 will drop by a minimum of 13%. This finding provides evidence that light rail is a sustainable mode of transport.

Keywords: Carbon dioxide, emission modeling, light rail, microscopic model, traffic flow.

477 Dynamic Threshold Adjustment Approach for Neural Networks

Authors: Hamza A. Ali, Waleed A. J. Rasheed

Abstract:

The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. In this work, attempts were made to design a neural network that includes an additional mechanism to adjust the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets, a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second generates output dynamically, with the capability of tuning to any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment of the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs were quite satisfactory. The supportive-layer approach achieved a recognition rate of over 90%, while the multiple-network technique showed a more effective and acceptable level of recognition. However, this is achieved at the price of network complexity and computation time. Recognition generalization may be further improved by combining the capabilities of these structures with additional, more advanced learning phases.
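
A highly simplified sketch of the threshold-adjustment idea is shown below: a unit keeps its trained weights fixed while a supportive rule shifts the decision threshold according to the statistics of each incoming pattern. It is a generic illustration under assumed values, not the dual-net architecture proposed in the paper.

```python
import numpy as np

# Generic illustration of dynamic threshold adjustment (a simplified stand-in, not the
# paper's dual-net design): a trained unit keeps its weights fixed, while a supportive
# rule shifts the decision threshold to compensate for a DC drift in the input pattern.

rng = np.random.default_rng(0)
weights = rng.standard_normal(16)
trained_threshold = 0.0           # fixed after the training phase

def classify(pattern, threshold):
    return int(weights @ pattern > threshold)

def supportive_threshold(pattern, reference_mean=0.0):
    # A constant offset c added to every input component adds c * sum(weights) to the
    # activation, so the threshold is shifted by the same estimated amount.
    offset = pattern.mean() - reference_mean
    return trained_threshold + offset * weights.sum()

clean = rng.standard_normal(16)
drifted = clean + 0.5             # the same pattern with a DC drift

print("clean, fixed threshold:     ", classify(clean, trained_threshold))
print("drifted, fixed threshold:   ", classify(drifted, trained_threshold))
print("drifted, adjusted threshold:", classify(drifted, supportive_threshold(drifted)))
```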

Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.

476 Identifying the Traditional Color Scheme in Decorative Patterns Used by the Bahnar Ethnic Group in the Central Highlands of Vietnam

Authors: Nguyen Viet Tan

Abstract:

The Bahnar is one of 11 indigenous groups living in the Central Highlands of Vietnam. It is one of the four most populous groups in this area, together with the Mnong, who speak a language of the same Mon-Khmer family, while the Jrai and the Rhade belong to the Malayo-Polynesian language family. These groups once occupied the fertile plateaus and left a cultural and artistic heritage that influenced the remaining smaller groups. Despite the difference in ethnic origins, these groups seem to share similar beliefs, customs and related folk arts after a very long time living beside each other. However, through an in-depth study, this paper points out that the decorative patterns used by the Bahnar are different from those of the other ethnic groups, especially in color. Based on historical materials from the local museums and on studies from the 1980s, when all of the ethnic groups in this area still lived in self-sufficient conditions, this paper characterizes the traditional color scheme used by the Bahnar and identifies the difference in decorative motifs of this group compared to the others, pointing out that they do not use green in their usual decorative patterns. Moreover, combined with recent field surveys and comparative analysis, it also identifies stylistic variations of these patterns in the process of cultural exchange with the other ethnic groups, both in and out of the region, under modern living conditions. This study helps to preserve and promote the traditional values and cultural identity of the Bahnar people in the Central Highlands of Vietnam, avoiding the fusion of styles among groups during cultural exchange.

Keywords: Bahnar ethnic group, decorative patterns, the Central Highlands of Vietnam, traditional color scheme.

475 A Study on the Effect of Mg and Ag Additions and Age Hardening Treatment on the Properties of As-Cast Al-Cu-Mg-Ag Alloys

Authors: Ahmed. S. Alasmari, M. S. Soliman, Magdy M. El-Rayes

Abstract:

This study focuses on the effect of the addition of magnesium (Mg) and silver (Ag) on the mechanical properties of aluminum-based alloys. The alloying elements were added at different levels using a 2² factorial design of experiments; the two factors are Mg and Ag at two levels of concentration. The superior mechanical properties of the produced Al-Cu-Mg-Ag alloys after aging result from a unique type of precipitate known as the Ω-phase, which enhances the tensile strength and thermal stability. This paper further investigates the microstructure and mechanical properties of as-cast Al–Cu–Mg–Ag alloys after a complete homogenization treatment at 520 °C for 8 hours followed by an isothermal age-hardening process at 190 °C for different periods of time. The homogenization at 520 °C for 8 hours was selected based on a homogenization study at various temperatures and times. The alloys’ microstructures were studied using optical microscopy (OM), and the fracture surfaces were investigated using a scanning electron microscope (SEM). Studying the microstructure of the aged Al-Cu-Mg-Ag alloys reveals that the grains are equiaxed with an average grain size of about 50 µm. A detailed fractography study of the fractured surfaces of the aged alloys exhibited a mixed fracture mode, whereby the random fracture suggested crack propagation along the grain boundaries while the dimples indicated that the fracture was ductile. The present results show that alloy 5 has the highest hardness values and the best mechanical behavior.
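
For readers unfamiliar with the 2² factorial layout mentioned above, the sketch below enumerates the four treatment combinations and computes main effects and interaction from responses. The concentration levels and the hardness responses are hypothetical placeholders, not the alloys studied in the paper.

```python
from itertools import product

# 2^2 factorial design for the two factors Mg and Ag, each at two concentration levels.
# The low/high concentrations and the hardness responses are hypothetical placeholders.

levels = {"Mg": (0.5, 1.0), "Ag": (0.2, 0.6)}     # wt.% (assumed values)
runs = list(product((-1, +1), repeat=2))            # coded levels for (Mg, Ag)
hardness = {(-1, -1): 95.0, (+1, -1): 110.0, (-1, +1): 102.0, (+1, +1): 125.0}  # HV, hypothetical

for mg_code, ag_code in runs:
    print(f"Mg={levels['Mg'][mg_code > 0]} wt.%, Ag={levels['Ag'][ag_code > 0]} wt.% "
          f"-> hardness {hardness[(mg_code, ag_code)]} HV")

# Main effect of a factor = mean response at its high level minus mean at its low level.
mg_effect = sum(hardness[r] * r[0] for r in runs) / 2
ag_effect = sum(hardness[r] * r[1] for r in runs) / 2
interaction = sum(hardness[r] * r[0] * r[1] for r in runs) / 2
print(mg_effect, ag_effect, interaction)
```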

Keywords: Precipitation hardening, aluminum alloys, aging, design of experiments, analysis of variance, heat treatments.

474 Urban Air Pollution – Trend and Forecasting of Major Pollutants by Time Series Analysis

Authors: A.L. Seetharam, B.L. Udaya Simha

Abstract:

Bangalore City is facing an acute air pollution problem due to the heavy increase in traffic and developmental activities in recent years. The present study is an attempt to assess the trend of the ambient air quality status at three stations, viz. AMCO Batteries Factory, Mysore Road; Graphite India Factory, KHB Industrial Area, Whitefield; and Ananda Rao Circle, Gandhinagar, with respect to some of the major criteria pollutants, namely suspended particulate matter (SPM), oxides of nitrogen (NOx) and oxides of sulphur (SO2). The sites are representative of the various kinds of growth prevailing in Bangalore, viz. commercial, residential and industrial, which contribute to air pollution. The concentration of sulphur dioxide (SO2) at all locations showed a falling trend due to the use of refined petrol and diesel in recent years. The concentration of oxides of nitrogen (NOx) showed an increasing trend but was within the permissible limits. The concentration of suspended particulate matter (SPM) showed a mixed trend. The correlation between modelled and observed values is found to vary from 0.4 to 0.7 for SO2, 0.45 to 0.65 for NOx and 0.4 to 0.6 for SPM. About 80% of the data are observed to fall within an error band of ±50%. Forecast tests for the best-fit models showed the same trend as the actual values in most cases. However, the deviation observed in a few cases could be attributed to changes in the quality of petroleum products, increase in the volume of traffic, introduction of LPG as a fuel in many types of automobiles, poor condition of roads, prevailing meteorological conditions, etc.
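
A minimal sketch of the kind of time-series fit-and-forecast workflow described above is given below, using a synthetic monthly SO2 series rather than the monitoring-station data; the ARIMA order and forecast horizon are illustrative choices, not the best-fit models identified in the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Minimal time-series fit-and-forecast sketch on a synthetic monthly SO2 series.
# The series, the ARIMA order and the forecast horizon are illustrative choices,
# not the models identified in the study.

rng = np.random.default_rng(0)
months = np.arange(120)
so2 = 40 - 0.1 * months + 5 * np.sin(2 * np.pi * months / 12) + rng.standard_normal(120)

train, test = so2[:108], so2[108:]
model = ARIMA(train, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(forecast.round(1), f"RMSE = {rmse:.2f}")
```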

Keywords: Bangalore, urban air pollution, time series analysis.

473 Retrieving Extended High Dynamic Range from Digital Negative Image - An Experiment on Architectural Photo Imaging

Authors: See Zi Siang, Khairul Hazrin Hashim, Harold Thwaites, Lee Xia Sheng, Ooi Wooi Har

Abstract:

This paper explores the development and optimization of a method and apparatus for retrieving extended high dynamic range from a digital negative image. Architectural photo imaging can benefit from the high dynamic range imaging (HDRI) technique for preserving and presenting sufficient luminance in the shadow and highlight clipping areas of an image. The HDRI technique that requires multiple exposure images as the source of HDRI rendering may not be time-efficient during the acquisition process and post-processing stage, considering its numerous potential imaging variables and technical limitations during the multiple exposure process. This paper explores an experimental method and apparatus that aims to expand the dynamic range from a digital negative image in an HDRI environment. The method and apparatus explored are based on a single source of RAW image acquisition for use in HDRI post-processing. They cater for optimization in order to avoid or minimize conventional HDRI photographic errors caused by different physical conditions during the photographing process and by the misalignment of multiple exposed image sequences. The study observes the characteristics and capabilities of the RAW image format as a digital negative used for retrieving extended high dynamic range in an HDRI environment.

Keywords: High Dynamic Range Image, Photography Workflow Optimization, Digital Negative Image, Architectural Image

472 Impact of Urbanization Growth on Disease Spread and Outbreak Response: Exploring Strategies for Enhancing Resilience

Authors: Raquel Vianna Duarte Cardoso, Eduarda Lobato Faria, José Jorge Boueri

Abstract:

Rapid urbanization has transformed the global landscape, presenting significant challenges to public health. This article delves into the impact of urbanization on the spread of infectious diseases in cities and identifies crucial strategies to enhance urban community resilience. Massive urbanization over recent decades has created environments conducive to the rapid spread of diseases due to population density, mobility, and unequal living conditions. Urbanization has been observed to increase exposure to pathogens and foster conditions conducive to disease outbreaks, including seasonal flu, vector-borne diseases, and respiratory infections. In order to tackle these issues, a range of cross-disciplinary approaches are suggested. These encompass the enhancement of urban healthcare infrastructure, emphasizing the need for robust investments in hospitals, clinics, and healthcare systems to keep pace with the burgeoning healthcare requirements in urban environments. Moreover, the establishment of disease monitoring and surveillance mechanisms is indispensable, as it allows for the timely detection of outbreaks, enabling swift responses. Additionally, community engagement and education play a pivotal role in advocating for personal hygiene, vaccination, and preventive measures, thereby diminishing disease transmission. Lastly, the promotion of sustainable urban planning, which includes the creation of cities with green spaces, access to clean water, and proper sanitation, can significantly mitigate the risks associated with waterborne and vector-borne diseases. The article is based on an analysis of the scientific literature, and it offers a comprehensive insight into the complexities of the relationship between urbanization and health. It places a strong emphasis on the urgent need for integrated approaches to improve urban resilience in the face of health challenges.

Keywords: Infectious diseases dissemination, public health, urbanization impacts, urban resilience.

471 Applicability of Overhangs for Energy Saving in Existing High-Rise Housing in Different Climates

Authors: Qiong He, S. Thomas Ng

Abstract:

Upgrading the thermal performance of the building envelope of existing residential buildings is an effective way to reduce heat gain or heat loss. The overhang is a common device for building envelope improvement, as it can cut down solar heat gain and thereby reduce the energy used for space cooling in summer. Despite that, an overhang can increase the demand for indoor heating in winter because it lowers solar heat gain. Overhangs therefore have different impacts on energy use in different climatic zones, which have different energy demands. To evaluate the impact of overhang devices on building energy performance under the different climates of China, an energy analysis model is built in a computer-based simulation program known as DesignBuilder, based on data from a typical high-rise residential building. The energy simulation results show that, in regions which predominantly rely on space cooling, a single overhang is able to cut down around 5% of the energy consumption of the case building in the stand-alone situation, or about 2% when the building is surrounded by other buildings, though it makes no contribution to energy reduction in the cold region. In regions with cold summers and cold winters, adding overhangs over windows can cut down around 4% and 1.8% of energy use with and without adjoining buildings, respectively. The results indicate that the overhang might not be an effective shading device for reducing energy consumption in the mixed-climate or cold regions.

Keywords: Overhang, energy analysis, computer-based simulation, high-rise residential building, mutual shading, climate.

470 Evaluation of Dynamic Behavior of a Machine Tool Spindle System through Modal and Unbalance Response Analysis

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

The spindle system is one of the most important components of a machine tool. The dynamic properties of the spindle affect machining productivity and the quality of the workpieces. Thus, it is important to determine the dynamic characteristics of spindles during design and development in order to avoid forced resonance. The finite element method (FEM) has been adopted to obtain the dynamic behavior of the spindle system. For this purpose, obtaining the Campbell diagrams and determining the critical speeds are very useful for evaluating the spindle system dynamics. The unbalance response of the system to a center-of-mass unbalance at the cutting tool is also calculated to investigate the dynamic behavior. In this paper, an ANSYS Parametric Design Language (APDL) program based on the finite element method has been implemented to perform the full dynamic analysis and evaluation of the results. Results show that the calculated critical speeds are far from the operating speed range of the spindle; thus, the spindle would not experience resonance, and the maximum unbalance response at the operating speed is still within acceptable limits. APDL can be used by spindle designers as a tool to increase product quality and to reduce cost and time in the design and development stages.

Keywords: ANSYS Parametric Design Language (APDL), Campbell diagram, critical speeds, unbalance response, spindle system.

469 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the different properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when considering all the various parameters. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required for better understanding the flow behavior and predicting the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamic technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effects of fluid properties, including the shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, were evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C, and the additive best suited for crude oil transportation was determined. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to developing pipeline designs in the future.

Keywords: Natural surfactant, crude oil, rheology, CFD, viscosity.

468 Influence of Sr(BO2)2 Doping on Superconducting Properties of (Bi,Pb)-2223 Phase

Authors: N. G. Margiani, I. G. Kvartskhava, G. A. Mumladze, Z. A. Adamia

Abstract:

Chemical doping with different elements and compounds at various amounts represents the most suitable approach to improve the superconducting properties of bismuth-based superconductors for technological applications. In this paper, the influence of partial substitution of Sr(BO2)2 for SrO on the phase formation kinetics and transport properties of the (Bi,Pb)-2223 HTS has been studied for the first time. Samples with nominal composition Bi1.7Pb0.3Sr2-xCa2Cu3Oy[Sr(BO2)2]x, x=0, 0.0375, 0.075, 0.15, 0.25, were prepared by standard solid-state processing. The appropriate mixtures were calcined at 845 °C for 40 h. The resulting materials were pressed into pellets and annealed at 837 °C for 30 h in air. Superconducting properties of the undoped (reference) and Sr(BO2)2-doped (Bi,Pb)-2223 compounds were investigated through X-ray diffraction (XRD), resistivity (ρ) and transport critical current density (Jc) measurements. The surface morphology changes in the prepared samples were examined by scanning electron microscopy (SEM). XRD and Jc studies have shown that low-level Sr(BO2)2 doping (x=0.0375-0.075) at the Sr site promotes the formation of the high-Tc phase and leads to an enhancement of the current carrying capacity in the (Bi,Pb)-2223 HTS. The doped sample with x=0.0375 has the best performance compared to the other prepared samples. The estimated volume fraction of the (Bi,Pb)-2223 phase increases from ~25% for the reference specimen to ~70% for x=0.0375. Moreover, a strong increase in the self-field Jc value was observed for this dopant amount (Jc=340 A/cm2), compared to the undoped sample (Jc=110 A/cm2). The pronounced enhancement of the superconducting properties of the (Bi,Pb)-2223 superconductor can be attributed to the acceleration of high-Tc phase formation as well as to the improvement of inter-grain connectivity by small amounts of Sr(BO2)2 dopant.

Keywords: Bismuth-based superconductor, critical current density, phase formation, Sr(BO2)2 doping.

467 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector

Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy

Abstract:

In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector, and we enhance the detection and tracking of a microcontroller-based semi-active laser weapon decoding system. The system receives the reflected pulse through the quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser pulses reflected by the target. The seeker accuracy is enhanced by the decoding system, the laser detection time based on the number of received pulses is reduced, and a gate is used to limit the laser pulse width. The model is implemented based on the Pulse Repetition Frequency (PRF) technique with two microcontroller units (MCU). MCU1 generates laser pulses with different codes, and MCU2 decodes the laser code and locks the system onto the specific code. The codes are selected using the two selector switches. The system is implemented and tested in the Proteus ISIS software. The implementation of the full position determination circuit with the detector is presented. A general system for spot position determination was realized with the laser PRF as the incident radiation and a mechanical system for adjusting the setup at different angles. The system test results show that the system can detect the laser code with only three received pulses based on the narrow gate signal, and good agreement between the simulated and measured system performance is obtained.
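
For orientation, the usual sum-and-difference estimate of spot position from the four quadrant signals is sketched below. The formula is the standard one for quadrant detectors, while the quadrant labelling and the scale factor are assumptions of this example rather than details taken from the paper.

```python
# Standard sum-and-difference estimate of laser spot position on a quadrant detector.
# Quadrants are assumed to be labelled A (top-right), B (top-left), C (bottom-left),
# D (bottom-right); the scale factor k maps the normalised error to millimetres and
# would be obtained by calibration.

def spot_position(a, b, c, d, k=1.0):
    total = a + b + c + d
    if total <= 0:
        raise ValueError("no signal on the detector")
    x = k * ((a + d) - (b + c)) / total   # right minus left
    y = k * ((a + b) - (c + d)) / total   # top minus bottom
    return x, y

# Example: more energy on the right half and the top half of the detector.
print(spot_position(a=0.40, b=0.25, c=0.15, d=0.20))
```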

Keywords: 4-quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency, ATmega 32 microcontrollers.

466 Identification of Critical Success Factors in Non-Formal Service Sector Using Delphi Technique

Authors: Amol A. Talankar, Prakash Verma, Nitin Seth

Abstract:

The purpose of this study is to identify the critical success factors (CSFs) for the effective implementation of Six Sigma in non-formal service sectors.

Based on a survey of the literature, the critical success factors (CSFs) for Six Sigma have been identified and are assessed for their importance in the non-formal service sector using the Delphi technique. The selected CSFs were put forth to a panel of experts to cluster them and prepare a cognitive map to establish their relationships.

All the critical success factors examined and obtained from the review of the literature have been assessed for their importance with respect to their contribution to Six Sigma effectiveness in the non-formal service sector.

The study is limited to the non-formal service sectors involved in the organization of religious festivals only. However, a similar exercise can be conducted for a broader sample of other non-formal service sectors such as temple/ashram management, religious tour management, etc.

The research suggests an approach to identify the CSFs of Six Sigma for the non-formal service sector. Not all CSFs of the formal service sector are applicable to non-formal services; hence, the opinion of experts was sought to add or delete CSFs. In the first round of Delphi, the panel of experts suggested two new CSFs, “competitive benchmarking (F19)” and “residents’ involvement (F28)”, which were added for assessment in the next round of Delphi. One of the CSFs, “full-time Six Sigma personnel (F15)”, has been omitted from the proposed clusters of CSFs for non-formal organizations, as it is practically impossible to deploy full-time trained Six Sigma recruits.

Keywords: Critical success factors (CSFs), Quality assurance, non-formal service sectors, Six Sigma.

465 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load

Authors: D. Koren, V. Kilar

Abstract:

Modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building’s RC foundation slab. The purpose of the paper is to identify the potential problems of buildings founded on a TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building, a multi-storey RC frame structure with and without the XPS layer under the foundation slab has been investigated utilizing nonlinear dynamic (time-history) and static (pushover) analyses. The structural response has been investigated with reference to the following performance parameters: i) the building’s lateral roof displacements, ii) edge compressive and shear strains of the XPS, iii) horizontal accelerations of the superstructure, iv) plastic hinge patterns of the superstructure, v) the part of the foundation in compression, and vi) deformations of the underlying soil and vertical displacements of the foundation slab (i.e. identifying potential uplift). The results have shown that in the case of taller and stiffer structures lying on firm soil, the use of XPS under the foundation slab might induce amplified structural peak responses compared to the building models without XPS under the foundation slab. The analysis has revealed that the superstructure as well as the XPS response is substantially affected by the stiffness of the foundation slab.

Keywords: Extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil–structure interaction.

464 Post-Traumatic Stress Disorder: Management at the Montfort Hospital

Authors: Kay-Anne Haykal, Issack Biyong

Abstract:

Post-traumatic stress disorder (PTSD) arises from exposure to a traumatic event and manifests as a persistent re-experiencing of this event. Several psychiatric co-morbidities are associated with PTSD, including mood disorders, anxiety disorders, and substance abuse. The main objective was to compare the criteria for PTSD according to the literature with those used to diagnose patients in a francophone hospital and to check the correspondence between the two. 700 medical charts of patients admitted to the medicine or psychiatry unit at the Montfort Hospital were identified with the following diagnoses: major depressive disorder, bipolar disorder, anxiety disorder, substance abuse, and PTSD, for the period between April 2005 and March 2006. Multiple demographic criteria were assembled. Also, for every chart analyzed, the PTSD criteria according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) were found, identified, and grouped according to pre-established codes. An analysis using the receiver operating characteristic (ROC) method was carried out for the study of the data. A sample of 57 women and 50 men was studied. Age varied between 18 and 88 years, with a median age of 48. According to the PTSD criteria in the DSM-IV, 12 patients should have received a diagnosis of PTSD, as opposed to only two identified in the medical charts. The ROC method establishes that, with the combination of data from PTSD and depression, the sensitivity varies between 0.127 and 0.282 and the specificity varies between 0.889 and 0.917. If we examine the PTSD data alone, the sensitivity jumps to 0.50 and the specificity varies between 0.781 and 0.895. This study confirms that PTSD is underdiagnosed and undertreated, causing severe disturbances for the affected individuals.
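
For reference, sensitivity and specificity as used above are computed from chart-review counts as in the sketch below. The counts shown are hypothetical (only the 2-of-12 charted PTSD cases echo figures from the abstract), chosen to illustrate the calculation rather than to reproduce the study's data.

```python
# Sensitivity and specificity from chart-review counts. The counts are hypothetical,
# chosen only to illustrate the calculation, not the Montfort Hospital dataset.

def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    sensitivity = true_pos / (true_pos + false_neg)   # charted diagnoses among true PTSD cases
    specificity = true_neg / (true_neg + false_pos)   # correctly uncharted among non-PTSD cases
    return sensitivity, specificity

print(sensitivity_specificity(true_pos=2, false_neg=10, true_neg=85, false_pos=10))
```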

Keywords: Post-Traumatic Stress Disorder, diagnosis, co-morbidities, mental health disorders.

463 Does Material Choice Drive Sustainability of 3D Printing?

Authors: Jeremy Faludi, Zhongyin Hu, Shahd Alrashed, Christopher Braunholz, Suneesh Kaul, Leulekal Kassaye

Abstract:

The environmental impacts of six 3D printers using various materials were compared to determine whether material choice drove sustainability, or whether other factors such as machine type, machine size, or machine utilization dominated. Cradle-to-grave life-cycle assessments were performed, comparing a commercial-scale FDM machine printing in ABS plastic, a desktop FDM machine printing in ABS, a desktop FDM machine printing in PET and PLA plastics, a polyjet machine printing in its proprietary polymer, an SLA machine printing in its polymer, and an inkjet machine hacked to print in salt and dextrose. All scenarios were scored using the ReCiPe Endpoint H methodology to combine multiple impact categories, comparing environmental impacts per part made for several scenarios per machine. Results showed that most printers’ ecological impacts were dominated by electricity use, not materials, and the changes in electricity use due to different plastics were not significant compared to the variation from one machine to another. Variation in machine idle time determined impacts per part most strongly. However, material impacts were quite important for the inkjet printer hacked to print in salt: in its optimal scenario, it had as little as 1/38th the impact per part of the worst-performing machine in the same scenario. If salt parts were infused with epoxy to make them more physically robust, then much of this advantage disappeared, and material impacts actually dominated or equaled electricity use. Future studies should also measure DMLS and SLS processes and materials.

Keywords: 3D printing, Additive Manufacturing, Sustainability, Life-cycle assessment, Design for Environment.

462 Choosing R-tree or Quadtree Spatial Data Indexing in One Oracle Spatial Database System to Make Faster Showing Geographical Map in Mobile Geographical Information System Technology

Authors: Maruto Masserie Sardadi, Mohd Shafry bin Mohd Rahim, Zahabidin Jupri, Daut bin Daman

Abstract:

The latest Geographic Information System (GIS) technology makes it possible to administer the spatial components of daily “business objects” in the corporate database and to apply suitable geographic analysis efficiently in a desktop-focused application. Wireless Internet technology can be used to transfer spatial data from server to client or vice versa. However, the problem with the wireless Internet is system bottlenecks that can make the process of transferring data inefficient, because of the large amount of spatial data. Optimization of the process of transferring and retrieving data is therefore an essential issue that must be considered, and an appropriate decision between the R-tree and Quadtree spatial data indexing methods can optimize the process. With the rapid proliferation of these databases in the past decade, extensive research has been conducted on the design of efficient data structures to enable fast spatial searching. Commercial database vendors like Oracle have also started implementing these spatial indexes to cater to large and diverse GIS applications. This paper focuses on the decision between R-tree and Quadtree spatial indexing using an Oracle Spatial database in a mobile GIS application. Under our research conditions, choosing the appropriate spatial data indexing method (Quadtree or R-tree) in a single spatial database can save up to 42.5% of the time.

Keywords: Indexing, Mobile GIS, MapViewer, Oracle Spatial Database.

461 Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing

Authors: Zaineb Ben Messaoud, Dorra Gargouri, Saida Zribi, Ahmed Ben Hamida

Abstract:

This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMM), for improved formant estimation in noise. The approach proposed in this paper provides a systematic framework for the modelling and utilization of a time sequence of peaks which satisfies continuity constraints on the parameters; the peaks themselves are modelled by the LP parameters. The formant-tracking LP model estimation is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on formants; (2) an estimation stage, where an initial estimate of the LP model of speech for each frame is obtained; (3) a formant classification stage using probability models of formants and Viterbi decoders. The evaluation results for the estimation of the formant-tracking LP model, tested against a Gaussian white noise background, demonstrate that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. The performance was evaluated with noisy natural vowels extracted from international French and English vocabulary speech signals at an SNR of 10 dB. In each case, the estimated formants are compared to reference formants.
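
To illustrate the LP-based formant estimation underlying the model, the sketch below computes LPC coefficients for one noisy synthetic frame via the autocorrelation method and converts the complex roots of the prediction polynomial into formant-frequency candidates. The frame, LPC order and sampling rate are illustrative; the HMM tracking and spectral subtraction stages are not represented.

```python
import numpy as np

# LPC-based formant candidates for one noisy synthetic frame: autocorrelation-method
# prediction coefficients, then the angles of the complex roots of A(z) mapped to Hz.
# The synthetic frame, LPC order and sampling rate are illustrative choices only.

fs = 8000
t = np.arange(0, 0.03, 1 / fs)
rng = np.random.default_rng(0)
frame = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)) * np.hamming(t.size)
frame += 0.01 * rng.standard_normal(t.size)   # additive noise, as in the noisy-vowel setting

def lpc(signal, order):
    """Autocorrelation-method LPC: solve the Yule-Walker normal equations."""
    r = np.correlate(signal, signal, mode="full")[signal.size - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))         # A(z) = 1 - sum_k a_k z^{-k}

def formant_candidates(a, fs):
    roots = [z for z in np.roots(a) if z.imag > 0]          # one of each conjugate pair
    freqs = sorted(np.angle(z) * fs / (2 * np.pi) for z in roots)
    return [round(f) for f in freqs if f > 90]              # drop near-DC candidates

print(formant_candidates(lpc(frame, order=8), fs))
```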

Keywords: Formant Estimation, HMM, Multi-Band Spectral Subtraction, Variable-Order LPC Coding, White Gaussian Noise.

460 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots

Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov

Abstract:

This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming and inefficient. As such, solutions tend toward a more reactive approach, repairing faults, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP) and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in the pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
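
As an example of the route-construction step, the sketch below solves a single-robot Chinese Postman tour on a toy pipe network with networkx: the multigraph is made Eulerian by duplicating edges, then an Eulerian circuit is walked so every pipe is covered at least once. The k-robot splitting, limited-communication and stigmergy aspects of the actual system are not represented, and the network is hypothetical.

```python
import networkx as nx

# Single-robot Chinese Postman tour on a toy pipe network: duplicate edges until the
# graph is Eulerian, then walk an Eulerian circuit so every pipe is inspected at least
# once. The k-robot split, limited communication and stigmergy of the real system are
# not modelled here.

pipes = nx.Graph()
pipes.add_weighted_edges_from([
    ("A", "B", 40), ("B", "C", 35), ("C", "D", 30),
    ("D", "A", 25), ("B", "D", 20), ("C", "E", 15),
])

eulerian = nx.eulerize(pipes)                 # adds duplicate edges between odd-degree nodes
tour = list(nx.eulerian_circuit(eulerian, source="A"))

length = sum(pipes[u][v]["weight"] for u, v in tour)
print(tour)
print("total inspection length:", length)
```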

Keywords: Autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem.

459 Optimal and Critical Path Analysis of State Transportation Network Using Neo4J

Authors: Pallavi Bhogaram, Xiaolong Wu, Min He, Onyedikachi Okenwa

Abstract:

A transportation network is a realization of a spatial network, describing a structure which permits either vehicular movement or the flow of some commodity. Examples include road networks, railways, air routes, pipelines, and many more. The transportation network plays a vital role in maintaining the vigor of the nation’s economy. Hence, ensuring the network stays resilient all the time, especially in the face of challenges such as heavy traffic loads and large-scale natural disasters, is of utmost importance. In this paper, we used the Neo4j application to develop the graph. Neo4j is a leading open-source, NoSQL, native graph database that implements an ACID-compliant transactional backend for applications. The Southern California network model is developed using the Neo4j application, and the most critical and optimal nodes and paths in the network are obtained using centrality algorithms. The edge betweenness centrality algorithm calculates the critical or optimal paths using Yen's k-shortest paths algorithm, and the node betweenness centrality algorithm calculates the amount of influence a node has over the network. The preliminary study results confirm that the Neo4j application can be a suitable tool to study the important nodes and the critical paths of a major congested metropolitan area.
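
The two analyses described above can also be prototyped outside Neo4j. The sketch below uses networkx on a toy road network (not the Southern California model): node and edge betweenness centrality rank influence, and shortest_simple_paths, which is based on Yen's algorithm, enumerates the k shortest paths between an origin-destination pair.

```python
from itertools import islice
import networkx as nx

# Prototype of the two analyses on a toy road network: node/edge betweenness to rank
# influence, and Yen's k-shortest paths (networkx's shortest_simple_paths) between an
# origin-destination pair. The Southern California model itself lives in Neo4j.

roads = nx.Graph()
roads.add_weighted_edges_from([
    ("LA", "Anaheim", 25), ("LA", "Burbank", 12), ("Anaheim", "Irvine", 15),
    ("Burbank", "Pasadena", 10), ("Pasadena", "Irvine", 45), ("LA", "Pasadena", 11),
])

node_rank = nx.betweenness_centrality(roads, weight="weight")
edge_rank = nx.edge_betweenness_centrality(roads, weight="weight")

k_shortest = list(islice(nx.shortest_simple_paths(roads, "Burbank", "Irvine", weight="weight"), 3))

print(sorted(node_rank.items(), key=lambda kv: -kv[1]))
print(sorted(edge_rank.items(), key=lambda kv: -kv[1])[:3])
print(k_shortest)
```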

Keywords: Transportation network, critical path, connectivity reliability, network model, Neo4J application, optimal path, edge betweenness centrality index, node betweenness centrality index, Yen’s k-shortest paths.

458 The Impact of Cooperative Learning on Numerical Methods Course

Authors: Sara Bilal, Abdi Omar Shuriye, Raihan Othman

Abstract:

Numerical Methods is a course that can be conducted using workshops and group discussion. This study was implemented with level-two undergraduate students at the Faculty of Engineering, International Islamic University Malaysia. The Numerical Methods course was delivered to two sections, 1 and 2, with 44 and 22 students respectively. Systematic steps were followed to apply the student-centered learning approach in teaching the Numerical Methods course. Initially, the instructor chose the topic to be learned, which was Euler’s method for solving ordinary differential equations (ODEs). The students were then divided into groups of five members each. Initial instructions were given to the group members to prepare their subtopics before meeting members from other groups to discuss the subtopics in an expert group inside the classroom. For the time assigned to the classroom discussion, the setting of the classroom was rearranged to accommodate the student-centered learning approach. The teacher’s role was to monitor the learning process inside and outside the class. The students were assessed during the migration to the expert groups, through the recording of a video explanation outside the classroom, and during the final examination. Euler’s method for solving an ODE was set as part of Question 3(b) in the final exam. It is observed that none of the students from either section obtained a zero grade in Q3(b), in contrast to Q3(a) and Q3(c). Also, for Section 1 (44 students), 29 students obtained the full mark of 7/7, while only 10 obtained 7/7 for Q3(a) and no students obtained 6/6 for Q3(c). Finally, we can recommend that the Numerical Methods course be moved toward more student-centered learning classrooms where the students engage in group discussion rather than a teacher-led one-man show.
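
Since Euler's method was the topic taught, a minimal implementation is shown below for a simple test ODE (dy/dt = -2y, y(0) = 1). The step size and test problem are illustrative, not the exam question.

```python
import math

# Euler's method for dy/dt = f(t, y) with fixed step size h. The test problem
# dy/dt = -2y, y(0) = 1 has exact solution y(t) = exp(-2t) for comparison.

def euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(n_steps):
        y += h * f(t, y)      # y_{k+1} = y_k + h * f(t_k, y_k)
        t += h
        values.append((t, y))
    return values

approx = euler(lambda t, y: -2 * y, t0=0.0, y0=1.0, h=0.1, n_steps=10)
for t, y in approx[::5]:
    print(f"t = {t:.1f}  Euler: {y:.4f}  exact: {math.exp(-2 * t):.4f}")
```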

Keywords: Teacher-centered learning, student-centered learning, mathematics, numerical methods.

457 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats

Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi

Abstract:

Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs. However, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of conjugated linoleic acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. 48 male Wistar rats were randomly divided into two groups: a normal-fat diet group (n=8) and a high-fat diet (HFD) group (n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (as control group), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were planned per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly by means of a digital scale. At the end of the study, blood samples were collected for the measurement of biochemical markers. SPSS Version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%. The trend of weight gain in the CLA and CLA+LC groups was decelerated, but not significantly. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and a non-significant decreasing effect on FBS. Our results showed that an obesogenic diet led to obesity and dyslipidemia in a relatively short time, which can be modified by LC and CLA to some extent.

Keywords: Conjugated linoleic acid, high fat diet, L-carnitine, obesity.

456 Estimation of Bio-Kinetic Coefficients for Treatment of Brewery Wastewater

Authors: Abimbola M. Enitan, Josiah Adeyemo

Abstract:

Anaerobic modeling is a useful tool to describe and simulate the condition and behaviour of anaerobic treatment units for better effluent quality and biogas generation. The present investigation deals with the anaerobic treatment of brewery wastewater at varying organic loads. The chemical oxygen demand (COD) and total suspended solids (TSS) of the influent and effluent of the bioreactor were determined at various retention times to generate data for the kinetic coefficients. The bio-kinetic coefficients in the modified Stover–Kincannon kinetic and methane generation models were determined to study the performance of the anaerobic digestion process. At steady state, the kinetic coefficient (K), the endogenous decay coefficient (Kd), the maximum growth rate of microorganisms (μmax), the growth yield coefficient (Y), the ultimate methane yield (Bo), the maximum utilization rate constant (Umax) and the saturation constant (KB) in the model were calculated to be 0.046 g/g COD, 0.083 d⁻¹, 0.117 d⁻¹, 0.357 g/g, 0.516 L CH4/g CODadded, 18.51 g/L/day and 13.64 g/L/day, respectively. The outcome of this study will help in the simulation of anaerobic models to predict usable methane production and good effluent quality during the treatment of industrial wastewater. This will protect the environment, conserve natural resources, and save time and the costs incurred by industries for the discharge of untreated or partially treated wastewater. It will also contribute to a sustainable long-term clean development mechanism for the optimization of the methane produced from the anaerobic degradation of waste in a closed system.
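
Using the Umax and KB values reported above, the effluent substrate concentration can be estimated as in the sketch below. The model form is the commonly cited modified Stover-Kincannon expression rather than a formula quoted from the paper, and the reactor volume, flow rate and influent COD are hypothetical operating values.

```python
# Effluent COD predicted by the commonly cited form of the modified Stover-Kincannon
# model, using the coefficients reported above. The flow rate, reactor volume and
# influent COD below are hypothetical operating values, not data from the study.

U_MAX = 18.51   # maximum utilization rate constant, g/L/day
K_B = 13.64     # saturation constant, g/L/day

def effluent_cod(influent_cod, flow_rate, volume):
    """Se = Si - Umax*Si / (KB + Q*Si/V); Q*Si/V is the organic loading rate in g/L/day."""
    loading = flow_rate * influent_cod / volume
    removed = U_MAX * influent_cod / (K_B + loading)
    return max(influent_cod - removed, 0.0)   # clamp: low loadings imply ~complete removal

# Hypothetical operating point: 5 g/L influent COD, 20 L/day through a 5 L reactor.
se = effluent_cod(5.0, 20.0, 5.0)
print(f"predicted effluent COD: {se:.2f} g/L ({100 * (1 - se / 5.0):.0f}% removal)")
```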

Keywords: Brewery wastewater, methane generation model, environment, anaerobic modeling.
