Search results for: Refined technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3141

2301 Spacecraft Neural Network Control System Design using FPGA

Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah

Abstract:

Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products of space technologies. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device with reprogrammable properties and robust flexibility. For real-time neural-network-based instrument prototypes, conventional application-specific VLSI neural chip design suffers from limitations in time and cost. With low-precision artificial neural network designs, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips, so many researchers have made great efforts towards the realization of neural networks (NNs) using the FPGA technique. In this paper, a brief introduction to ANNs and the FPGA technique is given. VHDL (Hardware Description Language) code is proposed to implement ANNs, and simulation results with floating-point arithmetic are presented. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation provides a flexible, fast method with a high degree of parallelism for implementing ANNs. Implementing the multi-layer NN using lookup tables (LUTs) reduces the resource utilization and execution time.
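
To illustrate the lookup-table idea (a Python sketch of the concept only, not the authors' VHDL code; the table size and input range are assumed values), a sigmoid activation can be replaced by a precomputed table indexed by the quantized input, which is the kind of structure that maps naturally onto FPGA block RAM:

import numpy as np

# Hypothetical parameters: a 256-entry table covering inputs in [-8, 8).
TABLE_SIZE = 256
X_MIN, X_MAX = -8.0, 8.0
_x = np.linspace(X_MIN, X_MAX, TABLE_SIZE)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_x))   # precomputed once; on an FPGA this would sit in block RAM

def lut_sigmoid(x):
    """Approximate the sigmoid by indexing into the precomputed table."""
    idx = np.clip(((x - X_MIN) / (X_MAX - X_MIN) * (TABLE_SIZE - 1)).astype(int),
                  0, TABLE_SIZE - 1)
    return SIGMOID_LUT[idx]

def mlp_layer(x, W, b):
    """One fully connected layer using the LUT activation."""
    return lut_sigmoid(W @ x + b)

# Toy usage: a 3-input, 2-neuron layer
rng = np.random.default_rng(0)
print(mlp_layer(rng.normal(size=3), rng.normal(size=(2, 3)), rng.normal(size=2)))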

Keywords: Spacecraft, neural network, FPGA, VHDL.

2300 Assessing the Theoretical Suitability of Sentinel-2 and WorldView-3 Data for Hydrocarbon Mapping of Spill Events, Using HYSS

Authors: K. Tunde Olagunju, C. Scott Allen, F.D. (Freek) van der Meer

Abstract:

Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt techniques for hydrocarbon identification to achieve detection, in order to plan an appropriate cleanup program. Identification with optical sensors allows not only detection but also characterization and quantification. Until recently, in optical remote sensing, quantification and characterization were only potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as this data category is at present mainly obtained via airborne surveys. In this research, two operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the Hydrocarbon Spectra Slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. Spectral measurements of seven different hydrocarbon oils (crude and refined), taken on 10 different substrates with a laboratory ASD FieldSpec, were convolved to Sentinel-2 and WorldView-3 band resolutions using their full width at half maximum (FWHM) parameters. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Due to the close conformity of central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the qualitative analysis on the WorldView-3 bands was statistically significant at the 95% confidence level (p-value < 0.01) for all studied hydrocarbon oils except diesel. Using multivariate analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations on the Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data, due to the non-conformity of spectral bands with key hydrocarbon absorptions and the relatively coarse bandwidth (> 100 nm).
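
As a sketch of the band-convolution step described above (the Gaussian band model, band centers, FWHM values and the slope definition below are illustrative assumptions, not the exact HYSS formulation), a laboratory spectrum can be resampled to a sensor band and a slope computed between the two diagnostic wavelengths:

import numpy as np

def gaussian_srf(wl, center, fwhm):
    """Gaussian spectral response function for a band with the given center and FWHM (micrometres)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return srf / srf.sum()

def band_reflectance(wl, reflectance, center, fwhm):
    """Convolve a lab spectrum (e.g. from an ASD FieldSpec) to a single multispectral band."""
    return np.sum(reflectance * gaussian_srf(wl, center, fwhm))

def hydrocarbon_slope(wl, reflectance, band1=(1.73, 0.04), band2=(2.30, 0.04)):
    """Illustrative slope between the 1.73 and 2.30 micrometre bands (centers/FWHM are placeholders)."""
    r1 = band_reflectance(wl, reflectance, *band1)
    r2 = band_reflectance(wl, reflectance, *band2)
    return (r2 - r1) / (band2[0] - band1[0])

# Toy usage with a synthetic spectrum sampled every 1 nm between 1.0 and 2.5 micrometres
wl = np.arange(1.0, 2.5, 0.001)
spectrum = 0.4 - 0.05 * np.exp(-0.5 * ((wl - 1.73) / 0.01) ** 2)   # fake absorption at 1.73 micrometres
print(hydrocarbon_slope(wl, spectrum))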

Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon-substrate combination, Sentinel-2, WorldView-3

2299 Switching Studies on Ge15In5Te56Ag24 Thin Films

Authors: Diptoshi Roy, G. Sreevidya Varma, S. Asokan, Chandasree Das

Abstract:

Germanium telluride based quaternary thin film switching devices with composition Ge15In5Te56Ag24 have been deposited in sandwich geometry on glass substrates, with aluminum as the top and bottom electrodes. The bulk glassy form of the said composition is prepared by the melt quenching technique. In this technique, appropriate quantities of high-purity elements are taken in a quartz ampoule and sealed under a vacuum of 10⁻⁵ mbar. The ampoule is then rotated in a horizontal rotary furnace for 36 hours to ensure homogeneity of the melt. After that, the ampoule is quenched into a mixture of ice water and NaOH to obtain the bulk ingot of the sample. The sample is then coated on a glass substrate using the flash evaporation technique at a vacuum level of 10⁻⁶ mbar. The XRD report reveals the amorphous nature of the thin film sample, and energy-dispersive X-ray analysis (EDAX) confirms that the film retains the same chemical composition as the base sample. The electrical switching behavior of the device is studied with the help of a Keithley 2410c source-measure unit interfaced with LabVIEW 7 (National Instruments). Switching studies, mainly the SET operation (changing the state of the material from amorphous to crystalline), are conducted on the thin film form of the sample. The device is found to manifest memory switching, as it remains 'ON' even after the removal of the electric field. The amorphous Ge15In5Te56Ag24 thin film also exhibits clean memory-type electrical switching behavior, as evidenced by the absence of fluctuations in the I-V characteristics. The I-V characteristics further reveal that switching is fast in this sample, as no data points could be seen in the negative resistance region during the transition to the ON state, which leads to the conclusion of a fast phase change during the SET process. Scanning electron microscopy (SEM) studies are performed on the chosen sample to study the structural changes at the time of switching. SEM studies on the switched Ge15In5Te56Ag24 sample have shown morphological changes at the place of switching, which can be explained by the formation of a conducting crystalline channel in the device when it switches from the high-resistance to the low-resistance state. From these studies it can be concluded that the material may find application in fast-switching non-volatile phase change memory (PCM) devices.

Keywords: Chalcogenides, vapor deposition, electrical switching, PCM.

2298 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. These entities can be a whole dictionary, or a specific collection of named items. In many cases, machine learning and/or text mining techniques are used for this goal. These approaches might be unfeasible in computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence indicates some degree of semantic correlation between the words, because related words are more commonly found close to each other than on opposite sides of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique into a Weighted-Distance Sliding Window, where each cooccurrence of two named items within the window is counted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment to support this intuition, applying the technique to a data set consisting of the text of the Bible, split into verses.
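
A minimal Python sketch of the weighted-distance sliding window (the inverse-distance weight and window size are illustrative assumptions; the paper's exact weighting function may differ):

from collections import defaultdict

def weighted_cooccurrence_graph(tokens, window=10, weight=lambda d: 1.0 / d):
    """Build a weighted cooccurrence graph: each pair of tokens appearing within
    `window` positions of each other contributes weight(distance) to its edge."""
    graph = defaultdict(float)
    for i, w1 in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            w2 = tokens[j]
            if w1 == w2:
                continue
            edge = tuple(sorted((w1, w2)))
            graph[edge] += weight(j - i)
    return graph

# Example on a single verse split into tokens
verse = "in the beginning god created the heaven and the earth".split()
for edge, w in sorted(weighted_cooccurrence_graph(verse, window=5).items()):
    print(edge, round(w, 3))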

Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.

2297 Incidence, Occurrence, Classification and Outcome of Small Animal Fractures: A Retrospective Study (2005-2010)

Authors: L. M. Ben Ali

Abstract:

A retrospective study was undertaken to record the occurrence and pattern of fractures in small animals (dogs and cats) from 2005 to 2010. A total of 650 cases were presented to the small animal surgery unit, out of which 116 (dogs and cats) presented with a history of fractures of different bones. Fractures thus accounted for 17.8% (116/650) of the cases, of which dogs constituted 67% and cats 23%. The majority of the animals were intact. Trauma in the form of roadside accidents was the principal cause of fractures in dogs, whereas in cats it was falls from height. The ages of the fractured dogs ranged from 4 months to 12 years, whereas in cats they ranged from 4 weeks to 10 years. Femoral fractures represented 37.5% and 25% in dogs and cats, respectively. Diaphyseal, distal metaphyseal and supracondylar fractures were the most commonly affected sites in dogs and cats. Tibial fractures in dogs and cats represented 21.5% and 10%, while humeral fractures were 7.9% and 14% in dogs and cats, respectively. Humeral condylar fractures were most commonly seen in puppies aged 4 to 6 months. The incidence of radius-ulna fractures was 19% and 14% in dogs and cats, respectively. Other fractures recorded involved the lumbar vertebrae, mandible and metacarpals, among others. Management comprised external and internal fixation in both species. The most common internal fixation technique employed was intramedullary fixation in long bones, followed by other methods such as stack or cross pinning and wiring, as dictated by the individual case. Cast bandages were used mainly as a means of external coaptation. The paper discusses the outcomes of the cases according to the technique employed.

Keywords: Animal, Fracture, Incidence, Occurrence.

2296 Adjustment of a PET Scanner for PEPT

Authors: Alireza Sadrmomtaz

Abstract:

Positron emission particle tracking (PEPT) is a technique in which a single radioactive tracer particle can be accurately tracked as it moves. A limitation of PET is that, in order to reconstruct a tomographic image, it is necessary to acquire a large volume of data (millions of events), so it is difficult to study rapidly changing systems. In this respect, PEPT is a very fast process compared with PET. In PEPT, detecting both photons defines a line, and the annihilation is assumed to have occurred somewhere along this line. The location of the tracer can be determined to within a few mm by triangulation from the coincident detection of a small number of pairs of back-to-back gamma rays. This can be achieved many times per second, and the track of a moving particle can be reliably followed. This technique was invented at the University of Birmingham [1]. The aim in PEPT is not to form an image of the tracer particle but simply to determine its location over time. If the tracer is followed for a long enough period within a closed, circulating system, it explores all possible types of motion. The application of PEPT to industrial process systems carried out at the University of Birmingham falls into two subject areas: the behaviour of granular materials and of viscous fluids. Granular materials are processed in industry, for example in the manufacture of pharmaceuticals, ceramics, food and polymers, and PEPT has been used in a number of ways to study the behaviour of these systems [2]. PEPT allows the possibility of tracking a single particle within the bed [3]. PEPT has also been used for studying systems such as fluid flow and viscous fluids in mixers [4], using a neutrally-buoyant tracer particle [5].
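
For context (a simple least-squares sketch, not necessarily the triangulation algorithm used at Birmingham), the tracer position can be estimated as the point minimizing the summed squared distance to the coincidence lines:

import numpy as np

def closest_point_to_lines(points, directions):
    """Least-squares estimate of the point closest to a set of 3D lines.
    Each line is given by a point a_i on it and a direction d_i
    (one coincidence line per detected back-to-back photon pair)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)    # projector onto the plane orthogonal to the line
        A += P
        b += P @ a
    return np.linalg.solve(A, b)

# Toy example: lines passing near the true tracer position (1, 2, 3) with mm-scale scatter
rng = np.random.default_rng(0)
true = np.array([1.0, 2.0, 3.0])
dirs = rng.normal(size=(50, 3))
pts = true + rng.normal(scale=0.002, size=(50, 3))
print(closest_point_to_lines(pts, dirs))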

Keywords: PET, BGO, Particle Tracking, ECAT 931, List mode, PEPT.

2295 Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels

Authors: Zuhaib Ashfaq Khan, Imran Khan, Nandana Rajatheva

Abstract:

In this paper, a hybrid FDMA-TDMA access technique in a cooperative, distributed fashion is analyzed by introducing and implementing a modified version of the protocol presented in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays, and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, which operate under a best channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to perfectly detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
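
A minimal numpy sketch of MMSE nulling for jointly received BPSK streams (all parameters assumed; it omits the ordering and successive interference cancellation steps of full V-BLAST):

import numpy as np

def mmse_detect_bpsk(y, H, noise_var):
    """MMSE nulling detection of jointly received BPSK streams: filter with
    W = (H^H H + sigma^2 I)^-1 H^H and take hard decisions."""
    Nt = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(Nt)) @ H.conj().T
    x_hat = W @ y
    return np.sign(x_hat.real)

# Toy example: two source terminals, two receive antennas, Rayleigh fading
rng = np.random.default_rng(1)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
x = np.array([1.0, -1.0])                 # BPSK symbols from the two sources
noise_var = 0.1
y = H @ x + np.sqrt(noise_var / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
print(mmse_detect_bpsk(y, H, noise_var))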

Keywords: Cooperation diversity, best channel selection scheme, MIMO relay networks, V-BLAST, QR decomposition, MMSE.

2294 Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns

Authors: Christian Arcos, Marley Vellasco, Abraham Alcaim

Abstract:

In this paper, we present a wavelet coefficients masking approach based on Local Binary Patterns (WLBP) to enhance the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed by binary labels through local binary pattern masking, which encodes the ratio between the original value of each coefficient and the values of the neighbouring coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them through a threshold. A comparative analysis is carried out with conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an internationally recommended objective measure for estimating subjective speech quality. Informal listening tests also show that, in an acoustic context, the proposed method improves the quality of speech, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN-based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario.
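
As an illustration of local-binary-pattern encoding on a sequence of coefficients (the neighbourhood radius and the mask rule below are assumptions for illustration; the paper's exact masking criterion is not reproduced here):

import numpy as np

def lbp_codes_1d(coeffs, radius=2):
    """Local binary pattern code per coefficient of a 1D array (e.g. the detail
    coefficients of one wavelet subband): each coefficient is compared with its
    2*radius neighbours and the comparison bits are packed into an integer."""
    coeffs = np.asarray(coeffs, dtype=float)
    n = len(coeffs)
    offsets = [o for o in range(-radius, radius + 1) if o != 0]
    codes = np.zeros(n, dtype=int)
    for i in range(n):
        code = 0
        for bit, o in enumerate(offsets):
            j = min(max(i + o, 0), n - 1)        # clamp indices at the borders
            code |= int(coeffs[j] >= coeffs[i]) << bit
        codes[i] = code
    return codes

def lbp_mask(coeffs, radius=2, keep_threshold=2):
    """Illustrative binary mask: keep a coefficient when few neighbours dominate it
    (low popcount of its LBP code). This threshold rule is an assumption."""
    codes = lbp_codes_1d(coeffs, radius)
    popcount = np.array([bin(c).count("1") for c in codes])
    return (popcount <= keep_threshold).astype(float)

# Example: mask the coefficients of a noisy subband
rng = np.random.default_rng(0)
detail = rng.normal(size=64)
print(lbp_mask(detail)[:16])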

Keywords: Binary labels, local binary patterns, mask, wavelet coefficients, speech enhancement, speech recognition.

2293 Modeling and Visualizing Seismic Wave Propagation in Elastic Medium Using Multi-Dimension Wave Digital Filtering Approach

Authors: Jason Chien-Hsun Tseng, Nguyen Dong-Thai Dao, Chong-Ching Chang

Abstract:

A novel PDE solver using the multidimensional wave digital filtering (MDWDF) technique to obtain the solution of a 2D seismic wave system is presented. In essence, the continuous physical system, represented by a linear Kirchhoff circuit, is transformed into an equivalent discrete dynamic system implemented by an MDWDF circuit. This amounts to numerically approximating the differential equations used to describe the elements of an MD passive electronic circuit by grid-based difference equations implemented through the so-called state quantities within the passive MDWDF circuit, so the digital model can track the wave field on a dense 3D grid of points. Details of how to transform the continuous system into the desired discrete passive system are addressed. In addition, initial and boundary conditions are properly embedded into the MDWDF circuit in terms of the state quantities. Graphic results clearly demonstrate physical effects of seismic wave (P-wave and S-wave) propagation, including radiation, reflection, and refraction from and across hard boundaries. A comparison between the MDWDF technique and the finite difference time domain (FDTD) approach is also made in terms of computational efficiency.

Keywords: Seismic wave propagation, multi-dimension wave digital filters, partial differential equations.

2292 Data Mining for Cancer Management in Egypt Case Study: Childhood Acute Lymphoblastic Leukemia

Authors: Nevine M. Labib, Michael N. Malek

Abstract:

Data mining aims at discovering knowledge from data and presenting it in a form that is easily comprehensible to humans. One useful application in Egypt is cancer management, especially the management of Acute Lymphoblastic Leukemia (ALL), which is the most common type of cancer in children. This paper discusses the process of designing a prototype that can help in the management of childhood ALL, which has great significance in the health care field. Besides, it has a social impact in decreasing the incidence rate among children in Egypt. It also provides valuable information about the distribution and segmentation of ALL in Egypt, which may be linked to possible risk factors. Undirected knowledge discovery is used since, in the case of this research project, there is no target field, as the data provided are mainly subjective. This is done in order to quantify the subjective variables. Therefore, the computer is asked to identify significant patterns in the provided medical data about ALL. This is achieved by collecting the data necessary for the system, determining the data mining technique to be used, and choosing the most suitable implementation tool for the domain. The research makes use of a data mining tool, Clementine, to apply the decision trees technique. We feed it with data extracted from real-life cases taken from specialized cancer institutes. Relevant medical case details such as patient medical history and diagnosis are analyzed, classified, and clustered in order to improve disease management.
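
For illustration only (the study used the Clementine tool; this sketch uses scikit-learn, and the synthetic records, feature columns and labelling rule are hypothetical placeholders), a decision tree classifier could be fitted as follows:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for patient-history records; column names are assumptions.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(1, 15, 300),
    "gender": rng.choice(["M", "F"], 300),
    "governorate": rng.choice(["Cairo", "Giza", "Alexandria"], 300),
    "white_cell_count": rng.normal(50, 20, 300).clip(1, None),
})
df["risk_group"] = np.where((df["age"] >= 10) | (df["white_cell_count"] > 60), "high", "standard")

X = pd.get_dummies(df.drop(columns="risk_group"))   # one-hot encode categorical fields
y = df["risk_group"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))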

Keywords: Data Mining, Decision Trees, Knowledge Discovery, Leukemia.

2291 Memorabilia of Suan Sunandha through Interactive User Interface

Authors: Nalinee Sophatsathit

Abstract:

The objectives of the Memorabilia of Suan Sunandha project are to develop a general knowledge presentation about the historical royal garden through an interactive graphic simulation technique and to employ high-functionality context in enhancing interactive user navigation. The approach infers non-intrusive display of relevant history in response to situational context. The user's navigation runs through the virtual reality campus, consisting of new and restored buildings. A flashback presentation of information pertaining to the history, in the form of photos, paintings, and textual descriptions, is displayed along each passing building. To keep the presentation lively, the graphical simulation is created as a serendipity game play so that the user can both learn and enjoy the educational tour. The benefits of this human-computer interaction development are twofold. First, a lively presentation technique and situational context modeling are developed that entail a usable paradigm of knowledge and information presentation combinations. Second, cost-effective training and promotion for both internal personnel and public visitors to learn about and keep informed of this historical royal garden can be furnished without the need for a dedicated public relations service. Future improvements in graphic simulation and ability-based display can extend this work to be more realistic, user-friendly, and informative for all.

Keywords: Interactive user navigation, high-functionality context, situational context, human-computer interaction.

2290 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response surface methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, and accounts for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we implemented this technique to predict the 28-day compressive strength of concrete; since waiting 28 days is quite time consuming, an earlier prediction is important for the quality control process. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. The data used for this study were obtained from experimental schemes at the University of Benghazi, civil engineering department. A total of 114 sets of data were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents such as cement, coarse aggregate, fine aggregate and water were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required compressive strength of concrete from early-age results.
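
A minimal sketch of fitting a second-order response surface by least squares (the factor choice and the synthetic data below are assumptions for illustration, not the study's 114 experimental records):

import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a second-order response surface (intercept, linear, pure quadratic and
    interaction terms) by ordinary least squares. Columns of X are process factors,
    e.g. water-cement ratio and early-age strength (assumed meanings)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                       # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]                                  # pure quadratic terms
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]    # interactions
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta, A

# Example with synthetic data: predict 28-day strength from w/c ratio and 7-day strength
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.4, 0.6, 60), rng.uniform(15, 35, 60)])
y = 60 - 40 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=60)
beta, A = fit_quadratic_rsm(X, y)
print("R^2:", 1 - np.sum((y - A @ beta) ** 2) / np.sum((y - y.mean()) ** 2))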

Keywords: Mix proportioning, response surface methodology, compressive strength, optimal design.

2289 Environmental Impact Assessment of Gotvand Hydro-Electric Dam on the Karoon River Using ICOLD Technique

Authors: A. Sayadi, A. Khodadadi D., S. Partani

Abstract:

Today, Environmental Impact Assessment (EIA) is known as one of the most important tools for decision makers in the construction of civil and industrial projects towards sustainable development. In the past, projects were evaluated based on cost-benefit analysis regardless of the physical and biological environmental effects and their socio-economic impacts. According to the regulations of the Department of Environment (DOE) of Iran, the construction of hydroelectric dams is an activity that requires an EIA report. In this paper, the environmental impact assessment of the Gotvand hydro-electric dam has been carried out for three environmental elements: biological, physical-chemical and cultural units. This dam, with a volume of 4500 MCM, is one of the largest dams in Iran and is going to be the last dam on the Karoon River in the south of Iran. The ICOLD (International Commission on Large Dams) technique was employed for the environmental impact assessment of the dam. The research covers the socio-economic and environmental effects during the construction and operation of the hydro-electric dam, and environmental management, monitoring and mitigation of negative impacts were analyzed. The results led to the adoption of techniques to protect against the destructive impacts on biological aspects, in addition to the long-term impacts on those aspects. The impacts on physical aspects are commonly temporary and negative, and can be restored and rehabilitated through natural processes over the long term during the operation period.

Keywords: "Gotvand Hydro Electric Dam", "EIA", "ICOLD and Leopold matrices"

2288 Operational Representation of Certain Hypergeometric Functions by Means of Fractional Derivatives and Integrals

Authors: Manoj Singh, Mumtaz Ahmad Khan, Abdul Hakim Khan

Abstract:

The aim of the present paper is to obtain certain types of relations for the well-known hypergeometric functions by employing the technique of fractional derivatives and integrals.
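
For context, the fractional operators typically employed in such operational relations are of Riemann-Liouville type; their definition and their action on a power function (the building block for term-by-term manipulation of hypergeometric series) are:

D_z^{-\mu} f(z) = \frac{1}{\Gamma(\mu)} \int_0^z (z - t)^{\mu - 1} f(t)\, dt, \qquad \operatorname{Re}(\mu) > 0,

D_z^{\mu} \{ z^{\lambda} \} = \frac{\Gamma(\lambda + 1)}{\Gamma(\lambda - \mu + 1)}\, z^{\lambda - \mu}, \qquad \lambda > -1.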

Keywords: Fractional Derivatives and Integrals, Hypergeometric functions.

2287 Spectroscopic and SEM Investigation of TCPP in Titanium Matrix

Authors: R. Rahimi, F. Moharrami

Abstract:

Titanium gels doped with water-soluble cationic porphyrin were synthesized by the sol–gel polymerization of Ti (OC4H9)4. In this work we investigate the spectroscopic properties along with SEM images of tetra carboxyl phenyl porphyrin when incorporated into porous matrix produced by the sol–gel technique.

Keywords: TCPP, Titanium matrix, UV/Vis spectroscopy, SEM.

2286 Variational EM Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we propose a variational EM inference algorithm for the multiclass Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can derive simultaneously both the posterior distribution of the latent function and estimators of the hyper-parameters in a multiclass Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and it is performed in two steps: the expectation and maximization steps. First, in the expectation step, using the Bayesian formula and the LA technique, we derive an approximation to the posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class in the Gaussian process classification model. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator for the hyper-parameters of the covariance matrix necessary to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
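
As context for the Laplace-approximation building block, here is a sketch of the standard binary case with a logistic likelihood and an RBF kernel (Newton iterations for the posterior mode of the latent function); the paper's multiclass variational EM extends this, so all choices below are illustrative:

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    d2 = np.sum(X1 ** 2, 1)[:, None] + np.sum(X2 ** 2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def laplace_mode(K, y, n_iter=50):
    """Newton iterations for the posterior mode of the latent function in binary
    GP classification with a logistic likelihood (labels y in {0, 1})."""
    f = np.zeros(len(y))
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-f))
        W = np.diag(pi * (1.0 - pi))          # negative Hessian of the log-likelihood
        b = W @ f + (y - pi)                  # Newton right-hand side
        f = K @ np.linalg.solve(W @ K + np.eye(len(y)), b)
    return f

# Toy example with a linearly separable latent rule
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
K = rbf_kernel(X, X) + 1e-6 * np.eye(30)
f_hat = laplace_mode(K, y)
print("training accuracy:", np.mean((f_hat > 0) == (y == 1)))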

Keywords: Bayesian rule, multiclass Gaussian process classification model, Gaussian process prior, human action classification, Laplace approximation, variational EM algorithm.

2285 Comparison of the Effects of Continuous Flow Microwave Pre-treatment with Different Intensities on the Anaerobic Digestion of Sewage Sludge for Sustainable Energy Recovery from Sewage Treatment Plant

Authors: D. Hephzibah, P. Kumaran, N. M. Saifuddin

Abstract:

Anaerobic digestion is a well-known technique for sustainable energy recovery from sewage sludge. However, sewage sludge digestion is restricted due to certain factors. Pre-treatment methods have been established in various publications as a promising technique to improve the digestibility of the sewage sludge and to enhance the biogas generated which can be used for energy recovery. In this study, continuous flow microwave (MW) pre-treatment with different intensities were compared by using 5 L semi-continuous digesters at a hydraulic retention time of 27 days. We focused on the effects of MW at different intensities on the sludge solubilization, sludge digestibility, and biogas production of the untreated and MW pre-treated sludge. The MW pre-treatment demonstrated an increase in the ratio of soluble chemical oxygen demand to total chemical oxygen demand (sCOD/tCOD) and volatile fatty acid (VFA) concentration. Besides that, the total volatile solid (TVS) removal efficiency and tCOD removal efficiency also increased during the digestion of the MW pre-treated sewage sludge compared to the untreated sewage sludge. Furthermore, the biogas yield also subsequently increases due to the pre-treatment effect. A higher MW power level and irradiation time generally enhanced the biogas generation which has potential for sustainable energy recovery from sewage treatment plant. However, the net energy balance tabulation shows that the MW pre-treatment leads to negative net energy production.

Keywords: Anaerobic digestion, biogas, microwave pre-treatment, sewage sludge.

2284 Estimation of the Mean of the Selected Population

Authors: Kalu Ram Meena, Aditi Kar Gangopadhyay, Satrajit Mandal

Abstract:

Two normal populations with different means and the same known variance are considered. The population with the smaller sample mean is selected, and various estimators are constructed for the mean of the selected normal population. Finally, the estimators are compared with respect to bias and MSE risks by the method of Monte Carlo simulation, and their performances are analysed with the help of graphs.
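
A minimal Monte Carlo sketch of the selection effect (it evaluates only the naive estimator, the sample mean of the selected population; the paper's improved estimators, e.g. of Brewster-Zidek type, are not reproduced here):

import numpy as np

def simulate(mu1, mu2, sigma=1.0, n=20, reps=100_000, seed=0):
    """Monte Carlo bias and MSE of the naive estimator (the sample mean of the
    population with the smaller sample mean) relative to the mean of whichever
    population was actually selected."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu1, sigma, size=(reps, n)).mean(axis=1)
    x2 = rng.normal(mu2, sigma, size=(reps, n)).mean(axis=1)
    pick_first = x1 < x2
    estimate = np.where(pick_first, x1, x2)
    true_mean = np.where(pick_first, mu1, mu2)
    bias = np.mean(estimate - true_mean)
    mse = np.mean((estimate - true_mean) ** 2)
    return bias, mse

print(simulate(mu1=0.0, mu2=0.5))   # selection induces a negative bias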

Keywords: Estimation after selection, Brewster-Zidek technique.

2283 Vehicle Gearbox Fault Diagnosis Based On Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on damage of gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signal of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information generally obscured by the noise from dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands when more than one family of sidebands is present at the same time. The concepts of measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded with different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, while the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the diagnosis of gear faults based on the order cepstrum is also presented. The procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
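
A minimal numpy sketch of the real cepstrum, showing how a family of sidebands with a fixed spacing appears as a peak at the corresponding quefrency (the signal parameters are synthetic, chosen purely for illustration):

import numpy as np

def real_cepstrum(x):
    """Real cepstrum of a vibration signal: inverse FFT of the log power spectrum.
    Periodic families of sidebands in the spectrum appear as isolated peaks
    ('rahmonics') at the corresponding quefrency."""
    spectrum = np.fft.rfft(x * np.hanning(len(x)))
    log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)   # small constant avoids log(0)
    return np.fft.irfft(log_power)

# Toy example: a 300 Hz carrier amplitude-modulated at 20 Hz (sidebands every 20 Hz)
fs = 8192
t = np.arange(fs) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 300 * t)
c = real_cepstrum(x)
quefrency = np.arange(len(c)) / fs
lo, hi = 200, 650                                        # search away from the low-quefrency region
peak_q = quefrency[lo + np.argmax(c[lo:hi])]
print(f"dominant quefrency: {peak_q:.4f} s (sideband spacing 20 Hz -> 1/20 = 0.05 s)")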

Keywords: Cepstrum analysis, fault diagnosis, gearbox.

2282 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques

Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson

Abstract:

Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (AP) from noisy data trains. Current techniques in the field of 'unsupervised no-prior-knowledge' biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to be biased towards larger AP waveforms; APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, based upon the use of a delayed copy of the AP to highlight discontinuities relative to the background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an autoregressive moving average (ARMA) noise model has been constructed based upon the background noise of the recordings. Along with the AP classification, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
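
A minimal sketch of the delayed-copy phase-space idea (the excursion statistic, delay, window length and threshold below are illustrative assumptions, not the authors' exact algorithm):

import numpy as np

def phase_space_detect(x, delay=4, window=16, k=5.0):
    """Detect AP-like events by embedding the signal against a delayed copy of
    itself and thresholding the excursion from the phase-space diagonal."""
    d = x[delay:] - x[:-delay]                          # distance from the diagonal (x[n], x[n-delay])
    excursion = np.convolve(d ** 2, np.ones(window), mode="same")
    threshold = k * np.median(excursion) + 1e-12        # robust, noise-derived threshold
    return np.flatnonzero(excursion > threshold) + delay

# Toy example: noisy trace with two spike-like transients
rng = np.random.default_rng(0)
x = rng.normal(scale=0.1, size=2000)
for pos in (500, 1400):
    x[pos:pos + 20] += np.exp(-np.arange(20) / 4.0)     # crude AP shape
print(phase_space_detect(x)[:10])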

Keywords: Action potential detection, low SNR, phase-space diagrams/trajectories, unsupervised/no-prior knowledge.

2281 Time Effective Structural Frequency Response Testing with Oblique Impact

Authors: Khoo Shin Yee, Lian Yee Cheng, Ong Zhi Chao, Zubaidah Ismail, Siamak Noroozi

Abstract:

Structural frequency response testing is accurate in identifying the dynamic characteristics of a machinery structure. From a practical perspective, conventional structural frequency response testing such as experimental modal analysis with the impulse technique (also known as 'impulse testing') has limitations, especially its long acquisition time. The long acquisition time is mainly due to the redundant procedure whereby the engineer has to repeatedly perform the test in 3 directions, namely along the axial, horizontal and vertical axes, in order to comprehensively define the dynamic behavior of a 3D structure. This is unfavorable to numerous industries where the downtime cost is high. This study proposes to reduce the testing time by using an oblique impact. Theoretically, a single oblique impact can induce significant vibration responses and vibration modes in all 3 directions. Hence, with the oblique impulse technique, the acquisition time can be reduced by a factor of three (i.e. for a 3D dynamic system). This study initiates an experimental investigation of impulse testing with oblique excitation. A motor-driven test rig has been used for the testing purpose. Its dynamic characteristics have been identified using impulse testing with the conventional normal impact and the proposed oblique impact, respectively. The results show that the proposed oblique impulse testing is able to obtain all the desired natural frequencies in all 3 directions, thus providing a feasible solution for a fast and time-effective way of conducting impulse testing.
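
For context, the frequency response functions underlying such impulse tests are commonly estimated with the H1 estimator (cross-spectrum over input auto-spectrum); a sketch with a synthetic single-degree-of-freedom system, all parameters assumed, is:

import numpy as np
from scipy import signal

def frf_h1(force, response, fs, nperseg=1024):
    """H1 estimator of the frequency response function between an excitation force
    and a response signal: H1(f) = S_xy(f) / S_xx(f)."""
    f, Pxx = signal.welch(force, fs=fs, nperseg=nperseg)
    _, Pxy = signal.csd(force, response, fs=fs, nperseg=nperseg)
    return f, Pxy / Pxx

# Toy example: SDOF system with a ~60 Hz natural frequency excited by random force
fs = 2048
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(0)
force = rng.normal(size=t.size)
m, c, k = 1.0, 4.0, (2 * np.pi * 60.0) ** 2
sys = signal.lti([1.0], [m, c, k])
_, response, _ = signal.lsim(sys, force, t)
f, H1 = frf_h1(force, response, fs)
print("peak near", f[np.argmax(np.abs(H1))], "Hz")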

Keywords: Frequency response function, impact testing, modal analysis, oblique angle, oblique impact.

2280 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques

Authors: S. Visetpotjanakit, C. Khrautongkieo

Abstract:

Radiation monitoring in the environment and in foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP), the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and environment from any radiological incidents. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e. direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria, i.e. accuracy, precision and trueness, and obtained 'Accepted' statuses. These results confirm the quality of the data produced by the OAP environmental radiation laboratory for monitoring radiation in the environment.

Keywords: International atomic energy agency, proficiency test, radiation monitoring, seawater.

2279 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems especially under wet conditions and at high driving speeds. In order to replicate the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices have been developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating the bituminous pavement surface characteristics in a practical and time-efficient test procedure.

The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.

Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates provided solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Numbers (BPN), Polish Values (PV) and Mean Texture Depth (MTD) values.

Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.

2278 Stress-Strain Relation for Hybrid Fiber Reinforced Concrete at Elevated Temperature

Authors: Josef Novák, Alena Kohoutková

Abstract:

The performance of concrete structures in fire depends on several factors, which include, among others, the change in material properties due to the fire. Today, fiber reinforced concrete (FRC) belongs to the materials that have been widely used for various structures and elements. While the knowledge of and experience with FRC behavior at ambient temperature are well established, the effect of elevated temperature on its behavior still has to be investigated in depth. This paper deals with an experimental investigation and stress-strain relations for hybrid fiber reinforced concrete (HFRC), which contains siliceous aggregates, polypropylene fibers and steel fibers. The main objective of the experimental investigation is to enhance the database of mechanical properties of concrete composites with the addition of fibers subjected to elevated temperature, as well as to validate existing stress-strain relations for HFRC. Within the investigation, a unique heat transport test, compressive test and splitting tensile test were performed on 150 mm cubes heated up to 200, 400, and 600 °C, with the aim of determining the time period for uniform heat distribution in the test specimens and the mechanical properties of the investigated concrete composite, respectively. Both the findings obtained from the presented experimental tests and experimental data collected from scientific papers served to validate the computational accuracy of the investigated stress-strain relations for HFRC, which have been developed during the last few years. Owing to the presence of steel and polypropylene fibers, HFRC is a unique material whose structural performance differs from that of conventional plain concrete when exposed to elevated temperature. Polypropylene fibers in HFRC lower the risk of concrete spalling, as the fibers burn out quickly with increasing temperature due to their low ignition point, and as a consequence the pore pressure decreases. On the other hand, the resulting increase in concrete porosity might affect the mechanical properties of the material. Validating this hypothesis requires enhancing the existing database of results, which is very limited and does not contain enough data. As a result of the poor database, only a few stress-strain relations have been developed so far to describe the structural performance of HFRC at elevated temperature. Moreover, many of them are inconsistent and need to be refined. Most of them also do not take into account the effects of both fiber type and fiber content. Such an approach might be vague, especially when a high amount of polypropylene fibers is used. Therefore, the existing relations should be validated in detail against other experimental results.

Keywords: Elevated temperature, fiber reinforced concrete, mechanical properties, stress strain relation.

2277 Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

Authors: Bahaa Khalil, Taha B. M. J. Ouarda, André St-Hilaire

Abstract:

The world economic crises and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation. If two variables exhibit high correlation, it is an indication that some of the information produced may be redundant. Consequently, one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt. The record extension techniques are compared with respect to their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records; however, the results indicate an underestimation of the variance in the extended records. The LOC technique is superior in preserving characteristics of the entire distribution and avoids underestimation of the variance. It is concluded from this study that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
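
A minimal sketch contrasting the two extension techniques on synthetic data (the variables and numbers are illustrative, not the Nile Delta records); it shows the variance shrinkage of OLS and the variance preservation of LOC:

import numpy as np

def extend_record(x, y_obs, x_obs):
    """Extend a discontinued variable y from a continuously measured variable x.
    OLS slope:  r * sy/sx        (best for individual values, shrinks the variance)
    LOC slope:  sign(r) * sy/sx  (line of organic correlation, preserves the variance)"""
    r = np.corrcoef(x_obs, y_obs)[0, 1]
    sx, sy = np.std(x_obs, ddof=1), np.std(y_obs, ddof=1)
    ols = y_obs.mean() + (r * sy / sx) * (x - x_obs.mean())
    loc = y_obs.mean() + (np.sign(r) * sy / sx) * (x - x_obs.mean())
    return ols, loc

# Toy example with two correlated water quality indicators
rng = np.random.default_rng(0)
x = rng.normal(10, 2, 500)
y = 3 + 0.8 * x + rng.normal(0, 1.0, 500)
ols, loc = extend_record(x, y_obs=y[:100], x_obs=x[:100])   # first 100 records = overlap period
print("true sd:", round(y.std(ddof=1), 2),
      " OLS sd:", round(ols.std(ddof=1), 2),
      " LOC sd:", round(loc.std(ddof=1), 2))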

Keywords: Record extension, record augmentation, monitoring networks, water quality indicators.

2276 Retrieving Extended High Dynamic Range from Digital Negative Image - An Experiment on Architectural Photo Imaging

Authors: See Zi Siang, Khairul Hazrin Hashim, Harold Thwaites, Lee Xia Sheng, Ooi Wooi Har

Abstract:

This paper explores the development of an optimized method and apparatus for retrieving extended high dynamic range from a digital negative image. Architectural photo imaging can benefit from the high dynamic range imaging (HDRI) technique for preserving and presenting sufficient luminance in shadow and highlight clipping image areas. The HDRI technique that requires multiple exposure images as the source of HDRI rendering may not be effective in terms of time efficiency during the acquisition process and post-processing stage, considering that it involves numerous potential imaging variables and technical limitations during the multiple exposure process. This paper explores an experimental method and apparatus that aims to expand the dynamic range from a digital negative image in an HDRI environment. The method and apparatus explored are based on single-source RAW image acquisition for use in HDRI post-processing, and cater for optimization in order to avoid and minimize the conventional HDRI photographic errors caused by different physical conditions during the photographing process and the misalignment of multiple exposed image sequences. The study observes the characteristics and capabilities of the RAW image format as a digital negative used for the retrieval of extended high dynamic range in an HDRI environment.

Keywords: High Dynamic Range Image, Photography Workflow Optimization, Digital Negative Image, Architectural Image

2275 Identification of Critical Success Factors in Non-Formal Service Sector Using Delphi Technique

Authors: Amol A. Talankar, Prakash Verma, Nitin Seth

Abstract:

The purpose of this study is to identify the critical success factors (CSFs) for the effective implementation of Six Sigma in non-formal service sectors.

Based on a survey of the literature, the critical success factors (CSFs) for Six Sigma were identified and assessed for their importance in the non-formal service sector using the Delphi technique. These selected CSFs were put forth to a panel of experts to cluster them and prepare a cognitive map establishing their relationships.

All the critical success factors examined and obtained from the review of the literature have been assessed for their importance with respect to their contribution to Six Sigma effectiveness in the non-formal service sector.

The study is limited to the non-formal service sectors involved in the organization of religious festivals only. However, a similar exercise can be conducted for a broader sample of other non-formal service sectors like temple/ashram management, religious tours management, etc.

The research suggests an approach to identify CSFs of Six Sigma for the non-formal service sector. Not all the CSFs of the formal service sector will be applicable to non-formal services; hence, the opinion of experts was sought to add or delete CSFs. In the first round of Delphi, the panel of experts suggested two new CSFs, "competitive benchmarking (F19)" and "resident's involvement (F28)", which were added for assessment in the next round of Delphi. One of the CSFs, "full-time Six Sigma personnel (F15)", has been omitted from the proposed clusters of CSFs for non-formal organizations, as it is practically impossible to deploy full-time trained Six Sigma recruits.

Keywords: Critical success factors (CSFs), Quality assurance, non-formal service sectors, Six Sigma.

2274 A Study of the Planning and Designing of the Built Environment under the Green Transit-Oriented Development

Authors: Wann-Ming Wey

Abstract:

In recent years, the problems of global climate change and natural disasters have drawn public concern and attention to environmental sustainability issues. Aside from the environmental planning efforts made for the human environment, Transit-Oriented Development (TOD) has been widely used as one of the future solutions for sustainable city development. In order to be more consistent with sustainable urban development, a built environment planning approach based on the concept of Green TOD, which combines both TOD and Green Urbanism, is adopted here. Urban development under green TOD encompasses design toward environmental protection, maximizing resource and energy-use efficiency, using technology to construct green buildings, and linking protected areas, natural ecosystems and communities. Green TOD not only provides a solution to urban traffic problems but also directs more sustainable and greener considerations for future urban development planning and design. In this study, we use both the TOD and Green Urbanism concepts to carry out the study of built environment planning and design. The Fuzzy Delphi Technique (FDT) is utilized to screen suitable criteria of green TOD. Furthermore, the Fuzzy Analytic Network Process (FANP) and Quality Function Deployment (QFD) are then applied to evaluate the criteria and prioritize the alternatives. The study results can be regarded as future guidelines for built environment planning and design under green TOD development in Taiwan.
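
As an illustration of the Fuzzy Delphi screening step (a common formulation with triangular fuzzy ratings; the aggregation rule, rating scale and threshold below are assumptions, not necessarily those used in the paper):

import numpy as np

def fuzzy_delphi_screen(ratings, threshold=6.0):
    """Each expert rates a candidate criterion with a triangular fuzzy number (l, m, u);
    ratings are aggregated per criterion as (min l, geometric mean of m, max u),
    defuzzified by the simple average, and criteria scoring at least `threshold` are kept."""
    kept = {}
    for criterion, triples in ratings.items():
        arr = np.asarray(triples, dtype=float)
        l = arr[:, 0].min()
        m = np.exp(np.log(arr[:, 1]).mean())
        u = arr[:, 2].max()
        score = (l + m + u) / 3.0
        kept[criterion] = (round(score, 2), score >= threshold)
    return kept

# Toy example: three experts rate two candidate green-TOD criteria on a 0-10 scale
ratings = {
    "mixed land use": [(6, 8, 9), (5, 7, 9), (7, 9, 10)],
    "parking supply": [(2, 4, 6), (3, 5, 6), (2, 3, 5)],
}
print(fuzzy_delphi_screen(ratings))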

Keywords: Green transit-oriented development, built environment, fuzzy Delphi technique, quality function deployment, fuzzy analytic network process.

2273 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. Therefore, the accuracy of the stress computation in FEM models based on the displacement technique is obviously a matter of concern for computational time in the shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in the stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element; some points have also been located on the boundary of the element where the stresses are fairly accurate compared to the nodal values. It is subsequently concluded that unique points exist within the element where the stresses have higher accuracy than at other points in the element. Therefore, the main aim is to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify the same with results from numerical experimentation. The results of quadratic 9-noded serendipity elements are presented and the locations of the distinct optimal stress points are determined inside the element as well as on its boundaries. The theoretical results indicate that the optimal stress locations are, in local coordinates, at the origin and at a distance of 0.577 from the origin in both directions. Also, at the boundaries, the optimal stress locations are at the midpoints of the element boundary and at a distance of 0.577 from the origin in both directions. The above findings were verified and confirmed through numerical experimentation. For the numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared with those obtained using 25-noded quadratic Lagrangian elements of the same order, which are considered the standard. Root mean square errors were then plotted with respect to various locations within the elements as well as at the boundaries, and conclusions were drawn. After numerical verification, it is noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated along the boundary line enclosed by the 0.577 points around the midpoints are also very good, and the error found is very small. When the sampling points move away from these points, the error in that zone increases rapidly. Thus, it is established that there are unique points on the boundary of the element where the stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
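
For context (an observation added here, not a statement from the paper), the reported coordinate 0.577 coincides with the two-point Gauss-Legendre abscissa 1/√3, the classical superconvergent (Barlow) stress sampling coordinate for quadratic elements:

\xi = \pm\frac{1}{\sqrt{3}} \approx \pm 0.577,
\qquad
\int_{-1}^{1} f(\xi)\, d\xi \;\approx\; f\!\left(-\tfrac{1}{\sqrt{3}}\right) + f\!\left(+\tfrac{1}{\sqrt{3}}\right).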

Keywords: Finite element, Lagrangian, optimal stress location, serendipity.

2272 Mean-Variance Optimization of Portfolios with Return of Premium Clauses in a DC Pension Plan with Multiple Contributors under Constant Elasticity of Variance Model

Authors: Bright O. Osu, Edikan E. Akpanibah, Chidinma Olunkwa

Abstract:

In this paper, the mean-variance optimization of portfolios with return of premium clauses in a defined contribution (DC) pension plan with multiple contributors under the constant elasticity of variance (CEV) model is studied. Return clauses that permit the accumulated wealth of members who die to be claimed are considered; the remaining wealth is not distributed equally among the remaining members, as is assumed in the literature. We assume that, before investment, the surplus, which includes the funds of members who died after retirement, is added to the total wealth. Next, we consider investments in a risk-free asset and a risky asset to meet the expected returns of the remaining members, and obtain an optimization problem with the help of the extended Hamilton-Jacobi-Bellman equation. We obtain the optimal investment strategies for the two assets and the efficient frontier of the members by using a stochastic optimal control technique. Furthermore, we study the effect of the various parameters on the optimal investment strategies and the effect of the risk-aversion level on the efficient frontier. We observe, firstly, that the optimal investment strategy is the same as in the literature and, secondly, that the surplus decreases the proportion of the wealth invested in the risky asset.
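
For reference, a common parameterization of the risky-asset dynamics under the CEV model and of the mean-variance criterion in this strand of the DC-pension literature is the following (the notation is illustrative and may differ from the paper's):

dS_t = \mu S_t\, dt + k S_t^{\beta + 1}\, dW_t, \qquad \beta = 0 \text{ recovering geometric Brownian motion},

\sup_{\pi}\; \left\{ \mathbb{E}\big[X_T^{\pi}\big] - \frac{\gamma}{2}\, \operatorname{Var}\big[X_T^{\pi}\big] \right\},

where X_T^{\pi} is the terminal fund wealth under investment strategy \pi and \gamma is the risk-aversion level.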

Keywords: DC pension fund, Hamilton Jacobi Bellman equation, optimal investment strategies, stochastic optimal control technique, return of premiums clauses, mean-variance utility.
