Search results for: PWM switching technique
327 The Significance of Embodied Energy in Certified Passive Houses
Authors: Robert H. Crawford, André Stephan
Abstract:
Certifications such as the Passive House Standard aim to reduce the final space heating energy demand of residential buildings. Space conditioning, notably heating, is responsible for nearly 70% of final residential energy consumption in Europe. There is therefore significant scope for the reduction of energy consumption through improvements to the energy efficiency of residential buildings. However, these certifications totally overlook the energy embodied in the building materials used to achieve this greater operational energy efficiency. The large amount of insulation and the triple-glazed high efficiency windows require a significant amount of energy to manufacture. While some previous studies have assessed the life cycle energy demand of passive houses, including their embodied energy, these rely on incomplete assessment techniques which greatly underestimate embodied energy and can lead to misleading conclusions. This paper analyses the embodied and operational energy demands of a case study passive house using a comprehensive hybrid analysis technique to quantify embodied energy. Results show that the embodied energy is much more significant than previously thought. Also, compared to a standard house with the same geometry, structure, finishes and number of people, a passive house can use more energy over 80 years, mainly due to the additional materials required. Current building energy efficiency certifications should widen their system boundaries to include embodied energy in order to reduce the life cycle energy demand of residential buildings.
Keywords: Embodied energy, Hybrid analysis, Life cycle energy analysis, Passive house.
326 Impact of the Electricity Market Prices on Energy Storage Operation during the COVID-19 Pandemic
Authors: Marin Mandić, Elis Sutlović, Tonći Modrić, Luka Stanić
Abstract:
With the restructuring and deregulation of the power system, storage owners, generation companies and private producers can offer multiple services and earn income in different types of power markets, such as the day-ahead, real-time and ancillary services markets. During the COVID-19 pandemic, electricity prices, as well as ancillary services prices, increased significantly. The operation of energy storage was optimized using a suitable model for simulating the operation of a pumped storage hydropower plant under market conditions. The objective function maximizes the income earned through energy arbitrage, regulation-up, regulation-down and spinning reserve services. The optimization technique used for solving the objective function is mixed integer linear programming (MILP). In the numerical examples, the pumped storage hydropower plant operation has been optimized using the hourly electricity market prices achieved on Nord Pool for the pre-pandemic year (2019) and the pandemic years (2020 and 2021). The impact of the electricity market prices during the COVID-19 pandemic on energy storage operation is shown through an analysis of income, operating hours, reserved capacity and consumed energy for each service. The results indicate the role of energy storage during significant fluctuations in electricity and services prices.
Keywords: Electrical market prices, electricity market, energy storage optimization, mixed integer linear programming, MILP, optimization.
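As a rough illustration of the MILP formulation described above, the sketch below sets up a toy pumped-storage arbitrage problem in the PuLP library; the hourly prices, capacities and round-trip efficiency are invented and do not come from the study.

```python
# Minimal MILP energy-arbitrage sketch (illustrative only): a pumped-storage plant
# pumps at low prices and generates at high prices, subject to a simple reservoir
# balance. Prices, capacities and efficiency below are hypothetical.
import pulp

prices = [30, 25, 28, 60, 75, 80, 55, 40]   # EUR/MWh, hypothetical hourly prices
P_max, E_max, eff = 100.0, 400.0, 0.75      # MW, MWh, round-trip efficiency

prob = pulp.LpProblem("pumped_storage_arbitrage", pulp.LpMaximize)
gen  = [pulp.LpVariable(f"gen_{t}",  0, P_max) for t in range(len(prices))]
pump = [pulp.LpVariable(f"pump_{t}", 0, P_max) for t in range(len(prices))]
mode = [pulp.LpVariable(f"mode_{t}", cat="Binary") for t in range(len(prices))]  # 1 = generating

# Income from selling generated energy minus cost of pumping energy
prob += pulp.lpSum(prices[t] * (gen[t] - pump[t]) for t in range(len(prices)))

storage = E_max / 2                          # start half full
for t in range(len(prices)):
    prob += gen[t]  <= P_max * mode[t]       # cannot pump and generate at once
    prob += pump[t] <= P_max * (1 - mode[t])
    storage = storage + eff * pump[t] - gen[t]
    prob += storage >= 0                     # reservoir bounds
    prob += storage <= E_max

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("income:", pulp.value(prob.objective))
```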
325 Indian License Plate Detection and Recognition Using Morphological Operation and Template Matching
Authors: W. Devapriya, C. Nelson Kennedy Babu, T. Srihari
Abstract:
Automatic License Plate Recognition (ALPR) is a technology that recognizes the registration plate, number plate or license plate of a vehicle. In this paper, an Indian vehicle number plate is extracted and its characters are recognized in an efficient manner. ALPR involves four major stages: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, removes noise and enhances the quality of the image using morphological operations and image subtraction. The second and most challenging phase ascertains the location of the license plate using Canny edge detection, dilation and erosion. In the third phase, individual characters are segmented using a Connected Component Approach (CCA), and in the final phase, each segmented character is recognized by cross-correlation template matching, a scheme particularly appropriate for fixed formats. Major applications of ALPR include toll collection, border control, parking, stolen car detection, law enforcement, access control and traffic control. A database of 500 car images taken under dissimilar lighting conditions is used. The efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle conforms to the road transport and highways standard).
Keywords: Automatic License plate recognition, Character recognition, Number plate Recognition, Template matching, morphological operation, canny edge detection.
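The four stages listed above map naturally onto standard OpenCV calls. The sketch below is a minimal, hedged illustration: the image path, kernel sizes, thresholds and the template dictionary are placeholders, not the parameters used in the paper.

```python
# Sketch of the four ALPR stages described above using OpenCV.
# Paths, kernel sizes, thresholds and templates are illustrative placeholders.
import cv2
import numpy as np

img  = cv2.imread("car.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) Pre-processing: morphological opening + image subtraction to suppress background
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
enhanced = cv2.subtract(gray, opened)

# 2) Plate localization: Canny edges, then dilation/erosion to merge edge regions
edges = cv2.Canny(enhanced, 100, 200)
edges = cv2.dilate(edges, None, iterations=2)
edges = cv2.erode(edges, None, iterations=1)
cnts, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
plate = gray[y:y + h, x:x + w]

# 3) Character segmentation via connected components
_, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
chars = [binary[t:t + hh, l:l + ww]
         for l, t, ww, hh, area in stats[1:] if area > 50]

# 4) Recognition by normalized cross-correlation template matching
def recognize(ch, templates):                     # templates: dict label -> 20x40 image
    ch = cv2.resize(ch, (20, 40))
    scores = {k: cv2.matchTemplate(ch, tpl, cv2.TM_CCORR_NORMED).max()
              for k, tpl in templates.items()}
    return max(scores, key=scores.get)
```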
324 Boundary Layer Flow of a Casson Nanofluid past a Vertical Exponentially Stretching Cylinder in the Presence of a Transverse Magnetic Field with Internal Heat Generation/Absorption
Authors: G. Sarojamma, K. Vendabai
Abstract:
An analysis is carried out to investigate the effect of magnetic field and heat source on the steady boundary layer flow and heat transfer of a Casson nanofluid over a vertical cylinder stretching exponentially along its radial direction. Using a similarity transformation, the governing mathematical equations with the boundary conditions are reduced to a system of coupled, non-linear ordinary differential equations. The resulting system is solved numerically by the fourth-order Runge-Kutta scheme with a shooting technique. The influence of various physical parameters such as Reynolds number, Prandtl number, magnetic field, Brownian motion parameter, thermophoresis parameter, Lewis number and the natural convection parameter is presented graphically and discussed for the non-dimensional velocity, temperature and nanoparticle volume fraction. Numerical data for the skin-friction coefficient, local Nusselt number and local Sherwood number have been tabulated for various parametric conditions. It is found that the local Nusselt number is a decreasing function of the Brownian motion parameter Nb and the thermophoresis parameter Nt.
Keywords: Casson nanofluid, Boundary layer flow, Internal heat generation/absorption, Exponentially stretching cylinder, Heat transfer, Brownian motion, Thermophoresis.
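The coupled Casson nanofluid equations are too long to reproduce here, so the sketch below demonstrates the same Runge-Kutta shooting idea on the classical Blasius boundary-layer equation, a simplified stand-in for the system actually solved in the paper.

```python
# Shooting technique with a Runge-Kutta integrator, illustrated on the classical
# Blasius boundary-layer equation f''' + 0.5*f*f'' = 0, f(0)=f'(0)=0, f'(inf)=1.
# This is a simplified stand-in for the coupled Casson nanofluid system.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):                       # y = [f, f', f'']
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s, eta_inf=10.0):
    # integrate with a guessed wall curvature f''(0)=s and check f'(eta_inf)-1
    sol = solve_ivp(rhs, [0, eta_inf], [0.0, 0.0, s], method="RK45", rtol=1e-8)
    return sol.y[1, -1] - 1.0

s_star = brentq(residual, 0.1, 1.0)    # shoot on the missing initial condition
print("f''(0) =", s_star)              # ~0.332, the classical Blasius value
```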
323 Modal Analysis of Machine Tool Column Using Finite Element Method
Authors: Migbar Assefa
Abstract:
The performance of a machine tool is ultimately assessed by its ability to produce a component of the required geometry in minimum time and at small operating cost. It is customary to base the structural design of any machine tool primarily upon the requirements of static rigidity and minimum natural frequency of vibration. The operating properties of machines, such as cutting speed, feed and depth of cut, as well as the size of the workpiece, also have to be kept in mind by a machine tool structural designer. This paper presents a novel approach to the design of a machine tool column for static and dynamic rigidity requirements. Model evaluation is done effectively through the general-purpose finite element analysis software ANSYS. Studies on the machine tool column are used to illustrate a finite element based concept evaluation technique. This paper also presents results obtained from computations on thin-walled box-type columns subjected to torsional and bending loads in the static analysis, as well as results from the modal analysis. The columns analyzed are square- and rectangle-based tapered open columns, and columns with cover plates, horizontal partitions and apertures. In total, 70 columns were analyzed for bending, torsional and modal behaviour. It is observed that the orientation and aspect ratio of apertures have no significant effect on the static and dynamic rigidity of the machine tool structure.
Keywords: Finite Element Modeling, Modal Analysis, Machine tool structure, Static Analysis.
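The ANSYS models themselves cannot be reproduced here; the sketch below only illustrates the generalized eigenvalue problem underlying a modal analysis, using a small lumped-parameter column with assumed stiffness and mass values.

```python
# Modal analysis reduces to the generalized eigenproblem K*phi = w^2 * M*phi.
# Below, a 3-DOF lumped-mass stand-in for a machine tool column (values assumed).
import numpy as np
from scipy.linalg import eigh

k = 2.0e8   # N/m, assumed storey stiffness
m = 500.0   # kg, assumed lumped mass

K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
M = m * np.eye(3)

w2, phi = eigh(K, M)                   # eigenvalues are squared circular frequencies
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
print("first mode shape:", np.round(phi[:, 0] / np.abs(phi[:, 0]).max(), 3))
```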
322 Influence of Laminated Textile Structures on Mechanical Performance of NF-Epoxy Composites
Authors: A. R. Azrin Hani, R. Ahmad, M. Mariatti
Abstract:
Textile structures are engineered and fabricated to meet worldwide structural applications. Nevertheless, research varying the textile structure of natural fibre composite reinforcement has been very limited; most research focuses on short fibres and random discontinuous orientation of the reinforcement. Recognizing that natural fibre (NF) composites have been widely developed as replacements for synthetic fibre composites, this research examines the influence of woven and cross-ply laminated structures on their mechanical performance. Laminated natural fibre composites were developed using hand lay-up and vacuum bagging techniques. Impact and flexural strength were investigated as a function of fibre type (coir and kenaf) and reinforcement structure (imbalanced plain woven, 0°/90° cross-ply and +45°/-45° cross-ply). A multi-level full factorial design of experiment (DOE) and analysis of variance (ANOVA) were employed to determine how the fibre type and reinforcement structure parameters affect the mechanical properties of the composites. This systematic experimentation led to the identification of the significant factors that predominantly influence the impact and flexural properties of the textile composites. Both fibre type and reinforcement structure produced significantly different results. Overall, the coir composite and the woven structure exhibited better impact and flexural strength, while the cross-ply composite structure demonstrated better fracture resistance.
Keywords: Cross-ply composite, Flexural strength, Impact strength, Textile natural fibre composite, Woven composite.
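A minimal sketch of the two-factor full-factorial ANOVA described above follows; the flexural strength values are fabricated purely to show the analysis structure.

```python
# Two-factor full-factorial ANOVA (fibre type x reinforcement structure),
# mirroring the DOE/ANOVA analysis described above. Strength values are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "fibre":     ["coir"] * 6 + ["kenaf"] * 6,
    "structure": ["woven", "woven", "cross0_90", "cross0_90", "cross45", "cross45"] * 2,
    "flexural":  [58, 61, 52, 50, 54, 55, 47, 49, 44, 42, 45, 46],   # MPa, fabricated
})

model = ols("flexural ~ C(fibre) * C(structure)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))      # p-values indicate the significant factors
```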
321 A Novel Hopfield Neural Network for Perfect Calculation of Magnetic Resonance Spectroscopy
Authors: Hazem M. El-Bakry
Abstract:
In this paper, an algorithm is presented for the automatic determination of the nuclear magnetic resonance (NMR) spectra of metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main improvement of this paper over the work in the literature [30] is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can process complex-valued signals perfectly without any additional computation steps, a valuable advantage as NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)" that takes into account the damping rate of the NMR signal is developed to be faster than that presented in [30]. Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local minimum problem in Hopfield neural networks. Moreover, the performance of the proposed method is evaluated, and there is no effect on the performance of the calculations when using the modified Hopfield neural networks.
Keywords: Hopfield Neural Networks, Cross Correlation, Nuclear Magnetic Resonance, Magnetic Resonance Spectroscopy, Fast Fourier Transform.
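The speed-up mentioned above rests on computing cross correlation in the frequency domain. The sketch below verifies the equivalence of the direct and FFT-based circular cross-correlation on an arbitrary input/weight pair.

```python
# Cross correlation computed in the frequency domain (the speed-up used above):
# corr(x, w) = IFFT( FFT(x) * conj(FFT(w)) ) for circular correlation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)          # input values (arbitrary test signal)
w = rng.standard_normal(1024)          # input weights

# Direct (O(N^2)) circular cross-correlation
direct = np.array([np.sum(x * np.roll(w, k)) for k in range(len(x))])

# Frequency-domain (O(N log N)) equivalent
fast = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w))))

print(np.allclose(direct, fast))       # True: both give the same correlation
```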
320 Application of RS and GIS Technique for Identifying Groundwater Potential Zone in Gomukhi Nadhi Sub Basin, South India
Authors: Punitha Periyasamy, Mahalingam Sudalaimuthu, Sachikanta Nanda, Arasu Sundaram
Abstract:
India holds 17.5% of the world's population but only 2% of its total geographical area, and 27.35% of that area is categorized as wasteland due to a lack of, or scarce, groundwater. There is therefore excessive demand for groundwater for agricultural and non-agricultural activities to balance the country's growth. With this in mind, an attempt is made to delineate the groundwater potential zones in the Gomukhi Nadhi sub basin of the Vellar River basin, Tamil Nadu, India, covering an area of 1146.6 sq. km and consisting of 9 blocks from Peddanaickanpalayam to Virudhachalam. Thematic maps of geology, geomorphology, lineaments, land use and land cover, and drainage were prepared for the study area using IRS P6 data. Collateral data, including rainfall, water level and soil maps, were collected for analysis and inference. The digital elevation model (DEM) was generated from Shuttle Radar Topography Mission (SRTM) data, and the slope of the study area was derived from it. ArcGIS 10.1 acts as a powerful spatial analysis tool to identify the groundwater potential zones in the study area by means of weighted overlay analysis. Each individual parameter of the thematic maps was ranked and weighted in accordance with its influence on raising the groundwater level. The potential zones in the study area are classified as Very Good, Good, Moderate and Poor, with areal extents of 15.67, 381.06, 575.38 and 174.49 sq. km, respectively.
Keywords: ArcGIS, DEM, Groundwater, Recharge, Weighted Overlay.
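A numpy sketch of the weighted overlay step follows; the ranks, weights and class breaks are placeholders rather than the values assigned in the study.

```python
# Weighted overlay analysis sketch: rank each thematic layer, apply layer weights,
# sum, and slice the score into groundwater-potential classes.
# Ranks, weights and class breaks below are placeholders, not the study's values.
import numpy as np

shape = (200, 200)                        # toy raster grid
rng = np.random.default_rng(1)
layers = {                                # reclassified rank rasters (1 = poor ... 4 = very good)
    "geology":       rng.integers(1, 5, shape),
    "geomorphology": rng.integers(1, 5, shape),
    "lineament":     rng.integers(1, 5, shape),
    "landuse":       rng.integers(1, 5, shape),
    "drainage":      rng.integers(1, 5, shape),
    "slope":         rng.integers(1, 5, shape),
}
weights = {"geology": 0.25, "geomorphology": 0.25, "lineament": 0.15,
           "landuse": 0.10, "drainage": 0.15, "slope": 0.10}

score = sum(weights[name] * layers[name].astype(float) for name in layers)
classes = np.digitize(score, bins=[1.75, 2.5, 3.25])   # 0 = Poor ... 3 = Very Good
labels = ["Poor", "Moderate", "Good", "Very Good"]
for i, lab in enumerate(labels):
    print(lab, "cells:", int(np.sum(classes == i)))
```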
319 Stages of Changes for Physical Activity among Iranian Adolescent Girls
Authors: Ashraf Pirasteh, Alireza Hidarnia, Ali Asghari, Soghrate Faghihzadeh, Fazlollah Ghofranipour
Abstract:
Background: Regular physical activity contributes positively to physical and psychological health. In the present study, the stages of change for physical activity and the total physical activity of adolescent girls were examined. Aims: The aim of this study was to investigate the proportion of adolescent girls in each stage of change and the causative factors associated with physical activity, such as the related social support and self-efficacy, in a sample of high school students. Methods: In this study, Social Cognitive Theory (SCT) and the Transtheoretical Model (TTM) guided instrument development. Data on demographics, psychosocial determinants of physical activity, stage of change and physical activity were gathered by questionnaires. Several measures of psychosocial determinants of physical activity were translated from English into Persian using the back-translation technique. These translated measures were administered to 512 ninth- and tenth-grade Iranian high school students for factor analysis. Results: The distribution of the stages of change for physical activity was as follows: 18.5% in precontemplation, 23.4% in contemplation, 38.2% in preparation, 4.6% in action and 15.3% in maintenance. Overall, 80.1% were in the pre-adoption stages (precontemplation, contemplation and preparation) and 19.9% in the post-adoption stages (action and maintenance) of physical activity. There was a significant relationship between age and physical activity in adolescent girls (an age-related decline in physical activity), p < 0.0001. Conclusion: The findings of the present study can contribute to improving health behaviors and to the administration of health promotion programs in adolescent populations.
Keywords: Adolescent, Iranian girls, Physical activity, Stages of change
318 High Temperature Deformation Behavior of Cr-containing Superplastic Iron Aluminide
Authors: Seok Hong Min, Woo Young Jung, Tae Kwon Ha
Abstract:
Superplastic deformation and high temperature load relaxation behavior of coarse-grained iron aluminides with the composition Fe-28 at.% Al have been investigated. A series of load relaxation and tensile tests were conducted at temperatures ranging from 600 to 850 °C. The flow curves obtained from the load relaxation tests were found to have a sigmoidal shape and to exhibit stress vs. strain rate data over a very wide strain rate range, from 10⁻⁷/s to 10⁻²/s. Tensile tests were conducted at various initial strain rates ranging from 3×10⁻⁵/s to 1×10⁻²/s. A maximum elongation of ~500% was obtained at the initial strain rate of 3×10⁻⁵/s, and the maximum strain rate sensitivity was found to be 0.68 at 850 °C in the binary Fe-28Al alloy. Microstructure observation through optical microscopy (OM) and the electron back-scattered diffraction (EBSD) technique was carried out on the deformed specimens and revealed evidence of grain boundary migration and grain refinement during superplastic deformation, suggesting a dynamic recrystallization mechanism. The addition of 5 at.% Cr appeared to deteriorate the superplasticity of the binary iron aluminide. By applying the internal variable theory of structural superplasticity, the addition of Cr was shown to lower the contribution of the frictional resistance to dislocation glide during high temperature deformation of the Fe3Al alloy.
Keywords: Iron aluminide (Fe3Al), large grain size, structural superplasticity, dynamic recrystallization, chromium (Cr).
317 Preparation of Fe3Si/Ferrite Micro- and Nano-Powder Composite
Authors: R. Bures, M. Streckova, M. Faberova, P. Kurek
Abstract:
A composite material based on Fe3Si micro-particles and Mn-Zn nano-ferrite was prepared using powder metallurgy technology. A sol-gel process followed by autocombustion was used for the synthesis of the Mn0.8Zn0.2Fe2O4 ferrite. 3 wt.% of mechanically milled ferrite was mixed with the Fe3Si powder alloy. The mixed micro-nano powder system was homogenized by resonant acoustic mixing using a Resodyn LabRAM mixer; this non-invasive homogenization technique was used to preserve the spherical morphology of the Fe3Si powder particles. Uniaxial cold pressing in a closed die at a pressure of 600 MPa was applied to obtain a compact sample. Microwave sintering of the green compact was carried out at 800 °C for 20 minutes in air. The density of the powders and of the composite was measured by He pycnometry. The impulse excitation method was used to measure the elastic properties of the sintered composite. Mechanical properties were evaluated by measurement of the transverse rupture strength (TRS) and Vickers hardness (HV). Resistivity was measured by the 4-point probe method. The ferrite phase distribution in the volume of the composite was documented by metallographic analysis. It has been found that nano-ferrite particles distributed among the micro-particles of the Fe3Si powder alloy led to a high relative density (~93%) and suitable mechanical properties (TRS > 100 MPa, HV ~1 GPa, E-modulus ~140 GPa) of the composite. The high electrical resistivity (~6.7 Ω·cm) of the prepared composite indicates its potential application as a soft magnetic material at medium and high frequencies.
Keywords: Micro- and nano-composite, soft magnetic materials, microwave sintering, mechanical and electric properties.
316 A Propagator Method like Algorithm for Estimation of Multiple Real-Valued Sinusoidal Signal Frequencies
Authors: Sambit Prasad Kar, P.Palanisamy
Abstract:
In this paper a novel method is postulated for estimating the frequencies of multiple one-dimensional real-valued sinusoidal signals in the presence of additive Gaussian noise. A computationally simple frequency estimation method with efficient statistical performance is attractive in many array signal processing applications. The prime focus of this paper is to combine a subspace-based technique with a simple peak search approach. This paper presents a variant of the Propagator Method (PM), where a collaborative approach of SUMWE and the propagator method is applied in order to estimate multiple real-valued sine wave frequencies. A new data model is proposed in which the dimension of the signal subspace is equal to the number of frequencies present in the observation, whereas the signal subspace dimension is twice the number of frequencies in the conventional MUSIC method for estimating the frequencies of real-valued sinusoidal signals. The statistical analysis of the proposed method is studied, and an explicit expression for the asymptotic (large-sample) mean-squared error (MSE), or variance of the estimation error, is derived. The performance of the method is demonstrated, and the theoretical analysis is substantiated through numerical examples. The proposed method achieves sustainably high estimation accuracy and frequency resolution at lower SNR, which is verified by simulation and by comparison with the conventional MUSIC, ESPRIT and propagator methods.
Keywords: Frequency estimation, peak search, subspace-based method without eigen decomposition, quadratic convex function.
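The propagator-based estimator itself is not spelled out in the abstract, so the sketch below implements the conventional eigendecomposition-based MUSIC baseline that the paper compares against, applied to real-valued sinusoids in noise.

```python
# Conventional MUSIC baseline for real-valued sinusoid frequency estimation:
# form a covariance matrix from the samples, split signal/noise subspaces by
# eigendecomposition, and peak-search the pseudospectrum. (The paper's propagator
# method avoids the eigendecomposition; this sketch shows only the baseline.)
import numpy as np
from scipy.signal import find_peaks

fs, N = 1000.0, 512
freqs_true = [110.0, 240.0]                       # Hz, two real sinusoids
t = np.arange(N) / fs
rng = np.random.default_rng(2)
x = sum(np.cos(2 * np.pi * f * t) for f in freqs_true) + 0.3 * rng.standard_normal(N)

m = 40                                            # covariance (subarray) dimension
X = np.array([x[i:i + m] for i in range(N - m)])  # sliding windows
R = X.T @ X / X.shape[0]

w, V = np.linalg.eigh(R)
En = V[:, : m - 2 * len(freqs_true)]              # noise subspace (2 eigvecs per real sinusoid)

grid = np.linspace(1.0, fs / 2 - 1.0, 4000)
a = np.exp(-2j * np.pi * np.outer(grid / fs, np.arange(m)))   # steering vectors
pseudo = 1.0 / np.sum(np.abs(a @ En) ** 2, axis=1)

pk, _ = find_peaks(pseudo)
best = pk[np.argsort(pseudo[pk])[-2:]]            # the two strongest spectral peaks
print("estimated frequencies [Hz]:", np.sort(np.round(grid[best], 1)))
```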
315 A Growing Neural Gas Approach for Evaluating Quality of Software Modules
Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur
Abstract:
The prediction of software quality during the development life cycle of a software project helps the development organization to make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models described in the literature are based upon genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising approaches for quality prediction is based on clustering techniques. Most quality prediction models based on clustering techniques use the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithm for prediction. In all these techniques a predefined structure is required, that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm, which produces an initial set of labeled clusters from the training data set; this set of clusters is then used to predict the quality of the test set of software modules. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers in evaluating the quality of modules during software development.
Keywords: Growing Neural Gas, data clustering, fault prediction.
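A full Growing Neural Gas implementation is too long to include here; the sketch below shows only the surrounding cluster-then-label workflow, with scikit-learn's KMeans standing in for GNG and toy software metrics in place of real module data.

```python
# Cluster-then-label fault prediction workflow (GNG itself is not reproduced here;
# scikit-learn's KMeans is used as a stand-in clusterer for illustration).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X_train = rng.standard_normal((200, 5))                        # software metrics (toy data)
y_train = (X_train[:, 0] + X_train[:, 1] > 0.8).astype(int)    # 1 = faulty (toy rule)
X_test = rng.standard_normal((50, 5))
y_test = (X_test[:, 0] + X_test[:, 1] > 0.8).astype(int)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_train)

# Label every cluster by majority vote of its training modules
cluster_label = np.array([
    np.round(y_train[km.labels_ == c].mean()) if np.any(km.labels_ == c) else 0
    for c in range(km.n_clusters)
]).astype(int)

# A test module inherits the label of its nearest cluster centre
y_pred = cluster_label[km.predict(X_test)]
print("accuracy:", np.mean(y_pred == y_test))
```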
314 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end user development methods, programming could become available to everyone, enabling end users to program their own devices and extend the functionality of existing systems without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system as well as Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM, which uses natural language for end user software engineering to overcome the present bottleneck of professional developers.
Keywords: Natural language processing, end user development, natural language interfaces, human computer interaction, data recognition, dialog systems, spreadsheet.
313 Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique
Authors: Jaturong Som-ard
Abstract:
Heavy rainfall from 3rd to 22nd January 2017 inundated much of Ranot district in southern Thailand, causing substantial economic and social losses. The major objective of this study is to detect the flooding extent using Sentinel-1A data and to identify the number of damaged buildings. The data were collected at two stages: pre-flooding and during the flood event. Calibration, speckle filtering, geometric correction and histogram thresholding were performed on the data, based on intensity spectral values, to classify thematic maps. The maps were used to identify the flooding extent using change detection, along with buildings digitized and collected on the JOSM desktop. The number of damaged buildings was counted within the flooding extent with respect to the building data. The total flooded area was observed to be 181.45 sq. km, occurring mostly in Ban Khao, Ranot, Takhria and Phang Yang sub-districts, respectively. The Ban Khao sub-district was affected more than the others because it lies at a lower altitude and closer to the Thale Noi and Thale Luang lakes. The numbers of damaged buildings were highest in Khlong Daen (726 features), Tha Bon (645 features) and Ranot (604 features) sub-districts, respectively. The final flood extent map should be very useful for the planning, prevention and management of flood-prone areas, and the map of building damage can be used for quick response, recovery and mitigation in the affected areas by the organizations concerned.
Keywords: Flooding extent, Sentinel-1A data, JOSM desktop, damaged buildings.
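A numpy sketch of the thresholding/change-detection step and the building count inside the flood extent follows; the backscatter arrays, threshold and building coordinates are synthetic stand-ins for the calibrated Sentinel-1A scenes and the digitized buildings.

```python
# Change-detection sketch: threshold pre-flood and during-flood SAR intensity
# (low backscatter ~ open water), take newly flooded pixels, then count buildings
# that fall inside the flood mask. All arrays below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(4)
pre    = rng.normal(-12.0, 3.0, (500, 500))     # sigma0 [dB], pre-flood scene
during = pre.copy()
during[150:300, 100:350] = rng.normal(-20.0, 1.5, (150, 250))  # inundated patch

THRESH = -17.0                                  # water threshold from histogram analysis
flooded = (during < THRESH) & (pre >= THRESH)   # water now, not water before

pixel_area_km2 = (10 * 10) / 1e6                # assuming 10 m pixels
print("flooded area [km2]:", flooded.sum() * pixel_area_km2)

buildings_rc = rng.integers(0, 500, (1000, 2))  # building centroids in pixel coords
damaged = flooded[buildings_rc[:, 0], buildings_rc[:, 1]].sum()
print("buildings inside flood extent:", int(damaged))
```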
312 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor which mimics experts' knowledge of wood texture to extract the properties of the pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. A backpropagation neural network is then used to classify the wood species based on these statistical features. A comprehensive set of experiments on a database of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
Keywords: Classification, fuzzy, inspection system, image analysis.
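The fuzzy rules themselves are not given in the abstract, so the sketch below covers only the pore extraction step, labelling dark pore regions in a binarized texture with scipy.ndimage and computing a few of the distribution statistics such a feature vector could contain.

```python
# Pore-extraction sketch: label dark pore regions in a binarized wood image and
# compute simple distribution statistics (the fuzzy pores-management stage and
# the paper's 38-feature vector are not reproduced here).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
texture = rng.normal(0.6, 0.1, (256, 256))          # synthetic grey-level image
for _ in range(40):                                  # stamp some dark "pores"
    r, c = rng.integers(10, 246, 2)
    texture[r - 3:r + 3, c - 3:c + 3] = 0.1

pores = texture < 0.3                                # threshold for dark pores
labels, n = ndimage.label(pores)
sizes = ndimage.sum(pores, labels, index=range(1, n + 1))
centroids = np.array(ndimage.center_of_mass(pores, labels, range(1, n + 1)))

features = {
    "pore_count": n,
    "mean_pore_area": float(sizes.mean()),
    "pore_area_std": float(sizes.std()),
    "mean_nearest_spacing": float(np.mean(
        [np.sort(np.linalg.norm(centroids - c, axis=1))[1] for c in centroids])),
}
print(features)
```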
311 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography
Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi
Abstract:
Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography which produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, called the forward model and the inverse model. The forward model describes light propagation in a biological medium. The inverse model uses the scattered light to recover the optical parameters of human tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, the optical properties of the soft tissue, such as the absorption coefficient, scattering coefficient and optical flux, are processed by the standard regularization technique called Levenberg-Marquardt regularization. The reconstruction algorithms Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) are used to reconstruct the image of human soft tissue for tumour detection. Among these algorithms, the Split Bregman method provides better performance than the GPSR algorithm. Parameters such as signal to noise ratio (SNR), contrast to noise ratio (CNR), relative error (RE) and the CPU time for reconstructing images are analyzed to assess performance.
Keywords: Diffuse optical tomography, ill-posedness, Levenberg Marquardt method, Split Bregman, the Gradient projection for sparse reconstruction.
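The DOT forward model is too involved to reproduce, so the sketch below shows only the Levenberg-Marquardt regularization step named above, applied to a toy nonlinear inverse problem with a damped Gauss-Newton update.

```python
# Levenberg-Marquardt iteration sketch on a toy nonlinear inverse problem:
# recover parameters p of an exponential decay from noisy data by solving
# (J^T J + lam*I) dp = J^T r at each step. The DOT forward model is not reproduced.
import numpy as np

t = np.linspace(0, 5, 60)
def forward(p):                         # toy "forward model"
    return p[0] * np.exp(-p[1] * t)

rng = np.random.default_rng(6)
p_true = np.array([2.0, 1.3])
y = forward(p_true) + 0.02 * rng.standard_normal(t.size)

def jacobian(p):
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

p, lam = np.array([1.0, 0.5]), 1e-2
for _ in range(50):
    r = y - forward(p)
    J = jacobian(p)
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    if np.sum((y - forward(p + dp)) ** 2) < np.sum(r ** 2):
        p, lam = p + dp, lam * 0.7      # accept step, relax damping
    else:
        lam *= 2.0                      # reject step, increase damping
print("recovered parameters:", np.round(p, 3))
```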
310 Fuzzy Logic Based Improved Range Free Localization for Wireless Sensor Networks
Authors: Ashok Kumar, Vinod Kumar
Abstract:
Wireless Sensor Networks (WSNs) are used to monitor or observe vast inaccessible regions through the deployment of a large number of sensor nodes in the sensing area. For the majority of WSN applications, the collected data need to be combined with the geographic information of their origin to be useful for the user; information received from remote Sensor Nodes (SNs) that are several hops away from the base station/sink is meaningless without knowledge of its source. In addition, the location information of SNs can also be used to propose and develop new network protocols for WSNs to improve their energy efficiency and lifetime. In this paper, range free localization protocols for WSNs are proposed. The proposed protocols are based on the weighted centroid localization technique, where the edge weights of SNs are decided by applying fuzzy logic inference to the received signal strength and link quality between the nodes. The fuzzification is carried out using (i) Mamdani, (ii) Sugeno, and (iii) combined Mamdani-Sugeno fuzzy logic inference. Simulation results demonstrate that the proposed protocols provide better accuracy in node localization compared to conventional centroid based localization protocols, despite the presence of unintentional interference from radio frequency (RF) sources operating in the same frequency band.
Keywords: localization, range free, received signal strength, link quality indicator, Mamdani fuzzy logic inference, Sugeno fuzzy logic inference.
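A small sketch of the weighted centroid idea follows, with hand-coded membership functions for RSSI and link quality standing in for the Mamdani/Sugeno inference engines; anchor positions and readings are fabricated.

```python
# Weighted centroid localization sketch: edge weights for each anchor are derived
# from RSSI and link quality through simple hand-coded membership functions,
# standing in for the Mamdani/Sugeno fuzzy inference described above.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rssi = np.array([-55.0, -70.0, -80.0, -62.0])   # dBm, fabricated readings
lqi  = np.array([0.9, 0.6, 0.4, 0.8])           # normalized link quality

def mu_strong(r):          # membership of "strong signal": ramp from -90 to -50 dBm
    return np.clip((r + 90.0) / 40.0, 0.0, 1.0)

def mu_good_link(q):
    return np.clip(q, 0.0, 1.0)

# Rule "IF signal strong AND link good THEN weight high", evaluated with min()
weights = np.minimum(mu_strong(rssi), mu_good_link(lqi))
estimate = (weights[:, None] * anchors).sum(axis=0) / weights.sum()
print("estimated node position:", np.round(estimate, 2))
```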
309 Unsteady Transonic Aerodynamic Analysis for Oscillatory Airfoils using Time Spectral Method
Authors: Mohamad Reza. Mohaghegh, Majid. Malek Jafarian
Abstract:
This research proposes an algorithm for the simulation of time-periodic unsteady problems via the solution of the unsteady Euler and Navier-Stokes equations. This algorithm, called the Time Spectral method, uses a Fourier representation in time and hence solves for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). The mathematical tools used here are discrete Fourier transformations. By enforcing periodicity and using a Fourier representation in time, the method shows tremendous potential for reducing the computational cost compared to conventional time-accurate methods, leading to spectral accuracy. The accuracy and efficiency of this technique are verified by Euler and Navier-Stokes calculations for pitching airfoils. Because of the turbulent nature of the flow, the Baldwin-Lomax turbulence model has been used in the viscous flow analysis. The results obtained with the Time Spectral method are compared with experimental data, and they verify the small number of time intervals per pitching cycle required to capture the flow physics.
Keywords: Time Spectral Method, Time-periodic unsteady flow, Discrete Fourier transform, Pitching airfoil, Turbulent flow
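The core of the Time Spectral method is replacing the physical time derivative with a spectral differentiation operator over N time instances per period. The sketch below builds that operator from the discrete Fourier transform and checks it on a known periodic signal; it is a minimal illustration, not the flow solver itself.

```python
# Time Spectral sketch: for N time instances over one period T, the time derivative
# is replaced by a dense spectral operator D built from the DFT, du/dt ~ D @ u.
# Checked here on u(t) = sin(t), whose exact derivative is cos(t).
import numpy as np

N, T = 9, 2 * np.pi                       # odd number of instances per period
t = np.arange(N) * T / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers 0, 1, ..., -1

F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)   # DFT matrix
D = np.real(np.linalg.inv(F) @ np.diag(1j * 2 * np.pi * k / T) @ F)  # spectral d/dt

u = np.sin(t)
print(np.allclose(D @ u, np.cos(t)))      # True: spectral accuracy on a smooth signal
```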
308 Associated Map and Inter-Purchase Time Model for Multiple-Category Products
Authors: Ching-I Chen
Abstract:
The continued rise of e-commerce is the main driver of the rapid growth of global online purchasing. Consumers can buy nearly everything they want in a single online shopping session. Purchase behavior models that focus on a single product category are insufficient to describe online shopping behavior, so analysis of multi-category purchases is becoming more and more popular. For example, market basket analysis explores customers' tendency to buy across associated product categories. The information derived from market basket analysis facilitates cross-selling strategies and product recommendation systems.
To detect the association between different product categories, we use market basket analysis with the multidimensional scaling technique to build an associated map which describes how likely multiple product categories are to be bought at the same time. In addition, we build an inter-purchase time model for associated products to describe how likely a product is to be bought after its associated product has been bought. We classify the inter-purchase time behaviors of multi-category products into nine types and use a mixture regression model to integrate those behaviors under our assumptions about purchase sequences. Our sample data are from comScore, which provides a panelist-level database that captures the detailed browsing and buying behavior of internet users across the United States. We find that the inter-purchase time from books to movies is shorter than the inter-purchase time from movies to books. Based on the model analysis and empirical results, this research proposes managerial applications and recommendations.
Keywords: Multiple-category purchase behavior, inter-purchase time, market basket analysis.
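A sketch of the associated-map construction follows: pairwise lift computed from basket co-occurrence is converted to a dissimilarity and embedded with multidimensional scaling; the baskets are fabricated, not comScore data.

```python
# Associated-map sketch: pairwise lift from basket co-occurrence -> dissimilarity
# -> 2-D multidimensional scaling, so categories often bought together plot close.
# The baskets below are fabricated; the study uses comScore panel data.
import numpy as np
from sklearn.manifold import MDS

cats = ["books", "movies", "music", "toys", "apparel"]
baskets = [{"books", "movies"}, {"books", "movies", "music"}, {"movies", "music"},
           {"toys", "apparel"}, {"apparel"}, {"books"}, {"toys", "apparel", "books"},
           {"movies", "music"}, {"apparel", "toys"}, {"books", "movies"}]

n = len(baskets)
p = {c: sum(c in b for b in baskets) / n for c in cats}
lift = np.ones((len(cats), len(cats)))
for i, a in enumerate(cats):
    for j, b in enumerate(cats):
        if i != j:
            p_ab = sum(a in bk and b in bk for bk in baskets) / n
            lift[i, j] = p_ab / (p[a] * p[b] + 1e-12)

dissim = 1.0 / (1.0 + lift)               # high lift -> small distance
np.fill_diagonal(dissim, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
for c, (x, y) in zip(cats, coords):
    print(f"{c:8s} {x:6.2f} {y:6.2f}")
```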
307 Simulating Dynamics of Thoracolumbar Spine Derived from Life MOD under Haptic Forces
Authors: K. T. Huynh, I. Gibson, W. F. Lu, B. N. Jagdish
Abstract:
In this paper, the construction of a detailed spine model is presented using the LifeMOD Biomechanics Modeler. The detailed spine model is obtained by refining the spine segments in the cervical, thoracic and lumbar regions into individual vertebra segments, using bushing elements to represent the intervertebral discs, and building the various ligamentous soft tissues between vertebrae. In the sagittal plane of the spine, a constant posterior-to-anterior force is applied during simulation to determine the dynamic characteristics of the spine, and the force magnitude is gradually increased in subsequent simulations. Based on these recorded dynamic properties, displacement-force relationship graphs are established as polynomial functions using the least-squares method and imported into a haptic integrated graphic environment. A thoracolumbar spine model with the complex geometry of the vertebrae, digitized from a resin spine prototype, is utilized in this environment. Using the haptic technique, surgeons can touch as well as apply forces to the spine model through haptic devices to observe the locomotion of the spine, which is computed from the displacement-force relationship graphs. This study provides a preliminary picture of our ongoing work towards building and simulating bio-fidelity scoliotic spine models in a haptic integrated graphic environment whose dynamic properties are obtained from LifeMOD. These models can help surgeons examine the kinematic behavior of scoliotic spines and propose possible surgical plans before spine correction operations.
Keywords: Haptic interface, LifeMOD, spine modeling.
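The least-squares polynomial fit of the recorded displacement-force data is the step that links the LifeMOD output to the haptic environment; a minimal sketch with invented sample points follows.

```python
# Least-squares polynomial fit of a recorded displacement-force relationship,
# as used to drive the haptic model. The sample data points are invented.
import numpy as np

force = np.array([0, 5, 10, 15, 20, 25, 30])            # N, applied posterior-anterior force
disp  = np.array([0.0, 1.2, 2.1, 2.8, 3.3, 3.7, 4.0])   # mm, recorded displacement

coeffs = np.polyfit(force, disp, deg=3)                  # cubic least-squares fit
poly = np.poly1d(coeffs)

f_query = 12.5                                           # N, force applied via haptic device
print("predicted displacement [mm]:", round(float(poly(f_query)), 2))
```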
306 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark
Authors: B. Elshafei, X. Mao
Abstract:
The demand for renewable energy is increasing significantly, and major investments are being directed to the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on and driven by the prediction of wind speed, which by the nature of wind is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area over the period from December 2015 to March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of adding the vector components of the wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing the signals in wind speed forecasting models.
Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.
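The sketch below only illustrates the idea of adding the wind-vector components as extra predictors, using a single-fidelity scikit-learn GPR on synthetic data rather than the deep multi-fidelity model and the RUNE measurements.

```python
# Sketch: Gaussian process regression of next-step wind speed, with and without the
# wind-vector components (u, v) as extra predictors. Single-fidelity GPR on
# synthetic data; the paper uses deep multi-fidelity GPR on RUNE measurements.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
n = 300
direction = rng.uniform(0, 2 * np.pi, n)
speed = 8 + 2 * np.sin(np.linspace(0, 20, n)) + 0.5 * rng.standard_normal(n)
u, v = speed * np.cos(direction), speed * np.sin(direction)
target = np.roll(speed, -1)[:-1]                  # next-step wind speed

X_speed = speed[:-1, None]                        # speed-only predictors
X_vec = np.column_stack([speed, u, v])[:-1]       # speed + vector components

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
for name, X in [("speed only", X_speed), ("speed + u,v", X_vec)]:
    tr, te = slice(0, 200), slice(200, None)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[tr], target[tr])
    rmse = np.sqrt(np.mean((gpr.predict(X[te]) - target[te]) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```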
305 A Multigranular Linguistic Additive Ratio Assessment Model in Group Decision Making
Authors: Wiem Daoud Ben Amor, Luis Martínez López, Jr., Hela Moalla Frikha
Abstract:
Most multi-criteria group decision making (MCGDM) problems dealing with qualitative criteria require consideration of a large body of expert information. It is common for experts to have different degrees of knowledge when assessing alternatives against criteria, so it is logical that they use different evaluation scales to express their judgments, i.e., multigranular linguistic scales. In this context, we propose an extension of the classical additive ratio assessment (ARAS) method to hierarchical linguistic term sets for managing multigranular linguistic scales in an uncertain context, where uncertainty is modeled by means of linguistic information. The proposed approach is called the extended hierarchical linguistic ARAS method (ELH-ARAS). Within the ELH-ARAS approach, decision makers (DMs) can diagnose the results (the ranking of the alternatives) in a decomposed style, i.e., not only at one level of the hierarchy but also at intermediate ones. The developed approach also allows a feedback transformation: the collective final results of all experts can be transformed to any level of the extended linguistic hierarchy that each expert previously used. The ELH-ARAS technique therefore makes it easier for decision makers to understand the results. Finally, an MCGDM case study is given to illustrate the proposed approach.
Keywords: Additive ratio assessment, extended hierarchical linguistic, multi-criteria group decision making problems, multi granular linguistic contexts.
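The multigranular linguistic machinery is the paper's contribution and is not reproduced here; the sketch below shows only the crisp ARAS backbone that ELH-ARAS extends, with an illustrative decision matrix and weights.

```python
# Classical (crisp) ARAS backbone that ELH-ARAS extends: normalize the decision
# matrix, apply criteria weights, sum rows, and compute each alternative's utility
# ratio against an ideal alternative. Scores and weights below are illustrative.
import numpy as np

# rows = alternatives, columns = benefit-type criteria (higher is better)
X = np.array([[7.0, 6.0, 8.0],
              [9.0, 5.0, 6.0],
              [6.0, 8.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])                  # criteria weights, sum to 1

X0 = np.vstack([X.max(axis=0), X])             # prepend the optimal alternative
norm = X0 / X0.sum(axis=0)                     # column-wise normalization
S = (norm * w).sum(axis=1)                     # weighted sum per alternative
K = S[1:] / S[0]                               # utility degree vs. the optimal row
for i, k in enumerate(K, start=1):
    print(f"alternative {i}: utility = {k:.3f}")
print("ranking (best first):", list(np.argsort(-K) + 1))
```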
304 Improving Similarity Search Using Clustered Data
Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong
Abstract:
This paper presents a method for improving object search accuracy using a deep learning model. A major limitation in providing accurate similarity with deep learning is the requirement for a huge amount of data for training pairwise similarity scores (metrics), which is impractical to collect. Thus, similarity scores are usually trained with a relatively small dataset, often from a different domain, resulting in limited accuracy when measuring similarity. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network with various pooling methods, and the network is trained with pairwise similarity scores defined as zero for images in the same cluster. The proposed method outperforms the state-of-the-art object similarity scoring techniques in the evaluation of finding exact items, achieving 86.5% accuracy compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrieved images are likely to be similar products. Therefore, the proposed method can greatly reduce the amount of training data, by an order of magnitude, while providing a reliable similarity metric.
Keywords: Visual search, deep learning, convolutional neural network, machine learning.
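A numpy sketch of the retrieval step alone follows, assuming pooled CNN features have already been extracted; the feature matrix is a random stand-in.

```python
# Retrieval sketch over pooled CNN features: rank gallery items by cosine
# similarity to a query embedding. Feature vectors here are random stand-ins for
# activations pooled from an intermediate convolutional layer.
import numpy as np

rng = np.random.default_rng(8)
gallery = rng.standard_normal((1000, 512))              # pooled features of catalog images
query = gallery[42] + 0.1 * rng.standard_normal(512)    # a perturbed copy of item 42

def l2norm(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

scores = l2norm(gallery) @ l2norm(query)                 # cosine similarity to every item
top4 = np.argsort(-scores)[:4]
print("top-4 retrieved items:", top4)                    # item 42 should rank first
```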
303 Dynamic Threshold Adjustment Approach For Neural Networks
Authors: Hamza A. Ali, Waleed A. J. Rasheed
Abstract:
The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase; no adaptation is accommodated for input variations that would influence the network parameters. In this work we attempt to design a neural network that includes an additional mechanism for adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input. This tuning takes the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment of the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs are quite satisfactory. The supportive layer approach achieved over a 90% recognition rate, while the multiple network technique shows a more effective and acceptable level of recognition, although this is achieved at the price of network complexity and computation time. Recognition generalization may be further improved by combining all the innate structures with additional advanced learning phases.
Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.
302 Time and Wavelength Division Multiplexing Passive Optical Network Comparative Analysis: Modulation Formats and Channel Spacings
Authors: A. Fayad, Q. Alqhazaly, T. Cinkler
Abstract:
In light of the substantial increase in end-user requirements and the incessant need of network operators to upgrade the capabilities of access networks, this paper examines and compares the performance of different modulation formats on an eight-channel Time and Wavelength Division Multiplexing Passive Optical Network (TWDM-PON) transmission system. The limitations and features of the modulation formats have been determined to outline the most suitable design for enhancing the data rate and transmission reach and obtaining the best network performance. The considered modulation formats are On-Off Keying Non-Return-to-Zero (NRZ-OOK), Carrier Suppressed Return to Zero (CSRZ), Duo Binary (DB), Modified Duo Binary (MODB), Quadrature Phase Shift Keying (QPSK), and Differential Quadrature Phase Shift Keying (DQPSK). The performance has been analyzed by varying transmission distances and bit rates under different channel spacings. Furthermore, the system is evaluated in terms of minimum Bit Error Rate (BER) and Quality factor (Qf), without applying any dispersion compensation technique or any optical amplifier. OptiSystem software was used for simulation purposes.
Keywords: Bit Error Rate, BER, Carrier Suppressed Return to Zero, CSRZ, Duo Binary, DB, Differential Quadrature Phase Shift Keying, DQPSK, Modified Duo Binary, MODB, On-Off Keying Non-Return-to-Zero, NRZ-OOK, Quality factor, Qf, Time and Wavelength Division Multiplexing Passive Optical Network, TWDM-PON.
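The simulations themselves run in OptiSystem, but the relation between the reported BER and Q factor is compact enough to show. The sketch below assumes the standard OOK-style expression BER = 0.5 * erfc(Q / sqrt(2)) and its inverse.

```python
# Relation between the reported quality factor and bit error rate for OOK-style
# detection: BER = 0.5 * erfc(Q / sqrt(2)), inverted to get Q from a target BER.
import numpy as np
from scipy.special import erfc, erfcinv

def ber_from_q(q):
    return 0.5 * erfc(q / np.sqrt(2))

def q_from_ber(ber):
    return np.sqrt(2) * erfcinv(2 * ber)

print("Q = 6  -> BER =", ber_from_q(6.0))                 # ~1e-9, a common reference point
print("BER = 1e-12 -> Q =", round(float(q_from_ber(1e-12)), 2))
```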
301 Vocational Teaching Method: A Conceptual Model in Teaching Automotive Practical Work
Authors: Adnan Ahmad, Yusri Kamin, Asnol Dahar Minghat, Mohd. Khir Nordin, Dayana Farzeha, Ahmad Nabil
Abstract:
The purpose of this study is to identify the teaching methods practiced for the practical work subject in Vocational Secondary Schools (VSS). The study examined the practice of the Vocational Teaching Method in Automotive Practical Work (VTM-APW). The quantitative method used sets of questionnaires, with 283 students and 63 teachers from ten VSSs involved in this research. The research findings showed that in conducting the introduction session, teachers prefer the demonstration method and questioning technique, while in delivering the content of the practical task, teachers apply group monitoring and a problem solving approach. To conclude the automotive practical work task, teachers choose re-explanation and report writing to make sure students really understand the whole teaching process. The VTM-APW model also embeds the competency-based concept. Derived from the factors investigated, the research produced a combination of elements of teaching skills and vocational skills which could serve as the best teaching method for automotive practical work at school level. In conclusion, this study found that the VTM-APW model can be applied in teaching to improve current practices in Vocational Secondary Schools. Hence, teachers are encouraged to use this method to enhance students' knowledge of automotive work and to equip the current and future workforce with the competencies required in the workplace.
Keywords: Vocational Teaching Method, Practical Task, Teacher Preferences, Student Preferences.
300 Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources
Authors: Yoshikazu Onuki, Itsuo Kumazawa
Abstract:
The fundamental motivation of this paper is how gaze estimation can be utilized effectively in games. In games, precise point-of-gaze estimation is not always important when aiming at targets; the ability to move a cursor to an aiming target accurately is equally significant. Moreover, from a game production point of view, expressing head movement and gaze movement separately can be advantageous for conveying a sense of presence: panning a background image with head movement while moving a cursor according to gaze movement is a representative example. The widely used technique for point-of-gaze (POG) estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, and the resulting delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources in different locations is proposed, together with a method to control a cursor using gaze movement as well as head movement. The proposed methods are evaluated using game-like applications; as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free, intuitive operation is obtained.
Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.
299 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector
Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy
Abstract:
In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector, and we enhance the detection and tracking of a semi-active laser weapon decoding system based on a microcontroller. The system receives the reflected pulse through the quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser code of the pulses reflected by the target. The seeker accuracy is enhanced by the decoding system, the laser detection time is reduced by basing detection on the number of received pulses, and a gate is used to limit the laser pulse width. The model is implemented using the Pulse Repetition Frequency (PRF) technique with two microcontroller units (MCUs): MCU1 generates laser pulses with different codes, and MCU2 decodes the laser code and locks the system onto the specific code. The codes are selected using the two selector switches. The system is implemented and tested in Proteus ISIS software, and the full position determination circuit with the detector is realized. The overall spot position determination system was tested with the laser PRF for the incident radiation and a mechanical system for adjusting the setup at different angles. The system test results show that the system can detect the laser code with only three received pulses, based on the narrow gate signal, and good agreement between simulated and measured system performance is obtained.
Keywords: 4-quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency, ATmega 32 microcontrollers.
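A pure-Python sketch of the decoding logic described above follows: measure inter-pulse intervals, compare them against the selected code's period within a gate tolerance, and declare lock after three consistent pulses; the codes and timings are invented.

```python
# PRF code-detection sketch: compare measured inter-pulse intervals against the
# selected code's period within a gate tolerance and lock after three consistent
# pulses, mirroring the decoding logic described above. Codes and timings invented.
CODE_PRF_HZ = {1: 10.0, 2: 12.5}          # hypothetical selectable laser codes
GATE = 0.002                              # s, tolerance window around the expected period

def detect_code(pulse_times, code, needed=3):
    period = 1.0 / CODE_PRF_HZ[code]
    consistent = 0
    for t_prev, t_now in zip(pulse_times, pulse_times[1:]):
        if abs((t_now - t_prev) - period) <= GATE:
            consistent += 1
            if consistent >= needed - 1:  # three pulses = two matching intervals
                return True, t_now        # locked on this code
        else:
            consistent = 0                # clutter/jamming pulse: restart the count
    return False, None

pulses = [0.000, 0.051, 0.100, 0.200, 0.300, 0.400]   # s, received pulse timestamps
print(detect_code(pulses, code=1))        # locks once intervals match the 10 Hz code
```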
298 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions
Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag
Abstract:
Sustainable economic growth is nowadays driving firms to adopt many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions requires very careful decisions, involving complexity owing to the presence of various associated factors. To resolve this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for the adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Owing to the varied importance of the selected criteria, FAHP is used to identify the evaluation criteria and assign the importance weight of each criterion, while the TOPSIS and PROMETHEE methods employ these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis based on the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions on the selection of GSCM solutions.
Keywords: GSCM solutions, multi-criteria analysis, FAHP, TOPSIS, PROMETHEE, decision support system.
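A numpy sketch of the TOPSIS stage alone follows: vector-normalize the decision matrix, apply (for example FAHP-derived) weights, and rank alternatives by relative closeness to the ideal solution; the matrix, weights and criterion directions are illustrative.

```python
# TOPSIS stage sketch: vector-normalize the decision matrix, apply (e.g. FAHP-derived)
# weights, and rank GSCM alternatives by relative closeness to the ideal solution.
# Decision matrix, weights and criterion directions below are illustrative.
import numpy as np

X = np.array([[7.0, 5.0, 300.0],         # rows: candidate GSCM solutions
              [8.0, 6.0, 450.0],         # cols: env. benefit, feasibility, cost
              [6.0, 8.0, 250.0]])
w = np.array([0.4, 0.35, 0.25])
benefit = np.array([True, True, False])  # cost is a non-benefit criterion

R = X / np.sqrt((X ** 2).sum(axis=0))    # vector normalization
V = R * w                                # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness:", np.round(closeness, 3))
print("ranking (best first):", list(np.argsort(-closeness) + 1))
```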