Search results for: proposed concept
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7226

536 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier

Authors: Maninder Kaur Gill, Alpana Agarwal

Abstract:

It is practically not feasible to measure the open-loop voltage gain of an operational amplifier directly in the open-loop configuration, because the gain is very large: to avoid saturating the output, the amplifier would have to be driven with an input voltage too small to be measured reliably with a digital multimeter. A test circuit for the measurement of the open-loop voltage gain of an operational amplifier has been proposed and verified using simulation tools as well as experimentally on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost-effective, and easy to handle even on a breadboard. The test circuit requires only the device under test (DUT) along with resistors. The circuit has been used to measure the open-loop voltage gain of different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems, give accurate results, and do not change the characteristics of the original system. The DUTs used are the LM741CN and UA741CP. For the LM741CN, the simulated gain and the experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated gain and the experimentally measured gain (average) are 101.15 dB and 105.15 dB, respectively. These values are close to the datasheet values.

Keywords: Device under test, open-loop voltage gain, operational amplifier, test circuit.
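
A minimal Python sketch of the gain computation underlying such a measurement: the open-loop gain in dB is obtained from the output swing and the (indirectly measured) differential input voltage. The voltage values below are hypothetical placeholders chosen only to land near the LM741CN figure quoted above; the actual test circuit is not reproduced here.

import math

def open_loop_gain_db(v_out, v_diff):
    """Open-loop voltage gain in dB from the measured output swing and
    the (indirectly measured) differential input that produced it."""
    return 20.0 * math.log10(abs(v_out / v_diff))

# Hypothetical numbers: a 3 V output swing produced by a ~98 uV differential input
# corresponds to roughly 89.7 dB, the order of magnitude reported for the LM741CN.
print(round(open_loop_gain_db(3.0, 98e-6), 2))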

535 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection

Authors: Mondher Yahyaoui

Abstract:

A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized-theory assumption and accounts for all standard geometric lifting-surface parameters: camber, taper, sweep, washout, and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. The code is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized-theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it was found that the choice of wake shape had very little effect on the results.

Keywords: Aileron deflection, camber-surface-bound vortices, classical VLM, Generalized VLM, flap deflection.
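
The elementary building block of any vortex lattice method is the Biot-Savart induced velocity of a straight vortex segment, from which the influence-coefficient matrix over the panel lattice is assembled. A short Python sketch of that building block follows (the standard straight-segment formula, not the authors' full generalized formulation); the points and circulation are arbitrary examples.

import numpy as np

def vortex_segment_velocity(p, a, b, gamma, eps=1e-10):
    """Velocity induced at point p by a straight vortex filament from a to b
    with circulation gamma (Biot-Savart law for a finite segment)."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < eps:                      # p lies on the filament axis
        return np.zeros(3)
    r0 = b - a
    k = gamma / (4.0 * np.pi * denom)
    return k * cross * np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))

# Velocity induced at a field point by one bound-vortex segment of a panel:
v = vortex_segment_velocity(np.array([0.5, 1.0, 0.0]),
                            np.array([0.0, 0.0, 0.0]),
                            np.array([0.0, 2.0, 0.0]), gamma=1.0)
print(v)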

534 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade

Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim

Abstract:

Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the facade with performance in mind can improve the building's performance and lower its maintenance cost. In general, however, optimizing two or more performance factors runs up against the limits of time and computational tools. The optimization phase typically iterates, generating alternatives and analyzing them, until the desired performance is achieved. In particular, as geometric complexity or required precision increases, the computational resources and time needed to reach the required performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all the alternatives in the optimization process, applying heuristics learned through experimentation and experience can reduce wasted resources. This study proposes and verifies a method to optimize the double envelope of a building composed of perforated panels by using machine learning to relate the design geometry to quantitative performance. The proposed method achieves the required performance with fewer resources by supplementing the existing approach, which cannot efficiently analyze the complex geometry of the perforated panel.

Keywords: Building envelope, machine learning, perforated metal, multi-factor optimization, façade.
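
A hedged sketch of the general surrogate-modelling idea described above: train a regressor on a limited set of analyzed alternatives, then use it to screen a much larger pool without running full simulations. The features, the synthetic performance function, and the choice of a random forest regressor are assumptions made only for illustration; the abstract does not specify these details.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: each row is a facade alternative described by
# geometric parameters (e.g. perforation ratio, panel depth, hole pitch, rotation),
# each target a simulated performance score (e.g. a daylight or energy metric).
rng = np.random.default_rng(0)
X_simulated = rng.uniform(0.0, 1.0, size=(200, 4))
y_simulated = (0.6 * X_simulated[:, 0] - 0.3 * X_simulated[:, 1] ** 2
               + 0.1 * X_simulated[:, 2] * X_simulated[:, 3])

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_simulated, y_simulated)     # learn geometry -> performance

# Screen a large pool of candidate alternatives without running full simulations,
# keeping only the most promising ones for detailed analysis.
candidates = rng.uniform(0.0, 1.0, size=(10_000, 4))
scores = surrogate.predict(candidates)
best = candidates[np.argsort(scores)[-10:]]
print(best.shape)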

533 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model

Authors: Bikramaditya Das, Susmita Das

Abstract:

Suppression of interference with time-domain equalizers is studied for a high-data-rate impulse radio (IR) ultra-wideband communication system. Narrowband systems may interfere with UWB devices, since UWB uses very low transmission power and a large bandwidth. An SRake receiver improves system performance by equalizing signals from different paths, which motivates the use of SRake receiver techniques in IR-UWB systems. However, a Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time-domain equalizer is proposed to overcome this, taking into account the effect of both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. The study of non-line-of-sight indoor channel models (both CM3 and CM4) shows that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that, for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays the more significant role in suppressing interference.

Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.
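
A minimal numpy sketch of the MMSE tap computation in isolation (the Wiener solution w = R^-1 p estimated from a training sequence). The Rake combining, the IEEE 802.15.3a channel, and the NBI model are outside this sketch, and the toy channel and signal values are hypothetical.

import numpy as np

def mmse_taps(received, desired, n_taps):
    """Wiener/MMSE solution w = R^-1 p for a linear transversal equalizer.
    `received` are the combined samples, `desired` the known training symbols."""
    # Build the tapped-delay-line data matrix (one row per training symbol).
    rows = [received[i:i + n_taps][::-1] for i in range(len(desired) - n_taps + 1)]
    X = np.array(rows)
    d = desired[n_taps - 1:len(desired)]
    R = X.conj().T @ X / X.shape[0]          # sample autocorrelation matrix
    p = X.conj().T @ d / X.shape[0]          # cross-correlation with desired symbols
    return np.linalg.solve(R, p)

# Toy usage: a known training sequence passed through a 2-tap channel plus noise.
rng = np.random.default_rng(1)
train = rng.choice([-1.0, 1.0], size=500)
rx = np.convolve(train, [1.0, 0.4])[:500] + 0.05 * rng.standard_normal(500)
w = mmse_taps(rx, train, n_taps=8)
print(w.round(3))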

532 The Experiences of Coronary Heart Disease Patients: Biopsychosocial Perspective

Authors: Christopher C. Anyadubalu

Abstract:

Biological, psychological and social experiences, together with perceptions of healthcare services, were investigated in patients medically diagnosed with coronary heart disease, using a sample of 10 participants whose responses to in-depth interview questions were analyzed through inter- and intra-case analyses. The results revealed that advancing age, single status, divorce and/or death of a spouse, and the issue of single parenting negatively affected the patients' biopsychosocial experiences. The patients' experiences of physical signs and symptoms, anxiety and depression, past serious medical conditions, use of self-prescribed medications, family history of poor mental or physical health, nutritional problems, and insufficient physical activity heightened their risk of coronary attack. A collectivist culture served as a major source of relief for the patients. The patients' temperament, experience of various chronic life stresses and challenges, mood alteration, regular drinking, smoking and gambling, and family or social impairments compounded their health situation. Patients were satisfied with the biomedical services rendered by the healthcare personnel, whereas their psychological and social needs were not attended to. An effective procedural treatment model, a holistic and multidimensional approach to the treatment of heart disease patients, is proposed.

Keywords: Biopsychosocial, Coronary Heart Disease, Experience, Patients, Perception, Perspective.

531 Students’ Level of Knowledge Construction and Pattern of Social Interaction in an Online Forum

Authors: K. Durairaj, I. N. Umar

Abstract:

The asynchronous discussion forum is one of the most widely used activities in a learning management system environment. An online forum allows participants to interact and construct knowledge, and it can be used to complement face-to-face sessions in blended learning courses. However, the extent to which students perceive the benefits of the forum remains to be seen. Through content and social network analyses, instructors are able to gauge the students' engagement and level of knowledge construction. Thus, this study aims to analyze the students' level of knowledge construction and their level of participation in online discussion. It also attempts to investigate the relationship between the level of knowledge construction and their social interaction patterns. The sample involves 23 students undertaking a master's course in a public university in Malaysia. The asynchronous discussion forum was conducted for three weeks as part of the course requirements. The findings indicate that the level of knowledge construction is quite low. Also, the density value of 0.11 indicates that the overall communication among the participants in the forum is low. The study reveals strong and significant correlations between SNA measures (in-degree centrality, out-degree centrality) and the level of knowledge construction. Thus, allocating these active students to different groups helps interactive discussion take place. Finally, based upon the findings, some recommendations to increase students' level of knowledge construction, as well as directions for further research, are proposed.

Keywords: Asynchronous Discussion Forums, Content Analysis, Knowledge Construction, Social Network Analysis.
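
The reported SNA measures can be computed directly from a forum's reply network; a short sketch using networkx follows, with a hypothetical reply list standing in for the actual forum data.

import networkx as nx

# Hypothetical reply network: an edge u -> v means student u replied to a post by v.
replies = [("s1", "s2"), ("s1", "s3"), ("s2", "s1"), ("s4", "s1"),
           ("s5", "s2"), ("s3", "s4"), ("s1", "s5")]
G = nx.DiGraph()
G.add_nodes_from(f"s{i}" for i in range(1, 24))   # 23 students in the course
G.add_edges_from(replies)

print(round(nx.density(G), 2))          # overall interaction density (the study reports 0.11)
print(nx.in_degree_centrality(G))       # how often each student is replied to
print(nx.out_degree_centrality(G))      # how often each student replies to others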

530 A Case Study on the Numerical-Probability Approach for Deep Excavation Analysis

Authors: Komeil Valipourian

Abstract:

Urban development and the growing need for infrastructure have increased the importance of deep excavations. In this study, after introducing probability analysis as an important issue, an attempt has been made to apply it to the deep excavation project of the Bangkok Metro as a case study. For this, a numerical probability model has been developed based on the finite difference method and a Monte Carlo sampling approach. The results indicate that disregarding probability in this project would result in an inappropriate design of the retaining structure. Therefore, a probabilistic redesign of the support is proposed and carried out as one application of probability analysis. A 50% reduction in the flexural strength of the structure increases the failure probability by just 8%, within the allowable range, and helps improve economic conditions while maintaining mechanical efficiency. Given the lack of efficient design in most deep excavations, an attempt was made, by considering geometrical and geotechnical variability, to develop an optimal practical design standard for deep excavations based on failure probability. On this basis, a practical relationship is presented for estimating the maximum allowable horizontal displacement, which can help improve design conditions without carrying out a full probability analysis.

Keywords: Numerical probability modeling, deep excavation, allowable maximum displacement, finite difference method, FDM.
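
A miniature Python illustration of Monte Carlo failure-probability estimation, with a closed-form stand-in response model replacing the finite difference simulations and made-up input distributions; it shows only the sampling-and-counting idea, not the actual excavation model.

import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hypothetical input variability (the study uses FDM runs instead of a closed form):
# soil friction angle and wall flexural capacity treated as random variables.
phi = rng.normal(30.0, 3.0, n_samples)            # friction angle [deg]
capacity = rng.normal(250.0, 25.0, n_samples)     # flexural capacity [kN.m/m]

# Stand-in response model: demand (bending moment) grows as soil strength drops.
demand = 400.0 - 6.0 * phi + rng.normal(0.0, 15.0, n_samples)

failure_probability = np.mean(demand > capacity)
print(f"P(failure) ~ {failure_probability:.3%}")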

529 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method

Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari

Abstract:

The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. In this study, the meshless local Petrov-Galerkin (MLPG) method is employed to obtain the functionally graded (FG) plate's natural frequencies. The effects of two parameters, thickness-to-height ratio and volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. Then the first natural frequency of the plate, for conditions where MLPG data are not available, is predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual values. To simultaneously maximize the first natural frequency and minimize the mass of the FG plate, the weighted-sum optimization approach and the PSO algorithm are used. Overall, the proposed optimization process of this study can provide the designers of FG plates with useful data.

Keywords: Optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization.

528 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.

Keywords: Anomaly detection, autoencoder, data centers, deep learning.
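
A minimal single-sensor sketch of the reconstruction-plus-classifier pipeline described above, using a Keras LSTM autoencoder and a random forest on simple statistics of the difference signal. Layer sizes, the error features, and the synthetic data are assumptions made for illustration; the paper's sensor selection and feature extraction are not reproduced.

import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

timesteps, n_features = 60, 1            # one autoencoder per sensor (e.g. temperature)
rng = np.random.default_rng(0)
x_normal = rng.normal(22.0, 0.5, size=(500, timesteps, n_features)).astype("float32")

# LSTM autoencoder trained only on normal samples of the sensor.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.RepeatVector(timesteps),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_normal, x_normal, epochs=5, batch_size=32, verbose=0)

# The difference signal between input and reconstruction feeds a random forest.
def error_features(x):
    err = x - model.predict(x, verbose=0)
    return np.c_[err.mean(axis=(1, 2)), err.std(axis=(1, 2)), np.abs(err).max(axis=(1, 2))]

# Hypothetical labelled history (0 = normal, 1 = anomaly) used to train the classifier.
x_hist = np.concatenate([x_normal[:100],
                         rng.normal(26.0, 2.0, size=(20, timesteps, n_features)).astype("float32")])
y_hist = np.array([0] * 100 + [1] * 20)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(error_features(x_hist), y_hist)
print(clf.predict(error_features(x_hist[-3:])))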

527 High Speed Video Transmission for Telemedicine using ATM Technology

Authors: J. P. Dubois, H. M. Chiu

Abstract:

In this paper, we study statistical multiplexing of VBR video in ATM networks. ATM promises to provide high-speed, real-time, multi-point-to-central video transmission for telemedicine applications in rural hospitals and in emergency medical services. Video coders are known to produce variable bit rate (VBR) signals, and the effects of aggregating these VBR signals need to be determined in order to design a telemedicine network infrastructure capable of carrying them. We first model the VBR video signal and simulate it using a generic continuous-data autoregressive (AR) scheme. We carry out the queueing analysis with the Fluid Approximation Model (FAM) and the Markov Modulated Poisson Process (MMPP). The study has shown a trade-off: multiplexing VBR signals reduces burstiness and improves resource utilization; however, the buffer size needs to be increased, with an associated economic cost. We also show that the MMPP model and the Fluid Approximation Model best fit the cell region and the burst region, respectively. Therefore, a hybrid of MMPP and FAM completely characterizes the overall performance of the ATM statistical multiplexer. The ramifications of this technology are clear: speed, reliability (lower loss rate and jitter), and increased capacity in video transmission for telemedicine. With migration to full IP-based networks still a long way from achieving both high speed and high quality of service, the proposed ATM architecture will remain of significant use for telemedicine.

Keywords: ATM, multiplexing, queueing, telemedicine, VBR.

526 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, unlike the standard method.

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.
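
A short sketch of the patch-based boundary term: the penalty between two neighbouring pixels is computed from the dissimilarity of their surrounding patches rather than from the two pixel values alone. The patch size, distance measure, and sigma below are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def patch(img, y, x, r=2):
    """Extract a (2r+1) x (2r+1) patch centred on (y, x), clipped at image borders."""
    return img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1].astype(float)

def boundary_weight(img, p, q, sigma=10.0):
    """Boundary-penalty weight between neighbouring pixels p and q based on the
    dissimilarity of their surrounding patches rather than the two pixel values."""
    pp, pq = patch(img, *p), patch(img, *q)
    h = min(pp.shape[0], pq.shape[0])
    w = min(pp.shape[1], pq.shape[1])
    d = np.mean((pp[:h, :w] - pq[:h, :w]) ** 2)      # mean squared patch difference
    return np.exp(-d / (2.0 * sigma ** 2))

# Toy 2-D "slice": weights are high inside homogeneous regions, low across edges.
img = np.zeros((20, 20), dtype=np.uint8)
img[:, 10:] = 200
print(round(boundary_weight(img, (5, 4), (5, 5)), 4))    # same region -> close to 1
print(round(boundary_weight(img, (5, 9), (5, 10)), 4))   # across the edge -> close to 0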

525 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death. Early detection is important for diagnosis and for reducing mortality. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; and finally, classification of liver lesions into benign and malignant using a novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are then trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture between patients, demographics, and imaging devices and settings.

Keywords: CAD system, difference of feature, Fuzzy c means, Liver segmentation.
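
A small sketch of the contrasting feature-difference idea: compute the same statistics for the lesion and for its surrounding normal tissue and use their difference as the descriptor, making it relative rather than absolute. The specific statistics and the synthetic intensities are placeholders; the paper uses a richer set of intensity and texture features.

import numpy as np

def region_features(pixels):
    """Simple intensity/texture statistics of a region (lesion or surrounding tissue)."""
    pixels = np.asarray(pixels, dtype=float)
    hist, _ = np.histogram(pixels, bins=16, range=(0, 256), density=True)
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
    return np.array([pixels.mean(), pixels.std(), pixels.min(), pixels.max(), entropy])

def lesion_descriptor(lesion_pixels, surrounding_pixels):
    """Contrasting feature-difference descriptor: lesion features minus the features
    of the surrounding normal liver tissue, so the descriptor is relative and less
    sensitive to patient- or scanner-dependent intensity ranges."""
    return region_features(lesion_pixels) - region_features(surrounding_pixels)

rng = np.random.default_rng(0)
lesion = rng.normal(90, 12, 400).clip(0, 255)         # hypothetical hypodense lesion
normal_tissue = rng.normal(140, 8, 400).clip(0, 255)  # hypothetical surrounding tissue
print(lesion_descriptor(lesion, normal_tissue).round(2))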

524 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to develop a customized ECG beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We use a three-stage technique for the detection of premature ventricular contractions (PVC) among normal beats and other heart conditions, comprising denoising, feature extraction, and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. Next, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods, as well as the efficiency of the whole system, is presented. Among them, stacked generalization, as the proposed trainable combined neural network model, achieves the highest recognition rate, around 95%. Therefore, this network proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples corresponding to the different beat types were extracted from the MIT-BIH arrhythmia database for the study.

Keywords: ECG beat classification, combining classifiers, premature ventricular contraction (PVC), multilayer perceptrons, wavelet transform.
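
A compact sketch of stacked generalization over MLPs with different topologies, using scikit-learn's StackingClassifier; the feature matrix, labels, and topologies are placeholders, and the paper's SWT denoising and MIT-BIH features are not reproduced.

import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Hypothetical feature matrix: 11 features per beat (10 morphological + 1 timing),
# labels 0 = normal, 1 = PVC, 2 = other, loosely mirroring the classification task.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 11))
y = rng.integers(0, 3, 600)

# Base experts: MLPs with different topologies; a meta-learner combines their outputs.
base_mlps = [
    ("mlp_small", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)),
    ("mlp_medium", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1)),
    ("mlp_deep", MLPClassifier(hidden_layer_sizes=(15, 10), max_iter=2000, random_state=2)),
]
ensemble = StackingClassifier(estimators=base_mlps,
                              final_estimator=LogisticRegression(max_iter=1000),
                              stack_method="predict_proba", cv=3)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))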

523 Impact of Flexibility on Patient Satisfaction and Behavioral Intention: A Critical Reassessment and Model Development

Authors: Pradeep Kumar, Shibashish Chakraborty, Sasadhar Bera

Abstract:

Because services cannot be inventoried in anticipation of demand fluctuations, the marketing of services poses a difficult problem. The inability to meet customers' (patients') requirements has more serious consequences in the healthcare context than in other service sectors. In order to meet patient requirements in the current uncertain environment, healthcare organizations are seeking ways to improve service delivery. Flexibility provides a mechanism for reducing variability in service encounters and improving performance. Flexibility is defined as the ability of the organization to cope with changing circumstances or instability caused by the environment. Patient satisfaction is an important performance outcome of healthcare organizations. However, little information exists in the healthcare delivery context examining the impact of flexibility on patient satisfaction and behavioral intention. The present study is an attempt to develop a conceptual foundation for investigating the overall impact of flexibility on patient satisfaction and behavioral intention. Several dimensions of flexibility in the healthcare context are examined and proposed to have a significant impact on patient satisfaction and intention. Furthermore, the study involves a critical examination of the determinants of patient satisfaction and the development of a comprehensive view of the relationship between flexibility, patient satisfaction, and behavioral intention. Finally, theoretical contributions and implications for healthcare professionals are suggested from a flexibility perspective.

Keywords: Healthcare, flexibility, patient satisfaction, behavioral intention.

522 Computational Prediction of Complicated Atmospheric Motion for Spinning or non- Spinning Projectiles

Authors: Dimitrios N. Gkritzapis, Elias E. Panagiotopoulos, Dionissios P. Margaris, Dimitrios G. Papanikas

Abstract:

A full six-degrees-of-freedom (6-DOF) flight dynamics model is proposed for the accurate prediction of short- and long-range trajectories of high-spin and fin-stabilized projectiles in atmospheric flight to the final impact point. The projectile is assumed to be rigid (non-flexible) and rotationally symmetric about its spin axis, launched at low and high pitch angles. The mathematical model is based on the full equations of motion set up in the no-roll body reference frame and is integrated numerically from given initial conditions at the firing site. The projectile's maneuvering motion depends on the most significant force and moment variations, in addition to wind and gravity. The computational flight analysis takes into consideration Mach number and total angle-of-attack effects by means of variable aerodynamic coefficients; for the purposes of the present work, linear interpolation has been applied to the tabulated database of McCoy's book. The developed computational method gives satisfactory agreement with published data from verified experiments and computational codes on atmospheric projectile trajectory analysis for various initial firing conditions.

Keywords: Constant-Variable aerodynamic coefficients, low and high pitch angles, wind.
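
For orientation only, a drastically simplified point-mass trajectory integrator with quadratic drag; it omits the spin, moments, and variable aerodynamic coefficients of the full 6-DOF model described above and uses hypothetical projectile parameters, but it shows the skeleton of numerical trajectory integration from initial firing conditions.

import numpy as np

def point_mass_trajectory(v0, pitch_deg, mass, cd, area, dt=0.01, rho=1.225, g=9.81):
    """Drastically simplified point-mass trajectory with quadratic drag
    (no spin, no Magnus or overturning moments, constant air density)."""
    vel = v0 * np.array([np.cos(np.radians(pitch_deg)), np.sin(np.radians(pitch_deg))])
    pos = np.zeros(2)
    path = [pos.copy()]
    while pos[1] >= 0.0:
        speed = np.linalg.norm(vel)
        drag = -0.5 * rho * cd * area * speed * vel / mass   # acceleration from drag
        vel = vel + (drag + np.array([0.0, -g])) * dt        # explicit Euler step
        pos = pos + vel * dt
        path.append(pos.copy())
    return np.array(path)

# Hypothetical projectile: 45 kg, 155 mm diameter, fired at 800 m/s and 30 degrees.
traj = point_mass_trajectory(800.0, 30.0, mass=45.0, cd=0.3, area=np.pi * 0.0775 ** 2)
print(f"range ~ {traj[-1, 0] / 1000:.1f} km after {len(traj) * 0.01:.1f} s of flight")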

521 SNC Based Network Layer Design for Underwater Wireless Communication Used in Coral Farms

Authors: T. T. Manikandan, Rajeev Sukumaran

Abstract:

Coral reefs play a vital role in maintaining the biodiversity of many ecosystems, but due to factors such as pollution and coral mining they are dying day by day. One way to protect coral reefs is to farm them in a carefully monitored underwater environment and restore them in place of dead corals. For successful farming of corals in coral farms, different parameters of the water in the farming area need to be monitored and maintained at optimal levels. Sensing underwater parameters using wireless sensor nodes is an effective way to achieve precise and continuous monitoring in a highly dynamic environment like the ocean. The sensed information is of varying importance, and it needs to be delivered to offshore monitoring centers with the desired Quality of Service (QoS) guarantees. The main interest of this research is Stochastic Network Calculus (SNC) based modeling of the network layer design for underwater wireless sensor communication. The model proposed in this research enforces differentiation of service in underwater wireless sensor communication with the help of buffer sizing and link scheduling. The delay and backlog bounds for such differentiated services are analytically derived using stochastic network calculus.

Keywords: Underwater Coral Farms, SNC, differentiated service, delay bound, backlog bound.

520 Numerical Study of Natural Convection Effects in Latent Heat Storage using Aluminum Fins and Spiral Fillers

Authors: Lippong Tan, Yuenting Kwok, Ahbijit Date, Aliakbar Akbarzadeh

Abstract:

A numerical investigation has been carried out to understand the melting characteristics of a phase change material (PCM) in a fin-type latent heat storage with the addition of embedded aluminum spiral fillers. It is known that the melting performance of a PCM can be significantly improved by increasing the number of embedded metallic fins in the latent heat storage system, but only up to a certain number, beyond which additional fins lead to only small improvements in heat transfer rate. Hence, adding aluminum spiral fillers within the fin gap can be an option to improve heat transfer internally. This paper presents extensive computational visualizations of the PCM melting patterns of the proposed fin-spiral-filler configuration. The aim of this investigation is to understand the PCM's melting behavior by observing the movement of natural convection currents and the formation of melting fronts. Fluent 6.3 simulation software was utilized to produce two-dimensional visualizations of melt fractions, temperature distributions, and flow fields to illustrate the melting process internally. The results show that adding aluminum spiral fillers in fin-type latent heat storage can promote smaller but more active natural convection currents and improve melting of the PCM.

Keywords: Phase change material, thermal enhancement, aluminum spiral fillers, fins.

519 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and light defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters that is possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimensions of the defects allowed.

Keywords: Automatic detection, wavelets, defects, fracture lines.
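
A minimal illustration of wavelet-based defect localization using PyWavelets: decompose the image, suppress the approximation band, threshold the detail coefficients, and flag pixels with strong residual detail. The wavelet, levels, and thresholds are illustrative assumptions; the paper's two-pass (dark/light) processing and contour drawing are not reproduced.

import numpy as np
import pywt

def defect_map(gray_image, wavelet="db2", level=2, k=3.0):
    """Rough defect map: keep only wavelet detail coefficients whose magnitude
    deviates strongly from the background, then reconstruct a defect-only image."""
    coeffs = pywt.wavedec2(gray_image.astype(float), wavelet, level=level)
    kept = [np.zeros_like(coeffs[0])]                   # drop the approximation band
    for cH, cV, cD in coeffs[1:]:
        bands = []
        for band in (cH, cV, cD):
            thr = k * np.median(np.abs(band))           # robust per-band threshold
            bands.append(np.where(np.abs(band) > thr, band, 0.0))
        kept.append(tuple(bands))
    detail = pywt.waverec2(kept, wavelet)
    return np.abs(detail) > np.abs(detail).mean() + k * np.abs(detail).std()

# Toy plate: uniform background with a dark spot and a thin "fracture" line.
plate = np.full((128, 128), 180.0)
plate[40:48, 40:48] = 60.0
plate[90, 20:110] = 90.0
mask = defect_map(plate)
print(mask.sum(), "pixels flagged as potential defects")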

518 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media

Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh

Abstract:

In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium is investigated. The proposed mathematical model for this study consists of a micro-beam with a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces, and the micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Because the boundary conditions are non-homogeneous, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, such as the characteristic length scale, coupling parameter, and surface parameter, on the response of the micro-beam have been studied.

Keywords: Micro-polar theory, Galerkin method, MEMS, micro-fluid.

517 A Multi-layer Artificial Neural Network Architecture Design for Load Forecasting in Power Systems

Authors: Axay J Mehta, Hema A Mehta, T.C.Manjunath, C. Ardil

Abstract:

In this paper, the modelling and design of an artificial neural network architecture for load forecasting purposes are investigated. The primary prerequisite for power system planning is to arrive at realistic estimates of future demand for power, which is known as load forecasting. Short Term Load Forecasting (STLF) helps in determining economic, reliable and secure operating strategies for the power system. The dependence of load on several factors makes load forecasting a very challenging job. An overestimation of the load may cause premature investment and unnecessary blocking of capital, whereas underestimation of the load may result in a shortage of equipment and circuits. It is always better to plan the system for a load slightly higher than the expected one, so that no exigency arises. In this paper, a load-forecasting model is proposed using a multilayer neural network with an appropriately modified back-propagation learning algorithm. Once the neural network model is designed and trained, it can forecast the load of the power system 24 hours ahead on a daily basis, and it can also forecast the cumulative load on a daily basis. The real load data used for training the artificial neural network were taken from the LDC, Gujarat Electricity Board, Jambuva, Gujarat, India. The results show that the load forecast of the ANN model follows the actual load pattern closely throughout the forecasted period.

Keywords: Power system, Load forecasting, Neural Network, Neuron, Stabilization, Network structure, Load.
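
A generic sketch of an MLP load forecaster trained by back-propagation on synthetic hourly data with daily and weekly cycles; the real utility data and the authors' modified learning algorithm are not available here, so the features, network size, and data are placeholders.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical hourly load history: daily and weekly cycles plus noise (in MW).
rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
load = (1000 + 200 * np.sin(2 * np.pi * hours / 24)
        + 80 * np.sin(2 * np.pi * hours / (24 * 7)) + rng.normal(0, 30, hours.size))

# Features for hour t: the 24 hourly loads preceding t plus hour-of-day and day-of-week.
def make_samples(series, t0, t1):
    X, y = [], []
    for t in range(t0, t1):
        X.append(np.r_[series[t - 24:t], t % 24, (t // 24) % 7])
        y.append(series[t])
    return np.array(X), np.array(y)

X_train, y_train = make_samples(load, 24, 24 * 300)
X_test, y_test = make_samples(load, 24 * 300, 24 * 365)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
model.fit(X_train, y_train)                      # back-propagation training
mape = np.mean(np.abs(model.predict(X_test) - y_test) / y_test) * 100
print(f"hourly forecast MAPE ~ {mape:.1f}%")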

516 Modeling and Analysis for Effective Capacity of a Cross-Layer Optimized Wireless Networks

Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly

Abstract:

New-generation mobile communication networks have the ability to support triple play. To this end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to enlarge the system's capability for high-data-rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation at the physical (PHY) layer is used. This resource allocation scheme provides much better QoS than previous resource allocation schemes, while maintaining the highest or nearly the highest capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling at the Medium Access Control (MAC) layer, which allows more than one connection to be served in each slot, is used. This scheduling technique is more efficient than conventional scheduling and is used to investigate both the number of users and the number of subcarriers against system capacity. The system is optimized for different operational environments: both outdoor and indoor deployment scenarios are investigated, as well as different channel models. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users, but also to increase the total throughput of the wireless network.

Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.

515 Trajectory Guided Recognition of Hand Gestures having only Global Motions

Authors: M. K. Bhuyan, P. K. Bora, D. Ghosh

Abstract:

One very interesting field of research in pattern recognition that has gained much attention in recent times is gesture recognition. In this paper, we consider a form of dynamic hand gestures that are characterized by the total movement of the hand (arm) in space. For these types of gestures, the shape of the hand (palm) during gesturing does not bear any significance. In our work, we propose a model-based method for tracking hand motion in space, thereby estimating the hand motion trajectory. We employ the dynamic time warping (DTW) algorithm for time alignment and normalization of the spatio-temporal variations that exist among samples belonging to the same gesture class. During training, one template trajectory and one prototype feature vector are generated for every gesture class. The features used in our work include several static and dynamic motion trajectory features. Recognition is accomplished in two stages. In the first stage, all unlikely gesture classes are eliminated by comparing the input gesture trajectory to all the template trajectories. In the next stage, the feature vector extracted from the input gesture is compared to all the class prototype feature vectors using a distance classifier. Experimental results demonstrate that our proposed trajectory estimator and classifier are suitable for a Human Computer Interaction (HCI) platform.

Keywords: Hand gesture, human computer interaction, key video object plane, dynamic time warping.
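
A short Python implementation of the dynamic time warping distance used for time alignment of trajectories, followed by a toy comparison of a gesture against two templates; the hand tracking and trajectory features themselves are not shown.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two trajectories
    (sequences of 2-D points), used to absorb spatio-temporal variations."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A gesture trajectory compared with two class templates; the smaller distance wins.
gesture = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
template_diagonal = [(0, 0), (2, 2), (4, 4)]
template_horizontal = [(0, 0), (2, 0), (4, 0)]
print(dtw_distance(gesture, template_diagonal))     # small: same shape, different sampling
print(dtw_distance(gesture, template_horizontal))   # larger: different shape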

514 Adhesive Connections in Timber: A Comparison between Rough and Smooth Wood Bonding Surfaces

Authors: Valentina Di Maria, Anton Ianakiev

Abstract:

The use of adhesive anchors for wooden constructions is an efficient technology for connecting and designing timber members in new timber structures and for rehabilitating the damaged structural members of historical buildings. Due to the lack of standard regulation in this specific area of structural design, designers' choices are still supported by test analysis that enables knowledge, and prediction, of the structural behaviour of glued-in rod joints. The paper outlines an experimental research activity aimed at identifying the tensile resistance capacity of several new adhesive joint prototypes made of epoxy resin, steel bar and timber of the Oak and Douglas Fir species. The development of new adhesive connectors has been carried out by using epoxy to glue stainless steel bars into pre-drilled holes, characterised by smooth and rough internal surfaces, in timber samples. The realization of a threaded contact surface using a specific drill bit has led to an improved bond between wood and epoxy, and the applied changes have also reduced the cost of producing the joints. The paper presents the results of this parametric analysis and a finite element analysis that enables identification and study of the internal stress distribution in the proposed adhesive anchors.

Keywords: Glued in rod joints, adhesive anchors, timber, epoxy, rough contact surface, threaded hole shape.

513 Performance Evaluation of AOMDV-PAMAC Protocols for Ad Hoc Networks

Authors: B. Malarkodi, S. K. Riyaz Hussain, B. Venkataramani

Abstract:

The power consumption of nodes in ad hoc networks is a critical issue, as such nodes predominantly operate on batteries. In order to improve the lifetime of an ad hoc network, all the nodes must be utilized evenly and the power required for connections must be minimized. In this project, a link layer algorithm known as the Power Aware Medium Access Control (PAMAC) protocol is proposed, which enables the network layer to select a route with the minimum total power requirement among the possible routes between a source and a destination, provided all nodes on the route have a battery capacity above a threshold. When the battery capacity falls below a predefined threshold, routes going through these nodes are avoided and these nodes act only as source and destination. Further, the first few nodes whose battery power drains to the set threshold value are pushed to the exterior part of the network and the nodes in the exterior are brought to the interior, since less total power is then required to forward packets for each connection. The network layer protocol AOMDV is basically an extension of the AODV routing protocol; AOMDV is designed to form multiple routes to the destination, and it also avoids loop formation, thereby reducing unnecessary congestion on the channel. In this project, the performance of AOMDV is evaluated using PAMAC as the MAC layer protocol; the average power consumption, throughput and average end-to-end delay of the network are calculated, and the results are compared with those of the other network layer protocol, AODV.

Keywords: AODV, PAMAC, AOMDV, Power consumption.
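
A minimal sketch of the route-selection rule described above: drop routes whose intermediate nodes are at or below the battery threshold, then choose the route with minimum total transmission power. Node names, power costs, and battery levels are hypothetical.

# Minimal sketch of threshold-filtered, minimum-total-power route selection.
BATTERY_THRESHOLD = 0.2   # fraction of full capacity

battery = {"A": 0.9, "B": 0.15, "C": 0.7, "D": 0.8, "E": 0.5}

# Each candidate route: (nodes from source S to destination T, per-hop power costs in mW).
routes = [
    (["S", "A", "B", "T"], [10, 12, 9]),
    (["S", "C", "D", "T"], [11, 14, 10]),
    (["S", "E", "T"],      [20, 22]),
]

def usable(nodes):
    # Intermediate nodes at or below the threshold may still act as source or
    # destination, but are not used for forwarding.
    return all(battery[n] > BATTERY_THRESHOLD for n in nodes[1:-1])

candidates = [(sum(powers), nodes) for nodes, powers in routes if usable(nodes)]
total_power, best_route = min(candidates)
print(best_route, total_power, "mW")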

512 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior

Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang

Abstract:

Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method from a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method that operates by maximizing the Bhattacharyya similarity measure between the photometric distribution of an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries in regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. Comparison with the original DRLS model makes it evident that the proposed model is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model against metrics comprising accuracy, sensitivity, and specificity.

Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method.
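
A short sketch of the Bhattacharyya similarity between a model photometric distribution and a candidate region's histogram, the quantity that density matching maximizes; the level set machinery itself (DRLS) is not reproduced, and the histograms are synthetic.

import numpy as np

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient between two normalized intensity histograms;
    1 means identical photometric distributions, 0 means no overlap."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return np.sum(np.sqrt(p * q))

def bhattacharyya_distance(p, q):
    return -np.log(max(bhattacharyya_coefficient(p, q), 1e-12))

# Histogram of the model (liver) photometric distribution vs. candidate regions.
rng = np.random.default_rng(0)
model_hist, _ = np.histogram(rng.normal(120, 15, 5000), bins=32, range=(0, 256))
region_hist, _ = np.histogram(rng.normal(125, 18, 3000), bins=32, range=(0, 256))
outside_hist, _ = np.histogram(rng.normal(60, 10, 3000), bins=32, range=(0, 256))

print(round(bhattacharyya_coefficient(model_hist, region_hist), 3))   # high similarity
print(round(bhattacharyya_coefficient(model_hist, outside_hist), 3))  # low similarity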

511 Broadband PowerLine Communications: Performance Analysis

Authors: Justinian Anatory, Nelson Theethayi, M. M. Kissaka, N. H. Mvungi

Abstract:

The power line channel is proposed as an alternative for broadband data transmission, especially in developing countries like Tanzania [1]. However, the channel is affected by stochastic attenuation and deep notches, which can limit the channel capacity and achievable data rate. Various studies have characterized the channel without establishing exactly the maximum performance and the limitations on data transfer rate; this may be due to the complexity of the channel models used. In this paper, the channel performance of medium voltage, low voltage and indoor power line channels is presented. In the investigations, orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme is considered for indoor, medium voltage and low voltage channels with a typical ten branches, and Golay coding is also applied to the medium voltage channel. Deep notches are observed at various frequencies in the channels' frequency responses, which can reduce the achievable data rate. Nevertheless, it is observed that a data rate of up to 240 Mbps is realized at a signal-to-noise ratio of about 50 dB for the indoor and low voltage channels; for the medium voltage channel, however, a typical link with ten branches is affected by strong multipath, and coding is required for feasible broadband data transfer.

Keywords: Powerline Communications, branched network, channel model, modulation, channel performance, OFDM.

510 Seamless Multicast Handover in Fmipv6-Based Networks

Authors: Moneeb Gohar, Seok Joo Koh, Tae-Won Um, Hyun-Woo Lee

Abstract:

This paper proposes a fast tree join scheme to provide seamless multicast handover in the mobile networks based on the Fast Mobile IPv6 (FMIPv6). In the existing FMIPv6-based multicast handover scheme, the bi-directional tunnelling or the remote subscription is employed with the packet forwarding from the previous access router (AR) to the new AR. In general, the remote subscription approach is preferred to the bi-directional tunnelling one, since in the remote subscription scheme we can exploit an optimized multicast path from a multicast source to many mobile receivers. However, in the remote subscription scheme, if the tree joining operation takes a long time, the amount of data packets to be forwarded and buffered for multicast handover will increase, and thus the corresponding buffer may overflow, which results in severe packet losses. In order to reduce these costs associated with packet forwarding and buffering, this paper proposes the fast join to multicast tree, in which the new AR will join the multicast tree as fast as possible, so that the new multicast data packets can also arrive at the new AR, by which the packet forwarding and buffering costs can be reduced. From numerical analysis, it is shown that the proposed scheme can give better performance than the existing FMIPv6-based multicast handover schemes in terms of the multicast packet delivery costs.

Keywords: Mobile Multicast, FMIPv6, Seamless Handover, Fast Tree Join.

509 A Comparative Study of Global Power Grids and Global Fossil Energy Pipelines Using GIS Technology

Authors: Wenhao Wang, Xinzhi Xu, Limin Feng, Wei Cong

Abstract:

This paper comprehensively investigates the current development status of global power grids and fossil energy pipelines (oil and natural gas), and proposes a standard visual platform for global power and fossil energy based on Geographic Information System (GIS) technology. In this visual platform, a series of systematic visual models is proposed, built on global spatial data and systematic energy and power parameters. Under this visual platform, the current Global Power Grids Map and Global Fossil Energy Pipelines Map are plotted for more than 140 countries and regions across the world. Using multi-scale fusion data processing and modeling methods, a basic database for the world's fossil energy pipelines and power grids information system is established, which provides important data support for global fossil energy and electricity research. Finally, through a systematic and comparative study of global fossil energy pipelines and global power grids, the general status of global fossil energy and electricity development is reviewed, and the energy transition in key areas is evaluated and analyzed. Through the comparative analysis of fossil energy and clean energy, directions for further research on clean development and energy transition are pointed out.

Keywords: Energy Transition, geographic information system, fossil energy, power systems.

508 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0-400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The capacitance measurement system uses a quad two-input NAND Schmitt trigger IC (CD4093B) with hysteresis for the measurement, and it is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software developed on the PC side through a USB interface, and an attractive graphical user interface (GUI) based system is developed on the PC side to provide the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in usual low-capacitance measurements. The hysteresis provided by the Schmitt trigger circuits enables reliable operation of the system by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another source of significant error. Real-life testing showed that the proposed system produces highly accurate capacitance measurements when compared with cutting-edge, high-end digital capacitance meters.

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

507 LOD Exploitation and Fast Silhouette Detection for Shadow Volumes

Authors: Mustafa S. Fawad, Wang Wencheng, Wu Enhua

Abstract:

Shadows add a great amount of realism to a scene, and many algorithms exist for generating them. Recently, shadow volumes (SVs) have earned a valuable position in the gaming industry. In light of this, we concentrate on simple but valuable initial steps for further optimization in SV generation, namely model simplification and silhouette edge detection and tracking. Shadow volume generation usually spends considerable time computing the boundary silhouette of the object, and if the object is complex then edge generation becomes much harder and slower. The challenge gets stiffer when real-time shadow generation and rendering are demanded. We investigated a way to use a real-time silhouette edge detection method, which takes advantage of spatial and temporal coherence, and to exploit the level-of-detail (LOD) technique for reducing the silhouette edges of the model, using the simplified version of the model for shadow generation and thereby speeding up the running time. These steps greatly reduce the execution time of shadow volume generation in real time and are easily applicable to any of the recently proposed SV techniques. Our main focus is to exploit the LOD and silhouette edge detection techniques, adapting them to further enhance shadow volume generation for real-time rendering.

Keywords: LOD, perception, shadow volumes, silhouette edge, spatial and temporal coherence.
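
A small sketch of basic silhouette edge detection for a triangle mesh: an edge is a silhouette edge when its two adjacent faces disagree on whether they face the light. The coherence-based tracking and LOD simplification discussed above sit on top of this test and are not reproduced; the mesh and light position are toy values.

import numpy as np

def silhouette_edges(vertices, triangles, light_pos):
    """Silhouette edges of a triangle mesh with respect to a point light:
    edges shared by exactly one light-facing and one back-facing triangle
    (or border edges belonging to a light-facing triangle)."""
    vertices = np.asarray(vertices, float)
    facing = []
    for tri in triangles:
        a, b, c = vertices[list(tri)]
        normal = np.cross(b - a, c - a)
        facing.append(np.dot(normal, light_pos - a) > 0.0)

    edge_count, edge_facing = {}, {}
    for f, tri in enumerate(triangles):
        for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            key = tuple(sorted(e))
            edge_count[key] = edge_count.get(key, 0) + 1
            edge_facing[key] = edge_facing.get(key, 0) + int(facing[f])

    return [e for e, n in edge_count.items()
            if (n == 2 and edge_facing[e] == 1) or (n == 1 and edge_facing[e] == 1)]

# Two triangles forming a folded quad; with the light on one side, the shared
# edge becomes a silhouette edge because only one face is light-facing.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 0, -1)]
tris = [(0, 1, 2), (0, 3, 1)]
print(silhouette_edges(verts, tris, light_pos=np.array([0.5, 0.5, 5.0])))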
