Search results for: Geometric error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1565

1145 Evaluating Hourly Sulphur Dioxide and Ground Ozone Simulated with the Air Quality Model in Lima, Peru

Authors: Odón R. Sánchez-Ccoyllo, Elizabeth Ayma-Choque, Alan Llacza

Abstract:

Sulphur dioxide (SO₂) and surface-ozone (O₃) concentrations are associated with diseases. The objective of this research is to evaluate the performance of the air-quality Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) at a horizontal resolution of 5 km × 5 km. For this purpose, the hourly SO₂ and O₃ measurements available at three air quality monitoring stations in Lima, Peru were used to validate the SO₂ and O₃ concentrations simulated with the WRF-Chem model for February 2018. For the quantitative evaluation of these simulations, statistical metrics were computed: the average of the simulations; the average of the measurements; the Mean Bias (MeB); the Mean Error (MeE); and the Root Mean Square Error (RMSE). These metrics indicated that the simulated SO₂ and O₃ values over-predicted the measurements. For SO₂, the MeB values ranged from 0.58 to 26.35 µg/m³, the MeE values from 8.75 to 26.5 µg/m³, and the RMSE values from 13.3 to 31.79 µg/m³; for O₃, the MeB values ranged from 37.52 to 56.29 µg/m³, the MeE values from 37.54 to 56.70 µg/m³, and the RMSE values from 43.05 to 69.56 µg/m³.
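
A minimal sketch of the evaluation metrics listed above; the arrays `obs` and `sim` are hypothetical stand-ins for the station measurements and the WRF-Chem output, not the paper's data.

```python
import numpy as np

def evaluation_metrics(obs, sim):
    """Return mean of simulations/observations, mean bias (MeB), mean error (MeE), RMSE."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    diff = sim - obs
    return {
        "mean_sim": sim.mean(),
        "mean_obs": obs.mean(),
        "MeB": diff.mean(),               # signed mean bias
        "MeE": np.abs(diff).mean(),       # mean (absolute) error
        "RMSE": np.sqrt((diff ** 2).mean()),
    }

# Hypothetical hourly O3 values (ug/m3) at one monitoring station
obs = [12.0, 18.5, 25.1, 30.2]
sim = [40.3, 55.0, 70.2, 80.1]
print(evaluation_metrics(obs, sim))
```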

Keywords: Ground-ozone, Lima, Sulphur dioxide, WRF-Chem.

1144 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids and speech recognition. The motivation behind this work is to obtain a clean speech signal of higher quality by applying an optimal noise cancellation technique. Real-time adaptive filtering algorithms are among the best candidates of all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter. Experiments were performed on noisy data prepared by adding AWGN, Babble and Pink noise to clean speech samples at -5 dB, 0 dB, 5 dB and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal to Noise Ratio (SNR) and SNR Loss. Based on this evaluation, the RLS algorithm was found to be the better noise cancellation technique for speech signals.
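
A minimal sketch of the RLS update as used in an adaptive noise canceller, assuming a noisy signal `d` and a noise reference `x`; signal names, filter order and forgetting factor are illustrative, not the paper's settings.

```python
import numpy as np

def rls_canceller(d, x, order=8, lam=0.999, delta=1e2):
    """RLS adaptive noise canceller: returns the a priori error signal,
    which approximates the enhanced (clean) speech."""
    w = np.zeros(order)           # filter coefficients
    P = delta * np.eye(order)     # inverse correlation matrix
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]                 # most recent reference samples
        k = P @ u / (lam + u @ P @ u)            # gain vector
        e[n] = d[n] - w @ u                      # a priori error = enhanced output
        w = w + k * e[n]                         # coefficient update
        P = (P - np.outer(k, u @ P)) / lam       # inverse-correlation update
    return e
```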

Keywords: Adaptive filter, Adaptive Noise Canceller, Mean Squared Error, Noise reduction, NLMS, RLS, SNR, SNR Loss.

1143 Reduction of Linear Time-Invariant Systems Using Routh-Approximation and PSO

Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil

Abstract:

Order reduction of linear time-invariant systems using two methods, one exploiting the advantages of Routh approximation and the other an evolutionary technique, is presented in this paper. In the Routh approximation method, the denominator of the reduced order model is obtained by Routh approximation, while the numerator is determined by the indirect approach of retaining the time moments and/or Markov parameters of the original system. With this method the reduced order model is guaranteed to be stable if the original high order model is stable. In the second method, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through numerical examples.
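
A minimal sketch of the ISE objective that such a PSO would minimize; the fourth-order system and the second-order candidate below are hypothetical, not the paper's examples.

```python
import numpy as np
from scipy import signal

def ise(num_full, den_full, num_red, den_red, t_end=10.0, n=2000):
    """Integral Squared Error between unit-step responses of the original
    and reduced-order transfer functions."""
    t = np.linspace(0.0, t_end, n)
    _, y_full = signal.step(signal.TransferFunction(num_full, den_full), T=t)
    _, y_red = signal.step(signal.TransferFunction(num_red, den_red), T=t)
    dt = t[1] - t[0]
    return float(np.sum((y_full - y_red) ** 2) * dt)

# Hypothetical 4th-order plant and a 2nd-order candidate with matched DC gain
print(ise([1, 4], [1, 6, 11, 6, 1], [0.8, 3.2], [1, 4.3, 0.8]))
```

In a PSO setting, each particle would encode the reduced numerator and denominator coefficients, and this ISE value would serve as its fitness.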

Keywords: Model Order Reduction, Markov Parameters, Routh Approximation, Particle Swarm Optimization, Integral Squared Error, Steady State Stability.

1142 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform

Authors: Shih-Wen Hsiao, Yi-Cheng Tsao

Abstract:

In the fields of reverse engineering and the creative industries, 3D scanning to obtain the geometric form of objects is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices keeps increasing and complementary applications are becoming more abundant, public adoption of 3D scanning is still limited by the relatively high price of the devices. On the other hand, the Kinect, released by Microsoft, is known for its powerful functions, considerably low price, and complete technology and database support. Related studies can therefore be carried out with the Kinect at acceptable cost and data precision. Because the Kinect uses an optical mechanism to extract depth information, it is limited by the straight path of light. Thus, when a single Kinect is used for 3D scanning, the object must be captured sequentially from various angles to obtain its complete 3D information, and an integration process that combines the 3D data from different angles by certain algorithms is also required. This sequential scanning costs much time, and the complex integration process often encounters technical problems. Therefore, this paper applies multiple Kinects simultaneously to develop a rapid 3D mannequin scan platform and proposes suggestions on the number and angles of the Kinects. A method of establishing the coordinate system based on the relation between the mannequin and the specifications of the Kinect is proposed, and a suggestion for the angles and number of Kinects is described. An experiment applying multiple Kinects to the scanning of a 3D mannequin was implemented with the Microsoft API, and the results show that the scanning time and the technical threshold can be reduced for the fashion and garment design industries.

Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple kinect sensor.

1141 Application of Biometrics to Obtain High Entropy Cryptographic Keys

Authors: Sanjay Kanade, Danielle Camara, Dijana Petrovska-Delacretaz, Bernadette Dorizzi

Abstract:

In this paper, a two-factor scheme is proposed to generate cryptographic keys directly from biometric data, which, unlike passwords, are strongly bound to the user. The hash value of the reference iris code is used as a cryptographic key and its length depends only on the hash function, being independent of any other parameter. The entropy of such keys is 94 bits, which is much higher than that of any other comparable system. The most important and distinct feature of this scheme is that it regenerates the reference iris code when a genuine iris sample and the correct user password are provided. Since iris codes obtained from two images of the same eye are not exactly the same, error correcting codes (Hadamard code and Reed-Solomon code) are used to deal with the variability. The scheme proposed here can be used to provide keys for a cryptographic system and/or for user authentication. The performance of this system is evaluated on two publicly available iris biometrics databases, the CBS and ICE databases. The operating point of the system (the values of the False Acceptance Rate (FAR) and False Rejection Rate (FRR)) can be set by properly selecting the error correction capacity (ts) of the Reed-Solomon codes; e.g., on the ICE database, at ts = 15, the FAR is 0.096% and the FRR is 0.76%.
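
As an illustration of the key-derivation step only, a minimal sketch is given below; the iris-code bitstring is hypothetical, and the Hadamard/Reed-Solomon error-correction stage that regenerates the reference code from a fresh sample and the password is not reproduced.

```python
import hashlib

def key_from_iris_code(iris_code_bits: str) -> bytes:
    """The hash of the (regenerated) reference iris code serves as the key;
    the key length depends only on the hash function chosen."""
    return hashlib.sha256(iris_code_bits.encode("ascii")).digest()

# Hypothetical reference iris code (real iris codes are ~2048 bits)
reference_code = "0110100111010010" * 8
print(key_from_iris_code(reference_code).hex())
```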

Keywords:

1140 Fuzzy Controller Design for TCSC to Improve Power Oscillations Damping

Authors: M Nayeripour, H. Khorsand, A. Roosta, T. Niknam, E. Azad

Abstract:

Series compensators have been used for many years to increase the stability and loadability of transmission lines. They compensate the retarded or advanced voltage drop of a transmission line by placing an advanced or retarded voltage in series with it, thereby compensating the effective reactance and increasing the loadability of the line. In this paper, two fuzzy control methods for the TCSC controller, one based on power reference tracking and the other on impedance reference tracking, are developed in order to increase loadability and improve the power oscillation damping of the system. In these methods, the firing angle of the thyristors is determined directly through dedicated rule bases with the error and the change of error as inputs. Simulation results for a two-area four-machine power system show good power oscillation damping performance. Comparison with a classical PI controller shows a faster system response in damping power oscillations.
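
A minimal Sugeno-style sketch of a rule base driven by the error and change of error is shown below; the membership functions, rule consequents and firing-angle increments are illustrative assumptions, not the paper's Mamdani rule base or TCSC model.

```python
def tri(x, a, b, c):
    """Triangular membership function."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Normalised linguistic terms for error and change of error
MF = {"NEG": (-1.5, -1.0, 0.0), "ZERO": (-1.0, 0.0, 1.0), "POS": (0.0, 1.0, 1.5)}

# Illustrative rule base: consequent is a crisp change in firing angle (degrees)
RULES = {
    ("NEG", "NEG"): -4.0, ("NEG", "ZERO"): -2.0, ("NEG", "POS"): 0.0,
    ("ZERO", "NEG"): -2.0, ("ZERO", "ZERO"): 0.0, ("ZERO", "POS"): 2.0,
    ("POS", "NEG"): 0.0,   ("POS", "ZERO"): 2.0, ("POS", "POS"): 4.0,
}

def fuzzy_delta_alpha(error, d_error):
    """Weighted average of rule consequents (Sugeno-style defuzzification)."""
    num = den = 0.0
    for (e_term, de_term), delta in RULES.items():
        w = min(tri(error, *MF[e_term]), tri(d_error, *MF[de_term]))
        num += w * delta
        den += w
    return num / den if den > 0 else 0.0

print(fuzzy_delta_alpha(0.4, -0.2))   # hypothetical per-unit error inputs
```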

Keywords: TCSC, Two area network, Fuzzy controller, Power oscillation damping.

1139 A Nano-Scaled SRAM Guard Band Design with Gaussian Mixtures Model of Complex Long Tail RTN Distributions

Authors: Worawit Somha, Hiroyuki Yamauchi

Abstract:

This paper proposes, for the first time, how the challenges facing guard-band designs, including the margin assist-circuit scheme for the screening test, should be addressed in the coming process generations. The increased impact of screening errors is discussed based on the proposed statistical analysis models. It is shown that the yield loss caused by misjudgment in the screening test becomes five orders of magnitude larger than that of the conventional case when the amplitude of the variations caused by random telegraph noise (RTN) approaches that of random dopant fluctuation. Three fitting methods that approximate the complex RTN-induced Gamma mixture distributions by a simple Gaussian mixture model (GMM) are proposed and compared. It is verified that the proposed methods can reduce the error of the fail-bit predictions by four orders of magnitude.
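
A minimal sketch of approximating a long-tail distribution with a GMM fitted by the EM algorithm; the synthetic Gamma-mixture samples below are stand-ins, not the paper's device data, and the number of components is arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic long-tail Gamma mixture standing in for RTN-induced variation
samples = np.concatenate([rng.gamma(2.0, 5.0, 9000), rng.gamma(6.0, 12.0, 1000)])

# Approximate the long-tail distribution with a Gaussian mixture (EM fit)
gmm = GaussianMixture(n_components=3, random_state=0).fit(samples.reshape(-1, 1))
print(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel())
```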

Keywords: Mixtures of Gaussian, Random telegraph noise, EM algorithm, Long-tail distribution, Fail-bit analysis, Static random access memory, Guard band design.

1138 Software Effort Estimation Models Using Radial Basis Function Network

Authors: E. Praynlin, P. Latha

Abstract:

Software effort estimation is the process of estimating the effort required to develop software. From the estimated effort, the cost and schedule required to build the software can be determined. An accurate estimate helps the developer allocate resources appropriately in order to avoid cost and schedule overruns. Several methods are available to estimate the effort, among which soft-computing-based methods play a prominent role. Software cost estimation involves a great deal of uncertainty, and among soft computing methods, neural networks are good at handling uncertainty. In this paper, the Radial Basis Function Network (RBFN) is compared with the back propagation network; the results are validated using six data sets, and it is found that the RBFN is better suited to estimating the effort. The results are validated using two tests: the error test and the statistical test.
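
For reference, the Mean Magnitude of Relative Error (MMRE) named in the keywords can be sketched as follows; the effort values are hypothetical.

```python
import numpy as np

def mmre(actual_effort, predicted_effort):
    """Mean Magnitude of Relative Error, a standard effort-estimation metric."""
    actual = np.asarray(actual_effort, float)
    predicted = np.asarray(predicted_effort, float)
    return np.mean(np.abs(actual - predicted) / actual)

# Hypothetical person-month efforts for three hold-out projects
print(mmre([12.0, 30.0, 7.5], [10.5, 34.0, 8.0]))
```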

Keywords: Software cost estimation, Radial Basis Function Network (RBFN), Back propagation function network, Mean Magnitude of Relative Error (MMRE).

1137 Performance Analysis of MIMO-OFDM Using Convolution Codes with QAM Modulation

Authors: I Gede Puja Astawa, Yoedy Moegiharto, Ahmad Zainudin, Imam Dui Agus Salim, Nur Annisa Anggraeni

Abstract:

The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of OFDM with the Space Time Block Code (STBC) diversity technique, using QAM modulation and a code rate of ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy per bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath channel. Achieving a BER of 10⁻³ requires 10 dB SNR in the SISO-OFDM scheme and 10 dB in the 2x2 MIMO-OFDM scheme, while the 4x4 MIMO-OFDM scheme requires 5 dB; adding convolutional coding to the 4x4 MIMO-OFDM scheme improves performance to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the uncoded 4x4 MIMO-OFDM system, a saving of 7 dB relative to the 2x2 MIMO-OFDM system, and significant power savings relative to the SISO-OFDM system.

Keywords: Convolution code, OFDM, MIMO, QAM, BER.

1136 HaskellFL: A Tool for Detecting Logical Errors in Haskell

Authors: Vanessa Vasconcelos, Mariza A. S. Bigonha

Abstract:

Understanding and using the functional paradigm is a challenge for many programmers. Looking for logical errors in code may take a lot of a developer's time when a program grows in size. To facilitate both processes, this paper presents HaskellFL, a tool that uses fault localization techniques to locate logical errors in Haskell code. The Haskell subset used in this work is sufficiently expressive for those studying functional programming to get immediate help debugging their code and to answer questions about key concepts associated with the functional paradigm. HaskellFL was tested against functional programming assignments submitted by students enrolled in the Functional Programming class at the Federal University of Minas Gerais and against exercises from the Exercism Haskell track that are publicly available on GitHub. This work also evaluated the effectiveness of two fault localization techniques, Tarantula and Ochiai, in the Haskell context. The EXAM score was chosen to evaluate the tool's effectiveness, and the results showed that HaskellFL reduced the effort needed to locate an error in all tested scenarios. The results also showed that the Ochiai method was more effective than Tarantula.
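
For reference, the two suspiciousness formulas compared in the paper can be sketched as below; the coverage counts are hypothetical.

```python
import math

def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """Tarantula suspiciousness for one program entity (e.g., a line of code)."""
    f = failed_cov / total_failed if total_failed else 0.0
    p = passed_cov / total_passed if total_passed else 0.0
    return f / (f + p) if (f + p) > 0 else 0.0

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness for one program entity."""
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom > 0 else 0.0

# Entity executed by 4 of 5 failing tests and 1 of 20 passing tests
print(tarantula(4, 1, 5, 20), ochiai(4, 1, 5))
```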

Keywords: Debug, fault localization, functional programming, Haskell.

1135 Appraisal of Relativistic Effects on GNSS Receiver Positioning

Authors: I. Yakubu, Y. Y. Ziggah, E. A. Gyamera

Abstract:

The Global Navigation Satellite System (GNSS) era started with the launch of the United States Department of Defense Global Positioning System (GPS). GNSS has grown over the years to include GLONASS (Russia), Galileo (European Union) and BeiDou (China). Any GNSS architecture consists of three major segments: the space, control and user segments. Errors such as multipath, ionospheric and tropospheric effects, satellite clocks, receiver noise and orbit errors (relativistic effects) have significant effects on GNSS positioning. To obtain centimetre-level accuracy, the impact of the relative motion of the satellites and the Earth needs to be taken into account. This paper discusses the relevance of the theory of relativity as a source of error for GNSS receiver position fixes, based on the available relevant literature. The review reveals that, due to relativity, time dilation, the gravitational frequency shift and the Sagnac effect significantly influence the use of GNSS receivers for positioning, with an error range of about ±2.5 m based on pseudo-range computation.
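
As a back-of-the-envelope illustration of the first two effects (the Sagnac correction depends on receiver and satellite geometry and is omitted), the well-known satellite clock rate offset can be computed as follows; the orbital values are nominal GPS figures, not taken from the paper.

```python
# Idealised circular GPS orbit; standard relativistic clock-rate corrections.
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0        # speed of light, m/s
r_earth = 6.371e6        # mean Earth radius, m
r_sat = 2.6561e7         # GPS orbital radius, m (about 20 200 km altitude)

v = (GM / r_sat) ** 0.5                                 # orbital speed
time_dilation = -v ** 2 / (2 * c ** 2)                  # special relativity: clock runs slow
grav_shift = GM / c ** 2 * (1 / r_earth - 1 / r_sat)    # general relativity: clock runs fast

seconds_per_day = 86_400
net_us_per_day = (time_dilation + grav_shift) * seconds_per_day * 1e6
print(f"net satellite clock drift ~ {net_us_per_day:.1f} microseconds/day")  # about +38
```

Left uncorrected, a drift of this size accumulates to kilometres of pseudo-range error per day, which is why the corrections are built into GNSS signal processing.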

Keywords: GNSS, relativistic effects, pseudo-range, accuracy.

1134 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Share Channel Receiver under Fading Channel

Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis

Abstract:

Cellular Vehicle to Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Since establishing a simulator for C-V2X communications is an essential preliminary step to achieve reliable and stable communication links, this paper proposes a complete framework of a link-level simulator based on the 3GPP specifications for the Physical Sidelink Share Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., sliding window in channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
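
A minimal sketch of the MMSE equalization step for a single resource element with a hypothetical 2×2 fading channel; the sliding-window channel estimator and the full PSSCH processing chain are not reproduced.

```python
import numpy as np

def mmse_equalize(y, H, noise_var):
    """MMSE equalizer for y = H x + n with unit-power symbols:
    x_hat = (H^H H + sigma^2 I)^{-1} H^H y."""
    n_tx = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return W @ y

rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)            # QPSK symbols
y = H @ x + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(mmse_equalize(y, H, noise_var=0.005))             # close to the transmitted x
```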

Keywords: C-V2X, 5G NR, PSSCH, MMSE equalization, link-level simulator.

1133 On Hyperbolic Gompertz Growth Model

Authors: Angela Unna Chukwu, Samuel Oluwafemi Oyamakin

Abstract:

We propose a Hyperbolic Gompertz Growth Model (HGGM), developed by introducing an allometric shape parameter. This was achieved by convoluting a hyperbolic sine function with the intrinsic rate of growth in the classical Gompertz growth equation. The resulting deterministic integral solution was reprogrammed into a statistical model and used to model the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the proposed approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with the normality assumption, while the independence of the error term was confirmed using the runs test. The mean function of top height/Dbh over age predicted the observed values more closely under the hyperbolic Gompertz growth model than under the source model (the classical Gompertz growth model), and the R², adjusted R², MSE and AIC results confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
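
For orientation, the classical Gompertz baseline referred to as the source model can be fitted as sketched below; the age/height data are hypothetical, and the hyperbolic-sine modification that defines the HGGM itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    """Classical Gompertz growth curve; the HGGM adds a hyperbolic-sine
    shape term to the intrinsic rate, which is not reproduced here."""
    return a * np.exp(-b * np.exp(-c * t))

# Hypothetical age (years) vs. top height (m) observations
age = np.array([2, 4, 6, 8, 10, 12, 15, 20], float)
height = np.array([1.8, 4.5, 7.9, 10.8, 13.0, 14.6, 16.3, 17.8])

params, _ = curve_fit(gompertz, age, height, p0=[20.0, 3.0, 0.2])
residuals = height - gompertz(age, *params)
print(params, np.mean(residuals ** 2))   # fitted (a, b, c) and MSE
```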

Keywords: Height, Dbh, forest, Pinus caribaea, hyperbolic, gompertz.

1132 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network

Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy

Abstract:

This paper aims to provide an interpretation of artificial neural networks (ANNs) and explore some of its implications. The interpretation views an ANN as a memory which encodes instances of experience. An experiment explores the behavior of encoding and retrieval of instances from memory. A localised-representation ANN is created that allows control over encoding and over the retrieved memory sample size, and it is evaluated on the MNIST digits dataset. The relationship between input familiarity, conflict within retrieved samples, and error rates is described and demonstrated to be an effective driver for memory encoding. Results indicate that selective encoding and retrieval samples that allow detection of memory conflicts produce optimal performance, and that error rates are normally distributed with input familiarity and conflict. By using input familiarity and sample consistency to guide memory encoding, the number of encoding trials on the dataset was reduced to 18.33% of the training data while maintaining good recognition performance on the test data.

Keywords: Artificial Neural Networks, ANNs, representation, memory, conflict monitoring, confidence.

1131 Model Reduction of Linear Systems by Conventional and Evolutionary Techniques

Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil

Abstract:

The reduction of Single Input Single Output (SISO) continuous systems to a Reduced Order Model (ROM), using a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Mihailov stability criterion and the continued fraction expansion (CFE) technique: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of the continued fraction expansion. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through a numerical example.

Keywords: Reduced Order Modeling, Stability, Continued Fraction Expansions, Mihailov Stability Criterion, Particle Swarm Optimization, Integral Squared Error.

1130 Reducing Unplanned Extubation in Psychiatric LTC

Authors: Jih-Rue Pan, Feng-Chuan Pan

Abstract:

Today's healthcare industries have become more patient-centric than profession-centric, and the quality of healthcare and patient safety are therefore major concerns in modern healthcare facilities. An unplanned extubation (UE) may be detrimental to the patient's life and is thus one of the major indices of patient safety and healthcare quality. A high UE rate defeats not only the healthcare quality and patient safety policy but also the nurses' morale and job satisfaction. The UE problem in a psychiatric hospital is unique and can be a tough challenge for healthcare professionals, because the patients mostly lack communication capabilities. This essay reports on a project organized to reduce the UE rate from the then-current 2.3% to a lower, satisfactory level in the long-term care units of a psychiatric hospital. The project was conducted between March 1st, 2011 and August 31st, 2011. Based on the error information gathered from various units of the hospital, the team analyzed the root causes and proposed possible solutions in its meetings. Four solutions were agreed by consensus and launched in the units in question. The UE rate was reduced to 0.17%. The experience from this project, and the procedures and tools adopted, should be a good reference for other hospitals.

Keywords: Unplanned extubation, patient safety, error information

1129 An Improved Performance of the SRM Drives Using Z-Source Inverter with the Simplified Fuzzy Logic Rule Base

Authors: M. Hari Prabhu

Abstract:

This paper presents the performance of Switched Reluctance Motor (SRM) drives using a Z-Source Inverter (ZSI) with a simplified Fuzzy Logic Controller (FLC) rule base and an output scaling factor (SF) self-tuning mechanism. The aim is to reduce the program complexity of the controller by reducing the number of fuzzy sets of the membership functions (MFs), without losing system performance and stability, via an adjustable controller gain. The ZSI exhibits both voltage-buck and voltage-boost capability; it reduces line harmonics, improves reliability, and extends the output voltage range. The output SF of the controller can be tuned continuously by a gain-updating factor, whose value is derived from fuzzy logic with the plant error and the error change ratio as input variables. Results obtained on a four-phase 6/8-pole SRM using the dSPACE DS1104 platform show the feasibility and effectiveness of the devised methods, and the performance of the proposed controllers is compared with their conventional counterparts.

Keywords: Fuzzy logic controller, scaling factor (SF), switched reluctance motor (SRM), variable-speed drives.

1128 Adaptive Pulse Coupled Neural Network Parameters for Image Segmentation

Authors: Thejaswi H. Raya, Vineetha Bettaiah, Heggere S. Ranganath

Abstract:

For over a decade, the Pulse Coupled Neural Network (PCNN) based algorithms have been successfully used in image interpretation applications including image segmentation. There are several versions of the PCNN based image segmentation methods, and the segmentation accuracy of all of them is very sensitive to the values of the network parameters. Most methods treat PCNN parameters like linking coefficient and primary firing threshold as global parameters, and determine them by trial-and-error. The automatic determination of appropriate values for linking coefficient, and primary firing threshold is a challenging problem and deserves further research. This paper presents a method for obtaining global as well as local values for the linking coefficient and the primary firing threshold for neurons directly from the image statistics. Extensive simulation results show that the proposed approach achieves excellent segmentation accuracy comparable to the best accuracy obtainable by trial-and-error for a variety of images.

Keywords: Automatic Selection of PCNN Parameters, Image Segmentation, Neural Networks, Pulse Coupled Neural Network

1127 The Riemann Barycenter Computation and Means of Several Matrices

Authors: Miklos Palfia

Abstract:

An iterative definition of any n-variable mean function is given in this article, which iteratively uses the two-variable form of the corresponding mean function. This extension method avoids recursivity, an important improvement compared with certain recursive formulas given earlier by Ando-Li-Mathias and Petz-Temesi. Furthermore, it is conjectured here that this iterative algorithm coincides with the solution of the Riemann centroid minimization problem. Simulations are given to compare the convergence rates of the different algorithms in the literature, namely the gradient and Newton methods for the Riemann centroid computation.
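
The n-variable iteration builds on the familiar two-variable matrix geometric mean; a small numerical sketch of that building block is given below, with arbitrarily chosen positive-definite matrices.

```python
import numpy as np
from scipy.linalg import sqrtm

def geometric_mean(A, B):
    """Two-variable geometric mean of positive-definite matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    A_half = sqrtm(A)
    A_half_inv = np.linalg.inv(A_half)
    return A_half @ sqrtm(A_half_inv @ B @ A_half_inv) @ A_half

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean(A, B)
print(np.real_if_close(G))
print(np.allclose(G @ np.linalg.inv(A) @ G, B))   # defining property G A^{-1} G = B
```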

Keywords: Means, matrix means, operator means, geometric mean, Riemannian center of mass.

1126 Computation of Natural Logarithm Using Abstract Chemical Reaction Networks

Authors: Iuliia Zarubiieva, Joyun Tseng, Vishwesh Kulkarni

Abstract:

Recent research has focused on nucleic acids as a substrate for designing biomolecular circuits for in situ monitoring and control. A common approach is to express them as a set of idealised abstract chemical reaction networks (ACRNs). Here, we present new results on how abstract chemical reactions, viz. catalysis, annihilation and degradation, can be used to implement a circuit that accurately computes the logarithm function using the Arithmetic-Geometric Mean (AGM) method, which has not previously been used in conjunction with ACRNs.
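
The underlying AGM identity can be checked numerically as below; this is ordinary floating-point arithmetic, not the paper's chemical reaction network implementation.

```python
from math import pi, log, sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-Geometric Mean of a and b."""
    while abs(a - b) > tol * max(a, b):
        a, b = (a + b) / 2.0, sqrt(a * b)
    return a

def ln_agm(x, m=40):
    """ln(x) via the classical AGM identity:
    ln(x) ~ pi / (2 * AGM(1, 4/s)) - m*ln(2), with s = x * 2**m made large."""
    s = x * 2.0 ** m
    return pi / (2.0 * agm(1.0, 4.0 / s)) - m * log(2.0)

print(ln_agm(10.0), log(10.0))   # the two values agree to many digits
```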

Keywords: Abstract chemical reaction network, DNA strand displacement, natural logarithm.

1125 Solar Energy Collection using a Double-layer Roof

Authors: S. Kong Wang

Abstract:

The purpose of this study is to investigate the efficiency of a double-layer roof in collecting solar energy, for applications such as raising the high-end temperature of an organic Rankine cycle (ORC); a by-product of the solar roof is a reduction in building air-conditioning loads. An experimental apparatus was arranged to evaluate how well the solar roof absorbs solar energy. The flow channel is basically formed by an aluminum plate on top of a plywood plate. The geometric configurations in which the energy absorption is analyzed include: a bare uncovered aluminum plate, a glass-covered aluminum plate, a glass-covered and black-painted aluminum plate, a plate of variable length, a flow channel with stuffed material (in an attempt to enhance heat conduction), and a flow channel with variable slant angles. The experimental results show that the energy collection efficiency varies from 0.6% to 11% for these configurations. An additional CFD simulation study investigates the effect of adding fins to the aluminum plate; owing to greatly enhanced heat conduction, the efficiency can reach about 23% if 50 fins are installed. The study shows that a double-layer roof can efficiently absorb solar energy and substantially reduce building air-conditioning loads. On the high end of an organic Rankine cycle, a solar pond is used to replace the warm surface water of the sea, since OTEC (ocean thermal energy conversion) is the driving energy for the ORC. The energy collected from the double-layer solar roof can be pumped into the pond to raise the pond temperature, as the pond surface area is effectively increased by nearly one-fourth of the total area of the double-layer solar roof. The effect of raising the solar pond temperature is especially pronounced if double-layer solar roofs are installed over a community area.

Keywords: solar energy collection, double-layer solar roof, energy conservation, ORC, OTEC

1124 Study of Effect of Gear Tooth Accuracy on Transmission Mount Vibration

Authors: Kalyan Deepak Kolla, Ketan Paua, Rajkumar Bhagate

Abstract:

Transmission dynamics play a major role in customer perception of the product, in both the sense of touch and the quality of sound. The quantity and quality of the perceived sound are largely governed by the whine noise of the engaged gears. Whine noise is tonal in nature, and tonal noises cause fatigue and irritation to customers, which in turn affects the perceived quality of the product. Transmission error is the usual suspect for whine noise and can be caused by misalignments, tolerances and manufacturing variability. In-cabin noise is also highly sensitive to the gear design. As the details of gear tooth design and manufacturing are on the order of microns, anything outside the tolerance zone, either in design or in manufacturing, will cause whine noise. It will also cause high variation in stress and deformation due to changes in load, leading to fatigue failure of the gears. Hence gear design and development take priority in the transmission development process. This paper studies such variability by considering five pairs of helical spur gears and their effect on the transmission error, the contact pattern and the vibration level of the transmission.

Keywords: Gears, whine noise, manufacturing variability, mount vibration variability.

1123 Optimal Channel Equalization for MIMO Time-Varying Channels

Authors: Ehab F. Badran, Guoxiang Gu

Abstract:

We consider optimal channel equalization for MIMO (multi-input/multi-output) time-varying channels in the sense of MMSE (minimum mean-squared-error), where the observation noise can be non-stationary. We show that all ZF (zero-forcing) receivers can be parameterized in an affine form which completely eliminates the ISI (inter-symbol interference), and optimal channel equalizers can be designed, among all ZF receivers, through minimization of the MSE (mean-squared error) between the detected signals and the transmitted signals. We demonstrate that the optimal channel equalizer is a modified Kalman filter, and show that under the AWGN (additive white Gaussian noise) assumption, the proposed optimal channel equalizer minimizes the BER (bit error rate) among all possible ZF receivers. Our results are applicable to optimal channel equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers, OFDM (orthogonal frequency division multiplexing), and DS (direct sequence) CDMA (code division multiple access) wireless data communication systems. A design algorithm for optimal channel equalization is developed, and several simulation examples are worked out to illustrate the proposed design algorithm.

Keywords: Channel equalization, Kalman filtering, Time-varying systems.

1122 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models

Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand

Abstract:

Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities: biased algorithms can lead to misclassification and to reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from those same populations, creating a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health record (EHR) datasets representing 17,864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models on both metrics. These results suggest that linear models can sometimes be the optimal choice for imputing laboratory variables in terms of imputation efficiency and the uncertainty of predicted values.
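
A minimal sketch of scoring imputers by RMSE and MAPE on synthetic data with values masked at random; the two imputers shown are simple stand-ins, not the six techniques compared in the study.

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(loc=100.0, scale=15.0, size=(500, 5))   # hypothetical lab panel
mask = rng.random(X_true.shape) < 0.2                       # 20% missing at random
X_missing = X_true.copy()
X_missing[mask] = np.nan

def score(X_imputed):
    """RMSE and MAPE evaluated only on the masked (originally known) entries."""
    err = X_imputed[mask] - X_true[mask]
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / X_true[mask])) * 100.0
    return rmse, mape

for name, imputer in [("mean (linear-style baseline)", SimpleImputer(strategy="mean")),
                      ("kNN (non-linear)", KNNImputer(n_neighbors=5))]:
    print(name, score(imputer.fit_transform(X_missing)))
```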

Keywords: EHR, Machine Learning, imputation, laboratory variables, algorithmic bias.

1121 An Efficient Burst Errors Combating for Image Transmission over Mobile WPANs

Authors: Mohsen A. M. El-Bendary, Mostafa A. R. El-Tokhy

Abstract:

This paper presents an efficient burst-error-spreading tool and studies a vital issue in wireless communications: the transmission of images over wireless networks. IEEE 802.15.4 (ZigBee) is a short-range communication standard that can be used for short-distance multimedia transmission. The ZigBee network is a Wireless Personal Area Network (WPAN) and needs a strong interleaving mechanism for protection against error bursts; it is also a low-power technology used in Wireless Sensor Network (WSN) implementations. This paper presents a chaotic interleaving scheme as a data randomization tool for this purpose. The scheme is based on the chaotic Baker map. The effects of mobility on image transmission are studied at different velocities using Jakes' model. A comparison between the proposed chaotic interleaving scheme and the traditional block and convolutional interleaving schemes for image transmission over a correlated fading channel is presented. The simulation results show the superiority of the proposed chaotic interleaving scheme over the traditional schemes.
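
The idea of a chaotic interleaver can be sketched as below; a logistic-map-driven permutation is used here as a simplified stand-in, since the discretized Baker map used in the paper is not reproduced.

```python
import numpy as np

def chaotic_permutation(n, x0=0.37, r=3.99):
    """Permutation derived from the rank order of a logistic-map sequence
    (a simplified stand-in for the chaotic Baker map)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return np.argsort(x)

def interleave(bits, perm):
    return bits[perm]

def deinterleave(bits, perm):
    out = np.empty_like(bits)
    out[perm] = bits
    return out

bits = np.random.randint(0, 2, 64)
perm = chaotic_permutation(bits.size)
scrambled = interleave(bits, perm)                       # spreads error bursts
assert np.array_equal(deinterleave(scrambled, perm), bits)  # lossless
```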

Keywords: WPANs, Burst Errors, Mobility, Interleaving Techniques, Fading channels.

1120 Design of Parity-Preserving Reversible Logic Signed Array Multipliers

Authors: Mojtaba Valinataj

Abstract:

Reversible logic, as a new and favorable design domain, can be used in various fields, especially in building quantum computers, because of its speed and very low power consumption. However, its susceptibility to a variety of environmental effects may lead to incorrect results. In this paper, because of the importance of the multiplication operation in various computing systems, novel reversible logic array multipliers with error detection capability are proposed by incorporating parity-preserving gates. New designs are presented for the two main parts of an array multiplier, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper achieves 12%, 24%, and 26% improvements in the number of constant inputs, the number of required gates, and the quantum cost, respectively, compared to the previous design. Moreover, the best proposed design is generalized to n×n multipliers with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.

Keywords: Array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic.

1119 Role of Membership Functions in Fuzzy Logic for Prediction of Shoot Length of Mustard Plant Based on Residual Analysis

Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri

Abstract:

The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at maturity. The growth of a mustard plant depends on several of its parameters, such as shoot length, number of leaves, number of roots and root length. As the plant grows, some leaves may fall and new leaves may appear, so the leaf count cannot be used to develop a relationship with the seed weight at the mature stage of the plant. It is also not possible to measure the number of roots and the root length during the growing stage without harming the plant, as the roots go deeper and deeper into the soil. Only the shoot length, which increases over time, can be measured at different time instants. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant. Pollution, water, soil, distance and crop management may also be dominant factors in the growth of the plant and its productivity. Considering all these parameters, the growth of the plant is highly uncertain, so a fuzzy environment can be considered for predicting the shoot length at maturity. Fuzzification of the data is based on certain membership functions. Here an effort has been made to fuzzify the original data using the Gaussian, triangular, s-, trapezoidal and L-functions. All fuzzified data are then defuzzified to obtain the normal form. Finally, an error analysis (calculation of the forecasting error and the average error) indicates which membership function is appropriate for fuzzifying the data and for predicting the shoot length at maturity. The result is also verified using residual analysis (absolute residual, maximum of absolute residual, mean absolute residual, mean of mean absolute residual, median of absolute residual and standard deviation).
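
A minimal sketch of some of the membership functions named above, applied to hypothetical shoot-length values; the parameter choices are illustrative, and the L-function and the defuzzification step are omitted.

```python
import numpy as np

def triangular(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def trapezoidal(x, a, b, c, d):
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def s_function(x, a, b):
    """Smooth S-shaped membership rising from 0 at a to 1 at b."""
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))

# Hypothetical shoot lengths (cm) fuzzified against illustrative fuzzy sets
shoot_length = np.array([8.0, 15.0, 22.0, 30.0])
print(triangular(shoot_length, 10, 20, 30))
print(trapezoidal(shoot_length, 10, 15, 25, 30))
print(gaussian(shoot_length, 20, 5))
print(s_function(shoot_length, 10, 30))
```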

Keywords: Fuzzification, defuzzification, Gaussian function, triangular function, trapezoidal function, s-function, membership function, residual analysis.

1118 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work the efficiency of completely nonparametric regression estimators, such as the loess estimator, is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean square error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, which is computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of times it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
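
A minimal sketch of backward elimination driven by AIC; ordinary least squares is used here as a simplified stand-in for the additive or nonparametric fits discussed in the abstract, and the data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_elimination_aic(X: pd.DataFrame, y):
    """Greedily drop the variable whose removal lowers AIC, until no drop helps."""
    selected = list(X.columns)
    best_aic = sm.OLS(y, sm.add_constant(X[selected])).fit().aic
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for var in list(selected):
            trial = [v for v in selected if v != var]
            aic = sm.OLS(y, sm.add_constant(X[trial])).fit().aic
            if aic < best_aic:
                best_aic, selected, improved = aic, trial, True
                break
    return selected, best_aic

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 4)), columns=["x1", "x2", "x3", "x4"])
y = 2 * X["x1"] - X["x3"] + rng.normal(scale=0.5, size=200)   # x2, x4 are noise
print(backward_elimination_aic(X, y))
```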

Keywords: Additive models, local polynomial regression, residuals, mean square error, variable selection.

1117 On Adaptive Optimization of Filter Performance Based on Markov Representation for Output Prediction Error

Authors: Hong Son Hoang, Remy Baraille

Abstract:

This paper addresses the problem of how one can improve the performance of a non-optimal filter. First, the theoretical question of a dynamical representation for a given time-correlated random process is studied. It is demonstrated that for a wide class of random processes having a canonical form there exists an equivalent dynamical system, in the sense that its output has the same covariance function. It is shown that the dynamical approach is more effective for simulating and estimating Markov and non-Markovian random processes and is computationally less demanding, especially as the dimension of the simulated processes increases. Numerical examples and estimation problems in low-dimensional systems are given to illustrate the advantages of the approach. A very useful application of the proposed approach is shown for the problem of state estimation in very high dimensional systems: a modified filter for data assimilation in an oceanic numerical model is presented, which proves to be very efficient owing to the introduction of a simple Markovian structure for the output prediction error process and the adaptive tuning of some parameters of the Markov equation.

Keywords: Statistical simulation, canonical form, dynamical system, Markov and non-Markovian processes, data assimilation.

1116 Comparison of Alternative Models to Predict Lean Meat Percentage of Lamb Carcasses

Authors: Vasco A. P. Cadavez, Fernando C. Monteiro

Abstract:

The objective of this study was to develop and compare alternative prediction equations for the lean meat proportion (LMP) of lamb carcasses. Forty (40) male lambs, 22 of the Churra Galega Bragançana Portuguese local breed and 18 of the Suffolk breed, were used. The lambs were slaughtered, and the carcasses were weighed approximately 30 min later to obtain the hot carcass weight (HCW). After cooling at 4 °C for 24 h, a set of seventeen carcass measurements was recorded. The left side of each carcass was dissected into muscle, subcutaneous fat, inter-muscular fat, bone, and remainder (major blood vessels, ligaments, tendons, and thick connective tissue sheets associated with muscles), and the LMP was evaluated as the dissected muscle percentage. Prediction equations for LMP were developed, and the fitting quality was evaluated through the coefficient of determination of estimation (R²e) and the standard error of estimate (SEE). Model validation was performed by k-fold cross-validation, and the coefficient of determination of prediction (R²p) and the standard error of prediction (SEP) were computed. The BT2 measurement was the best single predictor and accounted for 37.8% of the LMP variation with a SEP of 2.30%. The prediction of the LMP of lamb carcasses can be based on simple models, using the HCW and one fat thickness measurement as predictors.
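
A minimal sketch of the k-fold cross-validation used to obtain R²p and SEP; the HCW, fat-thickness and LMP values below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
hcw = rng.normal(12.0, 2.0, 40)            # hypothetical hot carcass weight (kg)
fat = rng.normal(4.0, 1.0, 40)             # hypothetical fat thickness (mm)
lmp = 70.0 - 2.5 * fat + 0.3 * hcw + rng.normal(0, 2.0, 40)   # synthetic LMP (%)

X = np.column_stack([hcw, fat])
pred = np.empty_like(lmp)
for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    model = LinearRegression().fit(X[train], lmp[train])
    pred[test] = model.predict(X[test])

resid = lmp - pred
r2_p = 1 - np.sum(resid ** 2) / np.sum((lmp - lmp.mean()) ** 2)   # R² of prediction
sep = np.sqrt(np.mean(resid ** 2))                                # standard error of prediction
print(r2_p, sep)
```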

Keywords: Bootstrap, Carcass, Lambs, Lean meat
