Search results for: speckle noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1175

785 Low Power CMOS Amplifier Design for Wearable Electrocardiogram Sensor

Authors: Ow Tze Weng, Suhaila Isaak, Yusmeeraz Yusof

Abstract:

Health care screening devices worldwide are trending toward portability and wearability, especially in the most common electrocardiogram (ECG) monitoring systems, because wearable screening devices do not restrict the patient's freedom and daily activities. While demand for low-power, low-cost biomedical systems on chip (SoC) is growing exponentially, front-end ECG sensors still suffer from flicker noise during low-frequency cardiac signal acquisition, 50 Hz power line electromagnetic interference, and large unstable input offsets that arise when the electrode-skin interface is not properly attached. In this paper, a high-performance CMOS amplifier for ECG sensors that is suitable for low-power wearable cardiac screening is proposed. The amplifier adopts the highly stable folded cascode topology, combined with an RC feedback circuit for low-frequency DC offset cancellation. Using 0.13 µm CMOS technology from Silterra, simulation results show that this front-end circuit can achieve a very low input-referred noise of 1 pV/√Hz and a high common mode rejection ratio (CMRR) of 174.05 dB. It also gives a voltage gain of 75.45 dB with a good power supply rejection ratio (PSRR) of 92.12 dB. The total power consumption is only 3 µW, making the circuit suitable for integration with a further back-end signal processing and classification stage in a low-power biomedical SoC.

Keywords: CMOS, ECG, amplifier, low power

Procedia PDF Downloads 246
784 High-Resolution ECG Automated Analysis and Diagnosis

Authors: Ayad Dalloo, Sulaf Dalloo

Abstract:

Electrocardiogram (ECG) recordings are prone to noise and artifacts that complicate analysis by physicians, creating ambiguity that can lead to diagnostic error. Such drawbacks may be overcome with high-resolution methods, such as discrete wavelet analysis and digital signal processing (DSP) techniques. The ECG signal analysis is implemented in three stages: ECG preprocessing, feature extraction, and classification, with the aim of realizing high-resolution ECG diagnosis and improved detection of abnormal heart conditions. The preprocessing stage removes spurious artifacts (noise) due to factors such as muscle contraction, motion, and respiration. ECG features are extracted by applying DSP and a suggested sloping-method technique. The measured features represent the peak amplitude values and intervals of the P, Q, R, S, R', and T waves on the ECG, together with other features such as ST elevation, QRS width, heart rate, electrical axis, and the QR and QT intervals. Classification is performed using these extracted features and the diagnostic criteria for cardiovascular diseases. The ECG diagnostic system was successfully applied to 12-lead ECG recordings of 12 cases and is provided with information enabling it to diagnose 15 different diseases. The computer's diagnoses agreed with the physician's in 90% of cases, and the time taken for diagnosis is 2 seconds. All operations are programmed in the Matlab environment.
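
As an illustration of the preprocessing-plus-feature-extraction pipeline the abstract describes, the sketch below detects R-peaks with a band-pass filter and peak picking. It is a minimal stand-in, not the authors' Matlab implementation: the filter cutoffs, threshold, and the detect_r_peaks helper are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Band-pass around the QRS energy, square to emphasize peaks, then pick
    peaks at physiological spacing. Cutoffs/threshold are illustrative."""
    b, a = butter(3, [5.0, 15.0], btype='band', fs=fs)
    filt = filtfilt(b, a, ecg)
    energy = filt ** 2
    peaks, _ = find_peaks(energy,
                          height=0.3 * energy.max(),  # crude adaptive threshold
                          distance=int(0.25 * fs))    # R-peaks >= 250 ms apart
    # heart rate follows from the RR intervals: 60 / mean(diff(peaks) / fs)
    return peaks
```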

Keywords: ECG diagnostic system, QRS detection, ECG baseline removal, cardiovascular diseases

Procedia PDF Downloads 295
783 Vertical Vibration Mitigation along Railway Lines

Authors: Jürgen Keil, Frank Walther

Abstract:

This article presents two innovative solutions for vertical vibration mitigation barriers, including experimental and numerical investigations of the completed barriers. Exposure to noise and vibration in people's daily lives continues to grow due to the quest for more mobility and flexibility. Neglected in the past, immissions caused by vibrations can lead, for example, to secondary noise or damage in adjacent buildings, and people can feel strongly affected by vibrations. Unlike in new construction, in existing infrastructure and buildings action can be taken almost only on the transmission path of those vibrations. Two solutions for mitigating vibrations along the transmission path are presented: the jet grouting method, and a new installation method (patent pending) using a prefabricated hollow box that is filled with vibration-reducing mats and driven down to depth. The essential results of the numerical and experimental investigations of the completed wave barriers are included as well. This article is based on the results of a field test, with the participation of Keller Holding, executed in the context of the European research project RIVAS (Railway Induced Vibration Abatement Solutions), and on a thesis completed at the Technical University of Dresden with the involvement of BAUGRUND DRESDEN Ingenieurgesellschaft mbH and Keller Holding GmbH.

Keywords: jet grouting, railway lines, vertical vibration mitigation, vibration reducing mats

Procedia PDF Downloads 402
782 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest, and any higher resolution is lost in this resampling. When the topographic features are computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance: any doubling of window size in each direction only takes a single pass over the data, corresponding to logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope; the relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
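
The additive-statistics trick is the heart of the approach: every sum a plane fit needs can be aggregated from 2x2 blocks in one pass per doubling. The sketch below is a simplified reconstruction, not the paper's exact algorithm: it uses non-overlapping blocks broadcast back to the pixel grid rather than one regression per raster point, and the variance rule and names are assumptions.

```python
import numpy as np

def scale_adaptive_slope(dem):
    # Ten per-pixel sufficient statistics for a plane fit z = a + b*x + c*y;
    # block sums of these are everything the regression needs (additivity).
    h, w = dem.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    z = dem.astype(float)
    s = np.stack([np.ones_like(z), xs, ys, z,
                  xs * xs, ys * ys, xs * ys, xs * z, ys * z, z * z])
    best_slope = np.zeros((h, w))
    best_var = np.full((h, w), np.inf)
    factor = 1
    while s.shape[1] >= 2 and s.shape[2] >= 2:
        # One pass per doubling: sum 2x2 non-overlapping blocks of sums.
        s = s[:, : s.shape[1] // 2 * 2, : s.shape[2] // 2 * 2]
        s = s[:, 0::2, :] + s[:, 1::2, :]
        s = s[:, :, 0::2] + s[:, :, 1::2]
        factor *= 2
        S1, Sx, Sy, Sz, Sxx, Syy, Sxy, Sxz, Syz, Szz = s
        A = np.stack([np.stack([S1, Sx, Sy], -1),
                      np.stack([Sx, Sxx, Sxy], -1),
                      np.stack([Sy, Sxy, Syy], -1)], -2)
        rhs = np.stack([Sz, Sxz, Syz], -1)
        coef = np.linalg.solve(A, rhs)                 # (a, b, c) per block
        rss = Szz - np.einsum('...i,...i', coef, rhs)  # residual sum of squares
        var = rss / S1
        slope = np.hypot(coef[..., 1], coef[..., 2])
        # Broadcast block results to pixel resolution; keep the slope from
        # whichever window size gives minimal residual variance.
        vf = var.repeat(factor, 0).repeat(factor, 1)
        sf = slope.repeat(factor, 0).repeat(factor, 1)
        hh, ww = vf.shape
        upd = vf < best_var[:hh, :ww]
        best_var[:hh, :ww] = np.where(upd, vf, best_var[:hh, :ww])
        best_slope[:hh, :ww] = np.where(upd, sf, best_slope[:hh, :ww])
    return best_slope, best_var
```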

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 126
781 Rotorcraft Performance and Environmental Impact Evaluation by Multidisciplinary Modelling

Authors: Pierre-Marie Basset, Gabriel Reboul, Binh DangVu, Sébastien Mercier

Abstract:

Rotorcraft provide invaluable services thanks to their Vertical Take-Off and Landing (VTOL), hover, and low-speed capabilities. Yet their use is still often limited by their cost and environmental impact, especially noise and energy consumption. One of the main brakes on expanding the use of rotorcraft for urban missions is the environmental impact, and the population's first concern is noise. In order to develop the transversal competency to assess the rotorcraft environmental footprint, a collaboration has been launched between six research departments within ONERA. The progress in terms of models and methods is capitalized in the numerical workshop C.R.E.A.T.I.O.N., "Concepts of Rotorcraft Enhanced Assessment Through Integrated Optimization Network". A typical mission for which the environmental impact issue is of great relevance has been defined. The first milestone is to perform the pre-sizing of a reference helicopter for this mission. In a second milestone, an alternative rotorcraft concept has been defined: a tandem rotorcraft with optional propulsion. The key design trends are given for the pre-sizing of this rotorcraft, aiming at a significant reduction of the global environmental impact while still providing flight performance and safety equivalent to the reference helicopter. The models and methods have been improved to capture, earlier and more globally, the relative variations in environmental impact when changing the rotorcraft architecture, the pre-design variables, and the operating parameters.

Keywords: environmental impact, flight performance, helicopter, multi objectives multidisciplinary optimization, rotorcraft

Procedia PDF Downloads 268
780 Assessment of the Number of Damaged Buildings from a Flood Event Using Remote Sensing Technique

Authors: Jaturong Som-ard

Abstract:

The heavy rainfall from 3rd to 22nd January 2017 swamped much of Ranot district in southern Thailand; the resulting flood caused severe economic and social losses. The major objective of this study is to detect the flooding extent using Sentinel-1A data and to identify the number of damaged buildings in the area. The data were collected in two stages: pre-flooding and during the flood event. Calibration, speckle filtering, geometric correction, and histogram thresholding based on intensity values were performed to classify thematic maps. The maps were used to identify the flooding extent through change detection, together with building footprints digitized and collected on the JOSM desktop. The number of damaged buildings was counted within the flooding extent with respect to the building data. The total flooded area was 181.45 sq.km, occurring mostly in the Ban Khao, Ranot, Takhria, and Phang Yang sub-districts, respectively. The Ban Khao sub-district was affected more than the others because it is located at a lower altitude and closer to the Thale Noi and Thale Luang lakes. The numbers of damaged buildings were highest in the Khlong Daen (726 buildings), Tha Bon (645 buildings), and Ranot (604 buildings) sub-districts, respectively. The final flood extent map may be very useful for the planning, prevention, and management of flood-prone areas, and the building damage map can support quick response, recovery, and mitigation in the affected areas by the organizations concerned.
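
A minimal sketch of the thresholding-plus-change-detection step on calibrated SAR backscatter in dB; the -15 dB open-water threshold and the 5x5 median speckle filter are illustrative values assumed here, not those used in the study.

```python
import numpy as np
from scipy.ndimage import median_filter

def flood_mask(pre_db, post_db, water_thresh_db=-15.0):
    """Threshold-based water detection and change detection on two calibrated
    Sentinel-1 intensity images (dB). Open water appears dark in SAR."""
    pre = median_filter(pre_db, size=5)    # simple speckle filtering
    post = median_filter(post_db, size=5)
    water_pre = pre < water_thresh_db
    water_post = post < water_thresh_db
    return water_post & ~water_pre         # newly flooded pixels only

# damaged-building count = number of digitized building points falling
# inside the returned mask
```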

Keywords: flooding extent, Sentinel-1A data, JOSM desktop, damaged buildings

Procedia PDF Downloads 190
779 Fano-Resonance-Based Wideband Acoustic Metamaterials with Highly Efficient Ventilation

Authors: Xi-Wen Xiao, Tzy-Rong Lin, Chien-Hao Liu

Abstract:

Ventilated acoustic metamaterials have attracted considerable research attention due to their low-frequency absorption and efficient fluid ventilation. In this research, a wideband acoustic metamaterial with auditory filtering ability and efficient ventilation capacity is proposed. In contrast to a conventional Fano-like resonator, a Fano-like resonator composed of one resonant unit and two nonresonant units with a large opening area of 68% for fluid passage was developed, together with a coupling mechanism to improve the narrow bandwidths of conventional Fano-resonance-based metamaterials. With a suitable design, the output sound waves of the resonant and nonresonant states are out of phase, achieving sound absorption in the far field. Three-element and five-element coupled Fano-like metamaterials were therefore designed and simulated with the help of finite element software, yielding filtering fractional bandwidths of 42.5% and 61.8%, respectively. The proposed approach can be extended to multiple coupled resonators to obtain ultra-wide bandwidths and can be implemented with 3D printing for practical applications. The research results are expected to be beneficial for sound filtering and noise reduction in duct applications and limited-volume spaces.

Keywords: fano resonance, noise reduction, resonant coupling, sound filtering, ventilated acoustic metamaterial

Procedia PDF Downloads 113
778 Optimized Processing of Neural Sensory Information with Unwanted Artifacts

Authors: John Lachapelle

Abstract:

Introduction: Neural stimulation is increasingly targeted toward the treatment of back pain, PTSD, and Parkinson's disease, and toward sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on the noise efficiency factor (NEF); conversely, neural headstages need to handle artifacts from several sources, including power lines, movement (EMG), and the neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60 dBv, resulting in recovery of sensory signals in rats and primates that would previously not have been possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. Together, the techniques are like concentric moats protecting a castle; only the wanted neural signal can penetrate. The headstage operates in two conditions: unwanted artifact < 50 mV, linear operation; and artifact > 50 mV, fast-settle gain-reduction signal limiting (covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider: (a) EMG signals are by nature < 10 mV. (b) 60 Hz power line signals may be > 50 mV under poor electrode cable conditions; with careful routing, much of this signal is common to both the reference and active electrodes and is rejected in the differential amplifier, with < 50 mV remaining. (c) An unwanted (to the neural recorder) stimulation signal is attenuated on the path from the stimulation electrode to the sensory electrode. The voltage seen at the sensory electrode can be modeled as Φ_m = I_0/(4πσr). For a 1 mA stimulation signal, with 1 cm spacing between electrodes, the signal is < 20 mV at the headstage. Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at a 50 mV input, 50 times higher than typical headstage ASICs, with no increase in noise floor. This requires careful balancing of the amplifier stages in the headstage ASIC, as well as consideration of the electrodes' effect on noise. The ASIC is designed to allow extremely small signal extraction on low-impedance (< 10 kohm) electrodes, with the headstage ASIC noise floor configurable to < 700 nV/rt-Hz. Smaller high-impedance electrodes (> 100 kohm) are typically located closer to neural sources and transduce higher-amplitude signals (> 10 uV); the ASIC's low-power mode conserves power with 2 uV/rt-Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated saturation of the commercial neural amplifier as a result of large environmental artifacts. The enhanced artifact-suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact-suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimized headstage ASICs allow observation of neural signals in the presence of the large artifacts that will be present in real-life implanted applications, and are targeted toward human implantation in the DARPA HAPTIX program.
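
The point-source model quoted above can be checked in a few lines. The tissue conductivity is an assumed value here (the abstract does not state one); with it, the result is consistent with the stated < 20 mV.

```python
import math

# Point-source potential in a homogeneous volume conductor:
#   phi_m = I0 / (4 * pi * sigma * r)
I0 = 1e-3     # stimulation current, 1 mA (from the abstract)
r = 0.01      # stimulation-to-sensory electrode spacing, 1 cm (from the abstract)
sigma = 0.5   # tissue conductivity in S/m -- assumed value, not from the paper

phi = I0 / (4 * math.pi * sigma * r)
print(f"{phi * 1e3:.1f} mV")  # ~15.9 mV, consistent with the stated < 20 mV
```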

Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors

Procedia PDF Downloads 327
777 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to improve the surface roughness of a part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be improved to reduce or eliminate defects and to raise the process capability indices Cp and Cpk of a CNC milling process. The six sigma DMAIC (define, measure, analyze, improve, and control) approach was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based six sigma approach was applied to identify the optimized processing parameters that achieve the surface roughness targeted by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness; the noise factor is the difference between the old cutting tool and the new cutting tool. A confirmation run with the optimal parameters verified that the new parameter settings are correct and that they improved the process capability index. This study shows that the Taguchi-based six sigma approach can be used efficiently to phase out defects and improve the process capability index of a CNC milling process.
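
A minimal sketch of the Taguchi analysis described here: a standard L9(3^4) array, a smaller-the-better S/N ratio per run, and factor-level averaging to pick the best level. The roughness readings are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs for four 3-level factors.
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])

# Hypothetical roughness readings (um) per run, replicated under the noise
# factor (old tool / new tool); values are made up for illustration only.
y = np.array([[0.82, 0.91], [1.05, 1.12], [1.30, 1.24],
              [0.95, 1.02], [1.10, 1.21], [0.88, 0.95],
              [1.25, 1.31], [0.99, 1.05], [1.15, 1.08]])

# Smaller-the-better S/N ratio: S/N = -10 * log10(mean(y^2)) per run.
sn = -10.0 * np.log10(np.mean(y ** 2, axis=1))

# Average S/N per level for each factor; the best level maximizes S/N.
for f in range(4):
    means = [sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"factor {f + 1}: best level = {int(np.argmax(means)) + 1}")
```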

Keywords: CNC machining, six sigma, surface roughness, Taguchi methodology

Procedia PDF Downloads 242
776 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings

Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof

Abstract:

False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g., antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to bind the molecule of interest the brushes have to be functionalized, and so far this was achieved only at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features in a single monomer: a zwitterionic sulfobetaine, ensuring antifouling capability, equipped with a clickable azide moiety that allows further functionalization. By copolymerizing this monomer together with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned depending on the application. First, the clickable azido monomer was synthesized and characterized, followed by copolymerization to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques such as XPS, contact angle measurements, G-ATR-FTIR, and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, yielding a fully zwitterionic, biotin-containing, 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions. The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed even when the fraction of functional groups was lowered to 1%, compared with the traditional modification of sulfobetaine brushes, which relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring-resonator-based assays; however, the methodology can be transferred to other biosensor platforms, which are currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties, for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g., depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.

Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes

Procedia PDF Downloads 305
775 Seismic Integrity Determination of Dams in Urban Areas

Authors: J. M. Mayoral, M. Anaya

Abstract:

The urban and economic development of cities demands the construction of water-use and flood-control infrastructure. Likewise, it is necessary to determine the safety level of structures built to earlier standards and whether reinforcement actions need to be defined. This is even more important for structures of great importance, such as dams, since they imply a greater risk for the population in case of failure or undesirable operating conditions (e.g., seepage, cracks, subsidence). This article presents a methodology for determining the seismic integrity of dams in urban areas. From direct measurements of the dynamic properties, using geophysical exploration and ambient seismic noise measurements, the seismic integrity of the concrete-faced rockfill dam selected as a case study is evaluated. To validate the results, two accelerometer stations were installed (free field and dam crest). Once the dynamic properties were determined, three-dimensional finite difference models were developed to evaluate the dam's seismic performance for different intensities of ground motion, considering site response and soil-structure interaction effects. The seismic environment was determined from uniform hazard spectra for several return periods. Based on the results obtained, the safety level of the dam against different seismic actions was determined, and the effectiveness of ambient seismic noise measurements for dynamic characterization and subsequent evaluation of the seismic integrity of urban dams was assessed.

Keywords: risk, seismic, soil-structure interaction, urban dams

Procedia PDF Downloads 116
774 Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular means of energy production. However, blade failure and maintenance costs are significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid collapse of the blades and structure. This paper aims to apply modulation of high-frequency blade vibrations by low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2-0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters vary with the low-frequency blade rotation: during the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM with a wideband probe noise signal. A small-amplitude cyclic load was applied to the damaged test sample as the pump wave, and a small transducer generated the wideband probe wave. Demodulation of the received signal was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach, and the experimental results were compared with the modulation index (MI) technique applied to a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to repeat tests for various harmonic probe frequencies and to adjust the probe frequency.
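
A hedged sketch of the DEMON demodulation step applied to a wideband probe signal: band-pass, envelope via the Hilbert transform, then a spectrum of the envelope in which the 0.2-2 Hz rotation components would appear as lines. The passband edges are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def demon_spectrum(x, fs, band=(2000.0, 8000.0)):
    """DEMON: spectrum of the envelope of a band-passed wideband signal.
    Peaks at low frequencies reveal the modulation (pump) rate."""
    b, a = butter(4, band, btype='band', fs=fs)
    env = np.abs(hilbert(filtfilt(b, a, x)))   # envelope of the passband
    env -= env.mean()                          # remove the DC pedestal
    spec = np.abs(np.fft.rfft(env * np.hanning(env.size)))
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    return freqs, spec                         # look for 0.2-2 Hz lines
```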

Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 97
773 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods

Authors: Sandeep Santosh, O. P. Sahu

Abstract:

In this paper, the performance of the wideband covariance matrix sparse representation (W-CMSR) method is compared with that of other existing wideband direction of arrival (DOA) estimation methods. W-CMSR relies less on a priori information about the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by the unknown noise variance, while the covariance matrix is conjugate symmetric, i.e., its upper right triangular elements can be represented by the lower left triangular ones. Since the main diagonal elements are contaminated by the unknown noise variance, they are skipped, and the lower left triangular elements are aligned column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods: the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate the two signals clearly, and a number of pseudo peaks exist in the spectrum of l1-SVD.
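
The measurement-vector construction reads directly off the covariance matrix; a minimal sketch (the function name is assumed for illustration):

```python
import numpy as np

def cmsr_measurement_vector(R):
    """Skip the diagonal (contaminated by the unknown noise variance) and
    stack the strictly lower-left triangular covariance elements column by
    column, as described in the abstract."""
    M = R.shape[0]
    cols = [R[j + 1:, j] for j in range(M - 1)]  # strictly below the diagonal
    return np.concatenate(cols)

R = np.cov(np.random.randn(4, 100))        # toy 4-sensor covariance estimate
print(cmsr_measurement_vector(R).shape)    # (6,) = M*(M-1)/2 elements
```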

Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering

Procedia PDF Downloads 469
772 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study in which the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors consist of cooling time, melt temperature, holding time, and metering stroke; the noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal, and with the new settings the process capability index improved dramatically. This study shows that the six sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of an injection molding process.
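
The capability indices driving the measure and control phases follow the standard definitions; a small sketch with hypothetical shrinkage data and illustrative specification limits:

```python
import numpy as np

def cp_cpk(samples, lsl, usl):
    """Process capability indices for a shrinkage characteristic:
    Cp = (USL - LSL) / (6 * sigma); Cpk = min(USL - mu, mu - LSL) / (3 * sigma)."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# hypothetical shrinkage measurements (%) against illustrative spec limits
parts = [1.52, 1.48, 1.50, 1.47, 1.53, 1.49, 1.51, 1.50]
print(cp_cpk(parts, lsl=1.40, usl=1.60))
```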

Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design

Procedia PDF Downloads 178
771 Interaction between Breathiness and Nasality: An Acoustic Analysis

Authors: Pamir Gogoi, Ratree Wayland

Abstract:

This study investigates acoustic measures of breathiness when coarticulated with nasality. The acoustic correlates of breathiness and nasality have been well established through years of empirical research. Some of these acoustic parameters, like low-frequency peaks and wider bandwidths, are common to both nasal and breathy voice. Therefore, these parameters are likely to interact when a sound is coarticulated with breathiness and nasality. This leads to the hypothesis that the acoustic parameters that usually act as robust cues for differentiating breathy from modal voice might not be reliable cues when breathiness is coarticulated with nasality. The effect of nasality on the perception of breathiness has been explored in earlier studies using synthesized speech, whose results showed that nasality and breathiness do interact perceptually. The current study investigates whether a similar pattern is observed in natural speech. The study is conducted on Marathi, an Indo-Aryan language with a three-way phonemic contrast between nasals, breathy voice, and breathy nasals. Voice quality parameters were extracted: H1-H2 (difference between the amplitudes of the first and second harmonics), H1-A3 (difference between the amplitude of the first harmonic and that of the third formant), CPP (cepstral peak prominence), HNR (harmonics-to-noise ratio), and B1 (bandwidth of the first formant). Statistical models such as linear mixed-effects regression and random forest classifiers show that measures that capture the noise component of the signal, like CPP and HNR, classify breathy voice from modal voice better than spectral measures when breathy voice is coarticulated with nasality.
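
As an example of one of these spectral measures, the sketch below computes H1-H2 from a windowed magnitude spectrum. It is a simplification: it assumes f0 is already known (e.g., from a pitch tracker) and applies no formant-based correction.

```python
import numpy as np

def h1_h2(frame, fs, f0):
    """H1-H2 in dB: amplitude difference between the first and second
    harmonics, read off the magnitude spectrum at f0 and 2*f0."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    freqs = np.fft.rfftfreq(frame.size, 1.0 / fs)
    h1 = spec[np.argmin(np.abs(freqs - f0))]        # nearest bin to f0
    h2 = spec[np.argmin(np.abs(freqs - 2 * f0))]    # nearest bin to 2*f0
    return 20 * np.log10(h1 / h2)
```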

Keywords: breathiness, Marathi, nasality, voice quality

Procedia PDF Downloads 93
770 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow

Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi

Abstract:

Sand production can lead to deposition of particles or to erosion. Low production rates resulting in deposition can partially clog systems and cause under-deposit corrosion. Commercially available nonintrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know the detection limits to plan appropriately for cleaning of separation equipment or to examine the risk of erosion. These monitors are based on detecting the acoustic signature of sand particles impacting the pipe walls. The objective of this work is to determine threshold detection limits for commercially available acoustic sand monitors. The minimum sand concentration that can be detected in a pipe is determined as a function of flowing gas and liquid velocities. A large-scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes, and pipe orientations (vertical and horizontal). The manufacturers recommend that the monitors be placed on a bend to maximize the number of particle impacts, so results are shown for monitors placed at the 45 and 90 degree positions of a bend. Acoustic sand monitors that clamp to the outside of the pipe are passive and listen for solid-particle impact noise. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for the various flow regimes generated in the horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows, the threshold sand rates are much higher than for the other flow regimes investigated, such as slug and annular flow. However, the background noise generated by the slug flow regime is very high and causes high uncertainty in the detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of the high gas velocities. The effects of monitor placement around elbows in vertical and horizontal pipes are also examined for 150-micron sand. The results show that the threshold sand rates detected in the vertical orientation are generally lower for all the flow regimes investigated.

Keywords: acoustic monitor, sand, multiphase flow, threshold

Procedia PDF Downloads 404
769 Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy

Authors: Hao Wang, Shengchun Wang, Weidong Wang

Abstract:

To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for automatic extraction of the circular arcs on the inner and outer sides of the rail waist, achieving high-precision registration of the rail profile. First, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window is proposed to realize automatic segmentation of the small circular arcs. Finally, two circle centers fitted from the small circular arcs on both sides are used as matching reference points to align the measured profile to the standard designed profile. Static experiments show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. Dynamic tests also verified the repeatability of the method in the train-running environment; the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.

Keywords: curvature entropy, profile registration, rail wear, structured light, train-running

Procedia PDF Downloads 258
768 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks of digitally embedded infrastructure. We analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. The analysis presented in this paper also conveys valuable information for future research aiming at more stable, secure, and efficient autonomous systems. To understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive picture of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The identified passive and active risks evaluated in the research are as follows. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, delivering garbage data, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneous sensor data can be pumped into a system by malicious actors with the intent of creating disruptive noise in critical systems. (4) Data gravity: the weight of the collected data affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, in which an attacker overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in implementation that it deserves its own mention; the goal of this software is to encrypt your system so that it can only be unlocked with a key held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system to create a data-echo noise loop. After testing multiple potential solutions, we found that the most promising was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and the behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware providing these features would be an easy and secure solution for future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via the blockchain keys.

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 106
767 Significance of High Specific Speed in Circulating Water Pump, Which Can Cause Cavitation, Noise and Vibration

Authors: Chandra Gupt Porwal

Abstract:

Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may be caused by cavitation or by suction/discharge recirculation, which can occur when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Excessive cavitation can cause axial surging; it damages mechanical seals, bearings, and possibly other pump components, and shortens the life of the impeller. This paper explains suction energy (SE), specific speed (Ns), suction specific speed (Nss), NPSHA, and NPSHR and their significance, the possible causes of cavitation and internal recirculation, its diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration was cavitation, caused by high specific speeds and inadequate net positive suction head available, which resulted in damage to the material surfaces of the impeller and suction bells and in degradation of machine performance, capacity, and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps to provide sufficient NPSH margin ratios > 1.5 for future projects, and limiting Nss to 8500-9000 for cavitation-free operation.
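
The quantities the author recommends checking follow standard pump formulas; a small sketch in customary US units with a hypothetical operating point:

```python
# Suction specific speed (US units): Nss = N * sqrt(Q) / NPSHR**0.75,
# with N in rpm, Q in US gpm (per impeller eye), NPSHR in feet.
def nss(rpm, gpm, npshr_ft):
    return rpm * gpm ** 0.5 / npshr_ft ** 0.75

def npsh_margin_ratio(npsha_ft, npshr_ft):
    return npsha_ft / npshr_ft  # the paper recommends keeping this > 1.5

# hypothetical circulating-water pump operating point (illustrative values)
print(nss(350, 100000, 30))          # ~8634, just inside the 8500-9000 limit
print(npsh_margin_ratio(48.0, 30.0)  # 1.6 > 1.5 -> adequate margin
      )
```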

Keywords: best efficiency point (BEP), net positive suction head NPSHA, NPSHR, specific speed NS, suction specific speed NSS

Procedia PDF Downloads 252
766 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning, and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. Experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show improvements of 3% in overall accuracy and 6% in kappa coefficient, respectively. The proposed method also correctly distinguishes homogeneous image parcels.

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 216
765 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are cleaning signals from noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. Two constructions of robust codes are proposed: the first class is based on the multiplicative inverse in a finite field; in the second construction, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.
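
A minimal sketch of the second construction (redundancy = cube of the information part) over GF(2^8); the field size, polynomial, and helper names are illustrative choices, not necessarily the paper's exact parameters.

```python
def gf_mul(a, b, poly=0x11B, m=8):
    """Multiplication in GF(2^8) modulo an irreducible polynomial
    (0x11B is the AES polynomial; any irreducible degree-8 choice works)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def encode(x):
    # codeword = (information part, redundancy = cube of the information part)
    return x, gf_mul(gf_mul(x, x), x)

def is_valid(x, r):
    return gf_mul(gf_mul(x, x), x) == r

x, r = encode(0x57)
assert is_valid(x, r)
# A fixed nonzero error pattern is masked for only a small fraction of
# codewords, because x -> x^3 is almost perfect nonlinear over GF(2^m):
print(is_valid(x ^ 0x0F, r ^ 0x33))  # detected (False) unless the error is masked
```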

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 488
764 The Time-Frequency Domain Reflection Method for Aircraft Cable Defects Localization

Authors: Reza Rezaeipour Honarmandzad

Abstract:

This paper introduces an aircraft cable fault detection and location method based on time-frequency domain reflectometry (TFDR), with the goal of recognizing intermittent faults effectively and of coping with the serial and after-connector faults that are hard to distinguish with time domain reflectometry. In this strategy, the correlation function of the reflected and reference signals is used to recognize and locate the fault according to the characteristics of the reflected and reference signals in the time-frequency domain, so the hit rate of detecting and locating intermittent faults can be improved effectively. In practice, the reflected signal is corrupted by noise and false alarms occur frequently, so a threshold de-noising technique based on wavelet decomposition is used to diminish the noise interference and lower the false alarm rate. The time-frequency cross-correlation function of the reference signal and the reflected signal, based on the Wigner-Ville distribution, is then computed in order to locate the fault position. Finally, LabVIEW is used to implement the operation and control interface, whose primary function is to connect and control MATLAB and LABSQL. Using the strong computing capability and the abundant function library of MATLAB, the signal processing is easily realized, and LabVIEW helps make the system more dependable and easily upgraded.
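
A hedged sketch of the wavelet threshold de-noising step applied to the reflected signal before cross-correlation. The wavelet, level, and universal soft-threshold rule are common defaults, assumed here rather than taken from the paper.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet='db4', level=4):
    """Soft-threshold the detail coefficients with a universal threshold
    estimated from the finest level, then reconstruct. Reduces false alarms
    before the time-frequency cross-correlation step."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```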

Keywords: aircraft cable, fault location, TFDR, LabVIEW

Procedia PDF Downloads 476
763 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices of limited capacity. One of the most popular approaches in video compression is the use of transforms. The discrete cosine transform is one video compression method, but it suffers from problems such as blocking artifacts, noise, and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that balances compression and quality better than the cosine transform, but its ability to capture curve curvature is very limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we use the contourlet transform for video image compression. The contourlet transform can preserve image details better than the previous transforms because it is multi-scale and directional, and it can capture discontinuities such as edges. In this approach, less data is lost than in previous approaches. The contourlet transform captures the discrete spatial structure of an image and is well suited to representing two-dimensional smooth images; it produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, the mean square error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.
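
The two quality measures used for the comparison follow their standard definitions; a minimal sketch:

```python
import numpy as np

def mse_psnr(ref, test, peak=255.0):
    """Mean square error and peak signal-to-noise ratio between a reference
    frame and its compressed reconstruction (8-bit peak assumed)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    psnr = 10 * np.log10(peak ** 2 / mse)
    return mse, psnr
```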

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 441
762 Study of Acoustic Resonance of Model Liquid Rocket Combustion Chamber and Its Suppression

Authors: Vimal O. Kumar, C. K. Muthukumaran, P. Rakesh

Abstract:

A liquid rocket engine (LRE) combustion chamber is subjected to pressure oscillation during the combustion process. Combustion (acoustic) noise is a broadband, small-amplitude, high-frequency pressure oscillation, constituting only a minor fraction (< 1%) of the entire combustion process. However, this high-frequency oscillation is a major concern during the design phase of an LRE combustion chamber, as it can cause catastrophic failure of the chamber. Depending on the chamber geometry, certain frequencies form standing wave patterns that resonate with high amplitude; these are known as eigenmodes. The eigenmodes can cause failures unless they are suppressed to within safe limits. They are categorized into radial, tangential, and azimuthal modes, and their structure inside the combustion chamber is of interest to researchers. In the present work, experiments as well as numerical simulations will be performed to obtain the frequency-amplitude characteristics of a model combustion chamber for different baffle configurations. The main objective of this study is to find the baffle configuration that provides the best suppression of the acoustic modes. The experimental study aims at measuring the frequency-amplitude characteristics at certain points on the chamber wall; the measurements will also be used to validate the scheme used in the numerical simulation. In addition to the experiments, the numerical simulation will provide the detailed structure of the exhibited eigenmodes and their level of suppression with the aid of different baffle configurations.
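
For a rigid-wall cylindrical idealization of the model chamber, the transverse (tangential/radial) eigenfrequencies can be estimated from the zeros of the Bessel-function derivatives; a hedged sketch with hypothetical chamber values (the paper's chamber geometry and sound speed are not given here).

```python
import numpy as np
from scipy.special import jnp_zeros

def transverse_mode_freqs(c, R, m_max=2, n_max=2):
    """Eigenfrequencies f_mn = c * a'_mn / (2*pi*R) of a rigid-wall cylinder,
    where a'_mn is the n-th zero of the derivative of the Bessel function
    J_m. (m, n) = (1, 1) is the first tangential mode, (0, 1) the first
    radial mode. A no-flow, hard-wall idealization only."""
    freqs = {}
    for m in range(m_max + 1):
        for n, amn in enumerate(jnp_zeros(m, n_max), start=1):
            freqs[(m, n)] = c * amn / (2 * np.pi * R)
    return freqs

# hypothetical model chamber: 0.1 m radius, hot-gas sound speed 1000 m/s
print(transverse_mode_freqs(1000.0, 0.1))  # first tangential ~2930 Hz
```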

Keywords: baffle, instability, liquid rocket engine, pressure response of chamber

Procedia PDF Downloads 119
761 Preferences of Electric Buses in Public Transport; Conclusions from Real Life Testing in Eight Swedish Municipalities

Authors: Sven Borén, Lisiana Nurhadi, Henrik Ny

Abstract:

From a theoretical perspective, electric buses can be more sustainable and cheaper than fossil-fuelled buses in city traffic. The authors have not found other studies based on actual urban public transport in a Swedish winter climate, and published noise measurements for buses on the European market were found to be outdated. The aim of this follow-up study was therefore to test, and possibly verify in a real-life environment, how energy efficient and silent electric buses are, and then to conclude whether electric buses are preferable for public transport. The Ebusco 2.0 electric bus, fitted with a 311 kWh battery pack, was used, and the tests were carried out from November 2014 to April 2015 in eight municipalities in the south of Sweden. Six tests took place in urban traffic and two in a more rural traffic setting. The energy use for propulsion was measured by logging the internal system of the bus and via an external charging meter. The average energy use turned out to be 8% less (0.96 kWh/km) than assumed in the earlier theoretical study. This rate allows a 320 km range in public urban traffic. The interior of the bus was kept warm by a diesel heater (biodiesel will probably be used in a future operational traffic situation), which used 0.67 kWh/km in January. This verified that electric buses can be up to 25% cheaper when used in public transport in cities over about eight years. The noise was found to be lower than that of buses with combustion engines in urban bus traffic, primarily during acceleration. According to our surveys, most passengers and drivers appreciated the silent and comfortable ride and preferred electric buses over combustion-engine buses. Bus operators and passenger transport executives were also positive about starting to use electric buses for public transport. The operators did, however, point out that procurement processes need to account for the eventual risks of this new technology, along with personnel education. The study revealed that it is possible to establish a charging infrastructure for almost all the studied bus lines. However, the design of a charging infrastructure for each municipality requires further investigation, including electric grid capacity analysis, smart location of charging points, and tailored schedules to allow fast charging. In conclusion, electric buses proved to be a preferable alternative for all stakeholders involved in public bus transport in the studied municipalities. However, for electric buses to be a prominent support for sustainable development, they need to be charged either by stand-alone units or via an expansion of the electric grid, and the electricity should be made from new renewable sources.

Keywords: sustainability, electric, bus, noise, greencharge

Procedia PDF Downloads 341
760 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks, such as noise addition, compression, reflection, rotation, and scaling, with the intention of either merely matching it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method for detecting duplicated regions, including those affected by the various attacks. To reduce the dimension of the image, the proposed algorithm first performs a discrete wavelet transform (DWT) of the suspicious image. However, unlike most existing copy-move image forgery (CMIF) detection algorithms operating in the DWT domain, which extract only the low-frequency sub-band of the DWT of the suspicious image and thereby leave valuable information in the other three sub-bands, the proposed algorithm extracts features from all four sub-bands simultaneously. The extracted features are not only a more accurate representation of the image regions but are also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition (PCA-EVD) is applied to reduce the dimension of the features. The extracted features are then sorted using the computationally efficient radix sort algorithm. Finally, same affine transformation selection (SATS), a duplication verification method, is applied to detect the duplicated regions. The proposed algorithm is not only fast but also more robust to attacks than related CMIF detection algorithms; the experimental results show high detection rates.
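
A minimal sketch of the feature stage that distinguishes this method: overlapping blocks taken from all four DWT sub-bands (not only LL), flattened and sorted lexicographically as a stand-in for the radix sort. PCA-EVD and the SATS verification step are omitted, and the block size is an assumption.

```python
import numpy as np
import pywt

def dwt_block_features(image, block=8):
    """One-level DWT, then overlapping block features from all four
    sub-bands; near-identical sorted rows point at candidate duplicates."""
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(float), 'haar')
    bands = np.stack([LL, LH, HL, HH])
    h, w = LL.shape
    feats, positions = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            feats.append(bands[:, i:i + block, j:j + block].ravel())
            positions.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])   # lexicographic stand-in for radix sort
    return feats[order], [positions[k] for k in order]

rows, pos = dwt_block_features(np.random.rand(32, 32))  # toy usage
```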

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 227
759 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and to incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel signal quality index (SQI) assessment technique. The SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear (Teager) energy operator of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based on the individual SQIs. Data fusion of the HR estimates is then performed by weighting each estimate by the corresponding Kalman filter's SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual HR estimates. The method was evaluated on the MIMIC II database from PhysioNet, recorded from the bedside monitors of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
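
A hedged sketch of the fusion step as described: each channel's estimate is down-weighted by its squared Kalman innovation and up-weighted by its SQI. The exact weighting used in the paper may differ; this is an assumed form for illustration.

```python
import numpy as np

def fuse_hr(estimates, innovations, sqis):
    """Weight each channel's HR estimate by SQI / innovation^2: channels
    whose Kalman filters are 'surprised' (large innovation) or of poor
    signal quality contribute less to the fused estimate."""
    w = np.asarray(sqis, float) / (1e-6 + np.asarray(innovations, float) ** 2)
    w /= w.sum()
    return float(np.dot(w, estimates))

# hypothetical per-channel outputs (ECG, ABP, EEG, EMG, EOG)
print(fuse_hr([72.1, 71.8, 80.5, 72.4, 90.0],   # HR estimates (bpm)
              [0.5, 0.7, 6.0, 1.1, 9.0],        # filter innovations
              [0.95, 0.9, 0.4, 0.8, 0.3]))      # signal quality indices
```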

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 693
758 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with a NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurements of the signal-to-noise ratio (SNR), BER, and eye diagram for different numbers of simultaneous users; various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate more simultaneous users over longer fiber distances with minimum BER, compared to the OR and AND subtraction techniques. The received optical power was also measured at various BER levels to analyze the effect of attenuation.

Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code

Procedia PDF Downloads 412
757 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is required to have strong concentration, high resolution, and a low sidelobe level to form point-to-point interference in the targeted region. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. Simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe-level performance are improved. However, the signal covariance matrix in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm underestimates the covariance matrix, and the resulting estimation error causes beam distortion, so that the output pattern cannot form a dot-shaped beam; main-lobe deviation and high sidelobe levels also occur in the limited-snapshot case. To address these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace, and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the beamforming performance. Theoretical analysis and simulation results show that the proposed algorithm lets the multi-carrier FDA form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better overall performance.
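
A hedged sketch of the correction step: eigendecompose the sample covariance, exponentially compress the spread of the small noise-subspace eigenvalues, rebuild the matrix, and form single-constraint LCMV weights. The correction rule and alpha are assumptions for illustration, not the paper's exact index.

```python
import numpy as np

def lcmv_weights_exp_corrected(R, a, n_interf, alpha=0.5):
    """Eigenvalue-corrected LCMV (single constraint): raise the normalized
    noise-subspace eigenvalues to the power alpha to curb their divergence
    at low snapshot counts, then form w = R^-1 a / (a^H R^-1 a)."""
    lam, V = np.linalg.eigh(R)             # eigenvalues in ascending order
    n_noise = lam.size - n_interf          # smallest ones span the noise subspace
    noise = lam[:n_noise]
    lam = lam.copy()
    lam[:n_noise] = noise.mean() * (noise / noise.mean()) ** alpha
    R_corr = (V * lam) @ V.conj().T        # V diag(lam) V^H
    Ri_a = np.linalg.solve(R_corr, a)
    return Ri_a / (a.conj() @ Ri_a)        # unit gain toward the constraint

# toy usage: 8-element array, broadside steering vector, noisy covariance
N = 8
a = np.ones(N, dtype=complex)
R = 0.1 * (np.random.randn(N, N) + 1j * np.random.randn(N, N))
R = (R + R.conj().T) / 2 + N * np.eye(N)   # Hermitian, positive definite
w = lcmv_weights_exp_corrected(R, a, n_interf=2)
```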

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 129
756 Robust Design of a Ball Joint Considering Uncertainties

Authors: Bong-Su Sin, Jong-Kyu Kim, Se-Il Song, Kwon-Hee Lee

Abstract:

An automobile ball joint is a pivoting element used to allow rotational motion between the parts of the steering and suspension systems. It plays a role in the smooth transmission of steering movement and in reducing impact from the road surface. A ball joint is subject to various repeated loadings that may cause cracks and abrasion. This damage leads to safety problems for the car as well as reduced ride comfort, and raises questions about the ball joint procedure and the overall durability of the suspension system. Accordingly, it is necessary to ensure high durability and reliability of the ball joint. The structural responses of stiffness and pull-out strength were calculated to check whether the design satisfies the related requirements. The analysis was performed sequentially, following the caulking process, and the deformation and stress results obtained from the analysis were saved. Sequential analysis has the strong advantage that the deformed shape and residual stress can be taken into account. The pull-out strength is the force required to pull the ball stud out of the ball joint assembly; low pull-out strength can deteriorate structural stability and safety performance. In this study, two design variables and two noise factors were set up. The two design variables were the diameter of the stud and the angle of the socket, and the two noise factors were defined as the uncertainties in the Young's modulus and the yield stress of the seat. The DOE comprises 81 cases using these conditions. Robust design of the ball joint was performed using the DOE; the pull-out strength variation was generated from the uncertainties in the design variables and the design parameters. The purpose of robust design is to find the design with the target response and the smallest variation.

Keywords: ball joint, pull-out strength, robust design, design of experiments

Procedia PDF Downloads 421