Search results for: noise attenuation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1329

519 Phenomenon of Raveling Distress on the Flexible Pavements: An Overview

Authors: Syed Ali Shahbaz Shah

Abstract:

In recent years, bituminous asphalt roads have become increasingly popular around the world. Plenty of research has been carried out to identify their advantages, such as safety, reduced environmental impact, and comfort; other benefits include low noise and enhanced skid resistance. Besides these benefits, the permeable structure of the road also invites some distress, and raveling is one of the crucial defects. The main reason behind this distress is the failure of adhesion in the bitumen mortar, specifically due to excessive loading from heavy traffic. The main focus of this study is to identify the root cause and propose both long-term and short-term solutions for raveling on a specific road section, depicting the overall road situation, from the bridge on Kahuta Road towards the intersection of the Islamabad Expressway. The methodology adopted for this purpose is in-situ visual inspection. It was noted that there were chunks of debris on the road surface, which indicates that the asphalt binder has most probably aged. Further laboratory testing would confirm whether the asphalt binder has aged or inadequate compaction was applied during cold-weather paving.

Keywords: asphaltic roads, asphalt binder, distress, raveling

Procedia PDF Downloads 105
518 A Novel NIRS Index to Evaluate Brain Activity in Prefrontal Regions While Listening to First and Second Languages for Long Time Periods

Authors: Kensho Takahashi, Ko Watanabe, Takashi Kaburagi, Hiroshi Tanaka, Kajiro Watanabe, Yosuke Kurihara

Abstract:

Near-infrared spectroscopy (NIRS) has been widely used as a non-invasive method to measure brain activity, but its output is corrupted by baseline drift noise. Here we present a method to measure regional cerebral blood flow as a derivative of the NIRS output. We investigate whether, when subjects listen to languages, this blood-flow measure can reasonably localize and represent regional brain activity. Among the patterns of mean and standard deviation, the prefrontal blood-flow distribution pattern when advanced second-language listeners listened to a second language (L2) was most similar to that when listening to their first language (L1). In experiments with 25 healthy subjects, the maximum blood flow was localized to the left BA46 of advanced listeners. The blood-flow measure presented is robust to baseline drift and stably localizes regional brain activity.

Keywords: NIRS, oxy-hemoglobin, baseline drift, blood flow, working memory, BA46, first language, second language

Procedia PDF Downloads 554
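The abstract above estimates blood flow as a derivative of the NIRS output and claims robustness to baseline drift. That robustness can be sketched on synthetic data: differentiating the signal turns a slowly varying (near-linear) baseline into a constant offset, leaving the activity-related variation intact. The signal shapes and constants below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def blood_flow_estimate(oxy_hb, dt):
    """Approximate regional blood flow as the time derivative
    of the NIRS oxy-hemoglobin signal (illustrative sketch)."""
    return np.gradient(oxy_hb, dt)

# Synthetic demonstration: a brain-activity pulse plus slow linear drift.
dt = 0.1
t = np.arange(0.0, 60.0, dt)
activity = np.exp(-((t - 30.0) ** 2) / 20.0)   # activation "bump"
drift = 0.05 * t                               # linear baseline drift
flow_clean = blood_flow_estimate(activity, dt)
flow_drift = blood_flow_estimate(activity + drift, dt)

# The derivative shifts only by the constant drift slope (0.05),
# so the activity-related variation is preserved.
assert np.allclose(flow_drift - flow_clean, 0.05, atol=1e-6)
```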
517 Analyzing On-Line Process Data for Industrial Production Quality Control

Authors: Hyun-Woo Cho

Abstract:

The monitoring of industrial production quality has to be implemented to give early warning of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks, many multivariate statistical techniques have been applied and shown to be quite effective tools. This work presents a process-data-based monitoring scheme for production processes. For more reliable results, additional steps of noise filtering and preprocessing are considered; these may enhance performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results and is thus more effective for quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Recently, large amounts of data have been collected on-line in most processes, so implementation of the current scheme is feasible and imposes no additional burden on users.

Keywords: detection, filtering, monitoring, process data

Procedia PDF Downloads 550
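The abstract above does not name the multivariate statistical technique; one common choice for this kind of process monitoring is PCA with Hotelling's T² statistic, where large T² values on new samples trigger the early warning. A minimal sketch on synthetic data (the component count and fault magnitude are illustrative assumptions):

```python
import numpy as np

def fit_pca_monitor(X_train, n_comp=2):
    """Fit a PCA model on normal-operation data (illustrative sketch)."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    Z = (X_train - mu) / sigma
    _, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                         # loading vectors
    lam = (S[:n_comp] ** 2) / (len(Z) - 1)    # retained variances
    return mu, sigma, P, lam

def t2_statistic(x, mu, sigma, P, lam):
    """Hotelling's T^2 for one sample; large values flag faults."""
    scores = ((x - mu) / sigma) @ P
    return float(np.sum(scores ** 2 / lam))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # normal operating data
mu, sigma, P, lam = fit_pca_monitor(X)
t2_normal = t2_statistic(mu, mu, sigma, P, lam)           # on-target sample
t2_fault = t2_statistic(mu + 10 * sigma * P[:, 0], mu, sigma, P, lam)
assert t2_normal < 1e-9 and t2_fault > t2_normal
```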
516 Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems

Authors: M. A. Alavianmehr, A. Tashk, A. Sodagaran

Abstract:

Modeling the background and moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm, based on an adaptive Gaussian mixture model (GMM), that is robust against illumination changes and noise, and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In previous methods, the image of still objects (the background image) is not significant. On the contrary, this method is based on forming a meticulous background image and exploiting it to separate moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or is detected in real time by forming an arithmetic or exponential average of successive images. The proposed scheme offers low image degradation. The simulation results demonstrate a high degree of performance for the proposed method.

Keywords: image processing, background models, video surveillance, foreground detection, Gaussian mixture model

Procedia PDF Downloads 511
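The exponential average of successive images mentioned in the abstract above (the simpler alternative to a full per-pixel GMM) can be sketched as follows; the learning rate and threshold are illustrative assumptions:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of successive frames, one way
    the abstract forms the background image in real time."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=25.0):
    """Pixels that deviate strongly from the background model."""
    return np.abs(frame - bg) > threshold

# Synthetic demo: a static scene with one bright moving square.
bg = np.full((64, 64), 100.0)
frame = np.full((64, 64), 100.0)
frame[10:20, 10:20] = 200.0                # "moving object"
bg = update_background(bg, frame)
mask = foreground_mask(bg, frame)
assert mask[15, 15] and not mask[40, 40]
```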
515 NFResNet: Multi-Scale and U-Shaped Networks for Deblurring

Authors: Tanish Mittal, Preyansh Agrawal, Esha Pahwa, Aarya Makwana

Abstract:

Multi-scale and U-shaped networks are widely used in various image restoration problems, including deblurring. Keeping in mind the wide range of applications, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block called NFResblock, which consists of a Fast Fourier Transform layer and a series of modified Non-linear Activation Free blocks. Based on these architectures and additions, we introduce NFResnet and NFResnet+, which are modified multi-scale and U-Net architectures, respectively. We also use three different loss functions to train these architectures: Charbonnier loss, edge loss, and frequency reconstruction loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values.

Keywords: multi-scale, Unet, deblurring, FFT, resblock, NAF-block, nfresnet, charbonnier, edge, frequency reconstruction

Procedia PDF Downloads 127
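Of the three training losses the abstract above lists, the Charbonnier loss is the easiest to state compactly: a smooth approximation of the L1 loss, sqrt((x - y)² + ε²). A sketch (ε is an illustrative choice):

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a differentiable approximation of L1,
    sqrt((x - y)^2 + eps^2), averaged over pixels."""
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

pred = np.array([0.5, 0.2, 0.9])
target = np.array([0.5, 0.1, 1.0])
loss = charbonnier_loss(pred, target)
# For residuals much larger than eps, the loss is close to the
# mean absolute error, while remaining smooth near zero.
assert abs(loss - np.mean(np.abs(pred - target))) < 1e-2
```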
514 Developing an Advanced Algorithm Capable of Classifying News, Articles and Other Textual Documents Using Text Mining Techniques

Authors: R. B. Knudsen, O. T. Rasmussen, R. A. Alphinas

Abstract:

The reason for conducting this research is to develop an algorithm that is capable of classifying news articles from the automobile industry, according to the competitive actions that they entail, with the use of Text Mining (TM) methods. The data must be properly preprocessed by preparing pipelines that best fit each algorithm. The pipelines are tested along with nine different classification algorithms in the realms of regression, support vector machines, and neural networks. Preliminary testing to identify the optimal pipelines and algorithms resulted in the selection of two algorithms with two different pipelines: Logistic Regression (LR) and an Artificial Neural Network (ANN). These algorithms are optimized further by testing several parameters of each. The best result is achieved with the ANN. The final model yields an accuracy of 0.79, a precision of 0.80, a recall of 0.78, and an F1 score of 0.76. By removing three of the classes that created noise, the final algorithm is capable of reaching an accuracy of 94%.

Keywords: Artificial Neural network, Competitive dynamics, Logistic Regression, Text classification, Text mining

Procedia PDF Downloads 117
513 A Review on the Potential of Electric Vehicles in Reducing World CO2 Footprints

Authors: S. Alotaibi, S. Omer, Y. Su

Abstract:

Conventional Internal Combustion Engine (ICE) based vehicles are a threat to the environment, as they account for a large proportion of the overall greenhouse gas (GHG) emissions in the world. Hence, it is necessary to replace these vehicles with more environmentally friendly ones. Electric Vehicles (EVs) are promising technologies which offer both human comfort (less noise and pollution) and reduced (or no) emissions of GHGs. In this paper, different types of EVs are reviewed and their advantages and disadvantages are identified. It is found that Plug-in Hybrid EVs (PHEVs) have the best fuel economy, followed by Hybrid EVs (HEVs) and ICE vehicles. Since Battery EVs (BEVs) do not use any fuel, their fuel economy is estimated as price per kilometer. Similarly, in terms of GHG emissions, BEVs are the most environmentally friendly since they produce no tailpipe emissions, while HEVs and PHEVs produce fewer emissions than conventional ICE based vehicles. Fuel Cell EVs (FCEVs) are also zero-emission vehicles, but they carry large costs. Finally, if the electricity is provided by renewable energy technologies through the grid connection, then BEVs could be considered zero-emission vehicles.

Keywords: electric vehicles, zero emission car, fuel economy, CO₂ footprint

Procedia PDF Downloads 140
512 Disentangling Biological Noise in Cellular Images with a Focus on Explainability

Authors: Manik Sharma, Ganapathy Krishnamurthi

Abstract:

The cost of some drugs and medical treatments has risen so much in recent years that many patients are having to go without. One of the more surprising reasons behind the cost is how long it takes to bring new treatments to market. Despite improvements in technology and science, research and development continue to lag; in fact, finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. A classification project could make researchers more efficient: if successful, it could dramatically improve the industry's ability to model cellular images according to their relevant biology, in turn greatly decreasing the cost of treatments and ensuring these treatments get to patients faster. This work aims at solving part of this problem by creating a cellular image classification model which can decipher the genetic perturbations in cells (occurring naturally or artificially). Another interesting question addressed is what makes the deep-learning model decide in a particular fashion, which can further help in demystifying the mechanism of action of certain perturbations and paves a way towards the explainability of the deep-learning model.

Keywords: cellular images, genetic perturbations, deep-learning, explainability

Procedia PDF Downloads 102
511 Design of an Intelligent Fire Detection System Based on Neural Network and Particle Swarm Optimization

Authors: Majid Arvan, Peyman Beygi, Sina Rokhsati

Abstract:

In-time detection of fire in buildings is of great importance. Employing intelligent methods of data processing in fire detection systems leads to a significant reduction of fire damage at the lowest cost. In this paper, the raw data obtained from the fire detection sensor networks in buildings are processed using intelligent methods based on neural networks, and the likelihood of a fire is predicted. To enhance the quality of the system, the noise in the sensor data is reduced by wavelet analysis and the SVD technique. Meanwhile, the proposed neural network is trained using particle swarm optimization (PSO). In the simulation work, the data are collected from a sensor network inside the room and applied to the proposed network, and the outputs are compared with a conventional MLP network. The simulation results demonstrate the superiority of the proposed method over the conventional one.

Keywords: intelligent fire detection, neural network, particle swarm optimization, fire sensor network

Procedia PDF Downloads 376
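The abstract above reduces sensor noise by wavelet analysis and the SVD technique. The SVD part is commonly a low-rank reconstruction that keeps the strongest singular components and discards the rest as noise; a sketch on a synthetic rank-1 sensor-data matrix (the rank and noise level are illustrative assumptions):

```python
import numpy as np

def svd_denoise(X, rank):
    """Low-rank SVD reconstruction: keep the strongest singular
    components, discard the rest as noise (illustrative sketch)."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
# Eight sensors measuring the same underlying signal: a rank-1 matrix.
clean = np.outer(np.sin(2 * np.pi * 3 * t), np.ones(8))
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = svd_denoise(noisy, rank=1)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
assert err_denoised < err_noisy
```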
510 Influence of Inertial Forces of Large Bearings Utilized in Wind Energy Assemblies

Authors: S. Barabas, F. Sarbu, B. Barabas, A. Fota

Abstract:

The main objective of this paper is to establish a link between the inertial forces of the bearings used in the construction of a wind power plant and its behavior. Using bearings with lower inertial forces has the immediate effect of decreasing the inertia of the rotor system, with significant results in increased energy efficiency due to decreased friction forces between rollers and raceways. The FEM analysis shows the appearance of uniform contact stress at the ends of the rollers, demonstrating the necessity of producing low-mass bearings. Favorable results are expected in the economic field, by reducing material consumption and by increasing the durability of bearings. Using low-mass bearings with hollow rollers instead of solid rollers decreases working temperature, vibration, and noise. Implementation of hollow rollers of the cylindrical tubular type, instead of expensive rollers with a logarithmic profile, will bring a significant decrease in inertial forces, with large benefits for the behavior of the wind power plant.

Keywords: inertial forces, Von Mises stress, hollow rollers, wind turbine

Procedia PDF Downloads 349
509 Indigenous Patch Clamp Technique: Design of Highly Sensitive Amplifier Circuit for Measuring and Monitoring of Real Time Ultra Low Ionic Current through Cellular Gates

Authors: Moez ul Hassan, Bushra Noman, Sarmad Hameed, Shahab Mehmood, Asma Bashir

Abstract:

The importance of the Nobel Prize-winning patch clamp technique is well documented. However, the patch clamp technique is very expensive and hence hinders research in developing countries. In this paper, the detection, processing, and recording of ultra-low currents from induced cells using a transimpedance amplifier is described. The sensitivity of the proposed amplifier is in the range of femtoamperes (fA). Capacitive feedback is used with an active load to obtain a 20 MΩ transimpedance gain. The challenging tasks in the design include achieving adequate performance in gain, noise immunity, and stability. The circuit designed by the authors was able to measure currents in the range of 300 fA to 100 pA. Adequate performance was shown by the amplifier with different input currents, and the outcomes were found to be within the acceptable error range. Results were recorded using LabVIEW 8.5® for further research.

Keywords: drug discovery, ionic current, operational amplifier, patch clamp

Procedia PDF Downloads 512
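The figures in the abstract above imply very small first-stage output voltages: with the ideal transimpedance relation V = I·R_t and the stated 20 MΩ gain, the 300 fA to 100 pA range maps to roughly 6 µV to 2 mV, which illustrates why noise immunity is the hard part of the design. A quick check (the ideal relation ignores the bandwidth and noise shaping of the real capacitive-feedback circuit):

```python
def transimpedance_output(current_a, gain_ohm=20e6):
    """Ideal transimpedance relation V = I * R_t (sketch only;
    the real capacitive-feedback stage also shapes bandwidth)."""
    return current_a * gain_ohm

# Stated measurement range of 300 fA to 100 pA at 20 MOhm gain:
v_min = transimpedance_output(300e-15)  # 6 microvolts
v_max = transimpedance_output(100e-12)  # 2 millivolts
assert abs(v_min - 6e-6) < 1e-12
assert abs(v_max - 2e-3) < 1e-9
```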
508 Digital Reconstruction of Museum's Statue Using 3D Scanner for Cultural Preservation in Indonesia

Authors: Ahmad Zaini, F. Muhammad Reza Hadafi, Surya Sumpeno, Muhtadin, Mochamad Hariadi

Abstract:

The lack of information about a museum's collection reduces the number of visits to the museum, so museum revitalization is an urgent activity to increase visits. The research roadmap is to build a web-based application that visualizes the museum in virtual form, including the reconstruction of the museum's statues in 3D. This paper describes the implementation of a three-dimensional model reconstruction method based on a light-strip pattern on a museum statue using a 3D scanner. Noise removal, alignment, meshing, and model refinement processes are implemented to get a better 3D object reconstruction. The model's texture derives from surface texture mapping between the object's images and the reconstructed 3D model. The dimensional accuracy of the model is measured by calculating the relative error of the virtual model's dimensions against the original object. The result is a realistic, textured three-dimensional model with a relative error of around 4.3% to 5.8%.

Keywords: 3D reconstruction, light pattern structure, texture mapping, museum

Procedia PDF Downloads 457
507 Intelligent Process Data Mining for Monitoring the Fault-Free Operation of Industrial Processes

Authors: Hyun-Woo Cho

Abstract:

Real-time fault monitoring and diagnosis of large-scale production processes is helpful and necessary in order to operate industrial processes safely and efficiently while producing good final product quality. Unusual and abnormal events may have a serious impact on the process, such as malfunctions or breakdowns. This work utilizes process measurement data obtained on-line for the safe and fault-free operation of industrial processes. To this end, the proposed intelligent process data monitoring framework is evaluated on a simulated process. The monitoring scheme extracts the fault pattern in a reduced space for reliable data representation. Moreover, this work shows the results of using linear and nonlinear techniques for the monitoring purpose. The nonlinear technique produced more reliable monitoring results and outperforms the linear methods. The adoption of the qualitative monitoring model helps to reduce the sensitivity of the fault pattern to noise.

Keywords: process data, data mining, process operation, real-time monitoring

Procedia PDF Downloads 631
506 Surface Roughness of AlSi/10%AlN Metal Matrix Composite Material Using the Taguchi Method

Authors: Nurul Na'imy Wan, Mohamad Sazali Said, Jaharah Ab. Ghani, Mohd Asri Selamat

Abstract:

This paper presents the surface roughness of an aluminium silicon alloy (AlSi) matrix composite reinforced with aluminium nitride (AlN), machined with three types of carbide inserts. Experiments were conducted at various cutting speeds, feed rates, and depths of cut, according to the Taguchi method, using a standard orthogonal array L27 (3⁴). The signal-to-noise (S/N) ratio and analysis of variance are applied to study the characteristic performance of the machining parameters in measuring the surface roughness during the milling operation. The analysis of the results using the Taguchi method concluded that a combination of low feed rate, medium depth of cut, low cutting speed, and the TiB2 insert gives a better surface roughness value. Specifically, a cutting speed of 230 m/min, a feed rate of 0.4 mm/tooth, a depth of cut of 0.5 mm, and the TiB2 insert were found to be the optimal machining parameters.

Keywords: AlSi/AlN Metal Matrix Composite (MMC), surface roughness, Taguchi method

Procedia PDF Downloads 459
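For a smaller-the-better response such as surface roughness, the Taguchi S/N ratio used in the abstract above is conventionally S/N = -10·log10(mean(yᵢ²)), so lower roughness yields a higher (better) S/N value. A sketch with hypothetical roughness replicates (the numbers are illustrative, not the paper's measurements):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi smaller-the-better S/N ratio:
    S/N = -10 * log10(mean(y_i^2)). Higher is better."""
    mean_sq = sum(v ** 2 for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Hypothetical roughness replicates (micrometres) for two settings:
rough_a = [0.8, 0.9, 0.85]    # smoother surface
rough_b = [1.6, 1.7, 1.65]    # rougher surface
# The smoother setting gets the higher S/N ratio.
assert sn_smaller_is_better(rough_a) > sn_smaller_is_better(rough_b)
```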
505 Machine Learning Approach for Yield Prediction in Semiconductor Production

Authors: Heramb Somthankar, Anujoy Chakraborty

Abstract:

This paper presents a classification study on yield prediction in semiconductor production using machine learning approaches. A complicated semiconductor production process is generally monitored continuously by signals acquired from sensors and measurement sites. A monitoring system contains a variety of signals, all of which contain useful information, irrelevant information, and noise. With each signal considered a feature, feature selection is used to find the most relevant signals. The open-source UCI SECOM dataset provides 1567 such samples, of which 104 fail quality assurance. Feature extraction and selection are performed on the dataset, and useful signals are considered for further study. Afterward, common machine learning algorithms are employed to predict whether the signals yield pass or fail. The most relevant algorithm is selected for prediction based on the accuracy and loss of the ML model.

Keywords: deep learning, feature extraction, feature selection, machine learning classification algorithms, semiconductor production monitoring, signal processing, time-series analysis

Procedia PDF Downloads 101
504 Blind Super-Resolution Reconstruction Based on PSF Estimation

Authors: Osama A. Omer, Amal Hamed

Abstract:

Successful blind image super-resolution algorithms require exact estimation of the Point Spread Function (PSF). In the absence of any prior information about the imaging system and the true image, this estimation is normally done by trial-and-error experimentation until an acceptable restored image quality is obtained. Multi-frame blind super-resolution algorithms often suffer from slow convergence and sensitivity to complex noise. This paper presents a super-resolution image reconstruction algorithm based on estimating the PSF that yields the optimum restored image quality. The PSF is estimated by the knife-edge method, implemented by measuring the spreading of edges in the reproduced HR image itself during the reconstruction process. The proposed reconstruction approach uses L1-norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. A series of experimental results shows that the proposed method outperforms previous work robustly and efficiently.

Keywords: blind, PSF, super-resolution, knife-edge, blurring, bilateral, L1 norm

Procedia PDF Downloads 358
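The knife-edge principle the abstract above relies on is that the line spread function (a 1-D slice of the PSF) is the derivative of the edge spread function measured across a blurred edge. A sketch on a synthetic edge (the blur profile is an illustrative assumption):

```python
import numpy as np

def lsf_from_edge(edge_profile):
    """Knife-edge principle: the line spread function (a 1-D slice
    of the PSF) is the derivative of the edge spread function."""
    return np.diff(edge_profile)

# Synthetic blurred edge: a step smoothed into a smooth ramp.
x = np.linspace(-5.0, 5.0, 101)
esf = 0.5 * (1.0 + np.tanh(x / 1.5))   # blurred step edge
lsf = lsf_from_edge(esf)
peak = int(np.argmax(lsf))
# The PSF estimate peaks at the edge location (profile centre).
assert abs(peak - 50) <= 1
```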
503 Objective Evaluation on Medical Image Compression Using Wavelet Transformation

Authors: Amhimmid Mohammed Saffour, Mustafa Mohamed Abdullah

Abstract:

The use of computers for handling image data in healthcare is growing. However, the amount of data produced by modern image-generating techniques is vast. This data might be a problem from a storage point of view or when it is sent over a network. This paper uses the wavelet transform technique for medical image compression. A MATLAB program is designed to evaluate the medical image storage and transmission time problem at Sebha Medical Center, Libya. Three different computed tomography images, of the abdomen, brain, and chest, have been selected and compressed using the wavelet transform. An objective evaluation has been performed to measure the quality of the compressed images: the Peak Signal-to-Noise Ratio (PSNR), which indicates the quality of the compressed image, ranges from 25.89 dB to 34.35 dB for abdomen images, 23.26 dB to 33.3 dB for brain images, and 25.5 dB to 36.11 dB for chest images. These values show that a compression ratio of nearly 30:1 is acceptable.

Keywords: medical image, Matlab, image compression, wavelet's, objective evaluation

Procedia PDF Downloads 283
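The PSNR figures quoted in the abstract above follow the standard definition PSNR = 10·log10(peak²/MSE). A minimal sketch for 8-bit images (the sample pixel values are illustrative):

```python
import numpy as np

def psnr_db(original, compressed, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE), the objective quality
    measure used in the evaluation."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

orig = np.array([[100, 120], [130, 140]], dtype=np.uint8)
comp = np.array([[101, 119], [131, 141]], dtype=np.uint8)
val = psnr_db(orig, comp)
# Every pixel off by 1 gray level gives MSE = 1, hence about 48.13 dB.
assert abs(val - 48.13) < 0.01
```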
502 Impact of Hard Limited Clipping Crest Factor Reduction Technique on Bit Error Rate in OFDM Based Systems

Authors: Theodore Grosch, Felipe Koji Godinho Hoshino

Abstract:

In wireless communications, 3GPP LTE is one of the solutions to meet the demand for greater transmission data rates. One issue inherent to this technology is the high PAPR (Peak-to-Average Power Ratio) of OFDM (Orthogonal Frequency Division Multiplexing) modulation, which affects the efficiency of power amplifiers. One approach to mitigate this effect is the Crest Factor Reduction (CFR) technique. In this work, we simulate the impact of the hard limited clipping CFR technique on BER (Bit Error Rate) in OFDM-based systems. In general, the results showed that CFR has more effect on higher-order digital modulation schemes, as expected. More importantly, we show the worst-case degradation due to CFR on QPSK, 16-QAM, and 64-QAM signals in a linear system. For example, hard clipping of 9 dB results in a 2 dB increase in the required signal-to-noise ratio at a 1% BER for 64-QAM modulation.

Keywords: bit error rate, crest factor reduction, OFDM, physical layer simulation

Procedia PDF Downloads 356
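The hard limited clipping CFR step can be sketched as limiting the complex OFDM envelope at a threshold set relative to its RMS level. The clipping level and signal model below are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np

def hard_clip(signal, clip_ratio_db):
    """Hard-limit the complex envelope at a threshold placed
    clip_ratio_db above the RMS level (illustrative sketch)."""
    rms = np.sqrt(np.mean(np.abs(signal) ** 2))
    threshold = rms * 10 ** (clip_ratio_db / 20.0)
    mag = np.abs(signal)
    scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    return signal * scale

def papr_db(signal):
    """Peak-to-average power ratio in dB."""
    power = np.abs(signal) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(2)
# OFDM time-domain samples are approximately complex Gaussian:
ofdm = rng.normal(size=4096) + 1j * rng.normal(size=4096)
clipped = hard_clip(ofdm, clip_ratio_db=6.0)
assert papr_db(clipped) < papr_db(ofdm)
```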
501 Significance of Molecular Autophagic Pathway in Gaucher Disease Pathology

Authors: Ozlem Oral, Emre Taskin, Aysel Yuce, Serap Dokmeci, Devrim Gozuacik

Abstract:

Autophagy is an evolutionarily conserved, lysosome-dependent catabolic pathway responsible for the degradation of long-lived proteins, abnormal aggregates, and damaged organelles which cannot be degraded by the ubiquitin-proteasome system. Lysosomes degrade substrates through the activity of lysosomal hydrolases and lysosomal membrane-bound proteins. Mutations in the coding regions of these proteins cause malfunctioning lysosomes, which contributes to the pathogenesis of lysosomal storage diseases. Gaucher disease is a lysosomal storage disease resulting from mutation of a lysosomal membrane-associated glycoprotein called glucocerebrosidase and its cofactor saposin C. The disease leads to intracellular accumulation of glucosylceramide and other glycolipids. Because of the essential role of lysosomes in autophagic degradation, Gaucher disease may be directly linked to this pathway. In this study, we investigated the expression of autophagy- and/or lysosome-related genes and proteins in fibroblast cells isolated from patients with different mutations. We carried out confocal microscopy analysis and examined autophagic flux by utilizing the differential pH sensitivities of RFP and GFP in the mRFP-GFP-LC3 probe. We also evaluated lysosomal pH by active lysosome staining and measured lysosomal enzyme activity. Besides lysosomes, we also performed proteasomal activity and cell death analyses in patient samples. Our data showed significant attenuation in the expression of key autophagy-related genes and accumulation of their proteins in mutant cells. We found a decreased ability of autophagosomes to fuse with lysosomes, associated with elevated lysosomal pH and reduced lysosomal enzyme activity. Proteasomal degradation and cell death analyses showed reduced proteolytic activity of the proteasome, which consequently leads to increased susceptibility to cell death.
Our data indicate that the major degradation pathways are affected by malfunctioning lysosomes in mutant patient cells and may underlie the clinical severity observed in Gaucher patients. (This project is supported by the TUBITAK 3501 National Young Researchers Career Development Program, Project No: 112T130.)

Keywords: autophagy, Gaucher's disease, glucocerebrosidase, mutant fibroblasts

Procedia PDF Downloads 321
500 Comparative Study of Different Enhancement Techniques for Computed Tomography Images

Authors: C. G. Jinimole, A. Harsha

Abstract:

One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformed representation for further processing. Contrast enhancement is one of the accepted methods used for image enhancement in various applications in the medical field, and is helpful for visualizing and extracting details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: power-law transformation, logarithmic transformation, histogram equalization, contrast stretching, and Laplacian transformation. All these techniques are compared with each other to find out which provides better contrast for CT images. For the comparison, the parameters Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) are used. Logarithmic transformation provided the clearest and best-quality image of all the techniques studied and obtained the highest PSNR. The comparison points to the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.

Keywords: computed tomography, enhancement techniques, increasing contrast, PSNR and MSE

Procedia PDF Downloads 304
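The logarithmic transformation that performed best in the comparison above is conventionally s = c·log(1 + r), with c chosen so the output still spans the full intensity range. A sketch for 8-bit data (the sample values are illustrative):

```python
import numpy as np

def log_transform(image, peak=255.0):
    """Logarithmic transformation s = c * log(1 + r), with c chosen
    so the output spans [0, peak]; expands dark-region contrast."""
    c = peak / np.log(1.0 + peak)
    return c * np.log1p(image.astype(float))

ct_slice = np.array([[0, 10], [100, 255]], dtype=np.uint8)
enhanced = log_transform(ct_slice)
# Dark values are stretched upward; the maximum maps back to peak.
assert enhanced[0, 1] > 10.0
assert enhanced[1, 1] <= 255.0 + 1e-9
```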
499 Highly Secure Data Hiding Using Image Cropping and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and Least Significant Bit (LSB) steganography. Predefined secret-coordinate crops are extracted from the cover image. The secret text message is divided into sections, the number of which equals the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique, with the embedding done in the cover image's color channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared to other state-of-the-art techniques. Evaluation is based on visual inspection to detect any degradation of the stego image, the difficulty of extracting the embedded data by any unauthorized viewer, the Peak Signal-to-Noise Ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results confirm that the proposed technique is more secure than traditional techniques.

Keywords: steganography, stego, LSB, crop

Procedia PDF Downloads 263
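The LSB step at the core of the technique above replaces the least significant bit of each pixel with one message bit, so each pixel changes by at most one gray level. A single-channel sketch (the paper's crop selection and secret-sequence logic are omitted here):

```python
def embed_lsb(pixels, bits):
    """Embed a bit string into the least significant bits of a
    pixel sequence (single-channel sketch of the LSB step)."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | int(bit)
    return out

def extract_lsb(pixels, n_bits):
    """Read the message back from the least significant bits."""
    return ''.join(str(p & 1) for p in pixels[:n_bits])

cover = [120, 121, 122, 123, 124, 125, 126, 127]
secret = '10110010'
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, 8) == secret
# Each pixel changes by at most 1, so visual degradation is minimal:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```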
498 Engineering Method to Measure the Impact Sound Improvement with Floor Coverings

Authors: Katarzyna Baruch, Agata Szelag, Jaroslaw Rubacha, Bartlomiej Chojnacki, Tadeusz Kamisinski

Abstract:

The methodology used to measure the reduction of transmitted impact sound by floor coverings on a massive floor is described in ISO 10140-3:2010. To carry out such tests, a standardised reverberation room separated from a second measuring room by a standard floor is required. The need for a special laboratory results in the high cost and low accessibility of this measurement. The authors propose their own engineering method to measure the impact sound improvement of floor coverings, which requires neither standard rooms nor a standard floor. This paper describes the measurement procedure of the proposed engineering method. Further, verification tests were performed: validation of the proposed method was based on an analytical model, a Statistical Energy Analysis (SEA) model, and empirical measurements, and the results were compared with corresponding ones obtained from ISO 10140-3:2010 measurements. The study confirmed the usefulness of the engineering method.

Keywords: building acoustic, impact noise, impact sound insulation, impact sound transmission, reduction of impact sound

Procedia PDF Downloads 319
497 Stabilization of Rotational Motion of Spacecrafts Using Quantized Two Torque Inputs Based on Random Dither

Authors: Yusuke Kuramitsu, Tomoaki Hashimoto, Hirokazu Tahara

Abstract:

The control problem of underactuated spacecraft has attracted a considerable amount of interest. A control method for a spacecraft equipped with fewer than three control torques is useful when one of the three control torques has failed. On the other hand, the quantized control of systems is one of the important research topics of recent years. The random dither quantization method, which transforms a given continuous signal into a discrete signal by adding artificial random noise to the continuous signal before quantization, has also attracted considerable interest. The objective of this study is to develop a control method based on random dither quantization for stabilizing the rotational motion of a rigid spacecraft with two control inputs. In this paper, the effectiveness of the random dither quantization control method for the stabilization of the rotational motion of spacecraft with two torque inputs is verified by numerical simulations.

Keywords: spacecraft control, quantized control, nonlinear control, random dither method

Procedia PDF Downloads 173
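The random dither quantization described above adds uniform noise of one quantization step's width before rounding; its key property is that the quantization error becomes zero-mean, so averages of the quantized signal converge to the true value. A sketch (the step size is an illustrative choice):

```python
import math
import random

def dither_quantize(u, step=1.0, rng=random):
    """Add uniform dither in [-step/2, step/2) before uniform
    quantization (illustrative sketch of random dither)."""
    d = rng.uniform(-step / 2.0, step / 2.0)
    return step * math.floor((u + d) / step + 0.5)

random.seed(0)
u = 0.3
samples = [dither_quantize(u) for _ in range(20000)]
mean = sum(samples) / len(samples)
# Zero-mean quantization error: the average approaches u itself,
# even though each individual output is 0.0 or 1.0.
assert abs(mean - u) < 0.05
```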
496 Mitigation of Electromagnetic Interference Generated by GPIB Control-Network in AC-DC Transfer Measurement System

Authors: M. M. Hlakola, E. Golovins, D. V. Nicolae

Abstract:

The field of instrumentation electronics is undergoing explosive growth due to its wide range of applications. The proliferation of electrical devices in close working proximity can negatively influence each other's performance; this degradation is due to electromagnetic interference (EMI). This paper investigates the negative effects of electromagnetic interference originating in the General Purpose Interface Bus (GPIB) control network of an ac-dc transfer measurement system, and explores remedial measures for reducing the measurement errors and failures of a range of industrial devices due to EMI. The ac-dc transfer measurement system was analyzed for common-mode (CM) EMI effects. Further investigation of the coupling path, as well as more accurate identification of the noise propagation mechanism, is outlined. To prevent the occurrence of common-mode ground loops, which were identified between the GPIB system control circuit and the measurement circuit, a microcontroller-driven GPIB switching isolator device was designed, prototyped, programmed, and validated. This mitigation technique effectively reduces EMI.

Keywords: CM, EMI, GPIB, ground loops

Procedia PDF Downloads 287
495 Achievement of Livable and Healthy City through the Design of Green and Blue Infrastructure: A Case Study on City of Isfahan, Iran

Authors: Reihaneh Rafiemanzelat

Abstract:

Owing to rapid urbanization, cities throughout the world have grown largely through gray infrastructure. Designing cities around green and blue infrastructure instead offers a promising way to support a healthy urban environment. Such infrastructure provides a wide range of ecosystem services, with positive impacts on air-temperature regulation, noise reduction, and air quality, and it creates a pleasant environment for human activities. This research focuses on the concepts and principles of green and blue infrastructure in the city of Isfahan, in central Iran, with the aim of creating a livable and healthy environment. Design principles for green and blue infrastructure are classified into two distinct but interconnected evaluations. Healthy green infrastructure is assessed in terms of volume, shape, location, dispersion, and maintenance. Blue infrastructure is assessed in terms of three aspects of water and the ecosystem: the contribution of water to physical health, the contribution of water to mental health, and the opportunities it creates for exercise.

Keywords: healthy cities, livability, urban landscape, green and blue infrastructure

Procedia PDF Downloads 296
494 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy

Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang

Abstract:

Wireless communication networks are developing rapidly, so wireless security is becoming increasingly important. Specific emitter identification (SEI), a technique for identifying unique transmitters, is a vital part of wireless communication security. In this paper, an SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show a total identification accuracy of 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which demonstrates that MDE and RCMDE describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
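For orientation, a minimal sketch of single-scale dispersion entropy, the building block behind MDE and RCMDE, is given below. The normal-CDF class mapping follows the usual formulation of the method; the defaults (embedding dimension m=2, c=3 classes) and the test signals are illustrative assumptions, not the paper's settings.

```python
import math
import random
from collections import Counter

def dispersion_entropy(x, m=2, c=3, delay=1):
    """Normalised dispersion entropy of a 1-D signal in (0, 1]."""
    n = len(x)
    mu = sum(x) / n
    sigma = (sum((v - mu) ** 2 for v in x) / n) ** 0.5
    # Map each sample to one of c classes via the normal CDF.
    y = [0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2)))) for v in x]
    z = [min(c, max(1, round(c * v + 0.5))) for v in y]
    # Count dispersion patterns of length m.
    patterns = Counter(tuple(z[i + k * delay] for k in range(m))
                       for i in range(n - (m - 1) * delay))
    total = sum(patterns.values())
    probs = [v / total for v in patterns.values()]
    # Shannon entropy, normalised by ln(c**m).
    return -sum(p * math.log(p) for p in probs) / math.log(c ** m)

# Illustrative check: white noise yields higher entropy than a smooth ramp.
rng = random.Random(0)
noise = [rng.random() for _ in range(500)]
ramp = [i / 500 for i in range(500)]
de_noise = dispersion_entropy(noise)
de_ramp = dispersion_entropy(ramp)
```

The multiscale (MDE) and refined composite (RCMDE) variants apply this quantity to coarse-grained versions of the signal; RCMDE additionally averages pattern frequencies across coarse-graining offsets before taking the entropy.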

Keywords: cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device

Procedia PDF Downloads 128
493 Numerical Study of Piled Raft Foundation Under Vertical Static and Seismic Loads

Authors: Hamid Oumer Seid

Abstract:

A piled raft foundation (PRF) combines piles and a raft, which work together through soil-pile, pile-raft, soil-raft, and pile-pile interactions to provide adequate bearing capacity and controlled settlement. A uniform pile layout is commonly used in PRFs; however, there is considerable room for optimization through parametric study under vertical load, yielding a safer and more economical foundation. Addis Ababa lies in seismic zone 3, with a peak ground acceleration (PGA) above the damage threshold, which makes it vital to investigate the performance of PRFs under seismic load while accounting for dynamic kinematic soil-structure interaction (SSI). The study area is located in Addis Ababa around Mexico (Commercial Bank) and Kirkos (Nib, Zemen, and United Bank), from which the input parameters (pile length, pile diameter, pile spacing, raft area, raft thickness, and load) were taken. The finite-difference-based numerical software FLAC3D V6 was used for the analysis. The Kobe (1995) and Northridge (1994) earthquake records were selected, and deconvolution analysis was performed. A nearly equal load sharing between pile and raft was achieved at a spacing of 7D for various pile lengths and diameters. The maximum settlement reduction achieved was 9% for a 2 m diameter pile when its length was increased from 10 m to 20 m, showing that pile length is not effective in reducing settlement. Installing piles increases the negative bending moment of the raft compared with an unpiled raft. Hence, the optimized design depends on pile spacing and raft edge length, while pile length and diameter are not significant parameters. An optimized piled raft configuration (A_G/A_R = 0.25 at the center, with piles provided around the edge) reduced the pile number by 40% and the differential settlement by 95%. The dynamic analysis shows that the acceleration at the top of the piled raft has a PGA of 0.25 m/s² and 0.63 m/s² for the Northridge (1994) and Kobe (1995) earthquakes, respectively, owing to attenuation of the seismic waves. The pile head displacement (a maximum of 2 mm, within the allowable limit) is governed by the PGA rather than the duration of the earthquake. End-bearing and friction PRFs performed similarly under the two earthquakes except for their vertical settlement when SSI is considered. Hence, the PRF shows adequate resistance to seismic loads.

Keywords: FLAC3D V6, earthquake, optimized piled raft foundation, pile head displacement

Procedia PDF Downloads 17
492 Dynamics of Adiabatic Rapid Passage in an Open Rabi Dimer Model

Authors: Justin Zhengjie Tan, Yang Zhao

Abstract:

Adiabatic rapid passage, a popular method of achieving population inversion, is studied in a Rabi dimer model in the presence of noise, which acts as a dissipative environment. Integrating the multi-Davydov D2 Ansatz into the time-dependent variational framework enables us to model this intricate quantum system accurately. By driving the system with a field strength resonant with the energy spacing, the probability of adiabatic rapid passage, modelled after the Landau-Zener problem, can be derived along with several other observables, such as the photon population. The effects of a dissipative environment are reproduced by coupling the system to a common phonon mode. By manipulating the strength and frequency of the driving field, along with the coupling strength of the phonon mode to the qubits, we can control the qubit and photon dynamics and thereby increase the probability of successful adiabatic rapid passage.
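For orientation, the textbook Landau-Zener formula that the passage probability is modelled after can be evaluated directly. The convention below (coupling delta equal to half the minimum gap, sweep rate v of the energy bias) and the numerical values are illustrative assumptions, not the paper's parameters.

```python
import math

def lz_diabatic_probability(delta, sweep_rate, hbar=1.0):
    """Probability of remaining on the diabatic state after a linear sweep:
    P = exp(-2*pi*delta**2 / (hbar*v)). Adiabatic passage succeeds with 1 - P."""
    return math.exp(-2 * math.pi * delta ** 2 / (hbar * sweep_rate))

# A slow sweep (small v) makes the passage nearly adiabatic;
# a fast sweep leaves the population mostly on the diabatic state.
p_slow = lz_diabatic_probability(delta=1.0, sweep_rate=0.5)
p_fast = lz_diabatic_probability(delta=1.0, sweep_rate=100.0)
```

Dissipation and driving, as studied in the abstract, modify this closed-system limit, which is why the variational multi-Davydov treatment is needed there.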

Keywords: quantum electrodynamics, adiabatic rapid passage, Landau-Zener transitions, dissipative environment

Procedia PDF Downloads 78
491 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends shift towards augmented/virtual reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimization. This paper proposes a software solution to this problem: by leveraging cloud computing, work can be offloaded from mobile devices to far more powerful dedicated rendering servers. This, however, introduces the problem of latency. This paper introduces a protocol that achieves a high-performance, low-latency augmented/virtual reality experience. The protocol has two parts. 1) In-flight compression: the main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted and can therefore be reduced by compressing the frames before sending. Standard compression algorithms such as JPEG yield only a minor size reduction. Since the images to be compressed are consecutive camera frames, there are few changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but most devices limit WebGL floating-point precision to 16 bits, which introduces noise into the image through rounding errors that accumulate over time. This is solved with an improved inter-frame compression algorithm that detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference. The kernel weights for this comparison can be fine-tuned to match the type of image being compressed. 2) Dynamic load distribution: conventional cloud computing architectures offload as much work as possible to the servers, but this approach strains bandwidth and server costs. The optimal solution is reached when the device utilizes 100% of its resources and the server does the rest. The protocol balances the load between server and client by performing a fraction of the computation on the device, depending on the device's power and the network conditions, and is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between client and server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client agnostic: flags are available to the client for resetting the frame, indicating latency, switching modes, and so on. The server can react to client-side changes on the fly and adapt by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally, which is achieved by isolating client connections into different processes.
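A minimal sketch of the change-detection idea described in part 1, in integer arithmetic (sidestepping the floating-point subtraction noted as a noise source); the kernel weights, threshold, and frame representation are illustrative assumptions, not the paper's implementation.

```python
def weighted_diff(prev, curr, x, y, kernel):
    """Kernel-weighted sum of absolute pixel differences around (x, y)."""
    h, w = len(curr), len(curr[0])
    total = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= ny < h and 0 <= nx < w:
                total += kernel[dy + 1][dx + 1] * abs(curr[ny][nx] - prev[ny][nx])
    return total

def encode_frame(prev, curr, kernel, threshold=0):
    """Transmit only pixels whose weighted neighbourhood difference exceeds
    the threshold; the receiver reuses the rest from the previous frame."""
    return [(x, y, curr[y][x])
            for y in range(len(curr)) for x in range(len(curr[0]))
            if weighted_diff(prev, curr, x, y, kernel) > threshold]

def decode_frame(prev, changes):
    """Rebuild the current frame from the previous frame plus the deltas."""
    frame = [row[:] for row in prev]
    for x, y, v in changes:
        frame[y][x] = v
    return frame

# Example: a single changed pixel in a 4x4 grayscale frame.
kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # illustrative smoothing weights
prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][1] = 9
changes = encode_frame(prev, curr, kernel)
```

With a zero threshold the round trip is lossless; raising the threshold trades fidelity for a smaller payload, which is the tuning knob the protocol exposes per image type.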

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 69
490 Immiscible Polymer Blends with Controlled Nanoparticle Location for Excellent Microwave Absorption: A Compartmentalized Approach

Authors: Sourav Biswas, Goutam Prasanna Kar, Suryasarathi Bose

Abstract:

To obtain better materials, precise control over the location of nanoparticles is indispensable. It is shown here that an ordered arrangement of nanoparticles possessing different characteristics (electrical/magnetic dipoles) within the blend structure can result in excellent microwave absorption, as manifested by a high reflection loss of ca. -67 dB for the best blend structure designed here. To attenuate electromagnetic radiation, the key parameters, high electrical conductivity and large dielectric/magnetic loss, are targeted here using a conducting inclusion [multiwall carbon nanotubes, MWNTs]; a ferroelectric nanostructured material with associated relaxations in the GHz frequency range [barium titanate, BT]; and lossy ferromagnetic nanoparticles [nickel ferrite, NF]. In this study, bi-continuous structures were designed using 50/50 (by wt) blends of polycarbonate (PC) and polyvinylidene fluoride (PVDF). The MWNTs were modified using an electron-acceptor molecule, a derivative of perylenediimide, which facilitates π-π stacking with the nanotubes and stimulates efficient charge transport in the blends. The nanoscopic materials have a specific affinity towards the PVDF phase; hence, their ordered arrangement can be tailored by introducing surface-active groups. To accomplish this, both BT and NF were first hydroxylated, after which amine-terminal groups were introduced on their surfaces. The latter facilitated a nucleophilic substitution reaction with PC and resulted in their precise location. This study shows for the first time that superior EM attenuation can be achieved through a compartmentalized approach. For instance, when the nanoparticles were localized exclusively in the PVDF phase or in both phases, the minimum reflection loss was ca. -18 dB (for the MWNT/BT mixture) and -29 dB (for the MWNT/NF mixture), and the shielding was primarily through reflection. Interestingly, by adopting the compartmentalized approach, wherein the lossy materials are in the PC phase and the conducting inclusion (MWNT) in PVDF, an outstanding reflection loss of ca. -57 dB (for the BT and MWNT combination) and -67 dB (for the NF and MWNT combination) was noted, and the shielding was primarily through absorption. Thus, the approach demonstrates that nanoscopic structuring in the blends can be achieved under macroscopic processing conditions, and this strategy can be explored further to design microwave absorbers.
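Reflection loss figures like the -67 dB quoted above are conventionally computed from the transmission-line model of a metal-backed single-layer absorber. A sketch of that standard calculation follows; the complex permittivity and permeability values are assumed for illustration, not the measured parameters of these blends.

```python
import cmath
import math

def reflection_loss_db(eps_r, mu_r, thickness_m, freq_hz):
    """Reflection loss (dB) of a metal-backed single layer.
    Normalised input impedance:
        Z_in = sqrt(mu_r/eps_r) * tanh(j*2*pi*f*d*sqrt(mu_r*eps_r)/c)
    RL(dB) = 20*log10 |(Z_in - 1)/(Z_in + 1)|  (more negative = more absorbed)."""
    c = 299792458.0  # speed of light, m/s
    zin = cmath.sqrt(mu_r / eps_r) * cmath.tanh(
        2j * math.pi * freq_hz * thickness_m * cmath.sqrt(mu_r * eps_r) / c)
    return 20 * math.log10(abs((zin - 1) / (zin + 1)))

# Illustrative lossy dielectric/magnetic parameters at 10 GHz, 2 mm layer:
rl = reflection_loss_db(eps_r=10 - 5j, mu_r=1.2 - 0.3j,
                        thickness_m=2e-3, freq_hz=10e9)
```

Large dielectric and magnetic loss tangents drive Z_in towards the free-space impedance, deepening the reflection-loss minimum, which is the design logic behind combining conductive, ferroelectric, and ferromagnetic fillers.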

Keywords: barium titanate, EMI shielding, MWNTs, nickel ferrite

Procedia PDF Downloads 442