Search results for: underwater noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1274

734 Anisotropic Behavior of Sand Stabilized with Colloidal Silica

Authors: Eleni Maria Pavlopoulou, Vasiliki N. Georgiannou, Filippos C. Chortis

Abstract:

The response of M31 sand stabilized with colloidal silica (CS) aqueous gel is investigated in the laboratory. CS is introduced in the water regime, forming a hydrosol. The low viscosity hydrosol thickens in a controllable manner to form a stable, non-toxic gel; the gel fills the pore space, retains the pore water, and supports the grain structure. The role of colloidal silica in subsequent sand behavior is examined with the aid of direct shear, triaxial, and normal compression tests. Under the examined loading modes, while the strength of the treated sand is enhanced, its stiffness may reduce and its compressibility increase. However, in most geotechnical problems, the loading conditions are complex, involving changes in both stress magnitude and direction. Rotation of the principal stresses (σ1, σ2, σ3) by varying amounts, expressed as the angle α (from α = 0° to 90°), in concurrence with increasing shear stress is commonly encountered in soil structures such as foundations, embankments, and underwater slopes. To assess the influence of anisotropy on the response of sands before and after their stabilization, hollow cylinder tests were performed. The behavior of the stabilized sand is compared with the characteristic sand behavior, i.e., a reduction in peak stress ratio associated with a softer stress-strain response with increasing angle α. The influence of the magnitude of the intermediate principal stress (σ2) on the mechanical response of treated and untreated sand is also examined.
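For reference, the principal stress rotation angle and the intermediate principal stress ratio reported in hollow cylinder testing are commonly defined as below; these are standard definitions assumed here, not quoted from the paper.

```latex
% Standard hollow-cylinder definitions (assumed, for reference):
\tan 2\alpha = \frac{2\,\tau_{z\theta}}{\sigma_{z}-\sigma_{\theta}}, \qquad
b = \frac{\sigma_{2}-\sigma_{3}}{\sigma_{1}-\sigma_{3}}, \quad 0 \le b \le 1 .
```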

Keywords: anisotropy, colloidal silica, laboratory tests, sands, soil stabilization

Procedia PDF Downloads 135
733 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostanic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms via matching pursuit, a process that identifies segments of audio data that are locally coherent with the seed atoms. The envelope samples are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR versus data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the best-denoising basis vectors.
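As a rough illustration of the matching-pursuit step described above, the sketch below greedily decomposes a signal over a small random Gabor dictionary; the dictionary size, atom parameters, and stopping rule are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def gabor_atom(n, center, width, freq):
    """Unit-norm Gabor atom: Gaussian envelope times a cosine carrier."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_atoms=20):
    """Greedy sparse decomposition: the residual is reduced one atom at a time."""
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ residual          # correlation with every atom
        k = np.argmax(np.abs(corr))           # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]   # remove its contribution
    return coeffs, residual

# Toy example: a noisy tone decomposed over 200 random Gabor atoms.
rng = np.random.default_rng(0)
n = 512
signal = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.1 * rng.standard_normal(n)
dictionary = np.array([gabor_atom(n, rng.integers(0, n), rng.uniform(5, 50),
                                  rng.uniform(0.01, 0.2)) for _ in range(200)])
coeffs, residual = matching_pursuit(signal, dictionary)
snr = 10 * np.log10(np.sum(signal**2) / np.sum(residual**2))
print(f"atoms used: {np.count_nonzero(coeffs)}, approximation SNR: {snr:.1f} dB")
```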

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 253
732 Sorting Fish by Hu Moments

Authors: J. M. Hernández-Ontiveros, E. E. García-Guerrero, E. Inzunza-González, O. R. López-Bonilla

Abstract:

This paper presents the implementation of an algorithm that identifies and counts different fish species: Catfish, Sea bream, Sawfish, Tilapia, and Totoaba. The main contribution of the method is the fusion of the position, rotation, and scale invariance of the Hu moments with the proper counting of fish. The identification and counting are performed on images under different noise conditions. The experimental results indicate the potential of the proposed algorithm to be applied in different scenarios of aquaculture production.
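A minimal sketch of extracting Hu moments from a binary fish silhouette with OpenCV follows; the Otsu thresholding and the nearest-template matching are illustrative choices, not the authors' pipeline, and the `templates` dictionary is hypothetical.

```python
import cv2
import numpy as np

def hu_features(binary_image):
    """Seven Hu moments, log-scaled so that they span comparable ranges."""
    m = cv2.moments(binary_image.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify(silhouette, templates):
    """Nearest stored template (species name) by Euclidean distance in Hu-moment space."""
    f = hu_features(silhouette)
    return min(templates, key=lambda name: np.linalg.norm(f - templates[name]))

# Hypothetical usage: 'templates' maps species names to reference Hu vectors.
# img = cv2.imread("fish.png", cv2.IMREAD_GRAYSCALE)
# _, silhouette = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# print(classify(silhouette, templates))
```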

Keywords: counting fish, digital image processing, invariant moments, pattern recognition

Procedia PDF Downloads 409
731 Development of Latent Fingerprints on Non-Porous Surfaces Recovered from Fresh and Sea Water

Authors: Somaya Madkour, Abeer Sheta, Fatma Badr El Dine, Yasser Elwakeel, Nermine AbdAllah

Abstract:

Criminal offenders have a fundamental goal not to leave any traces at the crime scene. Some may suppose that items recovered underwater will have no forensic value; therefore, they try to destroy the traces by throwing items into water. These traces are subjected to destructive environmental effects, which can represent a challenge for forensic experts investigating finger marks. Accordingly, the present study was conducted to determine the optimal method for latent fingerprint development on non-porous surfaces submerged in aquatic environments at different time intervals. The two factors analyzed in this study were the nature of the aquatic environment and the length of submersion time. In addition, the quality of the developed finger marks depending on the method used was also assessed. Therefore, latent fingerprints were deposited on metallic, plastic, and glass objects and submerged in fresh or sea water for one, two, and ten days. After recovery, the items were subjected to cyanoacrylate fuming, black powder, and small particle reagent processing, and the prints were examined. Each print was evaluated according to a fingerprint quality assessment scale. The present study demonstrated that the duration of submersion affects the quality of finger marks; the longer the duration, the worse the quality. The best results of visualization were achieved using cyanoacrylate, in either fresh or sea water. This study has also revealed that exposure to sea water had a more destructive influence on the quality of the detected finger marks.

Keywords: fingerprints, fresh water, sea, non-porous

Procedia PDF Downloads 455
730 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimate procedure is exposed to attacks known as pilot contamination, with the aim of having an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation, shifted from the original N-PSK symbols by certain degrees. In this paper, legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attacks (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack but the large number of possible combinations for the shifted constellations makes such a type of attack difficult to successfully mount. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should be also taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases inversely to the signal-to-interference-plus-noise ratio.

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 217
729 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting the data collected from the brain. Electroencephalography (EEG) is a non-invasive method to measure the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Although it has a low spatial resolution, meaning it can only detect when a group of neurons fires at the same time, it is a non-invasive method, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. The recordings of EEGs include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because it is a non-invasive method, there are some sources of noise that may affect the reliability of the EEG signals. For instance, noise from the EEG equipment, the leads, and signals coming from the subject, such as the activity of the heart or muscle movements, affect the signals detected by the electrodes of the EEG. However, new techniques have been developed to differentiate between those signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze the data from the brain for a BCI application. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices. Even though invasive BCIs are needed for neurological diseases that require highly precise data, non-invasive BCIs, such as those based on EEG, are used in many cases to help disabled people or to ease people's lives by helping them with basic tasks. For example, EEG can be used to detect an oncoming seizure in epilepsy patients, so that a BCI device can then help prevent the seizure. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data to ease people's lives as more BCI techniques are developed in the future.

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 71
728 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

We carry out the optimal synthesis of a root-mean-square objective filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation appear in addition to the regular noises. The synthesis is carried out for continuous-time linear stochastic systems.

Keywords: mathematical expectation, filtration, anomalous noise, memory

Procedia PDF Downloads 247
727 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection And Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially-distributed assets at scale, often captured using high powered LiDAR scanners attached to fixed wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters fitted to the captured LiDAR data points using a least square method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
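The least-squares catenary fit described above can be sketched for a single conductor cluster as below; the parameterisation, initial guess, and the SciPy solver are illustrative assumptions, not the authors' iterative learning procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """2D catenary: a is the shape parameter, (x0, c) locate the lowest point."""
    return a * np.cosh((x - x0) / a) + c

def fit_conductor(points):
    """points: Nx2 array of (distance along span, height) for one conductor cluster."""
    x, z = points[:, 0], points[:, 1]
    p0 = [max(x.ptp(), 1.0), x.mean(), z.min()]          # rough initial guess
    params, _ = curve_fit(catenary, x, z, p0=p0, maxfev=10000)
    residuals = z - catenary(x, *params)
    return params, np.sqrt(np.mean(residuals ** 2))       # RMS fit error

# Synthetic demo: noisy samples from a known catenary stand in for LiDAR points.
rng = np.random.default_rng(1)
x = np.linspace(0, 80, 200)
z = catenary(x, 120.0, 40.0, -115.0) + 0.05 * rng.standard_normal(x.size)
params, rms = fit_conductor(np.column_stack([x, z]))
print("a, x0, c =", np.round(params, 2), " RMS error =", round(rms, 3))
```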

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 99
726 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 s -anechoic-, 1 s, 2 s, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed a performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 232
725 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously without relying on certain genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with a weak clustering belonging. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
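A minimal sketch of the shared-nearest-neighbour graph, modularity clustering, and weak-membership filtering idea is given below using scikit-learn and networkx; the weak-membership score used here (fraction of a cell's neighbours in its own community) is an illustrative proxy, not the authors' exact criterion or evaluation metrics.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

def snn_graph(X, k=20):
    """Shared-nearest-neighbour graph: edge weight = size of the kNN-list overlap."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    neigh = [set(row) for row in idx]
    g = nx.Graph()
    g.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in idx[i]:
            j = int(j)
            if j != i and (w := len(neigh[i] & neigh[j])) > 0:
                g.add_edge(i, j, weight=w)
    return g

def cluster_and_filter(X, k=20, min_frac=0.3):
    """Cluster by modularity, then drop cells weakly tied to their own community."""
    g = snn_graph(X, k)
    labels = np.full(len(X), -1)
    for c, members in enumerate(greedy_modularity_communities(g, weight="weight")):
        for i in members:
            labels[i] = c
    keep = [i for i in g.nodes
            if (nbrs := list(g.neighbors(i)))
            and sum(labels[j] == labels[i] for j in nbrs) / len(nbrs) >= min_frac]
    return labels, keep

# Hypothetical usage: labels, keep = cluster_and_filter(embedding)  # embedding: cells x components
```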

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 113
724 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, as one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive for sensors like LIDAR, or limited in operational range for sensors like ultrasonic sensors. Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach also can be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
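In the spirit of the second approach (pixel position as the measurement), the sketch below runs a minimal scalar EKF for height and descent rate; the pinhole camera model, the assumed known lateral offset of the tracked feature, and all noise levels are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

# State x = [height z, vertical speed vz]; measurement is the pixel coordinate
# u = f * X / z of a ground feature at an (assumed known) lateral offset X.
f_px, X_off, dt = 800.0, 1.0, 0.05           # focal length [px], offset [m], time step [s]
F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity process model
Q = np.diag([1e-4, 1e-3])                     # process noise (illustrative)
R = np.array([[4.0]])                         # pixel measurement noise variance

def ekf_step(x, P, u_meas):
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    z = x[0]
    h = f_px * X_off / z                      # nonlinear pixel measurement model
    H = np.array([[-f_px * X_off / z**2, 0.0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K * (u_meas - h)).ravel()        # update
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated descent from 10 m at -0.5 m/s observed through noisy pixel measurements.
rng = np.random.default_rng(2)
true_z, true_vz = 10.0, -0.5
x, P = np.array([8.0, 0.0]), np.eye(2)
for _ in range(200):
    true_z += true_vz * dt
    u = f_px * X_off / true_z + rng.normal(0, 2.0)
    x, P = ekf_step(x, P, u)
print(f"estimated z = {x[0]:.2f} m (true {true_z:.2f}), vz = {x[1]:.2f} m/s")
```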

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 113
723 Evaluation of Natural Frequency of Single and Grouped Helical Piles

Authors: Maryam Shahbazi, Amy B. Cerato

Abstract:

The importance of a system's natural frequency (fn) emerges when the frequency of the vibration force equals the foundation's fn, causing a resonant response amplitude that may cause irreversible damage to the structure. Several factors such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and physical structure affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of a soil-pile system, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on dynamic characteristics of helical pile-soil systems from full-scale shake table tests that will allow engineers to predict more realistic dynamic response under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m x 3.0 m in plan and 4.6 m high). Helical piles of two different diameters (8.8 cm and 14 cm) were embedded in the soil box with corresponding lengths of 3.66 m (excluding one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, all four piles with similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate the wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of measured time history responses from installed strain gages and accelerometers was used to evaluate fn. Both types of time-history records, from accelerometers or strain gages, were found to be acceptable for calculating fn. In this study, the existence of a pile reduced the fn of the soil slightly. Greater fn occurred for single piles with larger l/d ratios (higher slenderness ratio). Also, regardless of the connection type, the more slender pile group, which is obviously surrounded by more soil, yielded higher natural frequencies under white noise, which may be due to exhibiting more passive soil resistance around it. Relatively speaking, within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group the fn's are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile's fn; even more reduction occurs in soil with a lower density. Moreover, the fn of dense sand under a white noise signal was obtained as 5.03 Hz, which is reduced by 44% when an earthquake with an acceleration of 0.5 g is applied. By knowing the factors affecting fn, the designer can effectively match the properties of the soil to a type of pile and structure to attempt to avoid resonance. The quantitative results in this study assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquake and machine loading applied forces.
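The FFT-based identification of fn mentioned above can be illustrated with a short numpy sketch; the sampling rate, search band, and the synthetic record are placeholders rather than data from the shake table tests.

```python
import numpy as np

def natural_frequency(accel, fs, fmin=0.5, fmax=20.0):
    """Estimate fn as the frequency of the largest FFT peak within [fmin, fmax] Hz."""
    accel = accel - np.mean(accel)                        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(accel.size)))
    freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 5 Hz response buried in broadband noise, 60 s at 200 Hz.
fs = 200.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(3)
record = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)
print(f"estimated fn = {natural_frequency(record, fs):.2f} Hz")
```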

Keywords: helical pile, natural frequency, pile group, shake table, stiffness

Procedia PDF Downloads 133
722 Performance Comparison of Non-Binary RA and QC-LDPC Codes

Authors: Ni Wenli, He Jing

Abstract:

Repeat–Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4); the non-binary RA codes are constructed with a linear encoding method and the non-binary QC-LDPC codes with an algebraic construction method. The BER performance of the RA and QC-LDPC codes over GF(q) is compared under BP decoding by simulation over the Additive White Gaussian Noise (AWGN) channel.

Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm

Procedia PDF Downloads 376
721 Observations on the Eastern Red Sea Elasmobranchs: Data on Their Distribution and Ecology

Authors: Frappi Sofia, Nicolas Pilcher, Sander DenHaring, Royale Hardenstine, Luis Silva, Collin Williams, Mattie Rodrigue, Vincent Pieriborne, Mohammed Qurban, Carlos M. Duarte

Abstract:

Nowadays, elasmobranch populations are disappearing at a dangerous rate, mainly due to overexploitation, extensive fisheries, as well as climate change. The decline of these species can trigger a cascade effect, which may eventually lead to detrimental impacts on local ecosystems. Elasmobranchs in the Red Sea face one of the highest risks of extinction, mainly due to unregulated fisheries activities. Thus, it is of paramount importance to assess their current distribution and unveil their environmental preferences in order to improve conservation measures. To achieve this goal, important data have been collected throughout the whole Red Sea during the Red Sea Decade Expedition (RSDE). Elasmobranch sightings were gathered through the use of submarines, remotely operated underwater vehicles (ROV), scuba diving operations, and helicopter surveys. Over a period of 5 months, we collected 891 sightings: 52 with submarines, 138 with the ROV, 67 with the scuba diving teams, and 634 from helicopters. In total, we observed 657 and 234 individuals from the superorders Batoidea and Selachimorpha, respectively. The most common shark encountered was Iago omanensis, a deep-water shark of the order Carcharhiniformes. Data on temperature, salinity, density, and dissolved oxygen were integrated with each sighting to reveal favorable conditions for each species. Additionally, an extensive literature review on elasmobranch research in the Eastern Red Sea has been carried out in order to obtain more data on local populations and to highlight patterns of their distribution.

Keywords: distribution, elasmobranchs, habitat, rays, red sea, sharks

Procedia PDF Downloads 85
720 Analyzing Competition in Public Construction Projects

Authors: Khaled Hesham Hyari, Amjad Almani

Abstract:

Construction projects in the public sector are commonly awarded through competitive bidding. In the last decade, the construction project environment in the Middle East went through many changes. These changes have been caused by different factors including the economic crisis, delays in monthly payments, international competition, and a reduced number of projects. These factors had a great impact on the bidding behaviors of contractors and their pricing strategies. This paper examines the competition characteristics in public construction projects through an analysis of contractors' bidding results in public construction projects over a period of 6 years (2006-2011) in Jordan. The analyzed projects include all categories of projects such as infrastructure, buildings, transportation, and engineering services (design and supervision contracts). Data for the projects were obtained from the General Tender's Directorate in Jordan and include 462 projects. The analysis performed in this project includes studying the bid spread in all projects, as it is an indication of the level of competition in the analyzed bids, together with the factors that affect bid spread, such as the number of bidders, the value of the project, the project category, and the year. It also studies the “signal-to-noise ratio” in all projects, as it is an indication of the accuracy of cost estimating performed by the competing bidders and of the bidders' evaluation of project risks; this includes the relationship between the signal-to-noise ratio and different parameters such as project category, number of bidders, and changes over the years. Moreover, the analysis includes determining the bidders' aggressiveness in bidding, as it is an indication of the competition level in such projects. This was performed by determining the pack price, which can be considered the true value of the project, and comparing it with the lowest bid submitted for each project to determine the level of aggressiveness of the submitted bids. The analysis performed in this project should prove useful to owners in understanding the bidding behaviors of contractors and pointing out areas that need improvement in preparing bidding documents. The project should also be useful to contractors in understanding the competitive bidding environment and should help them improve their bidding strategies to maximize their success rate in obtaining contracts.
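For illustration only, the sketch below computes per-project bid spread and a mean-to-standard-deviation "signal-to-noise" style ratio with pandas; the column names, the toy data, and these particular definitions are assumptions and may differ from the measures used in the study.

```python
import pandas as pd

# bids: one row per submitted bid, with columns 'project_id' and 'bid_value'.
def competition_stats(bids: pd.DataFrame) -> pd.DataFrame:
    def per_project(group):
        values = group["bid_value"].sort_values()
        low = values.iloc[0]
        second = values.iloc[1] if len(values) > 1 else low
        return pd.Series({
            "n_bidders": len(values),
            "bid_spread_pct": 100.0 * (second - low) / low,   # gap to the runner-up
            "signal_to_noise": values.mean() / values.std(ddof=1) if len(values) > 1 else float("nan"),
        })
    return bids.groupby("project_id").apply(per_project)

# Tiny illustrative dataset (not the study's data).
bids = pd.DataFrame({
    "project_id": ["A", "A", "A", "B", "B", "B", "B"],
    "bid_value":  [1.00e6, 1.08e6, 1.15e6, 4.2e6, 4.3e6, 4.9e6, 5.1e6],
})
print(competition_stats(bids))
```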

Keywords: construction projects, competitive bidding, public construction, competition

Procedia PDF Downloads 333
719 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks

Authors: Heeba A. Gurku

Abstract:

Introduction: Cone Beam CT (CBCT) images play an integral part in proper patient positioning for cancer patients undergoing radiation therapy treatment, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients having limited view images were included in the study). Cycle Generative Adversarial Networks (GAN) and its variant Attention Guided Generative Adversarial Networks (AGGAN) models were used to generate the synthetic CTs. Models were evaluated by visual evaluation and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), to compare the synthetic CT and original CT images. Results: For the pancreatic dataset with limited view CBCT images, our study showed that in the Cycle GAN model, MAE improved from 12.57 to 8.49, RMSE from 20.94 to 15.29, and PSNR from 21.85 to 24.63, but structural similarity only marginally increased from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over Cycle GAN. However, for the lung dataset with full view CBCT images, Cycle GAN was able to reduce MAE significantly from 89.44 to 15.11, and AGGAN was able to reduce it to 19.77. Similarly, RMSE was also decreased from 92.68 to 23.50 in Cycle GAN and to 29.02 in AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 in Cycle GAN respectively, while in AGGAN SSIM increased to 0.52 and PSNR increased to 19.31. In both datasets, GAN models were able to reduce artifacts, reduce noise, and achieve better resolution and better contrast enhancement. Conclusion and Recommendation: Both Cycle GAN and AGGAN were able to significantly improve MAE, RMSE and PSNR in both datasets. However, the full view lung dataset showed more improvement in SSIM and image quality than the limited view pancreatic dataset.
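The four evaluation metrics reported above can be computed as in the sketch below with scikit-image and numpy; the random arrays only stand in for registered CBCT-derived synthetic CT and reference CT slices.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def ct_metrics(reference, synthetic, data_range=None):
    """MAE, RMSE, PSNR and SSIM between a reference CT slice and a synthetic CT slice."""
    reference = reference.astype(np.float64)
    synthetic = synthetic.astype(np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    diff = synthetic - reference
    return {
        "MAE": np.mean(np.abs(diff)),
        "RMSE": np.sqrt(np.mean(diff ** 2)),
        "PSNR": peak_signal_noise_ratio(reference, synthetic, data_range=data_range),
        "SSIM": structural_similarity(reference, synthetic, data_range=data_range),
    }

# Quick check on random data standing in for CT / synthetic-CT slices.
rng = np.random.default_rng(4)
ct = rng.uniform(-1000, 1000, (256, 256))           # Hounsfield-unit-like values
sct = ct + rng.normal(0, 30, ct.shape)              # synthetic CT with residual error
print(ct_metrics(ct, sct))
```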

Keywords: CT images, CBCT images, cycle GAN, AGGAN

Procedia PDF Downloads 83
718 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique and has many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impacts of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment. Therefore, particle damping can be applied in extreme temperature environments, where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high performance DEM. On the assumption that the behaviors of the granular particles in each divided area of the damper container are the same, the contact force of the primary system with all particles can be considered to be equal to the product of the number of divided damper areas and the contact force of the primary system with the granular materials per divided area. This simplification makes it possible to considerably reduce the calculation time. The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level. It is also shown that the particle size and the particle material influence the damper performance.

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 453
717 Interruption Overload in an Office Environment: Hungarian Survey Focusing on the Factors that Affect Job Satisfaction and Work Efficiency

Authors: Fruzsina Pataki-Bittó, Edit Németh

Abstract:

On the one hand, new technologies and communication tools improve employee productivity and accelerate information and knowledge transfer, while on the other hand, information overload and continuous interruptions make it even harder to concentrate at work. It is a great challenge for companies to find the right balance, while there is also an ongoing demand to recruit and retain talented employees who are able to adopt the modern work style and effectively use modern communication tools. For this reason, this research does not focus on objective measures of office interruptions, but aims to find those disruption factors which influence the comfort and job satisfaction of employees and how they generally feel at work. The focus of this research is on how employees feel about the different types of interruptions, which ones they themselves identify as hindering factors, and which ones they perceive as stress factors. By identifying and then reducing these destructive factors, job satisfaction can reach a higher level and employee turnover can be reduced. During the research, we collected information from in-depth interviews and questionnaires asking about the work environment, communication channels used in the workplace, individual communication preferences, factors considered as disruptions, and individual steps taken to avoid interruptions. The questionnaire was completed by 141 office workers from several types of workplaces based in Hungary. Even though 66 respondents work at Hungarian offices of multinational companies, the research is about the characteristics of the Hungarian labor force. The most important result of the research shows that while more than one third of the respondents consider office noise a disturbing factor, personal inquiries are welcome and considered useful, even if in such cases the work environment will not be convenient for solving tasks requiring concentration. Analyzing the sizes of the offices, in an open-space environment the rate of those who consider office noise a disturbing factor is surprisingly lower than in smaller office rooms. Opinions are more diverse regarding information communication technologies. In addition to the interruption factors affecting the employees' job satisfaction, the research also focuses on the role of offices in the 21st century.

Keywords: information overload, interruption, job satisfaction, office environment, work efficiency

Procedia PDF Downloads 227
716 Scar Removal Strategy for Fingerprint Using Diffusion

Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong

Abstract:

Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS), and it directly affects the overall efficiency of the AFIS. Conventional fingerprint enhancement methods, such as Gabor and anisotropic filters, fill the gaps in ridge lines but fail to tackle scar lines. To deal with this problem, we propose a method for enhancing the ridges and valleys with scars so that true minutia points can be extracted accurately. Our results show improved performance in terms of enhancement.

Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion

Procedia PDF Downloads 516
715 The Three-dimensional Response of Mussel Plaque Anchoring to Wet Substrates under Directional Tensions

Authors: Yingwei Hou, Tao Liu, Yong Pang

Abstract:

The paper explores the three-dimensional deformation of mussel plaques anchored to wet polydimethylsiloxane (PDMS) substrates under tensile stress applied at different angles. Mussel plaques, exhibiting natural adhesive structures, have attracted significant attention for their remarkable adhesion properties. Understanding their behavior under mechanical stress, particularly in a three-dimensional context, holds immense relevance for biomimetic material design and bio-inspired adhesive development. This study employed a novel approach to investigate the 3D deformation of PDMS substrates anchored by mussel plaques subjected to controlled tension. Utilizing our customized stereo digital image correlation technique and mechanical analyses, we found that the distributions of displacement and resultant force on the substrate became concentrated under the plaque. Adhesion and suction mechanisms were analyzed for the mussel plaque-substrate system under tension until detachment. The experimental findings were compared with a model developed using finite element analysis, and the results provide new insights into the mussels' attachment mechanism. This research not only contributes to the fundamental understanding of biological adhesion but also holds promising implications for the design of innovative adhesive materials with applications in fields such as medical adhesives, underwater technologies, and industrial bonding. The comprehensive exploration of mussel plaque behavior in three dimensions is important for advancements in biomimicry and materials science, fostering the development of adhesives that emulate nature's efficiency.

Keywords: adhesion mechanism, mytilus edulis, mussel plaque, stereo digital image correlation

Procedia PDF Downloads 57
714 Morphological Comparison of the Total Skeleton of the Common Bottlenose Dolphin (Tursiops truncatus) and the Harbour Porpoise (Phocoena phocoena)

Authors: Onur Yaşar, Okan Bilge, Ortaç Onmuş

Abstract:

The aim of this study is to investigate and compare the locomotion structures, especially the bone structures, of two species, the Common bottlenose dolphin Tursiops truncatus and the Harbour porpoise Phocoena phocoena, and to provide a more detailed and descriptive comparison. To compare the bone structures of the two study species, first the Spinous Process (SP), Inferior Articular Process (IAP), Laminae Vertebrae (LA), Foramen Vertebrae (FV), Corpus Vertebrae (CV), and Transverse Process (TP) were determined, and then the length of the Spinous Process (LSP), length of the Foramen Vertebrae (LFV), area of the Corpus Vertebrae (ACV), and length of the Transverse Process (LTP) were measured from the caudal view. The spine consists of a total of 61 vertebrae (7 cervical, 13 thoracic, 14 lumbar, and 27 caudal vertebrae) in the Common bottlenose dolphin, while the Harbour porpoise has 63 vertebrae (7 cervical, 12 thoracic, 14 lumbar, and 30 caudal vertebrae). In the Common bottlenose dolphin, epiphyseal ossification was observed between the 21st caudal vertebra and the 27th caudal vertebra, while in the Harbour porpoise, it was observed in all vertebrae. Ankylosing spondylitis was observed in the C1 and C2 vertebrae in the Common bottlenose dolphin and in all cervical vertebrae between C1 and C6 in the Harbour porpoise. We argue that this difference in fused cervical vertebrae between the two species may be due to the fact that the neck movements of the Harbour porpoise in the vertical and horizontal axes are more limited than those of the Common bottlenose dolphin. We also think that as the number of fused cervical vertebrae increases, underwater maneuvers are performed at a wider angle, but to test this idea, different species of dolphins should be compared and different age groups should be investigated.

Keywords: anatomy, morphometry, vertebrae, common bottlenose dolphin, Tursiops truncatus, harbour porpoise, Phocoena phocoena

Procedia PDF Downloads 48
713 Construction of the Large Scale Biological Networks from Microarrays

Authors: Fadhl Alakwaa

Abstract:

One of the fundamental goals of systems biology is understanding gene-gene interactions. Hence, gene regulatory networks (GRN) need to be constructed for understanding disease ontology and for reducing the cost of drug development. To construct a gene regulatory network from gene expression data, we need to overcome many challenges such as noise and dimensionality. In this paper, we develop an integrated system to reduce the data dimension and remove the noise. The network generated by our system was validated via available interaction databases and was compared to previous methods. The results reveal the performance of our proposed method.

Keywords: gene regulatory network, biclustering, denoising, system biology

Procedia PDF Downloads 239
712 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique

Authors: Ahmet Karagoz, Irfan Karagoz

Abstract:

Synthetic Aperture Radar (SAR) is a radar mechanism that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day or night. In this study, SAR images of military vehicles with different azimuth and depression angles are pre-processed in the first stage. The main purpose here is to reduce the high speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from other regions containing unnecessary information. The target image is segmented by assigning the brightest 20% of pixels the value 255 and the other pixels the value 0. In addition, by using appropriate parameters of the statistical region merging algorithm, a segmentation comparison is performed. In the feature extraction step, the feature vectors belonging to the vehicles are obtained by using Gabor filters with different orientation, frequency, and angle values. A number of Gabor filters are created by changing the orientation, frequency, and angle parameters of the Gabor filters to extract important features of the images that form the distinctive parts. Finally, the images are classified by the sparse representation method. In the study, the l₁-norm analysis of sparse representation is used. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled, and this database is transformed into matrix form. In order to classify the vehicles in a similar way, the test images of each vehicle are converted to vector form, and the l₁-norm analysis of the sparse representation method is applied against the existing database matrix. As a result, correct recognition has been performed by matching the target images of military vehicles with the test images by means of the sparse representation method. A classification success of 97% is obtained for SAR images of different military vehicle types.
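A minimal sketch of the pre-processing, thresholding, and Gabor feature steps is given below with scipy and scikit-image; the filter sizes, frequencies, and the synthetic chip are illustrative assumptions (only a median filter is shown, not the Wiener or mean filters, and no sparse-representation classifier is included).

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import gabor

def preprocess(sar_image):
    """Median filtering to suppress speckle, then keep the brightest 20% of pixels."""
    smoothed = median_filter(sar_image.astype(np.float64), size=3)
    threshold = np.percentile(smoothed, 80)
    return np.where(smoothed >= threshold, 255.0, 0.0)

def gabor_feature_vector(segmented, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Mean and standard deviation of the responses of a small Gabor filter bank."""
    features = []
    for f in frequencies:
        for k in range(n_orientations):
            real, _ = gabor(segmented, frequency=f, theta=k * np.pi / n_orientations)
            features.extend([real.mean(), real.std()])
    return np.asarray(features)

# Stand-in chip: a bright blob on a speckled background (not real SAR data).
rng = np.random.default_rng(5)
chip = rng.gamma(1.0, 20.0, (64, 64))
chip[24:40, 24:40] += 200.0
vec = gabor_feature_vector(preprocess(chip))
print("feature vector length:", vec.size)
```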

Keywords: automatic target recognition, sparse representation, image classification, SAR images

Procedia PDF Downloads 366
711 Global Stability Of Nonlinear Itô Equations And N. V. Azbelev's W-method

Authors: Arcady Ponosov, Ramazan Kadiev

Abstract:

This work studies the global moment stability of solutions of systems of nonlinear differential Itô equations with delays. A modified regularization method (W-method) for the analysis of various types of stability of such systems, based on the choice of the auxiliary equations and applications of the theory of positive invertible matrices, is proposed and justified. The development of this method for deterministic functional differential equations is due to N. V. Azbelev and his students. Sufficient conditions for the moment stability of solutions in terms of the coefficients are given for sufficiently general as well as specific classes of Itô equations.

Keywords: asymptotic stability, delay equations, operator methods, stochastic noise

Procedia PDF Downloads 224
710 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for it being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique is investigated for such types of flows, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is very difficult to find in the literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows are in numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were relatively low, in general. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels, thus decreasing the uncertainty.
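The noise-level check via the power spectral density mentioned above can be sketched with scipy as below; the 25 Hz sampling rate, segment length, and the synthetic velocity record are placeholders, and reading the noise floor from the flat high-frequency tail is an assumed convention rather than the authors' exact procedure.

```python
import numpy as np
from scipy.signal import welch

def noise_floor(u, fs, tail=(0.4, 0.5)):
    """Welch PSD of the stream-wise velocity; the noise level is read from the
    (assumed flat) top fraction of the resolved frequency band."""
    f, pxx = welch(u - np.mean(u), fs=fs, nperseg=1024)
    band = (f >= tail[0] * fs / 2) & (f <= tail[1] * fs / 2)
    return f, pxx, np.mean(pxx[band])

# Synthetic record standing in for a Vectrino time series (values are illustrative).
fs = 25.0
rng = np.random.default_rng(6)
t = np.arange(0, 300, 1 / fs)
u = 0.3 + 0.05 * np.sin(2 * np.pi * 0.2 * t) + 0.01 * rng.standard_normal(t.size)
f, pxx, level = noise_floor(u, fs)
print(f"estimated noise floor ≈ {level:.2e} (m/s)^2/Hz")
```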

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 148
709 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. Also, investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, improved the convergence rate and the system's performance. The system also uses a relative index as a confidence measure in case of contradiction between the identification results of the GMM and VQ. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
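Although the study itself is implemented in MATLAB, the MFCC-plus-GMM pipeline can be sketched in Python as below; the energy-percentile VAD is a crude stand-in for the hybrid STE/noise-model VAD, k-means initialisation stands in for LBG, and all parameter values are assumptions.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, n_mfcc=13, energy_percentile=30):
    """MFCCs of one utterance, keeping only frames above a short-time-energy threshold."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T       # frames x coefficients
    energy = librosa.feature.rms(y=y)[0]
    n = min(len(mfcc), len(energy))
    return mfcc[:n][energy[:n] > np.percentile(energy[:n], energy_percentile)]

def train_models(train_files_by_speaker, n_components=16):
    """One GMM per speaker; k-means initialisation plays the role of LBG here."""
    models = {}
    for speaker, files in train_files_by_speaker.items():
        X = np.vstack([mfcc_frames(f) for f in files])
        models[speaker] = GaussianMixture(n_components=n_components,
                                          covariance_type="diag",
                                          init_params="kmeans").fit(X)
    return models

def identify(test_file, models):
    """Closed-set decision: the speaker whose GMM gives the highest average log-likelihood."""
    X = mfcc_frames(test_file)
    return max(models, key=lambda s: models[s].score(X))
```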

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 309
708 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach

Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz

Abstract:

Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm birth is defined as the beginning of regular uterine contractions, dilation and cervical effacement between 23 and 36 gestation weeks. To the authors' best knowledge, the factors that determine the beginning of the birth are not completely defined yet. In particular, the incidence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on the above potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective and descriptive study was performed with 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries. The chi-square test for qualitative variables and the t-test for quantitative variables were applied. Statistically significant differences (p < 0.05) between preterm vs. term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes and interruption during nights. In addition to the statistical analysis, machine learning methods were tested to look for a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest and tree bag models were analysed using the caret R-package. Cross validation with 10 folds and parameter tuning to optimize the methods were applied. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with Accuracy 0.91, Sensitivity 0.93, Specificity 0.89 and Precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, in the depth of sleep or in the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm" is one of the predicting rules obtained by C5.0. In this work, a model for predicting preterm births is developed. It is based on machine learning together with noise reduction techniques. The method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction.

Keywords: machine learning, noise reduction, preterm birth, sleep habit

Procedia PDF Downloads 148
707 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space” where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences. Two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 289
706 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw videos require very high bandwidth, which makes compression a must before their transmission over wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is the latest state-of-the-art video coding standard, developed by the joint effort of the ITU-T and ISO/IEC teams. HEVC is targeted at high-resolution videos, such as 4K or 8K resolutions, that can fulfil the recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. The compression efficiency is generally increased by removing more correlation between the frames/pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the chances of interdependency among coded bits increase. Thus, bit errors may have a large effect on the reconstructed video. Sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream. The bitstream is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored up to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized code rate of the FEC such that the quality of the reconstructed video is maximized over wireless channels.
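The channel part of this chain can be sketched in isolation as below: a bitstream (standing in for an HEVC payload) mapped to 4-QAM (QPSK), passed through AWGN, and hard-demodulated to count bit errors. The HEVC encoding/decoding and the FEC stage are omitted, and the modulation order and SNR points are illustrative assumptions.

```python
import numpy as np

def qpsk_over_awgn(bits, snr_db, rng):
    """Gray-coded QPSK over AWGN at the given Es/N0, with hard-decision demodulation."""
    b = bits.reshape(-1, 2)
    symbols = ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)   # unit energy
    n0 = 10 ** (-snr_db / 10)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(symbols.size)
                               + 1j * rng.standard_normal(symbols.size))
    r = symbols + noise
    out = np.empty_like(b)
    out[:, 0] = (r.real < 0).astype(int)
    out[:, 1] = (r.imag < 0).astype(int)
    return out.reshape(-1)

# Bit-error rate of an uncoded bitstream versus channel SNR.
rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 200_000)
for snr_db in (0, 4, 8, 12):
    rx = qpsk_over_awgn(bits, snr_db, rng)
    print(f"SNR {snr_db:2d} dB  BER = {np.mean(rx != bits):.4f}")
```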

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 149
705 Sigma-Delta A/D Converters: A Case Study

Authors: Thiago Brito Bezerra, Mauro Lopes de Freitas, Waldir Sabino da Silva Júnior

Abstract:

Sigma-Delta A/D converters have been proposed as a practical approach for A/D conversion at high rates because of their simplicity and robustness to imperfections in the circuit, and also because traditional converters are more difficult to implement in VLSI technology. Conventional conversion methods need precise analog components in their filters and conversion circuits and are more vulnerable to noise and interference. This paper aims to analyze the architecture, function and application of Sigma-Delta analog-to-digital (A/D) converters to overcome these difficulties, showing some simulations using the Simulink software and Multisim.
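A first-order sigma-delta modulator (integrator, 1-bit quantizer, feedback) can be sketched in a few lines of Python, as below; the oversampling ratio, the test signal, and the simple moving-average decimation are illustrative assumptions, much simpler than a Simulink/Multisim model.

```python
import numpy as np

def sigma_delta_first_order(x):
    """First-order sigma-delta modulator: integrate the error between the input and
    the fed-back 1-bit output, then quantize the integrator state to +/-1."""
    integrator, feedback = 0.0, 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - feedback
        feedback = 1.0 if integrator >= 0.0 else -1.0
        bits[i] = feedback
    return bits

# Oversampled sine input, then a crude decimation (moving average) to recover it.
osr = 64                                               # oversampling ratio
n = 64 * osr
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / (n / 4))              # 4 cycles, amplitude within [-1, 1]
bits = sigma_delta_first_order(x)
decimated = bits.reshape(-1, osr).mean(axis=1)         # simple low-pass + downsample
err = decimated - x.reshape(-1, osr).mean(axis=1)
print(f"reconstruction RMS error after decimation: {np.sqrt(np.mean(err**2)):.3f}")
```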

Keywords: analysis, oversampling modulator, A/D converters, sigma-delta

Procedia PDF Downloads 329