Search results for: removing noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1565

1055 Permeodynamic Particulate Matter Filtration for Improved Air Quality

Authors: Hamad M. Alnagran, Mohammed S. Imbabi

Abstract:

Particulate matter (PM) in the air we breathe is detrimental to health. Overcoming this problem has attracted interest and prompted research into the use of PM filtration in commercial buildings and homes. The consensus is that tangible health benefits can result from the use of PM filters in most urban environments to clean up the building’s fresh air supply and thereby reduce residents’ exposure to airborne PM. The authors have investigated and are developing a new large-scale Permeodynamic Filtration Technology (PFT) capable of permanently filtering and removing airborne PM from outdoor spaces, thus also benefiting internal spaces such as the interiors of buildings. Theoretical models were developed, and laboratory trials were carried out to determine and validate, through measurement, permeodynamic filtration efficiency and pressure drop as functions of PM particle size distribution. The conclusion is that PFT offers a potentially viable, cost-effective end-of-pipe solution to the problem of airborne PM.

Keywords: air filtration, particulate matter, particle size distribution, permeodynamic

Procedia PDF Downloads 186
1054 Removal of Phenol from Aqueous Solution Using Watermelon (Citrullus C. lanatus) Rind

Authors: Fidelis Chigondo

Abstract:

This study investigates the effectiveness of watermelon rind in removing phenol from aqueous solution. The effects of various parameters (pH, initial phenol concentration, biosorbent dosage and contact time) on phenol adsorption were investigated. A pH of 2, an initial phenol concentration of 40 ppm, a biosorbent dosage of 0.6 g and a contact time of 6 h were deduced to be the optimum conditions for the adsorption process. The maximum phenol removal under optimized conditions was 85%. The sorption data fitted the Freundlich isotherm with a regression coefficient of 0.9824. The kinetics were best described by the intraparticle diffusion model and the Elovich equation, with regression coefficients of 1 and 0.8461 respectively, showing that the reaction is chemisorption on a heterogeneous surface and that intraparticle diffusion is the sole rate-determining step. The study revealed that watermelon rind has the potential to remove phenol from industrial wastewaters.
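The Freundlich fit reported above can be reproduced in outline with a least-squares regression on log-transformed data. The sketch below uses hypothetical equilibrium data rather than the paper's measurements; only the fitting procedure is illustrated.

```python
import numpy as np

# Hypothetical equilibrium data (C_e in ppm, q_e in mg/g); the paper's raw data is not given.
C_e = np.array([5.0, 10.0, 20.0, 30.0, 40.0])
q_e = np.array([1.8, 2.9, 4.6, 6.0, 7.1])

# Freundlich isotherm: q_e = K_f * C_e**(1/n); linearized: ln(q_e) = ln(K_f) + (1/n) * ln(C_e)
x, y = np.log(C_e), np.log(q_e)
slope, intercept = np.polyfit(x, y, 1)
K_f, n = np.exp(intercept), 1.0 / slope

# Regression coefficient (R^2) of the linearized fit
y_pred = slope * x + intercept
r2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
print(f"K_f = {K_f:.3f}, n = {n:.3f}, R^2 = {r2:.4f}")
```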

Keywords: biosorption, phenol, biosorbent, watermelon rind

Procedia PDF Downloads 233
1053 Effective Planning of Public Transportation Systems: A Decision Support Application

Authors: Ferdi Sönmez, Nihal Yorulmaz

Abstract:

Sound decision making on the planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately fair overall travel times should be provided. In this fashion, other benefits such as lower traffic congestion, improved road safety and lower noise and atmospheric pollution may be gained. The congestion that comes with increasing demand for public transportation is becoming a part of our lives and making residents’ lives difficult; hence, regulations should be made to reduce it. To provide a constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. This study aims to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available, with costs calculated from equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used. The user cost of each station and the operator cost incurred on the lines are calculated. Components such as cost, security and noise pollution are considered significant factors in the solution of the set covering problem, which is formulated to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study are described in a previously published article by the corresponding author. Model results are presented with the intent of supporting specialists in locating stops effectively.
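The set covering formulation mentioned above can be illustrated with a minimal greedy heuristic. The stop names and coverage sets below are invented toy data, not the IETT inputs, and the greedy rule is a standard approximation rather than the authors' bi-level model.

```python
# Candidate stop -> set of demand points it covers (toy data; the study's IETT data is not public here).
coverage = {
    "S1": {1, 2, 3}, "S2": {3, 4}, "S3": {4, 5, 6}, "S4": {1, 6}, "S5": {2, 5, 7},
}
uncovered = set().union(*coverage.values())
chosen = []
while uncovered:
    # Pick the stop covering the most still-uncovered demand points
    best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
    chosen.append(best)
    uncovered -= coverage[best]
print("Selected stops:", chosen)
```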

Keywords: operator cost, bi-level optimization model, user cost, urban transportation

Procedia PDF Downloads 226
1052 The Fallacy around Inserting Brackets to Evaluate Expressions Involving Multiplication and Division

Authors: Manduth Ramchander

Abstract:

Evaluating expressions involving multiplication and division can give rise to the fallacy that brackets can be arbitrarily inserted into such expressions. The aim of this article was to draw upon mathematical theory to prove that brackets cannot be arbitrarily inserted into expressions involving multiplication and division, in particular expressions where division precedes multiplication. In doing so, it demonstrates that the notion that two different answers are possible when evaluating such expressions is indeed a false one. Searches conducted in a number of scholarly databases unearthed the rules to be applied when removing brackets from expressions, which revealed that consideration needs to be given to sign changes when brackets are removed. The rule pertaining to expressions involving multiplication and division was then extended, in its reverse form, to prove that brackets cannot be arbitrarily inserted into such expressions. The application of the rule demonstrates that an expression involving multiplication and division can have only one correct answer. It is recommended that both the rule and its reverse be included in the curriculum, preferably at the juncture when manipulation with brackets is introduced.
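As a brief illustration of the fallacy (the specific numbers are ours, not the paper's): evaluated left to right, 8 ÷ 2 × 4 = 4 × 4 = 16, whereas arbitrarily inserting brackets gives 8 ÷ (2 × 4) = 8 ÷ 8 = 1. The bracketed expression is not equivalent to the original, so under the left-to-right convention for multiplication and division only 16 is correct.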

Keywords: brackets, multiplications and division, operations, order

Procedia PDF Downloads 146
1051 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostanic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by identifying, via matching pursuit, segments of audio data that are locally coherent with the Gabor or gammatone seed atoms; they are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the highest-denoising basis vectors.
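The greedy atomic-decomposition step (matching pursuit) that identifies locally coherent segments can be sketched as follows. The toy Gabor-like dictionary and test signal are illustrative assumptions; the SAE training and envelope-sample construction of the paper are not reproduced.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy atomic decomposition: pick the unit-norm dictionary atom (column)
    most correlated with the residual, subtract its projection, repeat."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeff = correlations[k]
        residual -= coeff * dictionary[:, k]
        decomposition.append((k, coeff))
    return decomposition, residual

# Toy Gabor-like dictionary: Gaussian-windowed cosines at a few frequencies/positions.
N = 256
t = np.arange(N)
atoms = []
for f in (0.02, 0.05, 0.1):
    for c in (64, 128, 192):
        g = np.exp(-0.5 * ((t - c) / 20.0) ** 2) * np.cos(2 * np.pi * f * t)
        atoms.append(g / np.linalg.norm(g))
D = np.stack(atoms, axis=1)

x = D[:, 4] * 3.0 + 0.05 * np.random.randn(N)  # sparse signal plus noise
decomp, res = matching_pursuit(x, D, n_atoms=3)
print(decomp[0], np.linalg.norm(res))
```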

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 234
1050 The Psycho-Linguistic Aspect of Translation Gaps in Teaching English for Specific Purposes

Authors: Elizaveta Startseva, Elena Notina, Irina Bykova, Valentina Ulyumdzhieva, Natallia Zhabo

Abstract:

With the various existing models of intercultural communication that contain a vast number of stages of foreign language acquisition, there is a need for conscious perception of the foreign culture. Such a process is associated with the emergence of linguistic conflict, along with the students’ consistent desire to resolve the language differences and cultural discrepancies. The aim of this study is to present modern ways and methods of removing psycholinguistic conflict through skills development in professional translation and intercultural communication. The study was conducted in groups of first- to fourth-year students of the Medical Institute and the Agro-Technological Institute of RUDN University. In the course of training, students gained knowledge in such disciplines as basic grammar and vocabulary of the English language, phonetics, lexicology, introduction to linguistics, theory of translation, and annotating and referencing media texts and texts in their specialty. The students learned to present their research work and participated in university and external conferences with their reports and presentations. Common strategies for removing linguistic and cultural conflict can be attributed to the development of such abilities of a language personality as commitment to communication and cooperation, formation of cultural awareness and empathy towards other cultures, realistic self-esteem, emotional stability, tolerance, etc. The process of mastering a foreign language and the culture of the target language leads to a reduplication of linguistic identity, and thus to the successive formation of a so-called 'secondary linguistic personality.' In our study, we tried to approach the problem comprehensively, focusing on translation gaps (lacunas), for which technical and non-technical language still lack a typology that would classify them all on a single principle. When acquiring background knowledge, students learn to overcome the difficulties posed by the national-specific and linguistic differences of the cultures in contact, i.e., to eliminate the gaps (to fill in and compensate for them). Compensation of a gap is a means of fixing it, the initial phase of elimination, followed in some cases (but not in others) by the filling of semantic voids (plenus). The concept of plenus occurs in most cases of translation gaps, for example in transcription and transliteration (of interculturalisms and exoticisms) and in replication (reproduction of the morphemic structure of words or idioms). In all the above cases, the task of the translator is to ensure an identical response from the receptors of the original and translated texts, since any statement is created with the goal of obtaining a communicative effect; hence, pragmatic potential is the most important part of its contents. The practical value of our work lies in improving the methodology of teaching English for specific purposes on the basis of the psycholinguistic concept of the secondary language personality.

Keywords: lacuna, language barrier, plenus, secondary language personality

Procedia PDF Downloads 268
1049 Applying the Fuzzy Analytic Network Process to Establish the Relative Importance of Knowledge Sharing Barriers

Authors: Van Dong Phung, Igor Hawryszkiewycz, Kyeong Kang, Muhammad Hatim Binsawad

Abstract:

Knowledge sharing (KS) is the key to creativity and innovation in any organization. Overcoming KS barriers has created new challenges for design in dynamic and complex environments, and there may be interrelations and interdependences among the barriers. The purpose of this paper is to present a review of the literature on KS barriers and to derive their relative importance through the fuzzy analytic network process (FANP), a generalization of the analytic hierarchy process (AHP), which helps to prioritize the barriers so that ways can be found to remove them and facilitate KS. The study begins with a brief description of KS barriers and the most critical ones. The FANP and its role in identifying the relative importance of KS barriers are explained. The paper then proposes the research model and expected outcomes. The study suggests that the FANP is appropriate for deriving the relative importance of KS barriers, which are intertwined and interdependent. Implications and future research are also proposed.
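As background for the prioritization step, the sketch below shows the crisp eigenvector computation of the AHP that the FANP generalizes; the pairwise comparison matrix is hypothetical, not taken from the paper, and the fuzzy and network extensions are not reproduced.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three KS barriers (Saaty scale);
# the paper's actual judgments are not given.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()  # priority weights of the barriers

# Consistency ratio (random index RI = 0.58 for n = 3)
lambda_max = eigvals[k].real
CI = (lambda_max - 3) / (3 - 1)
print("weights:", np.round(w, 3), "CR:", round(CI / 0.58, 3))
```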

Keywords: FANP, ANP, knowledge sharing barriers, knowledge sharing, removing barriers, knowledge management

Procedia PDF Downloads 315
1048 Sorting Fish by Hu Moments

Authors: J. M. Hernández-Ontiveros, E. E. García-Guerrero, E. Inzunza-González, O. R. López-Bonilla

Abstract:

This paper presents the implementation of an algorithm that identifies and counts different fish species: catfish, sea bream, sawfish, tilapia, and totoaba. The main contribution of the method is the fusion of the position, rotation and scale invariance of the Hu moments with proper counting of the fish. The identification and counting are performed from an image under different noise conditions. From the experimental results obtained, the potential of the proposed algorithm to be applied in different scenarios of aquaculture production is inferred.
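A minimal sketch of the Hu-moment computation on a fish silhouette, assuming OpenCV and a hypothetical input image; the counting and fusion steps of the paper are not shown.

```python
import cv2
import numpy as np

# Load a grayscale fish silhouette (path is hypothetical) and binarize it.
img = cv2.imread("fish.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Hu moments: seven values invariant to translation, rotation and scale.
hu = cv2.HuMoments(cv2.moments(binary)).flatten()

# Log-scale transform commonly used to compress their dynamic range.
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)
```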

Keywords: counting fish, digital image processing, invariant moments, pattern recognition

Procedia PDF Downloads 391
1047 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach to maintaining the security of communication systems relies on physical layer security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by a certain number of degrees. In this paper, the legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should also be taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases as the signal-to-interference-plus-noise ratio decreases.
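A shifted N-PSK constellation of the kind used for the legitimate pilots can be generated in a few lines; the constellation size and offset values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def shifted_npsk(N, offset_deg):
    """N-PSK constellation rotated by a given offset (degrees)."""
    phases = 2 * np.pi * np.arange(N) / N + np.deg2rad(offset_deg)
    return np.exp(1j * phases)

# Two legitimate pilot constellations shifted from the original N-PSK symbols
# by two different (example) offsets, as in the Shifted 2-N-PSK idea.
original = shifted_npsk(8, 0.0)
pilot_1 = shifted_npsk(8, 10.0)   # offset values are examples, not the paper's
pilot_2 = shifted_npsk(8, 25.0)
print(np.round(np.angle(pilot_1, deg=True), 1))
```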

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 199
1046 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Despite its low spatial resolution (it can only detect the combined activity of a group of neurons firing at the same time), it is non-invasive, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because it is a non-invasive method, there are sources of noise that may affect the reliability of the EEG signals: for instance, noise from the EEG equipment and the leads, and signals originating from the subject, such as heart activity or muscle movements, affect the signals detected by the electrodes. New techniques have, however, been developed to differentiate between those signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze the data from the brain for BCI applications. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices. Even though invasive BCIs are needed for neurological conditions that require highly precise data, non-invasive BCIs such as EEG-based ones are used in many cases to help disabled people or to ease people's lives by helping them with basic tasks. For example, EEG is used to detect an impending seizure in epilepsy patients, which can then be prevented with the help of a BCI device. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.
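As an illustration of the kind of signal processing that precedes BCI classification, the sketch below computes standard EEG band powers from a synthetic one-channel signal; the sampling rate and band limits are common conventions, not values taken from the text.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (typical for EEG; illustrative)
t = np.arange(0, 10, 1 / fs)
# Synthetic single-channel "EEG": 10 Hz alpha rhythm buried in noise.
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

for name, (lo, hi) in {"delta": (1, 4), "theta": (4, 8),
                       "alpha": (8, 13), "beta": (13, 30)}.items():
    print(name, band_power(f, psd, lo, hi))
```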

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 57
1045 Synthesis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

We have conducted the optimal synthesis of a root-mean-square objective filter to estimate the state vector in the case where, within an observation channel with memory, anomalous noises with unknown mathematical expectation complement the regular noises. The synthesis has been carried out for linear continuous-time stochastic systems.

Keywords: mathematical expectation, filtration, anomalous noise, memory

Procedia PDF Downloads 226
1044 Theoretical and Experimental Analysis of End Milling Process with Multiple Finger Inserted Cutters

Authors: G. Krishna Mohana Rao, P. Ravi Kumar

Abstract:

Milling is the process of removing unwanted material with a suitable tool. Even though the milling process has wide application, vibration of the machine tool and the workpiece during the process produces chatter on the products. Various methods of preventing chatter have been incorporated into machine tool systems. Here, a damper is cut into an equal number of parts, each part called a finger, and multiple fingers are inserted in the hollow portion of the shank to reduce tool vibrations. In the present work, nonlinear static and dynamic analysis of the damper-inserted end milling cutter used to reduce chatter was carried out. A comparison is made between milling cutters with multiple dampers. Surface roughness was determined by machining with the multiple-finger-inserted milling cutters.

Keywords: damping inserts, end milling, vibrations, nonlinear dynamic analysis, number of fingers

Procedia PDF Downloads 508
1043 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models to the captured LiDAR data points of individual clusters using a least squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
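The catenary-fitting step can be sketched with a standard nonlinear least-squares call; the synthetic cluster points and initial parameter guesses below are assumptions, and the paper's clustering and iterative pole-pair search are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, c):
    """2D catenary: y = a*cosh((x - x0)/a) + c."""
    return a * np.cosh((x - x0) / a) + c

# Synthetic noisy conductor points standing in for one LiDAR cluster.
x = np.linspace(0, 100, 60)
y_true = catenary(x, 120.0, 50.0, -100.0)
y = y_true + np.random.normal(0, 0.05, x.size)

# Least-squares fit of the catenary parameters, as in the paper's approach.
(p_a, p_x0, p_c), _ = curve_fit(catenary, x, y, p0=[100.0, 50.0, -80.0])
print(round(p_a, 1), round(p_x0, 1), round(p_c, 1))
```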

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 84
1042 Use of Microbial Fuel Cell for Metal Recovery from Wastewater

Authors: Surajbhan Sevda

Abstract:

Metal-containing wastewater is generated in large quantities due to rapid industrialization. Generally, the metals present in wastewater are not biodegradable and can accumulate in the tissue of living animals, humans and plants, causing disorders and diseases. The conventional metal recovery methods include chemical, physical and biological methods, but these are chemical- and energy-intensive. Recent developments in microbial fuel cell (MFC) technology provide a new approach to metal recovery; this technology offers a flexible platform for both reduction- and oxidation-oriented processes. The use of MFCs will be a new platform for a more efficient, low-energy approach to metal recovery from wastewater. Metal recovery has so far been studied extensively using chemical, physical and biological methods; MFCs present a new and efficient approach for removing and recovering metals from different wastewaters, suggesting that the use of different electrodes for metal recovery can be a new, efficient and effective approach.

Keywords: metal recovery, microbial fuel cell, wastewater, bioelectricity

Procedia PDF Downloads 202
1041 Malposition of Femoral Component in Total Hip Arthroplasty

Authors: Renate Krassnig, Gloria M. Hohenberger, Uldis Berzins, Stefen Fischerauer

Abstract:

Background: Only a few reports discuss the effectiveness of intraoperative radiographs for placing femoral components; therefore, there is no international standard for using intraoperative imaging in the course of total hip replacement. Method: Case report; an 84-year-old female patient underwent exchange of the components of her total hip arthroplasty (THA) because of aseptic loosening. Due to the circumstances, the surgeon decided to implant a cemented femoral component. The procedure was without any significant abnormalities. The first postoperative radiograph was planned after recovery, as usual. The X-ray imaging showed a misplaced femoral component, so a CT scan was performed additionally, and the malposition of the cemented femoral component was confirmed. The patient had to undergo another surgery: removal of the cemented femoral component and implantation of a new, well-placed one. Conclusion: Intraoperative imaging of the femoral component is not a common standard, but this case shows that it is a useful method for detecting errors and gives the surgeon the opportunity to correct them intraoperatively.

Keywords: femoral component, intraoperative imaging, malposition, revision

Procedia PDF Downloads 188
1040 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 (anechoic), 1, 2, and 3 s), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
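For orientation, a baseline chromagram of the kind the Deep Chroma Extractor improves upon can be computed with librosa's triangular-filter chroma; the file path and STFT parameters are illustrative assumptions, and this is neither the DNN nor the proposed LNQT filter bank.

```python
import librosa
import numpy as np

# Load an audio file (path is hypothetical) and compute a baseline chromagram.
y, sr = librosa.load("track.wav", sr=22050)
chroma = librosa.feature.chroma_stft(y=y, sr=sr, n_fft=4096, hop_length=2048)

# Crude frame-wise pitch-class estimate from the 12-bin chroma vectors.
pitch_classes = np.argmax(chroma, axis=0)
print(chroma.shape, pitch_classes[:10])
```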

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 217
1039 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
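The SNN-graph and modularity ingredients of the approach can be sketched as follows; the synthetic expression matrix, the k value, and the greedy modularity routine are illustrative stand-ins, not the authors' exact algorithm or evaluation metrics.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
# Two synthetic "cell populations" in a 20-dimensional expression space.
cells = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(4, 1, (50, 20))])

# Shared Nearest Neighbor graph: edge weight = size of two cells' common kNN set.
k = 15
nn = NearestNeighbors(n_neighbors=k).fit(cells)
_, idx = nn.kneighbors(cells)
G = nx.Graph()
for i in range(len(cells)):
    for j in idx[i][1:]:
        shared = len(set(idx[i]) & set(idx[j]))
        G.add_edge(i, int(j), weight=shared)

# Modularity-based communities; weakly attached vertices are filtering candidates.
communities = greedy_modularity_communities(G, weight="weight")
print([len(c) for c in communities])
```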

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 89
1038 The Evaluation of Fuel Desulfurization Performance of Choline-Chloride Based Deep Eutectic Solvents with Addition of Graphene Oxide as Catalyst

Authors: Chiau Yuan Lim, Hayyiratul Fatimah Mohd Zaid, Fai Kait Chong

Abstract:

Deep eutectic solvents (DESs) are used in various applications due to their simple synthesis procedure and their biodegradable, inexpensive and readily available chemical ingredients. Graphene oxide is a popular catalyst used in various processes due to its layered stacks of carbon sheets, which theoretically speed up catalytic processes. In this study, choline chloride based DESs were synthesized, and ChCl-PEG(1:4) was found to be the most effective DES at desulfurization: it was able to remove up to 47.4% of the sulfur content in the model oil in just 10 minutes, and up to 95% of the sulfur content after the process was repeated six times. ChCl-PEG(1:4) achieved up to 32.7% desulfurization of real diesel after six stages. Thus, future research should focus on removing the impurities from real diesel before utilising DESs in the petroleum field.

Keywords: choline chloride, deep eutectic solvent, fuel desulfurization, graphene oxide

Procedia PDF Downloads 137
1037 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive (for sensors like LIDAR) or limited in operational range (for sensors like ultrasonic sensors). Additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement; the second approach uses the feature’s projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the variation of the projected point as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specially developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the low quality of the images results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
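The feature-tracking front end can be sketched with OpenCV's pyramidal Lucas-Kanade tracker; the frame paths and tracker parameters are illustrative assumptions, and the two EKF formulations of the paper are not reproduced.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from the landing camera (paths hypothetical).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect corners in the first frame, then track them with pyramidal Lucas-Kanade.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.3, minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                         winSize=(21, 21), maxLevel=3)

good_new = p1[status.flatten() == 1]
good_old = p0[status.flatten() == 1]
flow = good_new - good_old           # per-feature optical flow in pixels/frame
print("mean flow (px):", flow.reshape(-1, 2).mean(axis=0))
```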

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 96
1036 Evaluation of Natural Frequency of Single and Grouped Helical Piles

Authors: Maryam Shahbazi, Amy B. Cerato

Abstract:

The importance of a system’s natural frequency (fn) emerges when the frequency of the vibration force is equivalent to the foundation's fn, causing an amplified response (resonance) that may irreversibly damage the structure. Several factors, such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and the physical structure, affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of soil-pile systems, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on the dynamic characteristics of helical pile-soil systems from full-scale shake table tests that will allow engineers to predict a more realistic dynamic response under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m × 3.0 m, 4.6 m high). Helical piles of two different diameters (8.8 cm and 14 cm) were embedded in the soil box, with corresponding lengths of 3.66 m (excluding one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, all four piles of similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate a wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of the time history responses measured with installed strain gages and accelerometers was used to evaluate fn; both types of records were found to be acceptable for calculating it. In this study, the existence of a pile slightly reduced the fn of the soil. Higher fn occurred for single piles with larger l/d (slenderness) ratios. Also, regardless of the connection type, the more slender pile group, which is surrounded by more soil, yielded higher natural frequencies under white noise, which may be due to more passive soil resistance around it. Within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group, the fn values are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile's fn; even more reduction occurs in soil with a lower density. Moreover, the fn of dense sand under the white noise signal was 5.03 Hz, which was reduced by 44% when an earthquake with an acceleration of 0.5 g was applied. By knowing the factors affecting fn, the designer can effectively match the properties of the soil to the type of pile and structure to attempt to avoid resonance. The quantitative results in this study will assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquake and machine loading.
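The FFT-based identification of fn from a measured time history can be sketched as follows; the synthetic accelerometer record and sampling rate are assumptions, and only the spectral peak-picking idea is illustrated.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                         # accelerometer sampling rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)
# Synthetic accelerometer record: a dominant 5 Hz mode plus broadband noise.
record = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.random.randn(t.size)

f, psd = welch(record, fs=fs, nperseg=4096)
fn = f[np.argmax(psd)]             # natural frequency = dominant spectral peak
print(f"estimated fn = {fn:.2f} Hz")
```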

Keywords: helical pile, natural frequency, pile group, shake table, stiffness

Procedia PDF Downloads 119
1035 Performance Comparison of Non-Binary RA and QC-LDPC Codes

Authors: Ni Wenli, He Jing

Abstract:

Repeat-Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are constructed with a linear encoding method, and the non-binary QC-LDPC codes with algebraic construction methods. The BER performance of the RA and QC-LDPC codes over GF(q) is compared under BP decoding, by simulation over additive white Gaussian noise (AWGN) channels.
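For context, the simulation loop behind BER-over-AWGN measurements is sketched below for uncoded BPSK; this is a baseline illustration only and does not implement the non-binary RA or QC-LDPC codes over GF(2^4) or BP decoding.

```python
import numpy as np

# Uncoded BPSK over AWGN as a baseline BER curve.
rng = np.random.default_rng(1)
n_bits = 200_000
for ebno_db in (0, 2, 4, 6, 8):
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                     # 0 -> +1, 1 -> -1
    sigma = np.sqrt(1 / (2 * 10 ** (ebno_db / 10)))
    received = symbols + sigma * rng.standard_normal(n_bits)
    ber = np.mean((received < 0) != (bits == 1))
    print(f"Eb/N0 = {ebno_db} dB  BER = {ber:.5f}")
```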

Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm

Procedia PDF Downloads 359
1034 Industrial Wastewater Treatment Improvements Using Activated Carbon

Authors: Mamdouh Y. Saleh, Gaber El Enany, Medhat H. Elzahar, Moustafa H. Omran

Abstract:

The discharge limits for industrial wastewater effluents are subject to regulations that are becoming more restrictive with time. Previous research carried out in Port Said city studied the efficiency of treating industrial wastewater using the first stage (A-stage) of the multiple-stage plant (AB-system). From the results of that research, the treated wastewater effluent had high levels of total dissolved solids (TDS) and chemical oxygen demand (COD). The purpose of this paper is to improve the treatment process with respect to removing TDS and COD. Thus, a pilot plant was constructed at a wastewater pump station in the industrial area in the south of Port Said. The experimental work was divided into several groups, adding activated carbon to the wastewater at different dosages; for each group, the wastewater was filtered after being mixed with the activated carbon. pH and TSS were also studied as variables. At the end of this paper, a comparison is made between the efficiency of using activated carbon and the efficiency of using limestone under the same circumstances.

Keywords: adsorption, COD removal, filtration, TDS removal

Procedia PDF Downloads 477
1033 A Survey of Feature Selection and Feature Extraction Techniques in Machine Learning

Authors: Samina Khalid, Shamila Nasreen

Abstract:

Dimensionality reduction as a preprocessing step to machine learning is effective in removing irrelevant and redundant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase in the dimensionality of data poses a severe challenge to many existing feature selection and feature extraction methods with respect to efficiency and effectiveness. In the field of machine learning and pattern recognition, dimensionality reduction is an important area, and many approaches have been proposed. In this paper, some widely used feature selection and feature extraction techniques are analyzed with the purpose of showing how effectively these techniques can be used to achieve high performance of learning algorithms, ultimately improving the predictive accuracy of classifiers. An endeavor to briefly analyze dimensionality reduction techniques, with the purpose of investigating the strengths and weaknesses of some widely used methods, is presented.
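A minimal feature-extraction example of the kind surveyed, using PCA from scikit-learn on synthetic data; the variance threshold and the data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                   # 200 samples, 50 features
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # inject a redundant feature

# Keep enough principal components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape,
      "explained:", pca.explained_variance_ratio_.sum().round(3))
```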

Keywords: age-related macular degeneration, feature selection, feature subset selection, feature extraction/transformation, FSAs, Relief, correlation-based method, PCA, ICA

Procedia PDF Downloads 474
1032 Analyzing Competition in Public Construction Projects

Authors: Khaled Hesham Hyari, Amjad Almani

Abstract:

Construction projects in the public sector are commonly awarded through competitive bidding. In the last decade, the construction project environment in the Middle East went through many changes, caused by factors including the economic crisis, delays in monthly payments, international competition and a reduced number of projects. These factors had a great impact on the bidding behavior of contractors and their pricing strategies. This paper examines the characteristics of competition in public construction projects through an analysis of contractors' bidding results in public construction projects over a period of six years (2006-2011) in Jordan. The analyzed projects include all categories, such as infrastructure, buildings, transportation and engineering services (design and supervision contracts). Data for the projects were obtained from the General Tenders Directorate in Jordan and cover 462 projects. The analysis includes studying the bid spread in all projects, as it is an indication of the level of competition in the analyzed bids, together with the factors that affect bid spread, such as the number of bidders, the value of the project, the project category and the year. It also studies the 'signal-to-noise ratio' in all projects, as it is an indication of the accuracy of the cost estimating performed by competing bidders and of the bidders' evaluation of project risks; here, the analysis covers the relationship between the signal-to-noise ratio and parameters such as project category, number of bidders and changes over the years. Moreover, the analysis includes determining bidders' aggressiveness in bidding, as an indication of the level of competition in such projects. This was performed by determining the pack price, which can be considered the true value of the project, and comparing it with the lowest bid submitted for each project to determine the level of aggressiveness in the submitted bids. The analysis performed in this project should prove useful to owners in understanding the bidding behavior of contractors and in pointing out areas that need improvement in preparing bidding documents. It should also be useful to contractors in understanding the competitive bidding environment and should help them improve their bidding strategies to maximize their success rate in obtaining contracts.

Keywords: construction projects, competitive bidding, public construction, competition

Procedia PDF Downloads 317
1031 Carbon Storage in Natural Mangrove Biomass: Its Destruction and Potential Impact on Climate Change in the UAE

Authors: Hedaya Ali Al Ameri, Alya A. Arabi

Abstract:

Measuring the level of carbon storage in mangrove biomass has a potential bearing on climate change in the UAE. Carbon dioxide is one of the greenhouse gases and is considered a main driver of global warming. Deforestation is a key source of increased carbon dioxide, whereas forests such as mangroves assist in removing carbon dioxide from the atmosphere by storing it in their biomass and soil. Using the Kauffman and Donato methodology, the above- and below-ground biomass and the carbon stored in the UAE's natural mangroves were quantified. The carbon dioxide equivalent (CO2eq) that would be released to the atmosphere in case of mangrove deforestation in the UAE was then estimated. The results show that the mean total biomass of mangroves in the UAE ranged from 15.75 Mg/ha to 3098.69 Mg/ha. The estimated CO2eq released upon deforestation in the UAE was found to have a minimal effect on temperature increase and thus on global warming.
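The biomass-to-CO2eq conversion can be sketched as follows; the 0.47 carbon fraction (an IPCC-style default) and the 44/12 molecular-weight ratio are our assumptions, not values stated in the abstract, though the biomass range is the one reported above.

```python
# Convert mangrove biomass to stored carbon and CO2 equivalent.
# Assumptions: carbon fraction ~0.47 of dry biomass and CO2/C ratio = 44/12.
def biomass_to_co2eq(biomass_mg_per_ha):
    carbon = 0.47 * biomass_mg_per_ha          # Mg C per hectare
    return carbon * 44.0 / 12.0                # Mg CO2eq per hectare

for b in (15.75, 3098.69):                     # the paper's reported range, Mg/ha
    print(f"{b:>8.2f} Mg/ha biomass -> {biomass_to_co2eq(b):>9.1f} Mg CO2eq/ha")
```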

Keywords: carbon stored in biomass, mangrove deforestation, temperature change, United Arab Emirates

Procedia PDF Downloads 382
1030 Operation Parameters of Vacuum Cleaned Filters

Authors: Wilhelm Hoeflinger, Thomas Laminger, Johannes Wolfslehner

Abstract:

For vacuum-cleaned dust filters, used e.g. in the textile industry, no calculation methods exist to determine design parameters (e.g., traverse speed of the nozzle, filter area, ...). In this work, a method was developed to calculate the optimum traverse speed of the nozzle of an industrial-size flat dust filter at a given mean pressure drop and filter face velocity. Well-known equations for the design of a cleanable multi-chamber baghouse filter were modified to take into account the continuous regeneration of the dust filter by a nozzle. This requires the specific filter medium resistance and the specific cake resistance values, which can be derived from filter tests under constant operating conditions. A lab-scale filter test rig was used to derive the specific filter medium resistance and specific cake resistance values for vacuum-cleaned filter operation. Three different filter media were tested, and the determined parameters were compared to each other.
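The role of the two resistances can be illustrated with the standard Darcy-type filtration equation for pressure drop; all parameter values below are invented for illustration, not the ones measured on the test rig.

```python
# Pressure drop of a dust filter from medium and cake resistances
# (illustrative values; the paper derives these from lab tests).
mu = 1.8e-5          # air viscosity, Pa*s
v = 0.02             # filter face velocity, m/s
R_m = 5.0e9          # specific filter medium resistance, 1/m
alpha = 2.0e10       # specific cake resistance, m/kg
W = 0.05             # areal dust load on the medium, kg/m^2

dp = mu * v * (R_m + alpha * W)   # Darcy-type filtration equation, Pa
print(f"pressure drop = {dp:.0f} Pa")
```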

Keywords: design of dust filter, dust removing, filter regeneration, operation parameters

Procedia PDF Downloads 371
1029 Countercurrent Flow Simulation of Gas-Solid System in a Purge Column Using Computational Fluid Dynamics Techniques

Authors: T. J. Jamaleddine

Abstract:

Purge columns, or degasser vessels, are widely used in the polyolefin process for removing trapped hydrocarbons and excess catalyst residues from the polymer particles. A uniform distribution of purge gases, coupled with plug-flow characteristics inside the column, is desirable to obtain optimum desorption of trapped hydrocarbons and catalyst residues. The Computational Fluid Dynamics (CFD) approach is a promising tool for design optimization of these vessels. The success of this approach is profoundly dependent on the solution strategy and the choice of geometrical layout at the vessel outlet. Filling the column with solids and initially solving for the solids flow minimized numerical diffusion substantially. Adopting a cylindrical configuration at the vessel outlet resulted in less numerical instability and resembled the hydrodynamic flow of solids in the hopper segment reasonably well.

Keywords: CFD, degasser vessel, gas-solids flow, gas purging, purge column, species transport

Procedia PDF Downloads 112
1028 A Brief Review of Urban Green Vegetation (Green Wall) in Reduction of Air Pollution

Authors: Masoumeh Pirhadi

Abstract:

Air pollution is becoming a major health problem affecting millions; in support of this observation, the World Health Organization estimates that many people feel unhealthy due to pollution. Coupled with this is the fact that one of the main global sources of air pollution in cities is greenhouse gas emissions due to heavy traffic. Green walls have been developed as a sustainable strategy to reduce pollution by increasing vegetation in developed areas without occupying space in the city. This concept can offer advantageous environmental benefits; green walls can also be proposed for aesthetic purposes, and today they are used to preserve the urban environment. Green walls can also create environments that promote a healthy lifestyle. The findings of multiple studies indicate that green infrastructure in cities is a strategy for improving air quality and increasing the sustainability of cities. Since these green solutions (green walls) act as porous materials that affect the diffusion of air pollution, they can also act as vents that remove pollutants and clean the air. Therefore, implementation of this strategy can be considered a prominent factor in achieving a cleaner environment.

Keywords: green vegetation, air pollution, green wall, urban area

Procedia PDF Downloads 137
1027 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks

Authors: Heeba A. Gurku

Abstract:

Introduction: Cone Beam CT (CBCT) images play an integral part in the proper positioning of cancer patients undergoing radiation therapy treatment, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full-view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients having limited-view images were included in the study). Cycle Generative Adversarial Networks (Cycle GAN) and its variant, Attention Guided Generative Adversarial Networks (AGGAN), were used to generate the synthetic CTs. Models were evaluated visually and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), to compare the synthetic CT and original CT images. Results: For the pancreatic dataset with limited-view CBCT images, our study showed that with the Cycle GAN model, MAE, RMSE and PSNR improved from 12.57 to 8.49, 20.94 to 15.29 and 21.85 to 24.63, respectively, but structural similarity only marginally increased, from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over Cycle GAN. However, for the lung dataset with full-view CBCT images, Cycle GAN was able to reduce MAE significantly, from 89.44 to 15.11, and AGGAN was able to reduce it to 19.77. Similarly, RMSE was decreased from 92.68 to 23.50 with Cycle GAN and to 29.02 with AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 with Cycle GAN, respectively, while with AGGAN SSIM increased to 0.52 and PSNR to 19.31. In both datasets, the GAN models were able to reduce artifacts and noise and deliver better resolution and contrast enhancement. Conclusion and Recommendation: Both Cycle GAN and AGGAN were able to significantly reduce MAE and RMSE and improve PSNR in both datasets. However, the full-view lung dataset showed more improvement in SSIM and image quality than the limited-view pancreatic dataset.
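The four evaluation metrics can be computed with scikit-image and NumPy as sketched below; the image arrays are random stand-ins for CT/synthetic-CT slices, and the data range is an assumption.

```python
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             mean_squared_error)

# Stand-in arrays for a planning CT slice and a synthetic CT slice
# (the study's DICOM data is not reproduced); values assumed scaled to [0, 1].
rng = np.random.default_rng(0)
ct = rng.random((256, 256))
sct = np.clip(ct + rng.normal(0, 0.05, ct.shape), 0, 1)

ssim = structural_similarity(ct, sct, data_range=1.0)
psnr = peak_signal_noise_ratio(ct, sct, data_range=1.0)
mae = np.mean(np.abs(ct - sct))
rmse = np.sqrt(mean_squared_error(ct, sct))
print(f"SSIM={ssim:.3f} PSNR={psnr:.2f} dB MAE={mae:.4f} RMSE={rmse:.4f}")
```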

Keywords: CT images, CBCT images, cycle GAN, AGGAN

Procedia PDF Downloads 69
1026 A Packet Loss Probability Estimation Filter Using Most Recent Finite Traffic Measurements

Authors: Pyung Soo Kim, Eung Hyuk Lee, Mun Suck Jang

Abstract:

A packet loss probability (PLP) estimation filter with a finite memory structure is proposed to estimate the packet rate mean and variance of the input traffic process in real time while removing undesired system and measurement noises. The proposed PLP estimation filter is developed under a weighted least squares criterion, using only the finite traffic measurements on the most recent window. The filter is shown to have several inherent properties, such as unbiasedness, deadbeat behavior, and robustness. A guideline for choosing an appropriate window length is given, since the window length can significantly affect estimation performance. Using computer simulations, the proposed PLP estimation filter is shown to be superior to the Kalman filter for temporarily uncertain systems. One possible explanation for this is that the proposed PLP estimation filter achieves faster convergence of the filtered estimate as the window length M decreases.
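A minimal sketch of the finite-memory idea, using an unweighted sliding-window estimate of the packet-rate mean and variance; the weighted least squares derivation and the filter's deadbeat property are not reproduced, and all parameters are illustrative.

```python
from collections import deque
import random

class FiniteMemoryEstimator:
    """Sliding-window mean/variance of a packet-rate measurement stream,
    using only the M most recent samples (the paper's weighted formulation
    reduces to this simple form when all weights are equal)."""
    def __init__(self, window=50):
        self.buf = deque(maxlen=window)

    def update(self, measurement):
        self.buf.append(measurement)
        n = len(self.buf)
        mean = sum(self.buf) / n
        var = sum((x - mean) ** 2 for x in self.buf) / max(n - 1, 1)
        return mean, var

est = FiniteMemoryEstimator(window=50)
for _ in range(200):
    rate = random.gauss(100, 5)          # simulated packet-rate measurements
    mean, var = est.update(rate)
print(round(mean, 2), round(var, 2))
```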

Keywords: packet loss probability estimation, finite memory filter, infinite memory filter, Kalman filter

Procedia PDF Downloads 660