Search results for: noise reduction techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11888

10388 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduces the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improves positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. Numerical results thus prove that, in comparison to traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.
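The GAN-based pipeline itself is beyond a short sketch, but the fingerprinting idea underlying it — match a measured signal vector against a surveyed radio map — can be illustrated with a minimal k-nearest-neighbour estimator. All positions, access points, and signal values below are synthetic and hypothetical, not the paper's data:

```python
import numpy as np

# Synthetic radio map: 200 surveyed reference points (x, y) in a 50 m area,
# each with an RSSI fingerprint from 6 hypothetical access points (dBm-like).
rng = np.random.default_rng(0)
ref_xy = rng.uniform(0, 50, size=(200, 2))
ref_fp = -40 - ref_xy @ rng.uniform(0.5, 1.5, (2, 6))

def knn_locate(fingerprint, ref_fp, ref_xy, k=3):
    """Estimate position as the centroid of the k closest fingerprints."""
    d = np.linalg.norm(ref_fp - fingerprint, axis=1)
    nearest = np.argsort(d)[:k]
    return ref_xy[nearest].mean(axis=0)

# Querying with the fingerprint of a known reference point lands nearby.
est = knn_locate(ref_fp[0], ref_fp, ref_xy)
```

The paper's contribution replaces the costly site survey behind `ref_fp` with GAN-generated fingerprints; the matching step stays conceptually the same.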

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 68
10387 End-to-End Spanish-English Sequence Learning Translation Model

Authors: Vidhu Mitha Goutham, Ruma Mukherjee

Abstract:

The low availability of well-trained, unlimited, dynamic-access models for specific languages makes it hard for corporate users to adopt quick translation techniques and incorporate them into product solutions. As translation tasks increasingly require a dynamic sequence learning curve, stable, cost-free open-source models are scarce. We survey and compare current translation techniques and propose a modified sequence-to-sequence model repurposed with attention techniques. Sequence learning using an encoder-decoder model is now paving the path to higher precision levels in translation. Using a Convolutional Neural Network (CNN) encoder and a Recurrent Neural Network (RNN) decoder, we use Fairseq tools to produce an end-to-end, bilingually trained Spanish-English machine translation model that includes source language detection. We achieve competitive results using a duo-lingo-corpus-trained model, providing prospective, ready-made plug-in use for compound sentences and document translations. Our model serves as a decent system for large, organizational data translation needs. While acknowledging its shortcomings and future scope, we also identify it as a well-optimized deep neural network model and solution.
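The attention mechanism at the heart of such encoder-decoder models can be sketched in a few lines. This is a generic scaled dot-product attention in NumPy with illustrative shapes, not the Fairseq implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend over encoder states V using decoder queries Q and encoder keys K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights                       # context vectors, map

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))   # 4 decoder positions, model dim 8
K = rng.normal(size=(6, 8))   # 6 encoder positions
V = rng.normal(size=(6, 8))
context, weights = scaled_dot_product_attention(Q, K, V)
```

Each decoder step thus gets a context vector that is a learned, position-dependent mixture of encoder states, which is what lets the model align source and target words.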

Keywords: attention, encoder-decoder, Fairseq, Seq2Seq, Spanish, translation

Procedia PDF Downloads 172
10386 Reduction in Population Growth under Various Contraceptive Strategies in Uttar Pradesh, India

Authors: Prashant Verma, K. K. Singh, Anjali Singh, Ujjaval Srivastava

Abstract:

Contraceptive policies have been derived to achieve desired reductions in the growth rate and applied to data from Uttar Pradesh, India, for illustration. Using Lotka's integral equation for the stable population, expressions for the proportion of contraceptive users at different ages have been obtained. At the age of 20 years, a contraceptive-use level of 42% is imperative to reduce the present annual growth rate of 0.036 to 0.02, assuming that 40% of contraceptive users discontinue at the age of 25 years and 30% resume contraceptive use at age 30 years. Further, presuming that 75% of women start using contraceptives at the age of 23 years, 50% of the remaining women at the age of 28 years, and the rest at the age of 32 years, and setting a minimum age of marriage of 20 years, a reduction of 0.019 in the growth rate will be obtained. This study describes how the level of contraceptive use in different age groups of women reduces the growth rate in the state of Uttar Pradesh. The article also promotes delayed marriage in the region.
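The practical weight of the targeted change in the intrinsic growth rate (0.036 down to 0.020) can be illustrated by a plain exponential projection, a simplification of the stable-population model the paper uses, with an arbitrary starting population:

```python
import math

def project(pop0, r, years):
    """Exponential projection P(t) = P0 * exp(r * t) of a stable population."""
    return pop0 * math.exp(r * years)

pop0 = 1_000_000                       # hypothetical starting population
p_current = project(pop0, 0.036, 25)   # at the present growth rate
p_target = project(pop0, 0.020, 25)    # at the policy-targeted rate

# Doubling time T = ln(2) / r makes the difference tangible:
t_current = math.log(2) / 0.036        # about 19.3 years
t_target = math.log(2) / 0.020         # about 34.7 years
```

Lowering r from 0.036 to 0.020 nearly doubles the population doubling time, which is the substance of the contraceptive strategies evaluated in the abstract.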

Keywords: child bearing, contraceptive devices, contraceptive policies, population growth, stable population

Procedia PDF Downloads 250
10385 Strategy of Loading by Number for Commercial Vehicles

Authors: Ramalan Musa Yerima

Abstract:

The paper "Loading by Number" describes a strategy recently developed by the Zonal Commanding Officer of the Federal Road Safety Corps of Nigeria covering the Sokoto, Kebbi and Zamfara States of Northern Nigeria. The strategy aims to reduce competition, which will invariably lead to reductions in speed, dangerous driving, crash rate, injuries, property damage and deaths from road traffic crashes (RTC). This research paper presents a study focused on enhancing the safety of commercial vehicles. The background of this study highlights the alarming statistics on commercial vehicle crashes in Nigeria, with a focus on Sokoto, Kebbi and Zamfara States, which often result in significant damage to property, loss of lives, and economic costs. The aim is to investigate and propose an effective strategy to enhance the safety of commercial vehicles. The study recognizes the pressing need for heightened safety measures in commercial transportation, as it impacts not only the well-being of drivers and passengers but also overall public safety. To achieve the objectives, an examination of accident data, including causes and contributing factors, was performed to identify critical areas for improvement. The major finding of the study is that competition within commercial driving has detrimental effects on road safety and resource management. Commercial drivers push themselves to complete their routes quickly and deliver goods on time, or to arrive quickly for more passengers and new contracts. This competitive environment, fuelled by internal and external pressures such as tight deadlines, poverty and greed, often leads to fatal outcomes.
The study recommends that if the loading-by-number strategy is integrated with other safety measures, such as driver training programs, regulatory enforcement, and infrastructure improvements, commercial vehicle safety can be significantly enhanced. The "Loading by Number" approach is designed to ensure that the sequence in which drivers depart from motor park 'A' is communicated to the officials of motor park 'B' and honoured when assigning returning passengers, regardless of who arrives first. In conclusion, this paper underscores the significance of improving safety measures for commercial vehicles, as they are often larger and heavier than other vehicles on the road; whenever they are involved in accidents, the consequences can be more severe. Commercial vehicles are also frequently involved in long-haul or interstate transportation, which means they cover longer distances and spend more time on the road. This increased exposure raises the probability of accidents. By implementing the suggested measures, policymakers, transportation authorities, and industry stakeholders can work collectively toward a safer commercial transportation system.

Keywords: commercial, safety, strategy, transport

Procedia PDF Downloads 58
10384 Protecting the Cloud Computing Data Through the Data Backups

Authors: Abdullah Alsaeed

Abstract:

Virtualized computing and cloud computing infrastructures are no longer buzzwords or marketing terms; they are a core reality in today's corporate Information Technology (IT) organizations. Hence, developing effective and efficient methodologies for data backup and data recovery is required more than ever. The purpose of data backup and recovery techniques is to assist organizations in strategizing their business continuity and disaster recovery approaches. To accomplish this strategic objective, a variety of mechanisms have been proposed in recent years. This paper explores and examines the latest techniques and solutions for providing data backup and restoration on cloud computing platforms.
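One of the simplest mechanisms in this family is incremental backup driven by content hashes: copy a file only when it is new or its contents changed. A minimal stdlib-only sketch, with paths supplied by the caller (nothing here is specific to any cloud provider's API):

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def incremental_backup(src: Path, dst: Path) -> list[str]:
    """Copy only files under src that are new or changed relative to dst."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or file_digest(target) != file_digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(str(f.relative_to(src)))
    return copied
```

Real cloud backup systems layer deduplication, encryption, and versioning on top of this same change-detection core.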

Keywords: data backup, data recovery, cloud computing, business continuity, disaster recovery, cost-effective, data encryption

Procedia PDF Downloads 82
10383 A Fast Convergence Subband BSS Structure

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed that uses a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the full-bandwidth input signal, which promotes better convergence rates.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 545
10382 Effect of Fuel Lean Reburning Process on NOx Reduction and CO Emission

Authors: Changyeop Lee, Sewon Kim

Abstract:

Reburning is a useful technology for reducing nitric oxide through injection of a secondary hydrocarbon fuel. In this paper, an experimental study has been conducted to evaluate the effect of fuel lean reburning on NOx/CO reduction in an LNG flame. Experiments were performed in flames stabilized by a co-flow swirl burner mounted at the bottom of the furnace. Tests were conducted using LNG gas as the reburn fuel as well as the main fuel. The effects of the reburn fuel fraction and the manner of reburn fuel injection were studied when the fuel lean reburning system was applied. The paper reports data on flue gas emissions and temperature distribution in the furnace for a wide range of experimental conditions. At steady state, the temperature distribution and emission formation in the furnace were measured and compared. This paper makes clear that, in order to decrease both NOx and CO concentrations in the exhaust when the pulsated fuel lean reburning system is adopted, it is important to control factors such as frequency and duty ratio. It also shows that fuel lean reburning is as effective as conventional reburning in reducing NOx.

Keywords: fuel lean reburn, NOx, CO, LNG flame

Procedia PDF Downloads 422
10381 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Instance selection (IS) techniques are used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload in the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
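The sequential idea that FCNN-MR parallelizes can be illustrated with the classic condensed nearest neighbour rule: grow a prototype subset by adding any training instance the current prototypes misclassify, until the subset classifies every instance correctly. This is a simplified 1-NN sketch on synthetic clusters, not the authors' MapReduce version:

```python
import numpy as np

def condensed_nn(X, y, max_passes=10):
    """Greedy condensed NN instance selection: grow a consistent subset."""
    keep = [0]                                   # seed with the first instance
    for _ in range(max_passes):
        changed = False
        for i in range(len(X)):
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[np.argmin(d)]] != y[i]:    # prototypes misclassify i
                keep.append(i)
                changed = True
        if not changed:                          # subset is now consistent
            break
    return np.array(sorted(set(keep)))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
subset = condensed_nn(X, y)   # far fewer instances than the full set
```

On well-separated classes the retained subset is a small fraction of the data while still classifying every training point correctly, which is the reduction-rate/accuracy trade-off the abstract measures.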

Keywords: instance selection, data reduction, MapReduce, kNN

Procedia PDF Downloads 250
10380 To Study the New Invocation of Biometric Authentication Technique

Authors: Aparna Gulhane

Abstract:

Biometrics is the science and technology of measuring and analyzing biological data; it forms the basis of research in biological measuring techniques for the purpose of people identification and recognition. In information technology, biometrics refers to technologies that measure and analyze human body characteristics, such as DNA, fingerprints, eye retinas and irises, voice patterns, facial patterns and hand measurements. Biometric systems are used to authenticate a person's identity; the idea is to use the special characteristics of a person to identify him or her. This paper presents biometric authentication techniques and their actual deployment potential through an overall review of biometric recognition, with independent testing of various biometric authentication products and technologies.

Keywords: types of biometrics, importance of biometric, review for biometrics and getting a new implementation, biometric authentication technique

Procedia PDF Downloads 317
10379 Exposing Latent Fingermarks on Problematic Metal Surfaces Using Time of Flight Secondary Ion Mass Spectroscopy

Authors: Tshaiya Devi Thandauthapani, Adam J. Reeve, Adam S. Long, Ian J. Turner, James S. Sharp

Abstract:

Fingermarks are a crucial form of evidence for identifying a person at a crime scene. However, visualising latent (hidden) fingermarks can be difficult, and the correct choice of techniques is essential to develop and preserve any fingermarks that might be present. Knives, firearms and other metal weapons have proven to be challenging substrates (stainless steel in particular) from which to reliably obtain fingermarks. In this study, time of flight secondary ion mass spectroscopy (ToF-SIMS) was used to image fingermarks on metal surfaces. This technique was compared to a conventional superglue-based fuming technique accompanied by a series of contrast-enhancing dyes (basic yellow 40 (BY40), crystal violet (CV) and Sudan black (SB)) on three different metal surfaces. The conventional techniques showed little to no evidence of fingermarks being present on the metal surfaces after a few days. However, ToF-SIMS images revealed fingermarks on the same and similar substrates with an exceptional level of detail, demonstrating clear ridge definition as well as detail about sweat pore position and shape that persists for over 26 days after deposition when the samples are stored under ambient conditions.

Keywords: conventional techniques, latent fingermarks, metal substrates, time of flight secondary ion mass spectroscopy

Procedia PDF Downloads 159
10378 Solving Momentum and Energy Equation by Using Differential Transform Techniques

Authors: Mustafa Ekici

Abstract:

Natural convection is a basic process that is important in a wide variety of practical applications. In essence, a heated fluid expands and rises from buoyancy due to decreased density. Numerous papers have been written on natural or mixed convection in vertical ducts heated on the side. The governing equations have proved to be valuable tools for modelling many phenomena in fluid dynamics, but finding solutions to such equations or systems of equations is in general not an easy task. We propose a method, called the differential transform method, for solving non-linear equations and compare the results with those of other techniques. Illustrative examples show that the results are in good agreement.
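For a flavour of the differential transform method, consider the textbook problem y' = y with y(0) = 1 (not one of the paper's convection equations): the transform turns the ODE into the algebraic recurrence Y(k+1) = Y(k)/(k+1), and the inverse transform y(x) = Σ Y(k) x^k recovers the Taylor series of e^x:

```python
import math

def dtm_exponential(n_terms):
    """Differential transform of y' = y, Y(0) = 1: Y(k+1) = Y(k) / (k + 1)."""
    Y = [1.0]
    for k in range(n_terms - 1):
        Y.append(Y[k] / (k + 1))      # recurrence from the transformed ODE
    return Y                          # Y[k] = 1/k!, the Taylor coefficients

def dtm_eval(Y, x):
    """Inverse differential transform: y(x) ~ sum of Y[k] * x**k."""
    return sum(c * x**k for k, c in enumerate(Y))

Y = dtm_exponential(15)
approx = dtm_eval(Y, 1.0)             # partial sum approximating e
```

The same pattern applies to the momentum and energy equations: differentiation becomes an index shift, products become convolutions of the transformed coefficients, and a truncated series gives the approximate solution.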

Keywords: differential transform method, momentum, energy equation, boundary value problem

Procedia PDF Downloads 457
10377 Spatial REE Geochemical Modeling at Lake Acıgöl, Denizli, Turkey: Analytical Approaches on Spatial Interpolation and Spatial Correlation

Authors: M. Budakoglu, M. Karaman, A. Abdelnasser, M. Kumral

Abstract:

The spatial interpolation and spatial correlation of the rare earth elements (REE) in lake surface sediments of Lake Acıgöl and its surrounding lithological units are carried out using GIS techniques, namely Inverse Distance Weighted (IDW) interpolation and Geographically Weighted Regression (GWR). The IDW technique, which performs the spatial interpolation, shows that lithological units such as the Hayrettin Formation north of Lake Acıgöl have higher REE contents than the lake sediments, as well as higher ∑LREE and ∑HREE contents. However, Eu/Eu* values (based on chondrite-normalized REE patterns) are higher in some lake surface sediments than in the lithological units, indicating a negative Eu anomaly. Also, the spatial interpolation of the V/Cr ratio indicates that the Acıgöl lithological units and lake sediments were deposited under oxic and dysoxic conditions. The spatial correlation is carried out by the GWR technique, which shows a spatial correlation coefficient between ∑LREE and ∑HREE that is higher in the lithological units (Hayrettin Formation and Cameli Formation) than in the other lithological units and lake surface sediments. Also, the correlation of the REEs with Sc and Al indicates that the REE abundances of the Lake Acıgöl sediments were weathered from the local bedrock around the lake.
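The IDW interpolation used for such element maps reduces, computationally, to a weighted average of nearby measurements with weights 1/d^p. A generic sketch with synthetic concentration values, not the study's GIS workflow:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation at a set of query points."""
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                     # query coincides with a sample
            out[i] = values[np.argmin(d)]
        else:
            w = 1.0 / d**power                # closer samples weigh more
            out[i] = np.sum(w * values) / np.sum(w)
    return out

# Hypothetical REE concentrations (ppm) at four sample sites
xy = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
vals = np.array([10.0, 20.0, 30.0, 40.0])
grid = idw(xy, vals, np.array([[0.5, 0.5], [0.0, 0.0]]))
```

IDW is exact at the sample sites and smooths between them; the `power` parameter controls how local the influence of each sample is.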

Keywords: spatial geochemical modeling, IDW, GWR techniques, REE, lake sediments, Lake Acıgöl, Turkey

Procedia PDF Downloads 549
10376 Geological Mapping of Gabel Humr Akarim Area, Southern Eastern Desert, Egypt: Constrain from Remote Sensing Data, Petrographic Description and Field Investigation

Authors: Doaa Hamdi, Ahmed Hashem

Abstract:

The present study aims at integrating ASTER and Landsat 8 data to discriminate and map alteration and/or mineralization zones, in addition to delineating the different lithological units of the Humr Akarim Granites area. The study area is located at 24º9' to 24º13' N and 34º1' to 34º2'45" E, covering a total exposed surface area of about 17 km². The area is characterized by rugged topography with low to moderate relief. Geologic fieldwork and petrographic investigations revealed that the basement complex of the study area is composed of metasediments, mafic dikes, older granitoids, and alkali-feldspar granites. Petrographic investigations revealed that the secondary minerals in the study area are mainly represented by chlorite, epidote, clay minerals and iron oxides. These minerals have specific spectral signatures in the visible/near-infrared and short-wave infrared region (0.4 to 2.5 µm), so the ASTER imagery processing concentrated on VNIR-SWIR spectrometric data in order to achieve the purposes of this study (geologic mapping of hydrothermal alteration zones and delineation of possible radioactive potentialities). Mapping of hydrothermal alteration zones and discrimination of the lithological units in the study area are achieved through several image processing techniques, including color band composites (CBC) and data transformation techniques such as band ratios (BR), band ratio codes (BRC), principal component analysis (PCA), the Crosta technique, and minimum noise fraction (MNF). The field verification and petrographic investigation confirm the results of the ASTER imagery and Landsat 8 data, leading to a proposed geological map (scale 1:50000).
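Computationally, band-ratio alteration mapping of this kind comes down to element-wise ratios of calibrated bands followed by a threshold. A minimal sketch on synthetic reflectance arrays; the band choice and threshold are illustrative assumptions, not the study's recipe:

```python
import numpy as np

def band_ratio(band_a, band_b, eps=1e-6):
    """Element-wise ratio image; eps guards against division by zero."""
    return band_a / (band_b + eps)

# Synthetic reflectance for two hypothetical bands of a 4x4-pixel scene
rng = np.random.default_rng(3)
band_red = rng.uniform(0.1, 0.6, size=(4, 4))
band_blue = rng.uniform(0.1, 0.6, size=(4, 4))

# High red/blue values are commonly used to flag iron-oxide-rich pixels
ratio = band_ratio(band_red, band_blue)
mask = ratio > np.percentile(ratio, 75)   # candidate alteration mask
```

The PCA, Crosta, and MNF transformations mentioned above serve the same goal of enhancing the spectral contrast of the alteration minerals before such masks are drawn.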

Keywords: remote sensing, petrography, mineralization, alteration detection

Procedia PDF Downloads 159
10375 The Impact of Migrants’ Remittances on Household Poverty and Inequality: A Case Study of Mazar-i-Sharif, Balkh Province, Afghanistan

Authors: Baqir Khawari

Abstract:

This study investigates the impact of remittances on household poverty and inequality using OLS and logit models with a strictly multi-random sampling method. The results of the OLS model reveal that if per capita international remittances increase by 1%, per capita income is estimated to increase by 0.071% and 0.059% in 2019/20 and 2020/21, respectively. In addition, a 1% increase in external remittances results in a 0.0272% and 0.025% reduction in the per capita depth of poverty and a 0.0149% and 0.0145% decrease in the severity of poverty in 2019/20 and 2020/21, respectively. It is also shown that the effect of external remittances on poverty is greater than that of internal remittances. In terms of inequality, the results indicate that remittances reduced the Gini coefficient by 2% and 7% in 2019/20 and 2020/21, respectively. Further, it is notable that COVID-19 negatively impacted the amount of remittances received by households, thus reducing the size of the remittance effect. Therefore, a concerted effort of effective policies, good governance and international assistance is imperative to address this prolonged problem.
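Elasticity-style estimates of this kind (a 1% rise in remittances raising per capita income by about 0.07%) come from an OLS regression in logs. The mechanics can be sketched on synthetic data with an assumed true elasticity, purely to show the estimator, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
log_remit = rng.normal(6, 1, n)          # log per capita remittances
true_beta = 0.07                         # assumed elasticity for the sketch
log_income = 2.0 + true_beta * log_remit + rng.normal(0, 0.05, n)

# OLS via least squares on the design matrix [1, log_remit]
X = np.column_stack([np.ones(n), log_remit])
beta_hat, *_ = np.linalg.lstsq(X, log_income, rcond=None)
# beta_hat[1] estimates the elasticity: % change in income per 1% change
# in remittances, since both variables enter in logs.
```

With both sides in logs, the slope coefficient reads directly as a percentage response, which is how the 0.071% and 0.059% figures above should be interpreted.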

Keywords: migration, remittances, poverty, inequality, COVID-19, Afghanistan

Procedia PDF Downloads 61
10374 Theoretical Insight into Ligand Free Manganese Catalyzed C-O Coupling Protocol for the Synthesis of Biaryl Ethers

Authors: Carolin Anna Joy, Rohith K. R, Rehin Sulay, Parvathy Santhoshkumar, G.Anil Kumar, Vibin Ipe Thomas

Abstract:

Ullmann coupling reactions are gaining great relevance owing to their contribution to the synthesis of biologically and pharmaceutically important compounds. Palladium and many other heavy metals have proven their excellent ability in coupling reactions, but their toxicity matters. First-row transition metals also possess toxicity, except iron and manganese. The suitability of manganese as a catalyst is attracting great interest in oxidation, reduction, C-H activation, coupling reactions, etc. In this presentation, we discuss the thermochemistry of a ligand-free manganese-catalyzed C-O coupling reaction between phenol and an aryl halide for the synthesis of biaryl ethers, using density functional theory techniques. The mechanism involves an oxidative addition-reductive elimination sequence. The transition states for both steps have been studied and confirmed using intrinsic reaction coordinate (IRC) calculations. The barrier height of the reaction has also been calculated from the rate-determining step, and the possibility of other mechanistic pathways has been examined. To gain further insight into the mechanism, substrates bearing various functional groups are considered in our study to determine their effect on the feasibility of the reaction.

Keywords: Density functional theory, Molecular Modeling, ligand free, biaryl ethers, Ullmann coupling

Procedia PDF Downloads 142
10373 From Waste Recycling to Waste Prevention by Households: Could Eco-Feedback Strategies Fill the Gap?

Authors: I. Dangeard, S. Meineri, M. Dupré

Abstract:

A large body of research on energy consumption reveals that regular information on consumption produces a positive effect on behavior. The present research aims to test this feedback paradigm on waste management. A small-scale experiment on residual household waste was performed in a large French urban area, in partnership with local authorities, as part of the development of a larger-scale project. A two-step door-to-door recruitment scheme led to 85 households answering a questionnaire. Among them, 54 accepted to participate in a study on waste (second step). Participants were then randomly assigned to one of three experimental conditions: self-reported feedback on curbside waste, external feedback on waste weight based on information technologies, and no feedback for the control group. An additional control group was added, including households who were not asked to answer the questionnaire. Household residual waste was collected every week, and tags on curbside bins fed a database with the waste weight of each household. The feedback period lasted 14 weeks (February-May 2014). Quantitative data on waste weight were analysed, covering these 14 weeks and the 7 previous weeks. Households were then contacted by phone in order to confirm the quantitative results. Regarding the recruitment questionnaire, results revealed a high pro-environmental attitude on the NEP scale, a high level of recycling behavior and a moderate level of source-reduction behavior on the adapted 3R scale, but no statistical difference between the three experimental groups. Regarding the feedback manipulation paradigm, waste weight reveals important differences between households but no statistical difference between the experimental conditions. Qualitative phone interviews confirm that recycling is a current practice among participants, whereas source reduction of waste is not, and is mainly seen as a producer's problem of packaging limitation.
We conclude that triggering waste prevention behaviors among recycling households requires long-term feedback and should promote benchmarking, in order to clearly set waste reduction as an objective to be managed through feedback figures.

Keywords: eco-feedback, household waste, waste reduction, experimental research

Procedia PDF Downloads 386
10372 Cognitive Methods for Detecting Deception During the Criminal Investigation Process

Authors: Laid Fekih

Abstract:

Background: It is difficult to detect lying, deception, and misrepresentation just by looking at verbal or non-verbal expressions during the criminal investigation process, despite the common belief that it is possible to tell whether a person is lying or telling the truth simply from the way they act or behave. The process of detecting lies and deception during criminal investigation needs more study and research to overcome the difficulties facing investigators. Method: The present study aimed to assess the effectiveness of cognitive methods and techniques in detecting deception during criminal investigation. It adopted a quasi-experimental method and covered a sample of 20 defendants distributed randomly into two homogeneous groups: an experimental group of 10 defendants subjected to criminal investigation applying cognitive deception-detection techniques, and a second group of 10 defendants subjected to the direct investigation method. The tool used was a guided interview based on models of investigative questions following the cognitive deception-detection approach, which consists of three of Vrij's techniques: imposing cognitive load, encouraging interviewees to provide more information, and asking unexpected questions; the comparison condition was the direct investigation method. Results: Results revealed a significant difference between the two groups in lie detection accuracy in favour of the defendants investigated with cognitive techniques; the cognitive deception-detection approach produced superior total accuracy rates both with human observers and through an analysis of objective criteria. The cognitive approach produced superior accuracy in truth detection (71%) and deception detection (70%), compared to the direct investigation method (truth detection: 52%; deception detection: 49%).
Conclusion: The study recommends that if practitioners use cognitive deception-detection techniques, they will correctly classify more individuals than when they use the direct investigation method.

Keywords: the cognitive lie detection approach, deception, criminal investigation, mental health

Procedia PDF Downloads 65
10371 A Review Paper for Detecting Zero-Day Vulnerabilities

Authors: Tshegofatso Rambau, Tonderai Muchenje

Abstract:

Zero-day attacks (ZDA) are increasing day by day; there are many vulnerabilities in systems and software that date back decades. Companies keep discovering vulnerabilities in their systems and software and work to release patches and updates. A zero-day vulnerability is a software fault that is not widely known and is unknown to the vendor; attackers work very quickly to exploit such vulnerabilities. These are major security threats with a high success rate because businesses lack the essential safeguards to detect and prevent them. This study focuses on the factors and techniques that can help detect zero-day attacks. There are various methods and techniques for detecting vulnerabilities, and various vendors offer penetration testing and smart vulnerability management solutions. As part of the study process, we undertake literature studies on zero-day attacks and detection methods, as well as modeling approaches and simulations.

Keywords: zero-day attacks, exploitation, vulnerabilities

Procedia PDF Downloads 97
10370 An Experimental Study to Mitigate Swelling Pressure of Expansive Tabuk Shale, Saudi Arabia

Authors: A. A. Embaby, A. Abu Halawa, M. Ramadan

Abstract:

In the Kingdom of Saudi Arabia, there are several areas where expansive soil exists in the form of layers of variable thickness in developed regions. Severe distress to infrastructure can be caused by the development of heave and swelling pressure in this kind of expansive shale. Among the various techniques for expansive soil mitigation, the removal-and-replacement technique is very popular for lightly loaded structures and shallow foundations. This paper presents the results of an experimental study conducted to evaluate the effect of the type and thickness of cushion soils on the mitigation of the swelling characteristics of expansive shale. Seven undisturbed shale samples, collected from the Al Qadsiyah district of the town of Tabuk in the north of the Kingdom of Saudi Arabia, were treated with two types of cushion coarse-grained sediments (CCS): sand and gravel. Each type was tested at three thicknesses: 22%, 33% and 44% of the depth of the active zone. The test results indicate that the replacement of expansive shale by CCS reduces the swelling potential and pressure, and that the reduction in swelling depends on the type and thickness of the CCS. Removing the original expansive shale and replacing it with a sand cushion of 44% thickness reduced the swelling potential and swelling pressure by about 53.29% and 62.78%, respectively.

Keywords: cushion coarse-grained sediments (CCS), expansive soil, Saudi Arabia, swelling pressure, Tabuk Shale

Procedia PDF Downloads 309
10369 Horizontal-Vertical and Enhanced-Unicast Interconnect Testing Techniques for Network-on-Chip

Authors: Mahdiar Hosseinghadiry, Razali Ismail, F. Fotovati

Abstract:

One of the most important and challenging tasks in testing network-on-chip based system-on-chips (NoC-based SoCs) is verifying the communication fabric. It is important because the fabric transfers both data packets and test patterns for intellectual properties (IPs) during normal and test modes; hence, ensuring NoC reliability is required for reliable IP functionality and testing. It is challenging because of the time required to test the fabric and the way test patterns are transferred from the tester to the NoC components. In this paper, two testing techniques for mesh-based NoC interconnects are proposed. The first is based on one-by-one testing; the second divides the NoC interconnects into three parts: horizontal links of switches in even columns, horizontal links of switches in odd columns, and all vertical links. A design-for-testability (DFT) architecture is presented that sends test patterns directly to each switch under test and also supports the proposed testing techniques by providing a loopback path in each switch. The simulation results show that the second proposed testing mechanism outperforms in terms of test time, because it tests all the interconnects in only three phases, independent of the number of interconnects in the network, while the test time of the other methods is highly dependent on the number of switches and interconnects in the NoC.

Keywords: network-on-chip, interconnect testing, horizontal-vertical testing, enhanced unicast

Procedia PDF Downloads 550
10368 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets, sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 s, i.e., anechoic, and approximately 1, 2, and 3 s) and four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios and recording distances, and also under reverberant conditions with varying recording distance. LNCC showed performance as high as state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
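The quarter-tone grid underlying the proposed LNQT filters divides each octave into 24 logarithmically spaced steps. A minimal sketch of the center-frequency computation follows; the A0 = 27.5 Hz reference and the filter count are assumptions for illustration, and the triangular filter shapes and local normalization of the actual LNQT filters are not reproduced here:

```python
import numpy as np

def quarter_tone_centers(f0=27.5, n=24 * 4):
    """Center frequencies of a quarter-tone grid: 24 steps per octave,
    starting at f0 (here A0 = 27.5 Hz, an assumed reference)."""
    k = np.arange(n)
    return f0 * 2.0 ** (k / 24.0)

centers = quarter_tone_centers()
print(round(float(centers[24]), 1))  # 55.0 -- one octave above 27.5 Hz
```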

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 229
10367 Decolorization and Degradation of Ponceau Red P4R in Aqueous Solution by Ferrate (VI)

Authors: Chaimaan Benhsinat, Amal Tazi, Mohammed Azzi

Abstract:

Synthetic azo dyes are widely used in the food industry; they produce intensely colored, highly toxic, and mutagenic wastewater, causing serious damage to aquatic biota and posing risks to humans. The treatment of these effluents remains a major challenge, especially for third-world countries that do not yet have every means to integrate the concept of sustainable development. These aqueous effluents require specific treatment to preserve natural environments. For these reasons, and in order to contribute to the fight against this danger, this study addresses the degradation of the dye Ponceau Red E124 (C20H11N2Na3O10S3), used in a food industry in Casablanca, Morocco, by the super-iron ferrate (VI) K3FexMnyO8, synthesized in our laboratory and known for its strong oxidizing and flocculating properties. The degradation of Ponceau Red is evaluated in terms of the reduction of chemical oxygen demand (COD), total organic carbon (TOC), and color. The results are very satisfactory: we achieved a 90% reduction of COD and 99% decolorization. The recovered flocs are subjected to various spectroscopic analysis techniques (UV-visible and IR) to identify the by-products formed after degradation. The results will later be compared with those obtained by applying ferrous sulfate (FeSO4·7H2O), used by the food industry for the degradation of P4R.

Keywords: COD removal, color removal, dye ponceau 4R, oxidation by ferrate (VI)

Procedia PDF Downloads 337
10366 Educational Innovation through Coaching and Mentoring in Thailand: A Mixed Method Evaluation of the Training Outcomes

Authors: Kanu Priya Mohan

Abstract:

Innovation in education is one of the essential pathways to achieving both educational and development goals in today's dynamically changing world. Over the last decade, coaching and mentoring have been applied in the field of education as positive intervention techniques for fostering teaching and learning reforms in developed countries. The context of this research was Thailand's educational reform process, wherein a project on coaching and mentoring (C&M) was launched in 2014. The C&M project endeavored to support the professional development of school teachers in the various provinces of Thailand and to enable them to apply C&M to innovative instructional techniques. This research aimed to empirically investigate the learning outcomes for the master trainers, who were trained in coaching and mentoring as the first step in the process of training the school teachers. A mixed-method study was used to evaluate the learning outcomes of training in terms of cognitive, behavioral, and affective dimensions. In the first phase of the research, a quantitative design was used to evaluate the effects of learner characteristics and instructional techniques on the learning outcomes. In the second phase, a qualitative method of in-depth interviews was used to elicit details about the training outcomes, as well as the perceived barriers and enablers of the training process. Although there were sample size constraints, these exploratory results, integrated from both methods, indicated the significance of evaluating training outcomes along the three dimensions and the perceived role of other factors in the training. Findings are discussed in terms of their implications for C&M training and their impact on fostering positive education through innovative educational techniques in developing countries.

Keywords: cognitive-behavioral-affective learning outcomes, mixed method research, teachers in Thailand, training evaluation

Procedia PDF Downloads 271
10365 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the observation that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the post-chemotherapy ovarian data with interesting marker genes that are potentially relevant for medical research.
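The Shared Nearest Neighbor network at the core of the method can be sketched as follows. This toy computation shows only the SNN similarity itself, on assumed synthetic points; the modularity-based community detection and the cluster-level evaluation metrics of the paper are not reproduced:

```python
import numpy as np

def snn_similarity(X, k=2):
    """Shared Nearest Neighbor similarity: for each pair of cells, count
    how many of their k nearest neighbors they have in common."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a cell is not its own neighbor
    knn = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest neighbors
    sets = [set(row) for row in knn]
    n = len(X)
    S = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            S[i, j] = S[j, i] = len(sets[i] & sets[j])
    return S

# Tiny synthetic example: two tight groups of three points each
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]], float)
S = snn_similarity(X, k=2)
print(S[0, 1], S[0, 3])  # shared neighbors within a group vs. across groups
```

Noisy cells sit between groups: they pick up a few neighbors from each cluster but share few neighbors with any single cluster, which is what the weak-membership removal exploits.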

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 108
10364 Waste Management Option for Bioplastics Alongside Conventional Plastics

Authors: Dan Akesson, Gauthaman Kuzhanthaivelu, Martin Bohlen, Sunil K. Ramamoorthy

Abstract:

Bioplastics can be defined as polymers derived partly or completely from biomass. Bioplastics can be biodegradable, such as polylactic acid (PLA) and polyhydroxyalkanoates (PHA), or non-biodegradable, such as biobased polyethylene (bio-PE), polypropylene (bio-PP), and polyethylene terephthalate (bio-PET). The usage of such bioplastics is expected to increase in the future due to newfound interest in sustainable materials. At the same time, these plastics become a new type of waste in the recycling stream. Most countries do not have a separate bioplastics collection that would allow them to be recycled or composted. After a brief introduction of bioplastics such as PLA in the UK, these plastics were once again replaced by conventional plastics by many establishments due to the lack of commercial composting. Recycling companies fear the contamination of conventional plastics in the recycling stream and state that they would have to invest in expensive new equipment to separate bioplastics and recycle them separately. This project studies what happens when bioplastics contaminate conventional plastics. Three commonly used conventional plastics were selected for this study: polyethylene (PE), polypropylene (PP), and polyethylene terephthalate (PET). In order to simulate contamination, one of two biopolymers, polyhydroxyalkanoate (PHA) or thermoplastic starch (TPS), was blended with the conventional polymers at either 1% or 5% bioplastic content. The blended plastics were processed again to observe the effect of degradation. The contamination results showed that the tensile strength and modulus of PE were almost unaffected, whereas the elongation was clearly reduced, indicating increased brittleness of the plastic. Generally, PP is slightly more sensitive to the contamination than PE. This can be explained by the fact that the melting point of PP is higher than that of PE; as a consequence, the biopolymer degrades more quickly. However, the reduction of the tensile properties for PP is relatively modest. Impact strength is generally a more sensitive test toward contamination: again, PE is relatively unaffected, but for PP there is a relatively large reduction of the impact properties already at 1% contamination. PET is a polyester and is, by its very nature, more sensitive to degradation than PE and PP. PET also has a much higher melting point than PE and PP; as a consequence, the biopolymer degrades quickly at the processing temperature of PET. As for tensile strength, PET can tolerate 1% contamination without any reduction; however, the impact strength shows a strong reduction of properties already at 1% contamination. The thermal properties show the change in crystallinity. The blends were also characterized by SEM: a biphasic morphology can be seen, as the two polymers are not truly miscible, which also contributes to the reduced mechanical properties. The study shows that PE is relatively robust against contamination, while PP is sensitive and PET can be quite sensitive toward contamination.

Keywords: bioplastics, contamination, recycling, waste management

Procedia PDF Downloads 221
10363 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners plan testing and inspection resources at early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data: a dataset with very few instances of the minority outcome categories leads to an inefficient learning process, and a classification model developed from the imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change-prone whereas a majority may be non-change-prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics when dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of an open-source Android dataset and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
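As one simple illustration of the resampling idea, here is naive random oversampling of the minority class; the study compares several sampling methods and MetaCost learners, not necessarily this one:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y):
    """Naive random oversampling: duplicate minority-class samples until
    every class matches the majority class count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, cnt in zip(classes, counts):
        if cnt < target:
            idx = rng.choice(np.flatnonzero(y == c), size=target - cnt, replace=True)
            Xs.append(X[idx])
            ys.append(y[idx])
    return np.concatenate(Xs), np.concatenate(ys)

X = np.arange(20, dtype=float).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)   # imbalanced: 8 non-change-prone vs 2 change-prone
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))            # [8 8] -- balanced after oversampling
```

After balancing, accuracy alone is no longer misleading, but metrics such as recall on the minority class or the area under the ROC curve remain the more robust measures the abstract refers to.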

Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics

Procedia PDF Downloads 418
10362 On the Influence of the COVID-19 Pandemic on Tunisian Stock Market: By Sector Analysis

Authors: Nadia Sghaier

Abstract:

In this paper, we examine the influence of the COVID-19 pandemic on the performance of the Tunisian stock market and 12 sectors over a recent period from 23 March 2020 to 18 August 2021, covering several waves and the introduction of vaccination. The empirical study is conducted using cointegration techniques, which allow for long- and short-run relationships. The obtained results indicate that daily growth in both confirmed cases and deaths has a negative and significant effect on stock market returns. In particular, this effect differs across sectors and appears more pronounced in the financial, consumer goods, and industrials sectors. These findings have important implications for investors seeking to predict the behavior of stock market or sector returns and to implement hedging strategies during the COVID-19 pandemic.
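The cointegration analysis can be illustrated with a minimal numpy sketch of the first (OLS) step of the Engle-Granger procedure on synthetic data. A real analysis would also run a unit-root test (e.g., ADF) on the residuals, which is omitted here, and the series below are assumed for illustration, not market data:

```python
import numpy as np

def engle_granger_residuals(y, x):
    """Step 1 of the Engle-Granger cointegration test: OLS of y on x.
    Step 2 (not shown) tests these residuals for a unit root."""
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ beta, beta

# Synthetic example: two series sharing a random-walk trend (cointegrated)
rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=500))
x = trend + rng.normal(scale=0.1, size=500)
y = 2.0 * trend + 1.0 + rng.normal(scale=0.1, size=500)
resid, beta = engle_granger_residuals(y, x)
print(round(float(beta[1]), 1))  # slope estimate, near the true value 2.0
```

If the residuals are stationary while each series has a unit root, the series are cointegrated, i.e., tied together in the long run even though each wanders on its own in the short run.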

Keywords: Tunisian stock market, sectors, COVID-19 pandemic, cointegration techniques

Procedia PDF Downloads 195
10361 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys. Computed Tomography (CT) is one of the most significant of these modalities. In this paper, CT liver images are used to study automatic computer-aided techniques for calculating the volume of a liver tumor. A segmentation method for detecting the tumor from the CT scan is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple region of interest (ROI)-based method may help to characterize the different features and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, feature selection presents inherent difficulties. For better performance, a novel system is introduced in which multiple ROI-based feature selection and classification are performed. Obtaining relevant features is important for the better generalization performance of the Support Vector Machine (SVM) classifier. The proposed system improves classification performance while using a significantly reduced set of features. The diagnosis of liver cancer from computed tomography images is inherently difficult, and early detection of liver tumors is very helpful for saving human lives.
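The adaptive thresholding step can be sketched with a local-mean rule, in which each pixel is compared with the average of its neighborhood. This is a simple mean-based variant assumed for illustration; the paper's exact algorithm and the preceding Gaussian denoising step are not reproduced:

```python
import numpy as np

def adaptive_threshold(img, block=3, offset=0.0):
    """Adaptive thresholding sketch: each pixel is compared with the mean
    of its (block x block) neighborhood (edges padded by reflection)."""
    pad = block // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    local_mean = np.empty_like(img, dtype=float)
    for i in range(h):          # simple sliding window, O(n * block^2)
        for j in range(w):
            local_mean[i, j] = padded[i:i + block, j:j + block].mean()
    return (img > local_mean + offset).astype(np.uint8)

# Toy "image": a bright 2x2 square on a dark background
img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0
mask = adaptive_threshold(img)
print(int(mask.sum()))  # 4 -- only the bright square exceeds its local mean
```

Because the threshold adapts to the neighborhood rather than being a single global value, lesions remain detectable even when the background intensity varies across the scan.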

Keywords: computed tomography (CT), multiple region of interest (ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 507
10360 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase that can easily lead to a hard landing or even a crash. In this paper, the estimation of the relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive (e.g., LIDAR) or offer only a limited operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the variation of the projected point as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
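The estimation idea can be illustrated with a minimal one-dimensional, linear Kalman filter (state: distance and vertical velocity; measurement: noisy distance). This is a simplified stand-in for the paper's two EKF formulations, with the motion profile and all noise parameters assumed for illustration:

```python
import numpy as np

def kalman_altitude(z_meas, dt=0.05, q=0.05, r=0.4):
    """Minimal constant-velocity Kalman filter: state [distance, velocity],
    measurement = noisy distance to the ground."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we measure distance only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r**2]])
    x = np.array([z_meas[0], 0.0])
    P = np.eye(2)
    for z in z_meas[1:]:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x

rng = np.random.default_rng(2)
t = np.arange(0.0, 5.0, 0.05)
true_d = 10.0 - 1.5 * t                     # descending at 1.5 m/s
z = true_d + rng.normal(scale=0.4, size=t.size)
x = kalman_altitude(z)
print(round(float(x[1]), 2))  # velocity estimate, expected near -1.5 m/s
```

In the paper's second approach, the measurement model would instead map the state to the feature's pixel position, making the filter an EKF; the predict/update cycle stays the same.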

Keywords: altitude estimation, drone, image processing, trajectory planning

Procedia PDF Downloads 108
10359 Effect of Water Addition on Catalytic Activity for CO2 Purification from Oxyfuel Combustion

Authors: Joudia Akil, Stephane Siffert, Laurence Pirault-Roy, Renaud Cousin, Christophe Poupin

Abstract:

Oxyfuel combustion is a promising method that yields a CO2-rich stream containing water vapor (~10%) and unburned components such as CO and NO, which must be removed before the CO2 can be used. Our objective is the final treatment of CO and NO by catalysis. Three-way catalysts (TWCs) are well-developed materials for the simultaneous conversion of NO, CO, and hydrocarbons (HC). Pt and/or Rh ensure a quasi-complete removal of NOx, CO, and HC, and there is also growing interest in partly replacing Pt with less expensive Pd. The use of alumina and ceria as supports ensures, respectively, the stabilization of such species in the active state and the discharging or storing of oxygen to control the oxidation of CO and HC and the reduction of NOx. In this work, we compare different metals (Pd, Rh, and Pt) supported on Al2O3 and CeO2 for CO2 purification from oxyfuel combustion. The catalyst must reduce NO by CO in an oxidizing environment, in the presence of a CO2-rich stream, and must be resistant to water. In this study, Al2O3 and CeO2 were used as the support materials. Catalysts of 1 wt% M/support (M = Pd, Rh, or Pt) were obtained by wet impregnation of the supports with a precursor of palladium [Pd(acac)2], rhodium [Rh(NO3)3], or platinum [Pt(NO2)2(NO3)2]. The materials were characterized by BET surface area, H2 chemisorption, and TEM. Catalytic activity was evaluated for CO2 purification, carried out in a fixed-bed flow reactor containing 150 mg of catalyst at atmospheric pressure. The reactant gas flow was composed of 20% CO2, 10% O2, 0.5% CO, 0.02% NO, and 8.2% H2O (He as carrier gas), with a total flow of 200 mL·min⁻¹ and the same GHSV (2.24×10⁴ h⁻¹). The catalytic performances of the samples were investigated with and without water, showing that the total oxidation of CO occurred over the different materials. This study evidenced an important effect of the nature of the metals and supports, and of the presence or absence of H2O, during the reduction of NO by CO under oxyfuel combustion conditions. For Rh-based catalysts, the addition of water has a very positive influence, especially for Rh on CeO2. Pt-based catalysts keep good activity despite the addition of water, on both supports studied. For NO reduction, the addition of water acts as a poison for Pd catalysts. The interesting results of the Rh-based catalysts with water can be explained by the production of hydrogen through the water-gas shift reaction (WGSR): the produced hydrogen acts as a more effective reductant than CO for NO removal. Furthermore, in TWCs, Rh is the main component responsible for NOx reduction due to its especially high activity for NO dissociation. Moreover, cerium oxide is a promoter for the WGSR.

Keywords: carbon dioxide, environmental chemistry, heterogeneous catalysis

Procedia PDF Downloads 180