Search results for: noise matching
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1592

362 Investigating the Environmental Impact of Tourists' Activities on Yankari Resort and Safari

Authors: Eldah Ephraim Buba, Sanusi Abubakar Sadiq

Abstract:

Habitat can be degraded by tourism leisure activities. For example, wildlife viewing can cause abrupt stress for animals and alter their natural behaviors when tourists come too close, and wildlife watching degrades habitats because it is often accompanied by the noise and commotion created by tourists as they chase wild animals. It is observed that Jos Wild Life Park is usually congested during peak periods, which causes littering and contamination of the environment by tourists and may lead to changes in soil nutrients. Unauthorized feeding of animals by tourists is also observed; the food may be dangerous and harmful to the animals' health and make them aggressive. The aim of the study is to investigate the environmental impact of tourists' activities in Jos Wild Life Park, Nigeria. The study used survey questionnaires administered to both tourists and staff of the wildlife park. One hundred questionnaires were self-administered to randomly selected tourists as they visited the park, as well as to some staff. The average mean score of the responses was used to indicate agreement or disagreement. Major findings show the negative impacts of tourists' activities on the environment to be air pollution, overcrowding and congestion, solid-waste littering, distress to animals, and alteration of the ecosystem. Furthermore, the study found the positive impacts of tourists' activities on the environment to be income generation and infrastructural development. It is recommended that the impact of tourism be minimized by observing the right carrying capacity and conducting impact assessment.

Keywords: environmental, impact, investigation, tourists, activities

Procedia PDF Downloads 327
361 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years, it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, career guidance is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which might lead to problems such as job shifting, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need for an automated model that helps with decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, and decision trees, among other advanced algorithms, are applied to the data to help predict suitable careers. Besides helping users with their career choice, these systems provide numerous features that are useful when making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
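The applicant-classification step described above can be illustrated with a toy k-nearest-neighbours matcher. Everything below (the skill categories, the self-ratings, the career labels, and the choice of k) is invented for illustration and is not data or code from the paper:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Return the majority career label among the k nearest skill profiles."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Invented profiles: (coding, communication, design) self-ratings on a 0-10 scale.
profiles = [
    ((9, 3, 2), "software engineer"),
    ((8, 4, 3), "software engineer"),
    ((2, 9, 4), "HR specialist"),
    ((3, 8, 5), "HR specialist"),
    ((4, 3, 9), "graphic designer"),
]
print(knn_predict(profiles, (8, 2, 3)))  # software engineer
```

Majority voting over the k closest profiles keeps the sketch transparent; a production system would normalise the features and tune k.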

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 51
359 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute Type A aortic dissection is a well-known cause of an extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the biomarkers aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 55
358 The Role of Muzara’ah Islamic Financing in Supporting Smallholder Farmers among Muslim Communities: An Empirical Experience of Yobe Microfinance Bank

Authors: Sheriff Muhammad Ibrahim

Abstract:

The contemporary world has seen many agents of market liberalization, globalization, and expansion in agribusiness, which pose a big threat to the existence of smallholder farmers in the farming business, or at best leave them marginalized by government interventions and investor partnerships, and further stretched by government policies that promote farming able to generate profits and speedy growth by attracting foreign businesses. The consequences of these modern shifts fall largely at the expense of smallholder farmers. Many scholars believe this shift is among the major causes of the urban-rural drift facing almost all communities in the world. In an effort to address these glaring economic crises, governments at different levels and development agencies have created different programs trying to identify other sources of income generation for rural farmers. However, despite the different approaches adopted by many communities and states, the mass rural exodus continues to increase, as rural farmers continue to lose out due to a lack of reliable sources of cost-efficient inputs such as agricultural extension services, mechanization support, quality improved seeds, soil-matched fertilizers, access to credit facilities, and profitable markets for their output. Unfortunately for them, they see these agricultural requirements provided to large-scale farmers, making their farming activities cheaper and their yields higher. This has further created social problems between smallholder farmers and large-scale farmers in many areas. This study suggests the Islamic mode of agricultural financing named Muzara'ah for smallholder farmers, as a microfinance banking product adopted and practiced by Yobe Microfinance Bank, and as a model of agricultural financing to be adopted in other communities. The study adopts a comparative research method and concludes that the Muzara'ah model of financing can be adopted as a valid means of financing smallholder farmers and reducing food insecurity.

Keywords: Muzara'ah, Islamic finance, agricultural financing, microfinance, smallholder farmers

Procedia PDF Downloads 31
357 Forecasting Future Society to Explore Promising Security Technologies

Authors: Jeonghwan Jeon, Mintak Han, Youngjun Kim

Abstract:

Due to the rapid development of information and communication technology (ICT), a substantial transformation is currently happening in society. As the range of intelligent technologies and services continuously expands, 'things' are becoming capable of communicating with one another and even with people. However, such an 'Internet of Things' has a technical weakness: the great amount of information transferred in real time may be widely exposed to security threats. Users' personal data are a typical example facing serious security threats. Security threats will diversify and arise more frequently as the next generation of unfamiliar technologies develops. Moreover, as society becomes increasingly complex, security vulnerability will increase as well. In the existing literature, a considerable number of private and public reports that forecast the future society have been published as a precedent step for selecting future technologies and establishing strategies for competitiveness. Although there are previous studies that forecast security technology, they have focused only on technical issues and overlooked the interrelationships between security technology and social factors. Therefore, investigations of future security threats, and of the security technologies able to protect people from various threats, are required. In response, this study aims to derive potential security threats associated with the development of technology and to explore the security technologies that can protect against them. To do this, first of all, private and public reports that forecast the future, along with online documents from technology-related communities, are collected. By analyzing the data, future issues are extracted and categorized in terms of STEEP (Society, Technology, Economy, Environment, and Politics), as well as security. Second, the components of potential security threats are developed based on the classified future issues. Then, points where the security threats may occur, for example, a mobile payment system based on finger-scan technology, are identified. Lastly, alternatives that prevent potential security threats are proposed by matching security threats with these points and investigating related security technologies from patent data. The proposed approach can identify ICT-related latent security menaces and provide guidelines in a 'problem - alternative' form by linking threat points with security technologies.

Keywords: future society, information and communication technology, security technology, technology forecasting

Procedia PDF Downloads 439
356 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; e.g., in road traffic accident data, accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; on an evenly distributed dataset, however, the proposed method performs almost like a standard neural network.
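One plausible reading of combining k-means with an ANN for unbalanced data, sketched here under that assumption only, is to compress the majority (non-accident) class to a few cluster prototypes before training, so accident records are no longer drowned out. The feature vectors below are invented:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means; returns the k centroid tuples."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in points]
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:  # keep old centroid if a cluster empties
                centroids[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids

# Majority class (no-accident records, invented 2-D features) is compressed
# to k prototypes; all minority (accident) records are kept as-is.
majority = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18),
            (0.90, 0.80), (0.85, 0.75), (0.88, 0.82)]
minority = [(0.50, 0.50), (0.52, 0.48)]
prototypes = kmeans(majority, k=2)
balanced_training_set = prototypes + minority  # 2 prototypes + 2 accident cases
```

The balanced set would then feed the neural network; the exact balancing scheme the authors used is not specified in the abstract.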

Keywords: accident risks estimation, artificial neural network, deep learning, k-mean, road safety

Procedia PDF Downloads 115
355 Quantitative Analysis of Caffeine in Pharmaceutical Formulations Using a Cost-Effective Electrochemical Sensor

Authors: Y. T. Gebreslassie, Abrha Tadesse, R. C. Saini, Rishi Pal

Abstract:

Caffeine, known chemically as 3,7-dihydro-1,3,7-trimethyl-1H-purine-2,6-dione, is a naturally occurring alkaloid classified as an N-methyl derivative of xanthine. Given its widespread use in coffee and other caffeine-containing products, it is the most commonly consumed psychoactive substance in everyday human life. This research aimed to develop a cost-effective, sensitive, and easily manufacturable sensor for the detection of caffeine. An anthraquinone-modified carbon paste electrode (AQMCPE) was fabricated, and the electrochemical behavior of caffeine on this electrode was investigated using cyclic voltammetry (CV) and square wave voltammetry (SWV) in a solution of 0.1 M perchloric acid at pH 0.56. The modified electrode displayed enhanced electrocatalytic activity towards caffeine oxidation, exhibiting a two-fold increase in peak current and an 82 mV shift of the peak potential in the negative direction compared to an unmodified carbon paste electrode (UMCPE). Exploiting the electrocatalytic properties of the modified electrode, SWV was employed for the quantitative determination of caffeine. Under optimized experimental conditions, a linear relationship between peak current and concentration was observed within the range of 2.0 × 10⁻⁶ to 1.0 × 10⁻⁴ M, with a correlation coefficient of 0.998 and a detection limit of 1.47 × 10⁻⁷ M (signal-to-noise ratio = 3). Finally, the proposed method was successfully applied to the quantitative analysis of caffeine in pharmaceutical formulations, yielding recovery percentages ranging from 95.27% to 106.75%.
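The calibration step, peak current versus concentration over the reported linear range, amounts to an ordinary least-squares line. The peak currents below are hypothetical; only the concentration range matches the abstract:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Concentrations (M) span the reported linear range; the peak currents
# (arbitrary units) are invented for illustration.
conc = [2.0e-6, 1.0e-5, 5.0e-5, 1.0e-4]
peak_current = [0.21, 0.98, 4.90, 10.10]
m, b = linear_fit(conc, peak_current)

# Concentration of an unknown sample back-calculated from its peak current:
i_sample = 5.0
c_sample = (i_sample - b) / m
print(f"{c_sample:.2e} M")
```

In practice, the quoted detection limit (S/N = 3) would be computed as three times the blank signal's noise standard deviation divided by this calibration slope.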

Keywords: anthraquinone-modified carbon paste electrode, caffeine, detection, electrochemical sensor, quantitative analysis

Procedia PDF Downloads 25
354 Speech Enhancement Using Wavelet Coefficients Masking with Local Binary Patterns

Authors: Christian Arcos, Marley Vellasco, Abraham Alcaim

Abstract:

In this paper, we present a wavelet coefficients masking approach based on Local Binary Patterns (WLBP) to enhance the temporal spectra of the wavelet coefficients for speech enhancement. This technique exploits the wavelet denoising scheme, which splits the degraded speech into pyramidal subband components and extracts frequency information without losing temporal information. Speech enhancement in each high-frequency subband is performed by binary labels through local binary pattern masking, which encodes the ratio between the original value of each coefficient and the values of the neighbouring coefficients. This approach enhances the high-frequency spectra of the wavelet transform instead of eliminating them through a threshold. A comparative analysis is carried out with conventional speech enhancement algorithms, demonstrating that the proposed technique achieves significant improvements in terms of PESQ, an international recommendation of an objective measure for estimating subjective speech quality. Informal listening tests also show that the proposed method improves the quality of speech in an acoustic context, avoiding the annoying musical noise present in other speech enhancement techniques. Experimental results obtained with a DNN-based speech recognizer in noisy environments corroborate the superiority of the proposed scheme in the robust speech recognition scenario.
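A deliberately simplified reading of the ratio-based binary labelling, assuming a one-dimensional subband and a single keep/attenuate decision per coefficient (the threshold, neighbourhood, and data are all invented, not the authors' exact scheme):

```python
def lbp_mask(coeffs, threshold=1.0):
    """Binary label per coefficient: 1 keeps it, 0 marks it for attenuation.

    A coefficient is kept when its magnitude exceeds `threshold` times the
    mean magnitude of its two neighbours -- a toy version of the ratio-based
    local binary pattern described in the abstract.
    """
    labels = []
    for i, c in enumerate(coeffs):
        left = abs(coeffs[i - 1]) if i > 0 else abs(c)
        right = abs(coeffs[i + 1]) if i < len(coeffs) - 1 else abs(c)
        neighbour_mean = (left + right) / 2 or 1e-12  # avoid zero division
        labels.append(1 if abs(c) >= threshold * neighbour_mean else 0)
    return labels

# A speech-dominated spike survives; flat, noise-like samples are masked.
subband = [0.1, 0.12, 2.5, 0.11, 0.09]
print(lbp_mask(subband))  # [0, 0, 1, 0, 0]
```

Coefficients labelled 0 would then be attenuated rather than zeroed, which is what distinguishes masking from hard thresholding.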

Keywords: binary labels, local binary patterns, mask, wavelet coefficients, speech enhancement, speech recognition

Procedia PDF Downloads 196
353 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria

Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter

Abstract:

Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, and hospitalization. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecast methods considered were linear regression, integrated moving average, ARIMA, and SARIMA modeling for the parametric approach, while the Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) networks were used for the non-parametric models. The performance of each method was evaluated using the Mean Absolute Error (MAE), R-squared (R²), and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data.
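The three evaluation criteria named above have standard closed forms; a minimal sketch with invented monthly case counts:

```python
import math

def mae(y, yhat):
    """Mean Absolute Error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def r2(y, yhat):
    """Coefficient of determination (R-squared)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

observed = [120, 135, 150, 160, 155]  # hypothetical monthly malaria cases
forecast = [118, 140, 148, 158, 160]  # hypothetical model output
print(mae(observed, forecast))                                   # 3.2
print(round(rmse(observed, forecast), 3), round(r2(observed, forecast), 3))  # 3.521 0.942
```

The same three functions can score every model in the comparison, which is what makes the MLP-versus-LSTM-versus-ARIMA ranking possible.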

Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis

Procedia PDF Downloads 32
352 Thermal Vacuum Chamber Test Result for CubeSat Transmitter

Authors: Fitri D. Jaswar, Tharek A. Rahman, Yasser A. Ahmad

Abstract:

A CubeSat in low Earth orbit (LEO) mainly uses an ultra high frequency (UHF) transmitter with fixed radio frequency (RF) output power to download the telemetry and the payload data. The transmitter consumes a large amount of electrical energy during transmission, considering the limited size of a CubeSat. A transmitter with power control ability is designed to optimize the signal-to-noise ratio (SNR) and power consumption. In this paper, a thermal vacuum chamber (TVAC) test is performed to validate the performance of the UHF-band transmitter with power control capability. The TVAC is used to simulate the satellite's condition in the outer space environment. The TVAC test was conducted at the Laboratory of Spacecraft Environment Interaction Engineering, Kyushu Institute of Technology, Japan. The TVAC test used four thermal cycles from +60°C to -20°C for the temperature setting, and the pressure inside the chamber was less than 10⁻⁵ Pa. During the test, the UHF transmitter was integrated in a CubeSat configuration with other CubeSat subsystems such as the on-board computer (OBC), power module, and satellite structure. The system was validated and verified through its performance in terms of frequency stability and RF output power. The UHF-band transmitter output power was tested from 0.5 W to 2 W according to the satellite's modes of operation and power limitations. The frequency stability was measured, and the performance obtained was better than 2 ppm over the tested operating temperature range. The test demonstrates that the RF output power is adjustable in a thermal vacuum condition.
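The reported stability figure is a fractional frequency offset; for a hypothetical 437.5 MHz UHF carrier (the carrier value is an assumption, not from the paper), the conversion works out as:

```python
def ppm_offset(measured_hz, nominal_hz):
    """Fractional frequency offset expressed in parts per million."""
    return (measured_hz - nominal_hz) / nominal_hz * 1e6

# A hypothetical 437.5 MHz UHF carrier drifting by 800 Hz over a thermal cycle
# stays within the < 2 ppm figure reported in the abstract:
nominal = 437.5e6
offset = ppm_offset(nominal + 800.0, nominal)
print(round(offset, 3))  # 1.829
```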

Keywords: communication system, CubeSat, SNR, UHF transmitter

Procedia PDF Downloads 232
351 An Investigation of Challenges in Implementing Sustainable Supply Chain Management for Construction Industry in Thailand by Interpretive Structural Model Approach

Authors: Shaolan Zou, Kullapa Soratana

Abstract:

The construction industry has faced tremendous challenges on sustainability issues in recent years. Building materials are generally non-recyclable and have a short service lifetime, leading to economic loss. Building sites also cause social issues, e.g., noise, hazardous substances, and particulate matter. Sustainable supply chain management (SSCM) has been recognized as an appropriate method to balance the three pillars of sustainability: environment, economy, and society. However, most construction companies cannot successfully adopt SSCM due to numerous challenges. In this study, a list of challenges in implementing SSCM was collected from peer-reviewed literature on sustainable implementation. A building materials company in Thailand, which has successfully adopted SSCM for almost two decades and established a sustainable development committee in 1995, was used as a case study. Management-level representatives in the sustainability department of the company were interviewed, mainly to examine which challenges on the list matched the company's conditions when adopting SSCM. The interview results were analyzed by the interpretive structural model (ISM), together with sustainability experts' opinions, to identify the top five influential challenges. The results could assist a building construction company in assigning appropriate strategies to overcome the most influential barriers, as well as serve as a reference or guidance for other construction companies adopting SSCM.
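At the core of ISM is the reachability matrix, the transitive closure of the pairwise influence matrix; a minimal sketch with an invented 4-challenge influence structure (the abstract does not disclose the actual matrix):

```python
def transitive_closure(adj):
    """Warshall's algorithm on a binary influence matrix (self-loops added),
    producing the reachability matrix used in ISM level partitioning."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for i in range(n):
        reach[i][i] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Invented pairwise influence among 4 SSCM challenges: a 1 in row i,
# column j means "challenge i drives challenge j".
adj = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
reach = transitive_closure(adj)
driving_power = [sum(row) for row in reach]
print(driving_power)  # [4, 3, 2, 1]: challenge 0 drives every other one
```

Ranking challenges by driving power is how ISM surfaces the most influential barriers, which is the "top five" step the study describes.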

Keywords: sustainable supply chain management, challenges, construction industry, interpretive structural model

Procedia PDF Downloads 158
350 Detection of Atrial Fibrillation Using Wearables via Attentional Two-Stream Heterogeneous Networks

Authors: Huawei Bai, Jianguo Yao

Abstract:

Atrial fibrillation (AF) is the most common form of heart arrhythmia and is closely associated with mortality and morbidity in heart failure, stroke, and coronary artery disease. The development of single-spot optical sensors enables widespread photoplethysmography (PPG) screening, especially for AF, since it represents a more convenient and noninvasive approach. To our knowledge, most existing studies based on public and unbalanced datasets can barely handle the multiple noise sources in the real world and also lack interpretability. In this paper, we construct a large-scale PPG dataset using measurements collected from PPG wristwatch devices worn by volunteers and propose an attention-based two-stream heterogeneous neural network (TSHNN). The first stream is a hybrid neural network consisting of a three-layer one-dimensional convolutional neural network (1D-CNN) and a two-layer attention-based bidirectional long short-term memory (Bi-LSTM) network to learn representations from temporally sampled signals. The second stream extracts latent representations from the PPG time-frequency spectrogram using a five-layer CNN. The outputs from both streams are fed into a fusion layer for the outcome. Visualization of the learned attention weights demonstrates the effectiveness of the attention mechanism against noise. The experimental results show that the TSHNN outperforms all the competitive baseline approaches and, with 98.09% accuracy, achieves state-of-the-art performance.

Keywords: PPG wearables, atrial fibrillation, feature fusion, attention mechanism, hybrid network

Procedia PDF Downloads 84
349 Highly Accurate Target Motion Compensation Using Entropy Function Minimization

Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani

Abstract:

One of the defects of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, signal-to-noise ratio (SNR) reduction, and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Compensation for the effects of Target Motion Parameters (TMPs) should therefore be employed. In this paper, such a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization, is proposed. The method is carried out in two major steps: in the first step, a discrete search over the whole acceleration-velocity lattice network, in a specific interval, seeks a less accurate minimum point of the entropy function. In the second step, a 1-D search over velocity is done in the locus of the minimum, for several constant-acceleration lines, in order to enhance the accuracy of the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.
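The two-step search can be sketched as a coarse lattice minimisation followed by a fine 1-D velocity sweep. The quadratic "entropy" below is only a toy stand-in for the paper's image-entropy function, and all ranges and steps are invented:

```python
def frange(lo, hi, step):
    """Inclusive float range, rounding each value to tame accumulation error."""
    xs, x = [], lo
    while x <= hi + 1e-12:
        xs.append(round(x, 10))
        x += step
    return xs

def coarse_then_fine(f, v_range, a_range, coarse_step, fine_step):
    """Stage 1: coarse search over the velocity-acceleration lattice.
    Stage 2: fine 1-D velocity sweep along the best constant-acceleration line."""
    grid = [(v, a) for v in frange(*v_range, coarse_step)
                   for a in frange(*a_range, coarse_step)]
    v0, a0 = min(grid, key=lambda p: f(*p))
    fine = [(v, a0) for v in frange(v0 - coarse_step, v0 + coarse_step, fine_step)]
    return min(fine, key=lambda p: f(*p))

# Toy stand-in for the HRRP entropy, minimised at v = 3.3 m/s, a = 1.0 m/s^2:
entropy = lambda v, a: (v - 3.3) ** 2 + (a - 1.0) ** 2
v_best, a_best = coarse_then_fine(entropy, (0.0, 10.0), (0.0, 2.0), 1.0, 0.1)
print(v_best, a_best)  # 3.3 1.0
```

The coarse pass keeps the expensive entropy evaluations sparse; only the velocity axis, where the minimum is sharpest, gets the fine resolution.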

Keywords: automatic target recognition (ATR), high resolution range profile (HRRP), motion compensation, stepped frequency waveform technique (SFW), target motion parameters (TMPs)

Procedia PDF Downloads 125
348 Interference of Mild Drought Stress on Estimation of Nitrogen Status in Winter Wheat by Some Vegetation Indices

Authors: H. Tavakoli, S. S. Mohtasebi, R. Alimardani, R. Gebbers

Abstract:

Nitrogen (N) is one of the most important agricultural inputs affecting crop growth, yield, and quality in rain-fed cereal production. The N demand of crops varies spatially across fields due to spatial differences in soil conditions. In addition, the response of a crop to fertilizer applications is heavily reliant on plant-available water. Matching N supply to water availability is thus essential to achieve an optimal crop response. The objective of this study was to determine the effect of drought stress on the estimation of the nitrogen status of winter wheat by some vegetation indices. During the 2012 growing season, a field experiment was conducted at the Bundessortenamt (German Plant Variety Office) Marquardt experimental station, located in the village of Marquardt about 5 km northwest of Potsdam, Germany (52°27' N, 12°57' E). The experiment was designed as a randomized split-block design with two replications. Treatments consisted of four N fertilization rates (0, 60, 120, and 240 kg N ha⁻¹ in total) and two water regimes (irrigated and non-irrigated), in a total of 16 plots with dimensions of 4.5 × 9.0 m. The indices were calculated using readings of a spectroradiometer made of tec5 components. The main parts were two "Zeiss MMS1 NIR enh" diode-array sensors with a nominal range of 300 to 1150 nm, a resolution better than 10 nm, and an effective range of 400 to 1000 nm. The following vegetation indices were calculated: NDVI, GNDVI, SR, MSR, NDRE, RDVI, REIP, SAVI, OSAVI, MSAVI, and PRI. All measurements were conducted during the growing season at different plant growth stages: stem elongation (BBCH 32-41), booting (BBCH 43), inflorescence emergence and heading (BBCH 56-58), flowering (BBCH 65-69), and development of fruit (BBCH 71). According to the results obtained, among the indices, NDRE and REIP were least affected by drought stress; they showed strong relations with the nitrogen status of winter wheat and can provide reliable nitrogen status information regardless of the water status of the plant.
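NDVI and NDRE are standard normalised-difference indices computed from band reflectances; the reflectance values below are hypothetical:

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalised Difference Red Edge index; per the study, less affected
    by drought stress than most other indices when estimating N status."""
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical canopy reflectances (as fractions) for a well-fertilised plot:
print(round(ndvi(0.45, 0.08), 3), round(ndre(0.45, 0.30), 3))  # 0.698 0.2
```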

Keywords: nitrogen status, drought stress, vegetation indices, precision agriculture

Procedia PDF Downloads 287
347 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module, which removes noise and artifacts using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study compares the recognition accuracy for 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting among the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction appears particularly important for the design of a low-cost, simple-to-use BCI trained for several words.

Keywords: brain-computer interface, speech recognition, artificial neural network, electroencephalography, EEG, Wernicke area

Procedia PDF Downloads 245
346 Single Atom Manipulation with 4 Scanning Tunneling Microscope Technique

Authors: Jianshu Yang, Delphine Sordes, Marek Kolmer, Christian Joachim

Abstract:

Nanoelectronic devices, for example calculating circuits integrating molecule-scale logic gates and atomic-scale circuits, have recently been constructed and investigated. A major challenge is the characterization of their functional properties, because of the problem of connecting the atomic scale to the micrometer scale. New experimental instruments and new processes have therefore been proposed. To achieve precise measurement at the atomic scale while connecting to a micrometer-scale electrical integration controller, instrumentation continues to be improved. Our new machine, a low-temperature high-vacuum four-probe scanning tunneling microscope, a custom instrument constructed by Omicron GmbH, is expected to scale characterization down to the atomic level. Here, we present our first test results on the performance of this new instrument. The sample we selected is the Au(111) surface. The measurements were taken at 4.2 K. The atomic-resolution surface structure was observed with each of the four scanners, with a noise level better than 3 pm. With a tip-sample distance calibration by I-z spectra, the sample conductance has been derived from locally resolved atomic-scale I-V spectra. Furthermore, the surface conductance measurement has been performed using two methods: (1) by landing two STM tips on the surface with the sample floating; and (2) with the sample floating and one of the landed tips grounded. In addition, single-atom manipulation has been achieved with a modified tip design, which is comparable to a conventional LT-STM.

Keywords: low temperature ultra-high vacuum four scanning tunneling microscope, nanoelectronics, point contact, single atom manipulation, tunneling resistance

Procedia PDF Downloads 256
345 Kinoform Optimisation Using Gerchberg-Saxton Iterative Algorithm

Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin

Abstract:

Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created using different techniques, such as a detour-phase technique, or by direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. Its disadvantage is that the reconstructed image often has poor quality due to the limited dynamic range it is possible to record using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique. In this method, the phase of the original wavefront is recorded but the amplitude is constrained to be constant. The original object does not need to exist physically, so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with a known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by this method.
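A compact sketch of the Gerchberg-Saxton iteration loop for a kinoform, using NumPy FFTs (the square target image and the iteration count are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Alternate between the hologram plane (amplitude forced to 1, phase kept:
    the phase-only kinoform constraint) and the Fourier plane (amplitude forced
    to the target, phase kept)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field = np.exp(1j * phase)                         # phase-only hologram
        recon = np.fft.fft2(field)                         # propagate to image plane
        recon = target_amp * np.exp(1j * np.angle(recon))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(recon))              # back-propagate, keep phase
    return phase

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0           # a simple square as the reference image
kinoform = gerchberg_saxton(target)
recon_amp = np.abs(np.fft.fft2(np.exp(1j * kinoform)))
```

Each pass projects the field onto the two constraint sets in turn, which is why repeated iterations gradually reduce the mismatch between the reconstruction and the reference amplitude.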

Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform

Procedia PDF Downloads 492
344 Recreating Old Gardens, a Dynamic and Sustainable Design Pattern for Urban Green Spaces, Case Study: Persian Garden

Authors: Mina Sarabi, Dariush Sattarzadeh, Mitra Asadollahi Oula

Abstract:

Historically, gardens have reflected the identity and culture of each country. The Persian garden holds a high position in urban planning and architecture and, in the Iranian view, represents a kind of paradise. Nowadays, however, gardens have been replaced with parks and urban open spaces. At the same time, due to the industrial development of cities and increasing air pollution in urban environments, living in these spaces has become problematic for people, and the need to improve ecological conditions is felt more than ever. The purposes of this study are to identify and reproduce the Persian garden pattern, to adapt it to the sustainability features of green spaces in contemporary cities, and to develop meaningful green spaces instead of designing aimless spaces in the urban environment. The research method in this article is analytical and descriptive. Information about the Iranian garden pattern was collected from library documents and articles and through the analysis of case studies. The results reveal that the Persian garden was the main factor in the bond between man and nature, but in the last century this relationship has been in trouble. The garden also has a significant impact on reducing the adverse effects of urban air pollution, noise, and so on. A recreated pattern of Iranian gardens in urban green spaces would not only preserve Iranian identity for future generations but, through the principles of sustainability, could also play an important role in the sustainable development and spatial quality of a city.

Keywords: green open spaces, nature, Persian garden, urban sustainability

Procedia PDF Downloads 211
343 A Method for Evaluating the Mechanical Stress on Mandibular Advancement Devices

Authors: Tsung-yin Lin, Yi-yu Lee, Ching-hua Hung

Abstract:

Snoring, the lay term for obstructive breathing during sleep, is one of the most prevalent of obnoxious human habits. Loud snoring is usually disturbing and uncomfortable for others. Snoring also degrades the sleep quality of snorers' bed partners, who cannot fall asleep easily because of the noise. The reduced sleep quality caused by snoring leads to several medical problems, such as excessive daytime sleepiness, high blood pressure, and an increased risk of cardiovascular disease and cerebral vascular accident. Many non-prescription devices are offered for sale on the market, but very limited data are available to support a beneficial effect of these devices on snoring or their use in treating obstructive sleep apnea (OSA). Mandibular advancement devices (MADs), also termed mandibular reposition devices (MRDs), are removable devices worn at night during sleep. Most devices require a dental impression, bite registration, and fabrication by a dental laboratory. These devices are fixed to the upper and lower teeth and are adjusted to advance the mandible. The amount of protrusion is adjusted to meet the therapeutic requirements, comfort, and tolerance. Many devices have a fixed degree of advancement; some are adjustable within a limited range. This study focuses on the stress analysis of MADs, which are considered a standard treatment for snoring promoted by the American Academy of Sleep Medicine (AASM). This paper proposes a new MAD design, and finite element analysis (FEA) is introduced to perform the stress simulation for this MAD.

Keywords: finite element analysis, mandibular advancement devices, mechanical stress, snoring

Procedia PDF Downloads 337
342 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) of PACE-DCO-OFDM. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
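The LMMSE estimator requires channel statistics, so as a simpler hedged sketch, only the LS pilot estimate is illustrated below (the channel realisation, pilot symbols, and noise level are invented for the example):

```python
import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Least-squares channel estimate at pilot subcarriers: H_hat = Y / X."""
    return y_pilot / x_pilot

rng = np.random.default_rng(1)
n = 16                                               # number of pilot subcarriers
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # true channel (made up)
x = np.ones(n)                                       # known unit pilot symbols
noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = h * x + noise                                    # received pilot observations
h_ls = ls_estimate(y, x)
print(np.mean(np.abs(h_ls - h) ** 2) < 0.02)         # True: small estimation error
```

With unit pilots the LS estimate error is exactly the additive noise, which is why LMMSE, by weighting with the channel and noise statistics, outperforms LS at the same SNR.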

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 151
341 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation

Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu

Abstract:

Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms utilize total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Unlike traditional CS-based reconstruction, which only calculates the x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use an eight-directional TV to transform the CT image into the sparse domain; for 3D reconstruction, we use a 26-directional TV. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful at reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we use both the Shepp-Logan phantom and a head phantom as the targets for reconstruction, with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The multi-directional TV method reconstructs images with relatively fewer artifacts than the traditional CS-based reconstruction algorithm, which only calculates the x-coordinate and y-coordinate TV. We also chose RMSE, PSNR, and UQI as the parameters for quantitative analysis. For every one of these parameters, the proposed multi-directional TV method performs better.
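A rough sketch of an eight-directional TV measure as described above (this toy version sums absolute differences to all 8 neighbours; the paper's exact operator and boundary handling are not given in the abstract):

```python
import numpy as np

def multidirectional_tv(img):
    """Eight-directional TV sketch: sum of absolute differences between each
    pixel and its 8 neighbours. np.roll wraps at the borders, a simplification."""
    tv = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            shifted = np.roll(np.roll(img, di, axis=0), dj, axis=1)
            tv += np.sum(np.abs(img - shifted))
    return tv

flat = np.zeros((32, 32))
noisy = flat + 0.1 * np.random.default_rng(2).standard_normal((32, 32))
print(multidirectional_tv(flat) == 0.0)   # True: a constant image has zero TV
print(multidirectional_tv(noisy) > 0.0)   # True: noise increases the TV penalty
```

Minimising such a penalty during reconstruction favours piecewise-smooth images, which is the mechanism behind the noise reduction reported here; the 3D case extends the neighbourhood to 26 directions.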

Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator

Procedia PDF Downloads 229
340 Setting the Baseline for a Sentinel System for the Identification of Occupational Risk Factors in Africa

Authors: Menouni Aziza, Chbihi Kaoutar, Duca Radu Corneliu, Gilissen Liesbeth, Bounou Salim, Godderis Lode, El Jaafari Samir

Abstract:

In Africa, environmental and occupational health risks are mostly underreported. The aim of this research is to develop and implement a sentinel surveillance system comprising training and guidance of occupational physicians (OC) who will report new work-related diseases in African countries. A group of 30 OC are recruited and trained in each of the partner countries (Morocco, Benin and Ethiopia). Each committed OC is asked to recruit 50 workers during a consultation in a time-frame of 6 months (1500 workers per country). Workers are asked to fill out an online questionnaire about their health status and work conditions, including exposure to 20 chemicals. Urine and blood samples are then collected for human biomonitoring of common exposures. Some preliminary results showed that 92% of the employees surveyed are exposed to physical constraints, 44% to chemical agents, and 24% to biological agents. The most common physical constraints are manual handling of loads, noise pollution and thermal pollution. The most frequent chemical risks are exposure to pesticides and fuels. This project will allow a better understanding of effective sentinel systems as a promising method to gather high quality data, which can support policy-making in terms of preventing emerging work-related diseases.

Keywords: sentinel system, occupational diseases, human biomonitoring, Africa

Procedia PDF Downloads 46
339 Mitigation of Interference in Satellite Communications Systems via a Cross-Layer Coding Technique

Authors: Mario A. Blanco, Nicholas Burkhardt

Abstract:

An important problem in satellite communication systems operating in the Ka and EHF frequency bands is the overall degradation in link performance of mobile terminals due to various degradations in the link/channel, such as fading, blockage of the link to the satellite (especially in urban environments), and intentional as well as other types of interference. In this paper, we focus primarily on the interference problem, and we develop a very efficient and cost-effective solution based on the use of fountain codes. We first introduce a satellite communications (SATCOM) terminal uplink interference channel model of the kind classically deployed against communication systems that use spread-spectrum waveforms. We then consider the use of fountain codes, with a focus on Raptor codes, as our main technique to mitigate the degradation in link/receiver performance due to the interference signal. The performance of the receiver is obtained in terms of the average bit and message error rates as a function of the bit energy-to-noise density ratio, Eb/N0, and other parameters of interest, via a combination of analysis and computer simulations, and we show that the use of fountain codes is extremely effective in overcoming the effects of intentional interference on the performance of the receiver and the associated communication links. We then show that this technique can be extended to mitigate other types of SATCOM channel degradations, such as those caused by channel fading, shadowing, and hard blockage of the uplink signal.
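The abstract does not give the Raptor code construction itself; as a toy stand-in, the basic fountain-coding idea (each transmitted symbol is the XOR of a random subset of source blocks, and the receiver decodes by "peeling" symbols whose subset has a single unknown) can be sketched as:

```python
import random

def lt_encode(blocks, n_symbols, seed=42):
    """Toy LT-style fountain encoder: each output symbol is the XOR of a
    random subset of source blocks, tagged with the subset used."""
    rng = random.Random(seed)
    symbols = []
    for _ in range(n_symbols):
        degree = rng.randint(1, len(blocks))
        idx = rng.sample(range(len(blocks)), degree)
        value = 0
        for i in idx:
            value ^= blocks[i]
        symbols.append((frozenset(idx), value))
    return symbols

def peel_decode(symbols, n_blocks):
    """Belief-propagation ('peeling') decoder for the toy encoder above."""
    known = {}
    pending = [(set(s), v) for s, v in symbols]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for idx, val in pending:
            unknown = idx - known.keys()
            if len(unknown) == 1:            # exactly one block still unknown
                i = unknown.pop()
                v = val
                for j in idx - {i}:
                    v ^= known[j]            # strip off already-known blocks
                known[i] = v
                progress = True
    return [known.get(i) for i in range(n_blocks)]

data = [0x12, 0x34, 0x56, 0x78]
# With enough received symbols, peeling typically recovers every source block
decoded = peel_decode(lt_encode(data, 12), len(data))
```

Practical Raptor codes add a precode and an optimised degree distribution, but the rateless property sketched here (any sufficiently large subset of symbols suffices) is what makes fountain codes robust to interference-induced erasures.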

Keywords: SATCOM, interference mitigation, fountain codes, turbo codes, cross-layer

Procedia PDF Downloads 325
338 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine-learning-based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires large amounts of high-quality, labeled training image data. The training images may come from different sources and be acquired on different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of digital radiographs based on the pixel intensity values of digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified, off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can adequately preserve the important information in lung fields, local structures, and the global visual effect. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in medical imaging applications.
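A minimal sketch of the histogram-matching step (classic CDF matching of a "digital" image to a "digitized" reference; both images below are synthetic placeholders, not CXRs):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities through the two CDFs so the output follows the
    reference image's intensity distribution (classic histogram matching)."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(3)
digital = rng.integers(0, 256, (64, 64)).astype(float)     # stand-in "digital" image
digitized = 0.5 * rng.integers(0, 256, (64, 64)) + 64.0    # narrower-range reference
out = match_histogram(digital, digitized)
print(out.min() >= digitized.min() and out.max() <= digitized.max())  # True
```

The mapping pulls every output intensity into the reference range, which is how the digital radiographs are normalized toward the intensity statistics of the digitized films.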

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 178
337 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: constant-sliding-window analysis with a wide window has low reliability and low temporal precision for short connectivity intervals, while a narrow window is highly susceptible to noise. The method overcomes these shortcomings by dynamically adjusting the window size using the RICI rule, and it extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed, based on the same analysis method. As far as we know, this research shows for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
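A hedged sketch of the imaginary part of the complex Pearson correlation coefficient, computed here from analytic signals obtained with SciPy's Hilbert transform (the adaptive RICI windowing of the paper is not reproduced; the signals are synthetic):

```python
import numpy as np
from scipy.signal import hilbert

def imag_cpcc(x, y):
    """Imaginary part of the complex Pearson correlation between the analytic
    signals of x and y; zero-lag (volume-conduction-like) coupling cancels out."""
    ax, ay = hilbert(x), hilbert(y)
    ax = ax - ax.mean()
    ay = ay - ay.mean()
    r = np.sum(ax * np.conj(ay)) / np.sqrt(
        np.sum(np.abs(ax) ** 2) * np.sum(np.abs(ay) ** 2))
    return r.imag

t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)                   # 10 Hz oscillation
y_lag = np.sin(2 * np.pi * 10 * t - np.pi / 2)   # same oscillation, 90-degree lag
print(abs(imag_cpcc(x, x)) < 1e-9)     # True: zero-lag coupling is suppressed
print(round(imag_cpcc(x, y_lag), 3))   # 1.0: lagged coupling is captured
```

Sensitivity to lagged rather than instantaneous coupling is the usual motivation for using the imaginary component in EEG connectivity, since instantaneous correlations are dominated by volume conduction.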

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 169
336 Design and Analysis of Crankshaft Using Al-Al2O3 Composite Material

Authors: Palanisamy Samyraj, Sriram Yogesh, Kishore Kumar, Vaishak Cibi

Abstract:

This project concerns the design and analysis of a crankshaft using an Al-Al2O3 composite material. It is mainly concentrated on two areas: designing and analyzing the composite material, and working on the practical model. Growing competition and growing concern for the environment have forced automobile manufacturers to meet conflicting demands such as increased power and performance, lower fuel consumption, lower pollutant emissions, and decreased noise and vibration. Metal matrix composites offer good properties for a number of automotive components. This work reports on studies of Al-Al2O3 as a possible alternative material for a crankshaft. These materials have been considered for various engine components due to their high strength-to-weight ratio, and are valued for their light weight, high strength, high specific modulus, low coefficient of thermal expansion, and good wear-resistance properties. In addition, the high specific stiffness, superior high-temperature mechanical properties, and oxidation resistance of Al2O3 have led to advanced Al-Al2O3 composites. Crankshafts are widely used in the automobile industry. The crankshaft is connected to the connecting rod for the movement of the piston and is subjected to high stresses that cause its wear. Hence, using a composite material in the crankshaft gives good fuel efficiency, low manufacturing cost, and less weight.

Keywords: metal matrix composites, Al-Al2O3, high specific modulus, strength to weight ratio

Procedia PDF Downloads 244
335 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one that extends to their lexical and syntactic levels of organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMMs) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD, an extension of eigendecomposition to non-square matrices, is used to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by adequate Type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides offering better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
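A rough sketch of the SVD feature-extraction idea (reducing a gesture image to its leading singular values; the image below is random placeholder data, and the choice of k = 10 is illustrative):

```python
import numpy as np

def svd_features(gray_image, k=10):
    """Reduce an image to a compact feature vector: its k largest singular
    values, which capture the dominant geometric structure of the matrix."""
    s = np.linalg.svd(gray_image, compute_uv=False)  # singular values, descending
    return s[:k] / np.linalg.norm(s[:k])             # normalised for scale invariance

rng = np.random.default_rng(4)
img = rng.random((48, 64))        # placeholder grayscale gesture image
feats = svd_features(img)
print(feats.shape)                # (10,)
print(bool(np.all(np.diff(feats) <= 0)))  # True: values are non-increasing
```

A fixed-length vector like this can then serve as the observation sequence input for HMM-style classifiers, regardless of the original image dimensions.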

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 429
334 Nonlinear Passive Shunt for Electroacoustic Absorbers Using Nonlinear Energy Sink

Authors: Diala Bitar, Emmanuel Gourdon, Claude H. Lamarque, Manuel Collet

Abstract:

Acoustic absorber devices play an important role in reducing noise along the propagation and reception paths. An electroacoustic absorber consists of a loudspeaker coupled to an electric shunt circuit, where the membrane plays the role of an absorber/reflector of sound. Although the use of linear shunt resistors at the transducer terminals has been shown to improve the performance of dynamical absorbers, it is only efficient in a narrow frequency band. Therefore, and since nonlinear phenomena are promising for their ability to absorb vibrations and sound over a larger frequency range, we propose to couple a nonlinear electric shunt circuit to the loudspeaker terminals. The equivalent model can then be described by a 2-degree-of-freedom system, consisting of a primary linear oscillator describing the dynamics of the loudspeaker membrane, linearly coupled to a cubic nonlinear energy sink (NES). The system is treated analytically for the case of 1:1 resonance, using an invariant manifold approach at different time scales. The proposed methodology enables us to detect the equilibrium points and fold singularities at the first slow time scale, providing a predictive tool for designing the nonlinear shunt circuit during the energy exchange process. The preliminary results are promising: a significant improvement of the acoustic absorption performance is obtained.

Keywords: electroacoustic absorber, multiple-time-scale with small finite parameter, nonlinear energy sink, nonlinear passive shunt

Procedia PDF Downloads 189
333 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on damage to gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signal of gear pairs in operation is a very reliable method. Therefore, a suitable vibration-signal processing technique is necessary to extract defect information that is generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, even when more than one family of sidebands is present at the same time. The concepts behind the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, while the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the fault diagnosis of gear faults based on the order cepstrum is also presented. The procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
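A minimal sketch of cepstrum analysis on a synthetic gear-fault-like signal (the sampling rate, carrier frequency, and modulation rate below are invented, not taken from the test stand):

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum. A family of
    sidebands spaced df apart in frequency collapses into a single peak at
    quefrency 1/df."""
    spectrum = np.abs(np.fft.fft(signal))
    return np.fft.ifft(np.log(spectrum + 1e-12)).real

fs = 1000                        # Hz, hypothetical sampling rate
t = np.arange(0, 1, 1 / fs)
# Gear-mesh-like signal: a 200 Hz tone amplitude-modulated at a 20 Hz "fault"
# rate, which creates sidebands at 180 and 220 Hz
x = (1 + 0.5 * np.sin(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 200 * t)
c = real_cepstrum(x)
quefrency = 30 + np.argmax(c[30:80])   # search away from the low-quefrency region
print(quefrency)  # 50 samples = 50 ms = 1/(20 Hz), the sideband spacing
```

The 20 Hz sideband family, spread across several spectral lines, appears as one cepstral peak, which is exactly the property exploited when a pitting defect modulates the gear-mesh vibration once per revolution.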

Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals

Procedia PDF Downloads 352