Search results for: algebraic signal processing

4323 Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia

Authors: Kaaryn M. Cater

Abstract:

Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning-related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the measure of HSP is a 27-item self-test known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of the study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder found in approximately 10% of the population, with a variety of symptoms including visual fatigue, headaches and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex phonologically based information processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first-year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study. Participants will be asked to complete a battery of online questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. All participants whose scores on any (or some) of the three questionnaires suggest a minority method of information processing will receive an invitation to meet with a learning advisor and will be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the Question Pro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors among SPS, MIS and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years of the study until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017.

Keywords: dyslexia, highly sensitive person (HSP), Meares-Irlen Syndrome (MIS), minority forms of information processing, sensory processing sensitivity (SPS)

Procedia PDF Downloads 248
4322 Exploring the Applications of Modular Forms in Cryptography

Authors: Berhane Tewelday Weldhiwot

Abstract:

This research investigates the pivotal role of modular forms in modern cryptographic systems, particularly focusing on their applications in secure communications and data integrity. Modular forms, which are complex analytic functions with rich arithmetic properties, have gained prominence due to their connections to number theory and algebraic geometry. This study begins by outlining the fundamental concepts of modular forms and their historical development, followed by a detailed examination of their applications in cryptographic protocols such as elliptic curve cryptography and zero-knowledge proofs. By employing techniques from analytic number theory, the research delves into how modular forms can enhance the efficiency and security of cryptographic algorithms. The findings suggest that leveraging modular forms not only improves computational performance but also fortifies security measures against emerging threats in digital communication. This work aims to contribute to the ongoing discourse on integrating advanced mathematical theories into practical applications, ultimately fostering innovation in cryptographic methodologies.
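
For orientation, the display below recalls the standard textbook definition of a modular form of weight k for SL₂(ℤ), together with its Fourier expansion; these are reference formulas only and are not drawn from the study itself. It is this transformation law, combined with the modularity theorem linking such forms to elliptic curves over ℚ, that underlies their relevance to elliptic curve cryptography mentioned above.

```latex
% Reference definitions (standard, not taken from the abstract above).
% A modular form of weight k for SL_2(Z), defined for Im(tau) > 0, satisfies
\[
  f\!\left(\frac{a\tau + b}{c\tau + d}\right) = (c\tau + d)^{k} f(\tau),
  \qquad
  \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL_2(\mathbb{Z}),
\]
% together with a Fourier (q-)expansion whose coefficients carry the arithmetic data:
\[
  f(\tau) = \sum_{n \ge 0} a_n q^{n}, \qquad q = e^{2\pi i \tau}.
\]
```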

Keywords: modular forms, cryptography, elliptic curves, applications, mathematical theory

Procedia PDF Downloads 23
4321 Generalization of Tsallis Entropy from a Q-Deformed Arithmetic

Authors: J. Juan Peña, J. Morales, J. García-Ravelo, J. García-Martínez

Abstract:

It is known that, by introducing alternative forms of the exponential and logarithmic functions, the Tsallis entropy S_q is obtained as a generalization of the Shannon entropy S. In this work, from a deformation through a scaling function applied to the differential operator, it is possible to generate a q-deformed calculus as well as a q-deformed arithmetic, which allows generalizing not only the exponential and logarithmic functions but also any other standard function. The updated q-deformed differential operator leads to an updated integral operator under which the functions are integrated together with a weight function. For each differentiable function, it is possible to identify its q-deformed partner, which is useful for generalizing other algebraic relations proper to the original functions. As an application of this proposal, a generalization of the exponential and logarithmic functions is studied in such a way that their relationship with the thermodynamic functions, particularly the entropy, allows us to obtain a q-deformed expression of these. As a result, from a particular scaling function applied to the differential operator, a q-deformed arithmetic is obtained, leading to the generalization of the Tsallis entropy.
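
For reference, the standard q-deformed logarithm and exponential and the Tsallis entropy they generate are recalled below; these are the usual textbook forms, not the specific scaling-function construction proposed in this work.

```latex
% Standard q-deformed functions and Tsallis entropy (reference forms only).
\[
  \ln_q(x) = \frac{x^{\,1-q} - 1}{1-q}, \qquad
  e_q(x) = \bigl[\,1 + (1-q)\,x\,\bigr]_+^{\frac{1}{1-q}},
\]
\[
  S_q = k\,\frac{1 - \sum_i p_i^{\,q}}{q - 1}
  \;\xrightarrow[\;q \to 1\;]{}\;
  S = -k \sum_i p_i \ln p_i ,
\]
% so the Shannon entropy is recovered in the limit q -> 1.
```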

Keywords: q-calculus, q-deformed arithmetic, entropy, exponential functions, thermodynamic functions

Procedia PDF Downloads 78
4320 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence

Authors: Muhammad Bilal Shaikh

Abstract:

Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.
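
As a rough illustration of the multimodal fusion described above, the minimal sketch below concatenates sensor, image, and text features and feeds them to a single throughput regressor. The feature names, data shapes, and model choice are hypothetical stand-ins, not the authors' pipeline.

```python
# Minimal sketch of multimodal feature fusion for throughput prediction.
# Feature names and data shapes are hypothetical; this is not the authors' system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_extraction.text import TfidfVectorizer

rng = np.random.default_rng(0)
n = 200

# 1) Sensor modality: e.g. feed rate, mill power, pulp density (simulated here).
sensor_feats = rng.normal(size=(n, 3))

# 2) Image modality: simple frame statistics stand in for CNN embeddings.
frames = rng.random(size=(n, 64, 64))
image_feats = np.stack([frames.mean(axis=(1, 2)), frames.std(axis=(1, 2))], axis=1)

# 3) Text modality: maintenance-log snippets turned into TF-IDF vectors.
logs = ["routine inspection", "pump vibration high", "liner wear observed",
        "normal operation"] * (n // 4)
text_feats = TfidfVectorizer().fit_transform(logs).toarray()

# Fuse the modalities by concatenation and train a throughput regressor.
X = np.hstack([sensor_feats, image_feats, text_feats])
y = rng.normal(loc=100.0, scale=5.0, size=n)      # tonnes/hour, simulated
model = GradientBoostingRegressor().fit(X, y)
print("Predicted throughput for first sample:", model.predict(X[:1])[0])
```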

Keywords: multimodal AI, computer vision, NLP, mineral processing, mining

Procedia PDF Downloads 68
4319 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals

Authors: Naser Safdarian, Nader Jafarnia Dabanloo

Abstract:

In this paper we used four features, i.e. the Q-wave integral, QRS-complex integral, T-wave integral and total integral, extracted from normal and patient ECG signals, for the detection and localization of myocardial infarction (MI) in the left ventricle of the heart. Our research focused on the detection and localization of MI in the standard ECG. We use the Q-wave integral and T-wave integral because these features leave an important imprint in the detection of MI. We used pattern recognition methods such as Artificial Neural Networks (ANN) to detect and localize the MI, because these methods have good accuracy for the classification of normal and abnormal signals. We used one type of Radial Basis Function (RBF) network called the Probabilistic Neural Network (PNN) because of its nonlinearity property, and used other classifiers such as k-Nearest Neighbors (KNN), the Multilayer Perceptron (MLP) and Naive Bayes classification. We used the PhysioNet database as our training and test data. We reached over 80% accuracy on the test data for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy. The classification accuracy can also be improved by adding more features to this method. A simple method based on only four features extracted from the standard ECG is presented, which has good accuracy in MI localization.
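
A minimal sketch of the feature-and-classifier stage is given below: the four wave integrals are computed per beat and passed to a simple classifier. The beat-segmentation indices and toy data are placeholders; real boundaries would come from QRS/T-wave delineation of the 12-lead ECG.

```python
# Sketch of the four integral features and a simple classification stage.
# Segment boundaries below are hypothetical placeholders, not the paper's delineation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def integral_features(beat, fs=360.0, q=(0, 40), qrs=(40, 100), t=(100, 250)):
    """Return Q-wave, QRS-complex, T-wave and total integrals of one beat."""
    dt = 1.0 / fs
    q_int   = np.trapz(beat[q[0]:q[1]], dx=dt)
    qrs_int = np.trapz(beat[qrs[0]:qrs[1]], dx=dt)
    t_int   = np.trapz(beat[t[0]:t[1]], dx=dt)
    total   = np.trapz(beat, dx=dt)
    return np.array([q_int, qrs_int, t_int, total])

# Toy data: 100 "beats" of 300 samples each with binary labels (normal / MI).
rng = np.random.default_rng(1)
beats = rng.normal(size=(100, 300))
labels = rng.integers(0, 2, size=100)

X = np.array([integral_features(b) for b in beats])
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```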

Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition

Procedia PDF Downloads 456
4318 Systematic Literature Review of Therapeutic Use of Autonomous Sensory Meridian Response (ASMR) and Short-Term ASMR Auditory Training Trial

Authors: Christine H. Cubelo

Abstract:

This study consists of two parts: a systematic review of current publications on the therapeutic use of autonomous sensory meridian response (ASMR) and a within-subjects auditory training trial using ASMR videos. The main intent is to explore ASMR as potentially therapeutically beneficial for those with atypical sensory processing. Many hearing-related disorders and mood or anxiety symptoms overlap with symptoms of sensory processing issues. For this reason, the inclusion and exclusion criteria of the systematic review were generated in an effort to produce optimal search outcomes and to avoid overly confined criteria that would limit the yielded results. The criteria for inclusion in the review for Part 1 were (1) adult participants diagnosed with hearing loss or atypical sensory processing, (2) inclusion of measures related to ASMR as a treatment method, and (3) publication between 2000 and 2022. A total of 1,088 publications were found in the preliminary search, and a total of 13 articles met the inclusion criteria. A total of 14 participants completed the trial and the post-trial questionnaire. Of all responses, 64.29% agreed that the duration of the auditory training sessions was reasonable. In addition, 71.43% agreed that the training improved their perception of music. Lastly, 64.29% agreed that the training improved their perception of a primary talker when other talkers or background noises are present.

Keywords: autonomous sensory meridian response, auditory training, atypical sensory processing, hearing loss, hearing aids

Procedia PDF Downloads 56
4317 An Image Enhancement Method Based on Curvelet Transform for CBCT-Images

Authors: Shahriar Farzam, Maryam Rastgarpour

Abstract:

Image denoising plays an extremely important role in digital image processing. Curvelet-based enhancement of clinical images has developed rapidly in recent years. In this paper, we present a method for image contrast enhancement of cone beam CT (CBCT) images based on the fast discrete curvelet transform (FDCT) implemented via the unequally spaced fast Fourier transform (USFFT). This transform returns a table of curvelet transform coefficients indexed by a scale parameter, an orientation and a spatial location. Accordingly, the coefficients obtained from FDCT-USFFT can be modified in order to enhance contrast in an image. Our proposed method first applies a two-dimensional mathematical transform, namely the FDCT via the unequally spaced fast Fourier transform, to the input image and then applies thresholding to the curvelet coefficients to enhance the CBCT images. Consequently, applying the unequally spaced fast Fourier transform leads to an accurate reconstruction of the image with high resolution. The experimental results indicate that the performance of the proposed method is superior to the existing ones in terms of Peak Signal-to-Noise Ratio (PSNR) and Effective Measure of Enhancement (EME).
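
The sketch below illustrates the coefficient-thresholding idea. Because a widely available FDCT-USFFT implementation cannot be assumed here, a wavelet transform from PyWavelets stands in for the curvelet transform; only the generic transform-threshold-reconstruct pattern is shown, not the paper's exact method.

```python
# Coefficient-thresholding sketch. PyWavelets stands in for the FDCT-USFFT used
# in the paper; the transform-threshold-reconstruct pattern is the same idea.
import numpy as np
import pywt

def enhance(image, wavelet="db4", level=3, k=2.0):
    """Threshold transform coefficients and reconstruct the image."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for (cH, cV, cD) in details:
        # Soft-threshold each detail sub-band at k * (robust noise estimate).
        sigma = np.median(np.abs(cD)) / 0.6745
        thr = k * sigma
        new_details.append(tuple(pywt.threshold(c, thr, mode="soft")
                                 for c in (cH, cV, cD)))
    return pywt.waverec2([approx] + new_details, wavelet)

# Toy noisy slice standing in for a CBCT image.
rng = np.random.default_rng(0)
img = rng.normal(loc=0.5, scale=0.1, size=(128, 128))
out = enhance(img)
print(out.shape)
```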

Keywords: curvelet transform, CBCT, image enhancement, image denoising

Procedia PDF Downloads 300
4316 Numerical Solution of Porous Media Equation Using Jacobi Operational Matrix

Authors: Shubham Jaiswal

Abstract:

During the modeling of transport phenomena in porous media, many nonlinear partial differential equations (NPDEs) are encountered that describe the convection, diffusion and reaction processes. To solve such types of nonlinear problems, a reliable and efficient technique is needed. In this article, the numerical solution of NPDEs encountered in porous media is derived. Here, the Jacobi collocation method is used to solve the considered problems; it converts the NPDEs into systems of nonlinear algebraic equations that can be solved using the Newton-Raphson method. The numerical results of some illustrative examples are reported to show the efficiency and high accuracy of the proposed approach. The comparison of the numerical results with the existing analytical results already reported in the literature, together with the error analysis for each example exhibited through graphs and tables, confirms the exponential convergence rate of the proposed method.
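
As a point of reference, a standard one-dimensional porous medium equation and the collocation idea can be written as follows; this is a representative form of the problem class, stated for orientation rather than taken from the paper.

```latex
% A standard (1-D) porous medium equation, one representative member of the NPDE class:
\[
  \frac{\partial u}{\partial t} = \frac{\partial^{2}}{\partial x^{2}}\bigl(u^{m}\bigr),
  \qquad m \ge 1 .
\]
% Approximating u by a truncated series of shifted Jacobi polynomials P_k and
% enforcing the residual at collocation points x_j turns the PDE into a
% nonlinear algebraic system for the coefficients c_k(t):
\[
  u(x,t) \approx \sum_{k=0}^{N} c_k(t)\, P_k(x), \qquad
  R\bigl(x_j, \mathbf{c}\bigr) = 0, \quad j = 0,\dots,N,
\]
% which is then solved by the Newton-Raphson method.
```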

Keywords: nonlinear porous media equation, shifted Jacobi polynomials, operational matrix, spectral collocation method

Procedia PDF Downloads 440
4315 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification against a single SNR threshold with two labels: less than, or greater than or equal to, that threshold. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 dB ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
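
The sketch below shows the divide-and-conquer fusion idea: one binary classifier per SNR threshold, combined by counting positive decisions. Logistic regressions on a toy scalar feature stand in for the per-threshold CNNs of the paper; thresholds and data are illustrative.

```python
# Sketch of divide-and-conquer SNR classification: one binary classifier per SNR
# threshold ("< thr" vs ">= thr"), fused by counting positive decisions.
# Logistic regressions stand in for the per-threshold CNNs of the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

thresholds = np.arange(-16, 32, 4)                  # candidate SNR levels in dB
rng = np.random.default_rng(0)

# Toy feature: one noisy scalar per sample that grows with the true SNR.
true_snr = rng.uniform(-20, 32, size=2000)
X = (true_snr + rng.normal(scale=3.0, size=true_snr.shape)).reshape(-1, 1)

# Train one binary classifier per threshold.
models = []
for thr in thresholds:
    y = (true_snr >= thr).astype(int)
    models.append(LogisticRegression().fit(X, y))

def fuse(x):
    """Predicted SNR = highest threshold for which the ensemble votes '>= thr'."""
    votes = np.array([m.predict(x.reshape(1, -1))[0] for m in models])
    n_pos = int(votes.sum())
    return float(thresholds[n_pos - 1]) if n_pos > 0 else float(thresholds[0] - 4)

print("Estimated SNR for a sample with true SNR 10 dB:", fuse(np.array([10.0])))
```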

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 135
4314 Iris Cancer Detection System Using Image Processing and Neural Classifier

Authors: Abdulkader Helwan

Abstract:

Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks offers great efficiency for the diagnosis and detection of iris cancer. Image processing techniques improve the diagnosis of the cancer by enhancing the quality of the images, so that physicians can diagnose properly, while neural networks can help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma, the so-called iris cancer. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris and sclera circles and the cancer. Principal component analysis is used to reduce the image size and to extract features. These features then serve as inputs to a neural network, which is capable of deciding whether the eye is cancerous or not based on the experience acquired over many training iterations with different normal and abnormal eye images during the training phase. Normal images are obtained from a public database available on the internet, “Mile Research”, while the abnormal ones are obtained from another database, “eyecancer”. The experimental results for the proposed system show a high accuracy of 100% for detecting cancer and making the right decision.
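
A compact sketch of the described pipeline follows: grayscale conversion, filtering, Prewitt edge detection, PCA feature reduction, and a neural classifier. The data here is synthetic; the real system uses labelled normal and abnormal eye images from the databases mentioned above.

```python
# Sketch of the pipeline: grayscale -> filter -> Prewitt edges -> PCA -> neural classifier.
# Images and labels below are synthetic placeholders.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian, prewitt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def edge_features(rgb_image):
    gray = rgb2gray(rgb_image)            # 1) convert to grayscale
    smooth = gaussian(gray, sigma=1.0)    # 2) denoise / filter
    edges = prewitt(smooth)               # 3) Prewitt edge map (iris/sclera/cancer)
    return edges.ravel()

rng = np.random.default_rng(0)
images = rng.random(size=(40, 32, 32, 3))          # toy "eye" images
labels = rng.integers(0, 2, size=40)               # 0 = normal, 1 = cancerous

features = np.array([edge_features(im) for im in images])
reduced = PCA(n_components=10).fit_transform(features)   # 4) feature reduction
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000).fit(reduced, labels)
print("Training accuracy:", clf.score(reduced, labels))
```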

Keywords: iris cancer, intraocular melanoma, cancerous, prewitt edge detection algorithm, sclera

Procedia PDF Downloads 504
4313 Exploring the Potential of Replika: An AI Chatbot for Mental Health Support

Authors: Nashwah Alnajjar

Abstract:

This research paper provides an overview of Replika, an AI chatbot application that uses natural language processing technology to engage in conversations with users. The app was developed to provide users with a virtual AI friend who can converse with them on various topics, including mental health. This study explores the experiences of Replika users using quantitative research methodology. A survey was conducted with 12 participants to collect data on their demographics, usage patterns, and experiences with the Replika app. The results showed that Replika has the potential to play a role in mental health support and well-being.

Keywords: Replika, chatbot, mental health, artificial intelligence, natural language processing

Procedia PDF Downloads 89
4312 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption

Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský

Abstract:

Oil sludge, whose main characteristic is high acidity, is a waste product generated from the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, is present in Slovakia as well. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents and special white oils for cosmetic and medical purposes. Seventy years ago, in the period when this historical acid-sludge burden was created, production took precedence over environmental awareness. That is the reason why, as in many countries, a historical environmental burden is still present in Slovakia: 229 211 m3 of oil sludge in the middle of the National Park of Nízke Tatry mountain chain. None of the treatment methods tried, biological or non-biological, proved suitable for processing or recovery, for several reasons: strong aggressivity, difficulty with handling because of its sludgy and liquid state, and similar factors. As a potential solution, incineration was also tested, but it was not proven to be a suitable method, as the concentration of SO2 in the combustion gases was too high and could not be decreased below the acceptable value of 2000 mg.mn-3. That is the reason why the operation of the incineration plant was terminated, and the acid sludge landfills remain present to this day. The objective of this paper is to present a new possibility for processing and valorization of this acidic sludgy waste. The processing of oil sludge was performed through effective separation by thermal desorption technology, through which it is possible to split the sludgy material into the matrix (soil, sediments) and the organic contaminants. In order to boost the efficiency of processing acid sludge through thermal desorption, the work will present the possibility of applying an original technology, the Method of Blowing Decomposition, for recovering organic matter into technological lubricating oil.

Keywords: hazardous waste, oil sludge, remediation, thermal desorption

Procedia PDF Downloads 200
4311 Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks

Authors: Jacqueline Rose T. Alipo-on, Francesca Isabelle F. Escobar, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar Al Dahoul

Abstract:

Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, the traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With advances in computing paradigms, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
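
A minimal 1-D CNN in the spirit of the models compared above is sketched below. Layer sizes, the 187-sample beat length, and the random data are illustrative assumptions, not the paper's configuration.

```python
# Minimal 1-D CNN sketch for five-class heartbeat classification.
# Layer sizes and the 187-sample beat length are assumptions, not the paper's settings.
import numpy as np
from tensorflow.keras import layers, models

n_samples, beat_len, n_classes = 256, 187, 5
x = np.random.rand(n_samples, beat_len, 1).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)

model = models.Sequential([
    layers.Input(shape=(beat_len, 1)),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))
```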

Keywords: heartbeat classification, convolutional neural network, electrocardiogram signals, generative adversarial networks, long short-term memory, ResNet-50

Procedia PDF Downloads 130
4310 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; due to the unavailability of such devices for practical daily use, it remains difficult to screen and subsequently regulate blood pressure. The complexities which hamper steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. The system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables considering that it allows for uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by reducing unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured the blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
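
The sketch below illustrates the regression stage only: a few hand-crafted key-point features of a PPG pulse are mapped to systolic and diastolic pressure. The feature definitions and the regressor are illustrative assumptions; the paper's exact key points are not reproduced here.

```python
# Sketch of the regression stage: simple key-point features from a PPG pulse
# mapped to systolic/diastolic blood pressure. Features are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ppg_features(pulse, fs=100.0):
    """A few simple key-point features of one PPG pulse."""
    peak_idx = int(np.argmax(pulse))
    rise_time = peak_idx / fs                                   # foot-to-peak time
    amplitude = float(pulse[peak_idx] - pulse[0])               # pulse amplitude
    width = float(np.sum(pulse > 0.5 * pulse[peak_idx]) / fs)   # half-height width
    return [rise_time, amplitude, width]

rng = np.random.default_rng(0)
pulses = rng.random(size=(60, 100))                # toy PPG pulses
sbp = rng.normal(120, 10, size=60)                 # cuff-based reference SBP
dbp = rng.normal(80, 8, size=60)                   # cuff-based reference DBP

X = np.array([ppg_features(p) for p in pulses])
reg = RandomForestRegressor(n_estimators=100).fit(X, np.column_stack([sbp, dbp]))
print("Predicted [SBP, DBP] for first pulse:", reg.predict(X[:1])[0])
```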

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 279
4309 Graph-Oriented Summary for Optimized Resource Description Framework Graphs Streams Processing

Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais

Abstract:

Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window often expires before the entire session finishes, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of the RDF data streams, as the most relevant and pertinent information of the data is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs through three main approaches: (1) an approach for user queries (SPARQL) in order to extract their needs and group them into a more global query, (2) an extension of the closeness centrality measure issued from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and the most expected approximate query results on summarized graphs compared to the source ones.
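
The sketch below shows the centrality-based part of the idea: load an RDF graph, project it onto an undirected graph, score nodes by closeness centrality, and keep only the triples touching the most central nodes. The tiny Turtle snippet and the top-k cut-off are illustrative; real input would be the RDF stream windows and the query-derived needs described above.

```python
# Sketch of centrality-based RDF summarization: keep triples around the most
# central nodes. The Turtle data and the top-k value are illustrative only.
import networkx as nx
import rdflib

ttl = """
@prefix ex: <http://example.org/> .
ex:sensor1 ex:locatedIn ex:station1 .
ex:sensor1 ex:measures  ex:temperature .
ex:station1 ex:partOf   ex:network1 .
"""
g = rdflib.Graph().parse(data=ttl, format="turtle")

# Project RDF triples onto a NetworkX graph (subjects/objects as nodes).
nxg = nx.Graph()
for s, p, o in g:
    nxg.add_edge(str(s), str(o), predicate=str(p))

# Closeness centrality as the informativeness score; keep the top-k nodes.
scores = nx.closeness_centrality(nxg)
top_k = sorted(scores, key=scores.get, reverse=True)[:2]
summary_triples = [(s, p, o) for s, p, o in g
                   if str(s) in top_k or str(o) in top_k]
print(len(summary_triples), "triples kept in the summary")
```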

Keywords: centrality measures, RDF graphs summary, RDF graphs stream, SPARQL query

Procedia PDF Downloads 203
4308 A Fast Convergence Subband BSS Structure

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed in which we use a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive sub-band schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of the adaptive algorithm for correlated input signals. In this work the reduction in computational complexity is achieved with the use of adaptive filters of orders less than those of the full-band adaptive filters, which operate at a sampling rate lower than the sampling rate of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the input signal at full bandwidth, and can promote better rates of convergence.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 556
4307 Cascade Control for Pressure Calibration by Fieldbus Communication System

Authors: Chatchaval Pornpatkul, Wipawan Suksathid

Abstract:

This paper studies and controls the pressure of the water inside an open tank using cascade control, with the in-process communication handled by a fieldbus system, for pressure calibration. The plant model is used in experiments to control the level and flow of the water, with the Syscon program used to create the functions. The InTouch runtime program was used to create the graphic display on the screen. In this case, we used PI control for the level and flow of water in the open tank in the range of 0 – 10 L/m. The output signals of the level and flow transmitters are digital standard signals carried by the fieldbus system. All information is displayed on the computer, and the computer and the plant model can communicate with each other through just one cable pair. For the PI tuning, the controller parameters were calculated by the Ziegler-Nichols reaction curve method to control the plant model with a PI controller.
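
For reference, the classical Ziegler-Nichols reaction-curve (open-loop) PI rules mentioned above are shown below, with example parameter values that are assumed rather than measured from the rig described in the paper.

```python
# Ziegler-Nichols open-loop (reaction curve) PI tuning rules.
# K is the process gain, L the apparent dead time, and T the time constant,
# all read from the recorded step-response (reaction) curve.
def zn_reaction_curve_pi(K, L, T):
    """Return (Kp, Ti) from the classical Ziegler-Nichols reaction-curve rules."""
    Kp = 0.9 * T / (K * L)   # proportional gain
    Ti = L / 0.3             # integral time (about 3.33 * L)
    return Kp, Ti

# Example with assumed reaction-curve parameters (not measured from the plant model):
Kp, Ti = zn_reaction_curve_pi(K=2.0, L=5.0, T=30.0)
print(f"Kp = {Kp:.2f}, Ti = {Ti:.1f} s")
```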

Keywords: cascade control, fieldbus system, pressure calibration, microelectronics systems

Procedia PDF Downloads 459
4306 Efficient Antenna Array Beamforming with Robustness against Random Steering Mismatch

Authors: Ju-Hong Lee, Ching-Wei Liao, Kun-Che Lee

Abstract:

This paper deals with the problem of using antenna sensors for adaptive beamforming in the presence of random steering mismatch. We present an efficient adaptive array beamformer with robustness to deal with the considered problem. The robustness of the proposed beamformer comes from the efficient designation of the steering vector. Using the received array data vector, we construct an appropriate correlation matrix associated with the received array data vector and a correlation matrix associated with signal sources. Then, the eigenvector associated with the largest eigenvalue of the constructed signal correlation matrix is designated as an appropriate estimate of the steering vector. Finally, the adaptive weight vector required for adaptive beamforming is obtained by using the estimated steering vector and the constructed correlation matrix of the array data vector. Simulation results confirm the effectiveness of the proposed method.
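
A minimal numerical sketch of the described construction follows: a signal correlation matrix is formed from the array-data correlation matrix, its principal eigenvector is taken as the steering-vector estimate, and MVDR-type weights are then computed. The array geometry, source, and noise levels are toy values, not the simulation setup of the paper.

```python
# Sketch: steering-vector estimate from the principal eigenvector of a signal
# correlation matrix, then MVDR-type adaptive weights. Toy scenario only.
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 1000                                   # sensors, snapshots
theta = np.deg2rad(12.0)                         # actual arrival angle (with mismatch)
a_true = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)      # source waveform
noise = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) * 0.3
X = np.outer(a_true, s) + noise                  # received array data

R = X @ X.conj().T / N                           # array-data correlation matrix
sigma2 = np.linalg.eigvalsh(R)[0].real           # noise-floor estimate
Rs = R - sigma2 * np.eye(M)                      # "signal" correlation matrix

# Steering-vector estimate: eigenvector of Rs with the largest eigenvalue.
eigvals_s, eigvecs_s = np.linalg.eigh(Rs)
a_hat = eigvecs_s[:, -1] * np.sqrt(M)

# MVDR-type adaptive weights using the estimated steering vector.
Rinv_a = np.linalg.solve(R, a_hat)
w = Rinv_a / (a_hat.conj() @ Rinv_a)
print("Beamformer response toward the source:", abs(w.conj() @ a_true))
```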

Keywords: adaptive beamforming, antenna array, linearly constrained minimum variance, robustness, steering vector

Procedia PDF Downloads 199
4305 The Lubrication Regimes Recognition of a Pressure-Fed Journal Bearing by Time and Frequency Domain Analysis of Acoustic Emission Signals

Authors: S. Hosseini, M. Ahmadi Najafabadi, M. Akhlaghi

Abstract:

The health of journal bearings is very important in preventing unforeseen breakdowns in rotary machines, and poor lubrication is one of the most important factors producing bearing failures. Hydrodynamic lubrication (HL), mixed lubrication (ML), and boundary lubrication (BL) are the three regimes of journal bearing lubrication. This paper uses the acoustic emission (AE) measurement technique to correlate features of the AE signals with the three lubrication regimes. The transitions from HL to ML, governed by operating factors such as rotating speed, load, and inlet oil pressure, are detected by time-domain and time-frequency-domain signal analysis techniques, and metal-to-metal contacts between the sliding surfaces of the journal and bearing are then identified. It is found that there is a significant difference between the theoretical and experimental operating values obtained for defining the lubrication regions.
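
A short sketch of generic time- and frequency-domain AE features of the kind used to separate lubrication regimes is given below; the specific feature set and sampling rate are assumptions, not the paper's.

```python
# Sketch of simple time- and frequency-domain AE features (generic choices,
# not the paper's exact feature set).
import numpy as np
from scipy import signal
from scipy.stats import kurtosis

def ae_features(x, fs):
    """RMS, kurtosis and spectral centroid of one AE record."""
    rms = np.sqrt(np.mean(x ** 2))
    kurt = kurtosis(x)
    f, pxx = signal.welch(x, fs=fs, nperseg=1024)
    centroid = np.sum(f * pxx) / np.sum(pxx)
    return rms, kurt, centroid

fs = 1_000_000                                 # 1 MHz AE sampling rate (assumed)
t = np.arange(0, 0.01, 1 / fs)
record = 0.1 * np.random.randn(t.size) + 0.05 * np.sin(2 * np.pi * 150e3 * t)
print(ae_features(record, fs))
```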

Keywords: acoustic emission technique, pressure fed journal bearing, time and frequency signal analysis, metal-to-metal contact

Procedia PDF Downloads 155
4304 Effect of Digital Technology on Students' Interest, Achievement and Retention in Algebra in Abia State College of Education (Technical), Arochukwu

Authors: Stephen O. Amaraihu

Abstract:

This research investigated the effect of computer-based instruction on students' interest, achievement, and retention in algebra in Abia State College of Education (Technical), Arochukwu. Three research questions and two hypotheses guided the study. Two instruments, the Maths Achievement Test (MAT) and the Maths Interest Inventory, were employed to test a population of three hundred and sixteen (316) NCE 1 students in algebra. It is expected that this research will lead to the improvement of students' performance and enhance their interest in and retention of basic algebraic concepts. It was found that the majority of students in the college are not proficient in the use of ICT as a result of a lack of trained personnel. It was concluded that the state government was not ready to implement computer-based instruction in mathematics in Abia State College of Education. The paper recommends, amongst others, the employment of mathematics lecturers with competent ICT skills and the training of lecturers of mathematics.

Keywords: achievement, computer based instruction, interest, retention

Procedia PDF Downloads 209
4303 Development of Folding Based Aptasensor for Ochratoxin a Using Different Pulse Voltammetry

Authors: Rupesh K. Mishra, Gaëlle Catanante, Akhtar Hayat, Jean-Louis Marty

Abstract:

Ochratoxins (OTA) are secondary metabolites present in a wide variety of foodstuffs. They are dangerous by-products mainly produced by several species of storage fungi, including the Aspergillus and Penicillium genera. OTA is known to have nephrotoxic, immunotoxic, teratogenic and carcinogenic effects. It therefore requires special attention in the form of a highly sensitive and selective detection system that can quantify these organic toxins in various matrices such as cocoa beans. This work presents a folding-based aptasensor employing an aptamer conjugated to a redox probe (methylene blue) and specifically designed for OTA. The aptamers were covalently attached to screen-printed carbon electrodes using diazonium grafting. Upon sensing OTA, the target binds with the immobilized aptamer on the electrode surface, which induces a conformational change of the aptamer and consequently an increase in the signal. This conformational change of the aptamer before and after biosensing of the target OTA produces a distinguishable electrochemical signal. The obtained limit of detection was 0.01 ng/ml for OTA samples, with recovery of up to 88% in contaminated cocoa samples.

Keywords: ochratoxin A, cocoa, DNA aptamer, labelled probe

Procedia PDF Downloads 286
4302 Introducing Quantum-Wajsberg Algebras by Redefining Quantum-MV Algebras: Characterization, Properties, and Other Important Results

Authors: Lavinia Ciungu

Abstract:

In the last decades, developing algebras related to the logical foundations of quantum mechanics has become a central topic of research. Generally known as quantum structures, these algebras serve as models for the formalism of quantum mechanics. In this work, we introduce the notion of quantum-Wajsberg algebras by redefining the quantum-MV algebras starting from involutive BE algebras. We give a characterization of quantum-Wajsberg algebras, investigate their properties, and show that, in general, quantum-Wajsberg algebras are not (commutative) quantum-B algebras. We also define the ∨-commutative quantum-Wajsberg algebras and study their properties. Furthermore, we prove that any Wajsberg algebra (bounded ∨-commutative BCK algebra) is a quantum-Wajsberg algebra, and we give a condition for a quantum-Wajsberg algebra to be a Wajsberg algebra. We prove that Wajsberg algebras are both quantum-Wajsberg algebras and commutative quantum-B algebras. We establish the connection between quantum-Wajsberg algebras and quantum-MV algebras, proving that quantum-Wajsberg algebras are term equivalent to quantum-MV algebras. We also show that if a quantum-Wajsberg algebra is self-distributive, then the corresponding quantum-MV algebra is an MV algebra. Our study could be a starting point for the development of other implicative counterparts of certain existing algebraic quantum structures.

Keywords: quantum-Wajsberg algebra, quantum-MV algebra, MV algebra, Wajsberg algebra, BE algebra, quantum-B algebra

Procedia PDF Downloads 19
4301 Multi-Band Frequency Conversion Scheme with Multi-Phase Shift Based on Optical Frequency Comb

Authors: Tao Lin, Shanghong Zhao, Yufu Yin, Zihang Zhu, Wei Jiang, Xuan Li, Qiurong Zheng

Abstract:

A simply operated, stable and compact multi-band frequency conversion and multi-phase shift scheme is proposed to satisfy the demands of multi-band communication and radar phased-array systems. The dual-polarization quadrature phase shift keying (DP-QPSK) modulator is employed to support the LO sideband and the optical frequency comb simultaneously. Meanwhile, the fiber is also used to introduce different phase shifts to different sidebands. The simulation results show that, by controlling the DC bias voltages, a C-band microwave signal with a frequency of 4.5 GHz can be simultaneously converted into other signals that cover from C band to K band with multiple phases. It also verifies that the multi-band and multi-phase frequency conversion system can be stably operated based on current manufacturing art and can cope well with DC drifting. It should be noted that the phase shift of the converted signal also depends partly on the length of the optical fiber.

Keywords: microwave photonics, multi-band frequency conversion, multi-phase shift, conversion efficiency

Procedia PDF Downloads 255
4300 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm

Authors: Rashid Ahmed, John N. Avaritsiotis

Abstract:

Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify a Minor Component Analysis (MCA(R)) learning algorithm to enhance its convergence, since convergence is essential for the MCA algorithm in practical applications. The learning rate parameter is also presented; it ensures fast convergence of the algorithm because it has a direct effect on the convergence of the weight vector, and the error level is affected by this value. MCA is performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence results achieved.
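
As a rough illustration of the MCA idea, the sketch below uses a basic anti-Hebbian (Rayleigh-quotient descent) update that converges toward the eigenvector of the smallest eigenvalue of a covariance matrix. This is a simplified rule shown for orientation, not the modified MCA(R) algorithm of the paper.

```python
# Basic MCA iteration: gradient descent on the Rayleigh quotient with renormalization,
# converging to the minor eigenvector. Simplified illustration, not the paper's MCA(R).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
R = A @ A.T + np.eye(5)          # symmetric positive-definite covariance matrix

w = rng.normal(size=5)
w /= np.linalg.norm(w)
eta = 0.01                        # learning rate (controls convergence speed)

for _ in range(5000):
    # Move against the Rayleigh-quotient gradient, then renormalize.
    w -= eta * (R @ w - (w @ R @ w) * w)
    w /= np.linalg.norm(w)

eigvals, eigvecs = np.linalg.eigh(R)
minor = eigvecs[:, 0]             # reference minor component from direct eigendecomposition
print("Alignment with true minor component:", abs(w @ minor))
```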

Keywords: Direction of Arrival, neural networks, Principle Component Analysis, Minor Component Analysis

Procedia PDF Downloads 452
4299 Vibroacoustic Modulation of Wideband Vibrations and its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular means of energy production. However, blade failures and maintenance costs have evolved into significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and structure. This paper aims to apply modulation of high-frequency blade vibrations by low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2-0.4 Hz and can reach up to 2 Hz for strong wind. The main difference between this study and previous ones on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters are varied due to low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A cyclic load with a small amplitude was applied to the damaged test sample as the pump wave, and a small transducer generated a wideband probe wave. The received signal demodulation was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique for a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
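
The sketch below illustrates a DEMON-style demodulation chain: band-pass the wideband probe response, take its envelope via the Hilbert transform, and inspect the envelope spectrum for components at the low-frequency pump (rotation) rate. All signal parameters are toy values, not those of the laboratory experiment.

```python
# DEMON-style demodulation sketch: band-pass, Hilbert envelope, envelope spectrum.
# All parameters are toy values.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 50_000                                   # sampling rate, Hz
t = np.arange(0, 10.0, 1 / fs)
pump = 0.3                                    # low-frequency pump (blade rotation), Hz
probe = np.random.randn(t.size)               # wideband probe vibration

# A breathing crack modulates the wideband response at the pump frequency.
received = (1 + 0.2 * np.sin(2 * np.pi * pump * t)) * probe

# 1) Band-pass the high-frequency band of interest.
b, a = butter(4, [2000, 10000], btype="bandpass", fs=fs)
band = filtfilt(b, a, received)

# 2) Envelope detection (DEMON) and its spectrum.
envelope = np.abs(hilbert(band))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print("Strongest envelope component near:", freqs[np.argmax(spectrum[1:]) + 1], "Hz")
```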

Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 100
4298 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering

Authors: Hamza Benzerrouk, Alexander Nebylov

Abstract:

In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of preventing GNSS outliers/outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs in the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF and HCKF). As mentioned, the EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called cubature and high-degree cubature Kalman filtering methods. On the basis of previous results on state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High-Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution for more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF). The Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time implementation in high-dimensional state spaces. In the ultra-tightly or deeply coupled GNSS/INS system, an EKF on the system dynamics is used with transition-matrix factorization together with GNSS block processing, which is well described in the paper; the intermediate frequency (IF) is assumed available by using correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKF, based on spherical-radial cubature rules developed at the fifth order in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; results of state estimation are then observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the high-degree cubature Kalman filter.
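
To make the cubature-point mechanics behind the CKF/HCKF concrete, a minimal third-degree (spherical-radial) cubature Kalman filter step is sketched below on a toy model; this is not the GNSS/INS system of the paper, and the fifth-order rules used there are not reproduced.

```python
# Minimal third-degree (spherical-radial) cubature Kalman filter sketch on a toy model.
import numpy as np

def cubature_points(x, P):
    """2n points x +/- sqrt(n) * S e_i with equal weights 1/(2n)."""
    n = x.size
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * S
    return np.hstack([x[:, None] + offsets, x[:, None] - offsets])  # shape (n, 2n)

def ckf_step(x, P, z, f, h, Q, R):
    n = x.size
    # --- time update ---
    Xp = np.apply_along_axis(f, 0, cubature_points(x, P))
    x_pred = Xp.mean(axis=1)
    P_pred = (Xp - x_pred[:, None]) @ (Xp - x_pred[:, None]).T / (2 * n) + Q
    # --- measurement update ---
    Xu = cubature_points(x_pred, P_pred)
    Zu = np.apply_along_axis(h, 0, Xu)
    z_pred = Zu.mean(axis=1)
    Pzz = (Zu - z_pred[:, None]) @ (Zu - z_pred[:, None]).T / (2 * n) + R
    Pxz = (Xu - x_pred[:, None]) @ (Zu - z_pred[:, None]).T / (2 * n)
    K = Pxz @ np.linalg.inv(Pzz)
    return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

# Toy 2-D constant-velocity state with a nonlinear range measurement.
f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])
h = lambda x: np.array([np.hypot(x[0], 10.0)])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ckf_step(x, P, z=np.array([10.1]), f=f, h=h,
                Q=0.01 * np.eye(2), R=np.array([[0.1]]))
print(x)
```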

Keywords: GNSS, INS, Kalman filtering, ultra tight integration

Procedia PDF Downloads 284
4297 Evaluation and Analysis of Light Emitting Diode Distribution in an Indoor Visible Light Communication

Authors: Olawale J. Olaluyi, Ayodele S. Oluwole, O. Akinsanmi, Johnson O. Adeogo

Abstract:

Communication using visible light (VLC) is considered a cutting-edge technology for data transmission and illumination, since it uses less energy than radio frequency (RF) technology and offers large bandwidth, an extended lifespan, and high security. An irregular distribution of small base stations (the LED arrays) in the room is the cause of obscured areas and of reduced minimum signal-to-noise ratio (SNR) and received power. In order to maximize the received power distribution and SNR at the center of the room for an indoor VLC system, the researchers offer an innovative model for the placement of eight LED arrays. We have investigated the arrangement of the LED array distribution with regard to received power in order to fill the open space in the center of the room. The suggested LED array distribution saved 36.2% of the transmitted power, according to the simulation findings. Aside from that, the entire room was evenly covered. This leads to an increase in both received power and SNR.
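
The sketch below shows the Lambertian line-of-sight model commonly used to map an LED array layout to received optical power over the room floor. The room size, mounting height, transmit power, and LED positions are assumed values for illustration, not the paper's simulation settings.

```python
# Lambertian LOS model mapping an LED layout to received optical power on a floor grid.
# Room size, LED height, transmit power and positions are assumed, not the paper's.
import numpy as np

def los_gain(led_xy, rx_xy, h=2.15, m=1, area=1e-4):
    """DC channel gain of one downward-facing LED for a receiver on the floor plane."""
    dx = led_xy[0] - rx_xy[..., 0]
    dy = led_xy[1] - rx_xy[..., 1]
    d = np.sqrt(dx ** 2 + dy ** 2 + h ** 2)
    cos_phi = h / d                       # emission angle equals incidence angle here
    return (m + 1) * area * cos_phi ** (m + 1) / (2 * np.pi * d ** 2)

room, step = 5.0, 0.25
xs = np.arange(0, room + step, step)
grid = np.stack(np.meshgrid(xs, xs), axis=-1)          # receiver positions (x, y)

# Eight LED arrays placed symmetrically around the room centre (assumed layout).
leds = [(1.25, 1.25), (1.25, 3.75), (3.75, 1.25), (3.75, 3.75),
        (2.5, 1.25), (2.5, 3.75), (1.25, 2.5), (3.75, 2.5)]
p_tx = 20.0                                             # optical power per array, W

p_rx = sum(p_tx * los_gain(led, grid) for led in leds)
print("Received power at room centre (W):",
      p_rx[p_rx.shape[0] // 2, p_rx.shape[1] // 2])
```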

Keywords: visible light communication (VLC), light emitting diodes (LED), optical power distribution, signal-to-noise ratio (SNR)

Procedia PDF Downloads 91
4296 Investigation of Unusually High Ultrasonic Signal Attenuation in Water Observed in Various Combinations of Pairs of Lead Zirconate Titanate Pb(ZrxTi1-x)O3 (PZT) Piezoelectric Ceramics Positioned Adjacent to One Another Separated by an Intermediate Gap

Authors: S. M. Mabandla, P. Loveday, C. Gomes, D. T. Maiga, T. T. Phadi

Abstract:

Lead zirconate titanate (PZT) piezoelectric ceramics are widely used in ultrasonic applications due to their ability to effectively convert electrical energy into mechanical vibrations and vice versa. This paper presents a study on the behaviour of various combinations of pairs of PZT piezoelectric ceramic materials positioned adjacent to each other with an intermediate gap and submerged in water, where one piezoelectric ceramic material is excited by a cyclic electric field with constant frequency and amplitude displacement. The transmitted ultrasonic sound propagates through the medium and is received by the PZT ceramic at the other end; the amplitude displacement of the ultrasonic sound signal experiences attenuation during propagation due to acoustic impedance. The investigation focuses on understanding the causes of the extremely high amplitude displacement attenuation that has been observed in various combinations of piezoelectric ceramic pairs submerged in water and arranged in the manner stipulated earlier, by examining the physical, electrical, and acoustic properties and behaviour of these combinations and attributing them to the observed significant signal attenuation. The experimental setup involves exciting one piezoelectric ceramic material at one end with a burst square cyclic electric field signal of constant frequency, which generates a burst of ultrasonic sound that propagates through the water medium to the adjacent piezoelectric ceramic at the other end. The mechanical vibrations of a PZT piezoelectric ceramic are measured using a double-beam laser Doppler vibrometer, capturing both the incident ultrasonic waves generated and the ultrasonic waves received at the other end due to the mechanical vibrations of the PZT. The measured ultrasonic sound wave signals are continuously compared to the applied cyclic electric field at both ends. The impedance matching networks are continuously tuned at both ends to eliminate electromechanical impedance mismatch and so improve ultrasonic transmission and reception. The study delves into various physical, electrical, and acoustic properties of the PZT piezoelectric ceramics, such as the electromechanical coupling factor, acoustic coupling, and elasticity, among others. These properties are analyzed to identify potential factors contributing to the unusually high acoustic impedance in the water medium between the ceramics. Additionally, impedance-matching networks are investigated at both ends to offset the high signal attenuation and improve overall system performance. The findings will be reported in this paper.

Keywords: acoustic impedance, impedance mismatch, piezoelectric ceramics, ultrasonic sound

Procedia PDF Downloads 79
4295 Development of Concurrent Engineering through the Application of Software Simulations of Metal Production Processing and Analysis of the Effects of Application

Authors: D. M. Eric, D. Milosevic, F. D. Eric

Abstract:

Concurrent engineering technologies are a modern concept in manufacturing engineering. One of the key goals in designing modern technological processes is the further reduction of production costs, both in the prototype and preparatory phases and during serial production. Thanks to many segments of concurrent engineering, these goals can be accomplished much more easily. In this paper, we give an overview of the advantages of using modern software simulations in relation to the classical approach to designing technological processes of metal deformation. Significant savings are achieved thanks to electronic simulation and software detection of all possible irregularities in the functional working regime of the technological process. In order for the expected results to be optimal, it is necessary that the input parameters are very objective and that they reliably represent the values of these parameters in real conditions. Since metal deformation processing is treated here, the particularly important parameters are the coefficient of internal friction between the working material and the tools, as well as the parameters related to the flow curve of the material being processed. The paper presents the experimental determination of some of these parameters.

Keywords: production technologies, metal processing, software simulations, effects of application

Procedia PDF Downloads 235
4294 Unlocking the Potential of Short Texts with Semantic Enrichment, Disambiguation Techniques, and Context Fusion

Authors: Mouheb Mehdoui, Amel Fraisse, Mounir Zrigui

Abstract:

This paper explores the potential of short texts through semantic enrichment and disambiguation techniques. By employing context fusion, we aim to enhance the comprehension and utility of concise textual information. The methodologies utilized are grounded in recent advancements in natural language processing, which allow for a deeper understanding of semantics within limited text formats. Specifically, topic classification is employed to understand the context of the sentence and assess the relevance of added expressions. Additionally, word sense disambiguation is used to clarify unclear words, replacing them with more precise terms. The implications of this research extend to various applications, including information retrieval and knowledge representation. Ultimately, this work highlights the importance of refining short text processing techniques to unlock their full potential in real-world applications.
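
As an illustration of the word-sense disambiguation step described above, the sketch below applies the classic Lesk algorithm from NLTK to an ambiguous word in a short text; this is a stand-in for the paper's disambiguation component, not its exact method, and it assumes the required NLTK corpora are installed.

```python
# Word-sense disambiguation sketch using the classic Lesk algorithm from NLTK,
# standing in for the paper's disambiguation component.
# Requires the 'wordnet' and 'punkt' NLTK corpora to be downloaded beforehand.
from nltk import word_tokenize
from nltk.wsd import lesk

short_text = "He deposited the check at the bank before noon."
tokens = word_tokenize(short_text)

sense = lesk(tokens, "bank")   # choose the WordNet sense best matching the context
if sense is not None:
    print(sense.name(), "->", sense.definition())
    # The selected gloss (and its related terms) can then be used to enrich the short text.
```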

Keywords: information traffic, text summarization, word-sense disambiguation, semantic enrichment, ambiguity resolution, short text enhancement, information retrieval, contextual understanding, natural language processing, ambiguity

Procedia PDF Downloads 14