Search results for: processing based on signal identification
31878 Quantile Coherence Analysis: Application to Precipitation Data
Authors: Yaeji Lim, Hee-Seok Oh
Abstract:
Coherence analysis measures the linear time-invariant relationship between two data sets and has been studied in various fields such as signal processing, engineering, and medical science. However, classical coherence analysis tends to be sensitive to outliers and focuses only on the mean relationship. In this paper, we generalize the cross periodogram to a quantile cross periodogram, which provides a richer description of the inter-relationship between two data sets and is a generalized version of the Laplace cross periodogram. We prove its asymptotic distribution under long-range dependent processes and compare it with ordinary coherence through numerical examples. We also present a real-data example to confirm the usefulness of quantile coherence analysis.
Keywords: coherence, cross periodogram, spectrum, quantile
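As a rough illustration of the idea, the sketch below computes a quantile cross periodogram from the centered indicator (level-crossing) series of two signals at a chosen quantile level, in the spirit of the Laplace periodogram literature. The formulation and all names are assumptions for illustration, not the authors' exact estimator.

```python
import numpy as np

def quantile_cross_periodogram(x, y, tau):
    """Cross-periodogram of the centered indicator series
    I(x_t <= q_x(tau)) - tau and I(y_t <= q_y(tau)) - tau."""
    n = len(x)
    ix = (x <= np.quantile(x, tau)).astype(float) - tau
    iy = (y <= np.quantile(y, tau)).astype(float) - tau
    fx = np.fft.rfft(ix)
    fy = np.fft.rfft(iy)
    freqs = np.fft.rfftfreq(n)
    # Product of one DFT with the conjugate of the other, normalized
    return freqs, (fx * np.conj(fy)) / (2 * np.pi * n)

# Example: two correlated series, examined at the median (tau = 0.5)
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = 0.7 * x + 0.3 * rng.standard_normal(1024)
freqs, qcp = quantile_cross_periodogram(x, y, tau=0.5)
```

Repeating this across several tau values is what gives the "richer" picture: relationships at the tails can differ from those at the median.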
Procedia PDF Downloads 390
31877 Effect of Sub Supercritical CO2 Processing on Microflora and Shelf Life of Tempe
Authors: M. Kustyawati, F. Pratama, D. Saputra, A. Wijaya
Abstract:
Tempe contains not only molds but also bacteria and yeasts. These microbial populations need to remain in balanced numbers for tempe to retain acceptable quality for an extended time. Sub supercritical carbon dioxide can be a promising preservation method for tempe, as it induces microbial inactivation while avoiding alterations of its quality attributes. Fresh tempe was processed using supercritical and sub supercritical CO2 for defined holding times, and the growth ability of molds and bacteria was then analyzed. The results showed that supercritical CO2 processing for 5 minutes reduced the numbers of bacteria and molds by 0.30 log cycles and 1.17 log cycles, respectively. In addition, sub supercritical CO2 processing for 20 minutes had a fungicidal effect on the molds in tempe, whereas processing for 10 minutes reduced the bacteria in tempe and had a fungistatic effect on its molds. This suggests that sub supercritical CO2 processing for 10 minutes could be a useful alternative technique for the preservation of tempe.
Keywords: tempe, sub supercritical CO2, fungistatic effect, preservation
Procedia PDF Downloads 269
31876 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface
Authors: Renata Gerhardt, Detlev Belder
Abstract:
Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, as it is non-destructive and molecules can be identified via the fingerprint region of the spectra. In this work, we investigate how Raman spectroscopy can be integrated as a detection method for chip-based chromatography by making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of low-concentration analytes in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules; furthermore, it provides no structural information. Another often applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as end-of-the-line detection because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization, and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs to realize the coupling of chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus making use of the surface enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.
Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS
Procedia PDF Downloads 245
31875 Design of a pHEMT Buffer Amplifier in mm-Wave Band around 60 GHz
Authors: Maryam Abata, Moulhime El Bekkali, Said Mazer, Catherine Algani, Mahmoud Mehdi
Abstract:
One major problem of most electronic systems operating in the millimeter-wave band is generating a signal with high purity and a stable carrier frequency. This problem is overcome by combining a low-frequency local oscillator (LO) signal with several stages of frequency multipliers. The use of these frequency multipliers to create millimeter-wave signals is an attractive alternative to direct signal generation. However, the problem of isolating the local oscillator from the other stages remains, since various mechanisms can disturb the oscillator's performance; thus, a buffer amplifier is often included at the oscillator output. In this paper, we present the study and design of a buffer amplifier in the mm-wave band using a 0.15 μm pHEMT from the UMS foundry. This amplifier will be used as part of a frequency quadrupler at 60 GHz.
Keywords: mm-wave band, local oscillator, frequency quadrupler, buffer amplifier
Procedia PDF Downloads 545
31874 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation
Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi
Abstract:
Most drones that currently fly surveillance/reconnaissance missions are equipped with optical equipment, but a microphone array can also be used to estimate the location of an acoustic source, providing additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA) of an acoustic source at the drone, based on Time Difference of Arrival (TDOA) estimation. The problem is that the target acoustic source cannot be measured cleanly because of drone noise. To overcome this, the drone noise and the target acoustic source are separated using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed by assuming that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis, which are grounded in probability theory. As a result, we can improve the TDOA and DOA estimation of the target source in a noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied and validated the simulation through experiments in an anechoic wind tunnel.
Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone
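The two-stage idea (ICA-based separation, then TDOA estimation) can be sketched as below. This assumes an instantaneous mixing model, synthetic stand-ins for the drone noise and target, and GCC-PHAT as one common TDOA estimator; none of these specifics are stated in the abstract. FastICA's "logcosh" contrast approximates negentropy maximization.

```python
import numpy as np
from sklearn.decomposition import FastICA

def gcc_phat(sig, ref, fs):
    """TDOA (seconds) via the PHAT-weighted generalized cross-correlation."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    S /= np.abs(S) + 1e-12                      # PHAT weighting
    cc = np.fft.irfft(S, n=n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
target = np.sin(2 * np.pi * 440 * t)            # stand-in acoustic source
noise = rng.laplace(size=fs)                    # stand-in non-Gaussian drone noise
mic1 = target + 0.8 * noise
mic2 = np.roll(target, 8) + 0.6 * noise         # 8-sample inter-microphone delay

# Separate the mixtures into estimates of the target and the noise
ica = FastICA(n_components=2, fun="logcosh", random_state=0)
sources = ica.fit_transform(np.c_[mic1, mic2])  # columns: separated components

tdoa = gcc_phat(mic2, mic1, fs)                 # TDOA between the two channels
print(f"estimated TDOA: {tdoa * 1e3:.3f} ms")
```

In practice the delayed, reverberant mixing is convolutive, so frequency-domain or multi-band ICA would be needed for faithful separation; the instantaneous model here only illustrates the pipeline.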
Procedia PDF Downloads 162
31873 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems
Authors: Edgar Gasafi, Robert Pardemann, Linus Perander
Abstract:
For the production of lithium carbonate or hydroxide from lithium-bearing ores, thermal activation (calcination/decrepitation) is required to induce the phase transition in the mineral that enables acid or soda leaching in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly developed further since. A new technology, at least for the lithium industry, is fluidized bed calcination. Decrepitation of lithium ore was investigated at Outotec's Frankfurt Research Centre. Focusing on fluidized bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed for optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify the mass and energy flows for the rotary kiln and fluidized bed systems. The results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality, and plant footprint. The major conclusion is that a substantial reduction of energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed-based systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor, which prevents overheating and sintering of the solids and results in uniform product quality.
Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene
Procedia PDF Downloads 230
31872 Cross Analysis of Gender Discrimination in Print Media of Subcontinent via James Paul Gee Model
Authors: Luqman Shah
Abstract:
Gender discrimination is now a well-documented and recognized fact. However, gender is only one facet of an individual's multiple identities. The aim of this work is to investigate gender discrimination highlighted in print media in the subcontinent, with a specific focus on Pakistan and India. In this study, the James Paul Gee model is adopted for the identification of gender discrimination. Gender discrimination is not consistent in its nature and intensity across global societies; it varies as social, geographical, and cultural backgrounds change. The world has changed enormously in every aspect of life, and there are obvious changes in attitudes toward gender discrimination, prejudice, and bias, but the world still has a long way to go before women are recognized as equal to men in every sphere of life. The history of the world is full of gender-based incidents and violence. The time has come for this issue to be seriously addressed and this evil eradicated, which would lead to a more harmonious society and, consequently, to peace and prosperity. The study was carried out with a mixed-model research method. The data were extracted from the contents of five Pakistani English newspapers, out of a total of 23 daily English newspapers, and likewise five Indian daily English newspapers out of 52, published in 2018-2019. Two news stories from each of these newspapers, twenty news stories in total, were taken as the sample for this research. Content and semiotic analysis techniques were used, analyzed through James Paul Gee's seven building tasks of language. The resources of renowned e-papers were utilized, and the cases of Indian gender-based stories highlighted in Pakistani newspapers, and vice versa, were scrutinized as required by this research. For the analysis of the written stretches of discourse taken from the e-papers and the processing of data for the focused problem, James Paul Gee's 'Seven Building Tasks of Language' was used. The findings were tabulated to pinpoint the issue with certainty. After processing the data, the findings showed gross human rights violations on the basis of gender discrimination. The print media needs a more realistic representation of what is, not what seems to be. The study recommends the equality and parity of genders.
Keywords: gender discrimination, print media, Paul Gee model, subcontinent
Procedia PDF Downloads 221
31871 Automatic Music Score Recognition System Using Digital Image Processing
Authors: Yuan-Hsiang Chang, Zhong-Xian Peng, Li-Der Jeng
Abstract:
Music has always been an integral part of humans' daily lives. But, for most people, reading a musical score and turning it into a melody is not easy. This study aims to develop an automatic music score recognition system using digital image processing, which can be used to read and analyze musical score images automatically. The technical approach comprises: (1) staff region segmentation; (2) image preprocessing; (3) note recognition; and (4) accidental and rest recognition. Digital image processing techniques (e.g., horizontal/vertical projections, connected component labeling, morphological processing, template matching) were applied according to the musical notes, accidentals, and rests in staff notation. Preliminary results showed that our system could achieve detection and recognition rates of 96.3% and 91.7%, respectively. In conclusion, we present an effective automated musical score recognition system that could be integrated with a media player to play music/songs given input images of a musical score. Ultimately, this system could also be incorporated into applications for mobile devices as a learning tool, such that a music player could learn to play music/songs.
Keywords: connected component labeling, image processing, morphological processing, optical musical recognition
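A minimal sketch of the first two stages (staff segmentation and preprocessing) using the techniques the abstract names is given below. The file name, threshold fraction, and kernel size are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

# Hypothetical input: a score image, dark notation on a light background
img = cv2.imread("score.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Horizontal projection: row sums peak sharply at the five staff lines
row_sums = binary.sum(axis=1)
staff_rows = np.where(row_sums > 0.6 * row_sums.max())[0]

# Remove the staff lines so connected components isolate individual symbols
no_staff = binary.copy()
no_staff[staff_rows, :] = 0
# Morphological closing reconnects note heads split by line removal
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
no_staff = cv2.morphologyEx(no_staff, cv2.MORPH_CLOSE, kernel)

# Connected component labeling yields candidate symbols; each bounding box
# would then be compared against note/accidental/rest templates
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(no_staff)
```

The vertical position of each component centroid relative to the detected staff rows is what maps a note head to a pitch.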
Procedia PDF Downloads 419
31870 Design and Implementation of Active Radio Frequency Identification on Wireless Sensor Network-Based System
Authors: Che Z. Zulkifli, Nursyahida M. Noor, Siti N. Semunab, Shafawati A. Malek
Abstract:
Wireless sensors, also known as wireless sensor nodes, have been making a significant impact on human daily life. Radio Frequency Identification (RFID) and Wireless Sensor Networks (WSN) are two complementary technologies; hence, an integrated implementation of these technologies expands the overall functionality in obtaining long-range, real-time information on the location and properties of objects and people. An approach for integrating ZigBee and RFID networks is proposed in this paper to create an energy-efficient network improved by the benefits of combining the ZigBee and RFID architectures. Furthermore, the compatibility and requirements of the ZigBee devices and communication links in a typical RFID system are presented, together with a real-world experiment on the capabilities of the proposed RFID system.
Keywords: mesh network, RFID, wireless sensor network, ZigBee
Procedia PDF Downloads 461
31869 GPS Signal Correction to Improve Vehicle Location during Experimental Campaign
Authors: L. Della Ragione, G. Meccariello
Abstract:
In recent years, the automobile industry in Italy has made remarkable progress in reducing emission values. Nevertheless, the evaluation and reduction of emissions remains a key problem, especially in cities, which account for more than 50% of the world's population. In this paper, we describe a quantitative approach for the reconstruction of GPS coordinates and altitude, in the context of a correlation study between driving cycles, emissions, and geographical location, during an experimental campaign carried out with instrumented cars.
Keywords: air pollution, driving cycles, GPS signal, vehicle location
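One standard way to reconstruct a noisy GPS trace, sketched here per coordinate on projected (metric) coordinates with assumed noise values, is a constant-velocity Kalman filter. The abstract does not state the authors' exact correction method, so this is only an illustrative stand-in.

```python
import numpy as np

def kalman_1d(z, dt=1.0, q=1e-3, r=25.0):
    """Constant-velocity Kalman filter for one coordinate stream.
    q: process noise scale, r: measurement noise variance (illustrative)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    x = np.array([z[0], 0.0])
    P = np.eye(2) * 100.0
    out = []
    for meas in z:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = meas - H @ x                       # innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                      # Kalman gain
        x = x + (K * y).ravel()                # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 500.0, 200)           # vehicle at constant speed [m]
raw = truth + rng.normal(0, 5.0, 200)          # GPS easting with ~5 m noise
smooth = kalman_1d(raw, dt=1.0)
```

Altitude, which is typically the noisiest GPS channel, benefits from the same treatment with a larger measurement variance.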
Procedia PDF Downloads 428
31868 Investigation of Surface Electromyograph Signal Acquired from the around Shoulder Muscles of Upper Limb Amputees
Authors: Amanpreet Kaur, Ravinder Agarwal, Amod Kumar
Abstract:
Surface electromyography is a technique for measuring muscle activity at the skin. Sensors placed on the skin detect the electrical signal generated by active muscles. Much of the research has focused on detecting signals from upper limb amputees using the activity of the triceps and biceps muscles. The purpose of this study was to correlate phantom movement and sEMG activity in the residual stump muscles of transhumeral amputees, measured from the shoulder muscles. Eight non-amputees and seven right-hand amputees were recruited for this study. sEMG data were collected from the trapezius, pectoralis, and teres muscles for elevation, protraction, and retraction of the shoulder. The contrast between amputee and non-amputee muscle actions was investigated. Subsequently, to investigate the class separability of the different shoulder motions, an analysis of variance of the experimentally recorded data was carried out. The results were analyzed to recognize the different shoulder movements and represent a step toward a surface-electromyography-controlled system for amputees. Differences in F-ratio values (p < 0.05) indicate a distinction in means; this analysis therefore helps to determine which motions are independent. The identified signals can be used by researchers to design more accurate and efficient controllers for upper-limb amputees.
Keywords: around shoulder amputation, surface electromyography, analysis of variance, features
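The separability test described here is a one-way ANOVA over per-motion feature samples. A minimal sketch with synthetic stand-in data follows; the feature choice (RMS) and values are assumptions for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical RMS features of the trapezius channel for three shoulder motions
elevation   = rng.normal(0.42, 0.05, size=15)
protraction = rng.normal(0.31, 0.05, size=15)
retraction  = rng.normal(0.36, 0.05, size=15)

# A large F ratio with p < 0.05 indicates the motions produce
# distinguishable mean sEMG activity, i.e., separable classes
f_ratio, p_value = f_oneway(elevation, protraction, retraction)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
```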
Procedia PDF Downloads 434
31867 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract:
This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs detection of moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), as well as a shadow removal algorithm using physics-based features, followed by morphological operations. The third step performs vehicle detection using edge detection algorithms and vehicle tracking through Kalman filters. The last step of the proposed algorithm registers passing vehicles and classifies them according to their areas. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission through GPRS (General Packet Radio Service), eliminating the need for external cabling, which facilitates deployment and relocation to any site where it could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
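The GMM background subtraction, shadow removal, and area-based classification steps can be sketched with OpenCV as below. The zone, area thresholds, and video path are illustrative assumptions, and the per-frame counting shown here would need the paper's Kalman tracking step to avoid counting the same vehicle in consecutive frames.

```python
import cv2

counts = {"truck/bus": 0, "car": 0, "motorcycle": 0}
cap = cv2.VideoCapture("traffic.mp4")            # placeholder video path
# GMM-based background subtractor; MOG2 marks shadow pixels as 127
bgs = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:480, :]                      # step 1: limit the processing zone
    mask = bgs.apply(roi)                        # step 2: GMM background subtraction
    mask[mask == 127] = 0                        # remove detected shadows
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # morphological clean-up
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                           # step 4: classify blobs by area
        area = cv2.contourArea(c)
        if area > 5000:
            counts["truck/bus"] += 1
        elif area > 1500:
            counts["car"] += 1
        elif area > 400:
            counts["motorcycle"] += 1            # smaller blobs treated as noise

cap.release()
print(counts)
```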
Procedia PDF Downloads 323
31866 Heart Failure Identification and Progression by Classifying Cardiac Patients
Authors: Muhammad Saqlain, Nazar Abbas Saqib, Muazzam A. Khan
Abstract:
Heart Failure (HF) has become a major health problem in our society. The prevalence of HF increases with patient age, and it is a major cause of the high mortality rate in adults. Successful identification of HF and its progression can help to reduce the individual and social burden of this syndrome. In this study, we use a real data set of cardiac patients to propose a classification model for the identification and progression of HF. The data set is divided into three age groups, namely young, adult, and old, and each age group is further divided into four classes according to the patient's current physical condition. Contemporary data mining classification algorithms were applied to each individual class of every age group to identify HF. The Decision Tree (DT) gives the highest accuracy of 90% and outperforms all other algorithms. Our model accurately diagnoses the different stages of HF for each age group, and it can be very useful for the early prediction of HF.
Keywords: decision tree, heart failure, data mining, classification model
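A minimal sketch of the per-age-group decision tree classification follows, with synthetic stand-in data; the feature set, tree depth, and split are assumptions, since the paper's clinical features are not listed in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical feature matrix for one age group (e.g., blood pressure,
# ejection fraction, BNP level); labels are the four condition classes
X = rng.normal(size=(400, 3))
y = rng.integers(0, 4, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

In the paper's setup this fit would be repeated once per class of each of the three age groups.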
Procedia PDF Downloads 402
31865 A Transformer-Based Question Answering Framework for Software Contract Risk Assessment
Authors: Qisheng Hu, Jianglei Han, Yue Yang, My Hoa Ha
Abstract:
When a company is considering purchasing software for commercial use, contract risk assessment is critical to identify risks and mitigate the potential adverse business impact, e.g., security, financial, and regulatory risks. Contract risk assessment requires reviewers with specialized knowledge and time to evaluate the legal documents manually. Specifically, validating contracts for a software vendor requires the following steps: manual screening, interpreting legal documents, and extracting risk-prone segments. To automate the process, we propose a framework to assist legal contract risk identification, leveraging pre-trained deep learning models and natural language processing techniques. Given a set of pre-defined risk evaluation problems, our framework utilizes pre-trained transformer-based question-answering models to identify risk-prone sections in a contract. The question-answering model encodes the concatenated question-contract text and predicts the start and end positions for clause extraction. Due to the limited labelled dataset for training, we leveraged transfer learning by fine-tuning the models with the CUAD dataset to enhance the model. On a dataset comprising 287 contract documents and 2000 labelled samples, our best model achieved an F1 score of 0.687.
Keywords: contract risk assessment, NLP, transfer learning, question answering
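The extractive question-answering step can be sketched with the Hugging Face transformers library as below. The model name is an illustrative SQuAD-tuned checkpoint, not the authors' CUAD-fine-tuned model, and the contract text and question are invented examples.

```python
from transformers import pipeline

# Illustrative checkpoint; the paper fine-tunes its own models on CUAD
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

contract = (
    "The Vendor shall not be liable for any indirect damages. "
    "Either party may terminate this Agreement with 30 days written notice."
)
question = "What is the termination notice period?"

# The model encodes the concatenated question/contract text and predicts
# start and end positions of the answer span (the risk-prone clause)
result = qa(question=question, context=contract)
print(result["answer"], result["score"])
```

For full contracts, the context is split into overlapping windows so the span prediction can reach clauses beyond the model's input length.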
Procedia PDF Downloads 129
31864 Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
Neural network approaches are machine learning methods used in many domains, such as healthcare and cybersecurity, and are best known for dealing with image datasets. While training on images, several fundamental mathematical operations are carried out in the neural network, including algebraic functions such as derivatives, convolutions, and matrix inversion and transposition. Such operations require higher processing power than typical computer usage. A Central Processing Unit (CPU) is not appropriate for datasets with large image sizes, as it is built for serial processing, whereas a Graphics Processing Unit (GPU) has parallel processing capabilities and is therefore faster. This paper uses advanced neural network techniques, such as VGG16, ResNet50, DenseNet, InceptionV3, Xception, MobileNet, XGBoost-VGG16, and our proposed models, to compare CPU and GPU resources. A system for classifying autism disease using face images of autistic and non-autistic children was used to compare performance during testing. We used evaluation metrics such as accuracy, F1 score, precision, recall, and execution time. It was observed that the GPU ran faster than the CPU in all tests performed. Moreover, the accuracy of the neural network models increased on the GPU compared to the CPU.
Keywords: autism disease, neural network, CPU, GPU, transfer learning
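A minimal PyTorch sketch of the CPU-versus-GPU timing comparison follows, assuming torchvision is available; the batch size, iteration count, and choice of VGG16 as the timed network are illustrative.

```python
import time
import torch
import torchvision.models as models

def time_forward(device, batch=16, iters=10):
    """Average forward-pass time of VGG16 on the given device."""
    model = models.vgg16(weights=None).to(device).eval()
    x = torch.randn(batch, 3, 224, 224, device=device)
    with torch.no_grad():
        model(x)                                  # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()              # wait for queued GPU kernels
        t0 = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

print("CPU s/iter:", time_forward("cpu"))
if torch.cuda.is_available():
    print("GPU s/iter:", time_forward("cuda"))
```

The explicit synchronize calls matter: CUDA launches are asynchronous, so timing without them would measure only kernel submission, not execution.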
Procedia PDF Downloads 118
31863 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps
Authors: Butta Singh
Abstract:
This paper presents a chaotic-map-based approach for the secure embedding of a patient's confidential data in the electrocardiogram (ECG) signal. The chaotic map generates predefined locations through the use of selective control parameters. A sample value difference method then hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident from various statistical and clinical performance measures. The statistical metrics comprise the Percentage Root Mean Square Difference (PRD) and the Peak Signal to Noise Ratio (PSNR). Furthermore, a comparative analysis between the proposed method and existing approaches was performed; the results clearly demonstrate the superiority of the proposed method.
Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram
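The location-generation idea can be sketched with a logistic map, whose control parameters act as the shared key. The embedding rule below (bit hidden in the parity of an integer sample-pair difference) is a deliberately simplified stand-in for the paper's sample value difference method, and the signal is synthetic.

```python
import numpy as np

def chaotic_locations(n_samples, n_bits, x0=0.7, mu=3.99):
    """Logistic map x -> mu*x*(1-x); (x0, mu) act as the secret key
    that generates the predefined embedding locations."""
    x, seen, locs = x0, set(), []
    while len(locs) < n_bits:
        x = mu * x * (1 - x)
        pos = int(x * (n_samples - 2)) & ~1   # even index: start of a sample pair
        if pos not in seen:
            seen.add(pos)
            locs.append(pos)
    return locs

rng = np.random.default_rng(0)
# Stand-in ECG quantized to integer ADC counts
ecg = np.round(np.cumsum(rng.normal(size=5000)) * 100).astype(int)
bits = [1, 0, 1, 1, 0, 0, 1, 0]               # patient data as a bit stream
for bit, p in zip(bits, chaotic_locations(len(ecg), len(bits))):
    if ((ecg[p + 1] - ecg[p]) & 1) != bit:    # hide bit in pair-difference parity
        ecg[p + 1] += 1                       # one-count change keeps PRD low
```

Extraction reruns the map with the same key and reads the parity at each location, so no side channel is needed beyond the key.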
Procedia PDF Downloads 196
31862 Computational Tool for Surface Electromyography Analysis; an Easy Way for Non-Engineers
Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha
Abstract:
This paper presents a tool developed on the Matlab platform. It was developed to simplify the analysis of surface electromyography signals (S-EMG) in a way accessible to users who are not familiar with signal processing procedures. The tool receives data through commands in window fields and generates results as graphics and Excel tables. The underlying mathematics of each S-EMG estimator is presented, along with the setup window and result graphics. The tool was presented to four non-engineer users, and all of them managed to use it appropriately after a 5-minute instruction period.
Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV
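The estimators named in the keywords have standard definitions, sketched below in Python rather than the tool's Matlab (the Welch settings and synthetic signal are illustrative).

```python
import numpy as np
from scipy.signal import welch

def semg_estimators(emg, fs):
    """Standard amplitude and spectral S-EMG estimators."""
    arv = np.mean(np.abs(emg))                   # ARV: average rectified value
    rms = np.sqrt(np.mean(emg ** 2))             # RMS: root mean square
    f, pxx = welch(emg, fs=fs, nperseg=1024)     # power spectral density
    mnf = np.sum(f * pxx) / np.sum(pxx)          # MNF: mean frequency
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]   # MDF: median frequency
    return arv, rms, mnf, mdf

fs = 2000
rng = np.random.default_rng(0)
emg = rng.normal(size=4 * fs)                     # stand-in for a recorded epoch
print(semg_estimators(emg, fs))
```

A drop in MDF/MNF over successive epochs at constant force is the classic spectral signature of muscle fatigue, which is why these four estimators are usually reported together.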
Procedia PDF Downloads 559
31861 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform
Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung
Abstract:
Functional near-infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis for this neuroimaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer, indirectly, the status of neuronal activity inside the brain. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications, and its portability, low cost, and acceptable temporal resolution give it an advantageous position among neuroimaging modalities. In this study, an optimization model for the impulse response function has been used to estimate/predict the initial dip using fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task has been analyzed. We found an initial dip that lasts around 200-300 milliseconds and better localizes neural activity.
Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing
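The two-wavelength modified Beer-Lambert inversion mentioned here reduces to solving a 2x2 linear system per sample. The extinction coefficients, path length, and DPF below are illustrative placeholder values; real ones should come from published absorption spectra.

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for (HbO2, HbR)
E = np.array([[0.59, 1.66],    # 760 nm
              [1.06, 0.78]])   # 850 nm
d, dpf = 3.0, 6.0              # source-detector distance [cm], pathlength factor

def mbll(delta_od):
    """Modified Beer-Lambert law: solve
    delta_OD(lambda) = (eps_HbO2*dC_HbO2 + eps_HbR*dC_HbR) * d * DPF
    for the two concentration changes."""
    return np.linalg.solve(E * d * dpf, delta_od)

# An initial dip appears as a brief HbR rise (and HbO2 dip) roughly
# 200-300 ms after stimulus onset, before the main hemodynamic response
d_hbo2, d_hbr = mbll(np.array([0.012, 0.004]))
print(f"dHbO2 = {d_hbo2:.4f} mM, dHbR = {d_hbr:.4f} mM")
```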
Procedia PDF Downloads 226
31860 Shock and Particle Velocity Determination from Microwave Interrogation
Authors: Benoit Rougier, Alexandre Lefrancois, Herve Aubert
Abstract:
Microwave interrogation in the 10-100 GHz range is identified as an advanced technique for investigating shock and particle velocity measurements simultaneously. However, it requires an understanding of electromagnetic wave propagation in a multi-layered moving medium. Existing models limit their approach to waveguides or evaluate the velocities with a fitting method, thereby restricting the domain of validity and the precision of the results. Moreover, few permittivity data on high explosives at these frequencies under dynamic compression have been reported. In this paper, shock and particle velocities are computed concurrently for steady and unsteady shocks in various inert and reactive materials, via a propagation model based on Doppler shifts and signal amplitude. The refractive index of the material under compression is also calculated. From the processing of experimental data, it is demonstrated that the Hugoniot curve can be evaluated. The comparison with published results proves the accuracy of the proposed method. This microwave interrogation technique seems promising for studies of shock and detonation waves.
Keywords: electromagnetic propagation, experimental setup, Hugoniot measurement, shock propagation
Procedia PDF Downloads 213
31859 The Variable Sampling Interval Xbar Chart versus the Double Sampling Xbar Chart
Authors: Michael B. C. Khoo, J. L. Khoo, W. C. Yeong, W. L. Teoh
Abstract:
The Shewhart Xbar control chart is a useful process monitoring tool in manufacturing industries for detecting the presence of assignable causes. However, it is insensitive to small process shifts. To circumvent this problem, adaptive control charts have been suggested. An adaptive chart enables at least one of the chart's parameters to be adjusted to increase the chart's sensitivity. Two common adaptive charts in the literature are the double sampling (DS) Xbar and variable sampling interval (VSI) Xbar charts. This paper compares the performance of the DS and VSI Xbar charts based on the average time to signal (ATS) criterion. The ATS profiles of the DS Xbar and VSI Xbar charts are obtained using the Mathematica and Statistical Analysis System (SAS) programs, respectively. The results show that the VSI Xbar chart is generally superior to the DS Xbar chart.
Keywords: adaptive charts, average time to signal, double sampling charts, variable sampling interval
Procedia PDF Downloads 287
31858 Analysis and Improvement of Efficiency for Food Processing Assembly Lines
Authors: Mehmet Savsar
Abstract:
Several factors affect the productivity of Food Processing Assembly Lines (FPAL). Engineers and line managers often do not recognize some of these factors and underutilize their production/assembly lines. In this paper, a particular food processing assembly line is studied in detail, and procedures are presented to illustrate how the productivity and efficiency of such lines can be increased. The assembly line considered produces ten different types of freshly prepared salads on the same line, which makes it a mixed-model assembly line. Problems causing delays and inefficiencies on the line are identified. Line balancing and related tools are used to increase line efficiency and minimize balance delays. The procedure and approach presented in this paper can be useful to operations managers and industrial engineers dealing with similar assembly lines in the food processing industry.
Keywords: assembly lines, line balancing, production efficiency, bottleneck
Procedia PDF Downloads 388
31857 Open-Source YOLO CV For Detection of Dust on Solar PV Surface
Authors: Jeewan Rai, Kinzang, Yeshi Jigme Choden
Abstract:
The accumulation of dust on solar panels impacts their overall efficiency and the amount of energy they produce. While various techniques exist for detecting dust in order to schedule cleaning, many of these methods use MATLAB image processing tools and other licensed software, which can be financially burdensome. This study investigates the efficiency of a free, open-source computer vision library using the YOLO algorithm. The proposed approach has been tested on images of solar panels with varying dust levels in an experimental setup. The experimental findings illustrate the effectiveness of the YOLO-based image classification method and of the overall dust detection approach, with an accuracy of 90% in distinguishing between clean and dusty panels. This open-source solution provides a cost-effective and accessible alternative to commercial image processing tools, offering a way to optimize solar panel maintenance and enhance energy production.
Keywords: YOLO, OpenCV, dust detection, solar panels, computer vision, image processing
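With the open-source ultralytics package, inference on a panel image can be sketched as below. The weights file and image path are placeholders for a model trained on clean/dusty panel data; the abstract does not specify the exact YOLO version or tooling.

```python
from ultralytics import YOLO

# "dust_yolo.pt" is a placeholder for custom-trained weights
model = YOLO("dust_yolo.pt")

results = model("panel_images/array_07.jpg")   # run inference on one photo
for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]   # e.g., "clean" or "dusty"
        conf = float(box.conf)
        print(f"{cls_name}: {conf:.2f}")
```

Scheduling logic would then trigger a cleaning visit once the fraction of "dusty" detections across an array exceeds a chosen threshold.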
Procedia PDF Downloads 32
31856 Optimizing of Machining Parameters of Plastic Material Using Taguchi Method
Authors: Jumazulhisham Abdul Shukor, Mohd. Sazali Said, Roshanizah Harun, Shuib Husin, Ahmad Razlee Ab Kadir
Abstract:
This paper applies the Taguchi optimization method to determine the best machining parameters for a pocket milling process on polypropylene (PP) using a CNC milling machine, where surface roughness is considered and carbide insert cutting tools are used. Three machining parameters (speed, feed rate, and depth of cut) are investigated, each at three levels (low, medium, and high), following a Taguchi orthogonal array. The settings of the machining parameters were determined using the Taguchi method, and the signal-to-noise (S/N) ratio was assessed to define the optimal levels and to predict the effect of the assigned parameters on surface roughness, based on an L9 array. The final experimental outcomes are presented to verify that the optimization parameters recommended by the manufacturer are accurate.
Keywords: inserts, milling process, signal-to-noise (S/N) ratio, surface roughness, Taguchi optimization method
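Since surface roughness should be minimized, the Taguchi smaller-is-better S/N ratio applies. A minimal sketch with invented roughness replicates for the nine L9 runs follows.

```python
import numpy as np

# Hypothetical surface-roughness replicates (Ra, micrometers) for the
# nine runs of an L9 orthogonal array
runs = {
    1: [1.82, 1.91], 2: [1.45, 1.50], 3: [1.38, 1.33],
    4: [1.60, 1.66], 5: [1.25, 1.29], 6: [1.71, 1.78],
    7: [1.52, 1.47], 8: [1.95, 2.02], 9: [1.40, 1.36],
}

def sn_smaller_is_better(y):
    """Smaller-is-better S/N ratio: S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

for run, y in runs.items():
    print(f"run {run}: S/N = {sn_smaller_is_better(y):.2f} dB")
# The optimal level of each factor is the one with the highest mean S/N
# across the runs in which that level appears
```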
Procedia PDF Downloads 637
31855 Ultrasonic Techniques to Characterize and Monitor Water-in-Oil Emulsion
Authors: E. A. Alshaafi, A. Prakash
Abstract:
Oil-water emulsions are commonly encountered in various industrial operations and at different stages of crude oil production and processing. Emulsions are often difficult to track and treat and can cause a number of costly problems which need to be avoided. The characteristics of the emulsion phase can vary with crude composition and the types of impurities present in the oil. The objectives of this study are the development of ultrasonic techniques to track and characterize the emulsion phase generated during the production and cleaning of crude oil. The position of the emulsion layer is monitored with the help of ultrasonic probes suitably placed in the vessel. The sensitivity of the technique and its potential have been demonstrated through extensive testing with different oil samples. The technique is also being developed to monitor emulsion phase characteristics such as stability, composition, and droplet size distribution. The ultrasonic parameters recorded are changes in acoustic velocity, signal attenuation, and its frequency spectrum. Emulsions were prepared with a light mineral oil sample, and the effects of various factors, including mixing speed, temperature, surfactant, and solid particle concentrations, were investigated. The applied ultrasonic frequency was varied from 1 to 5 MHz to carry out a sensitivity analysis. The emulsion droplet structure is observed with optical microscopy, and stability is examined by tracking the changes in the ultrasonic parameters with time. A model based on ultrasonic attenuation spectroscopy is being developed and tested to track changes in droplet size distribution over time.
Keywords: ultrasonic techniques, emulsion, characterization, droplet size
Procedia PDF Downloads 175
31854 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination, and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writing. Although great effort has been made by previous studies to come up with various methods, their performance, especially in terms of accuracy, falls short, and room for improvement is still wide open. The proposed technique employs optimal-codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. The writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing distances between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
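The codebook pipeline (cluster fragments, take cluster centers as codewords, represent a writing by its codeword occurrence distribution, compare distributions) can be sketched as below. The descriptor dimensionality, codebook size, and chi-square distance are illustrative assumptions; the paper does not specify its exact feature vectors or distance measure.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical fragment descriptors (e.g., contour-based features) extracted
# from the beginning strokes of many training writings
train_fragments = rng.normal(size=(5000, 16))

# Codebook: the center of every group of similar fragments
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(train_fragments)

def writer_signature(fragments):
    """Probability of occurrence of each codebook pattern in one writing."""
    hist = np.bincount(codebook.predict(fragments), minlength=64).astype(float)
    return hist / hist.sum()

def chi2_distance(p, q, eps=1e-10):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

sig_a = writer_signature(rng.normal(size=(300, 16)))   # writing sample A
sig_b = writer_signature(rng.normal(size=(300, 16)))   # writing sample B
print("distance:", chi2_distance(sig_a, sig_b))        # small => same writer likely
```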
Procedia PDF Downloads 512
31853 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, achieving up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; specifically, this sensor only captures pixels whose intensity changes, so there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other side of these advantages, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by the differences in data format, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even when the data can be fed in, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. In this context, we propose to use the timestamp information as the data representation fed to the deep learning model. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp of each event in the frame; for example, a high value is given to a recent signal. We expect this data representation to capture the features of moving objects in particular, because timestamps represent movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car, to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
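The timestamp-based frame construction described here can be sketched as below; the event tuple layout and the linear recency mapping are assumptions consistent with the abstract's description, not the authors' exact encoding.

```python
import numpy as np

def events_to_timestamp_frame(events, shape, t_start, t_end):
    """Build one frame for the window [t_start, t_end): each pixel takes an
    intensity proportional to the recency of its latest event, so a recent
    signal gets a high value and stale pixels stay near zero."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, polarity, t in events:
        if t_start <= t < t_end:
            # normalized timestamp in [0, 1): later events overwrite earlier ones
            frame[y, x] = (t - t_start) / (t_end - t_start)
    return frame

# Events are (x, y, polarity, timestamp) tuples, as delivered by a DVS
events = [(10, 20, +1, 0.001), (11, 20, -1, 0.004), (10, 21, +1, 0.009)]
frame = events_to_timestamp_frame(events, shape=(480, 640),
                                  t_start=0.0, t_end=0.01)
```

Because a moving edge leaves a gradient of recency values along its direction of travel, the resulting frame encodes motion direction and speed, which a polarity-only frame cannot.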
Procedia PDF Downloads 97
31852 Blind Channel Estimation for Frequency Hopping System Using Subspace Based Method
Authors: M. M. Qasaymeh, M. A. Khodeir
Abstract:
Subspace channel estimation methods have been studied widely. They depend on a subspace decomposition of the covariance matrix to separate the signal subspace from the noise subspace. The decomposition is normally done by either Eigenvalue Decomposition (EVD) or Singular Value Decomposition (SVD) of the Auto-Correlation Matrix (ACM). However, the subspace decomposition process is computationally expensive. In this paper, the multipath channel estimation problem for a Slow Frequency Hopping (SFH) system is considered using a noise-subspace-based method. An efficient method to estimate the multipath time delays is proposed, applying the MUltiple SIgnal Classification (MUSIC) algorithm with the null space extracted by the Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed novel method decreases the computational complexity to approximately half of that of RRQR-based methods while keeping the same performance. Computer simulations are included to demonstrate the effectiveness of the proposed scheme.
Keywords: frequency hopping, channel model, time delay estimation, RRLU, RRQR, MUSIC, LS-ESPRIT
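A minimal MUSIC delay-estimation sketch on synthetic frequency-domain channel snapshots follows. For simplicity it extracts the noise subspace via EVD; the paper's contribution is precisely to replace this step with the cheaper RRLU-based null-space extraction. Delays are in normalized units and all parameters are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
n_freq, n_snap = 64, 200
f = np.arange(n_freq)                                  # normalized frequency bins
true_delays = np.array([0.05, 0.11])                   # two multipath delays
A = np.exp(-2j * np.pi * np.outer(f, true_delays))     # steering matrix
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
X = A @ S + 0.05 * (rng.standard_normal((n_freq, n_snap))
                    + 1j * rng.standard_normal((n_freq, n_snap)))

R = X @ X.conj().T / n_snap                            # auto-correlation matrix
w, V = np.linalg.eigh(R)                               # EVD stand-in for RRLU
En = V[:, :-2]                                         # noise-subspace eigenvectors

taus = np.linspace(0.0, 0.25, 1000)
steer = np.exp(-2j * np.pi * np.outer(f, taus))
# MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En
p_music = 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2
peaks, _ = find_peaks(p_music)
est = np.sort(taus[peaks[np.argsort(p_music[peaks])[-2:]]])
print("estimated delays:", est)
```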
Procedia PDF Downloads 410
31851 A Study of the Costs and Benefits of Smart City Projects Including the Scenario of Public-Private Partnerships
Authors: Patrick T. I. Lam, Wenjing Yang
Abstract:
A smart city project embraces benefits and costs, which can be classified into direct and indirect categories. Externalities come into the picture, but they are often difficult to quantify. Despite this barrier, policy makers need to carry out cost-benefit analyses to justify the huge investments needed to make a city smart. The recent trend is toward engaging the private sector to utilize its resources and expertise, especially in the Information and Communication Technology (ICT) areas, where innovations blossom. This study focuses on the identification of the costs (on a life-cycle basis) and benefits associated with smart city project developments, based on a comprehensive literature review and case studies; where public-private partnerships would warrant consideration, the related costs and benefits are highlighted. The findings will be useful for city policy makers.
Keywords: smart city projects, costs and benefits, identification, public-private partnerships
Procedia PDF Downloads 338
31850 Ultrasensitive Detection and Discrimination of Cancer-Related Single Nucleotide Polymorphisms Using Poly-Enzyme Polymer Bead Amplification
Authors: Lorico D. S. Lapitan Jr., Yihan Xu, Yuan Guo, Dejian Zhou
Abstract:
The ability to detect specific genes with ultrasensitivity and to discriminate single nucleotide polymorphisms is important for clinical diagnosis and biomedical research. Herein, we report the development of a new ultrasensitive approach for label-free DNA detection using magnetic nanoparticle (MNP) assisted rapid target capture/separation in combination with signal amplification using poly-enzyme-tagged polymer nanobeads. The sensor uses an MNP-linked capture DNA and a biotin-modified signal DNA to sandwich-bind the target, followed by ligation, to provide high single-nucleotide polymorphism discrimination. Only the presence of a perfect-match target DNA yields a covalent linkage between the capture and signal DNAs for subsequent conjugation of a neutravidin-modified horseradish peroxidase (HRP) enzyme through the strong biotin-neutravidin interaction. This converts each captured DNA target into an HRP, which can convert millions of copies of a non-fluorescent substrate (Amplex Red) into a highly fluorescent product (resorufin), for great signal amplification. The use of polymer nanobeads, each tagged with thousands of copies of HRP, as the signal amplifier greatly improves the signal amplification power, leading to greatly improved sensitivity. We show that our biosensing approach can specifically detect an unlabeled DNA target down to 10 aM with a wide dynamic range of 5 orders of magnitude (from 0.001 fM to 100.0 fM). Furthermore, our approach discriminates well between a perfectly matched gene and its cancer-related single-base-mismatch targets (SNPs): it can positively detect the perfect-match DNA target even in the presence of a 100-fold excess of co-existing SNPs. This sensing approach also works robustly in clinically relevant media (e.g., 10% human serum) and gives almost the same SNP discrimination ratio as in clean buffers. Therefore, this ultrasensitive SNP biosensor appears well suited for potential diagnostic applications in genetic diseases.
Keywords: DNA detection, polymer beads, signal amplification, single nucleotide polymorphisms
Procedia PDF Downloads 249
31849 Trace Network: A Probabilistic Relevant Pattern Recognition Approach to Attribution Trace Analysis
Authors: Jian Xu, Xiaochun Yun, Yongzheng Zhang, Yafei Sang, Zhenyu Cheng
Abstract:
Network attack prevention is a critical research area of information security. Network attacks would be suppressed if attribution techniques were capable of tracing back to the attackers after a hacking event. Attributing these attacks to a particular identification therefore becomes one of the important tasks when analysts attempt to differentiate and profile the attacker behind a piece of attack trace. To assist analysts in exposing the attackers behind the scenes, this paper researches the connections between attribution traces and proposes probabilistic-relevance-based attribution patterns. This method facilitates the evaluation of the plausibility of the relevance between different traceable identifications. Furthermore, by analyzing the connections among traces, it can confirm the existence probability of a certain organization as well as discover its affinitive partners by means of drawing a relevance matrix from the attribution traces.
Keywords: attribution trace, probabilistic relevance, network attack, attacker identification
Procedia PDF Downloads 366