Search results for: array signal processing
4946 Run-Time Customisation of Soft-Core CPUs on Field Programmable Gate Array
Authors: Rehab Abdullah Shendi
Abstract:
The use of customised soft-core processors, in which application-specific instructions are implemented in hardware, is increasing in the Field Programmable Gate Array (FPGA) field. In particular, the partial run-time reconfiguration of FPGAs in processors specialised for a particular domain can be very beneficial. This report addresses the design and implementation of the customisation of a soft-core MIPS processor using an FPGA and partial reconfiguration (PR) of FPGA technology to achieve efficient resource use. A PR design flow helps the design fit into a smaller device, and run-time reconfiguration can also reduce static power consumption. Customisation is achieved through configurable custom instructions implemented in hardware as an extension to the MIPS CPU. The aim of this project is to investigate the PR of FPGAs for run-time adaptation of the instruction set of a soft-core CPU, including the integration of custom instructions and the exploration of the potential to use the MultiBoot feature available in Xilinx FPGAs to carry out the PR process. The system is evaluated and tested on a Nexys 3 development board featuring a Xilinx Spartan-6 FPGA. The system can load reconfigurable custom instructions dynamically into user programs with the help of a trap handler invoked when the custom instruction is called by the MIPS CPU. The results of this experiment demonstrate that custom instructions in hardware can speed up a given function and save many instructions compared to a software implementation of the same function. Implementing custom instructions in hardware is perfectly possible and worth exploring.
Keywords: customisation, FPGA, MIPS, partial reconfiguration, PR
Procedia PDF Downloads 267
4945 Design of a Photovoltaic Power Generation System Based on Artificial Intelligence and Internet of Things
Authors: Wei Hu, Wenguang Chen, Chong Dong
Abstract:
In order to improve the efficiency and safety of photovoltaic power generation devices, this photovoltaic power generation system combines Artificial Intelligence (AI) and the Internet of Things (IoT) to control a sun-tracking photovoltaic power generation device, improving generation efficiency, and then to manage the converted energy. The system uses an artificial intelligence control terminal, while the executive end of the power generation device runs Linux on an Exynos4412 CPU. The power generating device collects sun image information through a Sony CCD. Several power generating devices feed data back to their CPUs for processing, and the CPUs send the data to the artificial intelligence control terminal through the Internet. The control terminal integrates the executive terminal information, time information, and environmental information to decide whether to generate electricity normally, and then whether to feed the converted electrical energy into the grid or store it in the battery pack. When the power generation environment is abnormal, the control terminal authorizes the protection strategy: the executive terminal stops power generation and enters a self-protection posture, and the control terminal synchronizes the data with the cloud. The resulting system is more intelligent, more adaptive, and has a longer service life.
Keywords: photo-voltaic power generation, the pursuit of light, artificial intelligence, internet of things, photovoltaic array, power management
Procedia PDF Downloads 123
4944 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. Nowadays, a field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to cure the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the neighboring parts of the injured cortex. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback missing due to the inability to move the affected limb may be replaced by a synthetic feedback based on the motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3, CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. The first results of several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match findings from visual EEG analysis. Frequent eyelid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time. Due to the design of the frequency transformation, the processing delay is 500 ms. First results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation
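As an illustration of the detection step described in this abstract, the following is a minimal sketch (not the authors' implementation) of picking the motor channel with the most prominent alpha-band rhythm and applying the two threshold rules; the threshold value, the margin against posterior alpha, and the synthetic test data are assumptions.

```python
import numpy as np
from scipy.signal import welch

MOTOR_CHANNELS = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_band_power(x, fs, band=(8.0, 13.0)):
    """Mean power spectral density of one channel in the alpha band."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))  # 1-s segments -> ~1 Hz resolution
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def detect_mu_rhythm(eeg, channel_names, fs, threshold, margin=2.0):
    """Return the template channel name if a mu-rhythm is detected, else None.

    eeg: array of shape (n_channels, n_samples).
    threshold, margin: assumed tuning constants, not from the paper.
    """
    powers = {ch: alpha_band_power(eeg[i], fs)
              for i, ch in enumerate(channel_names)}
    motor = {ch: p for ch, p in powers.items() if ch in MOTOR_CHANNELS}
    best_ch = max(motor, key=motor.get)
    # Reject if posterior alpha outside the motor set rivals the motor peak.
    outside = [p for ch, p in powers.items() if ch not in MOTOR_CHANNELS]
    if motor[best_ch] > threshold and motor[best_ch] > margin * max(outside):
        return best_ch  # this channel's signal becomes the spatial template
    return None

# Example with synthetic data: 16 channels, 2 s at 250 Hz, mu-rhythm on C4
rng = np.random.default_rng(0)
fs = 250
names = MOTOR_CHANNELS + ["O1", "O2", "P3", "P4", "F3", "F4", "T7"]
eeg = 0.1 * rng.standard_normal((len(names), 2 * fs))
eeg[0] += 3 * np.sin(2 * np.pi * 10 * np.arange(2 * fs) / fs)  # 10 Hz on C4
print(detect_mu_rhythm(eeg, names, fs, threshold=0.5))
```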
Procedia PDF Downloads 387
4943 Gearbox Defect Detection in the Semi Autogenous Mills Using the Vibration Analysis Technique
Authors: Mostafa Firoozabadi, Alireza Foroughi Nematollahi
Abstract:
Semi autogenous mills are designed for grinding of primary crushed ore and are the most widely used mills in concentrators globally. Any defect occurring in a semi autogenous mill can stop the production line. A gearbox is a significant part of a rotating machine or a mill, so gearbox monitoring is necessary to prevent unwanted defects. When a defect happens in a gearbox bearing, it can be transferred to the other parts of the bearing, such as the inner ring, outer ring, balls, and cage. Vibration analysis is one of the most effective and common ways to detect bearing defects in mills. The vibration signal in a mill can be produced by different parts of the mill, including the electromotor, pinion, girth gear, the various rolling bearings, and the tire. When vibration signals made by these parts are added to the gearbox vibration spectrum, accurate and timely defect detection in the gearbox becomes difficult. In this paper, a new method is proposed to detect gearbox bearing defects in a semi autogenous mill promptly and accurately, using vibration signal analysis. In this method, if the vibration values in the vibration curve increase, the probability of defect occurrence is investigated by comparing the equipment vibration values with the standard ones. Then, all vibration frequencies are extracted from the vibration signal, and the equipment defect is detected using the vibration spectrum curve. This method is implemented on the semi autogenous mills of the Golgohar mining and industrial company in Iran. The results show that the proposed method can detect bearing looseness promptly and accurately. After defect detection, the bearing is opened before the equipment fails, and predictive maintenance actions are implemented on it.
Keywords: condition monitoring, gearbox defects, predictive maintenance, vibration analysis
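The comparison of extracted vibration frequencies against expected defect frequencies can be illustrated with the classical rolling-bearing formulas. The sketch below is not from the paper; the bearing geometry, the 5x-median peak rule, and the simulated outer-race fault are assumptions for illustration.

```python
import numpy as np

def bearing_defect_frequencies(shaft_hz, n_balls, d_ball, d_pitch, contact_deg=0.0):
    """Classical characteristic defect frequencies of a rolling-element bearing."""
    c = (d_ball / d_pitch) * np.cos(np.radians(contact_deg))
    return {
        "BPFO": 0.5 * n_balls * shaft_hz * (1 - c),               # outer race
        "BPFI": 0.5 * n_balls * shaft_hz * (1 + c),               # inner race
        "BSF":  0.5 * (d_pitch / d_ball) * shaft_hz * (1 - c**2), # ball spin
        "FTF":  0.5 * shaft_hz * (1 - c),                         # cage
    }

def peaks_near(signal, fs, freqs, tol_hz=1.0):
    """Flag defect frequencies whose spectral amplitude exceeds 5x the median."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    f = np.fft.rfftfreq(len(signal), 1 / fs)
    floor = np.median(spec)
    hits = {}
    for name, f0 in freqs.items():
        band = (f >= f0 - tol_hz) & (f <= f0 + tol_hz)
        hits[name] = bool(band.any() and spec[band].max() > 5 * floor)
    return hits

# Assumed geometry: 12 balls, 20 mm balls on a 90 mm pitch circle, 10 Hz shaft
freqs = bearing_defect_frequencies(10.0, 12, 20.0, 90.0)
fs, t = 2000, np.arange(0, 4, 1 / 2000)
vib = 0.05 * np.random.default_rng(1).standard_normal(t.size)
vib += np.sin(2 * np.pi * freqs["BPFO"] * t)  # simulated outer-race fault
print(freqs)
print(peaks_near(vib, fs, freqs))
```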
Procedia PDF Downloads 463
4942 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be reduced to eliminate defects and to improve the process capability indices Cp and Cpk of the CNC milling process. The Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was applied in this study to improve the process, reduce defects, and ultimately reduce costs. The Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that lead to the target surface roughness specified by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct. The new settings also improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be used efficiently to phase out defects and improve the process capability index of the CNC milling process.
Keywords: CNC machining, six sigma, surface roughness, Taguchi methodology
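For readers unfamiliar with the design, a short sketch of the standard L9(3^4) orthogonal array and a smaller-is-better S/N analysis follows; the Ra replicates and factor labels are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0..2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-is-better response such as Ra."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra (um) replicates for each of the 9 runs (not measured data).
ra = [[1.8, 1.9], [1.5, 1.6], [1.2, 1.3], [1.7, 1.6], [1.1, 1.2],
      [1.4, 1.5], [1.3, 1.2], [1.0, 1.1], [1.6, 1.7]]
sn = np.array([sn_smaller_is_better(r) for r in ra])

# Main effect of each factor: mean S/N at each of its three levels.
for factor in range(4):
    means = [sn[L9[:, factor] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(means))  # higher S/N is better
    print(f"factor {factor}: level means {np.round(means, 2)}, best level {best}")
```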
Procedia PDF Downloads 242
4941 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we proposed a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to the proposed algorithm, which predicts the most probable open chromatin regions from whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The overlap between the predicted open chromatin regions and the regions experimentally validated by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, finding around 52.04% accordance with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the use of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid-biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
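A toy version of the depth-based idea can be sketched as follows. It keeps only the pipeline's skeleton (count normalization, a DFT low-pass, and a two-way clustering standing in for the graph-cut and correlation-clustering steps); the cutoff fraction, the synthetic coverage, and the assumption that OCRs show locally reduced cfDNA coverage are illustrative choices, not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

def predict_ocr(depth, keep_frac=0.05):
    """Toy depth-based OCR caller: normalize binned coverage, low-pass it
    with a DFT cutoff, then cluster bins into OCR+/OCR-.

    depth: 1-D array of per-bin sequencing depth along a region.
    keep_frac: fraction of low-frequency DFT terms kept (assumed, not tuned).
    """
    # Count normalization: center/scale so depth is comparable across samples.
    z = (depth - depth.mean()) / depth.std()
    # DFT low-pass: slow coverage variation along the genome is what we keep.
    spec = np.fft.rfft(z)
    cutoff = max(1, int(keep_frac * len(spec)))
    spec[cutoff:] = 0.0
    smooth = np.fft.irfft(spec, n=len(z))
    # Two-cluster split stands in for graph-cut + correlation clustering.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        smooth.reshape(-1, 1))
    # Assumed convention: the lower-coverage cluster is called OCR+.
    open_label = int(np.argmin([smooth[labels == k].mean() for k in (0, 1)]))
    return labels == open_label

rng = np.random.default_rng(2)
depth = rng.poisson(30, 2000).astype(float)
depth[800:900] -= 12  # simulated coverage dip at an open region
calls = predict_ocr(depth)
print("bins called OCR+:", calls.sum(), "| dip recovered:", calls[800:900].mean())
```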
Procedia PDF Downloads 150
4940 Excitation Modeling for Hidden Markov Model-Based Speech Synthesis Based on Wavelet Analysis
Authors: M. Kiran Reddy, K. Sreenivasa Rao
Abstract:
The conventional Hidden Markov Model (HMM)-based speech synthesis system (HTS) uses only a pulse excitation model, which differs significantly from the natural excitation signal. Hence, buzziness can be perceived in speech generated using HTS. This paper proposes an efficient excitation modeling method that can significantly reduce the buzziness and improve the quality of HMM-based speech synthesis. The proposed approach models the pitch-synchronous residual frames extracted from the residual excitation signal. Each pitch-synchronous residual frame is parameterized using 30 wavelet coefficients, which are found to accurately capture the perceptually important information present in the residual waveform. In the synthesis phase, the residual frames are reconstructed from the generated wavelet coefficients and are pitch-synchronously overlap-added to generate the excitation signal. The proposed excitation modeling method is integrated into an HMM-based speech synthesis system. Evaluation results indicate that the speech synthesized with the proposed excitation model is significantly better than speech generated using state-of-the-art excitation modeling methods.
Keywords: excitation modeling, hidden Markov models, pitch-synchronous frames, speech synthesis, wavelet coefficients
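A minimal sketch of the frame parameterization and overlap-add resynthesis, using PyWavelets: each residual frame is reduced to its 30 largest-magnitude wavelet coefficients and reconstructed by the inverse transform. The wavelet family, decomposition level, frame length, and random test frames are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def encode_frame(frame, n_keep=30, wavelet="db4", level=4):
    """Keep the n_keep largest-magnitude wavelet coefficients of one
    pitch-synchronous residual frame; zero the rest."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    idx = np.argsort(np.abs(flat))[:-n_keep]
    flat[idx] = 0.0
    return flat, slices

def decode_frame(flat, slices, wavelet="db4"):
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(coeffs, wavelet)

def overlap_add(frames, hop):
    """Pitch-synchronous overlap-add of reconstructed residual frames."""
    out = np.zeros(hop * (len(frames) - 1) + len(frames[0]))
    for i, fr in enumerate(frames):
        out[i * hop : i * hop + len(fr)] += np.hanning(len(fr)) * fr
    return out

# Toy residual: 4 frames of 160 samples (one pitch period at 100 Hz, fs=16 kHz)
rng = np.random.default_rng(3)
frames = [rng.standard_normal(160) for _ in range(4)]
recon = [decode_frame(*encode_frame(f))[:160] for f in frames]
excitation = overlap_add(recon, hop=80)
print(excitation.shape)
```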
Procedia PDF Downloads 248
4939 Additive Manufacturing Optimization Via Integrated Taguchi-Gray Relation Methodology for Oil and Gas Component Fabrication
Authors: Meshal Alsaiari
Abstract:
Fused Deposition Modeling (FDM) is one of the additive manufacturing technologies the industry is shifting to nowadays due to its simplicity and affordable cost. The fabrication processing parameters predominantly influence the strength and mechanical properties of FDM parts. This work demonstrates the influence of two manufacturing parameters, infill density and printing orientation, on the tensile testing evaluation indexes used to design a piping spacer suitable for oil and gas applications. The tensile specimens are made of two polymers, acrylonitrile styrene acrylate (ASA) and high-impact polystyrene (HIPS), to characterize the mechanical performance of the final product. The mechanical testing was carried out per the ASTM D638 standard, following Type IV requirements. A Taguchi experimental design using an L9 orthogonal array was used to evaluate the performance output and identify the optimal manufacturing factors. The experimental results demonstrate that the tensile response is most pronounced at 100% infill for both ASA and HIPS samples. The printing orientations, however, produced different responses: ASA is strongest at 0 degrees, while HIPS shows almost identical values at 45 and 90 degrees. An integrated Taguchi-Gray methodology was adopted to minimize the response and identify the optimal combination of fabrication factors.
Keywords: FDM, ASTM D638, tensile testing, acrylonitrile styrene acrylate
Procedia PDF Downloads 93
4938 Developing Wearable EMG Sensor Designed for Parkinson's Disease (PD) Monitoring, and Treatment
Authors: Bulcha Belay Etana
Abstract:
Electromyography is used to measure the electrical activity of muscles for various health monitoring applications using surface electrodes or needle electrodes. Recent developments in electromyogram signal acquisition using textile electrodes open the door to wearable health monitoring, which enables patients to monitor and control their health issues outside of traditional healthcare facilities. The aim of this research is therefore to develop and analyze wearable textile electrodes for the acquisition of electromyography signals from Parkinson's patients and to apply an appropriate thermal stimulus to relieve muscle cramping. To achieve this, textile electrodes were sewn with a silver-coated thread in an overlapping zigzag pattern into an inextensible fabric, stainless steel knitted textile electrodes attached to a sleeve were prepared, and their electrical characteristics, including signal-to-noise ratio, were compared with traditional electrodes. To relieve muscle cramping, a heating element made of stainless steel conductive yarn sewn onto a cotton fabric, coupled with a vibration system, was developed. The system was integrated using a microcontroller and a Myoware muscle sensor, so that when the system detects muscle cramping, it activates the heating elements and vibration motors. The optimum temperature considered for treatment was 35.5 °C, so a temperature measurement system was incorporated to deactivate the heating system once this threshold is reached and the signals indicating muscle cramping have subsided. The textile electrode exhibited a signal-to-noise ratio of 6.38 dB, while the signal-to-noise ratio of the traditional electrode was 7.05 dB. The rise time of the developed heating element was about 6 minutes to reach the optimum temperature using a 9 V power supply. Treating muscle cramping in Parkinson's patients using heat and muscle vibration simultaneously with a wearable electromyography signal acquisition system will improve patients' quality of life and enable better chronic pain management.
Keywords: electromyography, heating textile, vibration therapy, Parkinson's disease, wearable electronic textile
Procedia PDF Downloads 135
4937 Cognitive Dysfunctioning and the Fronto-Limbic Network in Bipolar Disorder Patients: An fMRI Meta-Analysis
Authors: Rahele Mesbah, Nic Van Der Wee, Manja Koenders, Erik Giltay, Albert Van Hemert, Max De Leeuw
Abstract:
Introduction: Patients with bipolar disorder (BD), characterized by depressive and manic episodes, often suffer from cognitive dysfunction. An up-to-date meta-analysis of functional Magnetic Resonance Imaging (fMRI) studies examining cognitive function in BD is lacking. Objective: The aim of the current fMRI meta-analysis is to investigate the brain functioning of bipolar patients compared with healthy subjects within three domains: emotion processing, reward processing, and working memory. Method: Differences in brain region activation were tested in a whole-brain analysis using the activation likelihood estimation (ALE) method. Separate analyses were performed for each cognitive domain. Results: A total of 50 fMRI studies were included: 20 studies used an emotion processing task (316 BD and 369 HC), 9 studies a reward processing task (215 BD and 213 HC), and 21 studies a working memory task (503 BD and 445 HC). During emotion processing, BD patients hyperactivated parts of the left amygdala and hippocampus compared to HCs, but showed hypoactivation in the inferior frontal gyrus (IFG). Regarding reward processing, BD patients showed hyperactivation in part of the orbitofrontal cortex (OFC). During working memory, BD patients showed increased activity in the prefrontal cortex (PFC) and anterior cingulate cortex (ACC). Conclusions: This meta-analysis revealed evidence for activity disturbances in several brain areas involved in the cognitive functioning of BD patients. Furthermore, most of the identified regions are part of the so-called fronto-limbic network, which is hypothesized to be affected as a result of the expression of BD candidate genes.
Keywords: cognitive functioning, fMRI analysis, bipolar disorder, fronto-limbic network
Procedia PDF Downloads 462
4936 Spatial Audio Player Using Musical Genre Classification
Authors: Jun-Yong Lee, Hyoung-Gook Kim
Abstract:
In this paper, we propose a smart music player that combines musical genre classification and spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. Thereafter, the spatial sound is boosted with genre-dependent frequency gains when played back. Experiments measured the accuracy of detecting musical segments in the audio stream and of classifying their genre. A listening test evaluated the spatial audio processing based on the virtual acoustic space.
Keywords: automatic equalization, genre classification, music segment detection, spatial audio processing
Procedia PDF Downloads 429
4935 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values
Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie
Abstract:
Iterative Learning Control (ILC) is known as a control technique for overcoming periodic disturbances in repetitive systems. The technique drives the error signal towards zero as the number of operations increases. The learning process in this context depends strongly on the initial input which, if selected properly, makes the learning process more effective than starting blind. ILC uses previously recorded execution data to update the input of the following execution/trial so that a reference trajectory is followed to high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant at trial 1; thus, a good choice of the initial input signal makes learning faster and, as a consequence, lets the error tend to zero faster as well. In the work presented here, an upper limit based on the singular values principle (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, for example a robot arm, is required to move. Simulation results illustrate the theory introduced in this paper.
Keywords: initial input, iterative learning control, maximum input, singular values
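In the standard lifted-system formulation (y = G u per trial), the largest singular value gives one conservative bound of the kind described here: since ||G u|| <= sigma_max(G)||u||, any trial-1 input with ||u_1|| <= ||r|| / sigma_max(G) keeps the first-trial output inside the reference envelope. The sketch below uses an assumed first-order plant and illustrates the principle; it is not the paper's derivation.

```python
import numpy as np

def lifted_plant(impulse_response, n):
    """Lower-triangular Toeplitz matrix G of a discrete-time plant, so that
    the lifted trial dynamics are y = G @ u."""
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            G[i, j] = impulse_response[i - j]
    return G

def max_initial_input_norm(G, reference):
    """Upper bound on ||u_1|| such that ||G u_1|| <= ||r||.

    Since ||G u|| <= sigma_max(G) ||u||, any u_1 with
    ||u_1|| <= ||r|| / sigma_max(G) keeps the first-trial output
    inside the reference envelope (a sufficient, conservative bound).
    """
    sigma_max = np.linalg.svd(G, compute_uv=False)[0]
    return np.linalg.norm(reference) / sigma_max

# Assumed FIR plant with impulse response h[k] = 0.8**k over one trial
n = 50
h = 0.8 ** np.arange(n)
G = lifted_plant(h, n)
r = np.sin(np.linspace(0, np.pi, n))  # reference trajectory over one trial
u_max = max_initial_input_norm(G, r)
print(f"sigma_max = {np.linalg.svd(G, compute_uv=False)[0]:.3f}, "
      f"max ||u_1|| = {u_max:.3f}")
```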
Procedia PDF Downloads 241
4934 2D Point Clouds Features from Radar for Helicopter Classification
Authors: Danilo Habermann, Aleksander Medella, Carla Cremon, Yusef Caceres
Abstract:
This paper analyzes the ability of 2D point cloud features to classify different models of helicopters using radar. The method does not need to estimate the blade length or the number of blades of the helicopters, nor the period of their micro-Doppler signatures. It is also not necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish 9 different helicopter models in order to analyze the performance of the features used in this work. The high accuracy obtained with each of the classifiers demonstrates that 2D point cloud features are very useful for classifying helicopters from radar signals.
Keywords: helicopter classification, point clouds features, radar, supervised classifiers
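A rough sketch of the pipeline implied by the abstract: form (time, amplitude) points from the return, compute simple distributional features of the cloud, and train a supervised classifier. The feature set, the synthetic three-class returns, and the random-forest choice are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def to_point_cloud(signal, fs):
    """Represent a radar return as 2-D points (sample time, amplitude)."""
    t = np.arange(signal.size) / fs
    return np.column_stack([t, signal])

def cloud_features(cloud):
    """Simple shape statistics of the 2-D point cloud (assumed feature set)."""
    amp = cloud[:, 1]
    eigvals = np.linalg.eigvalsh(np.cov(cloud.T))
    return np.array([amp.mean(), amp.std(), np.abs(amp).max(),
                     eigvals[0], eigvals[1],               # principal spreads
                     ((amp[:-1] * amp[1:]) < 0).mean()])   # zero-crossing rate

# Synthetic stand-in for returns from 3 helicopter classes (different rotor rates)
rng, fs = np.random.default_rng(4), 1000
X, y = [], []
for label, f_rotor in enumerate([18.0, 24.0, 31.0]):
    for _ in range(40):
        t = np.arange(0, 1, 1 / fs)
        sig = np.sin(2 * np.pi * f_rotor * t) ** 5 \
              + 0.3 * rng.standard_normal(t.size)
        X.append(cloud_features(to_point_cloud(sig, fs)))
        y.append(label)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```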
Procedia PDF Downloads 227
4933 Oleic Acid Enhances Hippocampal Synaptic Efficacy
Authors: Rema Vazhappilly, Tapas Das
Abstract:
Oleic acid is a cis unsaturated fatty acid and is known to be a partially essential fatty acid due to its limited endogenous synthesis during pregnancy and lactation. Previous studies have demonstrated the role of oleic acid in neuronal differentiation and brain phospholipid synthesis. This evidence indicates a major role for oleic acid in learning and memory. Interestingly, oleic acid has been shown to enhance hippocampal long term potentiation (LTP), the physiological correlate of long term synaptic plasticity. However, the effect of oleic acid on short term synaptic plasticity has not been investigated. Short term potentiation (STP) is the physiological correlate of short term synaptic plasticity, which is the key molecular mechanism underlying short term memory and neuronal information processing. STP in the hippocampal CA1 region is known to require the activation of N-methyl-D-aspartate receptors (NMDARs). NMDAR-dependent hippocampal STP as a potential mechanism for short term memory has been a subject of intense interest for the past few years. Therefore, in the present study the effect of oleic acid on NMDAR-dependent hippocampal STP was determined in mouse hippocampal slices (in vitro) using a multi-electrode array system. STP was induced by weak tetanic stimulation (one train of 100 Hz stimulation for 0.1 s) of the Schaffer collaterals of the CA1 region of the hippocampus in slices treated with different concentrations of oleic acid in the presence or absence of the NMDAR antagonist D-AP5 (30 µM). Oleic acid at 20 µM (mean increase in fEPSP amplitude = ~135% vs. control = 100%; P<0.001) and 30 µM (mean increase in fEPSP amplitude = ~280% vs. control = 100%; P<0.001) significantly enhanced the STP following weak tetanic stimulation. A lower oleic acid concentration of 10 µM did not modify the hippocampal STP induced by weak tetanic stimulation. The hippocampal STP induced by weak tetanic stimulation was completely blocked by D-AP5 (30 µM) in both oleic acid-treated and control hippocampal slices, leading to the conclusion that the STP elicited by weak tetanic stimulation and enhanced by oleic acid was NMDAR-dependent. Together, these findings suggest that oleic acid may enhance short term memory and neuronal information processing through the modulation of NMDAR-dependent hippocampal short-term synaptic plasticity. In conclusion, this study suggests a possible role for oleic acid in preventing short term memory loss and impaired neuronal function throughout development.
Keywords: oleic acid, short-term potentiation, memory, field excitatory post synaptic potentials, NMDA receptor
Procedia PDF Downloads 335
4932 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study
Authors: Cecile Laval, Harriet Lowe
Abstract:
Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect the form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners' cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants' default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to instructional treatment. One group received full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (French past tense imperfective aspect). The second group received the same Processing Instruction treatment except for the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (French past tense imperfective aspect, French subjunctive used for the expression of doubt, and the French causative construction with faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners' processing of the French target features after instruction, with improvement in the interpretation of all three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants improved on the secondary target item (French subjunctive used for the expression of doubt), and 37.5% of participants improved on the cumulative target feature (French causative construction with faire). Statistically, there was no significant difference between the pre-test and post-test scores on the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity to study the unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrate changes in participants' processing strategies: gaze plots from pre- and post-tests display participants' fixation points shifting from content words to the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences for both the primary and secondary target features. This paper presents the research methodology, design, and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. It then provides evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offers suggestions for the teaching of second language grammar.
Keywords: eye-tracking, language teaching, processing instruction, second language acquisition
Procedia PDF Downloads 279
4931 Optimisation of Wastewater Treatment for Yeast Processing Effluent Using Response Surface Methodology
Authors: Shepherd Manhokwe, Sheron Shoko, Cuthbert Zvidzai
Abstract:
In the present study, the interactive effects of temperature and cultured bacteria on the performance of a biological treatment system for yeast processing wastewater were investigated. The main objective was to investigate and optimize the operating parameters that reduce organic load and colour. Experiments were conducted based on a Central Composite Design (CCD) and analysed using Response Surface Methodology (RSM). Three dependent parameters were either directly measured or calculated as responses: total Chemical Oxygen Demand (COD) removal, colour reduction, and total solids. A COD removal efficiency of 26% and a decolourization efficiency of 44% were recorded for the wastewater treatment. The optimized conditions for the biological treatment were found to be 20 g/l cultured bacteria at 25 °C for COD reduction, and 20 g/l cultured bacteria at 30.35 °C for colour reduction. Biological treatment of baker's yeast processing effluent is a suitable process for the removal of organic load and colour from wastewater, especially when the operating parameters are optimized.
Keywords: COD reduction, optimisation, response surface methodology, yeast processing wastewater
Procedia PDF Downloads 344
4930 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study in which the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. This study shows that the Six Sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of the injection molding process.
Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design
Procedia PDF Downloads 178
4929 Image Reconstruction Method Based on L0 Norm
Authors: Jianhong Xiang, Hao Xiang, Linyu Wang
Abstract:
Compressed sensing (CS) has a wide range of applications in sparse signal reconstruction. Aiming at the problems of low recovery accuracy and long reconstruction time of existing reconstruction algorithms in medical imaging, this paper proposes a corrected smoothed L0 algorithm based on compressed sensing (CSL0). First, an approximate hyperbolic tangent function (AHTF) that more closely resembles the L0 norm is proposed as its approximation. Secondly, in view of the 'sawtooth phenomenon' of the steepest descent method and the sensitivity of the modified Newton method to the choice of initial value, the steepest descent method and the modified Newton method are jointly optimized to improve the reconstruction accuracy. Finally, the CSL0 algorithm is simulated on various images. The results show that the proposed algorithm improves the reconstruction accuracy of the test images by 0–0.98 dB.
Keywords: smoothed L0, compressed sensing, image processing, sparse reconstruction
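The following sketch shows the general smoothed-L0 iteration this abstract builds on: a smooth surrogate of the L0 norm (here a tanh-type function standing in for the paper's AHTF) is minimized by gradient steps with projection back onto the constraint set, while the smoothing parameter is annealed. Step sizes and schedules are assumed, and the plain gradient step replaces the paper's combined steepest-descent/modified-Newton scheme.

```python
import numpy as np

def csl0(A, y, sigma_min=1e-3, sigma_decay=0.7, inner=5, mu=2.0):
    """Smoothed-L0 sparse recovery with a tanh-type surrogate for ||x||_0.

    The surrogate f_sigma(x) = tanh(x^2 / (2 sigma^2)) tends to 1{x != 0}
    as sigma -> 0 (standing in for the paper's AHTF; mu and the sigma
    schedule are assumed values, not the paper's).
    """
    pinv = A.T @ np.linalg.inv(A @ A.T)       # for projection onto {x: Ax = y}
    x = pinv @ y                               # minimum-L2 feasible start
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(inner):
            u = x ** 2 / (2 * sigma ** 2)
            grad = (x / sigma ** 2) * (1.0 - np.tanh(u) ** 2)
            x = x - mu * sigma ** 2 * grad     # descend on sum f_sigma
            x = x - pinv @ (A @ x - y)         # project back onto Ax = y
        sigma *= sigma_decay
    return x

# Recover a 10-sparse vector from 80 random measurements of dimension 256
rng = np.random.default_rng(5)
n, m, k = 256, 80, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = csl0(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```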
Procedia PDF Downloads 115
4928 A Genetic-Neural-Network Modeling Approach for Self-Heating in GaN High Electron Mobility Transistors
Authors: Anwar Jarndal
Abstract:
In this paper, a genetic-neural-network (GNN) based large-signal model for GaN HEMTs is presented along with its parameter extraction procedure. The model is easy to construct and implement in CAD software and requires only DC and S-parameter measurements. An improved decomposition technique is used to model the self-heating effect. Two GNN models are constructed to simulate the isothermal drain current and the power dissipation, respectively. The two models are then combined to simulate the drain current. The modeling procedure was applied to a packaged GaN-on-Si HEMT, and the developed model is validated by comparing its large-signal simulation with measured data. Very good agreement between simulation and measurement is obtained.
Keywords: GaN HEMT, computer-aided design and modeling, neural networks, genetic optimization
Procedia PDF Downloads 382
4927 Improving Capability of Detecting Impulsive Noise
Authors: Farbod Rohani, Elyar Ghafoori, Matin Saeedkondori
Abstract:
Impulsive noise is an electromagnetic emission generated by many household appliances attached to the electrical network. The main difficulty in eliminating impulsive noise (IN) from communication channels is distinguishing it from the transmitted signal and, more importantly, choosing the proper threshold bandwidth for eliminating the noise. Exploiting the wideband property of impulsive noise, we present a novel method for setting the detection threshold that takes advantage of the fact that the impulsive noise bandwidth is usually wider than that of typical communication channels, and specifically the OFDM channel. After the IN detection procedure, we apply simple windowing mechanisms to remove the impulses from the communication channel.
Keywords: impulsive noise, OFDM channel, threshold detecting, windowing mechanisms
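One way to realize the out-of-band detection idea in this abstract is sketched below: high-pass the received signal above the communication band, flag samples whose envelope exceeds a robust threshold, and blank them with a smooth window. The filter order, cutoff, threshold factor, and guard length are assumptions, not the authors' design.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def suppress_impulsive_noise(rx, fs, signal_band_hz, k_thresh=4.0, guard=8):
    """Detect impulses from energy above the communication band, then
    blank them with a smooth window (cutoff/threshold are assumed values)."""
    # Impulsive noise is wideband, so look where the OFDM signal is not.
    b, a = butter(4, signal_band_hz / (fs / 2), btype="highpass")
    out_of_band = filtfilt(b, a, rx)
    env = np.abs(out_of_band)
    thresh = k_thresh * np.median(env)      # robust noise-floor estimate
    hits = np.flatnonzero(env > thresh)
    mask = np.ones_like(rx)
    for i in hits:
        lo, hi = max(0, i - guard), min(rx.size, i + guard + 1)
        w = 1.0 - np.hanning(hi - lo)        # dips to zero at the impulse
        mask[lo:hi] = np.minimum(mask[lo:hi], w)
    return rx * mask, hits

fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
clean = np.sin(2 * np.pi * 3_000 * t)        # stand-in for a signal below 6 kHz
rx = clean.copy()
rx[[500, 1400, 2000]] += 8.0                 # three injected impulses
cleaned, hits = suppress_impulsive_noise(rx, fs, signal_band_hz=8_000)
print("impulse samples flagged:", hits[:10])
```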
Procedia PDF Downloads 341
4926 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System
Authors: Ying-Pin Chang
Abstract:
This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) for the planning of harmonic filters for a high speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the cost of the filter, the filter losses, and the total harmonic distortion of the currents and voltages at each bus. An orthogonal array experiment is first conducted to obtain the initial solution set, which is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by using matrix experiments with an orthogonal array, in which a minimal number of experiments has an effect that approximates full factorial experiments. This PSO-NTVEOA method is then applied to design optimal harmonic filters for the Taiwan High Speed Rail (THSR) traction system, in which both rectifiers and inverters with IGBTs are used. The results of the illustrative examples verify the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter for the THSR system and show that the design approach can greatly reduce the harmonic distortion. Three design schemes are compared: the V-V connection suppressing the 3rd-order harmonic, and the Scott and Le Blanc connections, whose harmonic improvement is better than that of the V-V connection.
Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers
Procedia PDF Downloads 392
4925 An Application-Driven Procedure for Optimal Signal Digitization of Automotive-Grade Ultrasonic Sensors
Authors: Mohamed Shawki Elamir, Heinrich Gotzig, Raoul Zoellner, Patrick Maeder
Abstract:
In this work, a methodology is presented for identifying the optimal digitization parameters for the analog signal of ultrasonic sensors: the resolution of the analog-to-digital conversion and the sampling rate. This is accomplished through the derivation of characteristic curves based on the Fano inequality and the calculation of the mutual information content over a given dataset. The mutual information is calculated between the examples in the dataset and the corresponding variation in the feature that needs to be estimated. The optimal parameters are identified in a manner that ensures optimal estimation performance while avoiding unnecessarily powerful analog-to-digital converters.
Keywords: analog to digital conversion, digitization, sampling rate, ultrasonic
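A histogram-based estimate of the mutual information between the digitized sensor output and the quantity to be estimated can be scanned over candidate resolutions, as sketched below; the ADC model, the synthetic echo-versus-distance data, and the bin counts are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def quantize(x, n_bits, full_scale):
    """Uniform ADC model: clip to full scale, round to 2**n_bits codes."""
    levels = 2 ** n_bits
    xq = np.clip(x / full_scale, -1, 1 - 1e-12)
    return np.floor((xq + 1) / 2 * levels).astype(int)

def mi_bits_per_echo(distance, echo_peak, n_bits, full_scale, n_dist_bins=32):
    """Mutual information (bits) between the digitized echo amplitude and
    the target distance, estimated from histograms."""
    d_binned = np.digitize(distance, np.linspace(distance.min(),
                                                 distance.max(), n_dist_bins))
    codes = quantize(echo_peak, n_bits, full_scale)
    return mutual_info_score(d_binned, codes) / np.log(2)  # nats -> bits

# Synthetic dataset: echo amplitude decays with distance, plus sensor noise
rng = np.random.default_rng(6)
distance = rng.uniform(0.2, 5.0, 20_000)          # metres (assumed range)
echo = 1.0 / distance ** 2 + 0.02 * rng.standard_normal(distance.size)
for bits in (4, 6, 8, 10, 12):
    print(bits, "bits ->", round(mi_bits_per_echo(distance, echo, bits, 30.0), 3))
```

The information content typically saturates beyond some resolution, which is exactly the point at which a more powerful converter stops paying off.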
Procedia PDF Downloads 207
4924 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for the visualization and segmentation of electroencephalogram (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents changes in mental state using the powers of the different frequency bands in the form of a segmented EEG signal. Many segmentation algorithms suggested in the literature, with applications in brain-computer interfaces, epilepsy, and cognition studies, have been used for data classification; the proposed method, however, focuses mainly on better presentation of the signal, which makes it a useful visualization tool for clinicians. The algorithm performs basic filtering using band pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by a principal component analysis and wavelet-transform based de-noising method. Frequency domain features are used for segmentation, reflecting the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data is displayed second by second, successively, with different color codes. The segment length can be selected according to the needs of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended for real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
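A compact sketch of the core segmentation rule: compute relative band powers per epoch with Welch's method and label each epoch by its dominant band, which a display layer can then color-code. The epoch length, band edges, and synthetic test signal are assumptions, not the tool's exact settings.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(epoch, fs):
    """Relative power of each EEG band in one epoch (Welch PSD)."""
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * int(fs)))
    total = pxx[(f >= 0.5) & (f <= 45)].sum()
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

def segment(eeg, fs, epoch_s=1.0):
    """Label each epoch by its dominant relative band power; a display
    layer can then color-code the segments (epoch length assumed 1 s)."""
    n = int(epoch_s * fs)
    labels = []
    for start in range(0, eeg.size - n + 1, n):
        bp = band_powers(eeg[start:start + n], fs)
        labels.append(max(bp, key=bp.get))
    return labels

fs = 256
t = np.arange(0, 6, 1 / fs)
eeg = np.where(t < 3, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))
eeg += 0.2 * np.random.default_rng(7).standard_normal(t.size)
print(segment(eeg, fs))  # alpha-dominated epochs, then beta-dominated
```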
Procedia PDF Downloads 397
4923 The Influence of Different Technologies on the Infiltration Properties and Soil Surface Crusting Processing in the North Bohemia Region
Authors: Miroslav Dumbrovsky, Lucie Larisova
Abstract:
The infiltration characteristic of the soil surface is one of the major factors that determine the potential soil degradation risk. The physical, chemical, and biological characteristics of soil are changed by soil processing. The infiltration ability of soil plays an important role in soil and water conservation. The subject of this contribution is the evaluation of the influence of conventional tillage and reduced tillage technology on soil surface crusting and the infiltration properties of the soil in the North Bohemia region. Field experimental work was carried out in the years 2013-2016 on a medium-heavy clayey Cambisol. The research was conducted on sloping, erosion-endangered blocks of compacted arable land. The areas were chosen each year so that one experimental area was handled by conventional tillage technology and the other by reduced tillage technology. Intact soil samples were taken into Kopecký's cylinders at three landscape positions, at depths of 10 cm (representing topsoil) and 30 cm (representing subsoil). The cumulative infiltration was measured using a mini-disc infiltrometer near the sampling points. The Zhang method (1997), which provides an estimate of the unsaturated hydraulic conductivity K(h), was used for the evaluation of the mini-disc infiltrometer tests. The soil profile processed by conventional tillage showed a higher degree of compaction and soil crusting. The bulk density was between 1.10–1.67 g.cm⁻³, compared to the land processed by the reduced tillage technology, where the values were between 0.80–1.29 g.cm⁻³. Unsaturated hydraulic conductivity values were about one-third higher under the reduced tillage technology.
Keywords: soil crusting processing, unsaturated hydraulic conductivity, cumulative infiltration, bulk density, porosity
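The Zhang (1997) evaluation fits cumulative infiltration as I(t) = C1·t + C2·√t and estimates the unsaturated hydraulic conductivity as K = C1/A, where A is tabulated from the van Genuchten soil parameters, the disc radius, and the applied suction. A least-squares sketch follows; the readings and the value of A are hypothetical.

```python
import numpy as np

def zhang_hydraulic_conductivity(t, infiltration, A):
    """Fit I(t) = C1*t + C2*sqrt(t) by least squares (Zhang, 1997) and
    return K = C1 / A. A depends on the van Genuchten parameters, disc
    radius, and suction (a table lookup; the value used here is assumed)."""
    X = np.column_stack([t, np.sqrt(t)])
    (c1, c2), *_ = np.linalg.lstsq(X, infiltration, rcond=None)
    return c1 / A, (c1, c2)

# Hypothetical mini-disc readings: time (s) and cumulative infiltration (cm)
t = np.array([30, 60, 90, 120, 150, 180, 210, 240], dtype=float)
I = np.array([0.21, 0.40, 0.58, 0.76, 0.93, 1.10, 1.26, 1.43])
K, (c1, c2) = zhang_hydraulic_conductivity(t, I, A=2.4)  # A assumed for loam
print(f"C1 = {c1:.2e} cm/s, K = {K:.2e} cm/s")
```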
Procedia PDF Downloads 247
4922 Omni-Relay (OR) Scheme-Aided LTE-A Communication Systems
Authors: Hassan Mahasneh, Abu Sesay
Abstract:
We propose the use of relay terminals at the cell edge of an LTE-based cellular system. Each relay terminal is equipped with an omni-directional antenna; we refer to this scheme as the Omni-Relay (OR) scheme. The OR scheme coordinates the inter-cell interference (ICI) stemming from adjacent cells and increases the desired signal level at cell-edge regions. To validate the performance of the OR scheme, we derive the average signal-to-interference plus noise ratio (SINR) and the average capacity and compare them with the conventional universal frequency reuse factor (UFRF). The results show that the proposed OR scheme provides higher average SINR and average capacity than the UFRF due to the assistance of the distributed relay nodes.
Keywords: the UFRF scheme, the OR scheme, ICI, relay terminals, SINR, spectral efficiency
Procedia PDF Downloads 341
4921 Accurate Position Electromagnetic Sensor Using Data Acquisition System
Authors: Z. Ezzouine, A. Nakheli
Abstract:
This paper presents a high-precision position electromagnetic sensor system (HPESS) that is applicable to moving object detection. The authors have developed a high-performance position sensor prototype dedicated to a students' laboratory. The challenge was to obtain a highly accurate, real-time sensor able to calculate position, length, or displacement. An electromagnetic solution based on a two-coil induction principle was adopted. The HPESS converts mechanical motion to electric energy through direct contact. The output signal can then be fed to an electronic circuit. The voltage output change from the sensor is captured by a data acquisition system using LabVIEW software, and the displacement of the moving object is determined. The measured data are transmitted to a PC in real time via a DAQ (NI USB-6281). This paper also describes the data acquisition analysis and the conditioning card developed specially for sensor signal monitoring. The data is then recorded and viewed using a user interface written with National Instruments LabVIEW software. On-line displays of the time and voltage of the sensor signal provide a user-friendly data acquisition interface. The sensor provides an uncomplicated, accurate, reliable, inexpensive transducer for highly sophisticated control systems.
Keywords: electromagnetic sensor, accuracy, data acquisition, position measurement
Procedia PDF Downloads 285
4920 Signal On-Off Ratio and Output Frequency Analysis of Semiconductor Electron-Interference Device
Authors: Tomotaka Aoki, Isao Tomita
Abstract:
We examined the on-off ratio and frequency components of the output signals from an electron-interference device made of GaAs/AlₓGa₁₋ₓAs by solving the time-dependent Schrödinger equation for the conducting electrons in the channel waveguide of the device. For electron-wave modulation, a periodic voltage of frequency f was applied to the channel. Furthermore, we examined the voltage-amplitude dependence of the signals in the time and frequency domains and found that a large applied voltage deformed the output-signal waveform and created additional side modes (frequencies) near the modulation frequency f, and that there was a trade-off between the on-off ratio and side-mode creation.
Keywords: electrical conduction, electron interference, frequency spectrum, on-off ratio
Procedia PDF Downloads 121
4919 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite
Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar
Abstract:
This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on the Taguchi design of experiments method and adopts Grey Relational Analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters considering weighted output response characteristics using grey relational analysis. The output response characteristics considered are surface roughness, burr height, and hole diameter error, under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments, and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height, and hole diameter error. The results reveal that the combination of Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
Keywords: metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error
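The GRA step that collapses the three responses into a single grade can be sketched as follows; the run data and weights are hypothetical, and all three responses are treated as smaller-is-better, as in the study.

```python
import numpy as np

def grey_relational_grade(responses, weights=None, zeta=0.5):
    """Grey relational analysis for smaller-is-better responses.

    responses: (n_runs, n_responses) array; weights: per-response weights.
    Returns one grade per run; the highest grade marks the best run.
    """
    r = np.asarray(responses, dtype=float)
    # 1) Normalize each response to [0, 1], smaller-is-better.
    norm = (r.max(axis=0) - r) / (r.max(axis=0) - r.min(axis=0))
    # 2) Deviation from the ideal sequence (all ones after normalization).
    delta = 1.0 - norm
    # 3) Grey relational coefficient with distinguishing coefficient zeta.
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    # 4) Weighted average over responses -> grey relational grade.
    w = np.full(r.shape[1], 1 / r.shape[1]) if weights is None \
        else np.asarray(weights, dtype=float)
    return grc @ (w / w.sum())

# Hypothetical runs: [surface roughness (um), burr height (mm), dia. error (mm)]
runs = np.array([[2.1, 0.30, 0.050], [1.8, 0.22, 0.042], [2.5, 0.35, 0.061],
                 [1.6, 0.25, 0.038], [2.0, 0.28, 0.047]])
grades = grey_relational_grade(runs, weights=[0.5, 0.25, 0.25])
print("grades:", np.round(grades, 3), "| best run:", int(np.argmax(grades)))
```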
Procedia PDF Downloads 317
4918 On the Implementation of the Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems
Authors: Hala Zaghloul, Taymoor Nazmy
Abstract:
One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model, derived from a mammalian neural model, with great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization, and compressed encoding. PCNN has unique features among neural network types, which make it a candidate for an important role in perception for cognitive systems. This work shows and emphasizes the potential of PCNN to perform different tasks related to image processing. The main obstacle that prevents the direct implementation of this technique is the need to find a way to control the PCNN parameters so that they perform a specific task. This paper evaluates the performance of the standard PCNN model for processing images with different properties, selects the important parameters that give significant results, and discusses approaches towards adapting the PCNN parameters to perform a specific task.
Keywords: cognitive system, image processing, segmentation, PCNN kernels
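The standard PCNN iteration referred to here is compact enough to sketch: feeding and linking inputs decay and accumulate neighbour pulses, the internal activity is U = F(1 + βL), and neurons fire where U exceeds a dynamic threshold that jumps after each pulse. All parameter values below are illustrative defaults, which is precisely the tuning difficulty the abstract points out.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(img, n_iter=5, beta=0.2, aF=0.1, aL=1.0, aT=0.3,
         VF=0.1, VL=0.2, VT=20.0):
    """Standard PCNN iteration; returns the firing map of each pass.
    Parameter values are illustrative, not tuned for a specific task."""
    img = img / img.max()
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    F = np.zeros_like(img)
    L = np.zeros_like(img)
    Y = np.zeros_like(img)
    T = np.full_like(img, 0.5)               # initial threshold (assumed)
    fires = []
    for _ in range(n_iter):
        W = convolve(Y, kernel, mode="constant")  # neighbour firings
        F = np.exp(-aF) * F + VF * W + img        # feeding input
        L = np.exp(-aL) * L + VL * W              # linking input
        U = F * (1.0 + beta * L)                  # internal activity
        Y = (U > T).astype(float)                 # pulse output
        T = np.exp(-aT) * T + VT * Y              # dynamic threshold
        fires.append(Y.copy())
    return fires

# Toy image: bright square on a dark background
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
for k, Y in enumerate(pcnn(img)):
    print(f"iter {k}: {int(Y.sum())} neurons fired")
```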
Procedia PDF Downloads 280
4917 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam
Authors: Van Chung Nguyen
Abstract:
The purpose of this study is to examine the current status of the value chain for fish distribution at Quang Binh Fishing Port, using 360 research samples in which the research subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and the actors participating in the value chain, and to analyze the value-adding process of these actors along the market channels. The analysis results show that fishermen catch fish with high economic efficiency, but processing enterprises and especially retailers are the agents that obtain the higher added value. Processing enterprises play a role that is not really clear, due to outdated processing technology; in contrast, retailers capture the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. Consequently, the selling price of fish to the market remains high compared to the abundant fish resources, which lowers consumption and limits exports because of the quality of the processing enterprises. This reduces demand and fishing capacity, and productivity is lower than its potential. To improve the fish value chain at fishing ports, it is necessary to focus on improving product quality, strengthening linkages between actors, building brands and markets for product consumption, and improving the capacity of export processing enterprises.
Keywords: Quang Binh fishing port, value chain, market, distributions channel
Procedia PDF Downloads 73