Search results for: processing based on signal identification
31639 Cost Effective Microfabrication Technique for Lab on Chip (LOC) Devices Using Epoxy Polymers
Authors: Charmi Chande, Ravindra Phadke
Abstract:
Microfluidic devices are fabricated using multiple methods. Photolithography is one of the most common, wherein SU-8 is widely used to make the master, which in turn is used to produce the working chip by soft lithography. The high-aspect-ratio features of SU-8 make it suitable for micro moulds for injection moulding, hot embossing, and moulds to form polydimethylsiloxane (PDMS) structures for bioMEMS (microelectromechanical systems) applications. However, its high cost, difficulty of procurement, and the need for a clean room restrict the use of this polymer, especially in developing countries and small research labs. Bisphenol-A based polymers mixed with a curing agent are used in various industries, such as paints and coatings, adhesives, electrical systems and electronics, and industrial tooling and composites. We present the novel use of bisphenol-A based polymers in fabricating microchannels for Lab-on-Chip (LOC) devices. The present paper describes a prototype process for producing microfluidic chips using a range of bisphenol-A based polymers, viz. GY 250, ATUL B11, DER 331, and DER 330, mixed with cationic photoinitiators. All steps of chip production were carried out using an inexpensive approach relying on low-cost chemicals and equipment, without the need for a clean room. The chips produced with all the above polymers were validated with respect to channel height, and the chip with the smallest height was selected for further experimentation. The lowest height achieved was 7 micrometers, with GY 250. The cost of the fabricated master was $0.20 and that of the working chip $0.22. The best working chip was used for morphological identification and profiling of microorganisms from environmental samples such as soil, marine water, and salt pan sites.
The current chip can be adapted for various microbiological screening experiments, such as biochemistry-based microbial identification and the study of uncultivable microorganisms at the single-cell or community level.
Keywords: bisphenol-A based epoxy, cationic photoinitiators, microfabrication, photolithography
Procedia PDF Downloads 287
31638 DNA Multiplier: A Design Architecture of a Multiplier Circuit Using DNA Molecules
Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Nitish Biswas, Sarreha Tasmin Rikta, Nuzmul Hossain Nahid
Abstract:
Nanomedicine and bioengineering use biological systems that can perform computing operations. In a biocomputational circuit, different types of biomolecules and DNA (deoxyribonucleic acid) are used as active components. DNA computing offers parallel processing and a large storage capacity, which sets it apart from other computing systems. In most processors, the multiplier is treated as a core hardware block, and multiplication is one of the most time-consuming and lengthy tasks. In this paper, cost-effective DNA multipliers are designed using algorithms of molecular DNA operations and compared with conventional designs. The speed and storage capacity of a DNA multiplier are also much higher than those of a traditional silicon-based multiplier.
Keywords: biological systems, DNA multiplier, large storage, parallel processing
Procedia PDF Downloads 216
31637 Benchmarking Bert-Based Low-Resource Language: Case Uzbek NLP Models
Authors: Jamshid Qodirov, Sirojiddin Komolov, Ravilov Mirahmad, Olimjon Mirzayev
Abstract:
Nowadays, natural language processing tools play a crucial role in our daily lives, supporting various text-processing techniques. Highly advanced models exist for well-resourced languages such as English and Russian, but for some languages, such as Uzbek, NLP models have only been developed recently, and only a few are available. Moreover, no prior work shows how the existing Uzbek NLP models behave in different situations and when to use them. This work tries to close this gap and compares the Uzbek NLP models available as of the time this article was written. The authors compare the models in two scenarios, sentiment analysis and sentence similarity, which correspond to the two most common problems in industry: classification and similarity. Another outcome of this work is two datasets for classification and sentence similarity in the Uzbek language, which we generated ourselves and which can be useful in both industry and academia.
Keywords: NLP, benchmark, BERT, vectorization
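The two benchmark scenarios reduce to measuring classification accuracy and embedding similarity. A minimal Python sketch of both metrics, with toy embeddings and labels standing in for real model outputs (not the authors' code or data):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two sentence-embedding vectors.
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v)))

def accuracy(predicted, gold):
    # Fraction of sentiment labels predicted correctly.
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Toy stand-ins for real model outputs.
emb_a = [0.2, 0.1, 0.9]      # embedding of sentence A
emb_b = [0.25, 0.05, 0.85]   # embedding of a near-paraphrase of A
sim = cosine_similarity(emb_a, emb_b)

preds = ["pos", "neg", "pos", "neg"]
gold = ["pos", "neg", "neg", "neg"]
acc = accuracy(preds, gold)
```

With these metrics, each model under comparison is scored by its accuracy on the sentiment set and its mean cosine similarity on paraphrase pairs of the similarity set.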
Procedia PDF Downloads 54
31636 Tumor Boundary Extraction Using Intensity and Texture-Based on Gradient Vector
Authors: Namita Mittal, Himakshi Shekhawat, Ankit Vidyarthi
Abstract:
In medical research, doctors and radiologists face many difficulties in analysing brain tumors in Magnetic Resonance (MR) images. Brain tumor detection is difficult due to the amorphous tumor shape and the overlap of similar tissues in nearby regions, so radiologists require a clinically viable solution for automatic segmentation of the tumor inside a brain MR image. Early segmentation methods detected the tumor by dividing the image into segments, but this causes loss of information. In this paper, a hybrid method is proposed that detects the Region of Interest (ROI) based on the differences in intensity and texture values between the tumor region and nearby tissues, using the Gradient Vector Flow (GVF) technique for ROI identification. The proposed approach uses both intensity and texture values to identify the abnormal section of brain MR images. Experimental results show that the proposed method outperforms the GVF method without any loss of information.
Keywords: brain tumor, GVF, intensity, MR images, segmentation, texture
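The hybrid idea of combining an intensity test with a texture test can be sketched in a few lines of Python; the toy image, thresholds, and the 3x3-variance texture measure below are illustrative stand-ins, not the authors' method:

```python
def local_variance(img, i, j):
    # Variance of the 3x3 neighbourhood: a crude texture measure.
    vals = [img[a][b] for a in range(i - 1, i + 2) for b in range(j - 1, j + 2)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def roi_mask(img, t_int, t_tex):
    # Mark pixels whose intensity AND texture both exceed their thresholds.
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if img[i][j] > t_int and local_variance(img, i, j) > t_tex:
                mask[i][j] = True
    return mask

# Toy 5x5 "MR slice": a bright, heterogeneous patch on a dark background.
img = [[10, 10, 10, 10, 10],
       [10, 80, 95, 70, 10],
       [10, 90, 85, 99, 10],
       [10, 75, 88, 92, 10],
       [10, 10, 10, 10, 10]]
mask = roi_mask(img, t_int=50, t_tex=50)
```

In the paper's pipeline, a mask of this kind would seed the GVF snake, which then refines the candidate region into a smooth tumor boundary.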
Procedia PDF Downloads 432
31635 On the Equalization of Nonminimum Phase Electroacoustic Systems Using Digital Inverse Filters
Authors: Avelino Marques, Diamantino Freitas
Abstract:
Some important electroacoustic systems, such as loudspeaker systems, exhibit nonminimum phase behavior that demands considerable effort when applying advanced digital signal processing techniques such as linear equalization. In this paper, the position and number of zeros and poles of the inverse filter, whether FIR or IIR, designed using time-domain techniques, are studied, compared, and related to the nonminimum phase zeros of the system to be equalized. Conclusions about the impact of the position of the system's nonminimum phase zeros on the length/order of the inverse filter and on the delay of the equalized system are outlined as a guide for deciding in advance which type of filter will be more adequate.
Keywords: loudspeaker systems, nonminimum phase system, FIR and IIR filter, delay
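A common time-domain design for such an inverse filter is least squares against a delayed impulse; the sketch below (hypothetical system, lengths, and delay, assuming NumPy) shows how allowing delay makes an FIR inverse of a nonminimum phase system accurate:

```python
import numpy as np

# Hypothetical nonminimum phase system: H(z) = 0.5 + z^-1, zero at z = -2
# (outside the unit circle), so the exact causal inverse is unstable.
h = np.array([0.5, 1.0])

L = 64        # length of the FIR inverse (equalizer)
delay = 32    # equalization delay allowed in the target response
N = len(h) + L - 1

# Convolution matrix C such that conv(h, g) == C @ g.
C = np.zeros((N, L))
for i in range(L):
    C[i:i + len(h), i] = h

# Target response: a unit impulse delayed by `delay` samples.
d = np.zeros(N)
d[delay] = 1.0

# Least-squares time-domain design of the inverse filter.
g, *_ = np.linalg.lstsq(C, d, rcond=None)
equalized = C @ g
residual = np.linalg.norm(equalized - d)
```

With zero allowed delay the residual stays large for a nonminimum phase system; as the delay grows, the anticausal part of the ideal inverse is accommodated and the residual shrinks, which is the delay/length trade-off the abstract discusses.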
Procedia PDF Downloads 77
31634 Insights Into Serotonin-Receptor Binding and Stability via Molecular Dynamics Simulations: Key Residues for Electrostatic Interactions and Signal Transduction
Authors: Arunima Verma, Padmabati Mondal
Abstract:
Serotonin-receptor binding plays a key role in several neurological and biological processes, including mood, sleep, hunger, cognition, learning, and memory. In this article, we performed molecular dynamics simulations to examine the key residues that play an essential role in the binding of serotonin to the G-protein-coupled 5-HT1B receptor (5-HT1BR) via electrostatic interactions. An end-point free energy calculation method (MM-PBSA) determines the stability of 5-HT1BR upon serotonin binding. Single-point mutation of the polar or charged amino acid residues (Asp129, Thr134) at the binding site and calculation of the binding free energy validate the importance of these residues for the stability of the serotonin-receptor complex. Principal component analysis indicates that the serotonin-bound 5-HT1BR is more stabilized than the apo-receptor in terms of dynamical changes. The difference dynamic cross-correlation map shows the correlation between the transmembrane domain and mini-Go, which indicates signal transduction between mini-Go and the receptor. Allosteric communication analysis reveals the key nodes for signal transduction in 5-HT1BR. These results provide useful insights into the signal transduction pathways and into mutagenesis studies to regulate the functionality of the complex. The developed protocols can be applied to study local non-covalent interactions and long-range allosteric communications in any protein-ligand system for computer-aided drug design.
Keywords: allostery, CADD, MD simulations, MM-PBSA
Procedia PDF Downloads 87
31633 Identification of the Parameters of an AC Servomotor Using Genetic Algorithm
Authors: J. G. Batista, K. N. Sousa, J. L. Nunes, R. L. S. Sousa, G. A. P. Thé
Abstract:
This work deals with parameter identification of permanent magnet motors, a class of AC motor that is particularly important in industrial automation due to its high performance. These motors are very attractive for applications with limited space because of their reduced size and volume, and they can operate over a wide speed range without independent ventilation. Using experimental data and a genetic algorithm, we have been able to extract values for both the motor inductance and the electromechanical coupling constant, which are then compared to measured and/or expected values.
Keywords: modeling, AC servomotor, permanent magnet synchronous motor (PMSM), genetic algorithm, vector control, robotic manipulator, control
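A genetic algorithm for this kind of parameter extraction can be sketched as follows; the voltage-equation model, the data, and the GA settings are illustrative assumptions, not the authors' experimental setup:

```python
import random

random.seed(1)

# Hypothetical samples of V = R*i + L*di/dt + Ke*w with known resistance R;
# the "true" L (inductance) and Ke (coupling constant) only synthesize data.
R, L_true, Ke_true = 1.2, 0.01, 0.05
raw = [(1.0, 50.0, 100.0), (2.0, -30.0, 150.0), (0.5, 80.0, 200.0), (1.5, 10.0, 120.0)]
data = [(i, didt, w, R * i + L_true * didt + Ke_true * w) for i, didt, w in raw]

def fitness(ind):
    # Negative squared voltage error: larger is better.
    L, Ke = ind
    return -sum((R * i + L * didt + Ke * w - v) ** 2 for i, didt, w, v in data)

def evolve(pop_size=60, generations=200):
    pop = [(random.uniform(0.0, 0.1), random.uniform(0.0, 0.2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Averaging crossover plus small Gaussian mutation.
            children.append(((a[0] + b[0]) / 2 + random.gauss(0, 0.002),
                             (a[1] + b[1]) / 2 + random.gauss(0, 0.004)))
        pop = parents + children
    return max(pop, key=fitness)

L_est, Ke_est = evolve()
```

The same loop applies unchanged when the fitness is computed from measured current/speed transients instead of synthetic samples.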
Procedia PDF Downloads 470
31632 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions that process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through cloud software architecture and edge computing, these technological developments enable standardized machine learning applications that inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired for predictive modelling is processed through edge computing and collectively stored within a data lake. Involvement in this digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 112
31631 Proposition of an Intelligent System Based on the Augmented Reality for Warehouse Logistics
Authors: Safa Gharbi, Hayfa Zgaya, Nesrine Zoghlami, Slim Hammadi, Cyril De Barbarin, Laurent Vinatier, Christiane Coupier
Abstract:
Increasing productivity and quality of service, improving working comfort, and ensuring the efficiency of all processes are important challenges for every warehouse. Order picking is recognized as the most important and costly activity of all warehouse processes. This paper presents a new approach using Augmented Reality (AR) in the field of logistics. It aims to create a Head-Up Display (HUD) interface to a Warehouse Management System (WMS) using AR glasses. Integrating AR technology allows the optimization of order picking by reducing picking time, increasing efficiency, and speeding up delivery. The picker is able to access immediately all the information needed for the task. All the information is displayed when needed in the field of vision (FOV) of the operator, without any action required on their part. This research is part of the industrial project RASL (Réalité Augmentée au Service de la Logistique), which gathers two major partners: LAGIS (Laboratory of Automatics, Computer Engineering and Signal Processing in Lille, France) and Genrix Group, a European leader in warehouse logistics, which provided its software for implementation and its logistics expertise.
Keywords: Augmented Reality (AR), logistics and optimization, Warehouse Management System (WMS), Head-Up Display (HUD)
Procedia PDF Downloads 483
31630 Modal Density Influence on Modal Complexity Quantification in Dynamic Systems
Authors: Fabrizio Iezzi, Claudio Valente
Abstract:
The viscous damping in dynamic systems can be proportional or non-proportional. In the first case the mode shapes are real, whereas in the second they are complex. From an engineering point of view, the complexity of the mode shapes is important in order to quantify the non-proportional damping. Different indices exist to provide estimates of the modal complexity; these indices are zero or nonzero depending on whether the mode shapes are real or complex. The modal density problem arises in experimental identification when the dynamic system has closely spaced modal frequencies. Depending on the extent of this closeness, the mode shapes can hold fictitious imaginary quantities that affect the values of the modal complexity indices. The result is a failure to identify whether the mode shapes are real or complex, and hence whether the damping is proportional or non-proportional. The paper aims to show the influence of the modal density on the values of these indices in cases of both proportional and non-proportional damping. Theoretical and pseudo-experimental solutions are compared to analyze the problem on an appropriate mechanical system.
Keywords: complex mode shapes, dynamic systems identification, modal density, non-proportional damping
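One widely used index of this kind is the Modal Phase Collinearity (MPC), computed from the real and imaginary parts of a mode shape; by its convention, values near 1 indicate a real (proportionally damped) mode. A sketch with illustrative mode vectors:

```python
import math

def mpc(mode):
    # Modal Phase Collinearity: ~1 for a real (proportionally damped) mode
    # shape, dropping toward 0 as the mode becomes genuinely complex.
    x = [z.real for z in mode]
    y = [z.imag for z in mode]
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    sxy = sum(a * b for a, b in zip(x, y))
    # Eigenvalues of the 2x2 real/imaginary covariance matrix.
    mean = (sxx + syy) / 2.0
    radius = math.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    lam1, lam2 = mean + radius, mean - radius
    return ((lam1 - lam2) / (lam1 + lam2)) ** 2

real_mode = [1 + 0j, -0.5 + 0j, 0.8 + 0j]     # proportional damping
complex_mode = [1 + 0j, 0.5 + 0.5j, 1j]       # strongly complex mode
```

The modal density problem shows up exactly here: two close, real modes identified with leakage between them produce fictitious imaginary parts, pushing the index away from its ideal value even though the damping is proportional.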
Procedia PDF Downloads 387
31629 Task Scheduling and Resource Allocation in Cloud Based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Scheduling of tasks and optimal allocation of resources in the cloud are complicated by the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used in this field and are characterized by high processing power and storage capacity. To increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload, and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy. Resource prioritization is done with the criteria of main memory size, processor speed, and bandwidth. To modify the AHP algorithm, this system adopts the Linear Max-Min and Linear Max normalization methods, which have a great impact on the ranking and are the best choice for the mentioned algorithm. The simulation results show a decrease in the average response time, turnaround time, and execution time of input tasks in the proposed method compared to similar baseline methods.
Keywords: analytic hierarchy process, task prioritization, normalization, heterogeneous resource allocation, scientific workflow
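The resource-prioritization step can be illustrated with Linear Max-Min normalization followed by a weighted sum over the three criteria; the VM data and criterion weights below are hypothetical, not values from the paper:

```python
# Hypothetical VM candidates scored on (memory GB, CPU speed MIPS, bandwidth Mbps).
vms = {
    "vm1": (4.0, 2000.0, 100.0),
    "vm2": (8.0, 1500.0, 500.0),
    "vm3": (2.0, 3000.0, 250.0),
}
weights = (0.4, 0.4, 0.2)   # illustrative AHP-derived criterion weights

def max_min_normalize(values):
    # Linear Max-Min normalization: rescale each criterion to [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

names = list(vms)
columns = list(zip(*vms.values()))              # one tuple per criterion
norm = [max_min_normalize(col) for col in columns]
scores = {name: sum(w * norm[c][k] for c, w in enumerate(weights))
          for k, name in enumerate(names)}
best = max(scores, key=scores.get)
```

Normalizing first matters because the criteria live on very different scales (gigabytes vs. MIPS vs. Mbps); without it, the largest-magnitude criterion would dominate the ranking regardless of the weights.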
Procedia PDF Downloads 145
31628 Improved Cooperative Communication Scheme in the Edge of Cell Coverage
Authors: Myoung-Jin Kim, Yeong-Seop Ahn, Hyun-Jee Yang, Hyoung-Kyu Song
Abstract:
This paper proposes a new cooperative communication scheme for wireless communication systems. When the receiver is located at the edge of coverage, the signal from the transmitter is distorted by inter-cell interference (ICI) and attenuated by distance. In order to improve communication performance, the proposed scheme adds a relay. With the relay, the receiver receives the signal from the transmitter and the relay at the same time. Therefore, the new cooperative communication scheme obtains a diversity gain through the relay and improves performance.
Keywords: cooperative communication, diversity gain, OFDM, MIMO
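The diversity gain can be illustrated with a toy Monte Carlo comparison of the direct link against an idealized combination of the direct and relayed copies over Rayleigh fading (illustrative parameters, not the paper's OFDM/MIMO setup):

```python
import random

random.seed(0)

def rayleigh_gain():
    # |h|^2 of a unit-variance Rayleigh fading channel is exponential.
    return random.expovariate(1.0)

trials = 20000
snr_tx = 10.0     # transmit SNR (linear scale)
threshold = 1.0   # outage threshold (0 dB)

direct_snr, coop_snr = [], []
for _ in range(trials):
    g_d = rayleigh_gain()           # transmitter -> receiver link
    g_r = rayleigh_gain()           # relayed copy of the signal
    direct_snr.append(snr_tx * g_d)
    # Idealized combining of both copies: diversity order two.
    coop_snr.append(snr_tx * (g_d + g_r))

def outage(snrs):
    return sum(1 for s in snrs if s < threshold) / len(snrs)

outage_direct = outage(direct_snr)
outage_coop = outage(coop_snr)
```

Because the two links fade independently, both must be weak simultaneously for the combined SNR to fall below the threshold, which is why the cooperative outage probability drops much faster than the direct-link one.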
Procedia PDF Downloads 609
31627 A Fast Calculation Approach for Position Identification in a Distance Space
Authors: Dawei Cai, Yuya Tokuda
Abstract:
The market for location-based services (LBS) is expanding, and the acquisition of physical location is the fundamental basis for LBS. GPS, the de facto standard for outdoor localization, does not work well in indoor environments due to the blocking of signals by walls and ceilings. To achieve highly accurate localization in indoor environments, many techniques have been developed. The triangulation approach is often used to identify a location, but heavy and complex computation is necessary to calculate the location from the distances between the object and several source points. This computation is also time- and power-consuming, which is unfavorable for a mobile device that needs long battery life. To provide a low-power approach for mobile devices, this paper presents a fast calculation approach that identifies the location of an object without solving simultaneous quadratic equations online. In our approach, we divide location identification into two parts: one offline and the other online. In the offline mode, we run a mapping process that maps the location area to a distance space and find a simple formula that can then be used online to identify the location of the object with very light computation. The characteristic of the approach is a good tradeoff between accuracy and computational cost. Therefore, this approach can be used in smartphones and other mobile devices that need long working times. To show the performance, some simulation results are also provided in the paper.
Keywords: indoor localization, location based service, triangulation, fast calculation, mobile device
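The offline/online split can be sketched as a precomputed grid of distance signatures plus a light nearest-signature lookup online; the anchors, grid extent, and step size below are illustrative assumptions, not the paper's mapping formula:

```python
import math

# Offline stage: map the service area onto "distance space" by precomputing
# the distance signature of every grid point to the (hypothetical) anchors.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]

def signature(x, y):
    return tuple(math.hypot(x - ax, y - ay) for ax, ay in anchors)

grid = {}
for i in range(11):
    for j in range(11):
        x, y = float(i), float(j)
        grid[(x, y)] = signature(x, y)

# Online stage: a light nearest-signature lookup replaces solving
# simultaneous quadratic equations on the mobile device.
def locate(measured):
    return min(grid, key=lambda p: sum((a - b) ** 2 for a, b in zip(grid[p], measured)))

est = locate(signature(3.2, 7.1))   # measured distances of an unknown object
```

The online step costs only additions and multiplications over a fixed table, so its power draw is constant and small, matching the battery-life motivation above; accuracy is bounded by the grid step, which is the tradeoff the abstract mentions.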
Procedia PDF Downloads 174
31626 Modified Gold Screen Printed Electrode with Ruthenium Complex for Selective Detection of Porcine DNA
Authors: Siti Aishah Hasbullah
Abstract:
Studies on the identification of pork content in food have grown rapidly to meet the Halal food standard in Malaysia. Mitochondrial DNA (mtDNA) approaches are thought to provide the most precise markers for identifying pig species, because mtDNA genes are present in thousands of copies per cell and mtDNA is highly variable. The standard method commonly used for DNA detection is based on the polymerase chain reaction (PCR) combined with gel electrophoresis, but it has major drawbacks: it is laborious, time-consuming, and toxic to handle. The need for a simple, fast DNA assay is therefore vital and has prompted us to develop DNA biosensors for porcine DNA detection. The aim of this project is to develop an electrochemical DNA biosensor based on the ruthenium(II) complex [Ru(bpy)2(p-PIP)]2+ as a DNA hybridization label. The interaction of DNA and [Ru(bpy)2(p-HPIP)]2+ will be studied by electrochemical transduction using a Gold Screen-Printed Electrode (GSPE) modified with gold nanoparticles (AuNPs) and succinimide acrylic microspheres. The electrochemical detection by the redox-active ruthenium(II) complex was measured by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). The results indicate that the interaction of [Ru(bpy)2(PIP)]2+ with the hybridized complementary DNA gives a higher response than with single-stranded or mismatched complementary DNA. Under optimized conditions, this porcine DNA biosensor incorporating the modified GSPE shows a good linear range toward porcine DNA.
Keywords: gold, screen printed electrode, ruthenium, porcine DNA
Procedia PDF Downloads 309
31625 Identification of Coauthors in Scientific Database
Authors: Thiago M. R Dias, Gray F. Moita
Abstract:
The analysis of scientific collaboration networks has contributed significantly to understanding how collaboration between researchers takes place and how the scientific production of researchers or research groups evolves. However, the identification of collaborations in large scientific databases is not a trivial task, given the high computational cost of the methods commonly used. This paper proposes a method for identifying collaboration in a large database of researcher curricula. The proposed method has low computational cost and satisfactory results, proving to be an interesting alternative for the modeling and characterization of large scientific collaboration networks.
Keywords: extraction, data integration, information retrieval, scientific collaboration
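One low-cost way to find collaborations without an all-pairs comparison of curricula is to bucket records by normalized publication title in a single pass; the sketch below uses toy data and is only one plausible reading of such a method, not the authors' implementation:

```python
from collections import defaultdict
from itertools import combinations

# Toy "curriculum" records: each researcher lists publication titles. A shared
# normalized title marks a collaboration, found in one linear pass over the
# data instead of comparing every pair of curricula.
curricula = {
    "alice": ["Graph Mining at Scale", "Fast Joins"],
    "bob": ["graph mining at scale ", "Query Planning"],
    "carol": ["Fast  Joins"],
}

def normalize(title):
    # Case-fold and collapse whitespace so near-identical titles match.
    return " ".join(title.lower().split())

by_title = defaultdict(set)
for author, titles in curricula.items():
    for t in titles:
        by_title[normalize(t)].add(author)

edges = set()                      # coauthorship links
for authors in by_title.values():
    for pair in combinations(sorted(authors), 2):
        edges.add(pair)
```

The bucketing pass is linear in the number of publication entries, which is what keeps the cost low even when the curriculum database is large.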
Procedia PDF Downloads 396
31624 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurement
Authors: Eseldin Keleb
Abstract:
In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. Powder flowability, reactivity, and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate and 5% polyvinylpyrrolidone were prepared by wet granulation and used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentages of small and large particles), type of disturbance (vibration and shaking), and process reproducibility were investigated. The results showed that a sieve load of 50 g produced the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Between-day reproducibility showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles, processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken with samples whose particle size distribution is skewed.
Keywords: sieving, reliability, particle size distribution, processing parameters
Procedia PDF Downloads 613
31623 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains
Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou
Abstract:
Whole grains with an intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment, but processing makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To assess the feasibility of steam-flaking as a processing technique for ruminant grains, cereal grains (maize, wheat, barley, and sorghum) were processed by steam-flaking (steam temperature 105°C, heating time 45 min), and chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were used to evaluate the effects. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization was also greatly increased in steam-flaked grains, as steam-flaking disintegrates the crystal structure of cereal starch, which may subsequently facilitate absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking showed no significant difference. However, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, steam-flaking had no effect on total volatile fatty acid concentration, but a decrease in the ratio of acetate to propionate was observed in the current in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains.
The collective findings of the present study suggest that steam-flaking of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the use of steam-flaking would be a practical way to improve the quality of common cereal grains.
Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing
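Gas-production kinetics of this kind are commonly summarized by fitting an exponential model GP(t) = A(1 - e^(-kt)), where A is the theoretical maximum gas production and k the fractional rate; a sketch with synthetic, noise-free readings (illustrative values, not the study's data):

```python
import math

# Synthetic cumulative gas-production readings (mL) at incubation times (h),
# generated from GP(t) = A*(1 - exp(-k*t)) with A = 60, k = 0.08 (illustrative).
times = [2, 4, 8, 12, 24, 36, 48]
gp = [60.0 * (1 - math.exp(-0.08 * t)) for t in times]

def sse(A, k):
    # Sum of squared errors of the exponential model against the readings.
    return sum((A * (1 - math.exp(-k * t)) - y) ** 2 for t, y in zip(times, gp))

# Coarse grid search for the asymptote A (theoretical maximum gas
# production) and the fractional rate k.
A_hat, k_hat = min(((A, k)
                    for A in range(40, 81)
                    for k in [i / 1000 for i in range(10, 200)]),
                   key=lambda p: sse(*p))
```

Under this parameterization, the study's finding maps onto the model directly: steam-flaking left the asymptote A essentially unchanged while raising the rate k.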
Procedia PDF Downloads 270
31622 Bioactive Chemical Markers Based Strategy for Quality Control of Herbal Medicines
Authors: Zhenzhong Yang
Abstract:
Herbal medicines are important supplements to chemical drugs and usually consist of a complex mixture of constituents. The current quality control strategy for herbal medicines is mainly based on chemical markers, and it has largely failed because the markers do not reflect herbal medicines’ multiple mechanisms of action. Herein, a strategy based on bioactive chemical markers is proposed and applied to the quality assessment and control of herbal medicines. This strategy mainly includes comprehensive chemical characterization of herbal medicines, identification of bioactive chemical markers, and development of related quantitative analysis methods. As a proof of concept, the strategy was applied to a Panax notoginseng derived herbal medicine. The bioactive chemical markers based strategy offers a rational approach for the quality assessment and control of herbal medicines.
Keywords: bioactive chemical markers, herbal medicines, quality assessment, quality control
Procedia PDF Downloads 179
31621 Video Compression Using Contourlet Transform
Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan
Abstract:
Video compression is used for channels with limited bandwidth and for storage devices of limited capacity. One of the most popular approaches in video compression is the use of transforms. The discrete cosine transform is one such method, but it suffers from problems such as blocking and noise artifacts and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that is better than cosine transforms at balancing compression and quality, but its ability to capture curved contours is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we use the contourlet transform for video image compression. The contourlet transform preserves image details better than the previous transforms because it is multi-scale and directional, and it can capture discontinuities such as edges; consequently, less data is lost than with previous approaches. The contourlet transform finds discrete-space structure and is useful for representing two-dimensional images with smooth regions. This transform produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of images, the mean square error and peak signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most images the mean square error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.
Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform
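The two reported quality measures can be computed as follows; the pixel values are toy data, and peak signal-to-noise ratio (PSNR) is taken here as the "maximum signal-to-noise ratio" referred to above:

```python
import math

def mse(a, b):
    # Mean squared error between two equal-size images (flattened pixels).
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means less distortion.
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * math.log10(peak * peak / err)

original = [52, 55, 61, 66, 70, 61, 64, 73]
reconstructed = [51, 56, 60, 67, 69, 62, 63, 74]
err = mse(original, reconstructed)
quality_db = psnr(original, reconstructed)
```

In a comparison like the one above, the same original frame is compressed and reconstructed with each transform, and the method with lower MSE (higher PSNR) at a given compression ratio is preferred.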
Procedia PDF Downloads 444
31620 Fluorescing Aptamer-Gold Nanoparticle Complex for the Sensitive Detection of Bisphenol A
Authors: Eunsong Lee, Gae Baik Kim, Young Pil Kim
Abstract:
Bisphenol A (BPA) is one of the endocrine-disrupting chemicals (EDCs), which have been suspected to be associated with reproductive dysfunction and physiological abnormalities in humans. Since BPA has been widely used to make plastics and epoxy resins, the leaching of BPA from the lining of plastic products has been of major concern due to environmental and human exposure issues. Simple detection of BPA based on the self-assembly of aptamer-mediated gold nanoparticles (AuNPs) has been reported elsewhere, yet the detection sensitivity remains challenging. Here we demonstrate an improved AuNP-based sensor for BPA that combines fluorescence with AuNP colorimetry in order to overcome the drawback of traditional AuNP sensors. While the anti-BPA aptamer (full-length or truncated ssDNA) triggered the self-assembly of unmodified, citrate-stabilized AuNPs in the presence of BPA at high salt concentrations, no fluorescence signal was observed on the subsequent addition of SYBR Green, due to the small amount of free anti-BPA aptamer. In contrast, in the absence of BPA the AuNPs did not self-assemble (no color change, owing to salt-bridged surface stabilization) and a high fluorescence signal was produced by SYBR Green, due to the large amount of free anti-BPA aptamer. As a result, quantitative analysis of BPA was achieved by combining the absorption of the AuNPs with the fluorescence intensity of SYBR Green as a function of BPA concentration, which gave better detection sensitivity (as low as 1 ppb) than AuNP colorimetric analysis alone. This method also enabled the detection of high BPA levels in water-soluble extracts from thermal papers, with high specificity against BPS and BPF. We suggest that this approach will be an alternative to traditional AuNP colorimetric assays in the field of aptamer-based molecular diagnosis.
Keywords: bisphenol A, colorimetric, fluorescence, gold-aptamer nanobiosensor
Procedia PDF Downloads 188
31619 ARIMA-GARCH, A Statistical Modeling for Epileptic Seizure Prediction
Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar
Abstract:
In this paper, we provide a procedure to analyze and model an EEG (electroencephalogram) signal as a time series using ARIMA-GARCH to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through ARCH or GARCH (autoregressive conditional heteroskedasticity, generalized autoregressive conditional heteroskedasticity) tests. The best ARIMA-GARCH model in the AIC sense is utilized to measure the volatility of the EEG from epileptic canine subjects and to forecast future EEG values. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then, different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model for the residuals of that ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we propose a procedure to predict oncoming seizures. The results indicate that GARCH modeling captures the dynamic changes of variance well before the onset of a seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH model to cover the heteroskedastic nature of EEG signal changes.
Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG
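The GARCH(1,1) recursion at the heart of the volatility stage can be sketched directly; the coefficients and series below are simulated illustrations, not values fitted to the EEG data:

```python
import random

random.seed(7)

# GARCH(1,1): sigma_t^2 = w + a*e_{t-1}^2 + b*sigma_{t-1}^2.
# Illustrative coefficients (a + b < 1 ensures a stationary variance).
w, a, b = 0.1, 0.2, 0.7

n = 5000
sigma2 = [w / (1.0 - a - b)]   # start from the unconditional variance
eps = []                       # the heteroskedastic "residual" series
for _ in range(n):
    e = random.gauss(0.0, 1.0) * sigma2[-1] ** 0.5
    eps.append(e)
    sigma2.append(w + a * e * e + b * sigma2[-1])

sample_var = sum(e * e for e in eps) / n
uncond_var = w / (1.0 - a - b)   # long-run variance implied by the model
```

In the seizure-prediction setting, the recursion runs the other way: the fitted coefficients turn the ARIMA residuals into a conditional-variance track, and a sustained rise of sigma2 above its long-run level serves as the early-warning signal.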
Procedia PDF Downloads 406
31618 A Novel Concept of Optical Immunosensor Based on High-Affinity Recombinant Protein Binders for Tailored Target-Specific Detection
Authors: Alena Semeradtova, Marcel Stofik, Lucie Mareckova, Petr Maly, Ondrej Stanek, Jan Maly
Abstract:
Recently, novel strategies based on so-called molecular evolution were shown to be effective for the production of various peptide ligand libraries with affinities to molecular targets of interest comparable to, or even better than, monoclonal antibodies. The major advantage of these peptide scaffolds is their low molecular weight and simple structure. This study describes a new immunosensor based on high-affinity binding molecules, using a simple optical system for the detection of human serum albumin (HSA) as a model molecule. We present a comparison of two variants of recombinant binders based on the albumin-binding domain of protein G (ABD), performed on a micropatterned glass chip. Binding domains may be tailored to any specific target of interest by molecular evolution. Micropatterned glass chips were prepared using UV photolithography on chromium-sputtered glasses. The glass surface was modified by (3-aminopropyl)triethoxysilane and biotin-PEG-acid using EDC/NHS chemistry. Two variants of high-affinity binding molecules were used to detect the target molecule. The first variant is based on the ABD domain fused with a TolA chain; this molecule is biotinylated in vivo, and each molecule contains one biotin and one ABD domain. The second variant is based on a streptavidin molecule and contains four biotin-binding sites and four ABD domains. These high-affinity molecules were immobilized on the chip surface via biotin-streptavidin chemistry. To eliminate nonspecific binding, 1% bovine serum albumin (BSA) or 6% fetal bovine serum (FBS) was used in every step. For both variants, the range of measured concentrations of fluorescently labelled HSA was 0 – 30 µg/ml. As a control, we performed a simultaneous assay without high-affinity binding molecules. The fluorescent signal was measured using an inverted fluorescence microscope Olympus IX 70 with a CoolLED pE 4000 as the light source, related filters, and a Retiga 2000R camera as the detector.
The fluorescent signal from non-modified areas was subtracted from the signal of the fluorescent areas. Results were presented in graphs showing the dependence of the measured grayscale value on the log-scale HSA concentration. For the TolA variant, the limit of detection (LOD) of the optical immunosensor proposed in this study was calculated to be 0.20 µg/ml for HSA detection in 1% BSA and 0.24 µg/ml in 6% FBS. For the streptavidin-based molecule, it was 0.04 µg/ml and 0.07 µg/ml, respectively. The dynamic range of the immunosensor could be estimated only for the TolA variant and was calculated to be 0.49 – 3.75 µg/ml and 0.73 – 1.88 µg/ml, respectively. For the streptavidin-based variant, we did not reach surface saturation even at a concentration of 480 µg/ml, so the upper limit of the dynamic range was not estimated; the lower limit was calculated to be 0.14 µg/ml and 0.17 µg/ml, respectively. Based on the obtained results, it is clear that both variants are useful for creating the bio-recognition layer of immunosensors. For this particular system, the streptavidin-based variant is the more useful for biosensing on planar glass surfaces: immunosensors based on this variant would exhibit a better limit of detection and a wider dynamic range.
Keywords: high-affinity binding molecules, human serum albumin, optical immunosensor, protein G, UV photolithography
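Limits of detection of the kind quoted above are conventionally obtained from a linear calibration fit as LOD = 3.3·σ_blank / slope, where σ_blank is the standard deviation of the blank signal. The abstract does not state which convention was used here, so the following sketch only illustrates that standard calculation with invented calibration data (not the study's numbers).

```python
import math

def linear_fit(x, y):
    """Least-squares slope and intercept of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def limit_of_detection(blank_signals, conc, signal):
    """Conventional LOD = 3.3 * sd(blank) / calibration slope."""
    n = len(blank_signals)
    mean_b = sum(blank_signals) / n
    sd_b = math.sqrt(sum((s - mean_b) ** 2 for s in blank_signals) / (n - 1))
    slope, _ = linear_fit(conc, signal)
    return 3.3 * sd_b / slope

conc = [0.0, 5.0, 10.0, 20.0, 30.0]          # HSA concentration, µg/ml (hypothetical)
signal = [10.0, 60.0, 110.0, 210.0, 310.0]   # grayscale response, perfectly linear here
print(limit_of_detection([9.8, 10.0, 10.2], conc, signal))  # ~ 0.066 µg/ml
```

With a perfectly linear calibration (slope 10) and blank scatter of 0.2 grayscale units, the LOD comes out near 0.066 µg/ml; real fits on log-scale concentration axes, as in the study, would be handled analogously.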
Procedia PDF Downloads 368
31617 Developed CNN Model with Various Input Scale Data Evaluation for Bearing Faults Prognostics
Authors: Anas H. Aljemely, Jianping Xuan
Abstract:
Rolling bearing fault diagnosis is a pivotal issue for the rotating machinery of modern manufacturing. In this research, a raw-vibration-signal approach with an improved deep learning method for bearing fault diagnosis is proposed. Multiple dimensional scales of the raw vibration signals are selected to evaluate the condition monitoring system, and the deep learning process has shown its effectiveness in fault diagnosis. The proposed method employs an exponential linear unit (ELU) layer in a convolutional neural network (CNN): the ELU applies the identity function to positive inputs and an exponential nonlinearity to negative inputs, while a particular convolutional operation extracts valuable features. The identification results show that the improved method achieves the highest accuracy with a 100-dimensional scale and increases the training and testing speed.
Keywords: bearing fault prognostics, developed CNN model, multiple-scale evaluation, deep learning features
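The ELU nonlinearity the abstract relies on is simple to state: identity for positive inputs, a saturating exponential for negative ones. A minimal stand-alone sketch of the activation itself (deep learning frameworks such as PyTorch provide this as a built-in layer; this scalar version is only for illustration):

```python
import math

def elu(x, alpha=1.0):
    """Exponential linear unit: x for x > 0, alpha*(exp(x) - 1) otherwise."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

# Positive inputs pass through unchanged; negative inputs saturate toward -alpha
print([round(elu(v), 4) for v in (-2.0, -0.5, 0.0, 1.5)])
# -> [-0.8647, -0.3935, 0.0, 1.5]
```

Because the negative branch saturates smoothly instead of clipping to zero as ReLU does, mean activations stay closer to zero, which is the usual motivation for using ELU layers in CNNs.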
Procedia PDF Downloads 210
31616 MarginDistillation: Distillation for Face Recognition Neural Networks with Margin-Based Softmax
Authors: Svitov David, Alyamkin Sergey
Abstract:
The usage of convolutional neural networks (CNNs) in conjunction with the margin-based softmax approach demonstrates state-of-the-art performance for the face recognition problem. Recently, lightweight neural network models trained with the margin-based softmax have been introduced for the face identification task on edge devices. In this paper, we propose a distillation method for lightweight neural network architectures that outperforms other known methods for the face recognition task on the LFW, AgeDB-30 and MegaFace datasets. The idea of the proposed method is to use the class centers from the teacher network for the student network. The student network is then trained to reproduce the angles between the class centers and the face embeddings predicted by the teacher network.
Keywords: ArcFace, distillation, face recognition, margin-based softmax
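The quantity transferred in the method described above is an angle between a class center and a face embedding, i.e. the arccosine of their cosine similarity. A minimal sketch of that measurement and of an L2-type loss penalizing the student for deviating from the teacher's angles (illustrative only; the function names are ours, not from the paper):

```python
import math

def angle(u, v):
    """Angle (radians) between two embedding vectors via cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / (norm_u * norm_v)))  # clamp for safety
    return math.acos(cos)

def angle_distillation_loss(class_centers, teacher_emb, student_emb):
    """Mean squared difference between teacher and student angles to each center."""
    diffs = [(angle(c, teacher_emb) - angle(c, student_emb)) ** 2
             for c in class_centers]
    return sum(diffs) / len(diffs)

centers = [[1.0, 0.0], [0.0, 1.0]]  # toy 2-D class centers
# A student embedding matching the teacher's incurs zero loss
print(angle_distillation_loss(centers, [1.0, 1.0], [1.0, 1.0]))  # -> 0.0
```

In the actual training setup the embeddings would be high-dimensional CNN outputs and the centers would be the rows of the teacher's margin-based softmax weight matrix.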
Procedia PDF Downloads 146
31615 Dairy Value Chain: Assessing the Inter Linkage of Dairy Farm and Small-Scale Dairy Processing in Tigray: Case Study of Mekelle City
Authors: Weldeabrha Kiros Kidanemaryam, DepaTesfay Kelali Gidey, Yikaalo Welu Kidanemariam
Abstract:
Dairy services are considered sources of income, employment, nutrition and health for smallholder rural and urban farmers. The main objective of this study is to assess the interlinkage of dairy farms and small-scale dairy processing in Mekelle, Tigray. To achieve the stated objective, a descriptive research approach was employed in which data were collected from 45 dairy farmers and 40 small-scale processors and analyzed by calculating mean values and percentages. Findings show that the dairy business in the study area is characterized by a shortage of feed and water for the farms. Dairy farming is dominated by hybrid breeds, followed by the so-called 'begait'. Though the farms have access to medication and vaccination for the cattle, they fall short in hygiene practices, reliable shade for the cattle and separate space for the calves. The value chain at the milk production stage is characterized by a low production rate, the selling of raw milk without adding value and very meager traditional processing practices. Furthermore, small-scale milk processors are characterized by collecting milk from farmers and producing cheese, butter, ghee and sour milk. They do not engage in modern milk processing such as pasteurized milk, yogurt and table butter; most are engaged in traditional production systems. Additionally, the milk consumption and marketing part of the chain is dominated by the informal market (channel), where market problems, lack of skill and technology, shortage of loans and weak policy support are the main challenges faced. Based on the findings, recommendations and future research areas are put forward.
Keywords: value-chain, dairy, milk production, milk processing
Procedia PDF Downloads 32
31614 Microfluidic Impedimetric Biochip and Related Methods for Measurement Chip Manufacture and Counting Cells
Authors: Amina Farooq, Nauman Zafar Butt
Abstract:
This paper concerns methods and tools for counting particles of interest, such as cells: a microfluidic system with interconnected electronics on a flexible substrate, inlet-outlet ports and interface schemes; sensitive and selective detection of cells; and processing of cell counts at polymer interfaces in a microscale biosensor for use in the detection of target biological and non-biological cells. The fabrication process comprises the development of fluidic channels, planar fluidic contact ports, integrated metal electrodes on a flexible substrate for impedance measurements, and a surface-modification plasma treatment as an intermediate bonding layer. Magnetron DC sputtering is used to deposit a double metal layer (Ti/Pt) over the polypropylene film, and specified zones are defined and etched using a photoresist layer. Small fluid volumes, a reduced detection region, and electrical impedance measurements over a range of frequencies improve the sensitivity and specificity of cell counting. The procedure involves a continuous flow of fluid samples containing the particles of interest through the microfluidic channels; an electrical differential counter generates a bipolar pulse for each passing cell, and the total number of particles of interest originally in the fluid sample is calculated with a MATLAB program and signal processing. It is thus possible to develop a robust and economical kit for cell counting in whole-blood samples using these methods and similar devices.
Keywords: impedance, biochip, cell counting, microfluidics
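The counting step described above (one bipolar pulse per passing cell) can be sketched as a simple threshold-crossing state machine. The abstract uses a MATLAB program; this pure-Python equivalent, with hypothetical thresholds and a synthetic trace, is ours and only shows the logic.

```python
def count_bipolar_pulses(signal, pos_thr, neg_thr):
    """Count cells: each cell produces a positive lobe followed by a negative lobe."""
    count = 0
    seen_positive = False
    for sample in signal:
        if sample > pos_thr:
            seen_positive = True
        elif sample < neg_thr and seen_positive:
            count += 1            # full bipolar pulse completed
            seen_positive = False
    return count

# Synthetic differential-impedance trace: two cells pass the sensing region
trace = [0, 0.1, 1.2, 0.3, -1.1, 0, 0, 1.4, 0.2, -1.3, 0]
print(count_bipolar_pulses(trace, pos_thr=1.0, neg_thr=-1.0))  # -> 2
```

Requiring the positive lobe before the negative one rejects unipolar noise spikes; a real pipeline would additionally band-pass filter the trace and calibrate the thresholds against the baseline impedance.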
Procedia PDF Downloads 162
31613 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage: an error function is defined in terms of the difference between the signal measured in an 'experiment' and the signal obtained from the undamaged finite element model. This error function is minimized with a proper algorithm, and the finite element model is updated accordingly to match the measured response; the damage location and severity can then be identified from the updated model. In this paper, the error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC), which is derived from the eigenvectors. The error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced into the model by reducing the stiffness of a structural member. The 'experimental' data are simulated by finite element modelling, and measurement error is introduced into the synthetic 'experimental' data by adding Gaussian random noise. The efficiency and robustness of the method are demonstrated through three examples: a truss, a beam and a frame problem. The results show that the TLBO algorithm efficiently detects the damage location as well as the severity of damage from modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
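The modal assurance criterion entering the error function above has a compact closed form, MAC(phi_i, phi_j) = |phi_i' phi_j|^2 / ((phi_i' phi_i)(phi_j' phi_j)): a value of 1 means the two mode shapes are perfectly correlated, 0 that they are orthogonal. A minimal sketch (ours, not the authors' code), with made-up mode-shape vectors:

```python
def mac(phi_i, phi_j):
    """Modal assurance criterion between two real mode-shape vectors."""
    dot_ij = sum(a * b for a, b in zip(phi_i, phi_j))
    dot_ii = sum(a * a for a in phi_i)
    dot_jj = sum(b * b for b in phi_j)
    return dot_ij ** 2 / (dot_ii * dot_jj)

measured  = [0.0, 0.5, 1.0, 0.5]    # hypothetical measured mode shape
predicted = [0.0, 1.0, 2.0, 1.0]    # FE prediction: same shape, different scale
print(mac(measured, predicted))      # scaling does not matter -> 1.0
```

Because MAC is scale-invariant, it compares mode *shapes* regardless of normalization, which is why it pairs naturally with frequency differences in a model-updating error function.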
Procedia PDF Downloads 215
31612 Evaluation of Condyle Alterations after Orthognathic Surgery with a Digital Image Processing Technique
Authors: Livia Eisler, Cristiane C. B. Alves, Cristina L. F. Ortolani, Kurt Faltin Jr.
Abstract:
Purpose: This paper proposes a technically simple diagnosis method for orthodontists and maxillofacial surgeons to evaluate discrete bone alterations. The methodology consists of a protocol to optimize diagnosis and minimize the possibility of orthodontic and ortho-surgical retreatment. Materials and Methods: A protocol of image processing and analysis, through the ImageJ software and its plugins, was applied to 20 pairs of lateral cephalometric images obtained from cone-beam computerized tomographies, before and 1 year after orthognathic surgery. The optical density of the images was analyzed in the condylar region to determine possible bone alteration after surgical correction. Results: Image density was shown to be altered in all image pairs, especially regarding the condyle contours. According to the measurements, the condyle had a gender-related density reduction at p=0.05, and alterations of the condylar contours were registered in mm. Conclusion: A simple, viable and cost-effective technique can be applied to achieve a more detailed image-based diagnosis that does not depend on the human eye and therefore offers more reliable, quantitative results.
Keywords: bone resorption, computer-assisted image processing, orthodontics, orthognathic surgery
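The density comparison described in the Results can be reduced to the mean grayscale value of a condylar region of interest, compared between the pre- and post-surgical images. ImageJ performs this measurement interactively (Analyze > Measure on a drawn ROI); the pure-Python sketch below, with made-up pixel values, only shows the underlying arithmetic.

```python
def mean_density(image, roi):
    """Mean grayscale value inside a rectangular ROI given as (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    values = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(values) / len(values)

before = [[120, 130], [140, 150]]   # hypothetical pre-surgical condylar pixels
after  = [[110, 118], [126, 134]]   # hypothetical 1-year post-surgical pixels
roi = (0, 2, 0, 2)
print(mean_density(before, roi) - mean_density(after, roi))  # density drop -> 13.0
```

A positive difference indicates reduced optical density in the condylar ROI after surgery; in practice the two tomographic slices must first be registered so the ROI covers the same anatomy in both images.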
Procedia PDF Downloads 160
31611 Analysis of Formation Methods of Range Profiles for an X-Band Coastal Surveillance Radar
Authors: Nguyen Van Loi, Le Thanh Son, Tran Trung Kien
Abstract:
The paper deals with the problem of the formation of range profiles (RPs) for an X-band coastal surveillance radar. Two popular methods, the difference operator method and the window-based method, are reviewed and analyzed via two tests with different datasets. The test results show that although the original window-based method achieves better performance than the difference operator method, it has three main drawbacks: the use of 3 or 4 peaks of an RP for creating the windows; the extension of the window size using the power sum of the three adjacent cells on the left and right sides of the windows; and the application of the same threshold to all types of vessels to finish the RP formation process. These drawbacks lead to inaccurate RPs when the signal-to-clutter ratio is low. Therefore, some improvements to the original window-based method are proposed.
Keywords: range profile, difference operator method, window-based method, automatic target recognition
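The difference operator method compared above delimits a vessel's range profile from the first differences of received power along the range cells: a large positive jump marks the leading edge of the target, a large negative jump its trailing edge. A minimal illustrative sketch (the threshold and data are hypothetical, and the paper's actual operator may differ in detail):

```python
def rp_extent_by_difference(power, threshold):
    """Find (start, end) range-cell indices of a target via first differences."""
    diffs = [power[i + 1] - power[i] for i in range(len(power) - 1)]
    # First strong rise marks the leading edge of the target
    start = next((i + 1 for i, d in enumerate(diffs) if d > threshold), None)
    # Last strong fall marks the trailing edge
    end = next((i for i in reversed(range(len(diffs))) if diffs[i] < -threshold), None)
    return start, end

# Clutter floor around 1.0 with a target occupying cells 3..6
profile = [1.0, 1.1, 0.9, 8.0, 9.5, 9.0, 8.5, 1.0, 1.1]
print(rp_extent_by_difference(profile, threshold=3.0))  # -> (3, 6)
```

The sketch also makes the paper's criticism concrete: with a low signal-to-clutter ratio, the jumps at the target edges shrink toward the clutter fluctuations, and a single fixed threshold for all vessel types starts missing edges or triggering on clutter.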
Procedia PDF Downloads 127
31610 The Results of the Systematic Archaeological Survey of Sistan (Iran)
Authors: Reza Mehrafarin, Nafiseh Mirshekari
Abstract:
The Sistan plain has always been a site for the settlement of various human societies, thanks to its favorable environmental conditions, such as abundant water from the Hirmand River and fertile sedimentary soil. Consequently, there was a need for a systematic archaeological investigation of the area. The survey had multiple objectives, the most significant being the creation of an archaeological map and the identification and documentation of all ancient sites in order to establish their records and chronology. The survey was carried out in two phases, each covering half of the area. The research method involved fieldwork, with two teams of professional archaeologists conducting a comprehensive survey of each of the 22 areas of Sistan. Once a site was identified, various recording, scientific, and field operations were executed to study it. In the first phase (2007), an intensive field survey focused on the residential area of Sistan, including its northern and eastern regions; this phase resulted in the identification of 808 sites in eleven selected areas. In the second phase (2009), the desert area of Sistan, i.e., its southern half, was surveyed, leading to the identification of approximately 853 sites. Overall, these surveys identified 1661 sites in Sistan. Among them, approximately 899 belong to the Bronze Age (late 4th millennium BCE to early 2nd millennium BCE), around 501 date to the historical period, and nearly 590 pertain to the Islamic period. The archaeological investigations of both phases revealed that Sistan has consistently possessed fertile soil, abundant water, and a skilled workforce, making it capable of becoming Iran's granary and the center of the East once again if these conditions are restored.
Keywords: Sistan, field surveys, archaeology, archaeological map
Procedia PDF Downloads 65