Search results for: time domain reflectometry measurement technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20861

20321 Measurement Errors and Misclassifications in Covariates in Logistic Regression: Bayesian Adjustment of Main and Interaction Effects and the Sample Size Implications

Authors: Shahadut Hossain

Abstract:

Measurement errors in continuous covariates and/or misclassifications in categorical covariates are common in epidemiological studies. Regression analysis that ignores such mismeasurements seriously biases the estimated main and interaction effects of covariates on the outcome of interest; adjustments for such mismeasurements are therefore necessary. In this research, we propose a Bayesian parametric framework for eliminating the deleterious impacts of covariate mismeasurements in logistic regression. The proposed adjustment method is unified and can thus be applied to any generalized linear or non-linear regression model. Adjustment for covariate mismeasurements requires validation data, usually in the form of either gold-standard measurements or replicates of the mismeasured covariates on a subset of the study population. Initial investigation shows that the adequacy of such adjustment depends on the sizes of the main and validation samples, especially when the prevalences of the categorical covariates are low. We therefore investigate the impact of main and validation sample sizes on the adjusted estimates, and provide general guidelines for these sample sizes based on simulation studies.
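
The attenuation bias that motivates such adjustments can be illustrated with a short simulation. This is a hedged sketch, not the paper's Bayesian machinery: classical additive measurement error on a continuous covariate pulls the estimated logistic slope toward zero. All parameter values below are illustrative assumptions.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, iters=1500):
    """One-covariate logistic regression fitted by plain gradient ascent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(0)
true_b0, true_b1 = -0.5, 1.0
x_true = [random.gauss(0.0, 1.0) for _ in range(800)]
y = [1 if random.random() < 1.0 / (1.0 + math.exp(-(true_b0 + true_b1 * x))) else 0
     for x in x_true]
# the same covariate observed with classical (additive) measurement error
x_obs = [x + random.gauss(0.0, 1.0) for x in x_true]

_, b1_clean = fit_logistic(x_true, y)
_, b1_noisy = fit_logistic(x_obs, y)
# b1_noisy is attenuated toward zero relative to b1_clean
```

The Bayesian framework in the abstract corrects for exactly this kind of bias by modeling the error process with the help of validation data.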

Keywords: measurement errors, misclassification, mismeasurement, validation sample, Bayesian adjustment

Procedia PDF Downloads 396
20320 Electrical Equivalent Analysis of Micro Cantilever Beams for Sensing Applications

Authors: B. G. Sheeparamatti, J. S. Kadadevarmath

Abstract:

Microcantilevers are basic MEMS devices that can be used as sensors and actuators, and electronics can easily be built into them. The detection principle of microcantilever sensors is based on measuring the change in cantilever deflection or in its resonance frequency. The objective of this work is to explore the analogies between microcantilever beams and their electrical equivalents. Scientists and engineers working in MEMS normally use expensive software such as CoventorWare, IntelliSuite, ANSYS/Multiphysics, etc. This paper indicates the value of developing the electrical equivalent of a MEMS structure, through which one can gain better insight into the important parameters of the structure and their interrelations. In this work, starting from the mechanical model of the microcantilever, the equivalent electrical circuit is drawn and, using the force-voltage analogy, analyzed with circuit simulation software. By doing so, one gains access to a powerful set of intellectual tools that have been developed for understanding electrical circuits. The analysis is then repeated using ANSYS/Multiphysics, software based on the finite element method (FEM). It is observed that the mechanical- and electrical-domain results for a rectangular microcantilever are in agreement with each other.
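
The force-voltage analogy used here maps the lumped mass-damper-spring model m·x″ + c·x′ + k·x = F onto a series RLC circuit L·q″ + R·q′ + q/C = V. A minimal sketch follows; the lumped parameter values are illustrative assumptions, not taken from the paper:

```python
import math

def mech_to_electrical(m, c, k):
    """Force-voltage analogy: mass -> inductance, damping -> resistance,
    compliance 1/k -> capacitance."""
    return {"L": m, "R": c, "C": 1.0 / k}

def resonance_hz(m, k):
    """Undamped resonance frequency of the mechanical cantilever model."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# hypothetical lumped cantilever parameters: effective mass (kg),
# damping (N*s/m), stiffness (N/m)
m, c, k = 1e-11, 1e-8, 0.04
eq = mech_to_electrical(m, c, k)

f_mech = resonance_hz(m, k)
f_elec = 1.0 / (2.0 * math.pi * math.sqrt(eq["L"] * eq["C"]))
# both domains predict the same resonance frequency
```

Because the two models share one set of differential equations, any circuit simulator result (e.g., the resonance peak) carries directly back to the mechanical cantilever.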

Keywords: electrical equivalent circuit analogy, FEM analysis, micro cantilevers, micro sensors

Procedia PDF Downloads 386
20319 Inner Quality Parameters of Rapeseed (Brassica napus) Populations in Different Sowing Technology Models

Authors: É. Vincze

Abstract:

Demand for plant oils has increased enormously, due on the one hand to changes in human nutrition habits and on the other to the growing raw-material demand of several industrial sectors, including biofuel production. Although sunflower remains the determining oil crop in Hungary, both the production area and, in part, the average yield of rapeseed have increased. The available variety/hybrid palette has also changed and expanded significantly during the past decade. It is agreed that rapeseed production demands professionalism and local experience; the technological elements are successive, and high yields cannot be produced without a system-based approach. The aim of the present work was a complex study of one of the most critical production technology elements of rapeseed production: sowing technology. Several sowing technology elements are studied in this research project: the biological basis (the hybrid Arkaso is studied in this regard), sowing time (treatments were set to represent the wide period used in industrial practice: early, optimal, and late sowing), and plant density (the reactions of sparse, optimal, and overly dense populations were modelled). The multifactorial experimental system enables both the single and the complex evaluation of rapeseed sowing technology elements, as well as their modelling using the experimental data. Yield quality and quantity were determined in the present experiment, along with the interactions between these factors. The experiment was set up in four replications at the Látókép Plant Production Research Site of the University of Debrecen. Two different sowing times were sown in the first experimental year (2014) and three in the second (2015). Three plant densities were set in both years: 200, 350, and 500 thousand plants ha⁻¹. Uniform nutrient supply and a row spacing of 45 cm were applied, with winter wheat as the pre-crop. Plant physiological measurements were executed in the populations of the Arkaso rapeseed hybrid: relative chlorophyll content (SPAD) and leaf area index (LAI), each monitored at 7 different measurement times.

Keywords: inner quality, plant density, rapeseed, sowing time

Procedia PDF Downloads 188
20318 Effects of Body Positioning on Videofluoroscopic Barium Esophagram in Healthy Cats

Authors: Hyeona Kim, Kichang Lee, Seunghee Lee, Jeongsu An, Kyungjun Min

Abstract:

Contrast videofluoroscopy is the diagnostic imaging technique of choice for evaluating cats with dysphagia. Generally, videofluoroscopic studies have been done with the cat restrained in lateral recumbency, which differs from a neutral position such as standing or sternal recumbency, the actual swallowing posture. We hypothesized that measurements of esophageal transit and peristalsis would be affected by body position. This experimental study analyzed the imaging findings of barium esophagrams in 5 cats. Each cat underwent videofluoroscopy during swallowing of liquid barium and barium-soaked kibble in a standing position and in lateral recumbency. Esophageal transit time and the number of esophageal peristaltic waves were compared between body positions. For liquid barium, transit time in the cervical esophagus (0.57 s), cranial thoracic esophagus (2.5 s), and caudal thoracic esophagus (1.10 s) was delayed when cats were in lateral recumbency. For kibble, transit time in lateral recumbency was delayed even more than for liquid through the entire esophagus. Both liquid and kibble frequently began to be delayed at the thoracic inlet region, and transit time in the thoracic esophagus was significantly more delayed than in the cervical esophagus. In the standing position, 60.2% of liquid swallows stimulated primary esophageal peristalsis; in lateral recumbency, 50.5% did. Other variables were not significantly different. Lateral body positioning increases overall esophageal transit time, with thoracic esophageal transit most significantly delayed, and decreases the number of primary esophageal peristaltic waves.

Keywords: barium esophagram, body positioning, cat, videofluoroscopy

Procedia PDF Downloads 189
20317 Lamb Wave-Based Blood Coagulation Measurement System Using Citrated Plasma

Authors: Hyunjoo Choi, Jeonghun Nam, Chae Seung Lim

Abstract:

Acoustomicrofluidics has gained much attention for clinical and biological applications due to advantages such as noninvasiveness and easy integration with other miniaturized systems. However, a limitation of acoustomicrofluidics is the complicated and costly fabrication process of the electrodes. In this study, we propose a low-cost, lithography-free device that uses Lamb waves for blood analysis. Using a Lamb wave, calcium-ion-removed blood plasma and coagulation reagents can be rapidly mixed for a blood coagulation test. As coagulation proceeds, the viscosity of the sample increases, and this change can be monitored through the internal acoustic streaming of microparticles suspended in the sample droplet. The time at which the acoustic streaming of the particles stops due to the viscosity increase is defined as the coagulation time. With the addition of calcium ions at 0-25 mM, the measured coagulation time was compared with the conventional index for blood coagulation analysis, prothrombin time, and showed a high correlation (correlation coefficient of 0.94). Therefore, our simple and cost-effective Lamb wave-based blood analysis device has strong potential for use in clinical settings.
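
The reported agreement with prothrombin time is a Pearson correlation coefficient, which is straightforward to compute. The paired readings below are made-up illustrative numbers, not data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# hypothetical paired readings: Lamb-wave coagulation time (s)
# versus conventional prothrombin time (s)
lamb = [55.0, 48.0, 40.0, 33.0, 27.0, 22.0]
pt = [14.2, 13.1, 12.0, 11.4, 10.2, 9.8]
r = pearson_r(lamb, pt)
# near-linear paired data gives r close to 1
```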

Keywords: acoustomicrofluidics, blood analysis, coagulation, lamb wave

Procedia PDF Downloads 326
20316 A Predictive MOC Solver for Water Hammer Waves Distribution in Network

Authors: A. Bayle, F. Plouraboué

Abstract:

Water Distribution Networks (WDN) still suffer from a lack of knowledge about fast pressure transient events, although the latter may considerably impact their durability. Accidental or planned operating activities indeed give rise to complex pressure interactions and may drastically modify local pressure values, generating leaks and, in rare cases, pipe breaks. In this context, a numerical predictive analysis is conducted to prevent such events and optimize network management. Coupled Python/Fortran 90 home-made software has been developed, using the Method of Characteristics (MOC) to solve the water-hammer equations. The solver is validated by direct comparison with theoretical results and experimental measurements in simple configurations, and afterward extended to network analysis. The algorithm's most costly steps are designed for parallel computation. A varied set of boundary conditions and energetic loss models is considered for the network simulations. The results are analyzed in both the time and frequency domains and provide crucial information on the pressure distribution behavior within the network.
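
At the core of an MOC water-hammer solver is the interior-node update obtained by intersecting the C+ and C- characteristics. A minimal sketch in the standard Wylie-Streeter form follows; the variable names and numerical values are illustrative, not taken from the paper's solver:

```python
def moc_interior_node(HA, QA, HB, QB, B, R):
    """One Method of Characteristics update at an interior pipe node.
    HA, QA: head/flow at the upstream grid point (C+ characteristic);
    HB, QB: head/flow at the downstream grid point (C- characteristic);
    B = a / (g * A): characteristic impedance; R: friction coefficient."""
    Cp = HA + B * QA - R * QA * abs(QA)   # C+ compatibility constant
    Cm = HB - B * QB + R * QB * abs(QB)   # C- compatibility constant
    QP = (Cp - Cm) / (2.0 * B)
    HP = (Cp + Cm) / 2.0
    return HP, QP

# frictionless steady flow is preserved by the update
HP, QP = moc_interior_node(100.0, 0.5, 100.0, 0.5, B=120.0, R=0.0)
```

A network solver applies this update at every interior node each time step and couples pipes through boundary conditions (junctions, valves, reservoirs), which is where the paper's varied boundary condition set enters.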

Keywords: energetic losses models, method of characteristic, numerical predictive analysis, water distribution network, water hammer

Procedia PDF Downloads 210
20315 Attention-based Adaptive Convolution with Progressive Learning in Speech Enhancement

Authors: Tian Lan, Yixiang Wang, Wenxin Tai, Yilan Lyu, Zufeng Wu

Abstract:

The monaural speech enhancement task in the time-frequency domain has a myriad of approaches, with stacked convolutional neural networks (CNN) demonstrating superior ability in feature extraction and selection. However, using stacked single convolutions limits feature representation capability and generalization ability. In order to solve the aforementioned problem, we propose an attention-based adaptive convolutional network that integrates multi-scale convolutional operations into an operation-specific block via input-dependent attention, to adapt to complex auditory scenes. In addition, we introduce a two-stage progressive learning method to enlarge the receptive field without a dramatic increase in computation burden. We conduct a series of experiments based on the TIMIT corpus, and the experimental results show that our proposed model outperforms state-of-the-art models on all metrics.

Keywords: speech enhancement, adaptive convolution, progressive learning, time-frequency domain

Procedia PDF Downloads 104
20314 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
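
In a two-level design of this kind, each factor's main effect is the difference between the mean response at its high (+1) and low (-1) settings. A minimal sketch with a full 2^3 design and a made-up linear response; the factor names and coefficients are illustrative assumptions, not the paper's measurements:

```python
from itertools import product

def two_level_design(k):
    """Full 2^k factorial design in coded (-1, +1) units."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def main_effects(design, responses):
    """Main effect of factor j: mean(y | x_j = +1) - mean(y | x_j = -1).
    For a balanced +/-1 design this equals (2/n) * sum(x_j * y)."""
    k, n = len(design[0]), len(design)
    return [sum(x[j] * y for x, y in zip(design, responses)) * 2.0 / n
            for j in range(k)]

# toy response surface: execution time driven by data size, cores, memory
design = two_level_design(3)
responses = [20.0 + 5.0 * size - 2.0 * cores + 0.0 * mem
             for size, cores, mem in design]
effects = main_effects(design, responses)
# data size dominates, cores reduce time, memory is neutral
```

A fractional design runs only a chosen subset of these rows (e.g., 2^(5-2) = 8 of 32 runs), trading some confounding of interactions for far fewer clusters, which is what makes the 48-cluster study tractable.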

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 97
20313 Fabrication of LiNbO₃ Based Conspicuous Nanomaterials for Renewable Energy Devices

Authors: Riffat Kalsoom, Qurat-Ul-Ain Javed

Abstract:

The optical and dielectric properties of lithium niobates have made them fascinating materials for the optical industry, used in devices for Q-switching and optical switching. Synthesis of lithium niobates was carried out by a solvothermal process, with and without temperature fluctuation, at 200°C for 4 hrs, and the behavior of the properties for different durations was also examined. The prepared LiNbO₃ samples were examined for crystallographic phases using an XRD diffractometer, morphology by scanning electron microscopy (SEM), absorption by UV-visible spectroscopy, and dielectric response with an impedance analyzer. A structural change from trigonal to spherical morphology was observed on changing the reaction time. Crystallite size decreases with temperature fluctuation and with increasing reaction time. The band gap decreases, whereas the dielectric constant and dielectric loss increase with increasing reaction time. The trend of the AC conductivity is explained by Jonscher's power law. Owing to these significant properties, the material finds applications in devices such as cells, and in Q-switching and optical switching for laser and gigahertz frequencies, respectively, depending on industrial demands.
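
The AC-conductivity trend mentioned above follows Jonscher's universal power law, σ(ω) = σ_dc + A·ω^s with 0 < s < 1. A small sketch; the parameter values are illustrative assumptions, not fitted values from the paper:

```python
def jonscher_sigma(omega, sigma_dc, A, s):
    """Jonscher's universal power law for AC conductivity (S/m)."""
    return sigma_dc + A * omega ** s

# illustrative dispersion parameters
sigma_dc, A, s = 1e-7, 2e-10, 0.8
freqs = [1e2, 1e4, 1e6]  # angular frequencies (rad/s)
sigmas = [jonscher_sigma(w, sigma_dc, A, s) for w in freqs]
# conductivity rises with frequency; the exponent s sets the slope of
# log(sigma - sigma_dc) versus log(omega)
```

In practice σ_dc, A, and s are extracted by fitting measured impedance-analyzer data on such a log-log plot.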

Keywords: lithium niobates, renewable energy devices, controlled structure, temperature fluctuations

Procedia PDF Downloads 119
20312 Method of Visual Prosthesis Design Based on Biologically Inspired Design

Authors: Shen Jian, Hu Jie, Zhu Guo Niu, Peng Ying Hong

Abstract:

Two issues exist in traditional visual prosthesis design: the lack of a systematic method and a low level of humanization. To tackle these obstacles, a visual prosthesis design method based on biologically inspired design is proposed. Firstly, a constrained FBS knowledge cell model is applied to construct the functional model of the visual prosthesis in the biological field. Then the clustering results in the engineering domain are obtained with the use of a cross-domain knowledge cell clustering algorithm. Finally, a prototype system is designed to support the biologically inspired design, where conflicts are resolved by TRIZ and other tools, and the validity of the method is verified by the solution scheme.

Keywords: knowledge-based engineering, visual prosthesis, biologically inspired design, biomedical engineering

Procedia PDF Downloads 173
20311 Lexicon-Based Sentiment Analysis for Stock Movement Prediction

Authors: Zane Turner, Kevin Labille, Susan Gauch

Abstract:

Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We present a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
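
The lexicon-based scoring described here can be sketched in a few lines. The toy finance-flavored lexicon and its scores below are invented for illustration, not taken from the authors' lexicon:

```python
def sentiment_score(text, lexicon):
    """Average sentiment score of the lexicon words found in a text chunk."""
    words = text.lower().split()
    hits = [lexicon[w] for w in words if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

# hypothetical domain-specific lexicon built from conference-call language
finance_lex = {"growth": 0.8, "beat": 0.6, "strong": 0.5,
               "miss": -0.7, "headwinds": -0.5, "weak": -0.6}

bullish = sentiment_score("strong revenue growth beat expectations", finance_lex)
bearish = sentiment_score("weak quarter with notable headwinds", finance_lex)
```

The domain-specific advantage the abstract reports comes from scores like these being learned from financial text, where words such as "headwinds" carry sentiment that generic lexicons miss.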

Keywords: computational finance, sentiment analysis, sentiment lexicon, stock movement prediction

Procedia PDF Downloads 113
20309 Vortex Separator for More Accurate Air Dry-Bulb Temperature Measurement

Authors: Ahmed N. Shmroukh, I. M. S. Taha, A. M. Abdel-Ghany, M. Attalla

Abstract:

Application of fog systems for cooling and humidification is still limited, although these systems require less initial cost than other cooling systems such as pad-and-fan systems. The main reason for this limited use is the poor control of fog systems, which results in undesirable relative humidity and air temperature inside the cooled or humidified space. Any accurate control system essentially needs the air dry-bulb temperature as an input parameter; therefore, the air dry-bulb temperature in the space needs to be measured accurately. The scope of the present work is the separation of fog droplets from the air in a fogged space in order to measure the air dry-bulb temperature accurately. The separation is done in a small device inside which the sensor of the temperature measuring instrument is positioned; a vortex separator was designed and used. Another reference device was used for measuring the air temperature without separation, and a comparative study was performed to identify the device that gives the most accurate measurement of air dry-bulb temperature. The results showed that the proposed devices shifted the measured air dry-bulb temperature in the correct direction relative to a free junction. The vortex device was the best: it increased the temperature measured by the free junction by around 2 to 6°C for different fog on-off durations.

Keywords: fog systems, measuring air dry bulb temperature, temperature measurement, vortex separator

Procedia PDF Downloads 278
20308 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3- DOF Parallel Manipulator

Authors: Di Yao, Gunther Prokop, Kay Buttner

Abstract:

Dynamic parameters, including the center of gravity, mass, and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision tests, and real-time control of vehicle active systems. To identify these important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig) possessing three rotational degrees of freedom is proposed. To characterize the kinematics of the conceptual parallel manipulator, a kinematic analysis consisting of inverse kinematics and singularity architecture is carried out. Based on Euler's rotation equations for rigid-body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of the parameter identification to measurement noise and other unexpected disturbances, an optimization process searching for the optimal exciting trajectory of the parallel manipulator is conducted in the following section. For this purpose, 321-Euler-angles defined by a parameterized finite Fourier series are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thus achieve better parameter identification accuracy, the unknown coefficients of the parameterized finite Fourier series are estimated by an iterative algorithm implemented in MATLAB®. Meanwhile, the iterative algorithm ensures that the parallel manipulator stays within an achievable working status during the execution of the optimal exciting trajectory. It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters and could be an important application of parallel manipulators in the fields of parameter identification and test rig development.
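
A finite-Fourier-series trajectory parameterization of the kind used for exciting trajectories is commonly written as q(t) = q0 + Σ_k [a_k/(ω_f·k)·sin(ω_f·k·t) - b_k/(ω_f·k)·cos(ω_f·k·t)], where the coefficients a_k, b_k are what the optimizer tunes against the measurement matrix's condition number. A hedged sketch of evaluating such a trajectory; the coefficient values are illustrative, and this common form may differ in detail from the authors' parameterization:

```python
import math

def fourier_trajectory(t, a, b, wf, q0=0.0):
    """Finite-Fourier-series angle trajectory with fundamental frequency wf.
    a, b: lists of harmonic coefficients; the trajectory is periodic
    with period 2*pi/wf, which keeps the excitation repeatable."""
    q = q0
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        q += ak / (wf * k) * math.sin(wf * k * t) \
             - bk / (wf * k) * math.cos(wf * k * t)
    return q

# illustrative 2-harmonic excitation, period T = 1 s
wf = 2.0 * math.pi
a, b = [0.3, 0.1], [0.0, 0.0]
q_start = fourier_trajectory(0.0, a, b, wf)
q_later = fourier_trajectory(0.25, a, b, wf)
```

Periodicity is the practical point: the rig can repeat the excitation and average measurements, while the optimizer searches over a and b to keep the measurement matrix well conditioned.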

Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory

Procedia PDF Downloads 249
20307 Genome-Wide Assessment of Putative Superoxide Dismutases in Unicellular and Filamentous Cyanobacteria

Authors: Shivam Yadav, Neelam Atri

Abstract:

Cyanobacteria are photoautotrophic prokaryotes able to grow in diverse ecological habitats; they originated 2.5-3.5 billion years ago and brought about oxygenic photosynthesis. Since then, superoxide dismutases (SODs) have acquired great significance due to their ability to catalyze the detoxification of a byproduct of oxygenic photosynthesis, the superoxide radical. Sequence information from several cyanobacterial genomes offers a unique opportunity for a comprehensive comparative analysis of the superoxide dismutase family. In the present study, we extracted information regarding SODs from sequenced cyanobacterial species and investigated their diversity, conservation, domain structure, and evolution. 144 putative SOD homologues were identified. SODs are present in all cyanobacterial species, reflecting their significant role in survival; however, their distribution varies, with fewer in unicellular marine strains and an abundance in filamentous nitrogen-fixing cyanobacteria. Motifs and invariant amino acids typical of eukaryotic SODs were well conserved in these proteins. The SODs were classified into three major families according to their domain structures; interestingly, they lack the additional domains found in proteins of other families. Phylogenetic relationships correspond well with phylogenies based on 16S rRNA, and clustering occurs on the basis of structural characteristics such as domain organization. Similar conserved motifs and amino acids indicate that cyanobacterial SODs use a catalytic mechanism similar to that of eukaryotic SODs. Gene gain and loss were insignificant during SOD evolution, as evidenced by the absence of additional domains. This study has not only examined the overall background of sequence-structure-function interactions for the SOD gene family but also revealed variation in SOD distribution based on ecophysiological and morphological characters.

Keywords: comparative genomics, cyanobacteria, phylogeny, superoxide dismutases

Procedia PDF Downloads 118
20306 Identification of Training Topics for the Improvement of the Relevant Cognitive Skills of Technical Operators in the Railway Domain

Authors: Giulio Nisoli, Jonas Brüngger, Karin Hostettler, Nicole Stoller, Katrin Fischer

Abstract:

Technical operators in the railway domain are experts responsible for the supervisory control of the railway power grid as well as of the railway tunnels. The technical systems used to master these demanding tasks are constantly increasing in their degree of automation, and it therefore becomes difficult for technical operators to maintain control over the technical systems and the processes of their job. In particular, operators must have the necessary experience and knowledge to deal with malfunction situations or unexpected events. For this reason, it is of growing importance that the skills relevant for the execution of the job are maintained and further developed beyond the basic training operators receive, in which they are educated in technical knowledge and the work with guidelines. Training methods aimed at improving the cognitive skills needed by technical operators are still missing and must be developed. The goals of the present study were to identify the relevant cognitive skills of technical operators in the railway domain and to define the topics that training of these skills should address. Observational interviews were conducted in order to identify the main tasks and the organization of the work of technical operators, as well as the technical systems used in their job. Based on this analysis, the most demanding tasks of technical operators could be identified and described. The cognitive skills involved in the execution of these tasks are those that need to be trained. In order to identify and analyze these cognitive skills, a cognitive task analysis (CTA) was developed; CTA specifically aims at identifying the cognitive skills that employees apply when performing their tasks. The identified cognitive skills were summarized and grouped into training topics, and for every training topic specific goals were defined. The goals cover the three main categories to be trained in every topic: knowledge, skills, and attitude. Based on the results of this study, it is possible to develop specific training methods to train the relevant cognitive skills of technical operators.

Keywords: cognitive skills, cognitive task analysis, technical operators in the railway domain, training topics

Procedia PDF Downloads 132
20305 Near Infrared Spectrometry to Determine the Quality of Milk, Experimental Design Setup and Chemometrics: Review

Authors: Meghana Shankara, Priyadarshini Natarajan

Abstract:

Infrared (IR) spectroscopy has revolutionized the way we look at the materials around us. Unraveling the patterns in the molecular spectra of materials to analyze their composition and properties has been one of the most interesting challenges in modern science. Applications of IR spectrometry are numerous in the fields of pharmaceuticals, health, food and nutrition, oils, agriculture, construction, polymers, beverages, fabrics, and many more, limited only by people's curiosity. Near-infrared (NIR) spectrometry is applied robustly to the analysis of solid and liquid substances because it is non-destructive. In this paper, we review the application of NIR spectrometry to milk quality analysis and present the modes of measurement applied in NIRS measurement setups, Design of Experiments (DoE), and the classification/quantification algorithms used for predicting milk composition, such as fat %, protein %, lactose %, and solids-not-fat (SNF %), along with different approaches to adulterant identification. We also discuss the important NIR ranges for the chosen milk parameters. The performance metrics used in the comparison of the various chemometric approaches include Root Mean Square Error (RMSE), R², slope, offset, sensitivity, specificity, and accuracy.
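
Two of the performance metrics listed above are easy to state precisely. A minimal sketch with invented reference and predicted fat % values (not data from any reviewed study):

```python
def rmse(actual, predicted):
    """Root mean square error between reference and predicted values."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# hypothetical milk fat % readings: lab reference vs NIR model prediction
reference = [3.2, 3.8, 4.1, 4.6, 5.0]
predicted = [3.3, 3.7, 4.2, 4.5, 5.1]
err = rmse(reference, predicted)
fit = r_squared(reference, predicted)
```

Chemometric model comparisons typically report both: RMSE in the units of the constituent (here, fat %) and R² as a unitless measure of explained variance.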

Keywords: chemometrics, design of experiment, milk quality analysis, NIRS measurement modes

Procedia PDF Downloads 253
20304 Scoping Review of Biological Age Measurement Composed of Biomarkers

Authors: Diego Alejandro Espíndola-Fernández, Ana María Posada-Cano, Dagnóvar Aristizábal-Ocampo, Jaime Alberto Gallo-Villegas

Abstract:

Background: With the increase in life expectancy, aging has become the subject of frequent research, and multiple strategies have therefore been proposed to quantify the advance of the years based on the known physiology of human senescence. For several decades, attempts have been made to characterize these changes through the concept of biological age, which aims to integrate, in a measure of time, structural or functional variation captured by biomarkers, in comparison with simple chronological age. The objective of this scoping review is to deepen the updated concept of biological age measurement composed of biomarkers in the general population and to summarize recent evidence in order to identify gaps and priorities for future research. Methods: A scoping review was conducted according to the five-phase methodology developed by Arksey and O'Malley, through a search of five bibliographic databases up to February 2021. Original articles were included, with no time or language limit, that described biological age composed of at least two biomarkers in people over 18 years of age. Results: 674 articles were identified, of which 105 were evaluated for eligibility and 65 were included with information on the measurement of biological age composed of biomarkers. Articles from 1974 onward, from 15 nationalities, were found, most of them observational studies, in which clinical or paraclinical biomarkers were used, and 11 different methods for calculating composite biological age were reported. The reported outcomes were the relationship with the measured biomarkers themselves, specified risk factors, comorbidities, physical or cognitive functionality, and mortality. Conclusions: The concept of biological age composed of biomarkers has evolved since the 1970s, and multiple methods for its quantification have been described through the combination of different clinical and paraclinical variables from observational studies. Future research should consider population characteristics and the choice of biomarkers against the proposed outcomes to improve the understanding of aging variables and to direct effective strategies for a proper approach.

Keywords: biological age, biological aging, aging, senescence, biomarker

Procedia PDF Downloads 171
20303 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack

Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo

Abstract:

The purpose of this article is to optimize Equivalent Electrical Circuit Models (EECM) of different orders to obtain greater precision in the modeling of Li-ion battery packs. The optimization considers circuits based on 1RC, 2RC, and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the highly non-linear behavior of the battery pack, a Genetic Algorithm (GA) was used to solve for and optimize the parameters of each EECM considered (1RC, 2RC, and 3RC). The objective of the estimation is to minimize the mean square error between the impedance measured on the real battery pack and that generated by the simulation of each proposed circuit model. The results have been verified by comparing the Nyquist plots of the estimated complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered viable representations of the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City Cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that GA optimization yields EECMs with 2RC or 3RC networks that represent the dynamic behavior of a battery pack in vehicular applications with high precision.
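
The GA-based fitting loop can be sketched for the simplest (1RC) case: generate candidate (R0, R1, C1) triples, score them by the mean square complex-impedance error, and breed the best ones. Everything below (parameter ranges, frequencies, the elitist crossover/mutation scheme, synthetic data in place of measurements) is an illustrative assumption, not the authors' implementation:

```python
import random

def z_1rc(omega, R0, R1, C1):
    """Impedance of a 1RC EECM: series resistor plus one parallel R-C branch."""
    return R0 + R1 / (1.0 + 1j * omega * R1 * C1)

def mse(params, omegas, z_meas):
    """Mean square error between model and measured complex impedance."""
    R0, R1, C1 = params
    return sum(abs(z_1rc(w, R0, R1, C1) - z) ** 2
               for w, z in zip(omegas, z_meas)) / len(omegas)

def ga_fit(omegas, z_meas, pop_size=40, gens=60, seed=1):
    """Tiny elitist GA: keep the best quarter, refill with averaged,
    randomly perturbed children (crossover + multiplicative mutation)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 0.1), rng.uniform(0.0, 0.1), rng.uniform(1.0, 5000.0)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: mse(p, omegas, z_meas))
        elite = pop[: pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = rng.sample(elite, 2)
            children.append([(x + y) / 2.0 * rng.uniform(0.9, 1.1)
                             for x, y in zip(pa, pb)])
        pop = elite + children
    return min(pop, key=lambda p: mse(p, omegas, z_meas))

# synthetic "measured" impedance from assumed true parameters (ohm, ohm, farad)
omegas = [0.001, 0.005, 0.02, 0.1, 0.5, 2.0]
true_params = (0.02, 0.05, 1000.0)
z_meas = [z_1rc(w, *true_params) for w in omegas]
best = ga_fit(omegas, z_meas)
```

Higher-order (2RC, 3RC) models extend z_1rc with additional parallel R-C branches; the same GA loop then searches a larger parameter vector.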

Keywords: Li-ion battery pack modeling, optimization, EECM, GA, electric vehicle applications

Procedia PDF Downloads 106
20302 Non-Pharmacological Approach to the Improvement and Maintenance of the Convergence Parameter

Authors: Andreas Aceranti, Guido Bighiani, Francesca Crotto, Marco Colorato, Stefania Zaghi, Marino Zanetti, Simonetta Vernocchi

Abstract:

The management of ocular parameters such as convergence, accommodation, and miosis is very complex; both the neurovegetative system and the oculocephalogyric system come into play. We have found the "high-velocity, low-amplitude" (HVLA) technique applied at C7-T1 (where the ciliospinal center of Budge is located) to be effective in improving the convergence parameter, as assessed by measuring the point of maximum convergence. With this research, we set out to investigate whether the improvement obtained through the HVLA maneuver lasts over time, carrying out one measurement before manipulation, one immediately after, and one a month later. We recruited a population of 30 subjects with both refractive and non-refractive problems. Of the 30 patients tested, 27 showed a positive result after the HVLA maneuver, with an improvement in the point of maximum convergence. After a month, we retested all 27 subjects: some had further improved, others had maintained the result, and three had slightly lost the gain obtained. None of the retested patients returned to their pre-manipulation point of maximum convergence. This result opens the door to a multidisciplinary approach between ophthalmologists and osteopaths aimed at addressing the oculomotor and convergence deficits that increasingly afflict our society due to the massive use of devices and to lifestyles spent in closed, restricted environments.

Keywords: point of maximum convergence, HVLA, improvement in PPC, convergence

Procedia PDF Downloads 59
20301 The Various Forms of a Soft Set and Its Extension in Medical Diagnosis

Authors: Biplab Singha, Mausumi Sen, Nidul Sinha

Abstract:

In order to deal with the imprecision and uncertainty of a system, D. Molodtsov introduced the concept of the 'soft set' in 1999. Since then, a number of related definitions have been conceptualized. This paper studies various forms of soft sets with examples, covering the concepts of the domain and co-domain of a soft set, conversion to one-one and onto functions, the matrix representation of a soft set and its relation to one-one functions, upper and lower triangular matrices, and the transpose and kernel of a soft set. The paper also presents an extension of soft sets to medical diagnosis: two soft sets, relating to diseases and to symptoms, are considered, and the diagnosis of the disease is computed using the AND and OR operations, illustrated through appropriate examples.
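The AND and OR operations mentioned above can be sketched directly in Python by representing a soft set as a mapping from parameters to subsets of the universe. The patients, symptoms, and diseases below are hypothetical illustrations, not data from the paper:

```python
def soft_and(F, G):
    """(F AND G)(a, b) = F(a) ∩ G(b) for every pair of parameters."""
    return {(a, b): F[a] & G[b] for a in F for b in G}

def soft_or(F, G):
    """(F OR G)(a, b) = F(a) ∪ G(b) for every pair of parameters."""
    return {(a, b): F[a] | G[b] for a in F for b in G}

# Soft set of symptoms: each symptom maps to the set of patients exhibiting it
symptoms = {"fever": {"p1", "p2"}, "cough": {"p2"}}
# Soft set of diseases: each disease maps to the set of patients whose profile matches it
diseases = {"flu": {"p2", "p3"}, "cold": {"p1"}}

both = soft_and(symptoms, diseases)    # patients with the symptom AND the disease profile
either = soft_or(symptoms, diseases)   # patients with the symptom OR the disease profile
```

Here `both[("fever", "flu")]` picks out patients who both have a fever and match the flu profile; a diagnosis can be read off by comparing such intersections across diseases.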

Keywords: kernel of a soft set, soft set, transpose of a soft set, upper and lower triangular matrix of a soft set

Procedia PDF Downloads 325
20300 Study on the Non-Contact Sheet Resistance Measuring of Silver Nanowire Coated Film Using Terahertz Wave

Authors: Dong-Hyun Kim, Wan-Ho Chung, Hak-Sung Kim

Abstract:

In this work, non-destructive evaluation using terahertz (THz) waves was conducted to measure the sheet resistance of silver nanowire coated film and to detect damage in the film. A pulsed THz instrument was used, and the measurement was performed in transmission and in pitch-catch reflection mode with a 30-degree incidence angle. In the transmission mode, the intensity of the THz wave gradually increased as the conductivity decreased; in the pitch-catch reflection mode, the intensity decreased as the conductivity decreased. To confirm the conductivity of the film, the sheet resistance was measured with a 4-point probe station, and an interaction formula was derived from the relation between THz intensity and sheet resistance. By substituting the sheet resistance into the formula and comparing the result with the measured maximum THz intensity, sheet resistance measurement using THz waves proved more suitable than measurement with the 4-point probe station. In addition, damage to the silver nanowire coated film was detected with the THz imaging system, so the reliability of the entire film can also be assessed. In conclusion, real-time THz monitoring can be applied to transparent electrodes, detecting damaged areas as well as measuring sheet resistance.
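The interaction formula in the abstract was derived empirically from the measurements. As a hedged stand-in, the textbook thin-film (Tinkham) transmission formula relates normalized THz intensity to sheet resistance and can be inverted the same way (free-standing film at normal incidence, substrate effects ignored):

```python
import math

Z0 = 376.73  # impedance of free space, ohms

def transmission(rs):
    """Normalized intensity transmitted through a thin conductive film
    (Tinkham formula, normal incidence, free-standing film)."""
    return 1.0 / (1.0 + Z0 / (2.0 * rs)) ** 2

def sheet_resistance(t):
    """Invert the formula: recover Rs (ohm/sq) from a measured intensity ratio."""
    return Z0 / (2.0 * (1.0 / math.sqrt(t) - 1.0))

# Round trip: a 10 ohm/sq film transmits roughly 0.25% of the incident intensity
t = transmission(10.0)
rs = sheet_resistance(t)
```

Note the trend matches the abstract's transmission-mode observation: lower conductivity means higher sheet resistance and hence higher transmitted intensity.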

Keywords: terahertz wave, sheet resistance, non-destructive evaluation, silver nanowire

Procedia PDF Downloads 473
20299 Memory, Self, and Time: A Bachelardian Perspective

Authors: Michael Granado

Abstract:

The French philosopher Gaston Bachelard’s philosophy of time is articulated in his two works on the subject, The Intuition of the Instant (1932) and The Dialectic of Duration (1936). Both present a systematic methodology predicated on the assumption that our understanding of time radically changed as a result of Einstein and therefore needs to be reimagined. Bachelard makes a major distinction in his discussion of time: 1. time as it is (physical time); 2. time as we experience it (phenomenological time). This paper focuses on the second, phenomenological time, and explores the connections between Bachelard’s work and contemporary psychology. Several aspects of Bachelard’s philosophy of time nicely complement our current understanding of memory and self and clarify how the self relates to experienced time. Two points in particular stand out: the first is the relative nature of subjective time, and the second is the implications of subjective time for the formation of the narrative self. Bachelard introduces two philosophical concepts to explain these points: rhythmanalysis and reverie. By exploring these concepts, it becomes apparent that there is an undeniable link between memory, self, and time. Through the narrative self, the individual links memories and time together to form a sense of personal identity.

Keywords: Gaston Bachelard, memory, self, time

Procedia PDF Downloads 148
20298 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements

Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker

Abstract:

Driving forces for enhancements in production are trends like digitalization and individualized production. Currently, such developments are restricted to assembly parts; complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require the individualized production of complex-shaped workpieces. Variations between the nominal model and the actual geometry can lead to changes in operations in computer-aided process planning (CAPP), which must be handled to make CAPP manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner with objective criteria for deciding on the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g., process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part inspection method, based on design and measurement data, which evaluates the actual geometries of individual workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining and to provide the basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping the requirements of industrial applications in mind. Workflows are a well-established design method for standardized processes; especially in fields like the aerospace industry, standardization and certification of processes are important aspects. Function blocks, which provide a standardized, event-driven abstraction of algorithms and data exchange, are used for modeling and executing the inspection workflows.
Each analysis step of the inspection, such as positioning the measurement data or checking geometrical criteria, is carried out by a function block. One advantage of this approach is the flexibility to design workflows and to adapt algorithms to the specific application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks builds on workpiece-specific information, e.g., design data. Furthermore, appropriate logics and decision criteria have to be considered for the different product lifecycle phases; for example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important: they must support the exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive, part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process; in both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides the information needed to decide between different potential adaptive machining processes.
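A minimal sketch of the function-block idea, using a toy one-dimensional geometry and a hypothetical tolerance value (industrial implementations would typically follow IEC 61499-style event-driven function blocks rather than plain classes):

```python
class FunctionBlock:
    """Base class: each inspection step reads and updates a shared context."""
    name = "block"
    def run(self, ctx):
        raise NotImplementedError

class PositionData(FunctionBlock):
    name = "position measurement data"
    def run(self, ctx):
        # Toy referencing step: shift measured points so centroids coincide
        nom, act = ctx["nominal"], ctx["actual"]
        shift = sum(nom) / len(nom) - sum(act) / len(act)
        ctx["aligned"] = [a + shift for a in act]

class CheckTolerance(FunctionBlock):
    name = "check geometric deviation"
    def run(self, ctx):
        # Largest deviation between aligned actual geometry and nominal model
        dev = max(abs(a - n) for a, n in zip(ctx["aligned"], ctx["nominal"]))
        ctx["max_deviation"] = dev
        ctx["adaptable"] = dev <= ctx["tolerance"]

def run_workflow(blocks, ctx):
    """Execute blocks in order and return an inspection protocol."""
    return [(b.name, b.run(ctx) or "done") for b in blocks]

ctx = {"nominal": [0.0, 1.0, 2.0], "actual": [0.5, 1.5, 2.4], "tolerance": 0.1}
protocol = run_workflow([PositionData(), CheckTolerance()], ctx)
```

After the run, `protocol` plays the role of the inspection protocol and `ctx["adaptable"]` is the objective manufacturability decision; swapping in different tolerance blocks models the new-part versus repair distinction.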

Keywords: adaptive, CAx, function blocks, turbomachinery

Procedia PDF Downloads 286
20297 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique, enabling non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheeses in cups and bottles). The purpose of the instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage, over the whole shelf life of the product, in the presence of different microorganisms. The instrument’s optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer processes spectral artifacts and stores an arbitrary set of calibration data, allowing properly calibrated measurements on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed to allow calibration of the instrument in the field, even on containers that are notoriously difficult to seal properly. This protocol is described and evaluated against reference measurements obtained with an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation measurements on different containers are reported, and two example recordings of carbon dioxide concentration evolution illustrate the instrument's operation: the first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide, while the second shows the dissolution transient of an unsaturated liquid medium in the presence of a carbon-dioxide-rich headspace atmosphere.
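At its core, a TDLAS concentration measurement rests on the Beer-Lambert law; the sketch below shows only that core relation, with placeholder values for the absorption cross-section and headspace path length (real WMS processing adds wavelength modulation and harmonic detection on top of this):

```python
import math

def absorbance(i0, i):
    """Napierian absorbance from incident and transmitted intensity."""
    return math.log(i0 / i)

def number_density(i0, i, sigma, path_length):
    """Beer-Lambert: I = I0 * exp(-sigma * N * L)  →  N = ln(I0/I) / (sigma * L)."""
    return absorbance(i0, i) / (sigma * path_length)

# Hypothetical values: absorption cross-section sigma (m^2), headspace path L (m)
sigma, L = 1.0e-22, 0.05
n_true = 1.0e22                                  # molecules per m^3
i_meas = 1.0 * math.exp(-sigma * n_true * L)     # simulated transmitted intensity
n_est = number_density(1.0, i_meas, sigma, L)    # recovered concentration
```

The per-container calibration data described in the abstract effectively absorbs the unknowns (path length, window losses) so that this inversion can be applied to arbitrary cup and bottle shapes.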

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 303
20296 Smartphone-Based Human Activity Recognition by Machine Learning Methods

Authors: Yanting Cao, Kazumitsu Nawata

Abstract:

As smartphones evolve, their software and hardware become more capable, and smartphone-based human activity recognition can accordingly become more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the high dimensionality of the data, in particular a 561-feature vector with time- and frequency-domain variables, cleaning these intractable features and training a proper model is extremely challenging. After a series of feature-selection and parameter-tuning steps, a well-performing SVM classifier was trained.
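The feature-cleaning step can be illustrated with a simple variance threshold; here a nearest-centroid classifier stands in for the SVM (scikit-learn's `SVC` would be the usual choice for the real pipeline), and a tiny toy dataset replaces the 561-feature recordings:

```python
def variance(col):
    m = sum(col) / len(col)
    return sum((x - m) ** 2 for x in col) / len(col)

def select_features(X, threshold=1e-6):
    """Keep the indices of columns whose variance exceeds the threshold."""
    cols = list(zip(*X))
    return [i for i, c in enumerate(cols) if variance(c) > threshold]

def fit_centroids(X, y, keep):
    """Per-class mean of the selected features."""
    cents = {}
    for label in set(y):
        rows = [[x[i] for i in keep] for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(c) / len(c) for c in zip(*rows)]
    return cents

def predict(x, cents, keep):
    """Assign the class whose centroid is nearest in the selected-feature space."""
    v = [x[i] for i in keep]
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(v, c))
    return min(cents, key=lambda lab: dist(cents[lab]))

# Toy data: column 1 is constant (e.g. a dead sensor channel) and gets dropped
X = [[1.0, 0.0, 5.0], [1.2, 0.0, 5.1], [3.0, 0.0, 1.0], [3.1, 0.0, 0.9]]
y = ["walking", "walking", "sitting", "sitting"]
keep = select_features(X)
cents = fit_centroids(X, y, keep)
label = predict([1.1, 0.0, 4.9], cents, keep)
```

The same two-stage structure (prune uninformative features, then fit a classifier on what remains) scales directly to the 561-dimensional smartphone feature vectors.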

Keywords: smart sensors, human activity recognition, artificial intelligence, SVM

Procedia PDF Downloads 131
20295 A Low-Cost of Foot Plantar Shoes for Gait Analysis

Authors: Zulkifli Ahmad, Mohd Razlan Azizan, Nasrul Hadi Johari

Abstract:

This paper presents the development and testing of a wearable sensor system for gait analysis. For validation, plantar surface measurement with a force plate was used. In conventional gait analysis, force plates capture barefoot measurements of single steps and do not allow analysis of repeated steps during normal walking and running; such measurements do not represent daily plantar pressures inside the shoe insole and capture only the ground reaction force. Force plate measurement is usually limited to a few steps, is performed indoors, and does not easily provide coupled information from both feet during walking. To measure pressure over a large number of steps and obtain the pressure in each part of the insole, sensors can instead be placed within the insole. This provides a method for determining the plantar pressures of a shoe-wearing subject while standing, walking, or running. Placing pressure sensors in the insole provides location-specific information, so the choice of sensor placement determines which critical regions under the insole are captured. In this project, the device consists of left and right shoe insoles, each with ten force-sensitive resistors (FSRs). An Arduino Mega was used as the microcontroller to read the analog inputs from the FSRs. The readings were transmitted via Bluetooth, providing force data in real time on a smartphone. Blueterm, an Android application, was used as the interface to read the FSR values from the shoe-wearing subject. The subjects were two healthy men of different ages and weights, tested while standing, walking (1.5 m/s), jogging (5 m/s), and running (9 m/s) on a treadmill. The data are saved on the Android device for analysis and comparison graphs.
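A sketch of the conversion on the receiving side, assuming the FSR sits in a simple voltage divider read by the Arduino's 10-bit ADC. The fixed-resistor value and the power-law calibration constants below are hypothetical; a real FSR needs the manufacturer's force-resistance curve:

```python
VCC = 5.0           # Arduino supply voltage, volts
R_FIXED = 10_000.0  # pull-down resistor of the assumed divider, ohms

def adc_to_voltage(adc, bits=10):
    """Arduino Mega ADC: raw 10-bit reading → voltage at the divider midpoint."""
    return VCC * adc / (2 ** bits - 1)

def fsr_resistance(v_out):
    """FSR on the high side of the divider: v_out = VCC * R_FIXED / (R_FSR + R_FIXED)."""
    return R_FIXED * (VCC - v_out) / v_out

def force_newtons(r_fsr, k=5.0e5, n=1.0):
    """Hypothetical power-law calibration F ≈ k / R^n; the FSR resistance
    drops as applied force increases."""
    return k / r_fsr ** n

reading = 512                                    # example raw ADC sample
r = fsr_resistance(adc_to_voltage(reading))
f = force_newtons(r)
```

Running ten such conversions per insole at each Bluetooth packet reproduces the per-sensor pressure map described in the abstract.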

Keywords: gait analysis, plantar pressure, force plate, wearable sensor

Procedia PDF Downloads 431
20294 Dosimetric Application of α-Al2O3:C for Food Irradiation Using TA-OSL

Authors: A. Soni, D. R. Mishra, D. K. Koul

Abstract:

α-Al2O3:C has been reported to have deep traps at 600 °C and 900 °C, which can be accessed at relatively lower temperatures (122 °C and 322 °C, respectively) using thermally assisted OSL (TA-OSL). In this work, the dose response of α-Al2O3:C was studied in the range of 10 Gy to 10 kGy for application to food irradiation in the low (up to 1 kGy) and medium (1 to 10 kGy) dose ranges. The TOL (thermo-optically stimulated luminescence) measurements were carried out on a Risø TL/OSL TL-DA-15 system with a blue-LED (λ = 470 ± 30 nm) stimulation source, with the power level set at 90% of the maximum stimulation intensity of the blue LEDs (40 mW/cm²). The observations were carried out on a commercial α-Al2O3:C phosphor. The TOL experiments were run with 300 active channels and 1 inactive channel; with these settings, the sample is subjected to linear thermal heating under constant optical stimulation. The detection filter used in all observations was a Hoya U-340 (λp ~ 340 nm, FWHM ~ 80 nm). Irradiation of the samples was carried out using the 90Sr/90Y β-source housed in the system. A heating rate of 2 °C/s was used in the TL measurements to reduce the temperature lag between the heater plate and the samples. To study the dose response of the deep traps of α-Al2O3:C, samples were irradiated with doses ranging from 10 Gy to 10 kGy, with three samples irradiated per dose. To record the TA-OSL, TL was first recorded up to 400 °C to deplete the signal of the main 185 °C dosimetric TL peak in α-Al2O3:C, which is also associated with the basic OSL traps. After the TL readout, the sample was subjected to the TOL measurement. As a result, two well-defined TA-OSL peaks appear, at 121 °C and 232 °C, in both the time and temperature domains, distinct from the main dosimetric TL peak at ~185 °C.
The integrated TOL signal was measured as a function of absorbed dose and found to be linear up to 10 kGy. Thus, it can be used over the low and intermediate dose ranges relevant to food irradiation. The deep energy-level defects of the α-Al2O3:C phosphor can be accessed using the TOL section of the Risø reader system.
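The linearity check amounts to an ordinary least-squares fit of integrated signal versus dose; a minimal sketch follows, with synthetic signal values standing in for the measured TOL integrals:

```python
def linear_fit(xs, ys):
    """Ordinary least squares: returns slope, intercept, and R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

doses = [10.0, 100.0, 1000.0, 5000.0, 10000.0]   # Gy, spanning both dose ranges
signals = [2.0 * d for d in doses]                # synthetic, perfectly linear signal
slope, intercept, r2 = linear_fit(doses, signals)
```

An R² close to 1 over the full 10 Gy to 10 kGy span is what supports using the integrated signal as a dose estimate.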

Keywords: α-Al2O3:C, deep traps, food irradiation, TA-OSL

Procedia PDF Downloads 285
20293 Numerical Simulation of Urea Water Solution Evaporation Behavior inside the Diesel Selective Catalytic Reduction System

Authors: Kumaresh Selvakumar, Man Young Kim

Abstract:

Selective catalytic reduction (SCR) converts nitrogen oxides with the aid of a catalyst by adding aqueous urea to the exhaust stream. In this work, the urea-water droplets sprayed into the exhaust gases are treated with Lagrangian particle tracking. The evaporation of ammonia from a single droplet of urea-water solution is investigated computationally with a convection-diffusion controlled model. The conversion to ammonia due to thermolysis of the urea-water droplets is measured downstream at different sections using the finite-rate/eddy-dissipation model. In this paper, a mixer installed upstream enhances the distribution of ammonia over the entire domain, which is calculated at different time steps. Calculations are made over the respective durations such that complete decomposition of urea is possible at a much shorter residence time.
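Droplet shrinkage in such Lagrangian models is often idealized by the d²-law, under which the squared diameter decreases linearly in time. A minimal sketch (the evaporation constant K here is a placeholder; real SCR simulations couple heat and mass transfer with urea thermolysis kinetics):

```python
def droplet_diameter(d0, K, t):
    """d²-law: d(t)² = d0² - K·t, clipped at full evaporation."""
    return max(d0 ** 2 - K * t, 0.0) ** 0.5

def droplet_lifetime(d0, K):
    """Time for complete evaporation under the d²-law."""
    return d0 ** 2 / K

d0 = 50e-6        # 50-micron urea-water droplet, m (illustrative)
K = 1.0e-7        # hypothetical evaporation constant, m²/s
t_life = droplet_lifetime(d0, K)
```

Comparing `t_life` against the exhaust residence time is the basic check behind the abstract's conclusion that complete decomposition must fit within a short residence time.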

Keywords: convection-diffusion controlled model, lagrangian particle tracking, selective catalytic reduction, thermolysis

Procedia PDF Downloads 390
20292 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on the extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, to view SAR data in an image domain comparable to what a human would see and thereby ease interpretation. An alternative but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data are then range compressed, and lastly the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values of each pulse, summed over the entire collection. This yields a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing now allow this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase-history data size and 3D point-cloud size. Backprojection algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
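The per-voxel sum described above can be sketched on the CPU; a GPU version simply maps the outer loop over voxels onto threads. This magnitude-only toy uses synthetic unit-spike range profiles and omits the phase correction a real implementation applies:

```python
import math

def backproject(voxels, platform_positions, range_profiles, r0, dr):
    """For each voxel, sum the range-compressed sample at its distance to the
    platform over every pulse. Real SAR backprojection also applies the phase
    term exp(j·4π·R/λ); this sketch keeps magnitudes only."""
    image = []
    for v in voxels:
        acc = 0.0
        for pos, profile in zip(platform_positions, range_profiles):
            b = int(round((math.dist(v, pos) - r0) / dr))   # range bin for this voxel
            if 0 <= b < len(profile):
                acc += profile[b]
        image.append(acc)
    return image

# Synthetic collection: one point scatterer at the origin, a linear flight path,
# and a unit spike in each pulse's range profile at the scatterer's range bin.
scatterer = (0.0, 0.0, 0.0)
positions = [(float(x), -100.0, 50.0) for x in range(-5, 6)]
r0, dr, n_bins = 100.0, 0.01, 4000
profiles = []
for pos in positions:
    profile = [0.0] * n_bins
    profile[int(round((math.dist(scatterer, pos) - r0) / dr))] = 1.0
    profiles.append(profile)

image = backproject([scatterer, (10.0, 0.0, 0.0)], positions, profiles, r0, dr)
```

All pulses add up at the true scatterer location, so the first voxel accumulates one count per pulse while the off-target voxel accumulates far less; because each voxel's loop is independent, the outer loop parallelizes trivially.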

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 56