Search results for: precision medicine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 879

Search results for: precision medicine

699 An Optimal and Efficient Family of Fourth-Order Methods for Nonlinear Equations

Authors: Parshanth Maroju, Ramandeep Behl, Sandile S. Motsa

Abstract:

In this study, we propose a simple and interesting family of fourth-order multi-point methods without memory for obtaining simple roots. Each iteration of this family requires only three functional evaluations, namely two function values f(xn), f(yn) and one first-order derivative f'(xn). The accuracy and validity of the new schemes are tested on a number of numerical examples and compared with existing optimal fourth-order methods available in the literature. The proposed methods are found to be very useful in high-precision computations. Further, a dynamical study of these methods also supports the theoretical results.
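To make the iteration scheme concrete, the following minimal Python sketch implements Ostrowski's method, a classical optimal fourth-order scheme that uses exactly the same three evaluations per step (f(xn), f(yn), f'(xn)); it stands in for the proposed family, whose specific weight functions are not reproduced here.

```python
def ostrowski(f, fprime, x0, tol=1e-14, max_iter=50):
    """Representative optimal fourth-order scheme using f(xn), f(yn), f'(xn) per iteration."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - fx / dfx                                   # Newton predictor
        fy = f(y)
        x_new = y - fy * fx / (dfx * (fx - 2.0 * fy))      # Ostrowski corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: simple root of f(x) = x^3 - 2x - 5 near x = 2
root = ostrowski(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)  # ~2.0945514815423265
```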

Keywords: basins of attraction, nonlinear equations, simple roots, Newton's method

Procedia PDF Downloads 289
698 Pesticide Residue Determination on Cumin Plant (Nigella orientalis L.) Grown through Agricultural Practices with LC-MS/MS and GC-MS

Authors: Nilda Ersoy, Sevinç Şener, Ayşe Yalçın Elidemir, Ebru Evcil, Ergün Döğen

Abstract:

In this study, pesticide residues were investigated in black cumin (Nigella orientalis L.) seeds grown in Turkey. GC-MS and LC-MS/MS analytical instruments were used for the high-precision determination of residue limits. A total of 100 pesticide active ingredients were screened by LC-MS/MS in the Nigella orientalis L. seed samples, and a further 103 active ingredients were analyzed by GC-MS for the same purpose. The study was conducted in 2012 and 2013, and no residues were found at detectable levels in either year.

Keywords: pesticide, residue, black cumin, Nigella orientalis L.

Procedia PDF Downloads 310
697 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents the results of analyzing tweets sent by Twitter users about the Russia-Ukraine war using bigram and trigram methods. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people, especially those living in Russia and Ukraine, reacted to the war, protested, and expressed their deep concern, as they felt the safety of their families and their futures were at stake. The most popular way to express these views is through social media, and many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets, accumulated in data sources, were extracted through the Twitter API using various scripts and analyzed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. Tweets posted with the #ukraine, #russia, #war, #putin, and #zelensky hashtags together were captured as raw data, and the tweets remaining after the preprocessing (cleaning) stage were included in the analysis. In the data analysis part, sentiments were extracted to characterize the messages people send about the war on Twitter; negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups were found. The most frequently used bigrams are "he, is", "I, do", and "I, am", and the most frequently used trigrams are "I, do, not", "I, am, not", and "I, can, not". In the machine learning phase, the accuracy of classification was measured with the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the highest accuracy and F-measure values were obtained with the NB algorithm and the highest precision and recall values with the CART algorithm. For trigrams, the highest accuracy, precision, and F-measure values were achieved by the CART algorithm, and the highest recall by NB.
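As an illustration of the n-gram step described above, the following Python sketch counts the most frequent bigrams and trigrams in a list of already-preprocessed tweets; the toy corpus and the plain whitespace tokenization are assumptions, not the study's actual pipeline.

```python
from collections import Counter

def top_ngrams(tweets, n, k=10):
    """Count the k most frequent n-grams (word sequences) across a list of cleaned tweets."""
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts.most_common(k)

# Toy corpus standing in for the preprocessed tweet data set
tweets = ["i do not want war", "i am not in ukraine", "he is responsible for the war"]
print(top_ngrams(tweets, 2))  # most frequent bigrams
print(top_ngrams(tweets, 3))  # most frequent trigrams
```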

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 52
696 Relevancy Measures of Errors in Displacements of Finite Elements Analysis Results

Authors: A. B. Bolkhir, A. Elshafie, T. K. Yousif

Abstract:

This paper highlights methods of error estimation for finite element analysis (FEA) results. It indicates that the discretization error can be reduced by performing finite element analysis with successively finer meshes or by extrapolating response predictions from an orderly sequence of relatively low-degree-of-freedom analysis results, and that the round-off error can be reduced by running the code at higher precision. The methods are applied to finite element analysis results, and conclusions are drawn from the results of these applications.
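A minimal sketch of the extrapolation idea, assuming displacement results from two successively refined meshes, a known convergence order p, and a refinement ratio of 2 (the paper's exact procedure and numbers are not reproduced):

```python
def richardson_extrapolate(u_h, u_2h, p=2, r=2.0):
    """
    Estimate the exact response from two FEA displacement predictions:
    u_h on the fine mesh (size h) and u_2h on the coarse mesh (size r*h),
    assuming monotonic convergence of order p.
    """
    error_h = (u_h - u_2h) / (r**p - 1.0)   # estimated discretization error of u_h
    u_exact = u_h + error_h                 # extrapolated (improved) prediction
    return u_exact, error_h

# Hypothetical tip displacements (mm) from a coarse and a fine mesh
u_extrap, err = richardson_extrapolate(u_h=1.245, u_2h=1.210)
print(f"extrapolated: {u_extrap:.4f} mm, estimated error in fine-mesh result: {err:.4f} mm")
```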

Keywords: finite element analysis (FEA), discretization error, round-off error, mesh refinement, Richardson extrapolation, monotonic convergence

Procedia PDF Downloads 456
695 Marine Propeller Cavitation Analysis Using BEM

Authors: Ehsan Yari

Abstract:

In this paper, a numerical study of sheet cavitation has been performed on the DTMB4119 and E779A marine propellers with the boundary element method. In propeller design, various geometric and fluid parameters are involved, so a program is needed that solves the flow while taking all of these changing parameters into account. The characteristic features of the present method are its ability to analyze the wetted and cavitating flow around propellers in steady, unsteady, uniform, and non-uniform conditions with acceptable precision, while decreasing computational time compared to finite volume methods. Moreover, modifying the position of the detachment point and its corresponding potential value has been considered. Numerical results have been validated against experimental data, showing good agreement.

Keywords: cavitation, BEM, DTMB4119, E779A

Procedia PDF Downloads 20
694 A Compressor Map Optimizing Tool for Prediction of Compressor Off-Design Performance

Authors: Zhongzhi Hu, Jie Shen, Jiqiang Wang

Abstract:

A high-precision aeroengine model is needed when developing the engine control system. Compared with the other main components, the axial compressor is the most challenging component to simulate. In this paper, a compressor map optimizing tool based on the introduction of a modifiable β function is developed for FWorks (FADEC Works). Three parameters (d, the density; f, the fitting coefficient; k₀, the slope of the line β=0) are introduced to make the β function modifiable. The traditional β function and the modifiable β function are compared for a certain type of compressor. The interpolation errors show that both methods meet the modeling requirements, while the modifiable β function predicts compressor performance more accurately in the areas of the compressor map that are of interest to users.

Keywords: beta function, compressor map, interpolation error, map optimization tool

Procedia PDF Downloads 234
693 Reduced Complexity Iterative Solution For I/Q Imbalance Problem in DVB-T2 Systems

Authors: Karim S. Hassan, Hisham M. Hamed, Yassmine A. Fahmy, Ahmed F. Shalash

Abstract:

The mismatch between in-phase and quadrature signals in Orthogonal frequency division multiplexing (OFDM) systems, such as DVB-T2, results in a severe degradation in performance. Several general solutions have been proposed in the past, but these are largely computationally intensive, leading to complex implementations. In this paper, we propose a relatively simple iterative solution, which provides good results in relatively few iterations, using fixed precision arithmetic. An additional advantage is that complex digital blocks, such as dividers and square root, are not required. Thus, the proposed solution may be implemented in relatively simple hardware.

Keywords: OFDM, DVB-T2, I/Q imbalance, I/Q mismatch, iterative method, fixed point, reduced complexity

Procedia PDF Downloads 507
692 Stabilized Earth Roads Construction and Its Challenges

Authors: Mokhtar Nikgoo

Abstract:

Road definition and road construction: in the engineering literature, a road is defined as a means of communication between two different places by air, land, or sea; in this sense, all sea, land, and air routes are considered roads. Road construction is the operation of creating a road on the ground between two points with a specified width, and it includes works such as subgrade preparation, paving, and the placement of tables and traffic signs along the road. In this article, the stages of road construction are explained from beginning to end. Road construction is generally carried out for rural, urban, and inter-city roads, and given the special conditions of this field, the precision of engineers in design and calculation is very important. For example, if the design of a road does not pay enough attention to how the road curves, there will undoubtedly be countless accidents. Likewise, leveling the road surface and ensuring its durability and uniformity are among the problems that engineers must solve as obstacles arise.

Keywords: road construction, surveying, freeway, pavement, excavator

Procedia PDF Downloads 45
691 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning

Authors: Yangzhi Li

Abstract:

Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to autonomous robot recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not yet been fully explored in intelligent construction, and relevant transdisciplinary theory and practice still show specific gaps. Optimizing high-performance computing and autonomous visual guidance technologies improves the robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots faces a serious camera calibration issue, and the use of intelligent visual guidance and identification technologies in industrial production is subject to strict accuracy requirements; visual recognition systems therefore face precision challenges that directly impact the effectiveness and quality of industrial production, which calls for strengthening research on positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts using machine learning algorithms is proposed. This study identifies the position of target components by detecting the information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to a common coordinate system based on their locations and postures. Inclination detection on the RGB image and verification on the depth image are used to determine a component's current posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed from the point cloud information.
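As a rough illustration of the point-cloud step, the following Python/NumPy sketch estimates a component's centroid, principal axes, and aspect ratio from its point cloud via an oriented bounding box; the function name and the synthetic cloud are illustrative, and the paper's boundary/corner detection is not reproduced here.

```python
import numpy as np

def component_pose_and_aspect(points):
    """
    Minimal sketch: estimate a component's position, principal axes and
    aspect ratio from its (N, 3) point cloud via PCA (oriented bounding box).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Principal axes of the point cloud
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    extents = np.ptp(centered @ vt.T, axis=0)      # side lengths along each principal axis
    aspect_ratio = extents[0] / max(extents[1], 1e-9)
    return centroid, vt, aspect_ratio

# Toy cloud standing in for a scanned building component (0.6 m x 0.1 m x 0.05 m slab)
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 0], [0.6, 0.1, 0.05], size=(2000, 3))
pos, axes, ar = component_pose_and_aspect(cloud)
print(pos, ar)
```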

Keywords: robotic construction, robotic assembly, visual guidance, machine learning

Procedia PDF Downloads 54
690 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of human life, we are no longer able to detect the toxins produced in food products using traditional techniques: the isolation time for food products is not cost-effective, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the media used. Hence, with the advent of nanotechnology, the design of selective, smart sensors is one of the great industrial advances in the quality control of food products; in a few minutes and with very high precision, such sensors can identify the amount and toxicity of bacteria. Methods and Materials: In this technique, a sensor based on attaching the bacterial antibody to a nanoparticle was used. As the absorption basis for recognizing the bacterial toxin, medium-sized silica nanoparticles of 10 nm (Notrino brand), in the form of a solid powder, were utilized. The suspension produced from the agent-linked nanosilica, conjugated to the bacterial antibody, was then placed in samples of distilled water contaminated with Staphylococcus aureus toxin at a density of 10⁻³, so that, if any toxin existed in the sample, a bond between the toxin antigen and the antibody would form. Finally, the light absorption associated with the binding of the antigen to the particle-attached antibody was measured by spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using a serial dilution (10⁻⁶) of an overnight cell culture of Staphylococcus spp. bacteria (OD600: 0.02 = 10⁷ cells); it showed that the sensitivity of PCR is 10 bacteria per ml of cells within a few hours. Result: The results indicate that the sensor detects down to a density of 10⁻⁴. Additionally, the sensitivity of the sensor was examined after 60 days; it gave confirmatory results up to day 56 and began to decrease thereafter. Conclusions: Compared with conventional methods such as culture and biotechnological methods (for example, the polymerase chain reaction), the practical nanobiosensor offers accuracy, sensitivity, and specificity, and it reduces the detection time from hours to about 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 361
689 On the Analysis of Pseudorandom Partial Quotient Sequences Generated from Continued Fractions

Authors: T. Padma, Jayashree S. Pillai

Abstract:

Random entities are an essential component in any cryptographic application. This paper analyzes the suitability, in cryptographic applications, of a novel number-theory-based pseudorandom sequence called the Pseudorandom Partial Quotient Sequence (PPQS), generated from the continued fraction expansion of irrational numbers. An approach that builds the algorithm around a hard mathematical problem has been considered. The sequence is tested for randomness, and its suitability as a cryptographic key is established by performing randomness analysis, key sensitivity and key space analysis, precision analysis, and an evaluation of its correlation properties.
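For illustration, the partial quotients underlying such a sequence can be generated as in the following Python sketch using the mpmath arbitrary-precision library; the post-processing that turns the quotients into the actual PPQS key stream is not reproduced here.

```python
from mpmath import mp, floor, sqrt

mp.dps = 200   # high working precision so the early quotients are exact

def partial_quotients(x, count):
    """First `count` partial quotients of the continued fraction expansion of x."""
    quotients = []
    for _ in range(count):
        a = int(floor(x))
        quotients.append(a)
        frac = x - a
        if frac == 0:          # x was rational to working precision
            break
        x = 1 / frac
    return quotients

# sqrt(2) = [1; 2, 2, 2, ...] -- the quotient sequence is the raw material for a PPQS
print(partial_quotients(sqrt(2), 20))
```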

Keywords: pseudorandom sequences, key sensitivity, correlation, security analysis, randomness analysis, sensitivity analysis

Procedia PDF Downloads 550
688 Phenotype and Psychometric Characterization of Phelan-McDermid Syndrome Patients

Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa

Abstract:

Background: Phelan-McDermid syndrome (PMS) is a genetic disorder caused by deletion of the terminal region of chromosome 22 or mutation of the SHANK3 gene. Shank3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine Specific Demethylase 1 (LSD1) inhibitors. PMS presents with a variable degree of intellectual disability, delayed or absent speech, autism spectrum disorder symptoms, low muscle tone, motor delays, and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and with data supporting the restoration of memory and cognition defects as well as the reduction of agitation and aggression in several animal models and clinical studies. Vafidemstat therefore has the potential to become a first-in-class precision medicine approach to treat PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision psychiatry clinical trials with vafidemstat. Methodology: This study is characterizing the clinical profile of 20 to 40 subjects, older than 16 years, with a genotypically confirmed PMS diagnosis. Subjects complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Escala de Observación para el Diagnóstico del Autismo (Autism Diagnostic Observation Schedule, ADOS-2), the Battelle Developmental Inventory, and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies three groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by low Battelle age, high ADOS, and low Vineland, RBQ, and BPI scores. Low Vineland, RBQ, and BPI scores are also detected in the second cluster, which in contrast has high Battelle age and low ADOS scores. The third cluster is intermediate for the Battelle, Vineland, and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behavior (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster show a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward more repetitive behavior. Finally, statistically significant differences in deletion size are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS scores and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI and RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat central nervous system disorders. Epigenetic dysregulation has been proposed as an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data on the optimal endpoints for a future clinical study exploring vafidemstat's ability to treat SHANK3-associated psychiatric disorders.

Keywords: autism, epigenetics, LSD1, personalized medicine

Procedia PDF Downloads 136
687 Characterizing and Developing the Clinical Grade Microbiome Assay with a Robust Bioinformatics Pipeline for Supporting Precision Medicine Driven Clinical Development

Authors: Danyi Wang, Andrew Schriefer, Dennis O'Rourke, Brajendra Kumar, Yang Liu, Fei Zhong, Juergen Scheuenpflug, Zheng Feng

Abstract:

Purpose: It has been recognized that the microbiome plays critical roles in disease pathogenesis, including cancer, autoimmune disease, and multiple sclerosis. To develop a clinical-grade assay for exploring microbiome-derived clinical biomarkers across disease areas, a two-phase approach was implemented: 1) identification of the optimal sample preparation reagents using pre-mixed bacteria and healthy donor stool samples coupled with the proprietary Sigma-Aldrich® bioinformatics solution, and 2) exploratory analysis of patient samples for enabling precision medicine. Study Procedure: In the phase 1 study, we first compared the 16S sequencing results of two ATCC® microbiome standards (MSA 2002 and MSA 2003) across five different extraction kits (kits A, B, C, D, and E). Both microbiome standards were extracted in triplicate with all extraction kits. Following isolation, DNA quantity was determined by Qubit assay, and DNA quality was assessed to determine purity and to confirm that the extracted DNA is of high molecular weight. Bacterial 16S ribosomal ribonucleic acid (rRNA) amplicons were generated by amplification of the V3/V4 hypervariable region of the 16S rRNA. Sequencing was performed using a 2x300 bp paired-end configuration on the Illumina MiSeq. Fastq files were analyzed using the Sigma-Aldrich® Microbiome Platform, a cloud-based service that offers best-in-class 16S-seq and WGS analysis pipelines and databases. The Platform and its methods have been extensively benchmarked using microbiome standards generated internally by MilliporeSigma and by external providers. Data Summary: The DNA yield using extraction kits D and E was below the limit of detection (100 pg/µl) of the Qubit assay, as both kits are intended for samples with low bacterial counts; the pre-mixed bacterial pellets at high concentrations, with an input of 2 × 10⁶ cells for MSA-2002 and 1 × 10⁶ cells for MSA-2003, were not compatible with these kits. Among the remaining three extraction kits, kit A produced the greatest yield, whereas kit B provided the least (kit A/MSA-2002: 174.25 ± 34.98; kit A/MSA-2003: 179.89 ± 30.18; kit B/MSA-2002: 27.86 ± 9.35; kit B/MSA-2003: 23.14 ± 6.39; kit C/MSA-2002: 55.19 ± 10.18; kit C/MSA-2003: 35.80 ± 11.41 (mean ± SD)). The PCoA 3D visualization of the Weighted UniFrac beta diversity shows that kits A and C cluster closely together, while kit B appears as an outlier; the kit A sequencing samples cluster more tightly than those of the other kits. The taxonomic profiles of kit B have lower recall when compared to the known mixture profiles, indicating that kit B was inefficient at detecting some of the bacteria. Conclusion: Our data demonstrate that the DNA extraction method impacts the DNA concentration, purity, and microbial communities detected by next-generation sequencing analysis. Further comparison of microbiome analysis performance using healthy stool samples is underway, and colorectal cancer patient samples will be acquired to further explore the clinical utility. Collectively, our comprehensive qualification approach, including the evaluation of optimal DNA extraction conditions, the inclusion of positive controls, and the implementation of a robust, qualified bioinformatics pipeline, assures accurate characterization of the microbiota in a complex matrix for deciphering the underlying biology and enabling precision medicine.
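For readers unfamiliar with the ordination step, the following Python/NumPy sketch shows classical principal coordinates analysis (PCoA) applied to a precomputed beta-diversity distance matrix; the toy matrix stands in for the Weighted UniFrac distances produced by the proprietary platform, which is not reproduced here.

```python
import numpy as np

def pcoa(distance_matrix, n_axes=3):
    """
    Classical principal coordinates analysis of an (n, n) beta-diversity
    distance matrix (e.g., Weighted UniFrac), returning sample coordinates.
    """
    d = np.asarray(distance_matrix, dtype=float)
    n = d.shape[0]
    # Gower double-centering of the squared distances
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    coords = eigvecs[:, :n_axes] * np.sqrt(np.clip(eigvals[:n_axes], 0, None))
    return coords

# Toy 4-sample distance matrix standing in for kit replicates
d = np.array([[0.0, 0.1, 0.6, 0.6],
              [0.1, 0.0, 0.6, 0.6],
              [0.6, 0.6, 0.0, 0.1],
              [0.6, 0.6, 0.1, 0.0]])
print(pcoa(d))
```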

Keywords: 16S rRNA sequencing, analytical validation, bioinformatics pipeline, metagenomics

Procedia PDF Downloads 129
686 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools

Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus

Abstract:

Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding, by which machine tools can achieve a higher degree of flexibility and a shorter production time. Still, there are challenges to be accounted for in maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool and its components in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g., laser power, angle of the laser beam, reflection coefficients, and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. The two constituent aspects of the method are characterizing the thermal behavior of the structural components and predicting the laser beam path, in order to determine the relevant beam intensity on the structural components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. Concluding, it is shown that the method, together with a good understanding of its two core aspects (the thermo-elastic machine behavior and the laser beam path) and their combination, helps designers minimize the loss of precision in the early stages of the design phase.

Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects

Procedia PDF Downloads 241
685 Optimize Data Evaluation Metrics for Fraud Detection Using Machine Learning

Authors: Jennifer Leach, Umashanger Thayasivam

Abstract:

The use of technology has benefited society in more ways than one ever thought possible. Unfortunately, as society's knowledge of technology has advanced, so has its knowledge of ways to use technology to manipulate people, which has led to a simultaneous advancement in the world of fraud. Machine learning techniques can offer a possible solution to help slow this advancement. This research explores how various machine learning techniques can aid in detecting fraudulent activity across two different types of fraud data; the accuracy, precision, recall, and F1 score were recorded for each method. Each machine learning model was also tested across five different training and testing splits in order to discover which testing split and technique lead to the best results.
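A minimal sketch of the evaluation loop, assuming a synthetic imbalanced dataset and a random forest classifier (the abstract does not name its models here, so both are stand-ins):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for one of the fraud datasets (fraud = minority class)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05], random_state=0)

# Evaluate one model across several training/testing splits
for test_size in (0.1, 0.2, 0.3, 0.4, 0.5):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size, stratify=y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"test={test_size:.0%}  acc={accuracy_score(y_te, pred):.3f}  "
          f"prec={precision_score(y_te, pred):.3f}  rec={recall_score(y_te, pred):.3f}  "
          f"f1={f1_score(y_te, pred):.3f}")
```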

Keywords: data science, fraud detection, machine learning, supervised learning

Procedia PDF Downloads 157
684 The Modification of Convolutional Neural Network in Fin Whale Identification

Authors: Jiahao Cui

Abstract:

In the past centuries, due to climate change and intense whaling, the global whale population has dramatically declined. Among the various whale species, the fin whale experienced the most drastic drop in numbers due to its popularity in whaling. Against this background, identifying fin whale calls could be immensely beneficial to the preservation of the species. This paper uses feature extraction to process the input audio signal; then a network based on AlexNet and three networks based on the ResNet model were constructed to classify fin whale calls. A mixture of the DOSITS database and the Watkins database was used during training. The results demonstrate that a modified ResNet network has the best performance when both precision and network complexity are considered.
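The kind of modification described, adapting a standard ResNet to two-class call spectrograms, can be sketched in Python/PyTorch as follows; the single-channel input, the ResNet-18 backbone, and the spectrogram size are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal sketch: adapt an ImageNet ResNet-18 to classify spectrogram patches
# as "fin whale call" vs. "other" (2 classes).
model = models.resnet18(weights=None)                     # train from scratch on call spectrograms
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,   # 1-channel spectrogram input
                        padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)             # binary output head

# Dummy batch of 8 single-channel spectrograms (e.g., 128x128 time-frequency bins)
spectrograms = torch.randn(8, 1, 128, 128)
logits = model(spectrograms)
print(logits.shape)   # torch.Size([8, 2])
```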

Keywords: convolutional neural network, ResNet, AlexNet, fin whale preservation, feature extraction

Procedia PDF Downloads 88
683 Opto-Mechanical Characterization of Aspheric Lenses from the Hybrid Method

Authors: Aliouane Toufik, Hamdi Amine, Bouzid Djamel

Abstract:

Aspheric optical components are an alternative to conventional lenses in the implementation of imaging systems for the visible range. Spherical lenses produce aberrations and are therefore unable to focus all of the light into a single point. Aspherical lenses, in contrast, correct aberrations and provide better resolution even in compact designs incorporating a small number of lenses. Metrology of these components is very difficult, especially as resolution requirements increase, and the insufficiency or complexity of conventional tools requires the development of specific characterization approaches. This work addresses part of that problem: its objectives are the study and comparison of the different methods used to measure the surface radii of hybrid aspherical lenses.

Keywords: manufacture of lenses, aspherical surface, precision molding, radius of curvature, roughness

Procedia PDF Downloads 442
682 Design to Cryogenic System for Dilution Refrigerator with Cavity and Superconducting Magnet

Authors: Ki Woong Lee

Abstract:

The Center for Axion and Precision Physics Research is searching for dark matter using 12 tesla superconducting magnets. A dilution refrigerator is used for the search experiments, together with superconducting magnets and superconducting cavities. The dilution refrigerator requires a stable cryogenic environment based on liquid helium, so a cryogenic system for a stable supply of liquid helium is to be established. This cryogenic system includes the liquefaction, supply, storage, and purification of liquid helium. This article presents the basic design, construction, and operation plans for building such a cryogenic system.

Keywords: cryogenic system, dilution refrigerator, superconducting magnet, helium recovery system

Procedia PDF Downloads 94
681 Study the Effect of Friction on Barreling Behavior during Upsetting Process Using Anand Model

Authors: H. Mohammadi Majd, M. Jalali Azizpour, V. Tavaf, A. Jaderi

Abstract:

In upsetting processes, contact friction significantly influences the metal flow, the stress-strain state, and the process parameters. Furthermore, tribological conditions influence workpiece deformation and its dimensional precision. A viscoplastic constitutive law, the Anand model, was applied to represent the inelastic deformation behavior in the upsetting process. This paper presents research results on the influence of the contact friction coefficient on workpiece deformation in upsetting, based on finite element parameters. The technique was tested in simulations of the upsetting of three different specimens and the corresponding materials, and it can be successfully employed to predict the deformation in the upsetting process.

Keywords: friction, upsetting, barreling, Anand model

Procedia PDF Downloads 305
680 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang

Abstract:

Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To solve the ambiguity of symmetric pose, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
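One way to turn three predicted points into a 6D pose is the Kabsch/SVD rigid alignment between the predicted points and the same three reference points on the object model, as in the following Python sketch; this is a generic formulation, not necessarily the exact regression and optimization used in the paper.

```python
import numpy as np

def pose_from_three_points(model_pts, predicted_pts):
    """
    Recover rotation R and translation t mapping three reference points on the
    object model to the three points predicted in the scene (Kabsch algorithm).
    """
    cm, cp = model_pts.mean(axis=0), predicted_pts.mean(axis=0)
    h = (model_pts - cm).T @ (predicted_pts - cp)     # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cp - r @ cm
    return r, t

# Toy check: rotate/translate three model points and recover the pose
model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.05, 0.0]])
angle = np.deg2rad(30)
r_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
scene = model @ r_true.T + np.array([0.5, 0.2, 0.3])
r_est, t_est = pose_from_three_points(model, scene)
print(np.allclose(r_est, r_true), t_est)
```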

Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision

Procedia PDF Downloads 133
679 Classification of Red, Green and Blue Values from Face Images Using k-NN Classifier to Predict the Skin or Non-Skin

Authors: Kemal Polat

Abstract:

In this study, it is estimated whether skin is present by using the RGB values obtained from the camera and a k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure; this problem can also be called a big data problem. The Skin dataset was taken from the UCI machine learning repository. As the classifier, we used the k-NN method to handle this big data problem, with the k value set to 1. To train and test the k-NN classifier, a 50-50% training-testing partition was used. As the performance metrics, the TP rate, FP rate, precision, recall, F-measure, and AUC values were used to evaluate the performance of the k-NN classifier. The obtained results are as follows: 0.999, 0.001, 0.999, 0.999, 0.999, and 1.00. As can be seen from these results, the proposed method can be used to predict whether the image is skin or not.
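A minimal Python/scikit-learn sketch of the described setup (1-nearest-neighbour on RGB values, 50-50 split); the local file name and the label remapping are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, roc_auc_score

# The UCI "Skin Segmentation" set has four columns: B, G, R, label (1 = skin, 2 = non-skin).
data = np.loadtxt("Skin_NonSkin.txt")                     # assumed local copy of the dataset
X = data[:, :3]
y = (data[:, 3].astype(int) == 1).astype(int)             # remap: 1 = skin, 0 = non-skin

# 50-50 training/testing partition and a 1-nearest-neighbour classifier, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
pred = knn.predict(X_te)

print(classification_report(y_te, pred, digits=3))        # precision, recall, F-measure
print("AUC:", roc_auc_score(y_te, knn.predict_proba(X_te)[:, 1]))
```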

Keywords: k-NN classifier, skin or non-skin classification, RGB values, classification

Procedia PDF Downloads 219
678 Facial Emotion Recognition Using Deep Learning

Authors: Ashutosh Mishra, Nikhil Goyal

Abstract:

A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture, with pooling applied after the convolution operations. The probabilities for the various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of the deep learning-based system, a set of face images from the Kaggle dataset is used to evaluate the accuracy of the model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite the significant gains in representation precision afforded by the nonlinearity of deep image representations.
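A minimal sketch of the described architecture in Python/PyTorch: two convolution layers, one pooling layer, and a sigmoid output; the input size (48x48 grayscale) and the seven emotion classes follow the common Kaggle FER format and are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Two convolution layers, one pooling layer, sigmoid per-class probabilities."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # single pooling layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 24 * 24, n_classes),   # 48x48 input halved by pooling
            nn.Sigmoid(),                         # per-class probabilities
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN()
faces = torch.randn(4, 1, 48, 48)                 # batch of grayscale face crops
print(model(faces).shape)                         # torch.Size([4, 7])
```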

Keywords: facial recognition, computational intelligence, convolutional neural network, depth map

Procedia PDF Downloads 196
677 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case, the magnetometer data and the EKF state estimations, and the targets, namely the true position, and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, but at the same time, the training is completed and eventually fine-tuned, on the specific orbit, increasing performances and reliability. The results provided by this study show an increase of one order of magnitude in the precision of state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 65
676 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms, and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt: farm and operator characteristics, as well as the organizational, informational, and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify the 'bottlenecks' of the adoption of agricultural innovation. The decision-making process is a multistage procedure in which an individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment at which an individual learns about the existence of the technology; the 'static' concept of adoption has thus been overcome, and awareness is a precondition for adoption. Accounting for this condition avoids the erroneous evaluations that arise from carrying out the analysis on a population that is only partly aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers, considering awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability of adopting and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire submitted in November 2017. A preliminary descriptive analysis showed that high levels of adoption are found among farmers who are younger, better educated, more intensively informed, with larger farm size and higher labor intensity, and whose perception of the complexity of the adoption process is lower. The use of a logit model makes it possible to appreciate the weight played by labor intensity and by the complexity perceived by the potential adopter in the PA adoption process. All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific for each phase of the adoption process. Specifically, they should increase awareness of PA tools and foster the dissemination of information to reduce the perceived complexity of the adoption process. These implications are particularly important in Europe, where the innovation-oriented reform of the Common Agricultural Policy has been announced. In this context, measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.
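A minimal sketch of such a logit specification in Python with statsmodels, fit on the sub-sample of farmers already aware of PA; the file name and variable names are illustrative, not the questionnaire's actual items.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: only farmers who are already aware of PA tools,
# with illustrative variable names.
df = pd.read_csv("pa_survey_aware_farmers.csv")   # assumed file

# Logit model: probability of adopting PA given farm/operator characteristics
model = smf.logit(
    "adopted ~ age + education_years + info_intensity + farm_size_ha"
    " + labour_intensity + perceived_complexity",
    data=df,
).fit()

print(model.summary())                 # coefficient signs and weights, e.g. labour intensity
print(model.get_margeff().summary())   # average marginal effects on adoption probability
```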

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 109
675 Application of Pattern Recognition Technique to the Quality Characterization of Superficial Microstructures in Steel Coatings

Authors: H. Gonzalez-Rivera, J. L. Palmeros-Torres

Abstract:

This paper describes the application of traditional computer vision techniques as a procedure for the automatic measurement of the secondary dendrite arm spacing (SDAS) from microscopic images. The algorithm is capable of finding the linear or curve-shaped secondary column of the main microstructure, measuring its length in micrometers, and counting the number of spaces between dendrites. The automatic characterization was compared with a set of 1728 manually characterized images, yielding an accuracy of −0.27 µm for the length determination and a precision of ±2.78 counts for dendrite spacing counting, while also reducing the characterization time from 7 hours to 2 minutes.
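One common building block for this kind of measurement is counting arm-to-arm peaks along an intensity profile sampled across the secondary arms, as in the following Python/SciPy sketch; the full algorithm in the paper (detecting the secondary column itself) is more involved and is not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks

def sdas_from_profile(intensity_profile, pixel_size_um, min_separation_px=5):
    """
    Estimate secondary dendrite arm spacing (SDAS) from an intensity profile
    sampled along a line crossing the secondary arms: count the peaks and
    divide the spanned length by the number of spaces between them.
    """
    peaks, _ = find_peaks(intensity_profile, distance=min_separation_px)
    if len(peaks) < 2:
        return None, len(peaks)
    length_um = (peaks[-1] - peaks[0]) * pixel_size_um
    n_spaces = len(peaks) - 1
    return length_um / n_spaces, n_spaces

# Synthetic profile standing in for a grayscale scan across ~10 dendrite arms
x = np.arange(300)
profile = 128 + 100 * np.sin(2 * np.pi * x / 30)          # arms roughly every 30 px
sdas, spaces = sdas_from_profile(profile, pixel_size_um=0.5)
print(f"SDAS ~ {sdas:.2f} um over {spaces} spaces")
```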

Keywords: dendrite arm spacing, microstructure inspection, pattern recognition, polynomial regression

Procedia PDF Downloads 13
674 Conception of a Reliable Low Cost, Autonomous Explorative Hovercraft 1

Authors: A. Brand, S. Burgalat, E. Chastel, M. Jumeline, L. Teilhac

Abstract:

The paper presents the actual benefits and drawbacks of a multidirectional hovercraft conceived with limited resources and designed for indoor exploration. Recent developments in the field have led to the appearance of very powerful autonomous systems capable of heavy computation and of exploring complex unknown environments. They usually rely on very complex algorithms and high-precision, high-cost sensors, and sometimes have heavy computational demands with complex data fusion. Those systems are usually powerful, but they come at a certain price, and the benefits may not be worth the cost, especially considering their hardware limitations and their power consumption. The present approach is to strike a compromise between cost, power consumption, and precision of the results.

Keywords: Hovercraft, indoor exploration, autonomous, multidirectional, wireless control

Procedia PDF Downloads 392
673 A New Method Presentation for Locating Fault in Power Distribution Feeders Considering DG

Authors: Rahman Dashti, Ehsan Gord

Abstract:

In this paper, an improved impedance-based fault location method is proposed. In this method, online fault location is performed using the voltage and current information measured at the beginning of the feeder. Determining the precise fault location in a short time increases the reliability and efficiency of the system. The proposed method utilizes the main component of the voltage and current at the beginning of the feeder, together with information from the distributed generation unit (DGU), in order to precisely locate different faults in an acceptable time. To evaluate the precision and accuracy of the proposed method, a 13-node feeder is simulated and tested using MATLAB.
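For context, the basic impedance (reactance) principle that such methods build on can be sketched in a few lines of Python; the paper's improvement, which incorporates DG-unit information, is not reproduced, and the phasor values below are hypothetical.

```python
import numpy as np

def simple_reactance_fault_distance(v_phasor, i_phasor, x_line_ohm_per_km):
    """
    Basic impedance-based estimate: distance to the fault from the apparent
    reactance seen at the beginning of the feeder. The improved method in the
    paper additionally uses DG-unit information; that refinement is omitted.
    """
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / x_line_ohm_per_km

# Hypothetical fundamental phasors recorded at the feeder head during a fault
v = 6350 * np.exp(1j * np.deg2rad(0.0))      # phase voltage, volts
i = 2000 * np.exp(1j * np.deg2rad(-70.0))    # fault current, amperes
print(f"estimated fault distance: {simple_reactance_fault_distance(v, i, 0.35):.2f} km")
```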

Keywords: distribution network, fault section determination, distributed generation units, distribution protection equipment

Procedia PDF Downloads 361
672 A Monitoring System to Detect Vegetation Growth along the Route of Power Overhead Lines

Authors: Eugene Eduful

Abstract:

This paper introduces an approach that utilizes a Wireless Sensor Network (WSN) to detect vegetation encroachment between segments of distribution lines. The WSN was designed and implemented, involving the seamless integration of Arduino Uno and Mega systems. This integration demonstrates a method for addressing the challenges posed by vegetation interference. The primary aim of the study is to improve the reliability of power supply in areas characterized by forested terrain, specifically targeting overhead powerlines. The experimental results validate the effectiveness of the proposed system, revealing its ability to accurately identify and locate instances of vegetation encroachment with a remarkably high degree of precision.

Keywords: wireless sensor network, vegetation encroachment, line of sight, Arduino Uno, XBEE

Procedia PDF Downloads 38
671 Impact Analysis Based on Change Requirement Traceability in Object Oriented Software Systems

Authors: Sunil Tumkur Dakshinamurthy, Mamootil Zachariah Kurian

Abstract:

Change requirement traceability in object-oriented software systems is one of the challenging areas in research. The trace links between different artifacts are to be automated or semi-automated across the software development life cycle (SDLC). The aim of this paper is to discuss and implement aspects of dynamically linking artifacts such as requirements, high-level design, code, and test cases through the Extensible Markup Language (XML) or by dynamically generating object-oriented (OO) metrics. Non-functional requirement (NFR) aspects such as stability, completeness, clarity, validity, feasibility, and precision are also discussed; we treat these as a Fifth Taxonomy, which is a system vulnerability concern.
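A minimal Python sketch of XML-based trace links between artifacts, used here for a toy impact-analysis query; the element and attribute names are illustrative, not the schema proposed in the paper.

```python
import xml.etree.ElementTree as ET

# Illustrative trace-link document: element and attribute names are assumptions.
TRACE_XML = """
<traceability>
  <requirement id="REQ-12" text="System shall lock an account after 3 failed logins">
    <design ref="HLD-4.2"/>
    <code ref="auth/LoginGuard.java"/>
    <test ref="TC-087"/>
  </requirement>
</traceability>
"""

root = ET.fromstring(TRACE_XML)
# Impact analysis of a change request on REQ-12: walk the links to every affected artifact
for req in root.iter("requirement"):
    if req.get("id") == "REQ-12":
        affected = [(child.tag, child.get("ref")) for child in req]
        print("Artifacts impacted by changing", req.get("id"), "->", affected)
```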

Keywords: artifacts, NFRs, OO metrics, SDLC, XML

Procedia PDF Downloads 309
670 Rectenna Modeling Based on MoM-GEC Method for RF Energy Harvesting

Authors: Soulayma Smirani, Mourad Aidi, Taoufik Aguili

Abstract:

Energy harvesting has arisen as a prominent research area for low-power delivery to RF devices, and rectennas have become a key element in this technology. In this paper, the electromagnetic modeling of a rectenna system is presented. In our approach, a hybrid technique is demonstrated that associates the method of auxiliary sources (MAS) with MoM-GEC (the method of moments combined with the generalized equivalent circuit technique). Auxiliary sources are used to substitute for specific electronic devices, so that a simple and controllable model is obtained that can also easily be interconnected to form different topologies of rectenna arrays for greater energy harvesting. Finally, simulation results show the feasibility and simplicity of the proposed rectenna model, with high precision and computational efficiency.

Keywords: computational electromagnetics, MoM-GEC method, rectennas, RF energy harvesting

Procedia PDF Downloads 137