Search results for: pointing accuracy
1487 Speeding up Nonlinear Time History Analysis of Base-Isolated Structures Using a Nonlinear Exponential Model
Authors: Nicolò Vaiana, Giorgio Serino
Abstract:
The nonlinear time history analysis of seismically base-isolated structures can require a significant computational effort when the behavior of each seismic isolator is predicted by adopting the widely used differential-equation Bouc-Wen model. In this paper, a nonlinear exponential model, able to simulate the response of seismic isolation bearings within a relatively large displacement range, is described and adopted in order to reduce the numerical computations and speed up the nonlinear dynamic analysis. Unlike the Bouc-Wen model, the proposed one does not require the numerical solution of a nonlinear differential equation at each time step of the analysis. The seismic response of a 3D base-isolated structure with a lead rubber bearing system subjected to harmonic earthquake excitation is simulated by modeling each isolator using the proposed analytical model. The comparison of the numerical results and computational time with those obtained by modeling the lead rubber bearings using the Bouc-Wen model demonstrates the good accuracy of the proposed model and its capability to significantly reduce the computational effort of the analysis.
Keywords: base isolation, computational efficiency, nonlinear exponential model, nonlinear time history analysis
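For context on the baseline cost the paper removes: the Bouc-Wen model requires a nonlinear ODE for the hysteretic variable z to be integrated at every time step. The sketch below does exactly that for an imposed displacement history, using an explicit two-stage step and purely illustrative parameter values (k, alpha, A, beta, gamma, n are assumptions, not the paper's); the proposed exponential model replaces this per-step integration with a closed-form evaluation.

```python
import math

def bouc_wen_force(disp, dt, k=1.0e6, alpha=0.1, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Restoring-force history of a Bouc-Wen isolator model.

    At every time step the hysteretic variable z is advanced by numerically
    integrating dz/dt = v * (A - |z|^n * (gamma + beta * sgn(v*z))) with a
    two-stage explicit step -- the per-step cost a closed-form model avoids.
    All parameter values are illustrative, not taken from the paper.
    """
    z, forces = 0.0, []
    for i in range(1, len(disp)):
        v = (disp[i] - disp[i - 1]) / dt  # velocity by finite difference

        def dz(zv):
            s = math.copysign(1.0, v * zv if zv else v)
            return v * (A - abs(zv) ** n * (gamma + beta * s))

        k1 = dz(z)
        k2 = dz(z + dt * k1)              # Heun-style corrector stage
        z += 0.5 * dt * (k1 + k2)
        # Total restoring force: elastic part plus hysteretic part.
        forces.append(alpha * k * disp[i] + (1 - alpha) * k * z)
    return forces
```

For a harmonic displacement history the (displacement, force) pairs trace the familiar hysteresis loop.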
Procedia PDF Downloads 382
1486 Oxytocin and Sensorimotor Synchronization in Pairs of Strangers
Authors: Yana Gorina, Olga Lopatina, Elina Tsigeman, Larisa Mararitsa
Abstract:
The ability to act in concert with others, so-called sensorimotor synchronisation, is a fundamental human ability that underlies successful interpersonal coordination. The manifestation of accuracy and plasticity in synchronisation is an adaptive aspect of interaction with the environment, as is the ability to predict the upcoming actions and behaviour of others. The ability to temporally coordinate one’s actions with a predictable external event is manifested in such types of social behaviour as a synchronised group dance to music played live by an orchestra, group sports (rowing, swimming, etc.), synchronised actions of surgeons during an operation, applause from an admiring audience, walking rhythms, etc. Both our body and mind are involved in achieving synchronisation during social interactions. However, it has not yet been well described how the brain determines the external rhythm and which neuropeptides coordinate and synchronise actions. Over the past few decades, there has been increased interest among neuroscientists and neurophysiologists in the neuropeptide oxytocin, in the context of its complex, diverse and sometimes polar effects on the emotional and social aspects of behaviour (attachment, trust, empathy, emotion recognition, stress response, anxiety and depression, etc.). Presumably, oxytocin might also be involved in social synchronisation processes. The aim of our study is to test the hypothesis that oxytocin is linked to interpersonal synchronisation in pairs of strangers.
Keywords: behavior, movement, oxytocin, synchronization
Procedia PDF Downloads 60
1485 Total-Reflection X-Ray Spectroscopy as a Tool for Element Screening in Food Samples
Authors: Hagen Stosnach
Abstract:
The analytical demands on modern instruments for element analysis in food samples include the analysis of major, trace and ultra-trace essential elements as well as potentially toxic trace elements. In this study, total-reflection X-ray fluorescence analysis (TXRF) is presented as an analytical technique which meets the requirements defined by the Association of Official Agricultural Chemists (AOAC) regarding the limit of quantification, repeatability, reproducibility and recovery for most of the target elements. The advantages of TXRF are the small sample mass required, the broad linear range from µg/kg up to wt.-% values, no consumption of gases or cooling water, and flexible, easy sample preparation. Liquid samples such as alcoholic or non-alcoholic beverages can be analyzed without any preparation. For solid food samples, the most common pre-treatment methods are mineralization and direct deposition of the sample onto the reflector without or with minimal treatment, mainly as solid suspensions or after extraction. The main disadvantages are possible peak overlaps, which may lower the accuracy of quantitative analysis and limit element identification. This analytical technique is presented through several application examples covering a broad range of liquid and solid food types.
Keywords: essential elements, toxic metals, XRF, spectroscopy
Procedia PDF Downloads 132
1484 B Spline Finite Element Method for Drifted Space Fractional Tempered Diffusion Equation
Authors: Ayan Chakraborty, BV. Rathish Kumar
Abstract:
Of late, many models in viscoelasticity, signal processing or anomalous diffusion have been formulated in fractional calculus. Tempered fractional calculus is a generalization of fractional calculus, and in the last few years several important partial differential equations occurring in different fields of science have been reconsidered in these terms, such as diffusion-wave equations, the Schrödinger equation, and so on. In the present paper, a time-dependent tempered fractional diffusion equation of order $\gamma \in (0,1)$ with a forcing function is considered. Existence, uniqueness, stability, and regularity of the solution have been proved. Crank-Nicolson discretization is used in the time direction, and a B-spline finite element approximation is implemented. In general, B-spline bases are useful for representing the geometry of a finite element model and for interfacing with a finite element analysis program. Utilizing this technique, an a priori space-time estimate in finite element analysis has been derived, and we prove that the convergence order is $\mathcal{O}(h^2 + \Delta t^2)$, where $h$ is the space step size and $\Delta t$ is the time step size. A couple of numerical examples are presented to confirm the accuracy of the theoretical results. Finally, we conclude that the studied method is useful for solving tempered fractional diffusion equations.
Keywords: B-spline finite element, error estimates, Gronwall's lemma, stability, tempered fractional
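As a sketch of the time discretization only (the tempered fractional spatial operator is replaced here by the ordinary Laplacian, so this is not the paper's scheme), the following Crank-Nicolson solver for u_t = u_xx exhibits the same second-order behavior: halving both the mesh width and the time step reduces the error roughly fourfold.

```python
import numpy as np

def crank_nicolson_heat(nx=50, nt=50, t_end=0.1):
    """Crank-Nicolson time stepping for the classical diffusion equation
    u_t = u_xx on (0, 1) with homogeneous Dirichlet boundary data.
    Exact solution used for the error check: u = sin(pi x) exp(-pi^2 t).
    Returns the max-norm error at t_end."""
    h, dt = 1.0 / nx, t_end / nt
    x = np.linspace(0.0, 1.0, nx + 1)
    u = np.sin(np.pi * x)                      # initial condition
    # Tridiagonal second-difference operator on the interior nodes.
    main = -2.0 * np.ones(nx - 1)
    off = np.ones(nx - 2)
    L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    I = np.eye(nx - 1)
    A = I - 0.5 * dt * L                       # implicit half of the trapezoidal rule
    B = I + 0.5 * dt * L                       # explicit half
    for _ in range(nt):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * t_end)
    return float(np.max(np.abs(u - exact)))
```

Refining (nx, nt) from (20, 20) to (40, 40) should shrink the error by a factor close to four, consistent with second-order accuracy in both space and time.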
Procedia PDF Downloads 190
1483 Predictive Analytics of Student Performance Determinants
Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi
Abstract:
Every institute of learning is usually interested in the performance of its enrolled students. The level of these performances determines the approach an institute may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated various supervised machine learning classification algorithms, such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis, using selected features to predict study performance. The accuracy, precision, recall, and F1 score obtained from a 5-fold cross-validation were used to determine the best classification algorithm for predicting students’ performances. SVM (using a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Also, using the LR model, this study identified students' educational habits, such as reading and paying attention in class, as strong determinants of above-average performance. Other important features include the student's academic history and work status. Demographic factors such as age, gender, high school graduation, etc., had no significant effect on a student's performance.
Keywords: student performance, supervised machine learning, classification, cross-validation, prediction
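As a concrete illustration of the evaluation protocol, the sketch below runs a plain 5-fold cross-validation loop and reports accuracy. A toy 1-nearest-neighbour rule on synthetic data stands in for the paper's classifiers (LR, SVM, and so on), so the data, classifier, and fold scheme are all assumptions made for the sake of the example.

```python
import random

def five_fold_accuracy(X, y, k=5):
    """Plain k-fold cross-validation with a 1-nearest-neighbour classifier:
    split the shuffled indices into k folds, hold each fold out in turn,
    predict each held-out instance from its nearest training instance,
    and return the pooled accuracy."""
    idx = list(range(len(X)))
    random.Random(0).shuffle(idx)             # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]
    correct = 0
    for f in range(k):
        test = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        for i in test:
            # Nearest neighbour by squared Euclidean distance.
            nearest = min(train, key=lambda j: sum((a - b) ** 2
                                                   for a, b in zip(X[i], X[j])))
            correct += y[nearest] == y[i]
    return correct / len(X)
```

In the study itself the same loop would be run once per algorithm, and precision, recall, and F1 collected alongside accuracy.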
Procedia PDF Downloads 125
1482 Performance Evaluation of 3D Printed ZrO₂ Ceramic Components by Nanoparticle Jetting™
Authors: Shengping Zhong, Qimin Shi, Yaling Deng, Shoufeng Yang
Abstract:
Additive manufacturing has exerted a tremendous fascination on the development of the manufacturing and materials industry in the past three decades. Zirconia-based advanced ceramics have attracted substantial attention in the interest of structural and functional ceramics. As a novel material jetting process for selectively depositing nanoparticles, NanoParticle Jetting™ (NPJ™) is capable of fabricating dense zirconia components with a high-detail surface, precisely controllable shrinkage, and remarkable mechanical properties. The advent of NPJ™ has raised the state of the art of both the printing process and the printing accuracy. Emphasis is placed on the performance evaluation of NPJ™-printed ceramic components, whose physical, chemical, and mechanical properties are evaluated. The experimental results suggest the Y₂O₃-stabilized ZrO₂ boxes exhibit a high relative density of 99.5%, a glossy surface with a roughness as low as 0.33 µm, a general linear shrinkage factor of 17.47%, outstanding hardness and fracture toughness of 12.43±0.09 GPa and 7.52±0.34 MPa·m¹/², comparable flexural strength of 699±104 MPa, and a dense and homogeneous grain distribution in the microstructure. This innovative NanoParticle Jetting system manifests overwhelming potential in dental, medical, and electronic applications.
Keywords: nanoparticle jetting, ZrO₂ ceramic, materials jetting, performance evaluation
Procedia PDF Downloads 176
1481 Deep Learning Approach to Trademark Design Code Identification
Authors: Girish J. Showkatramani, Arthi M. Krishna, Sashi Nareddi, Naresh Nula, Aaron Pepe, Glen Brown, Greg Gabel, Chris Doninger
Abstract:
Trademark examination and approval is a complex process that involves analysis and review of the design components of the marks, such as the visual representation, as well as the textual data associated with marks, such as the marks' description. Currently, the process of identifying marks with similar visual representation is done manually at the United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts who determine the trademark design codes used to catalog the visual elements of the mark. In this study, we explore several methods to automate trademark design code classification. Based on recent successes of convolutional neural networks (CNNs) in image classification, we have used several different convolutional neural networks such as Google’s Inception v3, Inception-ResNet-v2, and Xception. The study also looks into other techniques to augment the results from CNNs, such as using the Open Source Computer Vision Library (OpenCV) to pre-process the images. This paper reports the results of the various models trained on years of annotated trademark images.
Keywords: trademark design code, convolutional neural networks, trademark image classification, trademark image search, Inception-ResNet-v2
Procedia PDF Downloads 231
1480 Combination Rule for Homonuclear Dipole Dispersion Coefficients
Authors: Giorgio Visentin, Inna S. Kalinina, Alexei A. Buchachenko
Abstract:
In the field of intermolecular interactions, a combination rule is a relation linking a potential parameter for the interaction of two unlike species with the same parameters for the interaction of pairs of like species. Representative applications include the construction of molecular dynamics force fields and dispersion-corrected density functionals. Here, an extended combination rule is proposed, relating the dipole-dipole dispersion coefficients for the interaction of like target species to the same coefficients for the interaction of the target and a set of partner species. The rule can be devised in two different ways, either by uniform discretization of the Casimir-Polder integral on a Gauss-Legendre quadrature or by relating the dynamic polarizabilities of the target and the partner species. Both methods lead to the same system of linear equations, which requires the dispersion coefficients for interactions between the partner species to be known. The test examples show high accuracy for dispersion coefficients (better than 1% in the reference test for the interaction of the Yb atom with rare gases and alkaline-earth metal atoms). In contrast, the rule does not ensure correct monotonic behavior of the dynamic polarizability of the target species. Acknowledgment: The work is supported by Russian Science Foundation grant # 17-13-01466.
Keywords: combination rule, dipole-dipole dispersion coefficient, Casimir-Polder integral, Gauss-Legendre quadrature
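The quadrature route can be sketched directly: C6 = (3/π) ∫₀^∞ α_A(iω) α_B(iω) dω, evaluated on a Gauss-Legendre grid after mapping the half line onto (-1, 1). The single-pole (London) polarizabilities below are an assumption chosen so the quadrature can be checked against the closed-form London result; they are not the ab initio data used in the paper.

```python
import numpy as np

def c6_gauss_legendre(alpha_a, alpha_b, npts=40, w0=1.0):
    """Casimir-Polder integral C6 = (3/pi) * Int_0^inf alpha_A(iw) alpha_B(iw) dw,
    computed by Gauss-Legendre quadrature after the substitution
    w = w0 * (1 + x) / (1 - x), which maps x in (-1, 1) onto w in (0, inf)."""
    x, wts = np.polynomial.legendre.leggauss(npts)
    omega = w0 * (1.0 + x) / (1.0 - x)
    jac = 2.0 * w0 / (1.0 - x) ** 2            # dw/dx for the mapping
    return float(3.0 / np.pi * np.sum(wts * jac * alpha_a(omega) * alpha_b(omega)))

def london(alpha0, w1):
    """Single-pole model polarizability alpha(i w) = alpha0 / (1 + (w/w1)^2);
    for this form the integral has the closed-form London answer
    C6 = (3/2) * alpha0_A * alpha0_B * w1_A * w1_B / (w1_A + w1_B)."""
    return lambda w: alpha0 / (1.0 + (w / w1) ** 2)
```

With a few tens of nodes the quadrature reproduces the London value essentially to machine precision, which is what makes the uniform-grid variant of the combination rule workable.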
Procedia PDF Downloads 177
1479 Structural Behavior of Laterally Loaded Precast Foamed Concrete Sandwich Panel
Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali
Abstract:
Experimental and analytical studies were carried out to investigate the structural behavior of six precast foamed concrete sandwich panels (PFCSP) acting as one-way slabs under lateral load. The details of the test setup and procedures are illustrated. The results obtained from the experimental tests are discussed, including the observed cracking patterns and the influence of aspect ratio (L/b). An analytical study using finite element analysis was implemented, and the degree of composite action of the test panels was also examined in both the experimental and analytical studies. Results show that crack patterns appeared in only one direction, similar to reports on solid slabs, particularly when both concrete wythes act in a composite manner. Foamed concrete is briefly reviewed, and the experimental results are compared with the finite element analyses, which show a reasonable degree of accuracy. Therefore, based on the results obtained, the PFCSP slab can be used as an alternative to conventional flooring systems.
Keywords: aspect ratio (L/b), finite element analyses (FEA), foamed concrete (FC), precast foamed concrete sandwich panel (PFCSP), ultimate flexural strength capacity
Procedia PDF Downloads 314
1478 A Method of Representing Knowledge of Toolkits in a Pervasive Toolroom Maintenance System
Authors: A. Mohamed Mydeen, Pallapa Venkataram
Abstract:
The learning process needs to be pervasive, making use of advances in information and communication systems to improve the quality of knowledge acquisition. However, the pervasive learning paradigms designed so far are system-automation types and lack a factual pervasive realm. Providing a factual pervasive realm requires subtle ways of teaching and learning with system intelligence. Augmenting pervasive learning with intelligence necessitates an efficient way of representing knowledge for the system in order to give the right learning material to the learner. This paper presents a method of representing knowledge for a Pervasive Toolroom Maintenance System (PTMS), through which a learner acquires sublime knowledge about the various kinds of tools kept in the toolroom and which also helps in the effective maintenance of the toolroom. First, we explicate the generic model of knowledge representation for PTMS. Second, we expound the knowledge representation for specific cases of toolkits in PTMS. We also present the conceptual view of knowledge representation using ontologies for both the generic and specific cases. Third, we devise the relations for pervasive knowledge in PTMS. Finally, events are identified in PTMS and linked with pervasive data of toolkits based on the relations formulated. The experimental environment and case studies show accurate and efficient knowledge representation of toolkits in PTMS.
Keywords: knowledge representation, pervasive computing, agent technology, ECA rules
Procedia PDF Downloads 337
1477 A Machine Learning-based Study on the Estimation of the Threat Posed by Orbital Debris
Authors: Suhani Srivastava
Abstract:
This research delves into the classification of orbital debris through machine learning (ML): it categorizes the intensity of the threat orbital debris poses through multiple ML models, to gain insight into effectively estimating the danger specific orbital debris can pose to future space missions. As the space industry expands, orbital debris becomes a growing concern in Low Earth Orbit (LEO) because increasing debris pollution can potentially endanger space missions. Moreover, detecting orbital debris and identifying its characteristics has become a major concern in Space Situational Awareness (SSA), and prior methods relying solely on physics can become inadequate in the face of the growing issue. Thus, this research approaches orbital debris concerns through machine learning, an efficient and more convenient alternative for detecting the potential threat certain orbital debris poses. We found that logistic regression worked best, with 98% accuracy, and this research provides insight into the accuracies of specific machine learning models when classifying orbital debris. Our work would help provide space shuttle manufacturers with guidelines for mitigating risks, and it would help aerospace engineers identify the kinds of protection that should be incorporated into objects traveling in LEO through the predictions our models provide.
Keywords: aerospace, orbital debris, machine learning, space, space situational awareness, NASA
Procedia PDF Downloads 18
1476 A Selection Approach: Discriminative Model for Nominal Attributes-Based Distance Measures
Authors: Fang Gong
Abstract:
Distance measures are an indispensable part of many instance-based learning (IBL) and machine learning (ML) algorithms. The value difference metric (VDM) and the inverted specific-class distance measure (ISCDM) are among the top-performing distance measures that address nominal attributes. VDM performs well in some domains owing to its simplicity, and poorly in others that contain missing values and non-class attribute noise; ISCDM, however, typically works better than VDM on such domains. To maximize their advantages and avoid their disadvantages, this paper proposes a selection approach: a discriminative model for nominal-attribute-based distance measures. More concretely, VDM and ISCDM are built independently on a training dataset at the training stage, and the more credible of the two is recorded for each training instance. At the test stage, the nearest neighbor of each test instance is first found by either VDM or ISCDM, and then the more reliable model recorded for that nearest neighbor is chosen to predict its class label. This is simply denoted as a discriminative distance measure (DDM). Experiments are conducted on 34 University of California at Irvine (UCI) machine learning repository datasets, and they show that DDM retains the interpretability and simplicity of VDM and ISCDM but significantly outperforms the original VDM and ISCDM and other state-of-the-art competitors in terms of accuracy.
Keywords: distance measure, discriminative model, nominal attributes, nearest neighbor
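The VDM half of the scheme is compact enough to sketch: for each attribute, the distance between two nominal values is the summed difference of the class-conditional probabilities they induce. The toy implementation below estimates those probabilities by counting; ISCDM and the per-instance credibility record that complete the DDM are omitted.

```python
from collections import Counter, defaultdict

def vdm(train_X, train_y, q=2):
    """Build the Value Difference Metric from training data: per attribute a,
    d_a(v1, v2) = sum over classes c of |P(c | a=v1) - P(c | a=v2)|^q,
    and the total distance is the sum over attributes. Returns a distance
    function over pairs of instances (lists of nominal values)."""
    n_attr = len(train_X[0])
    classes = sorted(set(train_y))
    # Class counts conditioned on each (attribute, value) pair.
    cond = [defaultdict(Counter) for _ in range(n_attr)]
    for row, c in zip(train_X, train_y):
        for a, v in enumerate(row):
            cond[a][v][c] += 1

    def prob(a, v, c):
        total = sum(cond[a][v].values())
        return cond[a][v][c] / total if total else 0.0

    def dist(x, y):
        return sum(abs(prob(a, x[a], c) - prob(a, y[a], c)) ** q
                   for a in range(n_attr) for c in classes)

    return dist
```

Two values that predict the same class distribution are at distance zero, which is exactly the behavior that makes VDM effective on clean nominal data and fragile under attribute noise.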
Procedia PDF Downloads 112
1475 Monitoring Public Transportation in Developing Countries Using Automatic Vehicle Location System: A Case Study
Authors: Ahmed Osama, Hassan A. Mahdy, Khalid A. Kandil, Mohamed Elhabiby
Abstract:
Automatic Vehicle Location (AVL) systems have been used worldwide for more than twenty years and have shown great success in public transportation management and monitoring. The Cairo public bus service suffers from several problems, such as unscheduled stops, unscheduled route deviations, and inaccurate schedules, which have negative impacts on service reliability. This research aims to study those problems for a selected bus route in Cairo using a prototype AVL system. Experimental trips were run on the selected route, and the locations of unscheduled stops and regions of unscheduled deviations, along with other trip time and speed data, were collected. The data were analyzed to demonstrate passengers' reliance on the unscheduled stops compared to the scheduled ones. Trip time was also modeled to assess the unscheduled stops' impact on trip time and to check the accuracy of the applied scheduled trip time. Moreover, the frequency and length of the unscheduled route deviations, as well as their impact on the bus stops, were illustrated. Solutions were proposed for the bus service deficiencies using the AVL system. Finally, recommendations were proposed for further research.
Keywords: automatic vehicle location, public transportation, unscheduled stops, unscheduled route deviations, inaccurate schedule
Procedia PDF Downloads 388
1474 Parametric Study of Underground Opening Stability under Uncertainty Conditions
Authors: Aram Yakoby, Yossef H. Hatzor, Shmulik Pinkert
Abstract:
This work presents an applied engineering method for evaluating the stability of underground openings under conditions of uncertainty. The developed method is demonstrated by a comprehensive parametric study on a case of large-diameter vertical borehole stability analysis, with uncertainties regarding the in-situ stress distribution. To this aim, a safety factor analysis is performed for the stability of both supported and unsupported boreholes. In the analysis, we used analytic geomechanical calculations and advanced numerical modeling to evaluate the estimated stress field. In addition, the work presents the development of a boundary condition for the numerical model that fits the nature of the problem and yields excellent accuracy. The borehole stability analysis is studied in terms of (1) the stress ratio in the vertical and horizontal directions, (2) the mechanical properties and geometry of the support system, and (3) the parametric sensitivity. The method's results are studied in light of a real case study of an underground waste disposal site. The conclusions of this study focus on the developed method for capturing the parametric uncertainty, the definition of critical geological depths, the criteria for implementing structural support, and the effectiveness of further in-situ investigations.
Keywords: borehole stability, in-situ stress, parametric study, factor of safety
Procedia PDF Downloads 67
1473 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values
Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie
Abstract:
Iterative Learning Control (ILC) is known as a control tool to overcome periodic disturbances in repetitive systems. The technique drives the error signal toward zero as the number of operations increases. The learning process that lies within this context is strongly dependent on the initial input, which, if selected properly, makes learning more effective than starting blind. ILC uses previously recorded execution data to update the following execution/trial input such that a reference trajectory is followed to a high accuracy. Error convergence in ILC is generally highly dependent on the input applied to a plant for trial 1; thus, a good choice of the initial input signal makes learning faster and, as a consequence, lets the error tend to zero faster as well. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, a robot arm for example, is required to move. Simulation results are presented to illustrate the theory introduced in this paper.
Keywords: initial input, iterative learning control, maximum input, singular values
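A minimal lifted-system sketch of the idea, with an arbitrary first-order plant (the plant, envelope value, and learning gain are all assumptions, not the paper's set-up): since ||y|| ≤ σ_max(G)·||u||, scaling the trial-1 input so that ||u|| = y_limit / σ_max keeps the first output inside the envelope, after which a gradient ILC update drives the tracking error down monotonically.

```python
import numpy as np

def ilc_demo(trials=30, n=50):
    """Gradient-type ILC on a lifted SISO plant y = G u.
    The largest singular value of the lifted matrix bounds the output norm,
    so an initial input of norm y_limit / sigma_max provably cannot push the
    trial-1 output outside the working envelope."""
    # Lifted (lower-triangular Toeplitz) impulse-response matrix of the
    # discrete plant y[t+1] = 0.9 y[t] + 0.5 u[t].
    h = 0.5 * 0.9 ** np.arange(n)
    G = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    ref = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    sigma_max = np.linalg.norm(G, 2)            # largest singular value
    y_limit = 2.0                               # assumed working envelope
    u = (y_limit / sigma_max) * ref / np.linalg.norm(ref)  # safe trial-1 input
    first_trial_norm = float(np.linalg.norm(G @ u))        # <= y_limit by construction
    beta = 1.0 / sigma_max**2                   # gain giving monotone convergence
    errs = []
    for _ in range(trials):
        e = ref - G @ u
        errs.append(float(np.linalg.norm(e)))
        u = u + beta * G.T @ e                  # gradient ILC update
    return errs, first_trial_norm, y_limit
```

The error recursion is e_{k+1} = (I − βGGᵀ)e_k, whose eigenvalues lie in [0, 1) for this β, so the error norm never increases from trial to trial.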
Procedia PDF Downloads 240
1472 Analyzing the Effectiveness of a Bank of Parallel Resistors, as a Burden Compensation Technique for Current Transformer's Burden, Using LabVIEW™ Data Acquisition Tool
Authors: Dilson Subedi
Abstract:
Current transformers (CTs) are an integral part of power systems because they provide a proportional, safe amount of current for protection and measurement applications. However, due to the upgrading of electromechanical relays to numerical relays and of electromechanical energy meters to digital meters, the connected burden, which defines some of the CT characteristics, has drastically reduced. This has led to the system experiencing high currents that damage the connected relays and meters. Since protection and metering equipment is designed to withstand only a certain amount of current with respect to time, these high currents pose a risk to man and equipment. Therefore, during such instances, the CT saturation characteristics have a huge influence on the safety of both man and equipment and on the reliability of the protection and metering system. This paper shows the effectiveness of a bank of parallel-connected resistors, as a burden compensation technique, in compensating the burden of under-burdened CTs. The response of the CT in the case of failure of one or more resistors at different levels of overcurrent will be captured using LabVIEW™ data acquisition hardware (DAQ). The analysis is done on real-time data gathered using LabVIEW™. The variation of current transformer saturation characteristics with changes in burden will be discussed.
Keywords: accuracy limiting factor, burden, burden compensation, current transformer
Procedia PDF Downloads 243
1471 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors
Authors: Katawut Kaewbanjong
Abstract:
We analyzed a large volume of data and identified software project factors significantly associated with user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors from 191 software projects in ISBSG Release 12. The eight prediction models used for testing the prediction potential of these factors were the neural network, k-NN, Naïve Bayes, random forest, decision tree, gradient boosted tree, linear regression, and logistic regression prediction models. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how methodology was acquired, development techniques, decision-making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
Keywords: prediction model, statistical analysis, software project, user satisfaction factor
Procedia PDF Downloads 122
1470 CFD Modeling of Insect Flight at Low Reynolds Numbers
Authors: Wu Di, Yeo Khoon Seng, Lim Tee Tai
Abstract:
Typical insects employ a flapping-wing mode of flight. Numerical simulations of the free flight of a model fruit fly (Re = 143), including hovering, are presented in this paper. Unsteady aerodynamics around a flapping insect is studied by solving the three-dimensional Newtonian dynamics of the flyer coupled with the Navier-Stokes equations. A hybrid-grid scheme (generalized finite-difference method) that combines great geometric flexibility with accurate moving-boundary definition is employed for obtaining the flow dynamics. The results show good agreement and consistency with the outcomes and analyses of other researchers, which validates the computational model and demonstrates the feasibility of this computational approach for analyzing fluid phenomena in insect flight. The present modeling approach also offers a promising route of investigation that could complement, as well as overcome some of the limitations of, physical experiments in the study of free-flight aerodynamics of insects. The results are potentially useful for the design of biomimetic flapping-wing flyers.
Keywords: free hovering flight, flapping wings, fruit fly, insect aerodynamics, leading edge vortex (LEV), computational fluid dynamics (CFD), Navier-Stokes equations (N-S), fluid structure interaction (FSI), generalized finite-difference method (GFD)
Procedia PDF Downloads 407
1469 The Influence of Collaboration on Individual Writing Quality: The Case of Iranian vs. Malaysian Freshers
Authors: Seyed Yasin Yazdi-Amirkhiz, Azirah Hashim
Abstract:
This study set out to comparatively investigate the influence of collaborative writing on the quality of the individual writing of four female Iranian and four female Malaysian students. The first-semester students at a private university in Malaysia, who were homogeneous in terms of age, gender, study discipline, and language proficiency, were divided into two Iranian and two Malaysian dyads. The dyads performed collaborative writing tasks for 15 sessions; after every three consecutive collaborative writing sessions, each participant was asked to individually attempt a writing task. Both collaborative and individual writing tasks comprised isomorphic graphic prompts (IELTS Academic Module Task 1). The writing quality of the five individually produced texts during the study was scored in terms of task achievement (TA), cohesion/coherence (C/C), grammatical range/accuracy (GR/A), and lexical resources (LR). The findings indicated a hierarchy of development in TA and C/C among all the students, while LR showed minor improvement only among three of the Malaysian students, and GR/A barely exhibited any progress among all the participants. Intermittent progressions and regressions were also discerned in the trajectory of their writing development. The findings are discussed in light of the socio-cultural and emergentist perspectives, the typology of tasks used, as well as the role of the participants' level of language proficiency.
Keywords: collaborative writing, writing quality, individual writing, collaboration
Procedia PDF Downloads 457
1468 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
Existing two-dimensional micro-Doppler feature extraction ignores the correlation between spatial and temporal features. Here, the time dimension is introduced for the range-Doppler map, and a frequency modulated continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, range-Doppler sequence maps are generated from the echo signals of the continuous motion of the human body collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional convolutional neural network (3D CNN). The spatial and temporal features of the time-varying range-Doppler maps are extracted simultaneously by the convolution and pooling layers. Finally, the extracted spatial and temporal features are input into the fully connected layer for classification. The experimental results show that the proposed fall detection algorithm has a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-doppler features
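One frame of the input representation can be sketched as follows: for an FMCW radar, a 2D FFT of the beat signal over fast time (range) and across chirps (Doppler) yields a range-Doppler map. The target's bin indices below are picked directly for illustration rather than derived from a radar parameter set; stacks of such maps form the data cube fed to the 3D CNN.

```python
import numpy as np

def range_doppler_map(n_fast=64, n_chirps=32, k_range=10, k_doppler=5):
    """Synthesize one FMCW frame for a single point target and form its
    range-Doppler map. The beat signal is a complex tone whose fast-time
    frequency encodes range (beat frequency) and whose chirp-to-chirp
    phase progression encodes Doppler; the 2D FFT therefore concentrates
    the energy at bin (k_range, k_doppler)."""
    n = np.arange(n_fast)[:, None]      # fast-time sample index
    m = np.arange(n_chirps)[None, :]    # slow-time (chirp) index
    beat = np.exp(2j * np.pi * (k_range * n / n_fast + k_doppler * m / n_chirps))
    rd = np.abs(np.fft.fft2(beat))      # range-Doppler magnitude map
    peak = np.unravel_index(np.argmax(rd), rd.shape)
    return rd, peak
```

A real pipeline would compute these maps frame by frame from measured echoes and stack consecutive frames into the three-dimensional data cube described in the abstract.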
Procedia PDF Downloads 119
1467 Image Segmentation Using Active Contours Based on Anisotropic Diffusion
Authors: Shafiullah Soomro
Abstract:
Active contours are one of the image segmentation techniques, whose goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour method based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better both qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
Keywords: active contours, anisotropic diffusion, level-set, partial differential equations
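The Perona-Malik enhancement step admits a short sketch: diffuse with conductance g(|∇I|) = 1/(1 + (|∇I|/K)²), so flat regions are smoothed while diffusion nearly stops across strong edges. The parameter values below are illustrative assumptions, and the SPF/level-set part of the paper's method is not shown.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Explicit Perona-Malik anisotropic diffusion with conductance
    g(d) = 1 / (1 + (d/kappa)^2) applied to the four nearest-neighbour
    differences. lam <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        # Nearest-neighbour differences in the four directions (wrapping edges).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Flux is throttled where the local gradient is large (an edge).
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy step image this suppresses the noise in the flat regions while leaving the step itself almost untouched, which is the behavior the segmentation stage relies on.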
Procedia PDF Downloads 157
1466 Investigating the Impact of Solar Radiation on Electricity Meters' Accuracy Using a Modified Climatic Chamber
Authors: Hala M. Abdel Mageed, Eman M. Hosny, Adel S. Nada
Abstract:
The solar radiation test is one of the essential tests performed on electricity meters and is carried out using solar simulators. In this work, the MKF-240 climatic chamber has been modified to act as a solar simulator at the Egyptian National Institute of Standards (NIS). Quartz tungsten halogen (QTH) lamps and an aluminum plate are added to the climatic chamber to realize the solar test conditions. Many experimental trials have been performed to reach the optimum number of lamps needed to fulfil the test requirements and to adjust the best uniform test area. The proposed solar simulator design is capable of producing irradiance up to 1066 W/m2. Its output radiation is controlled by changing the number of illuminated lamps as well as the distance between the lamps and the tested electricity meter. The uniformity of radiation within the simulator has been found to be 91.5% at maximum irradiance. Three samples of electricity meters have been tested under different irradiances, temperatures, and electric loads. The electricity meters' accuracies have been recorded and analyzed for each sample. Moreover, the measurement uncertainty contribution has been considered in all tests to obtain precise values. There were noticeable changes in the accuracies of the electricity meters while exposed to solar radiation, although there were no noticeable distortions of their insulations or outer surfaces. Keywords: solar radiation, solar simulator, climatic chamber, halogen lamp, electricity meter
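The abstract quotes a radiation uniformity of 91.5% over the test area. A common way to compute such a figure from a grid of irradiance readings is the (max - min)/(max + min) non-uniformity definition used for solar simulator classification; whether the authors used exactly this formula is an assumption:

```python
def uniformity_percent(readings):
    """Spatial uniformity over a grid of irradiance readings (W/m^2),
    using the common (Emax - Emin) / (Emax + Emin) non-uniformity
    definition from solar-simulator classification standards."""
    e_max, e_min = max(readings), min(readings)
    return (1.0 - (e_max - e_min) / (e_max + e_min)) * 100.0
```

For example, readings spanning 90 to 110 W/m2 give a uniformity of 90.0%, and a perfectly flat field gives 100%.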
Procedia PDF Downloads 124
1465 Double Layer Security Authentication Model for Automatic Dependent Surveillance-Broadcast
Authors: Buse T. Aydin, Enver Ozdemir
Abstract:
An automatic dependent surveillance-broadcast (ADS-B) system has serious security problems. In this study, a double-layer authentication scheme between the aircraft and the ground station, aircraft to aircraft, and ground station to ATC tower is designed to prevent any unauthorized aircraft from introducing itself as a friend. This method can be used as a solution to the authentication problem. The method is a combination of classical cryptographic methods and new-generation physical layers. The first layer employs the embedded key of the aircraft, which is assumed to be installed during the construction of the aircraft. The other layer is a physical attribute (flight path, distance, etc.) between the aircraft and the ATC tower. We create a mathematical model that employs the information of both layers, and an aircraft is authenticated as a friend or unknown according to the accuracy of the results of the model. The results of the aircraft are compared with the results of the ATC tower, and if the values found by the aircraft and the ATC tower match within a certain error margin, we mark the aircraft as a friend. As a result, the ADS-B messages coming from this authenticated friendly aircraft will be processed. In this method, even if the embedded key is captured by an unknown aircraft, without the information of the second layer the unknown aircraft can easily be detected. Overall, in this work, we present a reliable system by adding a physical layer to the authentication process. Keywords: ADS-B, authentication, communication with physical layer security, cryptography, identification friend or foe
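The two-layer decision logic, a cryptographic check on the embedded key AND a physical-attribute check within an error margin, can be sketched as follows. The HMAC challenge-response and the scalar distance comparison are stand-ins for the paper's actual cryptographic scheme and physical-layer model:

```python
import hmac
import hashlib

def layer1_ok(embedded_key, challenge, response):
    """Layer 1 (classical cryptography): the aircraft answers a challenge
    with an HMAC keyed by its embedded key -- a stand-in for the paper's scheme."""
    expected = hmac.new(embedded_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def layer2_ok(aircraft_value, tower_value, tolerance):
    """Layer 2 (physical attribute): the value the aircraft reports,
    e.g. its distance to the ATC tower, must match the tower's own
    estimate within an error margin."""
    return abs(aircraft_value - tower_value) <= tolerance

def authenticate(embedded_key, challenge, response, aircraft_value, tower_value, tolerance):
    """Mark the aircraft a friend only if BOTH layers agree -- a stolen
    embedded key alone is not enough without the right physical attribute."""
    return (layer1_ok(embedded_key, challenge, response)
            and layer2_ok(aircraft_value, tower_value, tolerance))
```

The conjunction is the point: an unknown aircraft that captured the embedded key still fails layer 2, exactly the property the abstract claims.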
Procedia PDF Downloads 177
1464 Parallel Gripper Modelling and Design Optimization Using Multi-Objective Grey Wolf Optimizer
Authors: Golak Bihari Mahanta, Bibhuti Bhusan Biswal, B. B. V. L. Deepak, Amruta Rout, Gunji Balamurali
Abstract:
Robots are widely used in the manufacturing industry for rapid production with higher accuracy and precision. With the help of end-of-arm tools (EOATs), robots interact with the environment. Robotic grippers are EOATs that help grasp objects in an automation system, improving its efficiency. As the robotic gripper directly influences the quality of the product due to the contact between the gripper surface and the object to be grasped, it is necessary to design and optimize the gripper mechanism configuration. In this study, geometric and kinematic modeling of a parallel gripper is proposed, and the grey wolf optimizer algorithm is introduced for solving the proposed multi-objective gripper optimization problem. Two objective functions developed from the geometric and kinematic modeling, along with several nonlinear constraints of the proposed gripper mechanism, are used to optimize the design variables of the system. Finally, the proposed methodology is compared with previously proposed methods such as the Teaching Learning Based Optimization (TLBO) algorithm, NSGA-II, and MODE, and it is seen that the proposed method is more efficient than the earlier methodologies. Keywords: gripper optimization, metaheuristics, teaching learning based algorithm, multi-objective optimization, optimal gripper design
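Multi-objective optimizers such as the grey wolf optimizer, NSGA-II, and MODE are all ultimately compared on the non-dominated (Pareto) designs they find. The dominance test at the core of any such comparison is a few lines (minimization assumed; the gripper objective functions themselves are not reproduced here):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the non-dominated designs (tuples of objective values)."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]
```

For gripper design, each tuple would hold the two objective values (e.g. force variation and displacement-related measures) evaluated for one candidate set of design variables.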
Procedia PDF Downloads 187
1463 Design and Implementation of LabVIEW Based Relay Autotuning Controller for Level Setup
Authors: Manoj M. Sarode, Sharad P. Jadhav, Mukesh D. Patil, Pushparaj S. Suryawanshi
Abstract:
Even though the PID controller is widely used in industrial processes, tuning of PID parameters is not easy: it is time consuming and requires expert personnel. Another drawback of the PID controller is that the process dynamics might change over time, for example due to variation of the process load or normal wear and tear. To compensate for process behavior changes over time, expert users are required to recalibrate the PID gains. The implementation of model-based controllers usually needs a process model; identification of the process model is a time-consuming job with no guarantee of model accuracy, and if the identified model is not accurate, the performance of the controller may degrade. Model-based controllers are also quite expensive, and the whole implementation procedure is sometimes tedious. To eliminate such issues, an autotuning PID controller becomes a vital element. A software-based relay feedback autotuning controller proves to be an efficient, upgradable, and maintenance-free controller with which PID parameters can be obtained within a very short span of time. This paper presents the real-time implementation of a LabVIEW-based relay feedback autotuning PID controller. It is successfully developed and implemented to control the level of a laboratory setup, and its performance, analyzed for different setpoints, is found satisfactory. Keywords: autotuning, PID, liquid level control, recalibration, LabVIEW, controller
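The arithmetic behind relay feedback autotuning is compact: the Åström-Hägglund describing-function result gives the ultimate gain from the relay amplitude and the induced oscillation, and classic Ziegler-Nichols rules then yield PID settings. A sketch (the paper's LabVIEW implementation may use different tuning rules):

```python
import math

def relay_ultimate(d, a, tu):
    """Astrom-Hagglund relay feedback estimate of the ultimate point:
    d  = relay output amplitude,
    a  = measured amplitude of the process oscillation,
    tu = measured oscillation period.
    Ku = 4d / (pi * a) from the describing-function approximation."""
    ku = 4.0 * d / (math.pi * a)
    return ku, tu

def zn_pid(ku, tu):
    """Classic Ziegler-Nichols PID settings from the ultimate gain/period."""
    kp = 0.6 * ku     # proportional gain
    ti = 0.5 * tu     # integral time
    td = 0.125 * tu   # derivative time
    return kp, ti, td
```

In the relay experiment the controller is temporarily replaced by an on/off relay; once the limit cycle's amplitude and period are read from the level signal, these two formulas deliver the PID parameters, which is why the method needs no explicit process model.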
Procedia PDF Downloads 392
1462 Comparative Study of Dose Calculation Accuracy in Bone Marrow Using Monte Carlo Method
Authors: Marzieh Jafarzadeh, Fatemeh Rezaee
Abstract:
Introduction: Ionizing radiation can affect genomic integrity and cell viability and increases the risk of cancer and malignancy; therefore, X-ray behavior and absorbed dose calculation are considered. One of the applicable tools for calculating and evaluating the absorbed dose in human tissues is Monte Carlo simulation, which offers a straightforward way to simulate and integrate and is therefore easy to use. The Monte Carlo BEAMnrc code, one of the most common diagnostic X-ray simulation codes, is used in this study. Method: In one of the hospitals under study, a number of CT scan images of previously imaged patients were extracted from the hospital database. BEAMnrc software was used for the simulation: the head of the device was simulated at an energy of 0.09 MeV with 500 million particles, and the output data obtained from the simulation were applied to phantom construction using the ctcreate software. The percentage depth dose (PDD) was calculated using STATDOSE and then compared with international standard values. Results and Discussion: The ratio of surface dose to depth dose (D/Ds) at the measured energy was estimated to be about 4% to 8% for bone and 3% to 7% for bone marrow. Conclusion: MC simulation is an efficient and accurate method for simulating bone marrow and calculating the absorbed dose. Keywords: Monte Carlo, absorbed dose, BEAMnrc, bone marrow
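The percentage-depth-dose idea can be illustrated with a toy Monte Carlo that is drastically simpler than BEAMnrc: sample photon interaction depths from an exponential attenuation law, histogram them into depth bins, and normalize to the peak bin. The attenuation coefficient and bin edges below are arbitrary illustrative values:

```python
import math
import random

def mc_pdd(mu, depths, n=100_000, seed=1):
    """Toy Monte Carlo percentage depth dose: sample photon interaction
    depths z ~ Exp(mu) (mu = linear attenuation coefficient, 1/cm),
    count interactions per depth bin, and normalize to the maximum bin.
    Real codes like BEAMnrc transport secondaries, score energy, etc."""
    random.seed(seed)
    counts = [0] * (len(depths) - 1)
    for _ in range(n):
        z = -math.log(random.random()) / mu  # sampled interaction depth (cm)
        for i in range(len(counts)):
            if depths[i] <= z < depths[i + 1]:
                counts[i] += 1
                break
    peak = max(counts)
    return [100.0 * c / peak for c in counts]
```

Even this caricature reproduces the qualitative shape the PDD comparison relies on: deeper bins receive exponentially fewer interactions, so the curve falls off with depth.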
Procedia PDF Downloads 211
1461 Use of Machine Learning in Data Quality Assessment
Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho
Abstract:
Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to establish and maintain data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of the consumers who use these data in decision making and company strategies. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk management actions. The step of identifying, evaluating, and selecting data sources of sufficient quality for a given need has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, limiting performance and slowing down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value to companies and users. In this study, machine learning is applied to data quality analysis on different datasets, seeking to compare the performance of the techniques according to the dimensions of quality assessment. As a result, we were able to create a ranking of the approaches used, as well as a system that is able to carry out data quality assessment automatically. Keywords: machine learning, data quality, quality dimension, quality assessment
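Before any learning happens, each quality dimension must be turned into a number per dataset. As a concrete illustration (the paper's actual dimension definitions are not specified in the abstract), the completeness dimension is often scored as the fraction of required fields that are present and non-empty:

```python
def completeness(records, fields):
    """Completeness dimension: fraction of required fields that are
    present and non-empty across all records (1.0 = fully complete)."""
    total = len(records) * len(fields)
    filled = sum(1 for r in records for f in fields
                 if r.get(f) not in (None, ""))
    return filled / total if total else 1.0

rows = [{"id": 1, "name": "Ana"}, {"id": 2, "name": ""}]
score = completeness(rows, ["id", "name"])  # 3 of 4 cells filled -> 0.75
```

Scores like this, computed per dimension and per source, are the kind of features a classifier can then rank or learn quality labels from.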
Procedia PDF Downloads 146
1460 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features
Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari
Abstract:
An approach to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, which is the principal emblem and the heart of Persian music, performed by the most famous Iranian masters on two Iranian stringed instruments, the 'Tar' and the 'Setar', is proposed. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire belonging to 12 modal systems (dastgâh and âvâz), we applied the fuzzy c-means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the experiment while increasing the number of coefficients, but the clustering accuracy remained the same; therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are enough for the classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, k-nearest neighbors (KNN), Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM) have been employed. Finally, SVM shows the best performance in this study. Keywords: Radif of Mirzâ Ábdollâh, gushe, mel-frequency cepstral coefficients, fuzzy c-means clustering algorithm, k-nearest neighbors (KNN), Gaussian mixture model (GMM), hidden Markov model (HMM), support vector machine (SVM)
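The fuzzy c-means algorithm used to cluster the MFCC vectors fits in a few lines of numpy: alternate between weighted center updates and soft-membership updates u_ij = d_ij^(-2/(m-1)) / sum_k d_ik^(-2/(m-1)). This is the textbook algorithm, not the authors' exact configuration; feeding it rows of (say) the first 7 MFCCs per recording is the intended use:

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. x: (n_samples, n_features) feature matrix,
    e.g. the first 7 MFCCs per recording. Returns the (n, c) membership
    matrix u (rows sum to 1) and the (c, n_features) cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)                     # random soft memberships
    for _ in range(n_iter):
        w = u ** m                                        # fuzzified weights
        centers = (w.T @ x) / w.sum(axis=0)[:, None]      # weighted cluster means
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))                  # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```

Unlike hard k-means, each gushe keeps a graded membership in every modal-system cluster, which is what makes the per-coefficient clustering-accuracy comparison in the abstract possible.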
Procedia PDF Downloads 445
1459 Two-Stage Launch Vehicle Trajectory Modeling for Low Earth Orbit Applications
Authors: Assem M. F. Sallam, Ah. El-S. Makled
Abstract:
This paper presents a study of the trajectory of a two-stage launch vehicle (LV). The study includes the dynamic responses of the motion parameters as well as the variation of the angles affecting the orientation of the LV. LV dynamic characteristics, including the state vector variation with the corresponding altitude and velocity at the separation of the different LV stages, as well as the angle of attack and flight path angles, are also discussed. A flight trajectory study of the drop zone of the first stage and the jettisoning of the fairing is introduced into the mathematical modeling to study their effect. To increase the accuracy of the LV model, an atmospheric model is used, taking into consideration the geographical location and the values of solar flux related to the date and time of launch; an accurate atmospheric model improves the calculation of the Mach number, which affects the drag force on the LV. The mathematical model is implemented in MATLAB-based software (Simulink). The available experimental data are compared with the results obtained from the theoretical computational model. The comparison shows good agreement, which proves the validity of the developed simulation model; the maximum error noticed was generally less than 10%, a result that motivates future work to decrease this level of error. Keywords: launch vehicle modeling, launch vehicle trajectory, mathematical modeling, MATLAB-Simulink
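The dependency chain the abstract describes, atmosphere model gives temperature, temperature gives speed of sound, speed of sound gives Mach number, Mach number drives drag, can be sketched with the standard-atmosphere troposphere layer (the paper's atmospheric model with solar-flux corrections is more elaborate than this ISA sketch):

```python
import math

def isa_troposphere_temp(h):
    """ISA temperature (K) up to 11 km: sea-level 288.15 K,
    lapse rate 6.5 K per km."""
    return 288.15 - 0.0065 * h  # h in metres

def mach_number(v, h, gamma=1.4, r=287.05):
    """Mach number from vehicle speed v (m/s) and altitude h (m),
    using the speed of sound a = sqrt(gamma * R * T)."""
    t = isa_troposphere_temp(h)
    a = math.sqrt(gamma * r * t)
    return v / a
```

Because the temperature falls with altitude, the same vehicle speed corresponds to a higher Mach number at 10 km than at sea level, which is exactly why an accurate atmosphere model matters for the drag calculation.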
Procedia PDF Downloads 273
1458 Performance Analysis of Traffic Classification with Machine Learning
Authors: Htay Htay Yi, Zin May Aye
Abstract:
Network security plays a key role in the ICT environment because malicious users are continually growing in the realms of education, business, and other ICT-related fields. Network security contraventions are typically described and examined centrally, based on a security event management system. Firewalls, Intrusion Detection Systems (IDS), and Intrusion Prevention Systems are becoming essential to monitor or prevent potential violations, attack incidents, and imminent threats. In this system, the firewall rules are set only where the system policies require them. The datasets deployed in this system are derived from a testbed environment in which DoS and PortScan traffic is applied with the firewall and IDS implemented. The network traffic is classified as normal or attack in the existing testbed environment based on six machine learning classification methods applied in the system. The dataset is based on CICIDS2017, with some features added; the system tested 26 features from the applied dataset. The system aims to reduce false positive rates and to improve accuracy in the implemented testbed design. It also demonstrates good performance by selecting important features and comparing against an existing dataset using machine learning classifiers. Keywords: false negative rate, intrusion detection system, machine learning methods, performance
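The performance figures the abstract targets, accuracy, false positive rate, and the false negative rate named in the keywords, all come from the binary attack/normal confusion matrix. A minimal sketch (the paper's six classifiers are not reproduced; this is only the evaluation arithmetic):

```python
def detection_metrics(tp, fp, tn, fn):
    """Binary intrusion-detection metrics from a confusion matrix,
    treating 'attack' as the positive class."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # normal traffic wrongly flagged as attack
    fnr = fn / (fn + tp)  # attacks the detector missed
    return accuracy, fpr, fnr

# e.g. 90 attacks caught, 10 missed, 95 normal flows passed, 5 wrongly flagged
acc, fpr, fnr = detection_metrics(tp=90, fp=5, tn=95, fn=10)
```

Reporting FNR alongside accuracy matters here: on traffic where attacks are rare, a classifier can score high accuracy while missing most attacks, which the false negative rate exposes.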
Procedia PDF Downloads 116