Search results for: multi-source data fusion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24465

24255 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data

Authors: Xiang Jia, Zhijun Cheng

Abstract:

The residual lifetime of a product is the operating time between the current moment and the time at which failure occurs, and its estimation is important in reliability analysis. To predict the residual lifetime, a particular lifetime distribution must be assumed or verified, and the two-parameter Weibull distribution is frequently adopted to describe lifetimes in reliability engineering. Owing to time constraints and cost reduction, a life test is usually terminated before all units have failed, so censored data are typically collected. Other information can also be exploited for reliability analysis; in particular, expert judgements are considered, since experts can often provide useful information concerning reliability. Therefore, in this paper the residual lifetime under the Weibull distribution is estimated by fusing censored data and expert judgements. First, closed-form expressions for the point estimate and confidence interval of the residual lifetime are presented. Next, the expert judgements are treated as prior information, and a method for determining the prior distribution of the Weibull parameters is developed; for completeness, the cases of a single expert judgement and of multiple expert judgements are both addressed. The posterior distribution of the Weibull parameters is then derived. Because the posterior distribution of the residual lifetime is difficult to derive directly, a sample-based method is proposed that generates posterior samples of the Weibull parameters with the Markov chain Monte Carlo (MCMC) method; these samples are used to obtain the Bayes estimate and credible interval for the residual lifetime. Finally, an illustrative example demonstrates that the proposed method is simple, accurate, and robust.
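The fusion scheme this abstract describes (censored Weibull likelihood, expert judgement as a prior, MCMC posterior samples, closed-form residual life) can be sketched with a minimal random-walk Metropolis sampler. Everything below is illustrative, not the authors' formulation: the data, the lognormal priors standing in for expert judgements, and the step size are all invented for the example.

```python
import numpy as np

def log_post(log_beta, log_eta, t, cens, mu=(0.0, 4.6), sd=(1.0, 1.0)):
    """Log-posterior for a two-parameter Weibull with right-censored data.
    cens[i] = 1 marks a censoring time (the unit was still working).
    Lognormal priors on (beta, eta) stand in for the expert judgements."""
    beta, eta = np.exp(log_beta), np.exp(log_eta)
    z = (t / eta) ** beta
    # failures contribute the log-density; censored points only log-survival (-z)
    ll = np.sum((1 - cens) * (np.log(beta / eta) + (beta - 1) * np.log(t / eta)) - z)
    lp = -0.5 * ((log_beta - mu[0]) / sd[0]) ** 2 - 0.5 * ((log_eta - mu[1]) / sd[1]) ** 2
    return ll + lp

def mcmc(t, cens, n=4000, step=0.15, seed=0):
    """Random-walk Metropolis in (log beta, log eta); returns post-burn-in samples."""
    rng = np.random.default_rng(seed)
    x = np.array([0.0, np.log(t.mean())])    # start at beta = 1, eta = mean(t)
    cur = log_post(x[0], x[1], t, cens)
    out = []
    for _ in range(n):
        prop = x + step * rng.standard_normal(2)
        new = log_post(prop[0], prop[1], t, cens)
        if np.log(rng.uniform()) < new - cur:
            x, cur = prop, new
        out.append(np.exp(x))
    return np.array(out[n // 2:])            # drop the first half as burn-in

def median_residual_life(beta, eta, t0):
    # S(t0+r)/S(t0) = 1/2  =>  r = eta*((t0/eta)**beta + ln 2)**(1/beta) - t0
    return eta * ((t0 / eta) ** beta + np.log(2.0)) ** (1.0 / beta) - t0

t = np.array([45.0, 60.0, 72.0, 80.0, 95.0, 100.0, 100.0, 100.0])
cens = np.array([0, 0, 0, 0, 0, 1, 1, 1])    # last three units survived the test
samples = mcmc(t, cens)
r = median_residual_life(samples[:, 0], samples[:, 1], t0=50.0)
r_hat, (lo, hi) = r.mean(), np.percentile(r, [2.5, 97.5])
```

For the Weibull, the median residual life has the closed form used above, so each posterior parameter sample maps directly to a residual-life sample, from which the Bayes estimate and credible interval follow.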

Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution

Procedia PDF Downloads 110
24254 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process

Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum

Abstract:

Due to the increasing demand for quality assurance and reliability in additive manufacturing, advanced in-situ monitoring systems are required to detect process anomalies as input for further process control. Optical monitoring systems, such as CMOS and NIR cameras, have proved effective for monitoring geometrical distortion and abnormal thermal distributions, and many studies and applications therefore focus on their suitability for detecting various types of defects. However, the capability of the monitoring setup itself is rarely quantified. In this study, a quantification model is presented that evaluates the capability of monitoring setups for an LPBF machine from monitoring data acquired on a designed test artifact, and the design of the relevant test artifacts is discussed. A monitoring setup is evaluated with respect to its hardware properties, integration location, and lighting conditions, and the data-processing methodology used to quantify each aspect is described. The minimal detectable feature size of the monitoring setup in application is estimated by quantifying its resolution and accuracy. The quantification model is validated on a CCD-camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, which makes it possible to compare monitoring systems of the same concept but different setups for the LPBF process and indicates directions for improving the setups.

Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact

Procedia PDF Downloads 169
24253 Beam Spatio-Temporal Multiplexing Approach for Improving Control Accuracy of High Contrast Pulse

Authors: Ping Li, Bing Feng, Junpu Zhao, Xudong Xie, Dangpeng Xu, Kuixing Zheng, Qihua Zhu, Xiaofeng Wei

Abstract:

In laser-driven inertial confinement fusion (ICF), control of the temporal shape of the laser pulse is key to ensuring optimal laser-target interaction. One of the main difficulties is accurately controlling the foot part of a high-contrast pulse. Based on an analysis of pulse perturbation during amplification and frequency conversion in high-power lasers, a beam spatio-temporal multiplexing approach is proposed to improve the control precision of high-contrast pulses. In this approach, the foot and peak parts of the pulse are controlled independently: they propagate separately in the near field and combine in the far field to form the required pulse shape. For a high-contrast pulse, the beam area ratio of the two parts is optimized so that the fluence and intensity of the foot part are increased, which greatly simplifies its control. Meanwhile, the near-field distribution of the two parts is carefully designed so that their F-numbers are the same, another important parameter for laser-target interaction. Integrated calculations show that, for a pulse with a contrast of up to 500, the deviation of the foot part can be reduced from 20% to 5% by using the beam spatio-temporal multiplexing approach with a beam area ratio of 1/20, making it almost the same as that of the peak part. These results are expected to bring a breakthrough in the power balance of high-power laser facilities.

Keywords: inertial confinement fusion, laser pulse control, beam spatio-temporal multiplexing, power balance

Procedia PDF Downloads 130
24252 Human Posture Estimation Based on Multiple Viewpoints

Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo

Abstract:

This study aimed to address the problem of improving the confidence of key points by fusing multi-view information, thereby estimating human posture more accurately. We first obtained multi-view image information and then used the MvP algorithm to fuse it into a set of high-confidence human key points. These served as the input to a Spatio-Temporal Graph Convolutional Network (ST-GCN), a deep learning model for spatio-temporal data that can effectively capture spatio-temporal relationships in video sequences. By fusing multi-view information with the MvP algorithm and feeding the result into the spatio-temporal graph convolution model, this study provides an effective method for improving the accuracy of human posture estimation and strong support for further research and applications in related fields.
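The core idea, raising key-point confidence by combining views, can be illustrated with a simple confidence-weighted fusion. This is a hand-rolled stand-in for MvP's learned fusion, not the actual algorithm; the threshold and shapes are assumptions for the sketch.

```python
import numpy as np

def fuse_keypoints(views, confs, thresh=0.3):
    """Confidence-weighted fusion of per-view key-point estimates.
    views: (V, J, 3) key-point coordinates from V viewpoints for J joints;
    confs: (V, J) per-view confidences."""
    w = np.where(confs >= thresh, confs, 0.0)      # discard low-confidence views
    denom = w.sum(axis=0)                          # (J,) total weight per joint
    denom = np.where(denom == 0.0, 1.0, denom)     # avoid division by zero
    fused = (w[:, :, None] * views).sum(axis=0) / denom[:, None]
    fused_conf = w.max(axis=0)                     # confidence of the fused joint
    return fused, fused_conf
```

The fused (J, 3) array of joints per frame is exactly the kind of sequence an ST-GCN then consumes as a spatio-temporal graph.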

Keywords: multi-view, pose estimation, ST-GCN, joint fusion

Procedia PDF Downloads 36
24251 Applications of Big Data in Education

Authors: Faisal Kalota

Abstract:

Big Data and analytics have gained huge momentum in recent years. Big Data feeds into the field of Learning Analytics (LA), which may allow academic institutions to better understand learners' needs and proactively address them. Hence, it is important to understand Big Data and its applications. The purpose of this descriptive paper is to provide an overview of Big Data, the technologies used in Big Data, and some of its applications in education. It also discusses some of the concerns related to Big Data and current research trends. While Big Data can provide big benefits, institutions should understand their own needs, infrastructure, resources, and limitations before jumping on the Big Data bandwagon.

Keywords: big data, learning analytics, analytics, big data in education, Hadoop

Procedia PDF Downloads 382
24250 Microstructure, Mechanical, Electrical and Thermal Properties of the Al-Si-Ni Ternary Alloy

Authors: Aynur Aker, Hasan Kaya

Abstract:

In recent years, the use of aluminum-based alloys in industry and technology has been increasing, as alloying elements in aluminum improve strength and stiffness beyond those of other metals. In this study, the physical properties (microstructure, microhardness, tensile strength, electrical conductivity, and thermal properties) of the Al-12.6 wt.% Si-2 wt.% Ni ternary alloy were investigated. The Al-Si-Ni alloy was prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upwards at different growth rates (V) under a constant temperature gradient G (7.73 K/mm). The microstructure (flake spacings, λ), microhardness (HV), ultimate tensile strength, electrical resistivity, and thermal properties (enthalpy of fusion, specific heat, and melting temperature) of the samples were measured. The influence of growth rate and flake spacing on microhardness, ultimate tensile strength, and electrical resistivity was investigated, and relationships between them were obtained experimentally by regression analysis. According to the results, λ decreases with increasing V, whereas microhardness, ultimate tensile strength, and electrical resistivity increase with increasing V. The variation of electrical resistivity of the cast samples with temperature in the range 300-1200 K was also measured using a standard dc four-point probe technique. The enthalpy of fusion and specific heat of the same alloy were determined by differential scanning calorimetry (DSC) from the heating trace during the transformation from solid to liquid. The results obtained in this work were compared with previous experimental results for binary and ternary alloys.
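Relationships like "λ decreases with increasing V" are conventionally fitted as power laws (e.g., λ = k·V^a with a < 0) by least squares in log-log space. The sketch below uses invented values, not the measured data of this study:

```python
import numpy as np

# illustrative growth rates V (um/s) and flake spacings lambda (um),
# NOT the measured data of this study
V = np.array([8.3, 16.5, 41.6, 83.1, 166.0])
lam = np.array([6.1, 4.4, 2.9, 2.1, 1.5])

# fit lam = k * V**a by ordinary least squares on the logarithms;
# the exponent a comes out negative because lambda decreases as V increases
a, log_k = np.polyfit(np.log(V), np.log(lam), 1)
k = np.exp(log_k)
```

The same log-log fit applies to the microhardness-spacing relationship (HV versus λ), with a sign flip in the exponent.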

Keywords: electrical resistivity, enthalpy, microhardness, solidification, tensile stress

Procedia PDF Downloads 350
24249 Assessing Professionalism, Communication, and Collaboration among Emergency Physicians by Implementing a 360-Degree Evaluation

Authors: Ahmed Al Ansari, Khalid Al Khalifa

Abstract:

Objective: Multisource feedback (MSF), also called 360-degree evaluation, is a process in which questionnaires are distributed amongst medical peers and colleagues to assess physician performance from sources other than the attending or supervising physicians. The aim of this study was to design, implement, and evaluate a 360-degree process for assessing emergency physician trainees in the Kingdom of Bahrain. Method: The study was undertaken at Bahrain Defense Force Hospital, a military teaching hospital in the Kingdom of Bahrain. Thirty emergency physicians, the total population of emergency physicians in the hospital, were assessed. We developed an instrument modified from the Physician Achievement Review (PAR) instrument used to assess physicians in Alberta, focusing only on professionalism, communication skills, and collaboration. To achieve face and content validity, a table of specifications was constructed, a working group was involved in constructing the instrument, and expert opinion was considered. The instrument consisted of 39 items: 15 assessing professionalism, 13 assessing communication skills, and 11 assessing collaboration. Each emergency physician was evaluated by three groups of raters: four emergency physician colleagues, four referral physicians from different departments, and four coworkers from the emergency department. An independent administrative team distributed the instruments and collected them in sealed envelopes; each envelope contained the instrument and a guide to the implementation of the MSF and the purpose of the study. Results: A total of 30 emergency physicians (16 males and 14 females), the total number of emergency physicians in our hospital, were assessed. 
In all, 269 forms were collected: 105 from coworkers in the emergency department, 93 from emergency physician colleagues, and 116 from referral physicians in different departments. The overall mean response rate was 71.2%. The whole instrument was found suitable for factor analysis (KMO = 0.967; Bartlett test significant, p < 0.001). Factor analysis showed that the questionnaire data decomposed into three factors that accounted for 72.6% of the total variance: professionalism, collaboration, and communication. Reliability analysis indicated that the full scale had high internal consistency (Cronbach's α = 0.98). The generalizability coefficient (Ep²) was 0.71 for the surveys. Conclusions: Based on the present results, the current instrument and procedures have high reliability, validity, and feasibility for assessing emergency physician trainees in the emergency room.
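The internal-consistency statistic reported above, Cronbach's α, has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total score). A minimal implementation:

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency of a k-item scale.
    scores: (n_respondents, n_items) matrix of item ratings,
    one row per returned questionnaire."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)
```

With the 39-item instrument, each of the 269 returned forms would be one row; perfectly correlated items drive α toward 1, which is the regime the reported α = 0.98 indicates.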

Keywords: MSF system, emergency, validity, generalizability

Procedia PDF Downloads 332
24248 Multi Universe Existence Based-On Quantum Relativity using DJV Circuit Experiment Interpretation

Authors: Muhammad Arif Jalil, Somchat Sonasang, Preecha Yupapin

Abstract:

This study hypothesizes that our universe lies at the center between a white hole and a black hole, which form an entangled pair. The coupling between them, in terms of spacetime, forms the universe and the things in it. The birth of things is based on energy exchange between the white and black sides. The transition from the white side to the black side is called a wave-matter, which has a speed faster than light with positive gravity; the transition from the black side to the white side, with a speed faster than light and negative gravity, is called a wave-particle. Where the speed equals that of light, the particle rest mass is formed, and things can take shape there; the gravity is zero because it is the center. The gravitational force belongs to the Earth itself because it sits in a position twisted towards the white hole, and is therefore negative. The coupling of black and white holes occurs directly on both sides, with mass formed at saturation, creating universes and other things; there can thus be hundreds of thousands of universes on both sides of the black and white holes before the saturation point of the multi-universe is reached. This work uses the DJV circuit built by the research team, an entangled (two-level) system circuit that has been demonstrated experimentally, so this principle offers a possible interpretation. The work explains the emergence of multiple universes and can be applied as a practical guideline for searching for universes in the future. Moreover, the results indicate that the DJV circuit can create the elementary particles of Feynman's diagrams under rest-mass conditions, which will be discussed for fission and fusion applications.

Keywords: multi-universes, Feynman diagram, fission, fusion

Procedia PDF Downloads 36
24247 YOLO-IR: Infrared Small Object Detection in High Noise Images

Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long

Abstract:

Infrared object detection aims at separating small and dim targets from cluttered backgrounds, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications, such as improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to the noise of the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve the robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model’s ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve the detection accuracy. In addition, because the generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in the F1 score over the existing state-of-the-art model.
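The Wasserstein-distance loss mentioned above is commonly built by modelling each bounding box as a 2-D Gaussian, for which the 2-Wasserstein distance has a closed form. The sketch below illustrates that idea under assumptions of ours (the normalizing constant `c` and the exponential normalization follow the usual normalized-Wasserstein-distance formulation, not necessarily the paper's exact loss):

```python
import math

def wasserstein_box_loss(box_a, box_b, c=12.8):
    """Boxes given as (cx, cy, w, h). Each box is treated as a 2-D Gaussian
    N(center, diag(w/2, h/2)^2); the squared 2-Wasserstein distance between
    two such Gaussians is the sum of squared center and half-size differences.
    c is a dataset-dependent normalizing constant (an assumed value here)."""
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + (wa / 2 - wb / 2) ** 2 + (ha / 2 - hb / 2) ** 2)
    nwd = math.exp(-math.sqrt(w2_sq) / c)   # normalized similarity in (0, 1]
    return 1.0 - nwd                        # loss: 0 for identical boxes
```

Unlike IoU, this distance varies smoothly even when small boxes do not overlap at all, which is what makes it attractive for tiny, noisy infrared targets.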

Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion

Procedia PDF Downloads 25
24246 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infrared sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as identifying and counting animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be retrieved automatically using state-of-the-art deep learning methods. Another major challenge ecologists face is counting a single animal multiple times because it reappears in other images taken by the same or other camera traps; such information is nonetheless extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps that share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. It has been validated in the field and can easily be extended to other wildlife monitoring and management applications where traditional monitoring is expensive and time-consuming.
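The shared timestamps and animal dimensions make a simple deduplication rule possible. The greedy sketch below is our illustration of that recount logic, not the authors' exact rule; the time window and size tolerance are assumed values:

```python
from datetime import datetime, timedelta

def deduplicate(sightings, window=timedelta(minutes=10), size_tol=0.25):
    """sightings: list of dicts {camera, time, species, size} shared across
    the meshed camera traps. Two sightings of the same species within
    `window` and of similar body size are counted as one animal."""
    counted = []
    for s in sorted(sightings, key=lambda s: s["time"]):
        dup = any(
            s["species"] == c["species"]
            and abs(s["time"] - c["time"]) <= window
            and abs(s["size"] - c["size"]) / c["size"] <= size_tol
            for c in counted
        )
        if not dup:
            counted.append(s)
    return len(counted)
```

Because each trap only needs its neighbours' recent records, this check can run on the edge devices themselves rather than in the cloud.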

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 108
24245 Analysis of Big Data

Authors: Sandeep Sharma, Sarabjit Singh

Abstract:

Given user demand and the growth of large volumes of free data, storage solutions now face increasing challenges in protecting, storing, and retrieving data. The day is not far off when storage companies and organizations will start saying 'no' to storing our valuable data, or will start charging huge amounts for its storage and protection. At the same time, environmental conditions make it challenging to establish and maintain new data warehouses and data centers in the face of global warming threats. The challenge of small data is over; the big challenge now is how to manage the exponential growth of data. In this paper, we analyze the growth trend of big data and its future implications. We also focus on the impact of unstructured data on various concerns and suggest some possible remedies to streamline big data.

Keywords: big data, unstructured data, volume, variety, velocity

Procedia PDF Downloads 511
24244 Experimental Study of Hydrogen and Water Vapor Extraction from Helium with Zeolite Membranes for Tritium Processes

Authors: Rodrigo Antunes, Olga Borisevich, David Demange

Abstract:

The Tritium Laboratory Karlsruhe (TLK) has identified zeolite membranes as most promising for tritium processes in future fusion reactors. Tritium diluted in purge gases or gaseous effluents, present in both molecular and oxidized forms, can be pre-concentrated by a stage of zeolite membranes followed by a main downstream recovery stage (e.g., a catalytic membrane reactor). Since 2011, several zeolite membrane samples have been tested to measure their performance in separating hydrogen and water vapor from helium streams. These experiments were carried out in the ZIMT (Zeolite Inorganic Membranes for Tritium) facility, where mass spectrometry and cold traps were used to measure the membranes' performance. The membranes were tested at temperatures from 25 °C up to 130 °C, feed pressures between 1 and 3 bar, and typical feed flows of 2 l/min. During this experimental campaign, several zeolite-type membranes were studied: a hollow-fiber MFI nanocomposite membrane purchased from IRCELYON (France), and tubular MFI-ZSM5, NaA, and H-SOD membranes purchased from the Institute for Ceramic Technologies and Systems (IKTS, Germany). Among these, only the MFI-based membranes showed relevant performance for H2/He separation, with rather high permeances (~0.5-0.7 μmol/(s·m²·Pa) for H2 at 25 °C for MFI-ZSM5) but a limited ideal selectivity of around 2 for H2/He, regardless of the feed concentration. Both MFI and NaA showed higher separation performance with water vapor instead; for example, at 30 °C, the separation factor for MFI-ZSM5 is approximately 10 and 38 for 0.2% and 10% H2O/He, respectively. The H-SOD membrane proved to be considerably defective and was therefore not considered for further experiments. In this contribution, a comprehensive analysis is given of the experimental methods and results obtained over the past four years for the separation performance of different zeolite membranes in an inactive environment. 
These results are encouraging for the experimental campaign with molecular and oxidized tritium that will follow in 2017.
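The two figures of merit quoted above, permeance and ideal selectivity, are straightforward to compute from single-gas measurements. The numbers below are hypothetical, chosen only to mimic the reported magnitudes:

```python
def permeance(flow_mol_per_s, area_m2, dp_pa):
    """Permeance Q = molar flow through the membrane divided by membrane
    area and transmembrane pressure difference, in mol/(s*m^2*Pa)."""
    return flow_mol_per_s / (area_m2 * dp_pa)

def ideal_selectivity(q_a, q_b):
    """Ratio of the single-gas permeances of species a over species b."""
    return q_a / q_b

# hypothetical single-gas measurements (mol/s, m^2, Pa), not TLK data
q_h2 = permeance(1.2e-3, 0.01, 2.0e5)   # ~0.6 umol/(s*m^2*Pa), as reported for H2
q_he = permeance(0.6e-3, 0.01, 2.0e5)
```

An ideal selectivity of q_h2/q_he ≈ 2 is what makes pure H2/He separation marginal, while the much larger H2O/He separation factors make the water-vapor route attractive.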

Keywords: gas separation, nuclear fusion, tritium processes, zeolite membranes

Procedia PDF Downloads 209
24243 Research of Data Cleaning Methods Based on Dependency Rules

Authors: Yang Bao, Shi Wei Deng, WangQun Lin

Abstract:

This paper introduces the concept and principles of data cleaning, analyzes the types and causes of dirty data, and proposes the key steps of a typical cleaning process. It puts forward a data cleaning framework with good scalability and versatility. For data with attribute dependency relations, it designs several violation-data discovery algorithms expressed as formal formulas, which can find data inconsistent with a conditional attribute dependency across all target columns, whether the data is structured (SQL) or unstructured (NoSQL), and gives six data cleaning methods based on these algorithms.
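The core of violation-data discovery for an attribute dependency can be sketched simply: for a functional dependency LHS → RHS, any LHS value mapped to more than one RHS value marks its rows as violations. This is our minimal illustration of the idea, not the paper's algorithms:

```python
from collections import defaultdict

def fd_violations(rows, lhs, rhs):
    """Return the rows violating the functional dependency lhs -> rhs.
    rows: list of dicts (works equally for SQL result sets or NoSQL
    documents); lhs, rhs: tuples of column/field names."""
    groups = defaultdict(set)
    for r in rows:
        groups[tuple(r[c] for c in lhs)].add(tuple(r[c] for c in rhs))
    # a key is dirty if it maps to more than one distinct RHS value
    bad_keys = {k for k, v in groups.items() if len(v) > 1}
    return [r for r in rows if tuple(r[c] for c in lhs) in bad_keys]
```

Once the violating rows are isolated, repair strategies (majority vote, master-data lookup, manual review) can be applied per dirty key.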

Keywords: data cleaning, dependency rules, violation data discovery, data repair

Procedia PDF Downloads 533
24242 PbLi Activation Due to Corrosion Products in WCLL BB (EU-DEMO) and Its Impact on Reactor Design and Recycling

Authors: Nicole Virgili, Marco Utili

Abstract:

The design of the breeding blanket in tokamak fusion energy systems has to guarantee sufficient availability in addition to its functions: tritium breeding self-sufficiency, power extraction, and shielding (of the magnets and the vacuum vessel). All of these functions must be fulfilled under extremely harsh operating conditions, in terms of heat flux and neutron dose as well as the chemical environment of the coolant and breeder, which challenge the structural materials (structural resistance and corrosion resistance). The movement and activation of fluids from the BB to ex-vessel components in a fusion power plant is an important radiological consideration, because flowing material can carry radioactivity to safety-critical areas. This includes gamma-ray emission from the activated fluid and activated corrosion products, and secondary activation resulting from neutron emission, with implications for the safety of maintenance personnel and damage to electrical and electronic equipment. In addition to the PbLi breeder activation, it is important to evaluate the contribution of the activated corrosion products (ACPs) dissolved in the lead-lithium eutectic alloy at different concentration levels. The purpose of this study is therefore to evaluate the PbLi activity using the FISPACT-II inventory code, with emphasis on how the design of the EU-DEMO WCLL and the potential recycling of the breeder material are impacted by the activation of PbLi and the associated ACPs. For this scope, the following computational tools, data, and geometry were considered: • Neutron source: EU-DEMO neutron flux < 10¹⁴/cm²/s. • Neutron flux distribution in the equatorial breeding blanket module (BBM) #13 in the WCLL BB outboard central zone, the most activated zone, computed with MCNP6 so as to introduce a conservative assumption. 
• The recommended geometry model: the 2017 EU-DEMO CAD model. • Blanket module material specifications (composition). • Activation calculations for different ACP concentration levels in the PbLi breeder, with a given chemistry in stationary equilibrium conditions, using the FISPACT-II code. The results suggest that a waiting time of about 10 years from shut-down (SD) is needed before the PbLi can be safely manipulated for recycling operations with simple shielding requirements. The dose rate is mainly due to the PbLi itself, and the ACP concentration (x1 or x100) does not shift the result. In conclusion, the results show that ACP levels have no impact on PbLi activation.
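The reported ~10-year cooling time before recycling can be illustrated, in drastically simplified form, with single-nuclide exponential decay. This is a one-line teaching model with invented numbers, not a substitute for the FISPACT-II inventory calculation the study actually performs:

```python
import math

def cooldown_time(rate0, limit, half_life_years):
    """Years until a dose rate dominated by a single nuclide with the given
    half-life decays from rate0 below `limit` (simple exponential model)."""
    lam = math.log(2.0) / half_life_years       # decay constant, 1/years
    return math.log(rate0 / limit) / lam
```

In reality the PbLi dose rate is a sum over many nuclides with different half-lives, which is exactly why an inventory code is needed; the single-exponential form only shows why the waiting time scales with the log of the required dose reduction.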

Keywords: activation, corrosion products, recycling, WCLL BB, PbLi

Procedia PDF Downloads 82
24241 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration

Authors: Cleyton S. Stampa

Abstract:

This paper addresses, through a numerical approach, the prospects of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units. This new class of PCMs has become one of the most promising alternatives for LTES because these materials combine high thermal conductivity with a large heat of fusion per unit volume. The chosen LTES consists of several horizontal parallel slabs filled with PCM; the heat transfer fluid (HTF) circulates in laminar forced convection through the channel formed between each pair of consecutive slabs. The study deals with the LTES charging process (heat storing), using pure gallium as the PCM, and considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model used to simulate the isothermal phase change is based on a volume-averaged enthalpy method, which is verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, the flow is assumed to be thermally developing while the velocity profile is already fully developed. The study aims to determine the effect of solid subcooling on the melting rate through comparisons with the melting of a solid that starts at its fusion temperature. To better understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting of commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound). 
The present work adopts the configurations that several researchers have established in parametric studies of this type of LTES as leading to high thermal efficiency: for the geometric aspects, the gap of the channel formed by two consecutive slabs and the thickness and length of the slab; for the HTF, the type of fluid, the mass flow rate, and the inlet temperature.
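A one-dimensional explicit version of the enthalpy method makes the role of solid subcooling concrete: cells start below the fusion temperature, so some wall heat first goes into sensible heating before any latent storage occurs. The property values and boundary conditions below are illustrative (gallium-like orders of magnitude), not the paper's inputs, and natural convection in the melt is deliberately omitted:

```python
import numpy as np

# gallium-like properties (W/m/K, kg/m^3, J/kg/K, J/kg, K) -- illustrative only
k, rho, cp, L, Tm = 32.0, 6093.0, 381.0, 80160.0, 302.9
n, dx, dt = 50, 1e-3, 0.01          # 50 cells of 1 mm; explicit, stable time step
T_wall, T_init = 320.0, 295.0       # hot wall; subcooled initial solid (T_init < Tm)

# volumetric enthalpy with datum H = 0 for solid at Tm
H = np.full(n, rho * cp * (T_init - Tm))

def temperature(H):
    T = np.where(H < 0.0, Tm + H / (rho * cp), Tm)          # solid / mushy at Tm
    return np.where(H > rho * L, Tm + (H - rho * L) / (rho * cp), T)  # liquid

for _ in range(20000):              # march 200 s of physical time
    T = temperature(H)
    # ghost nodes: fixed hot wall on the left, adiabatic on the right
    T_full = np.concatenate(([T_wall], T, [T[-1]]))
    H += dt * k * (T_full[2:] - 2 * T_full[1:-1] + T_full[:-2]) / dx**2

melt_fraction = np.clip(H / (rho * L), 0.0, 1.0).mean()
```

Because the enthalpy is the conserved variable, the melting front needs no explicit tracking: a cell's local melt fraction is just clip(H/(ρL), 0, 1), and the subcooling case differs from the Tm-start case only in the initial H field.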

Keywords: flat slab, heat storing, pure metal, solid subcooling

Procedia PDF Downloads 110
24240 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy

Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang

Abstract:

In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy for strengthening traditional Al-based alloys, and the laser-driven thermite reaction is a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, which makes it challenging to produce dense Al-based materials. This work therefore develops a processing strategy that combines consecutive high-energy laser melting and low-energy laser remelting scans to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder, the 5 wt% Fe₂O₃ addition aiming at balanced strength and ductility. A high relative density (98.2 ± 0.55%) was obtained by optimizing the laser melting (Emelting) and laser remelting (Eremelting) surface energy densities to Emelting = 35 J/mm² and Eremelting = 5 J/mm². The results further reveal the necessity of increasing Emelting to improve the spreading and wetting of the liquid metal by breaking up the Al₂O₃ films surrounding the molten pools; however, high-energy laser melting produced considerable porosity, including H₂-, O₂- and keyhole-induced pores. The subsequent low-energy laser remelting closed the resulting internal pores, backfilled open gaps, and smoothed the solidified surfaces. As a result, the material was densified by repeating laser melting and remelting layer by layer. Even with two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). 
The fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. These results are expected to provide valuable information for processing other powder mixtures with severe porosity or oxide-film formation potential, given the demonstrated contribution of the melting/remelting strategy to densifying the material and achieving good mechanical properties during LPBF.

Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties

Procedia PDF Downloads 128
24239 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field

Authors: Jeronimo Cox, Tomonari Furukawa

Abstract:

Magnetometers have become more popular in inertial measurement units (IMU) for their ability to correct estimations using the earth's magnetic field. Accelerometer and gyroscope-based packages fail with dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with IMU. Tracking a high-speed motion is especially difficult with limited observability. Visual obstruction of motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited with the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset from the origin of the center of data points when a magnetometer is rotated. The magnitude of hard iron distortion is dependent on proximity to distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is more of a contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include the production of spatial mapping of magnetic field and collection of distortion signatures to better aid location tracking. 
The goal of this paper is to compare magnetometer methods that do not require pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Three techniques are compared to confirm the robustness and accuracy of each: conventional calibration by collecting data while rotating at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assumed constant under positional change. The measured motion is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison for each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
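As a concrete illustration of the hard iron offset discussed above, the center of the data cloud collected while rotating a magnetometer can be recovered with a linear least-squares sphere fit. This is a minimal sketch under the assumption of a locally uniform field during the rotation, not the paper's own calibration pipeline; the function name is ours.

```python
import numpy as np

def fit_hard_iron_offset(samples):
    """Estimate the hard iron offset as the center of a sphere fitted to
    magnetometer readings collected while rotating the sensor in place.

    From |m - c|^2 = r^2 we get the linear system
        2 m.c + (r^2 - |c|^2) = |m|^2,
    solved for c (the offset) and r (the local field magnitude).
    """
    m = np.asarray(samples, dtype=float)            # shape (N, 3)
    A = np.hstack([2.0 * m, np.ones((len(m), 1))])  # unknowns: [c, r^2 - |c|^2]
    b = np.sum(m * m, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]                                # hard iron offset
    radius = np.sqrt(sol[3] + center @ center)      # local field magnitude
    return center, radius
```

Subtracting the fitted center from raw readings removes the hard iron component; soft iron correction (axis scaling) would require fitting an ellipsoid instead of a sphere.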

Keywords: motion tracking, sensor fusion, magnetometer, state estimation

Procedia PDF Downloads 51
24238 Mining Big Data in Telecommunications Industry: Challenges, Techniques, and Revenue Opportunity

Authors: Hoda A. Abdel Hafez

Abstract:

Mining big data represents a big challenge nowadays, and much research is concerned with mining massive amounts of data and big data streams. Mining big data faces many challenges, including scalability, speed, heterogeneity, accuracy, provenance and privacy. In the telecommunication industry, mining big data is like mining for gold: it represents a big opportunity for maximizing the revenue streams in this industry. This paper discusses the characteristics of big data (volume, variety, velocity and veracity), data mining techniques and tools for handling very large data sets, mining big data in telecommunication, and the benefits and opportunities gained from them.

Keywords: mining big data, big data, machine learning, telecommunication

Procedia PDF Downloads 369
24237 Field Environment Sensing and Modeling for Pears towards Precision Agriculture

Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani

Abstract:

The introduction of sensor technologies into agriculture is a necessary step toward realizing Precision Agriculture. Although sensing methodologies themselves have been spreading owing to miniaturization and falling sensor costs, analyzing and understanding the sensing data remains difficult. Targeting the pear cultivar 'Le Lectier', which is particular to Niigata in Japan, cultivation environmental data have been collected at pear fields by eight sorts of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors measure the environment inside the fruit bag used for pre-harvest bagging of pears; in this experiment, three kinds of fruit bags were used. After more than 100 days of continuous measurement, a large volume of sensing data was collected. Firstly, correlation analysis among the data measured by the respective sensors reveals that one sensor can replace another, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in the characteristics and performance of the three kinds of fruit bags are clarified by the inner-bag environmental measurements; statistical analysis shows that they differ significantly from each other. Lastly, a relational model between the sensing data and the pear outlook quality is established by use of a Structural Equation Model (SEM). Here, the pear outlook quality concerns the presence of stains, blobs, scratches, and the like caused by physiological impairment or disease. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression; it is used to construct a model connecting independent and dependent variables.
The proposed SEM model relates the measured sensing data to the pear outlook quality determined on the basis of farmer judgement. In particular, it is found that the inner-bag humidity variable has a relatively strong effect on the pear outlook quality; inner-bag humidity sensing might therefore help the farmers control it. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results in this research contribute to spreading Precision Agriculture technologies among the farmers growing 'Le Lectier'.
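The first analysis step above, finding sensor pairs so strongly correlated that one could stand in for the other, can be sketched with a plain Pearson correlation matrix. This is an illustrative sketch rather than the authors' code; the sensor names and the 0.95 threshold are hypothetical.

```python
import numpy as np

def redundant_sensor_pairs(readings, names, threshold=0.95):
    """Flag sensor pairs whose time series are so strongly correlated
    that one sensor could plausibly replace the other.

    readings: array of shape (n_samples, n_sensors), one column per sensor
    names: sensor labels matching the columns
    """
    corr = np.corrcoef(np.asarray(readings, dtype=float), rowvar=False)
    pairs = []
    n = corr.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                pairs.append((names[i], names[j], round(corr[i, j], 3)))
    return pairs
```

In a field deployment the flagged pairs would be candidates for dropping one sensor per pair, which is the cost-saving argument the abstract makes.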

Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model

Procedia PDF Downloads 282
24236 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications: A Software Testing Approach

Authors: Theertha Chandroth

Abstract:

This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore techniques and methodologies for validating, comparing, and integrating JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
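One basic technique for the checking described above is to normalize the XML document into the same nested-dictionary shape that parsed JSON produces, then compare the two structures. The sketch below, with hypothetical helper names, handles only a simple case: no XML attributes, unique child tags, and string leaf values (XML text is always a string, so numeric JSON values would need coercion before comparison).

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Convert a simple XML element (no attributes, unique child tags)
    into a nested dict of text values, for comparison against JSON."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

def json_matches_xml(json_text, xml_text, root_key):
    """Check that a JSON document and an XML document carry the same
    leaf values under the same field names."""
    return json.loads(json_text)[root_key] == xml_to_dict(ET.fromstring(xml_text))
```

A test suite would pair such structural checks with negative cases (renamed fields, missing elements, reordered lists) to exercise the data-validation paths the abstract mentions.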

Keywords: XML, JSON, data comparison, integration testing, Python, SQL

Procedia PDF Downloads 85
24235 Inverse Matrix in the Theory of Dynamical Systems

Authors: Renata Masarova, Bohuslava Juhasova, Martin Juhas, Zuzana Sutova

Abstract:

In dynamical system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper fuses the classical theory with the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in Matlab.
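The role of the inverse matrix is easy to see in the standard state-space formula G(s) = C(sI - A)^(-1)B + D. A minimal numerical sketch (in Python rather than the paper's Matlab) evaluating G at a single complex frequency might look like:

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D for the state-space system
    x' = Ax + Bu, y = Cx + Du at a complex frequency s.

    The inversion of (sI - A) is the step whose efficient computation
    the paper's procedures target."""
    A, B, C, D = (np.asarray(M, dtype=complex) for M in (A, B, C, D))
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D
```

For the scalar system A = [[-2]], B = C = [[1]], D = [[0]], this evaluates 1/(s+2), giving G(0) = 0.5.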

Keywords: dynamic system, transfer matrix, inverse matrix, modeling

Procedia PDF Downloads 482
24234 Analysis of Secondary Peak in Hα Emission Profile during Gas Puffing in Aditya Tokamak

Authors: Harshita Raj, Joydeep Ghosh, Rakesh L. Tanna, Prabal K. Chattopadhyay, K. A. Jadeja, Sharvil Patel, Kaushal M. Patel, Narendra C. Patel, S. B. Bhatt, V. K. Panchal, Chhaya Chavda, C. N. Gupta, D. Raju, S. K. Jha, J. Raval, S. Joisa, S. Purohit, C. V. S. Rao, P. K. Atrey, Umesh Nagora, R. Manchanda, M. B. Chowdhuri, Nilam Ramaiya, S. Banerjee, Y. C. Saxena

Abstract:

Efficient gas fueling is a critical aspect that needs to be mastered in order to maintain the plasma density required to carry out fusion. This requires a fair understanding of fuel recycling in order to optimize the gas fueling. In the Aditya tokamak, multiple gas puffs are applied in a precise and controlled manner for hydrogen fueling during the flat top of the plasma discharge, which has been instrumental in achieving discharges with enhanced density as well as energy confinement time. Following each gas puff, we observe peaks in the temporal profiles of Hα emission, soft X-ray (SXR) emission, and chord-averaged electron density in a number of discharges, indicating efficient gas fueling. Interestingly, the Hα temporal profile exhibited an additional peak following the peak corresponding to each gas puff. These additional Hα peaks appeared in between two gas puffs, indicating the presence of a secondary hydrogen source apart from the gas puffs. A thorough investigation revealed that these secondary Hα peaks coincide with hard X-ray bursts, which come from the interaction of runaway electrons with the vessel limiters. This suggests that the runaway electrons (REs) that hit the wall bring out the absorbed hydrogen and oxygen, making the interaction of REs with the limiter a secondary hydrogen source. These observations suggest that runaway-electron-induced recycling should also be included in the recycling particle source in particle balance calculations in tokamaks. The observation of two Hα peaks associated with one gas puff and their roles in enhancing and maintaining plasma density in the Aditya tokamak will be discussed in this paper.

Keywords: fusion, gas fueling, recycling, Tokamak, Aditya

Procedia PDF Downloads 368
24233 Random Variation of Treated Volumes in Fractionated 2D Image Based HDR Brachytherapy for Cervical Cancer

Authors: R. Tudugala, B. M. A. I. Balasooriya, W. M. Ediri Arachchi, R. W. M. W. K. Rathnayake, T. D. Premaratna

Abstract:

Brachytherapy involves placing a source of radiation near the cancer site, which gives a promising prognosis for cervical cancer treatments. The purpose of this study was to evaluate the effect of random variation of treated volumes between fractions in 2D image based fractionated high dose rate brachytherapy for cervical cancer at the National Cancer Institute Maharagama, Sri Lanka. Dose plans were analyzed for 150 cervical cancer patients with orthogonal radiograph (2D) based brachytherapy. ICRU treated volumes were modeled by translating the applicators with the help of the 'Multisource HDR plus' software. The difference of treated volumes with respect to the applicator geometry was analyzed using SPSS 18 software to derive patient-population-based estimates of delivered treated volumes relative to ideally treated volumes. Packing was evaluated according to bladder dose, rectum dose, and geometry of the dose distribution by three consultant radiation oncologists. The difference of treated volumes depends on the type of applicator used in the fractionated brachytherapy. The mean of the 'Difference of Treated Volume' (DTV) was -0.48 cm³ for the 'Evenly activated tandem length' (ET) group and 11.85 cm³ for the 'Unevenly activated tandem length' (UET) group. The range of the DTV for the ET group was 35.80 cm³, whereas that of the UET group was 104.80 cm³. A one-sample T test was performed to compare the DTV with the ideal treated volume difference (0.00 cm³): the P value was 0.732 for the ET group and 0.00 for the UET group. Moreover, an independent two-sample T test was performed to compare the ET and UET groups, and the calculated P value was 0.005. Packing was evaluated under three categories: 59.38% of treatments used the 'Convenient Packing Technique', 33.33% the 'Fairly Packing Technique', and 7.29% 'Not Convenient Packing'.
Random variation of the treated volume in the ET group is much lower than in the UET group, and there is a significant difference (p<0.05) between the ET and UET groups, which affects the dose distribution of the treatment. Furthermore, it can be concluded that nearly 92.71% of patients were packed with an acceptable packing technique at NCIM, Sri Lanka.
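The two hypothesis tests reported above can be reproduced in outline with standard routines. The sketch below uses synthetic DTV samples (the study's raw per-patient data are not available here), so the exact P values differ from those reported; only the shape of the analysis is the point.

```python
import numpy as np
from scipy import stats

# Hypothetical treated-volume differences (cm^3) for the two tandem groups,
# drawn to roughly echo the reported group means; not the study's data.
rng = np.random.default_rng(1)
et_dtv = rng.normal(loc=-0.5, scale=9.0, size=60)    # evenly activated tandem
uet_dtv = rng.normal(loc=11.9, scale=20.0, size=60)  # unevenly activated tandem

# One-sample t-test of each group's DTV against the ideal difference of 0 cm^3
t_et, p_et = stats.ttest_1samp(et_dtv, popmean=0.0)
t_uet, p_uet = stats.ttest_1samp(uet_dtv, popmean=0.0)

# Independent two-sample t-test comparing the ET and UET groups
t_ind, p_ind = stats.ttest_ind(et_dtv, uet_dtv, equal_var=False)
print(p_et, p_uet, p_ind)
```

With these synthetic samples the ET group is consistent with the ideal 0 cm³ difference while the ET/UET comparison is significant, mirroring the pattern the study reports.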

Keywords: brachytherapy, cervical cancer, high dose rate, tandem, treated volumes

Procedia PDF Downloads 172
24232 Case Report of a Secretory Carcinoma of the Salivary Gland: Clinical Management Following High-Grade Transformation

Authors: Wissam Saliba, Mandy Nicholson

Abstract:

Secretory carcinoma (SC) is a rare type of salivary gland cancer. It was first recognized as a distinct type of malignancy in 2010 and was initially termed "mammary analogue secretory carcinoma" because of similarities with secretory breast cancer. The name was later changed to SC. Most SCs originate in the parotid glands, and most harbour a rare gene fusion: ETV6-NTRK3. This mutation is rare in common cancers and common in rare cancers; it is present in most secretory carcinomas. Disease outcomes for SC are usually described as favourable, as many cases of SC are low grade (LG) and cancer growth is slow. In early stages, localized therapy is usually indicated (surgery and/or radiation). Despite a favourable prognosis, a subset of cases can be much more aggressive. These cases tend to be of high grade (HG) and are associated with a poorer prognosis. Management of such cases can be challenging due to limited evidence for effective systemic therapy options. This case report describes the clinical management of a 46-year-old male patient with a unique case of SC. He was initially diagnosed with a low/intermediate grade carcinoma of the left parotid gland in 2009; he was treated with surgery and adjuvant radiation. Surgical pathology favoured primary salivary adenocarcinoma, and 2 lymph nodes were positive for malignancy. SC was not yet recognized as a distinct type of cancer at the time of diagnosis, and the pathology report validated this gap by stating that the specimen lacked features of the defined types of salivary carcinoma. Slow-growing pulmonary nodules were identified in 2017. In 2020, approximately 11 years after the initial diagnosis, the patient presented with malignant pleural effusion. Pathology from a pleural biopsy was consistent with metastatic poorly differentiated cancer of likely parotid origin, likely mammary analogue secretory carcinoma.
The specimen was sent for Next Generation Sequencing (NGS); the ETV6-NTRK3 gene fusion was confirmed, and systemic therapy was initiated. One cycle of carboplatin/paclitaxel was given in June 2020. He was switched to Larotrectinib (an NTRK inhibitor (NTRKi)) later that month. Larotrectinib was continued for approximately 9 months, with discontinuation in March 2021 due to disease progression. A second-generation NTRKi (Selitrectinib) was accessed and prescribed through a single-patient study. Selitrectinib was well tolerated, and the patient experienced a complete radiological response within ~4 months. Disease progression occurred once again in October 2021. Progression was slow, and Selitrectinib was continued while the medical team performed a thorough search for additional treatment options. In January 2022, a liver lesion biopsy was performed, and NGS showed an NTRK G623R solvent-front resistance mutation. Various treatment pathways were considered. The patient pursued another investigational NTRKi through a clinical trial, and Selitrectinib was discontinued in July 2022. Excellent performance status was maintained throughout the entire course of treatment. It can be concluded that NTRK inhibitors provided satisfactory efficacy and tolerability for this patient with high-grade transformation of an NTRK gene fusion cancer. In the future, more clinical research is needed on systemic treatment options for high-grade transformations in NTRK gene fusion SCs.

Keywords: secretory carcinoma, high-grade transformations, NTRK gene fusion, NTRK inhibitor

Procedia PDF Downloads 86
24231 Reviewing Privacy Preserving Distributed Data Mining

Authors: Sajjad Baghernezhad, Saeideh Baghernezhad

Abstract:

Nowadays, given the ever-increasing amount of data being produced, methods such as data mining for extracting knowledge are unavoidable. One issue in data mining is the inherently distributed nature of the data: the parties that create or receive such data, whether corporate or individual, usually do not give their information freely to others. Yet there is no guarantee that someone can mine particular data without intruding on the owner's privacy. Sending data and then gathering it, whether the partitioning is vertical or horizontal, depends on the chosen privacy-preserving approach and is carried out so as to improve data privacy. This study attempts a comprehensive comparison of privacy-preserving data mining methods, including general techniques such as data randomization and encoding, and examines the strong and weak points of each.
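Of the general methods mentioned, data randomization is the simplest to illustrate: each record is published with zero-mean noise added, masking individual values while leaving aggregate statistics estimable. A minimal sketch (function names ours, not from the paper):

```python
import random

def perturb(values, noise_scale):
    """Additive-noise data perturbation: each party publishes value + noise,
    so individual records are masked from the miner."""
    return [v + random.gauss(0.0, noise_scale) for v in values]

def reconstructed_mean(perturbed):
    """Because the noise has zero mean, it cancels in aggregate and the
    population mean remains estimable from the perturbed data alone."""
    return sum(perturbed) / len(perturbed)
```

The privacy/utility trade-off lives in `noise_scale`: larger noise hides individual values better but widens the error bars on any statistic the miner computes.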

Keywords: data mining, distributed data mining, privacy protection, privacy preserving

Procedia PDF Downloads 489
24230 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has taken sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor, but to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this may instead occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network.
For example, in a water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or, conversely, switch on sleeping mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a case study, a vision of the next generation of smart sensor networks is proposed, and each key component is discussed, which will hopefully inspire researchers working in the sensor research domain.
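Of the three smart sensing methods mentioned, simple thresholding is the easiest to sketch. The toy filter below, with hypothetical names and thresholds, reports only threshold crossings (with a hysteresis band to suppress chatter) rather than streaming every reading, which is the kind of on-board pre-processing the physical layer performs:

```python
def smart_filter(readings, threshold, band=0.0):
    """Simple-thresholding sketch of on-board pre-processing: emit an event
    only when the water level crosses the alarm threshold, with an optional
    hysteresis band so small fluctuations near the threshold do not chatter."""
    events, alarmed = [], False
    for t, level in enumerate(readings):
        if not alarmed and level >= threshold:
            alarmed = True
            events.append((t, level, "rise"))
        elif alarmed and level <= threshold - band:
            alarmed = False
            events.append((t, level, "fall"))
    return events
```

Only the short event list would be transmitted; the statistical-model and MoPBAS methods the paper compares replace the fixed threshold with an adaptive one.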

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 350
24229 The Right to Data Portability and Its Influence on the Development of Digital Services

Authors: Roman Bieda

Abstract:

The General Data Protection Regulation (GDPR) will come into force on 25 May 2018, creating a new legal framework for the protection of personal data in the European Union. Article 20 of the GDPR introduces a right to data portability. This right allows data subjects to receive the personal data which they have provided to a data controller in a structured, commonly used and machine-readable format, and to transmit this data to another data controller. The right to data portability, by facilitating the transfer of personal data between IT environments (e.g., applications), will also make it easier to change service providers (e.g., changing a bank or a cloud computing service provider). Therefore, it will contribute to the development of competition and the digital market. The aim of this paper is to discuss the right to data portability and its influence on the development of new digital services.

Keywords: data portability, digital market, GDPR, personal data

Procedia PDF Downloads 442
24228 Recent Advances in Data Warehouse

Authors: Fahad Hanash Alzahrani

Abstract:

This paper describes some recent advances in the quickly developing area of data storage and processing based on data warehouse and data mining techniques, covering the software, hardware, data mining algorithms and visualisation techniques that share common features across the specific problems and tasks of their implementation.

Keywords: data warehouse, data mining, knowledge discovery in databases, on-line analytical processing

Procedia PDF Downloads 367
24227 Realization of Wearable Inertial Measurement Units-Sensor-Fusion Harness to Control Therapeutic Smartphone Applications

Authors: Svilen Dimitrov, Manthan Pancholi, Norbert Schmitz, Didier Stricker

Abstract:

This paper presents the end-to-end development of a wearable motion-sensing harness, consisting of a computational unit and four inertial measurement units, to control three smartphone therapeutic games for children. The inertial data is processed in real time to obtain lower body motion information such as knee raises, feet taps and squats. By providing a Wi-Fi connection interface, the sensor harness acts as a wireless remote control for smartphone applications. By performing various lower body movements, the users trigger corresponding game state changes. In contrast to current similar offerings, such as the Nintendo Wii Remote, Xbox Kinect and PlayStation Move, this product, consisting of the sensor harness and the applications on top of it, is fully wearable, which means it does not rely on the user being bound to a particular software- or hardware-equipped space.
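The lower-body motion detection described above can be illustrated with a toy repetition counter. This is a simplification with hypothetical thresholds, not the harness's actual fusion algorithm, and it assumes the IMU pipeline already yields a stream of estimated thigh elevation angles:

```python
def count_knee_raises(angles, up_thresh=60.0, down_thresh=30.0):
    """Count knee raises from a stream of estimated thigh elevation angles
    (degrees). A repetition is a rise above up_thresh followed by a return
    below down_thresh; the two-threshold hysteresis avoids double-counting
    jitter near a single level."""
    reps, raised = 0, False
    for a in angles:
        if not raised and a >= up_thresh:
            raised = True
        elif raised and a <= down_thresh:
            raised = False
            reps += 1
    return reps
```

Each completed repetition would be sent over the Wi-Fi link as a game-state event, the way the harness remote-controls the therapeutic games.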

Keywords: wearable harness, inertial measurement units, smartphone therapeutic games, motion tracking, lower-body activity monitoring

Procedia PDF Downloads 368
24226 How to Use Big Data in Logistics Issues

Authors: Mehmet Akif Aslan, Mehmet Simsek, Eyup Sensoy

Abstract:

Big Data stands for today’s cutting-edge technology. As the technology becomes widespread, so does data. Utilizing massive data sets enables companies to gain competitive advantages over their adversaries. Among the many areas of Big Data usage, logistics plays a significant role in both the commercial sector and the military. This paper lays out what big data is and how it is used in both military and commercial logistics.

Keywords: big data, logistics, operational efficiency, risk management

Procedia PDF Downloads 609