Search results for: segmentation accuracy
200 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating this data from multiple sources and transferring it to the appropriate client is fundamental to IoT development, and handling such a huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they go to sleep and wake up periodically or aperiodically depending on the traffic load, which reduces energy consumption. Sometimes these devices become disconnected because their batteries are depleted, and when a node is not available, the IoT network delivers incomplete, missing, or inaccurate data. Moreover, many IoT applications, such as vehicle tracking and patient tracking, require the IoT devices to be mobile; if the distance between a device and the sink node becomes greater than allowed, the connection is lost. After such disconnections, other devices join the network to replace the broken-down or departed ones. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data; because of this dynamic behaviour, the actual cause of abnormal data is often unknown. If data are of poor quality, decisions based on them are likely to be unsound, so it is highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and proposed several Machine Learning (ML), stochastic, and statistical methods for analysing stored data in the data processing layer, without addressing the challenges and issues that arise from the dynamic nature of IoT devices and how it impacts data quality. This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model is built for sensors monitoring water quality, using DBSCAN clustering together with weather sensors. An extensive study was carried out on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis of the correlation between the independent data streams of the two sets of sensors is presented. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, and accuracy checked with the help of cluster position. Finally, a statistical analysis is performed on the clusters produced by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
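As an illustration of the kind of pipeline this abstract describes, the sketch below clusters combined water-quality and weather readings with DBSCAN, flags noise points as candidate outliers, checks completeness per data stream, and evaluates per-cluster consistency with the Coefficient of Variation. The file name, column names, and DBSCAN parameters are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch: DBSCAN-based screening of water-quality sensor data,
# with consistency measured by the Coefficient of Variation (CoV).
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("water_and_weather_readings.csv")             # hypothetical file
features = ["water_temp", "turbidity", "air_temp", "rainfall"]  # assumed columns

completeness = 1 - df[features].isna().mean()                   # completeness per data stream

clean = df.dropna(subset=features)
X = StandardScaler().fit_transform(clean[features])
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)         # eps/min_samples are illustrative
clean = clean.assign(cluster=labels)

outliers = clean[clean.cluster == -1]                           # DBSCAN noise = candidate outliers

# Consistency per cluster via the Coefficient of Variation (CoV = std / mean)
cov = (clean[clean.cluster != -1]
       .groupby("cluster")[features]
       .agg(lambda s: s.std() / s.mean()))

print("completeness per stream:\n", completeness.round(3))
print(f"{len(outliers)} candidate outliers flagged")
print("per-cluster CoV:\n", cov.round(3))
```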
Procedia PDF Downloads 142
199 Using Real Truck Tours Feedback for Address Geocoding Correction
Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle
Abstract:
When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent on tours by the trucks and maximizing the number of visited customers. They assume that the upstream real data used to optimize a transporter's tours, such as customers' real constraints, customers' addresses, and their GPS coordinates, are free from errors. However, in real transporter situations, upstream data are often of poor quality because of address geocoding errors and irrelevant addresses received via EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates; even with a good geocoder, an inaccurate address leads to bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city centre. Another obvious geocoding issue is that the maps used by geocoders are not updated regularly, so new buildings may not exist on the maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is therefore not really useful and leads to bad, incoherent solution tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon, and we work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TOMTOM GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), which we retrieve to extract the real truck routes. The aim of this work is to use the experience of the driver and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct the badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: on average, 70% of the GPS coordinates of a tour's addresses are corrected automatically, and the remaining coordinates are corrected manually, with the user guided by indications that help him correct them. This study shows the importance of taking the feedback of the trucks into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and unfortunately, address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and improve the upcoming ones; we develop a method to do a large part of this automatically.
Keywords: driver experience feedback, geocoding correction, real truck tours
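The sketch below illustrates the general idea of validating or correcting geocoded addresses from recorded truck stops: if the GPS position logged when the driver actually served a customer is far from the geocoded coordinates, the observed stop position is proposed as the corrected location. The data structures and the 500 m tolerance are assumptions for illustration, not details from the paper.

```python
# Minimal sketch of feedback-based geocoding correction (illustrative only).
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

THRESHOLD_M = 500  # assumed tolerance between geocoded point and observed stop

def correct_geocoding(customers, truck_stops):
    """customers: {id: (lat, lon)} geocoded; truck_stops: {id: (lat, lon)} observed on tour."""
    corrected, manual_review = {}, []
    for cid, geo in customers.items():
        stop = truck_stops.get(cid)
        if stop is None:
            manual_review.append(cid)          # no tour feedback yet for this customer
        elif haversine_m(*geo, *stop) <= THRESHOLD_M:
            corrected[cid] = geo               # geocoding validated by the real tour
        else:
            corrected[cid] = stop              # replace by the observed stop position
    return corrected, manual_review
```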
Procedia PDF Downloads 675
198 Effects of Sulphide Mining on AISI 304 Stainless Steel
Authors: Aguasanta Miguel Sarmiento, José Miguel Dávila, María Luisa de la Torre
Abstract:
Acid mine drainage (AMD) is an acidic leachate with high levels of metals and sulphates in solution, which seriously affects the durability and strength of metallic materials used in the construction of structural and mechanical components. This paper presents the results of the evolution over time of the reduction in tensile strength and defects in AISI 304 stainless steel in contact with acid mine drainage. For this purpose, a total of 30 bars with a diameter of 8 mm and a length of 14 cm were placed transversely in the course of a stream contaminated by AMD from the sulphide mines of the Iberian Pyritic Belt (SW Spain). This stream has average pH values of 2.6, a potential of 660 mV, and average concentrations of 12 g/L of sulphates, 1.2 g/L of Fe, 191 mg/L of Zn, etc. Every two months of exposure, 6 stainless steel bars were extracted from the acid stream. They were subjected to surface roughness analysis carried out with the help of Mitutoyo Surftest SJ-210 surface roughness tester. The analysis was carried out at three different points on 5 specimens from each series. The average reading of each parameter is calculated in order to ensure the accuracy of the measurements and the surface coverage. Arithmetic mean roughness value (Ra), mean roughness depth (Rz), and root mean square roughness (Rq) were measured. Five specimens from each series were statically tensile tested using universal equipment (Servosis ME 403 of 200kN). The specimens were clamped at their ends with two grips for cylindrical sections, and the tensile force was applied at a constant speed of 0.5 kN/s, according to the requirements of standard UNE-EN ISO 6892-1: 2020. To determine the modulus of elasticity, limits close to 15% and 55% of the maximum load were used, depending on the course of each test. Field Emission Scanning Electron Microscopy (FESEM) was used to observe corrosion products and defects generated by exposure to AMD. Energy dispersive X-ray spectrometry (EDS) was used to analyse the chemical composition of the corrosion products formed. For this purpose, small pieces were cut from the resulting specimens, cleaned, and embedded in epoxy resin. The results show that after only 5 months of exposure of AISI 304 stainless steel to the mining environment, the surface roughness increases significantly, with average depths almost 6 times greater than the initial one. Cracks are observed on the surface of the material, which increases in size with the time of exposure. A large number of grains with a composition of more than 57% Pb and 16% Sn can be observed inside these cracks. Tensile tests show a reduction in the resistance of this material after only two months of exposure. The results show the serious problems that would result from the use of this material for the use of mechanical components in a sulphide mining environment, not only because of the significant reduction in the lifetime of such components, but also because of the implications for human safety.Keywords: acid mine drainage, corrosion, mechanical properties, stainless steel
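For reference, the three roughness parameters reported in the abstract can be computed from a measured profile as sketched below. This is a simplified, unfiltered calculation on a hypothetical profile file, not the Surftest SJ-210 evaluation procedure.

```python
# Simplified roughness parameters from a profile z(x) (illustrative, no ISO filtering).
import numpy as np

z = np.loadtxt("profile_um.txt")        # hypothetical profile heights in micrometres
z = z - z.mean()                        # deviation from the mean line

Ra = np.mean(np.abs(z))                 # arithmetic mean roughness
Rq = np.sqrt(np.mean(z ** 2))           # root mean square roughness
# Rz approximated as the mean peak-to-valley height over five equal sampling lengths
segments = np.array_split(z, 5)
Rz = np.mean([s.max() - s.min() for s in segments])

print(f"Ra = {Ra:.2f} µm, Rq = {Rq:.2f} µm, Rz = {Rz:.2f} µm")
```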
Procedia PDF Downloads 22
197 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science in the measurement of psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now being used widely to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, as well as dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover the most basic binary IRT model, known as the 1-parameter logistic (1-PL) model dating back to over 50 years ago, up until the most recent complex, 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. Simulated data of N=500,000 subjects who responded to four dichotomous items and 2. A pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences to alcohol use. Real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and analyses of real-world pilot will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate simulated data and analyses will be available upon request to allow for replication of results.Keywords: instrument development, item response theory, latent trait theory, psychometrics
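For readers unfamiliar with the notation, the four-parameter logistic model discussed in the abstract can be written as below; the 3-PL, 2-PL, and 1-PL models follow by fixing the upper asymptote, the guessing parameter, and the discrimination parameter, respectively.

```latex
% 4-PL item response function for a binary item i and latent trait \theta
P_i(\theta) \;=\; c_i + (d_i - c_i)\,\frac{1}{1 + \exp\!\bigl[-a_i(\theta - b_i)\bigr]}
% a_i: discrimination, b_i: difficulty, c_i: lower asymptote (guessing),
% d_i: upper asymptote. Setting d_i = 1 gives the 3-PL model, additionally
% c_i = 0 gives the 2-PL model, and a common a_i gives the 1-PL (Rasch-type) model.
```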
Procedia PDF Downloads 358
196 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to plan the supply chain correctly. When it comes to estimating agricultural yield, especially for arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process depends mainly on the expertise of the 'estimator agent'. The present study proposes a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate the point cloud and the digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole-tree point cloud, cluster them into individual fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated and the statistical distribution of fruit size can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can only be performed over some trees, so drone data were integrated in order to estimate the yield over a whole orchard. To achieve that, features derived from the drone digital surface model were linked to the laser-scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained with the proposed methodology, compared to the yield estimates of the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety at the time of the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and could be extended to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
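A highly simplified sketch of the two stages described above follows: clustering fruit points from a TLS point cloud and regressing per-tree yield on DSM-derived features. The input files, the sphere-volume weight proxy, the assumed density, and the feature names are illustrative assumptions, not the authors' workflow.

```python
# Illustrative sketch: fruit clustering from a point cloud + yield regression on DSM features.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LinearRegression

fruit_pts = np.load("fruit_points.npy")     # hypothetical (N, 3) points already classified as fruit
labels = DBSCAN(eps=0.04, min_samples=15).fit_predict(fruit_pts)   # ~4 cm neighbourhood (assumed)

weights_g = []
for k in set(labels) - {-1}:
    pts = fruit_pts[labels == k]
    r_m = np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()    # mean radius of the fruit cluster
    volume_cm3 = 4 / 3 * np.pi * (100 * r_m) ** 3
    weights_g.append(0.95 * volume_cm3)     # assumed density ~0.95 g/cm^3 as a weight proxy

tls_yield_kg = sum(weights_g) / 1000        # TLS-based yield estimate for this tree

# Second stage: predict per-tree yield from drone DSM features for the whole orchard
X = np.load("dsm_features.npy")             # hypothetical features: crown area, height, volume...
y = np.load("tls_yields.npy")               # yields of the trees measured with the TLS
orchard_model = LinearRegression().fit(X, y)
predicted = orchard_model.predict(X)
print(f"predicted orchard yield: {predicted.sum():.0f} kg")
```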
Procedia PDF Downloads 144
195 Irradion: Portable Small Animal Imaging and Irradiation Unit
Authors: Josef Uher, Jana Boháčová, Richard Kadeřábek
Abstract:
In this paper, we present a multi-robot imaging and irradiation research platform referred to as Irradion, with full capabilities for portable arbitrary-path computed tomography (CT). Irradion is an imaging and irradiation unit based entirely on robotic arms for research on cancer treatment with ion beams on small animals (mice or rats). The platform comprises two subsystems that combine several imaging modalities, such as 2D X-ray imaging, CT, and particle tracking, with precise positioning of a small animal for imaging and irradiation. Computed tomography: the CT subsystem of the Irradion platform is equipped with two 6-joint robotic arms that position a photon-counting detector and an X-ray tube independently and freely around the scanned specimen and allow image acquisition by computed tomography. Irradion covers nearly all conventional 2D and 3D X-ray imaging trajectories with precisely calibrated and repeatable geometrical accuracy, leading to a spatial resolution of up to 50 µm. In addition, the photon-counting detectors allow X-ray photon energy discrimination, which can suppress scattered radiation and thus improve image contrast; they can also measure absorption spectra and recognize different material (tissue) types. X-ray video recording and real-time imaging options can be applied for studies of dynamic processes, including in vivo specimens. Moreover, Irradion opens the door to exploring new 2D and 3D X-ray imaging approaches, and we demonstrate in this publication various novel scan trajectories and their benefits. Proton imaging and particle tracking: the Irradion platform allows several imaging modules to be combined with any required number of robots. The proton tracking module comprises another two robots, each holding Timepix3 particle-tracking detectors with position-, energy-, and time-sensitive sensors. Timepix3 detectors can track particles entering and exiting the specimen and allow accurate guiding of photon/ion beams for irradiation. In addition, quantifying the energy losses before and after the specimen provides essential information for precise irradiation planning and verification. Work on the small-animal research platform Irradion involved advanced software and hardware development that will offer researchers a novel way to investigate new approaches in (i) radiotherapy, (ii) spectral CT, (iii) arbitrary-path CT, and (iv) particle tracking. The robotic platform for imaging and radiation research developed for the project is an entirely new product on the market: preclinical research systems combining precision robotic irradiation with photon/ion beams and multimodality high-resolution imaging do not currently exist. The researched technology therefore has the potential to be a significant leap forward compared to current first-generation devices.
Keywords: arbitrary path CT, robotic CT, modular, multi-robot, small animal imaging
Procedia PDF Downloads 92
194 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application
Abstract:
On the bridge of a ship the officers are looking for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include: Lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids to guide navigation. The paper presents the usage of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical navigation related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. While adopting a user centered design approach, the team has developed the prototype based on insights from initial research carried on board of several ships. The prototype, built on Nexus 9 tablet and Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR and a bird’s eye view mode presented on a simplified map. The application employs the aids to navigation data managed by Hydrographic Offices and the tablet’s sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board of a Navy and a commercial ship revealed the end-users’ interest in using the application and further possibility of other data to be presented in AR. The application calculates the GPS position of the ship, the bearing and distance to the navigational aids; all within a high level of accuracy. However, during testing several issues were highlighted which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port. This overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference with the bridge of the ship generated a continuous compass error of the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing durations so the team is now looking at the possibility of allowing users to manually calibrate the compass. It is expected that for the usage of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to implement a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.Keywords: compass error, GPS, maritime navigation, mobile augmented reality
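The core geometric computation mentioned in this abstract, the bearing and distance from the ship's GPS position to each navigational aid, can be sketched as a standard great-circle calculation, as below; this is a generic illustration, not the prototype's Wikitude code, and the example coordinates are made up.

```python
# Bearing and distance from the ship to a navigational aid (spherical approximation).
from math import radians, degrees, sin, cos, atan2, asin, sqrt

def bearing_and_distance(ship_lat, ship_lon, aid_lat, aid_lon):
    phi1, phi2 = radians(ship_lat), radians(aid_lat)
    dlon = radians(aid_lon - ship_lon)
    # Initial great-circle bearing, 0-360 degrees from true north
    brg = degrees(atan2(sin(dlon) * cos(phi2),
                        cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon))) % 360
    # Haversine distance in nautical miles (Earth radius ~3440.065 nm)
    a = sin((phi2 - phi1) / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlon / 2) ** 2
    dist_nm = 2 * 3440.065 * asin(sqrt(a))
    return brg, dist_nm

# Example: where should the AR overlay place a buoy relative to the bridge?
print(bearing_and_distance(50.90, -1.40, 50.89, -1.38))
```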
Procedia PDF Downloads 334
193 Stainless Steel Degradation by Sulphide Mining
Authors: Aguasanta M. Sarmiento, Jose Miguel Davila, Juan Carlos Fortes, Maria Luisa de la Torre
Abstract:
Acid mine drainage (AMD) is an acidic leachate with high levels of metals and sulphates in solution, which seriously affects the durability and strength of metallic materials used in the construction of structural and mechanical components. This paper presents the results of the evolution over time of the reduction in tensile strength and defects in AISI 304 stainless steel in contact with acid mine drainage. For this purpose, a total of 30 bars with a diameter of 8 mm and a length of 14 cm were placed transversely in the course of a stream contaminated by AMD from the sulphide mines of the Iberian Pyritic Belt (SW Spain). This stream has average pH values of 2.6, a potential of 660 mV and average concentrations of 12 g/L of sulphates, 1.2 g/L of Fe, 191 mg/L of Zn, etc. Every two months of exposure, 6 stainless steel bars were extracted from the acid stream. They were subjected to surface roughness analysis carried out with the help of Mitutoyo Surftest SJ-210 surface roughness tester. The analysis was carried out at three different points on 5 specimens from each series. The average reading of each parameter is calculated in order to ensure the accuracy of the measurements and the surface coverage. Arithmetic mean roughness value (Ra), mean roughness depth (Rz) and root mean square roughness (Rq) were measured. Five specimens from each series were statically tensile tested using universal equipment (Servosis ME 403 of 200kN). The specimens were clamped at their ends with two grips for cylindrical sections, and the tensile force was applied at a constant speed of 0.5 kN/s, according to the requirements of standard UNE-EN ISO 6892-1: 2020. To determine the modulus of elasticity, limits close to 15% and 55% of the maximum load were used, depending on the course of each test. Field Emission Scanning Electron Microscopy (FESEM) was used to observe corrosion products and defects generated by exposure to AMD. Energy dispersive X-ray spectrometry (EDS) was used to analyze the chemical composition of the corrosion products formed. For this purpose, small pieces were cut from the resulting specimens, cleaned and embedded in epoxy resin. The results show that after only 5 months of exposure of AISI 304 stainless steel to the mining environment, the surface roughness increases significantly, with average depths almost 6 times greater than the initial one. Cracks are observed on the surface of the material, which increases in size with the time of exposure. A large number of grains with a composition of more than 57% Pb and 16% Sn can be observed inside these cracks. Tensile tests show a reduction in the resistance of this material after only two months of exposure. The results show the serious problems that would result from the use of this material for the use of mechanical components in a sulphide mining environment, not only because of the significant reduction in the lifetime of such components but also because of the implications for human safety.Keywords: Acid mine drainage, Corrosion, Mechanical properties, Stainless steel
Procedia PDF Downloads 12
192 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until obtaining the final product. To meet the desired product specification, process parameters are strictly followed. To be able to ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is very crucial for the efficiency of the upcoming processes. In order to maximize the process efficiency, the determination of the quality of distillates should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates which are Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named after separation by the number of carbons it contains. LSRN consists of five to six carbon-containing hydrocarbons, HSRN consist of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. Physical properties of three different crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum sample for almost four years. Several different crude oil grades were processed during sample collection times. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to FT-NIR spectra of samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to preprocessed FT-NIR spectra. Predictive performance of each multivariate calibration technique and preprocessing techniques were compared, and the best models were chosen according to the reproducibility of ASTM reference methods. This work demonstrates the developed models can be used for routine analysis instead of conventional analytical methods with over 90% accuracy.Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
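A minimal sketch of the preprocessing-plus-PLS workflow described above is shown below, using a Savitzky-Golay derivative filter and PLS regression; the window length, polynomial order, number of latent variables, and input files are illustrative choices, and the EMSC and GILS steps from the paper are not reproduced here.

```python
# Illustrative FT-NIR calibration: Savitzky-Golay preprocessing + PLS regression.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

X = np.load("ftnir_spectra.npy")        # hypothetical (n_samples, n_wavenumbers) absorbances
y = np.load("reference_property.npy")   # ASTM/EN reference values for the modelled property

# First-derivative Savitzky-Golay filtering to suppress baseline shifts (parameters assumed)
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

pls = PLSRegression(n_components=8)     # number of latent variables would be optimised by CV
y_cv = cross_val_predict(pls, X_sg, y, cv=10).ravel()

rmsecv = np.sqrt(np.mean((y_cv - y) ** 2))
print(f"RMSECV = {rmsecv:.3f}")
```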
Procedia PDF Downloads 133
191 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region
Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho
Abstract:
The modeling of areas susceptible to soil loss by hydro erosive processes consists of a simplified instrument of reality with the purpose of predicting future behaviors from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste in the south of the western Amazon is due to soil degradation due to anthropogenic activities, such as agriculture, road construction, overgrazing, deforestation, and environmental and socioeconomic configurations. Initially, a soil erosion inventory map constructed through various field investigations will be designed, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, declivity, aspect or orientation of the slope, curvature of the slope, composite topographic index, flow power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste - Rondônia through the SPSS Statistic 26 software. Evaluation of the model will occur through the determination of the values of the R² of Cox & Snell and the R² of Nagelkerke, Hosmer and Lemeshow Test, Log Likelihood Value, and Wald Test, in addition to analysis of the Confounding Matrix, ROC Curve and Accumulated Gain according to the model specification. The validation of the synthesis map resulting from both models of the potential risk of soil erosion will occur by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the classes of susceptibility to erosion using drone photogrammetry. Thus, it is expected to obtain the mapping of the following classes of susceptibility to erosion very low, low, moderate, very high, and high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, applying more efficient social resources.Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon
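As a sketch of the dichotomous modelling and evaluation steps described above (which the authors implement in SPSS), the snippet below fits a binary logistic regression on presence/absence samples and reports AUC and Cohen's kappa; an SVM or ANN could be swapped in the same way. The file name, predictor names, and class thresholds are placeholders.

```python
# Illustrative erosion-susceptibility model: binary logistic regression + ROC/kappa evaluation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, cohen_kappa_score

df = pd.read_csv("erosion_samples.csv")        # hypothetical 100 presence + 100 absence units
predictors = ["altitude", "slope", "aspect", "curvature", "cti", "spi",
              "ndvi", "drainage_density", "erosivity", "lst"]

X_tr, X_te, y_tr, y_te = train_test_split(df[predictors], df["erosion"],
                                          test_size=0.3, stratify=df["erosion"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]

print("AUC   :", round(roc_auc_score(y_te, prob), 3))
print("kappa :", round(cohen_kappa_score(y_te, prob > 0.5), 3))

# Reclassify probabilities into five susceptibility classes (thresholds assumed)
classes = np.digitize(prob, [0.2, 0.4, 0.6, 0.8])   # 0 = very low ... 4 = very high
print("class counts:", np.bincount(classes, minlength=5))
```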
Procedia PDF Downloads 68
190 A Clinical Audit on Screening Women with Subfertility Using Transvaginal Scan and Hysterosalpingo Contrast Sonography
Authors: Aarti M. Shetty, Estela Davoodi, Subrata Gangooly, Anita Rao-Coppisetty
Abstract:
Background: Testing Patency of Fallopian Tubes is among one of the several protocols for investigating Subfertile Couples. Both, Hysterosalpingogram (HSG) and Laparoscopy and dye test have been used as Tubal patency test for several years, with well-known limitation. Hysterosalpingo Contrast Sonography (HyCoSy) can be used as an alternative tool to HSG, to screen patency of Fallopian tubes, with an advantage of being non-ionising, and also, use of transvaginal scan to diagnose pelvic pathology. Aim: To determine the indication and analyse the performance of transvaginal scan and HyCoSy in Broomfield Hospital. Methods: We retrospectively analysed fertility workup of 282 women, who attended HyCoSy clinic at our institution from January 2015 to June 2016. An Audit proforma was designed, to aid data collection. Data was collected from patient notes and electronic records, which included patient demographics; age, parity, type of subfertility (primary or secondary), duration of subfertility, past medical history and base line investigation (hormone profile and semen analysis). Findings of the transvaginal scan, HyCoSy and Laparoscopy were also noted. Results: The most common indication for referral were as a part of primary fertility workup on couples who had failure to conceive despite intercourse for a year, other indication for referral were recurrent miscarriage, history of ectopic pregnancy, post reversal of sterilization(vasectomy and tuboplasty), Post Gynaecology surgery(Loop excision, cone biopsy) and amenorrhea. Basic Fertility workup showed 34% men had abnormal semen analysis. HyCoSy was successfully completed in 270 (95%) women using ExEm foam and Transvaginal Scan. Of the 270 patients, 535 tubes were examined in total. 495/535 (93%) tubes were reported as patent, 40/535 (7.5%) tubes were reported as blocked. A total of 17 (6.3%) patients required laparoscopy and dye test after HyCoSy. In these 17 patients, 32 tubes were examined under laparoscopy, and 21 tubes had findings similar to HyCoSy, with a concordance rate of 65%. In addition to this, 41 patients had some form of pelvic pathology (endometrial polyp, fibroid, cervical polyp, fibroid, bicornuate uterus) detected during transvaginal scan, who referred to corrective surgeries after attending HyCoSy Clinic. Conclusion: Our audit shows that HyCoSy and Transvaginal scan can be a reliable screening test for low risk women. Furthermore, it has competitive diagnostic accuracy to HSG in identifying tubal patency, with an additional advantage of screening for pelvic pathology. With addition of 3D Scan, pulse Doppler and other non-invasive imaging modality, HyCoSy may potentially replace Laparoscopy and chromopertubation in near future.Keywords: hysterosalpingo contrast sonography (HyCoSy), transvaginal scan, tubal infertility, tubal patency test
Procedia PDF Downloads 252
189 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala
Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.
Abstract:
During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected agriculture technologies, which provide greater productivity and allow us to guarantee the quality and quantity that is required in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which were grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several thesis works, and congress reports have presented consistent estimates. A wide range of protection structures and roofing materials are used, from the most basic and simple ones for rain control to highly technical and automated structures connected with remote sensors for monitoring and control of crops. With this breadth of technological models, it is necessary to analyze georeferenced data related to the cultivated area, to the different existing models, and to the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of the production units, the data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable tools to elaborate more complete, agile, and dynamic information maps. This study details a methodology proposed for gathering georeferenced data of high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, the definition of the geographic frame, selection of satellite images, and integration with an information system geographic (GIS). It especially takes account of the actual lack of complete data in order to obtain a reliable decision-making system; this gap is solved through the proposed methodology. A summary of the results is presented in each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that allows to reduce the gap of georeferenced data in protected agriculture in this specific area where data is not generally available and to provide data of better quality, traceability, accuracy, and certainty for the strategic agricultural decision öaking, applicable to other crops, production models and similar/neighboring geographic areas.Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture
Procedia PDF Downloads 195
188 Pyramid of Deradicalization: Causes and Possible Solutions
Authors: Ashir Ahmed
Abstract:
Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most of the members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism that refers to the beliefs and actions of people who support or use violence to achieve ideological, religious or political goals. Studies on radicalization negate the common myths that someone must be in a group to be radicalised or anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalisation is always linked to religion. Generally, the common motives of radicalization include ideological, issue-based, ethno-nationalist or separatist underpinning. Moreover, there are number of factors that further augments the chances of someone being radicalised and may choose the path of violent extremism and possibly terrorism. Since there are numbers of factors (and sometimes quite different) contributing in radicalization and violent extremism, it is highly unlikely to devise a single solution that could produce effective outcomes to deal with radicalization, violent extremism and terrorism. The pathway to deradicalization, like the pathway to radicalisation, is different for everyone. Considering the need of having customized deradicalization resolution, this study proposes a multi-tier framework, called ‘pyramid of deradicalization’ that first help identifying the stage at which an individual could be on the radicalization pathway and then propose a customize strategy to deal with the respective stage. The first tier (tier 1) addresses broader community and proposes a ‘universal approach’ aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on possible factors leading to radicalization and their remedies. The second tier focuses on the members of community who are more vulnerable and are disengaged from the rest of the community. This tier proposes a ‘targeted approach’ targeting the vulnerable members of the community through early intervention such as providing anonymous help lines where people feel confident and comfortable in seeking help without fearing the disclosure of their identity. The third tier aims to focus on people having clear evidence of moving toward extremism or getting radicalized. The people falls in this tier are believed to be supported through ‘interventionist approach’. The interventionist approach advocates the community engagement and community-policing, introducing deradicalization programmes to the targeted individuals and looking after their physical and mental health issues. The fourth and the last tier suggests the strategies to deal with people who are actively breaking the law. ‘Enforcement approach’ suggests various approaches such as strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment by law based on gender, race, nationality or religion and strengthen the family connections.It is anticipated that the operationalization of the proposed framework (‘pyramid of deradicalization’) would help in categorising people considering their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.Keywords: deradicalization, framework, terrorism, violent extremism
Procedia PDF Downloads 273
187 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines
Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky
Abstract:
Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy loss occurs in the gas pressure reduction stations at the delivery points in natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) parallel to the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining in the pressure-lowering process shaft work and generating electrical power. Currently, the Brazilian natural gas transportation network has 9,409 km in extension, while the system has 16 national and 3 international natural gas processing plants, including more than 143 delivery points to final consumers. Thus, the potential of installing PRT in Brazil is 66 MW of power, which could yearly avoid the emission of 235,800 tons of CO2 and generate 333 GWh/year of electricity. On the other hand, an economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from several variables forecast. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach applied to the technical-economic viability analysis of PRTs projects that explicitly considers the uncertainties associated with the input parameters for the financial model, such as gas pressure at the delivery point, amount of energy generated by TRP, the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained. This is a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the potential PRT project's financial feasibility in Brazil. This methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, monte carlo methods
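The sketch below shows the general shape of the Monte Carlo step described above: uncertain inputs are sampled, a yearly cash flow and NPV are computed for each draw, and risk indicators such as the probability of a negative NPV are read off the resulting distribution. All distributions and figures are invented placeholders, not project data from the paper.

```python
# Illustrative Monte Carlo risk analysis for a pressure reduction turbine (PRT) project.
import numpy as np

rng = np.random.default_rng(42)
n, years, rate = 100_000, 15, 0.10                      # draws, horizon, discount rate (assumed)

capex = rng.normal(1.8e6, 0.2e6, n)                     # installation cost, USD (placeholder)
energy_mwh = rng.normal(8_000, 1_200, n)                # yearly generation, MWh (placeholder)
price = rng.lognormal(np.log(60), 0.15, (n, years))     # energy price per MWh, per year
opex = 0.04 * capex                                     # yearly O&M as a fraction of CAPEX

cash = energy_mwh[:, None] * price - opex[:, None]      # yearly net cash flow per draw
discount = (1 + rate) ** -np.arange(1, years + 1)
npv = cash @ discount - capex

print(f"P(NPV < 0)        : {np.mean(npv < 0):.1%}")
print(f"NPV P5 / P50 / P95: {np.percentile(npv, [5, 50, 95]).round(0)}")
```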
Procedia PDF Downloads 114
186 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is therefore needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel U.K, using a two-way coupling methodology to determine fire performance, a program called FDS-2-Abaqus was shown to predict a BS 476-22 furnace test with a reasonable degree of accuracy. The test demonstrated the fire performance of the Tata Steel U.K Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM), which allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1; the physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
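As an illustration of the reduced-order-modelling idea mentioned above, the snippet below builds a proper orthogonal decomposition (POD) basis from a matrix of temperature-field snapshots and projects a new field onto it. This is a generic POD sketch on hypothetical data, not the FDS-2-Abaqus implementation.

```python
# Generic POD reduced-order model sketch for thermal snapshot data.
import numpy as np

S = np.load("temperature_snapshots.npy")     # hypothetical (n_nodes, n_snapshots) matrix
mean = S.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(S - mean, full_matrices=False)

# Keep enough modes to capture 99.9% of the snapshot energy
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                             # reduced basis (n_nodes, r)

def project(field):
    """Reduced coordinates of a full-order temperature field."""
    return basis.T @ (field - mean.ravel())

def reconstruct(coeffs):
    """Approximate full-order field from r reduced coordinates."""
    return mean.ravel() + basis @ coeffs

print(f"{r} POD modes retained out of {S.shape[1]} snapshots")
```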
Procedia PDF Downloads 80
185 Music Reading Expertise Facilitates Implicit Statistical Learning of Sentence Structures in a Novel Language: Evidence from Eye Movement Behavior
Authors: Sara T. K. Li, Belinda H. J. Chung, Jeffery C. N. Yip, Janet H. Hsiao
Abstract:
Music notation and text reading both involve statistical learning of music or linguistic structures. However, it remains unclear how music reading expertise influences text reading behavior. The present study examined this issue through an eye-tracking study. Chinese-English bilingual musicians and non-musicians read English sentences, Chinese sentences, musical phrases, and sentences in Tibetan, a language novel to the participants, with their eye movement recorded. Each set of stimuli consisted of two conditions in terms of structural regularity: syntactically correct and syntactically incorrect musical phrases/sentences. They then completed a sentence comprehension (for syntactically correct sentences) or a musical segment/word recognition task afterwards to test their comprehension/recognition abilities. The results showed that in reading musical phrases, as compared with non-musicians, musicians had a higher accuracy in the recognition task, and had shorter reading time, fewer fixations, and shorter fixation duration when reading syntactically correct (i.e., in diatonic key) than incorrect (i.e., in non-diatonic key/atonal) musical phrases. This result reflects their expertise in music reading. Interestingly, in reading Tibetan sentences, which was novel to both participant groups, while non-musicians did not show any behavior differences between reading syntactically correct or incorrect Tibetan sentences, musicians showed a shorter reading time and had marginally fewer fixations when reading syntactically correct sentences than syntactically incorrect ones. However, none of the musicians reported discovering any structural regularities in the Tibetan stimuli after the experiment when being asked explicitly, suggesting that they may have implicitly acquired the structural regularities in Tibetan sentences. This group difference was not observed when they read English or Chinese sentences. This result suggests that music reading expertise facilities reading texts in a novel language (i.e., Tibetan), but not in languages that the readers are already familiar with (i.e., English and Chinese). This phenomenon may be due to the similarities between reading music notations and reading texts in a novel language, as in both cases the stimuli follow particular statistical structures but do not involve semantic or lexical processing. Thus, musicians may transfer their statistical learning skills stemmed from music notation reading experience to implicitly discover structures of sentences in a novel language. This speculation is consistent with a recent finding showing that music reading expertise modulates the processing of English nonwords (i.e., words that do not follow morphological or orthographic rules) but not pseudo- or real words. These results suggest that the modulation of music reading expertise on language processing depends on the similarities in the cognitive processes involved. It also has important implications for the benefits of music education on language and cognitive development.Keywords: eye movement behavior, eye-tracking, music reading expertise, sentence reading, structural regularity, visual processing
Procedia PDF Downloads 383
184 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations
Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis
Abstract:
Inflight devices are nowadays major actors in both military and civilian landscapes. Among others, missiles, mortars, rockets or even drones this last decade are increasingly sophisticated, and it is today of prior manner to develop always more efficient defensive systems from all these potential threats. In this frame, recent High Energy Laser weapon prototypes (HEL) have demonstrated some extremely good operational abilities to shot down within seconds flying targets several kilometers off. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored to be able to closely predict the effects of a high-energy laser irradiation on typical structures, heading finally to an effective design of laser sources and protective countermeasures. Laser matter interaction researches have a long history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied with laser sources development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at light velocity over large ranges. In the last years, a strong focus was made at ISL on the interaction process of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of outmost interests in modern warfare. Those aerial systems are usually made up of polymer-based composite materials, whose complexity involves new scientific challenges. Aside this main laser-matter interaction activity, a lot of experimental and numerical knowledge has been gathered at ISL within domains like spectrometry, thermodynamics or mechanics. Techniques and devices were developed to study separately each aspect concerned by this topic; optical characterization, thermal investigations, chemical reactions analysis or mechanical examinations are beyond carried out to neatly estimate essential key values. Results from these diverse tasks are then incorporated into analytic or FE numerical models that were elaborated, for example, to predict thermal repercussion on explosive charges or mechanical failures of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling
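To make the thermal side of the laser-matter interaction concrete, the sketch below solves a 1D heat-conduction problem with an absorbed laser flux at the front surface using an explicit finite-difference scheme. The material values, absorbed irradiance, and simplified boundary treatment are generic placeholders, not ISL models or data.

```python
# Explicit 1D finite-difference sketch of laser heating of a metal slab (illustrative values).
import numpy as np

k, rho, cp = 45.0, 7850.0, 460.0        # W/mK, kg/m^3, J/kgK (generic steel-like values)
alpha = k / (rho * cp)
q_abs = 5e6                             # absorbed laser flux at the surface, W/m^2 (placeholder)

L, nx = 5e-3, 101                       # 5 mm slab, grid points
dx = L / (nx - 1)
dt = 0.4 * dx ** 2 / alpha              # stable explicit time step (Fourier number 0.4 < 0.5)

T = np.full(nx, 293.0)                  # initial temperature, K
for _ in range(int(1.0 / dt)):          # simulate ~1 s of irradiation
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx ** 2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # Heated front face: conduction into the slab + absorbed flux (simplified full-cell balance)
    T[0] = Tn[0] + alpha * dt / dx ** 2 * (Tn[1] - Tn[0]) + q_abs * dt / (rho * cp * dx)
    T[-1] = T[-2]                       # insulated rear face

print(f"front-face temperature after 1 s: {T[0]:.0f} K")
```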
Procedia PDF Downloads 264
183 Prediction of Endotracheal Tube Size in Children by Predicting Subglottic Diameter Using Ultrasonographic Measurement versus Traditional Formulas
Authors: Parul Jindal, Shubhi Singh, Priya Ramakrishnan, Shailender Raghuvanshi
Abstract:
Background: Knowledge of the influence of the age of the child on laryngeal dimensions is essential for all practitioners who are dealing with paediatric airway. Choosing the correct endotracheal tube (ETT) size is a crucial step in pediatric patients because a large-sized tube may cause complications like post-extubation stridor and subglottic stenosis. On the other hand with a smaller tube, there will be increased gas flow resistance, aspiration risk, poor ventilation, inaccurate monitoring of end-tidal gases and reintubation may also be required with a different size of the tracheal tube. Recent advancement in ultrasonography (USG) techniques should now allow for accurate and descriptive evaluation of pediatric airway. Aims and objectives: This study was planned to determine the accuracy of Ultrasonography (USG) to assess the appropriate ETT size and compare it with physical indices based formulae. Methods: After obtaining approval from Institute’s Ethical and Research committee, and parental written and informed consent, the study was conducted on 100 subjects of either sex between 12-60 months of age, undergoing various elective surgeries under general anesthesia requiring endotracheal intubation. The same experienced radiologist performed ultrasonography. The transverse diameter was measured at the level of cricoids cartilage by USG. After USG, general anesthesia was administered using standard techniques followed by the institute. An experienced anesthesiologist performed the endotracheal intubations with uncuffed endotracheal tube (Portex Tracheal Tube Smiths Medical India Pvt. Ltd.) with Murphy’s eye. He was unaware of the finding of the ultrasonography. The tracheal tube was considered best fit if air leak was satisfactory at 15-20 cm H₂O of airway pressure. The obtained values were compared with the values of endotracheal tube size calculated by ultrasonography, various age, height, weight-based formulas and diameter of right and left little finger. The correlation of the size of the endotracheal tube by different modalities was done and Pearson's correlation coefficient was obtained. The comparison of the mean size of the endotracheal tube by ultrasonography and by traditional formula was done by the Friedman’s test and Wilcoxon sign-rank test. Results: The predicted tube size was equal to best fit and best determined by ultrasonography (100%) followed by comparison to left little finger (98%) and right little finger (97%) and age-based formula (95%) followed by multivariate formula (83%) and body length (81%) formula. According to Pearson`s correlation, there was a moderate correlation of best fit endotracheal tube with endotracheal tube size by age-based formula (r=0.743), body length based formula (r=0.683), right little finger based formula (r=0.587), left little finger based formula (r=0.587) and multivariate formula (r=0.741). There was a strong correlation with ultrasonography (r=0.943). Ultrasonography was the most sensitive (100%) method of prediction followed by comparison to left (98%) and right (97%) little finger and age-based formula (95%), the multivariate formula had an even lesser sensitivity (83%) whereas body length based formula was least sensitive with a sensitivity of 78%. Conclusion: USG is a reliable method of estimation of subglottic diameter and for prediction of ETT size in children.Keywords: endotracheal intubation, pediatric airway, subglottic diameter, traditional formulas, ultrasonography
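For context, the age-based estimate referred to above is commonly computed with the classical Cole-type formula for uncuffed tubes, and agreement with the best-fit tube can be checked with a Pearson correlation, as sketched below on hypothetical audit data (the study's actual formulas and data are not reproduced here).

```python
# Sketch: age-based uncuffed ETT size (Cole-type formula) vs. best-fit size, with Pearson r.
import numpy as np
from scipy.stats import pearsonr

def ett_id_age_based(age_years):
    """Classical Cole formula for uncuffed tubes: internal diameter (mm) = age/4 + 4."""
    return age_years / 4 + 4

# Hypothetical audit data: age in years and the best-fit tube found at the 15-20 cmH2O leak test
age = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
best_fit_id = np.array([4.0, 4.5, 4.5, 4.5, 5.0, 5.0, 5.0, 5.5, 5.5])

predicted = ett_id_age_based(age)
r, p = pearsonr(predicted, best_fit_id)
agreement = np.mean(np.round(predicted * 2) / 2 == best_fit_id)   # tube sizes come in 0.5 mm steps

print(f"Pearson r = {r:.2f} (p = {p:.3f}), exact-size agreement = {agreement:.0%}")
```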
Procedia PDF Downloads 241182 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements
Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga
Abstract:
Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is a common practice to approximate the Earth’s subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations. First, the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions; for arbitrary resistivity distributions, no solution of the system of ODEs is known to date. Second, in geo-steering we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus we need to compute the corresponding derivatives; however, to the best of our knowledge, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published. The main contribution of this work is to overcome these limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method. The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once and reused for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones when the latter are available. Keywords: Logging-While-Drilling, resistivity measurements, multi-scale finite elements, Hankel transform
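The sketch below illustrates only the offline/online split for a single mode, under simplifying assumptions (a toy 1D diffusion-reaction ODE in depth with linear finite elements); it is not the authors' formulation. The matrix depends on the resistivity model and the mode, so it is assembled and factorized once (offline), while each tool position only changes the point-source right-hand side (online).

```python
# Minimal sketch: offline factorization reused across tool positions (illustrative only).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def assemble(z, sigma, k):
    """Assemble and factorize the FE matrix for -(sigma u')' + k^2 sigma u (offline)."""
    n = len(z)
    A = np.zeros((n, n))
    for e in range(n - 1):
        h = z[e + 1] - z[e]
        Ke = sigma[e] * (np.array([[1., -1.], [-1., 1.]]) / h
                         + k**2 * h / 6.0 * np.array([[2., 1.], [1., 2.]]))
        A[e:e + 2, e:e + 2] += Ke
    return splu(sp.csc_matrix(A))          # LU factors reused for every logging position

z = np.linspace(0.0, 50.0, 501)                  # depth mesh (m)
sigma = np.where(z[:-1] < 20.0, 1.0, 0.05)       # arbitrary two-layer conductivity (S/m)
lu = assemble(z, sigma, k=0.3)                   # offline part

for z_tx in [5.0, 10.0, 25.0, 40.0]:             # online part: one back-substitution each
    i_src = int(np.argmin(np.abs(z - z_tx)))
    b = np.zeros(len(z))
    b[i_src] = 1.0                               # lumped point source at the transmitter
    u = lu.solve(b)
    print(f"tool at z = {z_tx:5.1f} m -> u at source node = {u[i_src]:.4e}")
```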
Procedia PDF Downloads 387181 Precise Determination of the Residual Stress Gradient in Composite Laminates Using a Configurable Numerical-Experimental Coupling Based on the Incremental Hole Drilling Method
Authors: A. S. Ibrahim Mamane, S. Giljean, M.-J. Pac, G. L’Hostis
Abstract:
Fiber reinforced composite laminates are particularly subject to residual stresses due to their heterogeneity and the complex chemical, mechanical and thermal mechanisms that occur during their processing. Residual stresses are now well known to cause damage accumulation, shape instability, and behavior disturbance in composite parts. Many works exist in the literature on techniques for minimizing residual stresses, mainly in thermosetting and thermoplastic composites. To study in depth the influence of processing mechanisms on the formation of residual stresses and to minimize them by establishing a reliable correlation, it is essential to be able to measure very precisely the profile of residual stresses in the composite. Residual stresses are important data to consider when sizing composite parts and predicting their behavior. The incremental hole drilling method is very effective for measuring the gradient of residual stresses in composite laminates. This method is semi-destructive and consists of incrementally drilling a hole through the thickness of the material and measuring the relaxation strains around the hole for each increment using three strain gauges. These strains are then converted into residual stresses using a matrix of coefficients. These coefficients, called calibration coefficients, depend on the diameter of the hole and the dimensions of the gauges used. The reliability of the incremental hole drilling method depends on the accuracy with which the calibration coefficients are determined. These coefficients are calculated using a finite element model. The samples’ features and the experimental conditions must be considered in the simulation. Any mismatch can lead to inadequate calibration coefficients, thus introducing errors on the residual stresses. Several calibration coefficient correction methods exist for isotropic materials, but there is a lack of information on this subject concerning composite laminates. In this work, a Python program was developed to automatically generate the adequate finite element model. This model allowed us to perform a parametric study to assess the influence of experimental errors on the calibration coefficients. The results highlighted the sensitivity of the calibration coefficients to the considered errors and gave an order of magnitude of the precision required on the experimental device to obtain reliable measurements. On the basis of these results, improvements were proposed to the experimental device. Furthermore, a numerical method was proposed to correct the calibration coefficients for different types of materials, including thick composite parts for which the analytical approach is too complex. This method consists of taking the experimental errors into account in the simulation. Accurate measurement of the experimental errors (such as eccentricity of the hole, angular deviation of the gauges from their theoretical position, or errors on increment depth) is therefore necessary. The aim is to determine the residual stresses more precisely and to expand the validity domain of the incremental hole drilling technique. Keywords: fiber reinforced composites, finite element simulation, incremental hole drilling method, numerical correction of the calibration coefficients, residual stresses
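As an illustrative sketch of the strain-to-stress step of the incremental hole drilling method, the snippet below inverts a calibration-coefficient matrix by least squares. The calibration matrix and stress values here are hypothetical placeholders; in the work above the coefficients come from the finite element model.

```python
# Minimal sketch (illustrative only): relaxation strains -> residual stresses, eps = C @ sigma.
import numpy as np

n_inc = 4                                    # number of drilling increments
rng = np.random.default_rng(0)

# Hypothetical lower-triangular calibration matrix: stress in increment j only affects
# strains measured at increments i >= j (units: (um/m) per MPa).
C = np.tril(rng.uniform(0.5, 2.0, size=(n_inc, n_inc)))

sigma_true = np.array([-40.0, -10.0, 15.0, 30.0])            # "true" residual stresses (MPa)
eps_measured = C @ sigma_true + rng.normal(0, 0.5, n_inc)    # strains with measurement noise

# Recover the stress profile; regularization could be added for deep increments.
sigma_est, *_ = np.linalg.lstsq(C, eps_measured, rcond=None)
for i, (t, e) in enumerate(zip(sigma_true, sigma_est), start=1):
    print(f"increment {i}: true = {t:6.1f} MPa, estimated = {e:6.1f} MPa")
```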
Procedia PDF Downloads 133180 Contribution of Word Decoding and Reading Fluency on Reading Comprehension in Young Typical Readers of Kannada Language
Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: During the early years of schooling, instruction in schools mainly focuses on children’s word decoding abilities. However, skilled readers should master all the components of reading, such as word decoding, reading fluency and comprehension. Nevertheless, the relationship between each component during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed opinions on the relative contribution of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabary languages is unexplored. Aim and Objectives: The aim of the study was to explore the role of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II and Grade III, maintaining an equal gender ratio and aged 5.6 to 6.6 years, 6.7 to 7.6 years and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage was accompanied by five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords of varying syllable length and Akshara composition. Pseudowords were formed by interchanging the syllables within a meaningful word while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data were collected on the study sample. The data were collected individually; reading fluency was assessed as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: The descriptive statistics indicated that the mean pseudoword reading, reading comprehension and words accurately read per minute increased with grade. The performance of Grade III children was the highest, Grade I the lowest, and Grade II intermediate between Grade III and Grade I. This trend indicates that reading skills gradually improve across grades. Pearson’s correlation coefficient showed a moderate and highly significant (p=0.00) positive correlation between the variables, indicating the interdependency of all three components required for reading. The hierarchical regression analysis revealed that 37% of the variance in reading comprehension was explained by pseudoword decoding, and this was highly significant. On subsequent entry of the reading fluency measure, there was no significant change in R-square, with a change of only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly to reading comprehension than reading fluency during the initial years of schooling in children learning to read the Kannada language. Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency
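The minimal sketch below (with simulated scores, not the study data) shows the hierarchical regression logic described above: pseudoword decoding is entered first, then reading fluency, and the change in R-squared indicates the added contribution of fluency.

```python
# Minimal sketch of a two-step hierarchical regression (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
decoding = rng.normal(25, 6, n)                        # pseudoword decoding accuracy
fluency = 0.8 * decoding + rng.normal(0, 4, n)         # words correctly read per minute
comprehension = 0.15 * decoding + 0.02 * fluency + rng.normal(0, 1.0, n)

step1 = sm.OLS(comprehension, sm.add_constant(np.column_stack([decoding]))).fit()
step2 = sm.OLS(comprehension, sm.add_constant(np.column_stack([decoding, fluency]))).fit()

print(f"Step 1 (decoding only): R2 = {step1.rsquared:.3f}")
print(f"Step 2 (+ fluency):     R2 = {step2.rsquared:.3f}, "
      f"delta R2 = {step2.rsquared - step1.rsquared:.3f}")
```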
Procedia PDF Downloads 264179 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes
Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse
Abstract:
Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine due to the many tools it provides to physicians, such as diagnosis support and risk assessment. The physician notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an advanced approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. An LLM (Large Language Model) was given a narrative (physician’s notes) and prompted to extract four features (details about the patient). The narrative was found in a datasheet that contained six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns represent the accurate responses that the LLM attempts to output. Given the narrative, the LLM would output its response and extract the age, gender, diagnosis, and injured body part, with each category taking up one line. The output would then be cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand-evaluation. The unclear count essentially represented the extractions the LLM missed; this can also be understood as the recall score ([total - false negatives] over total). The rest correspond to the precision score ([total - false positives] over total). Substring comparison evaluated the likeness of the validation (X) and extracted (Y) columns by checking whether X’s results were a substring of Y's findings and vice versa. LLM comparison directly asked an LLM if X's and Y’s results were similar. LLM re-check prompted the LLM to see if the extracted results could be found in the narrative. Lastly, a selection of 1,000 random narratives was hand-evaluated to give an estimate of how well the LLM-based feature extraction model performed. With a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. The reason for these low figures is the minute differences between answers; for example, the ‘chest’ is part of the ‘upper trunk’, but these models cannot detect that. On the other hand, the LLM re-check and the subset of hand-tested narratives showed precision scores of 96% and 95%. If this subset is used to extrapolate the possible outcome of the whole 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicate that an LLM-based feature extraction model could be a useful way for medical data in sports to be collected and analyzed by machine learning models. Wide use of this method could potentially increase the availability of data, thus improving machine learning algorithms and supporting doctors with more enhanced tools.
Procedia PDF Downloads 12178 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and more than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid US image database consists of 70 patients (26 benign and 44 malignant) reported by a radiologist and proven by biopsy. Two slices per patient were loaded into Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within an ROI were normalized according to three normalization schemes: N1, default or original gray levels; N2, dynamic intensity limited to µ±3σ; and N3, intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI, per normalization scheme, were computed using the well-known statistical methods implemented in Mazda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the features were reduced to the 10 best and most effective features per normalization scheme based on the maximum Fisher coefficient and the minimum probability of classification error and average correlation coefficients (POE+ACC). We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules. Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
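The sketch below is not the Mazda pipeline itself; it illustrates the classification stage with scikit-learn on simulated placeholder features, using LDA as a stand-in for the discriminant-analysis step followed by a 1-NN classifier, a confusion matrix and ROC AUC.

```python
# Minimal sketch of the discriminant-analysis + 1-NN classification stage (simulated features).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X_benign = rng.normal(0.0, 1.0, size=(26 * 2, 10))     # 2 ROIs per benign patient, 10 features
X_malig = rng.normal(0.8, 1.0, size=(44 * 2, 10))      # 2 ROIs per malignant patient
X = np.vstack([X_benign, X_malig])
y = np.array([0] * len(X_benign) + [1] * len(X_malig))

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis(n_components=1),
                    KNeighborsClassifier(n_neighbors=1))
y_pred = cross_val_predict(clf, X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("accuracy:", (tp + tn) / len(y), "AUC (hard labels):", roc_auc_score(y, y_pred))
```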
Procedia PDF Downloads 281177 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System
Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii
Abstract:
Today, dairy farm experts and farmers well recognize the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production and manage the feeding system, serve as an indicator of health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS measurements are made by animal experts or trained technicians based on visual observations focusing on the pin bones, the pin, thurl and hook area, tail head shape, hook angles, and the short and long ribs. Since the traditional technique is very manual and subjective, it can lead to inconsistent scores and is not cost effective. Thus, this paper proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region using image processing techniques. Specifically, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition score depends mainly on the shape structure of these regions. Therefore, the second module investigates some algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is applied to a subset of anatomical points to produce the regression coefficients, which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained using a Markov classification process to assign a BCS to individual cows. The assigned BCS values are then revised using a multiple regression method to produce the final BCS score for each dairy cow. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the milk rotary parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vectors for each individual cow, and then the multiple regression calculator and the Markov chain classification process are utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%. Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression
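As an illustration of the geometric feature step (with made-up landmark coordinates, not the authors' data), the sketch below fits a second-order polynomial to a subset of anatomical points and computes the angle at one landmark; both quantities feed a feature vector of the kind described above.

```python
# Minimal sketch: polynomial regression coefficients + landmark angle as shape features.
import numpy as np

# Hypothetical (x, y) pixel coordinates of anatomical points along the hook/pin region
points = np.array([[10, 52], [18, 47], [26, 44], [34, 45], [42, 49], [50, 56]], float)

# Second-order polynomial regression y = a*x^2 + b*x + c over the landmark curve
a, b, c = np.polyfit(points[:, 0], points[:, 1], deg=2)

def angle_at(p_prev, p, p_next):
    """Angle (degrees) at landmark p formed by its two neighbouring landmarks."""
    v1, v2 = p_prev - p, p_next - p
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

hook_angle = angle_at(points[1], points[2], points[3])
feature_vector = np.array([a, b, c, hook_angle])
print("regression coefficients:", np.round([a, b, c], 4), "hook angle:", round(hook_angle, 1))
```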
Procedia PDF Downloads 162176 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consuming permeable drugs during that sensitive period may lead to serious side effects in the infant. Due to the ethical restrictions of drug testing on humans, especially women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the M/P ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value according to the Malone classification: 1) drugs with M/P>1, which are considered high risk, and 2) drugs with M/P≤1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Subsequently, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction. The mentioned descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (N=147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed in order to obtain a model to predict the penetration class of compounds. Only structural descriptors with p-value<0.1 remained in the final model. Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse alongside fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA and TSA above 5, 90 and 25, respectively, are less permeable to milk because they are less soluble in the milk fat. The pH of milk is acidic, and because of that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may have lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can lead to a faster drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are also eligible to do that. Keywords: logistic regression, breastfeeding, descriptors, penetration
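The minimal sketch below (with invented descriptor values, not the study dataset) shows one binary logistic regression of the kind described above: predicting the penetration class from the number of hydrogen bond acceptors, PSA and TSA using scikit-learn.

```python
# Minimal sketch of a descriptor-based logistic regression for the M/P class (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 160
hba = rng.integers(0, 12, n)                   # hydrogen bond acceptors
psa = rng.uniform(10, 180, n)                  # polar surface area (A^2)
tsa = rng.uniform(5, 60, n)                    # total surface area (arbitrary units)
X = np.column_stack([hba, psa, tsa])

# Illustrative labels following the trend in the abstract: larger/more polar molecules
# are less likely to concentrate in milk (class 1 = M/P > 1).
logit = -0.4 * (hba - 5) - 0.02 * (psa - 90) - 0.05 * (tsa - 25)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients (HBA, PSA, TSA):", np.round(model.coef_[0], 3))
print("training accuracy:", round(model.score(X, y), 3))
```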
Procedia PDF Downloads 74175 Impact of 6-Week Brain Endurance Training on Cognitive and Cycling Performance in Highly Trained Individuals
Authors: W. Staiano, S. Marcora
Abstract:
Introduction: It has been proposed that the acute negative effects of mental fatigue (MF) could become a training stimulus for the brain (brain endurance training, BET), allowing it to adapt and improve its ability to attenuate MF states during sport competitions. Purpose: The aim of this study was to test the efficacy of 6 weeks of BET on cognitive and cycling tests in a group of well-trained subjects. We hypothesised that the combination of BET and standard physical training (SPT) would increase cognitive capacity and cycling performance, by reducing the rating of perceived exertion (RPE) and increasing resilience to fatigue, more than SPT alone. Methods: In a randomized controlled trial design, 26 well-trained participants, after a familiarization session, cycled to exhaustion (TTE) at 80% peak power output (PPO) and, after 90 min rest, at 65% PPO, before and after random allocation to 6 weeks of BET or an active placebo control. Cognitive performance was measured using 30 min of a Stroop colour task performed before the cycling performance. During the training, the BET group performed a series of cognitive tasks for a total of 30 sessions (5 sessions per week), with the duration increasing from 30 to 60 min per session. The placebo group engaged in breathing relaxation training. Both groups were monitored for physical training and were naïve to the purpose of the study. The physiological and perceptual parameters heart rate, lactate (LA) and RPE were recorded during the cycling performances, while subjective workload (NASA TLX scale) was measured during the training. Results: Group (BET vs. placebo) x Test (pre-test vs. post-test) mixed-model ANOVAs revealed a significant interaction for performance at 80% PPO (p = .038) and at 65% PPO (p = .011). In both tests, both groups improved their TTE performance; however, the BET group improved significantly more than placebo. No significant differences were found for heart rate during the TTE cycling tests. LA did not change significantly at rest in either group; however, at completion of the 65% TTE it was significantly higher (p = 0.043) in the placebo condition compared to BET. RPE measured at iso-time was significantly lower in BET (80% PPO, p = 0.041; 65% PPO, p = 0.021) compared to placebo. Cognitive results on the Stroop task showed that reaction time decreased at post-test in both groups; however, BET decreased significantly more (p = 0.01) than placebo, despite no differences in accuracy. During the training sessions, participants in BET consistently reported significantly higher (p < 0.01) mental demand on the NASA TLX questionnaires compared to placebo, with no significant differences in physical demand. Conclusion: The results of this study provide evidence that combining BET and SPT is more effective than SPT alone in increasing cognitive and cycling performance in well-trained endurance participants. The cognitive overload produced during the 6 weeks of BET can induce a reduction in the perception of effort at a given power output, and thus improve cycling performance. Moreover, it provides evidence that including neurocognitive interventions will benefit athletes by increasing their mental resilience, without affecting their physical training load and routine. Keywords: cognitive training, perception of effort, endurance performance, neuro-performance
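A minimal sketch of the Group x Test mixed-model ANOVA described above, using simulated times-to-exhaustion (not the study data) and the pingouin package; the column names and effect sizes are placeholders.

```python
# Minimal sketch of a Group (BET vs. placebo) x Test (pre vs. post) mixed ANOVA (simulated data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
n_per_group = 13
rows = []
for group, post_gain in [("BET", 120.0), ("placebo", 40.0)]:   # BET assumed to improve TTE more (s)
    for s in range(n_per_group):
        base = rng.normal(600, 60)                             # pre-test TTE at 80% PPO (s)
        rows.append({"subject": f"{group}_{s}", "group": group, "test": "pre", "tte": base})
        rows.append({"subject": f"{group}_{s}", "group": group, "test": "post",
                     "tte": base + post_gain + rng.normal(0, 30)})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="tte", within="test", subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```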
Procedia PDF Downloads 121174 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems and aims to determine the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm that has been commonly used in computer vision (CV) problems has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of the pretrained language model (PTM), which contains language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (Named-Entity Recognition), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next sentence prediction tasks that allow bidirectional training of the transformers. During the training phase of the architecture, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model. Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
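The sketch below illustrates only the traditional SVM baseline described above (a bag of word n-grams fed to a linear SVM) on a handful of toy English comments; the company corpus and the fine-tuned BERT/MUSE models are not reproduced here.

```python
# Minimal sketch of the bag-of-n-grams + SVM baseline (toy comments, not the 76,000-comment corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["great service, very satisfied", "terrible support, waited for hours",
               "delivery was fine", "the product broke after one day",
               "really helpful staff", "nothing special, it works"]
train_labels = ["positive", "negative", "neutral", "negative", "positive", "neutral"]

svm = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
svm.fit(train_texts, train_labels)
print(svm.predict(["support was terrible", "works fine, nothing special"]))
```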
Procedia PDF Downloads 148173 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies. Keywords: DNA encoding, machine learning, Fourier transform, Fourier transformation
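The minimal sketch below (not the authors' pipeline) shows two of the encodings compared above: one-hot encoding of a DNA fragment and a Fourier-based encoding obtained by mapping bases to numbers and keeping the magnitudes of the first FFT coefficients. The base-to-number mapping is an assumption for illustration.

```python
# Minimal sketch of one-hot and Fourier-based DNA sequence encodings (toy fragment).
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """(len(seq), 4) binary matrix; unknown bases (e.g. N) become all-zero rows."""
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            m[i, BASES.index(b)] = 1.0
    return m

def fourier_features(seq, n_coeffs=16):
    """Magnitudes of the first FFT coefficients of a numeric base mapping."""
    mapping = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}     # assumed mapping, for illustration
    x = np.array([mapping.get(b, 0.0) for b in seq.upper()])
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    return spectrum[:n_coeffs]

seq = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCTTAACACATGCAAGTCGAACG"  # toy 16S-like fragment
print(one_hot(seq).shape)            # (sequence length, 4)
print(fourier_features(seq).round(2))
```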
Procedia PDF Downloads 28172 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. The data are often not linearly separable; SVMs are able to map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, an SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrate that parameter selection can orient the search within a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data compared to the other classifiers. Keywords: classification, airborne LiDAR, parameters selection, support vector machine
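As an illustration of the parameter-selection step described above, the sketch below runs a grid search over (C, γ) for an RBF-kernel SVM with 5-fold cross-validation on overall accuracy; the per-cluster features are synthetic placeholders, not the Vaihingen data.

```python
# Minimal sketch of (C, gamma) grid search with 5-fold cross-validation (synthetic features).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 1.0, size=(80, 6)) for c in range(4)])   # 4 classes, 6 features
y = np.repeat(np.arange(4), 80)

param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1, 10]}
grid = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                    param_grid, cv=5, scoring="accuracy")
grid.fit(X, y)
print("best (C, gamma):", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))
```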
Procedia PDF Downloads 149171 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission
Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan
Abstract:
As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrust, solar sails and other micro-thrusters. However, these systems still have some shortcomings. The ion thruster requires a high-voltage or magnetic field to accelerate the propellant, resulting in extra subsystems, added mass and large volume. Laser thrust is now mostly ground-based and provides pulsed thrust, constrained by the distribution of ground stations and the capacity of the laser. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film will provide thrusts of roughly 0.02-5 uN/cm2. With this repulsive force, radiation is able to serve as a propulsion source. With the advantages of low system mass, high accuracy and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of thrust to the released energy and decay coefficient is set up. With this initial formula, the alpha-emitting elements with half-lives longer than a hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With the residual charge or an external electromagnetic field, the emission of alpha particles behaves differently and is analyzed in this paper. Furthermore, three more complex situations are discussed: a radioactive element generating alpha particles at several energies with different intensities, a mixture of various radioactive elements, and cascaded alpha decay. Combining these offers a more efficient and flexible way to adjust the thrust amplitude. The propulsion model for spontaneous fission is similar to that of alpha decay, with a more complex angular distribution. A new quasi-spherical space propulsion system based on the radiation thrust is introduced, as well as the collecting and processing system for excess charge and reaction heat. The energy and spatial angular distributions of emitted alpha particles over a unit area and for a certain propulsion system have been studied. As alpha particles easily lose energy and are self-absorbed, the distribution is not a simple stacking of each nuclide. With changes in the amplitude and angle of the radiation thrust, an orbital variation strategy for space debris removal is shown and optimized. Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster
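A rough order-of-magnitude sketch (not the authors' formula) of the thrust produced by alpha emission from a thin source layer: each alpha of kinetic energy E carries momentum p = sqrt(2mE); assuming half of the decays emit into the half-space above the film with a cosine-averaged normal component of 1/2, the thrust per unit area is roughly F/A = R·p/4 for an areal decay rate R. The activity used below is a hypothetical illustrative value.

```python
# Rough order-of-magnitude sketch of alpha-emission thrust per unit area (assumed geometry factor 1/4).
import math

MEV = 1.602176634e-13          # J per MeV
M_ALPHA = 6.6446573e-27        # alpha particle mass (kg)

def thrust_per_cm2(areal_activity_bq_per_cm2, alpha_energy_mev):
    p = math.sqrt(2.0 * M_ALPHA * alpha_energy_mev * MEV)   # momentum per alpha (kg m/s)
    return areal_activity_bq_per_cm2 * p / 4.0              # N per cm^2

# Hypothetical source: 1e12 decays per second per cm^2 of a 5.4 MeV alpha emitter
F = thrust_per_cm2(1e12, 5.4)
print(f"thrust ~ {F * 1e6:.3f} uN/cm^2")                    # ~0.03 uN/cm^2, within the quoted range
```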
Procedia PDF Downloads 209