Search results for: high precision
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19887

19647 Automated Pothole Detection Using Convolutional Neural Networks and 3D Reconstruction Using Stereovision

Authors: Eshta Ranyal, Kamal Jain, Vikrant Ranyal

Abstract:

Potholes are a severe threat to road safety and a major contributing factor to road distress. In the Indian context, they are a major road hazard. Timely detection of potholes and subsequent repair can prevent roads from deteriorating. To assist roadway authorities in the timely detection and repair of potholes, we propose a pothole detection methodology using convolutional neural networks. The YOLOv3 model is used as it is fast and accurate in comparison to other state-of-the-art models. You Only Look Once v3 (YOLOv3) is a state-of-the-art, real-time object detection system that features multi-scale detection. A mean average precision (mAP) of 73% was obtained on a training dataset of 200 images. The dataset was then increased to 500 images, resulting in an increase in mAP. We further calculated the depth of the potholes using stereoscopic vision through 3D reconstruction of the potholes. This enables calculation of pothole volume and extent, which can then be used to classify pothole severity as low, moderate, or high.
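The depth-from-stereo step above can be sketched with the standard rectified-stereo relation Z = f·B/d. This is a minimal illustration, not the paper's pipeline: the focal length, baseline, disparities, and severity thresholds below are assumed placeholder values.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth for a calibrated, rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def pothole_volume(depths_below_road_m, pixel_area_m2):
    """Approximate volume: per-pixel depth below the road plane times
    the ground area covered by one pixel, summed over the pothole."""
    return sum(max(d, 0.0) * pixel_area_m2 for d in depths_below_road_m)

def severity(max_depth_m):
    """Hypothetical depth thresholds for the low/moderate/high classes."""
    if max_depth_m < 0.025:
        return "low"
    if max_depth_m < 0.05:
        return "moderate"
    return "high"

# illustrative camera: f = 700 px, baseline 0.12 m; road plane at Z = 1.5 m
road_z = depth_from_disparity(700.0, 0.12, 56.0)   # about 1.5 m
depths = [depth_from_disparity(700.0, 0.12, d) - road_z for d in (54.0, 52.0)]
```

In a real system the disparity map would come from stereo matching over the pothole region detected by YOLOv3, and the road plane would be fitted from the surrounding surface.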

Keywords: CNN, pothole detection, pothole severity, YOLO, stereovision

Procedia PDF Downloads 110
19646 Indoor Localization Algorithm and Appropriate Implementation Using Wireless Sensor Networks

Authors: Adeniran K. Ademuwagun, Alastair Allen

Abstract:

The dependence of RSS on distance in an enclosed environment is an important consideration because it can influence the reliability of any localization algorithm founded on RSS. Several algorithms effectively reduce the variance of RSS to improve localization accuracy. Our proposed algorithm essentially avoids this pitfall and is consequently highly adaptable in the face of erratic radio signals. Using 3 anchors in close proximity to each other, we are able to establish that RSS can be used as a reliable indicator for localization with an acceptable degree of accuracy. Inherent in this concept is the ability of each prospective anchor to validate (guarantee) the position or proximity of the other 2 anchors involved in the localization, and vice versa. This procedure ensures that the uncertainties of radio signals due to multipath effects in enclosed environments are minimized. A major driver of this idea is the implicit topological relationship among sensors due to raw radio signal strength. The algorithm is an area-based algorithm; however, it does not trade accuracy for precision (i.e., the size of the returned area).
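As a stand-in for the three-anchor idea above, a minimal RSS weighted-centroid estimator is sketched below. The anchor coordinates, log-distance path-loss parameters, and RSS values are illustrative assumptions, not the paper's algorithm or data.

```python
def rss_to_weight(rss_dbm):
    """Map RSS to a weight: stronger (less negative) RSS implies a
    shorter estimated distance under a log-distance path-loss model,
    hence a larger weight. tx_power and exponent n are assumed."""
    tx_power, n = -40.0, 2.0
    dist = 10 ** ((tx_power - rss_dbm) / (10 * n))
    return 1.0 / dist

def weighted_centroid(anchors, rss_values):
    """Position estimate as the RSS-weighted centroid of the anchors."""
    w = [rss_to_weight(r) for r in rss_values]
    total = sum(w)
    x = sum(wi * a[0] for wi, a in zip(w, anchors)) / total
    y = sum(wi * a[1] for wi, a in zip(w, anchors)) / total
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]   # three nearby anchors
est = weighted_centroid(anchors, [-50.0, -50.0, -50.0])
```

With equal RSS at all three anchors the estimate falls at the plain centroid, which is the intuition behind area-based schemes of this kind.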

Keywords: anchor nodes, centroid algorithm, communication graph, radio signal strength

Procedia PDF Downloads 473
19645 NanoFrazor Lithography for Advanced 2D and 3D Nanodevices

Authors: Zhengming Wu

Abstract:

NanoFrazor lithography systems were developed as a first true alternative or extension to standard mask-less nanolithography methods such as electron beam lithography (EBL). In contrast to EBL, they are based on thermal scanning probe lithography (t-SPL). Here, a heatable ultra-sharp probe tip with an apex of a few nm is used for patterning and simultaneously inspecting complex nanostructures. The heat impact of the probe on a thermally responsive resist generates these high-resolution nanostructures. The patterning depth of each individual pixel can be controlled with better than 1 nm precision using an integrated in-situ metrology method. Furthermore, the inherent imaging capability of the NanoFrazor technology allows for markerless overlay, which has been achieved with sub-5 nm accuracy, and it supports stitching layout sections together with < 10 nm error. Pattern transfer from such resist features below 10 nm resolution was demonstrated. The technology has proven its value as an enabler of new kinds of ultra-high-resolution nanodevices as well as a means of improving the performance of existing device concepts. The application range for this new nanolithography technique is very broad, spanning from ultra-high-resolution 2D and 3D patterning to chemical and physical modification of matter at the nanoscale. Nanometer-precise markerless overlay and non-invasiveness to sensitive materials are among the key strengths of the technology. However, while patterning below 10 nm resolution is achieved, significantly increasing the patterning speed at the expense of resolution is not feasible using the heated tip alone. Towards this end, an integrated laser write head for direct laser sublimation (DLS) of the thermal resist has been introduced for significantly faster patterning of micrometer- to millimeter-scale features.
Remarkably, the areas patterned by the tip and the laser are seamlessly stitched together, and both processes work on the very same resist material, enabling a true mix-and-match process with no development or other processing steps in between. The presentation will include examples of (i) high-quality metal contacting of 2D materials, (ii) tuning photonic molecules, (iii) generating nanofluidic devices, and (iv) generating spintronic circuits. Some of these applications have been enabled only by the various unique capabilities of NanoFrazor lithography, such as the absence of damage from a charged particle beam.

Keywords: nanofabrication, grayscale lithography, 2D materials device, nano-optics, photonics, spintronic circuits

Procedia PDF Downloads 49
19644 Assessment of Highly Sensitive Dielectric Modulated GaN-FinFET for Label-Free Biosensing Applications

Authors: Ajay Kumar, Neha Gupta

Abstract:

This work presents the sensitivity assessment of a Gallium Nitride (GaN) material-based FinFET by dielectric modulation in the nanocavity gap for label-free biosensing applications. Significant deflection is observed in electrical characteristics such as drain current (ID), transconductance (gm), surface potential, energy band profile, electric field, sub-threshold slope (SS), and threshold voltage (Vth) in the presence of biomolecules owing to the GaN material. Further, the device sensitivity is evaluated to identify the effectiveness of the proposed biosensor and its capability to detect biomolecules with high precision and accuracy. Higher sensitivity is observed for Gelatin (k=12) in terms of on-current (SION), threshold voltage (SVth), and switching ratio (SSR), by 104.88%, 82.12%, and 119.73%, respectively. This work is performed using the powerful 3D Sentaurus TCAD tool with a well-calibrated structure. All the results pave the way for the GaN-FinFET as a viable candidate for label-free dielectric-modulated biosensor applications.
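The percentage sensitivities quoted above are commonly defined for dielectric-modulated biosensors as the relative shift of a device parameter when the cavity is filled with biomolecules versus air. A minimal sketch under that assumed definition follows; the device values are placeholders, not the paper's TCAD output.

```python
def sensitivity_pct(value_bio, value_air):
    """Assumed metric: S_X = |X(bio) - X(air)| / |X(air)| * 100 %."""
    return abs(value_bio - value_air) / abs(value_air) * 100.0

# hypothetical on-current (A) and threshold voltage (V), cavity filled
# with a k=12 biomolecule vs. air (k=1)
s_ion = sensitivity_pct(2.05e-4, 1.0e-4)   # on-current sensitivity, %
s_vth = sensitivity_pct(0.27, 0.15)        # threshold-voltage sensitivity, %
```

Reported figures such as 104.88% for SION would correspond to the on-current roughly doubling when biomolecules occupy the nanocavity.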

Keywords: biosensor, biomolecules, FinFET, sensitivity

Procedia PDF Downloads 159
19643 Development and Validation of High-Performance Liquid Chromatography Method for the Determination and Pharmacokinetic Study of Linagliptin in Rat Plasma

Authors: Hoda Mahgoub, Abeer Hanafy

Abstract:

Linagliptin (LNG) belongs to the dipeptidyl-peptidase-4 (DPP-4) inhibitor class. DPP-4 inhibitors represent a new therapeutic approach for the treatment of type 2 diabetes in adults. The aim of this work was to develop and validate an accurate and reproducible HPLC method for the determination of LNG with high sensitivity in rat plasma. The method involved separation of both LNG and pindolol (internal standard) at ambient temperature on a Zorbax Eclipse XDB C18 column with a mobile phase composed of methanol and 0.1% formic acid (75:25, v/v; pH 4.1) at a flow rate of 1.0 mL.min-1. UV detection was performed at 254 nm. The method was validated in compliance with ICH guidelines and found to be linear in the range of 5–1000 ng.mL-1. The limit of quantification (LOQ) was found to be 5 ng.mL-1 based on 100 µL of plasma. The variations for intra- and inter-assay precision were less than 10%, and the accuracy values ranged between 93.3% and 102.5%. The extraction recovery (R%) was more than 83%. The method involved a single extraction step on a very small plasma volume (100 µL). The assay was successfully applied to an in-vivo pharmacokinetic study of LNG in rats administered a single oral dose of 10 mg.kg-1 LNG. The maximum concentration (Cmax) was found to be 927.5 ± 23.9 ng.mL-1. The area under the plasma concentration-time curve (AUC0-72) was 18285.02 ± 605.76 h.ng.mL-1. In conclusion, the good accuracy and low LOQ of the bioanalytical HPLC method were suitable for monitoring the full pharmacokinetic profile of LNG in rats. The main advantages of the method were its sensitivity, small sample volume, single-step extraction procedure, and short time of analysis.
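The Cmax and AUC parameters reported above follow from non-compartmental analysis: Cmax is read directly from the concentration-time profile, and AUC(0-t) is typically computed by the linear trapezoidal rule. A sketch under those assumptions, with made-up concentration-time points rather than the rat data:

```python
def cmax_tmax(times_h, conc_ng_ml):
    """Peak concentration and the time at which it occurs."""
    c = max(conc_ng_ml)
    return c, times_h[conc_ng_ml.index(c)]

def auc_trapezoidal(times_h, conc_ng_ml):
    """AUC(0-t) by the linear trapezoidal rule, in ng.h/mL."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times_h, times_h[1:],
                                         conc_ng_ml, conc_ng_ml[1:]))

# illustrative profile (h, ng/mL), not the study's measurements
t = [0.0, 1.0, 2.0, 4.0, 8.0]
c = [0.0, 600.0, 900.0, 500.0, 100.0]
cmax, tmax = cmax_tmax(t, c)
auc = auc_trapezoidal(t, c)
```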

Keywords: HPLC, linagliptin, pharmacokinetic study, rat plasma

Procedia PDF Downloads 219
19642 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde: Application to a Study in Chronically Ciprofloxacin-Treated Rats

Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the in-vivo determination of malondialdehyde in rats as the malondialdehyde-thiobarbituric acid complex (MDA-TBA). The HPLC-UV method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH, followed by changes in the percentage of organic phase. The optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile (80:20, v/v). The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, calculated from the standard deviation of the response and the slope of the calibration curve, were 110 ng/ml and 363 ng/ml, respectively. The method was linear for MDA spiked in plasma and subjected to derivatization at concentrations ranging from 100 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviation for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6%, respectively. The HPLC method was applied to monitoring MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days. Results were compared with findings in control-group rats. Mean peak areas of both study groups were subjected to an unpaired Student's t-test. The p-value was < 0.001, indicating significant results and suggesting increased MDA levels in rats subjected to 21 days of chronic CFL treatment.
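The "standard deviation and slope of calibration curve" phrasing above points at the ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S. A one-line sketch of each, with illustrative σ and slope values rather than the paper's raw calibration numbers:

```python
def lod(sigma, slope):
    """ICH Q2-style limit of detection: 3.3 * sigma / S."""
    return 3.3 * sigma / slope

def loq(sigma, slope):
    """ICH Q2-style limit of quantification: 10 * sigma / S."""
    return 10.0 * sigma / slope

# hypothetical response standard deviation and calibration slope
sigma_resp, slope_cal = 33.0, 1.0
lod_est = lod(sigma_resp, slope_cal)   # ~109 ng/ml for these inputs
loq_est = loq(sigma_resp, slope_cal)   # ~330 ng/ml for these inputs
```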

Keywords: MDA, TBA, ciprofloxacin, HPLC-UV

Procedia PDF Downloads 295
19641 Investigation of a New Approach "AGM" for Solving Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Sciences

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

In this conference paper, our aims are accuracy, capability, and power in solving complicated nonlinear partial differential equations. Our purpose is to enhance the ability to solve such nonlinear differential equations in basic science and engineering fields, and similar problems, with a simple and innovative approach. As we know, most engineering system behavior in practice is nonlinear, and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; for example, some fluid and gas wave problems cannot be solved with numerical methods because they lack boundary conditions. Accordingly, in this symposium we present an innovative approach, which we have named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and a simple solution; this is demonstrated by comparing the achieved solutions with a numerical method (4th-order Runge-Kutta). Eventually, it will be argued that the AGM method could bring a substantial advance for researchers, professors, and students worldwide, because with the AGM coding system one can analytically solve complicated linear and nonlinear partial differential equations, removing much of the difficulty of solving nonlinear differential equations. The advantages and abilities of this method (AGM) are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) With this method, most of the time, equations can be solved for any number of boundary or initial conditions without any dimensionless procedure. (c) The AGM method is always convergent for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms in the nonlinear differential equation require no Taylor expansion with the AGM method, which yields high solution precision. (e) The AGM method is very flexible in its coding system and can easily solve varieties of nonlinear differential equations with high, acceptable accuracy. (f) One of the important advantages of this method is analytical solving, with high accuracy, of problems such as partial differential equations for vibration in solids and waves in water and gas, with minimal initial and boundary conditions. (g) It is very important to present a general and simple approach for solving most differential-equation problems with high nonlinearity in the engineering sciences, especially civil engineering, and to compare the output with a numerical method (4th-order Runge-Kutta) and exact solutions.
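The abstract benchmarks AGM against 4th-order Runge-Kutta. AGM itself is not specified here, but the reference solver is standard; a minimal RK4 sketch follows, with the test equation y' = -y (exact solution e^{-t}) chosen for illustration rather than taken from the paper.

```python
import math

def rk4(f, y0, t0, t1, steps):
    """Classical 4th-order Runge-Kutta for a scalar ODE y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# integrate y' = -y from t=0 to t=1 with y(0)=1; exact answer is e^{-1}
y_num = rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
err = abs(y_num - math.exp(-1.0))
```

A candidate analytical solution (from AGM or any other method) would be validated the same way: evaluate it on a grid and compare against the RK4 trajectory.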

Keywords: new approach, AGM, sets of coupled nonlinear differential equation, exact solutions, numerical

Procedia PDF Downloads 423
19640 A Coupling Study of Public Service Facilities and Land Price Based on Big Data Perspective in Wuxi City

Authors: Sisi Xia, Dezhuan Tao, Junyan Yang, Weiting Xiong

Abstract:

Against the background of Chinese urbanization shifting from incremental development to stock development, the completeness of urban public service facilities is essential to urban spatial quality. As public service facilities form a huge and complicated system, clarifying how their various internal types relate to market land prices is key to optimizing the spatial layout. This paper takes Wuxi City as a representative sample location and establishes a digital analysis platform using urban land price data and several high-precision big data acquisition methods. On this basis, it analyzes the coupling relationship between different public service categories and land price, summarizing the coupling patterns of urban public facility distribution and urban land price fluctuations. Finally, the internal mechanism within each of the two elements is explored, providing a reference for the optimal layout of urban planning and public service facilities.

Keywords: public service facilities, land price, urban spatial morphology, big data

Procedia PDF Downloads 174
19639 Investigation of the Unbiased Characteristic of Doppler Frequency to Different Antenna Array Geometries

Authors: Somayeh Komeylian

Abstract:

Array signal processing techniques have recently been developed for a variety of applications that enhance receiver performance by restraining the power of jamming and interference signals. In this scenario, biases induced in the antenna array receiver significantly degrade the accurate estimation of the carrier phase. Because the integral of frequency yields the carrier phase, we have obtained the unbiased Doppler frequency for high-precision estimation of the carrier phase. The unbiased character of the Doppler frequency with respect to jamming power and other interference signals allows a highly accurate estimation of the carrier phase. In this study, we have rigorously investigated the unbiased character of the Doppler frequency under variation of the antenna array geometry. The simulation results verify that the Doppler frequency remains unbiased and accurate under variation of the antenna array geometry.

Keywords: array signal processing, unbiased doppler frequency, GNSS, carrier phase, and slowly fluctuating point target

Procedia PDF Downloads 129
19638 Study on the Process of Detumbling a Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, for almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. So it is necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, for it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on estimation of the instantaneous angular velocity of the target has been presented. But the calculation of the control torque produced by the laser, which is very important in a detumbling operation, is not accurate enough, for the method used is only suitable for plane or regularly shaped targets, and it does not consider the influence of irregular shape and the size of the spot. In this paper, based on a triangulation reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation by the laser, and we verify its accuracy by theoretical formula calculation and an impulse measurement experiment. Then we use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.
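A back-of-envelope version of the detumbling budget can be sketched by assuming each pulse delivers an ablation impulse J = Cm·E (Cm: momentum coupling coefficient) at a lever arm r about the spin axis, so the pulses needed scale as I·ω divided by the per-pulse angular impulse. All numbers below are illustrative assumptions, not the paper's Bennu case or its surface-triangulation model.

```python
import math

def pulses_to_detumble(omega_rad_s, inertia_kg_m2, pulse_energy_j,
                       coupling_n_s_per_j, lever_arm_m):
    """Pulses to cancel angular momentum L = I * omega, assuming each
    pulse removes coupling * energy * lever_arm of angular impulse."""
    torque_impulse = coupling_n_s_per_j * pulse_energy_j * lever_arm_m
    return math.ceil(inertia_kg_m2 * omega_rad_s / torque_impulse)

# hypothetical small-debris case: 0.1 rad/s spin, 500 kg.m^2 inertia,
# 100 J pulses, Cm = 5e-5 N.s/J, spot 0.5 m from the spin axis
n = pulses_to_detumble(omega_rad_s=0.1, inertia_kg_m2=500.0,
                       pulse_energy_j=100.0, coupling_n_s_per_j=5e-5,
                       lever_arm_m=0.5)
```

The dependence on `lever_arm_m` mirrors the abstract's observation that detumbling speed depends on the spot-to-centroid distance.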

Keywords: detumbling, laser ablation drive, space target, space debris removal

Procedia PDF Downloads 54
19637 Measuring the Gas-to-Dust Ratio Towards Bright Sources in the Galactic Bulge

Authors: Jun Yang, Norbert Schulz, Claude Canizares

Abstract:

Knowing the dust content of the interstellar matter is necessary to understand the composition and evolution of the interstellar medium (ISM). The metal composition of the ISM enables us to study the cooling and heating processes that dominate star formation rates in our Galaxy. The Chandra High Energy Transmission Grating (HETG) Spectrometer provides a unique opportunity to measure elemental dust compositions through X-ray edge absorption structure. We measure gas-to-dust optical depth ratios towards 9 bright Low-Mass X-ray Binaries (LMXBs) in the Galactic Bulge with the highest precision so far. Well-calibrated and pile-up free optical depths are measured with the HETG spectrometer with respect to broadband hydrogen-equivalent absorption in the bright LMXBs 4U 1636-53, Ser X-1, GX 3+1, 4U 1728-34, 4U 1705-44, GX 340+0, GX 13+1, GX 5-1, and GX 349+2. From the optical depth results, we deduce gas-to-dust ratios for various silicates in the ISM and present our results for the Si K edge in different lines of sight towards the Galactic Bulge.

Keywords: low-mass X-ray binaries, interstellar medium, gas to dust ratio, spectrometer

Procedia PDF Downloads 115
19636 Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor

Authors: Yash Jain

Abstract:

The US Center for Disease Control has recommended wearing masks to slow the spread of the virus. This research uses a video feed from a camera to conduct real-time classification of whether a human is correctly wearing a mask, incorrectly wearing a mask, or not wearing a mask at all. A mask detection network was trained using two distinct datasets from the open-source website Kaggle. The first dataset used to train the model was titled 'Face Mask Detection'; the second, titled 'Face Mask Dataset (YOLO Format)', provided the data in a format on which the TinyYoloV3 model could be trained. Based on the data from Kaggle, two machine learning models were implemented and trained: a TinyYoloV3 real-time model and a two-stage neural network classifier. The two-stage neural network classifier has a first step that identifies distinct faces within the image and a second step that classifies the state of the mask on each face: worn correctly, worn incorrectly, or no mask at all. TinyYoloV3 was used for the live feed as well as for comparison against the two-stage classifier, and was trained using the Darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the model trained using TinyYoloV3 real-time detection had a mean average precision (mAP) of 59%. Overall, both models were able to correctly classify the no-mask, mask, and incorrectly worn-mask scenarios.
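The two-stage composition described above (detect faces, then classify each crop) can be shown structurally with trivial stand-ins. The `detect_faces` and `classify_mask` functions below are hypothetical placeholders for the trained networks; only the pipeline shape reflects the abstract.

```python
def detect_faces(frame):
    """Stage 1 stand-in: a real system would run a face detector on the
    image and return bounding boxes (x, y, w, h)."""
    return frame["faces"]

def classify_mask(frame, box):
    """Stage 2 stand-in: a real system would run a CNN on the face crop
    and return one of 'mask', 'incorrect', 'no_mask'."""
    return frame["labels"][box]

def mask_monitor(frame):
    """Compose the two stages: one (box, label) pair per detected face."""
    return [(box, classify_mask(frame, box)) for box in detect_faces(frame)]

# a mock frame standing in for one video frame with two annotated faces
frame = {"faces": [(0, 0, 64, 64), (100, 0, 64, 64)],
         "labels": {(0, 0, 64, 64): "mask", (100, 0, 64, 64): "no_mask"}}
results = mask_monitor(frame)
```

A single-stage detector such as TinyYoloV3 collapses both steps into one network that emits boxes and class labels together, which is the trade-off the abstract compares.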

Keywords: datasets, classifier, mask-detection, real-time, TinyYoloV3, two-stage neural network classifier

Procedia PDF Downloads 127
19635 Cranioplasty with Custom Implant Realized Using 3D Printing Technology

Authors: Trad Khodja Rafik, Mahtout Amine, Ghoul Rachid, Benbouali Amine, Boulahlib Amine, Hariza Abdelmalik

Abstract:

Cranioplasty is a surgical act that aims at restoring cranial bone losses in order to protect the brain from external aggressions and to improve the patient's aesthetic appearance. This objective can be achieved by taking advantage of current technological developments in computer science and biomechanics. The objective of this paper is to present an approach for the realization of high-precision biocompatible cranial implants using new 3D printing technologies at the lowest cost. The proposed method reproduces the missing part of the skull by referring to its healthy contralateral part. Once the model is validated by the neurosurgeons, a mold is 3D printed for the production of a biocompatible implant in Poly-Methyl-Methacrylate (PMMA) acrylic cement. Four patients underwent this procedure, with excellent aesthetic results.
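The contralateral-mirroring step can be sketched as a reflection of surface points from the healthy side across the mid-sagittal plane. This is a geometric illustration only; the plane position and points are assumed, and a real workflow operates on a segmented CT mesh.

```python
def mirror_sagittal(points, x0=0.0):
    """Reflect (x, y, z) points across the mid-sagittal plane x = x0,
    mapping the healthy side onto the defect side."""
    return [(2 * x0 - x, y, z) for x, y, z in points]

# illustrative surface points (mm) sampled on the healthy side
healthy = [(3.0, 1.0, 2.0), (4.5, -1.0, 0.5)]
implant_model = mirror_sagittal(healthy, x0=0.0)
```

The mirrored point set would then be trimmed to the defect boundary and used to 3D print the mold for the PMMA implant.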

Keywords: cranioplasty, cranial bone loss, 3D printing technology, custom-made implants, PMMA

Procedia PDF Downloads 80
19634 Environmental Controls on the Distribution of Intertidal Foraminifers in Sabkha Al-Kharrar, Saudi Arabia: Implications for Sea-Level Changes

Authors: Talha A. Al-Dubai, Rashad A. Bantan, Ramadan H. Abu-Zied, Brian G. Jones, Aaid G. Al-Zubieri

Abstract:

Contemporary foraminiferal sediment samples were collected from the intertidal sabkha of Al-Kharrar Lagoon, Saudi Arabia, to study the vertical distribution of Foraminifera and, based on a modern training set, their potential for developing a predictor of former sea-level changes in the area. Based on hierarchical cluster analysis, the intertidal sabkha is divided into three vertical zones (A, B, and C) represented by three foraminiferal assemblages, where agglutinated species occupy Zone A and calcareous species occupy the other two zones. In Zone A (high intertidal), Agglutinella compressa, Clavulina angularis, and C. multicamerata are the dominant species, with a minor presence of Peneroplis planatus, Coscinospira hemprichii, Sorites orbiculus, Quinqueloculina lamarckiana, Q. seminula, Ammonia convexa, and A. tepida. In contrast, in Zone B (middle intertidal) the most abundant species are P. planatus, C. hemprichii, S. orbiculus, Q. lamarckiana, Q. seminula, and Q. laevigata, while Zone C (low intertidal) is characterised by C. hemprichii, Q. costata, S. orbiculus, P. planatus, A. convexa, A. tepida, Spiroloculina communis, and S. costigera. A transfer function for sea-level reconstruction was developed using a modern dataset of 75 contemporary sediment samples and 99 species collected from several transects across the sabkha. The model yielded an error of 0.12 m, suggesting that intertidal foraminifers are able to predict past sea-level changes with high precision in Al-Kharrar Lagoon and can thus support future prediction of those changes in the area.
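Transfer functions of this kind are often built by weighted averaging (WA): each species gets an elevation optimum from the training set, and a fossil sample's elevation is estimated as the abundance-weighted mean of those optima. The sketch below assumes a WA formulation (the abstract does not name the model) and uses made-up abundances, not the 75-sample, 99-species dataset.

```python
def wa_optima(elevations, abundances):
    """Per-species optimum: abundance-weighted mean elevation over the
    training samples. abundances[i][s] is species s in sample i."""
    n_species = len(abundances[0])
    optima = []
    for s in range(n_species):
        num = sum(a[s] * e for a, e in zip(abundances, elevations))
        den = sum(a[s] for a in abundances)
        optima.append(num / den)
    return optima

def wa_predict(sample, optima):
    """Elevation estimate: abundance-weighted mean of species optima."""
    return sum(a * o for a, o in zip(sample, optima)) / sum(sample)

# toy training set: 3 samples (elevation in m), 2 species
elev = [0.2, 0.5, 0.9]
abund = [[0.8, 0.2], [0.5, 0.5], [0.1, 0.9]]
optima = wa_optima(elev, abund)
pred = wa_predict([0.5, 0.5], optima)   # stays inside the training range
```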

Keywords: Lagoonal foraminifers, intertidal sabkha, vertical zonation, transfer function, sea level

Procedia PDF Downloads 147
19633 Using Heat-Mask in the Thermoforming Machine for Component Positioning in Thermoformed Electronics

Authors: Behnam Madadnia

Abstract:

For several years, 3D-shaped electronics have been on the rise, with many uses in home appliances, automotive, and manufacturing. One of the biggest challenges in the fabrication of 3D-shaped electronics made by thermoforming is repeatable and accurate component positioning; typically there is no control over the final position of the component. This paper aims to address this issue and presents a reliable approach for guiding electronic components to the desired place during thermoforming. We propose a heat-control mask in the thermoforming machine to control the heating of the polymer so that specific parts are not formable, which ensures the mechanical stability of the conductive traces during thermoforming of the substrate. We have verified the accuracy of our approach by applying the method to a real industrial semi-sphere mold, positioning 7 LEDs and one touch sensor. We measured the positions of the LEDs after thermoforming to prove the repeatability of the process. The experimental results demonstrate that the proposed method is capable of positioning electronic components in thermoformed 3D electronics with high precision.

Keywords: 3D-shaped electronics, electronic components, thermoforming, component positioning

Procedia PDF Downloads 64
19632 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Because it is not possible to predict composite mechanical properties directly using theoretical methods alone, various studies have been conducted in the literature on accurate simulation of the energy-absorbing behavior of CFRP structures. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model forecasting the axial compressive properties of CFRP tubes while reducing high-cost experimental effort. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness, and the challenges, of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably condensed the numerical and experimental effort required in the simulation and calibration of CFRP composite tubes subjected to compressive loading.
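The learning setup above (features of the tube in, absorbed energy out, fitted by gradient-based training) can be illustrated with a single linear neuron standing in for the paper's neural network. The toy "wall thickness to absorbed energy" data lie on an exact line and are purely illustrative, not CFRP measurements.

```python
def train(xs, ys, lr=0.01, epochs=5000):
    """Fit y = w*x + b by gradient descent on mean squared error.
    A one-neuron stand-in for the paper's neural network regressor."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# toy pairs: hypothetical wall thickness (mm) -> energy absorbed (kJ),
# generated from the line y = 2x + 1
xs, ys = [1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0]
w, b = train(xs, ys)
```

A real model would use multiple input features (layup, geometry, material constants from the ASTM tests) and hidden layers, but the train/predict loop has the same shape.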

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 112
19631 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment, and to improve product quality. Information about the variability of different soil attributes within a field is essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, as such acquisition is expensive and time-consuming. Adaptive sampling has proven to be an accurate and affordable technique for planning within-field sampling for site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was divided into validation and calibration groups, and the calibration group was sub-divided into three sets with different measurement-pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using geostatistical techniques. High-uncertainty areas for each set were grouped using image segmentation in MATLAB, and areas of high and low uncertainty were then separated. Finally, an adaptive re-survey was carried out in the areas of high uncertainty. Adaptive re-surveying significantly reduced the time required compared to resampling the whole field and resulted in ECa estimates with minimal error. For the most spacious transect, the root mean square error (RMSE) yielded by an initial crude sampling survey was minimized after the adaptive re-survey, approaching the value obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while the accuracy of the observations remains consistent.
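The selection step above can be sketched as: score each grid cell by the spread of its conditional-simulation realizations, then flag the most uncertain fraction for re-survey. The grid values and the 25% cutoff are illustrative assumptions, not the study's ECa data or MATLAB segmentation.

```python
import statistics

def cell_uncertainty(realizations):
    """Per-cell uncertainty: standard deviation across the
    conditional-simulation realizations of ECa for that cell."""
    return statistics.pstdev(realizations)

def resurvey_cells(grid, top_fraction=0.25):
    """Indices of the most uncertain cells, to be re-surveyed."""
    scores = [(cell_uncertainty(r), i) for i, r in enumerate(grid)]
    scores.sort(reverse=True)
    k = max(1, int(len(grid) * top_fraction))
    return sorted(i for _, i in scores[:k])

# 4 cells x 3 simulated ECa realizations (mS/m), made-up numbers;
# cell 1 has by far the widest spread
grid = [[10.0, 10.1, 9.9], [12.0, 15.0, 9.0],
        [8.0, 8.2, 7.8], [11.0, 11.4, 10.6]]
chosen = resurvey_cells(grid)
```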

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 106
19630 Development of a Robot-Assisted Centrifugal Casting Machine for Manufacturing Multi-Layer Journal Bearings and High-Tech Machine Components

Authors: Mohammad Syed Ali Molla, Mohammed Azim, Mohammad Esharuzzaman

Abstract:

A centrifugal casting machine is used to manufacture special machine components, such as the multi-layer journal bearings used in internal combustion engines, steam and gas turbines, and aircraft turbo-engines, where isotropic properties and high precision are desired. Moreover, this machine can be used to manufacture thin-walled high-tech machine components, such as cylinder liners and piston rings of IC engines, and other machine parts like sleeves and bushes. Heavy-duty machine components such as railway wheels can also be prepared by centrifugal casting. Considerable technological development of the casting process is required for the production of good cast machine bodies and machine parts. Defects such as blowholes, surface roughness, and chilled surfaces are usually found in sand-cast machine parts, but these can be avoided by a centrifugal casting machine using a rotating metallic die. Moreover, die rotation, its temperature control, and good pouring practice contribute to casting quality, because the soundness of a casting depends in large part upon how the metal enters the mold or die and solidifies. Poor pouring practice leads to a variety of casting defects, such as temperature loss, low-quality casting, excessive turbulence, and over-pouring. Besides this, the handling of molten metal is very insecure and dangerous for the workers. To get rid of all these problems, the need for an automatic pouring device arises. In this research work, a robot-assisted pouring device and a centrifugal casting machine are designed, developed, constructed, and tested experimentally, and they are found to work satisfactorily. The robot-assisted pouring device is further modified and developed for use in the actual metal casting process. Many settings and tests are required to control the system, and ultimately it can be used in the automation of the centrifugal casting machine to produce high-tech machine parts with the desired precision.

Keywords: bearing, centrifugal casting, cylinder liners, robot

Procedia PDF Downloads 382
19629 Quantitative Analysis of (+)-Catechin and (-)-Epicatechin in Pentace burmanica Stem Bark by HPLC

Authors: Thidarat Duangyod, Chanida Palanuvej, Nijsiri Ruangrungsi

Abstract:

Pentace burmanica Kurz., belonging to the Malvaceae family, is commonly used as an anti-diarrheal in Thai traditional medicine. A method for the quantification of (+)-catechin and (-)-epicatechin in P. burmanica stem bark from 12 different Thai markets by reverse-phase high-performance liquid chromatography (HPLC) was investigated and validated. The analysis was performed on a Shimadzu DGU-20A3 HPLC system equipped with a Shimadzu SPD-M20A photodiode array detector. The separation was accomplished on an Inertsil ODS-3 column (5 µm, 4.6 × 250 mm) using 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B) as the mobile phase at a flow rate of 1 ml/min. Isocratic elution was set at 20% B for 15 min, and the column temperature was maintained at 40 ºC. Detection was at a wavelength of 280 nm. Both (+)-catechin and (-)-epicatechin were present in the ethanolic extract of P. burmanica stem bark. The content of (-)-epicatechin was found to be 59.74 ± 1.69 µg/mg of crude extract; in contrast, quantitation of the (+)-catechin content was omitted because of its small amount. The method was linear over the range 5-200 µg/ml with good determination coefficients (r² > 0.99) for both (+)-catechin and (-)-epicatechin. Limits of detection were 4.80 µg/ml for (+)-catechin and 5.14 µg/ml for (-)-epicatechin; limits of quantitation were 14.54 µg/ml and 15.57 µg/ml, respectively. Good repeatability and intermediate precision (%RSD < 3) were found in this study. Average recoveries of (+)-catechin and (-)-epicatechin were good, in the ranges 91.11-97.02% and 88.53-93.78%, respectively, with %RSD less than 2. The peak purity indices of the catechins were more than 0.99. The results suggest that the HPLC method is precise and accurate and can be conveniently used for the determination of (+)-catechin and (-)-epicatechin in the ethanolic extract of P. burmanica stem bark. Moreover, the stem bark of P. burmanica was found to be a rich source of (-)-epicatechin.
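The reported linearity and limit figures follow the standard calibration workflow; a minimal sketch of that workflow, using hypothetical peak-area data (not the paper's measurements) and the common ICH-style 3.3σ/S and 10σ/S conventions for LOD and LOQ:

```python
import numpy as np

# Hypothetical calibration standards over the validated 5-200 ug/ml range
conc = np.array([5, 25, 50, 100, 150, 200], dtype=float)   # ug/ml
area = np.array([12.1, 60.4, 121.0, 240.8, 362.1, 482.5])  # peak area (a.u.)

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# Detection/quantitation limits from the residual standard deviation
sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

# Back-calculate an unknown sample's concentration from its peak area
unknown_conc = (300.0 - intercept) / slope
```

In the paper's validation, the same back-calculation on spiked samples yields the reported recovery percentages.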

Keywords: pentace burmanica, (+)-catechin, (-)-epicatechin, high performance liquid chromatography

Procedia PDF Downloads 419
19628 Feasibility Study of Measurement of Turning Based-Surfaces Using Perthometer, Optical Profiler and Confocal Sensor

Authors: Khavieya Anandhan, Soundarapandian Santhanakrishnan, Vijayaraghavan Laxmanan

Abstract:

In general, measurement of surfaces is carried out using traditional contact-type stylus instruments. This prevalent approach is challenged by non-contact instruments such as optical profilers, coordinate measuring machines, laser triangulation sensors, machine vision systems, etc. Recently, the confocal sensor has begun to be used in the surface metrology field. This sensor is explored in this study to determine the surface roughness values of various turned surfaces. Turning is a crucial machining process for manufacturing features such as grooves, tapered domes, threads, tapers, etc. Turned surfaces with roughness values in the range 0.4-12.5 µm were taken for analysis. Three instruments were used: a perthometer, an optical profiler, and a confocal sensor. Among these, the confocal sensor is the least explored, despite its good resolution of about 5 nm. Thus, this high-precision sensor was used in this study to explore the possibility of measuring turned surfaces. Further, using these data, measurement uncertainty was also studied.

Keywords: confocal sensor, optical profiler, surface roughness, turned surfaces

Procedia PDF Downloads 111
19627 Plasmonic Nanoshells Based Metabolite Detection for in-vitro Metabolic Diagnostics and Therapeutic Evaluation

Authors: Deepanjali Gurav, Kun Qian

Abstract:

In-vitro metabolic diagnosis relies on designed-materials-based analytical platforms for the detection of selected metabolites in biological samples, which plays a key role in disease detection and therapeutic evaluation in clinics. However, the basic challenge lies in developing a simple approach for metabolic analysis of bio-samples with high sample complexity and low molecular abundance. In this work, we report a designer plasmonic-nanoshell-based platform for the direct detection of small metabolites in clinical samples for in-vitro metabolic diagnostics. We first synthesized a series of plasmonic core-shell particles with tunable nanoshell structures. The optimized plasmonic nanoshells, as new matrices, allowed fast, multiplexed, sensitive, and selective laser desorption/ionization mass spectrometry (LDI-MS) detection of small metabolites in 0.5 μL of bio-fluid without enrichment or purification. Furthermore, coupled with isotopic quantification of selected metabolites, we demonstrated the use of these plasmonic nanoshells for disease detection and therapeutic evaluation in clinics. For disease detection, we identified patients with postoperative brain infection through glucose quantitation and daily monitoring by cerebrospinal fluid (CSF) analysis. For therapeutic evaluation, we investigated drug distribution in the blood and CSF systems and validated the function and permeability of the blood-brain/CSF barriers during therapeutic treatment of patients with cerebral edema, for pharmacokinetic study. Our work sheds light on the design of materials for high-performance metabolic analysis and precision diagnostics in real cases.

Keywords: plasmonic nanoparticles, metabolites, fingerprinting, mass spectrometry, in-vitro diagnostics

Procedia PDF Downloads 109
19626 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, namely support vector machine regression (SVMR), partial least squares regression (PLSR), extra tree regression (ETR), random forest regression (RFR), extreme gradient boosting (XGBoost), and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and the corresponding glucose concentration references are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
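As a minimal illustration of the evaluation protocol described above (ten repeated random train/test splits scored by the determination coefficient), here is a sketch using synthetic stand-in spectra and a plain least-squares baseline rather than any of the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR spectra: 200 samples x 50 wavelengths, with
# reference glucose in 20 mg/dl increments as in the paper's protocol
n_samples, n_wavelengths = 200, 50
glucose = rng.choice(np.arange(20, 420, 20), size=n_samples).astype(float)
basis = rng.normal(size=n_wavelengths)
spectra = (glucose[:, None] * basis[None, :] / 400.0
           + rng.normal(scale=0.05, size=(n_samples, n_wavelengths)))

# Design matrix with an intercept column for a minimal regression baseline
X = np.hstack([spectra, np.ones((n_samples, 1))])

r2_scores = []
for _ in range(10):  # repeat the random split ten times, as in the paper
    idx = rng.permutation(n_samples)
    train, test = idx[:160], idx[160:]
    coef, *_ = np.linalg.lstsq(X[train], glucose[train], rcond=None)
    pred = X[test] @ coef
    ss_res = float(np.sum((glucose[test] - pred) ** 2))
    ss_tot = float(np.sum((glucose[test] - glucose[test].mean()) ** 2))
    r2_scores.append(1.0 - ss_res / ss_tot)

mean_r2 = float(np.mean(r2_scores))  # averaged R^2 over the ten splits
```

Averaging over repeated splits, rather than scoring one fixed split, is what gives the generalization estimate the paper relies on.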

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 57
19625 Effect of Segregation on the Reaction Rate of Sewage Sludge Pyrolysis in a Bubbling Fluidized Bed

Authors: A. Soria-Verdugo, A. Morato-Godino, L. M. García-Gutiérrez, N. García-Hernando

Abstract:

The evolution of the pyrolysis of sewage sludge in a fixed and a fluidized bed was analyzed using a novel measuring technique. This original technique consists of installing the whole reactor on a precision scale capable of measuring the mass of the complete reactor with enough precision to detect the mass released by the sewage sludge sample during its pyrolysis. The inert conditions required for the pyrolysis process were obtained by supplying the bed with a nitrogen flow, and the bed temperature was adjusted to either 500 ºC or 600 ºC using a group of three electric resistors. The sewage sludge sample was supplied through the top of the bed in a 10 g batch. The measurement of the mass released by the sample was employed to determine the evolution of the reaction rate during pyrolysis, the total amount of volatile matter released, and the pyrolysis time. The pyrolysis tests in the fluidized bed were conducted using two bed materials of the same size but different densities: silica sand and sepiolite particles. The higher density of the silica sand particles induces flotsam behavior in the sewage sludge particles, which move close to the bed surface. In contrast, the lower density of sepiolite produces neutrally-buoyant behavior, in which case the sewage sludge particles circulate properly throughout the whole bed. The analysis of the evolution of the pyrolysis process in both fluidized beds shows that pyrolysis is faster when buoyancy effects are negligible, i.e., in the bed composed of sepiolite particles. Moreover, sepiolite was found to absorb part of the volatile matter released during the pyrolysis of sewage sludge.
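The reaction rate in this technique follows from differentiating the mass signal recorded by the scale; a sketch of that post-processing on a hypothetical first-order mass-release curve (illustrative rate constant and volatile fraction, not the experimental data):

```python
import numpy as np

# Hypothetical mass-vs-time record from a scale holding the whole reactor:
# a 10 g sludge batch releasing volatiles with assumed first-order kinetics
t = np.linspace(0.0, 300.0, 301)        # time, s (1 s sampling)
m_volatile_total = 6.0                  # g of releasable volatiles (assumed)
k = 0.02                                # 1/s, assumed rate constant
mass_released = m_volatile_total * (1.0 - np.exp(-k * t))

# Reaction rate as the time derivative of the released-mass signal
rate = np.gradient(mass_released, t)    # g/s

# Pyrolysis time: first instant at which 95% of volatiles have been released
pyro_time = t[np.argmax(mass_released >= 0.95 * m_volatile_total)]
```

The total volatile matter released is simply the plateau of the curve, and the decaying `rate` profile is what distinguishes the faster (sepiolite) from the slower (silica sand) bed.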

Keywords: bubbling fluidized bed, pyrolysis, reaction rate, segregation effects, sewage sludge

Procedia PDF Downloads 326
19624 Shock and Particle Velocity Determination from Microwave Interrogation

Authors: Benoit Rougier, Alexandre Lefrancois, Herve Aubert

Abstract:

Microwave interrogation in the 10-100 GHz range is identified as an advanced technique for investigating shock and particle velocity measurements simultaneously. However, it requires an understanding of electromagnetic wave propagation in multi-layered moving media. Existing models either limit their approach to waveguides or evaluate the velocities with a fitting method, thereby restricting the domain of validity and the precision of the results. Moreover, few permittivity data on high explosives at these frequencies under dynamic compression have been reported. In this paper, shock and particle velocities are computed concurrently for steady and unsteady shocks in various inert and reactive materials, via a propagation model based on Doppler shifts and signal amplitude. The refractive index of the material under compression is also calculated. From experimental data processing, it is demonstrated that the Hugoniot curve can be evaluated. The comparison with published results proves the accuracy of the proposed method. This microwave interrogation technique seems promising for studies of shock and detonation waves.
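The velocity extraction rests on the Doppler shift of the interrogating wave; a minimal sketch of the basic CW relation (the multi-layered moving-media model in the paper is considerably more involved), with illustrative frequency and refractive-index values:

```python
# Recovering a reflector (shock-front) velocity from the Doppler shift of a
# CW microwave signal; all numerical values below are illustrative only.
C = 299_792_458.0  # speed of light in vacuum, m/s

def velocity_from_doppler(f_doppler_hz, f0_hz, n_medium):
    """Velocity of a front moving toward the antenna through a medium of
    refractive index n, from the standard CW-radar relation f_d = 2*n*v*f0/c."""
    return f_doppler_hz * C / (2.0 * n_medium * f0_hz)

# Example: 94 GHz interrogation, assumed refractive index 1.6, 5 MHz shift
v = velocity_from_doppler(5e6, 94e9, 1.6)  # m/s, on the order of km/s
```

Note that the recovered velocity scales inversely with the refractive index of the compressed material, which is why the paper's computation of that index matters for the accuracy of the result.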

Keywords: electromagnetic propagation, experimental setup, Hugoniot measurement, shock propagation

Procedia PDF Downloads 184
19623 Development of Polymeric Fluorescence Sensor for the Determination of Bisphenol-A

Authors: Neşe Taşci, Soner Çubuk, Ece Kök Yetimoğlu, M. Vezir Kahraman

Abstract:

Bisphenol-A (BPA), 2,2-bis(4-hydroxyphenyl)propane, is one of the highest-usage-volume chemicals in the world. Studies have shown that BPA may have negative effects on the central nervous, immune, and endocrine systems. Several analytical methods for the analysis of BPA have been reported, including electrochemical processes, chemical oxidation, ozonation, and spectrophotometric and chromatographic techniques. Compared with these conventional analytical techniques, optical sensors are reliable, provide quick results at low cost, and are easy to use; they stand out as a much more advantageous method because of their high precision and sensitivity. In this work, a new photocured polymeric fluorescence sensor was prepared and characterized for bisphenol-A (BPA) analysis. Characterization of the membrane was carried out by attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) and scanning electron microscopy (SEM). The response characteristics of the sensor, including dynamic range, pH effect, and response time, were systematically investigated. Acknowledgment: This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Grant 115Y469.

Keywords: bisphenol-a, fluorescence, photopolymerization, polymeric sensor

Procedia PDF Downloads 199
19622 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT and Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the prediction accuracy slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
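The reported Acc/AUC/precision figures are standard binary-classification metrics; a small sketch of how such metrics are computed from hypothetical held-out scores (not the paper's predictions):

```python
import numpy as np

# Hypothetical held-out outputs from a clinical-sentiment classifier:
# 1 = negative clinical sentiment (deceased), 0 = positive (survived)
y_true  = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.85, 0.40, 0.10, 0.30, 0.05, 0.77, 0.60, 0.88, 0.20])
y_pred  = (y_score >= 0.5).astype(int)  # threshold the scores at 0.5

# Accuracy and precision from the hard predictions
acc = float(np.mean(y_pred == y_true))
tp = int(np.sum((y_pred == 1) & (y_true == 1)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
precision = tp / (tp + fp)

# AUC via the rank (Mann-Whitney) formulation: probability that a random
# positive is scored above a random negative
pos, neg = y_score[y_true == 1], y_score[y_true == 0]
auc = float(np.mean([s_p > s_n for s_p in pos for s_n in neg]))
```

Because the classes here are balanced by construction, accuracy is informative; on the imbalanced raw data, the paper's augmentation step is what makes these metrics comparable across classes.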

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 168
19621 Research on Construction of Subject Knowledge Base Based on Literature Knowledge Extraction

Authors: Yumeng Ma, Fang Wang, Jinxia Huang

Abstract:

Researchers put forward higher requirements for the efficient acquisition and utilization of domain knowledge in the big data era. As literature is an effective way for researchers to quickly and accurately understand the research situation in their field, knowledge discovery based on literature has become a new research method. As a tool for organizing and managing knowledge in a specific domain, a subject knowledge base can be used to mine and present the knowledge behind the literature to meet users' personalized needs. This study designs a construction route for subject knowledge bases for specific research problems, adopting an information extraction method based on knowledge engineering. Firstly, the subject knowledge model is built through the abstraction of the research elements. Then, under the guidance of the knowledge model, extraction rules for knowledge points are compiled to analyze, extract, and correlate entities, relations, and attributes in the literature. Finally, a database platform based on this structured knowledge is developed that can provide a variety of services such as knowledge retrieval, knowledge browsing, knowledge Q&A, and visual correlation. Taking construction practice in the field of activating blood circulation and removing stasis as an example, this study analyzes how to construct a subject knowledge base based on literature knowledge extraction. As the system functional tests show, this subject knowledge base can realize the expected service scenarios, such as quick knowledge queries, related discovery of knowledge and literature, and knowledge organization. As this study enables the subject knowledge base to help researchers locate and acquire deep domain knowledge quickly and accurately, it provides a transformation mode for knowledge resource construction and personalized, precise knowledge services in the data-intensive research environment.

Keywords: knowledge model, literature knowledge extraction, precision knowledge services, subject knowledge base

Procedia PDF Downloads 130
19620 Investigation of Beam Defocusing Impact in Millisecond Laser Drilling for Variable Operational Currents

Authors: Saad Nawaz, Yu Gang, Baber Saeed Olakh, M. Bilal Awan

Abstract:

Owing to its exceptional performance and precision, laser drilling is widely used in modern manufacturing industries. This experimental study mainly addresses the defocusing of the laser beam at different operational currents. The performance has been evaluated in terms of the tapering phenomenon, entrance and exit diameters, etc. The operational current has a direct influence on laser power, which ultimately affects the shape of the drilled hole. Different operational currents in the low, medium, and high ranges are used for laser drilling of 18CrNi8. The experimental results show that the entrance diameter increases with increasing defocusing distance, whereas the exit diameter first decreases and then increases with increasing defocusing distance. The evolution of the drilled hole from tapered to straight is explained in terms of defocusing at different levels. The optimum parametric combinations for attaining a well-shaped drilled hole are proposed, along with lower heat-treatment effects for higher process efficiency.

Keywords: millisecond laser, defocusing beam, operational current, keyhole profile, recast layer

Procedia PDF Downloads 140
19619 Transmission Design That Eliminates Gradual System Problems in Gearboxes

Authors: Ömer Ateş, Atilla Savaş

Abstract:

Reducers and transmission systems are power- and speed-transfer tools that have been used for many years in the technology world and in all engineering fields. Since today's transmissions have a stepped gear system, torque interruption occurs during gear changes; besides, breakdown and manufacturing costs are high. Another problem is the limited torque and rpm settings of stepped gearbox systems. In this study, a new type of transmission system is designed to solve these problems. This new transmission system has been called the Continuously Variable Pulley. Its most important feature is that both rpm and torque can be adjusted at the millimeter (precision) level. To make adjustments at this level, a pulley that is adjustable with the help of a hydraulic piston is designed. The efficiency of the designed transmission system is 97 percent, whereas the efficiency of today's transmissions is in the range of 85-95 percent. Examining the analysis and calculations, it is seen that the designed system gives realistic results and can be compared with today's transmissions and reducers. Therefore, this new type of transmission has been shown to be usable in production settings and the world of technology.

Keywords: gearbox, reducer, transmission, torque

Procedia PDF Downloads 94
19618 Implementation of CNV-CH Algorithm Using Map-Reduce Approach

Authors: Aishik Deb, Rituparna Sinha

Abstract:

We have developed an algorithm to detect abnormal segments/structural variations in the genome across a number of samples. We have worked on simulated as well as real data from BAM files and have designed a segmentation algorithm in which abnormal segments are detected. This algorithm aims to improve the accuracy and performance of the existing CNV-CH algorithm. The next-generation sequencing (NGS) approach is very fast and can generate large sequences in a reasonable time, so the huge volume of sequence information gives rise to the need for big data and parallel approaches to segmentation. Therefore, we have designed a map-reduce approach for the existing CNV-CH algorithm in which a large amount of sequence data can be segmented and structural variations in the human genome can be detected. We have compared the efficiency of the traditional and map-reduce algorithms with respect to precision, sensitivity, and F-score. The advantages of our algorithm are that it is fast and has better accuracy. This algorithm can be applied to detect structural variations within a genome, which in turn can be used to detect various genetic disorders such as cancer. The defects may be caused by new mutations or changes to the DNA and generally result in abnormally high or low base coverage and quantification values.
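The map-reduce decomposition can be pictured as mapping reads to genomic bins and reducing per-bin counts into a coverage profile before segmentation; a toy sketch with hypothetical read positions (the actual CNV-CH segmentation uses convex hulls and is far more elaborate):

```python
from collections import defaultdict
from statistics import median

# Hypothetical read start positions (e.g., parsed from a BAM file), and a bin size
reads = [105, 130, 160, 220, 250, 260, 270, 280, 290, 450, 480, 900]
BIN = 100  # bp per coverage bin

# Map step: emit (bin_index, 1) per read -- parallelizable across BAM chunks
mapped = [(pos // BIN, 1) for pos in reads]

# Reduce step: sum the counts per bin to obtain a coverage profile
coverage = defaultdict(int)
for bin_idx, count in mapped:
    coverage[bin_idx] += count

# Flag candidate structural variations: bins whose coverage deviates strongly
# from the median, since CNVs show abnormally high or low base coverage
med = median(coverage.values())
candidates = sorted(b for b, c in coverage.items() if c >= 2 * med or c <= med / 2)
```

Because the map and reduce steps are independent per chunk and per bin, the same pipeline scales across the large NGS volumes that motivate the paper's parallel approach.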

Keywords: cancer detection, convex hull segmentation, map reduce, next generation sequencing

Procedia PDF Downloads 102