Search results for: accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3720

2760 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP

Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang

Abstract:

Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, a software package developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of the biologically relevant variable data generated by ClimateAP during the normal climate period through comparison with currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP against the commonly used gridded data from WorldClim1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using adjusted R-squared and Root Mean Squared Error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and the Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata, and a comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Observations of the biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAPv3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases, with 3,745 unique points. ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of the variance in monthly maximum, minimum, and average temperature and precipitation, respectively. It outperforms WorldClim on 37 biologically relevant variables, with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yield lower adjusted R-squared values than the gridded data in high-elevation, rugged, and mountainous areas, but higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP are thus validated against weather station observations. The use of ClimateAP improves data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to gridded data.
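
As a hedged sketch of the station-level evaluation above, the following compares two predictors against observations with a univariate regression scored by adjusted R-squared and RMSE; the arrays are synthetic stand-ins for one biologically relevant variable at the 2,366 stations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
obs = rng.normal(12.0, 5.0, 2366)                  # station observations (synthetic)
pred_climateap = obs + rng.normal(0.0, 0.8, 2366)  # stand-in predictions
pred_worldclim = obs + rng.normal(0.3, 1.2, 2366)

def evaluate(obs, pred, k=1):
    """Adjusted R-squared and RMSE of a univariate regression of obs on pred."""
    r = stats.linregress(pred, obs).rvalue
    n = len(obs)
    adj_r2 = 1 - (1 - r**2) * (n - 1) / (n - k - 1)
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    return adj_r2, rmse

for name, pred in [("ClimateAP", pred_climateap), ("WorldClim", pred_worldclim)]:
    adj_r2, rmse = evaluate(obs, pred)
    print(f"{name}: adjusted R2 = {adj_r2:.4f}, RMSE = {rmse:.3f}")
```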

Keywords: climate data validation, data quality, Asia Pacific climate, climatic niche modeling, random forest models, tree species

Procedia PDF Downloads 68
2759 Flicker Detection with Motion Tolerance for Embedded Camera

Authors: Jianrong Wu, Xuan Fu, Akihiro Higashi, Zhiming Tan

Abstract:

CMOS image sensors with a rolling shutter are used widely in the digital cameras embedded in mobile devices. The rolling shutter suffers from easily observed flicker artifacts under fluorescent lamps. In this paper, the characteristics of illumination flicker in the motion case were analyzed, and two efficient detection methods based on matching fragment selection were proposed. According to the experimental results, our methods achieve as high as 100% accuracy in static scenes and at least 97% in motion scenes.
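
Although the matching-fragment selection itself is not detailed in the abstract, the underlying artifact is easy to illustrate: a fluorescent lamp modulates at twice the mains frequency, and a rolling shutter converts that modulation into horizontal bands, so the per-row mean brightness carries a strong periodic component. A minimal, generic sketch (not the authors' method), with an assumed line readout period:

```python
import numpy as np

def looks_flickery(frame, row_time_s, lamp_hz=100.0, threshold=4.0):
    """frame: 2-D grayscale array; row_time_s: rolling-shutter line period."""
    row_means = frame.mean(axis=1) - frame.mean()
    spectrum = np.abs(np.fft.rfft(row_means))
    freqs = np.fft.rfftfreq(len(row_means), d=row_time_s)
    band = np.argmin(np.abs(freqs - lamp_hz))      # bin nearest 100/120 Hz
    noise_floor = np.median(spectrum) + 1e-9
    return spectrum[band] / noise_floor > threshold

# Synthetic banded frame: 480 rows read out at 30 us per row, 100 Hz flicker.
rows = np.arange(480) * 30e-6
frame = 128 + 20 * np.sin(2 * np.pi * 100.0 * rows)[:, None] * np.ones((480, 640))
print(looks_flickery(frame, row_time_s=30e-6))     # True for this synthetic frame
```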

Keywords: illumination flicker, embedded camera, rolling shutter, detection

Procedia PDF Downloads 420
2758 Method for Improving ICESAT-2 ATL13 Altimetry Data Utility on Rivers

Authors: Yun Chen, Qihang Liu, Catherine Ticehurst, Chandrama Sarker, Fazlul Karim, Dave Penton, Ashmita Sengupta

Abstract:

The application of ICESAT-2 altimetry data in river hydrology critically depends on the accuracy of the mean water surface elevation (WSE) at a virtual station (VS), where satellite observations intersect with water. An ICESAT-2 track generates multiple VSs as it crosses different water bodies. The difficulties are particularly pronounced in large river basins, where many tributaries and meanders often lie adjacent to each other. One challenge is to split photon segments along a beam and accurately partition them so that only the true representative water height of each element is extracted. As far as we can establish, there is no automated procedure to make this distinction. Earlier studies have relied on human intervention or river masks; both approaches are unsatisfactory where the number of intersections is large and river width/extent changes over time. We describe here an automated approach called “auto-segmentation”. The accuracy of our method was assessed by comparison with river water level observations at 10 different stations on 37 different dates along the Lower Murray River, Australia. The congruence is very high and without detectable bias. In addition, we compared different outlier removal methods for the mean WSE calculation at VSs after the auto-segmentation process. All four outlier removal methods perform almost equally well, with the same R2 value (0.998) and only subtle variations in RMSE (0.181–0.189 m) and MAE (0.130–0.142 m). Overall, the auto-segmentation method developed here is an effective and efficient approach to deriving accurate mean WSE at river VSs. It provides a much better way of facilitating the application of ICESAT-2 ATL13 altimetry to rivers than previously reported studies. The findings of our study will therefore make a significant contribution towards the retrieval of hydraulic parameters, such as water surface slope along the river, water depth at cross sections, and river channel bathymetry, for calculating flow velocity and discharge from remotely sensed imagery at large spatial scales.
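
A minimal sketch of the auto-segmentation idea: split ATL13 photon heights along the beam wherever the along-track gap exceeds a cutoff, then take a sigma-clipped mean height per segment as the virtual-station WSE. The gap cutoff and clipping level are illustrative, not values from the paper:

```python
import numpy as np

def auto_segment(along_track_m, height_m, gap_cutoff_m=200.0, n_sigma=3.0):
    """Split photons into VS groups at large along-track gaps; return mean WSEs."""
    order = np.argsort(along_track_m)
    x, h = np.asarray(along_track_m)[order], np.asarray(height_m)[order]
    breaks = np.where(np.diff(x) > gap_cutoff_m)[0] + 1
    means = []
    for seg in np.split(h, breaks):
        kept = seg[np.abs(seg - seg.mean()) <= n_sigma * seg.std()]  # sigma clip
        means.append(kept.mean())
    return means

rng = np.random.default_rng(0)
x = np.concatenate([np.linspace(0, 90, 50), np.linspace(1000, 1090, 50)])
h = np.concatenate([np.full(50, 3.2), np.full(50, 5.7)]) + rng.normal(0, 0.05, 100)
print([round(w, 2) for w in auto_segment(x, h)])   # two VSs: ~[3.2, 5.7]
```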

Keywords: lidar sensor, virtual station, cross section, mean water surface elevation, beam/track segmentation

Procedia PDF Downloads 62
2757 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed the development of a space-borne GPS receiver in 2014, including design, manufacture, comprehensive functional testing, and environmental qualification testing. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated into the autonomous, NSPO-built FORMOSAT-7 satellite, scheduled for launch in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which only 50% of the throughput is currently used. In response to the rapid growth of global navigation satellite systems, NSPO will gradually extend this receiver into a multi-mode, multi-band, high-precision navigation receiver, or even a science payload such as a GNSS reflectometry receiver. The fundamental purpose of this extension study is to port software algorithms that involve heavy computation and reusable code, such as signal acquisition and correlation, to an FPGA, while the processor remains responsible for operational control, the navigation solution, orbit propagation, and so on. Because FPGAs are developing and evolving rapidly, the new FPGA-upgraded system architecture should be able to achieve the goal of a multi-mode, multi-band, high-precision navigation receiver or scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, its development, test results, and the goals of the next development stage of this receiver.
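
The acquisition-and-correlation workload being ported to the FPGA is, at its core, a circular correlation between received samples and a local PRN replica. A minimal NumPy sketch of the standard FFT-based parallel code-phase search follows; the random placeholder code stands in for a real GPS C/A code:

```python
import numpy as np

rng = np.random.default_rng(1)
prn = rng.choice([-1.0, 1.0], size=1023)           # placeholder spreading code
true_shift = 417
received = np.roll(prn, true_shift) + 0.5 * rng.standard_normal(1023)

# Circular correlation via FFT: corr = IFFT(FFT(received) * conj(FFT(replica)))
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(prn)))
code_phase = int(np.argmax(np.abs(corr)))
print(code_phase)                                   # 417
```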

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 369
2756 BERT-Based Chinese Coreference Resolution

Authors: Li Xiaoge, Wang Chaodong

Abstract:

We introduce the first Chinese Coreference Resolution Model based on BERT (CCRM-BERT) and show that it significantly outperforms all previous work. The key idea is to consider features of the mention, such as part of speech, span width, and distance between spans, and the influence of each feature on the model is analyzed. The model computes mention embeddings that combine BERT with these features. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the Chinese OntoNotes benchmark.
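
A minimal sketch of the mention representation described above, in which a span's contextual (BERT) vector is concatenated with learned embeddings of hand-crafted features such as span width and part of speech; dimensions, bucket counts, and the random stand-ins for BERT span vectors are illustrative:

```python
import torch
import torch.nn as nn

class MentionEncoder(nn.Module):
    def __init__(self, bert_dim=768, feat_dim=20, n_widths=30, n_pos=50):
        super().__init__()
        self.width_emb = nn.Embedding(n_widths, feat_dim)  # span-width buckets
        self.pos_emb = nn.Embedding(n_pos, feat_dim)       # part-of-speech ids
        self.proj = nn.Linear(bert_dim + 2 * feat_dim, 256)

    def forward(self, span_vec, width_id, pos_id):
        feats = torch.cat([span_vec,
                           self.width_emb(width_id),
                           self.pos_emb(pos_id)], dim=-1)
        return torch.relu(self.proj(feats))

enc = MentionEncoder()
span_vec = torch.randn(4, 768)              # stand-in for BERT span vectors
out = enc(span_vec, torch.tensor([2, 5, 1, 7]), torch.tensor([3, 3, 8, 0]))
print(out.shape)                            # torch.Size([4, 256])
```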

Keywords: BERT, coreference resolution, deep learning, natural language processing

Procedia PDF Downloads 216
2755 An Accurate Prediction of Surface Temperature History in a Supersonic Flight

Authors: A. M. Tahsini, S. A. Hosseini

Abstract:

In the present study, the surface temperature history of the adaptor part in a two-stage supersonic launch vehicle is accurately predicted. The full Navier-Stokes equations are used to estimate the aerodynamic heat flux, and one-dimensional heat conduction in the solid phase is used to compute the temperature history. The instantaneous surface temperature is fed back to update the applied heat flux, improving the accuracy of the results.
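
A minimal sketch of the coupling loop described above: an explicit one-dimensional conduction solver whose surface boundary condition is re-evaluated from the instantaneous wall temperature at every step. Here a simple convective model q = h(T_rec - T_wall) stands in for the Navier-Stokes heat flux used in the study, and the material and flow values are illustrative:

```python
import numpy as np

L, n = 0.01, 50                       # wall thickness [m], grid points
dx = L / (n - 1)
k, rho, cp = 15.0, 7800.0, 500.0      # steel-like solid properties
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha              # stable explicit time step
h, T_rec = 500.0, 900.0               # illustrative film coefficient, recovery T

T = np.full(n, 300.0)
for step in range(20000):
    q_wall = h * (T_rec - T[0])       # flux updated from current wall temperature
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = Tn[1] + q_wall * dx / k   # applied-flux boundary at the surface
    Tn[-1] = Tn[-2]                   # adiabatic back face
    T = Tn
print(f"surface temperature after {20000*dt:.2f} s: {T[0]:.1f} K")
```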

Keywords: aerodynamic heating, heat conduction, numerical simulation, supersonic flight, launch vehicle

Procedia PDF Downloads 452
2754 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration

Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger

Abstract:

Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
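
The Dice similarity coefficient used in the evaluation above is a simple overlap ratio between the predicted and reference masks; a minimal version:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """2|A∩B| / (|A|+|B|) for two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
ref = np.zeros((64, 64), bool); ref[15:45, 15:45] = True
print(f"Dice = {dice(pred, ref):.3f}")   # ~0.694 for these toy masks
```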

Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration

Procedia PDF Downloads 48
2753 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows

Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham

Abstract:

In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.

Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis

Procedia PDF Downloads 65
2752 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers, our main source of water, are a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less, on the flow of rivers, which are major sources of sediments and other constituents essential for human beings. A river consisting of small, shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments, and the pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied to predict discharges at the junction of a two-flow braided compound channel. The Energy Concept Method has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. The channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). Error analysis of this method shows that the relative error is smaller for data sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
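
The Divided Channel Method used above as the comparison baseline can be sketched as follows: the compound section is split at the bank-full level and a uniform-flow formula (Manning's equation here) is applied to each sub-area separately. Geometry and roughness values are illustrative, and the ECM's modified apparent shear stress terms are not shown:

```python
def manning_q(area_m2, perimeter_m, n, slope):
    """Discharge from Manning's equation, Q = (1/n) A R^(2/3) S^(1/2)."""
    r = area_m2 / perimeter_m                 # hydraulic radius
    return area_m2 * r ** (2.0 / 3.0) * slope ** 0.5 / n

slope = 0.001
# Main channel below the bank-full level:
q_main = manning_q(area_m2=6.0, perimeter_m=7.0, n=0.010, slope=slope)
# Floodplain region above the bank-full level:
q_flood = manning_q(area_m2=3.0, perimeter_m=9.0, n=0.014, slope=slope)
print(f"total discharge ~ {q_main + q_flood:.2f} m^3/s")
```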

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 126
2751 Orthogonal Basis Extreme Learning Algorithm and Function Approximation

Authors: Ying Li, Yan Li

Abstract:

A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm can determine both the network parameters and the number of hidden-layer neurons during training while providing extremely fast learning speed, offering a practical way to develop neural networks. Simulation results on function approximation show that the algorithm is effective and feasible, with good accuracy and adaptability.
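
A minimal sketch in the spirit of OBEL, assuming the core idea is an extreme learning machine whose hidden-layer output matrix is orthogonalized (via QR here) before the output weights are solved in one least-squares pass; the exact construction in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x)                                   # target function to approximate

n_hidden = 25
W = rng.standard_normal((1, n_hidden))          # random input weights (kept fixed)
b = rng.standard_normal(n_hidden)
H = np.tanh(x @ W + b)                          # hidden-layer output matrix
Q, R = np.linalg.qr(H)                          # orthogonal basis of H
beta = Q.T @ y                                  # output weights in a single pass
y_hat = Q @ beta
print(f"train RMSE: {np.sqrt(np.mean((y - y_hat)**2)):.4f}")
```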

Keywords: neural network, orthogonal basis extreme learning, function approximation

Procedia PDF Downloads 534
2750 ANAC-id - Facial Recognition to Detect Fraud

Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira

Abstract:

This article presents a case study of ANAC-id at the National Civil Aviation Agency (ANAC) in Brazil. ANAC-id is an artificial intelligence algorithm developed for image analysis that recognizes standard images of an unobstructed, upright face without sunglasses, allowing potential inconsistencies to be identified. It combines the YOLO architecture with three Python libraries - face recognition, face comparison, and DeepFace - providing robust analysis with a high level of accuracy.
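
A hedged sketch of the comparison step using the open-source face_recognition library named in the keywords (file names are placeholders; the YOLO detection and DeepFace stages of the full pipeline are omitted):

```python
import face_recognition

ref_image = face_recognition.load_image_file("registered_photo.jpg")    # placeholder
new_image = face_recognition.load_image_file("submitted_photo.jpg")     # placeholder

ref_enc = face_recognition.face_encodings(ref_image)
new_enc = face_recognition.face_encodings(new_image)

if not ref_enc or not new_enc:
    print("no unobstructed, upright face found")   # fails the standard-image check
else:
    match = face_recognition.compare_faces([ref_enc[0]], new_enc[0], tolerance=0.6)[0]
    print("consistent identity" if match else "potential inconsistency")
```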

Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision

Procedia PDF Downloads 156
2749 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique

Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan

Abstract:

In this paper, a comparison of k-Nearest Neighbor (kNN) algorithms for classifying the 3D EEG model in brain balancing is presented. The EEG signal recording was conducted on 51 healthy subjects. Development of the 3D EEG models involves pre-processing of raw EEG signals and construction of spectrogram images, from which maximum PSD values were extracted as features. There are three indices for the balanced brain: index 3, index 4, and index 5, and the EEG signals differ significantly with the brain balancing index (BBI). Alpha (α, 8–13 Hz) and beta (β, 13–30 Hz) bands were used as input signals for the classification model. The k-NN classification achieves 88.46% accuracy. These results show that k-NN can be used to predict the brain balancing index.
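
A minimal sketch of the classification stage, with maximum PSD values in the alpha and beta bands as features and k-NN as the classifier; the arrays are synthetic stand-ins for features extracted from the 3D EEG models:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 150
X = np.column_stack([rng.normal(10, 2, n),       # max alpha-band PSD (illustrative)
                     rng.normal(20, 4, n)])      # max beta-band PSD (illustrative)
y = rng.integers(3, 6, n)                        # balancing index 3, 4 or 5
X[y == 4] += 3.0                                 # give the classes some separation
X[y == 5] += 6.0

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
print(f"accuracy: {knn.score(Xte, yte):.2%}")
```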

Keywords: power spectral density, 3D EEG model, brain balancing, kNN

Procedia PDF Downloads 486
2748 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network

Authors: Ahmad Alwosheel, Ahmed Alqaraawi

Abstract:

This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: a conventional unsupervised BIC-based approach is utilized in the first phase to detect speaker changes and train a neural network, while in the second phase, the trained parameters from the neural network are used to predict the next incoming audio stream. Using this approach, accuracy comparable to similar BIC-based approaches is achieved, with a significant improvement in computation time.
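
The first-phase change detection above rests on the standard ΔBIC test: model two adjacent feature windows either as one Gaussian or as two, and declare a speaker change where the two-Gaussian model wins. A minimal sketch over MFCC-like feature rows (synthetic here):

```python
import numpy as np

def delta_bic(X1, X2, lam=1.0):
    """Positive value suggests a speaker change between windows X1 and X2."""
    X = np.vstack([X1, X2])
    n, d = X.shape
    n1, n2 = len(X1), len(X2)
    logdet = lambda A: np.linalg.slogdet(np.cov(A, rowvar=False))[1]
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (n * logdet(X) - n1 * logdet(X1) - n2 * logdet(X2)) - lam * penalty

rng = np.random.default_rng(0)
same = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(0, 1, (200, 12)))
diff = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(3, 1, (200, 12)))
print(f"same speaker: {same:.1f}  (change declared if > 0)")
print(f"different speakers: {diff:.1f}")
```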

Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation

Procedia PDF Downloads 502
2747 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations with no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency, but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
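
A much-simplified, CPU-side sketch of the core idea: walk a triangle with parallel scan-lines (uniform in parameter space here, equidistant in the full method), quantize the samples to a voxel grid, and record each voxel exactly once via a set. The actual method is a GLSL compute shader with gap detection, which is not reproduced here:

```python
import numpy as np

def scanline_voxelize(v0, v1, v2, voxel=1.0):
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    voxels = set()
    step = voxel * 0.5                        # scan-line spacing <= half a voxel
    n_lines = max(2, int(np.linalg.norm(v2 - v0) / step) + 1)
    for t in np.linspace(0.0, 1.0, n_lines):  # sweep lines parallel to edge v0-v1
        a, b = v0 + t * (v2 - v0), v1 + t * (v2 - v1)
        n_samples = max(2, int(np.linalg.norm(b - a) / step) + 1)
        for s in np.linspace(0.0, 1.0, n_samples):
            p = a + s * (b - a)
            voxels.add(tuple((p // voxel).astype(int)))   # dedup via the set
    return voxels

tri = ([0, 0, 0], [8, 0, 0], [4, 6, 3])
print(f"{len(scanline_voxelize(*tri))} surface voxels")
```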

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 72
2746 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View

Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol

Abstract:

Life cycle assessment of additive manufacturing processes has evolved significantly over the past several years. Most existing studies have focused mainly on energy consumption. Nowadays, new methodologies for life cycle inventory acquisition have appeared in the literature and help manufacturers take into account all the input and output flows during the manufacturing step of a product's life cycle. Indeed, the phenomena that occur during the manufacturing step of additive manufacturing processes are becoming well understood, and it is now possible to count and measure all inventory data accurately during the manufacturing step. Optimization of the environmental performance of processes can therefore be considered. Environmental performance improvement can be achieved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, and the quantity of support material) directly affect the mechanical properties, surface finish, and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deteriorating part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured by varying the identified parameters. The aim of the second step is to find the optimum values of the parameters that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, a comparison is made between parts made with the initial parameters and with the modified parameters. The major finding claimed by the authors is a reduction in the environmental impact of an additive manufacturing process while respecting three part quality criteria: mechanical properties, dimensional accuracy, and surface roughness. Now that additive manufacturing processes can be seen as mature from a technical point of view, environmental improvement of these processes can be considered while respecting part properties. The first part of this study presents the methodology applied to multiple academic parts; the validity of the methodology is then demonstrated on functional parts.

Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties

Procedia PDF Downloads 288
2745 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
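
A hedged sketch of the fine-tuning setup implied above: a pre-trained bidirectional encoder with a classification head over k-merized DNA input. The checkpoint path is a placeholder for whichever DNABERT-style model is used; substitute your own and k-merize to match its tokenizer:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "path/to/dnabert-style-checkpoint"   # placeholder, not a real path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def kmerize(seq, k=6):
    """Turn a DNA string into overlapping k-mer tokens, DNABERT style."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

batch = tokenizer([kmerize("ATGCGTACGTTAGC")], return_tensors="pt",
                  padding=True, truncation=True)
with torch.no_grad():
    logits = model(**batch).logits            # [1, 2]: expression class scores
print(logits.softmax(-1))
```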

Keywords: transformers, generative AI, gene expression design, classification

Procedia PDF Downloads 59
2744 Using Mixed Methods in Studying Classroom Social Network Dynamics

Authors: Nashrawan Naser Taha, Andrew M. Cox

Abstract:

In a multi-cultural learning context, where ties are weak and dynamic, combining qualitative with quantitative research methods may be more effective. Such a combination may also allow us to answer different types of questions, such as those about people's perceptions of the network. In this study, the use of observation, interviews, and photos was explored as a way of enhancing data from social network questionnaires. Integrating all of these methods was found to enhance the quality and accuracy of the data collected, while also providing a richer story of the network dynamics and the factors that shaped these changes over time.

Keywords: mixed methods, social network analysis, multi-cultural learning, social network dynamics

Procedia PDF Downloads 510
2743 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used in combination with a measurement scheme called quantum imaging with undetected photons, which allows the wavelengths used for sampling an object and for detecting it at the imaging sensor to be separated. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while keeping the detection in spectral regions with highly developed, comparatively low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To collect these patterns, this work combines quantum imaging with undetected photons and digital phase-shifting holography with minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside changes the interferometric pattern depending on the object’s properties. Digital phase-shifting holography records multiple images of the interference with predetermined phase shifts to reconstruct the complete interference shape, which can afterward be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was performed using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure times and numbers of interference sampling steps. The current limits of this method are identified, indicating room for further improvement. In summary, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging, enabling new ways of measuring and motivating continuing research.
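
The phase-retrieval core of digital phase-shifting holography is compact enough to show directly. With four interferograms at phase shifts of 0, π/2, π, and 3π/2, the object phase follows from the standard four-step formula; synthetic fringes stand in for the undetected-photon interference, and the minimal-sampling variant used in the paper may use fewer steps:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256)
true_phase = 2 * np.pi * x                      # illustrative object phase ramp
amp, offset = 1.0, 2.0

# Four interferograms at phase shifts 0, pi/2, pi, 3pi/2:
I = [offset + amp * np.cos(true_phase + d)
     for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = np.arctan2(I[3] - I[1], I[0] - I[2])  # standard four-step formula

err = np.angle(np.exp(1j * (recovered - true_phase)))  # compare modulo 2*pi
print(f"max wrapped phase error: {np.abs(err).max():.2e} rad")
```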

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 92
2742 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structure, bearings, and motors, called a gimbal, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted to the gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform, and target information, such as location and velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the gimbal's elevation angle, since the altitude of the EO stabilized platform is relatively low. The longer the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of target information. So far, there have been many studies on EO stabilized platforms for flying vehicles; however, few researchers have focused on ship-borne EO stabilized platforms. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship. In particular, we need to overcome the limitation caused by the obliqueness of the gimbal's elevation angle. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present the problem definition showing the above-mentioned limitation. Finally, we demonstrate the effectiveness of the approach through computer simulations.
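
A minimal sketch of the basic geolocation step implied above: combining the gimbal azimuth and elevation with the LRF slant range to place the target in a local East-North-Up frame at the sensor. The UKF refinement and ship attitude compensation are omitted, and the numbers are illustrative:

```python
import numpy as np

def target_enu(azimuth_deg, elevation_deg, slant_range_m):
    """Place the target in a local East-North-Up frame centered at the sensor."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    east = slant_range_m * np.cos(el) * np.sin(az)
    north = slant_range_m * np.cos(el) * np.cos(az)
    up = slant_range_m * np.sin(el)
    return np.array([east, north, up])

# Mast-mounted sensor, target near the horizon: the elevation angle is very
# oblique, so small angle errors move the computed position a long way.
print(target_enu(azimuth_deg=45.0, elevation_deg=-0.15, slant_range_m=9500.0))
```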

Keywords: target localization, ship-borne electro-optical stabilized platform, unscented Kalman filter

Procedia PDF Downloads 520
2741 Development of a New Device for Bending Fatigue Testing

Authors: B. Mokhtarnia, M. Layeghi

Abstract:

This work presents an original bending fatigue-testing setup for the fatigue characterization of composite materials. A three-point quasi-static setup is introduced that is capable of applying stress-controlled loads with different loading waveforms, frequencies, and stress ratios. The setup is equipped with computerized measuring instruments to evaluate fatigue damage mechanisms. A detailed description of its parts and working features is given, and a dynamic analysis is performed to verify the functional accuracy of the device. Feasibility was validated successfully by conducting experimental fatigue tests.

Keywords: bending fatigue, quasi-static testing setup, experimental fatigue testing, composites

Procedia PDF Downloads 132
2740 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes, and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g., root mean squared error or variance), and is therefore considered more holistic. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g., the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov test has been verified when the null hypothesis is true, which allows the value of the statistic to be calibrated for the desired significance level (e.g., 5%). Once the process has been calibrated, it can be applied to compare the similarity of different DEM datasets (e.g., DEM05 versus DEM02). In summary, an innovative alternative for the comparison of DEM datasets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g., curvature) and to more than three variables.
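
A minimal sketch of the per-variable building block, a two-sample Kolmogorov-Smirnov test between product and reference samples (the paper's single joint test on the convolution of the distribution functions is not reproduced here); synthetic values stand in for DEM02/DEM05 cells:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
elev_ref = rng.normal(800, 120, 5000)            # reference DEM elevations
elev_prod = rng.normal(800, 120, 5000) + 1.5     # product with a small bias

stat, p = ks_2samp(elev_ref, elev_prod)
print(f"KS statistic: {stat:.4f}, p-value: {p:.4f}")
# Repeat for slope and aspect, or combine into the joint multivariate test.
```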

Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison

Procedia PDF Downloads 115
2739 Improving the Design of Blood Pressure and Blood Saturation Monitors

Authors: L. Parisi

Abstract:

A blood pressure monitor, or sphygmomanometer, can be either manual or automatic, employing the auscultatory method or the oscillometric method, respectively. The manual version involves an inflatable cuff and a stethoscope used to detect the sounds generated by the arterial walls, measuring blood pressure in an artery. An automatic sphygmomanometer monitors blood pressure through a pressure sensor, which detects vibrations caused by oscillations of the arterial walls. The pressure sensor implemented in this device improves the accuracy of the measurements taken.

Keywords: blood pressure, blood saturation, sensors, actuators, design improvement

Procedia PDF Downloads 455
2738 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
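
The white-paper quantification that the study evaluates is the single-compartment formula of Alsop et al. (2015), shown here with typical 3T parameter values; note that it contains no dispersion or ATT term, which is exactly the source of the ATT-dependent errors discussed above:

```python
import numpy as np

def wp_cbf(delta_m, si_pd, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
    """White-paper pCASL CBF in ml/100g/min.

    delta_m: label-control difference signal; si_pd: proton-density image;
    pld: post-labelling delay [s]; tau: label duration [s]; t1b: blood T1 [s];
    alpha: labelling efficiency; lam: blood-brain partition coefficient.
    """
    return (6000.0 * lam * delta_m * np.exp(pld / t1b)
            / (2.0 * alpha * t1b * si_pd * (1.0 - np.exp(-tau / t1b))))

print(f"{wp_cbf(delta_m=0.009, si_pd=1.0):.1f} ml/100g/min")  # ~78 for these values
```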

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 371
2737 Biomedical Definition Extraction Using Machine Learning with Synonymous Feature

Authors: Jian Qu, Akira Shimazu

Abstract:

OOV (Out Of Vocabulary) terms are terms that cannot be found in many dictionaries. Although it is possible to translate such OOV terms, the translations do not provide any real information for a user. We present an OOV term definition extraction method that uses information available from the Internet, with features such as the occurrence of synonyms and location distances. We apply a machine learning method to find the correct definitions for OOV terms. We tested our method on both biomedical-type and name-type OOV terms; our approach outperforms existing work with an accuracy of 86.5%.
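
A minimal sketch of the classification step, assuming each candidate definition is scored from simple features such as synonym occurrence counts and term-candidate distance; the classifier (logistic regression here) and all feature values are illustrative stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# features per candidate: [synonym occurrences, term-candidate distance in tokens]
X = np.array([[3, 2], [0, 40], [2, 5], [0, 25], [4, 1], [1, 30]], float)
y = np.array([1, 0, 1, 0, 1, 0])          # 1 = correct definition

clf = LogisticRegression().fit(X, y)
candidate = np.array([[2, 4]], float)      # nearby candidate with two synonym hits
print(f"P(correct definition) = {clf.predict_proba(candidate)[0, 1]:.2f}")
```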

Keywords: information retrieval, definition retrieval, OOV (out of vocabulary), biomedical information retrieval

Procedia PDF Downloads 495
2736 Eliminating Cutter-Path Deviation for Five-Axis NC Machining

Authors: Alan C. Lin, Tsong Der Lin

Abstract:

This study proposes a deviation control method that adds interpolation points to numerical control (NC) codes for five-axis machining in order to achieve the required machining accuracy. Specific research issues include: (1) converting machining data between the CL (cutter location) domain and the NC domain, (2) calculating the deviation between the deviated path and the linear path, (3) finding interpolation points, and (4) determining tool orientations for the interpolation points. System implementation with practical examples is also included to highlight the applicability of the proposed methodology.
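
A minimal sketch of the deviation-control loop behind issues (2) and (3): sample the true (curved) cutter path between two NC points, measure the chordal deviation from the straight move the controller would make, and recursively insert the worst-deviating point until tolerance is met. The circular-arc path is illustrative:

```python
import numpy as np

def arc_point(t):                       # stand-in for the true cutter path
    ang = np.pi / 3 * t
    return np.array([np.cos(ang), np.sin(ang), 0.1 * t])

def refine(t0, t1, tol=1e-3, samples=50):
    p0, p1 = arc_point(t0), arc_point(t1)
    ts = np.linspace(t0, t1, samples)
    d = p1 - p0
    devs = [np.linalg.norm(np.cross(arc_point(t) - p0, d)) / np.linalg.norm(d)
            for t in ts]                # point-to-chord distances
    worst = int(np.argmax(devs))
    if devs[worst] <= tol:
        return [t0]                     # the linear move is within tolerance
    return refine(t0, ts[worst]) + refine(ts[worst], t1)  # insert interpolation point

knots = refine(0.0, 1.0) + [1.0]
print(f"{len(knots)} NC points needed for 1e-3 chordal tolerance")
```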

Keywords: CAD/CAM, cutter path, five-axis machining, numerical control

Procedia PDF Downloads 424
2735 Pyramid Binary Pattern for Age Invariant Face Verification

Authors: Saroj Bijarnia, Preety Singh

Abstract:

We propose a simple and effective biometric system for face verification across aging based on a new texture feature variant, the Pyramid Binary Pattern, which employs the Local Binary Pattern together with its hierarchical information. Dimension reduction of the generated texture feature vector is performed using Principal Component Analysis, and a Support Vector Machine is used for classification. Our proposed method achieves an accuracy of 92.24% and can be used in an automated age-invariant face verification system.
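
A minimal sketch of the verification pipeline, with hierarchical (pyramid) LBP histograms, PCA reduction, and an SVM; random images stand in for the face dataset, and the two-level pyramid is a simplification:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def pyramid_lbp(img, levels=2, P=8, R=1.0):
    """Concatenate uniform-LBP histograms over a 1-block, then 2x2-block pyramid."""
    feats = []
    for lvl in range(levels):
        step = img.shape[0] // (2 ** lvl)
        for i in range(0, img.shape[0], step):
            for j in range(0, img.shape[1], step):
                lbp = local_binary_pattern(img[i:i+step, j:j+step], P, R, "uniform")
                hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
                feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
X = np.array([pyramid_lbp(rng.integers(0, 256, (64, 64)).astype(float))
              for _ in range(40)])
y = rng.integers(0, 2, 40)                       # same-person / different-person
X = PCA(n_components=10).fit_transform(X)        # dimension reduction
clf = SVC(kernel="rbf").fit(X[:30], y[:30])
print(f"held-out accuracy: {clf.score(X[30:], y[30:]):.2f}")
```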

Keywords: biometrics, age invariant, verification, support vector machine

Procedia PDF Downloads 352
2734 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, the traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 61
2733 Extracting Attributes for Twitter Hashtag Communities

Authors: Ashwaq Alsulami, Jianhua Shao

Abstract:

Various organisations often need to understand discussions on social media, such as which topics are trending and the characteristics of the people engaged in the discussion. A number of approaches have been proposed to extract attributes that characterise a discussion group. However, these approaches are largely based on supervised learning and as such require a large amount of labelled data. We propose an approach in this paper that does not require labelled data but instead relies on lexical sources to detect meaningful attributes for online discussion groups. Our findings show an acceptable level of accuracy in detecting attributes for Twitter discussion groups.

Keywords: attributed community, attribute detection, community, social network

Procedia PDF Downloads 162
2732 Deciphering Orangutan Drawing Behavior Using Artificial Intelligence

Authors: Benjamin Beltzung, Marie Pelé, Julien P. Renoult, Cédric Sueur

Abstract:

To this day, it is not known whether drawing is a specifically human behavior or whether this behavior finds its origins in ancestor species. An interesting window onto this question is to analyze drawing behavior in species genetically close to humans, such as non-human primates. A good candidate for this approach is the orangutan, which shares 97% of our genes and exhibits multiple human-like behaviors. Focusing on figurative aspects may not be suitable for orangutans' drawings, which may appear as scribbles but may still carry meaning. A manual feature selection would lead to an anthropocentric bias, as the features selected by humans may not match those relevant for orangutans. In the present study, we used deep learning to analyze the drawings of a female orangutan named Molly († in 2011), who produced 1,299 drawings in the last five years of her life as part of a behavioral enrichment program at the Tama Zoo in Japan. We investigate multiple ways to decipher Molly's drawings. First, we demonstrate the existence of differences between seasons by training a deep learning model to classify Molly's drawings according to the seasons. Then, to understand and interpret these seasonal differences, we analyze how the information spreads within the network, from shallow to deep layers, where early layers encode simple local features and deep layers encode more complex and global information. More precisely, we investigate the impact of feature complexity on classification accuracy through feature extraction fed to a Support Vector Machine. Last, we leverage style transfer to dissociate features associated with drawing style from those describing the representational content and analyze the relative importance of these two types of features in explaining seasonal variation. Content features were relevant for the classification, showing the presence of meaning in these non-figurative drawings and the ability of deep learning to decipher these differences. The style of the drawings was also relevant, as style features encoded enough information for classification better than random. The accuracy of style features was higher for deeper layers, highlighting the variation of style between seasons in Molly's drawings. Through this study, we demonstrate how deep learning can help find meaning in non-figurative drawings and interpret these differences.
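
A minimal sketch of the layer-depth analysis, extracting features from a shallow and a deep layer of a pre-trained CNN (VGG16 as a stand-in for the study's network, downloaded on first use) and comparing season classification with an SVM on each; random tensors stand in for the drawing images:

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

def layer_features(imgs, upto):
    """Run images through the first `upto` layers, channel-pool, return arrays."""
    with torch.no_grad():
        x = imgs
        for layer in vgg[:upto]:
            x = layer(x)
        return x.mean(dim=(2, 3)).numpy()   # global average pool per channel

imgs = torch.rand(40, 3, 224, 224)           # stand-in drawings
seasons = torch.randint(0, 4, (40,)).numpy() # four season labels

for name, upto in [("shallow", 5), ("deep", 29)]:
    X = layer_features(imgs, upto)
    acc = SVC().fit(X[:30], seasons[:30]).score(X[30:], seasons[30:])
    print(f"{name} layer features: accuracy {acc:.2f}")
```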

Keywords: cognition, deep learning, drawing behavior, interpretability

Procedia PDF Downloads 165
2731 Data Model to Predict Customized Skin Care Products Using Biosensors

Authors: Ashi Gautam, Isha Shukla, Akhil Seghal

Abstract:

Biosensors are analytical devices that use a biological sensing element to detect and measure a specific chemical substance or biomolecule in a sample. These devices are widely used in various fields, including medical diagnostics, environmental monitoring, and food analysis, due to their high specificity, sensitivity, and selectivity. In this research paper, a machine learning model is proposed for predicting the suitability of skin care products based on biosensor readings. The proposed model takes in features extracted from biosensor readings, such as biomarker concentration, skin hydration level, inflammation presence, sensitivity, and free radicals, and outputs the most appropriate skin care product for an individual. This model is trained on a dataset of biosensor readings and corresponding skin care product information. The model's performance is evaluated using several metrics, including accuracy, precision, recall, and F1 score. The aim of this research is to develop a personalised skin care product recommendation system using biosensor data. By leveraging the power of machine learning, the proposed model can accurately predict the most suitable skin care product for an individual based on their biosensor readings. This is particularly useful in the skin care industry, where personalised recommendations can lead to better outcomes for consumers. The developed model is based on supervised learning, which means that it is trained on a labeled dataset of biosensor readings and corresponding skin care product information. The model uses these labeled data to learn patterns and relationships between the biosensor readings and skin care products. Once trained, the model can predict the most suitable skin care product for an individual based on their biosensor readings. The results of this study show that the proposed machine learning model can accurately predict the most appropriate skin care product for an individual based on their biosensor readings. The evaluation metrics used in this study demonstrate the effectiveness of the model in predicting skin care products. This model has significant potential for practical use in the skin care industry for personalised skin care product recommendations. The proposed machine learning model for predicting the suitability of skin care products based on biosensor readings is a promising development in the skin care industry. The model's ability to accurately predict the most appropriate skin care product for an individual based on their biosensor readings can lead to better outcomes for consumers. Further research can be done to improve the model's accuracy and effectiveness.
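
A minimal sketch of the supervised model described above: biosensor-derived features in, recommended product class out, evaluated with the metrics named in the abstract. All data, feature definitions, and the labeling rule are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(50, 10, n),     # biomarker concentration
    rng.uniform(0, 1, n),      # skin hydration level
    rng.integers(0, 2, n),     # inflammation present (0/1)
    rng.uniform(0, 1, n),      # sensitivity score
    rng.normal(5, 1, n),       # free-radical level
])
y = (X[:, 1] < 0.4).astype(int) + (X[:, 2] == 1)   # three product classes, toy rule

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(classification_report(yte, model.predict(Xte)))  # accuracy/precision/recall/F1
```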

Keywords: biosensors, data model, machine learning, skin care

Procedia PDF Downloads 97