Search results for: affymetrix visualization
453 Design and Testing of Electrical Capacitance Tomography Sensors for Oil Pipeline Monitoring
Authors: Sidi M. A. Ghaly, Mohammad O. Khan, Mohammed Shalaby, Khaled A. Al-Snaie
Abstract:
Electrical capacitance tomography (ECT) is a valuable, non-invasive technique used to monitor multiphase flow processes, especially within industrial pipelines. This study focuses on the design, testing, and performance comparison of ECT sensors configured with 8, 12, and 16 electrodes, aiming to evaluate their effectiveness in imaging accuracy, resolution, and sensitivity. Each sensor configuration was designed to capture the spatial permittivity distribution within a pipeline cross-section, enabling visualization of phase distribution and flow characteristics such as oil and water interactions. The sensor designs were implemented and tested in closed pipes to assess their response to varying flow regimes. Capacitance data collected from each electrode configuration were reconstructed into cross-sectional images, enabling a comparison of image resolution, noise levels, and computational demands. Results indicate that the 16-electrode configuration yields higher image resolution and sensitivity to phase boundaries compared to the 8- and 12-electrode setups, making it more suitable for complex flow visualization. However, the 8- and 12-electrode sensors demonstrated advantages in processing speed and lower computational requirements. This comparative analysis provides critical insights into optimizing ECT sensor design based on specific industrial requirements, from high-resolution imaging to real-time monitoring needs.
Keywords: capacitance tomography, modeling, simulation, electrode, permittivity, fluid dynamics, imaging sensitivity measurement
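Linear back-projection (LBP) is the standard fast reconstruction method in ECT; the abstract does not name the authors' algorithm, so the sketch below is illustrative, with a random sensitivity matrix standing in for a real electric-field solution.

```python
import numpy as np

# Toy linear back-projection (LBP) reconstruction for an ECT sensor.
# With n_e electrodes there are n_e*(n_e-1)/2 independent capacitance pairs;
# `sens` is the (pairs x pixels) sensitivity matrix, random here for illustration.
rng = np.random.default_rng(0)

def lbp_reconstruct(cap, sens):
    """Normalized linear back-projection: g = S^T c / S^T 1."""
    num = sens.T @ cap
    den = sens.T @ np.ones(len(cap))
    return num / den

n_electrodes = 8
n_pairs = n_electrodes * (n_electrodes - 1) // 2   # 28 measurements
n_pixels = 16
sens = rng.random((n_pairs, n_pixels))
cap = rng.random(n_pairs)                          # normalized capacitances

image = lbp_reconstruct(cap, sens)
print(n_pairs, image.shape)   # 28 (16,)
```

A 16-electrode ring yields 120 independent pairs versus 28 for 8 electrodes, which is the measurement-count basis of the resolution/computation trade-off the study reports.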
Procedia PDF Downloads 144
452 Supplementation of Annatto (Bixa orellana)-Derived δ-Tocotrienol Produced High Number of Morula through Increased Expression of 3-Phosphoinositide-Dependent Protein Kinase-1 (PDK1) in Mice
Authors: S. M. M. Syairah, M. H. Rajikin, A. R. Sharaniza
Abstract:
Several embryonic cellular mechanisms, including the cell cycle, growth, and apoptosis, are regulated by the phosphatidylinositol-3-kinase (PI3K)/Akt signaling pathway. The goal of the present study was to determine the effects of annatto (Bixa orellana)-derived δ-tocotrienol (δ-TCT) on the regulation of PI3K/Akt genes in murine morulae. Twenty-four 6-8 week old (23-25 g) female BALB/c mice were randomly divided into four groups (G1-G4; n=6). The groups were subjected to the following treatments for 7 consecutive days: G1 (control) received tocopherol-stripped corn oil, G2 was given 60 mg/kg/day of a δ-TCT mixture (90% delta and 10% gamma isomers), G3 was given 60 mg/kg/day of pure δ-TCT (>98% purity), and G4 received 60 mg/kg/day of α-TOC. On Day 8, females were superovulated with 5 IU Pregnant Mare's Serum Gonadotropin (PMSG) for 48 hours followed by 5 IU human Chorionic Gonadotropin (hCG) before being mated with males at a ratio of 1:1. Females were sacrificed by cervical dislocation for embryo collection 48 hours post-coitum. About fifty morulae from each group were used in gene expression analyses with the Affymetrix QuantiGene Plex 2.0 Assay. The data showed a significant increase (p < 0.05) in the average number (mean ± SEM) of morulae produced in G2 (26.0 ± 0.45), G3 (23.0 ± 0.63) and G4 (25.0 ± 0.73) compared to the control group (G1: 16.0 ± 0.63). This parallels the high expression of the PDK1 gene, with increases of 2.75-fold (G2), 3.07-fold (G3) and 3.59-fold (G4) compared to G1 (1.78-fold). From these data, it can be concluded that supplementation with δ-TCT(s) and α-TOC induced high expression of PDK1 in G2-G4, which enhanced PI3K/Akt signaling activity and resulted in the increased number of morulae.
Keywords: delta-tocotrienol, embryonic development, nicotine, vitamin E
Procedia PDF Downloads 428
451 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service
Authors: Lai Wenfang
Abstract:
This study describes how artificial intelligence (AI) technology can be used to build a user-oriented platform for integrated archival services. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progress of information and communication technology (ICT), the NAA has built many systems to provide archival services. To cope with new challenges, such as emerging ICT, artificial intelligence, or blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform the staff of sending agencies which record catalogues violate the transfer or destruction rules, but also use the model to find details hidden in the catalogues and suggest to NAA staff whether the records should be retained, shortening the auditing time. The platform keeps all users' browsing trails, so that it can predict which kinds of archives a user may be interested in, recommend search terms through visualization, and inform users of newly arrived archives. In addition, under the Archives Act, NAA staff must spend considerable time marking or removing personal data, classified data, etc., before archives are provided. To improve the archives access service process, the platform will use text recognition patterns to black out such data automatically; staff only need to correct errors and upload the corrected version, and as the platform learns, its accuracy will improve. In short, the purpose of the platform is to advance the government's digital transformation and realize the vision of a service-oriented smart government.
Keywords: artificial intelligence, natural language processing, machine learning, visualization
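The automatic blackout step described above can be sketched with simple pattern matching. The NAA's actual text-recognition patterns are not described in the abstract, so the regexes below are hypothetical stand-ins for illustration only.

```python
import re

# Hypothetical redaction pass: illustrative patterns, not the NAA's real ones.
PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),       # e-mail addresses
    re.compile(r"\b\d{2,4}-\d{3,4}-\d{3,4}\b"),   # phone-like numbers
]

def redact(text, mask="█"):
    """Replace each match with a same-length run of the mask character."""
    for pat in PATTERNS:
        text = pat.sub(lambda m: mask * len(m.group()), text)
    return text

print(redact("Contact: alice@example.org or 02-2345-6789"))
```

In a learning loop like the one the NAA describes, staff corrections to the masked output would become labeled training data that refines the patterns (or a trained NER model) over time.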
Procedia PDF Downloads 176
450 Creation of a Realistic Railway Simulator Developed on a 3D Graphic Game Engine Using a Numerical Computing Programming Environment
Authors: Kshitij Ansingkar, Yohei Hoshino, Liangliang Yang
Abstract:
Advances in algorithms related to autonomous systems have made it possible to research improving the accuracy of a train's location. This has the capability of increasing the throughput of a railway network without the need to create additional infrastructure. To develop such a system, the railway industry requires data to test sensor fusion theories or implement simultaneous localization and mapping (SLAM) algorithms. Though such simulation data and ground truth datasets are available for testing vehicle automation algorithms, due to regulations and economic considerations there is a dearth of such datasets in the railway industry. Thus, there is a need for a simulation environment that can generate realistic synthetic datasets. This paper proposes (1) leveraging the capabilities of open-source 3D graphic rendering software to create a visualization of the environment, (2) utilizing open-source 3D geospatial data for accurate visualization, and (3) integrating the graphic rendering software with a programming language and numerical computing platform. To develop such an integrated platform, this paper combines the computing platform's advanced sensor models, such as LIDAR, camera, IMU, or GPS, with the 3D rendering of the game engine to generate high-quality synthetic data. These datasets can then be used to train railway models and improve the accuracy of a train's location.
Keywords: 3D game engine, 3D geospatial data, dataset generation, railway simulator, sensor fusion, SLAM
Procedia PDF Downloads 12
449 The Impact of Introspective Models on Software Engineering
Authors: Rajneekant Bachan, Dhanush Vijay
Abstract:
The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32-bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.
Keywords: software engineering, architectures, introspective models, operating systems
Procedia PDF Downloads 539
448 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example
Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang
Abstract:
Background: Recent advances in high-throughput research technologies such as next-generation sequencing and multi-dimensional liquid chromatography make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these "BIG" data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited bio-computing knowledge to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system: C-eXpress takes a simple text file that contains standard NCBI gene or protein IDs and expression levels (RPKM or fold) as input to generate a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering process. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase, and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
Keywords: cancer, visualization, database, functional annotation
Procedia PDF Downloads 619
447 A Rapid Prototyping Tool for Suspended Biofilm Growth Media
Authors: Erifyli Tsagkari, Stephanie Connelly, Zhaowei Liu, Andrew McBride, William Sloan
Abstract:
Biofilms play an essential role in treating water in biofiltration systems. Biofilm morphology and function are inextricably linked to the hydrodynamics of flow through a filter, and yet engineers rarely explicitly engineer this interaction. We develop a system that links computer simulation and 3-D printing to rapidly prototype filter media, under the hypothesis that biofilm function is intimately linked to the flow passing through the filter. A computational model that numerically solves the incompressible time-dependent Navier-Stokes equations, coupled to a model for biofilm growth and function, is developed. The model is embedded in an optimization algorithm that allows the model domain to adapt until criteria on biofilm functioning are met. This is applied to optimize the shape of filter media in a simple flow channel to promote biofilm formation. The computer code links directly to a 3-D printer, which allows us to prototype the design rapidly. Its validity is tested in flow visualization experiments and by microscopy. As proof of concept, the code was constrained to explore a small range of potential filter media, where the medium acts as an obstacle in the flow that sheds a von Karman vortex street, which was found to enhance the deposition of bacteria on surfaces downstream. The flow visualization and microscopy in the 3-D printed realization of the flow channel validated the predictions of the model and hence its potential as a design tool. Overall, it is shown that the combination of our computational model and 3-D printing can be effectively used as a design tool to prototype filter media that optimize biofilm formation.
Keywords: biofilm, biofilter, computational model, von Karman vortices, 3-D printing
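The von Karman vortex street behind a bluff obstacle sheds at a frequency given by the Strouhal relation, St = fD/U, with St close to 0.2 for cylinders over a wide Reynolds-number range. The numbers below are illustrative, not taken from the paper.

```python
# Shedding frequency of a von Karman vortex street behind a bluff body,
# from the Strouhal relation St = f*D/U (St ~ 0.2 for cylinders).
def shedding_frequency(velocity, diameter, strouhal=0.2):
    return strouhal * velocity / diameter

def reynolds(velocity, diameter, nu=1.0e-6):   # nu: water at ~20 C, m^2/s
    return velocity * diameter / nu

U, D = 0.05, 0.004          # e.g. 5 cm/s flow past a 4 mm filter medium
print(f"Re = {reynolds(U, D):.0f}, f = {shedding_frequency(U, D):.2f} Hz")
# Re = 200, f = 2.50 Hz
```

Vortex shedding only occurs above Re of roughly 47 for a cylinder, so a check like this gives a quick sanity test on whether a candidate medium geometry can shed at all before running the full Navier-Stokes model.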
Procedia PDF Downloads 143
446 Identification of microRNAs in Early and Late Onset of Parkinson’s Disease Patient
Authors: Ahmad Rasyadan Arshad, A. Rahman A. Jamal, N. Mohamed Ibrahim, Nor Azian Abdul Murad
Abstract:
Introduction: Parkinson's disease (PD) is a complex, initially asymptomatic disease in which patients are usually diagnosed at a late stage, when about 70% of the dopaminergic neurons are lost. Therefore, identification of molecular biomarkers is crucial for early diagnosis of PD. MicroRNAs (miRNAs) are short non-coding small RNAs which regulate gene expression post-transcriptionally. The involvement of miRNAs in neurodegenerative diseases includes maintenance of neuronal development, necrosis, mitochondrial dysfunction, and oxidative stress. Thus, miRNAs could be potential biomarkers for diagnosis of PD. Objective: This study aimed to identify the miRNAs involved in Late Onset PD (LOPD) and Early Onset PD (EOPD) compared to controls. Methods: This is a case-control study involving PD patients at the Chancellor Tunku Muhriz Hospital, UKM Medical Centre. miRNA samples were extracted using the miRNeasy serum/plasma kit from Qiagen. The quality of the extracted miRNA was determined using the Agilent RNA 6000 Nano kit in the Bioanalyzer. miRNA expression was profiled using the GeneChip miRNA 4.0 chip from Affymetrix. Microarray was performed in EOPD (n=7), LOPD (n=9) and healthy controls (n=11). Expression Console and Transcriptome Analysis Console were used to analyze the microarray data. Results: miR-129-5p was significantly downregulated in EOPD compared to LOPD, with a -4.2 fold change (p < 0.05). miR-301a-3p was upregulated in EOPD compared to healthy controls (fold = 10.3, p < 0.05). In LOPD versus healthy controls, miR-486-3p (fold = 15.28, p < 0.05), miR-29c-3p (fold = 12.21, p < 0.05) and miR-301a-3p (fold = 10.01, p < 0.05) were upregulated. Conclusion: Several miRNAs were identified as differentially expressed in EOPD compared to LOPD and in PD versus controls. These miRNAs could serve as potential biomarkers for early diagnosis of PD. However, these miRNAs need to be validated in a larger sample size.
Keywords: early onset PD, late onset PD, microRNA (miRNA), microarray
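The signed fold-change convention used in such microarray reports (a value of -4.2 means 4.2-fold lower in the case group) reduces to a ratio with a sign flip below 1. The expression values below are invented purely to illustrate the arithmetic.

```python
# Signed fold change as commonly reported for microarray data:
# negative values mean downregulation in the `case` group.
def fold_change(case, control):
    ratio = case / control
    return ratio if ratio >= 1 else -1 / ratio

# e.g. a transcript at 25 signal units in EOPD vs 105 in LOPD (invented values)
fc = fold_change(25.0, 105.0)
print(round(fc, 1))   # -4.2
```

Differential-expression calls in tools like the Transcriptome Analysis Console combine such fold changes with a per-gene significance test, which is why each value above is paired with a p-value.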
Procedia PDF Downloads 259
445 Computer-Aided Diagnosis of Eyelid Skin Tumors Using Machine Learning
Authors: Ofira Zloto, Ofir Fogel, Eyal Klang
Abstract:
Purpose: The aim is to develop an automated framework based on machine learning to diagnose malignant eyelid skin tumors. Methods: This study utilized eyelid lesion images from Sheba Medical Center, a large tertiary center in Israel. Before model training, we pre-trained our models on the ISIC 2019 dataset consisting of 25,332 images. The proprietary eyelid dataset was then used for fine-tuning. The dataset contained multiple images per patient, with the aim of classifying malignant lesions against their benign counterparts. Results: The analyzed dataset consisted of images representing both benign and malignant eyelid lesions. For the benign category, a total of 373 images were sourced; the malignant category had 186 images. Based on the accuracy values, the model with 3 epochs and a learning rate of 0.0001 exhibited the best performance, achieving an accuracy of 0.748 with a standard deviation of 0.034. At a sensitivity of 69%, the model has a corresponding specificity of 82%. To further understand the decision-making process of our model, we employed heatmap visualization techniques, specifically Gradient-weighted Class Activation Mapping (Grad-CAM). Discussion: This study introduces a dependable model-aided diagnostic technology for assessing eyelid skin lesions. The model demonstrated accuracy comparable to human evaluation, effectively determining whether a lesion raises high suspicion of malignancy or is benign. Such a model has the potential to alleviate the burden on the healthcare system, particularly benefiting rural areas and enhancing the efficiency of clinicians and overall healthcare.
Keywords: machine learning, eyelid skin tumors, decision-making process, heatmap visualization techniques
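Gradient-weighted Class Activation Mapping, mentioned above, reduces to a small amount of array arithmetic once the feature maps and class-score gradients are extracted from a CNN's last convolutional layer. The tensors below are random stand-ins, not the study's model.

```python
import numpy as np

# Minimal Grad-CAM arithmetic on stand-in arrays: in practice `fmaps` and
# `grads` come from a trained CNN's final convolutional layer.
def grad_cam(fmaps, grads):
    weights = grads.mean(axis=(1, 2))             # global-average-pool gradients
    cam = np.einsum("k,kij->ij", weights, fmaps)  # weighted sum of feature maps
    cam = np.maximum(cam, 0)                      # ReLU: keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(1)
fmaps = rng.random((32, 7, 7))   # 32 channels of 7x7 activations
grads = rng.random((32, 7, 7))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape, float(heatmap.max()))   # (7, 7) 1.0
```

The resulting low-resolution map is typically upsampled to the input image size and overlaid as a heatmap to show which lesion regions drove the malignancy score.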
Procedia PDF Downloads 4
444 A Wearable Fluorescence Imaging Device for Intraoperative Identification of Human Brain Tumors
Authors: Guoqiang Yu, Mehrana Mohtasebi, Jinghong Sun, Thomas Pittman
Abstract:
Malignant glioma (MG) is the most common type of primary malignant brain tumor. Surgical resection of MG remains the cornerstone of therapy, and the extent of resection correlates with patient survival. A limiting factor for resection, however, is the difficulty of differentiating tumor from normal tissue during surgery. Fluorescence imaging is an emerging technique for real-time intraoperative visualization of MGs and their boundaries. However, most clinical-grade neurosurgical operative microscopes with fluorescence imaging capability suffer from low adoption rates due to high cost, limited portability, limited operational flexibility, and a lack of skilled professionals with the requisite technical knowledge. To overcome these limitations, we integrated miniaturized light sources, flippable filters, and a recording camera into surgical eye loupes to create a wearable fluorescence eye loupe (FLoupe) device for intraoperative imaging of fluorescent MGs. Two FLoupe prototypes were constructed for imaging of fluorescein and 5-aminolevulinic acid (5-ALA), respectively. The wearable FLoupe devices were tested on tumor-simulating phantoms and on patients with MGs. Comparable results were observed against the standard neurosurgical operative microscope (PENTERO® 900) with fluorescence kits. The affordable and wearable FLoupe devices enable visualization of both color and fluorescence images with the same quality as the large and expensive stationary operative microscopes. The wearable FLoupe device allows for a greater range of movement, less obstruction, and faster, easier operation. Thus, it reduces surgery time and is more easily adapted to the surgical environment than unwieldy neurosurgical operative microscopes.
Keywords: fluorescence guided surgery, malignant glioma, neurosurgical operative microscope, wearable fluorescence imaging device
Procedia PDF Downloads 66
443 Evidence of Natural Selection Footprints among Some African Chicken Breeds and Village Ecotypes
Authors: Ahmed Elbeltagy, Francesca Bertolini, Damarius Fleming, Angelica Van Goor, Chris Ashwell, Carl Schmidt, Donald Kugonza, Susan Lamont, Max Rothschild
Abstract:
Natural selection is likely the major factor shaping genomic variation in African indigenous rural chickens, leaving detectable genetic footprints in their genomes. To investigate this selection-footprint hypothesis, a total of 292 birds were randomly sampled from three indigenous ecotypes from East Africa (Uganda, Rwanda) and North Africa (Egypt), two registered Egyptian breeds (Fayoumi and Dandarawi), and the synthetic Kuroiler breed. Samples were genotyped using the Affymetrix 600K Axiom® Array. A total of 526,652 SNPs were utilized in the downstream analysis after quality control measures. Intra-population runs of homozygosity (ROH) shared by > 50% of individuals of an ecotype or > 75% of a breed were studied. To identify inter-population differentiation due to genetic structure, FST was calculated for North- vs. East-African populations, in addition to pairwise population combinations, over overlapping windows (500 Kb with an overlap of 250 Kb). A total of 28,563 ROH were detected and classified into three length categories. ROH- and FST-detected sweeps were identified on several autosomes. Several genes in these regions are likely related to adaptation to local environmental stresses, including high altitude, disease resistance, poor nutrition, and oxidative and heat stresses, and were linked to gene ontology (GO) terms related to immune response, oxygen consumption and heme binding, carbohydrate metabolism, oxidation-reduction, and behavior. The results indicate a possible effect of natural selection forces in shaping genomic structure for adaptation to local environmental stresses.
Keywords: African chicken, runs of homozygosity, FST, selection footprints
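A windowed FST scan like the one described rests on a simple per-site statistic that is then averaged over each window. A minimal sketch of Wright's FST for a single biallelic SNP, with invented allele frequencies, is:

```python
# Wright's FST for one biallelic SNP from two population allele frequencies:
# a simplified per-site version of the windowed statistic used in such scans.
def fst(p1, p2):
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                   # total heterozygosity
    h_s = (2*p1*(1-p1) + 2*p2*(1-p2)) / 2           # mean within-population
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

print(round(fst(0.9, 0.1), 3))   # 0.64  -> strong differentiation
print(fst(0.5, 0.5))             # 0.0   -> no differentiation
```

Windows whose average FST falls in the extreme upper tail of the genome-wide distribution are the candidate selection sweeps reported in such studies.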
Procedia PDF Downloads 313
442 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis
Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin
Abstract:
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. To gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps provide insight into the feature maps that lead to a convolutional neural network's classification; for lung diseases, the region of interest is enhanced by U-Net-assisted Class Activation Map (CAM) visualization. Our proposed model therefore combines image segmentation models and classifiers to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
Keywords: multi-output network model, U-Net, class activation map, image classification, medical imaging analysis
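The cropping idea at the heart of the proposed model, keeping class-activation evidence only inside the predicted lung mask, comes down to an element-wise product of the two outputs. The toy arrays below are illustrative stand-ins for the CAM and the U-Net mask.

```python
import numpy as np

# Sketch of CAM cropping: zero out activation outside the segmented region.
def crop_cam(cam, lung_mask):
    """Keep class-activation values only where the lung mask is positive."""
    return cam * (lung_mask > 0.5)

cam = np.array([[0.9, 0.2],
                [0.7, 0.8]])
mask = np.array([[1.0, 0.0],     # only the left column is lung
                 [1.0, 0.0]])
print(crop_cam(cam, mask))       # right column suppressed to 0
```

Because both outputs come from one forward pass of the multi-output network, the masked CAM is produced at essentially no extra inference cost compared with running a separate segmentation model.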
Procedia PDF Downloads 205
441 FlameCens: Visualization of Expressive Deviations in Music Performance
Authors: Y. Trantafyllou, C. Alexandraki
Abstract:
Music interpretation refers to the way musicians shape their performance by deliberately deviating from the composer's intentions, which are commonly communicated via some form of music transcription, such as a score. For transcribed, non-improvised music, expression is manifested by introducing subtle deviations in tempo, dynamics, and articulation during the evolution of a performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualizing deviations in tempo and dynamics during playback. The application may also compare a performance to the music score of that piece (i.e., a MIDI file), which may be thought of as an expression-neutral representation, hence depicting the expressive cues employed by a particular performer. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated in a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that faster tempo tilts the slope to the right and slower tempo tilts it to the left; constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the color of the particles is used to convey the pitch of the notes during performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres, including classical, contemporary, jazz, and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e., musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization
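The Dynamic Time Warping comparison that FlameCens relies on can be sketched directly. Real inputs would be per-frame CENS chroma vectors; the sequences below are toy data used only to show the recurrence.

```python
import numpy as np

# Plain dynamic time warping over per-frame feature vectors.
# Returns the cumulative cost of the optimal alignment path.
def dtw_cost(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])      # local frame distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([[0.0], [1.0], [2.0]])
print(dtw_cost(a, a))          # 0.0: identical sequences align for free
print(dtw_cost(a, a + 1.0))    # > 0: a shifted sequence incurs cost
```

In an application like FlameCens, the shape of the optimal warping path (how far it departs from the diagonal) is what encodes the tempo deviations, while local energy differences encode dynamics.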
Procedia PDF Downloads 131
440 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the Y-junction can be difficult because of micro-scale laminar flow, particularly with two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer in laminar flow is acoustic streaming (AS), a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure developed in this study was machined with CNC cutting tools to create a microchannel mold carrying the structure along the longitudinal mixing region of the Y-junction. The molds, with sharp-edge tip angles of 30° and a tip depth of 0.3 mm, were cut from PMMA (polymethyl methacrylate), and the channel was fabricated in PDMS (polydimethylsiloxane) on the top surface of the Y-junction microchannel using soft-lithography nanofabrication. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study mixing enhancement of the high-viscosity miscible fluids at various structure lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies from the grayscale pixel intensity using MATLAB. Mixing experiments used a fluorescent green dye solution with de-ionized water at one inlet and a de-ionized water-glycerol mixture at the other; the degree of mixing improved from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids around the entrance junction is enhanced by the formation of a new, intense, three-dimensional steady streaming rolling motion at high volume flow rates.
Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
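A grayscale degree-of-mixing metric like the one evaluated above is commonly computed as one minus the normalized standard deviation of pixel intensities; the exact formula the authors used in MATLAB is not given, so the pure-Python version below is an illustrative variant on toy data.

```python
from statistics import pstdev

# Degree of mixing from pixel intensities:
# 1.0 = perfectly mixed (uniform image), 0.0 = fully segregated.
def mixing_degree(pixels, unmixed_std):
    return 1.0 - pstdev(pixels) / unmixed_std

segregated = [0, 0, 0, 255, 255, 255]      # dye entirely on one side
sigma_max = pstdev(segregated)             # worst-case deviation = 127.5

print(mixing_degree(segregated, sigma_max))                      # 0.0
print(mixing_degree([127, 128, 127, 128, 127, 128], sigma_max))  # ~0.996
```

Applied frame by frame to μPIV or fluorescence images downstream of the junction, this single number lets runs at different amplitudes, flow rates, and frequencies be compared directly.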
Procedia PDF Downloads 23
439 Enhancement of Road Defect Detection Using First-Level Algorithm Based on Channel Shuffling and Multi-Scale Feature Fusion
Authors: Yifan Hou, Haibo Liu, Le Jiang, Wandong Su, Binqing Wang
Abstract:
Road defect detection is crucial for modern urban management and infrastructure maintenance. Traditional road defect detection methods mostly rely on manual labor, which is not only inefficient but also difficult to make reliable. Existing deep learning-based road defect detection models, meanwhile, have poor detection performance in complex environments and lack robustness to multi-scale targets. To address this challenge, this paper proposes a distinct detection framework based on a one-stage network structure. It designs a deep feature extraction network based on RCSDarknet, which applies channel shuffling to enhance information fusion between tensors. Through repeated stacking of RCS modules, the information flow between different channels of adjacent-layer features is enhanced, improving the model's ability to capture target spatial features. In addition, a multi-scale feature fusion mechanism with weighted dual flow paths is adopted to fuse spatial features of different scales, further improving detection performance across scales. To validate the performance of the proposed algorithm, we tested it on the RDD2022 dataset. The experimental results show that the enhanced algorithm achieved 84.14% mAP, which is 1.06% higher than the advanced YOLOv8 algorithm. Visualization of the results also shows that the proposed algorithm performs well in detecting targets of different scales in complex scenes. These results demonstrate the effectiveness and superiority of the proposed algorithm, providing valuable insights for advancing real-time road defect detection methods.
Keywords: roads, defect detection, visualization, deep learning
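Channel shuffling of the kind used in RCS-style blocks is a reshape-transpose-reshape that interleaves channels across groups so grouped convolutions can exchange information. A minimal sketch on a toy tensor:

```python
import numpy as np

# Channel shuffle: split channels into groups, transpose, flatten back,
# so subsequent grouped operations see channels from every group.
def channel_shuffle(x, groups):
    c, h, w = x.shape
    g = x.reshape(groups, c // groups, h, w)   # split channels into groups
    g = g.transpose(1, 0, 2, 3)                # interleave across groups
    return g.reshape(c, h, w)

x = np.arange(8).reshape(8, 1, 1)      # 8 channels labelled 0..7
print(channel_shuffle(x, 2).ravel())   # [0 4 1 5 2 6 3 7]
```

The operation is parameter-free and nearly cost-free at inference time, which is why it suits real-time detectors like the one proposed here.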
Procedia PDF Downloads 13
438 Monotone Rational Trigonometric Interpolation
Authors: Uzma Bashir, Jamaludin Md. Ali
Abstract:
This study is concerned with the visualization of monotone data using a piecewise C1 rational trigonometric interpolating scheme. Four positive shape parameters are incorporated in the structure of the rational trigonometric spline. Conditions on two of these parameters are derived to attain the monotonicity of monotone data, while the other two are left free. Figures are used throughout to show that the proposed scheme produces graphically smooth monotone curves.
Keywords: trigonometric splines, monotone data, shape preserving, C1 monotone interpolant
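The paper's monotonicity conditions are specific to its rational trigonometric spline and are not reproduced in the abstract. As a point of comparison, the classic Fritsch-Carlson-style slope limiting for cubic Hermite interpolants achieves the same shape preservation and is easy to sketch:

```python
# Fritsch-Carlson-style monotone slopes for a cubic Hermite interpolant:
# derivatives are limited so each spline segment preserves monotone data.
def monotone_slopes(x, y):
    n = len(x)
    d = [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(n - 1)]  # secants
    m = [d[0]] + [0.0] * (n - 2) + [d[-1]]
    for i in range(1, n - 1):
        if d[i-1] * d[i] > 0:                    # same sign: harmonic mean
            m[i] = 2 * d[i-1] * d[i] / (d[i-1] + d[i])
        # opposite signs or a flat secant: slope stays 0 (local extremum)
    return m

m = monotone_slopes([0, 1, 2, 3], [0, 1, 1, 2])
print(m)   # [1.0, 0.0, 0.0, 1.0] -- the flat middle segment forces zero slopes
```

The rational trigonometric scheme in the paper plays the same role as this limiter: its two constrained shape parameters take the place of the limited derivatives, while the two free parameters allow further shape control.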
Procedia PDF Downloads 271
437 Three-Dimensional Computer Graphical Demonstration of Calcified Tissue and Its Clinical Significance
Authors: Itsuo Yokoyama, Rikako Kikuti, Miti Sekikawa, Tosinori Asai, Sarai Tsuyoshi
Abstract:
Introduction: Vascular access for hemodialysis therapy is often difficult, even for experienced medical personnel. Ultrasound-guided needle placement has been performed occasionally but is not always helpful in cases with complicated vascular anatomy. Obtaining precise anatomical knowledge of the vascular structure is important to prevent access-related complications. With an augmented reality (AR) device such as AR glasses, the virtual vascular structure is shown superimposed on the patient's actual vessels, enabling the operator to maneuver catheter placement easily with both hands free. We herein report our method of AR-guided vascular access in dialysis treatment. Methods: A three-dimensional (3D) object of the arm with the arteriovenous fistula is created computer-graphically with 3D software from data obtained by computed tomography, ultrasound echogram, and image scanner. The 3D vascular object thus created is viewed on the screen of an AR display device (such as AR glasses or an iPad). The vascular anatomical structure becomes visible, superimposed over the real patient's arm, so that needle insertion can be performed under the guidance of AR visualization with ease. By this method, the technical difficulty of catheter placement for dialysis can be lessened and the procedure performed safely. Considerations: Virtual reality technology has been applied in various fields, and medicine is no exception, yet AR devices have not been widely used among medical professions. Visualization of the virtual vascular object is achieved by creating an accurate three-dimensional object with computer graphics techniques. Although our experience is limited, this method is applicable with relative ease, and our accumulating evidence suggests that our method of vascular access with AR is promising.
Keywords: abdominal aorta, calcification, extraskeletal, dialysis, computer graphics, 3DCG, CT, calcium, phosphorus
Procedia PDF Downloads 166436 Emerging Trends of Geographic Information Systems in Built Environment Education: A Bibliometric Review Analysis
Authors: Kiara Lawrence, Robynne Hansmann, Clive Greentsone
Abstract:
Geographic Information Systems (GIS) are used to capture, store, analyze, visualize and monitor geographic data. Built environment professionals, and urban planners in particular, need GIS skills to plan spaces effectively and efficiently. GIS application extends beyond the production of map artifacts and can be applied to spatially referenced, real-time data to support spatial visualization, analysis, community engagement, scenario building, and so forth. Though GIS has been used in the built environment for a few decades, its use in education has not been researched enough to draw conclusions on the trends of the last 20 years. This study looks to discover current and emerging trends of GIS in built environment education. A bibliometric review analysis was carried out by exporting documents from Scopus and Web of Science using the keywords "Geographic information systems" OR "GIS" AND "built environment" OR "geography" OR "architecture" OR "quantity surveying" OR "construction" OR "urban planning" OR "town planning" AND "education" between the years 1994 and 2024. A total of 564 documents were identified and exported. The data were then analyzed using VOSviewer software to generate network analysis and visualization maps of keyword co-occurrence, co-citation of documents and countries, and co-author networks. By analyzing each aspect of the data, a deeper insight into GIS within education can be gained. Preliminary results from Scopus indicate that GIS research focusing on built environment education peaked prior to 2014, with much focus on remote sensing, demography, land use, engineering education, and so forth. 
These data can help in understanding and implementing GIS in built environment education in ways that are both foundational and innovative, ensuring that students are equipped with sufficient knowledge and skills to carry out tasks in their respective fields.Keywords: architecture, built environment, construction, education, geography, geographic information systems, quantity surveying, town planning, urban planning
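The keyword co-occurrence maps described above can be reproduced in miniature. The sketch below (plain Python rather than VOSviewer, with invented keyword sets) tallies pairwise keyword co-occurrence counts, the raw material of such network maps:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(docs):
    """Count pairwise keyword co-occurrence across documents.

    Each document is a set of author keywords; a pair is counted once
    per document, mirroring the edge weights of a co-occurrence map.
    """
    counts = Counter()
    for kws in docs:
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts

# hypothetical keyword sets standing in for exported Scopus records
docs = [
    {"GIS", "education", "urban planning"},
    {"GIS", "remote sensing", "education"},
    {"GIS", "urban planning"},
]
edges = cooccurrence(docs)
print(edges[("GIS", "education")])  # co-occurs in 2 of the 3 documents
```

The resulting weighted edge list is exactly what a tool like VOSviewer lays out spatially.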
Procedia PDF Downloads 17435 Legal Judgment Prediction through Indictments via Data Visualization in Chinese
Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun
Abstract:
Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law applies, based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, covering 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input and jointly learn a prediction model for law source, article, and charge simultaneously, based on the pre-trained BERT model. For single-article cases in which both the charge and the article occur more than 50 times, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the large performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges, and found that the poor prediction performance was actually due to wording precision: some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be charged as “negligent injury”, “negligent death”, “business injury”, “driving business injury”, or “non-driving business injury”. As another example, Article 10 of the Drug Hazard Control Regulations can be charged as “Drug Control Regulations” or “Drug Hazard Control Regulations”. In order to solve these problems and predict the article and charge more accurately, we plan to include the article content or charge names in the input, and use the sentence-pair classification method for question-answer problems in the BERT model to improve performance. 
We will also consider a sequence-to-sequence approach to charge prediction.Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization
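The sentence-pair idea described above can be illustrated with a toy stand-in: here a simple token-overlap scorer plays the role of the BERT pair classifier, ranking candidate charge names against the crime facts. The fact text and charge list are invented for illustration; in the actual setting, each (fact, charge-name) pair would be fed through BERT as a sentence pair.

```python
def pair_score(fact, charge):
    """Stand-in for a BERT sentence-pair classifier: score a
    (fact, charge-name) pair by simple token overlap."""
    f, c = set(fact.lower().split()), set(charge.lower().split())
    return len(f & c) / max(len(c), 1)

def rank_charges(fact, candidates):
    """Rank candidate charge names by their pair score with the facts."""
    return sorted(candidates, key=lambda ch: pair_score(fact, ch), reverse=True)

fact = "the defendant caused negligent injury while driving a car"
candidates = ["negligent injury", "negligent death", "driving business injury"]
print(rank_charges(fact, candidates)[0])  # "negligent injury"
```

Pairing each candidate with the facts turns charge prediction into a matching problem, which is how wording-precision ambiguities can be resolved.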
Procedia PDF Downloads 122434 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the battlefield, situational awareness, effective planning, monitoring, and other tasks. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and creation of predictive models that enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth estimation based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. 
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on unannotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method provides fast and accurate decision-making with GIS for localization of troops, position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.Keywords: depth, deep learning, geovisualisation, satellite images
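The imagery-DEM fusion step can be sketched at the tensor level. The snippet below is a minimal NumPy illustration, not the authors' network: it concatenates feature maps from the two branches along the channel axis and projects them to a single dense depth map with stand-in weights (a trained network would learn this projection).

```python
import numpy as np

def fuse_features(optical_feat, dem_feat):
    """Toy sketch of imagery-DEM fusion: concatenate channel-wise,
    then project to a dense depth map with a 1x1-style weighting."""
    fused = np.concatenate([optical_feat, dem_feat], axis=0)  # (C1+C2, H, W)
    c, _, _ = fused.shape
    rng = np.random.default_rng(0)
    weights = rng.standard_normal(c)              # stands in for trained weights
    depth = np.tensordot(weights, fused, axes=1)  # (H, W) dense depth
    return depth

# feature maps from the two hypothetical encoder branches
opt = np.ones((8, 4, 4))   # optical/radar branch: 8 channels
dem = np.zeros((4, 4, 4))  # DEM branch: 4 channels
print(fuse_features(opt, dem).shape)  # (4, 4)
```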
Procedia PDF Downloads 13433 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparison between water masses may be conducted. A Gaussian mixture model describes the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances between their component Gaussian distributions. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly without prior knowledge for assuming a regression family, but can also restrict model complexity by controlling the number of mixtures when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system for ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. 
This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.Keywords: water mass, Gaussian mixture model, data visualization, system framework
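The distance defined above, a weighted sum of pairwise Bhattacharyya distances between mixture components, can be written down directly. The sketch below assumes each GMM is given as a list of (weight, mean, covariance) triples over temperature-salinity-depth vectors; the example mixtures are invented for illustration.

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(gmm_a, gmm_b):
    """Weighted sum of pairwise component distances, as in the abstract."""
    return sum(wa * wb * bhattacharyya(ma, ca, mb, cb)
               for wa, ma, ca in gmm_a
               for wb, mb, cb in gmm_b)

eye = np.eye(3)  # dimensions: temperature, salinity, depth
gmm_a = [(1.0, np.zeros(3), eye)]  # one-component mixture for location A
gmm_b = [(1.0, np.ones(3), eye)]   # one-component mixture for location B
print(round(gmm_distance(gmm_a, gmm_b), 4))  # 0.375
```

Because this closed form needs no sampling, comparing many locations pairwise stays cheap, which is what enables the interactive similarity queries.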
Procedia PDF Downloads 145432 IL4/IL13 STAT6 Mediated Macrophage Polarization During Acute and Chronic Pancreatitis
Authors: Hager Elsheikh, Juliane Glaubitz, Frank Ulrich Weiss, Matthias Sendler
Abstract:
Aim: Acute pancreatitis (AP) and chronic pancreatitis (CP) are both accompanied by a prominent immune response which influences the course of the disease. Whereas during AP the pro-inflammatory immune response dominates, during CP a fibroinflammatory response regulates organ remodeling. The transcription factor signal transducer and activator of transcription 6 (STAT6) is a crucial part of the type 2 immune response. Here we investigate the role of STAT6 in mouse models of AP and CP. Material and Methods: AP was induced by hourly repetitive i.p. injections of caerulein (50 µg/kg body weight) in C57Bl/6J and STAT6-/- mice. CP was induced by repetitive caerulein injections 6 times a day, 3 days a week, over 4 weeks. Disease severity was evaluated by serum amylase/lipase measurement and H&E staining of the pancreas. The pancreatic infiltrate was characterized by immunofluorescent labeling of CD68, CD206, CCR2, CD4 and CD8. Pancreatic fibrosis was evaluated by Azan blue staining. qRT-PCR was performed for Arg1, Nos2, Il6, Il1b, Col3a, Socs3 and Ym1. Affymetrix chip array analyses were done to illustrate IL4/IL13/STAT6 signaling in bone marrow-derived macrophages. Results: AP severity is mitigated in STAT6-/- mice, as shown by decreased serum amylase and lipase as well as reduced histological damage. CP mice surprisingly showed only slightly reduced fibrosis of the pancreas. Likewise, staining of CD206, a classical marker of alternatively activated macrophages, showed no decrease of M2-like polarization in the absence of STAT6. In contrast, transcription profile analysis in BMDM showed complete blockade of the IL4/IL13 pathway in STAT6-/- animals. Conclusion: The STAT6 signaling pathway is protective during AP and mitigates pancreatic damage. During chronic pancreatitis, the IL4/IL13-STAT6 axis is involved in organ fibrogenesis. 
Notably, fibrosis is not dependent on a single signaling pathway, and alternative macrophage activation is also complex, involving different subclasses (M2a, M2b, M2c and M2d) which could be independent of the IL4/IL13-STAT6 axis.Keywords: chronic pancreatitis, macrophages, IL4/IL13, type 2 immune response
Procedia PDF Downloads 68431 The Visualization of Hydrological and Hydraulic Models Based on the Platform of Autodesk Civil 3D
Authors: Xiyue Wang, Shaoning Yan
Abstract:
Cities in China today are faced with an increasingly serious river ecological crisis accompanying urbanization: waterlogging on account of a fragmented urban natural hydrological system, and limited ecological function of the hydrological system caused by destruction of the water system and the waterfront ecological environment. Additionally, the eco-hydrological processes of rivers are affected by various environmental factors, which are more complex in the urban context. Therefore, efficient hydrological monitoring and analysis tools, together with accurate, visual hydrological and hydraulic models, are becoming an increasingly important basis for decision-makers and an important way for landscape architects to solve urban hydrological problems and formulate sustainable, forward-looking schemes. This study mainly introduces a river and flood analysis model based on the Autodesk Civil 3D platform. Taking the Luanhe River in Qian'an City, Hebei Province, as an example, 3D models of the landform, river, embankment, shoal, pond, underground stream and other land features were first built, with which water transfer simulation analysis, river floodplain analysis, and river ecology analysis were carried out; ultimately, real-time visualized simulation and analysis of the river under various hypothetical scenarios were realized. Through the establishment of a digital hydrological and hydraulic model, hydraulic data can be simulated accurately and intuitively, which provides a basis for the design of a rational water system and a benign urban ecological system. The hydrological and hydraulic model based on Autodesk Civil 3D has its limitations, however: interaction between the model and other data and software is unfavorable, and the huge amount of 3D data and the lack of basic data restrict its accuracy and range of application. 
Nevertheless, the hydrological and hydraulic model based on the Autodesk Civil 3D platform offers a convenient and intelligent tool for urban planning and monitoring, and a solid basis for further urban research and design.Keywords: visualization, hydrological and hydraulic model, Autodesk Civil 3D, urban river
Procedia PDF Downloads 297430 Visualizing the Consequences of Smoking Using Augmented Reality
Authors: B. Remya Mohan, Kamal Bijlani, R. Jayakrishnan
Abstract:
Visualization in an educational context provides the learner with visual means of information. Conceptualizing certain circumstances, such as the consequences of smoking, can be done more effectively with the help of augmented reality (AR) technology, a new methodology for effective learning. This paper proposes an approach in which marker-based AR simulates the harmful effects of smoking and its consequences using the Unity 3D game engine. The study also illustrates the impact of AR technology on students, suggesting that AR can be used as a method to improve learning.Keywords: augmented reality, marker technology, multi-platform, virtual buttons
Procedia PDF Downloads 578429 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail
Abstract:
In recent decades, medical imaging has been dominated by the use of costly film media for review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web was developed. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic site pages for medical image visualization and processing were created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is used, in which the 2-D discrete wavelet transform decomposes the image; the wavelet coefficients are then thresholded and transmitted with entropy encoding to decrease transmission time, storage cost and capacity. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN
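The compression pipeline described, wavelet decomposition, thresholding of small coefficients, then reconstruction and MSE/PSNR evaluation, can be sketched with a single-level Haar transform. The Haar filter here is a simple stand-in for the 'coif3' filter reported in the abstract, which would require a library such as PyWavelets:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform (averaging convention)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: perfect reconstruction without thresholding."""
    h2, w2 = ll.shape
    a = np.empty((h2, 2 * w2)); d = np.empty((h2, 2 * w2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h2, 2 * w2))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def psnr(orig, rec, peak=255.0):
    mse = np.mean((orig - rec) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (8, 8))
ll, lh, hl, hh = haar2d(img)
thresh = 5.0  # zero out small detail coefficients (the compression step)
lh, hl, hh = (np.where(np.abs(c) < thresh, 0.0, c) for c in (lh, hl, hh))
rec = ihaar2d(ll, lh, hl, hh)
print(psnr(img, rec) > 20)  # True: small-coefficient loss keeps PSNR high
```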
Procedia PDF Downloads 162428 Interactive Glare Visualization Model for an Architectural Space
Authors: Florina Dutt, Subhajit Das, Matthew Swartz
Abstract:
Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many subcomponents such as glare, color, tone, luminance, control, energy efficiency and flexibility. While other components have been researched and discussed many times, this paper discusses research done to understand the glare component from an artificial lighting source in an indoor space. Consequently, the paper discusses a parametric model that conveys real-time glare levels in an interior space to the designer/architect. Our end users are architects, and for them it is of utmost importance to know what impression the proposed lighting arrangement and furniture layout will have on indoor comfort quality, especially regarding those furniture elements (or surfaces) which strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when changes to the proposed design can still be made and different solutions considered for the client. Unfortunately, most existing lighting analysis tools perform rigorous computation and analysis on the back end, making it challenging for the designer to analyze and assess glare from interior light quickly. Moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximating interior glare data. In addition, we visualize this data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process computationally fluid and fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user. 
We test our proposed parametric model on a case study, a Computer Lab space in our college facility.Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis
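One minimal way to approximate glare from a reflective surface, in the spirit of the parametric model described (though not the authors' actual formulation), is a Phong-style specular term: reflect the incident light direction about the surface normal and raise its alignment with the view direction to a shininess power. Per-surface scores like this can then be color-coded.

```python
import numpy as np

def glare_score(light, point, normal, viewer, shininess=16):
    """Toy proxy for discomfort glare at one surface point: how closely
    the mirror reflection of the light aligns with the view direction."""
    n = normal / np.linalg.norm(normal)
    inc = point - light
    inc = inc / np.linalg.norm(inc)           # incident light direction
    refl = inc - 2 * np.dot(inc, n) * n       # mirror reflection about n
    view = viewer - point
    view = view / np.linalg.norm(view)        # direction toward the eye
    return max(0.0, float(np.dot(refl, view))) ** shininess

# hypothetical geometry: light and viewer symmetric about the normal
s = glare_score(light=np.array([-1.0, 1.0, 0.0]),
                point=np.array([0.0, 0.0, 0.0]),
                normal=np.array([0.0, 1.0, 0.0]),
                viewer=np.array([1.0, 1.0, 0.0]))
print(round(s, 3))  # 1.0 at the ideal mirror angle
```

Because the score is a few vector operations per surface sample, it can be recomputed interactively as the designer moves lights or furniture.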
Procedia PDF Downloads 350427 Documenting the 15th Century Prints with RTI
Authors: Peter Fornaro, Lothar Schmitt
Abstract:
The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project ‘Digital Materiality’. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects – known as ‘paste prints’ – is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models that are included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make acquired data accessible to a broader international research community. 
At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.Keywords: art history, computational photography, paste prints, reflectance transformation imaging
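The RTI fitting stage this project builds on can be sketched concretely. The classic 6-term Polynomial Texture Map model fits, per pixel, a quadratic in the light direction's (lu, lv) components by least squares over the captured light positions; the synthetic pixel below is invented for illustration.

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Fit the 6-term Polynomial Texture Map model used in RTI:
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Predict the pixel's intensity under a new light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])

# synthetic pixel whose intensity varies linearly with lu, plus an offset
rng = np.random.default_rng(2)
dirs = rng.uniform(-1, 1, (20, 2))     # 20 captured light directions
obs = 0.5 * dirs[:, 0] + 0.3           # hypothetical observed intensities
c = fit_ptm(dirs, obs)
print(round(float(relight(c, 1.0, 0.0)), 3))
```

This simple quadratic in the light direction is precisely the model the project finds too coarse for glossy paste prints, motivating the enhanced analytical reflectance model.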
Procedia PDF Downloads 276426 Informing, Enabling and Inspiring Social Innovation by Geographic Systems Mapping: A Case Study in Workforce Development
Authors: Cassandra A. Skinner, Linda R. Chamberlain
Abstract:
The nonprofit and public sectors are increasingly turning to Geographic Information Systems for data visualizations which can better inform programmatic and policy decisions. Additionally, the private and nonprofit sectors are turning to systems mapping to better understand the ecosystems within which they operate. This study explores the potential of combining these data visualization methods, an approach called geographic systems mapping, to create an exhaustive and comprehensive understanding of a social problem's ecosystem in support of social innovation efforts. Researchers with Grand Valley State University collaborated with Talent 2025 of West Michigan to conduct a mixed-methods research study to paint a comprehensive picture of the workforce development ecosystem in West Michigan. Using semi-structured interviewing, observation, secondary research, and quantitative analysis, data were compiled on workforce development organizations' locations, programming, metrics for success, partnerships, funding sources, and service language. To best visualize and disseminate the data, a geographic systems map was created which identifies programmatic, operational, and geographic gaps in workforce development services in West Michigan. By combining geographic and systems mapping methods, the geographic systems map provides insight into the cross-sector relationships, collaboration, and competition which exist among and between workforce development organizations. These insights identify opportunities for and constraints around cross-sectoral social innovation in the West Michigan workforce development ecosystem. This paper will discuss the process used to prepare the geographic systems map, explain the results and outcomes, and demonstrate how geographic systems mapping illuminated the needs of the community and opportunities for social innovation. 
As complicated social problems like unemployment often require cross-sectoral and multi-stakeholder solutions, there is potential for geographic systems mapping to be a tool which informs, enables, and inspires these solutions.Keywords: cross-sector collaboration, data visualization, geographic systems mapping, social innovation, workforce development
Procedia PDF Downloads 297425 Transitional Separation Bubble over a Rounded Backward Facing Step Due to a Temporally Applied Very High Adverse Pressure Gradient Followed by a Slow Adverse Pressure Gradient Applied at Inlet of the Profile
Authors: Saikat Datta
Abstract:
Incompressible laminar time-varying flow is investigated over a rounded backward-facing step for a triangular piston motion at the inlet of a straight channel with very high acceleration followed by a slow deceleration, both experimentally and through numerical simulation. The backward-facing step is an important test case, as it embodies important flow characteristics such as the separation point, reattachment length, and recirculation of the flow. A sliding piston imparts two successive triangular velocities at the inlet: constant acceleration from rest, 0≤t≤t0, and constant deceleration to rest, t0≤t424 Visual and Verbal Imagination in a Bilingual Context
Authors: Erzsebet Gulyas
Abstract:
Our inner world, our imagination, and our way of thinking are invisible and inaudible to others, but they influence our behavior. To investigate the relationship between thinking and language use, we created a test in Hungarian using ideas from the literature. The test prompts participants to make decisions based on visual images derived from the written information presented. There is a correlation (r=0.5) between the test result and the self-assessment of the visual imagery vividness and the visual and verbal components of internal representations measured by self-report questionnaires, as well as with responses to language-use inquiries in the background questionnaire. 56 university students completed the tests, and SPSS was used to analyze the data.Keywords: imagination, internal representations, verbalization, visualization
Procedia PDF Downloads 56