Search results for: stereo-based digital image correlation
6748 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and as the data grow, computation speed becomes the critical factor. Data reduction is one solution to this problem. In rough set theory, redundancy can be removed by computing a reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and spends considerable time fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of the full set of condition attributes. Moreover, every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. A decision table is used as the input. The output of the algorithm is a superreduct, i.e., a reduct that may still contain some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that occur most frequently in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not a key problem. The core calculation can be performed by a combinational logic block and thus adds comparatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of each attribute is counted in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For comparison, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
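As a rough illustration of the two-stage procedure described in this abstract, the following Python sketch computes a core from a discernibility matrix and then greedily enriches it. The toy decision table and the frequency counting over uncovered matrix entries are assumptions for illustration, not the paper's hardware design.

```python
# A minimal software sketch of the two-stage algorithm described above
# (illustrative only; the paper's contribution is the FPGA implementation).
from itertools import combinations

def discernibility_matrix(objects, decisions):
    """For each pair of objects with different decisions, record the set of
    condition attributes on which they differ."""
    entries = []
    for (i, a), (j, b) in combinations(enumerate(objects), 2):
        if decisions[i] != decisions[j]:
            entries.append({k for k in range(len(a)) if a[k] != b[k]})
    return entries

def core(entries):
    """Core = attributes appearing as singletons in the discernibility matrix
    (the 'singleton detector' block in the hardware design)."""
    return {next(iter(e)) for e in entries if len(e) == 1}

def superreduct(objects, decisions):
    entries = discernibility_matrix(objects, decisions)
    result = set(core(entries))
    uncovered = [e for e in entries if not (e & result)]
    while uncovered:
        # Greedily add the most frequent attribute among uncovered entries
        counts = {}
        for e in uncovered:
            for k in e:
                counts[k] = counts.get(k, 0) + 1
        best = max(counts, key=counts.get)
        result.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return result

# Toy decision table: each row is a tuple of condition attribute values
objects = [(0, 1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
decisions = [0, 0, 1, 1]
print(superreduct(objects, decisions))  # {0}
```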
Procedia PDF Downloads 219
6747 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images
Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez
Abstract:
Precise identification of nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). In current practice, anesthesiologists use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of nerves in ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems, thus addressing the above issues from raw data. The widely used U-Net yields pixel-by-pixel segmentation by encoding the input image and decoding the resulting feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding the kind of target nerve at the network input, while the pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditioning approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking
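The conditioning idea lends itself to a short sketch. The PyTorch fragment below (function name hypothetical) shows one common way to realize it: the one-hot nerve code is broadcast to the image size and concatenated with the ultrasound image as extra input channels. The abstract does not specify this exact mechanism, so treat it as a minimal, assumption-laden illustration.

```python
import torch
import torch.nn.functional as F

def condition_input(image, nerve_id, num_nerve_types):
    """image: (B, 1, H, W) ultrasound batch; nerve_id: (B,) integer labels."""
    b, _, h, w = image.shape
    onehot = F.one_hot(nerve_id, num_nerve_types).float()      # (B, C)
    maps = onehot[:, :, None, None].expand(b, num_nerve_types, h, w)
    return torch.cat([image, maps], dim=1)                     # (B, 1+C, H, W)

x = torch.randn(4, 1, 128, 128)           # batch of 4 ultrasound images
y = condition_input(x, torch.tensor([0, 2, 1, 2]), num_nerve_types=3)
print(y.shape)  # torch.Size([4, 4, 128, 128]) -> fed to the U-Net encoder
```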
Procedia PDF Downloads 106
6746 AI-based Digital Healthcare Application to Assess and Reduce Fall Risks in Residents of Nursing Homes in Germany
Authors: Knol Hester, Müller Swantje, Danchenko Natalya
Abstract:
Objective: Falls in older people cause a loss of autonomy and result in an economic burden. LCare is an AI-based application to manage fall risks. The study's aim was to assess the effect of LCare use on patient outcomes in nursing homes in Germany. Methods: LCare identifies and monitors fall risks through a 3D gait analysis and a digital questionnaire, resulting in tailored recommendations on fall prevention. A study was conducted with AOK Baden-Württemberg (01.09.2019-31.05.2021) in 16 care facilities. Assessments at baseline and follow-up included: a fall risk score; falls (baseline: fall history in the past 12 months; follow-up: fall records since the last analysis); fall-related injuries and hospitalizations; gait speed; fear of falling; psychological stress; and nurses' experience with app use. Results: 94 seniors were aged 65-99 years at the initial analysis (average 84±7 years); 566 mobility analyses were carried out in total. On average, the fall risk was reduced by 17.8% compared to baseline (p<0.05). The risk of falling decreased across all subgroups, including a trend in dementia patients (p=0.06), who constituted 43% of analyzed patients, and a significant reduction in patients with walking aids (p<0.05), who constituted 76% of analyzed patients. There was a trend (p<0.1) towards fewer falls and fall-related injuries and hospitalizations (baseline: 23 seniors who fell, 13 injuries, 9 hospitalizations; follow-up: 14 seniors who fell, 2 injuries, 0 hospitalizations). There was a 16% improvement in gait speed (p<0.05). Residents reported less fear of falling and less psychological stress, a 38% reduction in both outcomes (p<0.05). 81% of nurses found LCare effective. Conclusions: In the presented study, the use of the LCare app was associated with a reduction of fall risk among nursing home residents, improvement of health-related outcomes, and a trend toward fewer injuries and hospitalizations. LCare may help to improve senior resident care and save healthcare costs.
Keywords: falls, digital healthcare, falls prevention, nursing homes, seniors, AI, digital assessment
Procedia PDF Downloads 131
6745 Correlation between Body Mass Index and Blood Sugar/Serum Lipid Levels in Fourth-Grade Boys in Japan
Authors: Kotomi Yamashita, Hiromi Kawasaki, Satoko Yamasaki, Susumu Fukita, Risako Sakai
Abstract:
Lifestyle-related diseases develop from the long-term accumulation of the health consequences of a poor lifestyle. Schoolchildren, who have not yet accumulated long-term lifestyle habits, are therefore believed to be at lower risk for lifestyle-related diseases. However, schoolchildren rarely receive blood tests unless they are under treatment for a serious disease; without such blood data, the impact of their lifestyle at a young age cannot be known. Blood data combined with physical measurements can help in the implementation of more effective health education. Therefore, we examined the correlation between body mass index (BMI) and blood sugar/serum lipid (BS/SL) levels. From 2014 to 2016, we measured the blood data of fourth-grade students living in a city in Japan. The present study reports the results of the 281 fourth-grade boys only (80.3% of the total). We analyzed their BS/SL levels by comparing the blood data against the criteria of the National Center for Child Health and Development in Japan. Next, we examined the correlation between BMI and BS/SL levels. IBM SPSS Statistics for Windows, Version 25, was used for the analysis. A total of 69 boys (24.6%) were within the normal range for BMI (18.5-24), whereas 193 boys (71.5%) had lower and 8 boys (2.8%) had higher values. Regarding BS levels, 280 boys were within the normal range (70-90 mg/dl); 1 boy reported a higher value. All the boys were within the normal range for glycated hemoglobin (HbA1c) (4.6-6.2%). Regarding SL levels, 271 boys were within the normal range (125-230 mg/dl) for total cholesterol (TC), whereas 5 boys (1.8%) had lower and 5 boys (1.8%) had higher levels. A total of 243 boys (92.7%) were within the normal range (36-138 mg/dl) for triglycerides (TG), whereas 19 boys (7.3%) had lower and 19 boys (7.3%) had higher levels. Regarding high-density lipoprotein cholesterol (HDL-C), 276 boys (98.2%) were within the normal range (40 mg/dl or above), whereas 5 boys (1.8%) reported lower values. All but one boy (280, 99.6%) were within the normal range (up to 170 mg/dl) for low-density lipoprotein cholesterol (LDL-C); the exception (0.4%) had a higher level. BMI and BS showed no correlation. BMI and HbA1c were weakly but significantly positively correlated (r = 0.139, p = 0.019). We also observed positive correlations between BMI and TG (r = 0.328, p < 0.01), TC (r = 0.239, p < 0.01), and LDL-C (r = 0.324, p < 0.01). BMI and HDL-C were weakly negatively correlated (r = -0.185, p = 0.002). Most of the boys were within the normal range for BS/SL levels. However, some boys exceeded the normal TG range. Fourth graders with a high TG may develop a lifestyle-related disease in the future. Given its relation to TG, food habits should be improved in this group. Our findings suggest a positive correlation between BMI and BS/SL levels. Fourth-grade schoolboys with a high BMI may be at high risk of developing lifestyle-related diseases. Lifestyle improvement may be recommended to lower BS/SL levels in this group.
Keywords: blood sugar level, lifestyle-related diseases, school students, serum lipid level
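For readers who want to reproduce this kind of analysis outside SPSS, a minimal Python sketch computes the same Pearson statistic; the data here are synthetic stand-ins, since the real measurements are not public.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)                    # synthetic stand-in data
bmi = rng.normal(17.5, 2.5, size=281)             # BMI of 281 boys
tg = 80 + 6 * bmi + rng.normal(0, 25, size=281)   # triglycerides, mg/dl

r, p = pearsonr(bmi, tg)
print(f"r = {r:.3f}, p = {p:.4f}")                # compare with reported r = 0.328
```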
Procedia PDF Downloads 138
6744 The Morphing Avatar of Startup Sales - Destination Virtual Reality
Authors: Sruthi Kannan
Abstract:
The ongoing COVID-19 pandemic has accelerated digital transformation like never before. The physical barriers introduced as a result of the pandemic are being bridged by digital alternatives. While basic collaborative activities such as voice, video calling, and screen sharing have been replicated in these alternatives, there are several others that require a more intimate setup. Pitching, showcasing, and providing demonstrations are an integral part of selling strategies for startups. Traditionally these have been in-person engagements, enabling a depth of understanding of the startups' offerings. In the new normal of virtual-only connections, startups are feeling the brunt of the lack of in-person contact with potential customers and investors. This poster demonstrates how a virtual reality platform has been conceptualized and custom-built for startups to engage with their stakeholders and redefine their selling strategies. The platform is intended to provide an immersive experience for startup showcases and offers the nearest possible alternative to physical meetings for the startup ecosystem, thereby opening new frontiers for entrepreneurial collaborations.
Keywords: collaboration, sales, startups, strategy, virtual reality
Procedia PDF Downloads 305
6743 Evaluation of Digital Marketing Strategies by Behavioral Economics
Authors: Sajjad Esmaeili Aghdam
Abstract:
Economics typically conceptualizes individual behavior as the consequence of external states, for example, budgets and prices (or beliefs about them) and choices. Our main goal is to examine the influence of a range of behavioral economics factors on digital marketing strategies, to evaluate those strategies, and to transform them into more promising marketing strategies. The different forms of behavioral prospects lead to two main results. First, the stability of economic dynamics in a currency union depends critically on the level of economic integration: more economic integration leads to more stable economic dynamics. Electronic word-of-mouth (eWOM) is 'all informal communication directed at consumers through Internet-based technology related to the usage or characteristics of particular goods and services or their sellers.' eWOM can take many forms, the most significant being online reviews. For this paper, 72 articles were gathered, selected by title and aim, from research search engines such as Google Scholar, Web of Science, and PubMed. Recent research in strategic management and marketing proposes that markets should not be viewed as a given, deterministic setting exogenous to the firm. Instead, firms are increasingly seen as dynamic creators of market opportunities. The use of new technologies touches all spheres of the modern lifestyle, and social and economic life becomes unmanageable without fast, relevant, high-quality and fitting information. Psychology and economics (together known as behavioral economics) are two prominent disciplines underlying many theories in marketing. The wide marketing literature documents consumers' non-rational behavior, even though behavioral biases might not always be consistently named or officially labeled.
Keywords: behavioral economics, digital marketing, marketing strategy, high impact strategies
Procedia PDF Downloads 183
6742 Satellite Statistical Data Approach for Upwelling Identification and Prediction in South of East Java and Bali Sea
Authors: Hary Aprianto Wijaya Siahaan, Bayu Edo Pratama
Abstract:
Sea fisheries have the potential to become one of the nation's assets and contribute substantially to Indonesia's economy. This fishery potential depends on the availability of chlorophyll in the territorial waters of Indonesia. The research was conducted using three methods, namely statistical, comparative, and analytical. The data used include MODIS sea surface temperature imagery from the Aqua satellite with a resolution of 4 km for 2002-2015, MODIS chlorophyll-a imagery from the Aqua satellite with a resolution of 4 km for 2002-2015, and ASCAT imagery from the MetOp and NOAA satellites with 27 km resolution for 2002-2015. The processing results show that upwelling events in the sea south of East Java begin in June, identified by below-normal sea surface temperature anomalies, air masses moving from east to west, and high chlorophyll-a concentrations. In July the upwelling region expands westward, reaching its peak in August. Predicting chlorophyll-a concentration using multiple linear regression equations gives excellent results for 2002-2015, with a correlation of predicted chlorophyll-a concentration of 0.8 and an RMSE of 0.3. The chlorophyll-a prediction for 2016 also shows good results despite a decline in correlation: the 2016 prediction has a correlation of 0.6, with an improved RMSE of 0.2.
Keywords: satellite, sea surface temperature, upwelling, wind stress
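A minimal sketch of the prediction step, assuming SST anomaly and wind stress as predictors (the abstract does not list the exact regressors), can be written with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)                    # synthetic stand-in data
sst_anom = rng.normal(0, 1, 200)                  # sea surface temperature anomaly
wind_stress = rng.normal(0, 1, 200)               # zonal wind stress
chl = 0.5 - 0.3 * sst_anom + 0.4 * wind_stress + rng.normal(0, 0.2, 200)

X = np.column_stack([sst_anom, wind_stress])
model = LinearRegression().fit(X[:150], chl[:150])   # "2002-2015" training part
pred = model.predict(X[150:])                        # held-out "2016" part

corr = np.corrcoef(pred, chl[150:])[0, 1]
rmse = np.sqrt(np.mean((pred - chl[150:]) ** 2))
print(f"correlation = {corr:.2f}, RMSE = {rmse:.2f}")
```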
Procedia PDF Downloads 158
6741 A Use Case-Oriented Performance Measurement Framework for AI and Big Data Solutions in the Banking Sector
Authors: Yassine Bouzouita, Oumaima Belghith, Cyrine Zitoun, Charles Bonneau
Abstract:
A performance measurement framework (PMF) is an essential tool in any organization to assess the performance of its processes. It guides businesses to stay on track with their objectives and benchmark themselves against the market. With the growing trend of digital transformation of business processes, led by innovations in artificial intelligence (AI) and Big Data applications, developing a mature system capable of capturing the impact of digital solutions across different industries has become a necessity. Based on the conducted research, no such system has been developed in academia or industry. In this context, this paper covers a variety of methodologies for performance measurement, overviews the major AI and Big Data applications in the banking sector, and compiles an exhaustive list of relevant metrics. Consequently, this paper is of interest to both researchers and practitioners. From an academic perspective, it offers a comparative analysis of the reviewed performance measurement frameworks. From an industry perspective, it offers exhaustive research, drawn from market leaders, on the major applications of AI and Big Data technologies across the different departments of an organization. Moreover, it suggests a standardized classification model with a well-defined structure of intelligent digital solutions. The aforementioned classification is mapped to a centralized library that contains an indexed collection of potential metrics for each application. This library is arranged in a manner that facilitates the rapid search and retrieval of relevant metrics. The proposed framework is meant to guide professionals in identifying the most appropriate AI and Big Data applications to adopt. Furthermore, it will help them meet their business objectives by clarifying the potential impact of such solutions on the entire organization.
Keywords: AI and Big Data applications, impact assessment, metrics, performance measurement
Procedia PDF Downloads 198
6740 Perception of Predictive Confounders for the Prevalence of Hypertension among Iraqi Population: A Pilot Study
Authors: Zahraa Albasry, Hadeel D. Najim, Anmar Al-Taie
Abstract:
Background: Hypertension is considered one of the most important causes of cardiovascular complications and one of the leading causes of worldwide mortality. Identifying the potential risk factors associated with this medical health problem plays an important role in minimizing its incidence and related complications. The objective of this study is to assess and understand the perception of specific predictive confounding factors on the prevalence of hypertension (HT) among a sample of the Iraqi population in Baghdad, Iraq. Materials and Methods: A randomized cross-sectional study was carried out on 100 adult subjects during their visit to the outpatient clinic of a certain sector of Baghdad Province, Iraq. Demographic, clinical and health records, alongside specific screening and laboratory tests of the participants, were collected and analyzed to detect the potential of confounding factors on the prevalence of HT. Results: 63% of the study participants suffered from HT, most of them female patients (P < 0.005). Patients aged 41-50 years suffered from HT significantly more than other age groups (63.5%, P < 0.001). 88.9% of the participants were obese (P < 0.001), and 47.6% had diabetes with HT. Positive family history and a sedentary lifestyle were significantly more common among all hypertensive groups (P < 0.05). High salt and fatty food intake was significantly more common among patients suffering from isolated systolic hypertension (ISHT) (P < 0.05). A significant positive correlation between packed cell volume (PCV) and systolic blood pressure (SBP) (r = 0.353, P = 0.048) was found among normotensive participants. Among hypertensive patients, a significant positive correlation was found between triglycerides (TG) and both SBP (r = 0.484, P = 0.031) and diastolic blood pressure (DBP) (r = 0.463, P = 0.040), while low-density lipoprotein cholesterol (LDL-c) showed a significant positive correlation with DBP (r = 0.443, P = 0.021). Conclusion: The prevalence of HT among the Iraqi population is of major concern. Further work is required to detect the impact of potential risk factors, minimize blood pressure (BP) elevation, and reduce the risk of other cardiovascular complications later in life.
Keywords: correlation, hypertension, Iraq, risk factors
Procedia PDF Downloads 128
6739 Effect of Depth on Texture Features of Ultrasound Images
Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes
Abstract:
In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is depth. The attenuation of ultrasound with depth, the size of the region of interest, gain, and dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings may influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the left gastrocnemius muscle of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280×20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters, including gray level, variance, skewness, kurtosis, co-occurrence matrix, run-length matrix, gradient, autoregressive (AR) model and wavelet transform features, were extracted from the images. The paired t-test was used to test the depth effect for normally distributed data, and the Wilcoxon-Mann-Whitney test was used for non-normally distributed data. The gray level, variance, and run-length matrix features were significantly lowered when the depth increased, while the other texture parameters showed similar values at both depths. All the texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance and run-length matrix (p < 0.05). This indicates that gray level, variance, and run-length matrix features are depth dependent.
Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering
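As an illustration of one of the listed feature families, the sketch below extracts a co-occurrence (GLCM) contrast feature from two stand-in ROIs and applies the paired t-test; the random arrays merely stand in for the real muscle ROIs, and the function names follow scikit-image 0.19+ naming.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import ttest_rel

def glcm_contrast(roi):
    """Contrast of the gray-level co-occurrence matrix at distance 1, angle 0."""
    glcm = graycomatrix(roi, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

rng = np.random.default_rng(2)
# Stand-ins for the 10 subjects' 280x20-pixel ROIs at depths A and B
roi_a = [rng.integers(0, 256, (20, 280), dtype=np.uint8) for _ in range(10)]
roi_b = [rng.integers(0, 256, (20, 280), dtype=np.uint8) for _ in range(10)]

contrast_a = [glcm_contrast(r) for r in roi_a]
contrast_b = [glcm_contrast(r) for r in roi_b]
t, p = ttest_rel(contrast_a, contrast_b)
print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")
```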
Procedia PDF Downloads 295
6738 Factors Influencing Consumer Adoption of Digital Banking Apps in the UK
Authors: Sevelina Ndlovu
Abstract:
Financial technology (fintech) advancement is recognised as one of the most transformational innovations in the financial industry. Fintech has given rise to internet-only digital banking, a novel financial technology advancement and innovation that allows banking services through internet applications with no need for physical branches. This technology is becoming the new banking normal among consumers for its ubiquitous, real-time access advantages. There is evident switching and migration from traditional banking towards these fintech facilities, which could pose a systemic risk if not properly understood and monitored. Fintech advancement has also brought about the emergence and escalation of financial technology consumption themes such as trust, security, perceived risk, and sustainability within the banking industry, themes scarcely covered in the existing theoretical literature. To that end, the objective of this research is to investigate the factors that determine fintech adoption and propose an integrated adoption model. This study aims to establish the significant drivers of adoption and develop a conceptual model that integrates technological, behavioural, and environmental constructs by extending the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). It proposes integrating constructs that influence financial consumption themes such as trust, perceived risk, security, financial incentives, micro-investing opportunities, and environmental consciousness to determine the impact of these factors on the adoption of, and intention to use, digital banking apps. The main advantage of this conceptual model is the consolidation of a greater number of predictor variables that can provide a fuller explanation of consumers' adoption of digital banking apps. Moderating variables of age, gender, and income are incorporated. To the best of the author's knowledge, this study is the first to extend the UTAUT2 model with this combination of constructs to investigate users' intention to adopt internet-only digital banking apps in the UK context. By investigating factors that are not included in existing theories but are highly pertinent to the adoption of internet-only banking services, this research adds to existing knowledge and extends the generalisability of UTAUT2 in a financial services adoption context. This fills a gap in knowledge, as highlighted by calls for further research on UTAUT2 when the theory was reviewed in 2016, thirteen years after its original 2003 version. To achieve the objectives of this study, the research takes a quantitative approach to empirically test the hypotheses derived from the existing literature and pilot studies, giving statistical support to generalise the research findings for possible further applications in theory and practice. The research is explanatory, or causal, in nature and uses cross-sectional primary data collected through a survey method. Convenience and purposive sampling using structured, self-administered online questionnaires are used for data collection. The proposed model is tested using Structural Equation Modelling (SEM), and the analysis of primary data collected through an online survey is processed using SmartPLS software with a sample size of 386 digital bank users. The results are expected to establish whether there are significant relationships between the dependent and independent variables and which factors are the most influential.
Keywords: banking applications, digital banking, financial technology, technology adoption, UTAUT2
Procedia PDF Downloads 72
6737 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier Loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated based on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
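The Charbonnier loss mentioned above is compact enough to sketch directly: a smooth, differentiable variant of the L1 loss used to train the generator. The tensor shapes here are illustrative, not the paper's.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """Smooth L1-like loss: sqrt((x - y)^2 + eps^2), averaged over all pixels."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

corrected = torch.randn(1, 1, 256, 256, requires_grad=True)  # generator output
full_view = torch.randn(1, 1, 256, 256)                      # ground-truth CT
loss = charbonnier_loss(corrected, full_view)
loss.backward()
print(loss.item())
```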
Procedia PDF Downloads 162
6736 Financial Markets Integration between Morocco and France: Implications on International Portfolio Diversification
Authors: Abdelmounaim Lahrech, Hajar Bousfiha
Abstract:
This paper examines equity market integration between Morocco and France and its implications for international portfolio diversification. In the absence of stock market linkages, Morocco can act as a diversification destination for European investors, allowing higher returns at a level of risk comparable to developed markets. In contrast, this attractiveness is limited if the two financial markets show significant linkage. The research empirically measures financial market integration by capturing the conditional correlation between the two markets using the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. Then, the research uses the Dynamic Conditional Correlation (DCC) model of Engle (2002) to track the correlations. The findings show no important increase over the years in the correlation between the Moroccan and French equity markets, even though France is considered Morocco's first trading partner. Failing to find evidence of stock index linkage between the two countries, the volatility series of each market were assumed to change over time separately. The study thus reveals that despite the important historical and economic linkages between Morocco and France, there is no evidence that the equity markets follow. The small correlations and their stationarity over time show that over the 10 years studied, correlations fluctuated around a stable mean with no significant change in their level. Different explanations can be attributed to the absence of market linkage between the two equity markets.
Keywords: equity market linkage, DCC GARCH, international portfolio diversification, Morocco, France
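A hedged sketch of the two-step logic behind DCC-type estimation is shown below: univariate GARCH(1,1) fits standardize each return series, and the correlation of the standardized residuals is then tracked, here with a simple rolling window rather than Engle's full DCC recursion. The index returns are synthetic stand-ins, and the `arch` package only fits the univariate step.

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(3)
masi = pd.Series(rng.normal(0, 1, 500))   # stand-in Moroccan index returns
cac = pd.Series(rng.normal(0, 1, 500))    # stand-in French index returns

def standardized_residuals(returns):
    # Fit a univariate GARCH(1,1) and divide residuals by fitted volatility
    res = arch_model(returns, vol="Garch", p=1, q=1).fit(disp="off")
    return res.resid / res.conditional_volatility

z1 = standardized_residuals(masi)
z2 = standardized_residuals(cac)
rolling_corr = z1.rolling(60).corr(z2)    # 60-day window, DCC-style tracking
print(rolling_corr.dropna().describe())
```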
Procedia PDF Downloads 442
6735 Object-Based Image Analysis for Gully-Affected Area Detection in the Hilly Loess Plateau Region of China Using Unmanned Aerial Vehicle
Authors: Hu Ding, Kai Liu, Guoan Tang
Abstract:
The Chinese Loess Plateau suffers from serious gully erosion induced by natural and human causes. Detecting gully features, including the gully-affected area and its two-dimensional parameters (length, width, area, etc.), is a significant task not only for researchers but also for policy-makers. This study addresses gully-affected area detection in three catchments of the Chinese Loess Plateau, located in Changwu, Ansai, and Suide, using an unmanned aerial vehicle (UAV). The methodology comprises a sequence of UAV data generation, image segmentation, feature calculation and selection, and random forest classification. Two experiments were conducted to investigate the influence of the segmentation strategy and feature selection. Results showed that the vertical and horizontal root-mean-square errors were below 0.5 and 0.2 m, respectively, which is ideal for the Loess Plateau region. The segmentation strategy adopted in this paper, which considers topographic information, together with an optimal parameter combination, can improve the segmentation results. The overall extraction accuracies achieved in Changwu, Ansai, and Suide were 84.62%, 86.46%, and 93.06%, respectively, indicating that the proposed method for detecting gully-affected areas is more objective and effective than traditional methods. This study demonstrates that UAVs can bridge the gap between field measurement and satellite-based remote sensing, obtaining a balance of resolution and efficiency for catchment-scale gully erosion research.
Keywords: unmanned aerial vehicle (UAV), object-based image analysis, gully erosion, gully-affected area, Loess Plateau, random forest
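The classification stage can be sketched briefly; the feature table below is a synthetic stand-in for the per-segment spectral, textural and topographic features described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
# Stand-in feature table: one row per image segment (spectral, texture, slope, ...)
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"overall accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```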
Procedia PDF Downloads 218
6734 Smokeless Tobacco Oral Manifestation and Inflammatory Biomarkers in Saliva
Authors: Sintija Miļuna, Ričards Melderis, Loreta Briuka, Dagnija Rostoka, Ingus Skadiņš, Juta Kroiča
Abstract:
Objectives: Smokeless tobacco products in Latvia are becoming more available and more popular among young adults, especially students and athletes such as hockey and floorball players. The aim of the research was to detect visual mucosal changes in the oral cavity of smokeless tobacco users and to evaluate pro-inflammatory and anti-inflammatory cytokine (IL-6, IL-1, IL-8, TNF alpha) levels in the saliva of smokeless tobacco users. Methods: A smokeless tobacco group (n=10) and a control group of non-tobacco users (n=10) were examined intraorally for oral lesions, and 5 ml of saliva was collected. Saliva was analysed for IL-6, IL-1, IL-8 and TNF alpha using ELISA (Sigma-Aldrich). IBM SPSS Statistics 27 was used for statistical analysis (Mann-Whitney U test, Spearman's rank correlation coefficient). This research was approved by the Ethics Committee of Rīga Stradiņš University, No. 22/28.01.2016, and has been developed with financing from the European Social Fund and the Latvian state budget within project no. 8.2.2.0/20/I/004, 'Support for involving doctoral students in scientific research and studies', at Rīga Stradiņš University. Results: IL-1, IL-6, IL-8 and TNF alpha levels were higher in the smokeless tobacco group (IL-1: 83.34 pg/ml vs. 74.26 pg/ml; IL-6: 195.10 pg/ml vs. 6.16 pg/ml; IL-8: 736.34 pg/ml vs. 285.26 pg/ml; TNF alpha: 489.27 pg/ml vs. 200.9 pg/ml), but the differences between the control group and the smokeless tobacco group were not statistically significant (IL-1 p=0.190, IL-6 p=0.052, IL-8 p=0.165, TNF alpha p=0.089). There were statistical correlations between IL-1 and IL-6 (p=0.023), IL-6 and TNF alpha (p=0.028), and IL-8 and IL-6 (p=0.005). Conclusions: White localized lesions were detected at the sites where smokeless tobacco users placed sachets. There are statistical correlations between IL-6 and IL-1 levels, IL-6 and TNF alpha levels, and IL-8 and IL-6 levels in saliva. There are no significant differences in inflammatory cytokine levels between the control group and the smokeless tobacco group.
Keywords: smokeless tobacco, Snus, inflammatory biomarkers, oral lesions, oral pathology
Procedia PDF Downloads 139
6733 Array Type Miniaturized Ultrasonic Sensors for Detecting Sinkhole in the City
Authors: Won Young Choi, Kwan Kyu Park
Abstract:
Road depressions occurring in urban areas differ, in both cause and generation mechanism, from the sinkholes occurring in limestone areas. The main causes of sinkholes in city centers are soil loss due to damaged, aging buried utilities and groundwater discharge due to large underground excavation works. Sinkholes in urban areas are mostly detected using Ground Penetrating Radar (GPR). However, because GPR is based on electromagnetic waves, it is challenging to implement as a compact system and to detect water-saturated conditions. Although many ultrasonic underground detection studies have been conducted, near-surface detection (several tens of centimeters to several meters) has so far relied on bulky systems using geophones as receivers. The goal of this work is to fabricate a miniaturized sinkhole detection system based on low-cost ultrasonic transducers with a 40 kHz resonant frequency, high transmission pressure, and high receiving sensitivity. Motivated by biomedical ultrasonic imaging methods, we detect air layers below ground, for instance beneath asphalt, through the pulse-echo method. To improve image quality using multiple channels, a linear array system is implemented, and the image is acquired by the classical synthetic aperture imaging method. We present a successful feasibility test of a multi-channel sinkhole detector based on ultrasonic transducers. In this work, we present and analyze images obtained by single-channel pulse-echo imaging and by synthetic aperture imaging.
Keywords: road depression, sinkhole, synthetic aperture imaging, ultrasonic transducer
Procedia PDF Downloads 144
6732 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research concentrates on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets such as MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not previously been benchmarked on colored images to determine how much better they are than classical approaches; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into grey scale as 28 × 28-pixel images, and 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
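A minimal sketch of the hybrid encoding step with PennyLane (the simulator named above) is given below: classical pixel values are angle-encoded into qubit rotations, and expectation values are read back for the classical layers. The 4-qubit circuit structure is an assumption for illustration, not the authors' exact QCNN.

```python
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(pixels, weights):
    for i in range(n_qubits):                 # angle-encode a 4-pixel patch
        qml.RY(np.pi * pixels[i], wires=i)
    for i in range(n_qubits):                 # one variational rotation layer
        qml.Rot(*weights[i], wires=i)
    for i in range(n_qubits - 1):             # entangle neighbouring qubits
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

pixels = np.array([0.1, 0.5, 0.9, 0.3])       # grey-scale values in [0, 1]
weights = np.random.default_rng(5).uniform(0, np.pi, (n_qubits, 3))
print(circuit(pixels, weights))               # features for a classical head
```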
Procedia PDF Downloads 129
6731 Small Target Recognition Based on Trajectory Information
Authors: Saad Alkentar, Abdulkareem Assalem
Abstract:
Recognizing small targets has always posed a significant challenge in image analysis. Over long distances, the image signal-to-noise ratio tends to be low, limiting the amount of useful information available to detection systems. Consequently, visual target recognition becomes an intricate task to tackle. In this study, we introduce a Track Before Detect (TBD) approach that leverages target trajectory information (coordinates) to effectively distinguish between noise and potential targets. By reframing the problem as a multivariate time series classification, we have achieved remarkable results. Specifically, our TBD method achieves an impressive 97% accuracy in separating target signals from noise within a mere half-second time span (consisting of 10 data points). Furthermore, when classifying the identified targets into our predefined categories (airplane, drone, and bird), we achieve an outstanding classification accuracy of 96% over a more extended period of 1.5 seconds (comprising 30 data points).
Keywords: small targets, drones, trajectory information, TBD, multivariate time series
Procedia PDF Downloads 47
6730 Lock in, Lock Out: A Double Lens Analysis of Local Media Paywall Strategies and User Response
Authors: Mona Solvoll, Ragnhild Kr. Olsen
Abstract:
Background and significance of the study: Newspapers are going through radical changes, with increased competition, eroding readerships and declining advertising resulting in plummeting overall revenues. This has led to a quest for new business models focused on monetizing content. This research paper investigates both how local online newspapers have introduced user payment and how audiences have received these changes. Given the role of local media in keeping their communities informed and those in power accountable, and their potential impact on civic engagement and cultural integration in local communities, the business model innovations of local media deserve far more research interest. Empirically, the findings are interesting for local journalists, local media managers and local advertisers. Basic methodologies: The study is based on interviews with commercial leaders in 20 Norwegian local newspapers, in addition to national survey data from 1,600 respondents among local media users. The interviews were conducted in the second half of 2015, while the survey was conducted in September 2016. Theoretically, the study draws on the business model framework. Findings: The analysis indicates that paywalls aim more at reducing the digital cannibalisation of print revenue than at creating new digital income. The newspapers are mostly concerned with retaining 'old' print subscribers and transforming them into digital subscribers. However, this strategy may come at a high price for newspapers if their defensive print strategy drives away younger digital readers and hampers their recruitment of new audiences, as some previous studies have indicated. Analysis of young readers' news habits indicates that attracting a younger audience to traditional local news providers is particularly challenging and that younger readers are more prone to seek alternative news sources than older ones. Conclusion: The paywall strategy applied by the local newspapers may be well suited to stabilising print subscription figures and facilitating more tailored and better services for existing customers, but far less suited to attracting new ones. The paywall is a short-sighted strategy that drives away younger readers and paves the road for substitute offerings, particularly Facebook.
Keywords: business model, newspapers, paywall, user payment
Procedia PDF Downloads 277
6729 Best Timing for Capturing Satellite Thermal Images, Asphalt, and Concrete Objects
Authors: Toufic Abd El-Latif Sadek
Abstract:
The asphalt object represents asphalted areas such as roads, and the concrete object represents concrete areas such as concrete buildings. Efficient extraction of asphalt and concrete objects from a single satellite thermal image requires capturing the image at a specific time, avoiding times at which asphalt and concrete have brightness values close or identical to each other and to other objects, so as to achieve efficient extraction and better analysis. Seven sample objects were used in this study: asphalt, concrete, metal, rock, dry soil, vegetation, and water. It was found that the best timing for capturing satellite thermal images to extract the asphalt and concrete objects from a single image, saving time and money, occurs at a specific time that varies by month. A table is derived that shows the optimal timing for capturing satellite thermal images to extract these two objects effectively.
Keywords: asphalt, concrete, satellite thermal images, timing
Procedia PDF Downloads 322
6728 A Method to Estimate Wheat Yield Using Landsat Data
Authors: Zama Mahmood
Abstract:
With the increasing demand for food, monitoring crop growth and forecasting yield well before harvest are very important for food management. Nowadays, yield assessment, together with monitoring of crop development and growth, is carried out with the help of satellite and remote sensing images. Studies using remote sensing data along with field survey validation have reported high correlations between vegetation indices and yield. With the development of remote sensing techniques, the detection of crops and their mechanisms using remote sensing data on regional or global scales has become a popular topic in remote sensing applications. Punjab, especially the southern Punjab region, is extremely favourable for wheat production, but measuring the exact amount of wheat production is a tedious job for farmers and workers using traditional ground-based measurements, whereas remote sensing can provide near-real-time information. In this study, using the Normalized Difference Vegetation Index (NDVI) derived from Landsat satellite images, the yield of wheat was estimated for the 2013-2014 season for the agricultural area around Bahawalpur. The average yield of the wheat was found to be 35 kg/acre from the field survey data. The field survey data are in fair agreement with the NDVI values extracted from the Landsat images. A correlation between wheat production (tons) and the number of wheat pixels has also been calculated, and the two are proportional to each other. A strong correlation between NDVI and wheat area was also found (R²=0.71), which demonstrates the effectiveness of remote sensing tools for crop monitoring and production estimation.
Keywords: landsat, NDVI, remote sensing, satellite images, yield
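The NDVI at the heart of the method is simple to state and compute: NDVI = (NIR - Red) / (NIR + Red). A small sketch with stand-in band arrays follows (real work would read the Landsat bands from file); the 0.4 wheat threshold is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)
red = rng.uniform(0.05, 0.25, size=(100, 100))   # stand-in red band reflectance
nir = rng.uniform(0.2, 0.6, size=(100, 100))     # stand-in NIR band reflectance

ndvi = (nir - red) / (nir + red + 1e-12)         # avoid division by zero
wheat_pixels = int((ndvi > 0.4).sum())           # illustrative wheat threshold
print(f"mean NDVI = {ndvi.mean():.2f}, wheat pixels = {wheat_pixels}")
```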
Procedia PDF Downloads 335
6727 Spatial Differentiation Patterns and Influencing Mechanism of Urban Greening in China: Based on Data of 289 Cities
Authors: Fangzheng Li, Xiong Li
Abstract:
Significant differences in urban greening have emerged among Chinese cities, accompanying China's rapid urbanization. However, few studies have focused on the spatial differentiation of urban greening in China with large amounts of data. The spatial differentiation pattern, spatial correlation characteristics and distribution shape of the urban green space ratio, urban green coverage rate and public green area per capita were calculated and analyzed using Global and Local Moran's I with data from 289 cities in 2014. We employed the Spatial Lag Model and Spatial Error Model to assess the impacts of the urbanization process on urban greening in China, and then used Geographically Weighted Regression (GWR) to estimate the spatial variation of these impacts. The results showed: 1) significant spatial dependence and heterogeneity exist in urban greening values, and the differentiation patterns are shaped both by administrative grade and by spatial agglomeration; 2) urbanization is negatively correlated with urban greening in Chinese cities. Among the indices, the proportion of secondary industry, the urbanization rate, population, and the scale of urban land use are significantly negatively correlated with urban greening in China, while automobile density and per capita Gross Domestic Product have no significant impact. The results of GWR modeling showed that the relationship between urbanization and urban greening is not constant in space; further, the local parameter estimates suggest significant spatial variation in the impacts of various urbanization factors on urban greening.
Keywords: China's urbanization, geographically weighted regression, spatial differentiation pattern, urban greening
Procedia PDF Downloads 461
6726 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geological and Its Analog Studies
Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal
Abstract:
Advances in data capture for outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (light detection and ranging) provides a new way to build outcrop-based reservoir models, which supply a crucial piece of information for understanding heterogeneities in sandstone facies with high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model, combining information gathered from traditional fieldwork with detailed digital point-cloud data from LiDAR, to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used for digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow for triangulating point clouds of different sets of sandstone facies with well-marked top and bottom boundaries, in conjunction with field sedimentology. This provides a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model that can serve as an analogue for subsurface reservoir studies.
Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model
Procedia PDF Downloads 226
6725 Applying Big Data to Understand Urban Design Quality: The Correlation between Social Activities and Automated Pedestrian Counts in Dilworth Park, Philadelphia
Authors: Jae Min Lee
Abstract:
The presence of people and the intensity of activities have been widely accepted as indicators of successful public spaces in the urban design literature. This study attempts to predict these qualitative indicators from quantitative pedestrian counts. We conducted participant observation in Dilworth Park, Philadelphia, to collect the total number of people and activities in the park. The participant observation data were then compared with detailed pedestrian counts at 10 exit locations to estimate the number of park users. The study found a clear correlation between the intensity of social activities and automated pedestrian counts.
Keywords: automated pedestrian count, computer vision, public space, urban design
Procedia PDF Downloads 401
6724 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis, giving 120 samples with dimensions of 50×50 pixels. The feature set was obtained from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, resulting in 290 textural feature descriptors. The dimensionality of the feature space was reduced by three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with average correlation coefficients between the chosen features (POE + ACC). Each method returned the ten features occupying the initial places in a ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features arranged in a different order; in both rankings, the 50% percentile (Perc.50%) occupied first place, and the next selected features came from the co-occurrence matrix. The feature sets selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT) and reduced error pruning tree (REPT). To assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model tree classifiers using the feature set selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%; additionally, the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model tree classifier, the same ACC value of 90% was obtained, together with the highest values of TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest values, with TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that classification trees are an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.
Keywords: classification, feature selection, texture analysis, tree algorithms
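The five evaluation parameters are all derived from a binary confusion matrix, as the short sketch below shows; the counts are illustrative, chosen only to land near the reported ACC, TPR and TNR, not taken from the study.

```python
def classification_metrics(tp, tn, fp, fn):
    """ACC, TPR, TNR, PPV, NPV from binary confusion-matrix counts."""
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "TPR": tp / (tp + fn),          # sensitivity
        "TNR": tn / (tn + fp),          # specificity
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Illustrative counts for 120 samples with a 60/60 class split
print(classification_metrics(tp=56, tn=52, fp=8, fn=4))
# -> ACC 0.90, TPR 0.933, TNR 0.867, PPV 0.875, NPV 0.929
```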
Procedia PDF Downloads 178
6723 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
A pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, the belt-pulley assembly presents a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk and performed stress analysis of the contact loading in the pulley mechanism. Finite element analysis (FEA) was conducted on the pulley to investigate the stresses experienced on its inner and outer peripheries. The belt drive is one of the most frequently used mechanisms for transmitting power in heavy-duty applications such as automotive engines and industrial machines, and very heavy circular disks are usually used as pulleys. A pulley can be described as a drum and may have a groove between two flanges around its circumference; a rope, belt, cable or chain running over the pulley inside the groove can be the driving element of the pulley system. A pulley experiences normal and shear tractions on its contact regions in the process of motion transmission, at the belt-pulley contact surface or at the pulley-shaft contact surface. Hertz solved the elastic contact problem for point contact and line contact of ideally smooth objects, and his theory is generally used for computing the actual contact zone. Detailed stress analysis in the contact regions of such pulleys is necessary to prevent early failure. In this paper, we present the results of finite element analyses, carried out using fracture mechanics concepts, of the compressed disk of a belt-pulley arrangement. Based on the literature on contact stress problems across a wide field of applications, the stress distributions generated on the shaft-pulley and belt-pulley interfaces by the application of high tension and torque were evaluated in this study using FEA concepts. Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study mainly focuses on the fatigue life estimation of a rotating part, as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses were performed using open-source software. From the displacements computed using images acquired at minimum and maximum force, the displacement field amplitude was computed; from these fields, the crack path was defined, and the stress intensity factors and crack tip position were extracted. A non-linear least-squares projection was used to estimate the fatigue crack growth. Further work will extend this study to various rotating machinery applications, such as rotating flywheel disks, jet engines, compressor disks and roller disk cutters, where stress intensity factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed-mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
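The Paris-equation life estimate mentioned above can be sketched with a simple cycle-by-cycle integration of da/dN = C (dK)^m with dK = Y * ds * sqrt(pi * a); the material constants, geometry factor and stress range below are assumed placeholders, not values from the study.

```python
import numpy as np

C, m = 1e-11, 3.0          # Paris constants (illustrative; dK in MPa*sqrt(m))
Y = 1.12                   # edge-crack geometry factor (assumed constant)
ds = 120.0                 # stress range, MPa
a, a_final = 1e-3, 10e-3   # initial and critical crack lengths, m

cycles = 0
while a < a_final:
    dK = Y * ds * np.sqrt(np.pi * a)   # stress intensity factor range
    a += C * dK ** m                   # crack growth over one load cycle, m
    cycles += 1
print(f"estimated fatigue life: {cycles} cycles")
```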
Procedia PDF Downloads 124
6722 New Method to Increase Contrast of Electromicrograph of Rat Tissues Sections
Authors: Lise Paule Labéjof, Raíza Sales Pereira Bizerra, Galileu Barbosa Costa, Thaísa Barros dos Santos
Abstract:
Since the beginnings of microscopy, improving image quality has always been a concern of its users. For transmission electron microscopy (TEM) in particular, the problem is even more important because of the complexity of the sample preparation technique and the many variables that can affect the preservation of structures, the proper operation of the equipment, and hence the quality of the images obtained. Because animal tissues are transparent, a contrast agent must be applied in order to identify the elements of their ultrastructural morphology. Several methods of contrasting tissues for TEM imaging have already been developed; the most widely used are 'en bloc' contrasting and 'in situ' contrasting. This report presents an alternative technique in which the contrast agent is applied in vivo, i.e., before sampling. With this new method, the electron micrographs of tissue sections show better contrast than those contrasted in situ and present no artefacts from precipitation of the contrast agent. Another advantage is that only a small amount of contrast agent is needed to obtain a good result, which matters given that most contrast agents are expensive and extremely toxic.
Keywords: image quality, microscopy research, staining technique, ultra thin section
Procedia PDF Downloads 433
6721 Development of a Mobile Image-Based Reminder Application to Support Tuberculosis Treatment in Africa
Authors: Haji Ali Haji, Hussein Suleman, Ulrike Rivett
Abstract:
This paper presents the design, development, and evaluation of an application prototype developed to support tuberculosis (TB) patients' treatment adherence. The system uses graphics and voice reminders, as opposed to text messaging, to encourage patients to follow their medication routine. To evaluate the effect of the prototype application, participants were given mobile phones on which the reminder system was installed. Thirty-eight people, including TB health workers and patients from Zanzibar, Tanzania, participated in the evaluation exercises. The results indicate that the participants found the mobile graphic-based application useful for supporting TB treatment, and all participants understood and interpreted the intended meaning of every image correctly. The study findings reveal that a mobile visual-based application may have potential benefits in supporting TB patients (both literate and illiterate) through their treatment process.
Keywords: ICT4D, mobile technology, tuberculosis, visual-based reminder
Procedia PDF Downloads 430
6720 A Prediction Method for Large-Size Event Occurrences in the Sandpile Model
Authors: S. Channgam, A. Sae-Tang, T. Termsaithong
Abstract:
In this research, the occurrences of large-size events in various system sizes of the Bak-Tang-Wiesenfeld sandpile model are considered. The system sizes (square lattices) considered here are 25×25, 50×50, 75×75, and 100×100. The cross-correlation between the time series of the ratio of sites containing 3 grains and the time series of large-size events is analyzed for these four system sizes. Moreover, a method for predicting large-size events in the 50×50 system is introduced. Lastly, it is shown that this prediction method achieves slightly higher efficiency than random prediction.
Keywords: Bak-Tang-Wiesenfeld sandpile model, cross-correlation, avalanches, prediction method
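The experiment described above can be sketched in a few lines of Python: simulate the Bak-Tang-Wiesenfeld model on a square lattice with open boundaries, record the fraction of sites holding 3 grains alongside an indicator of large avalanches, and cross-correlate the two series. The lattice size, step counts, warm-up length, and the threshold defining a 'large' event are illustrative assumptions, not the paper's values.

```python
import numpy as np

def btw_step(grid, rng):
    """Drop one grain at a random site and relax the pile; return avalanche size."""
    L = grid.shape[0]
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            break
        for x, y in unstable:
            grid[x, y] -= 4  # topple: send one grain to each neighbour
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:  # grains fall off the open boundary
                    grid[nx, ny] += 1
    return size

rng = np.random.default_rng(0)
L, warm_up, steps, large_threshold = 50, 20000, 20000, 100  # illustrative values
grid = np.zeros((L, L), dtype=int)

for _ in range(warm_up):  # let the pile reach the self-organized critical state
    btw_step(grid, rng)

ratio_3, large_event = [], []
for _ in range(steps):
    size = btw_step(grid, rng)
    ratio_3.append(np.mean(grid == 3))                    # fraction of sites holding 3 grains
    large_event.append(1.0 if size >= large_threshold else 0.0)

# Cross-correlation at lag 0, and at lag 1 as a crude one-step-ahead predictor.
print("lag 0:", np.corrcoef(ratio_3, large_event)[0, 1])
print("lag 1:", np.corrcoef(ratio_3[:-1], large_event[1:])[0, 1])
```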
Procedia PDF Downloads 382
6719 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration
Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger
Abstract:
Image-guided therapy (IGT) plays a crucial role in minimally invasive liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D Slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D Slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI), and demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D Slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D Slicer, a comprehensive evaluation is conducted on a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and the Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D Slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D Slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration
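The two quantitative metrics named above are straightforward to compute for binary masks. The sketch below shows one common formulation of the Dice similarity coefficient and the symmetric Hausdorff distance, assuming numpy/scipy and toy 2D masks in place of real segmentations.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, truth):
    """Dice = 2|A n B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def hausdorff_distance(pred, truth):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    p = np.argwhere(pred)
    t = np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Toy 2D example standing in for a segmented CT slice.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
pred = np.zeros_like(truth)
pred[22:42, 21:41] = True

print("Dice:", dice_coefficient(pred, truth))          # approaches 1 for good overlap
print("Hausdorff:", hausdorff_distance(pred, truth))   # in voxel units
```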
Procedia PDF Downloads 48