Search results for: automatic neuropathy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 946

706 In-Context Meta Learning for Automatic Designing Pretext Tasks for Self-Supervised Image Analysis

Authors: Toktam Khatibi

Abstract:

Self-supervised learning (SSL) includes machine learning models that are trained on one aspect and/or one part of the input to learn other aspects and/or parts of it. SSL models are divided into two categories: pretext-task-based models and contrastive learning ones. Pretext tasks are auxiliary tasks that learn from pseudo-labels, and the trained models are further fine-tuned for downstream tasks. However, one important disadvantage of SSL based on pretext task solving is the need to define an appropriate pretext task for each image dataset across a variety of image modalities. Therefore, it is required to design an appropriate pretext task automatically for each dataset and each downstream task. To the best of our knowledge, the automatic design of pretext tasks for image analysis has not been considered yet. In this paper, we present a framework based on in-context learning that describes each task based on its input and output data using a pre-trained image transformer. Our proposed method combines the input image and its learned description for optimizing the pretext task design and its hyper-parameters using meta-learning models. The representations learned from the pretext tasks are fine-tuned for solving the downstream tasks. We demonstrate that our proposed framework outperforms the compared ones on unseen tasks and image modalities, in addition to its superior performance on previously known tasks and datasets.

Keywords: in-context learning (ICL), meta learning, self-supervised learning (SSL), vision-language domain, transformers

Procedia PDF Downloads 74
705 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, and others. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, shape, and others. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
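
As a rough illustration of the slice-wise pipeline described above (classifier-detected seed points, region growing, morphological clean-up, then voxel counting), a minimal Python sketch follows. The SVM detection step is abstracted away; `flood` from scikit-image stands in for the region-growing stage, and the voxel spacing and tolerance values are illustrative assumptions rather than the authors' Matlab implementation.

```python
# Illustrative sketch of the slice-wise segmentation-to-volume pipeline:
# seed points (assumed to come from a trained classifier) -> region growing ->
# morphological clean-up -> volume from voxel counts. Not the authors' code.
import numpy as np
from skimage.segmentation import flood          # region growing by intensity tolerance
from scipy.ndimage import binary_opening        # removes small false-positive specks

def sinus_volume(ct_slices, seeds_per_slice, voxel_volume_mm3, tolerance=80):
    """ct_slices: list of 2-D arrays (HU); seeds_per_slice: list of (row, col) or None."""
    total_voxels = 0
    for img, seed in zip(ct_slices, seeds_per_slice):
        if seed is None:                        # classifier found no sinus pixels in this slice
            continue
        mask = flood(img.astype(float), seed, tolerance=tolerance)
        mask = binary_opening(mask, iterations=2)   # suppress isolated false positives
        total_voxels += mask.sum()
    return total_voxels * voxel_volume_mm3

# Example with synthetic data: one air-filled cavity inside a brighter slice.
slice_img = np.full((64, 64), 40.0)
slice_img[20:40, 20:40] = -800.0                # air-filled sinus region
volume = sinus_volume([slice_img], [(30, 30)], voxel_volume_mm3=0.5 * 0.5 * 1.0)
print(f"estimated volume: {volume:.1f} mm^3")
```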

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 501
704 HPTLC Fingerprint Profiling of Protorhus longifolia Methanolic Leaf Extract and Qualitative Analysis of Common Biomarkers

Authors: P. S. Seboletswe, Z. Mkhize, L. M. Katata-Seru

Abstract:

Protorhus longifolia is known as a medicinal plant that has been used traditionally to treat various ailments such as hemiplegic paralysis, blood clotting related diseases, diarrhoea, heartburn, etc. The study reports a High-Performance Thin Layer Chromatography (HPTLC) fingerprint profile of Protorhus longifolia methanolic extract and its qualitative analysis for gallic acid, rutin, and quercetin. HPTLC analysis was achieved using a CAMAG HPTLC system equipped with a CAMAG automatic TLC sampler 4, CAMAG Automatic Developing Chamber 2 (ADC2), CAMAG visualizer 2, CAMAG Thin Layer Chromatography (TLC) scanner, and visionCATS CAMAG HPTLC software. A mobile phase comprising toluene, ethyl acetate, and formic acid (21:15:3) was used for the qualitative analysis of gallic acid and revealed eight peaks, while the mobile phase containing ethyl acetate, water, glacial acetic acid, and formic acid (100:26:11:11) for the qualitative analysis of rutin and quercetin revealed six peaks. HPTLC silica gel 60 F254 glass plates (10 × 10) were used as the stationary phase. Gallic acid was detected at Rf = 0.35, while rutin and quercetin were not evident in the extract. Further studies will be performed to quantify gallic acid in Protorhus longifolia leaves and also to identify other biomarkers.

Keywords: biomarkers, fingerprint profiling, gallic acid, HPTLC, Protorhus longifolia

Procedia PDF Downloads 135
703 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset with the Wolof language words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach is adopted to enhance the dataset size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound frequency feature spectra, and an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications associated with both web and mobile platforms.
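
A minimal sketch of the described pipeline (audio converted to MFCC-based spectral features, then classified with a small CNN as if it were an image) might look as follows. It assumes fixed-length recordings and uses librosa and Keras; the layer sizes and input shape are illustrative, not the trained model from the paper.

```python
# Minimal sketch: MFCC features from a short recording, fed to a small CNN
# for binary yes/no classification. Shapes and layer sizes are illustrative.
import numpy as np
import librosa
from tensorflow import keras

def mfcc_image(path, sr=16000, n_mfcc=40, frames=64):
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # (n_mfcc, time)
    m = librosa.util.fix_length(m, size=frames, axis=1)      # pad/trim the time axis
    return m[..., np.newaxis]                                 # (n_mfcc, frames, 1)

model = keras.Sequential([
    keras.layers.Input(shape=(40, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),              # P(word == "yes")
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, ...) once the augmented dataset is assembled.
```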

Keywords: automatic speech recognition, interactive voice response, voice response recognition, Wolof word classification

Procedia PDF Downloads 105
702 Impacts of Applying Automated Vehicle Location Systems to Public Bus Transport Management

Authors: Vani Chintapally

Abstract:

The proliferation of inexpensive and miniaturized Global Positioning System (GPS) receivers has led most Automatic Vehicle Location (AVL) systems today to rely solely on satellite-based positioning, as GPS is the most mature implementation of these. This paper presents the characteristics of a proposed system for tracking and analysing public transport in a typical medium-sized city and contrasts the properties of such a system with those of general-purpose AVL systems. Specific properties of the routes analysed by the AVL system used for the study of public transport in our work include cyclic vehicle routes, the need for specific performance reports, and so on. This paper deals in particular with vehicle movement prediction and the estimation of stop arrival times, combined with automatically generated reports on timetable conformance and other performance measures. Another side of the problem under consideration is the efficient transfer of data from the vehicles to the control centre. The prevalence of GSM packet data transfer technologies, combined with reduced data transfer costs, has caused today's AVL systems to rely predominantly on packet data services from mobile operators as the communication channel between vehicles and the control centre. This approach raises many security issues in this potentially sensitive application field.

Keywords: automatic vehicle location (AVL), prediction of arrival times, AVL security, information services, intelligent transport systems (ITS), map matching

Procedia PDF Downloads 375
701 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
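
The lemma-assignment strategy described for Norna (matching stems and inflectional endings against an index) can be caricatured in a few lines of Python. The stem index and ending list below are invented toy examples, not data from Nerthus, Freya, or the DOEC index.

```python
# Toy dictionary-based lemmatiser: strip a known inflectional ending,
# then look the remaining stem up in a lemma index. Data here is invented.
STEM_TO_LEMMA = {"cyning": "cyning", "stan": "stān", "luf": "lufian"}
ENDINGS = ["as", "es", "um", "e", "a", "ode", ""]   # try longest endings first

def lemmatise(form):
    for ending in sorted(ENDINGS, key=len, reverse=True):
        stem = form[: len(form) - len(ending)] if ending else form
        if stem in STEM_TO_LEMMA:
            return STEM_TO_LEMMA[stem]
    return None                                      # left for manual annotation

print(lemmatise("cyningas"))   # -> cyning
print(lemmatise("lufode"))     # -> lufian
```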

Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus

Procedia PDF Downloads 205
700 Effect of Automatic Self Transcending Meditation on Perceived Stress and Sleep Quality in Adults

Authors: Divya Kanchibhotla, Shashank Kulkarni, Shweta Singh

Abstract:

Chronic stress and poor sleep quality reduce mental health and increase the risk of developing depression and anxiety as well. There is increasing evidence for the utility of meditation as an adjunct clinical intervention for conditions like depression and anxiety. The present study is an attempt to explore the impact of Sahaj Samadhi Meditation (SSM), a category of Automatic Self Transcending Meditation (ASTM), on perceived stress and sleep quality in adults. The study design was a single-group pre-post assessment. The Perceived Stress Scale (PSS) and the Pittsburgh Sleep Quality Index (PSQI) were used in this study. Fifty-two participants filled in the PSS, and 60 participants filled in the PSQI at the beginning of the program (day 0), after two weeks (day 16), and at two months (day 60). Significant pre-post differences in the perceived stress level on Day 0 - Day 16 (p < 0.01; Cohen's d = 0.46) and Day 0 - Day 60 (p < 0.01; Cohen's d = 0.76) clearly demonstrated that by practicing SSM, participants experienced a reduction in perceived stress. The effect size of the intervention observed on the 16th day of assessment was small to medium, but on the 60th day, a medium to large effect size of the intervention was observed. In addition to this, significant pre-post differences in sleep quality on Day 0 - Day 16 and Day 0 - Day 60 (p < 0.05) clearly demonstrated that by practicing SSM, participants experienced improvement in sleep quality. Compared with the Day 0 assessment, participants demonstrated significant improvement in the quality of sleep on Day 16 and Day 60. The effect size of the intervention observed on the 16th day of assessment was small, but on the 60th day, a small to medium effect size of the intervention was observed. In the current study, we found that after practicing SSM for two months, participants reported a reduction in perceived stress; they felt more confident about their ability to handle personal problems, were able to cope with all the things that they had to do, felt that they were on top of things, and felt less anger. Participants also reported that their overall sleep quality improved; they took less time to fall asleep; they had fewer disturbances in sleep and less daytime dysfunction due to sleep deprivation. The present study provides clear evidence of the efficacy and safety of non-pharmacological interventions such as SSM in reducing stress and improving sleep quality. Thus, ASTM may be considered a useful intervention to reduce psychological distress in healthy, non-clinical populations, and it can be an alternative remedy for treating poor sleep among individuals and decreasing the use of harmful sedatives.
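
The pre-post comparisons reported here (PSS and PSQI at day 0, 16, and 60, with Cohen's d effect sizes) correspond to a standard paired analysis. A small sketch with simulated scores is given below; the numbers are made up, and the Cohen's d variant used (mean difference over the standard deviation of the differences) is one common choice among several.

```python
# Paired pre/post comparison with a t-test and Cohen's d for paired samples.
# Scores below are simulated, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pss_day0 = rng.normal(20, 5, size=52)             # perceived stress at baseline
pss_day60 = pss_day0 - rng.normal(4, 3, size=52)  # simulated reduction after SSM

t, p = stats.ttest_rel(pss_day0, pss_day60)
diff = pss_day0 - pss_day60
cohens_d = diff.mean() / diff.std(ddof=1)         # d for paired samples

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {cohens_d:.2f}")
```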

Keywords: automatic self transcending meditation, Sahaj Samadhi meditation, sleep, stress

Procedia PDF Downloads 129
699 A Dynamic Model for Assessing the Advanced Glycation End Product Formation in Diabetes

Authors: Victor Arokia Doss, Kuberapandian Dharaniyambigai, K. Julia Rose Mary

Abstract:

Advanced Glycation End (AGE) products are the end products of the reaction between the excess reducing sugar present in diabetes and free amino groups in proteins, lipids, and nucleic acids. Thus, non-enzymatic glycation of molecules such as hemoglobin, collagen, and other structurally and functionally important proteins adds to pathogenic complications such as diabetic retinopathy, neuropathy, nephropathy, vascular changes, atherosclerosis, Alzheimer's disease, rheumatoid arthritis, and chronic heart failure. The most common non-cross-linking AGE, carboxymethyl lysine (CML), is formed by the oxidative breakdown of fructosyllysine, which is a product of glucose and lysine. CML is formed in a wide variety of tissues and is an index to assess the extent of glycoxidative damage. Thus, we have constructed a mathematical and computational model that predicts the effect of temperature differences in vivo on the formation of CML, which is now being considered an important marker of the intracellular milieu. This hybrid model, which has been tested for its parameter fitting and its sensitivity against available experimental data, paves the way for designing novel laboratory experiments that would throw more light on the pathological formation of AGE adducts and on the pathophysiology of diabetic complications.
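
The paper does not reproduce its equations here, so the following is only a generic illustration of how a temperature-dependent CML formation model could be set up: first-order formation from a fructosyllysine-like precursor with an Arrhenius rate constant, integrated numerically. All parameter values are arbitrary placeholders, not the fitted model.

```python
# Generic illustration (not the authors' model): first-order CML formation
# from a precursor pool, with an Arrhenius temperature dependence.
import numpy as np
from scipy.integrate import solve_ivp

A, Ea, R = 1e6, 6.0e4, 8.314        # placeholder pre-exponential factor, J/mol, J/(mol K)

def rate_constant(T_kelvin):
    return A * np.exp(-Ea / (R * T_kelvin))

def model(t, y, T_kelvin):
    precursor, cml = y
    k = rate_constant(T_kelvin)
    return [-k * precursor, k * precursor]

for T in (310.15, 312.15):           # normal vs. slightly elevated body temperature
    sol = solve_ivp(model, (0, 72 * 3600), [1.0, 0.0], args=(T,))
    print(f"T = {T} K -> CML after 72 h: {sol.y[1, -1]:.3f} (arbitrary units)")
```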

Keywords: advanced glycation end-products, CML, mathematical model, computational model

Procedia PDF Downloads 127
698 The Role of Situational Attribution Training in Reducing Automatic In-Group Stereotyping in Females

Authors: Olga Mironiuk, Małgorzata Kossowska

Abstract:

The aim of the present study was to investigate the influence of Situational Attribution Training on reducing automatic in-group stereotyping in females. The experiment was conducted with control of age and level of prejudice. Ninety female participants were randomly assigned to two conditions: an experimental and a control group (each group was also divided into younger- and older-aged conditions). Participants in the experimental condition were subjected to more extensive training. In the first part of the experiment, the experimental group took part in the first session of the Situational Attribution Training while the control group participated in the Grammatical Training Control. In the second part of the research, both groups took part in the Situational Attribution Training (which was considered the second training session for the experimental group and the first one for the control condition). The training procedure was based on descriptions of ambiguous situations which could be explained using situational or dispositional attributions. The participants’ task was to choose the situational explanation from two alternatives, of which the second presented an explanation based on traits that were either neutral or stereotypically associated with women. Moreover, the experimental group took part in a third training session after a two-day delay, in order to check the persistence of the training effect. The main hypothesis stated that among participants taking part in the more extensive training, automatic in-group stereotyping would be less frequent after having finished the training sessions. The effectiveness of the training was tested by measuring the response time and the correctness of answers: a longer response time for the examples where one of two possible answers was based on the stereotype trait and higher correctness of answers were considered proof of the training effectiveness. As the participants’ level of prejudice was controlled (using the Ambivalent Sexism Inventory), it was also assumed that the training effect would be weaker for participants revealing a higher level of prejudice. The obtained results did not confirm the hypothesis based on the response time: participants from the experimental group responded faster in the case of situations where one of the possible explanations was based on a stereotype trait. However, an interesting observation was made during the analysis of the answers’ correctness: regardless of condition and age group affiliation, participants made more mistakes while choosing the situational explanations when the alternative was based on a stereotypical trait associated with the dimension of warmth. What is more, the correctness of answers was higher in the third training session for the experimental group in the case when the alternative to the situational explanation was based on a stereotype trait associated with the dimension of competence. The obtained results partially confirm the effectiveness of the training.

Keywords: female, in-group stereotyping, prejudice, situational attribution training

Procedia PDF Downloads 177
697 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods. Some of them have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms as well. These proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use due to the computational resources involved in the formal models. The first step to evaluate the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches could seem promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough details to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm just to try to get the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We have analyzed the 100 curves they use in their experiment, and similar results were observed; in addition, our system has automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could be interesting to automatically propose other mathematical models to fit both individual well curves and also the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
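
A classical, simple genetic algorithm of the kind described can be sketched in a few dozen lines. The well performance curves below are hypothetical quadratic gas-lift response curves rather than the published well data, and the fitness simply penalizes allocations that exceed the available gas.

```python
# Simple GA sketch for gas-lift allocation: maximize total oil rate subject to
# a total injected-gas budget. Well response curves here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N_WELLS, GAS_BUDGET = 4, 10.0
A_COEF = np.array([2.0, 1.5, 1.8, 1.2])   # oil(g) = a*g - b*g^2, made-up coefficients
B_COEF = np.array([0.15, 0.10, 0.12, 0.08])

def fitness(alloc):
    oil = np.sum(A_COEF * alloc - B_COEF * alloc**2)
    penalty = 100.0 * max(0.0, alloc.sum() - GAS_BUDGET)   # soft budget constraint
    return oil - penalty

pop = rng.uniform(0, GAS_BUDGET / N_WELLS, size=(60, N_WELLS))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]                 # truncation selection
    children = []
    while len(children) < len(pop):
        p1, p2 = parents[rng.integers(30)], parents[rng.integers(30)]
        mask = rng.random(N_WELLS) < 0.5                    # uniform crossover
        child = np.where(mask, p1, p2)
        child = np.clip(child + rng.normal(0, 0.2, N_WELLS), 0, None)  # mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best allocation:", np.round(best, 2), "total gas:", round(best.sum(), 2))
```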

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 158
696 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China

Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu

Abstract:

Increases in travel fares and station failures have a huge impact on passengers’ travel. The accessibility reliability of Urban Rail Transit (URT) stations under fare increases and station failures is analyzed in this paper. Firstly, the passenger’s travel path is reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm and the Ratio of Station Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are then proposed based on the analysis of passengers’ travel characteristics. Thirdly, station accessibility under different scenarios is measured, and the rate of accessibility change is proposed as the indicator of a station’s accessibility reliability. Finally, the accessibility of Hangzhou metro stations is analyzed with the formulated models. The results show that Jinjiang station and Liangzhu station are the most important and the most convenient stations in the Hangzhou metro, respectively. Station failure, and a fare increase combined with station failure, have a huge impact on station accessibility, whereas a fare increase alone does not. Stations on Hangzhou metro Line 1 have relatively poor accessibility reliability, and Fengqi Road station’s accessibility reliability is the weakest. For the Hangzhou metro operational department, constructing new metro lines around Line 1 and protecting Line 1’s stations preferentially can effectively improve the accessibility reliability of the Hangzhou metro.
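
Station importance via LeaderRank can be illustrated on a toy network: a ground node linked to every station is added, random-walk scores are iterated to convergence, and the ground node's score is finally redistributed evenly. The small adjacency matrix below is invented and is not the Hangzhou metro network.

```python
# Toy LeaderRank: add a ground node linked to all stations, iterate the
# random-walk scores, then share the ground node's score back out evenly.
import numpy as np

adj = np.array([[0, 1, 1, 0],            # invented station-to-station links
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)

def leaderrank(adj, iters=200):
    n = adj.shape[0]
    g = np.ones((n + 1, n + 1)) - np.eye(n + 1)   # ground node (last) links to/from all
    g[:n, :n] = adj                               # keep the original internal links
    out_deg = g.sum(axis=1)
    scores = np.ones(n + 1)
    scores[n] = 0.0                               # ground node starts with zero score
    for _ in range(iters):
        scores = g.T @ (scores / out_deg)         # each node spreads score along out-links
    return scores[:n] + scores[n] / n             # redistribute the ground node's score

print(np.round(leaderrank(adj), 3))               # higher score = more important station
```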

Keywords: automatic fare collection data, AFC, station’s accessibility reliability, stochastic user equilibrium, urban rail transit, URT

Procedia PDF Downloads 126
695 Population Dynamics and Land Use/Land Cover Change on the Chilalo-Galama Mountain Range, Ethiopia

Authors: Yusuf Jundi Sado

Abstract:

Changes in land use are mostly attributed to human actions that result in negative impacts on biodiversity and ecosystem functions. This study aims to analyze the dynamics of land use and land cover changes for sustainable natural resources planning and management on the Chilalo-Galama Mountain Range, Ethiopia. This study used Landsat 5 Thematic Mapper (TM) data for 1986 and 2001 and Landsat 8 (OLI) data for 2017. Additionally, data from the Central Statistics Agency on human population growth were analyzed. The Semi-Automatic Classification Plugin (SCP) in QGIS 3.2.3 software was used for image classification. Global positioning system, field observations, and focus group discussions were used for ground verification. Land Use/Land Cover (LU/LC) change analysis was performed using maximum likelihood supervised classification, and changes were calculated for the 1986–2001, 2001–2017, and 1986–2017 periods. The results show that agricultural land increased from 27.85% (1986) to 44.43% and 51.32% in 2001 and 2017, respectively, with overall accuracies of 92% (1986), 90.36% (2001), and 88% (2017). On the other hand, forests decreased from 8.51% (1986) to 7.64% (2001) and 4.46% (2017), and grassland decreased from 37.47% (1986) to 15.22% and 15.01% in 2001 and 2017, respectively. This indicates that for the years 1986–2017, the largest gain in agricultural land cover was obtained from grassland. The matrix also shows that shrubland gained land from agricultural land, afro-alpine, and forest land. Population dynamics is found to be one of the major driving forces for the LU/LC changes in the study area.

Keywords: Landsat, LU/LC change, Semi-Automatic classification plugin, population dynamics, Ethiopia

Procedia PDF Downloads 78
694 CMT4G: Rare Form of Charcot-Marie-Tooth Disease in Slovak Roma Patient

Authors: Dana Gabriková, Martin Mistrík, Jarmila Bernasovská, Iveta Tóthová, Jana Kisková

Abstract:

The Roma (Gypsies) are a transnational minority with a high degree of consanguineous marriages. Similar to other genetically isolated founder populations, the Roma harbor a number of unique or rare genetic disorders. This paper discusses a rare form of Charcot-Marie-Tooth disease – type 4G (CMT4G), also called Hereditary Motor and Sensory Neuropathy type Russe, an autosomal recessive disease caused by a mutation private to the Roma and characterized by an abnormally increased density of non-myelinated axons. CMT4G was originally found in Bulgarian Roma, and in 2009 two putative causative mutations in the HK1 gene were identified. Since then, several cases have been reported in Roma families, mainly from Bulgaria and Spain. Here we present a Slovak Roma family in which CMT4G was diagnosed on the basis of clinical examination and genetic testing. This case is further proof of the role of the HK1 gene in the pathogenesis of the disease. It confirms that mutation in the HK1 gene is a common cause of autosomal recessive CMT disease in the Roma and that testing for it should be considered a routine part of the diagnostic procedure.

Keywords: gypsies, HK1, HSMN-Russe, rare disease

Procedia PDF Downloads 380
693 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by not focusing on physical laws, but on measured (sensor) data of real systems. The approach is more general since it generates models for every system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications and with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of the correlations was able to discover so far unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute high-precision and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
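
The core idea of a regression that also captures products of variables, motivated by a series expansion, corresponds to fitting a linear model over polynomial features. The sketch below uses scikit-learn on synthetic sensor data; it is a generic stand-in, not the proprietary method or its coupling to WORHP.

```python
# Data-driven model with product (interaction) terms: expand the inputs into
# polynomial features up to degree 2 and fit a linear model. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))                    # three "sensor" channels
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=500)

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

names = model.named_steps["polynomialfeatures"].get_feature_names_out(["x0", "x1", "x2"])
coefs = model.named_steps["linearregression"].coef_
for name, c in zip(names, coefs):
    if abs(c) > 0.05:                                    # keep only the strong terms
        print(f"{name}: {c:+.3f}")                       # recovers x0 and the x1*x2 product
```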

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 398
692 Anatomical Features of Internal Pudendal Artery

Authors: Adel Yasky, Waseem Al-Talalwah, Shorok Al Dorazi, Roger Soames

Abstract:

The internal pudendal artery is a standard branch of the anterior division of the internal iliac artery. The current study includes 41 cadavers to investigate the origin and branches of the internal pudendal artery and its clinical significance. The internal pudendal artery arose directly from the anterior division of the internal iliac artery in 48.3%, while it arose indirectly in 48.5%. However, the internal pudendal artery arose from the posterior division of the internal iliac artery in 1.6%. Moreover, it arose at the internal iliac artery bifurcation site in 1.6%. Further, the internal pudendal artery supplied the urinary bladder in 17.1%. Also, the internal pudendal artery supplied the rectum in 33.5%. It gave off uterine and vaginal arteries in 9.4% and 7.8%, respectively. Finally, it supplied the sciatic nerve via a lateral sacral branch in 1.6%. Internists, surgeons, and radiologists have to be aware of this variability to decrease iatrogenic injury. Therefore, unnecessary proximal ligation should be avoided at the site of indirect origin of the internal pudendal artery to prevent sciatic neuropathy. Further, intrapelvic bleeding as a result of laceration of internal pudendal branches during hysterectomy, prostatectomy, or proctectomy should be expected. Therefore, this study increases the awareness of surgeons, helping to minimize iatrogenic faults.

Keywords: internal pudendal artery, inferior gluteal artery, superior gluteal artery, internal iliac artery, impotence, decreased libido

Procedia PDF Downloads 347
691 Ophthalmic Hashing Based Supervision of Glaucoma and Corneal Disorders Imposed on Deep Graphical Model

Authors: P. S. Jagadeesh Kumar, Yang Yung, Mingmin Pan, Xianpei Li, Wenli Hu

Abstract:

Glaucoma is driven by optic nerve damage, habitually represented as cupping, and visual field injury, frequently with an arcuate pattern of mid-peripheral loss, secondary to retinal ganglion cell damage and death. Glaucoma is the second foremost cause of blindness and the chief cause of permanent blindness worldwide. Consequently, comprehensive study into the analysis and understanding of glaucoma is underway to guide deep learning based neural network interventions for addressing this substantial optic neuropathy. This paper advances an ophthalmic hashing based supervision of glaucoma and corneal disorders built on a deep graphical model. Ophthalmic hashing is a newly proposed method extending the efficacy of visual hash-coding to predict glaucoma and corneal disorder matching, and it is faster than the existing methods. The deep graphical model is capable of learning interior representations of corneal disorders in satisfactory time, solving hard combinatorial problems using deep Boltzmann machines.

Keywords: corneal disorders, deep Boltzmann machines, deep graphical model, glaucoma, neural networks, ophthalmic hashing

Procedia PDF Downloads 238
690 Statistical Feature Extraction Method for Wood Species Recognition System

Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof

Abstract:

Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of the wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts’ knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts’ interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
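
The final classification stage (38 statistical pore features per image fed to a backpropagation neural network) corresponds to a standard multilayer perceptron, sketched below with random stand-in features; the hidden-layer size and the data are illustrative only.

```python
# Backpropagation classifier over 38 statistical features per wood image.
# Features here are random stand-ins for the fuzzy pore descriptors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_species, per_species, n_features = 52, 100, 38
X = rng.normal(size=(n_species * per_species, n_features))
y = np.repeat(np.arange(n_species), per_species)
X += y[:, None] * 0.05                                   # weak class signal for the demo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```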

Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images

Procedia PDF Downloads 416
689 Support Vector Machine Based Retinal Therapeutic for Glaucoma Using Machine Learning Algorithm

Authors: P. S. Jagadeesh Kumar, Mingmin Pan, Yang Yung, Tracy Lin Huan

Abstract:

Glaucoma is a group of visual maladies characterized by progressive optic nerve neuropathy, leading to increasing deterioration of the visual field and resulting in loss of sight. In this paper, a novel support vector machine based retinal therapeutic for glaucoma using a machine learning algorithm is presented. The algorithm has practical applicability; built on a correlation clustering mode, it performs exact computations in the multi-dimensional space. Support vector clustering turns out to be comparable to the scale-space approach that investigates the cluster organization by means of a kernel density estimation of the probability distribution, where cluster centres are identified by the local maxima of the density. The proposed approach has a 91% attainment rate on a data set consisting of 500 real images of normal and glaucomatous retinas; therefore, the computational benefit of depending on the cluster overlapping system based on the machine learning algorithm gives it strong performance in glaucoma therapeutics.

Keywords: machine learning algorithm, correlation clustering mode, cluster overlapping system, glaucoma, kernel density estimation, retinal therapeutic

Procedia PDF Downloads 239
688 Heart Rate Variability Responses Pre-, during, and Post-Exercise among Special Olympics Athletes

Authors: Kearney Dover, Viviene Temple, Lynneth Stuart-Hill

Abstract:

Heart Rate Variability (HRV) is the beat-to-beat variation in adjacent heartbeats. HRV is a non-invasive measure of the autonomic nervous system (ANS) and provides information about the sympathetic (SNS) and parasympathetic (PNS) nervous systems. The HRV of a well-conditioned heart is generally high at rest, whereas low HRV has been associated with adverse outcomes and conditions, including congestive heart failure, diabetic neuropathy, depression, and hospital admissions. HRV has received very little research attention among individuals with intellectual disabilities in general or Special Olympics athletes in particular. Purpose: 1) to use a longer post-exercise rest and recovery time to establish how long it takes for the athletes’ HRV components to return to pre-exercise levels, and 2) to determine whether greater familiarization with the testing processes influences HRV. Participants: Two separate samples of 10 adult Special Olympics athletes will be recruited for 2 separate studies. Athletes will be between 18 and 50 years of age and will be members of Special Olympics BC. Anticipated Findings: To explain why Special Olympics athletes display poor cardiac responsiveness to changes in autonomic modulation during exercise. By testing the cortisol levels of the athletes, we can determine their stress levels, which will then help explain their measured HRV.
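
Two of the time-domain HRV measures commonly reported in such studies, RMSSD and pNN50, can be computed directly from successive RR intervals, as the short sketch below shows. The interval series is simulated; SD1/SD2 and the frequency-domain indices would require additional steps not shown.

```python
# Time-domain HRV from RR intervals (ms): RMSSD and pNN50. Simulated data.
import numpy as np

rng = np.random.default_rng(0)
rr = rng.normal(800, 40, size=300)                        # simulated RR intervals, ms

diffs = np.diff(rr)
rmssd = np.sqrt(np.mean(diffs ** 2))                      # root mean square of successive differences
pnn50 = 100.0 * np.mean(np.abs(diffs) > 50)               # % of successive differences > 50 ms

print(f"RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %")
```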

Keywords: 6MWT, autonomic modulation, cortisol levels, intellectual disability

Procedia PDF Downloads 302
687 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions

Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin

Abstract:

In most cases, typical cysts are easily recognized at ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, it is necessary to have all of the following features to conclude a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. Therefore, we tried to develop an automatic method for the differentiation of cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gradations of gray (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first, the region of interest (or contour of the lesion) was searched for and selected. Selection of such a region is carried out using a sigmoid filter whose threshold is calculated according to the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points which have the highest gradient of brightness. In the second step, the selected region was assigned to one of the lesion groups based on the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, an average gradient of brightness, etc. To determine the decisive criterion for belonging to one of the lesion groups (cystic or solid), a training set of these brightness distribution characteristics was obtained separately for benign and malignant lesions. To test our approach, we used a set of 217 ultrasonic images of 107 cystic (including 53 atypical, difficult for bare-eye differentiation) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all (107, 100%) typical cysts, 107 of 110 (97.3%) solid lesions, and 50 of 53 (94.3%) atypical cysts. In contrast, with the bare eye it was possible to correctly identify all (107, 100%) typical cysts, 96 of 110 (87.3%) solid lesions, and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and hypoechoic solid lesions with a clear margin. These data may have clinical significance.
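
The two steps described (a sigmoid brightness mapping whose threshold is derived from the brightness distribution, followed by statistical features of the selected region) can be sketched roughly as follows. Using the median as the threshold and only entropy, quantiles, mean, and standard deviation as features is an illustrative simplification of the fuller feature set described above.

```python
# Rough sketch of the two-step idea: sigmoid contrast mapping with a
# data-driven threshold, then simple brightness statistics of a region.
import numpy as np

def sigmoid_map(img, slope=0.05):
    threshold = np.median(img)                       # stand-in for the empirical-CDF threshold
    return 1.0 / (1.0 + np.exp(-slope * (img.astype(float) - threshold)))

def region_features(pixels, bins=32):
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))                # brightness entropy
    q25, q50, q75 = np.percentile(pixels, [25, 50, 75])
    return np.array([entropy, q25, q50, q75, pixels.mean(), pixels.std()])

img = np.random.default_rng(0).integers(0, 256, size=(128, 128)).astype(float)
mapped = sigmoid_map(img)
lesion_pixels = img[mapped > 0.5]                    # crude region-of-interest selection
print(region_features(lesion_pixels).round(2))       # features fed to the trained classifier
```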

Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography

Procedia PDF Downloads 261
686 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most of the existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier was then applied to distinguish highway congestion from signalized stop points. The binary classifier uses the length of the cluster to identify congestion. The proposed framework shows high accuracy for identifying the stop positions and congestion points in around 99.2% of trials. We show that it is possible, using limited GPS data, to distinguish these two categories with high accuracy.
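
The clustering step built on the Delaunay triangulation can be sketched as: triangulate the stopped waypoints, drop edges longer than a cutoff, and take connected components as clusters. The coordinates and the edge-length cutoff below are illustrative, and the final congestion-versus-intersection classifier based on cluster length is not shown.

```python
# Cluster stopped GPS waypoints: Delaunay triangulation, prune long edges,
# connected components become clusters. Points and cutoff are illustrative.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
stops = np.vstack([rng.normal((0, 0), 0.01, (30, 2)),     # tight cluster near an intersection
                   rng.normal((1, 1), 0.05, (40, 2))])    # elongated congestion cluster

tri = Delaunay(stops)
edges = set()
for simplex in tri.simplices:                              # collect unique triangle edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

cutoff = 0.1                                               # drop edges longer than this
rows, cols = zip(*[(a, b) for a, b in edges
                   if np.linalg.norm(stops[a] - stops[b]) < cutoff])
graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(stops),) * 2)
n_clusters, labels = connected_components(graph, directed=False)
print("clusters found:", n_clusters)
```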

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 269
685 AgriInnoConnect Pro System Using IoT and Firebase Console

Authors: Amit Barde, Dipali Khatave, Vaishali Savale, Atharva Chavan, Sapna Wagaj, Aditya Jilla

Abstract:

AgriInnoConnect Pro is an advanced agricultural automation system designed to enhance irrigation efficiency and overall farm management through IoT technology. Using MIT App Inventor, Telegram, Arduino IDE, and Firebase Console, it provides a user-friendly interface for farmers. Key hardware includes soil moisture sensors, DHT11 sensors, a 12V motor, a solenoid valve, a stepdown transformer, Smart Fencing, and AC switches. The system operates in automatic and manual modes. In automatic mode, the ESP32 microcontroller monitors soil moisture and autonomously controls irrigation to optimize water usage. In manual mode, users can control the irrigation motor via a mobile app. Telegram bots enable remote operation of the solenoid valve and electric fencing, enhancing farm security. Additionally, the system upgrades conventional devices to smart ones using AC switches, broadening automation capabilities. AgriInnoConnect Pro aims to improve farm productivity and resource management, addressing the critical need for sustainable water conservation and providing a comprehensive solution for modern farm management. The integration of smart technologies in AgriInnoConnect Pro ensures precision farming practices, promoting efficient resource allocation and sustainable agricultural development.
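
The project's firmware is written with the Arduino IDE; purely to illustrate the automatic-mode logic (read soil moisture, switch the pump when a threshold is crossed), here is a minimal MicroPython-style sketch for an ESP32. The pin numbers, threshold, and sensor polarity are placeholders, not the project's actual wiring or code.

```python
# Illustrative MicroPython sketch of the automatic irrigation loop on an ESP32.
# Pin numbers and threshold are placeholders; the dry/wet polarity depends on the sensor.
import time
from machine import ADC, Pin

moisture_adc = ADC(Pin(34))          # soil moisture sensor on GPIO34
moisture_adc.atten(ADC.ATTN_11DB)    # full 0-3.3 V input range
pump_relay = Pin(26, Pin.OUT)        # relay driving the 12 V motor / solenoid valve

DRY_THRESHOLD = 1800                 # raw ADC value treated as "dry" in this example

while True:
    reading = moisture_adc.read()    # 0..4095
    pump_relay.value(1 if reading < DRY_THRESHOLD else 0)   # irrigate while dry
    time.sleep(30)                   # re-check every 30 seconds
```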

Keywords: agricultural automation, IoT, soil moisture sensor, ESP32, MIT app inventor, telegram bot, smart farming, remote control, firebase console

Procedia PDF Downloads 26
684 Induced Emotional Empathy and Contextual Factors like Presence of Others Reduce the Negative Stereotypes Towards Persons with Disabilities through Stronger Prosociality

Authors: Shailendra Kumar Mishra

Abstract:

In this paper, we focus on how contextual factors such as the physical presence of other perceivers, and the induced emotional empathy subsequently developed towards a person with disabilities, may reduce automatic negative stereotypes and the resulting responses towards that person. We demonstrated in Study 1 that negative attitudes based on negative stereotypes, assessed with ATDP-test questionnaires on a five-point Likert scale, are significantly less negative when participants were tested with a group of perceivers than when tested alone, by applying a 3 (positive, indifferent, and negative attitude levels) x 2 (physical presence of others vs. alone) factorial ANOVA design. In the second study, we demonstrate by applying regression analysis that, in the presence of other perceivers, even in a small group, participants showed more induced emotional empathy through stronger prosociality towards a high-distress target such as a person with disabilities in comparison with other stigmatized persons such as targets of racial or gender bias. Thus, the results show that automatic affective responses in the form of induced emotional empathy in the perceiver and contextual factors like the presence of other perceivers automatically activate stronger prosocial norms and egalitarian goals towards physically challenged persons in comparison to other stigmatized persons such as targets of racial or gender bias. This leads to less negative attitudes and behaviour towards a person with disabilities.

Keywords: contextual factors, high distress target, induced emotional empathy, stronger prosociality

Procedia PDF Downloads 130
683 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence from the body of biomedical literature. An important challenge confronted by medical practitioners is the long time needed to browse, filter, summarize, and compile information from different medical resources. Deep learning can help in solving this based on automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison and Outcome). This article describes the use of deep learning techniques for Q&A that are based on transformer models like BERT and GPT to answer PICO clinical questions that can be used for evidence-based practice, extracted from sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach an acceptable state-of-the-art performance based on a two-stage bootstrapping process involving filtering relevant articles followed by identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments to empower our bootstrapping techniques with patched attention to the most important keywords in the clinical case and the PICO questions. Our bootstrapping patched with attention shows the relevancy of the evidence collected based on entropy metrics.
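
One way to picture the answer-extraction stage (finding, within an already-filtered abstract, the span that addresses the Outcome of a PICO question) is with an off-the-shelf extractive question-answering pipeline. The sketch below uses the Hugging Face transformers pipeline with its default model; the abstract and question are invented, and the paper's own filtering, patched attention, and entropy scoring are not reproduced.

```python
# Extractive QA over a (pre-filtered) abstract for the Outcome of a PICO question.
# The abstract and question are invented; defaults stand in for the paper's models.
from transformers import pipeline

qa = pipeline("question-answering")   # default extractive QA model

abstract = ("In adults with type 2 diabetes, adding drug X to metformin "
            "reduced HbA1c by 0.8 percentage points over 24 weeks compared "
            "with metformin alone, with no increase in hypoglycaemia.")
pico_question = "What was the effect of adding drug X on HbA1c compared with metformin alone?"

result = qa(question=pico_question, context=abstract)
print(result["answer"], f"(score: {result['score']:.2f})")
```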

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 32
682 A Study of Ocular Morbidity in Road Traffic Accidents

Authors: Nikhat Iqbal Tamboli

Abstract:

INTRODUCTION: Road traffic accidents (RTAs) are one of the leading and most common causes of ocular injuries, especially in developing countries like India; these injuries are preventable with certain measures and so are of public health importance. AIM: To study the incidence and clinical presentation of ocular morbidity in road traffic accidents. METHOD: A prospective cross-sectional study was conducted on 360 patients reporting to the department of ophthalmology. Detailed ocular examination and relevant investigations were done. RESULTS: The incidence of ocular injuries was 23%. The male:female ratio was 4.5:1. Cases included subconjunctival haemorrhage (74), ecchymosis (217), lid lacerations (164), orbital fracture (12), corneal tear (7), corneal abrasion (2), scleral tear (6), hyphaema (4), traumatic mydriasis (7), traumatic cataract (2), vitreous haemorrhage (1), and traumatic optic neuropathy (1). The maximum number of cases was in the 20-40 years age group, with two-wheeler vehicles involved in 94.7% and alcohol influence in 13.3%. CONCLUSION: A younger age group with male preponderance is involved in ocular trauma due to road traffic accidents; most reported cases involved anterior segment injuries. Alcohol and two-wheeler vehicles are common risk factors. Injuries involving the cornea had a bad prognosis and those involving the retina had the worst prognosis.

Keywords: ocular morbidity, eye trauma, RTA, eye injury

Procedia PDF Downloads 63
681 Effects of a Simulated Power Cut in Automatic Milking Systems on Dairy Cows Heart Activity

Authors: Anja Gräff, Stefan Holzer, Manfred Höld, Jörn Stumpenhausen, Heinz Bernhardt

Abstract:

In view of the increasing quantity of 'green energy' from renewable raw materials and photovoltaic facilities, it is quite conceivable that power supply variations may occur, so that constantly working machines like automatic milking systems (AMS) may break down temporarily. The usage of farm-made energy is steadily increasing in order to keep energy costs as low as possible. As a result, power cuts are likely to happen more frequently. Current work in the framework of the project 'stable 4.0' focuses on possible stress reactions by simulating power cuts of up to four hours in dairy farms. Based on heart activity, it should be established whether stress on dairy cows increases under these circumstances. In order to simulate a power cut, 12 random cows out of 2 herds were not admitted to the AMS for at least two hours on three consecutive days. The heart rates of the cows were measured and the collected data evaluated with the HRV program Kubios version 2.1 on the basis of eight parameters (HR, RMSSD, pNN50, SD1, SD2, LF, HF and LF/HF). Furthermore, stress reactions were examined closely via video analysis, milk yield, rumination activity, pedometers, and measurements of cortisol metabolites. In conclusion, it turned out that during the test only some animals suffered from minor stress symptoms when they tried to get into the AMS at their regular milking time but could not be milked because the system was manipulated. However, the stress level during a regular “time-dependent milking rejection” was just as high. So the study comes to the conclusion that the low psychological stress level in the case of a 2-4 hour failure of an AMS does not have any impact on animal welfare and health.

Keywords: dairy cow, heart activity, power cut, stable 4.0

Procedia PDF Downloads 307
680 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs

Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare

Abstract:

The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR can therefore have significant potential to be used in clinical workflows.
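
Once the heart and thorax masks have been segmented, the CTR itself can be approximated as a ratio of maximal horizontal extents. The sketch below uses toy rectangular masks in place of the Attention U-Net output.

```python
# CTR from binary masks: widest horizontal extent of the heart mask divided by
# the widest horizontal extent of the thorax mask. Masks here are toy rectangles.
import numpy as np

def max_horizontal_extent(mask):
    cols = np.where(mask.any(axis=0))[0]          # columns containing any mask pixel
    return cols.max() - cols.min() + 1 if cols.size else 0

h, w = 256, 256
thorax = np.zeros((h, w), bool); thorax[40:220, 30:226] = True     # stand-in thorax mask
heart = np.zeros((h, w), bool);  heart[120:200, 90:210] = True     # stand-in heart mask

ctr = max_horizontal_extent(heart) / max_horizontal_extent(thorax)
print(f"CTR = {ctr:.2f} -> {'cardiomegaly suspected' if ctr > 0.55 else 'within normal limits'}")
```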

Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio

Procedia PDF Downloads 91
679 Influence of HbA1c on Nitric Oxide Level in Patients with Type 2 Diabetes Mellitus

Authors: Dara Kutsyk, Olga Bondarenko, Mariya Sorochka

Abstract:

In the 21st century, type 2 diabetes (T2D) has become a global health and social problem. The goal of treatment for patients with T2D is to prevent the complications of diabetes - macrovascular diseases (heart disease, stroke, and peripheral vascular disease) and microvascular diseases (retinopathy, neuropathy and nephropathy). Nitric oxide (NO) plays an important role in maintaining vascular homeostasis. Loss of NO function is one of the earliest indicators of disease and its progression, especially in patients with T2D. Aim: To compare NO levels between patients with well-controlled and poorly controlled glycaemia in T2D. Methods: The study included 32 patients with T2D. The diagnosis of T2D was confirmed according to the 2015 International Diabetes Federation (IDF) criteria. Patients were divided into two groups: well-controlled glycaemia (HbA1c < 7%) and poorly controlled glycaemia (HbA1c > 7%). The control group consisted of 15 healthy subjects. Results: The NO level in patients with T2D was significantly higher (27.2 ± 3.1 µmol) compared to controls (18.86 ± 0.9 µmol; p < 0.001). A significant difference in NO level was also found between patients with poorly controlled glycaemia (25.9 ± 2.2 µmol) and well-controlled glycaemia (28.7 ± 3.0 µmol; p < 0.01). The study showed a moderate negative correlation between NO level and HbA1c (-0.399; p < 0.05). Conclusions: The production of NO is impaired in patients with T2D, especially in those with poorly controlled glycaemia. As HbA1c increases, serum NO decreases. This could be a key target for the prevention of vascular complications in T2D.
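
The group comparison and correlation described above can be reproduced on any comparable dataset with standard tests; the sketch below uses entirely hypothetical values and common scipy routines (an unpaired t-test and Pearson correlation) purely to illustrate the workflow, and does not reproduce the study's data or necessarily its exact statistical procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical serum NO (umol) and HbA1c (%) values; NOT the study's data.
no_well   = np.array([29.1, 27.8, 30.2, 28.4, 28.9])   # HbA1c < 7%
no_poor   = np.array([25.1, 26.4, 24.8, 26.9, 25.7])   # HbA1c > 7%
hba1c_all = np.array([6.2, 6.5, 6.8, 6.4, 6.6, 7.9, 8.4, 7.6, 9.1, 8.2])
no_all    = np.concatenate([no_well, no_poor])

# Between-group comparison (an unpaired t-test is one common choice).
t, p_group = stats.ttest_ind(no_well, no_poor)
# Linear association between HbA1c and serum NO.
r, p_corr = stats.pearsonr(hba1c_all, no_all)

print(f"group difference: t={t:.2f}, p={p_group:.3f}")
print(f"HbA1c vs NO: r={r:.3f}, p={p_corr:.3f}")
```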

Keywords: type 2 diabetes, glycated hemoglobin, nitric oxide, diabetes mellitus

Procedia PDF Downloads 260
678 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation

Authors: A. Bensaid, T. Mostephaoui, R. Nedjai

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly pronounced. The extent of degradation in the arid region of the Algerian Mecheria department has generated a new situation characterized by reduced vegetation cover, decreased land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36 and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the encroachment of sand dunes on developed (urban) land as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, the study uses a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). The study demonstrates that, under current conditions, urban lands are located in sand transit zones mobilized by winds from the northwest and southwest.
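
As a rough illustration of how bare-sand surfaces can be separated from vegetated ones in multispectral imagery, the sketch below thresholds an NDVI layer computed from two Sentinel-2 bands; the band file names, the threshold value, and the single-index approach are assumptions for demonstration and are much simpler than the dynamic segmentation workflow described in the abstract.

```python
import numpy as np
import rasterio

def sand_mask_from_ndvi(red_band_path, nir_band_path, ndvi_threshold=0.1):
    """Crude bare-sand mask from NDVI: pixels with a very low vegetation signal.

    Illustrative only; the study's semi-automatic dynamic segmentation of very
    high resolution imagery involves far more than a single index threshold.
    """
    with rasterio.open(red_band_path) as red_src, rasterio.open(nir_band_path) as nir_src:
        red = red_src.read(1).astype("float32")
        nir = nir_src.read(1).astype("float32")
        profile = red_src.profile

    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)   # avoid division by zero
    sand = (ndvi < ndvi_threshold).astype("uint8")     # 1 = likely bare sand/soil

    profile.update(dtype="uint8", count=1)
    with rasterio.open("sand_mask.tif", "w", **profile) as dst:
        dst.write(sand, 1)
    return ndvi, sand

# Hypothetical Sentinel-2 band files (B04 = red, B08 = NIR); paths are placeholders.
# sand_mask_from_ndvi("B04.tif", "B08.tif")
```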

Keywords: land development, GIS, segmentation, remote sensing

Procedia PDF Downloads 147
677 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated using 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, and the results are then combined to give a segmentation of the whole 3D image. The proposed methods are modified versions of U-Net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone and tibial bone, the cartilages being the primary components of interest. U-Net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of the modified U-Net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice similarity coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
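
The receptive-field argument for dilated convolutions and the reported Dice metric can be made concrete with a short PyTorch sketch; the five-label output mirrors the abstract, but the channel counts, number of layers, and dilation rates are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Stack of 3x3 convolutions with exponentially increasing dilation.

    Each doubling of the dilation rate widens the receptive field roughly
    exponentially while adding only a fixed number of parameters per layer.
    Channel counts and depth here are illustrative, not the paper's design.
    """
    def __init__(self, in_ch=1, mid_ch=32, n_classes=5):
        super().__init__()
        layers, ch = [], in_ch
        for d in (1, 2, 4, 8):                        # dilation rates 1, 2, 4, 8
            layers += [nn.Conv2d(ch, mid_ch, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True)]
            ch = mid_ch
        layers.append(nn.Conv2d(mid_ch, n_classes, 1))  # per-pixel class logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def dice_coefficient(pred_mask, true_mask, eps=1e-6):
    """Volumetric Dice for one label from boolean masks: 2|A∩B| / (|A| + |B|)."""
    inter = (pred_mask & true_mask).sum().item()
    return (2.0 * inter + eps) / (pred_mask.sum().item() + true_mask.sum().item() + eps)

# Toy usage on one 2D slice: a 1-channel MR slice mapped to 5-class logits.
logits = DilatedBlock()(torch.randn(1, 1, 128, 128))
print(logits.shape)                                   # torch.Size([1, 5, 128, 128])
```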

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 250