Search results for: automatic loom
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 876

666 Study of Human Upper Arm Girth during Elbow Isokinetic Contractions Based on a Smart Circumferential Measuring System

Authors: Xi Wang, Xiaoming Tao, Raymond C. H. So

Abstract:

As one of the most convenient and noninvasive sensing approaches, automatic limb girth measurement has been applied to detect the intention behind human motion from muscle deformation. The validity of this sensing approach has been demonstrated by preliminary research but still needs more fundamental study, especially on kinetic contraction modes. Based on novel fabric strain sensors, a soft and smart limb girth measurement system was developed by the authors’ group, which can measure the limb girth in motion. Experiments were carried out on elbow isometric flexion and elbow isokinetic flexion (biceps’ isokinetic contractions) at 90°/s, 60°/s, and 120°/s for 10 subjects (2 canoeists and 8 ordinary people). After removal of the natural circumferential increments due to elbow position, the joint torque was found not to be uniformly sensitive to the limb circumferential strains but to decline as the elbow joint angle rises, regardless of the angular speed. Moreover, the maximum joint torque was found to be an exponential function of the joint’s angular speed. This research contributes substantially to the application of automatic limb girth measurement during kinetic contractions, and it is useful for predicting the contraction level of voluntary skeletal muscles.
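
The reported relation between maximum joint torque and angular speed lends itself to a simple curve fit. The sketch below, with made-up torque values and an assumed model form T_max = a·exp(b·ω), only illustrates how such an exponential relation could be fitted; it is not the study's data or code.

```python
# Illustrative sketch only: fitting an assumed exponential relation
# T_max = a * exp(b * omega) between peak joint torque and angular speed.
# The sample values below are made up for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def torque_model(omega, a, b):
    """Assumed exponential form of maximum joint torque vs. angular speed."""
    return a * np.exp(b * omega)

omega = np.array([60.0, 90.0, 120.0])   # hypothetical angular speeds (deg/s)
t_max = np.array([52.0, 46.0, 41.0])    # hypothetical peak torques (N*m)

params, _ = curve_fit(torque_model, omega, t_max, p0=(60.0, -0.002))
a, b = params
print(f"fitted a = {a:.2f}, b = {b:.5f}")
print("predicted T_max at 100 deg/s:", torque_model(100.0, a, b))
```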

Keywords: fabric strain sensor, muscle deformation, isokinetic contraction, joint torque, limb girth strain

Procedia PDF Downloads 297
665 Visual Template Detection and Compositional Automatic Regular Expression Generation for Business Invoice Extraction

Authors: Anthony Proschka, Deepak Mishra, Merlyn Ramanan, Zurab Baratashvili

Abstract:

Small and medium-sized businesses receive over 160 billion invoices every year. Since these documents exhibit many subtle differences in layout and text, extracting structured fields such as sender name, amount, and VAT rate from them automatically is an open research question. In this paper, existing work in template-based document extraction is extended, and a system is devised that is able to reliably extract all required fields for up to 70% of all documents in the data set, more than any other previously reported method. Approaches are described for 1) detecting through visual features which template a given document belongs to, 2) automatically generating extraction rules for a given new template by composing regular expressions from multiple components, and 3) computing confidence scores that indicate the accuracy of the automatic extractions. The system can generate templates with as few as one training sample and only requires the ground truth field values instead of detailed annotations such as bounding boxes that are hard to obtain. The system is deployed and used inside commercial accounting software.
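
To make the compositional idea concrete, the following minimal sketch composes an amount-extraction regex from reusable components. The component names, label variants, and number format are assumptions for illustration, not the authors' actual rule set.

```python
# Minimal illustration of composing an extraction regex from reusable parts.
# Component patterns and field labels are assumed examples only.
import re

COMPONENTS = {
    "label":    r"(?:Total|Amount due)",
    "sep":      r"[:\s]*",
    "currency": r"(?:EUR|USD)?\s*",
    "number":   r"(\d{1,3}(?:[.,]\d{3})*[.,]\d{2})",
}

def compose(*keys):
    """Concatenate named components into a single compiled regex."""
    return re.compile("".join(COMPONENTS[k] for k in keys), re.IGNORECASE)

amount_rule = compose("label", "sep", "currency", "number")

invoice_text = "Invoice 2023-001\nAmount due: EUR 1.234,56\nVAT 19%"
match = amount_rule.search(invoice_text)
if match:
    print("extracted amount:", match.group(1))   # -> 1.234,56
```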

Keywords: data mining, information retrieval, business, feature extraction, layout, business data processing, document handling, end-user trained information extraction, document archiving, scanned business documents, automated document processing, F1-measure, commercial accounting software

Procedia PDF Downloads 94
664 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script

Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim

Abstract:

A butterfly valve is a quarter-turn valve that is used to control the flow of a fluid through a section of pipe. Generally, butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants. In particular, butterfly valves with larger diameters find immense application in hydropower plants to control fluid flow. Given the cost and size constraints of running a laboratory setup, large-diameter valves are mostly studied by computational methods, which are a practical and inexpensive solution. For fluid and structural analysis, CFD and FEM software are used, respectively, to perform large-scale valve analyses. To perform these analyses of a butterfly valve, the CAD model has to be recreated and meshed in conventional software for each valve dimension, which is a time-consuming process. To overcome this limitation, a Python script was created to produce the complete pre-processing setup automatically in the Salome software. Specifying the model dimensions directly in the Python code lowers the running time and provides an easier way to perform the analysis of the valve. Hence, in this paper, an attempt was made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using Python code in the pre-processing software, and results are produced.
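
A conceptual sketch of such a parametric pre-processing driver is shown below. The build_geometry() and generate_mesh() helpers are hypothetical placeholders standing in for the actual Salome API calls, which are not reproduced here; only the idea of sweeping valve dimensions and opening angles from Python is illustrated.

```python
# Conceptual sketch only: a parameter-sweep driver of the kind described above.
# build_geometry() and generate_mesh() are hypothetical placeholders, not Salome calls.
from itertools import product

def build_geometry(diameter_mm, opening_angle_deg):
    """Hypothetical helper: would create the butterfly valve CAD model."""
    return {"diameter": diameter_mm, "angle": opening_angle_deg}

def generate_mesh(geometry, element_size_mm):
    """Hypothetical helper: would mesh the geometry for CFD/FEM export."""
    return f"valve_d{geometry['diameter']}_a{geometry['angle']}.med"

diameters = [300, 500, 800]       # valve diameters in mm (example values)
angles = [10, 30, 50, 70, 90]     # disc opening angles in degrees (example values)

for d, a in product(diameters, angles):
    geom = build_geometry(d, a)
    mesh_file = generate_mesh(geom, element_size_mm=5.0)
    print("pre-processing case written:", mesh_file)
```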

Keywords: butterfly valve, flow coefficient, automatic CFD analysis, FSI analysis

Procedia PDF Downloads 204
663 Enhance Engineering Learning Using Cognitive Simulator

Authors: Lior Davidovitch

Abstract:

Traditional training based on static models and case studies is the backbone of most teaching and training programs in engineering education. However, project management learning is characterized by dynamic models that require new and enhanced learning methods. The results of empirical experiments evaluating the effectiveness and efficiency of using a cognitive simulator as a new training technique are reported. The empirical findings are focused on the impact of keeping and reviewing learning history in a dynamic and interactive simulation environment for engineering education. The cognitive simulator for engineering project management learning had two learning-history keeping modes, manual (student-controlled) and automatic (simulator-controlled), as well as a version with no history keeping. A group of industrial engineering students performed four simulation runs divided into three identical simple scenarios and one complicated scenario. The performances of participants running the simulation with the manual history mode were significantly better than those of users running the simulation with the automatic history mode. Moreover, the use of the undo function further enhanced the learning process. The findings indicate an enhancement of engineering students’ learning and decision making when they use the record functionality of the history during their engineering training process. Furthermore, the cognitive simulator as an educational innovation improves students’ learning and training. The practical implications of using simulators in the field of engineering education are discussed.

Keywords: cognitive simulator, decision making, engineering learning, project management

Procedia PDF Downloads 210
662 Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection

Authors: Marrone Silverio Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner, Djamel Fawzi Hadj Sadok

Abstract:

The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing these with a manually annotated dataset, as well as the efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained for the projection dataset F1-Score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-Score of 0.861, an accuracy of 0.932, and an mAP of 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures. We adopted accuracy and F1-Score as metrics for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both the projection and tracking datasets. It reached an accuracy and F1-Score of 0.997 and 0.993, respectively. Similarly, for the tracking dataset, it achieved an accuracy of 0.991 and an F1-Score of 0.981.

Keywords: RJ45, automatic annotation, object tracking, 3D projection

Procedia PDF Downloads 131
661 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision systems are intensely required for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects and is integrated with a 7A6 Series Manipulator for automatic gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the image for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a Convolutional Neural Network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in the Industry 4.0 environment.
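
The contour-based orientation step can be illustrated with standard OpenCV calls. The following sketch, run on a toy mask, shows one way to estimate an in-plane orientation angle from the largest contour via a minimum-area rectangle; it is an assumption about the approach, not the authors' exact code.

```python
# Illustrative sketch (not the authors' pipeline): estimating an object's
# in-plane orientation angle from its contour with OpenCV.
import cv2
import numpy as np

def orientation_from_mask(binary_mask: np.ndarray) -> float:
    """Return the rotation angle (degrees) of the largest contour in a binary mask."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        raise ValueError("no contour found in mask")
    largest = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.minAreaRect(largest)  # min-area bounding rectangle
    return angle

# Toy example: a tilted rectangle drawn into an empty mask
mask = np.zeros((200, 200), dtype=np.uint8)
box = cv2.boxPoints(((100, 100), (120, 40), 30.0)).astype(np.int32)
cv2.fillPoly(mask, [box], 255)
print("estimated orientation angle:", orientation_from_mask(mask))
```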

Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator

Procedia PDF Downloads 206
660 In-Context Meta Learning for Automatic Designing Pretext Tasks for Self-Supervised Image Analysis

Authors: Toktam Khatibi

Abstract:

Self-supervised learning (SSL) includes machine learning models that are trained on one aspect and/or one part of the input to learn other aspects and/or parts of it. SSL models are divided into two categories: pre-text task-based models and contrastive learning ones. Pre-text tasks are auxiliary tasks that learn pseudo-labels, and the trained models are further fine-tuned for downstream tasks. However, one important disadvantage of SSL based on pre-text task solving is the need to define an appropriate pre-text task for each image dataset across a variety of image modalities. Therefore, it is necessary to design an appropriate pretext task automatically for each dataset and each downstream task. To the best of our knowledge, the automatic design of pretext tasks for image analysis has not been considered yet. In this paper, we present a framework based on in-context learning that describes each task based on its input and output data using a pre-trained image transformer. Our proposed method combines the input image and its learned description for optimizing the pre-text task design and its hyper-parameters using meta-learning models. The representations learned from the pre-text tasks are fine-tuned for solving the downstream tasks. We demonstrate that our proposed framework outperforms the compared ones on unseen tasks and image modalities, in addition to its superior performance for previously known tasks and datasets.

Keywords: in-context learning (ICL), meta learning, self-supervised learning (SSL), vision-language domain, transformers

Procedia PDF Downloads 40
659 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice, and so on. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, shape, and others. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. From the statistical analyses comparing both methods, the linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
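
A minimal sketch of the region-growing and morphological clean-up steps is given below, assuming a scikit-image reimplementation (the original tool was written in Matlab). The seed point, tolerance, and footprint sizes are illustrative assumptions; in the described pipeline the seed would come from the SVM detector.

```python
# Illustrative sketch: region growing from a seed followed by morphological clean-up.
# Values (seed, tolerance, sizes) are assumptions, not the authors' parameters.
import numpy as np
from skimage.segmentation import flood
from skimage.morphology import binary_opening, remove_small_objects, disk

def grow_sinus_region(ct_slice: np.ndarray, seed: tuple, tolerance: float = 150.0):
    """Grow a region from a seed voxel and clean it with morphological operators."""
    grown = flood(ct_slice, seed_point=seed, tolerance=tolerance)   # region growing
    cleaned = binary_opening(grown, disk(2))                        # remove thin bridges
    cleaned = remove_small_objects(cleaned, min_size=50)            # drop small false positives
    return cleaned

# Toy slice: a dark air-filled cavity (low HU) inside brighter tissue
slice_hu = np.full((128, 128), 40.0)
slice_hu[40:90, 50:100] = -900.0            # simulated maxillary sinus
mask = grow_sinus_region(slice_hu, seed=(60, 70))
print("segmented area (pixels):", int(mask.sum()))
```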

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 475
658 HPTLC Fingerprint Profiling of Protorhus longifolia Methanolic Leaf Extract and Qualitative Analysis of Common Biomarkers

Authors: P. S. Seboletswe, Z. Mkhize, L. M. Katata-Seru

Abstract:

Protorhus longifolia is known as a medicinal plant that has been used traditionally to treat various ailments such as hemiplegic paralysis, blood-clotting-related diseases, diarrhoea, heartburn, etc. The study reports a High-Performance Thin Layer Chromatography (HPTLC) fingerprint profile of the Protorhus longifolia methanolic extract and its qualitative analysis for gallic acid, rutin, and quercetin. HPTLC analysis was achieved using a CAMAG HPTLC system equipped with a CAMAG automatic TLC sampler 4, a CAMAG Automatic Developing Chamber 2 (ADC2), a CAMAG visualizer 2, a CAMAG Thin Layer Chromatography (TLC) scanner, and visionCATS CAMAG HPTLC software. A mobile phase comprising toluene, ethyl acetate, and formic acid (21:15:3), used for the qualitative analysis of gallic acid, revealed eight peaks, while the mobile phase containing ethyl acetate, water, glacial acetic acid, and formic acid (100:26:11:11), used for the qualitative analysis of rutin and quercetin, revealed six peaks. HPTLC silica gel 60 F254 glass plates (10 × 10) were used as the stationary phase. Gallic acid was detected at Rf = 0.35, while rutin and quercetin were not evident in the extract. Further studies will be performed to quantify gallic acid in Protorhus longifolia leaves and to identify other biomarkers.

Keywords: biomarkers, fingerprint profiling, gallic acid, HPTLC, Protorhus longifolia

Procedia PDF Downloads 107
657 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems with African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset with the Wolof language words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach was adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients were implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound frequency feature spectra, and an image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications associated with both web and mobile platforms.
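
A minimal sketch of this MFCC-plus-CNN pipeline is shown below, assuming librosa for feature extraction and Keras for the network. The input shape, layer sizes, and padding length are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal MFCC + CNN sketch under assumed choices (librosa, Keras, toy layer sizes).
import numpy as np
import librosa
import tensorflow as tf

def audio_to_mfcc(path: str, sr: int = 16000, n_mfcc: int = 40, frames: int = 64):
    """Load a recording and return a fixed-size MFCC 'image' (n_mfcc x frames x 1)."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc = librosa.util.fix_length(mfcc, size=frames, axis=1)   # pad/trim time axis
    return mfcc[..., np.newaxis]

def build_model(n_mfcc: int = 40, frames: int = 64, n_classes: int = 2):
    """Small CNN classifying MFCC spectra into 'yes' / 'no' responses."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu",
                               input_shape=(n_mfcc, frames, 1)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```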

Keywords: automatic speech recognition, interactive voice response, voice response recognition, wolof word classification

Procedia PDF Downloads 78
656 Impacts of Applying Automated Vehicle Location Systems to Public Bus Transport Management

Authors: Vani Chintapally

Abstract:

The spread of inexpensive and miniaturized Global Positioning System (GPS) receivers has led most Automatic Vehicle Location (AVL) systems today to depend solely on satellite-based positioning, as GPS is the most mature implementation of these. This paper presents the characteristics of a proposed system for tracking and analyzing public transport in a typical medium-sized city and contrasts the properties of such a system with those of general-purpose AVL systems. Particular properties of the routes analyzed by the AVL system used for the study of public transport in our work include cyclic vehicle routes, the need for specific performance reports, and so on. This paper deals in particular with vehicle movement prediction and the estimation of station arrival times, combined with automatically generated reports on timetable conformance and other performance measures. Another side of the observed problem is the efficient transfer of data from the vehicles to the control centre. The prevalence of GSM packet data transfer technologies, combined with reduced data transfer costs, has caused today's AVL systems to depend predominantly on packet data services from mobile operators as the communication channel between vehicles and the control centre. This approach raises numerous security issues in this potentially sensitive application field.

Keywords: automatic vehicle location (AVL), prediction of arrival times, AVL security, data services, intelligent transport systems (ITS), map matching

Procedia PDF Downloads 349
655 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English

Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista

Abstract:

The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English, as well as to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with word-for-word Old English-English comparison that provides the Old English segment with inflectional form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information that will be used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as the references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented on Filemaker software. It is based on a concordance and an index to the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words. In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms on an automatic basis, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While the tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending for future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
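
The stem-and-ending lookup that drives such a lemmatiser can be illustrated in a few lines. The sketch below is a simplified toy, not the Norna implementation (which runs on Filemaker); the lexicon entries and the list of endings are made-up examples.

```python
# Toy illustration of lemma assignment by stripping inflectional endings and
# looking the stem up in a lexicon index. Entries and endings are examples only.
INFLECTIONAL_ENDINGS = ["um", "es", "as", "an", "e", "a"]   # illustrative subset
STEM_INDEX = {
    "stan": "stān",      # 'stone'
    "cyning": "cyning",  # 'king'
    "word": "word",      # 'word'
}

def assign_lemma(form: str):
    """Return (lemma, ending) if the form can be resolved, else None."""
    form = form.lower()
    if form in STEM_INDEX:                       # uninflected form
        return STEM_INDEX[form], ""
    for ending in sorted(INFLECTIONAL_ENDINGS, key=len, reverse=True):
        if form.endswith(ending):
            stem = form[: -len(ending)]
            if stem in STEM_INDEX:
                return STEM_INDEX[stem], ending
    return None                                  # left for manual lemmatisation

for token in ["stanas", "cyninges", "wordum", "hwaet"]:
    print(token, "->", assign_lemma(token))
```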

Keywords: corpus linguistics, historical linguistics, old English, parallel corpus

Procedia PDF Downloads 166
654 Effect of Automatic Self Transcending Meditation on Perceived Stress and Sleep Quality in Adults

Authors: Divya Kanchibhotla, Shashank Kulkarni, Shweta Singh

Abstract:

Chronic stress and poor sleep quality reduce mental health and increase the risk of developing depression and anxiety. There is increasing evidence for the utility of meditation as an adjunct clinical intervention for conditions like depression and anxiety. The present study is an attempt to explore the impact of Sahaj Samadhi Meditation (SSM), a category of Automatic Self Transcending Meditation (ASTM), on perceived stress and sleep quality in adults. The study design was a single-group pre-post assessment. The Perceived Stress Scale (PSS) and the Pittsburgh Sleep Quality Index (PSQI) were used in this study. Fifty-two participants filled in the PSS, and 60 participants filled in the PSQI at the beginning of the program (day 0), after two weeks (day 16), and at two months (day 60). Significant pre-post differences in the perceived stress level on Day 0 - Day 16 (p < 0.01; Cohen's d = 0.46) and Day 0 - Day 60 (p < 0.01; Cohen's d = 0.76) clearly demonstrated that by practicing SSM, participants experienced a reduction in perceived stress. The effect size of the intervention observed on the 16th day of assessment was small to medium, but on the 60th day, a medium to large effect size of the intervention was observed. In addition to this, significant pre-post differences in sleep quality on Day 0 - Day 16 and Day 0 - Day 60 (p < 0.05) clearly demonstrated that by practicing SSM, participants experienced an improvement in sleep quality. Compared with the Day 0 assessment, participants demonstrated significant improvement in the quality of sleep on Day 16 and Day 60. The effect size of the intervention observed on the 16th day of assessment was small, but on the 60th day, a small to medium effect size of the intervention was observed. In the current study, we found that after practicing SSM for two months, participants reported a reduction in perceived stress; they felt more confident about their ability to handle personal problems, were able to cope with all the things that they had to do, felt that they were on top of things, and felt less anger. Participants also reported that their overall sleep quality improved; they took less time to fall asleep, had fewer disturbances in sleep, and had less daytime dysfunction due to sleep deprivation. The present study provides clear evidence of the efficacy and safety of non-pharmacological interventions such as SSM in reducing stress and improving sleep quality. Thus, ASTM may be considered a useful intervention to reduce psychological distress in healthy, non-clinical populations, and it can be an alternative remedy for treating poor sleep among individuals and decreasing the use of harmful sedatives.
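
For readers unfamiliar with the effect sizes quoted above, the following sketch shows one common way to compute Cohen's d from pre- and post-intervention scores using the pooled standard deviation. The score arrays are made-up placeholders, not the study data, and the pooled-SD variant is an assumption about the formula used.

```python
# Illustrative computation only: Cohen's d from pre/post scores with pooled SD.
# The generated arrays are placeholders, not data from the study.
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Cohen's d = (mean_pre - mean_post) / pooled SD of the two assessments."""
    n1, n2 = len(pre), len(post)
    pooled_var = ((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1)) / (n1 + n2 - 2)
    return (pre.mean() - post.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
pss_day0 = rng.normal(20, 5, size=52)    # hypothetical PSS scores at baseline
pss_day60 = rng.normal(16, 5, size=52)   # hypothetical PSS scores at two months
print(f"Cohen's d (day 0 vs day 60): {cohens_d(pss_day0, pss_day60):.2f}")
```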

Keywords: automatic self transcending meditation, Sahaj Samadhi meditation, sleep, stress

Procedia PDF Downloads 103
653 The Role of Situational Attribution Training in Reducing Automatic In-Group Stereotyping in Females

Authors: Olga Mironiuk, Małgorzata Kossowska

Abstract:

The aim of the present study was to investigate the influence of Situational Attribution Training on reducing automatic in-group stereotyping in females. The experiment was conducted while controlling for age and level of prejudice. Ninety female participants were randomly assigned to two conditions: the experimental and the control group (each group was also divided into a younger- and an older-aged condition). Participants in the experimental condition were subjected to more extensive training. In the first part of the experiment, the experimental group took part in the first session of Situational Attribution Training, while the control group participated in the Grammatical Training Control. In the second part of the research, both groups took part in the Situational Attribution Training (which was considered the second training session for the experimental group and the first one for the control condition). The training procedure was based on descriptions of ambiguous situations which could be explained using situational or dispositional attributions. The participants’ task was to choose the situational explanation from two alternatives, of which the second presented an explanation based on traits that were either neutral or stereotypically associated with women. Moreover, the experimental group took part in a third training session after a two-day delay, in order to check the persistence of the training effect. The main hypothesis stated that among participants taking part in the more extensive training, automatic in-group stereotyping would be less frequent after the training sessions were finished. The effectiveness of the training was tested by measuring the response time and the correctness of answers: a longer response time for the examples where one of the two possible answers was based on a stereotype trait and a higher correctness of answers were considered proof of the training's effectiveness. As the participants’ level of prejudice was controlled (using the Ambivalent Sexism Inventory), it was also assumed that the training effect would be weaker for participants revealing a higher level of prejudice. The obtained results did not confirm the hypothesis based on the response time: participants from the experimental group responded faster in the case of situations where one of the possible explanations was based on a stereotype trait. However, an interesting observation was made during the analysis of the answers’ correctness: regardless of condition and age group affiliation, participants made more mistakes while choosing the situational explanations when the alternative was based on a stereotypical trait associated with the dimension of warmth. What is more, the correctness of answers was higher in the third training session for the experimental group when the alternative to the situational explanation was based on a stereotype trait associated with the dimension of competence. The obtained results partially confirm the effectiveness of the training.

Keywords: female, in-group stereotyping, prejudice, situational attribution training

Procedia PDF Downloads 153
652 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil production in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods. Some of them have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain and obtain solutions that are comprehensible in the same terms. These proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. Such algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we have decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm just to try to obtain the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We have analyzed the 100 curves they use in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
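
The sketch below shows a toy genetic algorithm of the 'classical, simple' kind described above, allocating a fixed total amount of lift gas across hypothetical wells. The well response curves, population size, and operators are illustrative assumptions, not the well model or data from the cited work.

```python
# Toy GA sketch for gas-lift allocation under a total-gas constraint.
# Well response curves and GA settings are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(42)
N_WELLS, TOTAL_GAS = 6, 10.0                       # total gas available (example units)
A = rng.uniform(80, 120, N_WELLS)                  # hypothetical curve parameters
B = rng.uniform(0.3, 0.8, N_WELLS)

def oil_rate(gas_alloc):
    """Hypothetical diminishing-returns response: sum_i A_i * (1 - exp(-B_i * g_i))."""
    return np.sum(A * (1.0 - np.exp(-B * gas_alloc)))

def repair(individual):
    """Scale an allocation so it respects the total injected-gas constraint."""
    individual = np.clip(individual, 0.0, None)
    total = individual.sum()
    return individual * (TOTAL_GAS / total) if total > 0 else individual

pop = np.array([repair(rng.uniform(0, TOTAL_GAS, N_WELLS)) for _ in range(40)])
for _ in range(200):                               # generations
    fitness = np.array([oil_rate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-20:]]       # truncation selection
    children = []
    for _ in range(len(pop)):
        p1, p2 = parents[rng.integers(20)], parents[rng.integers(20)]
        child = np.where(rng.random(N_WELLS) < 0.5, p1, p2)   # uniform crossover
        child += rng.normal(0, 0.2, N_WELLS)                  # Gaussian mutation
        children.append(repair(child))
    pop = np.array(children)

best = max(pop, key=oil_rate)
print("best allocation:", np.round(best, 2), "-> field oil rate:", round(oil_rate(best), 1))
```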

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 135
651 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China

Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu

Abstract:

Increases in travel fare and station failures have a huge impact on passengers’ travel. The accessibility reliability of Urban Rail Transit (URT) stations under increasing travel fare and station failure is analyzed in this paper. Firstly, passengers’ travel paths are reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm and the Ratio of Station Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are then proposed based on the analysis of passengers’ travel characteristics. Thirdly, station accessibility under different scenarios is measured, and the rate of accessibility change is proposed as the indicator of station accessibility reliability. Finally, the accessibility of Hangzhou metro stations is analyzed with the formulated models. The results show that Jinjiang station and Liangzhu station are the most important and the most convenient stations in the Hangzhou metro, respectively. Station failure has a huge impact on station accessibility, whereas an increase in travel fare does not. Stations on Hangzhou metro Line 1 have relatively worse accessibility reliability, and Fengqi Road station’s accessibility reliability is the weakest. For the Hangzhou metro operations department, constructing new metro lines around Line 1 and preferentially protecting Line 1’s stations can effectively improve the accessibility reliability of the Hangzhou metro.

Keywords: automatic fare collection data, AFC, station’s accessibility reliability, stochastic user equilibrium, urban rail transit, URT

Procedia PDF Downloads 98
650 Population Dynamics and Land Use/Land Cover Change on the Chilalo-Galama Mountain Range, Ethiopia

Authors: Yusuf Jundi Sado

Abstract:

Changes in land use are mostly credited to human actions that result in negative impacts on biodiversity and ecosystem functions. This study aims to analyze the dynamics of land use and land cover changes for sustainable natural resources planning and management on the Chilalo-Galama Mountain Range, Ethiopia. This study used Thematic Mapper (TM) data for 1986 and 2001 and Landsat 8 (OLI) data for 2017. Additionally, data from the Central Statistics Agency on human population growth were analyzed. The Semi-Automatic Classification Plugin (SCP) in QGIS 3.2.3 software was used for image classification. A global positioning system, field observations, and focus group discussions were used for ground verification. Land Use/Land Cover (LU/LC) change analysis was performed using maximum likelihood supervised classification, and changes were calculated for the 1986–2001, 2001–2017, and 1986–2017 periods. The results show that agricultural land increased from 27.85% (1986) to 44.43% and 51.32% in 2001 and 2017, respectively, with overall accuracies of 92% (1986), 90.36% (2001), and 88% (2017). On the other hand, forests decreased from 8.51% (1986) to 7.64% (2001) and 4.46% (2017), and grassland decreased from 37.47% (1986) to 15.22% and 15.01% in 2001 and 2017, respectively. The change matrix indicates that for the years 1986–2017, the largest gain in agricultural land was obtained from grassland. The matrix also shows that shrubland gained land from agricultural land, afro-alpine, and forest land. Population dynamics is found to be one of the major driving forces for the LU/LC changes in the study area.

Keywords: Landsat, LU/LC change, Semi-Automatic classification plugin, population dynamics, Ethiopia

Procedia PDF Downloads 47
649 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems, this approach can be incomplete and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable to the numerical optimization of real systems, since such techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the model must be adapted manually. Therefore, an approach is described that generates models that overcome the aforementioned limitations by not focusing on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for every system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications and with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of the correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute high-precision and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
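
The idea of regressing on products of variables as well as on single variables can be sketched with a standard polynomial-feature expansion. The example below is only an illustration of that idea under a scikit-learn assumption; it is not the described method, which adds real-time adaptation and its own term selection.

```python
# Illustrative sketch: data-driven regression over single variables and their products.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))              # measured inputs x1, x2, x3
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=500)

# A degree-2 expansion adds product terms such as x1*x2 and x2*x3 to the
# regressors, mirroring the series-expansion idea described in the abstract.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

names = model.named_steps["polynomialfeatures"].get_feature_names_out(["x1", "x2", "x3"])
coefs = model.named_steps["linearregression"].coef_
for name, coef in zip(names, coefs):
    if abs(coef) > 0.05:                           # keep only the dominant terms
        print(f"{name}: {coef:+.3f}")
```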

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 370
648 Statistical Feature Extraction Method for Wood Species Recognition System

Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof

Abstract:

Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid the mislabeling of timber, which results in a loss of income for the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor that mimics the experts’ knowledge of wood texture to extract the properties of the pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. Then, a backpropagation neural network is used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts’ interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.

Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images

Procedia PDF Downloads 391
647 Charter versus District Schools and Student Achievement: Implications for School Leaders

Authors: Kara Rosenblatt, Kevin Badgett, James Eldridge

Abstract:

There is a preponderance of information regarding the overall effectiveness of charter schools and their ability to increase academic achievement compared to traditional district schools. Most research on the topic is focused on comparing long- and short-term outcomes, academic achievement in mathematics and reading, and locale (i.e., urban vs. rural). While the lingering unanswered questions regarding effectiveness continue to loom for school leaders, data on charter schools suggest that enrollment increases by 10% annually and that charter schools educate more than 2 million U.S. students across 40 states each year. Given the increasing share of U.S. students educated in charter schools, it is important to better understand possible differences in student achievement, defined in multiple ways, for students in charter schools and for those in Independent School District (ISD) settings in the state of Texas. Data were retrieved from the Texas Education Agency’s (TEA) repository, which includes data organized annually and available on the TEA website. Specific data points and definitions of achievement were based on characterizations of achievement found in the relevant literature. Specific data points included but were not limited to graduation rate, student performance on standardized testing, and teacher-related factors such as experience and longevity in the district. Initial findings indicate some similarities with the current literature on long-term student achievement in English/Language Arts; however, the findings differ substantially from other recent research related to long-term student achievement in social studies. There are also a number of interesting findings related to differences in achievement between students in charters and ISDs and within different types of charter schools in Texas. In addition to the findings, implications for leadership in different settings will be explored.

Keywords: charter schools, ISDs, student achievement, implications for PK-12 school leadership

Procedia PDF Downloads 104
646 Automatic Differentiation of Ultrasonic Images of Cystic and Solid Breast Lesions

Authors: Dmitry V. Pasynkov, Ivan A. Egoshin, Alexey A. Kolchev, Ivan V. Kliouchkin

Abstract:

In most cases, typical cysts are easily recognized by ultrasonography. The specificity of this method for typical cysts reaches 98%, and it is usually considered the gold standard for typical cyst diagnosis. However, all of the following features are necessary to conclude that a lesion is a typical cyst: a clear margin, the absence of internal echoes, and dorsal acoustic enhancement. At the same time, not every breast cyst is typical. This is especially characteristic of protein-containing cysts, which may have significant internal echoes. On the other hand, some solid lesions (predominantly malignant) may have a cystic appearance and may be falsely accepted as cysts. Therefore, we tried to develop an automatic method for the differentiation of cystic and solid breast lesions. Materials and methods. The input data were digital ultrasonography images with 256 gradations of gray (Medison SA8000SE, Siemens X150, Esaote MyLab C). Identification of the lesion on these images was performed in two steps. In the first, the region of interest (or lesion contour) was searched for and selected. Selection of this region is carried out using a sigmoid filter, where the threshold is calculated according to the empirical distribution function of the image brightness and, if necessary, corrected according to the average brightness of the image points that have the highest brightness gradient. In the second step, the selected region was assigned to one of the lesion groups according to the statistical characteristics of its brightness distribution. The following characteristics were used: entropy, coefficients of linear and polynomial regression, quantiles of different orders, the average gradient of brightness, etc. To determine the decisive criterion for belonging to one of the lesion groups (cystic or solid), a training set of these brightness distribution characteristics was obtained separately for benign and malignant lesions. To test our approach, we used a set of 217 ultrasonic images of 107 cystic lesions (including 53 atypical ones that are difficult to differentiate with the naked eye) and 110 solid lesions. All lesions were cytologically and/or histologically confirmed. Visual identification was performed by a trained specialist in breast ultrasonography. Results. Our system correctly distinguished all (107, 100%) typical cysts, 107 of 110 (97.3%) solid lesions, and 50 of 53 (94.3%) atypical cysts. In contrast, with the naked eye it was possible to identify correctly all (107, 100%) typical cysts, 96 of 110 (87.3%) solid lesions, and 32 of 53 (60.4%) atypical cysts. Conclusion. The automatic approach significantly surpasses the visual assessment performed by a trained specialist. The difference is especially large for atypical cysts and hypoechoic solid lesions with a clear margin. These data may have clinical significance.
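
The brightness-distribution characteristics listed above are straightforward to compute once a region of interest has been selected. The sketch below shows one plausible way to derive a few of them (entropy, quantiles, mean gradient, a linear-trend coefficient) with NumPy/SciPy; the feature set and the toy region are illustrative assumptions, not the authors' decisive criterion.

```python
# Illustrative feature sketch for a grayscale region of interest (ROI).
# Feature choices and the random toy ROI are assumptions for demonstration.
import numpy as np
from scipy.stats import entropy

def brightness_features(roi: np.ndarray) -> dict:
    """Return simple statistical descriptors of an 8-bit grayscale ROI."""
    hist, _ = np.histogram(roi, bins=256, range=(0, 256), density=True)
    grad_y, grad_x = np.gradient(roi.astype(float))
    rows_mean = roi.mean(axis=1)                      # brightness profile along depth
    slope, _intercept = np.polyfit(np.arange(len(rows_mean)), rows_mean, deg=1)
    return {
        "entropy": float(entropy(hist + 1e-12)),
        "q25": float(np.quantile(roi, 0.25)),
        "q75": float(np.quantile(roi, 0.75)),
        "mean_gradient": float(np.hypot(grad_x, grad_y).mean()),
        "depth_trend_slope": float(slope),
    }

# Dark, roughly homogeneous patch standing in for a cyst-like ROI
roi = np.random.default_rng(3).integers(0, 60, size=(64, 64)).astype(np.uint8)
print(brightness_features(roi))
```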

Keywords: breast cyst, breast solid lesion, differentiation, ultrasonography

Procedia PDF Downloads 238
645 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most of the existing clustering algorithms need assumptions about the data distribution, the effectiveness of the Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier is then applied to distinguish highway congestion from signalized stop points. The binary classifier uses the length of the cluster to find congestion. The proposed framework shows high accuracy, identifying the stop positions and congestion points correctly in around 99.2% of trials. We show that it is possible, using limited GPS data, to distinguish these two categories with high accuracy.
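
The stop-detection and Delaunay-based grouping steps can be sketched as follows, assuming projected (metre-based) coordinates. The distance and edge-length thresholds, the synthetic track, and the union-find grouping are illustrative assumptions, not the tuned parameters of the study.

```python
# Illustrative sketch: detect stoppage waypoints from consecutive-point distances,
# then group them via Delaunay edges shorter than a threshold (union-find).
import numpy as np
from scipy.spatial import Delaunay

def detect_stops(xy: np.ndarray, max_step_m: float = 2.0) -> np.ndarray:
    """Indices of waypoints whose travelled distance to the next fix is small."""
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return np.where(step < max_step_m)[0]

def cluster_stops(stop_xy: np.ndarray, max_edge_m: float = 30.0) -> np.ndarray:
    """Group stop points using short Delaunay edges; returns a cluster label per point."""
    parent = list(range(len(stop_xy)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tri = Delaunay(stop_xy)
    for simplex in tri.simplices:
        for a, b in [(0, 1), (1, 2), (0, 2)]:
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(stop_xy[i] - stop_xy[j]) < max_edge_m:
                parent[find(i)] = find(j)          # merge the two clusters
    return np.array([find(i) for i in range(len(stop_xy))])

# Synthetic random-walk track standing in for a cleaned GPS trace (metres)
track = np.cumsum(np.random.default_rng(7).normal(0, 5, size=(300, 2)), axis=0)
stops = detect_stops(track)
if len(stops) >= 4:                                # Delaunay needs several non-collinear points
    print("stop clusters found:", len(set(cluster_stops(track[stops]))))
```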

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 243
644 Induced Emotional Empathy and Contextual Factors like Presence of Others Reduce the Negative Stereotypes Towards Persons with Disabilities through Stronger Prosociality

Authors: Shailendra Kumar Mishra

Abstract:

In this paper, we focus on how contextual factors, such as the physical presence of other perceivers, and the induced emotional empathy that then develops towards a person with disabilities may reduce automatic negative stereotypes and the subsequent responses towards that person. In Study 1, we demonstrated that negative attitudes based on negative stereotypes, assessed with the ATDP questionnaire on a five-point Likert scale, are significantly less negative when participants were tested together with a group of perceivers than when tested alone, by applying a 3 (positive, indifferent, and negative attitude levels) x 2 (physical presence of others vs. alone) factorial ANOVA design. In the second study, we demonstrate by regression analysis that in the presence of other perceivers, even in a small group, participants showed more induced emotional empathy, through stronger prosociality, towards a high-distress target such as a person with disabilities than towards other stigmatized persons, such as targets of racial or gender bias. The results thus show that an automatic affective response in the form of induced emotional empathy in the perceiver, together with contextual factors like the presence of other perceivers, automatically activates stronger prosocial norms and egalitarian goals towards physically challenged persons in comparison to other stigmatized persons, such as targets of racial or gender bias. This leads to less negative attitudes and behaviour towards a person with disabilities.

Keywords: contextual factors, high distress target, induced emotional empathy, stronger prosociality

Procedia PDF Downloads 96
643 Effects of a Simulated Power Cut in Automatic Milking Systems on Dairy Cows Heart Activity

Authors: Anja Gräff, Stefan Holzer, Manfred Höld, Jörn Stumpenhausen, Heinz Bernhardt

Abstract:

In view of the increasing quantity of 'green energy' from renewable raw materials and photovoltaic facilities, it is quite conceivable that power supply variations may occur, so that constantly working machines like automatic milking systems (AMS) may break down temporarily. The usage of farm-made energy is steadily increasing in order to keep energy costs as low as possible. As a result, power cuts are likely to happen more frequently. Current work in the framework of the project 'stable 4.0' focuses on possible stress reactions by simulating power cuts of up to four hours on dairy farms. Based on heart activity, it should be determined whether stress on dairy cows increases under these circumstances. In order to simulate a power cut, 12 randomly selected cows from 2 herds were not admitted to the AMS for at least two hours on three consecutive days. The heart rates of the cows were measured, and the collected data were evaluated with the HRV program Kubios Version 2.1 on the basis of eight parameters (HR, RMSSD, pNN50, SD1, SD2, LF, HF, and LF/HF). Furthermore, stress reactions were examined closely via video analysis, milk yield, rumination activity, pedometers, and measurements of cortisol metabolites. In conclusion, it turned out that during the test only some animals suffered from minor stress symptoms when they tried to get into the AMS at their regular milking time but could not be milked because the system was manipulated. However, the stress level during a regular 'time-dependent milking rejection' was just as high. The study therefore comes to the conclusion that the low psychological stress level in the case of a 2-4 hour failure of an AMS does not have any impact on animal welfare and health.

Keywords: dairy cow, heart activity, power cut, stable 4.0

Procedia PDF Downloads 285
642 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs

Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare

Abstract:

The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of the CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], a precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of the CTR can therefore have significant potential to be used in clinical workflows.
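
Given heart and thorax segmentation masks, the ratio itself reduces to a width calculation. The sketch below, with toy masks standing in for model output, shows one minimal way to compute the CTR and apply the 0.55 cut-off; it is illustrative only and not the authors' implementation.

```python
# Minimal CTR sketch from binary segmentation masks (toy masks used here).
import numpy as np

def horizontal_extent(mask: np.ndarray) -> int:
    """Horizontal extent in pixels: leftmost to rightmost foreground column."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    return horizontal_extent(heart_mask) / horizontal_extent(thorax_mask)

# Toy masks standing in for model output
thorax = np.zeros((256, 256), bool)
thorax[60:220, 30:230] = True
heart = np.zeros((256, 256), bool)
heart[120:200, 90:210] = True

ctr = cardiothoracic_ratio(heart, thorax)
print(f"CTR = {ctr:.2f} ->",
      "cardiomegaly suspected" if ctr > 0.55 else "within normal limits")
```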

Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio

Procedia PDF Downloads 61
641 Geographic Information System and Dynamic Segmentation of Very High Resolution Images for the Semi-Automatic Extraction of Sandy Accumulation

Authors: A. Bensaid, T. Mostephaoui, R. Nedjai

Abstract:

A considerable area of Algerian land is threatened by wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mecheria department has generated a new situation characterized by reduced vegetation cover, decreased land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons, based on the numerical processing of LANDSAT images (5, 7, and 8) of three scenes, 197/37, 198/36 and 198/37, for the year 2020. As a second step, we explore the use of geospatial techniques to monitor the progression of sand dunes onto developed (urban) lands as well as the formation of sandy accumulations (dunes, dune fields, nebkhas, barchans, etc.). For this purpose, the study made use of a semi-automatic processing method for the dynamic segmentation of images with very high spatial resolution (SENTINEL-2 and Google Earth). The study demonstrates that urban lands under current conditions are located in sand transit zones mobilized by winds from the northwest and southwest.
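
The semi-automatic segmentation workflow itself is not described in the abstract. Purely as an illustration of how bright sandy surfaces might be delineated in a single reflectance band, the following sketch applies an Otsu threshold and connected-component filtering with scikit-image; it is not the authors' method, and the parameter values are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_bright_surfaces(band, min_area=500):
    """Delineate bright (potentially sandy) surfaces in a 2-D reflectance band.

    band: 2-D array of reflectance values; min_area: minimum region size in pixels.
    """
    mask = band > threshold_otsu(band)      # separate bright sand from darker cover
    regions = label(mask)                   # connected-component labelling
    keep = [r.label for r in regionprops(regions) if r.area >= min_area]
    return np.isin(regions, keep)           # boolean mask of candidate accumulations
```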

Keywords: land development, GIS, segmentation, remote sensing

Procedia PDF Downloads 117
640 Reviving the Ancient Craft of Patteda Anchu Saree Weaving of Karnataka, India

Authors: Hemalatha Jain, M. Vasantha

Abstract:

Patteda Anchu is one of the first varieties of sari, woven centuries ago in Gajendragarh village in the Gadag district of north Karnataka. The sari played a significant role in bringing together socio-cultural aspects of life in ancient days. It was used as a wedding sari for the bride and also by devotees to adorn the goddess Yellamma Saundatti. Traditional Indian arts and crafts were rich in culture and diversity; however, with the onset of liberalisation and the end of the license raj, much traditional Indian artwork is on the verge of extinction today. Patteda Anchu is one example of a traditional art lost to globalisation. The main aim of the study was to document the ancient weaving tradition of the Patteda Anchu and to revive it by exploring the possibility of weaving it as yardage with different product layouts. To accomplish the formulated objectives, an exploratory-cum-diagnostic study was planned. Data were collected through observations and interview schedules during field visits to Gajendragarh village. There are very few weavers still weaving on traditional looms, and many weavers who have moved to weaving other saris or to construction work were interviewed to understand the downfall of the sari. The discussions and interviews conducted with local weavers, shopkeepers, sales agents, the weaving society, NGOs and self-help groups helped to unearth new opportunities to develop products for the local and national markets, to restart the weaving of Patteda Anchu and to expand its market. The handloom art was documented in terms of raw materials, loom set-up, dyeing, types of Patteda Anchu, weaving process and colors through photographs and video recordings, supplemented with notes. Based on the analysis of the feedback gathered, it was recommended to develop products on the handloom without changing the width frame or design of the traditional weaving methods. The weavers, the weavers' society and other cooperative centres also consented to the new product development, which will help sustain the Patteda Anchu.

Keywords: Gajendragarh, patteda Anchu sari, revival of traditional art, weaving, handloom

Procedia PDF Downloads 486
639 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods based on 2D convolutional neural networks for knee cartilage segmentation in 3D MRI. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone and tibial bone; the cartilages are the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization, respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice score coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
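
As a reference for the reported evaluation metric, the sketch below shows a standard per-label Dice coefficient computed between a predicted and a manual label volume. The 0-4 label numbering simply mirrors the five classes listed in the abstract and is otherwise an assumption, as is the function name.

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice coefficient for one tissue label between predicted and manual segmentations."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# pred and target would be 3D label volumes with values 0-4:
# 0 background, 1 femoral cartilage, 2 tibial cartilage, 3 femoral bone, 4 tibial bone
# femoral_dsc = dice_score(pred, target, label=1)
```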

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 224
638 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial purposes. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to show remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting and difficult jargon.
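
The abstract does not name the model architecture used. As a minimal illustration of abstractive summarization with an off-the-shelf sequence-to-sequence model, the sketch below uses the Hugging Face transformers pipeline with a generic pre-trained model; the model choice and length limits are assumptions, not the system described in the paper.

```python
from transformers import pipeline

# Generic pre-trained abstractive model; illustrative choice only.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# A patent description or claims section; very long texts would need to be
# split into chunks that fit the model's input window.
patent_text = "..."

summary = summarizer(
    patent_text,
    max_length=150,   # upper bound on summary length in tokens
    min_length=60,
    do_sample=False,  # deterministic decoding
)[0]["summary_text"]
print(summary)
```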

Keywords: abstractive summarization, deep learning, natural language processing, patent document

Procedia PDF Downloads 96
637 Ensuring Safe Operation by Providing an End-To-End Field Monitoring and Incident Management Approach for Autonomous Vehicles Based on ML/DL SW Stack

Authors: Lucas Bublitz, Michael Herdrich

Abstract:

By achieving the first commercialization approval in San Francisco, the Autonomous Driving (AD) industry has proven the technological maturity of SAE L4 AD systems and the corresponding software and hardware stack. This milestone marks the upcoming phase in the industry, where the focus is now on scaling and supervising larger autonomous vehicle (AV) fleets in different operation areas. This requires an operation framework that organizes and assigns responsibilities to the relevant AV technology and operation stakeholders, from the AV system provider, the Remote Intervention Operator and the MaaS provider to the regulatory and approval authorities. This holistic operation framework consists of technological, procedural, and organizational activities to ensure safe operation of fully automated vehicles. Regarding the supervision of large autonomous vehicle fleets, a major focus is continuous field monitoring. The field monitoring approach must reflect the safety and security criticality of incidents in the field during driving operation. This includes an automatic containment approach, with the overall goal of avoiding safety-critical incidents and reducing downtime caused by malfunctions of the AD software stack. An end-to-end (E2E) field monitoring approach detects critical faults in the field, uses a knowledge-based approach to evaluate their safety criticality, and supports the automatic containment of these E/E faults. Applying such an approach will ensure the scalability of AV fleets, which is determined by the handling of incidents in the field and the continuous regulatory compliance of the technology after extending the Operational Design Domain (ODD) or the function scope via Functions on Demand (FoD) over the entire digital product lifecycle.
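
The knowledge-based criticality evaluation and automatic containment described above could be realized in many ways. The following is a minimal sketch of one possible rule-table design, with hypothetical fault codes, criticality levels, and containment actions that are not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    INFO = 0
    DEGRADED = 1
    SAFETY_CRITICAL = 2

@dataclass
class FieldIncident:
    component: str        # e.g. "perception", "planning", "telemetry"
    fault_code: str
    vehicle_moving: bool

# Hypothetical knowledge base mapping known fault codes to a base criticality
KNOWN_FAULTS = {
    "SENSOR_DROPOUT": Criticality.SAFETY_CRITICAL,
    "PLANNER_TIMEOUT": Criticality.SAFETY_CRITICAL,
    "LOG_UPLOAD_FAILED": Criticality.INFO,
}

def evaluate(incident: FieldIncident) -> Criticality:
    """Knowledge-based criticality rating; unknown faults are escalated conservatively."""
    return KNOWN_FAULTS.get(incident.fault_code, Criticality.DEGRADED)

def contain(incident: FieldIncident) -> str:
    """Derive an automatic containment action from the rated criticality."""
    level = evaluate(incident)
    if level is Criticality.SAFETY_CRITICAL and incident.vehicle_moving:
        return "request minimal-risk manoeuvre and alert the remote operator"
    if level is not Criticality.INFO:
        return "restrict the ODD and schedule diagnosis"
    return "log only"
```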

Keywords: field monitoring, incident management, multicompliance management for AI in AD, root cause analysis, database approach

Procedia PDF Downloads 36