Search results for: image features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6122

4412 Design of Liquid Crystal Based Interface to Study the Interaction of Gram Negative Bacterial Endotoxin with Milk Protein Lactoferrin

Authors: Dibyendu Das, Santanu Kumar Pal

Abstract:

Milk protein lactoferrin (Lf) exhibits potent antibacterial activity due to its interaction with lipopolysaccharide (LPS), a component of the Gram-negative bacterial cell membrane. This paper presents the fabrication of a new liquid crystal (LC) based biosensor to explore the interaction between Lf and LPS. LPS self-assembles at the aqueous/LC interface and orients the interfacial nematic 4-cyano-4’-pentylbiphenyl (5CB) LCs in a homeotropic fashion (exhibiting a dark optical image under a polarized optical microscope). Interestingly, on exposure of Lf to the LPS-decorated aqueous/LC interface, the optical image of the LCs changed from dark to bright, indicating an ordering alteration of the interfacial LCs from a homeotropic to a tilted/planar state. The ordering transition reflects strong binding between Lf and interfacial LPS that, in turn, perturbs the orientation of the LCs. With the help of epifluorescence microscopy, we further confirmed the interfacial LPS-Lf binding event by imaging the presence of FITC-tagged Lf at the LPS-laden aqueous/LC interface. Finally, we investigated the conformational behavior of Lf in solution as well as in the presence of LPS using circular dichroism (CD) spectroscopy and reconfirmed the results with vibrational circular dichroism (VCD) spectroscopy, finding that Lf undergoes a transition from an alpha-helical to a random coil-like structure in the presence of LPS. Taken together, the results described in this paper establish a robust approach to probing the interaction between LPS and Lf through the ordering transitions of LCs at the aqueous/LC interface.

Keywords: endotoxin, interface, lactoferrin, lipopolysaccharide

Procedia PDF Downloads 260
4411 Characteristic Features and Action Mechanism of Some Country Made Pistols

Authors: Ajitesh Pal, Arpan Datta Roy, H. K. Pratihari

Abstract:

The different illegal firearms crudely made by skilled gunsmiths from scrap materials are popularly known as country made firearms. Such firearms, along with improvised ammunition, are clandestinely marketed without any license, at a cheaper price, to extremist groups, criminals, poachers and firearm enthusiasts. As per the National Crime Records Bureau (NCRB), MHA, Govt of India, about 80% of firearm cases are committed with country made/improvised firearms. The ballistic division of the laboratory has examined a good number of such cases. The analysis of firearm cases received for forensic examination revealed that 7.65mm calibre pistols, mostly improvised firearms, are commonly used in firearm-related crime cases. In the present communication, physical parameters and other characteristic features of some 7.65mm calibre pistols are discussed in detail. The detailed study of country made (CM) firearms will help to prepare a database covering the type of material used, the origin of the raw material, and the tools used for inscription. The study also includes establishing the chemistry of the propellants and the head stamp patterns. The database will be helpful as reference material to firearm examiners, researchers, and students pursuing forensic science.

Keywords: improvised pistol, stringent gun law, working mechanism, parameters, database

Procedia PDF Downloads 66
4410 Oral Examination: An Important Adjunct to the Diagnosis of Dermatological Disorders

Authors: Sanjay Saraf

Abstract:

The oral cavity can be the site of early manifestations of mucocutaneous disorders (MD) or the only site of occurrence of these disorders. It can also exhibit oral lesions with simultaneous associated skin lesions. MD involving the oral mucosa commonly present with signs such as ulcers, vesicles and bullae. The unique environment of the oral cavity may modify these signs of disease, making clinical diagnosis an arduous task. In addition, the overlapping signs of various mucocutaneous disorders make clinical diagnosis more intricate. The aim of this review is to present the oral signs of dermatological disorders with common oral involvement and to emphasize their importance in the early detection of systemic disorders. The aim is also to highlight the necessity of oral examination by a dermatologist while examining skin lesions. Prior to the oral examination, it is imperative for dermatologists and dental clinicians to have knowledge of oral anatomy, the impact of various diseases on the oral mucosa, and the characteristic features of various oral mucocutaneous lesions. An initial clinical oral examination may help in the early diagnosis of MD. Failure to identify the oral manifestations may reduce the likelihood of early treatment and lead to more serious problems. This paper reviews the oral manifestations of immune-mediated dermatological disorders with common oral manifestations.

Keywords: dermatological investigations, genodermatosis, histological features, oral examination

Procedia PDF Downloads 351
4409 A Unified Constitutive Model for the Thermoplastic/Elastomeric-Like Cyclic Response of Polyethylene with Different Crystal Contents

Authors: A. Baqqal, O. Abduhamid, H. Abdul-Hameed, T. Messager, G. Ayoub

Abstract:

In this contribution, the effect of crystal content on the cyclic response of semi-crystalline polyethylene is studied over a large strain range. Experimental observations on a high-density polyethylene with 72% crystal content and an ultralow-density polyethylene with 15% crystal content are reported. Cyclic stretching produces a thermoplastic-like response at high crystallinity and an elastomeric-like response at low crystallinity, both characterized by stress-softening, hysteresis and residual strain, whose amounts depend on the crystallinity and the applied strain. Based on the experimental observations, a unified viscoelastic-viscoplastic constitutive model capturing the cyclic response features of polyethylene is proposed. A two-phase representation of the polyethylene microstructure takes into consideration the effective contributions of the crystalline and amorphous phases to the intermolecular resistance to deformation, which is coupled, to capture the strain hardening, to a resistance to molecular orientation. The cyclic response features are captured by introducing evolution laws for the model parameters affected by the microstructure alteration due to cyclic stretching.
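
A minimal, purely illustrative sketch of the kind of two-phase stress decomposition and softening evolution law such a model typically uses is given below; the symbols, the mixing by crystal content, and the saturation function are assumptions for illustration and are not the authors' exact equations.

```latex
% Illustrative two-phase decomposition for a semi-crystalline polymer:
% total stress = intermolecular part (crystalline + amorphous, mixed by the
% crystal content \chi_c) + network part accounting for molecular orientation.
\sigma = \sigma_{\mathrm{inter}} + \sigma_{\mathrm{net}}, \qquad
\sigma_{\mathrm{inter}} = \chi_c\,\sigma_{\mathrm{crystal}} + (1-\chi_c)\,\sigma_{\mathrm{amorphous}}

% Softening-type evolution law for a model parameter S, driven by the accumulated
% plastic strain, with a saturation value depending on crystal content and the
% maximum applied strain (captures stress-softening under cyclic stretching).
\dot{S} = h\,\bigl(S_{\mathrm{sat}}(\chi_c, \varepsilon_{\max}) - S\bigr)\,\dot{\bar{\varepsilon}}^{\,p}
```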

Keywords: cyclic loading unloading, polyethylene, semi-crystalline polymer, viscoelastic-viscoplastic constitutive model

Procedia PDF Downloads 218
4408 Cosmetic Surgery on the Rise: The Impact of Remote Communication

Authors: Bruno Di Pace, Roxanne H. Padley

Abstract:

Aims: The recent increase in remote video interaction has increased the number of requests for teleconsultations with plastic surgeons in private practice (70% in the UK and 64% in the USA). This study investigated the motivations for such an increase and the underlying psychological impact on patients. Method: An anonymous web-based poll of 8 questions was designed and distributed through social networks to patients seeking cosmetic surgery in both Italy and the UK. The questions gathered responses regarding 1. reasons for pursuing cosmetic surgery; 2. the effects of delays caused by the SARS-CoV-2 pandemic; 3. the effects on mood; 4. the influence of video conferencing on body-image perception. Results: 85 respondents completed the online poll. Overall, 68% of respondents stated that seeing themselves more frequently online had influenced their decision to seek cosmetic surgery. The types of surgeries indicated were predominantly to the upper body and face (82%). Delays and reduced access to surgeons during the pandemic were perceived as negatively impacting patients' moods (95%). Body-image perception and self-esteem were lower than in the pre-pandemic period, particularly during lockdown (72%). Patients were more inclined to undergo cosmetic surgery during the pandemic, both due to the wish to improve their “lockdown face” for video conferencing (77%) and due to the benefits of recovering at home while working remotely (58%). Conclusions: Overall, the findings suggest that video conferencing has led to a significant increase in requests for cosmetic surgery, the so-called “Zoom Boom” effect.

Keywords: cosmetic surgery, remote communication, telehealth, zoom boom

Procedia PDF Downloads 175
4407 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, quick response (QR) code payment systems have become popular. Many companies introduce new QR code payment services, and these services compete with each other to increase their number of users. To increase the number of users, it is necessary to grasp how the demographic information, usage information, and value of users differ between services. In this study, we analyze real-world data provided by Nomura Research Institute, including the demographic data of users and information on users' usage of two services: LINE Pay and PayPay. Non-negative Matrix Factorization (NMF) is widely used for analyzing such data and interpreting its features; however, the target data contain missing values. We use EM-algorithm NMF (EMNMF) to complete the unknown values in order to understand the features of the data represented in matrix form. Moreover, for comparing the results of the NMF analysis of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF to analyze the target data. As the interpretation, we show the differences in user features between LINE Pay and PayPay.
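
As a rough illustration of the missing-data aspect, the sketch below runs NMF with multiplicative updates restricted to observed entries, re-estimating the unknown cells from the current factorization at each step, in the spirit of EMNMF; the matrix, rank and iteration count are assumptions, not the study's data or settings.

```python
# Illustrative sketch of NMF on a matrix with missing entries: unobserved cells are
# masked out of the multiplicative updates and implicitly re-estimated by W @ H at
# each iteration. Matrix values, sizes and the rank k are assumptions.
import numpy as np

def masked_nmf(X, k=5, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    M = ~np.isnan(X)                      # mask: True where the entry is observed
    Xf = np.where(M, X, 0.0)              # fill missing cells with 0 for the algebra
    n, m = X.shape
    W = rng.random((n, k)); H = rng.random((k, m))
    for _ in range(n_iter):
        WH = W @ H
        # multiplicative updates restricted to observed entries
        H *= (W.T @ (M * Xf)) / (W.T @ (M * WH) + eps)
        WH = W @ H
        W *= ((M * Xf) @ H.T) / ((M * WH) @ H.T + eps)
    return W, H

# toy usage: rows = users, columns = demographic/usage attributes, NaN = missing
X = np.array([[5.0, 1.0, np.nan],
              [4.0, np.nan, 2.0],
              [np.nan, 1.0, 3.0]])
W, H = masked_nmf(X, k=2)
print(np.round(W @ H, 2))                 # completed matrix estimate
```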

Keywords: data science, non-negative matrix factorization, missing data, quality of services

Procedia PDF Downloads 124
4406 Reconfigurable Device for 3D Visualization of Three Dimensional Surfaces

Authors: Robson da C. Santos, Carlos Henrique de A. S. P. Coutinho, Lucas Moreira Dias, Gerson Gomes Cunha

Abstract:

This article describes the development of an augmented reality 3D display based on the control of servo motors and the projection of images onto the model with the aid of a video projector. Augmented reality is a branch that explores multiple approaches to enrich the real-world view by presenting additional information along with the real scene. The article presents the broad use of electrical, electronic, mechanical and industrial automation for geospatial visualizations, applications in mathematical models with the visualization of functions and 3D surface graphics, and volumetric rendering of data that are currently seen as 2D layers. The device can be applied as a 3D display for the representation and visualization of Digital Terrain Models (DTM) and Digital Surface Models (DSM), for example in the identification of canyons in the marine area of the Campos Basin, Rio de Janeiro, Brazil. It can also be used to visualize regions subject to landslides, as in the Serra do Mar near Angra dos Reis and the mountain (serrana) region cities, both in the State of Rio de Janeiro. From the foregoing, loss of human life and leakage of oil from pipelines buried in these regions may be anticipated in advance. The physical design consists of a table with a 9 x 16 matrix of servo motors, totalling 144 servos; a mesh placed over the servo motors is used for visualization of the models projected by a retroprojector. Each model, after image pre-processing, is sent to a server to be converted and viewed using software developed in the C# programming language.
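
The paper's software is written in C#; the short Python sketch below only mirrors the pre-processing idea: a terrain heightmap is downsampled to the 9 x 16 servo grid and each cell is mapped to a servo angle. The file name, angle range and mapping are assumptions for illustration.

```python
# Illustrative pre-processing sketch (not the paper's C# implementation): a grayscale
# heightmap / DTM raster is reduced to one value per servo and scaled to an angle.
import numpy as np
from PIL import Image

ROWS, COLS = 9, 16                      # servo matrix of the table (144 servos)
MIN_DEG, MAX_DEG = 0, 90                # assumed usable travel of each servo

def heightmap_to_servo_angles(path):
    img = Image.open(path).convert("L") # grayscale heightmap
    small = img.resize((COLS, ROWS))    # one pixel per servo
    h = np.asarray(small, dtype=float) / 255.0
    return MIN_DEG + h * (MAX_DEG - MIN_DEG)

angles = heightmap_to_servo_angles("dtm_campos_basin.png")  # assumed file name
for row in angles:
    print(" ".join(f"{a:5.1f}" for a in row))
```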

Keywords: visualization, 3D models, servo motors, C# programming language

Procedia PDF Downloads 336
4405 Correlation between Funding and Publications: A Pre-Step towards Future Research Prediction

Authors: Ning Kang, Marius Doornenbal

Abstract:

Funding is a very important – if not crucial – resource for research projects. Usually, funding organizations publish a description of the funded research to describe the scope of the funding award. Logically, we would expect research outcomes to align with this funding award. For that reason, we might be able to predict future research topics based on present funding award data. That said, it remains to be shown if and how future research topics can be predicted using funding information. In this paper, we extract funding project information and the abstracts of the papers generated by those projects from the Gateway to Research database as one group, and use papers from the same domains and publication years in the Scopus database as a baseline comparison group. We annotate both the project awards and the papers resulting from the funded projects with linguistic features (noun phrases), and then calculate tf-idf and cosine similarity between these two sets of features. We show that the cosine similarity within the project-generated papers group is higher than in the project-baseline group, and that these two groups of similarities are significantly different. Based on this result, we conclude that funding information correlates with the content of the future research output of the funded project at the topical level. How funding really changes the course of science or of scientific careers remains an elusive question.
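
A minimal sketch of the similarity computation described above: noun-phrase text from a funding award and from paper abstracts is turned into tf-idf vectors and compared with cosine similarity. The example texts are invented placeholders, not Gateway to Research or Scopus records.

```python
# Compare an award description with a project paper and a baseline paper via tf-idf.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

award_phrases = "deep learning, medical imaging, tumour segmentation"
project_paper = "tumour segmentation with deep learning on medical imaging data"
baseline_paper = "survey of blockchain consensus protocols for supply chains"

vec = TfidfVectorizer()
X = vec.fit_transform([award_phrases, project_paper, baseline_paper])

sim_project = cosine_similarity(X[0], X[1])[0, 0]   # award vs. funded-project paper
sim_baseline = cosine_similarity(X[0], X[2])[0, 0]  # award vs. baseline paper
print(f"project: {sim_project:.3f}  baseline: {sim_baseline:.3f}")
```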

Keywords: natural language processing, noun phrase, tf-idf, cosine similarity

Procedia PDF Downloads 242
4404 Performance of On-site Earthquake Early Warning Systems for Different Sensor Locations

Authors: Ting-Yu Hsu, Shyu-Yu Wu, Shieh-Kung Huang, Hung-Wei Chiang, Kung-Chun Lu, Pei-Yang Lin, Kuo-Liang Wen

Abstract:

Regional earthquake early warning (EEW) systems are not suitable for Taiwan, as most destructive seismic hazards arise from inland earthquakes. These are likely to reduce the lead-time provided by regional EEW systems, before a destructive earthquake wave arrives, to practically zero. On the other hand, an on-site EEW system can provide more lead-time in a region closer to an epicenter, since only seismic information from the target site is required. Instead of leveraging the information of several stations, the on-site system extracts some P-wave features from the first few seconds of vertical ground acceleration at a single station and predicts the oncoming earthquake intensity at the same station from these features. Since seismometers can be triggered by non-earthquake events such as a passing truck or other human activities, to reduce the likelihood of false alarms, a seismometer was installed at three different locations on the same site, and the performance of the EEW system for these three sensor locations was compared. The results show that a location on the ground of the first floor of a school building may be a good choice, since false alarms could be reduced and the cost of installation and maintenance is the lowest.
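
A hedged sketch of the on-site idea follows: a few simple features are computed from the first seconds of vertical acceleration after the P-wave trigger and fed to a support vector machine that predicts whether damaging shaking will follow at the same site. The specific features, window length, sampling rate and training data are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

FS = 100                                   # sampling rate (Hz), assumed
WINDOW = 3 * FS                            # first 3 s after the P-wave trigger

def p_wave_features(acc):
    """acc: vertical acceleration samples starting at the trigger."""
    seg = acc[:WINDOW]
    peak = np.max(np.abs(seg))             # peak acceleration
    energy = np.sum(seg ** 2) / FS         # cumulative squared amplitude
    crossings = np.sum(np.diff(np.sign(seg)) != 0) / (len(seg) / FS)  # crossing rate
    return [peak, energy, crossings]

# toy training set: label 1 = damaging intensity followed at the site (simulated)
rng = np.random.default_rng(0)
weak = [p_wave_features(0.01 * rng.standard_normal(WINDOW)) for _ in range(30)]
strong = [p_wave_features(0.10 * rng.standard_normal(WINDOW)) for _ in range(30)]
X = np.array(weak + strong)
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([p_wave_features(0.08 * rng.standard_normal(WINDOW))]))
```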

Keywords: earthquake early warning, on-site, seismometer location, support vector machine

Procedia PDF Downloads 239
4403 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of alignment is to find the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and a discovery of other relationships between them. Besides, identifying non-coding RNAs - which are not translated into proteins - is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them are partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignments. Less attention has been given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N2) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments on different real and simulated datasets illustrate the efficiency of the CompPSA algorithm when compared to other approaches. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
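
The sketch below illustrates the component-based comparison idea under stated assumptions (it is not the published CompPSA algorithm): each structure is a list of components with a few numeric features (position, full length, stem length), and two structures are compared by an O(N2) pass over component pairs using user-chosen feature weights.

```python
from dataclasses import dataclass

@dataclass
class Component:
    position: float      # start index of the component in the sequence
    full_length: float   # total length of the component
    stem_length: float   # length of the paired (stem) region

def component_similarity(a, b, w=(1.0, 1.0, 1.0)):
    # similarity decreases with the weighted feature differences
    diffs = (abs(a.position - b.position),
             abs(a.full_length - b.full_length),
             abs(a.stem_length - b.stem_length))
    return 1.0 / (1.0 + sum(wi * d for wi, d in zip(w, diffs)))

def structure_similarity(s1, s2, w=(1.0, 1.0, 1.0)):
    # best match per component; N*M pairwise comparisons overall
    total = sum(max(component_similarity(c1, c2, w) for c2 in s2) for c1 in s1)
    return total / len(s1)

s1 = [Component(3, 12, 5), Component(20, 8, 3)]
s2 = [Component(4, 11, 5), Component(22, 9, 4)]
print(f"{structure_similarity(s1, s2):.3f}")
```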

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 453
4402 Pathomorphological Features of Lungs from Brown Hares Infected with Parasites

Authors: Mariana Panayotova-Pencheva, Anetka Trifonova, Vassilena Dakova

Abstract:

790 lungs from brown hares (Lepus europaeus L.) from different regions of Bulgaria were investigated during the period 2009-2017. The parasitological status and pathomorphological features of the lungs were recorded. The following parasite species were established: one nematode, Protostrongylus tauricus (7.59% prevalence); one tapeworm, the larva of Taenia pisiformis, Cysticercus pisiformis (3.04% prevalence); and one arthropod, the larva of Linguatula serrata, Pentastomum dentatum (0.89% prevalence). The macroscopic lesions in the lungs differed depending on the causative agents. Infections with C. pisiformis and P. dentatum were accompanied by small, mainly superficial changes in the lungs. Protostrongylid infections were connected with macroscopic changes differing in appearance and burden. In 77.7% of cases they were nodular, and in the rest of the cases diffuse. The consistency of the lesions was compact. In most cases the alterations were grey in colour, rarely dark-red or marble-like. In 91.7% of these cases, they were spread over the apical parts of the large lung lobes. In 36.7% the middle parts of the large lung lobes, and in 26.7% the small lung lobes, were also affected. The small lung lobes were never independently infected.

Keywords: Cysticercus pisiformis, Lepus europaeus, lung lesions, Pentastomum dentatum, Protostrongylus tauricus

Procedia PDF Downloads 209
4401 Melanoma and Non-Melanoma Skin Lesion Classification Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy rate of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms outperformed dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
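
A hedged sketch of a simple ensemble for benign versus malignant dermoscopic images is shown below: two ImageNet-pretrained backbones each receive a small binary head, and their predicted probabilities are averaged. The image size, head design and averaging rule are assumptions; the paper does not specify its exact ensemble.

```python
import tensorflow as tf

def binary_head(backbone):
    # pool the backbone features and predict a single malignancy probability
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(backbone.input, out)

shape = (224, 224, 3)
m1 = binary_head(tf.keras.applications.EfficientNetB0(include_top=False, input_shape=shape))
m2 = binary_head(tf.keras.applications.VGG16(include_top=False, input_shape=shape))
for m in (m1, m2):
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # m.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds: ISIC images

def ensemble_predict(images):
    # average of the member probabilities; a 0.5 threshold gives the final label
    return (m1.predict(images) + m2.predict(images)) / 2.0
```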

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 76
4400 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death in the world. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs, the pre-inflation ECG acquired before any catheter insertion and the occlusion ECG acquired during balloon inflation, are analyzed for each patient. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by grid search and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM and GMM based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
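
The GMM-based outlier test can be sketched as follows: a Gaussian mixture is fitted to the joint distribution of the discriminative ECG features, and a new ECG segment is flagged when its average log-likelihood falls below a threshold chosen for a target false-alarm rate, in a Neyman-Pearson manner. The feature values, component count and false-alarm level below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(500, 3))      # ST/T-derived features, reference state
gmm = GaussianMixture(n_components=3, random_state=0).fit(baseline)

# threshold = empirical quantile of baseline log-likelihoods at the allowed false-alarm rate
alpha = 0.05
threshold = np.quantile(gmm.score_samples(baseline), alpha)

def is_ischemic(segment_features):
    """segment_features: (n_beats, 3) feature rows from one ECG segment."""
    return gmm.score_samples(segment_features).mean() < threshold

shifted = rng.normal(2.5, 1.0, size=(40, 3))         # segment with altered ST/T features
print(is_ischemic(shifted), is_ischemic(baseline[:40]))
```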

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 156
4399 Using Serious Games to Integrate the Potential of Mass Customization into the Fuzzy Front-End of New Product Development

Authors: Michael N. O'Sullivan, Con Sheahan

Abstract:

Mass customization is the idea of offering custom products or services to satisfy the needs of each individual customer while maintaining the efficiency of mass production. Technologies like 3D printing and artificial intelligence have many start-ups hoping to capitalize on this dream of creating personalized products at an affordable price, and well-established companies scrambling to innovate and maintain their market share. However, the majority of them are failing as they struggle to understand one key question - where does customization make sense? Customization and personalization only make sense where the value of the perceived benefit outweighs the cost to implement it. In other words, will people pay for it? Looking at the Kano Model makes it clear that it depends on the product. In products where customization is an inherent need, like prosthetics, mass customization technologies can be highly beneficial. However, for products that already sell as a standard, like headphones, offering customization is likely only an added bonus, and so the product development team must figure out whether the customers' perception of the added value of this feature will outweigh its premium price tag. This can be done through the use of a ‘serious game,’ whereby potential customers are given a limited budget to collaboratively buy and bid on potential features of the product before it is developed. If the group chooses to buy customization over other features, then the product development team should implement it into their design. If not, the team should prioritize the features on which the customers have spent their budget. The level of customization purchased can also be translated to an appropriate production method; for example, the most expensive type of customization would likely be free-form design and could be achieved through digital fabrication, while a lower level could be achieved through short batch production. Twenty-five teams of final-year students from design, engineering, construction and technology tested this methodology when bringing a product from concept through to production specification, and found that it allowed them to confidently decide what level of customization, if any, would be worth offering for their product, and what would be the best method of producing it. They also found that the discussion and negotiations between players during the game led to invaluable insights, and they often decided to play a second game in which they offered customers the option to buy the various customization ideas that had been discussed during the first game.

Keywords: Kano model, mass customization, new product development, serious game

Procedia PDF Downloads 131
4398 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of artificial intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset with the Wolof language words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-frequency cepstral coefficients are implemented. Convolutional neural networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. For voice response classification, the recordings are transformed into sound frequency feature spectra, and the image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications associated with both web and mobile platforms.
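
A compact sketch of this pipeline, under assumed file paths, MFCC settings and network size, is given below: each short recording is converted into an MFCC "image" and a small CNN classifies it as Wolof "yes" or "no".

```python
import numpy as np
import librosa
import tensorflow as tf

def mfcc_image(path, sr=16000, n_mfcc=40, frames=64):
    # load the recording, compute MFCCs and pad/trim to a fixed width
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)
    return m[..., np.newaxis]                              # shape (n_mfcc, frames, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # 1 = "yes", 0 = "no"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# X = np.stack([mfcc_image(p) for p in wav_paths]); model.fit(X, labels, epochs=20)
```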

Keywords: automatic speech recognition, interactive voice response, voice response recognition, wolof word classification

Procedia PDF Downloads 108
4397 Integrated Geophysical Approach for Subsurface Delineation in Srinagar, Uttarakhand, India

Authors: Pradeep Kumar Singh Chauhan, Gayatri Devi, Zamir Ahmad, Komal Chauhan, Abha Mittal

Abstract:

The application of geophysical methods to study the subsurface profile for site investigation is becoming popular globally. These methods are non-destructive and provide an image of the subsurface at shallow depths. The seismic refraction method is one of the most common and efficient methods used for civil engineering site investigations, particularly for determining the seismic velocity of the subsurface layers. The resistivity imaging technique is a geo-electrical method used to image the subsurface, water-bearing zones, bedrock and layer thickness. An integrated approach combining seismic refraction and 2-D resistivity imaging provides a better and more reliable picture of the subsurface. These are economical and less time-consuming field surveys which provide a high-resolution image of the subsurface. The geophysical surveys carried out in this study include the seismic refraction and 2D resistivity imaging methods for the delineation of subsurface strata in different parts of Srinagar, Garhwal Himalaya, India. The aim of this survey was to map the shallow subsurface in terms of geological and geophysical properties, mainly the P-wave velocity, resistivity, layer thickness, and lithology of the area. Both sides of the river Alaknanda, which flows through the centre of the city, have been covered by taking two profiles on each side using both methods. Seismic and electrical surveys were carried out at the same locations to complement each other's results. The seismic refraction survey was carried out using an ABEM TeraLoc 24-channel seismograph, and 2D resistivity imaging was performed using ABEM Terrameter LS equipment. The results show three distinct layers on both sides of the river up to a depth of 20 m: alluvium extending up to 3 m depth, a conglomerate zone lying between depths of 3 m and 15 m, and compacted pebbles and cobbles beyond 15 m. The P-wave velocity in the top layer is in the range of 400 - 600 m/s, in the second layer it varies from 700 - 1100 m/s, and in the third layer it is 1500 - 3300 m/s. The resistivity results show a similar pattern and are in good agreement with the seismic refraction results. The results obtained in this study were validated against an exposed river scar available at one site. The study established the efficacy of geophysical methods for subsurface investigations.

Keywords: 2D resistivity imaging, P-wave velocity, seismic refraction survey, subsurface

Procedia PDF Downloads 250
4396 The Use of Boosted Multivariate Trees in Medical Decision-Making for Repeated Measurements

Authors: Ebru Turgal, Beyza Doganay Erdogan

Abstract:

Machine learning aims to model the relationship between the response and the features. Medical decision-making researchers would like to make decisions about patients' course and treatment by examining repeated measurements over time. The boosting approach is now being used in the machine learning area as an influential tool for these aims. The aim of this study is to show the usage of multivariate tree boosting in this field. The main reason for utilizing this approach in the field of decision-making is the ease with which it handles complex relationships. To show how the multivariate tree boosting method can be used to identify important features and feature-time interactions, we used data collected retrospectively from Ankara University Chest Diseases Department records. The dataset includes repeated PF ratio measurements. The follow-up time is planned for 120 hours. A set of different models is tested. In conclusion, the main idea of classification with a weighted combination of classifiers is a reliable method, which has been demonstrated with simulations several times. Furthermore, time-varying variables are taken into consideration within this concept, and it could be possible to make accurate decisions about regression and survival problems.
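
As a rough Python analogue of this setting (the study itself uses multivariate tree boosting, for which a common implementation is the R package mvtboost), the sketch below treats each patient's repeated measurements at fixed follow-up times as a multivariate response and fits boosted trees per output; the data are simulated and the model choice is an assumption, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_patients, n_features, n_times = 200, 6, 5      # e.g. 5 follow-up points over 120 hours
X = rng.normal(size=(n_patients, n_features))    # baseline clinical features
# simulated repeated responses: each time point depends on the features differently
Y = np.stack([X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_patients)
              for _ in range(n_times)], axis=1)

model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=200, max_depth=3))
model.fit(X, Y)

# per-time-point feature importances hint at feature-time interactions
for t, est in enumerate(model.estimators_):
    print(f"t{t}:", np.round(est.feature_importances_, 2))
```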

Keywords: boosted multivariate trees, longitudinal data, multivariate regression tree, panel data

Procedia PDF Downloads 200
4395 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

Converting the movement intents of a person into commands for action using brain signals such as the electroencephalogram is a brain-computer interface (BCI) system. When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve brain-computer interface (BCI) structures, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on electroencephalography (EEG) are greatly needed. Subject dependency and non-stationarity are two features of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8 to 30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for the purpose of denoising, and then a method based on analysis of variance is used to select more appropriate and informative channels from a large set of channels. After ordering channels based on their efficiencies, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, with the purpose of comparing their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification precision of 97% compared to the other available approaches. The entire set of findings confirms that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses the existing methods.
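
A hedged sketch of the channel-selection core of this pipeline is given below: channels are scored with a one-way ANOVA F-test on a per-channel feature (here simply the log-variance of the band-passed trial), then added one at a time in a sequential forward search while an SVM is cross-validated. The data shapes and the per-channel feature are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 120, 22, 500
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))   # band-passed EEG trials
y = rng.integers(0, 2, n_trials)                                  # left vs. right imagery

feats = np.log(X_raw.var(axis=2))            # one feature per channel per trial
F, _ = f_classif(feats, y)                   # ANOVA score per channel
ranked = np.argsort(F)[::-1]                 # most informative channels first

selected, best = [], -np.inf
for ch in ranked[:10]:                       # sequential forward selection over top channels
    candidate = selected + [ch]
    score = cross_val_score(SVC(kernel="rbf"), feats[:, candidate], y, cv=10).mean()
    if score > best:                         # keep the channel only if it helps
        selected, best = candidate, score
print("selected channels:", selected, "cv accuracy:", round(best, 3))
```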

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 40
4394 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets

Authors: Basiru Amuneni

Abstract:

Astronomy is one domain with a rise in data. Traditional tools for data management have been employed in the quest for knowledge discovery. However, these traditional tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is the use of scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of virtual reality (VR) systems and game engines to visualize multiwavelength datasets. Game engines are primarily used for developing video games; however, their advanced graphics can be exploited for scientific visualization, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times and at different electromagnetic wavelengths to give a more comprehensive picture of the physical characteristics of the galaxy. Visualising this in an immersive environment would be more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity game engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails the act of ‘using design as a research method or technique’. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation and VR display. The FITS data format cannot be read by the Unity game engine directly, so a DLL (CSHARPFITS) which provides native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity 3D game engine. The cleaned FITS images are input to the galaxy modeller pipeline phase, which has a pre-processing script that extracts pixels, galaxy world positions, and colour maps from the FITS image pixels. The user can visualise galaxy images in different light bands, control the blend of the image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools to realise complex workflows for public outreach and possibly scientific work, with increased scalability, near real-time interactivity, and ease of access. The application is presented in an immersive environment and can use all commercially available headsets built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in/out, and change the colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.

Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality

Procedia PDF Downloads 83
4393 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, machine learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
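
One of the recurrent models compared in the paper can be sketched as below: clinical free text is tokenised, embedded and passed through an LSTM to predict whether a case belongs to a frequent-attender (outlier) patient. The vocabulary size, sequence length and layer sizes are illustrative assumptions.

```python
import tensorflow as tf

VOCAB, MAXLEN = 20000, 200

vectorize = tf.keras.layers.TextVectorization(max_tokens=VOCAB, output_sequence_length=MAXLEN)
# vectorize.adapt(train_texts)   # train_texts: list of de-identified triage notes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,), dtype=tf.string),
    vectorize,
    tf.keras.layers.Embedding(VOCAB, 128, mask_zero=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # 1 = likely frequent attender
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
# model.fit(train_texts, train_labels, validation_split=0.1, epochs=5)
```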

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 123
4392 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is the most fundamental and basic task in almost all natural language processing. In natural language processing, the problem of providing a large amount of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the unavailability of a tagged corpus is the bottleneck problem for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications. Unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper explicates the development of an unsupervised part-of-speech tagger using K-Means clustering for the Amharic language, since a large amount of raw text is produced in day-to-day activities. In the development of the tagger, the following procedures are followed. First, the unlabeled data (raw text) is divided into 10 folds and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes the word frequency and the syntactic and morphological features of a word. The third phase is clustering. Among different clustering algorithms, K-Means is selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, which deals with looking at each cluster carefully; the most common tag is then assigned to the group. This study finds two features that are capable of distinguishing one part of speech from another, namely the morphological feature and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features that are not included in this study, such as semantics-related information. Finally, based on the experimental results, the performance of the system achieves a maximum of 81% accuracy.
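
A toy sketch of the cluster-then-map idea follows: each word type gets a small feature vector (frequency, a crude suffix indicator, average relative position in the sentence), words are clustered with K-Means, and each cluster is labelled with the most common tag of a few seed words. The features, seed lexicon and English toy corpus are assumptions for illustration, not the Amharic setup.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

tokenised = [["the", "dog", "barked"], ["a", "cat", "slept"], ["the", "cat", "barked"]]
freq, pos_sum, count = Counter(), Counter(), Counter()
for sent in tokenised:
    for i, w in enumerate(sent):
        freq[w] += 1
        pos_sum[w] += i / (len(sent) - 1)   # relative position in the sentence
        count[w] += 1

words = sorted(freq)
X = np.array([[freq[w],
               pos_sum[w] / count[w],
               1.0 if w.endswith("ed") else 0.0]   # toy morphological feature
              for w in words])

k = 3
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

seeds = {"the": "DET", "dog": "NOUN", "barked": "VERB"}   # tiny seed lexicon for mapping
cluster_tag = {}
for c in range(k):
    tags = [seeds[w] for w, cl in zip(words, clusters) if cl == c and w in seeds]
    cluster_tag[c] = Counter(tags).most_common(1)[0][0] if tags else "UNK"
print({w: cluster_tag[c] for w, c in zip(words, clusters)})
```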

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 440
4391 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images covering diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are used to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
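
A minimal U-Net-style sketch is shown below; the depth, filter counts and input size are assumptions and are far smaller than a production model for breast ultrasound masks, but the contracting path, skip connections and expansive path follow the architecture described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def tiny_unet(size=128):
    inp = layers.Input((size, size, 1))
    c1 = conv_block(inp, 16); p1 = layers.MaxPooling2D()(c1)       # contracting path
    c2 = conv_block(p1, 32); p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 64)                                          # bottleneck
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)               # expansive path + skip
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)             # pixel-wise mass mask
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(ultrasound_images, mass_masks, epochs=30)  # images scaled to [0, 1]
```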

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 78
4390 Designing an Effective Accountability Model for Islamic Azad University Using the Qualitative Approach of Grounded Theory

Authors: Davoud Maleki, Neda Zamani

Abstract:

The present study aims at exploring an effective accountability model for Islamic Azad University using the qualitative approach of grounded theory. The data of this study were obtained from semi-structured interviews with 25 professors and scholars of Islamic Azad University of Tehran who were selected by the theoretical sampling method. In the data analysis, the stepwise method and the analytical methods of Strauss and Corbin (1992) were used. After identification of the core component (balanced response to stakeholders' needs), it was used to bring the categories together through expressions and ideas representing the relationships between the core component and the subcomponents. Finally, the revealed components were categorized into the six dimensions of the paradigm model, with the relationships among them: causal conditions (7 components), the core component (balanced response to stakeholders' needs), strategies (5 components), environmental conditions (5 components), intervening features (4 components), and consequences (3 components). The research findings present an exploratory model describing the relationships between causal conditions, the core component, accountability strategies, environmental conditions, university environmental features, and consequences.

Keywords: accountability, effectiveness, Islamic Azad University, grounded theory

Procedia PDF Downloads 81
4389 Hope in the Ruins of 'Ozymandias': Reimagining Temporal Horizons in Felicia Hemans' 'The Image in Lava'

Authors: Lauren Schuldt Wilson

Abstract:

Felicia Hemans’ memorializing of the unwritten lives of women and the consequent allowance for marginalized voices to remember and be remembered has been considered by many critics in terms of ekphrasis and elegy, terms which privilege the question of whether Hemans’ poeticizing can represent lost voices of history or only her poetic expression. Amy Gates, Brian Elliott, and others point out Hemans’ acknowledgement of the self-projection necessary for imaginatively filling the absences of unrecorded histories. Yet, few have examined the complex temporal positioning Hemans inscribes in these moments of self-projection and imaginative historicizing. In poems like ‘The Image in Lava,’ Hemans maps not only a lost past, but also a lost potential future onto the image of a dead infant in its mother’s arms, the discovery and consideration of which moves the imagined viewer to recover and incorporate the ‘hope’ encapsulated in the figure of the infant into a reevaluation of national time embodied by the ‘relics / Left by the pomps of old.’ By examining Hemans’ acknowledgement and response to Percy Bysshe Shelley’s ‘Ozymandias,’ this essay explores how Hemans’ depictions of imaginative historicizing open new horizons of possibility and reevaluate temporal value structures by imagining previously undiscovered or unexplored potentialities of the past. Where Shelley’s poem mocks the futility of national power and time, this essay outlines Hemans’ suggestion of alternative threads of identity and temporal meaning-making which, regardless of historical veracity, exist outside of and against the structures Shelley challenges. Counter to previous readings of Hemans’ poem as celebration of either recovered or poetically constructed maternal love, this essay argues that Hemans offers a meditation on sites of reproduction—both of personal reproductive futurity and of national reproduction of power. This meditation culminates in Hemans’ gesturing towards a method of historicism by which the imagined viewer reinvigorates the sterile, ‘shattered visage’ of national time by forming temporal identity through the imagining of trans-historical hope inscribed on the infant body of the universal, individual subject rather than the broken monument of the king.

Keywords: futurity, national temporalities, reproduction, revisionary histories

Procedia PDF Downloads 161
4388 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques

Authors: Chandu Rathnayake, Isuri Anuradha

Abstract:

Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the decision tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the random forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the ResNet50 pre-trained model. CNNs are well suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
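
A hedged sketch of the two-branch idea follows: a random forest scores the structured symptom answers while a ResNet50-based CNN scores the chest X-ray, and the two probabilities are fused into one prediction. The feature names, fusion weights and simulated data are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# symptom branch: binary answers such as cough, fever, chest pain, smoking history
rng = np.random.default_rng(0)
X_sym = rng.integers(0, 2, size=(300, 8))
y = rng.integers(0, 2, size=300)                 # 1 = lung disease present (simulated)
symptom_clf = RandomForestClassifier(n_estimators=200).fit(X_sym, y)

# image branch: ImageNet-pretrained ResNet50 with a small binary head
base = tf.keras.applications.ResNet50(include_top=False, input_shape=(224, 224, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
img_model = tf.keras.Model(base.input, tf.keras.layers.Dense(1, activation="sigmoid")(x))
img_model.compile(optimizer="adam", loss="binary_crossentropy")
# img_model.fit(xray_images, y_images, epochs=5)  # X-ray tensors scaled to [0, 1]

def predict(symptoms, xray):
    p_sym = symptom_clf.predict_proba(symptoms.reshape(1, -1))[0, 1]
    p_img = float(img_model.predict(xray[np.newaxis], verbose=0)[0, 0])
    return 0.5 * p_sym + 0.5 * p_img             # simple equal-weight fusion
```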

Keywords: CNN, random forest, decision tree, machine learning, deep learning

Procedia PDF Downloads 71
4387 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object Relational Database Management System (ORDBMS) that has been in existence for a while. Despite the superior features that it packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on provisioning a better development environment for PostgreSQL in order to increase its utilization and elucidate its importance. PostgreSQL is also known to be the world's most elementary SQL-compliant open source ORDBMS. Yet users have not turned to PostgreSQL, partly because it is still relatively little known and partly because of the complexity of its persistently textual environment for an introductory user. Simply stated, there is a dire need for an easy way to help users comprehend the procedures and standards by which databases are created, tables and the relationships among them are defined, and queries and their flow based on conditions are manipulated in PostgreSQL, so that the community adopts PostgreSQL at an increased rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community is hesitant to migrate to PostgreSQL's environment is carried out. These findings are modulated and tailored based on the scope and the constraints discovered. The research proposes a system that will serve as a design platform as well as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on conjuring viable solutions that analyze a user's cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features offered by PostgreSQL over other existing systems, in order to convey the importance and simplicity it offers to a hesitant user.

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 276
4386 Linguistic Features for Sentence Difficulty Prediction in Aspect-Based Sentiment Analysis

Authors: Adrian-Gabriel Chifu, Sebastien Fournier

Abstract:

One of the challenges of natural language understanding is to deal with the subjectivity of sentences, which may express opinions and emotions that add layers of complexity and nuance. Sentiment analysis is a field that aims to extract and analyze these subjective elements from text, and it can be applied at different levels of granularity, such as document, paragraph, sentence, or aspect. Aspect-based sentiment analysis is a well-studied topic with many available data sets and models. However, there is no clear definition of what makes a sentence difficult for aspect-based sentiment analysis. In this paper, we explore this question by conducting an experiment with three data sets: ”Laptops”, ”Restaurants”, and ”MTSC” (Multi-Target-dependent Sentiment Classification), and a merged version of these three datasets. We study the impact of domain diversity and syntactic diversity on difficulty. We use a combination of classifiers to identify the most difficult sentences and analyze their characteristics. We employ two ways of defining sentence difficulty. The first one is binary and labels a sentence as difficult if the classifiers fail to correctly predict the sentiment polarity. The second one is a six-level scale based on how many of the top five best-performing classifiers can correctly predict the sentiment polarity. We also define 9 linguistic features that, combined, aim at estimating the difficulty at sentence level.
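
The six-level difficulty definition can be sketched directly from the predictions of the top five classifiers, as below; the predictions are made up, and treating "all five wrong" as the binary "difficult" label is one possible reading of the binary definition, stated here as an assumption.

```python
import numpy as np

# rows = the five best-performing classifiers, columns = sentences
predictions = np.array([
    ["pos", "neg", "neu", "pos"],
    ["pos", "neg", "pos", "pos"],
    ["pos", "pos", "neu", "pos"],
    ["pos", "neg", "neg", "pos"],
    ["pos", "neg", "neu", "neg"],
])
gold = np.array(["pos", "neg", "neu", "pos"])

errors = (predictions != gold).sum(axis=0)        # 0 = easiest ... 5 = hardest
binary_difficult = errors == predictions.shape[0] # assumed reading: all classifiers fail
print(errors.tolist(), binary_difficult.tolist())
```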

Keywords: sentiment analysis, difficulty, classification, machine learning

Procedia PDF Downloads 75
4385 Comprehensive Review of Ultralightweight Security Protocols

Authors: Prashansa Singh, Manjot Kaur, Rohit Bajaj

Abstract:

The proliferation of wireless sensor networks and Internet of Things (IoT) devices in the quickly changing digital landscape has highlighted the urgent need for strong security solutions that can handle these systems’ limited resources. A key solution to this problem is the emergence of ultralightweight security protocols, which provide strong security features while respecting the strict computational, energy, and memory constraints imposed on these kinds of devices. This in-depth analysis explores the field of ultralightweight security protocols, offering a thorough examination of their evolution, salient features, and the particular security issues they resolve. We carefully examine and contrast different protocols, pointing out their advantages and disadvantages as well as the compromises between resource limitations and security resilience. We also study these protocols’ application domains, including the Internet of Things, RFID systems, and wireless sensor networks, to name a few. In addition, the review highlights recent developments and advancements in the field, pointing out new trends and possible avenues for future research. This paper aims to be a useful resource for researchers, practitioners, and developers, guiding the design and implementation of safe, effective, and scalable systems in the Internet of Things era by providing a comprehensive overview of ultralightweight security protocols.

Keywords: wireless sensor network, machine-to-machine, MQTT broker, server, ultralightweight, TCP/IP

Procedia PDF Downloads 69
4384 Features of Calculating Structures for Frequent Weak Earthquakes

Authors: M. S. Belashov, A. V. Benin, Lin Hong, Sh. Sh. Nazarova, O. B. Sabirova, A. M. Uzdin

Abstract:

The features of calculating structures for the action of weak earthquakes are analyzed. Earthquakes with recurrence periods of 30 years and 50 years are considered. In the first case, the structure should operate normally without damage after the earthquake. In the second case, damage is allowed provided it does not affect the possibility of operating the structure. Three issues are emphasized: setting the elastic and damping characteristics of reinforced concrete, formalization of the limit states, and combinations of loads. The dependence of damping on the reinforcement coefficient is estimated. When evaluating the limit states, in addition to calculations for crack resistance and strength, a human factor, i.e., the possibility of panic among people, was considered. To avoid it, it is proposed to limit the floor-by-floor velocity level in certain octave ranges. Proposals have been developed for estimating the coefficients of combination of various loads with the seismic one. As an example, the coefficients of combination of seismic and ice loads are estimated. It is shown that for strong actions the combination coefficients for different regions turn out to be close, while for weak actions they may differ.

Keywords: weak earthquake, frequent earthquake, damage, limit state, reinforcement, crack resistance, strength resistance, a floor-by-floor velocity, combination coefficients

Procedia PDF Downloads 81
4383 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser

Authors: Guanqiao Wang, Hongyang Yu

Abstract:

There is a lot of repetitive work in the traditional construction industry. Replacing these repetitive manual tasks with robots can significantly improve production efficiency. Therefore, robots appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for positioning accuracy are very high. Traditional indoor robots mainly use radio-frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot needs to be positioned closer to the wall for wall plastering, so the requirements for construction positioning accuracy are higher, and traditional navigation and positioning methods have a large error; without the exact position, the robot cannot plaster the wall, or the error in plastering the wall is large. A new positioning method is proposed which is assisted by a line laser and uses image processing to refine the result of the traditional positioning. In actual work, filtering, edge detection, the Hough transform and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the reference value, and the robot is moved or rotated to complete the positioning. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method.
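
A hedged sketch of this image-processing chain is given below: the camera frame is filtered, edges are detected, and a Hough transform locates the projected laser line, whose position is compared with a reference value to derive a correction. The thresholds, reference position and file name are assumptions for illustration.

```python
import cv2
import numpy as np

REFERENCE_X = 320          # expected column of the laser line in the image (assumed)

def laser_line_offset(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)            # filtering step
    edges = cv2.Canny(blurred, 50, 150)                    # edge detection
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    # keep the most vertical segment and use its mean column as the line position
    best = max(lines[:, 0, :], key=lambda l: abs(l[3] - l[1]))
    x_line = (best[0] + best[2]) / 2.0
    return x_line - REFERENCE_X                            # sign tells which way to move

frame = cv2.imread("wall_with_laser.png")                  # assumed example image
if frame is not None:
    print("offset (pixels):", laser_line_offset(frame))
```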

Keywords: indoor plastering robot, navigation, precise positioning, line laser, image processing

Procedia PDF Downloads 142