Search results for: synthetic dataset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2210

1430 Alkaloid Levels in Experimental Lines of Ryegrass in Southern Chile

Authors: Leonardo Parra, Manuel Chacón-Fuentes, Andrés Quiroz

Abstract:

One of the most important factors in beef and dairy production worldwide, including in Chile, is the correct choice of cultivars or mixtures of forage grasses and legumes to ensure high yields and grassland quality. However, a major problem is the poor persistence of the grasses as a result of the action of both hypogeous and epigean pests. The insect pest complex associated with grassland includes white grubs (Hylamorpha elegans, Phytoloema herrmanni), blackworm (Dalaca pallens) and Argentine stem weevil (Listronotus bonariensis). In Chile, the principal strategy for controlling these pests is chemical control through synthetic insecticides; however, the underground feeding habits of the larvae and the flight activity of the adults make this method uneconomic. Furthermore, due to problems including environmental degradation, development of resistance and chemical residues, there is worldwide interest in alternative, environmentally friendly pest control methods. In this context, recent years have seen increasing interest in the role of endophytic fungi in controlling epigean and hypogeous pests. Endophytes of ryegrass (Lolium perenne) establish a biotrophic relationship with the host, defined as a mutualistic symbiosis. The plant-fungus association produces a “cocktail of alkaloids” in which peramine is the main toxic substance present in endophyte-infected ryegrass and is responsible for reducing damage by L. bonariensis. In the last decade, few studies have evaluated the effectiveness of new endophyte-carrying ryegrass cultivars in controlling insect pests. Therefore, the aim of this research was to evaluate the content of the alkaloids peramine and lolitrem B in new experimental lines of ryegrass that could be used in grasslands of southern Chile.
For this purpose, during 2016, ryegrass plants of six experimental lines and two commercial cultivars sown at the Instituto de Investigaciones Agropecuarias Carrillanca (Vilcún, Chile) were collected and subjected to chemical extraction to identify and quantify peramine and lolitrem B by high-performance liquid chromatography (HPLC). The results indicated that the experimental lines EL-1 and EL-3 had a higher content of peramine (0.25 and 0.43 ppm, respectively) than of lolitrem B (0.061 and 0.19 ppm, respectively). Furthermore, the highest contents of lolitrem B were detected in EL-4 and the commercial cultivar Alto (positive control), with 0.08 and 0.17 ppm, respectively. Peramine and lolitrem B were not detected in the cultivar Jumbo (negative control). These results suggest that EL-3 has potential as a future cultivar because of its high content of peramine, the alkaloid responsible for controlling insect pests. However, its actual effect on the insect complex attacking ryegrass grasslands remains to be evaluated. The information obtained in this research could be used to improve control strategies against hypogeous and epigean grassland pests in southern Chile and to reduce the use of synthetic pesticides.

Keywords: HPLC, Lolitrem B, peramine, pest

Procedia PDF Downloads 242
1429 Radical Web Text Classification Using a Composite-Based Approach

Authors: Kolade Olawande Owoeye, George R. S. Weir

Abstract:

The spread of terrorist and extremist activity on the internet has become a major threat to governments and national security because of its potential dangers, necessitating intelligence gathering via the web and real-time monitoring of potential websites for extremist activity. However, manual classification of such content is impractical and time-consuming. In response to this challenge, an automated classification system based on a composite technique was developed. This is a computational framework that combines semantic and syntactic features of the textual content of a web page. We applied the framework to a set of extremist webpages that had previously been classified manually. We then built a classification model on the data using the J48 decision-tree algorithm to measure how well each page could be assigned to its appropriate class. Compared with other state-of-the-art methods, our approach achieved a 96% success rate in classifying the webpages when matched against the manual classification.
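J48 is the Weka implementation of the C4.5 decision-tree learner, which grows a tree by repeatedly choosing the attribute split with the highest information gain. The following is a minimal sketch of that selection criterion only, not the authors' pipeline; the boolean page features and labels are invented for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(samples, labels):
    """Pick the boolean feature index whose split gives the largest
    information gain -- the core criterion behind C4.5/J48."""
    base = entropy(labels)
    best, best_gain = None, -1.0
    for f in range(len(samples[0])):
        yes = [l for s, l in zip(samples, labels) if s[f]]
        no = [l for s, l in zip(samples, labels) if not s[f]]
        if not yes or not no:
            continue  # split separates nothing
        rem = (len(yes) * entropy(yes) + len(no) * entropy(no)) / len(labels)
        gain = base - rem
        if gain > best_gain:
            best, best_gain = f, gain
    return best, best_gain

# Hypothetical boolean features per page: [mentions_violence, has_forum, long_text]
pages = [(1, 0, 1), (1, 1, 1), (0, 1, 0), (0, 0, 1)]
labels = ["extremist", "extremist", "benign", "benign"]
feat, gain = best_split(pages, labels)
print(feat, round(gain, 3))  # → 0 1.0 (feature 0 separates the classes perfectly)
```

A real tree recurses on each branch until leaves are pure; this shows only the root-split decision.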

Keywords: extremist, web pages, classification, semantics, posit

Procedia PDF Downloads 145
1428 Direct Palladium-Catalyzed Selective N-Allylation of 2,3-Disubstituted Indoles with Allylic Alcohols in Water

Authors: Bai-Jing Peng, Shyh-Chyun Yang

Abstract:

Organic reactions in water have recently attracted much attention, not only because unique reactivity is often observed in water but also because water is a safe and economical substitute for conventional organic solvents. Thus, the development of environmentally safe, atom-economical reactions in water is one of the most important goals of synthetic chemistry. Recent papers have documented renewed interest in the use of allylic substrates in the synthesis of new C−C, C−N, and C−O bonds. We have reported our attempts and some successful applications of a process involving C−O bond cleavage catalyzed by palladium or platinum complexes in water. Because of the importance of heterocyclic indole derivatives, much effort has been directed toward the development of methods for functionalization of the indole nucleus at the N1 site. In our research, the palladium-catalyzed N-allylation of 2,3-disubstituted indoles with allylic alcohols was investigated under different conditions. Herein, we establish a simple, convenient, and efficient method, which affords high yields of allylated indoles.

Keywords: palladium-catalyzed, allylic alcohols, indoles, water, allylation

Procedia PDF Downloads 238
1427 Surfactant Improved Heavy Oil Recovery in Sandstone Reservoirs by Wettability Alteration

Authors: Rabia Hunky, Hayat Kalifa, Bai

Abstract:

The wettability of carbonate reservoirs has been widely recognized as an important parameter in oil recovery by flooding technology, and many surfactants have been studied for this application. However, wettability alteration by surfactants in sandstone reservoirs has been poorly studied. In this paper, our recent study of the relationship between rock surface wettability and cumulative oil recovery for sandstone cores is reported. We found good agreement between wettability and oil recovery. The nonionic surfactants Tomadol® 25-12 and Tomadol® 45-13 are very effective in altering the wettability of sandstone core surfaces from highly oil-wet to water-wet conditions. In spontaneous imbibition tests, interfacial tension measurements and contact angle measurements, these two surfactants exhibited the highest recovery of the synthetic oil made with heavy oil. Based on these experimental results, we further conclude that contact angle measurements and imbibition tests can be used as rapid screening tools to identify better EOR surfactants for increasing heavy oil recovery from sandstone reservoirs.

Keywords: EOR, oil gas, IOR, WC, IF, oil and gas

Procedia PDF Downloads 103
1426 Effect of Personality Traits on Classification of Political Orientation

Authors: Vesile Evrim, Aliyu Awwal

Abstract:

Today, as in other domains, there is an enormous number of political transcripts available on the Web waiting to be mined and used for various purposes such as statistics and recommendation. Automatically determining the political orientation of these transcripts therefore becomes crucial. The methodologies used by machine learning algorithms for automatic classification rely on features such as linguistic ones. Considering the ideological differences between liberals and conservatives, this paper studies the effect of personality traits on political orientation classification. This is done by considering the correlation between LIWC features and the Big Five personality traits. Several experiments are conducted on the Convote U.S. Congressional-Speech dataset with seven benchmark classification algorithms, applying different methodologies to select feature sets ranging from 8 to 64 features. Neuroticism proved to be the most discriminative personality trait for classifying political polarity, and when its top 10 representative features were combined with several classification algorithms, it outperformed the results reported in previous research.
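The trait-to-feature correlation step can be sketched as follows. This is a hedged toy example, assuming a simple Pearson correlation between each LIWC-style feature column and the trait score; all feature values and trait scores below are invented for illustration:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rank_features(feature_matrix, trait_scores, top_k):
    """Rank feature columns by absolute correlation with a trait score."""
    n_feats = len(feature_matrix[0])
    scored = []
    for f in range(n_feats):
        col = [row[f] for row in feature_matrix]
        scored.append((abs(pearson(col, trait_scores)), f))
    return [f for _, f in sorted(scored, reverse=True)[:top_k]]

# Toy data: three hypothetical LIWC-style features per speech,
# with a hypothetical neuroticism score per speaker.
X = [[0.1, 5.0, 2.0], [0.2, 4.0, 2.1], [0.3, 3.0, 1.9], [0.4, 2.0, 2.0]]
neuroticism = [1.0, 2.0, 3.0, 4.0]
top = rank_features(X, neuroticism, 2)
print(top)  # features 0 and 1 correlate strongly; feature 2 does not
```

The selected columns would then feed whichever benchmark classifier is under test.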

Keywords: politics, personality traits, LIWC, machine learning

Procedia PDF Downloads 495
1425 Identification of Breast Anomalies Based on Deep Convolutional Neural Networks and K-Nearest Neighbors

Authors: Ayyaz Hussain, Tariq Sadad

Abstract:

Breast cancer (BC) is one of the most widespread ailments among females globally. Early prognosis of BC can decrease the mortality rate, and accurate identification of benign tumors can avoid unnecessary biopsies and further treatment of patients under investigation. However, due to variations in images, it is a tough job to separate cancerous cases from normal and benign ones. Machine learning techniques are widely employed in BC pattern classification and prognosis. In this research, a deep convolutional neural network (DCNN) with the AlexNet architecture is employed to obtain more discriminative features from breast tissue. To achieve higher accuracy, a K-nearest neighbor (KNN) classifier is employed as a substitute for the softmax layer of the deep network. The proposed model is tested on the widely used MIAS breast image database and achieved 99% accuracy.
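Replacing the softmax layer with KNN means the network serves only as a feature extractor and the final decision is a nearest-neighbour majority vote over the training features. A minimal sketch of that decision rule follows; the 3-D "deep features" and labels are invented (a real AlexNet fully-connected feature vector is 4096-D):

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(t, query)), lab)
        for t, lab in zip(train_feats, train_labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical extracted features for four training images
feats = [(0.9, 0.1, 0.0), (1.0, 0.2, 0.1), (0.1, 0.9, 1.0), (0.0, 1.0, 0.9)]
labels = ["benign", "benign", "malignant", "malignant"]
pred = knn_predict(feats, labels, (0.95, 0.15, 0.05), k=3)
print(pred)  # → benign
```

In the paper's setting, `train_feats` would be the DCNN activations for the MIAS training images.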

Keywords: breast cancer, DCNN, KNN, mammography

Procedia PDF Downloads 136
1424 A Deep Reinforcement Learning-Based Secure Framework against Adversarial Attacks in Power System

Authors: Arshia Aflaki, Hadis Karimipour, Anik Islam

Abstract:

Generative adversarial attacks (GAAs) threaten critical sectors, ranging from fingerprint recognition to industrial control systems. Existing deep learning (DL) algorithms are not robust enough against this kind of cyber-attack. As one of the most critical industries in the world, the power grid is no exception. In this study, a deep reinforcement learning (DRL) based framework is proposed that assists a DL model in improving its robustness against generative adversarial attacks. Real-world smart grid stability data, as an IIoT dataset, is used to test our method, which improves the classification accuracy of a deep learning model from around 57 percent to 96 percent.

Keywords: generative adversarial attack, deep reinforcement learning, deep learning, IIoT, generative adversarial networks, power system

Procedia PDF Downloads 36
1423 Reinforcement Learning for Classification of Low-Resolution Satellite Images

Authors: Khadija Bouzaachane, El Mahdi El Guarmah

Abstract:

The classification of low-resolution satellite images has been a worthwhile and fertile field that attracts many researchers due to its importance in monitoring geographical areas. It can serve several purposes, such as disaster management, military surveillance and agricultural monitoring. The main objective of this work is to classify low-resolution satellite images efficiently and accurately using novel deep learning and reinforcement learning techniques. The images include roads, residential areas, industrial areas, rivers, sea lakes and vegetation. To achieve that goal, we carried out experiments on Sentinel-2 images, considering both classification accuracy and efficiency. Our proposed model achieved 91% accuracy on the testing dataset along with good land cover classification. In terms of per-class precision, we obtained 93% for river, 92% for residential, 97% for residential, 96% for forest, 87% for annual crop, 84% for herbaceous vegetation, 85% for pasture, 78% for highway and 100% for sea lake.

Keywords: classification, deep learning, reinforcement learning, satellite imagery

Procedia PDF Downloads 213
1422 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, serve as an indicator of abnormal health and even help to plan for healthy calving times. Traditionally, BCS is measured by animal experts or trained technicians based on visual observation, focusing on the pin bones, the pin, thurl and hook area, tail head shape, hook angles and the short and long ribs. Since the traditional technique is manual and subjective, it can lead to inconsistent scores and is not cost-effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region using image processing techniques. Specifically, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head. These points are extracted by block-region-based vertical and horizontal histogram methods. According to animal experts, body condition scores depend mainly on the shape structure of these regions. Therefore, the second module investigates algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is applied to a subset of anatomical points to produce regression coefficients, which are used as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by a Markov classification process to assign a BCS to each individual cow.
The assigned BCS are then refined using a multiple regression method to produce the final score for each dairy cow. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the rotary milking parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow, after which the multiple regression calculator and the Markov chain classification process produce the estimated body condition score. Experimental results on 100 dairy cows from a self-collected dataset and a public benchmark dataset are very promising, with an accuracy of 98%.
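The second module's second-order polynomial regression can be sketched as a least-squares fit of y = ax² + bx + c through a subset of landmark points, with (a, b, c) kept as shape features. The landmark coordinates below are invented for illustration; the 3×3 normal equations are solved by plain Gaussian elimination:

```python
def quadratic_fit(points):
    """Least-squares fit of y = a*x^2 + b*x + c through (x, y) points.
    Builds the 3x3 normal equations and solves them by Gauss-Jordan
    elimination; (a, b, c) then serve as shape features."""
    S = [sum(x ** p for x, _ in points) for p in range(5)]      # Σx^0 .. Σx^4
    T = [sum(y * x ** p for x, y in points) for p in range(3)]  # Σy, Σxy, Σx²y
    A = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], S[0], T[0]]]
    for i in range(3):                    # Gauss-Jordan elimination
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        for j in range(3):
            if j != i:
                A[j] = [vj - A[j][i] * vi for vj, vi in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]

# Hypothetical back-contour landmarks lying on y = 0.5x^2 - x + 2
pts = [(0, 2.0), (1, 1.5), (2, 2.0), (3, 3.5), (4, 6.0)]
a, b, c = quadratic_fit(pts)
print(round(a, 3), round(b, 3), round(c, 3))  # → 0.5 -1.0 2.0
```

The curvature coefficient `a` in particular captures how concave the contour is, which is the kind of shape information BCS grading relies on.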

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 157
1421 Analysis of Matching Pursuit Features of EEG Signal for Mental Tasks Classification

Authors: Zin Mar Lwin

Abstract:

Brain-computer interface (BCI) systems have been developed for people who suffer from severe motor disabilities and find it challenging to communicate with their environment; BCI allows them to communicate in a non-muscular way. For communication between human and computer, BCI uses the electroencephalogram (EEG) signal, which is recorded from the human brain by means of electrodes. The EEG signal is an important source of information about brain processes for non-invasive BCI. To translate a human's thought, the acquired EEG signal needs to be classified accurately. This paper proposes a typical EEG signal classification system evaluated on a dataset from Purdue University. Independent component analysis (ICA) via the EEGLAB tools is used to remove artifacts caused by eye blinks. For feature extraction, the time and frequency features of the non-stationary EEG signals are extracted by the matching pursuit (MP) algorithm. The classification of one of five mental tasks is performed by a multi-class support vector machine (SVM). For the SVMs, comparisons have been carried out for both the 1-against-1 and 1-against-all methods.
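Matching pursuit greedily decomposes a signal over a redundant dictionary: at each iteration it selects the atom most correlated with the current residual, records that coefficient, and subtracts the atom's contribution. A self-contained sketch with a tiny orthonormal dictionary follows (real EEG work would use e.g. Gabor atoms and many more iterations):

```python
def matching_pursuit(signal, dictionary, n_iter=2):
    """Greedy MP over unit-norm atoms: pick the atom with the largest
    |inner product| with the residual, then subtract its contribution."""
    residual = list(signal)
    atoms, coeffs = [], []
    for _ in range(n_iter):
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        atoms.append(k)
        coeffs.append(scores[k])
        residual = [r - scores[k] * a for r, a in zip(residual, dictionary[k])]
    return atoms, coeffs, residual

# Tiny unit-norm dictionary (illustrative, not EEG-realistic)
D = [(1, 0, 0, 0), (0, 1, 0, 0),
     (0.5, 0.5, 0.5, 0.5), (0.5, -0.5, 0.5, -0.5)]
sig = [2.0, 1.0, 0.0, 0.0]
atoms, coeffs, res = matching_pursuit(sig, D, n_iter=2)
print(atoms, [round(c, 2) for c in coeffs])  # → [0, 1] [2.0, 1.0]
```

The selected atom indices and coefficients (here exactly recovering the signal, so the residual is zero) are what get summarized into time-frequency features for the SVM.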

Keywords: BCI, EEG, ICA, SVM

Procedia PDF Downloads 277
1420 Facile Synthetic Process for Lamivudine and Emtricitabine

Authors: Devender Mandala, Paul Watts

Abstract:

The cis-nucleosides lamivudine (3TC) and emtricitabine (FTC) are an important tool in the treatment of human immunodeficiency virus (HIV), hepatitis B virus (HBV) and human T-lymphotropic virus (HTLV). Lamivudine and emtricitabine are potent nucleoside analog reverse transcriptase inhibitors (NRTIs). These two drugs are synthesized by a four-stage process from the starting materials menthyl glyoxylate hydrate and 1,4-dithiane-2,5-diol, which react to produce the 5-hydroxy oxathiolane; acetylation with acetic anhydride then yields the 5-acetoxy oxathiolane. Glycosylation of this acetyl product with a silyl-protected nucleoside base produces the key intermediate, and reduction of this intermediate provides the final targets. Although several different methods have been reported for the synthesis of lamivudine and emtricitabine as single enantiomers, we required an efficient route suitable for large-scale synthesis to support the development of these compounds. In this process, we successfully prepared the intermediates of lamivudine and emtricitabine without using any solvent or catalyst, thus promoting green synthesis. All the synthesized compounds were confirmed by TLC, GC, mass spectrometry, NMR and 13C NMR spectroscopy.

Keywords: emtricitabine, green synthesis, lamivudine, nucleoside

Procedia PDF Downloads 229
1419 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of target liver cells as either tumor or non-tumor. Fractional differentiation (FD) is applied to enhance the liver CT images, with the aim of sharpening texture and edge features. A fusion method then merges the various enhanced images to produce a variety of feature improvements, which increases the classification accuracy. Each image is divided into NxN non-overlapping blocks, from which the desired features are extracted. A support vector machine (SVM) classifier is then trained on a supplied dataset different from the one used for testing. Finally, each block cell is identified as tumor or not. Our approach is validated on a group of patients' CT liver tumor datasets, and the experimental results demonstrate the efficiency of detection with the proposed technique.
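Fractional differential enhancement is commonly built from the Grünwald–Letnikov expansion, whose filter mask coefficients obey a simple recurrence. A 1-D sketch follows; the fractional order v = 0.5 and the toy step signal are illustrative choices, not the paper's parameters:

```python
def gl_coefficients(v, n):
    """First n Grunwald-Letnikov coefficients for fractional order v:
    c_0 = 1, c_k = c_{k-1} * (k - 1 - v) / k."""
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - v) / k)
    return c

def fractional_diff(signal, v, n_taps=4):
    """Causal 1-D fractional differential filter: correlate the signal
    with the truncated GL mask (texture/edge enhancement in 2-D works
    the same way along each direction)."""
    c = gl_coefficients(v, n_taps)
    return [sum(c[k] * signal[i - k] for k in range(n_taps) if i - k >= 0)
            for i in range(len(signal))]

print(gl_coefficients(0.5, 4))                # → [1.0, -0.5, -0.125, -0.0625]
print(fractional_diff([0, 0, 1, 1, 1], 0.5))  # strong response at the edge
```

Note how the response peaks at the step and decays afterwards: the mask amplifies edges and fine texture while only attenuating (not zeroing) smooth regions, which is why FD enhancement is attractive for low-contrast CT texture.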

Keywords: fractional differential (FD), computed tomography (CT), fusion, alpha, texture features

Procedia PDF Downloads 358
1418 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language

Authors: Ghazal Faraj, András Micsik

Abstract:

Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, expresses constraints as RDF graphs called "shapes graphs". These shapes graphs validate other resource description framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, SHACL supports higher-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL comprises two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes the shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as enhancing the linking of different datasets is an ongoing process. The conventional validation methods are semantic reasoners and SPARQL queries: the former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, however, the retrieved information must be checked manually by an expert, which is time-consuming and inaccurate, as it does not test the mapping model comprehensively. There is therefore a serious need for a new methodology that covers all validation aspects for linking and mapping diverse datasets. Our goal is to develop a new approach that achieves optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required.
Subsequently, the proper environment to run SHACL and its shapes graphs was determined. As a case study, we applied SHACL to a CIDOC-CRM dataset after running the Pellet reasoner in the Protégé program. The applied validation falls into multiple categories: (a) data type validation, which checks whether the source data is mapped to the correct data type, for instance whether a birthdate is typed as xsd:dateTime and linked to a Person entity via the crm:P82a_begin_of_the_begin property; and (b) data integrity validation, which detects inconsistent data, for instance whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for the various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically.
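As an illustration of category (a), a minimal SHACL shapes graph of the kind described might look as follows. This is a sketch, not the authors' shapes: the `ex:` prefix and shape name are assumptions, while the property IRI follows the CIDOC-CRM namespace mentioned above:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes#> .

# Every subject of crm:P82a_begin_of_the_begin must give the value
# as an xsd:dateTime (data-type validation, category (a)).
ex:BirthDateShape
    a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
        sh:minCount 1 ;
    ] .
```

A SHACL processor run against the data graph would report each node violating the datatype or cardinality constraint, replacing the manual inspection of SPARQL query results described above.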

Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping

Procedia PDF Downloads 253
1417 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, deep neural networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation and image editing. In this work, facial emotion recognition is performed by a proposed convolutional neural network (CNN) based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.

Keywords: convolutional neural network, deep learning, deep learning based FER, facial emotion recognition

Procedia PDF Downloads 273
1416 Degradation of Hydrocarbons by Surfactants and Biosurfactants

Authors: Samira Ferhat, Redha Alouaoui, Leila Trifi, Abdelmalek Badis

Abstract:

The objective of this work is to use a natural surfactant (biosurfactant) and synthetic surfactants (sodium dodecyl sulfate and Tween 80) for environmental applications, specifically the solubilization of a polycyclic aromatic hydrocarbon (naphthalene) and the desorption of heavy metals in the presence of surfactants. The microorganism selected in this work is a bacterial strain (Bacillus licheniformis) used to produce the biosurfactant for this study. In the first part of the study, we evaluated the effectiveness of the surfactants in solubilizing hydrocarbons that are poorly soluble in water, such as polyaromatics (here, naphthalene). Tests showed that decontamination occurs above the critical micelle concentration. The second part presents results on the desorption of heavy metals (copper) by the three surfactants, using concentrations above the critical micelle concentration. Comparing the desorption of copper by the three surfactants shows that the biosurfactant is more effective than Tween 80 and sodium dodecyl sulfate.

Keywords: surfactants, biosurfactant, naphthalene, copper, critical micelle concentration, solubilization, desorption

Procedia PDF Downloads 397
1415 Dual Role of Microalgae: Carbon Dioxide Capture and Nutrients Removal

Authors: Mohamad Shurair, Fares Almomani, Simon Judd, Rahul Bhosale, Anand Kumar, Ujjal Gosh

Abstract:

This study evaluated the use of mixed indigenous microalgae (MIMA) as a treatment process for wastewaters and as a CO2 capture technology at different temperatures. The study follows the growth rate of MIMA, the removal of organic matter and nutrients from synthetic wastewater, and the effectiveness of MIMA in capturing CO2 from flue gas. A noticeable difference in the growth patterns of MIMA was observed at different CO2 dosages and operational temperatures. MIMA showed the highest growth rate when injected with a CO2 dosage of 10%, while limited growth was observed for the systems injected with 5% and 15% CO2 at 30 °C. Ammonia and phosphorus removals for Spirulina were 69%, 75% and 83%, and 20%, 45% and 75%, respectively, for media injected with 0, 5 and 10% CO2. The results of this study show that simple and cost-effective microalgae-based wastewater treatment systems can be successfully employed at different temperatures as a CO2 capture technology, even with a small probability of inhibition at high temperatures.

Keywords: greenhouse, climate change, CO2 capturing, green algae

Procedia PDF Downloads 333
1414 An Epsilon Hierarchical Fuzzy Twin Support Vector Regression

Authors: Arindam Chaudhuri

Abstract:

This research presents an epsilon-hierarchical fuzzy twin support vector regression (epsilon-HFTSVR) based on epsilon-fuzzy twin support vector regression (epsilon-FTSVR) and epsilon-twin support vector regression (epsilon-TSVR). Epsilon-FTSVR is obtained by incorporating trapezoidal fuzzy numbers into epsilon-TSVR, which takes care of the uncertainty present in forecasting problems. Epsilon-FTSVR determines a pair of epsilon-insensitive proximal functions by solving two related quadratic programming problems. The structural risk minimization principle is implemented by introducing a regularization term into the primal problems of epsilon-FTSVR. This yields dual stable positive definite problems, which improves regression performance. Epsilon-FTSVR is then reformulated as epsilon-HFTSVR, consisting of a set of hierarchical layers, each containing epsilon-FTSVR. Experimental results on both synthetic and real datasets reveal that epsilon-HFTSVR has remarkable generalization performance with minimal training time.

Keywords: regression, epsilon-TSVR, epsilon-FTSVR, epsilon-HFTSVR

Procedia PDF Downloads 375
1413 Statistical Wavelet Features, PCA, and SVM-Based Approach for EEG Signals Classification

Authors: R. K. Chaurasiya, N. D. Londhe, S. Ghosh

Abstract:

The study of the electrical signals produced by the neural activity of the human brain is called electroencephalography. In this paper, we propose an automatic and efficient EEG signal classification approach that classifies an EEG signal into one of two classes: epileptic seizure or not. In the proposed approach, we start by extracting features with the discrete wavelet transform (DWT), which decomposes the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to principal component analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using a support vector machine (SVM). The experiments are performed on a real, standard dataset, and a very high level of classification accuracy is obtained.
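The DWT front end can be illustrated with the simplest wavelet, the Haar basis: one decomposition level splits the signal into approximation (low-pass) and detail (high-pass) coefficients, from which statistical features are computed per sub-band. A pure-Python sketch with an invented toy epoch follows (real EEG work would use a deeper decomposition, e.g. db4 over several levels):

```python
def haar_dwt(signal):
    """One level of the Haar DWT: approximation = scaled pairwise sums,
    detail = scaled pairwise differences (length must be even)."""
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def band_features(coeffs):
    """Simple per-sub-band statistics of the kind fed to PCA + SVM:
    mean absolute value, mean power, standard deviation."""
    n = len(coeffs)
    mean_abs = sum(abs(c) for c in coeffs) / n
    power = sum(c * c for c in coeffs) / n
    mean = sum(coeffs) / n
    std = (sum((c - mean) ** 2 for c in coeffs) / n) ** 0.5
    return mean_abs, power, std

eeg = [1.0, 3.0, 5.0, 7.0, 6.0, 4.0, 2.0, 0.0]   # toy 8-sample epoch
a, d = haar_dwt(eeg)
print([round(x, 3) for x in a])  # → [2.828, 8.485, 7.071, 1.414]
print([round(x, 3) for x in d])  # → [-1.414, -1.414, 1.414, 1.414]
```

The transform is orthogonal, so signal energy is exactly preserved across the two sub-bands; the `band_features` triples from each band are then stacked and passed to PCA.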

Keywords: discrete wavelet transform, electroencephalogram, pattern recognition, principal component analysis, support vector machine

Procedia PDF Downloads 638
1412 Extending Image Captioning to Video Captioning Using Encoder-Decoder

Authors: Sikiru Ademola Adewale, Joe Thomas, Bolanle Hafiz Matti, Tosin Ige

Abstract:

This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness.
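The 2-gram BLEU evaluation used above can be sketched as the geometric mean of the modified 1- and 2-gram precisions, scaled by a brevity penalty. This is the single-reference case only, and the example caption sentences are invented:

```python
import math
from collections import Counter

def bleu2(candidate, reference):
    """2-gram BLEU for one candidate/reference token-list pair."""
    precisions = []
    for n in (1, 2):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        total = sum(cand.values())
        precisions.append(overlap / total if total else 0.0)
    if 0.0 in precisions:
        return 0.0
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(candidate) > len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

ref = "a man is playing a guitar".split()
cand = "a man is playing guitar".split()
score = bleu2(cand, ref)
print(round(score, 3))  # → 0.709
```

Averaging `bleu2` over all caption/reference pairs in a split gives the per-split scores reported in the project.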

Keywords: decoder, encoder, many-to-many mapping, video captioning, 2-gram BLEU

Procedia PDF Downloads 108
1411 Event Extraction, Analysis, and Event Linking

Authors: Anam Alam, Rahim Jamaluddin Kanji

Abstract:

With the rapid growth of events everywhere, event extraction has become an important way to retrieve information from unstructured data, and extracting the events themselves remains one of the challenging problems. An event is an observable occurrence of interaction among entities. This paper investigates the event extraction capabilities of three software tools: Wandora, Nitro and SPSS. We applied the standard text mining techniques of these tools to three datasets: (i) the Afghan War Diaries (AWD collection), (ii) MUC4 and (iii) WebKB. Information retrieval measures such as precision and recall are computed under an extensive set of experiments for event extraction. The experimental study analyzes the difference between the events extracted by the software and by humans. This approach helps to construct an algorithm that can be applied to different machine learning methods.

Keywords: event extraction, Wandora, nitro, SPSS, event analysis, extraction method, AFG, Afghan War Diaries, MUC4, 4 universities, dataset, algorithm, precision, recall, evaluation

Procedia PDF Downloads 595
1410 Location Privacy Preservation of Vehicle Data In Internet of Vehicles

Authors: Ying Ying Liu, Austin Cooke, Parimala Thulasiraman

Abstract:

The Internet of Things (IoT) has sparked recent research on the Internet of Vehicles (IoV). In this paper, we focus on one research area in IoV: preserving the location privacy of vehicle data. We discuss existing location privacy preserving techniques and provide a scheme for evaluating these techniques under IoV traffic conditions. We propose a different strategy for applying differential privacy, using a k-d tree data structure to preserve location privacy, and experiment on the real-world Gowalla dataset. We show that our strategy produces differentially private data with good preservation of utility, achieving regression accuracy on an LSTM (Long Short-Term Memory) neural network traffic predictor similar to that of the original dataset.
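The basic differential privacy building block here is the Laplace mechanism: add zero-mean Laplace noise with scale sensitivity/ε to each released value. The sketch below shows only that mechanism applied to a coordinate pair; the ε, sensitivity, and example coordinates are illustrative, and in the paper's scheme the k-d tree decides where and at what granularity noise is applied:

```python
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sample, drawn as the difference of two
    exponential variates (a standard identity)."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def privatize_location(lat, lon, epsilon, sensitivity=0.01, rng=None):
    """Release (lat, lon) with independent per-coordinate Laplace noise
    of scale = sensitivity / epsilon (smaller epsilon => more noise)."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    return lat + laplace_noise(scale, rng), lon + laplace_noise(scale, rng)

rng = random.Random(0)  # seeded for repeatability
noisy = privatize_location(49.90, -97.14, epsilon=1.0, rng=rng)
print(noisy)            # perturbed, but close to the true point at this scale
```

A downstream model such as the LSTM predictor is then trained on the perturbed coordinates, which is where the utility-versus-privacy trade-off governed by ε shows up.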

Keywords: differential privacy, internet of things, internet of vehicles, location privacy, privacy preservation scheme

Procedia PDF Downloads 179
1409 Sustainable Pavements with Reflective and Photoluminescent Properties

Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar

Abstract:

An alternative to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel and following the latter approach, an appropriate type of treatment has also been developed on bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety due to improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate was varied (porphyry, granite and limestone) and also the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day, at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands. 
Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night in the summer season with the highest radiation. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the applications designed has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, varying according to the proportions used. Not only the effect of the strontium aluminate and microsphere content has been observed, but also the influence of the colour of the base on which the paint is applied; the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.

Keywords: heat island, luminescent paints, reflective pavement, temperature reduction

Procedia PDF Downloads 30
1408 Enhancing Fall Detection Accuracy with a Transfer Learning-Aided Transformer Model Using Computer Vision

Authors: Sheldon McCall, Miao Yu, Liyun Gong, Shigang Yue, Stefanos Kollias

Abstract:

Falls are a significant health concern for older adults globally, and prompt identification is critical to providing necessary healthcare support. Our study proposes a new fall detection method using computer vision based on modern deep learning techniques. Our approach involves training a transformer model on a large 2D pose dataset for general action recognition, followed by transfer learning. Specifically, we freeze the first few layers of the trained transformer model and train only the last two layers for fall detection. Our experimental results demonstrate that our proposed method outperforms both classical machine learning and deep learning approaches in fall/non-fall classification. Overall, our study suggests that our proposed methodology could be a valuable tool for identifying falls.
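
The freeze-and-retrain step described above can be sketched in PyTorch. The architecture and layer sizes below are assumptions for illustration only; the paper's actual model, pose dimensionality and head design are not specified here.

```python
import torch.nn as nn

class FallDetector(nn.Module):
    """Pose-sequence classifier: a small transformer encoder followed by
    a two-layer head (hypothetical shapes, not the paper's model)."""
    def __init__(self, pose_dim=34, d_model=64, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Sequential(nn.Linear(d_model, 32), nn.ReLU(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):                  # x: (batch, frames, pose_dim)
        h = self.encoder(self.embed(x))
        return self.head(h.mean(dim=1))    # pool over time, then classify

def freeze_for_transfer(model):
    """Freeze the pretrained embedding and encoder; only the final head
    layers stay trainable, mirroring the transfer-learning step."""
    for p in model.embed.parameters():
        p.requires_grad = False
    for p in model.encoder.parameters():
        p.requires_grad = False
    return [p for p in model.parameters() if p.requires_grad]

model = FallDetector()
trainable = freeze_for_transfer(model)  # only the head's 4 tensors remain
```

An optimizer would then be built over `trainable` only, so gradient updates on the fall-detection data never touch the action-recognition features learned in pretraining.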

Keywords: healthcare, fall detection, transformer, transfer learning

Procedia PDF Downloads 146
1407 Biomaterials Solutions to Medical Problems: A Technical Review

Authors: Ashish Thakur

Abstract:

This technical paper was written with a view to surveying biomaterials and their various applications in modern industries. The author elaborates not only on medical applications but, in fact, on many applications in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues. Thus, interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with surface modification of metals in biomedical applications. It also elaborates how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial–tissue interface. The paper also highlights various techniques required for surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, on the time of biomaterial application, and on the depth of the tissues harbouring the prosthesis.
Here, the diversity and complexity of the different scenarios where medical devices are currently utilised are explored, providing an overview of the emblematic fields of application and of the requirements for anti-infective biomaterials. In addition, the paper introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50. Its main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics play a key role in the field of medicine in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.

Keywords: nanomedicine, tissue, infections, biomaterials

Procedia PDF Downloads 264
1406 Future Trends in Sources of Natural Antioxidants from Indigenous Foods

Authors: Ahmed El-Ghorab

Abstract:

Indigenous foods are promising sources of various bioactive compounds such as vitamins, phenolic compounds and carotenoids. The presence of different bioactive compounds in fruits could therefore be exploited to retard or prevent various diseases such as cardiovascular disease and cancer. This is an updated report on the nutritional compositions and health-promoting phytochemicals of different indigenous foods. These different types of fruits and/or other sources, such as spices, aromatic plants and grain by-products, which contain bioactive compounds, might be used as functional foods or for nutraceutical purposes. The most common bioactive compounds are vitamin C, polyphenols, β-carotene and lycopene. In recent years, there has been a global trend toward the use of natural phytochemicals as antioxidants and functional ingredients, which are present in natural resources such as vegetables, fruits, oilseeds and herbs. Future trends include the use of natural antioxidants as a promising alternative to synthetic antioxidants, and the production of natural antioxidants on a commercial scale to maximize the value addition of indigenous food waste as a good source of bioactive compounds such as antioxidants.

Keywords: bioactive compounds, antioxidants, by-product, indigenous foods, phenolic compounds

Procedia PDF Downloads 484
1405 Facile Synthesis of Heterostructured Bi₂S₃-WS₂ Photocatalysts for Photodegradation of Organic Dye

Authors: S. V. Prabhakar Vattikuti, Chan Byon

Abstract:

In this paper, we report a facile synthetic strategy for randomly distributed Bi₂S₃ nanorods on WS₂ nanosheets, synthesized via a controlled hydrothermal method without surfactant under an inert atmosphere. We developed a simple hydrothermal method for the large-scale formation of heterostructured Bi₂S₃/WS₂ (>95%). The structural features, composition, and morphology were characterized by XRD, SEM-EDX, TEM, HRTEM, XPS, UV-vis spectroscopy, N₂ adsorption-desorption, and TG-DTA measurements. The heterostructured Bi₂S₃/WS₂ composite shows significant photocatalytic efficiency toward the photodegradation of organic dye. Time-dependent UV-vis absorbance measurements were consistent with the enhanced photocatalytic degradation of rhodamine B (RhB) under visible light irradiation, with diminished carrier recombination for the Bi₂S₃/WS₂ photocatalyst. Owing to their marked synergistic effects, the supported Bi₂S₃ nanorods on WS₂ nanosheet heterostructures exhibit significant visible-light photocatalytic activity and stability for the degradation of RhB. A possible reaction mechanism is proposed for the Bi₂S₃/WS₂ composite.

Keywords: photocatalyst, heterostructures, transition metal disulfides, organic dye, nanorods

Procedia PDF Downloads 296
1404 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication

Authors: Rui Mao, Heming Ji, Xiaoyu Wang

Abstract:

Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual's typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. Existing keystroke dynamics authentication methods based on machine learning fall short in relatively complex scenarios with massive data, with drawbacks in both feature extraction and model optimization. To overcome these weaknesses, an authentication model of keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed from keystroke content and keystroke timing, and it ensures efficient continuous authentication by coupling attention mechanisms with a combination of CNN and Bi-LSTM. The model has been tested on the Open Data Buffalo dataset, and the results show an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%. This indicates that the model is efficient and accurate for continuous authentication.
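
The reported error rates can be computed from match scores roughly as follows. This is a minimal sketch with invented score lists, not the paper's evaluation code; FRR is the fraction of genuine attempts rejected, FAR the fraction of impostor attempts accepted, and EER the rate at the threshold where the two are closest.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: impostors whose score clears the threshold;
    FRR: genuine users whose score falls below it."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def equal_error_rate(genuine_scores, impostor_scores):
    """Scan candidate thresholds; report the rate where FAR and FRR meet."""
    best_gap, best_rate = float("inf"), 1.0
    for t in sorted(genuine_scores + impostor_scores):
        far, frr = far_frr(genuine_scores, impostor_scores, t)
        if abs(far - frr) < best_gap:
            best_gap, best_rate = abs(far - frr), (far + frr) / 2
    return best_rate

# hypothetical match scores (higher = more likely the genuine user)
genuine = [0.9, 0.8, 0.7, 0.4]
impostor = [0.1, 0.2, 0.3, 0.6]
eer = equal_error_rate(genuine, impostor)  # 0.25 for these toy scores
```

Lowering the threshold trades FRR for FAR; a continuous-authentication deployment would pick the operating point on this curve that matches its security requirements.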

Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM

Procedia PDF Downloads 155
1403 Performativity and Valuation Techniques: Evidence from Investment Banks in the Wake of the Global Financial Crisis

Authors: Alicja Reuben, Amira Annabi

Abstract:

In this paper, we explore the relationship between the selection of valuation techniques by investment banks and the banks’ risk perceptions and performance in the context of the theory of performativity. We use inferential statistics to study these relationships by building a unique dataset based on the disclosure of 12 investment banks’ 2012-2015 annual financial statements. Moreover, we create two constructs, namely intensity of use and risk perception. We measure the intensity of use as a frequency metric of how often a particular bank adopts valuation techniques for a particular asset or liability. We measure risk perception based on disclosed ranges of values for unobservable inputs. Our results are twofold: we find a significant negative correlation between (1) intensity of use and investment bank performance and (2) intensity of use and risk perception. These results indicate that a performative process takes place, and the valuation techniques are enacting their environment.
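
The correlation analysis described above can be sketched as follows. The per-bank figures are entirely hypothetical; only the negative direction of the relationship mirrors what the study reports.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# invented per-bank figures: how often a valuation technique appears in
# the filings (intensity of use) vs. a bank performance measure
intensity = [12, 30, 45, 51, 70]
performance = [0.9, 0.7, 0.5, 0.4, 0.2]
r = pearson_r(intensity, performance)  # negative, as in the study's finding
```

In the paper, the same computation would pair the disclosure-derived intensity construct with performance and, separately, with the risk-perception construct built from disclosed ranges of unobservable inputs.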

Keywords: language, linguistics, performativity, financial techniques

Procedia PDF Downloads 160
1402 Spectrophotometric Methods for Simultaneous Determination of Binary Mixture of Amlodipine Besylate and Atenolol Based on Dual Wavelength

Authors: Nesrine T. Lamie

Abstract:

Four accurate, precise, and sensitive spectrophotometric methods are developed for the simultaneous determination of a binary mixture containing amlodipine besylate (AM) and atenolol (AT), where AM is determined at its λmax of 360 nm (0D), while atenolol can be determined by different methods. Method (A) is the absorption factor method (AFM). Method (B) is the new ratio difference (RD) method, which measures the difference in amplitudes between 210 and 226 nm of the ratio spectrum. Method (C) is a novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The calibration curves are linear over the concentration ranges of 10–80 and 4–40 μg/ml for AM and AT, respectively. These methods were tested by analyzing synthetic mixtures of the cited drugs and were applied to their commercial pharmaceutical preparation. The validity of the results was assessed by applying the standard addition technique. The results obtained were found to agree statistically with those obtained by a reported method, showing no significant difference with respect to accuracy and precision.
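
The ratio difference principle behind method (B) can be demonstrated numerically: dividing the mixture spectrum by a divisor spectrum of the interferent turns that component's contribution into a constant, which cancels when amplitudes at two wavelengths are subtracted. The toy absorbance values below are invented for illustration, not measured spectra.

```python
def ratio_difference(mixture, divisor, i1, i2):
    """Divide the mixture spectrum by the divisor spectrum, then take the
    amplitude difference between two wavelength indices; the divisor
    component contributes a constant that cancels in the difference."""
    ratio = [m / d for m, d in zip(mixture, divisor)]
    return ratio[i1] - ratio[i2]

# toy spectra at four wavelengths (hypothetical absorbances)
spec_am = [0.50, 0.40, 0.30, 0.20]  # divisor: pure-AM spectrum
spec_at = [0.10, 0.30, 0.25, 0.05]
mix_low_am  = [1 * a + 2 * b for a, b in zip(spec_am, spec_at)]
mix_high_am = [3 * a + 2 * b for a, b in zip(spec_am, spec_at)]
rd1 = ratio_difference(mix_low_am, spec_am, 1, 3)
rd2 = ratio_difference(mix_high_am, spec_am, 1, 3)  # equal to rd1
```

Because `rd1 == rd2` despite the threefold change in AM, the difference depends only on the AT concentration, which is why a calibration line of amplitude difference versus AT concentration can quantify AT in the presence of AM.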

Keywords: amlodipine, atenolol, absorption factor, constant center, mean centering, ratio difference

Procedia PDF Downloads 304
1401 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and that of its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
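
The two measures the study correlates with memorability can be sketched as follows. This is a minimal illustration on plain vectors; the study operates on image pixels and VGG-autoencoder latent codes, and also considers structural and perceptual losses beyond the simple MSE shown here.

```python
import math

def reconstruction_error(original, reconstructed):
    """Mean squared error between an image and its autoencoder output,
    both flattened to vectors."""
    return sum((o - r) ** 2
               for o, r in zip(original, reconstructed)) / len(original)

def distinctiveness(latent, other_latents):
    """Euclidean distance from an image's latent code to its nearest
    neighbour among the other images' latent codes."""
    return min(math.dist(latent, z) for z in other_latents)
```

Correlating these per-image quantities with behavioral memorability scores (e.g. with a rank correlation over the MemCat images) is what links hard-to-compress, distinctive images to higher memorability.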

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 90