Search results for: search data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26145

24045 Genome Sequencing, Assembly and Annotation of Gelidium Pristoides from Kenton-on-Sea, South Africa

Authors: Sandisiwe Mangali, Graeme Bradley

Abstract:

A genome is the complete set of an organism's hereditary information, encoded as deoxyribonucleic acid (or as ribonucleic acid in most viruses). The three types of genomes are the nuclear, mitochondrial, and plastid genomes, and the sequences uncovered by genome sequencing serve as an archive of all genetic information: they enable researchers to understand the composition of a genome and the regulation of gene expression, and they provide information on how the whole genome works. These sequences also allow researchers to explore population structure, genetic variation, and recent demographic events in threatened species. Genome sequencing refers to the process of determining the exact arrangement of the nucleotide bases of a genome, and the process through which all the aforementioned genomes are sequenced is referred to as whole or complete genome sequencing. Gelidium pristoides is a South African endemic Rhodophyta species that has been harvested in the Eastern Cape since the 1950s for its high economic value, which is one motivation for sequencing it. Its endemism further motivates its sequencing for conservation biology, as endemic species are more vulnerable to the anthropogenic activities that endanger a species. The aim of this study is therefore to sequence, map, and annotate the Gelidium pristoides genome. To accomplish this aim, genomic DNA was extracted and quantified using the NucleoSpin Plant Kit, Qubit 2.0, and Nanodrop. Thereafter, the Ion Plus Fragment Library Kit was used to prepare a 600 bp library, which was sequenced on the Ion S5 sequencing platform over two runs. The resulting reads were quality-controlled and assembled with the SPAdes assembler using default parameters, and the genome assembly was quality-assessed with the QUAST software. From this assembly, the plastid and mitochondrial genomes were sampled out using Gelidiales organellar genomes as search queries and ordered against them using the Geneious software. The Qubit and Nanodrop instruments revealed A260/A280 and A260/A230 values of 1.81 and 1.52, respectively. A total of 30,792,074 reads were obtained and assembled into 94,140 contigs, giving a total sequence length of 217.06 Mbp with an N50 of 3,072 bp and a GC content of 41.72%. Total lengths of 179,281 bp and 25,734 bp were obtained for the plastid and mitochondrial genomes, respectively. Genomic data allow a clear understanding of the genomic constitution of an organism and are valuable as foundational information for studies of individual genes and for resolving the evolutionary relationships between organisms, including Rhodophytes and other seaweeds.
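
A minimal Python sketch of the assembly statistics reported above (contig count, total length, N50, GC content), computed from a hypothetical contigs file such as the FASTA output of SPAdes; the file name is an assumption.

```python
# Sketch: compute assembly statistics from a contigs FASTA file.
# "contigs.fasta" is a placeholder for the assembler's output.

def read_fasta(path):
    """Yield sequences from a FASTA file, ignoring headers."""
    seq = []
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                seq = []
            else:
                seq.append(line.strip().upper())
    if seq:
        yield "".join(seq)

contigs = list(read_fasta("contigs.fasta"))
lengths = sorted((len(c) for c in contigs), reverse=True)
total = sum(lengths)

# N50: the contig length at which the running sum reaches half the assembly.
running, n50 = 0, 0
for n50 in lengths:
    running += n50
    if running >= total / 2:
        break

gc = sum(c.count("G") + c.count("C") for c in contigs) / total * 100
print(f"contigs={len(lengths)}  total={total} bp  N50={n50} bp  GC={gc:.2f}%")
```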

Keywords: Gelidium pristoides, genome, genome sequencing and assembly, Ion S5 sequencing platform

Procedia PDF Downloads 144
24044 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often at a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or each segment, respectively. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because servers offer larger data storage capacity than local drives and allow multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process, and report AM data, offering the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export is preserved and can be customized with scripts to include user-defined particle/grain properties.

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 105
24043 Prevailing Clinical Evidence on Medicinal Hemp (Cannabis Sativa L.)

Authors: Siti Hajar Muhamad Rosli, Xin Yi Lim, Terence Yew Chin Tan, Muhammad nor Farhan Sa’At, Syazwani Sirdar Ali, Ami Fazlin Syed Mohamed

Abstract:

A growing interest in the therapeutic benefits of hemp (Cannabis sativa subsp. sativa) is evident in the pharmaceutical market, attributed to its lower levels of the psychoactive constituent delta-9-tetrahydrocannabinol (THC). Although deemed a legal and safer alternative to its counterpart marijuana, the use of medicinal hemp remains highly debatable, as scientific evidence on its efficacy for clinical use is yet to be established. This study aimed to provide an overview of the current landscape of hemp research through recent clinical findings specific to the pharmacological properties of the hemp plant and its derived compounds. A systematic search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist on electronic databases (MEDLINE, OVID, Cochrane Library Central, and ClinicalTrials.gov) for articles published from 2009 to 2019. Under predetermined inclusion criteria, all human trials with a hemp intervention were included. A total of 18 human trials were identified, investigating therapeutic effects on the neuronal, gastrointestinal, musculoskeletal, and immune systems, with sample sizes ranging from one to 194 subjects. Three randomised controlled trials showed that consumption of hempseed pills (in the Traditional Chinese Medicine formulation MaZiRenWan) significantly improved spontaneous bowel movement in functional constipation. The use of commercial cannabidiol (CBD) sourced from hemp suggested benefits in cannabis dependence, epilepsy, and anxiety disorders. However, there was insufficient evidence to suggest that the analgesic or anxiolytic effects of hemp are equivalent to those of marijuana. The clinical trials reviewed varied in test-item formulation and standardisation, which made it challenging to confirm overall efficacy for a specific disease or condition. Published efficacy data on hemp are still preliminary, with limited high-quality clinical evidence for any specific therapeutic indication. With multiple variants of this plant having different phytochemical and bioactive compounds, future empirical research should focus on uniformity in experimental designs to further strengthen the case for medicinal hemp.

Keywords: cannabis, complementary medicine, hemp, herbal medicine

Procedia PDF Downloads 113
24042 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure, Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ device is designed around a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions to collect data (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), R-squared (R2), and Mean Percentage Error (MPE) were deployed as standardized evaluation metrics to assess the performance of the models in predicting precipitation. The results show that the developed device has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) revealed a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
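
A hedged sketch of the four evaluation metrics named above (RMSE, MAE, R2, MPE); the observed and predicted rainfall values are illustrative placeholders, not data from the study, which ran in MATLAB.

```python
import numpy as np

def evaluate(observed, predicted):
    """Return the standard error metrics used to compare rainfall models."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1 - np.sum(err ** 2) / np.sum((observed - observed.mean()) ** 2)
    mpe = np.mean(err / observed) * 100   # assumes no zero observations
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "MPE": mpe}

# Illustrative values only.
print(evaluate([5.1, 7.3, 2.8, 9.0], [4.9, 7.8, 2.5, 8.6]))
```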

Keywords: data acquisition system, device design, weather data, precipitation prediction, FUTA standard device

Procedia PDF Downloads 86
24041 Spatial Data Science for Data-Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a consistent segment of the world's population lives in urban areas, and this proportion will vastly increase in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogical techniques of data collection. This paper investigates the potential of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would definitely enhance planners' capabilities to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues more in depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining the youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground for the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
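
A minimal sketch of a statistical composite index of the kind described: zone-level indicators are standardized as z-scores, oriented so that higher always means more discomfort, and averaged. The zone names, indicator columns, and equal weighting are illustrative assumptions, not the authors' exact specification.

```python
import pandas as pd

zones = pd.DataFrame({
    "zone": ["Centro", "Tiburtino", "EUR"],
    "youth_unemployment": [0.32, 0.24, 0.18],  # higher -> more discomfort
    "median_rent": [1450, 900, 1100],          # higher -> more discomfort
    "youth_income": [11500, 14200, 16800],     # higher -> LESS discomfort
})

indicators = ["youth_unemployment", "median_rent", "youth_income"]
z = (zones[indicators] - zones[indicators].mean()) / zones[indicators].std()
z["youth_income"] *= -1                        # flip polarity
zones["discomfort_index"] = z.mean(axis=1)     # equal-weight composite
print(zones[["zone", "discomfort_index"]]
      .sort_values("discomfort_index", ascending=False))
```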

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 129
24040 AI-Based Technologies in International Arbitration: An Exploratory Study on the Practicability of Applying AI Tools in International Arbitration

Authors: Annabelle Onyefulu-Kingston

Abstract:

One of the major purposes of AI today is to evaluate and analyze millions of micro and macro data points in order to determine what is relevant in a particular case and to present it in an adequate manner. Microdata, as far as it relates to AI in international arbitration, comprises the millions of key issues specifically mentioned by one or both parties, or by their counsels, arbitrators, or arbitral tribunals in arbitral proceedings. These can include the qualifications of expert witnesses and the admissibility of evidence, among others. Macrodata, on the other hand, refers to data derived from the resolution of the dispute and, consequently, the final and binding award; notable examples include the rationale of the award and the specific and general damages awarded. This paper aims to critically evaluate and analyze the possibility of technological inclusion in international arbitration. The research employs a qualitative method, evaluating existing literature on the consequences of applying AI to both micro and macro data in international arbitration and on how this can assist parties, counsels, and arbitrators.

Keywords: AI-based technologies, algorithms, arbitrators, international arbitration

Procedia PDF Downloads 83
24039 A Virtual Grid Based Energy Efficient Data Gathering Scheme for Heterogeneous Sensor Networks

Authors: Siddhartha Chauhan, Nitin Kumar Kotania

Abstract:

Traditional Wireless Sensor Networks (WSNs) generally use static sinks to collect data from the sensor nodes via multiple forwarding. The network therefore suffers from problems such as long message relay times and bottlenecks, which reduce its performance. Many approaches have been proposed to prevent these problems with the help of a mobile sink that collects data from the sensor nodes, but these approaches still suffer from buffer overflow due to the limited memory of sensor nodes. This paper proposes an energy-efficient data gathering scheme that overcomes the buffer overflow problem. The proposed scheme creates a virtual grid structure of heterogeneous nodes and is designed for sensor nodes with variable sensing rates. Every node finds its buffer overflow time, and on this basis cluster heads are elected (see the sketch below). The proposed scheme uses a controlled traversing approach to transmit data to the sink. The effectiveness of the proposed scheme is verified by simulation.
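
A sketch of the buffer-overflow-time idea under one plausible reading of the scheme: each node's overflow time is its free buffer divided by its sensing rate, and within each virtual grid cell the node whose buffer fills soonest sets the gathering deadline. All values are illustrative.

```python
nodes = [
    # (node_id, grid_cell, buffer_bytes, sensing_rate_bytes_per_s)
    ("n1", (0, 0), 4096, 16.0),
    ("n2", (0, 0), 2048, 32.0),
    ("n3", (1, 0), 4096, 8.0),
]

def overflow_time(buffer_bytes, rate):
    """Seconds until a node's buffer fills at its current sensing rate."""
    return buffer_bytes / rate

# Per grid cell, find the node with the earliest overflow: data must be
# gathered from that cell before this deadline.
deadlines = {}
for node_id, cell, buf, rate in nodes:
    t = overflow_time(buf, rate)
    if cell not in deadlines or t < deadlines[cell][1]:
        deadlines[cell] = (node_id, t)

for cell, (node_id, t) in deadlines.items():
    print(f"cell {cell}: node {node_id} overflows in {t:.0f} s")
```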

Keywords: buffer overflow problem, mobile sink, virtual grid, wireless sensor networks

Procedia PDF Downloads 383
24038 Information Communication Technology Based Road Traffic Accidents' Identification and Related Smart Solution Utilizing Big Data

Authors: Ghulam Haider Haidaree, Nsenda Lukumwena

Abstract:

Today the world of research enjoys abundant data, available in virtually any field: technology, science, business, politics, etc. This is commonly referred to as big data. It offers a great deal of precision and accuracy, supporting an in-depth look at any decision-making process. When well used, big data affords its users the opportunity to produce substantially well-supported and good results. This paper leans extensively on big data to investigate possible smart solutions to urban mobility and related issues, namely road traffic accidents and their casualties and fatalities, based on multiple factors including age, gender, location of accident occurrences, etc. Multiple technologies were used in combination to produce an Information Communication Technology (ICT) based solution with embedded technology, principally Geographic Information Systems (GIS), the Orange data mining software, and Bayesian statistics, to name a few. The study uses the 2016 Leeds accident dataset to illustrate the thinking process and extracts from it a model that can be tested, evaluated, and replicated. The authors optimistically believe that the proposed model will significantly and smartly help to flatten the curve of road traffic accidents in fast-growing population densities, which considerably increase motor-based mobility.

Keywords: accident factors, geographic information system, information communication technology, mobility

Procedia PDF Downloads 206
24037 Analysis of ECG Survey Data by Applying Clustering Algorithm

Authors: Irum Matloob, Shoab Ahmad Khan, Fahim Arif

Abstract:

The Indo-Pak region has been a victim of heart disease for many decades. Many surveys have shown that the percentage of cardiac patients in Pakistan is increasing day by day, and special attention needs to be paid to this issue. A framework is proposed for performing a detailed analysis of ECG survey data conducted to measure the prevalence of heart disease in Pakistan. The ECG survey data are first filtered using automated Minnesota codes, and only those ECGs that fulfill the standardized conditions mentioned in the Minnesota codes are used for further analysis. Feature selection is then performed by applying a proposed algorithm based on the discernibility matrix to select relevant features from the database. Clustering is performed to expose natural clusters in the ECG survey data by applying a spectral clustering algorithm together with the fuzzy c-means algorithm. The hidden patterns and interesting relationships exposed by this analysis are useful for further detailed analysis and for many other purposes.
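
A hedged sketch of the clustering step described, pairing a spectral embedding with fuzzy c-means (via the scikit-fuzzy package); the random feature matrix stands in for the Minnesota-code-filtered, discernibility-selected ECG features, which are not reproduced here.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
import skfuzzy as fuzz  # scikit-fuzzy package

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 200 ECG records x 12 selected features

# Step 1: spectral embedding exposes cluster structure in a low-dim space.
emb = SpectralEmbedding(n_components=2, affinity="rbf").fit_transform(X)

# Step 2: fuzzy c-means; skfuzzy expects data shaped (features, samples).
cntr, u, *_ = fuzz.cluster.cmeans(emb.T, c=3, m=2.0, error=1e-5, maxiter=300)
labels = np.argmax(u, axis=0)         # hard labels from fuzzy memberships
print(np.bincount(labels))
```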

Keywords: arrhythmias, centroids, ECG, clustering, discernibility matrix

Procedia PDF Downloads 349
24036 The Impact of Motivation on Employee Performance in South Korea

Authors: Atabong Awung Lekeazem

Abstract:

The purpose of this paper is to identify the impact or role of incentives on employees' performance, with a particular emphasis on Korean workers. The process involves defining and explaining the different types of motivation and, in doing so, bringing out the difference between the two major types. The second phase of the paper involves gathering data from a sample population and then analyzing the data. The analysis reveals the broadly similar value that Koreans attach to motivation, with a slightly different view coming only from top management personnel. The last phase presents the data and draws conclusions about how managers and potential managers can bring out the best in their employees.

Keywords: motivation, employee’s performance, Korean workers, business information systems

Procedia PDF Downloads 403
24035 Improved Classification Procedure for Imbalanced and Overlapped Situations

Authors: Hankyu Lee, Seoung Bum Kim

Abstract:

The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case of classification problem in which the number of observations of one class (the majority class) heavily exceeds the number of observations of the other class (the minority class). An overlapped dataset is one where many observations are shared between the two classes. Imbalanced and overlapped data are frequently found in many real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is challenging because it degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that effectively handles imbalanced and overlapped datasets by splitting the data space into three parts (non-overlapping, lightly overlapping, and severely overlapping) and applying a classification algorithm to each part. The three parts are determined based on the Hausdorff distance and the margin of a modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and to compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
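
A minimal sketch, under assumptions, of using the SVM margin to split the data space as the method above does: points inside the margin band are treated as the overlap region and can be handled by a dedicated classifier. The paper's full three-way split (which also uses the Hausdorff distance) is more elaborate than this two-way illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Imbalanced two-class data (90% majority / 10% minority) with overlap.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1],
                           class_sep=0.8, random_state=0)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
f = svm.decision_function(X)     # signed margin-like score per observation

overlap = np.abs(f) < 1.0        # inside the margin: the overlapping region
print(f"non-overlapping: {np.sum(~overlap)}, overlapping: {np.sum(overlap)}")
# A separate classifier (or resampling) would then be trained on X[overlap].
```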

Keywords: classification, imbalanced data with class overlap, split data space, support vector machine

Procedia PDF Downloads 305
24034 Mapping of Geological Structures Using Aerial Photography

Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash

Abstract:

Rapid growth in data acquisition technologies such as drones has led to advances and interest in collecting high-resolution images of geological fields. While such platforms are advantageous in capturing a high volume of data in short flights, a number of challenges have to be overcome for efficient analysis of these data, especially during data acquisition, image interpretation, and processing. We introduce a method that allows effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. Satellite images are not used because of problems with inadequate resolution, image age (the available image may have been captured a year or more earlier), limited availability, the difficulty of capturing the exact image needed, night-vision limitations, etc. The method combines advanced automated image-interpretation technology with human data interaction to model structures. First, geological structures are detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified from a digital elevation model. Dip and dip direction can be calculated from this information (see the sketch below). The structural map is generated by following a specified methodology: choosing the appropriate camera, the camera's mounting system, and the UAV design (based on the area and application); addressing challenges in airborne systems such as errors in image orientation, payload limits, and the mosaicking, georeferencing, and registration of different images; and finally applying the DEM. The paper shows the potential of our method for accurate and efficient modeling of geological structures, particularly those captured at remote, inaccessible, and hazardous sites.
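
A sketch of deriving dip and dip direction from a DEM, as mentioned above: the gradient of the elevation surface gives the steepest slope (dip) and its compass bearing (dip direction). The grid values, cell size, and raster orientation (rows increasing northward, columns eastward) are assumptions.

```python
import numpy as np

dem = np.array([[100.0, 101.0, 102.0],
                [ 99.0, 100.5, 101.5],
                [ 98.0,  99.5, 101.0]])  # elevations in metres, illustrative
cell = 10.0                              # grid spacing in metres, assumed

dz_dy, dz_dx = np.gradient(dem, cell)    # axis 0 = northing, axis 1 = easting
dip = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
# Dip direction: azimuth (clockwise from north) of steepest descent.
dip_dir = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360

print(f"dip at centre cell: {dip[1, 1]:.1f} deg "
      f"toward azimuth {dip_dir[1, 1]:.0f} deg")
```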

Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures

Procedia PDF Downloads 683
24033 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, which draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc. is mainly managed in structured and unstructured report form, existing as handouts or hard copies. Such unstructured data are often lost or destroyed due to inefficient management, so managing unstructured disaster information is necessary. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, or reports, printed or generated by scanners, into electronic documents. The converted disaster data are then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on OCR for unstructured data is an important element of smart disaster management. In this work, a character recognition rate of over 90% for Korean characters was achieved using an upgraded OCR engine. In character recognition, the recognition rate depends on the fonts, size, and special symbols of the characters; we improved it through a machine learning algorithm. The converted structured data are managed in a standardized disaster information form connected with the disaster code system, which allows structured information to be stored and retrieved across the entire disaster cycle, covering historical disaster progress, damage, response, and recovery. The expected outcome of this research is its application to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
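
A hedged sketch of the OCR step, using the open-source Tesseract engine through pytesseract with its Korean language pack (the paper's own upgraded OCR engine is not public); the file name is a placeholder.

```python
from PIL import Image
import pytesseract  # requires a local Tesseract install with the 'kor' pack

image = Image.open("report_scan.png")  # a scanned disaster report page
text = pytesseract.image_to_string(image, lang="kor")

# The recognized text would next be mapped into the disaster code system
# and stored as structured records in the disaster database.
print(text[:200])
```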

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 122
24032 Movement of a Viscoelastic Cylinder, Fixed in the Vertical Position, in a Liquid with a Free Surface under the Influence of Waves

Authors: T. J. Hasanova, C. N. Imamalieva

Abstract:

The problem of the motion of a rigid cylinder that maintains a vertical position under the influence of traveling surface waves in a liquid is considered. The perturbation of the incident wave caused by the presence of the moving cylinder is taken into account, and a special decomposition over the incident harmonic wave is used. The problem is solved by an operational method. To recover the original of the solution, and considering that the denominator of the transform is a tabulated function, a Volterra integral equation of the first kind is formed and solved by a numerical method (see the sketch below). In such problems, the originals of the required functions are usually sought through the numerical determination of the poles of combinations of transcendental functions and the calculation of improper integrals. Using the specifics of the problem, the solution is instead constructed through the numerical solution of a Volterra integral equation of the first kind, which avoids the computational difficulties associated with searching for the complex roots of transcendental functions.
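
A minimal numerical sketch of solving a Volterra integral equation of the first kind, integral from 0 to t of K(t, s) y(s) ds = f(t), by midpoint quadrature, the kind of discretization the abstract alludes to. The kernel and right-hand side are a toy pair with known solution y(s) = cos(s), not the hydrodynamic kernel of the paper.

```python
import numpy as np

def solve_volterra_first_kind(K, f, t_max, n):
    """Midpoint-rule solution of  integral_0^t K(t,s) y(s) ds = f(t)."""
    h = t_max / n
    s = (np.arange(n) + 0.5) * h               # quadrature midpoints
    y = np.zeros(n)
    for i in range(n):
        t = (i + 1) * h
        acc = sum(h * K(t, s[j]) * y[j] for j in range(i))
        y[i] = (f(t) - acc) / (h * K(t, s[i]))
    return s, y

# Test pair: K(t,s) = 1 and f(t) = sin(t) imply y(s) = cos(s).
s, y = solve_volterra_first_kind(lambda t, s: 1.0, np.sin, t_max=2.0, n=200)
print(np.max(np.abs(y - np.cos(s))))           # small discretization error
```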

Keywords: rigid cylinder, linear interpolation, fluctuations, Volterra integral equation, harmonic wave

Procedia PDF Downloads 316
24031 Predicting Seoul Bus Ridership Using Artificial Neural Network Algorithm with Smartcard Data

Authors: Hosuk Shin, Young-Hyun Seo, Eunhak Lee, Seung-Young Kho

Abstract:

Currently, in Seoul, the installation of the Bus Information System (BIS) gives users the ability to avoid riding crowded buses. BIS provides three levels of on-board ridership information (spacious, normal, and crowded). However, the system has flaws: because it reports in real time, it can provide incomplete information to the user. For example, a bus approaches a station and the BIS shows that the bus is crowded, but at the stop where the user is waiting many people get off, meaning the information at this station should instead show normal or spacious. To fix this problem, this study predicts the bus ridership level using smartcard data, providing more accurate information about the passenger ridership level on the bus. An Artificial Neural Network (ANN) is an interconnected group of nodes modeled on the human brain, and forecasting has been one of the major applications of ANNs due to the data-driven, self-adaptive nature of the algorithm. According to the results, the ANN was stable and robust with a relatively small error ratio, so the results were rational and reasonable.
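
A hedged sketch of the prediction task described: an MLP mapping smartcard-derived features to the three BIS ridership levels. The features, synthetic data, and network size are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# Features per stop event: boardings, alightings, hour of day, stop index.
X = rng.uniform([0, 0, 5, 1], [60, 60, 23, 40], size=(3000, 4))
load = X[:, 0] - X[:, 1] + rng.normal(0, 5, size=3000)  # net-boarding proxy
y = np.digitize(load, [-10, 10])   # 0 = spacious, 1 = normal, 2 = crowded

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```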

Keywords: smartcard data, ANN, bus, ridership

Procedia PDF Downloads 162
24030 Combination of Artificial Neural Network Model and Geographic Information System for Predicting Water Quality

Authors: Sirilak Areerachakul

Abstract:

Water quality has initiated serious management efforts in many countries. Artificial Neural Network (ANN) models have been developed as forecasting tools for predicting water quality trends based on historical data. This study endeavors to automatically classify water quality. The water quality classes are evaluated using six factor indices: pH value (pH), Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Nitrate Nitrogen (NO3N), Ammonia Nitrogen (NH3N), and Total Coliform (T-Coliform). The methodology involves applying data mining techniques using multilayer perceptron (MLP) neural network models. The data cover 11 sites on the Saen Saep canal in Bangkok, Thailand, obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. The multilayer perceptron neural network exhibits a high classification accuracy of 94.23% for the water quality of the Saen Saep canal. This encouraging result could subsequently be combined with GIS data to improve the classification accuracy significantly.

Keywords: artificial neural network, geographic information system, water quality, computer science

Procedia PDF Downloads 339
24029 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for the Data Interpolating Empirical Orthogonal Function Algorithm

Authors: Ping Bo, Meng Yunshan

Abstract:

Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. A disadvantage of SST data, however, is the high percentage of missing values, mainly caused by cloud coverage. The Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing missing data that has been widely used in the oceanographic field. Reconstructing SST images within a long time series using DINEOF can introduce large discontinuities; one solution to this problem is to filter the temporal covariance matrix to reduce the spurious variability. Building on previous research, an algorithm is presented in this paper to improve the temporal correlations in the EOF expansion. As in previous research, a filter, such as a Laplacian filter, is applied to the temporal covariance matrix, but the presented algorithm also considers the temporal relationship between two images: for example, two images in the same season are more likely to be correlated than images from different seasons, so the latter pair is weighted less in the filter (see the sketch below). The presented approach is tested on the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning 1989 to 2006. The results obtained from the presented algorithm are compared with those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking the temporal relationship into account.
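
A sketch of the seasonally aware temporal filtering described above, reduced to its core idea: pairs of time steps from different seasons are down-weighted in the temporal covariance matrix before the EOF decomposition. The weight values and the stand-in covariance are illustrative assumptions, not the paper's exact filter.

```python
import numpy as np

n_months = 216                         # monthly images spanning 1989-2006
rng = np.random.default_rng(1)
C = rng.normal(size=(n_months, n_months))
C = C @ C.T / n_months                 # stand-in temporal covariance matrix

month = np.arange(n_months) % 12
same_season = (month[:, None] // 3) == (month[None, :] // 3)
W = np.where(same_season, 1.0, 0.3)    # down-weight cross-season pairs

C_filtered = C * W                     # element-wise seasonal weighting
# DINEOF would proceed with the EOF decomposition of C_filtered.
print(C_filtered.shape)
```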

Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter

Procedia PDF Downloads 321
24028 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency

Authors: Fanqiang Kong, Chending Bian

Abstract:

In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. The first property is joint sparsity, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers participating in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem (one of its building blocks is sketched below). Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
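
One building block of the variable-splitting/augmented-Lagrangian solver described above, sketched under assumptions: the proximal operator of the nuclear norm (singular value thresholding), which enforces the rank-deficiency assumption on the abundance matrix at each iteration. This is not the full unmixing algorithm.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 6)) @ rng.normal(size=(6, 300))  # rank-6 abundances
A_noisy = A + 0.1 * rng.normal(size=A.shape)

A_hat = svt(A_noisy, tau=5.0)
print(np.linalg.matrix_rank(A_hat))   # thresholding recovers the low rank
```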

Keywords: hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation

Procedia PDF Downloads 254
24027 Electronic Physical Activity Record (EPAR): Key for Data Driven Physical Activity Healthcare Services

Authors: Rishi Kanth Saripalle

Abstract:

Medical experts highly recommend including physical activity in everyone's daily routine, irrespective of gender or age, as it helps to improve various medical conditions or curb potential issues. At the same time, experts are diligently trying to provide various healthcare services (interventions, plans, exercise routines, etc.) that promote healthy living and increase physical activity within ever more hectic schedules. With the introduction of wearables, individuals are able to track, analyze, and visualize their daily physical activity. However, there seems to be no commonly agreed standard, in either structure or semantics, for representing, gathering, aggregating, and analyzing an individual's physical activity data from disparate sources (exercise plans, multiple wearables, etc.). This makes it highly impractical to develop data-driven physical activity applications and healthcare programs. Further, the inability to integrate physical activity data into an individual's Electronic Health Record, to provide a holistic picture of that individual's health, still eludes the experts. This article identifies three primary reasons for this issue. First, there is no agreed standard for representing and sharing physical activity data across disparate systems. Second, various organizations (e.g., LA Fitness, Gold's Gym) and research-backed interventions and programs still rely primarily on paper or unstructured formats (such as text or notes) to keep track of the data generated by physical activities. Finally, most wearable devices operate in silos. The article identifies the underlying problem, explores the idea of reusing existing standards, and identifies the essential modules required to move forward.

Keywords: electronic physical activity record, physical activity in EHR EIM, tracking physical activity data, physical activity data standards

Procedia PDF Downloads 279
24026 Developing Pavement Structural Deterioration Curves

Authors: Gregory Kelly, Gary Chai, Sittampalam Manoharan, Deborah Delaney

Abstract:

A Structural Number (SN) can be calculated for a road pavement from the properties and thicknesses of the surface, base course, sub-base, and subgrade. Historically, the cost of collecting structural data has been very high. Data were initially collected using Benkelman Beams and are now collected by Falling Weight Deflectometer (FWD). The structural strength of pavements weakens over time due to environmental and traffic loading factors, but for lack of data, no structural deterioration curve for pavements has been implemented in a Pavement Management System (PMS). The International Roughness Index (IRI), a measure of the road longitudinal profile, has been used as a proxy for a pavement's structural integrity. This paper offers two conceptual methods to develop Pavement Structural Deterioration Curves (PSDC). In the first, structural data are grouped in sets by design Equivalent Standard Axles (ESA). An Initial SN (ISN), Intermediate SNs (SNI), and a Terminal SN (TSN) are used to develop the curves (a curve-fitting sketch follows below). Using FWD data, the ISN is the SN after the pavement is rehabilitated (the financial accounting 'modern equivalent'). Intermediate SNIs are SNs other than the ISN and TSN. The TSN is defined as the SN of the pavement when it was approved for pavement rehabilitation. The second method uses Traffic Speed Deflectometer (TSD) data. The road network, already divided into road blocks, is grouped by traffic loading. For each traffic loading group, road blocks that have had a recent pavement rehabilitation are used to calculate the ISN, and those planned for pavement rehabilitation to calculate the TSN. The remaining SNs are used to complete the age-based or, if available, historical traffic-loading-based SNIs.
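
A sketch of fitting a deterioration curve for one design-ESA group of the kind proposed: SN observations (ISN at age zero, SNIs in between, TSN at rehabilitation) are fitted against pavement age. The exponential decay form and the data points are illustrative assumptions; the paper does not prescribe a functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

age = np.array([0, 4, 9, 14, 19, 24])            # years since rehabilitation
sn = np.array([5.2, 4.8, 4.3, 3.9, 3.4, 3.0])    # ISN ... SNIs ... TSN

def decay(t, isn, k):
    return isn * np.exp(-k * t)

(isn, k), _ = curve_fit(decay, age, sn, p0=[5.0, 0.02])
print(f"fitted ISN = {isn:.2f}, decay rate k = {k:.4f} per year")

tsn = 3.0                                        # terminal SN for this group
print(f"predicted rehabilitation age: {np.log(isn / tsn) / k:.1f} years")
```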

Keywords: conceptual, pavement structural number, pavement structural deterioration curve, pavement management system

Procedia PDF Downloads 538
24025 Towards a Large Scale Deep Semantically Analyzed Corpus for Arabic: Annotation and Evaluation

Authors: S. Alansary, M. Nagi

Abstract:

This paper presents an approach to the semantic annotation of an Arabic corpus using the Universal Networking Language (UNL) framework. UNL is intended to be a promising strategy for providing a large collection of semantically annotated texts with formal, deep semantics rather than shallow semantics. The result constitutes a semantic resource (semantic graphs) that is editable and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles, and rhetorical relations, into a single semantic formalism for knowledge representation. The paper also presents the Interactive Analysis tool for automatic semantic annotation (IAN). In addition, the cornerstones of the proposed methodology, the disambiguation and transformation rules, are presented. Semantic annotation using UNL has been applied to a corpus of 20,000 Arabic sentences representing the most frequent structures in the Arabic Wikipedia. The representation at different linguistic levels is illustrated, starting from the morphological level, passing through the syntactic level, until the semantic representation is reached. The output has been evaluated using the F-measure and is 90% accurate. This demonstrates how powerful the formal environment is, as it enables intelligent text processing and search.

Keywords: semantic analysis, semantic annotation, Arabic, universal networking language

Procedia PDF Downloads 579
24024 Nilsson Model Performance in Estimating Bed Load Sediment, Case Study: Tale Zang Station

Authors: Nader Parsazadeh

Abstract:

The variety of bed sediment load relationships, insufficient information and data, and the influence of river conditions make the selection of an optimum relationship for a given river extremely difficult. Hence, in order to select the best formula, the bed load equations should be evaluated: the affecting factors need to be scrutinized, the equations should be verified, and re-evaluation may be needed. In this research, the sediment bed load of the Dez Dam at Tal-e Zang Station has been studied. After reviewing the available references, the most common formulae were selected, including Meyer-Peter and Müller, and MS Excel was used to compute and evaluate the data. Then, 52 series of data already measured at the station were re-evaluated, and the sediment bed load was determined. The findings are: (1) the bed loads calculated by the different equations differed greatly from the measured data; (2) the proportion of predictions with a difference ratio r between 0.5 and 2.00 was 0% for all equations except the Nilsson and Shields equations, for which it was 61.5% and 59.6%, respectively (this check is sketched below); (3) after reviewing the results and discarding probably erroneous measurements (human or machine), the Nilsson equation, with its r value higher than 1, may be used as an effective equation for estimating the bed load at Tal-e Zang Station and for predicting activities that depend on bed sediment load estimates. Since only a few studies have been conducted so far, these results may also be of assistance to operators and consulting companies.
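
A sketch of the discrepancy-ratio check used above: for each observation, r = calculated / measured bed load, and an equation is scored by the share of r values falling between 0.5 and 2.0. The arrays are illustrative, not the study's 52 measured series.

```python
import numpy as np

measured = np.array([12.0, 30.0, 8.5, 55.0, 20.0])    # measured bed load
calculated = np.array([10.5, 61.0, 9.2, 48.0, 75.0])  # one equation's output

r = calculated / measured
within = np.mean((r >= 0.5) & (r <= 2.0)) * 100
print(f"predictions within 0.5 <= r <= 2.0: {within:.1f}%")
```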

Keywords: bed load, empirical relation ship, sediment, Tale Zang Station

Procedia PDF Downloads 358
24023 Calpoly Autonomous Transportation Experience: Software for a Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike most self-driving vehicles, which are developed to operate among other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work: researchers from mechanical engineering, electrical engineering, and computer science are working together to attack the problem from different perspectives (hardware, software, and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location; users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path-planning algorithm (a minimal sketch follows below); D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid static obstacles and walking pedestrians within a safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route, and we will build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work in GPS-denied situations. CATE relies on its GPS for its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that fuses data from multiple sensors (such as GPS, IMU, and odometry) to increase the confidence of localization. We also noticed that GPS signals can easily be degraded or blocked on campus by high-rise buildings or trees; the UKF also helps here by generating a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
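
A minimal A* sketch over a campus walk-path graph of the kind described (vertices = landmarks, edges = path segments with costs). The node names, coordinates, and costs are invented for illustration; the real system would also encode the designated edge behaviors.

```python
import heapq, math

coords = {"Gate": (0, 0), "Library": (2, 1), "Quad": (3, 3), "Lab": (5, 3)}
graph = {"Gate": [("Library", 2.4)], "Library": [("Quad", 2.3)],
         "Quad": [("Lab", 2.0)], "Lab": []}

def h(a, b):
    """Admissible heuristic: straight-line distance between landmarks."""
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def astar(start, goal):
    frontier = [(h(start, goal), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h(nxt, goal),
                                      g + cost, nxt, path + [nxt]))
    return None, math.inf

print(astar("Gate", "Lab"))   # (['Gate', 'Library', 'Quad', 'Lab'], 6.7)
```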

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 140
24022 Hierarchical Filtering Method of Threat Alerts Based on Correlation Analysis

Authors: Xudong He, Jian Wang, Jiqiang Liu, Lei Han, Yang Yu, Shaohua Lv

Abstract:

Nowadays, internet threats are enormous and increasing, yet the classification of the huge number of alert messages generated in this environment is relatively coarse. This affects the accuracy of network situation assessment and makes it harder for security managers to deal with emergencies. In order to deal with potential network threats effectively and provide better data for network situation awareness, it is essential to build a hierarchical filtering method to prevent the threats. This paper establishes a model for data monitoring that filters the original data systematically to obtain a threat grade, storing the results for reuse. First, it filters by vulnerable resources, open ports of host devices, and services. Then it uses entropy theory to calculate the performance changes of the host devices at the time the threat occurs and filters again (the entropy step is sketched below). Finally, the changes in performance values at the time of the threat are sorted. Alerts and performance data collected in a real network environment are used for evaluation and analysis. The comparative experimental analysis shows that the method filters threat alerts effectively.
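
A sketch of the entropy step described: the Shannon entropy of a host performance metric's distribution is compared between a quiet baseline window and the window around the alert; a pronounced change suggests a real impact on the host. The windows shown are synthetic placeholders.

```python
import numpy as np

def shannon_entropy(samples, bins=10):
    """Shannon entropy (bits) of a sample's histogram distribution."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

baseline_cpu = np.random.default_rng(0).normal(30, 2, 500)   # quiet period
during_alert = np.random.default_rng(1).normal(70, 15, 500)  # incident

delta = abs(shannon_entropy(during_alert) - shannon_entropy(baseline_cpu))
print(f"entropy change: {delta:.2f} bits")  # larger change -> higher grade
```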

Keywords: correlation analysis, hierarchical filtering, multisource data, network security

Procedia PDF Downloads 198
24021 Human Action Retrieval System Using Features Weight Updating Based Relevance Feedback Approach

Authors: Munaf Rashid

Abstract:

For content-based human action retrieval systems, search accuracy is often inferior for two reasons: 1) global information pertaining to videos is totally ignored, and only low-level motion descriptors are considered significant features for matching the similarity between query and database videos; and 2) there is a semantic gap between the high-level user concept and low-level visual features. In this paper, we propose a method that addresses these two issues, and in doing so the paper contributes in two ways. First, we introduce a method that uses both global and local information in one framework for the action retrieval task. Second, to minimize the semantic gap, the user concept is involved by incorporating a features weight updating (FWU) Relevance Feedback (RF) approach. We use statistical characteristics to dynamically update the weights of the feature descriptors so that after every RF iteration the feature space is modified accordingly (see the sketch below). For testing and validation purposes, two human action recognition datasets have been utilized, namely Weizmann and UCF. Results show that even with a number of visual challenges, the proposed approach performs well.
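
A sketch of statistics-based feature weight updating of the kind described: after a feedback round, features that vary little across the videos the user marked relevant receive higher weight in the distance function. The inverse-standard-deviation rule is a common choice and an assumption here, not necessarily the authors' exact formula.

```python
import numpy as np

def update_weights(relevant_feats, eps=1e-6):
    """relevant_feats: (n_relevant, n_features) descriptor matrix."""
    sigma = relevant_feats.std(axis=0)
    w = 1.0 / (sigma + eps)          # consistent features get large weight
    return w / w.sum()               # normalize to sum to one

def weighted_distance(q, x, w):
    return np.sqrt(np.sum(w * (q - x) ** 2))

rng = np.random.default_rng(0)
relevant = rng.normal(size=(5, 8))   # 5 videos the user marked relevant
w = update_weights(relevant)
print(weighted_distance(relevant[0], rng.normal(size=8), w))
```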

Keywords: relevance feedback (RF), action retrieval, semantic gap, feature descriptor, codebook

Procedia PDF Downloads 467
24020 A Review of Methods for Handling Missing Data in the Form of Dropouts in Longitudinal Clinical Trials

Authors: A. Satty, H. Mwambi

Abstract:

Much research based on clinical trials data is characterized by the unavoidable problem of dropout as a result of missing or erroneous values. This paper reviews some of the various techniques for addressing the dropout problem in longitudinal clinical trials. The fundamental concepts of dropout patterns and mechanisms are discussed. The study presents five general techniques for handling dropout: (1) deletion methods; (2) imputation-based methods; (3) data augmentation methods; (4) likelihood-based methods; and (5) MNAR-based methods. Under each technique, several methods commonly used to deal with dropout are presented, including a review of the existing literature examining the effectiveness of these methods in the analysis of incomplete data. Two application examples are presented to study the potential strengths and weaknesses of some of the methods under certain dropout mechanisms, as well as to assess the sensitivity of the modelling assumptions.

Keywords: incomplete longitudinal clinical trials, missing at random (MAR), imputation, weighting methods, sensitivity analysis

Procedia PDF Downloads 410
24019 Feedback Preference and Practice of English Majors in Pronunciation Instruction

Authors: Claerchille Jhulia Robin

Abstract:

This paper discusses the perspective of ESL learners on pronunciation instruction. It sought to determine how these learners view the type of feedback their speech teacher gives and its impact on their own classroom practice of providing feedback. The study utilized a combined quantitative-qualitative approach. The respondents were Education students majoring in English. A survey questionnaire and an interview guide were used for data gathering. The data from the survey were tabulated using frequency counts, and the interview data were transcribed and analyzed. Results showed that ESL learners favor immediate corrective feedback and do not find any issue with being corrected in front of their peers. They also practice the same corrective technique in their own classrooms.

Keywords: ESL, feedback, learner perspective, pronunciation instruction

Procedia PDF Downloads 228
24018 In Search of High Growth: Mapping Out Academic Spin-Offs' Performance in Catalonia

Authors: F. Guspi, E. García

Abstract:

This exploratory study gives an overview of the evolution of the main financial and performance indicators of Academic Spin-Offs, and of High-Growth Academic Spin-Offs, in year 3 and year 6 after their creation in the region of Catalonia, Spain. The study compares and evaluates the results of these different measures of performance and the degree of success of these companies for each university. We found that the average Catalonian Academic Spin-Off is small and has not reached the sustainability stage by year 6. By contrast, a small group of High-Growth Academic Spin-Offs exhibits robust performance, with high profits in year 6. Our results support the need to increase selectivity and support for these companies, especially around year 3, because they are the ones that will bring wealth and employment. The university's role as an investor is constrained by rigid norms and habits that impede an efficient economic return on its ASO investments. Universities with high performance in sales and employment in year 3 could not always sustain this growth to year 6 because their ASOs were not profitable. Conversely, profitable ASOs exhibit superior performance on all measurement indicators in year 6. We advocate the need for balanced growth (growth with profits) as a way to achieve subsequent continuous growth.

Keywords: Academic Spin-Off (ASO), university entrepreneurship, entrepreneurial university, high growth, New Technology Based Companies (NTBC), University Spin-Off

Procedia PDF Downloads 456
24017 Pb and Ni Removal from Aqueous Environment by Green Synthesized Iron Nanoparticles Using Fruit Cucumis Melo and Leaves of Ficus Virens

Authors: Amandeep Kaur, Sangeeta Sharma

Abstract:

In view of the serious problem of heavy metal (Pb²⁺ and Ni²⁺) ions in aqueous environments, a rapid search for efficient adsorbents for the adsorption of heavy metals has become highly desirable. In this quest, green-synthesized Fe nanoparticles (NPs) have gathered attention because of their excellent capability to adsorb heavy metals from aqueous solution. This research reports the fabrication of Fe NPs using the fruit Cucumis melo and leaves of Ficus virens via a biogenic synthesis route. The synthesized CM-Fe-NPs and FV-Fe-NPs were then tested as potential bio-adsorbents for the removal of Pb²⁺ and Ni²⁺ in batch adsorption experiments. The influence of a range of parameters was investigated: initial Pb/Ni concentration (5, 10, 15, 20, 25 mg/L), contact time (10 to 200 min), adsorbent dosage (0.5, 0.10, 0.15 mg/L), shaking speed (120 to 350 rpm), and pH value (6, 7, 8, 9). The maximum removal with CM-Fe-NPs and FV-Fe-NPs was achieved at pH 7, a metal concentration of 5 mg/L, a dosage of 0.9 g/L, a shaking speed of 200 rpm, and a reaction contact time of 200 min. The results are in accordance with the Freundlich and Langmuir adsorption models (a fitting sketch follows below); consequently, the adsorbents could be highly applicable in wastewater treatment plants.
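
A sketch of fitting the Langmuir isotherm named above, q_e = q_max * K * C_e / (1 + K * C_e), to batch equilibrium data; the Freundlich model (q_e = K_f * C_e^(1/n)) would be fitted the same way. The data points are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

c_e = np.array([0.5, 1.2, 2.8, 5.0, 9.5])    # equilibrium conc. (mg/L)
q_e = np.array([2.1, 4.0, 6.8, 8.7, 10.2])   # metal uptake (mg/g)

def langmuir(c, q_max, K):
    return q_max * K * c / (1 + K * c)

(q_max, K), _ = curve_fit(langmuir, c_e, q_e, p0=[12.0, 0.5])
resid = q_e - langmuir(c_e, q_max, K)
r2 = 1 - np.sum(resid**2) / np.sum((q_e - q_e.mean())**2)
print(f"q_max = {q_max:.2f} mg/g, K = {K:.3f} L/mg, R^2 = {r2:.3f}")
```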

Keywords: adsorption, biogenic synthesis, nanoparticles, nickel, lead

Procedia PDF Downloads 83
24016 Automatic Tagging and Accuracy in Assamese Text Data

Authors: Chayanika Hazarika Bordoloi

Abstract:

This paper is an attempt to work on Assamese, a highly inflectional language and one of the national languages of India, for which very little computational research has been achieved. Building a language processing tool for a natural language is not smooth, as the standard and the language representation change at various levels. This paper presents the inflectional suffixes of Assamese verbs and shows how statistical tools, along with linguistic features, can improve tagging accuracy. A Conditional Random Fields (CRF) tool was used to automatically tag and train the text data; accuracy improved after linguistic features were fed into the training data. Because Assamese is highly inflectional, standardizing its morphology is challenging, and inflectional suffixes are therefore used as a feature of the text data. In order to analyze the inflections of Assamese word forms, a list comprising all possible suffixes that the various categories can take was prepared (suffix features of this kind are sketched below). Assamese words can be classified into inflected classes (noun, pronoun, adjective, and verb) and uninflected classes (adverb and particle). The corpus used for this morphological analysis contains a large number of tokens; it is a mixed corpus and has given satisfactory accuracy. The accuracy rate of the tagger gradually improved with the modified training data.
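
A hedged sketch of suffix-aware CRF tagging in the spirit described, using the sklearn-crfsuite package rather than the original CRF tool; the transliterated tokens, suffix lengths, and tags are placeholders, not real Assamese training data.

```python
import sklearn_crfsuite

def features(sent, i):
    word = sent[i]
    return {
        "word": word,
        "suffix3": word[-3:],   # inflectional suffix cues
        "suffix2": word[-2:],
        "prev": sent[i - 1] if i else "<S>",
    }

train_sents = [["xi", "gol"], ["tai", "khale"]]   # toy transliterations
train_tags = [["PRON", "VERB"], ["PRON", "VERB"]]

X = [[features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X))
```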

Keywords: CRF, morphology, tagging, tagset

Procedia PDF Downloads 189