Search results for: claims database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1912

1672 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry, appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach for offering SaaS is basing the system’s architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, transactions need to strictly adhere to the tenant’s known business processes. This paper argues that security in cloud databases should not be considered an isolated issue; rather, it should be included in the initial phases of database design and monitored continuously throughout the whole development process. It aims to identify a number of the most common security risks and threats, specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed, and some techniques which might be utilised to overcome them are listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant SaaS. This would then assist cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers’ data.
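
As a purely illustrative sketch (not the paper's implementation), the snippet below shows one policy such requirements typically mandate: every query against a shared table is forced through a wrapper that appends the authenticated tenant's identifier, so cross-tenant reads are impossible. The schema, table, and function names are invented.

```python
# Hypothetical sketch: enforcing tenant isolation on a shared table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("acme", "bolts"), ("acme", "nuts"), ("rival", "gears")])

def tenant_query(tenant_id: str, item_filter: str):
    """Every access path appends the tenant predicate; callers cannot omit it."""
    return con.execute(
        "SELECT item FROM orders WHERE tenant_id = ? AND item LIKE ?",
        (tenant_id, item_filter),  # parameterised, which also blocks injection
    ).fetchall()

print(tenant_query("acme", "%"))  # returns only acme's rows, never rival's
```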

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 131
1671 ArcGIS as a Tool for Infrastructure Documentation and Asset Management: Establishing a GIS for Computer Network Documentation

Authors: John Segars

Abstract:

Built out of a real-world need for better, more detailed asset and infrastructure documentation, this project lays out the case for using the database functionality of ArcGIS as a tool to track and maintain infrastructure location, status, maintenance and serviceability. Workflows and processes are presented and detailed which may be applied to an organization’s infrastructure needs, allowing it to make use of the robust tools which surround the ArcGIS platform. The end result is a value-added information system framework with a geographic component, e.g., the spatial location of various I.T. assets: a detailed set of records which not only documents location but also captures the maintenance history for assets, along with photographs and documentation of these assets as attachments to the numerous feature class items. In addition to the asset location and documentation benefits, staff will be able to log into the devices and pull SNMP (Simple Network Management Protocol) based query information from within the user interface. The entire collection of information may be displayed in ArcGIS, via a JavaScript-based web application, or via queries to the back-end database. The project is applicable to all organizations which maintain an IT infrastructure but specifically targets post-secondary educational institutions, where access to ESRI resources is generally already available in-house.
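
The SNMP query step described above could look like the following hedged sketch using the pysnmp library; the host address, community string, OID, and the idea of attaching the result to a feature-class record are placeholder assumptions, not details from the project.

```python
# Hypothetical sketch: polling a network device for its system description
# via SNMP, in the spirit of the abstract. Host and community are invented.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def query_sysdescr(host: str, community: str = "public") -> str:
    """Return the SNMPv2 sysDescr string for a device, or raise on error."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),           # SNMP v2c
        UdpTransportTarget((host, 161), timeout=2),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return str(var_binds[0][1])

# e.g. store query_sysdescr("10.0.0.1") as an attribute of the matching
# feature-class record for that device.
```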

Keywords: ESRI, GIS, infrastructure, network documentation, PostgreSQL

Procedia PDF Downloads 154
1670 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

Consider a database that records average traffic speeds, measured at five-minute intervals, for all the links in the traffic network of a metropolitan city. Models learned from this data that can predict future traffic speed would be beneficial for applications such as car navigation systems, but building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it may suffice to look at only the recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighbor links in its up/down-stream. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
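
A minimal sketch of the central idea, restricting the k-NN search to only the most recent records and comparing the two settings, might look like this; the feature layout and the synthetic data are assumptions, and scikit-learn's k-NN regressor stands in for the authors' own implementation.

```python
# Sketch: k-NN speed prediction on the full history vs. a recent-data window.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(10, 90, size=(100_000, 6))         # e.g. 6 speed features/record
y = X[:, 0] * 0.8 + rng.normal(0, 3, size=len(X))  # future speed of target link

def knn_predict(X_hist, y_hist, X_query, k=10, recent=None):
    """Fit k-NN on the full history, or on only the last `recent` records."""
    if recent is not None:
        X_hist, y_hist = X_hist[-recent:], y_hist[-recent:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X_hist, y_hist)
    return model.predict(X_query)

full = knn_predict(X[:-100], y[:-100], X[-100:])
fast = knn_predict(X[:-100], y[:-100], X[-100:], recent=20_000)  # recent only
print(np.abs(full - y[-100:]).mean(), np.abs(fast - y[-100:]).mean())
```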

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 334
1669 Transcriptome Analysis of Saffron (Crocus sativus L.) Stigma Focusing on Identification of Genes Involved in the Biosynthesis of Crocin

Authors: Parvaneh Mahmoudi, Ahmad Moeni, Seyed Mojtaba Khayam Nekoei, Mohsen Mardi, Mehrshad Zeinolabedini, Ghasem Hosseini Salekdeh

Abstract:

Saffron (Crocus sativus L.) is one of the most important spice and medicinal plants. The three-branch style of C. sativus flowers is the most economically important part of the plant and is known as saffron, which has several medicinal properties. Despite the economic and biological significance of this plant, knowledge about its molecular characteristics is very limited. In the present study, we, for the first time, constructed a comprehensive dataset for the C. sativus stigma through de novo transcriptome sequencing, using the Illumina paired-end sequencing technology. A total of 52,075,128 reads were generated and assembled into 118,075 unigenes, with an average length of 629 bp and an N50 of 951 bp. A total of 66,171 unigenes were identified; among them, 66,171 (56%) were annotated in the non-redundant National Center for Biotechnology Information (NCBI) database, 30,938 (26%) were annotated in the Swiss-Prot database, and 10,273 (8.7%) unigenes were mapped to 141 Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, while 52,560 (44%) and 40,756 (34%) unigenes were assigned to Gene Ontology (GO) categories and Eukaryotic Orthologous Groups of proteins (KOG), respectively. In addition, 65 candidate genes involved in the three stages of crocin biosynthesis were identified. Finally, the transcriptome sequencing of the saffron stigma was used to identify 6,779 potential microsatellite (SSR) molecular markers. High-throughput de novo transcriptome sequencing provided a valuable resource of transcript sequences of C. sativus in public databases. In addition, most of the candidate genes potentially involved in crocin biosynthesis were identified, and these could be further utilized in functional genomics studies. Furthermore, the numerous SSRs obtained might contribute to addressing open questions about the origin of this amphiploid species with probably little genetic diversity.
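
One step reported above, mining the assembled unigenes for SSRs, can be illustrated with a short regex-based scan; the motif lengths and repeat threshold below are common defaults, not the authors' actual detection parameters.

```python
# Sketch: finding simple sequence repeats (SSRs) in a unigene sequence.
import re

# di- to hexa-nucleotide motifs repeated at least five times in total
SSR_PATTERN = re.compile(r"([ATGC]{2,6}?)\1{4,}")

def find_ssrs(seq: str):
    """Yield (motif, repeat_count, start) for each SSR found in `seq`."""
    for m in SSR_PATTERN.finditer(seq.upper()):
        motif = m.group(1)
        yield motif, len(m.group(0)) // len(motif), m.start()

unigene = "TTGCACACACACACACGGATATATATATATATCCG"
print(list(find_ssrs(unigene)))   # [('CA', 6, 3), ('AT', 7, 18)]
```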

Keywords: saffron, transcriptome, NGS, bioinformatics

Procedia PDF Downloads 60
1668 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
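
As an illustrative stand-in for the Bayesian classification step of the framework (not the authors' actual algorithm), the following sketch classifies synthetic feature vectors with a Gaussian naive Bayes model.

```python
# Sketch: Bayesian classification of image-derived feature vectors.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 8))                 # e.g. 8-dim feature vectors
labels = (features[:, 0] + features[:, 3] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                          random_state=1)
clf = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```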

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 417
1667 The Study of Socio-Economic and Environmental Impacts on Semi-Arid Environments Using GIS in the Eastern Aurès, Algeria

Authors: Benmessaoud Hassen

Abstract:

In this study, we address socio-economic and environmental impacts on the physical environment, especially their spatiotemporal dynamics, in the semi-arid and arid eastern Aurès. Comprising 11 municipalities, the study area spreads over a relatively large surface of about 60,000 ha. The temporal depth is quite important: environmental variation is analysed at three dates spread over twenty years (between 1987 and 2007). The multi-source data acquired in this context are integrated into a geographic information system (GIS). This allows, among other things, the calculation of areas and classes for each of the four thematic layers previously defined by a method inspired by MEDALUS (Mediterranean Desertification and Land Use). The database created is composed of four layers of information (population, livestock, farming and land use). Its analysis in space and time has been supplemented by validation against ground truth. Once corrected, the database is used to develop a comprehensive map through the calculation of a socio-economic and environmental index (ISCE). The resulting map and information do not consist only of figures on the present situation but can also be used to forecast future trends.

Keywords: impact of socio-economic and environmental, spatiotemporal dynamics, semi-arid environments, GIS, Eastern Aurès

Procedia PDF Downloads 294
1666 Mapping and Database on Mass Movements along the Eastern Edge of the East African Rift in Burundi

Authors: L. Nahimana

Abstract:

The eastern edge of the East African Rift in Burundi shows many mass movement phenomena, corresponding to landslides, mudflows, debris flows, spectacular erosion (mega-gullies), flash floods and alluvial deposits. These phenomena usually occur during the rainy season, and their extent and consequent damages vary widely. To manage these phenomena, it is necessary to adopt a methodological approach to their mapping with a structured database. The elements for this database are: the three-dimensional extent of the phenomenon, natural causes and conditions (geological lithology, slope, weathering depth and products, rainfall patterns, natural environment) and the anthropogenic factors corresponding to the various human activities. The extent of the area provides information about the possibilities and opportunities for mitigation techniques. The lithological nature allows understanding of the influence of the nature of the rock and its structure on the intensity of rock weathering, as well as the geotechnical properties of the weathering products. The slope influences land stability. The intensity of annual, monthly and daily rainfall helps in understanding the conditions of water saturation of the terrain. Certain natural circumstances, such as the presence of streams and rivers, promote footslope erosion and thus the occurrence and activity of mass movements. The construction of some infrastructures, such as new roads and agglomerations, deeply modifies the flow of surface and underground water and is followed by mass movements. Using geospatial data selected on the East African Rift in Burundi, cases of mass movements are presented illustrating the nature, importance, various factors and the extent of the damage. An analysis of these elements for each hazard can guide the options for mitigation of the phenomenon and its consequences.

Keywords: mass movement, landslide, mudflow, debris flow, spectacular erosion, mega-gully, flash flood, alluvial deposit, East African rift, Burundi

Procedia PDF Downloads 280
1665 Knowledge Management: Why Is It So Difficult? From “A Good Idea” to Organizational Contribution

Authors: Lisandro Blas, Héctor Tamanini

Abstract:

From the early 1990s to now, not many companies or organizations have been able to “really” implement a knowledge management (KM) system that works (not only as viewed through a measurement model, but with continuity). What are the reasons for that? Some of the reasons may be embedded in how KM is demanded (usefulness, priority, experts, a definition of KM) versus the importance and resources that organizations actually afford it (budget, a person responsible for a specific KM area, intangibility). Many organizations claim that knowledge management is important, but these claims are not reflected in their subsequent actions. For other tools or management ideas, organizations put economic and human resources to work. Why does this not occur with KM? This paper tries to explain some of these reasons and to address these situations through a survey done in 2011 for an IAPG (Argentinean Institute of Oil & Gas) congress.

Keywords: knowledge management in organizations, new perspectives, failure in implementation, claim

Procedia PDF Downloads 392
1664 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework

Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi

Abstract:

There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics from this huge database is a challenging task; therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework; hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to the key-frames. The OCR output and the detected slide text line types are adopted for keyword extraction, by which both video-level and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework.
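
The keyword-extraction step could be expressed as a mapper/reducer pair in the style of Hadoop Streaming; the sketch below is schematic, with invented input, and a real deployment would run such functions over the OCR text of key-frames on the cluster.

```python
# Schematic mapper/reducer pair for segment-level keyword counting.
from collections import Counter

def mapper(segment_id: str, ocr_text: str):
    """Emit ((segment, token), 1) pairs from one key-frame's OCR text."""
    for token in ocr_text.lower().split():
        if token.isalpha() and len(token) > 3:   # crude keyword filter
            yield (segment_id, token), 1

def reducer(pairs):
    """Sum counts per (segment, token) key, as a reduce step would."""
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    return counts

pairs = list(mapper("lec01_seg3", "Bayes theorem and Bayes estimators"))
print(reducer(pairs))   # segment-level keyword counts
```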

Keywords: video lectures, big video data, video retrieval, Hadoop

Procedia PDF Downloads 501
1663 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of Microcalcifications (MCCs) are the most frequent symptom of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Square Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant, based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made, using the confusion matrix and ROC analysis. Experiments are performed on data extracted from mammogram images of the DDSM database: a total of 380 suspicious areas are collected, containing 235 malignant and 145 benign samples. A set of 50 features is calculated for each suspicious area, after which an optimal subset of the 23 most suitable features is selected by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
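
A compact sketch of LS-SVM training is given below: unlike the standard SVM's quadratic program, the LS-SVM dual reduces to solving a single linear system. The data, kernel width, and regularization value are illustrative assumptions, not the paper's settings.

```python
# Sketch: LS-SVM binary classifier with an RBF kernel, trained by one solve.
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """y in {-1, +1}. Solve the LS-SVM linear system for (alpha, bias)."""
    n = len(y)
    omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                      # alpha, bias

def lssvm_predict(X_new, X, y, alpha, b, sigma=1.0):
    return np.sign(rbf(X_new, X, sigma) @ (alpha * y) + b)

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 23))                   # e.g. 23 selected features
y = np.where(X[:, 0] > 0, 1.0, -1.0)
alpha, b = lssvm_fit(X, y)
print((lssvm_predict(X, X, y, alpha, b) == y).mean())   # training accuracy
```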

Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization

Procedia PDF Downloads 336
1662 Execution of Joinery in Large Scale Projects: Middle East Region as a Case Study

Authors: Arsany Philip Fawzy

Abstract:

This study addresses the hurdles of project management in the joinery field. It is divided into two sections: the first sheds light on how to execute large-scale projects, with a specific focus on the Middle East region, and raises major obstacles that may face the joinery team, from site clearance to the coordination between the joinery team and the construction team. The second section technically analyzes the commercial side of joinery and how to control the main cost of the project to avoid financial problems; it also suggests empirical solutions to monitor the cost impact (e.g., variation of contract quantity and claims).

Keywords: clearance, quality, cost, variation, claim

Procedia PDF Downloads 74
1661 Artificial Intelligence Tax Simulator to Minimize Tax Liability for Multinational Corporations

Authors: Sean Goltz, Michael Mayo

Abstract:

The purpose of this research is to use the Global-Regulation.com database of the world’s laws, focusing on tax treaties between countries, in order to create an AI-driven tax simulator that will run an AI agent through potential tax scenarios across countries. The AI agent’s goal is to identify the scenario that will result in the minimum tax liability based on the tax treaties between countries. The results will be visualized in a three-dimensional matrix, delivered as an online web application. Multinational corporations run their business through multiple countries. These countries, in turn, have tax treaties with many other countries to regulate the payment of taxes on income that is transferred between them. As a result, planning the best tax scenario across multiple countries and numerous tax treaties is almost impossible. This research proposes to use the Global-Regulation.com database of world laws in English (machine-translated by the Google and Microsoft APIs) in order to create a simulator that includes the information in the tax treaties. Once ready, an AI agent will be sent through the simulator to identify the scenario that results in the minimum tax liability. Identifying the best tax scenario across countries may save multinational corporations, like Google, billions of dollars annually. Given the nature of the raw data and the domain of taxes (i.e., numbers), this is promising ground for employing artificial intelligence towards a practical and beneficial purpose.

Keywords: taxation, law, multinational, corporation

Procedia PDF Downloads 170
1660 Correlation and Prediction of Biodiesel Density

Authors: Nieves M. C. Talavera-Prieto, Abel G. M. Ferreira, António T. G. Portugal, Rui J. Moreira, Jaime B. Santos

Abstract:

Knowledge of biodiesel density over large ranges of temperature and pressure is important for predicting the behavior of fuel injection and combustion systems in diesel engines, and for the optimization of such systems. In this study, cottonseed oil was transesterified into biodiesel and its density was measured at temperatures between 288 K and 358 K and pressures between 0.1 MPa and 30 MPa, with expanded uncertainty estimated as ±1.6 kg·m⁻³. Experimental pressure-volume-temperature (pVT) cottonseed data was used along with literature data on 18 other biodiesels to build a database used to test the correlation of density with temperature and pressure using the Goharshadi–Morsali–Abbaspour equation of state (GMA EoS). To our knowledge, this is the first time that density measurements are presented for cottonseed biodiesel under such high pressures, and the first time the GMA EoS is used to model biodiesel density. The newly tested EoS allowed correlations within 0.2 kg·m⁻³, corresponding to average relative deviations within 0.02%. The database was then used to develop and test a new fully predictive model derived from the observed linear relation between density and degree of unsaturation (DU), which depends on the biodiesel FAME profile. The average density deviation of this method was only about 3 kg·m⁻³ within the temperature and pressure limits of application. These results represent appreciable improvements in the context of density prediction at high pressure when compared with other equations of state.
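
The predictive idea described above can be sketched as a simple linear fit of density against degree of unsaturation at fixed temperature and pressure; the numbers below are invented placeholders, not the paper's measurements.

```python
# Sketch: linear density-vs-DU relation fitted by least squares.
import numpy as np

DU = np.array([0.5, 0.9, 1.2, 1.5, 1.8])             # from FAME profiles
rho = np.array([870.0, 874.5, 878.0, 881.5, 885.0])  # kg/m^3 at fixed T, p

slope, intercept = np.polyfit(DU, rho, deg=1)
predicted = slope * 1.1 + intercept
print(f"rho(DU=1.1) ~ {predicted:.1f} kg/m^3")
```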

Keywords: biodiesel density, correlation, equation of state, prediction

Procedia PDF Downloads 586
1659 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks such as clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 391
1658 Approaches to Estimating the Radiation and Socio-Economic Consequences of the Fukushima Daiichi Nuclear Power Plant Accident Using the Data Available in the Public Domain

Authors: Dmitry Aron

Abstract:

Major radiation accidents carry potential risks of negative consequences for public health not only due to exposure but also because of the large-scale emergency measures taken by authorities to protect the population, which can lead to unreasonable social and economic damage. It is technically difficult, as a rule, to assess the possible costs and damages of decisions on evacuation or resettlement of residents in the shortest possible time, since this requires specially prepared information systems containing relevant information on demographic and economic parameters together with incoming data on radiation conditions. Foreign observers also face difficulties in assessing the consequences of an accident on foreign territory, since they usually do not have official and detailed statistical data on the territory of the foreign state beforehand, and they may consider the use of unofficial data from open Internet sources an unreliable and overly labor-consuming procedure. This paper describes an approach to the prompt creation of a relational database that contains detailed actual data on the economics, demographics and radiation situation in Fukushima Prefecture during the Fukushima Daiichi NPP accident, obtained by the author from open Internet sources. This database was developed and used to assess the number of evacuated residents, radiation doses, expected financial losses and other parameters of the affected areas. The costs for the areas with temporarily evacuated and long-term resettled populations were investigated, and the radiological and economic effectiveness of the measures taken to protect the population was estimated. Some of the results are presented in the article. The study showed that such a tool for analyzing the consequences of radiation accidents can be prepared in a short space of time for the entire territory of Japan, and that it can serve for modeling the social and economic consequences of hypothetical accidents at any nuclear power plant on its territory.
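
A toy sketch of the kind of relational database described, built with Python's standard sqlite3 module, is shown below; the schema and figures are assumptions for demonstration, not the author's actual data.

```python
# Sketch: a minimal relational schema for accident-consequence analysis.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE municipality (id INTEGER PRIMARY KEY, name TEXT,
                           population INTEGER, evacuated INTEGER);
CREATE TABLE dose_rate (municipality_id INTEGER REFERENCES municipality(id),
                        measured_on TEXT, microsievert_per_h REAL);
""")
con.execute("INSERT INTO municipality VALUES (1, 'Example Town', 16000, 12000)")
con.execute("INSERT INTO dose_rate VALUES (1, '2011-03-20', 4.2)")

# e.g. fraction of residents evacuated, per municipality
for row in con.execute("""SELECT name, 1.0 * evacuated / population
                          FROM municipality"""):
    print(row)
```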

Keywords: Fukushima, radiation accident, emergency measures, database

Procedia PDF Downloads 167
1657 Climate Change and Health in Policies

Authors: Corinne Kowalski, Lea de Jong, Rainer Sauerborn, Niamh Herlihy, Anneliese Depoux, Jale Tosun

Abstract:

Climate change is considered one of the biggest threats to human health of the 21st century, yet the link between climate change and health has received relatively little attention in the media, in research and in policy-making. A long-term and broad overview of how health is represented in legislation on climate change is missing from the legislative literature: it is unknown if, or how, the argument for health is referred to in legal clauses addressing climate change in national and European legislation. Integrating science-based evidence on the health impacts of climate change into policies could be a key step towards inciting the political and societal changes necessary to decelerate global warming, and it may also drive the implementation of new strategies to mitigate the consequences for health systems. To provide an overview of this issue, we are analyzing the Global Climate Legislation Database provided by the Grantham Research Institute on Climate Change and the Environment, established in 2008 at the London School of Economics and Political Science. The database consists of climate change legislation (updated as of 1 January 2015) in 99 countries around the world and offers relevant information about the state of climate-related policies. We will use the database to systematically analyze the 829 identified laws to identify how health is represented as a relevant aspect of climate change legislation. We are conducting explorative research on national and supranational legislation and anticipate health to be addressed in various forms. The goal is to highlight how often, in what specific terms, and in which aspects health or the health risks of climate change are mentioned in various laws; the position and recurrence of the mention of health are also of importance. Data will be extracted with a complete quotation of the sentence which mentions health, which will allow a second, qualitative stage to analyze which aspects of health are represented and in what context. This study is part of an interdisciplinary project called 4CHealth that confronts the results of research done on scientific, political and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.
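
The planned extraction step might be prototyped as below: scanning legislation text for sentences that mention health and keeping the full quotation for the later qualitative stage. The term list and sample text are illustrative assumptions.

```python
# Sketch: extracting health-mentioning sentences from legislation text.
import re

HEALTH_TERMS = re.compile(r"\b(health|disease|mortality|morbidity)\b", re.I)

def health_sentences(text: str):
    """Return every sentence in `text` that mentions a health-related term."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if HEALTH_TERMS.search(s)]

law = ("This act establishes emission targets. Measures shall protect "
       "public health from heat-related mortality. Reporting is annual.")
print(health_sentences(law))
```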

Keywords: climate change, explorative research, health, policies

Procedia PDF Downloads 338
1656 Fluid Prescribing Post Laparotomies

Authors: Gusa Hall, Barrie Keeler, Achal Khanna

Abstract:

Introduction: NICE guidelines have highlighted the consequences of IV fluid mismanagement. The main aim of this study was to audit fluid prescribing after laparotomies, to identify whether fluids were prescribed in accordance with NICE guidelines. Methodology: A retrospective database search of eight specific laparotomy procedures (right and left colectomy, Hartmann’s procedure, small bowel resection, perforated ulcer, abdominoperineal resection, anterior resection, panproctocolectomy, subtotal colectomy) highlighted 29 laparotomies between April 2019 and May 2019. Two of the 29 patients had secondary procedures during the same admission, giving n=27 patients. Case notes were reviewed for date of procedure, length of admission, fluids prescribed and amounts, nasogastric tube output, daily blood results for the electrolytes sodium and potassium, and operational losses. Results: Of the 27 patients, 93% (25/27) received IV fluids, but only 19% (5/27) received the correct IV fluids in accordance with NICE guidelines. Of those who received IV fluids, 93% (25/27) had correct electrolyte levels (sodium and potassium), and 100% (27/27) of patients received blood tests (U&Es) for electrolyte levels. Operational losses were documented in 0% (0/27) of cases. IV fluids matched nasogastric tube output in 100% (3/3) of the patients who had a nasogastric tube in situ. Conclusion: A PubMed literature review on barriers to safer IV prescribing highlighted educational interventions focused on prescriber knowledge rather than on how to execute the prescribing task. This audit suggests IV fluids after laparotomies are not being prescribed consistently in accordance with NICE guidelines. Surgical management plans should be clearer on IV fluid and electrolyte requirements for the 24 hours after the plan has been initiated. In addition, further teaching and training around IV prescribing are needed, together with frequent surgical audits of IV fluid prescribing post-surgery to evaluate improvements.

Keywords: audit, IV Fluid prescribing, laparotomy, NICE guidelines

Procedia PDF Downloads 95
1655 Database of Pharmacogenetics HLA-A*31:01 Allele in Thai Population and Carbamazepine-Induced SCARs

Authors: Watchawin Ekphinitphithaya, Patompong Satapornpong

Abstract:

Introduction: Carbamazepine (CBZ) is one of the antiepileptic drugs (AEDs) most prescribed by neurologists and non-neurologists worldwide. CBZ is usually prescribed along with other drugs, leading to the possibility of severe cutaneous adverse drug reactions (SCARs). The HLA-B*15:02 allele is strongly associated with CBZ-induced Stevens-Johnson syndrome and toxic epidermal necrolysis (SJS–TEN) in Han Chinese and other Asian populations but not in European populations, while the HLA-A*31:01 allele has been reported to be associated with CBZ-induced SCARs in European and Japanese populations. Objective: The aim of this study is to investigate the distribution of the pharmacogenetic HLA-A*31:01 marker, associated with carbamazepine-induced SCARs, in a healthy Thai population. Materials and Methods: In this prospective study, 350 unrelated healthy Thais were recruited. Human leukocyte antigen-A alleles were genotyped using PCR-sequence-specific oligonucleotides (PCR-SSO). Results: The most frequent HLA-A alleles were HLA-A*11:01 (190 alleles, 27.14%), HLA-A*24:02 (82 alleles, 11.71%), HLA-A*02:03 (80 alleles, 11.43%), HLA-A*33:03 (76 alleles, 10.86%), HLA-A*02:07 (58 alleles, 8.29%), HLA-A*02:01 (35 alleles, 5.00%), HLA-A*24:07 (29 alleles, 4.14%), HLA-A*02:06 and HLA-A*30:01 (15 alleles each, 2.14%), and HLA-A*01:01 (14 alleles, 2.00%). In particular, HLA-A*31:01 accounted for 6 of 700 alleles (0.86%) in the healthy Thai population. Previous research has reported varying distributions of HLA-A*31:01 in Asians, including 2% in Han Chinese, 9% in Japanese and 5% in Koreans; in addition, this allele is found in approximately 2-5% of Caucasian populations. Conclusions: A pharmacogenetics database supporting the screening of the HLA-A*31:01 allele is therefore vital in many populations, especially Thais, to avoid CBZ-induced SCARs before initiating treatment.

Keywords: Carbamazepine, HLA-A*31:01, Thai population, pharmacogenetics

Procedia PDF Downloads 145
1654 Forensic Analysis of Signal Messenger on Android

Authors: Ward Bakker, Shadi Alhakimi

Abstract:

The number of people moving towards more privacy-focused instant messaging applications has grown significantly. Signal is one of these instant messaging applications, which makes it interesting for digital investigators. In this research, we evaluate the artifacts that are generated by the Signal messenger for Android. This evaluation was done by using the features that Signal provides to create artifacts, after which we made an image of the internal storage and of the process memory and analysed it manually. The manual analysis revealed the content that Signal stores in different locations during its operation, and we were able to identify the artifacts and interpret how they are used. We also examined the source code of Signal and, using the knowledge obtained from it, developed a tool that decrypts some of the artifacts using the key stored in the Android Keystore. In general, we found that most artifacts are encrypted and encoded, even after decrypting some of them. During data visualization, we found, for example, that Signal does not use relationships between the data. Two interesting groups of artifacts were identified: those related to the database and those stored in the process memory dump. In the database, we found plaintext private and group chats, and in the memory dump, we were able to retrieve the plaintext access code to the application. We conclude that Signal contains a wealth of artifacts that could be very valuable to a digital forensic investigation.

Keywords: forensic, signal, Android, digital

Procedia PDF Downloads 56
1653 Internal and External Influences on the Firm Objective

Authors: A. Briseno, A. Zorrilla

Abstract:

Firms are increasingly responding to social and environmental claims from society. Practices oriented to address issues such as poverty, work equality, or renewable energy are being implemented more frequently by firms to address impacts on sustainability. However, questions remain on how the responses of firms vary across industries and regions between the social and the economic objectives. Using concepts from organizational theory and social network theory, this paper aims to create a theoretical framework that explains the internal and external influences that lead a firm to establish its objective. The framework explains why firms might have different objective orientations in terms of their economic and social prioritization.

Keywords: organizational identity, social network theory, firm objective, value maximization, social responsibility

Procedia PDF Downloads 282
1652 A Cloud Computing System Using Virtual Hyperbolic Coordinates for Services Distribution

Authors: Telesphore Tiendrebeogo, Oumarou Sié

Abstract:

Cloud computing technologies have attracted considerable interest in recent years and have become important for many existing database applications. Cloud computing provides a new mode of use and of offering of IT resources in general; such resources can be used “on demand” by anybody who has access to the internet. In particular, the Cloud platform provides an easy-to-use interface between providers and users, allowing providers to develop and provide software and databases for users across locations. Currently, many Cloud platform providers support large-scale database services. However, most of them only support simple keyword-based queries and cannot respond to complex queries efficiently, due to the lack of efficient multi-attribute index techniques; existing Cloud platform providers therefore seek to improve the performance of indexing techniques for complex queries. In this paper, we define a new cloud computing architecture based on a Distributed Hash Table (DHT) and design a prototype system. Next, we implement and evaluate our cloud computing indexing structure, based on a hyperbolic tree using virtual coordinates taken in the hyperbolic plane. Through experimental results compared with other cloud systems, we show that our solution ensures consistency and scalability for the Cloud platform.
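
A hedged sketch of the core primitive such a system needs is shown below: the distance between virtual coordinates in the Poincaré disk model of the hyperbolic plane, and a greedy choice of next hop towards a key's coordinate. The toy topology is an assumption; the paper's actual addressing scheme may differ.

```python
# Sketch: hyperbolic distance and greedy routing over virtual coordinates.
import math

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit disk."""
    du2 = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    den = (1 - (u[0]**2 + u[1]**2)) * (1 - (v[0]**2 + v[1]**2))
    return math.acosh(1 + 2 * du2 / den)

def greedy_next_hop(neighbours, target):
    """Forward to the neighbour hyperbolically closest to the target."""
    return min(neighbours, key=lambda n: poincare_distance(n, target))

neighbours = [(0.5, 0.1), (-0.3, 0.4), (0.2, 0.6)]
print(greedy_next_hop(neighbours, target=(0.45, 0.05)))
```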

Keywords: virtual coordinates, cloud, hyperbolic plane, storage, scalability, consistency

Procedia PDF Downloads 397
1651 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern Morocco

Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui

Abstract:

Logistic regression (LR) and multivariate adaptive regression splines (MarSpline) are applied and verified for the analysis of landslide susceptibility in Oudka, Morocco, using a geographical information system. From a spatial database containing data such as landslide mapping, topography, soil, hydrology and lithology, eight factors related to landslides, namely elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology, and the Normalized Difference Vegetation Index (NDVI), were calculated or extracted. Using these factors, landslide susceptibility indexes were calculated by the two methods. Before the calculation, the database was divided into two parts, the first for the formation of the model and the second for its validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
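
The validation procedure described can be sketched as follows: fit a logistic regression on one part of the database and report AUC on both the training part (success rate) and the held-out part (prediction rate). The eight conditioning factors are represented by synthetic columns; scikit-learn stands in for the authors' GIS tooling.

```python
# Sketch: success-rate vs. prediction-rate AUC for a susceptibility model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 8))        # elevation, slope, aspect, NDVI, ...
y = (X @ rng.normal(size=8) + rng.normal(size=2000) > 0).astype(int)

X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=3)
model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
print("success rate AUC:   ",
      roc_auc_score(y_fit, model.predict_proba(X_fit)[:, 1]))
print("prediction rate AUC:",
      roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```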

Keywords: landslide susceptibility mapping, regression logistic, multivariate adaptive regression spline, Oudka, Taounate

Procedia PDF Downloads 164
1650 Bibliometric Analysis of Global Research Trends on Organization Culture, Strategic Leadership and Performance Using Scopus Database

Authors: Anyia Nduka, Aslan Bin Amad Senin

Abstract:

Taking a behavioral perspective on organizational culture, strategic leadership, and performance (OC, SLP), we examine the role of strategic leadership as a key mechanism linking organizational culture and performance to organizational capacities. Given the increasing dependence of modern businesses on the use and scientific discovery of relevant data, research efforts around the globe have accelerated. In today’s corporate world, strategic leadership is still the most sustainable source of performance and competitive advantage, which is why it is critical to gain a deep understanding of the research area and to strengthen new collaborative networks in efforts to support the research transition towards these integrative efforts. This bibliometric analysis examines global trends in OC-SLP research based on publication output, author co-authorships, and co-occurrences of author keywords among authors and affiliated countries. A total of 2,829 journal articles published between 1974 and 2021 were retrieved from the Scopus database. The findings show a significant increase in the number of publications, with strong global collaboration (e.g., USA and UK). We also discovered that while most countries/territories without affiliations were centered in developing countries, the outstanding performance of Asian countries and the volume of their collaborations should be emulated.

Keywords: organizational culture, strategic leadership, organizational resilience, performance

Procedia PDF Downloads 49
1649 Correlation between Funding and Publications: A Pre-Step towards Future Research Prediction

Authors: Ning Kang, Marius Doornenbal

Abstract:

Funding is a very important, if not crucial, resource for research projects. Usually, funding organizations publish a description of the funded research to describe the scope of the funding award, and we would logically expect research outcomes to align with this award. For that reason, we might be able to predict future research topics based on present funding award data. That said, it remains to be shown if and how future research topics can be predicted using funding information. In this paper, we extract funding project information and the abstracts of the papers these projects generated from the Gateway to Research database as one group, and use papers from the same domains and publication years in the Scopus database as a baseline comparison group. We annotate both the project awards and the papers resulting from the funded projects with linguistic features (noun phrases), and then calculate tf-idf and cosine similarity between these two sets of features. We show that the cosine similarities within the project-generated papers group are bigger than those in the project-baseline group, and that these two groups of similarities are significantly different. Based on this result, we conclude that funding information does correlate with the content of the future research output of the funded project at the topical level. How funding really changes the course of science or of scientific careers remains an elusive question.
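
The similarity computation described above can be sketched in a few lines; plain words stand in for the extracted noun phrases, and the three texts are invented.

```python
# Sketch: tf-idf cosine similarity between an award and candidate papers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

award = "machine learning methods for early diagnosis of heart disease"
paper_from_project = "deep learning models improve early heart disease diagnosis"
baseline_paper = "a survey of medieval trade routes in northern europe"

tfidf = TfidfVectorizer().fit_transform(
    [award, paper_from_project, baseline_paper])
sims = cosine_similarity(tfidf[0], tfidf[1:])
print(sims)   # expected: higher similarity for the project-generated paper
```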

Keywords: natural language processing, noun phrase, tf-idf, cosine similarity

Procedia PDF Downloads 222
1648 Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach

Authors: Manpreet Singh, V. P. Agrawal, Gurmanjot Singh Bhatti

Abstract:

Over the last decades, there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration and in search and rescue operations during earthquakes, etc. As there are many self-reconfigurable robots, choosing the optimum one is always a concern for the robot user, given the increase in available features, facilities, complexity, etc. The objective of this research work is to present a multiple attribute decision making (MADM) based methodology for the coding, evaluation, comparison, ranking and selection of modular self-reconfigurable robots, using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach. In total, 86 attributes that affect structure and performance are identified, and a database of modular self-reconfigurable robots on the basis of these pertinent attributes is generated. This database is very useful for users selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. Using five robots (ATRON, SMORES, PolyBot, M-TRAN 3, SuperBot), an example is illustrated, and ranking of the robots is successfully done, showing that SMORES is the best robot for the operational need illustrated; the methodology is found to be very effective and simple to use.
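
A condensed sketch of the TOPSIS ranking is given below, reduced to a toy decision matrix (rows are robots, columns are attributes); the weights and values are invented, whereas the study itself uses 86 attributes.

```python
# Sketch: TOPSIS closeness scores for ranking alternatives.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives; benefit[j] is True if attribute j is better-high."""
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalise columns
    v = norm * weights                               # weighted matrix
    ideal = np.where(benefit, v.max(0), v.min(0))    # ideal solution
    anti = np.where(benefit, v.min(0), v.max(0))     # negative-ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # relative closeness

robots = ["ATRON", "SMORES", "PolyBot", "M-TRAN 3", "SuperBot"]
m = np.array([[7, 3, 5], [9, 2, 8], [6, 4, 6], [8, 3, 7], [7, 5, 6]], float)
scores = topsis(m, weights=np.array([0.5, 0.2, 0.3]),
                benefit=np.array([True, False, True]))
for name, s in sorted(zip(robots, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.3f}")
```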

Keywords: self-reconfigurable robots, MADM, TOPSIS, morphogenesis, scalability

Procedia PDF Downloads 197
1647 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET

Authors: Tyler T. Procko, Steve Collins

Abstract:

New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above); traditional methods of data access now comparatively involve unnecessary steps which compromise system performance. This work posits that the established ORM (Object-Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and the back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers and cybersecurity professionals: simplicity, speed and security. Simplicity is engendered by cutting out the “middleman” steps, effectively making API data access a white box, whereas traditional methods are a black box. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and its extension outside of the .NET ecosystem to other programming languages.

Keywords: API data access, database, JSON, .NET core, SQL server

Procedia PDF Downloads 46
1646 Association of a Non-Synonymous SNP in the DC-SIGN Receptor Gene with Tuberculosis (TB)

Authors: Saima Suleman, Kalsoom Sughra, Naeem Mahmood Ashraf

Abstract:

Tuberculosis, caused by Mycobacterium tuberculosis, is a communicable chronic illness. This disease is highly focused on by researchers, as it is present, in active or latent form, in approximately one third of the world population. The genetic makeup of a person plays an important part in producing immunity against disease, and one important factor is single nucleotide polymorphism (SNP) in relevant genes. In this study, we have examined the association between single nucleotide polymorphisms of the CD209 gene (which encodes the DC-SIGN receptor) and tuberculosis patients. Dry-lab (in silico) and wet-lab (RFLP) analyses have been carried out. The GWAS catalogue and the GEO database were searched for previous association data: no association study related to CD209 nsSNPs was found, but the role of CD209 in pulmonary tuberculosis has been addressed in the GEO database, and CD209 was therefore selected for this study. Databases such as Ensembl and the 1000 Genomes Project were used to retrieve SNP data in the form of VCF files, which were then submitted to different software tools to sort SNPs into benign and deleterious. Selected nsSNPs were further annotated using 3-D modeling techniques with the I-TASSER online server. Finally, the selected nsSNPs were checked in the Gujrat and Faisalabad populations through RFLP analysis. In this study population, two nsSNPs were found to be associated with tuberculosis, while one nsSNP was not found to be associated with the disease.
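
The case-control association test behind such RFLP results can be sketched as a chi-square test on carrier counts; the counts below are invented, not the study's data.

```python
# Sketch: chi-square test of genotype-disease association.
from scipy.stats import chi2_contingency

#               carriers  non-carriers
table = [[34, 66],    # tuberculosis patients
         [18, 82]]    # healthy controls

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests association
```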

Keywords: association, CD209, DC-SIGN, tuberculosis

Procedia PDF Downloads 286
1645 Validation of the Formal Model of Web Services Applications for Digital Reference Service of Library Information System

Authors: Zainab Magaji Musa, Nordin M. A. Rahman, Julaily Aida Jusoh

Abstract:

The web services applications for digital reference services (WSDRS) model of a library information system (LIS) is an informal model that claims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users’ needs in the reference section of libraries. The formal WSDRS model consists of the Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model: the authors formally verify, and thus validate, the properties of the model using the Z/EVES theorem prover.

Keywords: validation, verification, formal, theorem prover

Procedia PDF Downloads 485
1644 The Use of Voice in Online Public Access Catalogs as a Faster Searching Device

Authors: Maisyatus Suadaa Irfana, Nove Eka Variant Anna, Dyah Puspitasari Sri Rahayu

Abstract:

Technological developments provide convenience to everyone. Nowadays, human-computer communication is mostly conducted via text, but with the development of technology it can also be conducted by voice, like communication between human beings. This provides an easy facility for many people, especially those who have special needs. In this work, voice search technology is applied to the search of book collections in an OPAC (Online Public Access Catalog), so library visitors will find it faster and easier to find the books they need. Integration with Google is needed to convert the voice into text. To optimize search time and results, the server downloads all the book data available in the server database and converts it into JSON format. In addition, several algorithms are incorporated, including decomposition (parsing) of the JSON into arrays, index construction, and analysis of the results. The aim is to make the search process much faster than the usual OPAC search, where the database is queried directly for every search request. A data update menu is provided so that users can perform their own data updates and get the latest information.
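
The search-side idea can be sketched as follows: the catalogue is downloaded once as JSON, decomposed into an in-memory inverted index, and each voice-transcribed query is answered without a database round trip. The record fields are assumptions for illustration.

```python
# Sketch: JSON catalogue decomposed into an inverted index for fast lookup.
import json
from collections import defaultdict

catalogue_json = json.dumps([
    {"id": 1, "title": "Introduction to Information Retrieval"},
    {"id": 2, "title": "Database System Concepts"},
])

index = defaultdict(set)
books = {b["id"]: b for b in json.loads(catalogue_json)}
for book in books.values():
    for token in book["title"].lower().split():
        index[token].add(book["id"])

def search(transcribed_query: str):
    """Intersect posting lists for all query tokens."""
    tokens = transcribed_query.lower().split()
    ids = set.intersection(*(index[t] for t in tokens)) if tokens else set()
    return [books[i]["title"] for i in ids]

print(search("database concepts"))   # query text from the speech-to-text step
```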

Keywords: OPAC, voice, searching, faster

Procedia PDF Downloads 319
1643 Measuring the Level of Knowledge of Construction Contracts Procedures: A Case Study of Botswana

Authors: Babulayi B. Wilson

Abstract:

Unsatisfactory performance of construction projects in both industrialised and developing countries indicates that there could be several defects in the phases of construction projects. Notwithstanding the fact that some project defects are often conceived at the initiation phase of construction projects, insufficient knowledge of contract procedures has been identified as one of the major sources of construction disputes. Contract procedures are a set of rules that outline the primary obligations and liabilities of the parties involved in the implementation of a construction project. Engineering professional bodies often codify contract procedures into standard forms of contract, such as those of the Institution of Civil Engineers (ICE, UK) and the Association of Consulting Engineers (ACE, UK), and keep them under constant review, updating clauses to reflect changes in case law or relevant legislation. Even so, it is the responsibility of a professional body, or of the conditions-of-contract draftsperson, to introduce contract-specific clauses that may be necessary for business efficacy but are not covered in the chosen standard conditions of contract. In Botswana, the use of client-drafted international forms of contract, and/or forms un-adapted to the environment of use, in conjunction with client-drafted pricing schedules, is common. The latter often impacts negatively upon contractors’ claims and payments, in that tender rates and prices can only be deemed sufficient if the chosen conditions of contract complement the pricing schedule (the use of standardised procurement documents). In addition, client-drafted and borrowed forms of contract such as FIDIC often conflict with domicile law, resulting in costly disputes on the part of the client. Against this background, the object of the research is to measure the level of knowledge of contract procedures amongst key stakeholders in the Botswana construction industry, by asking a representative sample from the industry and academia to respond to tutorial questions prepared from two commonly used forms of contract for civil works, namely FIDIC (international form of contract) and ICE (UK). The questions were prepared under the following captions: (a) preparation of tender documents, (b) obligations of the parties, (c) contract administration, and (d) claims, variations, and valuation of variations. After ascertaining that the level of knowledge of contract procedures is insufficient among most practitioners in the Botswana construction industry, major procurement entities, and engineering institutions of learning, a guide to drafting the conditions of a construction contract was developed and then validated through seminars and workshops. At present the effectiveness of the guide has not yet been measured, but feedback from the seminars and workshops conducted indicates an appreciation of the guide by the majority of major construction industry stakeholders.

Keywords: contract procedures, conditions of contract, professional practice, construction law, forms of contract

Procedia PDF Downloads 161