Search results for: overton database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1637

1457 Ergonomics and Its Applicability in the Design Process in Egypt: Challenges and Prospects

Authors: Mohamed Moheyeldin Mahmoud

Abstract:

Egypt suffers from a severe shortage of data and charts concerning the physical dimensions, measurements and qualities of its consumers, as well as their behavior. This shortage of needed information and appropriate methods has forced the Egyptian designer to fall back on foreign standards when designing a product for the Egyptian consumer, which has led to many problems. The research problem is the urgent need for a database of the physical specifications and measurements of Egyptian consumers, together with the need to support the ergonomics courses given in many colleges and institutes with the latest technologies. The researcher uses a descriptive analytical method that relies on compiling, comparing and analyzing information and facts in order to reach acceptable perceptions, ideas and considerations. The research concludes that: 1. A good interaction relationship between users and products indicates the success of that product. 2. Integration among the most prominent fields of science, especially ergonomics, interaction design and ethnography, should be encouraged to provide a continuously updated database concerning the nature, specifications and environment of the Egyptian consumer, in order to achieve a higher benefit for both user and product. 3. The Chinese economic policy of studying market requirements long before any market activities should be emulated. 4. Using ethnography supports design activities in creating new products or updating existing ones, by measuring the compatibility of products with their environment and with user expectations. The researcher also recommends contracting a joint cooperation between military colleges and sports education institutes on one side, and design institutes on the other, to provide an annually updated database of certain specifications (height, weight, etc.) of the students of both sexes applying to those institutes, giving the industrial designer the information needed when creating a new product or updating an existing one for that category.

Keywords: adapt, ergonomics, ethnography, interaction design

Procedia PDF Downloads 227
1456 Wireless Sensor Network to Help Low-Income Farmers to Face Drought Impacts

Authors: Fantazi Walid, Ezzedine Tahar, Bargaoui Zoubeida

Abstract:

This research presents the main ideas for implementing an intelligent system composed of communicating wireless sensors that measure environmental data linked to drought indicators (such as air temperature, soil moisture, etc.). In addition, a spatio-temporal database communicating with a web mapping application is proposed for real-time monitoring, 24 hours a day, 7 days a week, to allow the time evolution of the drought parameters to be screened and extracted. The system thus helps detect areas affected by drought. Spatio-temporal conceptual models address the needs of users who must manage soil water content for irrigation, fertilization or other activities aimed at increasing crop yield. Indeed, spatio-temporal conceptual models enable users to obtain a readable diagram of the data that is easy to apprehend. Combined with socio-economic information, the system helps identify the people impacted by the phenomenon and the corresponding severity, especially as this information is accessible to farmers and stakeholders themselves. The study will be applied in the Siliana watershed, northern Tunisia.
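
As an illustration of the kind of spatio-temporal store the abstract describes, here is a minimal Python sketch using sqlite3; the table layout, column names and sample coordinates are assumptions for illustration, not the authors' actual schema:

```python
import sqlite3

db = sqlite3.connect("drought.db")
db.execute("""CREATE TABLE IF NOT EXISTS reading (
    sensor_id   INTEGER,
    lat         REAL,      -- sensor position
    lon         REAL,
    ts          TEXT,      -- observation time (ISO 8601)
    air_temp_c  REAL,
    soil_moist  REAL       -- volumetric soil moisture
)""")
db.execute("INSERT INTO reading VALUES (?, ?, ?, ?, ?, ?)",
           (1, 36.08, 9.37, "2024-07-01T12:00:00", 38.5, 0.07))

# Time evolution of a drought indicator for one sensor, ready for web mapping:
rows = db.execute("""SELECT ts, soil_moist FROM reading
                     WHERE sensor_id = ? ORDER BY ts""", (1,)).fetchall()
print(rows)
```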

Keywords: WSN, spatio-temporal database, GIS, web mapping, drought indicator

Procedia PDF Downloads 494
1455 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems

Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia

Abstract:

The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach for offering SaaS is basing the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, the transactions need to strictly adhere to the tenant's known business processes. This paper argues that security in cloud databases should not be considered an isolated issue; rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant SaaS. This would then assist cloud service providers to define, implement and manage security policies as per tenant customisation requirements whilst assuring security for the customers' data.
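
To make the cross-tenant access-control idea concrete, here is a minimal Python sketch of row-level tenant isolation: every table carries a tenant_id and every query is forced through a tenant-scoped helper. The schema and names are invented, and the helper assumes the statement already contains a WHERE clause:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, order_id INTEGER, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [("acme", 1, 99.0), ("globex", 2, 250.0)])

def tenant_query(tenant_id, sql, params=()):
    """Append the mandatory tenant predicate instead of trusting the caller."""
    return db.execute(sql + " AND tenant_id = ?", (*params, tenant_id))

rows = tenant_query("acme", "SELECT order_id, total FROM orders WHERE total > ?", (10,))
print(rows.fetchall())   # only acme's orders, never globex's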

Keywords: cloud computing, data management, multi-tenancy, requirements, security

Procedia PDF Downloads 156
1454 ArcGIS as a Tool for Infrastructure Documentation and Asset Management: Establishing a GIS for Computer Network Documentation

Authors: John Segars

Abstract:

Built out of a real-world need for better, more detailed asset and infrastructure documentation, this project lays out the case for using the database functionality of ArcGIS as a tool to track and maintain infrastructure location, status, maintenance and serviceability. Workflows and processes are presented and detailed which may be applied to an organization's infrastructure needs, allowing it to make use of the robust tools which surround the ArcGIS platform. The end result is a value-added information system framework with a geographic component (e.g., the spatial location of various IT assets): a detailed set of records which not only documents location but also captures the maintenance history for assets, along with photographs and documentation of these various assets as attachments to the numerous feature class items. In addition to the asset location and documentation benefits, staff are able to log into the devices and pull SNMP (Simple Network Management Protocol) query information from within the user interface. The entire collection of information may be displayed in ArcGIS, via a JavaScript-based web application, or via queries to the back-end database. The project is applicable to all organizations which maintain an IT infrastructure but specifically targets post-secondary educational institutions, where access to ESRI resources is generally already available in house.
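
The abstract mentions pulling SNMP query information from devices; here is a minimal sketch of such a poll using the pysnmp library (the host address and community string are placeholders, and the abstract does not say which library the project itself uses):

```python
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity)

# Query a device's description and uptime over SNMPv2c.
errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),          # community string assumed
           UdpTransportTarget(('192.0.2.10', 161)),     # placeholder switch address
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0))))

if errorIndication or errorStatus:
    print("SNMP error:", errorIndication or errorStatus.prettyPrint())
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))
```

The returned values could then be written back to the feature class attributes alongside location, photographs and maintenance records.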

Keywords: ESRI, GIS, infrastructure, network documentation, PostgreSQL

Procedia PDF Downloads 181
1453 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

Consider a database that records average traffic speeds, measured at five-minute intervals, for all the links in the traffic network of a metropolitan city. While models learned from this data that can predict future traffic speed would be beneficial for applications such as car navigation systems, building predictive models for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. However, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighbor links up- and downstream of it. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
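
A minimal Python sketch of the core idea, restricting the k-NN search to only the most recent records instead of the full history; the data here is synthetic and the window size is an arbitrary placeholder:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
history = rng.uniform(10, 90, size=(50_000, 6))     # current/past speeds of the
target = history[:, 0] + rng.normal(0, 3, 50_000)   # link and its neighbors

recent = slice(-5_000, None)            # e.g. keep only the last few weeks
knn = KNeighborsRegressor(n_neighbors=10)
knn.fit(history[recent], target[recent])  # smaller search set -> faster queries

print(knn.predict(history[-1:]))        # predicted speed for the newest record
```

The trade-off the paper measures is exactly this: how small the searched window can be made before prediction accuracy degrades noticeably.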

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 363
1452 Transcriptome Analysis of Saffron (Crocus sativus L.) Stigma Focusing on Identification of Genes Involved in the Biosynthesis of Crocin

Authors: Parvaneh Mahmoudi, Ahmad Moeni, Seyed Mojtaba Khayam Nekoei, Mohsen Mardi, Mehrshad Zeinolabedini, Ghasem Hosseini Salekdeh

Abstract:

Saffron (Crocus sativus L.) is one of the most important spice and medicinal plants. The three-branch style of C. sativus flowers is the most economically important part of the plant and is known as saffron, which has several medicinal properties. Despite the economic and biological significance of this plant, knowledge about its molecular characteristics is very limited. In the present study, we constructed, for the first time, a comprehensive dataset for the C. sativus stigma through de novo transcriptome sequencing using the Illumina paired-end sequencing technology. A total of 52,075,128 reads were generated and assembled into 118,075 unigenes, with an average length of 629 bp and an N50 of 951 bp. Of these unigenes, 66,171 (56%) were annotated in the non-redundant National Center for Biotechnology Information (NCBI) database, 30,938 (26%) were annotated in the Swiss-Prot database, and 10,273 (8.7%) were mapped to 141 Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways, while 52,560 (44%) and 40,756 (34%) unigenes were assigned to Gene Ontology (GO) categories and Eukaryotic Orthologous Groups of proteins (KOG), respectively. In addition, 65 candidate genes involved in three stages of crocin biosynthesis were identified. Finally, the transcriptome sequencing of the saffron stigma was used to identify 6,779 potential microsatellite (SSR) molecular markers. High-throughput de novo transcriptome sequencing provided a valuable resource of C. sativus transcript sequences in public databases. Moreover, most of the candidate genes potentially involved in crocin biosynthesis were identified and could be further utilized in functional genomics studies. The numerous SSRs obtained might also help address open questions about the origin of this amphiploid species, which probably has little genetic diversity.
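
The N50 statistic quoted above (951 bp) is the standard contig-length summary for an assembly; a minimal Python sketch of how it is computed, on a toy set of contig lengths:

```python
def n50(contig_lengths):
    """Shortest contig length at which the running sum reaches half the total."""
    lengths = sorted(contig_lengths, reverse=True)
    half, running = sum(lengths) / 2, 0
    for n in lengths:
        running += n
        if running >= half:
            return n

print(n50([1200, 951, 900, 640, 500, 320, 200]))  # toy assembly -> 900
```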

Keywords: saffron, transcriptome, NGS, bioinformatics

Procedia PDF Downloads 100
1451 A General Framework for Knowledge Discovery from Echocardiographic and Natural Images

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
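
The abstract mentions Bayesian pixel classification for segmenting color Doppler images; a minimal sketch of that idea, classifying each pixel's RGB triple as flow vs. background with a Gaussian naive Bayes model. The labelled training pixels here are synthetic stand-ins for expert-annotated samples, not the authors' data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
flow = rng.normal([180, 40, 40], 20, size=(500, 3))        # reddish flow pixels
background = rng.normal([60, 60, 60], 15, size=(500, 3))   # grey tissue pixels
X = np.vstack([flow, background])
y = np.array([1] * 500 + [0] * 500)

clf = GaussianNB().fit(X, y)

image = rng.normal([120, 50, 50], 40, size=(64, 64, 3))    # fake echo frame
mask = clf.predict(image.reshape(-1, 3)).reshape(64, 64)   # per-pixel labels
print(mask.mean())   # fraction of pixels classified as flow
```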

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 445
1450 Real-World Comparison of Adherence to and Persistence with Dulaglutide and Liraglutide in UAE e-Claims Database

Authors: Ibrahim Turfanda, Soniya Rai, Karan Vadher

Abstract:

Objectives: The study aims to compare real-world adherence to and persistence with dulaglutide and liraglutide in patients with type 2 diabetes (T2D) initiating treatment in the UAE. Methods: This was a retrospective, non-interventional study (observation period: 01 March 2017–31 August 2019) using the UAE Dubai e-Claims database. Included were adult patients initiating dulaglutide/liraglutide between 01 September 2017 and 31 August 2018 (index period) with: ≥1 claim for T2D in the 6 months before the index date (ID); ≥1 claim for dulaglutide/liraglutide during the index period; and continuous medical enrolment for ≥6 months before and ≥12 months after ID. Key endpoints, assessed 3/6/12 months after ID, were adherence to treatment (proportion of days covered [PDC; PDC ≥80% considered 'adherent'], per-group mean±standard deviation [SD] PDC) and persistence (number of continuous therapy days from ID until discontinuation [i.e., a gap of >45 days] or the end of the observation period). Patients initiating dulaglutide/liraglutide were propensity score matched (1:1) based on baseline characteristics. The between-group comparison of adherence was analysed using the McNemar test (α=0.025). Persistence was analysed using Kaplan–Meier estimates with log-rank tests (α=0.025) for between-group comparisons. This study presents 12-month outcomes. Results: Following propensity score matching, 263 patients were included in each group. Mean±SD PDC at 12 months was significantly higher in the dulaglutide group than in the liraglutide group (dulaglutide=0.48±0.30, liraglutide=0.39±0.28, p=0.0002). The proportion of adherent patients favored dulaglutide (dulaglutide=20.2%, liraglutide=12.9%, p=0.0302), as did the probability of being adherent to treatment (odds ratio [97.5% CI]: 1.70 [0.99, 2.91]; p=0.03). The proportion of persistent patients also favored dulaglutide (dulaglutide=15.2%, liraglutide=9.1%, p=0.0528), as did the probability of discontinuing treatment 12 months after ID (p=0.027). Conclusions: Based on the UAE Dubai e-Claims database, dulaglutide initiators exhibited significantly greater adherence in terms of mean PDC than liraglutide initiators. The proportion of adherent patients and the probability of being adherent favored the dulaglutide group, as did treatment persistence.
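
The PDC endpoint is the share of days in the observation window on which the patient had drug on hand. A simplistic Python sketch of the computation (real PDC handling of overlapping fills and stockpiling follows payer-specific conventions; the fill dates below are invented):

```python
from datetime import date

def pdc(fills, period_start, period_end):
    """Proportion of days covered: distinct covered days / days in period."""
    period = (period_end - period_start).days + 1
    covered = set()
    for fill_date, days_supply in fills:
        for d in range(days_supply):
            day = fill_date.toordinal() + d
            if period_start.toordinal() <= day <= period_end.toordinal():
                covered.add(day)
    return len(covered) / period

score = pdc([(date(2018, 1, 1), 30), (date(2018, 2, 15), 30)],
            date(2018, 1, 1), date(2018, 12, 31))
print(score, score >= 0.8)   # adherent if PDC >= 80%
```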

Keywords: adherence, dulaglutide, effectiveness, liraglutide, persistence

Procedia PDF Downloads 125
1449 The Study of the Socio-Economic and Environmental Impact on the Semi-Arid Environments Using GIS in the Eastern Aurès, Algeria

Authors: Benmessaoud Hassen

Abstract:

In this study, we propose to address the socio-economic and environmental impact on the physical environment, and especially its spatiotemporal dynamics, in the semi-arid and arid eastern Aurès. Comprising 11 municipalities, the study area spreads over a relatively large surface of about 60,000 ha. The retrospective view is quite deep, resting on three dates of analysis of environmental variation spread over two decades (between 1987 and 2007). The multi-source data acquired in this context are integrated into a geographic information system (GIS). This allows, among other indices, the calculation of areas and classes for each of the four thematic layers previously defined by a method inspired by MEDALUS (Mediterranean Desertification and Land Use). The database created is composed of four layers of information (population, livestock, farming and land use). Its analysis in space and time has been supplemented by ground-truth validation. Once corrected, the database was used to develop a comprehensive map through the calculation of a socio-economic and environmental index (ISCE). The map and the resulting information do not consist only of figures on the present situation but can also be used to forecast future trends.
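
MEDALUS-family indices are commonly computed as the geometric mean of per-layer quality scores; assuming the ISCE follows the same pattern (the abstract does not give its exact formula), a minimal Python sketch with invented per-cell scores for the four layers named above:

```python
import numpy as np

layers = {
    "population": np.array([[1.2, 1.6], [1.0, 1.8]]),
    "livestock":  np.array([[1.1, 1.4], [1.3, 1.7]]),
    "farming":    np.array([[1.0, 1.5], [1.2, 1.6]]),
    "land_use":   np.array([[1.3, 1.7], [1.1, 1.9]]),
}
stack = np.stack(list(layers.values()))
isce = stack.prod(axis=0) ** (1.0 / len(layers))   # per-cell composite index
print(isce.round(2))
```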

Keywords: socio-economic and environmental impact, spatiotemporal dynamics, semi-arid environments, GIS, Eastern Aurès

Procedia PDF Downloads 325
1448 Mapping and Database on Mass Movements along the Eastern Edge of the East African Rift in Burundi

Authors: L. Nahimana

Abstract:

The eastern edge of the East African Rift in Burundi shows many mass movement phenomena: landslides, mudflows, debris flows, spectacular erosion (mega-gullies), flash floods and alluvial deposits. These phenomena usually occur during the rainy season, and their extent and the consecutive damage vary widely. To manage these phenomena, it is necessary to adopt a methodological approach to their mapping, with a structured database. The elements for this database are: the three-dimensional extent of the phenomenon; the natural causes and conditions (geological lithology, slope, weathering depth and products, rainfall patterns, natural environment); and the anthropogenic factors corresponding to various human activities. The extent of the area provides information about the possibilities and opportunities for mitigation techniques. The lithological nature helps in understanding the influence of the nature and structure of the rock on the intensity of rock weathering, as well as the geotechnical properties of the weathering products. The slope influences land stability. The intensity of annual, monthly and daily rainfall helps in understanding the water saturation conditions of the terrain. Certain natural circumstances, such as the presence of streams and rivers, promote foot-slope erosion and thus the occurrence and activity of mass movements. The construction of infrastructure such as new roads and agglomerations deeply modifies the flow of surface and underground water, followed by mass movements. Using geospatial data selected along the East African Rift in Burundi, cases of mass movements are presented that illustrate the nature, importance, various factors and extent of the damage. An analysis of these elements for each hazard can guide the options for mitigating the phenomenon and its consequences.

Keywords: mass movement, landslide, mudflow, debris flow, spectacular erosion, mega-gully, flash flood, alluvial deposit, East African rift, Burundi

Procedia PDF Downloads 306
1447 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework

Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi

Abstract:

There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task; therefore, an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework. Hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection, to offer a visual guideline for navigating the video content. In the next step, we extract textual metadata by applying Optical Character Recognition (OCR) technology to the key-frames. The OCR output and the detected slide text line types are used for keyword extraction, by which both video-level and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework.
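
One common way to detect key-frames at segment boundaries, as the first step above describes, is to flag frames whose color histogram differs sharply from the previous frame. A minimal OpenCV sketch of that idea (the threshold is an arbitrary placeholder, and the paper's own segmentation method may differ):

```python
import cv2

def key_frames(path, threshold=0.6):
    """Naive shot-boundary detection via HSV histogram correlation."""
    cap = cv2.VideoCapture(path)
    keys, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is None or cv2.compareHist(prev_hist, hist,
                                                cv2.HISTCMP_CORREL) < threshold:
            keys.append(idx)   # histogram changed sharply: treat as new segment
        prev_hist = hist
        idx += 1
    cap.release()
    return keys
    # An OCR pass (e.g. pytesseract.image_to_string) could then extract slide
    # text from each key frame for keyword indexing.
```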

Keywords: video lectures, big video data, video retrieval, hadoop

Procedia PDF Downloads 533
1446 Least-Square Support Vector Machine for Characterization of Clusters of Microcalcifications

Authors: Baljit Singh Khehra, Amar Partap Singh Pharwaha

Abstract:

Clusters of Microcalcifications (MCCs) are the most frequent symptom of Ductal Carcinoma in Situ (DCIS) recognized by mammography. The Least-Square Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for classifying MCCs as benign or malignant, based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for classifying MCCs, a comparative evaluation of its relative performance for different kernel functions is made, using a confusion matrix and ROC analysis. Experiments are performed on data extracted from mammogram images of the DDSM database: a total of 380 suspicious areas are collected, containing 235 malignant and 145 benign samples, and a set of 50 features is calculated for each suspicious area. An optimal subset of the 23 most suitable features is then selected from these 50 features by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
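
Unlike the standard SVM, an LS-SVM is trained by solving a single linear system (Suykens' formulation) instead of a quadratic program. A minimal numpy sketch with an RBF kernel on synthetic data; hyperparameters and data are placeholders, not the paper's:

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM KKT system for labels y in {-1, +1}."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]            # bias b, support values alpha

def lssvm_predict(X, Xtr, ytr, b, alpha, sigma=1.0):
    return np.sign(rbf(X, Xtr, sigma) @ (alpha * ytr) + b)

X = np.random.default_rng(0).normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, X, y, b, alpha) == y).mean())   # training accuracy
```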

Keywords: clusters of microcalcifications, ductal carcinoma in situ, least-square support vector machine, particle swarm optimization

Procedia PDF Downloads 354
1445 Artificial Intelligence Tax Simulator to Minimize Tax Liability for Multinational Corporations

Authors: Sean Goltz, Michael Mayo

Abstract:

The purpose of this research is to use the Global-Regulation.com database of world laws, focusing on tax treaties between countries, to create an AI-driven tax simulator that runs an AI agent through potential tax scenarios across countries. The AI agent's goal is to identify the scenario that results in the minimum tax liability based on the tax treaties between countries. The results will be visualized as a three-dimensional matrix in an online web application. Multinational corporations run their business through multiple countries. These countries, in turn, have tax treaties with many other countries to regulate the payment of taxes on income that is transferred between them. As a result, planning the best tax scenario across multiple countries and numerous tax treaties is almost impossible. This research proposes to use the Global-Regulation.com database of world laws in English (machine translated with the Google and Microsoft APIs) to create a simulator that includes the information in the tax treaties. Once it is ready, an AI agent will be sent through the simulator to identify the scenario that results in the minimum tax liability. Identifying the best tax scenario across countries may save multinational corporations, like Google, billions of dollars annually. Given the nature of the raw data and the domain of taxes (i.e., numbers), this is a promising ground for employing artificial intelligence towards a practical and beneficial purpose.
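
A toy Python sketch of the scenario search the abstract describes: treat treaty withholding-tax rates as edges between countries and enumerate routings to find the cheapest one. All countries and rates below are invented placeholders, and real treaty logic is far richer than a single rate per pair:

```python
import itertools

rate = {("US", "IE"): 0.15, ("US", "NL"): 0.05, ("NL", "IE"): 0.00}

def total_tax(path):
    kept = 1.0
    for a, b in zip(path, path[1:]):
        kept *= 1.0 - rate[(a, b)]        # tax withheld at each treaty hop
    return 1.0 - kept

def best_route(src, dst, hubs, max_hops=2):
    candidates = [(src, dst)] if (src, dst) in rate else []
    for k in range(1, max_hops + 1):
        for mid in itertools.permutations(hubs, k):
            path = (src, *mid, dst)
            if all(hop in rate for hop in zip(path, path[1:])):
                candidates.append(path)
    return min(candidates, key=total_tax)

route = best_route("US", "IE", hubs=["NL"])
print(route, f"{total_tax(route):.0%} total tax")  # routing via NL beats 15% direct
```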

Keywords: taxation, law, multinational, corporation

Procedia PDF Downloads 199
1444 Correlation and Prediction of Biodiesel Density

Authors: Nieves M. C. Talavera-Prieto, Abel G. M. Ferreira, António T. G. Portugal, Rui J. Moreira, Jaime B. Santos

Abstract:

Knowledge of biodiesel density over large ranges of temperature and pressure is important for predicting the behavior of fuel injection and combustion systems in diesel engines, and for the optimization of such systems. In this study, cottonseed oil was transesterified into biodiesel and its density was measured at temperatures between 288 K and 358 K and pressures between 0.1 MPa and 30 MPa, with the expanded uncertainty estimated as ±1.6 kg·m⁻³. The experimental pressure-volume-temperature (pVT) cottonseed data were used along with literature data on 18 other biodiesels to build a database used to test the correlation of density with temperature and pressure using the Goharshadi-Morsali-Abbaspour equation of state (GMA EoS). To our knowledge, this is the first time that density measurements are presented for cottonseed biodiesel under such high pressures, and the first time the GMA EoS has been used to model biodiesel density. The newly tested EoS allowed correlations within 0.2 kg·m⁻³, corresponding to average relative deviations within 0.02%. The database was then used to develop and test a new fully predictive model derived from the observed linear relation between density and degree of unsaturation (DU), which depends on the biodiesel FAME profile. The average density deviation of this method was only about 3 kg·m⁻³ within the temperature and pressure limits of application. These results represent appreciable improvements in the context of density prediction at high pressure when compared with other equations of state.
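
A minimal sketch of the predictive idea reported above: at fixed temperature and pressure, fit density as a linear function of degree of unsaturation by least squares. The (DU, density) pairs below are invented placeholders, not the authors' measurements:

```python
import numpy as np

du = np.array([0.6, 0.9, 1.1, 1.3, 1.5])             # mol C=C per mol FAME
rho = np.array([871.0, 874.5, 877.1, 879.4, 881.8])  # kg/m3 at assumed T, p

slope, intercept = np.polyfit(du, rho, 1)            # least-squares line
print(f"rho(DU=1.0) ~ {slope * 1.0 + intercept:.1f} kg/m3")
```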

Keywords: biodiesel density, correlation, equation of state, prediction

Procedia PDF Downloads 615
1443 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks, such as clustering and classification, with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, Bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 420
1442 Approaches to Estimating the Radiation and Socio-Economic Consequences of the Fukushima Daiichi Nuclear Power Plant Accident Using the Data Available in the Public Domain

Authors: Dmitry Aron

Abstract:

Major radiation accidents carry potential risks of negative consequences for public health, not only due to exposure but also because of the large-scale emergency measures taken by authorities to protect the population, which can lead to unreasonable social and economic damage. As a rule, it is technically difficult to assess the possible costs and damages of decisions on the evacuation or resettlement of residents in the shortest possible time, since this requires specially prepared information systems containing relevant demographic and economic parameters and incoming data on the radiation situation. Foreign observers also face difficulties in assessing the consequences of an accident on foreign territory, since they usually do not have official and detailed statistical data on the territory of the foreign state beforehand, and they may consider the use of unofficial data from open Internet sources an unreliable and overly labor-consuming procedure. This paper describes an approach to the prompt creation of a relational database that contains detailed actual data on the economics, demographics and radiation situation in the Fukushima Prefecture during the Fukushima Daiichi NPP accident, obtained by the author from open Internet sources. This database was developed and used to assess the number of evacuated residents, radiation doses, expected financial losses and other parameters of the affected areas. The costs for the areas with temporarily evacuated and long-term resettled populations were investigated, and the radiological and economic effectiveness of the measures taken to protect the population was estimated. Some of the results are presented in the article. The study showed that such a tool for analyzing the consequences of radiation accidents can be prepared in a short space of time for the entire territory of Japan, and that it can serve for modeling the social and economic consequences of hypothetical accidents at any nuclear power plant on its territory.

Keywords: Fukushima, radiation accident, emergency measures, database

Procedia PDF Downloads 191
1441 Climate Change and Health in Policies

Authors: Corinne Kowalski, Lea de Jong, Rainer Sauerborn, Niamh Herlihy, Anneliese Depoux, Jale Tosun

Abstract:

Climate change is considered one of the biggest threats to human health in the 21st century. The link between climate change and health has received relatively little attention in the media, in research and in policy-making, and a long-term, broad overview of how health is represented in climate change legislation is missing from the legislative literature. It is unknown if or how the argument for health is referred to in legal clauses addressing climate change in national and European legislation. Integrating science-based evidence on the health impacts of climate change into policies could be a key step towards inciting the political and societal changes necessary to decelerate global warming, and may also drive the implementation of new strategies to mitigate the consequences for health systems. To provide an overview of this issue, we are analyzing the Global Climate Legislation Database provided by the Grantham Research Institute on Climate Change and the Environment, established in 2008 at the London School of Economics and Political Science. The database (updated as of 1 January 2015) covers climate change legislation in 99 countries around the world and offers relevant information about the state of climate-related policies. We will systematically analyze the 829 identified laws to determine how health is represented as a relevant aspect of climate change legislation. We are conducting explorative research on national and supranational legislation and anticipate that health is addressed in various forms. The goal is to highlight how often, in what specific terms, and in which aspects health or the health risks of climate change are mentioned in legislation; the position and recurrence of the mentions of health are also of importance. Data will be extracted with a complete quotation of each sentence that mentions health, which will allow a second, qualitative stage to analyze which aspects of health are represented and in what context. This study is part of an interdisciplinary project called 4CHealth that confronts the results of research done on the scientific, political and press literature to better understand how knowledge on climate change and health circulates within those different fields, and whether and how it is translated into real-world change.
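
The extraction step described above, quoting every sentence that mentions health, can be sketched in a few lines of Python; the term list and the naive sentence splitter are illustrative assumptions, not the project's actual protocol:

```python
import re

HEALTH_TERMS = re.compile(r"\b(health|disease|mortality|morbidity|epidemi\w*)\b", re.I)

def health_sentences(text):
    """Return every sentence of a law that mentions a health-related term."""
    sentences = re.split(r"(?<=[.;])\s+", text)   # legal texts may need better splitting
    return [s.strip() for s in sentences if HEALTH_TERMS.search(s)]

law = ("The Minister shall prepare an adaptation plan. "
       "The plan shall assess risks to public health arising from climate change; "
       "it shall also address coastal protection.")
print(health_sentences(law))
```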

Keywords: climate change, explorative research, health, policies

Procedia PDF Downloads 365
1440 Fluid Prescribing Post Laparotomies

Authors: Gusa Hall, Barrie Keeler, Achal Khanna

Abstract:

Introduction: NICE guidelines have highlighted the consequences of IV fluid mismanagement. The main aim of this study was to audit fluid prescribing after laparotomies, to identify whether fluids were prescribed in accordance with NICE guidelines. Methodology: A retrospective database search for eight specific laparotomy procedures (right and left colectomy, Hartmann's procedure, small bowel resection, perforated ulcer, abdominoperineal resection, anterior resection, panproctocolectomy, subtotal colectomy) highlighted 29 laparotomies between April 2019 and May 2019. Two of the 29 patients had secondary procedures during the same admission, giving n=27 patients. Database case notes were reviewed for date of procedure, length of admission, fluids prescribed and their amount, nasogastric tube output, daily blood results for the electrolytes sodium and potassium, and operational losses. Results: Of the 27 patients, 93% (25/27) received IV fluids, but only 19% (5/27) received the correct IV fluids in accordance with NICE guidelines. The correct electrolyte levels (sodium and potassium) were found in 93% (25/27), and 100% (27/27) received blood tests (U&Es) to check electrolyte levels. Operational losses were documented in 0% (0/27). IV fluids matched nasogastric tube output in 100% (3/3) of the patients who had a nasogastric tube in situ. Conclusion: A PubMed literature review on barriers to safer IV prescribing highlighted educational interventions focused on prescriber knowledge rather than on how to execute the prescribing task. This audit suggests IV fluids after laparotomies are not being prescribed consistently in accordance with NICE guidelines. Surgical management plans should be clearer on the IV fluid and electrolyte requirements for the 24 hours after the plan has been initiated. In addition, further teaching and training on IV prescribing are needed, together with frequent surgical audits of IV fluid prescribing post-surgery to evaluate improvements.

Keywords: audit, IV Fluid prescribing, laparotomy, NICE guidelines

Procedia PDF Downloads 120
1439 Database of Pharmacogenetics HLA-A*31:01 Allele in Thai Population and Carbamazepine-Induced SCARs

Authors: Watchawin Ekphinitphithaya, Patompong Satapornpong

Abstract:

Introduction: Carbamazepine (CBZ) is one of the antiepileptic drugs (AEDs) most prescribed by neurologists and non-neurologists worldwide. CBZ is usually prescribed along with other drugs, leading to the possibility of severe cutaneous adverse drug reactions (SCARs). The HLA-B*15:02 allele is strongly associated with CBZ-induced Stevens-Johnson syndrome and toxic epidermal necrolysis (SJS-TEN) in Han Chinese and other Asian populations but not in European populations, while the HLA-A*31:01 allele has been reported to be associated with CBZ-induced SCARs in European and Japanese populations. Objective: The aim of this study is to investigate the distribution of the pharmacogenetic marker HLA-A*31:01, associated with CBZ-induced SCARs, in a healthy Thai population. Materials and Methods: In this prospective study, 350 unrelated healthy Thais were recruited. Human leukocyte antigen-A alleles were genotyped using PCR-sequence-specific oligonucleotides (PCR-SSO). Results: The most frequent HLA-A alleles were HLA-A*11:01 (190 alleles, 27.14%), HLA-A*24:02 (82 alleles, 11.71%), HLA-A*02:03 (80 alleles, 11.43%), HLA-A*33:03 (76 alleles, 10.86%), HLA-A*02:07 (58 alleles, 8.29%), HLA-A*02:01 (35 alleles, 5.00%), HLA-A*24:07 (29 alleles, 4.14%), HLA-A*02:06 and HLA-A*30:01 (15 alleles each, 2.14%), and HLA-A*01:01 (14 alleles, 2.00%). In particular, HLA-A*31:01 accounted for 6 of 700 alleles (0.86%) in the healthy Thai population. Previous research has reported varying distributions of HLA-A*31:01 in Asians, including 2% in Han Chinese, 9% in Japanese and 5% in Koreans; in the Caucasian population, this allele is found at approximately 2-5%. Conclusions: A pharmacogenetic database is thus vital in many populations, and especially in Thais, for screening the HLA-A*31:01 allele to avoid CBZ-induced SCARs before initiating treatment.

Keywords: Carbamazepine, HLA-A*31:01, Thai population, pharmacogenetics

Procedia PDF Downloads 170
1438 Forensic Analysis of Signal Messenger on Android

Authors: Ward Bakker, Shadi Alhakimi

Abstract:

The number of people moving towards more privacy-focused instant messaging applications has grown significantly. Signal is one of these instant messaging applications, which makes it interesting for digital investigators. In this research, we evaluate the artifacts that are generated by the Signal messenger for Android. This evaluation was done by using the features that Signal provides to create artifacts, after which we made an image of the internal storage and of the process memory. This image was analysed manually, revealing the content that Signal stores in different locations during its operation. From our research, we were able to identify the artifacts and interpret how they are used. We also examined Signal's source code and, using the knowledge obtained from it, developed a tool that decrypts some of the artifacts using the key stored in the Android Keystore. In general, we found that most artifacts are encrypted and encoded, and some remain encoded even after decryption. During data visualization, it was also found that Signal does not use relationships between the data. Two interesting groups of artifacts were identified: those related to the database and those stored in the process memory dump. In the database, we found plaintext private and group chats, and in the memory dump, we were able to retrieve the plaintext access code to the application. We conclude that Signal contains a wealth of artifacts that could be very valuable to a digital forensic investigation.

Keywords: forensic, signal, Android, digital

Procedia PDF Downloads 82
1437 A Cloud Computing System Using Virtual Hyperbolic Coordinates for Services Distribution

Authors: Telesphore Tiendrebeogo, Oumarou Sié

Abstract:

Cloud computing technologies have attracted considerable interest in recent years and have become important for many existing database applications. Cloud computing provides a new mode of using and offering IT resources in general; such resources can be used "on demand" by anybody who has access to the Internet. In particular, the Cloud platform provides an easy-to-use interface between providers and users, allowing providers to develop and offer software and databases to users across locations. Currently, many Cloud platform providers support large-scale database services; however, most of these only support simple keyword-based queries and cannot respond to complex queries efficiently, due to the lack of efficient multi-attribute index techniques. Existing Cloud platform providers therefore seek to improve the performance of indexing techniques for complex queries. In this paper, we define a new cloud computing architecture based on a Distributed Hash Table (DHT) and design a prototype system. We then evaluate our cloud computing indexing structure, which is based on a hyperbolic tree using virtual coordinates taken in the hyperbolic plane. Our experimental results, compared against other cloud systems, show that our solution ensures consistency and scalability for the Cloud platform.
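
Systems that embed nodes at virtual hyperbolic coordinates typically route greedily on the hyperbolic distance; a minimal Python sketch of that mechanism (the coordinates and the greedy policy are generic to this family of systems, not taken from the paper itself):

```python
import math

def hyp_dist(p, q):
    """Distance between points given as polar (r, theta) in the hyperbolic plane."""
    (r1, t1), (r2, t2) = p, q
    c = (math.cosh(r1) * math.cosh(r2)
         - math.sinh(r1) * math.sinh(r2) * math.cos(t1 - t2))
    return math.acosh(max(c, 1.0))

def greedy_route(coords, neighbors, src, dst):
    """Always forward to the neighbor closest (hyperbolically) to the target."""
    path, node = [src], src
    while node != dst:
        nxt = min(neighbors[node], key=lambda n: hyp_dist(coords[n], coords[dst]))
        if hyp_dist(coords[nxt], coords[dst]) >= hyp_dist(coords[node], coords[dst]):
            break   # greedy dead end
        path.append(nxt)
        node = nxt
    return path
```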

Keywords: virtual coordinates, cloud, hyperbolic plane, storage, scalability, consistency

Procedia PDF Downloads 425
1436 Landslide Susceptibility Mapping: A Comparison between Logistic Regression and Multivariate Adaptive Regression Spline Models in the Municipality of Oudka, Northern Morocco

Authors: S. Benchelha, H. C. Aoudjehane, M. Hakdaoui, R. El Hamdouni, H. Mansouri, T. Benchelha, M. Layelmam, M. Alaoui

Abstract:

Logistic regression (LR) and the multivariate adaptive regression spline (MarSpline) are applied and verified for the analysis of a landslide susceptibility map in Oudka, Morocco, using a geographic information system. From a spatial database containing data on landslide mapping, topography, soil, hydrology and lithology, eight factors related to landslides were calculated or extracted: elevation, slope, aspect, distance to streams, distance to roads, distance to faults, lithology, and the Normalized Difference Vegetation Index (NDVI). Using these factors, landslide susceptibility indexes were calculated by the two methods. Before the calculation, the database was divided into two parts, the first for the formation (training) of the model and the second for its validation. The results of the landslide susceptibility analysis were verified using success and prediction rates to evaluate the quality of these probabilistic models. This verification showed that the MarSpline model is the better model, with a success rate (AUC = 0.963) and a prediction rate (AUC = 0.951) higher than those of the LR model (success rate AUC = 0.918, prediction rate AUC = 0.901).
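
A minimal Python sketch of the success/prediction-rate scheme: AUC on the training part is the success rate, AUC on the held-out part is the prediction rate. The terrain data here is synthetic, standing in for the eight conditioning factors per cell:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                        # elevation, slope, aspect, ...
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)

success = roc_auc_score(y_fit, model.predict_proba(X_fit)[:, 1])     # success rate
prediction = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])  # prediction rate
print(f"success AUC={success:.3f}, prediction AUC={prediction:.3f}")
```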

Keywords: landslide susceptibility mapping, logistic regression, multivariate adaptive regression spline, Oudka, Taounate

Procedia PDF Downloads 188
1435 Bibliometric Analysis of Global Research Trends on Organization Culture, Strategic Leadership and Performance Using Scopus Database

Authors: Anyia Nduka, Aslan Bin Amad Senin

Abstract:

Taking a behavioral perspective on organizational culture, strategic leadership and performance (OC, SLP), we examine the role of strategic leadership as a key mechanism linking organizational culture to organizational capacities and performance. Given the increasing dependence of modern businesses on the use and scientific discovery of relevant data, research efforts around the globe have accelerated. In today's corporate world, strategic leadership is still the most sustainable source of performance and competitive advantage. This is why it is critical to gain a deep understanding of the research area and to strengthen new collaborative networks in an effort to support the transition of research towards these integrative efforts. This bibliometric analysis examines global trends in OC, SLP research based on publication output, author co-authorships, and co-occurrences of author keywords among authors and affiliated countries. A total of 2,829 journal articles published between 1974 and 2021 were retrieved from the Scopus database. The findings show a significant increase in the number of publications, with strong global collaboration (e.g., USA and UK). We also found that, while most countries/territories without affiliations were centered in developing countries, the outstanding performance of Asian countries and the volume of their collaborations should be emulated.

Keywords: organizational culture, strategic leadership, organizational resilience, performance

Procedia PDF Downloads 85
1434 Correlation between Funding and Publications: A Pre-Step towards Future Research Prediction

Authors: Ning Kang, Marius Doornenbal

Abstract:

Funding is a very important, if not crucial, resource for research projects. Usually, funding organizations publish a description of the funded research to describe the scope of the funding award. Logically, we would expect research outcomes to align with this funding award, and for that reason we might be able to predict future research topics based on present funding award data. That said, it remains to be shown if and how future research topics can be predicted using funding information. In this paper, we extract funding project information and the abstracts of the papers these projects generated from the Gateway to Research database as one group, and use papers from the same domains and publication years in the Scopus database as a baseline comparison group. We annotate both the project awards and the papers resulting from the funded projects with linguistic features (noun phrases), and then calculate tf-idf and the cosine similarity between these two sets of features. We show that the cosine similarity within the project-generated-papers group is higher than within the project-baseline group, and that these two groups of similarities are significantly different. Based on this result, we conclude that funding information does correlate, on the topical level, with the content of the future research output of the funded project. How funding really changes the course of science or of scientific careers remains an elusive question.
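
A minimal Python sketch of the tf-idf/cosine comparison described above, on three toy texts (the real pipeline extracts noun phrases first; here raw words stand in for those features):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

award = "Grant to develop machine learning methods for protein structure prediction"
paper = "We present a deep learning model that predicts protein structures from sequence"
baseline = "A survey of soil erosion measurement techniques in upland catchments"

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform([award, paper, baseline])

print(cosine_similarity(tfidf[0], tfidf[1]))  # award vs funded-project paper
print(cosine_similarity(tfidf[0], tfidf[2]))  # award vs baseline paper
```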

Keywords: natural language processing, noun phrase, tf-idf, cosine similarity

Procedia PDF Downloads 245
1433 Attribute Based Comparison and Selection of Modular Self-Reconfigurable Robot Using Multiple Attribute Decision Making Approach

Authors: Manpreet Singh, V. P. Agrawal, Gurmanjot Singh Bhatti

Abstract:

Over the last decades there has been significant technological advancement in the field of robotics, and a number of modular self-reconfigurable robots have been introduced that can help in space exploration and in search and rescue operations during earthquakes, among other tasks. As there are many self-reconfigurable robots, choosing the optimum one is always a concern for the robot user, given the growing range of available features, facilities and complexity. The objective of this research work is to present a multiple attribute decision making (MADM) based methodology for the coding, evaluation, comparison, ranking and selection of modular self-reconfigurable robots, using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach. In total, 86 attributes that affect structure and performance are identified, and a database of modular self-reconfigurable robots on the basis of the pertinent attributes is generated. This database is very useful for users selecting a robot that suits their operational needs. Two visual methods, namely the linear graph and the spider chart, are proposed for ranking modular self-reconfigurable robots. Using five robots (Atron, Smores, Polybot, M-Tran 3, Superbot), an example is illustrated and a ranking of the robots is successfully produced, which shows that Smores is the best robot for the operational need illustrated and that this methodology is very effective and simple to use.
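
A minimal Python sketch of the TOPSIS procedure itself: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The robot scores, weights and criteria below are invented placeholders, not the paper's 86-attribute data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix : alternatives x criteria scores
    weights: criteria weights summing to 1
    benefit: True where larger is better, False for cost criteria
    """
    M = np.asarray(matrix, dtype=float)
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)  # weighted, normalized
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)      # closeness: higher is better

# Invented scores for five robots on three attributes (speed, payload, cost).
scores = [[7, 5, 3], [9, 4, 5], [6, 8, 4], [8, 6, 6], [5, 7, 2]]
closeness = topsis(scores, [0.4, 0.4, 0.2], benefit=[True, True, False])
print(closeness.argsort()[::-1])  # indices of the robots, ranked best-first
```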

Keywords: self-reconfigurable robots, MADM, TOPSIS, morphogenesis, scalability

Procedia PDF Downloads 223
1432 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET

Authors: Tyler T. Procko, Steve Collins

Abstract:

New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above); traditional methods of data access now involve comparatively unnecessary steps which compromise system performance. This work posits that the established ORM (Object Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and the back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers and cybersecurity professionals: simplicity, speed and security. Simplicity is engendered by cutting out the "middleman" steps, effectively making API data access a white box, whereas traditional methods are a black box. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and an extension outside of the .NET ecosystem to other programming languages.
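
The underlying database capability CODA builds on is SQL Server's native JSON serialization (FOR JSON, available since SQL Server 2016): the database emits JSON itself, so no per-type mapping layer is needed. A minimal sketch of that mechanism from Python via pyodbc, not CODA's own API; the connection string and table names are placeholders:

```python
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=localhost;DATABASE=Shop;Trusted_Connection=yes;")
cur = conn.cursor()

# The database serializes rows to JSON itself, so no ORM mapping layer is needed.
cur.execute("""
    SELECT o.OrderId, o.OrderDate, c.Name AS [Customer.Name]
    FROM dbo.Orders o JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
    FOR JSON PATH, ROOT('orders')
""")
# SQL Server streams long JSON results split across rows of one column.
json_payload = "".join(row[0] for row in cur.fetchall())
print(json_payload)
```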

Keywords: API data access, database, JSON, .NET core, SQL server

Procedia PDF Downloads 66
1431 Association of Non-Synonymous SNP in DC-SIGN Receptor Gene with Tuberculosis (TB)

Authors: Saima Suleman, Kalsoom Sughra, Naeem Mahmood Ashraf

Abstract:

Mycobacterium tuberculosis causes a communicable chronic illness that is highly studied by researchers, as it is present, in active or latent form, in approximately one third of the world population. The genetic makeup of a person plays an important part in immunity against disease, and one important associated factor is single nucleotide polymorphism (SNP) in the relevant gene. In this study, we have examined the association between single nucleotide polymorphisms of the CD209 gene (which encodes the DC-SIGN receptor) and tuberculosis. Dry-lab (in silico) and wet-lab (RFLP) analyses were carried out. The GWAS catalogue and the GEO database were searched for previous association data. No association study related to CD209 nsSNPs was found, but the role of CD209 in pulmonary tuberculosis has been addressed in the GEO database; therefore, CD209 was selected for this study. Databases such as Ensembl and the 1000 Genomes Project were used to retrieve SNP data in the form of VCF files, which were then submitted to different software tools to sort the SNPs into benign and deleterious. Selected nsSNPs were further annotated through 3-D modeling using the I-TASSER online software. Finally, the selected nsSNPs were checked in the Gujrat and Faisalabad populations through RFLP analysis. In this study population, two nsSNPs were found to be associated with tuberculosis, while one nsSNP was not found to be associated with the disease.
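
Case-control SNP association of this kind is commonly tested with a chi-square test on a contingency table of allele or genotype counts; a minimal Python sketch with invented counts (the paper does not report its raw tables):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 counts: carriers / non-carriers of the risk allele
# among TB cases and healthy controls (numbers invented for illustration).
table = [[34, 66],   # cases:    carrier, non-carrier
         [18, 82]]   # controls: carrier, non-carrier
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # p < 0.05 suggests association
```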

Keywords: association, CD209, DC-SIGN, tuberculosis

Procedia PDF Downloads 309
1430 The Use of Voice in Online Public Access Catalog as a Faster Searching Device

Authors: Maisyatus Suadaa Irfana, Nove Eka Variant Anna, Dyah Puspitasari Sri Rahayu

Abstract:

Technological developments provide convenience to people. Communication between humans and computers has traditionally been carried out via text; with the development of technology, it can now be conducted by voice, like communication between human beings. This provides an easy facility for many people, especially those who have special needs. Here, voice search technology is applied to the search of book collections in the OPAC (Online Public Access Catalog), so library visitors will find it faster and easier to locate the books they need. Integration with Google is needed to convert the voice into text. To optimize search time and results, the server downloads all book data available in the server database and converts it into JSON format. In addition, several algorithms are incorporated, including decomposition (parsing) of the JSON into arrays, index building, and analysis of the results. The aim is to make searching much faster than the usual OPAC search, because for every search request the data is taken directly from the database. A data update menu is provided so that users can perform their own data updates and get the latest information.
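
A minimal sketch of the voice front end using the Python SpeechRecognition package, whose recognize_google() wraps the free Google Web Speech API; the abstract confirms Google integration but not this specific library, so treat it as an illustrative assumption (the microphone input also requires PyAudio):

```python
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    print("Say a title or author...")
    audio = r.listen(source)

query = r.recognize_google(audio)        # speech -> text via Google
print("Searching OPAC for:", query)
# The text would then be matched against the JSON-indexed book records.
```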

Keywords: OPAC, voice, searching, faster

Procedia PDF Downloads 344
1429 Gypsum Composites with CDW as Raw Material

Authors: R. Santos Jiménez, A. San-Antonio-González, M. del Río Merino, M. González Cortina, C. Viñas Arrebola

Abstract:

On average, Europe generates around 890 million tons of construction and demolition waste (CDW) per year, and only 50% of this CDW is recycled. This is far from the objectives set in the European Directive for 2020 and, aware of this situation, European countries are implementing national policies to prevent avoidable waste and to promote measures to increase recycling and recovery. In Spain, one of these measures has been the development of a CDW recycling guide for the manufacture of mortar, concrete, bricks and lightweight aggregates. However, there is still not enough information on the possibility of incorporating CDW materials into the manufacture of gypsum products. In view of the foregoing, the Universidad Politécnica de Madrid is creating a database with information on the possibility of incorporating CDW materials into the manufacture of gypsum products. The objective of this study is to improve this database by analysing the feasibility of incorporating two different CDW materials into a gypsum matrix: ceramic waste bricks (perforated brick and double hollow brick) and extruded polystyrene (XPS) waste. Results show that it is possible to incorporate up to 25% of ceramic waste and 4% of XPS waste, over the weight of gypsum, into a gypsum matrix. Furthermore, with the addition of ceramic waste, an 8% increase in surface hardness and a 25% reduction in capillary water absorption can be obtained, while the addition of XPS yields a 26% reduction in density and a 37% improvement in thermal conductivity.

Keywords: CDW, waste materials, ceramic waste, XPS, construction materials, gypsum

Procedia PDF Downloads 510
1428 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan

Authors: Hsien-Te Lin

Abstract:

The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), an LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should fulfill the function of evaluating the carbon footprint throughout all stages of the life cycle of a building project, including the production, transportation and manufacturing of materials, construction, daily energy usage, renovation and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS 2050 and ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period, based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful to the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials are meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is designed not in terms of raw materials but in terms of building components, such as flooring, walls, roofs, ceilings, roads or cabinets. The LCBA Database has therefore been compiled from existing carbon footprint databases for raw materials and from architectural graphic standards, so that project designers can now conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict, and a good BCFES should provide a simplified, designer-friendly method to overcome this obstacle. In this paper, the author has developed such a simplified tool, the dynamic Energy Use Intensity (EUI) method, which accurately predicts energy usage with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in a building's carbon footprint and further enhance low carbon design. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
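
A minimal Python sketch of the dynamic-EUI arithmetic described above: annual energy as floor area times a baseline EUI per subsystem, scaled by the designed efficiency level. All baseline figures and scaling factors below are invented placeholders, not LCBA's published values:

```python
floor_area_m2 = 12_000

baseline_eui = {"air_conditioning": 45.0,  # kWh/m2/yr, hypothetical baselines
                "lighting": 25.0,
                "equipment": 20.0}
efficiency = {"air_conditioning": 0.85,    # designed level vs. baseline (1.0)
              "lighting": 0.70,
              "equipment": 0.95}

annual_kwh = sum(floor_area_m2 * eui * efficiency[k]
                 for k, eui in baseline_eui.items())
print(f"predicted use: {annual_kwh:,.0f} kWh/yr")
```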

Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy

Procedia PDF Downloads 139