Search results for: Data science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26494


25654 Information Extraction Based on Search Engine Results

Authors: Mohammed R. Elkobaisi, Abdelsalam Maatuk

Abstract:

Search engines are large-scale information retrieval tools for the Web that are currently freely available to all. This paper explains how to convert the raw result counts returned by search engines into useful information, which represents a new method of data gathering compared with traditional methods. Submitting queries for many keywords by hand takes considerable time and effort, so we developed a user interface program that searches automatically, accepting multiple keywords at once and collecting the wanted data without further intervention. The collected raw data are then processed using mathematical and statistical methods to eliminate unwanted data and convert the remainder into usable information.
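As an illustration of the kind of automation the abstract describes, here is a minimal Python sketch (the endpoint URL and the response field are placeholders, not the authors' actual program) that submits multiple keywords in one batch and records the raw result counts:

```python
# Hedged sketch: batch-query a search API and collect raw result counts for
# several keywords at once. Real search engines require an API key and have
# their own response schemas; "total_results" is an assumed field name.
import requests

def collect_counts(keywords, endpoint="https://api.example-search.com/query"):
    counts = {}
    for kw in keywords:
        resp = requests.get(endpoint, params={"q": kw}, timeout=10)
        resp.raise_for_status()
        counts[kw] = resp.json().get("total_results", 0)  # raw hit count
    return counts

raw = collect_counts(["data science", "information extraction"])
# Downstream, outliers can be filtered statistically before further analysis,
# e.g. dropping counts more than two standard deviations from the mean.
print(raw)
```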

Keywords: search engines, information extraction, agent system

Procedia PDF Downloads 423
25653 Comparison of Wet and Microwave Digestion Methods for the Al, Cu, Fe, Mn, Ni, Pb and Zn Determination in Some Honey Samples by ICP-OES in Turkey

Authors: Huseyin Altundag, Emel Bina, Esra Altıntıg

Abstract:

The aim of this study is to determine the amounts of Al, Cu, Fe, Mn, Ni, Pb and Zn in honey samples gathered from the Sakarya and Istanbul regions of Turkey. Sample preparation was performed using both a wet decomposition method and a microwave digestion system. The accuracy of the method was verified against the standard reference materials Tea Leaves (INCT-TL-1) and NIST SRM 1515 Apple Leaves. The gathered data were compared with literature values, and possible sources of contamination of the honey samples are discussed. The results will be presented at ICCIS 2015: XIII International Conference on Chemical Industry and Science.

Keywords: wet decomposition, microwave digestion, trace element, honey, ICP-OES

Procedia PDF Downloads 458
25652 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography

Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya

Abstract:

In today’s era, data security is an important concern and one of the most demanding issues, as it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble data so that an intruder will not be able to retrieve it, whereas steganography hides the data in some cover file so that the very presence of the communication is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography, and of the Data Encryption Standard (DES) algorithm with image and audio steganography. Both algorithms were coded in MATLAB, and the combined techniques were observed to perform better than the individual techniques. The risk of unauthorized access is alleviated to a certain extent by using these techniques, which could be used in banks, intelligence agencies such as RAW, and other settings where highly confidential data are transferred. Finally, the two techniques are compared in tabular form.
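To make the combination concrete, here is a minimal Python sketch of the encrypt-then-hide idea (an illustrative stand-in for the authors' MATLAB implementation, not their code): the message is first encrypted with DES, then the ciphertext bits are embedded in the least-significant bits of a cover image.

```python
# Requires: pip install pycryptodome numpy
import numpy as np
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

def embed(cover: np.ndarray, message: bytes, key: bytes) -> np.ndarray:
    """Encrypt `message` with DES, then hide the ciphertext bits in the
    least-significant bits of an 8-bit grayscale cover image."""
    ct = DES.new(key, DES.MODE_ECB).encrypt(pad(message, DES.block_size))
    bits = np.unpackbits(np.frombuffer(ct, dtype=np.uint8))
    flat = cover.flatten().copy()
    assert bits.size <= flat.size, "cover image too small for the message"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs only
    return flat.reshape(cover.shape)

def extract(stego: np.ndarray, n_ct_bytes: int, key: bytes) -> bytes:
    """Recover the ciphertext bits from the LSBs and decrypt."""
    bits = stego.flatten()[:n_ct_bytes * 8] & 1
    ct = np.packbits(bits).tobytes()
    return unpad(DES.new(key, DES.MODE_ECB).decrypt(ct), DES.block_size)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed(cover, b"secret report", key=b"8bytekey")
print(extract(stego, n_ct_bytes=16, key=b"8bytekey"))     # b'secret report'
```

ECB mode is used only to keep the sketch short; a real deployment would use an authenticated mode, and, as in the paper, RSA could protect the DES key in transit.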

Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography

Procedia PDF Downloads 284
25651 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India

Authors: Anushtha Saxena

Abstract:

This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collecting, storing, and analysing of consumers’ data in order to use the data generated for profits, revenue, etc. Data monetisation enables e-commerce companies to gain better business opportunities, innovative products and services, and a competitive edge over others, and to generate millions in revenue. This paper analyses the issues and challenges that arise from the process of data monetisation. Some of the issues highlighted pertain to the right to privacy and the protection of e-commerce consumers’ data. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India. The Supreme Court of India recognized the right to privacy as a fundamental right in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data they collect and store for economic gain and monetisation, and the related issue of data protection. The researcher has mainly focused on e-commerce companies such as online shopping websites to analyse the legal issue of data monetisation. In the age of the Internet of Things, people have shifted to online shopping because it is convenient, easy, flexible, comfortable, time-saving, etc. But at the same time, e-commerce companies store the data of their consumers and exploit it by selling it to third parties or generating more data from the data already stored with them. This violates individuals’ right to privacy, because consumers know nothing about what happens to their data once it is given online; many times, data is collected without the consent of individuals at all. The data, whether structured or unstructured, is then used by analytics for monetisation. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with respect to their data and how it is used by e-commerce businesses to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill could make a significant impact on data monetisation. Finally, the paper studies the European Union General Data Protection Regulation and how this legislation could be helpful in the Indian scenario concerning e-commerce businesses and data monetisation.

Keywords: data monetization, e-commerce companies, regulatory framework, GDPR

Procedia PDF Downloads 111
25650 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, are not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments demonstrating that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
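A minimal simulation in the spirit of the experiments described above (synthetic data and parameters are assumptions, not the study's protocol) shows how a linear SVM trained on labels with a 40% error rate can still recover most of the true decision boundary:

```python
# Can a linear SVM trained on labels with a 40% error rate beat 60% accuracy?
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 10))
y_true = (X @ rng.normal(size=10) > 0).astype(int)       # linearly separable truth
flip = rng.random(y_true.size) < 0.40                    # 40% label error rate
y_noisy = np.where(flip, 1 - y_true, y_true)

X_tr, X_te, y_tr, _, _, y_te_true = train_test_split(
    X, y_noisy, y_true, test_size=0.25, random_state=0)
model = LinearSVC().fit(X_tr, y_tr)                      # trained on imperfect labels
print("accuracy vs. ground truth:", model.score(X_te, y_te_true))
# With enough data and random (non-adversarial) label noise, accuracy against
# the true labels typically far exceeds the 60% accuracy of the training labels.
```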

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 195
25649 Transforming Healthcare Data Privacy: Integrating Blockchain with Zero-Knowledge Proofs and Cryptographic Security

Authors: Kenneth Harper

Abstract:

Blockchain technology presents solutions for managing healthcare data, addressing critical challenges in privacy, integrity, and access. This paper explores how privacy-preserving technologies, such as zero-knowledge proofs (ZKPs) and homomorphic encryption (HE), enhance decentralized healthcare platforms by enabling secure computations and patient data protection. It examines the mathematical foundations of these methods, their practical applications, and how they meet the evolving demands of healthcare data security. Using real-world examples, this research highlights industry-leading implementations and offers a roadmap for future applications in secure, decentralized healthcare ecosystems.
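To illustrate the "secure computations" idea, here is a minimal sketch of additively homomorphic encryption using the Paillier cryptosystem (an illustrative choice; the abstract does not commit to a specific HE scheme), where an untrusted party aggregates encrypted readings without ever decrypting them:

```python
# Requires: pip install phe
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A clinic encrypts two blood-pressure readings before sharing them.
enc_a = public_key.encrypt(120)
enc_b = public_key.encrypt(130)

# An untrusted aggregator sums the ciphertexts without seeing the values.
enc_sum = enc_a + enc_b

print(private_key.decrypt(enc_sum))  # -> 250, recovered only by the key holder
```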

Keywords: blockchain, cryptography, data privacy, decentralized data management, differential privacy, healthcare, healthcare data security, homomorphic encryption, privacy-preserving technologies, secure computations, zero-knowledge proofs

Procedia PDF Downloads 10
25648 Virtual Schooling as a Collaboration between Public Schools and the Scientific Community

Authors: Thomas A. Fuller

Abstract:

Over the past fifteen years, virtual schooling has been introduced and implemented to varying degrees throughout the public education system in the United States. In some states it is possible for students to voluntarily take their entire course load online, without ever having to step into a classroom. Experts foresee a dramatic rise in the number of courses taken online by public school students in the United States, with some predicting that by 2019 as many as 50% of public high school courses will be delivered online. This electronic delivery of public education offers tremendous potential to the scientific community because it calls for innovation and is funded by public school revenue. Public accountability provides a ready supply of statistical data for measuring the progress of virtual schools as they are implemented in the public school arena. This allows for a survey of the current use of virtual schooling through examination of past statistical data, as well as forecasts for future years based upon those data. Virtual schooling is on the rise in the United States, but its growth has been tempered by practical problems of implementation. The greatest and best use of virtual schooling thus far has been to supplement the courses offered by public schools (e.g., offering unique language courses, elective courses, and games-based math and science courses). The weaknesses of virtual schooling lie in the problematic accountability of allowing students to take courses online at home and the lack of supportive infrastructure in the public school arena. Virtual schooling holds great promise for the public school education system in the United States, as well as for the scientific community. Online courses give students access to a much greater catalog of courses than is offered through classroom instruction in their local public school. This promising sector needs assistance from the scientific community in implementing new pedagogical methodologies.

Keywords: virtual schools, online classroom, electronic delivery, technological innovation

Procedia PDF Downloads 380
25647 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads

Authors: Dražen Cvitanić, Biljana Maljković

Abstract:

This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed from continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km section of a state road. The data were first used to determine maximum operating speeds on tangents and to compare them with speeds in the middle of tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. Operating speed models for tangent sections were then developed. There was no significant difference between models developed using speeds in the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have a higher coefficient of determination than models developed on spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.
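For orientation, a small Python sketch of the kind of model fitting involved (synthetic data and variable names are assumptions; the paper's regressors and coefficients are not reproduced here):

```python
# Fit an operating-speed model V85 = b0 + b1 * tangent_length and compare R^2
# for two speed definitions (maximum on tangent vs. middle of tangent).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
tangent_length = rng.uniform(100, 1200, size=200)             # metres
v85_max = 60 + 0.03 * tangent_length + rng.normal(0, 4, 200)  # km/h, max on tangent
v85_mid = v85_max - rng.normal(2, 3, 200)                     # km/h, mid-tangent

X = tangent_length.reshape(-1, 1)
for name, y in [("max-speed model", v85_max), ("mid-tangent model", v85_mid)]:
    r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"{name}: R^2 = {r2:.3f}")
```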

Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency

Procedia PDF Downloads 451
25646 Bridging the Gap between Teaching and Learning: A 3-S (Strength, Stamina, Speed) Model for Medical Education

Authors: Mangala Sadasivan, Mary Hughes, Bryan Kelly

Abstract:

Medical education must focus on bridging the gap between teaching and learning when training pre-clinical students in the skills needed to keep up with medical knowledge and to meet the demands of health care in the future. The authors were interested in showing that a 3-S model (building strength, developing stamina, and increasing speed) using a bridged curriculum design helps connect teaching and learning and improves students’ retention of basic science and clinical knowledge. The authors designed three learning modules using the 3-S model within a systems course in a pre-clerkship medical curriculum. Each module focused on a bridge (concept map) designed by the instructor for specific content delivered in the course. This within-subjects design study included 304 registered MSU osteopathic medical students (3 campuses) ranked by quintile based on previous coursework. The instructors used the bridge to create self-directed learning exercises (building strength) to help students master basic science content. Students were video-coached on how to complete assignments, and given pre-tests and post-tests designed to give them control to assess and identify gaps in learning and strengthen connections. The instructor who designed the modules also used video lectures to help students master clinical concepts and link them (building stamina) to previously learned material connected to the bridge. Board-style practice questions relevant to the modules were used to help students improve access (increasing speed) to stored content. Unit examinations covering the content within the modules and materials covered by other instructors teaching within the units served as outcome measures. These data were then compared to each student’s performance on a final comprehensive exam and on the COMLEX medical board examinations taken some time after the course. The authors used mean comparisons to evaluate students’ performances on module items (using the 3-S model) versus non-module items on unit exams, the final course exam, and the COMLEX medical board examination. The data show that, on average, students performed significantly better on module items than on non-module items on exams 1 and 2; the module 3 exam was canceled due to a university shutdown. The difference in mean scores between module and non-module items disappeared on the final comprehensive exam, which was rescheduled once the university resumed session. By quintile designation, mean scores were higher for module items than non-module items; the difference in scores for quintiles 1 and 2 was significantly better on exam 1, the gap widened for all quintile groups on exam 2, and it disappeared on exam 3. Based on COMLEX performance, all students on average, whether they passed or failed, performed better on module items than non-module items on all three exams. The gap between scores on module items for students who passed the COMLEX versus those who failed was greater on exam 1 (14.3) than on exam 2 (7.5) and exam 3 (10.2). The data show that the 3-S model using a bridge effectively connects teaching and learning.

Keywords: bridging gap, medical education, teaching and learning, model of learning

Procedia PDF Downloads 58
25645 A Neural Network Based Clustering Approach for Imputing Multivariate Values in Big Data

Authors: S. Nickolas, Shobha K.

Abstract:

The treatment of incomplete data is an important step in data pre-processing. Missing values create a noisy environment in all applications and are an unavoidable problem in big data management and analysis. Numerous techniques, such as discarding rows with missing values, mean imputation, expectation maximization, neural networks with evolutionary algorithms or optimized techniques, and hot deck imputation, have been introduced by researchers for handling missing data. Among these, imputation techniques play a positive role in filling in missing values when it is necessary to use all records in the data rather than discard records with missing values. In this paper we propose a novel artificial neural network based clustering algorithm, Adaptive Resonance Theory-2 (ART2), for imputation of missing values in mixed-attribute data sets. ART2 can recognize learned models quickly and adapt to new objects rapidly. It carries out model-based clustering by using competitive learning and a self-stabilizing mechanism in dynamic environments without supervision. The proposed approach not only imputes the missing values but also provides information about handling outliers.
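The cluster-then-impute principle can be sketched in a few lines of Python. Since no standard library ships an ART2 implementation, KMeans stands in for the ART2 network here (a plainly labeled substitution); the point is that a missing entry is filled from the record's own cluster rather than from the global column mean:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_impute(X: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Impute NaNs in a numeric matrix with their cluster's column mean."""
    col_mean = np.nanmean(X, axis=0)
    filled = np.where(np.isnan(X), col_mean, X)        # provisional global fill
    labels = KMeans(n_clusters, n_init=10, random_state=0).fit_predict(filled)
    out = X.copy()
    for k in range(n_clusters):
        members = labels == k
        local_mean = np.nanmean(X[members], axis=0)    # per-cluster column means
        rows, cols = np.where(np.isnan(X) & members[:, None])
        out[rows, cols] = local_mean[cols]
    return out

X = np.array([[1.0, 2.0], [1.1, np.nan], [9.0, 10.0], [np.nan, 9.8]])
print(cluster_impute(X, n_clusters=2))   # NaNs filled from the nearer cluster
```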

Keywords: ART2, data imputation, clustering, missing data, neural network, pre-processing

Procedia PDF Downloads 272
25644 The Effect That the Data Assimilation of Qinghai-Tibet Plateau Has on a Precipitation Forecast

Authors: Ruixia Liu

Abstract:

The Qinghai-Tibet Plateau has an important influence on the precipitation of its lower reaches. Remote sensing (RS) data have their own advantages, and a numerical prediction model that assimilates RS data will perform better than one that does not. We obtained assimilation data from MHS, surface, and sounding observations through GSI, introduced the result into WRF, and obtained relative humidity (RH) and precipitation forecasts. By comparing the 1 h, 6 h, 12 h, and 24 h results, we found that assimilating the MHS, surface, and sounding data made the forecast precipitation amount, area, and center more accurate. Analysis of the differences in the initial field showed that data assimilation over the Qinghai-Tibet Plateau influences the forecast for its lower reaches by affecting the initial temperature and RH.

Keywords: Qinghai-Tibet Plateau, precipitation, data assimilation, GSI

Procedia PDF Downloads 229
25643 A Reading Attempt of the Urban Memory of Jordan University of Science and Technology Campus by Cognitive Mapping

Authors: Bsma Adel Bany Mohammad

Abstract:

University campuses are small cities, containing basic city functions such as educational spaces, accommodation, services, and transportation. They are spaces of functional and social life, with different activities and different occupants. Campuses are designed and transformed like cities, so both are experienced and memorized in the same way. Campus memory is the ability of individuals to maintain and reveal the spatial components of designed physical spaces, which together form understandings, experiences, and sensations of the environment. ‘Cognitive mapping’ is used to decode the physical interaction and emotional relationship between individuals and the city; cognitive maps are created graphically, using geometric and verbal elements on paper, by remembering images of the urban environment. In this study, to determine the emotional urban identity of the Jordan University of Science and Technology campus, architecture students were asked to identify the areas they interact with on the campus by drawing a cognitive map. ‘Campus memory items’ were identified by analyzing the cognitive maps of the campus, and the spatial identity results from such data. The analysis is based on the five basic elements of Lynch: paths, districts, edges, nodes, and landmarks. As a result of this analysis, it was found that spatial identity is constructed from the shared elements of the maps. In the memory of most students, the gate structures, large desirable structures located at the main entrances of the campus, were defined as major landmarks, followed by the square spaces defined as nodes, and both stairs and corridors defined as paths. Finally, the districts and edges of educational buildings and service spaces were listed correspondingly in the cognitive maps. The findings suggest that the spatial identity of the campus design relates mainly to the gate structures, squares, and stairs.

Keywords: cognitive maps, university campus, urban memory, identity

Procedia PDF Downloads 147
25642 Positive Affect, Negative Affect, Organizational and Motivational Factor on the Acceptance of Big Data Technologies

Authors: Sook Ching Yee, Angela Siew Hoong Lee

Abstract:

Big data technologies have become a means to exploit business opportunities and provide valuable business insights through the analysis of big data. However, many organizations have yet to adopt big data technologies, especially small and medium enterprises (SMEs). This study uses the technology acceptance model (TAM), examining several of its constructs together with additional constructs: positive affect, negative affect, organizational factor, and motivational factor. The conceptual model proposed in the study will be tested on the relationship and influence of positive affect, negative affect, organizational factor, and motivational factor on the intention to use big data technologies. The study employs empirical research, conducting a survey to collect data.

Keywords: big data technologies, motivational factor, negative affect, organizational factor, positive affect, technology acceptance model (TAM)

Procedia PDF Downloads 356
25641 Existence of God: Belief, Analysis and a Scientific Explanation of Resemblance with Cosmic Theory

Authors: Aarti Muley

Abstract:

Ancient Vedic philosophy defines three basic gods, i.e., Bramha, Vishnu, and Shiva. Bramha is known as a supreme god and is responsible for creating the universe. The Vedic scriptures do not give a direct description of Lord Bramha, but the Rig Veda describes Bramha under the name Hiranyagarbha. The Vedas, the Bhagwat Gita, and the Mahabharata describe Bramha, and modern science has found that many theories and principles relate directly to the life of Lord Bramha, yet there is no direct explanation of or evidence for the planet Bramhaloka, also called Satyaloka. Neither the ancient scriptures nor Indian astrology, which is based on the motions of the planets, gives any direct evidence for the planet Bramhaloka. In this paper, an effort is made to study who the god Bramha is and what the planet Bramhaloka is according to the Vedic scriptures; using the theories of modern science, it is found that Bramhaloka has a strong resemblance to the Sun. To the best of the author’s knowledge, this is the first report proposing that Lord Bramha’s planet Bramhaloka and the Sun are one and the same.

Keywords: God Bramha, ancient scriptures, cosmic theory, scientific explanation

Procedia PDF Downloads 170
25640 Big Data Analysis with Rhipe

Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim

Abstract:

Rhipe, which integrates the R and Hadoop environments, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on actual data of various sizes. Experimental results comparing the performance of Rhipe with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages, owing to parallel processing in which the number of map tasks increases with the size of the data. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes for configuring a Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and its computing speed increased as the number of data nodes grew.
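The split-apply-combine idea behind such parallel regression can be sketched in plain Python (a conceptual illustration only; Rhipe itself is an R package and distributes this work as Hadoop map tasks): each chunk contributes the partial sufficient statistics X'X and X'y, which are summed once and solved.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100_000), rng.normal(size=(100_000, 3))])
beta_true = np.array([2.0, 0.5, -1.0, 3.0])
y = X @ beta_true + rng.normal(size=100_000)

# "Map": per-chunk partial sums (these chunks could live on different nodes).
chunks = np.array_split(np.arange(len(y)), 8)        # 8 simulated map tasks
xtx = sum(X[i].T @ X[i] for i in chunks)             # "reduce": sum the X'X parts
xty = sum(X[i].T @ y[i] for i in chunks)             # "reduce": sum the X'y parts

beta_hat = np.linalg.solve(xtx, xty)                 # single normal-equations solve
print(beta_hat)  # close to beta_true; identical to a single-machine OLS fit
```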

Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe

Procedia PDF Downloads 493
25639 Application of Mathematics in Real-Life Situation

Authors: Abubakar Attahiru

Abstract:

Mathematics plays an important role in real-life situations. The development of the study of mathematics is a result of man’s need to survive and interact with others in society. Mathematics is the universal language applied in almost every aspect of life; it gives us a way to understand patterns, define relationships, and predict the future. Changes in the content and methods of studying mathematics follow trends in societal needs and developments, and developments in mathematics in turn affect developments in society. Generally, education helps to develop society, while the activities and needs of society dictate the educational policy of that society. Among all the academic subjects studied at school, mathematics has contributed more distinctly to the objectives of general education than any other subject, as a result of the application of mathematics to all spheres of human endeavor. This paper looks at the meaning of the basic concepts of mathematics, science, and technology, the application of mathematics in real-life situations, and their relationships with society. The paper also shows how mathematics, science, and technology affect the existence and development of society, and how society determines the nature of the mathematics studied within it through its educational system.

Keywords: application, mathematics, real life, situation

Procedia PDF Downloads 151
25638 Security in Resource-Constrained Networks: Lightweight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. Given the limited processing power of these nodes and the need for data consistency, the transmitted data can easily be compromised, and secure, real-time data transfer remains an open question. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising network throughput, utilizing the battery resources available in each sensor node. Our methodology takes advantage of the efficiency of the Z-MAC protocol and provides a unique key through a sharing mechanism based on the MAC addresses of neighboring nodes. We present a lightweight data-integrity layer embedded in the Z-MAC protocol and show that our protocol performs better than plain Z-MAC under different attack scenarios.
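A hedged sketch of the neighbor-key idea follows (the abstract does not publish the exact scheme, so the derivation below is an assumption): a pairwise key is derived from the two nodes' MAC addresses plus a pre-shared network secret, then used for lightweight authenticated encryption.

```python
# Requires: pip install cryptography
import hashlib, hmac
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

NETWORK_SECRET = b"pre-shared-network-secret"   # provisioned at deployment time

def pairwise_key(mac_a: bytes, mac_b: bytes) -> bytes:
    """Order-independent 32-byte key derived from two MAC addresses."""
    material = min(mac_a, mac_b) + max(mac_a, mac_b)
    return hmac.new(NETWORK_SECRET, material, hashlib.sha256).digest()

key = pairwise_key(bytes.fromhex("a0b1c2d3e4f5"), bytes.fromhex("0a1b2c3d4e5f"))
aead = ChaCha20Poly1305(key)
nonce = b"\x00" * 12                            # must be unique per packet
packet = aead.encrypt(nonce, b"sensor reading: 21.7C", b"hdr")  # adds integrity tag
print(aead.decrypt(nonce, packet, b"hdr"))      # raises if the packet was tampered
```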

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 139
25637 Using Composite Flour in Bread Making: Cassava and Wheat Flour

Authors: Aishatu Ibrahim, Ijeoma Chinyere Ukonu

Abstract:

The study set out to produce bread using composite cassava flour. The main objective of the work was to determine the possibility of using composite cassava flour in bread production and to find out whether it is acceptable to the hospitality industry and the general public. Research questions were formulated and analyzed. A sample size of 10 professional catering judges from the department of hospitality management/food science and technology was used. Relevant literature was reviewed. The data collected were analyzed using mean deviation. Product A, made with 20% cassava flour and 80% wheat flour, and product D, made with 100% wheat flour, competed for the highest acceptability. It was observed that the composite cassava dough needed to be allowed to proof for a longer period. Lastly, the researcher recommends that caterers be encouraged to use composite cassava flour in the production of bread in order to reduce cost.

Keywords: bread, cassava, flour, wheat

Procedia PDF Downloads 330
25636 Supplier Carbon Footprint Methodology Development for Automotive Original Equipment Manufacturers

Authors: Nur A. Özdemir, Sude Erkin, Hatice K. Güney, Cemre S. Atılgan, Enes Huylu, Hüseyin Y. Altıntaş, Aysemin Top, Özak Durmuş

Abstract:

Carbon emissions produced during a product’s life cycle, from the extraction of raw materials to waste disposal and market consumption activities, are major contributors to global warming. In light of the science-based targets (SBT) leading the way to a zero-carbon economy for the sustainable growth of companies, carbon footprint reporting of purchased goods has become critical for identifying hotspots and best practices for emission reduction opportunities. In line with Ford Otosan's corporate sustainability strategy, research was conducted to evaluate the carbon footprint of purchased products in accordance with Scope 3 of the Greenhouse Gas (GHG) Protocol. The purpose of this paper is to develop a systematic and transparent methodology to calculate the carbon footprint of products produced by automotive OEMs (Original Equipment Manufacturers) within the context of automotive supply chain management. To begin with, primary material data were collected through IMDS (International Material Data System) for the company’s three distinct vehicle types: Light Commercial Vehicle (Courier), Medium Commercial Vehicle (Transit and Transit Custom), and Heavy Commercial Vehicle (F-MAX). The obtained material data were classified as metals, plastics, liquids, electronics, and others to give insight into the overall material distribution of the vehicles, and matched to the SimaPro Ecoinvent 3 database, one of the most extensive databases for modelling material data related to the product life cycle. Product life cycle analysis was carried out within the framework of the ISO 14040-14044 standards, following their requirements and procedures. A comprehensive literature review and cooperation with suppliers were undertaken to identify the production methods of the parts used in the vehicles and to find out the amount of scrap generated during part production. Cumulative weight and material information, together with the related production processes of the components, were listed and multiplied by current sales figures. The results of the study provide key modelling of the carbon footprint of products and processes, based on a scientific approach to drive sustainable growth by setting straightforward, science-based emission reduction targets. Hence, this study aims to identify the hotspots and, correspondingly, to provide broad ideas about how to integrate carbon footprint estimates into the company's supply chain management by defining convenient actions in line with climate science. According to the emission values arising from the production phase, including raw material extraction and material processing, for the Ford Otosan vehicles studied here, GHG emissions from the production of metals used in the HCV, MCV, and LCV account for more than half of the carbon footprint of vehicle production. Correspondingly, aluminum and steel have the largest share among all material types, and achieving carbon neutrality in the steel and aluminum industries is of great significance to the world and will have an immense impact on the automobile industry. A strategic product sustainability plan, including the use of secondary materials, conversion to green energy, and low-energy process design, is required to reduce emissions from steel, aluminum, and plastics, given the projected increase in total volume by 2030.
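The core arithmetic of such a cradle-to-gate estimate is simple; the sketch below uses illustrative placeholder masses, scrap rates, and emission factors (not Ford Otosan or Ecoinvent data):

```python
# Toy per-vehicle production footprint: mass x (1 + scrap rate) x emission factor.
material_mass_kg = {"steel": 900.0, "aluminum": 120.0, "plastics": 180.0}
# kg CO2e per kg of material, covering raw-material extraction plus processing
emission_factor = {"steel": 2.0, "aluminum": 8.0, "plastics": 3.0}
scrap_rate = {"steel": 0.15, "aluminum": 0.10, "plastics": 0.05}  # production scrap

footprint = sum(
    material_mass_kg[m] * (1 + scrap_rate[m]) * emission_factor[m]
    for m in material_mass_kg
)
print(f"per-vehicle production footprint: {footprint:.0f} kg CO2e")
# Multiplying by sales volume gives the fleet-level Scope 3 contribution.
```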

Keywords: automotive, carbon footprint, IMDS, scope 3, SimaPro, sustainability

Procedia PDF Downloads 104
25635 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Survival (censored) data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modelled within the class of generalized linear models, and the method requires estimating the parameters of the distribution of the covariates. In this paper, we apply the method to clinical trial data with five covariates, four of which have missing values; the data are fully censored.
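A minimal sketch of the method of weights for a single missing binary covariate in an exponential survival model (all modelling choices here are illustrative assumptions; the paper's setting is more general): records with a missing covariate are expanded into one weighted copy per category, the weights being the posterior probabilities of each category, and the weighted likelihood is re-maximized at each EM iteration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.binomial(1, 0.4, n)
t = rng.exponential(1 / np.exp(-1.0 + 1.2 * x))   # true b0 = -1.0, b1 = 1.2
d = np.ones(n)                                     # event indicators (no censoring)
obs = rng.random(n) > 0.3                          # 30% of x missing at random

def neg_loglik(beta, xv, tv, dv, w):
    lam = np.exp(beta[0] + beta[1] * xv)           # exponential hazard
    return -np.sum(w * (dv * np.log(lam) - lam * tv))

beta, px = np.array([0.0, 0.0]), 0.5
for _ in range(50):                                # EM iterations
    # E-step: posterior P(x = 1 | t) for records with missing x
    lam = [np.exp(beta[0] + beta[1] * j) for j in (0, 1)]
    f = [l * np.exp(-l * t) * p for l, p in zip(lam, (1 - px, px))]
    w1 = np.where(obs, x, f[1] / (f[0] + f[1]))
    # M-step: weighted MLE over the expanded data (an x=0 and an x=1 copy)
    xv = np.concatenate([np.zeros(n), np.ones(n)])
    w = np.concatenate([1 - w1, w1])
    beta = minimize(neg_loglik, beta, args=(xv, np.tile(t, 2), np.tile(d, 2), w)).x
    px = w1.mean()                                 # update P(x = 1)
print(beta)  # approaches the true (-1.0, 1.2)
```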

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution

Procedia PDF Downloads 400
25634 Sexual Health in the Over Forty-Fives: A Cross-Europe Project

Authors: Tess Hartland, Moitree Banerjee, Sue Churchill, Antonina Pereira, Ian Tyndall, Ruth Lowry

Abstract:

Background: Sexual health services and policies for middle-aged and older adults are underdeveloped, while sexually transmitted infections in this age group are rising globally. The Interreg cross-Europe Sexual Health In Over 45s (SHIFT) project aims to increase participation in sexual health services and improve sexual health and wellbeing in people aged over 45, with an additional focus on disadvantaged groups. Methods: A two-pronged mixed methodology is being used to develop a model of good service provision in sexual health for over 45s. (1) Following PRISMA-ScR guidelines, a scoping review is being conducted using the databases PsycINFO, Web of Science, ERIC, and PubMed, with a search strategy built around terms for sexual health, good practice, over 45s, and disadvantaged groups; the initial search for literature yielded 7914 results. (2) Surveys (n=1000) based on the Theory of Planned Behaviour are being administered across the UK, Belgium, and the Netherlands to explore current sexual health knowledge, awareness, and attitudes. Expected results: It is expected that sexual health needs and potential gaps in service provision will be identified in order to inform good practice for sexual health services for the target population. Results of the scoping review are being analysed, while focus group and survey data are being gathered. Preliminary analysis of the survey data highlights barriers to access such as limited risk awareness and stigma. All data analysis will be completed by the time of the conference. Discussion: Findings will inform the development of a model to improve sexual health and wellbeing among over 45s, a population often missed in sexual health policy improvement.

Keywords: adult health, disease prevention, health promotion, over 45s, sexual health

Procedia PDF Downloads 125
25633 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces behind the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation, whether through prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as usability by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop design decision support systems, often based on artificial intelligence. Numerous needs and steps for designing and developing a decision support system (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration towards specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of comprehensive frameworks for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that combine well with the early stages of integrated design (the schematic design and design development stages) and that are heuristic, hybrid, and meta-simulation-based, relying on big real data (such as Building Energy Management System data or Web data). Obtaining, using, and combining these data with simulation data to create models that better handle uncertainty, are more dynamic, and are more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (in greater harmony with circular economy principles), are important research areas in this field. The results of this study offer a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 160
25632 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

A limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide such external data to smart contracts. An oracle is an interface that delivers data from sources outside the blockchain to a smart contract for consumption, and it can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential role, technical architecture, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching consensus about a certain inquiry or task.
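A minimal request/response oracle pattern can be sketched in plain Python (a conceptual illustration; production oracle networks run on-chain contracts plus off-chain nodes, and all names below are assumptions): the off-chain node signs the data it reports, and the consumer accepts it only if the signature verifies.

```python
# Requires: pip install pynacl
import json, time
from nacl.signing import SigningKey

oracle_key = SigningKey.generate()
ORACLE_VERIFY_KEY = oracle_key.verify_key       # published / stored on-chain

def oracle_fetch(query: str) -> bytes:
    """Off-chain node: fetch external data and sign the payload."""
    payload = json.dumps({"query": query, "value": 42, "ts": time.time()})
    return oracle_key.sign(payload.encode())    # 42 is a stand-in for an API call

def contract_consume(signed: bytes) -> dict:
    """'Smart contract' side: accept data only if the oracle's signature checks."""
    payload = ORACLE_VERIFY_KEY.verify(signed)  # raises BadSignatureError if forged
    return json.loads(payload)

print(contract_consume(oracle_fetch("ETH/USD")))
```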

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 130
25631 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science, so the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major forecasting methods still dominate: Box-Jenkins ARIMA and exponential smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available, owing to its simplicity, robustness, and accuracy as an automatic forecasting procedure, demonstrated especially in the famous M-competitions. Despite this success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. This study therefore proposes a new forecasting method, called the ATA method, to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; the two methods thus have similar structural forms, and ATA can easily be adapted to any of the individual ES models while gaining many advantages from its innovative weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods with additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models, called ATA trended models, are compared against their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. The models outperform their counterparts in almost all settings, and when model selection is carried out among these trended models, ATA outperforms all competitors in the M3 competition for both short-term and long-term forecasting horizons when forecasting accuracies are compared using popular error metrics.
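A hedged Python sketch of the additive-trend case illustrates the key idea (the recursions below follow the published ATA literature as I understand it; consult the authors' papers for the definitive form): where Holt's method uses fixed smoothing constants alpha and beta, ATA(p, q) uses time-varying weights p/t and q/t.

```python
import numpy as np

def ata_additive(y, p, q, horizon=6):
    """ATA(p, q) with additive trend; returns point forecasts."""
    assert 1 <= q <= p, "ATA requires p >= q >= 1"
    S, T = y[0], 0.0                          # initial level and trend
    for t in range(2, len(y) + 1):            # t is the 1-based observation index
        S_prev = S
        w = p / t if t >= p else 1.0          # level weight p/t (S_t = y_t early on)
        v = q / t if t >= q else 1.0          # trend weight q/t
        S = w * y[t - 1] + (1 - w) * (S_prev + T)
        T = v * (S - S_prev) + (1 - v) * T
    return S + T * np.arange(1, horizon + 1)  # linear extrapolation of the trend

y = np.array([10., 12., 13., 15., 16., 18., 19., 21.])
print(ata_additive(y, p=6, q=1))
```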

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 175
25630 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial

Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie

Abstract:

A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials, however, are expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary; robust data management systems are crucial for optimizing and validating the quality, accuracy, and dependability of trial data. Regarding the difficulties of data gathering in clinical trials in low-resource areas, there is a scarcity of literature on this subject, which may raise concerns. Effective data management systems and implementation goals should be part of trial procedures, and publicizing the creative clinical data management techniques used in clinical trials should boost public confidence in study conclusions and encourage replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, REDCap, SQL, Visual Basic, Ruby, and ASP.NET. Additionally, data synchronization tools were developed to integrate data from these systems into a central server for the reporting systems. Clinician, laboratory, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of research data collected in real time, online reporting, data synchronization, and methods for cleaning and verifying data. Consequently, we effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor-resource setting.

Keywords: data management, data collection, data cleaning, cluster-randomized trial

Procedia PDF Downloads 15
25629 Asymptotic Expansion of Double Oscillatory Integrals: Contribution of Non Stationary Critical Points of the Second Kind

Authors: Abdallah Benaissa

Abstract:

In this paper, we consider the asymptotics of double oscillatory integrals in the case of critical points of the second kind, where the order of contact between the boundary and a level curve of the phase is even; the situation where the order of contact is odd will be studied on other occasions. Complete asymptotic expansions are derived, and the coefficient of the leading term is computed in terms of the original data of the problem. Many authors have studied this problem using a variety of methods, but only in the special case where the order of contact is minimal; the most cited papers are one by Jones and Kline and another by Chako. Such integrals are encountered in many areas of science, especially in problems of diffraction in optics.
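For orientation, a short LaTeX sketch of the objects involved (standard stationary-phase facts only, not the paper's new results for boundary critical points):

```latex
% Double oscillatory integral with large parameter \lambda over a domain D:
\[
  I(\lambda) = \iint_{D} g(x,y)\, e^{i\lambda \phi(x,y)}\, dx\, dy .
\]
% A nondegenerate interior stationary point P_0 (where \nabla\phi = 0 and
% \det \phi'' \neq 0) contributes the classical leading term of order 1/\lambda:
\[
  I(\lambda) \sim \frac{2\pi\, g(P_0)}{\lambda\, \lvert \det \phi''(P_0) \rvert^{1/2}}
  \, e^{\, i\lambda \phi(P_0) + \frac{i\pi}{4} \operatorname{sgn} \phi''(P_0)} .
\]
% Critical points of the second kind lie on the boundary \partial D, where
% \partial D is tangent to a level curve of \phi; the order of this tangential
% contact governs the fractional power of \lambda in the leading term, which
% the paper computes from the original data of the problem.
```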

Keywords: asymptotic expansion, double oscillatory integral, critical point of the second kind, optics diffraction

Procedia PDF Downloads 348
25628 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering

Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining

Abstract:

DNA microarray technology is used to analyze thousands of gene expression values simultaneously, a task of great importance for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identifying structures in gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on agglomerative hierarchical clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
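The SVD-normalize-then-cluster pipeline can be sketched briefly in Python (a conceptual outline with synthetic data; the Mixed-Clustering and Lift refinement steps of the paper are not reproduced):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 40))                 # 200 genes x 40 conditions

# Rank-k SVD reconstruction strips noise and acts as a normalization step.
U, s, Vt = np.linalg.svd(expr, full_matrices=False)
k = 5
normalized = (U[:, :k] * s[:k]) @ Vt[:k]

# AHC on the normalized rows; cutting the dendrogram yields candidate gene
# groups from which biclusters can then be refined.
Z = linkage(normalized, method="average", metric="correlation")
labels = fcluster(Z, t=10, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```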

Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)

Procedia PDF Downloads 273
25627 Implementation of an Undergraduate Integrated Biology and Chemistry Course

Authors: Jayson G. Balansag

Abstract:

An integrated biology and chemistry (iBC) course for first-year college students was developed at the University of Delaware. This course prepares students to (1) become interdisciplinary thinkers in the field of biology and (2) work collaboratively in the future with others from multiple disciplines. This paper documents and describes the implementation of the course. Information gathered from the literature, classroom observations, and interviews was used to carry out the purpose of this paper. The major goal of the iBC course is to align concepts between biology and chemistry, so that students can draw on science concepts from both disciplines and apply them in their interdisciplinary research. The course is offered in the fall and spring semesters of each school year, and students enrolled in biology are also enrolled in chemistry during the same semester. The iBC is composed of lectures, laboratories, studio sessions, and workshops, and is taught by faculty from the biology and chemistry departments. In addition, preceptors, graduate teaching assistants, and studio fellows facilitate the laboratory and studio sessions. These roles are interdependent. The iBC can serve as a model for higher education institutions that wish to implement an integrated biology course.

Keywords: integrated biology and chemistry, integration, interdisciplinary research, new biology, undergraduate science education

Procedia PDF Downloads 238
25626 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

With cloud storage services, data can be stored in the cloud and shared across multiple users. However, unexpected hardware/software failures and human errors can easily cause data stored in the cloud to be lost or corrupted, affecting its integrity. Mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire dataset from the cloud server. But public auditing of the integrity of shared data with existing mechanisms unavoidably reveals confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block of shared data is kept confidential from public verifiers, who can verify shared-data integrity without retrieving the entire file; on demand, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in a static group, whereas in a dynamic group the group private key changes when users are revoked from the group. When users leave the group, the blocks they had signed are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.
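The block-level signing and sampled auditing underlying such schemes can be sketched with ordinary signatures (an explicit simplification: Ed25519 below stands in for the group signatures of the abstract and therefore does not hide the signer's identity):

```python
# Requires: pip install pynacl
from nacl.signing import SigningKey

BLOCK = 4096

def sign_blocks(data: bytes, key: SigningKey):
    """Split shared data into blocks and sign each block separately, so a
    verifier can audit sampled blocks without downloading the whole file."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return blocks, [key.sign(b).signature for b in blocks]

def audit(blocks, sigs, verify_key, sample=(0, 2)) -> bool:
    """Public verifier checks a sample of blocks against their signatures."""
    try:
        for i in sample:
            verify_key.verify(blocks[i], sigs[i])
        return True
    except Exception:
        return False

owner = SigningKey.generate()
blocks, sigs = sign_blocks(b"x" * 10000, owner)
print(audit(blocks, sigs, owner.verify_key))          # True
blocks[2] = b"tampered" + blocks[2][8:]               # corrupt one block
print(audit(blocks, sigs, owner.verify_key))          # False
```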

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 387
25625 The Cultural and Semantic Danger of English Transparent Words Translated from English into Arabic

Authors: Abdullah Khuwaileh

Abstract:

While teaching and translating vocabulary is no longer a neglected area in ELT in general and in translation in particular, the psychology of its acquisition remains neglected. Our paper aims to explore some of the learning and translating conditions under which vocabulary is acquired and translated properly. To achieve this objective, two teaching methods (experiments) were applied to four translators to measure their acquisition of a number of transparent vocabulary items, some of which were deliberately chosen from ‘deceptively transparent words’. All the data, samples, etc., were taken from Jordan University of Science and Technology (JUST) and Yarmouk University, where the researcher is employed. The study showed that translators may translate transparent words inaccurately, particularly when these words are uncontextualised. It was also shown that the morphological structures of words may lead translators, or even EFL learners, to misinterpret meaning.

Keywords: English, transparent words, processing, translation

Procedia PDF Downloads 68