Search results for: data source
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28358

25868 Probability Sampling in Matched Case-Control Study in Drug Abuse

Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell

Abstract:

Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrap method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for random-sample data vs. 0.91 for snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
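The bootstrap comparison of coefficient precision described above can be sketched in a few lines. The data values, sample sizes, and the statistic used below are illustrative assumptions, not the study's actual survey data or logistic model:

```python
import random
import statistics

def bootstrap_se(data, stat, n_boot=100, seed=42):
    """Standard error of `stat`, estimated from n_boot resamples with replacement."""
    rng = random.Random(seed)
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    return statistics.stdev(reps)

# Hypothetical risk-factor scores: a tighter random sample vs. a more
# clustered snowball sample (illustrative numbers only).
random_sample   = [0.2, 0.5, 0.4, 0.6, 0.3, 0.5, 0.4, 0.6, 0.5, 0.4]
snowball_sample = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.2, 0.8]

se_random   = bootstrap_se(random_sample, statistics.mean)
se_snowball = bootstrap_se(snowball_sample, statistics.mean)
print(se_random < se_snowball)  # the clustered sample bootstraps wider
```

A wider spread across bootstrap replicates, as in the snowball-style sample here, is exactly the symptom the abstract reports for the snowball-sample model's beta coefficients.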

Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling

Procedia PDF Downloads 493
25867 Representation of Reality in Nigerian Poetry

Authors: Zainab Abdulkarim

Abstract:

Literature is the study of life and a source of knowledge; it involves the truth about many things in life. Creative artistes, especially poets, are representatives of the voices of the people. These artistes have been the critics of all those involved in the development of their nation. This paper examines how Nigerian poets go further, not just by writing but by showing the different ways the country has been convoluted. The paper intends to show the power and ability literature has in representation: the power to represent the important values of life. There is no doubt that literature asserts truth. Through the various poems examined in this paper, Nigerian poets are shown to portray the realities of the nation.

Keywords: literature, poets, reality, representation

Procedia PDF Downloads 314
25866 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the crazy amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates the indispensability of HPC in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems and the benefits of HPC for Big Data and computational biology, and surveys the current state of the art and future generation of HPC computing with Big Data in biology.
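The cost driver mentioned above, all-to-all comparison, grows quadratically with the number of sequences. A toy sketch (the sequences and the distance function are illustrative, not from the paper):

```python
from itertools import combinations

def hamming(a, b):
    """Number of mismatching positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

seqs = ["ACGT", "ACGA", "TTGA", "ACGG"]  # toy sequences
pairs = list(combinations(seqs, 2))      # all-to-all comparison
distances = {(a, b): hamming(a, b) for a, b in pairs}

n = len(seqs)
assert len(pairs) == n * (n - 1) // 2    # ~n^2/2 comparisons: quadratic growth
print(distances[("ACGT", "ACGA")])       # → 1
```

The polynomial exponent is benign, but doubling the data set quadruples the pairwise work, which is why genome-scale instances push this embarrassingly parallel task onto HPC clusters.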

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 363
25865 Evaluating the Effectiveness of Science Teacher Training Programme in National Colleges of Education: A Preliminary Study, Perceptions of Prospective Teachers

Authors: A. S. V. Polgampala, F. Huang

Abstract:

This is an overview of what is entailed in an evaluation and the issues to be aware of when class observation is carried out. This study examined the effects of evaluating the teaching practice of a 7-day ‘block teaching’ session in a pre-service science teacher training program at a reputed National College of Education in Sri Lanka. Effects were assessed in three areas: evaluation of the training process, evaluation of the training impact, and evaluation of the training procedure. Data for this study were collected by class observation of 18 teachers from 9th to 16th February 2017. The participants of the study, prospective teachers of science, were evaluated based on a format newly introduced by the NIE. The data collected were analyzed qualitatively using the Miles and Huberman procedure for analyzing qualitative data: data reduction, data display, and conclusion drawing/verification. It was observed that the trainees showed confidence in teaching those competencies and skills. Teacher educators’ dissatisfaction had a great impact on the evaluation process.

Keywords: evaluation, perceptions & perspectives, pre-service, science teaching

Procedia PDF Downloads 315
25864 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm

Authors: Sukhleen Kaur

Abstract:

In security groundwork, the Intrusion Detection System (IDS) has become an important component and has received increasing attention in recent years. An IDS is one of the effective ways to detect different kinds of attacks and malicious code in a network, and it helps us to secure the network. Data mining techniques can be applied to an IDS to analyse the large amount of data and give better results. Data mining can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, studies have concentrated on finding attacks, but this paper detects malicious files. Some intruders do not attack directly; they hide harmful code inside files, or may corrupt those files and attack the system. These files are detected according to some defined parameters, which form two lists of files: normal files and harmful files. After that, data mining is performed. In this paper, a hybrid classifier combining the Naive Bayes and Ripper classification methods has been used. The results show how an uploaded file in the database is tested against the parameters and then characterised as either a normal or a harmful file, after which the mining is performed. Moreover, when a user tries to mine a harmful file, an exception is generated stating that mining cannot be performed on corrupted or harmful files.
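The Naive Bayes half of such a hybrid classifier can be sketched from first principles. The binary file "parameters" (e.g. executable header present, registry writes) and the training labels below are purely illustrative, and the Ripper rule learner is not reproduced here:

```python
import math

def train_bernoulli_nb(X, y):
    """Bernoulli Naive Bayes: per-class priors and Laplace-smoothed P(feature=1 | class)."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        log_prior = math.log(len(rows) / len(X))
        probs = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                 for j in range(len(X[0]))]
        model[c] = (log_prior, probs)
    return model

def predict(model, x):
    """Return the class with the highest posterior log-likelihood."""
    def loglik(c):
        log_prior, probs = model[c]
        return log_prior + sum(math.log(p if xi else 1 - p)
                               for xi, p in zip(x, probs))
    return max(model, key=loglik)

# Hypothetical binary file features: [has_executable_header, writes_registry, obfuscated]
X = [[1, 1, 0], [1, 1, 1], [0, 0, 0], [0, 1, 0], [1, 0, 1], [0, 0, 1]]
y = ["harmful", "harmful", "normal", "normal", "harmful", "normal"]

model = train_bernoulli_nb(X, y)
print(predict(model, [1, 1, 1]))  # → harmful
```

In the paper's pipeline, a file classified as harmful at this stage would be blocked from mining with an exception rather than passed on.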

Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, naive Bayes, ripper

Procedia PDF Downloads 414
25863 Generalized Approach to Linear Data Transformation

Authors: Abhijith Asok

Abstract:

This paper presents a generalized approach to the simple linear data transformation, Y=bX, through an integration of multidimensional coordinate geometry, vector space theory and polygonal geometry. The scaling is performed by adding an additional ‘Dummy Dimension’ to the n-dimensional data, which helps plot two-dimensional component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of observations in any of the 2n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the ‘Dummy Axis’. The derived scaling factor was found to be dependent on the coordinates of the common point of origin for the diverging straight lines and the plane of extension, chosen on and perpendicular to the ‘Dummy Axis’, respectively. This result indicates the geometrical interpretation of a linear data transformation and hence opens opportunities for a more informed choice of the factor ‘b’, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retained the same scaling factor as that of the features.
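The closing claim about distance metrics can be illustrated directly: under Y = bX, the Euclidean distance between any two observations scales by exactly |b|. A minimal sketch with made-up points:

```python
import math

def scale(X, b):
    """Apply the linear transformation Y = bX component-wise."""
    return [[b * xi for xi in x] for x in X]

def euclidean(p, q):
    """Euclidean distance between two points."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

X = [[1.0, 2.0], [4.0, 6.0]]  # illustrative 2-D observations
b = 3.0
Y = scale(X, b)

# The distance metric retains the features' scaling factor b.
assert math.isclose(euclidean(Y[0], Y[1]), b * euclidean(X[0], X[1]))
print(euclidean(X[0], X[1]), euclidean(Y[0], Y[1]))  # → 5.0 15.0
```

The same factor-of-|b| behaviour holds for any metric that is homogeneous of degree one, such as the Manhattan distance.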

Keywords: data transformation, dummy dimension, linear transformation, scaling

Procedia PDF Downloads 297
25862 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health

Authors: Minna Pikkarainen, Yueqiang Xu

Abstract:

The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the field of strictly regulated official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: “correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions”, which we call the connected health approach. Currently, issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. A MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today’s unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to have more active participation in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e. the data subject, 2) services providing personal data (e.g. startups providing data collection apps or data collection devices), 3) health and wellness services utilizing the aforementioned data, and 4) services authorizing access to this data under the individual’s explicit consent.
Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed by the Action Design Research approach, which is a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain: thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.

Keywords: blockchain, health data, platform, action design

Procedia PDF Downloads 100
25861 Structural Characterization and Application of TiO2 Nanoparticles

Authors: Maru Chetan, Desai Abhilash

Abstract:

The structural characteristics and applications of TiO2 powder with different phases are studied by various techniques in this paper. TTIP was used as the Ti source, with EG and citric acid as catalysts, for the sol-gel synthesis of TiO2 powder. To replace the sol-gel method, we developed a new method of making TiO2 nanoparticles. It is a two-route method: one physical route and one chemical route. The specific aim of this process is to minimize the production cost and enable large-scale production of nanoparticles. The synthesized product was characterized by EDAX, SEM and XRD.

Keywords: mortar and pestle, nanoparticle, TiO2, TTIP

Procedia PDF Downloads 322
25860 Dependence of Photocurrent on UV Wavelength in ZnO/Pt Bottom-Contact Schottky Diode

Authors: Byoungho Lee, Changmin Kim, Youngmin Lee, Sejoon Lee, Deuk Young Kim

Abstract:

We fabricated a bottom-contacted ZnO/Pt Schottky diode and investigated the dependence of its photocurrent on the wavelength of the illuminating ultraviolet (UV) light source. The bottom-contacted Schottky diode was devised by growing (000l) ZnO on (111) Pt, and the fabricated device showed a strong dependence of its photo-response characteristics on the UV wavelength. When longer-wavelength UV (e.g., UV-A) illuminated the device, the photocurrent increased by a factor of 200 compared to that under illumination with shorter-wavelength UV (e.g., UV-C). The behavior is attributed to the wavelength-dependent UV penetration depth in ZnO.

Keywords: ZnO, UV, Schottky diode, photocurrent

Procedia PDF Downloads 256
25859 Using Learning Apps in the Classroom

Authors: Janet C. Read

Abstract:

UClan set up a collaboration with Lingokids to assess the Lingokids learning app's impact on learning outcomes in UK classrooms for children aged 3 to 5 years. Data gathered during the controlled study with 69 children include attitudinal data, engagement, and learning scores. The data show that children's enjoyment while learning was higher among those using the game-based app than among those using other traditional methods. It is worth pointing out that among older children, engagement when using the learning app was significantly higher than with other traditional methods. According to the existing literature, there is a direct correlation between engagement, motivation, and learning. Therefore, this study provides relevant data points to conclude that the Lingokids learning app serves its purpose of encouraging learning through playful and interactive content. That being said, we believe that learning outcomes should be assessed with a wider range of methods in further studies. Likewise, it would be beneficial to assess the level of usability and playability of the app in order to evaluate the learning app from other angles.

Keywords: learning app, learning outcomes, rapid test activity, Smileyometer, early childhood education, innovative pedagogy

Procedia PDF Downloads 71
25858 Road Safety in Great Britain: An Exploratory Data Analysis

Authors: Jatin Kumar Choudhary, Naren Rayala, Abbas Eslami Kiasari, Fahimeh Jafari

Abstract:

Great Britain has one of the safest road networks in the world. However, the consequences of any death or serious injury are devastating for loved ones, as well as for those who help the severely injured. This paper aims to analyse Great Britain's road safety situation and show the response measures for areas where the total damage caused by accidents can be significantly and quickly reduced. In this paper, we carry out an exploratory data analysis using STATS19 data. For the past 30 years, the UK has had a good record in reducing fatalities, ranking third based on the number of road deaths per million inhabitants. There were around 165,000 accidents reported in Great Britain in 2009, and the number has been decreasing every year; by 2019 it was under 120,000. The government continues to work to reduce road deaths, empowering responsible road users by identifying and addressing the factors that make the roads less safe.
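A first exploratory step, counting reported accidents per year, might look as follows. The inline rows and the column names are illustrative stand-ins; real STATS19 extracts have dozens of columns and hundreds of thousands of rows:

```python
import csv
import io
from collections import Counter

# Hypothetical minimal STATS19-like extract (illustrative rows only).
raw = """accident_year,accident_severity
2009,Slight
2009,Serious
2019,Slight
2019,Fatal
"""

rows = list(csv.DictReader(io.StringIO(raw)))
per_year = Counter(r["accident_year"] for r in rows)      # accidents per year
by_severity = Counter(r["accident_severity"] for r in rows)
print(per_year["2009"])  # → 2
```

With the full data set, the same two counters reproduce the year-on-year decline and the severity mix the abstract summarises.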

Keywords: road safety, data analysis, openstreetmap, feature expanding

Procedia PDF Downloads 140
25857 Intrusion Detection System Using Linear Discriminant Analysis

Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou

Abstract:

Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the original data dimension using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix due to the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when we have large training data.
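The pseudo-inverse idea can be sketched for the two-class Fisher discriminant: when each class has fewer samples than features, the within-class scatter matrix is singular and a plain inverse would fail, but the pseudo-inverse still yields a usable projection direction. The tiny 3-D "traffic" samples below are illustrative, and this numpy sketch is not the authors' implementation:

```python
import numpy as np

def lda_direction(X1, X2):
    """Two-class Fisher discriminant direction, using the pseudo-inverse so a
    singular within-class scatter matrix (the SSS problem) is still handled."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.cov(X1.T, bias=True) * len(X1)
          + np.cov(X2.T, bias=True) * len(X2))  # within-class scatter
    w = np.linalg.pinv(Sw) @ (m1 - m2)          # pinv instead of inv
    return w / np.linalg.norm(w)

# Synthetic samples: 3 features but only 2 samples per class, so Sw is
# singular and classical LDA would fail outright.
X1 = np.array([[1.0, 2.0, 0.0], [1.2, 2.1, 0.1]])
X2 = np.array([[3.0, 0.5, 1.0], [3.1, 0.4, 0.9]])

w = lda_direction(X1, X2)
# The projected class means remain separated along w.
assert abs(X1.mean(axis=0) @ w - X2.mean(axis=0) @ w) > 1e-6
```

A projected sample would then be handed to KNN for the final normal/attack decision, as in the paper's pipeline.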

Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99

Procedia PDF Downloads 226
25856 Investigation of the Flow in Impeller Sidewall Gap of a Centrifugal Pump Using CFD

Authors: Mohammadreza DaqiqShirazi, Rouhollah Torabi, Alireza Riasi, Ahmad Nourbakhsh

Abstract:

In this paper, the flow in the sidewall gap of a centrifugal pump impeller is studied using a numerical method. The flow in the sidewall gap forms internal leakage and is the source of the “disk friction loss”, which is the most important cause of reduced efficiency in low specific speed centrifugal pumps. The simulation is done using CFX software and a high-quality mesh, so the modeling error has been reduced. The Navier-Stokes equations have been solved for this domain. In order to predict the turbulence effects, the SST model has been employed.

Keywords: numerical study, centrifugal pumps, disk friction loss, sidewall gap

Procedia PDF Downloads 530
25855 Resistance of Mycobacterium tuberculosis to Daptomycin

Authors: Ji-Chan Jang

Abstract:

Tuberculosis is still a major health problem because of the increase in multidrug-resistant (MDR) and extensively drug-resistant (XDR) forms of the disease. Therefore, the most urgent clinical need is to discover potent agents and develop novel drug combinations capable of reducing the duration of MDR and XDR tuberculosis therapy. Three reference strains, H37Rv, CDC1551 and W-Beijing GC1237, and six clinical isolates of MDR tuberculosis were tested against daptomycin in the range of 0.013 to 256 mg/L. All tested M. tuberculosis strains were resistant to daptomycin, not only the laboratory strains but also the clinical MDR strains isolated from different sources. Daptomycin will therefore not be an antibiotic of choice for treating infection by the Gram-positive, atypical, slowly growing M. tuberculosis.

Keywords: tuberculosis, daptomycin, resistance, Mycobacterium tuberculosis

Procedia PDF Downloads 385
25854 Optimal Wheat Straw to Bioethanol Supply Chain Models

Authors: Abdul Halim Abdul Razik, Ali Elkamel, Leonardo Simon

Abstract:

Wheat straw is one of the alternative feedstocks that may be utilized for bioethanol production, especially when sustainability criteria are the major concern. To increase market competitiveness, an optimal supply chain plays an important role, since wheat straw is a seasonal agricultural residue. In designing the supply chain optimization model, the economic profitability of the thermochemical and biochemical conversion route options was considered. It was found that torrefied pelletization with a gasification route is the most profitable option to produce bioethanol from the lignocellulosic source of wheat straw.

Keywords: bio-ethanol, optimization, supply chain, wheat straw

Procedia PDF Downloads 737
25853 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
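The core step, testing whether a candidate rule is statistically significant, can be sketched with a one-sided z-test on the rule's accuracy against a baseline rate. The counts, baseline, and threshold below are illustrative and not the exact statistic used by STRIM:

```python
import math

def rule_z_score(n_match, n_correct, p0):
    """z-score testing whether a candidate rule's accuracy among the n_match
    rows matching its condition beats the baseline class rate p0."""
    p_hat = n_correct / n_match
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n_match)

# Hypothetical decision table with 6 decision classes => baseline p0 ≈ 1/6.
# A candidate rule matches 100 rows and predicts the class correctly 40 times.
z = rule_z_score(100, 40, 1 / 6)
print(z > 1.645)  # significant at the one-sided 5% level
```

The sample-size question the paper studies shows up directly in the `n_match` denominator: with too few matching rows, even a genuinely good rule cannot reach significance.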

Keywords: rule induction, decision table, missing data, noise

Procedia PDF Downloads 396
25852 Modelling Urban Rigidity and Elasticity Growth Boundaries: A Spatial Constraints-Suitability Based Perspective

Authors: Pengcheng Xiang Jr., Xueqing Sun, Dong Ngoduy

Abstract:

In the context of rapid urbanization, urban sprawl has brought about extensive negative impacts on ecosystems and the environment, resulting in a gradual shift from “incremental growth” to “stock growth” in cities. A detailed urban growth boundary is a prerequisite for urban renewal and management. This study takes Shenyang City, China, as the study area, evaluates the spatial distribution of urban spatial suitability in the study area from the perspective of spatial constraints-suitability using multi-source data, and simulates the future rigid and elastic growth boundaries of the city using the CA-Markov model. The results show that (1) the suitable construction area and moderate construction area in the study area account for 8.76% and 19.01% of the total area, respectively, and show a trend of distribution from the urban centre to the periphery, mainly in Shenhe District, the southern part of Heping District, the western part of Dongling District, and the central part of Dadong District; (2) the area of expansion of construction land in the study area in the period 2023-2030 is 153274.6977 hm2, accounting for 44.39% of the total area of the study area; (3) the rigid boundary of the study area occupies an area of 153274.6977 hm2, accounting for 44.39% of the total area of the study area, and the elastic boundary contains an area of 75362.61 hm2, accounting for 21.69% of the total area. The study constructed a method for urban growth boundary delineation, which helps to apply remote sensing to guide future urban spatial growth management and urban renewal.

Keywords: urban growth boundary, spatial constraints, spatial suitability, urban sprawl

Procedia PDF Downloads 32
25851 Fate of Sustainability and Land Use Array in Urbanized Cities

Authors: Muhammad Yahaya Ubale

Abstract:

A substantial rate of urbanization, together with economic growth, presents both tasks and prospects for sustainability. The objectives of the paper are: to ascertain the fate of sustainability in urbanized cities; and to identify the challenges of land use array in urbanized cities. The methodology engaged in this paper employed the use of secondary data, where articles, conference proceedings, seminar papers and literature materials were effectively used. The paper established that while one thinks globally, one must act locally if sustainability is to be achieved. The speed and scale of urbanization must be matched by natural and cost-effective deliberations. It also identified a platform that allows a city to work together as an ideal conglomerate, engaging all city departments as a source of services, and engaging residents, businesses, and contractors. It also revealed that a city should act as a leader and partner within an urban region, engaging senior government officials, utilities, rural settlements, private sector stakeholders, NGOs, and academia. Cities should assimilate infrastructure system design and management to enhance the efficiency of resource flows in an urban area. They should also coordinate spatial development; integrate urban forms and urban flows; and combine land use, urban design, urban density, and other spatial attributes with infrastructural development. Finally, since by 2050 urbanized cities alone could be consuming 140 billion tons of minerals, ores, fossil fuels and biomass annually (three times the current rate of consumption), sustainability can be accomplished through land use control, limited access to finite resources, facilities, utilities and services, as well as property rights and user charges.

Keywords: sustainability, land use array, urbanized cities, fate of sustainability and perseverance

Procedia PDF Downloads 272
25850 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
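The schema-based record extraction step can be sketched with plain regular expressions. The schema fields, patterns, and document text below are illustrative assumptions; production systems would use learned table parsers and entity detectors rather than hand-written patterns:

```python
import re

# Illustrative extraction schema: field name -> pattern with one capture group.
SCHEMA = {
    "company": r"([A-Z][A-Za-z]+ (?:Inc|Corp|Ltd))",
    "amount":  r"\$([0-9][0-9,.]*) (?:million|billion)",
}

def extract_record(text, schema):
    """Turn unstructured text into a machine-readable record per the schema."""
    record = {}
    for field, pattern in schema.items():
        m = re.search(pattern, text)
        record[field] = m.group(1) if m else None  # None marks a failed field
    return record

doc = "Acme Corp reported revenue of $12.5 million for Q3."
print(extract_record(doc, SCHEMA))  # → {'company': 'Acme Corp', 'amount': '12.5'}
```

The `None` placeholders are where a real pipeline would route documents to the user-feedback step the abstract mentions, preserving data integrity instead of guessing.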

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 113
25849 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregate, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of candidate host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was introduced to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
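The regression step can be sketched with ordinary least squares on a single factor; the paper's empirical formula combines several factors (CPU benchmark, memory, host count), and the host counts and query times below are made-up, noise-free numbers rather than HBDP measurements:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical benchmark: query time (s) measured on clusters of 2..8 hosts.
hosts = [2, 4, 6, 8]
secs  = [100, 80, 60, 40]

a, b = fit_line(hosts, secs)
predicted_10_hosts = a + b * 10   # extrapolate to a candidate configuration
print(round(predicted_10_hosts))  # → 20
```

Repeating the fit for each affordable configuration under the fixed fund and picking the one with the best predicted time is the procurement logic the abstract describes.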

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks

Procedia PDF Downloads 232
25848 Model Predictive Controller for Pasteurization Process

Authors: Tesfaye Alamirew Dessie

Abstract:

Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. Utilizing system identification from the experimental data, the dynamics of the pasteurization process were estimated. The quality of several model architectures was evaluated using best fit with data validation, residual analysis, and stability analysis. The auto-regressive with exogenous input (ARX322) model fit the validation data of the pasteurization process to roughly 80.37 percent. The ARX322 model structure was used to create the MPC and PID control techniques. After comparing controller performance based on settling time, overshoot percentage, and stability analysis, it was found that the MPC controller outperforms the PID for those parameters.
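The identification step can be sketched for a first-order ARX model, y[k] = a·y[k-1] + b·u[k-1], fitted by least squares (the paper's ARX322 structure has more lagged terms); the input sequence and the "true" coefficients below are synthetic, noise-free assumptions:

```python
def fit_arx1(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1], via the 2x2
    normal equations solved with Cramer's rule."""
    ks = range(1, len(y))
    s_yy = sum(y[k-1] * y[k-1] for k in ks)
    s_yu = sum(y[k-1] * u[k-1] for k in ks)
    s_uu = sum(u[k-1] * u[k-1] for k in ks)
    r_y  = sum(y[k] * y[k-1] for k in ks)
    r_u  = sum(y[k] * u[k-1] for k in ks)
    det = s_yy * s_uu - s_yu * s_yu
    a = (r_y * s_uu - r_u * s_yu) / det
    b = (r_u * s_yy - r_y * s_yu) / det
    return a, b

# Synthetic data generated by a known system with a=0.8, b=0.5 (noise-free).
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]
y = [0.0]
for k in range(1, len(u)):
    y.append(0.8 * y[k-1] + 0.5 * u[k-1])

a, b = fit_arx1(u, y)
print(round(a, 3), round(b, 3))  # → 0.8 0.5
```

With real (noisy) plant data, the same least-squares machinery returns estimates rather than exact coefficients, and the validation-fit percentage the abstract quotes measures how well the resulting model reproduces held-out output.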

Keywords: MPC, PID, ARX, pasteurization

Procedia PDF Downloads 163
25847 Point Estimation for the Type II Generalized Logistic Distribution Based on Progressively Censored Data

Authors: Rana Rimawi, Ayman Baklizi

Abstract:

Skewed distributions are important models that are frequently used in applications. Generalized distributions form a class of skewed distributions and have gained widespread use in applications because of their flexibility in data analysis. More specifically, the Generalized Logistic Distribution, with its different types, has received considerable attention recently. In this study, based on progressively type-II censored data, we consider point estimation for the type II Generalized Logistic Distribution (Type II GLD). We develop several estimators for its unknown parameters, including maximum likelihood estimators (MLE), Bayes estimators and best linear unbiased estimators (BLUE). The estimators are compared using simulation, based on the criteria of bias and mean square error (MSE). An illustrative example with a real data set is given.
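The bias/MSE comparison can be sketched by Monte Carlo for the plain (uncensored) logistic case. The estimators compared, the parameter values, and the replication counts below are illustrative, and progressive censoring is omitted for brevity:

```python
import math
import random
import statistics

def logistic_sample(rng, mu, s, n):
    """Draw n logistic(mu, s) variates via the inverse CDF."""
    return [mu + s * math.log(u / (1 - u))
            for u in (rng.random() for _ in range(n))]

def simulate(estimator, mu=2.0, s=1.0, n=50, reps=1000, seed=11):
    """Monte Carlo bias and MSE of a location estimator."""
    rng = random.Random(seed)
    ests = [estimator(logistic_sample(rng, mu, s, n)) for _ in range(reps)]
    bias = sum(ests) / reps - mu
    mse = sum((e - mu) ** 2 for e in ests) / reps
    return bias, mse

bias_mean, mse_mean = simulate(statistics.mean)
bias_med,  mse_med  = simulate(statistics.median)
print(abs(bias_mean) < 0.05, abs(bias_med) < 0.05)  # both roughly unbiased here
```

The study's actual comparison replaces the sample mean and median with the MLE, Bayes, and BLUE estimators of the Type II GLD parameters and draws progressively type-II censored samples, but the bias/MSE bookkeeping is the same.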

Keywords: point estimation, type II generalized logistic distribution, progressive censoring, maximum likelihood estimation

Procedia PDF Downloads 198
25846 Omni: Data Science Platform to Evaluate the Performance of a LoRaWAN Network

Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello

Abstract:

Nowadays, physical processes are becoming digitized through the evolution of communication, sensing, and storage technologies, which promotes the development of smart cities. The evolution of this technology has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Devices can send information that is captured and processed over large areas, but there is no guarantee that all of the data obtained will be effectively stored and correctly persisted, because, depending on the technology used, certain parameters strongly influence whether information is fully delivered. This article characterizes the design of a platform, currently under development, that uses data science to evaluate the performance and effectiveness of an industrial network implementing LoRaWAN technology, relating the configuration of the network's main parameters to information loss.
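One concrete way to relate a parameter configuration to information loss, as the platform intends, is to estimate per-configuration delivery ratios from gaps in each device's uplink frame counter (FCnt), which LoRaWAN increments on every uplink. The sketch below does this per spreading factor; the log format and field names are hypothetical.

```python
from collections import defaultdict

def delivery_ratio_by_sf(records):
    """Per-spreading-factor delivery ratio estimated from gaps in each
    device's LoRaWAN uplink frame counter (FCnt)."""
    counters = defaultdict(set)
    for device, sf, fcnt in records:
        counters[(device, sf)].add(fcnt)
    totals = defaultdict(lambda: [0, 0])      # sf -> [received, expected]
    for (device, sf), fcnts in counters.items():
        totals[sf][0] += len(fcnts)                       # frames actually seen
        totals[sf][1] += max(fcnts) - min(fcnts) + 1      # frames the counter implies
    return {sf: rec / exp for sf, (rec, exp) in totals.items()}

# Hypothetical gateway log: (device_id, spreading_factor, frame_counter)
log = [
    ("dev-a", 7, 1), ("dev-a", 7, 2), ("dev-a", 7, 3), ("dev-a", 7, 5),
    ("dev-b", 12, 10), ("dev-b", 12, 11), ("dev-b", 12, 14),
]
ratios = delivery_ratio_by_sf(log)   # dev-a misses FCnt 4; dev-b misses 12 and 13
```

Repeating this over other configuration parameters (bandwidth, coding rate, transmit power) yields the parameter-versus-loss relationship the platform sets out to characterize.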

Keywords: Internet of Things, LoRa, LoRaWAN, smart cities

Procedia PDF Downloads 148
25845 Cybervetting and Online Privacy in Job Recruitment – Perspectives on the Current and Future Legislative Framework Within the EU

Authors: Nicole Christiansen, Hanne Marie Motzfeldt

Abstract:

In recent years, more and more HR professionals have been using cyber-vetting in job recruitment in an effort to find the perfect match for the company. These practices are growing rapidly, accessing a vast amount of data from social networks, some of which is privileged and protected information. Thus, there is a risk that the right to privacy is turning into a duty to manage one's private data. This paper investigates to what degree a job applicant's fundamental rights are adequately protected under current and future legislation in the EU. It argues that current data protection regulations and the forthcoming regulation on the use of AI ensure sufficient protection. However, even though the regulation on paper protects employees within the EU, the recruitment sector may not pay sufficient attention to it, as it does not specifically target this area. Therefore, the lack of specific labor and employment regulation is a concern that the social partners should attend to.

Keywords: AI, cyber vetting, data protection, job recruitment, online privacy

Procedia PDF Downloads 86
25844 Performance Analysis of a Shell and Tube Heat Exchanger in the Organic Rankine Cycle Power Plant

Authors: Yogi Sirodz Gaos, Irvan Wiradinata

Abstract:

In the 500 kW Organic Rankine Cycle (ORC) power plant in Indonesia, an AFT-type (per the Tubular Exchanger Manufacturers Association, TEMA) shell and tube heat exchanger is used as a pre-heating system for the ORC's hot water circulation system. The pre-heating source is waste heat recovered from brine water tapped from a geothermal power plant. The brine water stream carries 5 MWₜₕ, with an average temperature of 170ᵒC and a working pressure of 7 barg. The aim of this research is to examine the performance of the heat exchanger in the ORC system of a 500 kW ORC power plant. The data for this research were collected during commissioning in mid-December 2016. During commissioning, the inlet temperature and working pressure of the brine water entering the shell and tube heat exchanger were 149ᵒC and 4.4 barg, respectively. Furthermore, the ΔT of the ORC system's hot water circulation across the heat exchanger was 27ᵒC, with an inlet temperature of 140ᵒC. The pressure in the hot water circulation system dropped slightly, from 7.4 barg to 7.1 barg. The flow rate of the hot water circulation was 80.5 m³/h. The results of the case study on the performance of the heat exchanger in the 500 kW ORC system are as follows: (1) the heat exchange duty is 2,572 kW; (2) the log mean temperature difference of the heat exchanger is 13.2ᵒC; (3) the actual overall heat transfer coefficient is 1,020.6 W/m².K; (4) the required overall heat transfer coefficient is 316.76 W/m².K; and (5) the over-design margin of the heat exchanger is 222.2%. An analysis of the heat exchanger's detailed engineering design (DED) is briefly discussed. To sum up, this research concludes that shell and tube heat exchanger technology demonstrated good performance as a pre-heating system for the ORC's hot water circulation system. Further research needs to be conducted to examine the performance of the heat exchanger in the ORC's hot water circulation system.
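The reported figures fit together through the standard rating relation Q = U·A·ΔT_lm, with the over-design margin given by the ratio of the actual to the required overall coefficient. A minimal sketch using the abstract's numbers (the counter-flow LMTD helper is generic; the implied transfer area is derived, not reported):

```python
from math import log

def lmtd_counterflow(dT_in, dT_out):
    """Log mean temperature difference from the two terminal temperature differences."""
    if abs(dT_in - dT_out) < 1e-9:
        return dT_in                       # equal terminal differences: LMTD = dT
    return (dT_in - dT_out) / log(dT_in / dT_out)

# Figures reported in the abstract
Q = 2572e3             # heat exchange duty, W
dT_lm = 13.2           # log mean temperature difference, K
U_actual = 1020.6      # actual overall heat transfer coefficient, W/m^2.K
U_required = 316.76    # required overall heat transfer coefficient, W/m^2.K

UA_required = Q / dT_lm                          # required UA product, W/K
area = UA_required / U_required                  # implied transfer area, m^2 (~615)
overdesign = (U_actual / U_required - 1) * 100.0 # over-design margin, % (~222.2)
```

The 222.2% figure thus falls straight out of the two reported overall coefficients, which is consistent with the abstract's conclusion that the exchanger comfortably exceeds its pre-heating duty.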

Keywords: shell and tube, heat exchanger, organic Rankine cycle, performance, commissioning

Procedia PDF Downloads 143
25843 Sequential Pattern Mining from Medical Record Data with the Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)

Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim

Abstract:

This research was conducted at the Bolo Primary Health Care (PHC) in Bima Regency. The purpose of the research is to find the association patterns formed in the medical records of Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the analysis method used, with transaction data generated from the Patient_ID, Check_Date, and diagnosis fields. Sequential Pattern Discovery using Equivalent Classes (SPADE) is one of the algorithms in sequential pattern mining; it finds frequent sequences in transaction data using a vertical database layout and a sequence-join process. The result of the SPADE algorithm is a set of frequent sequences, which are then used to form rules. This technique finds association patterns between combinations of items. Based on sequential association rule analysis with the SPADE algorithm, with a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the Patient_ID, Check_Date, and diagnosis data in the Bolo PHC.
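The support and confidence thresholds above can be illustrated with a minimal subsequence-counting sketch. SPADE itself operates on vertical id-lists joined pairwise, which this sketch does not reproduce; the toy patient histories below are hypothetical, with each list ordered by Check_Date.

```python
# Toy visit histories keyed by Patient_ID; each list is ordered by Check_Date
histories = {
    "P01": ["resp_infection", "hypertension", "resp_infection"],
    "P02": ["resp_infection", "hypertension"],
    "P03": ["gastritis", "hypertension"],
    "P04": ["resp_infection", "gastritis"],
}

def support(pattern, histories):
    """Fraction of patients whose history contains `pattern` as an ordered subsequence."""
    def contains(seq, pat):
        it = iter(seq)
        return all(p in it for p in pat)  # each `in` consumes the iterator forward
    return sum(contains(seq, pattern) for seq in histories.values()) / len(histories)

def confidence(antecedent, consequent, histories):
    """Confidence of the sequential rule antecedent -> consequent."""
    return support(antecedent + consequent, histories) / support(antecedent, histories)

s = support(("resp_infection", "hypertension"), histories)        # 2 of 4 patients
c = confidence(("resp_infection",), ("hypertension",), histories)  # 2 of 3 antecedent matches
```

A pattern is kept as a rule only when its support and confidence clear the chosen minimums, here 0.03 and 0.75 respectively in the study.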

Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm

Procedia PDF Downloads 401
25842 Estimation of Maximum Earthquake for Gujarat Region, India

Authors: Ashutosh Saxena, Kumar Pallav, Ramji Dwivedi

Abstract:

The present study estimates the seismicity parameter 'b' and the maximum possible earthquake magnitude (Mmax) for the Gujarat region with three well-established methods, viz. the Kijko parametric (KP) model, Kijko-Sellevoll-Bayes (KSB), and the Tapered Gutenberg-Richter (TGR) model, treating the region as a combined seismic source regime. The earthquake catalogue is prepared for the period 1330 to 2013 over the region extending from latitude 20ᵒN to 25ᵒN and longitude 68ᵒE to 75ᵒE, for earthquake moment magnitudes (Mw) ≥ 4.0. The 'a' and 'b' values estimated for the region are 4.68 and 0.58, respectively. Further, Mmax is estimated as 8.54 (± 0.29), 8.69 (± 0.48), and 8.12 with KP, KSB, and TGR, respectively.
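For context, the 'b' value of the Gutenberg-Richter relation log10 N(≥m) = a − b·m is commonly estimated with the Aki-Utsu maximum-likelihood formula, which the sketch below applies to a synthetic catalogue (not the Gujarat data; the half-bin correction and completeness magnitude are illustrative).

```python
import numpy as np

def aki_utsu_b(mags, m_min, dm=0.0):
    """Maximum-likelihood b-value (Aki 1965; Utsu 1966), with an optional
    half-bin correction dm/2 for catalogues binned to width dm."""
    mags = np.asarray(mags, float)
    mags = mags[mags >= m_min]
    return np.log10(np.e) / (mags.mean() - (m_min - dm / 2.0))

def gr_a_value(n_events, b, m_min):
    """'a' of log10 N(>=m) = a - b*m, recovered from the event count at m_min."""
    return np.log10(n_events) + b * m_min

# Synthetic Gutenberg-Richter catalogue with true b = 1.0 above Mw 4.0:
# magnitude excesses are exponential with rate b*ln(10)
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=20000)
b_hat = aki_utsu_b(mags, m_min=4.0)
```

The KP, KSB, and TGR methods used in the study then extend this frequency-magnitude fit with explicit models for the upper tail to bound Mmax.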

Keywords: Mmax, seismicity parameter, Gujarat, Tapered Gutenberg-Richter

Procedia PDF Downloads 542
25841 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia

Authors: Tim Nedyalkov

Abstract:

A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases reliance on cloud-based services, and collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) (NDB) Act 2017 in Australia, as the regulatory changes affect many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success and influencing the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers seeking to meet rapidly changing regulatory and compliance requirements.

Keywords: cloud compliance, cloud security, data governance, privacy protection

Procedia PDF Downloads 116
25840 Pioneering Technology of Night Photo-Stimulation of the Brain Lymphatic System: Therapy of Brain Diseases during Sleep

Authors: Semyachkina-Glushkovskaya Oxana, Fedosov Ivan, Blokhina Inna, Terskov Andrey, Evsukova Arina, Elovenko Daria, Adushkina Viktoria, Dubrovsky Alexander, Jürgen Kurths

Abstract:

In modern neurobiology, sleep is considered a novel biomarker and a promising therapeutic target for brain diseases. This is due to recent discoveries of the nighttime activation of the brain lymphatic system (BLS), which plays an important role in the removal of wastes and toxins from the brain and contributes to neuroprotection of the central nervous system (CNS). In our review, we discuss how night stimulation of the BLS might be a breakthrough strategy in the treatment of Alzheimer's and Parkinson's disease, stroke, brain trauma, and oncology. Although this research is in its infancy, there are pioneering and promising results suggesting that nighttime transcranial photobiomodulation (tPBM) stimulates lymphatic removal of amyloid-beta from the mouse brain more effectively than daytime tPBM, and that this is associated with a greater improvement in the neurological status and recognition memory of the animals. In our previous study, we discovered that tPBM modulates the tone and permeability of the lymphatic endothelium by stimulating NO formation, promoting lymphatic clearance of wastes and toxins from brain tissues. We also demonstrate that tPBM can lead to angio- and lymphangiogenesis, which is another mechanism underlying tPBM-mediated stimulation of the BLS. Thus, photo-augmentation of the BLS might be a promising therapeutic approach for preventing or delaying brain diseases associated with BLS dysfunction. Here we present pioneering technology for simultaneous tPBM and sleep monitoring in humans, stimulating the BLS to remove toxins from the CNS and modulate brain immunity. The wireless-controlled gadget includes a flexible organic light-emitting diode (LED) source that is controlled directly by a sleep-tracking device via a mobile application. The designed autonomous LED source is capable of providing the required therapeutic dose of light radiation at a certain region of the patient's head without disturbing the sleeping patient. To minimize patient discomfort, advanced materials such as flexible organic LEDs were used. Acknowledgment: This study was supported by RSF project No. 23-75-30001.

Keywords: brain diseases, brain lymphatic system, phototherapy, sleep

Procedia PDF Downloads 72
25839 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa

Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees

Abstract:

The design of any solar energy conversion system requires knowledge of solar radiation data collected over a long period. Satellite data have been widely used to estimate solar energy where no ground observations of solar radiation are available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The distribution of global solar radiation (GSR) was estimated over six selected locations in North Africa during the ten-year period from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE), and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in dataset performance, revealing significant variation of errors across seasons; the performance of the dataset also changes with the temporal resolution of the data used for comparison. Monthly mean values show better performance, but the accuracy of the data is compromised. The ERA-5 solar radiation data are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R²) varies from 93% to 99% for the selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to support solar energy applications in all fields, using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model that gives good results.
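The evaluation statistics named above take only a few lines to compute; the sketch below uses hypothetical daily station-versus-reanalysis values, not the study's data, and follows the usual conventions (MBE signed, estimate minus observation; R² as the squared Pearson correlation).

```python
import numpy as np

def evaluation_metrics(observed, estimated):
    """RMSE, MBE, MAE (in the data's units, or normalized upstream) and R^2."""
    obs = np.asarray(observed, float)
    est = np.asarray(estimated, float)
    diff = est - obs
    rmse = float(np.sqrt(np.mean(diff ** 2)))      # penalizes large errors
    mbe = float(np.mean(diff))                     # signed bias: + means overestimation
    mae = float(np.mean(np.abs(diff)))             # typical absolute error
    r2 = float(np.corrcoef(obs, est)[0, 1] ** 2)   # squared Pearson correlation
    return rmse, mbe, mae, r2

# Toy daily GSR values (kWh/m^2): station measurements vs. reanalysis estimates
ground = [5.1, 6.0, 6.4, 5.8, 4.9]
reanalysis = [5.0, 6.2, 6.3, 6.0, 5.1]
rmse, mbe, mae, r2 = evaluation_metrics(ground, reanalysis)
```

Running the same function per season, or on monthly means instead of daily values, reproduces the kind of temporal-resolution comparison the abstract describes.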

Keywords: solar energy, solar radiation, ERA-5, potential energy

Procedia PDF Downloads 211