Search results for: data envelopment analysis (DEA)
41496 A Systematic Review on Challenges in Big Data Environment
Authors: Rimmy Yadav, Anmol Preet Kaur
Abstract:
Big Data has demonstrated vast potential in streamlining decision-making and spotting business trends in fields such as manufacturing, finance and Information Technology. This paper gives a multi-disciplinary overview of the research issues in big data and of the procedures, tools and systems related to privacy, data storage management, network and energy utilization, fault tolerance and data representation. Beyond this, the challenges and opportunities present in the Big Data platform are summarized.
Keywords: big data, privacy, data management, network and energy consumption
Procedia PDF Downloads 313
41495 Post-occupancy Evaluation of Greenway Based on Multi-Source Data: A Case Study of Jincheng Greenway in Chengdu
Authors: Qin Zhu
Abstract:
Under the development concept of Park City, the Tianfu Greenway system, as the basic and pre-configured element of Chengdu's Global Park construction, connects urban open space with linear and circular structures and carries out the ecological, cultural and recreational functions of the park system. Chengdu's greenway construction is in full swing. In the process of greenway planning and construction, the landscape effect of greenways on urban quality improvement is highly valued, while the long-term impact of the crowd experience on the sustainable development of greenways is often ignored. It is therefore important to test the effectiveness of greenway construction from the perspective of users. Taking Jincheng Greenway in Chengdu as an example, this paper introduces multi-source data to construct a post-occupancy evaluation model of the greenway and adopts behavior mapping, questionnaire surveys, web text analysis and IPA analysis to comprehensively evaluate users' behavior characteristics and satisfaction. The evaluation results capture the actual behavior patterns and comprehensive needs of users, so that the experience of building greenways can be fed back in time, providing guidance for optimizing built greenways and for planning future ones.
Keywords: multi-source data, greenway, IPA analysis, post-occupancy evaluation (POE)
Procedia PDF Downloads 61
41494 Exergy Analysis of Reverse Osmosis for Potable Water and Land Irrigation
Authors: M. Sarai Atab, A. Smallbone, A. P. Roskilly
Abstract:
A thermodynamic study is performed on the Reverse Osmosis (RO) desalination process for brackish water. A detailed RO model of the thermodynamic properties, with and without an energy recovery device, was built in Simulink/MATLAB and validated against reported measurement data. The efficiency of desalination plants can be estimated by both the first and second laws of thermodynamics. While the first law focuses on the quantity of energy, the second-law analysis (i.e. exergy analysis) introduces quality. This paper used the Main Outfall Drain in Iraq as a case study to conduct energy and exergy analyses of the RO process. The results show that it is feasible to use an energy recovery method for reverse osmosis at salinities below 15,000 ppm, as the exergy efficiency roughly doubles. Moreover, the analysis shows that the highest exergy destruction occurs in the rejected water and the lowest in the permeate flow, accounting for 37% and 4.3%, respectively.
Keywords: brackish water, exergy, irrigation, reverse osmosis (RO)
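The second-law bookkeeping described above can be sketched in a few lines. This is a generic illustration, not the paper's model: the function names and the kW figures are made up, with the destruction shares chosen to echo the 37%/4.3% split reported for reject water and permeate.

```python
def exergy_efficiency(ex_product, ex_input):
    """Second-law efficiency: useful exergy out over exergy supplied."""
    return ex_product / ex_input

def exergy_destruction_shares(destructions):
    """Fraction of total exergy destruction attributable to each stream."""
    total = sum(destructions.values())
    return {name: value / total for name, value in destructions.items()}

# Illustrative numbers only (not the paper's data): destruction split
# between reject brine, permeate and pump losses, in kW.
shares = exergy_destruction_shares({"reject": 37.0, "permeate": 4.3, "pump": 58.7})
eta_II = exergy_efficiency(5.0, 50.0)   # hypothetical product/input exergy
```

A doubled exergy efficiency with energy recovery, as reported for low-salinity feed, would show up here as `eta_II` twice as large for the same product exergy.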
Procedia PDF Downloads 175
41493 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution is its tendency to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed "bathtub"-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (Weibull, log-logistic, and Burr XII distributions) and to other parametric survival distributions with three parameters, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, a Bayesian analysis and the performance of Gibbs sampling for the data set are also presented.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
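As a rough illustration of the maximum likelihood step for the classical (non-generalized) log-logistic sub-model, SciPy's `fisk` distribution can be fitted to simulated data; the generalized three-parameter form studied in the paper is not available in SciPy, so this sketch covers only the base distribution, and the information criterion line mirrors the AIC comparison idea.

```python
import numpy as np
from scipy import stats

# The log-logistic distribution is available in SciPy under the name
# `fisk`. Simulate survival-like data with known shape and scale.
rng = np.random.default_rng(0)
data = stats.fisk.rvs(c=2.0, scale=1.5, size=2000, random_state=rng)

# Maximum likelihood fit; location fixed at zero as in the classical
# two-parameter log-logistic model.
c_hat, loc_hat, scale_hat = stats.fisk.fit(data, floc=0)

# Log-likelihood and AIC (two free parameters: shape and scale), the
# kind of quantities used for the goodness-of-fit comparison.
loglik = float(np.sum(stats.fisk.logpdf(data, c_hat, loc_hat, scale_hat)))
aic = 2 * 2 - 2 * loglik
```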
Procedia PDF Downloads 202
41492 A Hybrid System for Borehole Soil Samples
Authors: Ali Ulvi Uzer
Abstract:
Data reduction is an important topic in the field of pattern recognition applications. The basic concept is the reduction of multitudinous amounts of data down to the meaningful parts. The Principal Component Analysis (PCA) method is frequently used for data reduction. The Support Vector Machine (SVM) method is a discriminative classifier formally defined by a separating hyperplane: given labeled training data, the algorithm outputs an optimal hyperplane which categorizes new examples. This study offers a hybrid approach that uses PCA for data reduction and SVM for classification. To assess the accuracy of the suggested system, soil samples taken from two boreholes were used. The classification accuracies for this dataset were obtained using the ten-fold cross-validation method. As the results suggest, this system, which performs size reduction, enables faster recognition of the dataset, so our results appear very promising.
Keywords: feature selection, sequential forward selection, support vector machines, soil sample
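A minimal sketch of the hybrid PCA-then-SVM pipeline with ten-fold cross-validation, using synthetic data in place of the borehole soil samples (which are not available here); the component count and kernel choice are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the borehole soil data (not the paper's dataset).
X, y = make_classification(n_samples=200, n_features=20, n_informative=8,
                           random_state=0)

# PCA for size reduction, then an SVM classifier, scored by ten-fold
# cross-validation as in the paper.
model = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=10)
mean_acc = scores.mean()
```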
Procedia PDF Downloads 455
41491 The Development of Statistical Analysis in Agriculture Experimental Design Using R
Authors: Somruay Apichatibutarapong, Chookiat Pudprommart
Abstract:
The purpose of this study was to develop statistical analysis using R programming via the internet, applied to agricultural experimental design. Data were collected from 65 items in completely randomized designs, randomized block designs, Latin square designs, split-plot designs, factorial designs and nested designs. A quantitative approach was used to investigate the quality of the learning media for statistical analysis using R programming via the internet, drawing on six experts and the opinions of 100 students interested in experimental design and applied statistics. It was revealed that the experts' opinions were good on all contents except the usage of the web board, and the students' opinions were good overall and on all items.
Keywords: experimental design, R programming, applied statistics, statistical analysis
Procedia PDF Downloads 369
41490 Survey on Big Data Stream Classification by Decision Tree
Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi
Abstract:
Nowadays, the development of computer technology and its recent applications provide access to new types of data which had not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey of the obstacles and requirements of classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.
Keywords: big data, data streams, classification, decision tree
Procedia PDF Downloads 522
41489 Robust and Dedicated Hybrid Cloud Approach for Secure Authorized Deduplication
Authors: Aishwarya Shekhar, Himanshu Sharma
Abstract:
Data deduplication is one of the important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. In this process, duplicate data is expunged, leaving only one copy, i.e. a single instance, of the data to be stored, though an index of each piece of data is still maintained. Data deduplication is an approach for minimizing the storage space an organization needs to retain its data. In most companies, the storage systems carry identical copies of numerous pieces of data; deduplication eliminates these additional copies by saving just one copy of the data and replacing the other copies with pointers that lead back to the primary copy. To avoid this duplication of data and to preserve confidentiality in the cloud, we apply the concept of a hybrid cloud: a fusion of at least one public and one private cloud. As a proof of concept, we implement Java code which provides security as well as removing all types of duplicated data from the cloud.
Keywords: confidentiality, deduplication, data compression, hybridity of cloud
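The save-one-copy-and-keep-pointers idea can be sketched with a content-addressed store; this toy omits the paper's hybrid-cloud split and authorization layer, and keys blocks by SHA-256 digest purely for illustration.

```python
import hashlib

class DedupStore:
    """Single-instance store: identical blobs are kept once; duplicates
    become pointers (digests) back to the primary copy."""

    def __init__(self):
        self.blocks = {}   # digest -> data (one copy per distinct content)
        self.index = {}    # name   -> digest (pointer, always kept)

    def put(self, name, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)   # store only the first copy
        self.index[name] = digest              # every name keeps a pointer

    def get(self, name) -> bytes:
        return self.blocks[self.index[name]]

store = DedupStore()
store.put("a.txt", b"same payload")
store.put("b.txt", b"same payload")   # duplicate: pointer only, no new block
```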
Procedia PDF Downloads 384
41488 Periodicity Analysis of Long-Term Water Quality Data Series of the Hungarian Section of the River Tisza Using Morlet Wavelet Spectrum Estimation
Authors: Péter Tanos, József Kovács, Angéla Anda, Gábor Várbíró, Sándor Molnár, István Gábor Hatvani
Abstract:
The River Tisza is the second largest river in Central Europe. In this study, Morlet wavelet spectrum (periodicity) analysis was applied to chemical, biological and physical water quality data for the Hungarian section of the River Tisza. In the research, 15 water quality parameters measured at 14 sampling sites on the River Tisza and 4 sampling sites on its main artificial channels were assessed for the period 1993-2005. Results show that annual periodicity was not always to be found in the water quality parameters, at least at certain sampling sites. Periodicity was found to vary over space and time, but in general an increase was observed, accompanied by higher trophic states of the river heading downstream.
Keywords: annual periodicity, water quality, spatiotemporal variability of periodic behavior, Morlet wavelet spectrum analysis, River Tisza
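A Morlet wavelet periodicity check of the kind described can be sketched with plain NumPy; the scale-to-period conversion uses the common w0 = 6 convention, and the monthly test series is synthetic, not the Tisza data.

```python
import numpy as np

def morlet_power(x, periods, w0=6.0):
    """Mean wavelet power of series x at each target period, via direct
    convolution with complex Morlet wavelets (naive, unnormalized sketch)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    powers = []
    for p in periods:
        s = w0 * p / (2 * np.pi)              # scale whose centre period is ~p
        t = np.arange(-int(4 * s), int(4 * s) + 1, dtype=float)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        coef = np.convolve(x, np.conj(wavelet), mode="same")
        powers.append(np.mean(np.abs(coef) ** 2))
    return np.array(powers)

# Synthetic monthly series with a 12-month (annual) cycle plus noise.
months = np.arange(240)                       # 20 years of monthly samples
series = np.sin(2 * np.pi * months / 12) + 0.2 * np.random.default_rng(0).normal(size=240)
power = morlet_power(series, periods=[6, 12, 24])
```

On such a series, the power should peak at the 12-month period, which is the kind of annual signal the study looked for at each sampling site.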
Procedia PDF Downloads 345
41487 Big Data Analysis Approach for Comparing New York Taxi Drivers' Operation Patterns between Workdays and Weekends, Focusing on the Revenue Aspect
Authors: Yongqi Dong, Zuo Zhang, Rui Fu, Li Li
Abstract:
The records generated by taxicabs equipped with GPS devices are of vital importance for studying human mobility behavior; here, however, we focus on taxi drivers' operation strategies on workdays versus weekends, temporally and spatially. We identify a group of valuable characteristics from large-scale driver behavior in a complex metropolitan environment. Based on the daily operations of 31,000 taxi drivers in New York City, we classify drivers into top, ordinary and low-income groups according to their monthly working load, daily income, daily ranking and the variance of the daily rank. Then, we apply big data analysis and visualization methods to compare the different characteristics among top, ordinary and low-income drivers in the selection of working time and working area, as well as strategies on workdays versus weekends. The results verify that top drivers do have special operation tactics that help them serve more passengers and travel faster, and thus make more money per unit time. This research opens new possibilities for fully utilizing the information obtained from urban taxicab data to estimate human behavior, which is useful not only to individual taxicab drivers but also to policy-makers in city authorities.
Keywords: big data, operation strategies, comparison, revenue, temporal, spatial
Procedia PDF Downloads 227
41486 Principal Component Analysis of Body Weight and Morphometric Traits of New Zealand Rabbits Raised under Semi-Arid Conditions in Nigeria
Authors: Emmanuel Abayomi Rotimi
Abstract:
Context: Rabbit production plays an important role in increasing the animal protein supply in Nigeria, providing a cheap, affordable and healthy source of meat. The growth of animals involves an increase in body weight, which can change the conformation of various parts of the body. Live weight and linear measurements are indicators of growth rate in rabbits and other farm animals. Aims: This study aimed to define the body dimensions of New Zealand rabbits and to investigate the morphometric trait variables that contribute to body conformation by the use of principal component analysis (PCA). Methods: Data were obtained from 80 New Zealand rabbits (40 bucks and 40 does) raised at the Livestock Teaching and Research Farm, Federal University Dutsinma. Data were taken on body weight (BWT), body length (BL), ear length (EL), tail length (TL), heart girth (HG) and abdominal circumference (AC), and were subjected to multivariate analysis using the SPSS 20.0 statistical package. Key results: The descriptive statistics showed that mean BWT, BL, EL, TL, HG and AC were 0.91 kg, 27.34 cm, 10.24 cm, 8.35 cm, 19.55 cm and 21.30 cm, respectively. Sex showed a significant (P<0.05) effect on all the variables examined, with higher values recorded for does. The phenotypic correlation coefficients (r) between the morphometric traits were all positive and ranged from r = 0.406 (between EL and BL) to r = 0.909 (between AC and HG). HG was the trait most correlated with BWT (r = 0.786). Principal component analysis with variance-maximizing orthogonal rotation was used to extract the components. Two principal components (PCs) from the factor analysis of morphometric traits explained about 80.42% of the total variance: PC1, on which three variables representing body conformation loaded highest, accounted for 64.46%, and PC2 for 15.97%. PC1 is therefore regarded as the body conformation component. Conclusions: This component could be used as a selection criterion for improving the body weight of rabbits.
Keywords: conformation, multicollinearity, multivariate, rabbits, principal component analysis
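The PCA workflow summarized above (correlated traits, eigen-decomposition of the correlation matrix, variance explained per component) can be sketched as follows; the trait matrix is simulated around a latent "body size" factor and is not the paper's rabbit data.

```python
import numpy as np

# Illustrative morphometric matrix for 80 animals (columns standing in
# for BL, EL, TL, HG, AC), built from correlated random data.
rng = np.random.default_rng(1)
size = rng.normal(0.0, 1.0, 80)              # latent "body size" factor
noise_sds = (0.4, 0.9, 0.9, 0.3, 0.3)        # per-trait measurement noise
traits = np.column_stack([size + rng.normal(0.0, s, 80) for s in noise_sds])

# PCA via the correlation matrix: its eigenvalues give the variance
# explained by each principal component (largest first).
corr = np.corrcoef(traits, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]     # descending order
explained = eigvals / eigvals.sum()
```

With strongly intercorrelated traits, the first component dominates, which is the pattern behind PC1 explaining 64.46% of the variance in the study.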
Procedia PDF Downloads 130
41485 A Review of Machine Learning for Big Data
Authors: Devatha Kalyan Kumar, Aravindraj D., Sadathulla A.
Abstract:
Big data are now rapidly expanding in engineering, science and many other domains. The potential of large or massive data is undoubtedly significant, requiring new ways of thinking and new learning techniques to address the various big data challenges. Machine learning is continuously unleashing its power in a wide range of applications. This paper surveys the latest advances in research on machine learning for big data processing. First, it reviews the machine learning techniques of recent studies, such as deep learning, representation learning, transfer learning, active learning, and distributed and parallel learning. It then focuses on the challenges and possible solutions of machine learning for big data.
Keywords: active learning, big data, deep learning, machine learning
Procedia PDF Downloads 446
41484 Strengthening Legal Protection of Personal Data through Technical Protection Regulation in Line with Human Rights
Authors: Tomy Prihananto, Damar Apri Sudarmadi
Abstract:
Indonesia recognizes the right to privacy as a human right and provides legal protection for data management activities, because the protection of personal data is a part of human rights. This paper aims to describe the arrangement of data management in Indonesia. It is descriptive research with a qualitative approach, collecting data through a literature study. The result is a comprehensive arrangement of data protection that has been set up as a technical requirement, using encryption methods. The arrangements on encryption and on the protection of personal data are mutually reinforcing in protecting personal data. Indonesia has two important and immediately enacted laws that provide protection for the privacy of information, which is part of human rights.
Keywords: Indonesia, protection, personal data, privacy, human rights, encryption
Procedia PDF Downloads 183
41483 Use of Cloud Computing and Smart Devices in Healthcare
Authors: Nikunj Agarwal, M. P. Sebastian
Abstract:
Cloud computing can reduce the start-up expenses of implementing EHR (Electronic Health Records). However, many healthcare institutions are yet to implement cloud computing due to the associated privacy and security issues. In this paper, we analyze the challenges and opportunities of implementing cloud computing in healthcare. We also analyze data from over 5000 US hospitals that use telemedicine applications. This analysis helps to understand the importance of smartphones over desktop systems in different departments of healthcare institutions. The wide usage of smartphones and cloud computing allows ubiquitous and affordable access to health data by authorized persons, including patients and doctors. Cloud computing will prove beneficial to a majority of departments in healthcare. Through this analysis, we attempt to understand the different healthcare departments that may benefit significantly from the implementation of cloud computing.
Keywords: cloud computing, smart devices, healthcare, telemedicine
Procedia PDF Downloads 397
41482 Malaysian Students' Identity in Seminars by Observing, Interviewing and Conducting Focus Group Discussions
Authors: Zurina Khairuddin
Abstract:
The objective of this study is to explore the identities constructed and negotiated by Malaysian students in the UK and Malaysia when they interact in seminars. The study utilised classroom observation, interviews and focus group discussions to collect the data. The participants are first-year Malaysian students studying in the UK and Malaysia. The data collected were analysed using a combination of Conversation Analysis and an analytical framework. This study postulates that Malaysian students in the UK construct and negotiate flexible and different identities depending on the contexts they are in. It also shows that most Malaysian students in the UK and Malaysia are similar in the identities they construct and negotiate. This study suggests implications and recommendations for Malaysian students in the UK and Malaysia, and for other stakeholders such as the UK and Malaysian academic communities.
Keywords: conversation analysis, interaction patterns, Malaysian students, students' identity
Procedia PDF Downloads 184
41481 Urban Land Use Type Analysis Based on Land Subsidence Areas Using X-Band Satellite Images of Jakarta Metropolitan City, Indonesia
Authors: Ratih Fitria Putri, Josaphat Tetuko Sri Sumantyo, Hiroaki Kuze
Abstract:
Jakarta Metropolitan City is located on the northwest coast of West Java province, at geographical coordinates between 106º33’00”-107º00’00”E longitude and 5º48’30”-6º24’00”S latitude. The Jakarta urban area has suffered from land subsidence in several land use types, such as trading, industry and settlement areas. Land subsidence hazard is one of the consequences of urban development in Jakarta; it is caused by intensive groundwater extraction and land use mismanagement. Geologically, the Jakarta urban area is mostly dominated by alluvial fan sediment. The objective of this research is to analyze Jakarta urban land use types within land subsidence zones. Producing safer land use and settlements in the land subsidence areas is very important, and the spatial distribution of detected land subsidence is a necessary tool for land use management planning. For this purpose, the Differential Synthetic Aperture Radar Interferometry (DInSAR) method is used. DInSAR is complementary to ground-based methods such as leveling and global positioning system (GPS) measurements, yielding information over a wide coverage area even when the area is inaccessible. The data were fine-tuned using X-band satellite image data from 2010 to 2013 and land use mapping data. Our analysis shows that land subsidence occurred in the northern part of Jakarta Metropolitan City, varying from 7.5 to 17.5 cm/year in industrial and settlement land use areas.
Keywords: land use analysis, land subsidence mapping, urban area, X-band satellite image
Procedia PDF Downloads 277
41480 Authentication and Legal Admissibility of 'Computer Evidence from Electronic Voting Machines' in Electoral Litigation: A Qualitative Legal Analysis of Judicial Opinions of Appellate Courts in the USA
Authors: Felix O. Omosele
Abstract:
Several studies have established that electronic voting machines are prone to multi-faceted challenges, one of which is their capacity to lose votes after the ballots have been cast. The international consensus therefore appears to favour electronic voting machines that are accompanied by a voter-verified paper audit trail (VVPAT). At present, there is no known study that has evaluated the impact (or otherwise) of this verification and auditing on the authentication, admissibility and evidential weight of electronically obtained electoral data. This legal inquiry is important, as elections are sometimes won or lost in courts on the basis of such data. This gap will be filled by the present research. Using the United States of America as a case study, this paper employed a qualitative legal analysis of several of its appellate courts' judicial opinions. The analysis also unearths the statutory rules and regulations relevant to the research problem. The objective of the research is to highlight the roles played by VVPAT in electoral evidence, as seen through the eyes of the courts. The preliminary outcome of this qualitative analysis shows that the admissibility of, and weight attached to, 'computer evidence from e-voting machines' (CEEM) are often assessed under the general standards applied to other computer-stored evidence. These standards sometimes fail to embrace the peculiar challenges faced by CEEM, particularly with respect to tabulation and transmission. This paper therefore argues that CEEM should be accorded unique consideration by courts and proposes the development of a legal standard which recognises verification and auditing as 'weight enhancers' for electronically obtained electoral data.
Keywords: admissibility of computer evidence, electronic voting, qualitative legal analysis, voting machines in the USA
Procedia PDF Downloads 197
41479 Time Series Regression with Meta-Clusters
Authors: Monika Chuchro
Abstract:
This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. Here, clustering was performed to obtain subgroups of time series data with normal distributions from the inflow data of a waste water treatment plant, which is composed of several groups differing by mean value. Two simple algorithms, K-means and EM, were chosen as clustering methods, with the Rand index used to measure similarity. After this simple meta-clustering, a regression model was fitted for each subgroup, and the final model was the sum of the subgroup models. The quality of the obtained model was compared with that of a regression model built with the same explanatory variables but with no clustering of the data. Results were compared by the coefficient of determination (R2), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. Preliminary results show the potential of the presented technique.
Keywords: clustering, data analysis, data mining, predictive models
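A compact sketch of the meta-clustering idea: cluster the series into mean-separated subgroups, fit a regression per subgroup, and score the combined prediction with MAPE. The data are synthetic stand-ins for the treatment plant inflow, and only the K-means variant is shown (the paper also used EM).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Synthetic inflow-like series: two regimes differing by mean value,
# each with a common linear trend (stand-in for the plant data).
rng = np.random.default_rng(0)
t = np.arange(300, dtype=float).reshape(-1, 1)
regime = (np.sin(t[:, 0] / 25) > 0).astype(int)
y = 10 + 40 * regime + 0.02 * t[:, 0] + rng.normal(0, 1, 300)

# Meta-clustering step: split observations into subgroups by K-means on
# the response, then fit one regression model per subgroup.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))
pred = np.empty_like(y)
for k in (0, 1):
    mask = labels == k
    pred[mask] = LinearRegression().fit(t[mask], y[mask]).predict(t[mask])

mape = np.mean(np.abs((y - pred) / y)) * 100   # prediction accuracy measure
```

Because each subgroup is roughly normal around its own mean, the per-cluster fits track the data far better than a single global regression would on such mixed-mean data.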
Procedia PDF Downloads 466
41478 Troubleshooting Petroleum Equipment Using Wireless Sensors Based on a Bayesian Algorithm
Authors: Vahid Bayrami Rad
Abstract:
In this research, common methods and techniques are investigated, with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted, with the help of data mining algorithms, from the countless data generated is an unavoidable way to speed up monitoring and troubleshooting operations in today's big oil companies. Therefore, by comparing data mining algorithms and checking their efficiency, structure and response under different conditions, the proposed Bayesian algorithm, using data clustering together with data analysis and evaluation via a colored Petri net, provides an applicable and dynamic model from the point of view of reliability and response time. Using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs and human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability and speed compared to previous methods.
Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, RapidMiner, data mining, reliability
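The Bayesian classification step can be illustrated with a naive Bayes classifier on hypothetical sensor readings; the sensor names, the numbers and the leak scenario are invented, and the colored Petri net evaluation stage is not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical wireless-sensor readings (pressure, flow): class 1 marks
# a leak-like fault with low pressure and elevated flow. Purely synthetic.
rng = np.random.default_rng(0)
normal = rng.normal([50.0, 10.0], 1.0, size=(200, 2))
fault = rng.normal([35.0, 14.0], 1.0, size=(200, 2))
X = np.vstack([normal, fault])
y = np.array([0] * 200 + [1] * 200)

# Gaussian naive Bayes: a simple Bayesian classifier over sensor data.
clf = GaussianNB().fit(X, y)
alarm = clf.predict([[36.0, 13.5]])   # new reading: low pressure, high flow
```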
Procedia PDF Downloads 67
41477 Analysis of Radial Pulse Using Nadi-Parikshan Yantra
Authors: Ashok E. Kalange
Abstract:
Diagnosis according to Ayurveda is to find the root cause of a disease. Out of the eight different kinds of examinations, Nadi-Pariksha (pulse examination) is important. Nadi-Pariksha is done at the root of the thumb by examining the radial artery using three fingers. Ancient Ayurveda identifies health status by observing the wrist pulses in terms of 'Vata', 'Pitta' and 'Kapha', collectively called tridosha, as the basic elements of the human body and in their combinations. Diagnosis by traditional pulse analysis, Nadi-Pariksha, requires long experience in pulse examination and a high level of skill, and the interpretation tends to be subjective, depending on the expertise of the practitioner. The present work is part of the effort to make Nadi-Parikshan objective. A Nadi Parikshan Yantra (three-point pulse examination system) was developed in our laboratory using three pressure sensors (one each for the Vata, Pitta and Kapha points on the radial artery), and radial pulse data were collected from a large number of subjects. The data are analyzed on the basis of the relative amplitudes of the three point pulses as well as in the frequency and time domains. The same subjects were examined by an Ayurvedic physician (Nadi Vaidya), and the dominant Dosha (Vata, Pitta or Kapha) was identified. The results are discussed in detail in the paper.
Keywords: Nadi Parikshan Yantra, Tridosha, Nadi Pariksha, human pulse data analysis
Procedia PDF Downloads 190
41476 The Importance of Including All Data in a Linear Model for the Analysis of RNAseq Data
Authors: Roxane A. Legaie, Kjiana E. Schwab, Caroline E. Gargett
Abstract:
Studies looking at changes in gene expression from RNAseq data often make use of linear models. It is also common practice to focus on a subset of data for a comparison of interest, leaving aside the samples not involved in that particular comparison. This work shows the importance of including all observations in the modeling process to better estimate variance parameters, even when the samples included are not directly used in the comparison under test. The human endometrium is a dynamic tissue which undergoes cycles of growth and regression with each menstrual cycle. The mesenchymal stem cells (MSCs) present in the endometrium are likely responsible for this remarkable regenerative capacity. However, recent studies suggest that MSCs also play a role in the pathogenesis of endometriosis, one of the most common medical conditions affecting the lower abdomen in women, in which endometrial tissue grows outside the womb. In this study we compared RNAseq gene expression profiles between MSCs and non-stem cell counterparts ('non-MSC') obtained from women with ('E') or without ('noE') endometriosis. Raw read counts were used for differential expression analysis in a linear model with the limma-voom R package, including either all samples in the study or only the samples belonging to the subset of interest (e.g. for the comparison 'E vs noE in MSC cells', including only MSC samples from E and noE patients but not the non-MSC ones). Using the full dataset, we identified about 100 differentially expressed (DE) genes between E and noE samples among MSC samples (adj. p-val < 0.05 and |logFC| > 1), while only 9 DE genes were identified when using only the subset of data (MSC samples only). Important genes known to be involved in endometriosis, such as KLF9 and RND3, were missed in the latter case. When looking at the MSC vs non-MSC comparison, the linear model including all samples identified 260 genes for noE samples (including the stem cell marker SUSD2), while the subset analysis did not identify any DE genes. For E samples, 12 genes were identified with the first approach and only 1 with the subset approach. Although the stem cell marker RGS5 was found in both cases, the subset test missed important genes involved in stem cell differentiation, such as NOTCH3, and other potentially related genes to be used for further investigation and pathway analysis.
Keywords: differential expression, endometriosis, linear model, RNAseq
Procedia PDF Downloads 432
41475 Development of a Data Security Model Using Steganography
Authors: Terungwa Simon Yange, Agana Moses A.
Abstract:
This paper studied steganography and designed a simplistic approach to a steganographic tool for hiding information in image files, with the view of addressing security challenges by hiding data from unauthorized users. The Structured Systems Analysis and Design Method (SSADM) was used in this work. The system was developed using Java Development Kit (JDK) 1.7.0_10 with MySQL Server as its backend. The system was tested with some hypothetical health records, which proved the possibility of protecting data from unauthorized users by making it secret, so that its existence cannot be easily recognized by fraudulent users. It further strengthens the confidentiality of patient records kept by medical practitioners in the health setting. In conclusion, this work produced a user-friendly steganography tool that is fast to install and easy to operate, ensuring the privacy and secrecy of sensitive data. The image carrying the secret message appeared to be an exact copy of the original image when the two were compared with each other.
Keywords: steganography, cryptography, encryption, decryption, secrecy
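A common way such tools hide data in image files is least-significant-bit (LSB) embedding; the paper does not specify its embedding scheme, so this NumPy sketch is a generic illustration without the length header, key or file-format handling a real tool would need.

```python
import numpy as np

def hide(img: np.ndarray, message: bytes) -> np.ndarray:
    """Hide bytes in the least significant bits of an 8-bit image array.
    Simplified sketch: no length header, encryption or capacity check."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    out = img.flatten().copy()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits   # overwrite LSBs
    return out.reshape(img.shape)

def reveal(img: np.ndarray, n_bytes: int) -> bytes:
    """Read back n_bytes hidden by hide()."""
    bits = (img.flatten()[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = hide(cover, b"patient record")
```

Because each pixel changes by at most one intensity level, the stego image is visually indistinguishable from the cover, which is the "exact copy" effect the paper reports.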
Procedia PDF Downloads 267
41474 Time Series Analysis the Case of China and USA Trade Examining during Covid-19 Trade Enormity of Abnormal Pricing with the Exchange rate
Authors: Md. Mahadi Hasan Sany, Mumenunnessa Keya, Sharun Khushbu, Sheikh Abujar
Abstract:
Since the beginning of China's economic reform, trade between the U.S. and China has grown rapidly, and it has increased further since China's accession to the World Trade Organization in 2001. The U.S. imports more from China than it exports to it; the trade deficit narrowed during the 2019 trade war between China and the U.S., but in 2020 the opposite happened. In international and U.S. trade policy, Washington launched a full-scale trade war against China in March 2016, which was later followed by a catastrophic epidemic. The main goal of our study is to measure and predict trade relations between China and the U.S. before and after the arrival of the COVID epidemic. A conventional ML model takes various data as input but lacks the time dimension present in time series models, and it can only predict the future from previously observed data. The LSTM (a well-known recurrent neural network) model is applied as the best time series model for trade forecasting. We built a sustainable forecasting system for trade between China and the U.S. by closely monitoring a dataset published by the state website NZ Tatauranga Aotearoa, covering January 1, 2015, to April 30, 2021. Throughout the study, we provide a 180-day forecast outlining what would happen to trade between China and the U.S. during COVID-19. In addition, we show that the LSTM model yields better outcomes in time series data analysis than RFR and SVR (both ML models). The study examines how the current COVID outbreak affects China-U.S. trade. As a comparative measure, RMSE is calculated for LSTM, RFR, and SVR. From our time series analysis, it can be said that the LSTM model gives very favorable results for the future export situation in China-U.S. trade.
Keywords: RFR, China-U.S. trade war, SVR, LSTM, deep learning, Covid-19, export value, forecasting, time series analysis
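The LSTM itself requires a deep learning framework; the step that gives a time series model its "time dimension", though — slicing the observed series into overlapping input windows with a one-step-ahead target — can be sketched in plain Python (the function name is ours):

```python
def make_windows(series, window):
    # Turn a univariate series into supervised (input, target) pairs:
    # each window of `window` past values predicts the next value.
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y
```

With `series = [10, 20, 30, 40, 50]` and `window = 3`, the pairs are `([10, 20, 30], 40)` and `([20, 30, 40], 50)`. Such pairs are what an LSTM (or an RFR/SVR baseline) is trained on, and a multi-day forecast like the 180-day horizon in this study is produced by repeatedly appending each prediction to the rolling window.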
Procedia PDF Downloads 198
41473 In-Depth Analysis of Involved Factors to Car-Motorcycle Accidents in Budapest City
Authors: Danish Farooq, Janos Juhasz
Abstract:
Car-motorcycle accidents have increased in recent years, causing mainly rider fatalities and serious injuries. In-depth crash investigation methods aim to identify the main factors likely involved in fatal road accidents and injury outcomes. The main objective of this study is to investigate the factors involved in car-motorcycle accidents in Budapest city. The procedure included statistical analysis and data sampling to identify car-motorcycle accidents by dominant accident types based on collision configurations. Police reports were used as the data source for the specified accidents, and simulation models were plotted to scale (M 1:200). Car-motorcycle accidents were simulated in Virtual Crash software for the 5 seconds before the collision. The simulation results showed that the main factors involved in car-motorcycle accidents were human behavior and view obstructions. The comprehensive in-depth analysis also found that most of the car drivers and riders were unable to perform collision avoidance manoeuvres before the collision. This study can help traffic safety authorities focus on the identified factors to address road safety issues in car-motorcycle accidents. The study also proposes safety measures to improve safe movement among road users.
Keywords: car-motorcycle accidents, in-depth analysis, microscopic simulation, safety measures
Procedia PDF Downloads 151
41472 Comparative Study between Herzberg’s and Maslow’s Theories in Maritime Transport Education
Authors: Nermin Mahmoud Gohar, Aisha Tarek Noour
Abstract:
Learner satisfaction has been a vital field of interest in the literature. Accordingly, this paper explores the reasons behind individual differences in motivation and satisfaction. The study examines the effect of both Herzberg’s and Maslow’s theories on learner satisfaction. A self-administered questionnaire was used to collect data from learners who were geographically widely spread around the College of Maritime Transport and Technology (CMTT) at the Arab Academy for Science, Technology and Maritime Transport (AAST&MT) in Egypt. One hundred and fifty undergraduates responded to the questionnaire survey. Respondents were drawn from two branches, in Alexandria and Port Said. Data were analyzed using SPSS 22 and AMOS 18. Factor analysis was used to identify the dimensions under study, as posited by Herzberg’s and Maslow’s theories. In addition, regression analysis and structural equation modeling were applied to assess the effect of the above-mentioned theories on maritime transport learners’ satisfaction. A limitation of this study is that it was restricted to the available number of learners in the CMTT, owing to the relatively small population in this field.
Keywords: motivation, satisfaction, needs, education, Herzberg’s and Maslow’s theories
Procedia PDF Downloads 436
41471 The Various Legal Dimensions of Genomic Data
Authors: Amy Gooden
Abstract:
When human genomic data is considered, this is often done through only one dimension of the law, or the interplay between the various dimensions is not considered, thus providing an incomplete picture of the legal framework. This research considers and analyzes the various dimensions in South African law applicable to genomic sequence data – including property rights, personality rights, and intellectual property rights. The effective use of personal genomic sequence data requires the acknowledgement and harmonization of the rights applicable to such data.
Keywords: artificial intelligence, data, law, genomics, rights
Procedia PDF Downloads 140
41470 Variance-Aware Routing and Authentication Scheme for Harvesting Data in Cloud-Centric Wireless Sensor Networks
Authors: Olakanmi Oladayo Olufemi, Bamifewe Olusegun James, Badmus Yaya Opeyemi, Adegoke Kayode
Abstract:
Wireless sensor networks (WSNs) have made a significant contribution to the emergence of various intelligent services and cloud-based applications. Most of the time, the harvested data are stored on a cloud platform for efficient management and sharing among different services or users. However, the sensitivity of the data makes them prone to various confidentiality and performance-related attacks during and after harvesting. Various security schemes have been developed to ensure the integrity and confidentiality of WSN data. However, their specificity towards particular attacks, together with the resource constraints and heterogeneity of WSNs, makes most of these schemes imperfect. In this paper, we propose a secure variance-aware routing and authentication scheme with two-tier verification to collect, share, and manage WSN data. The scheme is capable of classifying a WSN into different subnets, detecting any attempt at a wormhole or black hole attack during harvesting, and enforcing access control on the harvested data stored in the cloud. The results of the analysis showed that the proposed scheme has more security functionality than related schemes, solves most WSN and cloud security issues, prevents wormhole and black hole attacks, identifies attackers during data harvesting, and enforces access control on the harvested data stored in the cloud at low computational, storage, and communication overheads.
Keywords: data block, heterogeneous IoT network, data harvesting, wormhole attack, black hole attack, access control
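The scheme's exact verification protocol is not specified in the abstract; as a generic illustration of one tier — authenticating each harvested data block so that tampered uploads can be rejected — a keyed-hash check with Python's standard `hmac` module might look as follows (the node-ID framing is our assumption, not the paper's design):

```python
import hashlib
import hmac

def tag_block(key: bytes, node_id: str, payload: bytes) -> str:
    # Tag a harvested data block with an HMAC bound to the sensor node.
    msg = node_id.encode() + b"|" + payload
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_block(key: bytes, node_id: str, payload: bytes, tag: str) -> bool:
    # Constant-time comparison avoids leaking tag prefixes to an attacker.
    return hmac.compare_digest(tag_block(key, node_id, payload), tag)
```

Binding the tag to both the node identity and the payload means a block replayed under a different node ID, or with altered readings, fails verification — the kind of check that helps identify attackers during harvesting.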
Procedia PDF Downloads 85
41469 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and the topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution over hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables within the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. The hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by its log likelihood for the holdout data. The performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model – which is better than that of latent Dirichlet allocation – is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine, and the deep belief net. The hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better-performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing this research with appropriate extensions. Including predictors, especially marketing variables such as price, seems an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of product categories.
Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
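The compared models are beyond the scope of an abstract, but the evaluation protocol — random half split, score each model by its holdout log likelihood — can be illustrated with a deliberately simple independence baseline (per-category Bernoulli with Laplace smoothing; this is our toy stand-in, not one of the paper's models):

```python
import math

def holdout_loglik(train, holdout):
    # train/holdout: lists of binary baskets, one 0/1 entry per category.
    n, k = len(train), len(train[0])
    # Smoothed purchase probability for each category, estimated on train.
    p = [(sum(b[j] for b in train) + 1) / (n + 2) for j in range(k)]
    ll = 0.0
    for basket in holdout:
        for j, x in enumerate(basket):
            ll += math.log(p[j] if x else 1.0 - p[j])
    return ll
```

A model that captures cross-category dependence (factor analysis, an RBM, a DBN) should beat this baseline's holdout log likelihood; the gaps quoted in the abstract (more than 23,000 and 25,000 log-likelihood units) are differences measured on exactly this scale.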
Procedia PDF Downloads 202
41468 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads
Authors: Salah R. Al Zaidee, Ali S. Mahdi
Abstract:
Except for simple problems involving statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are required within each search loop. With such implicit functions, the structural engineer is usually forced to write his or her own computer code for analysis, design, and searching for the optimum among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and optimum searching. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum search process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum-searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and the details of different connections are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated from the analysis and assessment of many design proposals with CSI SAP software. These data were then used in SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with coefficients of determination, R², in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
Keywords: meta-model, objective function, steel frames, seismic analysis, design
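The pure quadratic regression at the heart of the meta-model is fitted in SPSS in the paper; a one-variable sketch of the same idea — a least-squares fit of y = a + b·x + c·x² via the normal equations — can be written in plain Python (our own toy implementation, not the paper's multi-variable model):

```python
def fit_quadratic(xs, ys):
    # Least-squares fit of y = a + b*x + c*x**2 (normal equations,
    # Gaussian elimination with partial pivoting). Returns [a, b, c].
    A = [[1.0, x, x * x] for x in xs]
    ATA = [[sum(row[i] * row[j] for row in A) for j in range(3)] for i in range(3)]
    ATy = [sum(row[i] * y for row, y in zip(A, ys)) for i in range(3)]
    M = [r[:] + [b] for r, b in zip(ATA, ATy)]  # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        beta[r] = (M[r][3] - sum(M[r][c] * beta[c] for c in range(r + 1, 3))) / M[r][r]
    return beta
```

Once such an explicit surrogate has been fitted to (design, objective) pairs produced by the analysis software, any off-the-shelf optimizer can search it without re-running a structural analysis inside the loop — which is exactly the decoupling the meta-model provides.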
Procedia PDF Downloads 245
41467 Comparative Analysis of Effecting Factors on Fertility by Birth Order: A Hierarchical Approach
Authors: Ali Hesari, Arezoo Esmaeeli
Abstract:
Given the dramatic changes in fertility and higher-order births in Iran during recent decades, knowledge of the factors affecting different birth orders is of crucial importance. Because much social science data has a hierarchical structure, with variables at different levels shaping social phenomena, this study explores the determinants of the different birth orders recorded in the 365 days preceding the 1390 census using a multilevel approach. In this paper, a 2% sample of individual-level data from the 1390 census is analyzed with the HLM software. Three different hierarchical linear regression models are estimated, for the first and second, the third, and the fourth and higher birth orders. The results display different outcomes for the three models. The individual-level variables entered in the equations are region of residence (rural/urban), age, educational level, and labor force participation status; the province-level variable is GDP per capita. The results show that the individual-level variables have different effects across the three models, and at the second level the models have different random and fixed effects.
Keywords: fertility, birth order, hierarchical approach, fixed effects, random effects
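A first diagnostic behind any multilevel model — how much outcome variance lies between groups (here, provinces) versus within them — can be sketched in plain Python (a toy decomposition, not the HLM estimation itself):

```python
def variance_components(groups):
    # Decompose total variance into within-group and between-group parts.
    # `groups`: list of lists, one inner list of outcomes per group.
    all_x = [x for g in groups for x in g]
    n = len(all_x)
    grand = sum(all_x) / n
    means = [sum(g) / len(g) for g in groups]
    within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / n
    between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means)) / n
    return within, between
```

The two parts always sum to the total variance of the pooled data; a large between-group share signals that a group-level predictor such as GDP per capita belongs in the model, which is the motivation for adding a second level at all.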
Procedia PDF Downloads 339