Search results for: data security assurance
25210 Urban Transport Demand Management Multi-Criteria Decision Using AHP and SERVQUAL Models: Case Study of Nigerian Cities
Authors: Suleiman Hassan Otuoze, Dexter Vernon Lloyd Hunt, Ian Jefferson
Abstract:
Urbanization has continued to widen the gap between demand and the resources available to provide resilient and sustainable transport services in many fast-growing cities of developing countries. Transport demand management is a decision-based optimization concept for both benchmarking and ensuring the efficient use of transport resources. This study assesses the service quality of infrastructure and mobility services in the Nigerian cities of Kano and Lagos through five dimensions of quality (i.e., Tangibility, Reliability, Responsibility, Safety Assurance and Empathy). The methodology adopts a hybrid AHP-SERVQUAL model applied to questionnaire surveys to gauge user satisfaction and the views of experts in the field. The AHP results prioritize tangibility, which defines the state of transportation infrastructure and services, in terms of satisfaction qualities and intervention decision weights in the two cities. The results recorded ‘unsatisfactory’ quality-of-performance indices, with satisfaction ratings of 48% and 49% for Kano and Lagos, respectively. These satisfaction indices are indicators of the low performance of transportation demand management (TDM) measures and of the need to re-order priorities and take proactive steps towards infrastructure provision. The findings pilot a framework for the comparative assessment of recognized standards in transport services, best management practice and the quality infrastructure needed to guarantee both resilient and sustainable urban mobility.
Keywords: transportation demand management, multi-criteria decision support, transport infrastructure, service quality, sustainable transport
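To make the AHP weighting step concrete, the following is a minimal sketch of how priority weights and a consistency check are derived from a pairwise-comparison matrix; the matrix values here are hypothetical illustrations, not the study's survey data.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the five quality dimensions
# (tangibility, reliability, responsibility, safety assurance, empathy).
# Entry A[i, j] is how strongly dimension i is preferred over dimension j
# on Saaty's 1-9 scale; these values are illustrative only.
A = np.array([
    [1,   3,   5,   3,   7],
    [1/3, 1,   3,   1,   5],
    [1/5, 1/3, 1,   1/3, 3],
    [1/3, 1,   3,   1,   5],
    [1/7, 1/5, 1/3, 1/5, 1],
])

# Priority weights = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency ratio checks that the judgments are coherent (CR < 0.1 is
# the usual threshold); RI = 1.12 is Saaty's random index for n = 5.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 1.12
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```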
Procedia PDF Downloads 224
25209 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach
Authors: Sarisa Pinkham, Kanyarat Bussaban
Abstract:
The research aims to approximate the amount of daily rainfall using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
Keywords: daily rainfall, image processing, approximation, pixel value data
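As an illustration of the evaluation step, the sketch below computes the RMSE between rainfall amounts decoded from map pixel values and gauge observations; the pixel-to-rainfall mapping and all numbers are assumptions for demonstration, not the paper's calibration.

```python
import numpy as np

# Hypothetical grey-level-to-rainfall lookup: each pixel value on the daily
# rainfall map is assumed to encode a rainfall amount (mm) on a linear scale.
def pixel_to_rainfall(pixel, scale=0.5, offset=0.0):
    return offset + scale * pixel

# Illustrative pixel values read at gauge locations, and gauge observations.
pixels   = np.array([12, 40, 7, 55, 23], dtype=float)
observed = np.array([6.1, 19.4, 3.8, 28.0, 11.2])   # mm, hypothetical

estimated = pixel_to_rainfall(pixels)
rmse = np.sqrt(np.mean((estimated - observed) ** 2))
print(f"RMSE = {rmse:.3f}")   # the paper reports RMSE = 3.343
```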
Procedia PDF Downloads 387
25208 The Securitization of the European Migrant Crisis (2015-2016): Applying the Insights of the Copenhagen School of Security Studies to a Comparative Analysis of Refugee Policies in Bulgaria and Hungary
Authors: Tatiana Rizova
Abstract:
The migrant crisis, which peaked in 2015-2016, posed an unprecedented challenge to the European Union’s (EU) newest member states, including Bulgaria and Hungary. Their governments had to formulate sound migration policies with expediency and sensitivity to the needs of millions of people fleeing violent conflicts in the Middle East and failed states in North Africa. Political leaders in post-communist countries had to carefully coordinate with other EU member states on joint policies and solutions while minimizing the risk of alienating their increasingly anti-migrant domestic constituents. Post-communist member states’ governments chose distinct policy responses to the crisis, which were dictated by factors such as their governments’ partisan stances on migration, their views of the European Union, and the decision to frame the crisis as a security or a humanitarian issue. This paper explores how two Bulgarian governments (Boyko Borisov’s second and third governments, formed during the 43rd and 44th Bulgarian National Assembly, respectively) navigated the processes of EU migration policy making and managing the expectations of their electorates. Based on a comparative analysis of refugee policies in Bulgaria and Hungary during the height of the crisis (2015-2016) and a temporal analysis of refugee policies in Bulgaria (2015-2018), the paper advances the following conclusions. Drawing on insights of the Copenhagen school of security studies, the paper argues that cultural concerns dominated domestic debates in both Bulgaria and Hungary; both governments framed the issue predominantly as a matter of security rather than humanitarian disaster. Regardless of the similarities in issue framing, however, the two governments sought different paths of tackling the crisis. While the Bulgarian government demonstrated its willingness to comply with EU decisions (such as the proposal for mandatory quotas for refugee relocation), the Hungarian government defied EU directives and became a leading voice of dissent inside the EU. The current Bulgarian government (April 2017 - present) appears to be committed to complying with EU decisions and accepts the strategy of EU burden-sharing, while the Hungarian government has continually snubbed the EU’s appeals for cooperation despite the risk of hefty financial penalties. Hungary’s refugee policies have been influenced by the parliamentary representation of the far-right party Movement for a Better Hungary (Jobbik), which has encouraged the majority party (FIDESZ) to adopt harsher anti-migrant rhetoric and more hostile policies toward refugees. Bulgaria’s current government is a coalition of the center-right Citizens for European Development of Bulgaria (GERB) and its far-right junior partners, the United Patriots (composed of three nationalist political parties). The parliamentary presence of Jobbik in Hungary’s parliament has magnified the anti-migrant stance, rhetoric, and policies of Mr. Orbán’s Civic Alliance; we have yet to observe a substantial increase in anti-migrant rhetoric and policies in Bulgaria’s case. Analyzing responses to the migrant/refugee crisis is a critical opportunity to understand how issues of cultural identity and belonging, inclusion and exclusion, and regional integration and disintegration are debated and molded into policy in Europe’s youngest member states in the broader EU context.
Keywords: Copenhagen School, migrant crisis, refugees, security
Procedia PDF Downloads 121
25207 Incidences and Chemico-Mobility of Toxic Heavy Metals in Environmental Samples
Authors: I. Hilia, C. Hange, F. Hakala, M. Matheus, C. Jansen, J. Hidinwa, O. Awofolu
Abstract:
The article reports on the occurrence, levels, and mobility of selected trace metals in environmental samples. The conceptual basis was to examine the possible influence of anthropogenic activities and the impact on human and environmental health. Environmental samples (soil, plant and lower animal) were randomly collected from stratified study/sampling areas, preserved and pre-treated before analysis. A mineral acid digestion procedure was employed for the isolation of the metallic contents of the samples, and qualitative and quantitative elemental analysis was performed by ICP-OES. The analytical protocol was validated through a quality assurance process and found acceptable, with quantitative metal recoveries in the range of 85-90%; it was hence considered applicable for the analysis of environmental samples. The mean concentrations of the analysed metals in soil samples ranged from 53.2-2532.8 mg/kg (Cu); 59.5-2020.1 mg/kg (Zn); 1.80-21.26 mg/kg (Cd) and 19.6-140.9 mg/kg (Pb). The mean levels in grass samples ranged from 9.33-38.63 mg/kg (Cu); 64.20-105.18 mg/kg (Zn); 0.28-0.73 mg/kg (Cd) and 0.53-16.26 mg/kg (Pb), while the mean levels in the lower animal sample (beetle) varied from 9.6-105.3 mg/kg (Cu); 134.1-297.2 mg/kg (Zn); 0.63-3.78 mg/kg (Cd) and 8.0-29.1 mg/kg (Pb) across sample collection points (SCPs) 1-4, respectively. Metallic transfer factors (TFs) were in the order Zn > Cd > Cu > Pb, with metal pollution indices (MPIs) in the order SCP1 > SCP2 > SCP3 > SCP4. About 60-70% of the analysed metals were above the maximum allowable limits (MALs) in soil and plant samples. The results revealed the general prevalence of the analysed metals at all sampled sites, with indications of metallic mobility across the food chain, which portends dire consequences for environmental and human health. Systematic environmental remediation and pollution abatement strategies are recommended.
Keywords: trace metals, pollution, human health, incidences, ICP-OES
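A minimal sketch of the derived indices follows, assuming the common formulations TF = C_plant / C_soil for the soil-to-plant transfer factor and a geometric mean of the measured levels for the metal pollution index; the paper does not spell out its exact formulas, and the input values are illustrative.

```python
import numpy as np

# Mean metal levels (mg/kg) at one sampling point; values are illustrative,
# taken from within the ranges reported for soil and grass samples.
soil  = {"Cu": 53.2, "Zn": 59.5, "Cd": 1.80, "Pb": 19.6}
grass = {"Cu": 9.33, "Zn": 64.2, "Cd": 0.28, "Pb": 0.53}

# Soil-to-plant transfer factor: TF = C_plant / C_soil.
tf = {m: grass[m] / soil[m] for m in soil}

# Metal pollution index as the geometric mean of the measured levels
# (one common formulation; an assumption here).
mpi = np.prod(list(grass.values())) ** (1.0 / len(grass))

for m, v in tf.items():
    print(f"TF({m}) = {v:.3f}")
print(f"MPI = {mpi:.2f}")
```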
Procedia PDF Downloads 159
25206 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate the different distributions of output measurements from several dynamical systems. By processing the variance of the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as the variance, on identification, and the limitations of this approach, are explained.
Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification
Procedia PDF Downloads 516
25205 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R
Authors: Jaya Mathew
Abstract:
Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on-premise or in the cloud, users can leverage the power of R at scale without having to move their data around.
Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R
Procedia PDF Downloads 379
25204 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders
Authors: Sven Gehrke, Johannes Ruhland
Abstract:
Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. This paper presents different aspects of the concept of trust and describes the information asymmetry among the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors in relation to trust, problematic aspects of the current approach are examined through interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and point to potential research areas.
Keywords: trust, data mining, CRISP-DM, stakeholder management
Procedia PDF Downloads 94
25203 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today’s age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format that needs to be processed to yield information and business intelligence. This data is not always current; it is mostly historical data, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, it keeps the database busy and locked for the period that the processing takes place, which decreases the overall performance of the database server and therefore of the system. This paper follows on from a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU, storage, and processing-time performance.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
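For illustration, here is a minimal Python/sqlite3 sketch of the three-step pull-process-push technique with an in-memory list standing in for the array list; the telemetry schema and the hex-decoding step are hypothetical.

```python
import sqlite3

# Minimal sketch of the three-step pull-process-push technique: the database
# is not held locked while records are decoded in memory.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (id INTEGER PRIMARY KEY, raw TEXT, decoded TEXT)")
conn.executemany("INSERT INTO telemetry (raw) VALUES (?)",
                 [("4C61743A2D32362E32",), ("4C6F6E3A32382E30",)])
conn.commit()

# Step 1: pull -- read the unprocessed rows into a list and release the DB.
buffer = conn.execute(
    "SELECT id, raw FROM telemetry WHERE decoded IS NULL").fetchall()

# Step 2: process -- decode in memory; the DB stays free for other clients.
processed = [(bytes.fromhex(raw).decode("ascii"), rowid) for rowid, raw in buffer]

# Step 3: push -- write the readable values back in one batch.
conn.executemany("UPDATE telemetry SET decoded = ? WHERE id = ?", processed)
conn.commit()
print(conn.execute("SELECT id, decoded FROM telemetry").fetchall())
```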
Procedia PDF Downloads 244
25202 SIP Flooding Attacks Detection and Prevention Using Shannon, Renyi and Tsallis Entropy
Authors: Neda Seyyedi, Reza Berangi
Abstract:
Voice over IP (VOIP) networking, also known as Internet telephony, is growing rapidly, having occupied a large part of the communications market. With the growth of any technology, the related security issues become of particular importance. As this technology is taken advantage of in different environments, with the numerous features it puts at our disposal, there arises an increasing need to address its security threats. Being IP-based and playing a signaling role in VOIP networks, the Session Initiation Protocol (SIP) lets invaders exploit weaknesses of the protocol to disable VOIP service. One of the most important threats is the denial of service attack, a branch of which, flooding attacks, we discuss in this article. These attacks waste server resources and prevent the server from delivering service to authorized users. Distributed denial of service attacks and low-rate attacks can mislead many attack detection mechanisms. In this paper, we introduce a mechanism which not only detects distributed denial of service attacks and low-rate attacks, but can also identify the attackers accurately. We detect and prevent flooding attacks in the SIP protocol using Shannon (FDP-S), Renyi (FDP-R) and Tsallis (FDP-T) entropy. We conducted an experiment to compare the detection percentage and the rate of false alarm messages using each of the Shannon, Renyi and Tsallis entropies as a measure of disorder. Implementation results show that, owing to the parametric nature of the Renyi and Tsallis entropies, changing the parameters yields different detection percentages and false alarm rates, with the possibility of adjusting the sensitivity of the detection mechanism.
Keywords: VOIP networks, flooding attacks, entropy, computer networks
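The three entropy measures are standard; the sketch below, with illustrative per-source SIP request counts, shows how a flood concentrated on one source lowers each entropy relative to normal traffic. The counts and any detection threshold are assumptions for demonstration.

```python
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi(p, alpha=2.0):
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis(p, q=2.0):
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Distribution of SIP INVITE requests per source over a time window,
# normalized to probabilities. Counts are illustrative.
normal  = np.array([12, 10, 11, 9, 13], dtype=float)
flooded = np.array([12, 10, 11, 9, 400], dtype=float)  # one source dominates

for label, counts in (("normal", normal), ("flooded", flooded)):
    p = counts / counts.sum()
    print(label, round(shannon(p), 3), round(renyi(p), 3), round(tsallis(p), 3))

# A drop in entropy below a calibrated threshold flags a flooding attack;
# tuning alpha/q adjusts the detector's sensitivity, as the paper notes.
```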
Procedia PDF Downloads 405
25201 Extreme Temperature Forecast in Mbonge, Cameroon Through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the generalized extreme value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). Considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, return level analysis is carried out, and it is found that in the stationary model the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that although temperatures in the tropics show signs of increasing in the future, there is a maximum temperature beyond which there is no exceedance. The results of this paper are vital to agricultural and environmental research.
Keywords: forecasting, generalized extreme value (GEV), meteorology, return level
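A minimal sketch of stationary GEV return level estimation follows, using scipy and synthetic annual maxima in place of the CDC series; the T-year return level is the (1 - 1/T) quantile of the fitted distribution.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic annual maximum temperatures (deg C) standing in for the CDC
# block maxima; purely illustrative values.
annual_max = 34 + rng.gumbel(0, 1.2, size=40)

# Fit the stationary GEV; scipy's shape parameter c corresponds to -xi
# in the usual climatological convention.
c, loc, scale = genextreme.fit(annual_max)

# The T-year return level is the value exceeded on average once every
# T years, i.e. the quantile with exceedance probability 1/T.
for T in (10, 50, 100):
    z = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {z:.2f} deg C")
```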
Procedia PDF Downloads 478
25200 Impact of Stack Caches: Locality Awareness and Cost Effectiveness
Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang
Abstract:
Treating data based on its location in memory has received much attention in recent years due to its different properties, which offer important opportunities for cache utilization. Stack data and non-stack data may interfere with each other’s locality in the data cache. One of the important properties of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data separate. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially when over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved with 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with an added 1 KB stack cache improves by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
Keywords: hit rate, locality of program, stack cache, stack data
Procedia PDF Downloads 303
25199 Information Extraction Based on Search Engine Results
Authors: Mohammed R. Elkobaisi, Abdelsalam Maatuk
Abstract:
Search engines are large-scale information retrieval tools for the Web that are currently freely available to all. This paper explains how to convert the raw results returned by search engines into useful information. This represents a new method of data gathering compared with traditional methods. Submitting a query for multiple keywords takes considerable time and effort; hence, we developed a user interface program that searches automatically, taking multiple keywords at the same time and collecting the wanted data on its own. The collected raw data is processed using mathematical and statistical theories to eliminate unwanted data and convert it into usable data.
Keywords: search engines, information extraction, agent system
Procedia PDF Downloads 430
25198 Integrated Microsystem for Multiplexed Genosensor Detection of Biowarfare Agents
Authors: Samuel B. Dulay, Sandra Julich, Herbert Tomaso, Ciara K. O'Sullivan
Abstract:
Early, rapid and definitive detection of the presence of biowarfare agents, pathogens, viruses and toxins is required in many situations, including civil rescue and security units, homeland security, military operations and public transport security at airports, metro and railway stations, due to their harmful effects on the human population. In this work, an electrochemical genosensor array that allows simultaneous detection of different biowarfare agents has been developed and optimised within an integrated microsystem that provides easy handling of the technology, combining a microfluidics setup with a multiplexed genosensor array, for the following targets: Bacillus anthracis, Brucella abortus and melitensis, Bacteriophage lambda, Francisella tularensis, Burkholderia mallei and pseudomallei, Coxiella burnetii, Yersinia pestis, and Bacillus thuringiensis. The electrode array was modified via co-immobilisation of a 1:100 (mol/mol) mixture of a thiolated probe and an oligoethyleneglycol-terminated monopodal thiol. PCR products from these biowarfare agents were detected reproducibly through a sandwich assay format, with the target hybridised between a probe immobilised on the electrode surface and a horseradish peroxidase-labelled secondary reporter probe, which provided an enzyme-based electrochemical signal. Cross-reactivity studies over potentially interfering DNA sequences demonstrated the high selectivity and high throughput of the developed platform for multiplexed genosensor detection.
Keywords: biowarfare agents, genosensors, multiplexed detection, microsystem
Procedia PDF Downloads 272
25197 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India
Authors: Anushtha Saxena
Abstract:
This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collection, storage, and analysis of consumers’ data in order to put the data that is generated to further use for profit, revenue, etc. Data monetisation enables e-commerce companies to obtain better business opportunities, offer innovative products and services, gain a competitive edge over others, and generate millions in revenue. This paper analyses the issues and challenges posed by the process of data monetisation, some of which pertain to the right to privacy and the protection of the data of e-commerce consumers. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India; the Supreme Court of India recognized it as such in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data they collect and store for economic gain and monetisation, and the related issue of data protection. The researcher has mainly focused on e-commerce companies, such as online shopping websites, to analyse the legal issue of data monetisation. In the age of the Internet of Things and the digital era, people have shifted to online shopping as it is convenient, easy, flexible, comfortable, time-saving, etc. But at the same time, e-commerce companies store the data of their consumers and use it by selling it to third parties or generating more data from the data stored with them. This violates individuals’ right to privacy because consumers know nothing about how their data will be used when they provide it online; many times, data is also collected without the consent of individuals. The data used by analytics for monetisation can be structured, unstructured, etc. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with regard to their data and how it is used by e-commerce businesses to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill could make a significant impact on data monetisation. The paper further aims to study the European Union General Data Protection Regulation and how this legislation can be helpful in the Indian scenario concerning e-commerce businesses with respect to data monetisation.
Keywords: data monetization, e-commerce companies, regulatory framework, GDPR
Procedia PDF Downloads 120
25196 Cybersecurity Strategies for Protecting Oil and Gas Industrial Control Systems
Authors: Gaurav Kumar Sinha
Abstract:
The oil and gas industry is a critical component of the global economy, relying heavily on industrial control systems (ICS) to manage and monitor operations. However, these systems are increasingly becoming targets for cyber-attacks, posing significant risks to operational continuity, safety, and environmental integrity. This paper explores comprehensive cybersecurity strategies for protecting oil and gas industrial control systems. It delves into the unique vulnerabilities of ICS in this sector, including outdated legacy systems, integration with IT networks, and the increased connectivity brought by the Industrial Internet of Things (IIoT). We propose a multi-layered defense approach that includes the implementation of robust network security protocols, regular system updates and patch management, advanced threat detection and response mechanisms, and stringent access control measures. We illustrate the effectiveness of these strategies in mitigating cyber risks and ensuring the resilient and secure operation of oil and gas industrial control systems. The findings underscore the necessity for a proactive and adaptive cybersecurity framework to safeguard critical infrastructure in the face of evolving cyber threats.
Keywords: cybersecurity, industrial control systems, oil and gas, cyber-attacks, network security, IoT, threat detection, system updates, patch management, access control, cybersecurity awareness, critical infrastructure, resilience, cyber threats, legacy systems, IT integration, multi-layered defense, operational continuity, safety, environmental integrity
Procedia PDF Downloads 44
25195 Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is if the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data, e.g., the area under the curve for some models is higher than 80% when trained on the data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
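The following sketch reproduces the flavor of the simulation experiments under stated assumptions: synthetic features, 40% of training labels flipped, a linear-kernel SVM as in the paper, and evaluation by AUC on clean test labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

# Weak supervision: flip 40% of the training labels to simulate imperfect
# annotations, while the held-out test labels stay clean.
noisy = y_train.copy()
flip = rng.random(noisy.size) < 0.40
noisy[flip] = 1 - noisy[flip]

# A linear-kernel SVM, as in the paper, trained on the noisy labels.
model = SVC(kernel="linear", probability=True).fit(X_train, noisy)
scores = model.predict_proba(X_test)[:, 1]
print(f"AUC on clean test data: {roc_auc_score(y_test, scores):.3f}")
# With enough data, the AUC can exceed the 60% label accuracy of training.
```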
Procedia PDF Downloads 199
25194 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads
Authors: Dražen Cvitanić, Biljana Maljković
Abstract:
This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed on continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km long section of a state road. The data were first used to determine maximum operating speeds on tangents and to compare them with speeds in the middle of tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. After that, operating speed models for tangent sections were developed. There was no significant difference between models developed using speed data in the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have a higher coefficient of determination than models developed on spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.
Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency
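As a sketch of the modeling step, the snippet below fits an ordinary least squares operating speed model on tangent length and reports the coefficient of determination; the data points are synthetic, and the study's actual predictor set is not reproduced.

```python
import numpy as np

# Illustrative data: maximum operating speed (km/h) observed on each tangent
# versus tangent length (m); values are synthetic, not the study's.
length = np.array([150, 220, 310, 450, 600, 780, 950], dtype=float)
speed  = np.array([68, 74, 79, 85, 90, 94, 97], dtype=float)

# Ordinary least squares fit, speed = b0 + b1 * length.
b1, b0 = np.polyfit(length, speed, 1)
pred = b0 + b1 * length
ss_res = np.sum((speed - pred) ** 2)
ss_tot = np.sum((speed - speed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"V = {b0:.1f} + {b1:.4f} * L, R^2 = {r2:.3f}")
```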
Procedia PDF Downloads 452
25193 A Neural Network Based Clustering Approach for Imputing Multivariate Values in Big Data
Authors: S. Nickolas, Shobha K.
Abstract:
The treatment of incomplete data is an important step in data pre-processing. Missing values create a noisy environment in all applications and are an unavoidable problem in big data management and analysis. Numerous techniques, like discarding rows with missing values, mean imputation, expectation maximization, neural networks with evolutionary algorithms or optimized techniques, and hot deck imputation, have been introduced by researchers for handling missing data. Among these, imputation techniques play a positive role in filling in missing values when it is necessary to use all records in the data rather than discard records with missing values. In this paper, we propose a novel artificial neural network based clustering algorithm, Adaptive Resonance Theory-2 (ART2), for the imputation of missing values in mixed-attribute data sets. ART2 can recognize learned models quickly and adapt to new objects rapidly. It carries out model-based clustering by using competitive learning and a self-stabilizing mechanism in dynamic environments without supervision. The proposed approach not only imputes the missing values but also provides information about handling outliers.
Keywords: ART2, data imputation, clustering, missing data, neural network, pre-processing
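Since ART2 is not available in common Python libraries, the sketch below substitutes KMeans for the clustering step to show the general cluster-based imputation idea: cluster provisionally filled data, then replace each missing value with its own cluster's mean. It is an illustration of the approach, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic numeric data with three latent groups; 10% values go missing.
X = rng.normal(size=(200, 4)) + rng.integers(0, 3, size=(200, 1)) * 5.0
mask = rng.random(X.shape) < 0.1
X_missing = np.where(mask, np.nan, X)

# Provisional column-mean fill so that clustering can run.
col_means = np.nanmean(X_missing, axis=0)
X_filled = np.where(np.isnan(X_missing), col_means, X_missing)

# Cluster, then replace each missing value with its cluster's mean,
# which respects local structure better than a global mean.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_filled)
for k in np.unique(labels):
    rows = labels == k
    cluster_means = np.nanmean(X_missing[rows], axis=0)
    block = X_filled[rows]
    nan_here = np.isnan(X_missing[rows])
    block[nan_here] = np.broadcast_to(cluster_means, block.shape)[nan_here]
    X_filled[rows] = block

print("mean absolute imputation error:", np.mean(np.abs(X_filled[mask] - X[mask])))
```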
Procedia PDF Downloads 274
25192 The Effect That the Data Assimilation of Qinghai-Tibet Plateau Has on a Precipitation Forecast
Authors: Ruixia Liu
Abstract:
The Qinghai-Tibet Plateau has an important influence on precipitation in the regions downstream of it. Remote sensing data have their own advantages, and a numerical prediction model that assimilates RS data should perform better than one that does not. We obtained assimilated MHS, surface, and sounding data from GSI, introduced the result into WRF, and then obtained relative humidity (RH) and precipitation forecasts. By comparing the 1 h, 6 h, 12 h, and 24 h results, we found that assimilating the MHS, surface, and sounding data made the forecasts of precipitation amount, area, and center more accurate. Analyzing the differences in the initial field, we found that data assimilation over the Qinghai-Tibet Plateau influences the downstream forecast by affecting the initial temperature and RH.
Keywords: Qinghai-Tibet Plateau, precipitation, data assimilation, GSI
Procedia PDF Downloads 234
25191 Positive Affect, Negative Affect, Organizational and Motivational Factor on the Acceptance of Big Data Technologies
Authors: Sook Ching Yee, Angela Siew Hoong Lee
Abstract:
Big data technologies have become a trend for exploiting business opportunities and providing valuable business insights through the analysis of big data. However, many organizations have yet to adopt big data technologies, especially small and medium enterprises (SMEs). This study uses the technology acceptance model (TAM), examining several of its constructs together with additional constructs: positive affect, negative affect, an organizational factor and a motivational factor. The conceptual model proposed in the study will be tested for the relationship and influence of positive affect, negative affect, the organizational factor and the motivational factor on the intention to use big data technologies. Data are collected empirically through a survey.
Keywords: big data technologies, motivational factor, negative affect, organizational factor, positive affect, technology acceptance model (TAM)
Procedia PDF Downloads 362
25190 Big Data Analysis with Rhipe
Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim
Abstract:
Rhipe, which integrates the R and Hadoop environments, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe with actual data of various sizes. Experimental results comparing the performance of Rhipe with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages, owing to parallel processing that increases the number of map tasks as the size of the data increases. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes for configuring a Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and that the computing speed of the fully-distributed mode increased with the number of data nodes.
Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe
Procedia PDF Downloads 497
25189 Farmers Willingness to Pay for Irrigated Maize Production in Rural Kenya
Authors: Dennis Otieno, Lilian Kirimi Nicholas Odhiambo, Hillary Bii
Abstract:
Kenya is considered a middle-income country and usually does not meet household food security needs, especially in its north-eastern and south-eastern parts. Approximately half of the population is living under the poverty line (www, CIA 1, 2012). Agriculture is the largest sector in the country, employing 80% of the population, who are thereby directly dependent on the sufficiency of its outputs. This makes efficient, easily accessible and cheap agricultural practices an important matter in order to improve food security. Maize is the prime staple food commodity in Kenya and represents a substantial share of people’s nutritional intake. This study is the result of questionnaire-based interviews, key informant interviews and focus group discussions involving 220 small-scale Kenyan maize farmers. The study sites were spread across separate areas: Lower Kuja, Bunyala, Nandi, Lower Nzoia, Perkerra, Mwea Bura, Hola and Galana Kulalu in Kenya. The questionnaire captured the farmers’ use and perceived importance of irrigation services and irrigated maize production. Viability was evaluated using four indices, which were all positive, with the NPV giving positive cash flows in at most 21 years for one season’s output. The mean willingness to pay was found to be KES 3082, and willingness to pay increased with increases in irrigation premiums. The economic value of water was found to be greater than the willingness to pay, implying that irrigated maize production is sustainable. Farmers stated that viability was influenced by high output levels, good produce quality, crop of choice, availability of sufficient water and enforcement; the last two factors had a positive influence, while the others had a negative effect on the viability of irrigated maize. A regression of willingness to pay for irrigated maize production was run on scheme- and plot-level factors. Farmers who already use other inputs such as animal manure, hired labor and chemical fertilizer should also have a demand for improved seeds, according to Liebig’s law of the minimum and expansion path theory. The regression showed that premiums and high yields have a positive effect on willingness to pay, while produce quality, efficient fertilizer use, and crop season have a negative effect.
Keywords: maize, food security, profits, sustainability, willingness to pay
Procedia PDF Downloads 220
25188 Survival Data with Incomplete Missing Categorical Covariates
Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar
Abstract:
Censored survival data with incomplete covariate data are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modeled within the class of generalized linear models, and this method requires estimating the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have some missing values, where the data were fully censored.
Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution
Procedia PDF Downloads 405
25187 The Report of Co-Construction into a Trans-National Education Teaching Team
Authors: Juliette MacDonald, Jun Li, Wenji Xiang, Mingwei Zhao
Abstract:
Shanghai International College of Fashion and Innovation (SCF) was created as a result of a collaborative partnership agreement between the University of Edinburgh and Donghua University. The College provides two programmes, Fashion Innovation and Fashion Interior Design, and the overarching curriculum has the intention of developing innovation and creativity within an international learning, teaching, knowledge exchange and research context. The research problem presented here focuses on the multi-national/cultural faculty in the team, the challenges arising from difficulties in communication and the associated limitations of management frameworks. The teaching faculty at SCF are drawn from China, Finland, Korea, Singapore and the UK, with input from Flying Faculty from Fashion and Interior Design, Edinburgh College of Art (ECA), for 5 weeks each semester. Rather than fully replicating the administrative and pedagogical style of one or other of the institutions within this joint partnership, the aim from the outset was to create a third way which acknowledges the quality assurance requirements of both Donghua and Edinburgh and the academic and technical needs of the students, and provides relevant development and support for all the SCF-based staff and Flying Academics. It has been well acknowledged by those who are involved in teaching across cultures that there is often a culture shock associated with transnational education, but that the experience of being involved in the delivery of a curriculum at a joint institution can also be very rewarding for staff and students. It became clear at SCF that if a third way was to be achieved which encourages innovative approaches to fashion education whilst balancing the expectations of Chinese and western concepts of education and the aims of the two institutions, then it was going to be necessary to construct a framework which developed close working relationships for the entire teaching team, so not only between academics and students but also between technicians and administrators at ECA and SCF. The attempts at co-construction and integration are built on the sharing of cultural and educational experiences and knowledge, as well as the provision of opportunities for reflection on the pedagogical purpose of the curriculum and its delivery. Methods for evaluating the effectiveness of these aims include a series of surveys and interviews and analysis of data drawn from teaching projects delivered to the students, along with graduate successes over the last five years, since SCF first opened its doors. This paper will provide examples of best practice developed by SCF which have helped guide the faculty and embed the common core values and aims of co-construction regulations and management, whilst building a pro-active TNE (Trans-National Education) team which enhances the learning experience for staff and students alike.
Keywords: cultural co-construction, educational team management, multi-cultural challenges, TNE integration for teaching teams
Procedia PDF Downloads 120
25186 Reduction of Energy Consumption Using Smart Home Techniques in the Household Sector
Authors: Ahmed Al-Adaileh, Souheil Khaddaj
Abstract:
The consequences of the exhaustion of natural resources have begun to affect every living being on this planet, and energy is an essential factor in this respect. To put the situation back on the right track, all efforts must focus on two fundamental branches: producing electricity from clean and renewable reserves and decreasing the overall unnecessary consumption of energy. The focal point of this paper is lessening power consumption in the household sector. The paper attempts to give a clear understanding of a framework called Reduction of Energy Consumption in the Household Sector (RECHS) and how it should help householders reduce their power consumption by substituting their household appliances, turning appliances off when stand-by mode is detected, and scheduling appliance operation periods. Technically, the framework depends on Z-Wave-compatible plug-ins connected to ordinary household devices to gauge and control them remotely and semi-automatically. The suggested framework supports numerous quality characteristics, for example integrability, scalability, security and adaptability.
Keywords: smart energy management systems, internet of things, wireless mesh networks, microservices, cloud computing, big data
Procedia PDF Downloads 196
25185 A Study of Blockchain Oracles
Authors: Abdeljalil Beniiche
Abstract:
A limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide external data to smart contracts. An oracle is an interface that delivers data from external sources outside the blockchain to a smart contract to consume. Oracles can deliver different types of data depending on the industry and its requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential roles, technical architectures, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching a consensus about a given inquiry or task.
Keywords: blockchain, oracles, oracles design, human oracles
Procedia PDF Downloads 136
25184 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial
Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie
Abstract:
A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials are, however, expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary. Robust data management systems are crucial for optimizing and validating the quality, accuracy, and dependability of trial data. Regarding the difficulties of data gathering in clinical trials in low-resource areas, there is a scarcity of literature on this subject, which may raise concerns. Effective data management systems and implementation goals should be part of trial procedures. Publicizing the creative clinical data management techniques used in clinical trials should boost public confidence in study conclusions and encourage replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, RedCap, SQL, Visual Basic, Ruby, and ASP.NET. Additionally, data synchronization tools were developed to integrate data from these systems into the central server for the reporting systems. Clinician, lab, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of research project data collected in real time, online reporting, data synchronization, and methods for cleaning and verifying data. Consequently, we effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor-resource setting.
Keywords: data management, data collection, data cleaning, cluster-randomized trial
Procedia PDF Downloads 27
25183 Trusting Smart Speakers: Analysing the Different Levels of Trust between Technologies
Authors: Alec Wells, Aminu Bello Usman, Justin McKeown
Abstract:
The growing usage of smart speakers raises many privacy and trust concerns compared to other technologies such as smart phones and computers. In this empirical study, a proxy measure of trust is used to gauge users’ opinions of three different technologies and to understand which technology people are most likely to trust. The collected data were analysed using the Kruskal-Wallis H test to determine the statistical differences between users’ trust levels in the three technologies: smart speaker, computer and smart phone. The findings revealed that despite the wide acceptance, ease of use and reputation of smart speakers, people find it difficult to trust smart speakers with their sensitive information via Direct Voice Input (DVI) and would prefer to use the keyboard or touchscreen offered by computers and smart phones. Findings from this study can inform future work on users’ trust in technology based on perceived ease of use, reputation, perceived credibility and the risk of using technologies via DVI.
Keywords: direct voice input, risk, security, technology, trust
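A minimal sketch of the analysis step, with hypothetical Likert-style trust scores standing in for the survey responses, is shown below using scipy's Kruskal-Wallis H test.

```python
from scipy.stats import kruskal

# Illustrative Likert-style trust scores (1-7) for each technology;
# the real study's responses are not reproduced here.
smart_speaker = [2, 3, 3, 4, 2, 3, 5, 2, 4, 3]
computer      = [5, 6, 4, 6, 5, 7, 5, 6, 4, 5]
smart_phone   = [4, 5, 5, 6, 4, 5, 6, 5, 4, 6]

# Kruskal-Wallis H test: a non-parametric one-way test on ranks, suited to
# ordinal survey data that need not be normally distributed.
h, p = kruskal(smart_speaker, computer, smart_phone)
print(f"H = {h:.2f}, p = {p:.4f}")
if p < 0.05:
    print("At least one technology's trust distribution differs.")
```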
Procedia PDF Downloads 191
25182 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability, and process integrity, since SCS data are not always visible, available, or trusted. The costs of operating BT in an SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, it is necessary to overcome the following hurdle: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the importance of the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The purpose of the research is to identify the main BT installation cost components in an SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine a suitable Supervised Learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running cost of BT can be incorporated into the total cost of the SCS.
2. Work Performed: Applied successfully in various fields, Supervised Learning is a method of framing the data, treating it, and training the chosen model. It is a learning model directed at making predictions of an outcome measurement based on a set of unseen input data. The following steps are conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in an SCS. According to the literature review, Supervised Learning algorithms that provide us with a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models comes in the third step. Finally, we propose the best predictive performance for finding the minimum BT installation costs in an SCS.
3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of Supervised Learning algorithms. In a first attempt, we will select a case study in the field of BT-enabled SCS and then use Supervised Learning algorithms to predict the BT installation cost in the SCS. We continue to find the best predictive performance for developing and running BT in an SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
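As a sketch of the prediction step described above, the snippet below trains an SVR model (one of the algorithms the paper names) on synthetic installation-cost records; the feature set (nodes, integration effort, licence tier, transaction volume) and the cost relationship are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic cost components per deployment; hypothetical features and
# a hypothetical linear-plus-noise cost relationship.
n = 300
X = np.column_stack([
    rng.integers(4, 50, n),          # network nodes
    rng.integers(20, 400, n),        # integration effort (person-days)
    rng.integers(1, 4, n),           # licence tier
    rng.uniform(1e3, 1e6, n),        # transactions per month
])
cost = (5_000 * X[:, 0] + 800 * X[:, 1] + 20_000 * X[:, 2]
        + 0.05 * X[:, 3] + rng.normal(0, 10_000, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1e5, epsilon=1e3))
model.fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 0))
```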
Procedia PDF Downloads 122
25181 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering
Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining
Abstract:
DNA microarray technology is used to analyze thousands of gene expression values simultaneously, a very important task for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach for identifying structures in gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on Agglomerative Hierarchical Clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)
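A minimal sketch of the SVD-based normalization followed by agglomerative hierarchical clustering is given below on a synthetic expression matrix; the rank k and the clustering parameters are assumptions, and the paper's Mixed-Clustering and Lift steps are not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic expression matrix (genes x conditions) with an embedded bicluster:
# genes 0-19 are co-expressed under conditions 0-7.
X = rng.normal(size=(100, 20))
X[:20, :8] += 3.0

# SVD-based normalization: keep the top-k singular components to obtain a
# denoised, normalized matrix before clustering (k = 2 is an assumption).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_norm = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Agglomerative hierarchical clustering of genes on the normalized matrix;
# rows falling in one cluster are candidate bicluster members.
Z = linkage(X_norm, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print("genes grouped with gene 0:", int(np.sum(labels == labels[0])))
```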
Procedia PDF Downloads 278