Search results for: housing data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25592

24902 Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s. Over time, a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies. It is the process of grouping data into clusters such that objects within a cluster are as similar as possible, while objects in different clusters are as dissimilar as possible. Many traditional clustering algorithms have limitations in dealing with data sets containing categorical attributes. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-modes (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm, has been reported for clustering datasets with categorical values. It is a form of soft clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated datasets by using the FKM clustering algorithm. As a significant contribution of the study, the FKM clustering algorithm makes it possible to determine anomalies together with their degree of abnormality, in contrast to numerous anomaly detection algorithms. According to the results, the FKM clustering algorithm showed good performance in the anomaly detection of data containing both a single anomaly and multiple anomalies.
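
As a minimal illustration of the idea (not the authors' implementation), the sketch below clusters a toy categorical dataset with a simple fuzzy k-modes routine and reads an abnormality degree off the fuzzy memberships; the fuzzifier value, the toy data and the scoring rule are assumptions made only for this example.

```python
# Minimal illustrative sketch of fuzzy k-modes clustering on categorical data,
# with an "abnormality degree" read off from the fuzzy memberships.
# NOT the authors' implementation: the fuzzifier m, the toy data and the
# scoring rule are assumptions for illustration only.
import numpy as np

def matching_dissimilarity(a, b):
    """Simple matching dissimilarity: number of attributes that differ."""
    return np.sum(a != b, axis=-1)

def memberships(X, modes, m):
    d = np.stack([matching_dissimilarity(X, mode) for mode in modes], axis=1)
    d = np.maximum(d.astype(float), 1e-9)          # avoid division by zero on exact matches
    u = d ** (-1.0 / (m - 1.0))                    # fuzzy k-modes membership formula
    return u / u.sum(axis=1, keepdims=True)

def fuzzy_k_modes(X, k, m=1.5, n_iter=20):
    modes = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()  # simple deterministic init
    for _ in range(n_iter):
        u = memberships(X, modes, m)
        for j in range(k):                          # update each mode attribute-wise
            for col in range(X.shape[1]):
                vals, idx = np.unique(X[:, col], return_inverse=True)
                weights = np.bincount(idx, weights=u[:, j] ** m)
                modes[j, col] = vals[np.argmax(weights)]
    return modes, memberships(X, modes, m)

# Toy categorical data: one obvious anomaly ("zzq") among two clusters
X = np.array([list(s) for s in
              ["aax", "aay", "aax", "aay", "zzq", "bbx", "bby", "bbx", "bby"]])
modes, u = fuzzy_k_modes(X, k=2)
abnormality = 1.0 - u.max(axis=1)    # low maximum membership -> more anomalous
print(np.round(abnormality, 3))      # the "zzq" point should score highest
```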

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 53
24901 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara

Abstract:

This paper describes the problem of building secure computational services for encrypted information in the Cloud: computing on encrypted data without decrypting it. It therefore meets the aspiration of a computational-encryption model that can enhance the security of big data with respect to privacy or confidentiality, availability and integrity of the data and of user security. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption Scheme. We contribute a theoretical presentation of high-level computational processes based on number theory, derived from abstract algebra, that can easily be integrated and leveraged in the Cloud computing interface, together with the detailed theoretical and mathematical concepts underlying fully homomorphic encryption models. This contribution supports the full implementation of big data analytics based on cryptographically secure algorithms.
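
The scheme used in the paper is fully homomorphic; the toy sketch below is only additively homomorphic (Paillier-style, with deliberately tiny and insecure parameters) and is meant solely to illustrate the core idea of computing on ciphertexts without decrypting them.

```python
# Toy Paillier-style additively homomorphic encryption with tiny, insecure
# parameters. It only illustrates "computing on ciphertexts without
# decrypting" -- it is NOT the fully homomorphic scheme discussed in the paper.
import math
import random

p, q = 61, 53                 # toy primes (never use sizes like this in practice)
n = p * q                     # 3233
n2 = n * n
g = n + 1                     # standard choice of generator
lam = math.lcm(p - 1, q - 1)  # Carmichael function for n = p*q

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts.
c1, c2 = encrypt(123), encrypt(456)
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == (123 + 456) % n
print(decrypt(c_sum))   # 579, computed without ever decrypting c1 or c2
```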

Keywords: big data analytics, security, privacy, bootstrapping, Fully Homomorphic Encryption Scheme

Procedia PDF Downloads 480
24900 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
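
The reported error metric can be illustrated with a short calculation; the two arrays below are hypothetical placeholders, not the study's rainfall maps.

```python
# How an RMSE such as the reported 3.343 is computed: the root mean square of
# the differences between pixel-value-based estimates and observed daily
# rainfall. The arrays below are hypothetical placeholders, not the study data.
import numpy as np

observed  = np.array([12.0, 0.0, 5.5, 30.2, 8.1])   # observed daily rainfall
estimated = np.array([10.5, 1.2, 7.0, 26.8, 9.0])   # estimated from pixel values

rmse = np.sqrt(np.mean((estimated - observed) ** 2))
print(round(rmse, 3))
```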

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 387
24899 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management

Authors: Kenneth Harper

Abstract:

The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks using Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions such as IPFS and Filecoin, coupled with decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and Zero-Knowledge Proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings from this exercise demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency.
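
One of the listed features, cryptographic hashing for data lineage, can be sketched as a simple hash chain in which each entry commits to its payload and to the previous entry's hash, so later tampering is detectable. This is only an illustration of the idea, not the platform's Cosmos SDK / Substrate / IPFS implementation.

```python
# Minimal sketch of cryptographic hashing for data lineage: each entry commits
# to its payload and to the previous entry's hash, so any later tampering
# breaks the chain. Illustration only; not the platform's actual stack.
import hashlib
import json

def entry_hash(payload: dict, prev_hash: str) -> str:
    blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def build_lineage(records):
    chain, prev = [], "0" * 64                    # genesis hash
    for rec in records:
        h = entry_hash(rec, prev)
        chain.append({"payload": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for link in chain:
        if link["prev"] != prev or entry_hash(link["payload"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

chain = build_lineage([{"record": i, "value": i * 10} for i in range(3)])
print(verify(chain))                              # True
chain[1]["payload"]["value"] = 999                # tamper with one record
print(verify(chain))                              # False
```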

Keywords: blockchain, cosmos SDK, decentralized data platform, IPFS, ZK-Rollups

Procedia PDF Downloads 27
24898 Incidence and Etiology of Neonatal Calf Diarrhea in the Region of Blida, Algeria

Authors: A. Dadda, D. Khelef, K. Ait-Oudia, R. Kaidi

Abstract:

Neonatal calf diarrhea is the most important disease of neonatal calves and results in the greatest economic losses due to disease in this age group in both dairy and beef calves. The objectives of the present study were to estimate the morbidity and mortality of neonatal diarrhea in dairy calves and to determine the aetiology and risk factors that cause diarrhea in dairy calves under 60 days old. A total of 324 calves, housed on 30 dairy farms, were followed during two calving seasons from January to June 2013. The total mortality was 5.9% and was significantly higher in calves less than 15 days of age. The incidence rate of diarrhea was 31.5% and peaked in the first two weeks after calving. The main risk factors were herd management practices, failure of passive immunity transfer, calf age, production season, nutrition of the pregnant cow, calf housing and infectious agents. ELISA tests on 22 faecal samples revealed that 31.82% of the dairy farms were infected: Cryptosporidium parvum in 13.6% of the study population, E. coli F5 in 9% and rotavirus in 4.5%.

Keywords: diarrhoea, neonatal, mortality, aetiology, risk factors, incidence

Procedia PDF Downloads 635
24897 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate different distributions of the output measurements from several dynamical systems. Using variance processing of the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, the effect that the spread of the measurements, such as their variance, has on identification, and the limitations of this approach, are explained.

Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification

Procedia PDF Downloads 516
24896 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premises or in the cloud, users can leverage the power of R at scale without having to move their data around.

Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R

Procedia PDF Downloads 379
24895 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders

Authors: Sven Gehrke, Johannes Ruhland

Abstract:

Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. The paper at hand shows different aspects of the concept of trust and describes the information asymmetry among the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors in relation to trust, problematic aspects of the current approach are examined using various interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and show potential research areas.

Keywords: trust, data mining, CRISP DM, stakeholder management

Procedia PDF Downloads 94
24894 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes. Big data has to be transmitted from source to relay and from relay to destination by deploying security at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes at unknown locations. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data-transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node and eavesdropper node) in every segment, and evaluating the probability using a binary-based evaluation. If the transmission is secure, the two-hop transmission of big data proceeds; otherwise, the attackers are countered by the cooperative jamming scheme and the data is then transmitted over the two hops.

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 490
24893 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. They make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data. The data is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs to accomplish this, but sometimes neglect the effect some of these techniques may have on database performance. One of the techniques generally used is to pull data from the database server, process it and push it back to the database server in one single step. Since the processing of the data usually takes some time, it keeps the database busy and locked for the period of time that the processing takes place. Because of this, it decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage and processing time.
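
The contrast described above can be sketched as follows, with SQLite, the table layout and the decode step standing in as assumed placeholders rather than the paper's actual setup: single_step interleaves processing with per-row writes while the database stays engaged, whereas pull_process_push pulls the rows into an in-memory list, processes them, and pushes the results back in one bulk write.

```python
# Illustrative contrast between single-step pull-process-push (the database is
# written to while each row is processed) and the three-step variant (pull into
# an in-memory list, process, then push back in one bulk write).
# SQLite, the table and the decode step are placeholders, not the paper's setup.
import sqlite3
import time

def decode(raw: str) -> str:
    time.sleep(0.001)                     # stand-in for slow decoding work
    return raw.upper()

def single_step(conn):
    rows = conn.execute("SELECT id, raw FROM telemetry").fetchall()
    for rowid, raw in rows:               # DB written to while processing each row
        conn.execute("UPDATE telemetry SET decoded = ? WHERE id = ?",
                     (decode(raw), rowid))
    conn.commit()

def pull_process_push(conn):
    rows = conn.execute("SELECT id, raw FROM telemetry").fetchall()    # 1. pull
    processed = [(decode(raw), rowid) for rowid, raw in rows]          # 2. process in memory
    conn.executemany("UPDATE telemetry SET decoded = ? WHERE id = ?",  # 3. push back in bulk
                     processed)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (id INTEGER PRIMARY KEY, raw TEXT, decoded TEXT)")
conn.executemany("INSERT INTO telemetry (raw) VALUES (?)",
                 [(f"gps packet {i}",) for i in range(100)])
conn.commit()
pull_process_push(conn)                   # single_step is shown only for contrast
print(conn.execute("SELECT decoded FROM telemetry WHERE id = 1").fetchone())
```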

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 244
24892 Extreme Temperature Forecast in Mbonge, Cameroon Through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution

Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph

Abstract:

In this paper, temperature extremes are forecast by employing the block maxima method of the generalized extreme value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). By considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, return level analysis is carried out, and it is found that in the stationary model the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that although temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that will not be exceeded. The results of this paper are vital for agricultural and environmental research.
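
A minimal sketch of the block-maxima and return-level computation, using synthetic data rather than the CDC records and the stationary model only, might look like this:

```python
# Sketch of the block-maxima / return-level computation: fit a GEV distribution
# to annual maxima and read off the T-year return level as the (1 - 1/T)
# quantile. The data here are synthetic, not the CDC temperature records.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Synthetic daily temperatures for 30 "years"; take each block's maximum
daily = rng.normal(loc=27.0, scale=3.0, size=(30, 365))
annual_maxima = daily.max(axis=1)

# Fit the (stationary) GEV to the block maxima
shape, loc, scale = genextreme.fit(annual_maxima)

# Return level z_T: the value exceeded on average once every T blocks (years)
for T in (10, 50, 100):
    z_T = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return level: {z_T:.2f} degrees C")
```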

Keywords: forecasting, generalized extreme value (GEV), meteorology, return level

Procedia PDF Downloads 478
24891 Impact of Stack Caches: Locality Awareness and Cost Effectiveness

Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang

Abstract:

Treating data based on its location in memory has received much attention in recent years due to its different properties, which offer important opportunities for cache utilization. Stack data and non-stack data may interfere with each other’s locality in the data cache. One of the important properties of stack data is that it has high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data separate in different caches. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially since over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved using 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with a 1 KB stack cache added improves by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
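
A toy version of the split design can be sketched as below: accesses are routed either to a small stack cache or to a larger non-stack cache, and hit rates are counted per cache. The address trace, cache sizes and the stack-region test are assumptions for illustration; the paper's experiments use the SimpleScalar toolset.

```python
# Toy simulation of the non-unified design: accesses are routed either to a
# small stack cache or to a non-stack cache, and hit rates are counted per
# cache. The trace, sizes and the "is this a stack address" test are assumed.
from collections import OrderedDict
import random

class Cache:
    def __init__(self, size_bytes, ways, line=32):
        self.sets = max(1, size_bytes // (line * ways))
        self.ways, self.line = ways, line
        self.data = [OrderedDict() for _ in range(self.sets)]   # LRU per set
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        block = addr // self.line
        tag, idx = block // self.sets, block % self.sets
        s = self.data[idx]
        if tag in s:
            s.move_to_end(tag)          # refresh LRU position
            self.hits += 1
        else:
            if len(s) >= self.ways:
                s.popitem(last=False)   # evict least recently used
            s[tag] = True

    def hit_rate(self):
        return self.hits / self.accesses if self.accesses else 0.0

STACK_BASE = 0x7FFF0000                 # assumed stack region for the toy trace
stack_cache = Cache(2 * 1024, ways=1)   # 2 KB, direct mapped (1-way)
other_cache = Cache(16 * 1024, ways=4)

random.seed(0)
for _ in range(20000):
    if random.random() < 0.6:           # stack accesses: small, heavily reused window
        stack_cache.access(STACK_BASE + random.randrange(0, 1024))
    else:                               # non-stack accesses: spread over a wide range
        other_cache.access(random.randrange(0, 1 << 20))

print(f"stack cache hit rate:     {stack_cache.hit_rate():.3f}")
print(f"non-stack cache hit rate: {other_cache.hit_rate():.3f}")
```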

Keywords: hit rate, locality of program, stack cache, stack data

Procedia PDF Downloads 303
24890 Epidemiological Investigation of Abortion in Ewes in Algeria

Authors: Laatra Zemmouri, Said Boukhechem, Samia Haffaf, Mohamed Lafri

Abstract:

A study was conducted in order to determine the prevalence of and risk factors associated with abortion in ewes in the region of M’sila, located in central-eastern Algeria. A questionnaire was carried out to obtain information about the occurrence of abortion, sheep housing conditions, vaccination, feeding and management practices, and whether the farmers kept other livestock. This cross-sectional study was conducted over 36 months (between 2016 and 2019). A total of 71 sheep flocks were visited. Among 8168 ewes, we recorded 734 (8.99%) abortions and 3861 lambings. The risk factor analysis using multivariable logistic regression showed an association between abortion and vaccination against brucellosis (95% CI = 1.35-2.76; p < 0.001). Abortion decreased when dogs were owned (95% CI = 0.36-0.84; p = 0.006); however, it increased with the presence of cats on farms (95% CI = 1.24-2.8; p = 0.003). There was a significant association between abortion and keeping goats (95% CI = 1.18-2.40; p = 0.004), cattle (95% CI = 0.3-0.68; p < 0.001) and poultry (95% CI = 0.39-0.77; p = 0.001) on farms. The study also showed a strong association between the occurrence of abortion and estrus synchronization, stillbirth occurrence, and feed supplementation (p < 0.05). Identification of the causes of abortion is an important task in reducing foetal losses and improving livestock productivity.

Keywords: abortion, ewes, questionnaire, risk factors

Procedia PDF Downloads 227
24889 The Jurisprudential Evolution of Corruption Offenses in Spain: Before and after the Economic Crisis

Authors: Marta Fernandez Cabrera

Abstract:

The period of economic boom generated by the housing bubble created a climate of social indifference to the problem of corruption. This resulted in prosecutions and convictions for these criminal offenses being low. After the economic recession, social awareness about the problem of corruption has increased. This has led Spanish citizens to require the public authorities to try to end the problem in the most effective way possible. In order to respond to the continuous social demands that require exemplary punishment, the legislator has made changes to the crimes against the public administration in the Spanish Criminal Code. However, from the point of view of criminal law, the social change has not served to modify only the law, but also the jurisprudence. After the recession, judges are punishing these behaviors more severely than in the past. Before the crisis, it was usual for criminal judges to divert relevant behavior to other areas of the legal system, such as administrative law, and to acquit in the criminal field. Criminal judges considered that administrative law already has mechanisms that can effectively deal with this type of behavior, in order to respect the principle of subsidiarity or ultima ratio. It was also usual for criminal judges to acquit civil servants due to the absence of requirements that are not actually part of the applicable offense. For example, they have required economic damage to the public administration even when the offense in the criminal code does not require it. Nevertheless, for some years, these arguments have either partially disappeared or been considerably transformed. Since 2010, a jurisprudential stream has been consolidated that aims to provide a more severe response to corruption than it had received until then. This change of opinion, together with greater prosecution of these behaviors by judges and prosecutors, has led to a significant increase in the number of individuals convicted of corruption crimes. This paper has two objectives. The first is to show that even though judges apply the law impartially, they are responsive to social changes. The second is to identify the erroneous arguments the courts have used up until now. For the present paper, a detailed analysis of the judgments of the Supreme Court before and after 2010 has been carried out. The jurisprudential analysis is complemented with the available statistical data on corruption.

Keywords: corruption, public administration, social perception, ultima ratio principle

Procedia PDF Downloads 146
24888 Autonomic Threat Avoidance and Self-Healing in Database Management System

Authors: Wajahat Munir, Muhammad Haseeb, Adeel Anjum, Basit Raza, Ahmad Kamran Malik

Abstract:

Databases are key components of software systems. Due to the exponential growth of data, there is a concern that the data should be accurate and available. The data in databases is vulnerable to internal and external threats, especially when it contains sensitive data, as in medical or military applications. Whenever the data is changed with malicious intent, data analysis results may lead to disastrous decisions. Autonomic self-healing in computer systems is modeled on the autonomic system of the human body. In order to guarantee the accuracy and availability of data, we propose a technique which, on a priority basis, tries to prevent any malicious transaction from executing and, in case a malicious transaction does affect the system, heals the system in an isolated mode in such a way that the availability of the system is not compromised. Using this autonomic system, the management cost and time of DBAs can be minimized. In the end, we test our model and present the findings.

Keywords: autonomic computing, self-healing, threat avoidance, security

Procedia PDF Downloads 504
24887 Information Extraction Based on Search Engine Results

Authors: Mohammed R. Elkobaisi, Abdelsalam Maatuk

Abstract:

Search engines are large-scale information retrieval tools for the Web that are currently freely available to all. This paper explains how to convert the raw result counts returned by search engines into useful information. This represents a new method for data gathering compared with traditional methods. Submitting a query for multiple keywords takes a long time and effort; hence, we developed a user interface program that searches automatically by taking several keywords at the same time and is left to collect the wanted data automatically. The collected raw data is processed using mathematical and statistical theories to eliminate unwanted data and convert it into usable data.

Keywords: search engines, information extraction, agent system

Procedia PDF Downloads 430
24886 New York’s Heat Pump Mandate: Doubling Annual Heating Costs to Achieve a 13% Reduction in New York’s CO₂ Gas Emissions

Authors: William Burdick

Abstract:

Man-made climate change is an existential threat that must be mitigated at the earliest opportunity. The role of government in climate change mitigation is enacting and enforcing law and policy to effect substantial reductions in greenhouse gases, in the short and long term, without substantial increases in the cost of energy. To be optimally effective, those laws and policies must be established and enforced based on peer-reviewed evidence and scientific facts, and result in substantial outcomes in years, not decades. Over the next fifty years, New York’s 2019 Climate Change and Community Protection Act and 2021 All-Electric Building Act, which mandate replacing natural gas heating systems with heat pumps, will immediately double annual heating costs and, by 2075, yield less than a 16.2% reduction in CO₂ emissions from heating systems in new housing units and less than a 13% reduction in total CO₂ emissions, while adding $40B in cumulative additional heating cost compared to natural gas fueled heating systems.

Keywords: climate change, mandate, heat pump, natural gas

Procedia PDF Downloads 70
24885 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography

Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya

Abstract:

In today’s era, data security is an important concern and one of the most demanding issues, because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques that are used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it; steganography, however, hides that data in some cover file so that the presence of communication is hidden. This paper presents the implementation of the Ron Rivest, Adi Shamir, and Leonard Adleman (RSA) algorithm with image and audio steganography and of the Data Encryption Standard (DES) algorithm with image and audio steganography. The coding for both algorithms has been done using MATLAB, and it is observed that the combined techniques perform better than the individual techniques. The risk of unauthorized access is alleviated to a certain extent by using these techniques. These techniques could be used in banks, intelligence agencies such as RAW, etc., where highly confidential data is transferred. Finally, comparisons of the two techniques are also given in tabular form.
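
A rough sketch of the combined approach, written in Python rather than the paper's MATLAB, is shown below: a message is encrypted with textbook RSA using tiny, insecure keys, and the ciphertext is then hidden in the least significant bits of an image. The key sizes and the LSB embedding details are assumptions made only for illustration.

```python
# Illustrative sketch (Python rather than the paper's MATLAB): encrypt a
# message with textbook RSA using tiny, insecure keys, then hide the ciphertext
# in the least significant bits of an image. Parameters are toy assumptions.
import numpy as np

# --- textbook RSA with toy parameters ---
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)       # n = 3233, phi = 3120
e = 17
d = pow(e, -1, phi)                     # 2753, the private exponent

def rsa_encrypt(msg: bytes):
    return [pow(b, e, n) for b in msg]  # one block per byte (toy only)

def rsa_decrypt(blocks):
    return bytes(pow(c, d, n) for c in blocks)

# --- LSB steganography: hide 16-bit ciphertext blocks in image pixels ---
def embed(image: np.ndarray, blocks):
    flat = image.flatten().copy()
    bits = [(c >> i) & 1 for c in blocks for i in range(16)]
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=np.uint8)
    return flat.reshape(image.shape), len(blocks)

def extract(stego: np.ndarray, n_blocks):
    bits = stego.flatten()[:n_blocks * 16] & 1
    return [int(sum(int(b) << i for i, b in enumerate(bits[k*16:(k+1)*16])))
            for k in range(n_blocks)]

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
cipher = rsa_encrypt(b"secret")
stego, count = embed(cover, cipher)
recovered = rsa_decrypt(extract(stego, count))
print(recovered)                        # b'secret'
```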

Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography

Procedia PDF Downloads 290
24884 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India

Authors: Anushtha Saxena

Abstract:

This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is collecting, storing, and analysing consumers’ data in order to further use the data that is generated for profits, revenue, etc. Data monetisation enables e-commerce companies to gain better business opportunities, innovative products and services, a competitive edge over others, and to generate millions in revenue. This paper analyses the issues and challenges that arise from the process of data monetisation. Some of the issues highlighted in the paper pertain to the right to privacy and the protection of the data of e-commerce consumers. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India. The Supreme Court of India recognized the right to privacy as a fundamental right in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data they collect and store for economic gain and monetisation, as well as the issue of data protection. The researcher has mainly focused on e-commerce companies such as online shopping websites to analyse the legal issue of data monetisation. In the Internet of Things and the digital age, people have shifted to online shopping as it is convenient, easy, flexible, comfortable, time-saving, etc. But at the same time, e-commerce companies store the data of their consumers and use it by selling it to third parties or by generating more data from the data stored with them. This violates individuals’ right to privacy because consumers do not know anything about what happens to the data they give online. Many times, data is also collected without the consent of individuals. The data, whether structured or unstructured, is used by analytics for monetisation. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers concerning their data and how it is used by e-commerce businesses to monetise and generate revenue from that data. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill can make a huge impact on data monetisation. This paper also aims to study the European Union General Data Protection Regulation and how this legislation can be helpful in the Indian scenario concerning e-commerce businesses with respect to data monetisation.

Keywords: data monetization, e-commerce companies, regulatory framework, GDPR

Procedia PDF Downloads 120
24883 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
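
The simulation idea can be sketched as follows: corrupt a fraction of the training labels, train a linear-kernel SVM on the noisy labels, and measure the AUC against clean test labels. The data here are synthetic, not the clinical delirium corpus, and the exact results will differ.

```python
# Sketch of the simulation idea: flip a fraction of the training labels, train
# a linear SVM on the noisy labels, and measure AUC against clean test labels.
# Synthetic data only -- not the clinical delirium corpus.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
for error_rate in (0.0, 0.2, 0.4):
    noisy = y_tr.copy()
    flip = rng.random(len(noisy)) < error_rate        # weakly supervised labels
    noisy[flip] = 1 - noisy[flip]
    model = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, noisy)
    auc = roc_auc_score(y_te, model.decision_function(X_te))
    print(f"label error rate {error_rate:.0%}: test AUC = {auc:.3f}")
```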

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 199
24882 Transforming Healthcare Data Privacy: Integrating Blockchain with Zero-Knowledge Proofs and Cryptographic Security

Authors: Kenneth Harper

Abstract:

Blockchain technology presents solutions for managing healthcare data, addressing critical challenges in privacy, integrity, and access. This paper explores how privacy-preserving technologies, such as zero-knowledge proofs (ZKPs) and homomorphic encryption (HE), enhance decentralized healthcare platforms by enabling secure computations and patient data protection. We examine the mathematical foundations of these methods, their practical applications, and how they meet the evolving demands of healthcare data security. Using real-world examples, this research highlights industry-leading implementations and offers a roadmap for future applications in secure, decentralized healthcare ecosystems.

Keywords: blockchain, cryptography, data privacy, decentralized data management, differential privacy, healthcare, healthcare data security, homomorphic encryption, privacy-preserving technologies, secure computations, zero-knowledge proofs

Procedia PDF Downloads 18
24881 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads

Authors: Dražen Cvitanić, Biljana Maljković

Abstract:

This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed using continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km long section of a state road. The data were first used to determine maximum operating speeds on tangents and to compare them with speeds in the middle of tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. After that, operating speed models for tangent sections were developed. There was no significant difference between models developed using speed data in the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have a higher coefficient of determination than models developed on spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.

Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency

Procedia PDF Downloads 452
24880 Real Energy Performance Study of Large-Scale Solar Water Heater by Using Remote Monitoring

Authors: F. Sahnoune, M. Belhamel, M. Zelmat

Abstract:

Solar thermal systems available today provide reliability, efficiency and significant environmental benefits. In housing, they can satisfy the hot water demand and reduce energy bills by 60% or more. Additionally, collective systems or large-scale solar thermal systems are increasingly used under different conditions for hot water applications and space heating in hotels and multi-family homes, hospitals, nursing homes and sports halls, as well as in commercial and industrial buildings. However, in situ real performance data for collective solar water heating systems have not been extensively reported. This paper focuses on the study of the real energy performance of a collective solar water heating system using remote monitoring under Algerian climatic conditions. This is to ensure proper operation of the system at all times, to determine the system performance, and to check to what extent the solar performance guarantee can be achieved. The measurements are performed on an active indirect heating system with 12 m² of flat plate collector surface, installed in Algiers and equipped with various sensors. The sensors transmit measurements to a local station which controls the pumps, valves, electrical auxiliaries, etc. The simulation of the installation was developed using the software SOLO 2000. The system provides a yearly solar yield of 6277.5 kWh for an estimated annual need of 7896 kWh; the yearly average solar cover rate amounts to 79.5%. The productivity is of the order of 523.13 kWh/m²/year. Simulation results are compared to measured results and to the guaranteed solar performance. The remote monitoring shows that 90% of the expected solar results can easily be guaranteed over a long period. Furthermore, the installed remote monitoring unit was able to detect some malfunctions. It follows that remote monitoring is an important tool in the energy management of building equipment.
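
The quoted figures are mutually consistent, as the short calculation below shows: the cover rate is the yearly yield divided by the annual need, and the productivity is the yield divided by the collector area.

```python
# The quoted performance figures are mutually consistent: the solar cover rate
# is the yearly yield divided by the annual need, and the productivity is the
# yield divided by the collector area.
solar_yield_kwh = 6277.5      # yearly solar yield
annual_need_kwh = 7896.0      # estimated annual hot-water need
collector_area_m2 = 12.0      # flat plate collector surface

cover_rate = solar_yield_kwh / annual_need_kwh       # ~0.795, i.e. 79.5 %
productivity = solar_yield_kwh / collector_area_m2   # 523.125 ~ 523.13 kWh/m2/year
print(f"solar cover rate: {cover_rate * 100:.1f} %")
print(f"productivity: {productivity} kWh/m2/year")
```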

Keywords: large-scale solar water heater, real energy performance, remote monitoring, solar performance guarantee, tool to promote solar water heater

Procedia PDF Downloads 243
24879 A Neural Network Based Clustering Approach for Imputing Multivariate Values in Big Data

Authors: S. Nickolas, Shobha K.

Abstract:

The treatment of incomplete data is an important step in data pre-processing. Missing values create a noisy environment in all applications and are an unavoidable problem in big data management and analysis. Numerous techniques, such as discarding rows with missing values, mean imputation, expectation maximization, neural networks with evolutionary algorithms or optimization techniques, and hot-deck imputation, have been introduced by researchers for handling missing data. Among these, imputation techniques play a positive role in filling in missing values when it is necessary to use all records in the data and not to discard records with missing values. In this paper we propose a novel artificial neural network based clustering algorithm, Adaptive Resonance Theory-2 (ART2), for the imputation of missing values in mixed-attribute data sets. ART2 can recognize learned models quickly and adapt to new objects rapidly. It carries out model-based clustering by using competitive learning and a self-stabilizing mechanism in a dynamic environment without supervision. The proposed approach not only imputes the missing values but also provides information about handling outliers.

Keywords: ART2, data imputation, clustering, missing data, neural network, pre-processing

Procedia PDF Downloads 274
24878 The Effect That the Data Assimilation of Qinghai-Tibet Plateau Has on a Precipitation Forecast

Authors: Ruixia Liu

Abstract:

The Qinghai-Tibet Plateau has an important influence on the precipitation of the regions downstream of it. Remote sensing data have their own advantages, and a numerical prediction model that assimilates RS data will perform better than one that does not. We obtained the assimilation data of MHS, terrestrial and sounding observations from GSI, introduced the result into WRF, and then obtained the RH and precipitation forecasts. By comparing the 1 h, 6 h, 12 h, and 24 h results, we found that assimilating MHS, terrestrial and sounding data made the forecast of precipitation amount, area and center more accurate. Analyzing the differences in the initial field, we found that data assimilation over the Qinghai-Tibet Plateau influences the forecast for its downstream regions by affecting the initial temperature and RH.

Keywords: Qinghai-Tibet Plateau, precipitation, data assimilation, GSI

Procedia PDF Downloads 234
24877 A Simulation of Land Market through Agent-Based Modeling

Authors: Zilin Zhang

Abstract:

Agent-based simulation has become a popular method for exploring the behavior of all kinds of urban systems. The city is clearly viewed as such a system. Many urban evolution processes, such as the development or the transaction of a piece of land, can be modeled with a set of rules. Such modeling approaches can be used to gain insight into urban development and land market transactions in the real world. Our work contributes to this type of research by modeling the transactions of land in a city and its surrounding suburbs. By replicating the demand and supply needs in the land market, we are able to demonstrate the different transaction patterns in three types of residential areas - downtown, city-suburban, and further suburban areas. In addition, we are also able to compare the vital roles that different activation conditions play in generating the various transaction patterns of the land market at the macro level. We use this simulation to loosely test our hypotheses about the nature of activation regimes by replicating the ZI traders’ model. In the end, we hope our analytical results can be useful for city planners and policymakers to develop rational city plans and policies for shaping sustainable urban development.
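
A toy agent-based land-market loop in the spirit described above might look like the sketch below; the area types follow the paper, but the price ranges, budgets, activation probability and matching rule are invented for illustration and are not the paper's model.

```python
# Toy agent-based land-market loop: buyers with budgets meet sellers with
# asking prices in three area types, and a transaction happens when an
# activated buyer can afford the cheapest available parcel. All rules, price
# ranges and the activation condition are invented for illustration only.
import random

random.seed(1)
AREAS = {"downtown": (800, 1200), "city-suburban": (400, 700), "further-suburban": (150, 350)}

def make_agents(n_buyers=60, n_parcels_per_area=30):
    buyers = [{"budget": random.uniform(100, 1300)} for _ in range(n_buyers)]
    parcels = [{"area": area, "price": random.uniform(lo, hi), "sold": False}
               for area, (lo, hi) in AREAS.items()
               for _ in range(n_parcels_per_area)]
    return buyers, parcels

def step(buyers, parcels, activation_prob=0.5):
    """One round: each activated buyer tries to buy the cheapest affordable parcel."""
    sales = []
    for buyer in buyers:
        if random.random() > activation_prob:       # activation condition
            continue
        affordable = [p for p in parcels if not p["sold"] and p["price"] <= buyer["budget"]]
        if affordable:
            parcel = min(affordable, key=lambda p: p["price"])
            parcel["sold"] = True
            sales.append(parcel["area"])
    return sales

buyers, parcels = make_agents()
transactions = {area: 0 for area in AREAS}
for _ in range(20):                                  # 20 simulation rounds
    for area in step(buyers, parcels):
        transactions[area] += 1
print(transactions)   # transaction counts per area type at the macro level
```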

Keywords: simulation, agent-based modeling, housing market, city

Procedia PDF Downloads 89
24876 Positive Affect, Negative Affect, Organizational and Motivational Factor on the Acceptance of Big Data Technologies

Authors: Sook Ching Yee, Angela Siew Hoong Lee

Abstract:

Big data technologies have become a trend for exploiting business opportunities and providing valuable business insights through the analysis of big data. However, there are still many organizations that have yet to adopt big data technologies, especially small and medium organizations (SMEs). This study uses the technology acceptance model (TAM) to look into several constructs in the TAM and additional constructs, which are positive affect, negative affect, organizational factor and motivational factor. The conceptual model proposed in the study will be tested on the relationship and influence of positive affect, negative affect, organizational factor and motivational factor on the intention to use big data technologies to produce an outcome. Empirical research is used in this study, with a survey conducted to collect data.

Keywords: big data technologies, motivational factor, negative affect, organizational factor, positive affect, technology acceptance model (TAM)

Procedia PDF Downloads 362
24875 Big Data Analysis with Rhipe

Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim

Abstract:

Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data using a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe with various sizes of actual data. Experimental results comparing the performance of Rhipe with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages, owing to parallel processing in which the number of map tasks increases as the size of the data increases. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes for configuring the Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and that the computing speed of the fully-distributed mode increased as the number of data nodes increases.
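
The idea behind this kind of distributed regression can be sketched (in Python rather than R/Hadoop) as a map-reduce over sufficient statistics: each "map task" computes X'X and X'y on its own chunk, and a "reduce" step sums the partial results and solves the normal equations. The synthetic data and chunking are assumptions for illustration; this is not Rhipe itself.

```python
# Map-reduce style multiple regression, sketched in Python rather than
# R/Hadoop: each "map task" computes X'X and X'y on its own chunk and a
# "reduce" step sums the partial results and solves the normal equations.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])   # intercept + 4 predictors
beta_true = np.array([2.0, 1.5, -0.5, 0.0, 3.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def map_task(X_chunk, y_chunk):
    """Per-chunk sufficient statistics for least squares."""
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

def reduce_tasks(partials):
    """Sum partial X'X and X'y, then solve the normal equations."""
    XtX = sum(part[0] for part in partials)
    Xty = sum(part[1] for part in partials)
    return np.linalg.solve(XtX, Xty)

chunks = 8                                    # stand-ins for Hadoop map tasks
partials = [map_task(Xc, yc)
            for Xc, yc in zip(np.array_split(X, chunks), np.array_split(y, chunks))]
beta_hat = reduce_tasks(partials)
print(np.round(beta_hat, 3))                  # close to beta_true
```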

Keywords: big data, Hadoop, Parallel regression analysis, R, Rhipe

Procedia PDF Downloads 497
24874 Security in Resource-Constrained Networks: Lightweight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. This transmitted data can easily be compromised, bearing in mind the limited processing power of these nodes and the need for data consistency; there is an ongoing discussion of how to address secure data transfer or transmission in real time. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising the throughput of the network, by utilizing the battery resources available in the sensor node. Our methodology takes advantage of the efficiency of the Z-MAC protocol, and it provides a unique key through a sharing mechanism that uses the neighbor node’s MAC address. We present a lightweight data integrity layer which is embedded in the Z-MAC protocol, and show that our protocol performs better than Z-MAC when different attack scenarios are introduced.
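
The two ideas named above, a pairwise key derived from the neighbor node's MAC address and a lightweight integrity tag on each frame, can be sketched as follows; the derivation recipe, the pre-shared network secret and the truncated 8-byte tag are assumptions for illustration, not the actual Z-MAC modification.

```python
# Sketch of the two ideas named in the abstract: derive a pairwise key from the
# two nodes' MAC addresses plus a network secret, and attach a lightweight
# integrity tag to every frame. The derivation recipe, the shared secret and
# the truncated 8-byte tag are assumptions, not the actual Z-MAC modification.
import hashlib
import hmac

NETWORK_SECRET = b"pre-shared network secret"   # assumed to be provisioned on all nodes

def pairwise_key(own_mac: str, neighbor_mac: str) -> bytes:
    """Same key on both ends: sort the MACs so the order does not matter."""
    macs = "|".join(sorted([own_mac, neighbor_mac])).encode()
    return hashlib.sha256(NETWORK_SECRET + macs).digest()

def tag(key: bytes, frame: bytes) -> bytes:
    """Lightweight integrity tag: truncated HMAC-SHA256 over the frame."""
    return hmac.new(key, frame, hashlib.sha256).digest()[:8]

def verify(key: bytes, frame: bytes, received_tag: bytes) -> bool:
    return hmac.compare_digest(tag(key, frame), received_tag)

# Node A sends a reading to neighbor B over the hop
key_ab = pairwise_key("00:11:22:33:44:55", "66:77:88:99:aa:bb")
frame = b"temp=23.4;node=A;seq=17"
t = tag(key_ab, frame)

# Node B derives the same key from the same pair of MAC addresses
key_ba = pairwise_key("66:77:88:99:aa:bb", "00:11:22:33:44:55")
print(verify(key_ba, frame, t))                    # True
print(verify(key_ba, frame + b" tampered", t))     # False
```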

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 144
24873 Elderly for Elderly: The Role of Community Volunteers, a Case Study from the Great East Japan Earthquake and Tsunami in Kesennuma, Japan

Authors: Kensuke Otsuyama

Abstract:

The United Nations World Conference on Disaster Risk Reduction was held in Sendai, Japan, in 2015, and priorities for action until 2030 were adopted for the next 15 years. Although one of these priorities is to ‘build back better’, there is neither a consensus definition of better recovery nor indicators to measure it. However, the community is nowadays considered a key driver of recovery, and participation is a key word for effective recovery. In order to understand more about participatory community recovery, the author investigated recovery from the Great East Japan Earthquake and Tsunami (GEJET) in Kesennuma, a severely affected city. The research sought to: 1) identify the elements that contribute to better recovery at the community level, and 2) analyze the role of community volunteers in disaster risk reduction for better recovery. A Participatory Community Recovery Index (PCRI) was created as a tool to measure community recovery. The index adopts seven primary indicators and 20 tertiary indicators, covering socio-economic aspects, housing, health, environment, self-organization, transformation, and institutions. The index was applied to nine districts in Kesennuma city. Secondary data and primary data from questionnaire surveys with local residents’ organization leaders and interviews with crisis management department officials in the city government were also obtained. The indicator results were transformed into scores from 1 to 5, and the results were shown for each district. Based on the results of the PCRI, it was found that the Local Social Welfare Council played an important role in facilitating better recovery, enhancing community volunteer involvement by enabling elderly residents to initiate local volunteer work for more severely affected elderly people living alone. Volunteering for the elderly by the elderly played a crucial role in strengthening community bonds in Kesennuma. In this research, the potential of community volunteers and their inter-linkage with DRR activities are discussed.

Keywords: recovery, participation, the great East Japan earthquake and tsunami, community volunteers

Procedia PDF Downloads 266