Search results for: income data
24939 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach
Authors: Sarisa Pinkham, Kanyarat Bussaban
Abstract:
The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period January to December 2013 were used as the data in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.
Keywords: daily rainfall, image processing, approximation, pixel value data
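The abstract does not specify how map pixels are converted to rainfall amounts, so the following is only a minimal sketch of the general idea, assuming a hypothetical colour-legend lookup table and a small set of gauge observations against which the RMSE is computed; none of the numbers come from the paper.

```python
import numpy as np

# Hypothetical legend mapping map colours (R, G, B) to rainfall in mm.
LEGEND = {
    (200, 200, 255): 5.0,   # light blue: light rain
    (100, 100, 255): 20.0,  # blue: moderate rain
    (0, 0, 180): 50.0,      # dark blue: heavy rain
}

def nearest_legend_value(pixel):
    """Return the rainfall amount of the legend colour closest to the pixel."""
    colours = np.array(list(LEGEND.keys()), dtype=float)
    dist = np.linalg.norm(colours - np.asarray(pixel, dtype=float), axis=1)
    return list(LEGEND.values())[int(dist.argmin())]

def estimate_rainfall(image, locations):
    """Estimate rainfall at (row, col) station locations from a rainfall map image."""
    return np.array([nearest_legend_value(image[r, c]) for r, c in locations])

# Toy example: a 2x2 RGB "map" and two gauge observations (made-up values).
image = np.array([[[200, 200, 255], [100, 100, 255]],
                  [[0, 0, 180], [200, 200, 255]]], dtype=np.uint8)
observed = np.array([6.1, 48.2])                    # gauge rainfall in mm
estimated = estimate_rainfall(image, [(0, 0), (1, 0)])
rmse = np.sqrt(np.mean((estimated - observed) ** 2))
print(f"RMSE = {rmse:.3f} mm")
```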
Procedia PDF Downloads 386
24938 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management
Authors: Kenneth Harper
Abstract:
The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks using Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions such as IPFS and Filecoin, coupled with decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and Zero-Knowledge Proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings from this exercise demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency.
Keywords: blockchain, Cosmos SDK, decentralized data platform, IPFS, ZK-Rollups
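The abstract mentions cryptographic hashing for data lineage without giving implementation details; the snippet below is only a minimal sketch of that general idea, hash-chaining records so that tampering with an earlier record invalidates every later lineage entry. The record fields and the use of SHA-256 are assumptions for illustration, not details taken from the platform described.

```python
import hashlib
import json

def lineage_entry(record: dict, prev_hash: str) -> dict:
    """Create a lineage entry whose hash covers the record and the previous entry."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(entries) -> bool:
    """Recompute every hash and check that each entry points at its predecessor."""
    prev = "0" * 64
    for e in entries:
        payload = json.dumps({"record": e["record"], "prev": e["prev"]}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

chain, prev = [], "0" * 64
for rec in [{"id": 1, "step": "ingest"}, {"id": 2, "step": "transform"}]:
    entry = lineage_entry(rec, prev)
    chain.append(entry)
    prev = entry["hash"]

print(verify_chain(chain))           # True
chain[0]["record"]["step"] = "edit"  # tamper with an earlier record
print(verify_chain(chain))           # False
```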
Procedia PDF Downloads 24
24937 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data
Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri
Abstract:
In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate different distributions of output measurements from several dynamical systems. By processing the variance of the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as their variance, on identification, and the limitations of this approach, are explained.
Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification
Procedia PDF Downloads 512
24936 Analysis of Eating Pattern in Adolescent and Young Adult College Students in Pune City
Authors: Sangeeta Dhamdhere, G. V. P. Rao
Abstract:
Adolescent students need more energy, proteins, vitamins, and minerals because they are growing to maturity at this age. A balanced diet plays an important role in their wellbeing and health. The study showed that 48% of the students were not within the normal range for height and weight: 26% were found to be underweight, 18% overweight, and 4% obese. The annual income group of the underweight students was below 7 lakh, and more than 90% of these students were staying at their homes. The researcher has analysed the eating pattern of these students and concluded that there is a need for awareness among parents and students about balanced diet and nutrition. The present research will help students improve their dietary habits and health, increase the number of attendees, and achieve academic excellence.
Keywords: balanced diet, nutrition, malnutrition, obesity, health education
Procedia PDF Downloads 67
24935 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R
Authors: Jaya Mathew
Abstract:
Many organizations are faced with the challenge of how to analyze and build machine learning models using their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premises or in the cloud, users can leverage the power of R at scale without having to move their data around.
Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R
Procedia PDF Downloads 376
24934 Effects of Macroprudential Policies on Bank Lending and Risks
Authors: Stefanie Behncke
Abstract:
This paper analyses the effects of different macroprudential policy measures that have recently been implemented in Switzerland. Among them are the activation and subsequent increase of the countercyclical capital buffer (CCB) and a tightening of loan-to-value (LTV) requirements. These measures were introduced to limit systemic risks in the Swiss mortgage and real estate markets. They were meant to affect mortgage growth, mortgage risks, and banks’ capital buffers. Evaluation of their quantitative effects provides insights for Swiss policymakers when reassessing their policy. It is also informative for policymakers in other countries who plan to introduce macroprudential instruments. We estimate the effects of the different macroprudential measures with a differences-in-differences estimator. Banks differ with respect to the relative importance of mortgages in their portfolio, their riskiness, and their capital buffers. Thus, some banks were more affected by the CCB, while others were more affected by the LTV requirements. Our analysis is made possible by an unusually informative bank panel data set. It combines data on newly issued mortgage loans and quantitative risk indicators such as LTV and loan-to-income (LTI) ratios with supervisory information on banks’ capital and liquidity situation and balance sheets. Our results suggest that the LTV cap of 90% was most effective. The proportion of new mortgages with a high LTV ratio was significantly reduced. This result applies not only to the 90% LTV threshold but also to other threshold values (e.g. 80%, 75%), suggesting that the entire upper part of the LTV distribution was affected. Other outcomes, such as the LTI distribution and the growth rates of mortgages and other credits, were not significantly affected. Regarding the activation and the increase of the CCB, we do not find any significant effects: neither LTV/LTI risk parameters nor mortgage and other credit growth rates were significantly reduced. This result may reflect that the size of the CCB (1% of relevant residential real estate risk-weighted assets at activation and 2% at the increase) was not high enough to trigger a distinct reaction between the banks most likely to be affected by the CCB and those serving as controls. Still, it might have been effective in increasing the resilience of the overall banking system. From a policy perspective, these results suggest that targeted macroprudential policy measures can contribute to financial stability. In line with findings by others, caps on LTV reduced risk taking in Switzerland. To fully assess the effectiveness of the CCB, further experience is needed.
Keywords: banks, financial stability, macroprudential policy, mortgages
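The study's estimator is a differences-in-differences comparison of banks that were more and less exposed to each measure. The snippet below is only a minimal sketch of that design on simulated data; the exposure variable, outcome, and effect size are made up for illustration, not drawn from the confidential supervisory panel used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_banks, n_periods, policy_period = 60, 8, 4   # measure takes effect in period 4

df = pd.DataFrame([
    {"bank": b, "period": t,
     "treated": int(b < n_banks // 2),         # banks more exposed to the measure
     "post": int(t >= policy_period)}
    for b in range(n_banks) for t in range(n_periods)
])
# Simulated share of new mortgages with LTV > 90%; the policy lowers it for treated banks.
df["high_ltv_share"] = (0.30 - 0.08 * df["treated"] * df["post"]
                        + rng.normal(0, 0.03, len(df)))

# Differences-in-differences: the coefficient on treated:post is the policy effect.
did = smf.ols("high_ltv_share ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["bank"]})
print(did.params["treated:post"])              # close to the simulated -0.08
```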
Procedia PDF Downloads 360
24933 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders
Authors: Sven Gehrke, Johannes Ruhland
Abstract:
Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. The paper at hand presents different aspects of the concept of trust and describes the information asymmetry among the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors relating to trust, problematic aspects of the current approach are examined through interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and point to potential research areas.
Keywords: trust, data mining, CRISP DM, stakeholder management
Procedia PDF Downloads 93
24932 Relationships among Sleep Quality and Quality of Life in Oncology Nurses
Authors: Yi-Fung Lin, Pei-Chen Tsai
Abstract:
Background: The hospital healthcare team provides 24-hour patient care, and shift work is therefore inevitable in the nursing field. There is increased awareness that shift work affecting circadian rhythms may cause various health problems, especially poor sleep quality, which in turn may harm quality of life. Purpose: The purpose of this study was to investigate the influences of demographic characteristics on nurses’ sleep quality and quality of life, and the relationship between these predictors and nurses’ quality of life. Methods: A cross-sectional, descriptive correlational study was conducted with a purposive sample of 520 female nurses in a medical center in northern Taiwan from July to September 2014. Data were collected with structured questionnaires using the Chinese version of the Pittsburgh Sleep Quality Index (PSQI) and the World Health Organization Quality of Life instrument (WHOQOL-BREF). Outcomes: The main results include: 1) Irregular menstruation, not exercising regularly, and higher daily caffeine consumption have negative impacts on sleep quality. 2) Younger age, fewer children, low education level, low annual income, irregular menstruation, pain during menstrual cycles, not exercising regularly, constipation, and poor sleep quality all have negative impacts on quality of life. 3) The odds ratio of sleep disturbance between 12-hour shifts and 8-hour shifts was 2.26, but there was no significant difference in their quality of life scores. Conclusion: This study showed that there is a strong correlation between oncology nurses’ sleep quality and quality of life. Sleep quality is a significant predictor of quality of life in oncology nurses.
Keywords: oncology nurses, sleep quality, quality of life, shift-work
Procedia PDF Downloads 157
24931 Wireless Transmission of Big Data Using Novel Secure Algorithm
Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha
Abstract:
This paper presents a novel algorithm for secure, reliable and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay and destination nodes. Big data has to be transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmission region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using a binary-based evaluation. If the transmission is secure, the two-hop transmission of big data resumes; otherwise, the attackers are countered by the cooperative jamming scheme and the data is then sent over the two-hop transmission.
Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance
Procedia PDF Downloads 486
24930 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These devices make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data. The data is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs to accomplish this, but sometimes neglect the effect some of these techniques may have on database performance. One of the techniques generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, it keeps the database busy and locked for the period of time that the processing takes place. Because of this, it decreases the overall performance of the database server and therefore the system’s performance. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage, and processing time.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
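A minimal sketch of the three-step pull-process-push idea using SQLite from the standard library; the table name, the decoding step, and the batched update are hypothetical placeholders rather than the authors' actual scripts. The point of the split is that the database is touched only briefly at the start and end, instead of being held busy while each row is decoded.

```python
import sqlite3

def pull_process_push(db_path="telemetry.db"):
    con = sqlite3.connect(db_path)

    # Step 1: pull the raw rows into an in-memory list, then leave the database alone.
    rows = con.execute("SELECT id, raw FROM readings WHERE processed = 0").fetchall()

    # Step 2: process entirely in memory (hypothetical decoding of a raw payload).
    decoded = [(raw.strip().upper(), row_id) for row_id, raw in rows]

    # Step 3: push the results back in one short batched transaction.
    with con:
        con.executemany(
            "UPDATE readings SET decoded = ?, processed = 1 WHERE id = ?", decoded)
    con.close()
```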
Procedia PDF Downloads 243
24929 Extreme Temperature Forecast in Mbonge, Cameroon Through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution
Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph
Abstract:
In this paper, temperature extremes are forecast by employing the block maxima method of the generalized extreme value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (CDC). By considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, return level analysis is carried out, and it was found that in the stationary model the return values are constant over time for the raw data, while for the simulated data the return values show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend with an upper bound. This clearly shows that although temperatures in the tropics show signs of increasing in the future, there is a maximum temperature beyond which there is no exceedance. The results of this paper are vital in agricultural and environmental research.
Keywords: forecasting, generalized extreme value (GEV), meteorology, return level
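A minimal sketch of block-maxima return level estimation with SciPy on synthetic annual maxima (the temperature series below is made up; the actual CDC data are not reproduced here). The T-year return level is simply the (1 - 1/T) quantile of the fitted GEV distribution.

```python
from scipy.stats import genextreme

# Synthetic annual maximum temperatures (degrees C) standing in for the CDC series.
annual_max = genextreme.rvs(0.1, loc=33.0, scale=1.2, size=40, random_state=42)

# Fit a stationary GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(annual_max)

# Return level: the value exceeded on average once every T years.
for T in (10, 50, 100):
    level = genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.2f} degrees C")
```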
Procedia PDF Downloads 477
24928 Optimization Model for Support Decision for Maximizing Production of Mixed Fruit Tree Farms
Authors: Andrés I. Ávila, Patricia Aros, César San Martín, Elizabeth Kehr, Yovana Leal
Abstract:
We consider a linear programming model to help farmers decide whether it is convenient to choose among three kinds of export fruit for their future investment. We consider area, investment, water, minimum productivity unit, and harvest restrictions, and a monthly based model to compute the average income over five years. Conditions on the field, such as area, water availability, and initial investment, are also required. Using Chilean costs and the dollar-peso exchange rate, we can simulate several scenarios to understand the possible risks associated with this market.
Keywords: mixed integer problem, fruit production, support decision model, fruit tree farms
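A minimal sketch of the kind of linear program described, using scipy.optimize.linprog; all coefficients (income per hectare, water use, investment cost, and the land, water, and budget limits) are made-up placeholders, not the Chilean figures used in the paper.

```python
from scipy.optimize import linprog

# Decision variables: hectares planted of three hypothetical export fruits.
income     = [9000, 12000, 7500]    # average yearly income per hectare (assumed)
water_use  = [6000, 9000, 4000]     # m^3 of water per hectare per year (assumed)
investment = [15000, 22000, 9000]   # initial investment per hectare (assumed)

total_area, total_water, budget = 20.0, 140000.0, 350000.0

# linprog minimizes, so maximize income by minimizing its negative.
res = linprog(
    c=[-v for v in income],
    A_ub=[[1, 1, 1], water_use, investment],
    b_ub=[total_area, total_water, budget],
    bounds=[(0, None)] * 3,
    method="highs",
)
print("hectares per fruit:", res.x, "expected yearly income:", -res.fun)
```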
Procedia PDF Downloads 455
24927 Impact of Stack Caches: Locality Awareness and Cost Effectiveness
Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang
Abstract:
Treating data based on its location in memory has received much attention in recent years due to its different properties, which offer important aspects for cache utilization. Stack data and non-stack data may interfere with each other’s locality in the data cache. One of the important aspects of stack data is that it has high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data separate in different caches. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially when over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved using 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with an added 1 KB stack cache improves by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
Keywords: hit rate, locality of program, stack cache, stack data
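A minimal sketch of how a hit rate for a small cache can be estimated from an address trace; this is a generic set-associative LRU simulator run on a synthetic stack-like trace, not the authors' SimpleScalar configuration or benchmark traces.

```python
from collections import OrderedDict

def hit_rate(trace, size_bytes=2048, ways=1, line_size=32):
    """Simulate a set-associative LRU cache and return the hit rate for a trace."""
    n_sets = size_bytes // (line_size * ways)
    sets = [OrderedDict() for _ in range(n_sets)]
    hits = 0
    for addr in trace:
        line = addr // line_size
        s = sets[line % n_sets]
        if line in s:
            hits += 1
            s.move_to_end(line)        # refresh the LRU position
        else:
            if len(s) >= ways:
                s.popitem(last=False)  # evict the least recently used line
            s[line] = True
    return hits / len(trace)

# Synthetic stack-like trace: repeated accesses within a small, hot region.
trace = [0x7FFF0000 + (i % 512) * 4 for i in range(100_000)]
print(f"hit rate: {hit_rate(trace, size_bytes=2048, ways=1):.4f}")
```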
Procedia PDF Downloads 302
24926 Autonomic Threat Avoidance and Self-Healing in Database Management System
Authors: Wajahat Munir, Muhammad Haseeb, Adeel Anjum, Basit Raza, Ahmad Kamran Malik
Abstract:
Databases are the key components of software systems. Due to the exponential growth of data, there is concern that data should remain accurate and available. The data in databases is vulnerable to internal and external threats, especially when it contains sensitive data, as in medical or military applications. Whenever the data is changed with malicious intent, data analysis results may lead to disastrous decisions. Autonomic self-healing in computer systems is modeled on the autonomic system of the human body. In order to guarantee the accuracy and availability of data, we propose a technique which, on a priority basis, tries to prevent any malicious transaction from executing and, in case a malicious transaction does affect the system, heals the system in an isolated mode in such a way that system availability is not compromised. Using this autonomic system, the management cost and time of DBAs can be minimized. In the end, we test our model and present the findings.
Keywords: autonomic computing, self-healing, threat avoidance, security
Procedia PDF Downloads 503
24925 Information Extraction Based on Search Engine Results
Authors: Mohammed R. Elkobaisi, Abdelsalam Maatuk
Abstract:
Search engines are large-scale information retrieval tools for the Web that are currently freely available to all. This paper explains how to convert the raw results returned by search engines into useful information. This represents a new method for data gathering compared with traditional methods. Submitting queries for a large number of keywords manually takes considerable time and effort; hence, we develop a user interface program that searches automatically, taking multiple keywords at the same time, and leave this program to collect the wanted data automatically. The collected raw data is processed using mathematical and statistical theories to eliminate unwanted data and convert it into usable data.
Keywords: search engines, information extraction, agent system
Procedia PDF Downloads 427
24924 Developing Alternatives: Citizens Perspectives on Causes and Ramification of Political Conflict in Ivory Coast from 2002 - 2009
Authors: Suaka Yaro
Abstract:
This article provides an alternative examination of the causes and the ramifications of the Ivorian political conflict from 2002 to 2009. The researcher employed a constructivist epistemology and a qualitative study based upon fieldwork in different African cities, interviewing Ivorians outside and within Ivory Coast. A purposive sample of fourteen participants was selected, based on their involvement in the Ivorian conflict, and their experiences of the causes and effects of the conflict were drawn on for analysis. A qualitative methodology was used for the study. The data collection instruments were semi-structured interview questions, an open-ended semi-structured questionnaire, and documentary analysis. The perceptions of these participants on the causes, effects and possible solutions to the endemic conflict in their homeland hold key perspectives that have hitherto been ignored in the whole debate about the Ivorian political conflict and its legacies. Finally, from the synthesized findings of the investigation, the researcher concluded that the causes of the conflict were competition for scarce resources, bad governance, media incitement, xenophobia, an incessant political power struggle, and the proliferation of small firearms entering the country. The effects experienced during the conflict were human rights violations, destruction of property (including UN premises), and the displacement of people both internally and externally. Recommendations include the following: the government should strengthen relationships among different ethnic groups and help them adapt to the new challenges that confront democratic development in the country; the government should organise a South African-style Truth and Reconciliation Commission to revisit the horrors of the past in order to heal wounds and prevent a recurrence of the conflict; employment opportunities and other income-generating ventures for Ivorians should be created by the government by attracting local and foreign investors; the numerous rebels should be given special skills training in order for them to be able to live among the communities in Ivory Coast; and a government of national unity should be encouraged in situations like this.
Keywords: displaced, federalism, pluralism, identity politics, grievance, eligibility, greed
Procedia PDF Downloads 221
24923 Towards an African Model: A Survey of Social Enterprises in South Africa
Authors: Kerryn Krige, Kerrin Myers
Abstract:
Social entrepreneurship offers the opportunity to simultaneously address both social and economic inequality in South Africa. Its appeal across racial groups, its attractiveness to young people, its applicability in rural and peri-urban markets, and its acceleration in middle income, large-business economies suits the South African context. However, the potential to deliver much-needed developmental benefits has not been realised because the social entrepreneurship debate lacks evidence as to who social entrepreneurs are, their goals and operations and the socio-economic results they achieve. As a result, policy development has been stunted, and legislative barriers and red tape remain. Social entrepreneurs are isolated from the mainstream economy, and struggle to access funding because of limitations in legislative and organisational structures. The objective of the study is to strengthen the ecosystem for social entrepreneurship in South Africa by producing robust, policy-rich information from and about social enterprises currently in operation across the country. The study employs a quantitative survey methodology, using online and telephonic data collection methods. A purposive sample of 1000 social enterprises was included in the first large-scale study of social entrepreneurship in South Africa. The results offer deep insight into the characteristics of social enterprises; the activities they undertake and the markets they serve; their modes of operation and funding sources as well as key challenges and support systems. The results contribute towards developing a model of social enterprise in the African context.
Keywords: social enterprise, key characteristics, challenges and enablers, towards an African model
Procedia PDF Downloads 307
24922 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography
Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya
Abstract:
In today’s era, data security is an important concern and one of the most demanding issues because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it; steganography, however, hides the data in a cover file so that the very presence of communication is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography and of the Data Encryption Standard (DES) algorithm with image and audio steganography. The coding for both algorithms has been done using MATLAB, and it is observed that the combined techniques performed better than the individual techniques. The risk of unauthorized access is alleviated to a certain extent by using these techniques. These techniques could be used in banks, intelligence agencies such as RAW, etc., where highly confidential data is transferred. Finally, a comparison of the two techniques is also given in tabular form.
Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography
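A minimal sketch of the combination described: already-encrypted bytes are hidden in an image's least significant bits. The ciphertext here is just a placeholder (the output of any DES or RSA implementation could be substituted), and the single-bit LSB scheme is the simplest possible variant, not necessarily the one used in the paper, whose own code is in MATLAB.

```python
import numpy as np

def embed_lsb(image: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the payload bits in the least significant bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()
    if bits.size > flat.size:
        raise ValueError("cover image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the least significant bits."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(1).integers(0, 256, size=(64, 64), dtype=np.uint8)
ciphertext = b"\x8f\x1a\x52\xd3\x07\x99\xee\x41"  # stand-in for DES/RSA output
stego = embed_lsb(cover, ciphertext)
assert extract_lsb(stego, len(ciphertext)) == ciphertext
print("payload recovered; pixels changed:", int((stego != cover).sum()))
```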
Procedia PDF Downloads 287
24921 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India
Authors: Anushtha Saxena
Abstract:
This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collecting, storing, and analysing of consumers’ data in order to use the data that is generated for profits, revenue, etc. Data monetisation enables e-commerce companies to obtain better business opportunities, innovative products and services, a competitive edge over others, and millions in revenue. This paper analyses the issues and challenges that arise from the process of data monetisation. Some of the issues highlighted in the paper pertain to the right to privacy and the protection of the data of e-commerce consumers. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored through stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India. The Supreme Court of India recognized the right to privacy as a fundamental right in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data they collect and store for economic gain and monetisation, and the related issue of the protection of that data. The researcher has mainly focused on e-commerce companies such as online shopping websites to analyse the legal issue of data monetisation. In the Internet of Things and the digital age, people have shifted to online shopping as it is convenient, easy, flexible, comfortable, time-saving, etc. At the same time, e-commerce companies store the data of their consumers and use it by selling it to third parties or by generating more data from the data stored with them. This violates individuals’ right to privacy because consumers do not know what happens to their data once they provide it online. Many times, data is also collected without the consent of individuals. The data, whether structured or unstructured, is used by analytics for monetisation. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with respect to their data and how e-commerce businesses use it to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill could have a substantial impact on data monetisation. This paper also aims to study the European Union General Data Protection Regulation and how that legislation can be instructive in the Indian scenario concerning e-commerce businesses with respect to data monetisation.
Keywords: data monetization, e-commerce companies, regulatory framework, GDPR
Procedia PDF Downloads 119
24920 Agritourism Development Mode Study in Rural Area of Boshan China
Authors: Lingfei Sun
Abstract:
Based on the significant value of ecology, strategic planning for ecological civilization construction was mentioned at the 17th and 18th National Congresses of the Communist Party of China. How to generate economic value within the environmental capacity is not only an economic decision but also a political one. Boshan made full use of its ecology and transformed it into an inexhaustible green resource to benefit people, reflecting the sustainable value of a new agriculture development mode. The Strawberry Harvest Festival and Blueberry Harvest Festival hosted approximately 96,000 and 54,000 leisure tourists respectively in 2014. The Kiwi Harvest Festival in August 2014 attracted, on average, about 4,600 tourists per day, generating daily kiwi sales of 50,000 lbs and daily revenue of 3 million RMB (about 476,000 USD). The purpose of this study is to elaborate the modes of agritourism development by analyzing cases in the rural area of Boshan, China. Interviews with local government officers were used to uncover the operation modes of agritourism. Financial data were used to demonstrate the strength of government policy and the improvement in the income of rural people. The results indicate that there are mainly three types of modes: the Intensive Mode, the Model Mode, and the Mixed Mode, each supported by a case study. With the boom in tourism, the development of agritourism in Boshan relies on China’s agriculture-encouraging policy and the efforts of local government; meanwhile, large-scale cultivation and product differentiation are the crucial elements for the success of rural agritourism projects.
Keywords: agriculture, agritourism, economy, rural area development
Procedia PDF Downloads 307
24919 Whether Buffer Zone Community Forests’ Benefits Are Distributed Fairly to Low-Income Users: Reflection From the Buffer Zone Community Forests in Bardia National Park, Nepal
Authors: Keshav Raj Acharya, Thakur Silwal, Neelam C. Poudyal
Abstract:
Buffer zones, the peripheral areas around national parks and wildlife reserves, exist for the purpose of benefitting local inhabitants by providing forest products for their subsistence needs outside the protected areas. The forest area within the buffer zone has been managed as buffer zone community forest (BZCF) for the last 25 years, following the approval of the buffer zone management regulation in 1996. With a case study of selected BZCFs in Bardia National Park, this study aims to analyze whether the benefits provided by BZCFs are equally available to poor users compared with other socioeconomic classes of users. The findings are based on the analysis of cross-sectional data involving household surveys (n=305) and key informant interviews (n=10), as well as office records available at the offices of five different buffer zone community forest user groups (BZCFUGs). Results indicate that, despite provisions for subsidized rates for the poor, poor households were more deprived due to higher forest product prices, particularly timber prices, in the buffer zone. Evidence also indicates that, due to increased forest coverage, the incidence of wildlife damage has increased and has impacted the poor more, owing to their lack of land ownership as well as limited alternatives. Clear community forest management guidelines with equitable benefit sharing and compensatory mechanisms for users of the poor socioeconomic class have been identified as a way to increase the benefits to poor users in BZCFUGs.
Keywords: crop depredation, forest products, users, wellbeing ranking
Procedia PDF Downloads 47
24918 Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
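A minimal sketch of the kind of simulation experiment described: a linear SVM is trained on labels corrupted at a 40% error rate and evaluated against the clean labels. The data generator and model settings are illustrative choices, not the authors' delirium corpus or exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Corrupt 40% of the training labels to mimic weak supervision.
noisy = y_train.copy()
flip = rng.choice(len(noisy), size=int(0.4 * len(noisy)), replace=False)
noisy[flip] = 1 - noisy[flip]

model = LinearSVC(C=1.0, max_iter=5000).fit(X_train, noisy)

# The training labels are only 60% accurate, yet the clean-test performance
# of the linear model is typically well above that ceiling.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("test AUC:", roc_auc_score(y_test, model.decision_function(X_test)))
```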
Procedia PDF Downloads 197
24917 Transforming Healthcare Data Privacy: Integrating Blockchain with Zero-Knowledge Proofs and Cryptographic Security
Authors: Kenneth Harper
Abstract:
Blockchain technology presents solutions for managing healthcare data, addressing critical challenges in privacy, integrity, and access. This paper explores how privacy-preserving technologies, such as zero-knowledge proofs (ZKPs) and homomorphic encryption (HE), enhance decentralized healthcare platforms by enabling secure computations and patient data protection. It examines the mathematical foundations of these methods, their practical applications, and how they meet the evolving demands of healthcare data security. Using real-world examples, this research highlights industry-leading implementations and offers a roadmap for future applications in secure, decentralized healthcare ecosystems.
Keywords: blockchain, cryptography, data privacy, decentralized data management, differential privacy, healthcare, healthcare data security, homomorphic encryption, privacy-preserving technologies, secure computations, zero-knowledge proofs
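The abstract names homomorphic encryption without committing to a scheme; the snippet below is a toy, deliberately insecure sketch of the Paillier cryptosystem (tiny hard-coded primes, no padding) included only to illustrate the additive-homomorphic property such platforms rely on: computations can be carried out on encrypted values.

```python
import math
import random

# Toy Paillier keypair with tiny primes (illustration only, not secure).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)               # valid because L(g^lam mod n^2) = lam when g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(40), encrypt(2)
print(decrypt((c1 * c2) % n2))     # 42, computed without decrypting c1 or c2
```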
Procedia PDF Downloads 17
24916 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads
Authors: Dražen Cvitanić, Biljana Maljković
Abstract:
This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed from continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km long section of a state road. The data were first used to determine maximum operating speeds on tangents and to compare them with speeds in the middle of tangents, i.e., the spot speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. After that, operating speed models for tangent sections were developed. There was no significant difference between models developed using speed data from the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have a higher coefficient of determination than models developed on spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.
Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency
Procedia PDF Downloads 451
24915 The Pricing-Out Phenomenon in the U.S. Housing Market
Authors: Francesco Berald, Yunhui Zhao
Abstract:
The COVID-19 pandemic further extended the multi-year housing boom in advanced economies and emerging markets alike, against massive monetary easing during the pandemic. In this paper, we analyze the pricing-out phenomenon in the U.S. residential housing market due to higher house prices associated with monetary easing. We first set up a stylized general equilibrium model and show that although monetary easing decreases the mortgage payment burden, it raises house prices and lowers housing affordability for first-time homebuyers (through the initial housing wealth channel and the liquidity constraint channel that increases repeat buyers’ housing demand), and increases housing wealth inequality between first-time and repeat homebuyers. We then use U.S. household-level data to quantify the effect of the house price change on housing affordability relative to that of the interest rate change. We find evidence of the pricing-out effect for all homebuyers; moreover, we find that the pricing-out effect is stronger for first-time homebuyers than for repeat homebuyers. The paper highlights the importance of accounting for general equilibrium effects and distributional implications of monetary policy while assessing housing affordability. It also calls for complementing monetary easing with well-targeted policy measures that can boost housing affordability, particularly for first-time and lower-income households. Such measures are also needed during aggressive monetary tightening, given that the fall in house prices may be insufficient or too slow to fully offset the immediate adverse impact of higher rates on housing affordability.
Keywords: pricing-out, U.S. housing market, housing affordability, distributional effects, monetary policy
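A back-of-the-envelope sketch of the trade-off the paper studies: the standard annuity formula shows how a lower mortgage rate can be largely offset by the higher house price it helps induce, while the required down payment unambiguously rises. The rate and price figures are illustrative assumptions, not the paper's estimates.

```python
def monthly_payment(price, rate, years=30, down_payment=0.20):
    """Fixed-rate annuity payment on the financed portion of the house price."""
    principal = price * (1 - down_payment)
    r = rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Before easing: 4% mortgage rate, $300k house (assumed numbers).
before = monthly_payment(300_000, 0.04)
# After easing: the rate falls to 3%, but the house price rises by 15%.
after = monthly_payment(300_000 * 1.15, 0.03)

print(f"payment before: ${before:,.0f}/month, after: ${after:,.0f}/month")
# The down payment rises with the price, which is the channel that prices out
# first-time buyers even when the monthly payment barely changes.
print(f"down payment before: ${300_000 * 0.20:,.0f}, "
      f"after: ${300_000 * 1.15 * 0.20:,.0f}")
```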
Procedia PDF Downloads 33
24914 Perception Study on the Environmental Ramifications of Inadequate Drainage Systems in Jere Local Government Area, Borno State, Nigeria
Authors: Mohammed Bukar Maina, Mohammed Alhaji Bukar
Abstract:
Flooding is a significant threat to human lives, particularly in low- and middle-income nations. This study focuses on the environmental implications of inadequate drainage systems in the Jere Local Government Area of Borno State, Nigeria. By examining community awareness, understanding, and perceived impacts of the absence of drainage systems, as well as exploring potential solutions, this research aims to address the existing knowledge gap. The study focuses on the Fori and 202/303 Quarters, chosen for their lack of drainage infrastructure and environmental challenges. Primary data was collected through questionnaires and observations supplemented by secondary sources. The findings highlight the need for increased awareness of drainage systems and the consequences of inadequate infrastructure. The community faces challenges like flooding, water-logging, contamination of drinking water, waterborne diseases, and property damage, necessitating the implementation of proper drainage systems. Recommendations include prioritizing new drainage systems, awareness campaigns, community participation, involvement of local government and leaders, and regular maintenance. Long-term planning is crucial for integrating drainage infrastructure into future development. Implementing these recommendations will establish sustainable and resilient drainage systems, mitigating environmental hazards. This research provides valuable insights for policymakers, stakeholders, and communities in addressing insufficient drainage systems and safeguarding the community's well-being.
Keywords: environment, drainage systems, flooding, lack
Procedia PDF Downloads 24
24913 A Neural Network Based Clustering Approach for Imputing Multivariate Values in Big Data
Authors: S. Nickolas, Shobha K.
Abstract:
The treatment of incomplete data is an important step in data pre-processing. Missing values create a noisy environment in all applications, and they are an unavoidable problem in big data management and analysis. Numerous techniques, such as discarding rows with missing values, mean imputation, expectation maximization, neural networks with evolutionary algorithms or optimized techniques, and hot-deck imputation, have been introduced by researchers for handling missing data. Among these, imputation techniques play a positive role in filling in missing values when it is necessary to use all records in the data and not discard records with missing values. In this paper, we propose a novel artificial neural network based clustering algorithm, Adaptive Resonance Theory-2 (ART2), for the imputation of missing values in mixed-attribute data sets. ART2 can recognize learned models quickly and adapt to new objects rapidly. It carries out model-based clustering by using competitive learning and a self-stabilizing mechanism in a dynamic environment, without supervision. The proposed approach not only imputes the missing values but also provides information about handling outliers.
Keywords: ART2, data imputation, clustering, missing data, neural network, pre-processing
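The abstract does not spell out the imputation step itself, so the following is only a minimal sketch of the general cluster-then-impute idea; it uses k-means as a simple stand-in for the ART2 network the paper actually employs, and purely numeric data rather than the mixed-attribute case.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_impute(X, n_clusters=3, n_iter=5):
    """Fill NaNs with the column means of the cluster each row belongs to."""
    X = X.astype(float).copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])   # initial global-mean fill
    for _ in range(n_iter):
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(X)
        for k in range(n_clusters):
            in_k = labels == k
            cluster_means = X[in_k].mean(axis=0)
            # Re-impute only the originally missing cells, using cluster means.
            X[in_k] = np.where(mask[in_k], cluster_means, X[in_k])
    return X

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
data[rng.random(data.shape) < 0.1] = np.nan           # 10% missing at random
completed = cluster_impute(data, n_clusters=2)
print("remaining NaNs:", int(np.isnan(completed).sum()))
```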
Procedia PDF Downloads 274
24912 The Effect That the Data Assimilation of Qinghai-Tibet Plateau Has on a Precipitation Forecast
Authors: Ruixia Liu
Abstract:
The Qinghai-Tibet Plateau has an important influence on precipitation in the regions downstream of it. Remote sensing (RS) data have their own advantages, and a numerical prediction model that assimilates RS data should perform better than one that does not. We obtained assimilation analyses of MHS, terrestrial, and sounding observations from GSI and introduced them into WRF, then obtained relative humidity (RH) and precipitation forecasts. By comparing the 1 h, 6 h, 12 h, and 24 h results, we found that assimilating the MHS, terrestrial, and sounding observations made the forecast of precipitation amount, area, and center more accurate. Analyzing the differences in the initial field, we found that data assimilation over the Qinghai-Tibet Plateau influences the downstream forecast by affecting the initial temperature and RH.
Keywords: Qinghai-Tibet Plateau, precipitation, data assimilation, GSI
Procedia PDF Downloads 231
24911 Positive Affect, Negative Affect, Organizational and Motivational Factor on the Acceptance of Big Data Technologies
Authors: Sook Ching Yee, Angela Siew Hoong Lee
Abstract:
Big data technologies have become a trend for exploiting business opportunities and providing valuable business insights through the analysis of big data. However, there are still many organizations that have yet to adopt big data technologies, especially small and medium enterprises (SMEs). This study uses the technology acceptance model (TAM) to look into several constructs in the TAM and additional constructs, namely positive affect, negative affect, organizational factor, and motivational factor. The conceptual model proposed in the study will be tested on the relationship and influence of positive affect, negative affect, organizational factor, and motivational factor on the intention to use big data technologies to produce an outcome. Empirical research is used in this study, with a survey conducted to collect data.
Keywords: big data technologies, motivational factor, negative affect, organizational factor, positive affect, technology acceptance model (TAM)
Procedia PDF Downloads 360
24910 Big Data Analysis with Rhipe
Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim
Abstract:
Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on actual data of various sizes. Experimental results comparing the performance of our Rhipe implementation with the stats and biglm packages (available on bigmemory) showed that Rhipe was faster than the other packages, owing to parallel processing that increases the number of map tasks as the size of the data increases. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes for configuring the Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and that the computing speed of the fully-distributed mode increased as the number of data nodes increased.
Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe
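Rhipe expresses the regression as MapReduce jobs in R; as a language-neutral sketch of the same idea, the snippet below computes a multiple regression from per-chunk sufficient statistics (X'X and X'y), which is what allows map tasks to work on separate blocks of data while a single reduce step combines them. The chunking and data are synthetic, not the paper's Hadoop setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100_000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([2.0, 1.5, -0.5, 0.0, 3.0, -2.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# "Map": each chunk (standing in for a Hadoop block) emits its sufficient statistics.
def map_chunk(X_chunk, y_chunk):
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

# "Reduce": sum the statistics and solve the normal equations once.
XtX = np.zeros((p + 1, p + 1))
Xty = np.zeros(p + 1)
for idx in np.array_split(np.arange(n), 10):   # 10 chunks, like 10 map tasks
    a, b = map_chunk(X[idx], y[idx])
    XtX += a
    Xty += b

beta_hat = np.linalg.solve(XtX, Xty)
print(np.round(beta_hat, 3))                   # close to beta_true
```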
Procedia PDF Downloads 495