Search results for: data infrastructure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25639

24709 Investigating the Effect of Refinancing on Financial Behaviour of Energy Efficiency Projects

Authors: Zohreh Soltani, Seyedmohammadhossein Hosseinian

Abstract:

Reduction of energy consumption in built infrastructure, through the installation of energy-efficient technologies, is a major approach to achieving sustainability. In practice, the viability of energy efficiency projects strongly depends on cost reimbursement and profitability. These projects are subject to failure if the actual cost savings do not reimburse the project cost in a timely manner. In such cases, refinancing could be a solution to benefit from the long-term returns of the project, if implemented wisely. However, very little is known about the effect of refinancing options on the financial performance of energy efficiency projects. To fill this gap, the present study investigates the financial behavior of energy efficiency projects with a focus on refinancing options, such as Leveraged Loans. A System Dynamics (SD) model is introduced, and the model's application is presented using actual case-study data. The case-study results indicate that while high-interest start-up financing makes the use of a Leveraged Loan inevitable, refinancing can rescue the project and bring about profitability. This paper also presents some managerial implications of refinancing energy efficiency projects based on the case-study analysis. The results of this study help implement financially viable energy efficiency projects, so that communities can benefit widely from their environmental advantages.
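The refinancing dynamic the abstract describes can be illustrated with a minimal cash-flow simulation. All figures below (project cost, interest rates, annual savings, refinance year) are hypothetical stand-ins, not values from the paper's case study or its SD model.

```python
def loan_balance_path(principal, rate, annual_savings, years, refinance=None):
    """Simulate a project loan repaid from annual energy-cost savings.

    refinance: optional (year, new_rate) tuple modelling a Leveraged
    Loan taken out to replace the original high-interest debt.
    """
    balance = principal
    path = [balance]
    for year in range(1, years + 1):
        if refinance and year == refinance[0]:
            rate = refinance[1]          # swap in the cheaper debt
        balance = balance * (1 + rate) - annual_savings
        balance = max(balance, 0.0)      # loan fully repaid
        path.append(balance)
    return path

# Hypothetical case: 1.0M project, 12% start-up loan, 0.13M yearly savings.
no_refi = loan_balance_path(1.0, 0.12, 0.13, 15)
refi = loan_balance_path(1.0, 0.12, 0.13, 15, refinance=(3, 0.06))

# First year in which the loan is fully repaid (None if never).
payback = lambda path: next((i for i, b in enumerate(path) if b == 0.0), None)
```

With these illustrative numbers, the unrefinanced loan is never repaid within the horizon, while refinancing to the cheaper rate achieves payback, mirroring the "refinancing can rescue the project" finding.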

Keywords: energy efficiency projects, leveraged loan, refinancing, sustainability

Procedia PDF Downloads 379
24708 Assessing the Impact of Renewable Energy on Regional Sustainability: A Comparative Study of Suwon and Seoul

Authors: Jongsoo Jurng

Abstract:

The drive to expand renewable energies is often in direct conflict with sustainable development goals. Thus, it is important that energy policies account for potential trade-offs. We assess the interlinkages between energy, food, water, and land, for two case studies, Suwon and Seoul. We apply a range of assessment methods and study their usefulness as tools to identify trade-offs and to compare the sustainability performance. We calculate cross-sectoral footprints, self-sufficiency ratios and perform a simplified Energy-Water-Food nexus analysis. We use the latter for assessing scenarios to increase energy and food self-sufficiency in Suwon, while we use ecosystem service (ESS) accounting for Seoul. For Suwon, we find that constraints on the energy, food and water sectors urgently call for integrated approaches to energy policy; for Seoul, the further expansion of renewables comes at the expense of cultural and supporting ESS, which could outweigh gains from increased energy exports. We recommend a general upgrade to indicators and visualization methods that look beyond averages and a fostering of infrastructure for data on sustainable development based on harmonized international protocols. We warn against rankings of countries or regions based on benchmarks that are neither theory-driven nor location-specific.

Keywords: ESS, renewable energy, energy-water-food nexus, assessment

Procedia PDF Downloads 119
24707 Governing Urban Water Infrasystems: A Case Study of Los Angeles in the Context of Global Frameworks

Authors: Joachim Monkelbaan, Marcia Hale

Abstract:

Now that global frameworks for sustainability governance (e.g. the Sustainable Development Goals, Paris Climate Agreement and Sendai Framework for Disaster Risk Reduction) are in place, the question is how these aspirations that represent major transitions can be put into practice. Water ‘infrasystems’ can play an especially significant role in strengthening regional sustainability. Infrasystems include both hard and soft infrastructure, such as pipes and technology for delivering water, as well as the institutions and governance models that direct its delivery. As such, an integrated infrasystems view is crucial for Integrative Water Management (IWM). Due to frequently contested ownership of and responsibility for water resources, these infrasystems can also play an important role in facilitating conflict and catalysing community empowerment, especially through participatory approaches to governance. In this paper, we analyze the water infrasystem of the Los Angeles region through the lens of global frameworks for sustainability governance. By complementing a solid overview of governance theories with empirical data from interviews with water actors in the LA metropolitan region (including NGOs, water managers, scientists and elected officials), this paper elucidates ways for this infrasystem to be better aligned with global sustainability frameworks. In addition, it opens up the opportunity to scrutinize the appropriateness of global frameworks when it comes to fostering sustainability action at the local level.

Keywords: governance, transitions, global frameworks, infrasystems

Procedia PDF Downloads 229
24706 Genetic Data of Deceased People: Solving the Gordian Knot

Authors: Inigo de Miguel Beriain

Abstract:

Genetic data of deceased persons are of great interest for both biomedical research and clinical use. This is due to several reasons. On the one hand, many of our diseases have a genetic component; on the other hand, we share genes with a good part of our biological family. Therefore, it would be possible to improve our response to these pathologies considerably if we could use these data. Unfortunately, at present the status of data on the deceased is far from satisfactorily resolved by EU data protection regulation. Indeed, the General Data Protection Regulation has explicitly excluded these data from the category of personal data. This decision has given rise to a fragmented legal framework on this issue. Consequently, each EU member state offers very different solutions. For instance, Denmark considers the data as personal data of the deceased person for a set period of time, while some others, such as Spain, do not consider these data as such but have introduced regulations specifically focused on this type of data and their access by relatives. This is an extremely dysfunctional scenario from multiple angles, not least that of scientific cooperation at the EU level. This contribution attempts to outline a solution to this dilemma through an alternative proposal. Its main hypothesis is that, in reality, health data are, in a sense, a rara avis within data in general because they do not refer to one person but to several. Hence, it is possible to think that all of them can be considered data subjects (although not all of them can exercise the corresponding rights in the same way). When the person from whom the data were obtained dies, the data remain personal data of his or her biological relatives. Hence, the general regime provided for in the GDPR may apply to them.
As these are personal data, we could go back to thinking in terms of a general prohibition of data processing, with the exceptions provided for in Article 9.2 and on the legal bases included in Article 6. This may be complicated in practice, given that, since we are dealing with data that refer to several data subjects, it may be complex to rely on some of these bases, such as consent. Furthermore, there are theoretical arguments that may oppose this hypothesis. In this contribution, it is shown, however, that none of these objections is of sufficient substance to delegitimize the argument presented. Therefore, the conclusion of this contribution is that we can indeed build a general framework on the processing of personal data of deceased persons in the context of the GDPR. This would constitute a considerable improvement over the current regulatory framework, although some clarifications will be necessary for its practical application.

Keywords: collective data conceptual issues, data from deceased people, genetic data protection issues, GDPR and deceased people

Procedia PDF Downloads 140
24705 Dynamic Externalities and Regional Productivity Growth: Evidence from Manufacturing Industries of India and China

Authors: Veerpal Kaur

Abstract:

The present paper aims at investigating the role of dynamic externalities of agglomeration in the regional productivity growth of the manufacturing sector in India and China. Using 2-digit-level manufacturing-sector data for the states of India and the provinces of China for the period 1998-99 to 2011-12, this paper examines the effect of dynamic externalities, namely Marshall-Arrow-Romer (MAR) specialization externalities, Jacobs's diversity externalities, and Porter's competition externalities, on the regional total factor productivity growth (TFPG) of the manufacturing sector in both economies. Regressions have been carried out on pooled data for all 2-digit manufacturing industries for India and China separately. The panel estimation is based on a fixed-effects-by-sector model. The results of the econometric exercise show that, in terms of TFPG, labour-intensive industries in Indian regional manufacturing benefit from diversity externalities while capital-intensive industries gain more from specialization. In China, diversity externalities and competition externalities hold better prospects for regional TFPG in both labour-intensive and capital-intensive industries. But if we look at results for coastal and non-coastal regions separately, specialization tends to assert a positive effect on TFPG in coastal regions whereas it has a negative effect on TFPG in non-coastal regions. Competition externalities have a negative effect on TFPG in non-coastal regions whereas they have a positive effect on TFPG in coastal regions. Diversity externalities make a positive contribution to TFPG in both coastal and non-coastal regions. So the results of the study suggest that the importance of dynamic externalities should not be examined by pooling all industries and all regions together; this could hold differential implications for region-specific and industry-specific policy formulation.
Other important variables explaining regional-level TFPG in both India and China are the availability of infrastructure, the level of competitiveness, foreign direct investment, exports, and the geographical location of the region (especially in China).
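The fixed-effects-by-sector estimation mentioned above can be sketched as a within (demeaning) transformation followed by OLS. The panel below is synthetic, and the two regressors standing in for externality measures are illustrative only, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 10 sectors x 14 years, two externality-style regressors.
n_sectors, n_years = 10, 14
sector = np.repeat(np.arange(n_sectors), n_years)
X = rng.normal(size=(n_sectors * n_years, 2))   # e.g. MAR, Jacobs proxies
alpha = rng.normal(size=n_sectors)              # unobserved sector fixed effects
beta_true = np.array([0.5, -0.3])
y = X @ beta_true + alpha[sector] + 0.1 * rng.normal(size=len(sector))

def within_ols(y, X, group):
    """Fixed-effects (within) estimator: demean y and X by group, then OLS."""
    yd, Xd = y.copy(), X.copy()
    for g in np.unique(group):
        m = group == g
        yd[m] -= yd[m].mean()
        Xd[m] -= Xd[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

beta_hat = within_ols(y, X, sector)
```

Demeaning within each sector removes the fixed effects, so the slope estimates recover the true coefficients without estimating the sector intercepts explicitly.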

Keywords: China, dynamic externalities, India, manufacturing, productivity

Procedia PDF Downloads 112
24704 Toward Digital Maturity: Empowering Small and Medium Enterprises in Sleman, Yogyakarta, Indonesia toward Sustainable Tourism and Creative Economy Development

Authors: Cornellia Ayu, Putrianti Herni, Saptoto Robertus

Abstract:

In the context of global tourism and creative economies, digital maturity has become a crucial factor for the sustainable development of small and medium enterprises (SMEs). This paper explores the journey toward digital maturity among SMEs in Sleman, Yogyakarta, Indonesia, focusing on their empowerment to foster sustainable tourism and creative economy growth. The study adopts a mixed-methods approach, integrating qualitative interviews with SME owners and quantitative surveys to assess their digital capabilities and readiness. Data were collected from a diverse sample of SMEs engaged in various sectors, including crafts and culinary services. Findings reveal significant gaps in digital literacy and infrastructure, impeding the full realization of digital benefits. However, targeted interventions, such as digital training programs and the provision of affordable technology, have shown promise in bridging these gaps. The study concludes that enhancing digital maturity among SMEs is vital for their competitiveness and sustainability in the modern economy. The insights gained can inform policymakers and stakeholders aiming to bolster the digital transformation of SMEs in similar contexts.

Keywords: digital maturity, small medium enterprises, digital literacy, sustainable tourism, creative economy

Procedia PDF Downloads 21
24703 Steps towards the Development of National Health Data Standards in Developing Countries

Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian Murray

Abstract:

The proliferation of health data standards today is somewhat overlapping and conflicting, resulting in market confusion and leading to increasing proprietary interests. The government's role and support in standardization for health data are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with the implementation of non-standard systems. The normative literature has not explored the different steps required to be undertaken by governments towards the development of national health data standards. Based on the lessons learned from a qualitative study investigating the issues affecting the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, and on the opinions and feedback of different experts in the areas of data exchange, standards, and medical informatics in Saudi Arabia and the UK, a list of steps required towards the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan, and a national accreditation body; more important still is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners in planning health data standards, in particular in developing countries.

Keywords: interoperability, medical data exchange, health data standards, case study, Saudi Arabia

Procedia PDF Downloads 323
24702 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map

Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo

Abstract:

Recently, technologies based on three-dimensional (3D) space information are being developed, and quality of life is improving as a result. Research on real-time digital maps (RDM) is now being conducted to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research topics on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis based on 3D space information in a U-City and for other space-information utilization technologies.

Keywords: RDM, multi-source data, big data, U-City

Procedia PDF Downloads 417
24701 Assessing the Role Language Education Plays in Nation Building in Nigeria

Authors: Edith Lotachukwu Ochege

Abstract:

Nations stay together when citizens share enough values and preferences and can communicate with each other. Homogeneity among people can be built through education, through teaching a common language to facilitate communication, and through infrastructure for easier travel, but also by brute force, such as prohibiting local cultures. This paper discusses the role of language education in nation building. It defines education, highlights the functions of language, and examines the socialization agents that transmit culture, which is embodied in language, as well as the problems of nation building.

Keywords: nation building, language education, function of language, socialization

Procedia PDF Downloads 550
24700 Agile Methodology for Modeling and Design of Data Warehouses (AM4DW)

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations have structured and unstructured information in different formats, sources, and systems. Part of this information comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level, these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as the absence of the operational capabilities to tackle these kinds of projects. Data warehouses and their applications are considered non-proprietary tools, which are of great interest for business intelligence, since they are the base repositories for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision making and research. This paper presents a structured, simple methodology inspired by agile development models such as Scrum, XP, and AUP. It also draws on object-relational models, spatial data models, and a baseline of data modeling under UML and Big Data; in this way, it seeks to deliver an agile methodology for the development of data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and models derived from structured fact objects.

Keywords: data warehouse, data model, big data, object fact, object relational fact, data warehouse development process

Procedia PDF Downloads 391
24699 Identifying Models to Predict Deterioration of Water Mains Using Robust Analysis

Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee

Abstract:

In South Korea, it is difficult to obtain data for statistical pipe assessment. In this paper, to address this issue, we examine how the various statistical models presented in previous work behave when their data are mixed with noise, and whether they are applicable in South Korea. Three major types of model are studied; where data are presented in the original paper, we add noise to the data and observe how the model response changes. Moreover, we generate data from the models in the papers and analyse the effect of noise. From this, we can assess the robustness of each model and its applicability in Korea.
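As a minimal illustration of this kind of robustness check (not the paper's actual pipe data or models), one can generate failure times from a simple exponential survival model, perturb the recorded times with noise, and compare the recovered hazard rate; the hazard value and noise level below are hypothetical.

```python
import random

random.seed(42)
TRUE_HAZARD = 0.05            # hypothetical failures per pipe-year

# Generate synthetic pipe failure times from an exponential survival model.
times = [random.expovariate(TRUE_HAZARD) for _ in range(5000)]

def exp_mle(ts):
    """Maximum-likelihood hazard rate for exponential survival data: n / sum(t)."""
    return len(ts) / sum(ts)

clean_est = exp_mle(times)

# Contaminate recorded times with +/-20% measurement noise as a robustness probe.
noisy = [t * random.uniform(0.8, 1.2) for t in times]
noisy_est = exp_mle(noisy)

# Relative shift in the estimate caused by the noise.
bias = abs(noisy_est - clean_est) / clean_est
```

Because the noise here is zero-mean, the hazard estimate barely moves; a model whose estimates shift substantially under such perturbations would be judged less robust.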

Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences

Procedia PDF Downloads 725
24698 Automated Testing to Detect Instance Data Loss in Android Applications

Authors: Anusha Konduru, Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai

Abstract:

Mobile applications are increasing in number significantly, each addressing the requirements of many users. However, quick development and enhancement are resulting in many underlying defects. Android apps create and handle a large variety of 'instance' data that has to persist across runs, such as the current navigation route, workout results, antivirus settings, or game state. Due to the nature of Android, an app can be paused, sent into the background, or killed at any time. If the instance data is not saved and restored between runs, then in addition to data loss, partially saved or corrupted data can crash the app upon resume or restart. However, it is difficult for the programmer to manually test this issue for all activities. The result is data loss: the data entered by the user are not saved when there is an interruption. This issue can degrade the user experience, because the user needs to re-enter the information after each interruption. Automated testing to detect such data loss is important for improving the user experience. This research proposes DroidDL, a data loss detector for Android, which detects instance data loss in a given Android application. We have tested 395 applications and found 12 with data loss issues. The approach proves highly accurate and reliable in finding apps with this defect and can be used by Android developers to avoid such errors.
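The pause/kill/restore check that such a detector automates can be sketched abstractly; the toy class below is a plain-Python stand-in for an Android Activity (not DroidDL's actual mechanism), with the save/restore hooks named by loose analogy to `onSaveInstanceState`/`onRestoreInstanceState`.

```python
# Hedged sketch: simulate an interruption and flag apps that lose state.

class ToyActivity:
    def __init__(self, persist):
        self.persist = persist      # whether the app saves instance state
        self.field = ""             # e.g. text the user has entered
        self.saved = {}             # stand-in for persisted storage

    def on_pause(self):
        if self.persist:
            self.saved["field"] = self.field   # analogous to onSaveInstanceState

    def on_restart(self):
        self.field = self.saved.get("field", "")  # analogous to restore

def detect_data_loss(app):
    """Fill in state, simulate pause + process death + restart, flag lost data."""
    app.field = "user input"
    app.on_pause()          # system pauses / kills the app
    app.field = ""          # process death wipes in-memory state
    app.on_restart()
    return app.field != "user input"

buggy = detect_data_loss(ToyActivity(persist=False))   # loss detected
correct = detect_data_loss(ToyActivity(persist=True))  # state survives
```

The detector's verdict is simply whether the pre-interruption state survives the simulated restart, which is the essence of the instance-data-loss defect described above.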

Keywords: Android, automated testing, activity, data loss

Procedia PDF Downloads 222
24697 The Development of Open Access in Latin America and Caribbean: Mapping National and International Policies and Scientific Publications of the Region

Authors: Simone Belli, Sergio Minniti, Valeria Santoro

Abstract:

ICTs and technology transfer can benefit and move a country forward in economic and social development. However, ICT and access to the Internet have been inequitably distributed in most developing countries. In terms of science production and dissemination, this divide articulates itself also through the inequitable distribution of access to scientific knowledge and networks, which results in the exclusion of developing countries from the center of science. Developing countries are on the fringe of Science and Technology (S&T) production due not only to low investment in research but also to the difficulties to access international scholarly literature. In this respect, Open access (OA) initiatives and knowledge infrastructure represent key elements for both producing significant changes in scholarly communication and reducing the problems of developing countries. The spreading of the OA movement in the region, exemplified by the growth of regional and national initiatives, such as the creation of OA institutional repositories (e.g. SciELO and Redalyc) and the establishing of supportive governmental policies, provides evidence of the significant role that OA is playing in reducing the scientific gap between Latin American countries and improving their participation in the so-called ‘global knowledge commons’. In this paper, we map OA publications in Latin America and observe how Latin American countries are moving forward and becoming a leading force in widening access to knowledge. Our analysis, developed as part of the H2020 EULAC Focus research project, is based on mixed methods and consists mainly of a bibliometric analysis of OA publications indexed in the most important scientific databases (Web of Science and Scopus) and OA regional repositories, as well as the qualitative analysis of documents related to the main OA initiatives in Latin America. 
Through our analysis, we aim at reflecting critically on what policies, international standards, and best practices might be adapted to incorporate OA worldwide and improve the infrastructure of the global knowledge commons.

Keywords: open access, LAC countries, scientific publications, bibliometric analysis

Procedia PDF Downloads 194
24696 Big Data: Appearance and Disappearance

Authors: James Moir

Abstract:

The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based upon the analysis of large and varied sources of data. These can range from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and to explain their appearance through causal mechanisms, while 20th-century science attempted to save the appearances and relinquish causal explanations. Now 21st-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed reality model of science and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.

Keywords: big data, appearance, disappearance, surface, epistemology

Procedia PDF Downloads 398
24695 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., the data which contribute almost nothing to reducing the confidence interval of the estimated parameters and can thus be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem announcing the lower precision of the integrated data approach compared to the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
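The claim that reduced data widen the confidence interval can be illustrated with a toy sensitivity analysis on a simple exponential recovery curve f(t) = A(1 - e^(-kt)); the model, noise level, and data sizes here are illustrative assumptions, not the paper's FRAP setup.

```python
import numpy as np

A, k, sigma = 1.0, 0.3, 0.02          # hypothetical recovery parameters

def ci_halfwidths(t, sigma):
    """Asymptotic 95% CI half-widths for (A, k) from the Fisher information
    of f(t) = A * (1 - exp(-k * t)) with i.i.d. Gaussian observation noise."""
    S = np.column_stack([
        1.0 - np.exp(-k * t),          # sensitivity df/dA
        A * t * np.exp(-k * t),        # sensitivity df/dk
    ])
    fim = S.T @ S / sigma**2           # Fisher information matrix
    return 1.96 * np.sqrt(np.diag(np.linalg.inv(fim)))

t_full = np.linspace(0.1, 20.0, 200)   # dense spatio-temporal sampling
t_reduced = t_full[::4]                # every 4th point: a reduced data set

ci_full = ci_halfwidths(t_full, sigma)
ci_reduced = ci_halfwidths(t_reduced, sigma)
```

Dropping observations removes positive-semidefinite contributions from the information matrix, so the half-widths from the reduced set are necessarily larger, in the spirit of the theorem stated above.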

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 262
24694 Representing Data without Losing Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data is believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle uncertain data conditions by minimizing the loss of compression properties.

Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction

Procedia PDF Downloads 414
24693 Data Mining as a Tool for Knowledge Management: A Review

Authors: Maram Saleh

Abstract:

Knowledge has become an essential resource in today's economy and the most important asset for maintaining competitive advantage in organizations. The importance of knowledge has led organizations to manage their knowledge assets and resources through the multiple knowledge management stages: knowledge creation, knowledge storage, knowledge sharing, and knowledge use. Research on data mining has continued to grow in recent years in both business and educational fields. Data mining is one of the most important steps of the knowledge discovery in databases process, aiming to extract implicit, unknown but useful knowledge, and it is considered a significant subfield of knowledge management. Data mining has great potential to help organizations focus on extracting the most important information from their data warehouses. Data mining tools and techniques can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This review paper explores the applications of data mining techniques in supporting the knowledge management process as an effective knowledge discovery technique. In this paper, we identify the relationship between data mining and knowledge management and then focus on introducing some applications of data mining techniques in knowledge management for real-life domains.

Keywords: data mining, knowledge management, knowledge discovery, knowledge creation

Procedia PDF Downloads 190
24692 Anomaly Detection Based on Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s. Over time, a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies. It is the process of grouping data into clusters whose members are as similar as possible while the clusters themselves are as dissimilar from each other as possible. Many of the traditional cluster algorithms have limitations in dealing with data sets containing categorical properties. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-Mode (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm, has been reported for clustering data sets with categorical values. It is a form of soft clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated data sets using the FKM clustering algorithm. A notable contribution of the study is that the FKM clustering algorithm determines anomalies together with their degree of abnormality, in contrast to numerous anomaly detection algorithms. According to the results, the FKM clustering algorithm showed good performance in detecting anomalies, in data containing both a single anomaly and multiple anomalies.
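The membership step of fuzzy k-modes, and an abnormality degree derived from it, can be sketched on toy categorical records. The records, modes, and the particular scoring rule below are illustrative assumptions, not the paper's simulated data or its exact scoring.

```python
def mismatches(a, b):
    """Simple matching dissimilarity for categorical records."""
    return sum(x != y for x, y in zip(a, b))

def memberships(record, modes, m=2.0):
    """Fuzzy membership of one record in each cluster (fuzzifier m > 1)."""
    d = [mismatches(record, z) for z in modes]
    if 0 in d:                                   # record coincides with a mode
        return [1.0 if di == 0 else 0.0 for di in d]
    return [sum((d[i] / d[h]) ** (1.0 / (m - 1)) for h in range(len(d))) ** -1
            for i in range(len(d))]

def abnormality(record, modes):
    """Illustrative degree: membership-weighted distance to the modes,
    scaled to [0, 1] by the record length."""
    u = memberships(record, modes)
    d = [mismatches(record, z) for z in modes]
    return sum(ui * di for ui, di in zip(u, d)) / len(record)

modes = [("red", "small", "round"), ("blue", "large", "square")]
data = [
    ("red", "small", "round"),      # typical of cluster 0
    ("blue", "large", "round"),     # near cluster 1
    ("green", "tiny", "star"),      # matches nothing: candidate anomaly
]
scores = [abnormality(r, modes) for r in data]
```

A record identical to a mode scores 0, while a record matching no mode on any attribute scores 1, so the degree ranks points by how anomalous they are rather than giving a binary verdict.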

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 36
24691 A Pathway to Financial Inclusion: Mobile Money and Individual Savings in Uganda

Authors: Musa Mayanja Lwanga, Annet Adong

Abstract:

This study provides a micro perspective on the impact of mobile money services on individuals' saving behavior, using the 2013 Uganda FinScope data. Results show that although saving through the mobile phone is not a common practice in Uganda, being a registered mobile money user increases the likelihood of saving with mobile money. Saving via mobile is more prevalent in urban areas, and in Kampala and the Central region compared to other regions. This can be explained as follows: first, rural dwellers tend on average to have lower incomes and thus lower savings than their urban counterparts; similarly, residents of Kampala tend to have higher incomes and thus higher savings than residents of other regions. Secondly, poor infrastructure in rural areas, in terms of lack of electricity and poor telecommunication network coverage, may limit the use of mobile phones and consequently the use of mobile money as a saving mechanism. Overall, the use of mobile money as a saving mechanism is still very low, which could be partly explained by limitations in the legislation, which does not incorporate mobile financial services into mobile money. The absence of interest payments on mobile money savings may act as a disincentive to save through this mechanism. Given the emerging mobile banking services, there is a need to create more awareness and to enhance synergies between telecom companies and commercial banks.

Keywords: financial inclusion, mobile money, savings, Uganda

Procedia PDF Downloads 272
24690 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara

Abstract:

This paper describes the problem of building secure computational services for encrypted information in the Cloud, that is, computing on data without decrypting it. This meets the aspiration for a computational encryption model that can enhance the security of big data with respect to the privacy (confidentiality), availability, and integrity of the data and of the user. The cryptographic model applied to the computational processing of encrypted data is the Fully Homomorphic Encryption Scheme. We contribute a theoretical presentation of high-level computational processes based on number theory, derivable from abstract algebra, which can easily be integrated and leveraged in a Cloud computing interface, together with the detailed mathematical concepts underlying fully homomorphic encryption models. This contribution supports the full implementation of big data analytics based on a cryptographically secure algorithm.
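The core idea of computing on ciphertexts can be shown with the additively homomorphic Paillier scheme as a toy stand-in; this is not the fully homomorphic scheme the paper develops, and the demo-sized primes below are insecure, chosen only so the arithmetic is easy to follow.

```python
import math
import random

# Toy Paillier: Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2 mod n.
p, q = 293, 433                 # demo-sized primes; real keys are 1024+ bits
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)
# mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n2   # addition performed on ciphertexts
```

Multiplying the two ciphertexts yields an encryption of the sum, so the Cloud can perform the addition without ever seeing the plaintexts; a fully homomorphic scheme extends this to arbitrary circuits via bootstrapping.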

Keywords: big data analytics, security, privacy, bootstrapping, Fully Homomorphic Encryption Scheme

Procedia PDF Downloads 462
24689 Sustainable Solutions for Enhancing Efficiency, Safety, and Quality of Construction Value Chain Services Integration

Authors: Lo Kar Yin

Abstract:

In view of the increasing speed and quantity of housing supply, building, and civil engineering infrastructure works triggered by the pandemic across the globe, contractors, professional services providers (PSP), including consultants (e.g., architect, project manager, civil/geotechnical/structural engineer, building services engineer, quantity surveyor/cost manager), and suppliers have faced tremendous challenges in a fierce market with limited manpower and resources, under contract price fluctuation and competitive fees and prices. Using qualitative analysis, this paper reviews the available information from industry stakeholders with a view to finding solutions for enhancing the efficiency, safety, and quality of construction value chain services for the sustainable growth of public and private organizations, not limited to checking the deliverables and data transfer among multi-disciplinary parties. Technology, contracts, and people are the key requirements for shaping the construction industry. By integrating the collaborative approach of a modern engineering contract (e.g., NEC), practical workflows are designed to address loopholes, together with different levels of staff employment and retention and technology adoption, to achieve the best value for money.

Keywords: efficiency, safety, quality, technology, contract, people, sustainable solutions, construction, services, integration

Procedia PDF Downloads 111
24688 Cost and Benefits of Collocation in the Use of Biogas to Reduce Vulnerabilities and Risks

Authors: Janaina Camile Pasqual Lofhagen, David Savarese, Veronika Vazhnik

Abstract:

The urgency of the climate crisis requires both innovation and practicality. The energy transition framework allows industry to deliver resilient cities, enhance adaptability to change, pursue energy objectives such as growth or efficiency, and increase renewable energy. This paper investigates a real-world application perspective on the use of biogas in Brazil and the U.S. It examines interventions that provide a foundation of infrastructure, as well as the tangible benefits for policy-makers crafting law and providing incentives.

Keywords: resilience, vulnerability, risks, biogas, sustainability

Procedia PDF Downloads 84
24687 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

The research aims to approximate the amount of daily rainfall using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department, covering January to December 2013, were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.
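The abstract does not give the mapping from map pixels to rainfall amounts, so the following is a hypothetical sketch of the general idea: a made-up colour-scale lookup converts pixel values to rainfall estimates, which are then scored against observed gauge values with RMSE.

```python
# assumed pixel-value -> mm/day colour scale (illustrative, not from the paper)
scale = {200: 0.0, 150: 10.0, 100: 35.0, 50: 90.0}

def pixel_to_rain(pixel):
    # map a pixel value to the rainfall of the nearest colour-scale entry
    nearest = min(scale, key=lambda v: abs(v - pixel))
    return scale[nearest]

def rmse(estimates, observed):
    # root-mean-square error between estimated and observed rainfall
    n = len(observed)
    return (sum((e - o) ** 2 for e, o in zip(estimates, observed)) / n) ** 0.5

pixels   = [198, 152, 101, 55]          # pixel values sampled from a rainfall map
observed = [0.5, 9.0, 36.0, 88.0]       # gauge observations (mm/day), made up
est = [pixel_to_rain(p) for p in pixels]
print(round(rmse(est, observed), 3))    # → 1.25
```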

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 373
24686 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we apply parametric modeling to experimental data from dynamical systems. We investigate different distributions of the output measurements from several dynamical systems. By processing the variance of the experimental data, we locate the region of nonlinearity in the data; identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as their variance, on identification, and the limitations of this approach, are explained.
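The authors' variance-processing procedure is not spelled out in the abstract; as a rough sketch of the idea, a sliding-window variance over the output signal can flag the region where the response starts behaving nonlinearly (window size, threshold, and the synthetic signal below are all assumptions).

```python
def rolling_variance(y, window):
    # population variance of each sliding window over the output signal
    out = []
    for i in range(len(y) - window + 1):
        seg = y[i:i + window]
        mean = sum(seg) / window
        out.append(sum((v - mean) ** 2 for v in seg) / window)
    return out

def flag_nonlinear(y, window, threshold):
    # indices of windows whose variance exceeds the threshold
    var = rolling_variance(y, window)
    return [i for i, v in enumerate(var) if v > threshold]

# synthetic output: flat (linear-regime) response followed by a strongly
# varying region standing in for nonlinear behavior
y = [1.0] * 20 + [(-1.0) ** k * k for k in range(20)]
print(flag_nonlinear(y, 5, 1.0)[0])  # index of the first flagged window
```

In the flat region the window variance is zero, so only windows overlapping the oscillating tail are flagged.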

Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification

Procedia PDF Downloads 495
24685 Price Compensation Mechanism with Unmet Demand for Public-Private Partnership Projects

Authors: Zhuo Feng, Ying Gao

Abstract:

Public-private partnership (PPP), as an innovative way to provide infrastructure through the private sector, is widely used throughout the world. Compared with the traditional mode, PPP has emerged largely for its merits of relieving public budget constraints and improving infrastructure supply efficiency by involving private funds. However, PPP projects are characterized by large scale, high investment, long payback periods, and long concession periods. These characteristics make PPP projects risky. One of the most important risks faced by the private sector is demand risk, because many factors affect real demand. If real demand falls far below forecast demand, the private sector will face serious difficulty, because operating revenue is its main means of recouping the investment and earning a profit. Therefore, it is important to study how the government should compensate the private sector when demand risk materializes, in order to achieve a Pareto improvement. This research focuses on the price compensation mechanism, an ex-post compensation mechanism, and analyzes, through mathematical modeling, the impact of the price compensation mechanism on the payoff of the private sector and on consumer surplus for PPP toll road projects. The research first investigates whether price compensation mechanisms can achieve a Pareto improvement and, if so, explores the boundary conditions for this mechanism. The results show that the price compensation mechanism can realize a Pareto improvement under certain conditions. In particular, for the price compensation mechanism to accomplish a Pareto improvement, the renegotiation costs of the government and the private sector should be lower than a threshold determined by the marginal operating cost and the distortionary cost of the tax. In addition, the compensation percentage should match the price cut of the private investor when demand drops.
This research aims to provide theoretical support for the government when determining the compensation scope under the price compensation mechanism. Moreover, some policy implications for better risk-sharing and sustainability of PPP projects can also be drawn from the analysis.

Keywords: infrastructure, price compensation mechanism, public-private partnership, renegotiation

Procedia PDF Downloads 162
24684 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations face the challenge of how to analyze their sensitive telemetry data and build machine learning models on it. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, or R Services on premises or in the cloud, users can leverage the power of R at scale without having to move their data.

Keywords: predictive maintenance, machine learning, big data, cloud based, on premise solution, R

Procedia PDF Downloads 361
24683 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders

Authors: Sven Gehrke, Johannes Ruhland

Abstract:

Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. This paper examines different aspects of the concept of trust and describes the information asymmetry among the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors related to trust, problematic aspects of the current approach are examined through interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and point to potential research areas.

Keywords: trust, data mining, CRISP-DM, stakeholder management

Procedia PDF Downloads 81
24682 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data is transmitted from source to relay and from relay to destination, with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmission region, segmenting the selected region, determining the probability ratio for each node type (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating that probability with a binary evaluation. If the transmission is evaluated as secure, the two-hop transmission of big data proceeds; otherwise, the attackers are countered by the cooperative jamming scheme and the data is then transmitted over the two hops.
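The decision loop described above can be sketched schematically; note that the region selection, node counts, probability ratio, and 0.5 threshold below are assumptions for illustration, since the abstract does not give the actual formulas.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    capture: int        # legitimate nodes able to capture the signal
    non_capture: int    # legitimate nodes out of capture range
    eavesdroppers: int  # suspected eavesdropping nodes

def secrecy_ok(seg, threshold=0.5):
    # assumed probability ratio: share of capture nodes among all nodes,
    # reduced to a binary secure / not-secure evaluation
    total = seg.capture + seg.non_capture + seg.eavesdroppers
    ratio = seg.capture / total if total else 0.0
    return ratio >= threshold

def transmit(segments):
    # per-segment decision: two-hop transmit, or jam first and then retry
    actions = []
    for seg in segments:
        if secrecy_ok(seg):
            actions.append("two-hop transmit")
        else:
            actions.append("cooperative jam")
    return actions

print(transmit([Segment(4, 1, 1), Segment(1, 2, 3)]))
# → ['two-hop transmit', 'cooperative jam']
```

The first segment is capture-dominated and passes the binary check; the second is eavesdropper-heavy, so jamming is triggered before transmission.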

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 468
24681 Cloud Computing Architecture Based on SOA

Authors: Negin Mohammadrezaee Larki

Abstract:

Cloud Computing is a popular solution that has been used in recent years to enable cooperation and collaboration among distributed applications over networks. Moving successfully into cloud computing requires an architecture that will support the new cloud capabilities. Many business leaders and analysts agree that moving to the cloud requires a solid, service-oriented architecture to provide the infrastructure needed for successful cloud implementation.

Keywords: Service Oriented Architecture (SOA), Service Oriented Cloud Computing Architecture (SOCCA), cloud computing, cloud computing architecture

Procedia PDF Downloads 365
24680 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These devices make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format that needs to be processed to yield information and business intelligence. This data is not always current; it is mostly historical, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since processing the data usually takes some time, this keeps the database busy and locked for the duration of the processing, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact on CPU, storage, and processing-time performance.
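The three-step pull-process-push technique described above can be sketched as follows; the table schema and "decoding" step are made up for illustration, and an in-memory SQLite database stands in for the real database server. The point is that the database is touched only during the short pull and push phases, while the slow decoding happens against an in-memory list (the "array list") with no locks held.

```python
import sqlite3

def pull(conn):
    # step 1: read the raw telemetry quickly, then release the database
    return list(conn.execute("SELECT id, raw FROM telemetry WHERE processed = 0"))

def process(rows):
    # step 2: decode in memory; no database locks are held during this step
    return [(raw.upper(), rid) for rid, raw in rows]   # stand-in for real decoding

def push(conn, decoded):
    # step 3: write the results back in one short transaction
    with conn:
        conn.executemany(
            "UPDATE telemetry SET raw = ?, processed = 1 WHERE id = ?", decoded)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE telemetry (id INTEGER PRIMARY KEY, raw TEXT, processed INTEGER)")
conn.executemany("INSERT INTO telemetry VALUES (?, ?, 0)", [(1, "gps:a"), (2, "gps:b")])

push(conn, process(pull(conn)))
print([r[0] for r in conn.execute("SELECT raw FROM telemetry ORDER BY id")])
# → ['GPS:A', 'GPS:B']
```

In the single-step variant criticized above, the decoding would run inside the same transaction as the read and write, keeping the table locked for the full processing time.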

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 228