Search results for: Privacy Preserving Data Publication (PPDP)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25488

24618 Sustainable Urban Growth of Neighborhoods: A Case Study of Alryad-Khartoum

Authors: Zuhal Eltayeb Awad

Abstract:

Alryad neighborhood is located in Khartoum town, the administrative center of the capital of Sudan. The neighborhood is one of the high-income residential areas, with low-density, villa-type development. It was planned and developed in 1972 with large plots (600-875 m²), wide crossing roads, and a balanced environment. Recently the area has transformed into a more compact urban form of high-density, mixed-use integrated development with more intensive use of land: multi-storied apartments. The most important socio-economic process in the neighborhood has been the commercialization and densification of the area in connection with the displacement of the residential function. This transformation has affected the quality of the neighborhood and the inter-related features of the built environment. A case study approach was chosen to gather the necessary qualitative and quantitative data. A detailed survey of the existing development pattern was carried out over the whole area of Alryad. Data on the built and social environment of the neighborhood were collected through observations, interviews, and secondary data sources. The paper reflects a theoretical and empirical interest in the particular characteristics of a compact neighborhood with high density and mixed land uses, and in their effect on the social wellbeing of the residents, all in the context of sustainable development. The research problem focuses on the challenges of a transformation associated with the compact neighborhood, which has created multiple urban problems, e.g., stress on essential services (water supply, electricity, and drainage), congestion of streets, and demand for parking. The main objective of the study is to analyze the transformation of this area from residential use to commercial and administrative use. The study analyzed the current situation of the neighborhood against the five principles of a sustainable neighborhood prepared by UN-Habitat. The study found that the neighborhood has experienced the changes that occur to inner-city residential areas and that the process of change was set in motion by external forces arising from the declining economic situation of the whole country. It is evident that non-residential uses have taken hold in an uncontrolled, unregulated, and haphazard manner, which has damaged the residential environment and caused deficiencies in infrastructure. The quality of urban life, and in particular the level of privacy, was reduced; the neighborhood has gradually changed into a central business district that provides services to the whole of Khartoum town. The change of house type may be attributed to a demand-led housing market and the absence of policy. The results showed that Alryad is not fully sustainable and self-contained, although its street network characteristics and mixed land-use development are compatible with the principles of sustainability. The area of streets represents 27.4% of the total area of the neighborhood. Residential density is 4,620 people/km², which is lower than the recommendations, and the limited-block land-use specialization is higher than 10% of the blocks. Most inhabitants have a high income, so there is no social mix in the neighborhood. The study recommends a revision of the current zoning regulations in order to control and regulate undesirable development in the neighborhood and to provide new solutions that promote the neighborhood's sustainable development.

Keywords: compact neighborhood, land uses, mixed use, residential area, transformation

Procedia PDF Downloads 126
24617 Single-Cell Visualization with Minimum Volume Embedding

Authors: Zhenqiu Liu

Abstract:

Visualizing the heterogeneity within cell populations in single-cell RNA-seq data is crucial for studying the functional diversity of cells. However, because of the high levels of noise, outliers, and dropouts, it is very challenging to measure cell-to-cell similarity (distance) and to visualize and cluster the data in a low-dimensional space. Minimum volume embedding (MVE) projects the data into a lower-dimensional space and is a promising tool for data visualization. However, it is computationally inefficient to solve a semi-definite program (SDP) when the sample size is large, so MVE is not directly applicable to single-cell RNA-seq data with thousands of samples. In this paper, we develop an efficient algorithm based on an accelerated proximal gradient method and visualize single-cell RNA-seq data efficiently. We demonstrate that the proposed approach separates known subpopulations more accurately in single-cell data sets than other existing dimension reduction methods.
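
The abstract does not spell out the iteration, so the following is only a generic sketch of a FISTA-style accelerated proximal gradient scheme of the kind that can stand in for a direct SDP solve; `grad_f`, `prox_g`, the step size, and the l1-penalized toy problem are all illustrative assumptions rather than the authors' formulation:

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, step, n_iter=500):
    """FISTA-style accelerated proximal gradient iteration (a generic sketch)."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(y - step * grad_f(y), step)      # forward-backward step
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2   # momentum schedule
        y = x + ((t - 1) / t_next) * (x - x_prev)   # extrapolation
        x_prev, t = x, t_next
    return x_prev

# toy demo: minimize 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = accelerated_proximal_gradient(grad_f, prox_g, np.zeros(5),
                                      step=1.0 / np.linalg.norm(A, 2) ** 2)
```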

Keywords: single-cell RNA-seq, minimum volume embedding, visualization, accelerated proximal gradient method

Procedia PDF Downloads 225
24616 Cloud Data Security Using Map/Reduce Implementation of Secret Sharing Schemes

Authors: Sara Ibn El Ahrache, Tajje-eddine Rachidi, Hassan Badir, Abderrahmane Sbihi

Abstract:

Recently, there has been growing confidence in the favorable use of big data drawn from the huge amount of information deposited in cloud computing systems. Data kept on such systems can be retrieved through the network at the user's convenience. However, the data that users send include private information, and therefore information leakage from these data is now a major social problem. The use of secret sharing schemes for cloud computing has lately been shown to be relevant; in such schemes, users deal out their data to several servers. Notably, in a (k,n) threshold scheme, data security is assured if and only if, throughout the whole life of the secret, the adversary cannot compromise k or more of the n servers. In fact, a number of secret sharing algorithms have been suggested to deal with these security issues. In this paper, we present a MapReduce implementation of Shamir's secret sharing scheme to increase its performance and to achieve optimal security for cloud data. Several tests were run, demonstrating the contributions of the proposed approach, which are considerable in terms of both security and performance.
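
For context, a minimal single-machine sketch of the (k,n) Shamir primitives follows; it shows only the split/reconstruct logic over a prime field and is not the paper's MapReduce implementation (the prime, the parameters, and the use of `random` are demo-only choices; a real deployment would use the `secrets` module):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for the demo secret

def split(secret, k, n, prime=PRIME):
    """Shamir (k, n): evaluate a random degree-(k-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod prime
            acc = (acc * x + c) % prime
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = split(123456789, k=3, n=5)       # any 3 of the 5 shares suffice
assert reconstruct(shares[:3]) == 123456789
```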

Keywords: cloud computing, data security, MapReduce, Shamir's secret sharing

Procedia PDF Downloads 301
24615 Lexico-semantic and Morphosyntactic Analyses of Student-generated Paraphrased Academic Texts

Authors: Hazel P. Atilano

Abstract:

In this age of AI-assisted teaching and learning, there seems to be a dearth of research literature on the linguistic analysis of English as a Second Language (ESL) student-generated paraphrased academic texts. This study sought to examine the lexico-semantic and morphosyntactic features of paraphrased academic texts generated by ESL students. Employing a descriptive qualitative design, specifically linguistic analysis, the study involved a total of 85 students from senior high school, college, and graduate school enrolled in research courses. Data collection consisted of a 60-minute real-time, on-site paraphrasing exercise using excerpts of 150 to 200 words from discipline-specific literature reviews. A focus group discussion (FGD) was conducted to probe the challenges experienced by the participants. The writing exercise yielded a total of 516 paraphrase pairs, in which 176 paraphrase units (PUs) and 340 non-paraphrase pairs (NPPs) were detected. Findings from the linguistic analysis of PUs reveal that the modifications made to the original texts are predominantly syntax-based (Diathesis Alterations and Coordination Changes), combined with Miscellaneous Changes (Change of Order, Change of Format, and Addition/Deletion). Results of the analysis of paraphrase extremes (PEs) show that Identical Structures resulting from the use of synonymous substitutions, with no significant change in the structural features of the original, are the most frequently occurring instance of PE. The analysis of paraphrase errors reveals that synonymous substitutions resulting in identical structures are the most frequently occurring error leading to PE. Another type of paraphrasing error involves semantic and content loss resulting from the deletion or addition of meaning-altering content. Three major themes emerged from the FGD: (1) The Challenge of Preserving Semantic Content and Fidelity; (2) The Best Words in the Best Order: Grappling with the Lexico-semantic and Morphosyntactic Demands of Paraphrasing; and (3) Contending with Limited Vocabulary, Poor Comprehension, and Lack of Practice. Based on the major findings of the study, a pedagogical paradigm was designed for a sustainable instructional intervention.

Keywords: academic text, lexico-semantic analysis, linguistic analysis, morphosyntactic analysis, paraphrasing

Procedia PDF Downloads 61
24614 Anti-Bubble Painting Booth for Wood Coating Resins

Authors: Abasali Masoumi, Amir Gholamian Bozorgi

Abstract:

To achieve the best quality in wood products such as tabletops and inlaid woods, two principles must be applied: aesthetics and protection against destructive agents. Artists spend a lot of time creating a masterwork, both to display its beautiful appearance and to preserve it for a hundred years, so they need good material and an appropriate method to finish it. Wood painters usually use polyester or epoxy resins. These finishes require special skill to apply but then give a fantastic paint film and clarity. If resin is left to dry exposed to environmental agents such as unstable temperature and dust, it will no doubt become cloudy, crack, blister, and trap wood dust and air bubbles. We have designed a special wood-coating booth (IR Patent No. 70429) for wood-coating resins (polyester and epoxy); this booth provides an adjustable space to control the factors necessary for a good finish. The anti-bubble painting booth is able to remove bubbles from the resin, prevents cracking, and brings out the best in the resin. With this booth, the drying time of the resin is reduced from 24 hours to 6 hours by holding the optimum temperature, which saves considerable time. The booth is environment-friendly and never lets poisonous vapors and other volatile organic compounds (VOCs) enter the workplace atmosphere, because they are very harmful to humans.

Keywords: wood coating, epoxy resin, polyester resin, wood finishes

Procedia PDF Downloads 222
24613 A Modular Framework for Enabling Analysis for Educators with Different Levels of Data Mining Skills

Authors: Kyle De Freitas, Margaret Bernard

Abstract:

Enabling data mining analysis among a wider audience of educators is an active area of research within the educational data mining (EDM) community. This paper proposes a framework for developing an environment that caters for educators who have little technical data mining skill as well as for more advanced users with some data mining expertise. The framework architecture was developed through a review of the strengths and weaknesses of existing models in the literature. The proposed framework provides a modular architecture for future researchers to focus on the development of specific areas within the EDM process. Finally, the paper also highlights a strategy for enabling analysis through either the use of predefined questions or a guided data mining process, and shows how the questions developed and the analysis conducted can be reused and extended over time.
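
One way to picture the two entry points the framework describes, predefined questions for novices and a guided process for experts, is sketched below; every class, function, and field name here is invented for illustration and does not come from the paper:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PredefinedQuestion:
    """A canned analysis a non-technical educator can run unchanged."""
    label: str
    pipeline: Callable[[Any], Any]  # prepared data in, findings out

    def run(self, data):
        return self.pipeline(data)

def at_risk_students(data):
    # placeholder mining step: flag students below a grade threshold
    return [s for s in data if s["grade"] < 50]

CATALOG = [PredefinedQuestion("Which students are at risk?", at_risk_students)]

# novice path: pick a question from the catalog and run it on LMS data
# expert path: compose a new pipeline and register it so it can be reused
CATALOG.append(PredefinedQuestion("Custom cohort query", lambda d: d))
```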

Keywords: educational data mining, learning management system, learning analytics, EDM framework

Procedia PDF Downloads 322
24612 Using Audit Tools to Maintain Data Quality for ACC/NCDR PCI Registry Abstraction

Authors: Vikrum Malhotra, Manpreet Kaur, Ayesha Ghotto

Abstract:

Background: Cardiac registries such as the ACC Percutaneous Coronary Intervention Registry require high-quality data to be abstracted, including data elements such as nuclear cardiology, diagnostic coronary angiography, and PCI. Introduction: The audit tool created is used by data abstractors to provide data audits and to assess the accuracy and inter-rater reliability of the abstraction performed for a health system. This audit tool solution has been developed across 13 registries, including the ACC/NCDR registries, PCI, STS, and Get With The Guidelines. Methodology: The data audit tool was used to audit internal registry abstraction for all data elements, including stress test performed, type of stress test, date of stress test, results of stress test, risk/extent of ischemia, diagnostic catheterization detail, and PCI data elements for the ACC/NCDR PCI registries. It is being used internally across 20 hospital systems, providing abstraction and audit services for them. Results: The data audit tool showed data accuracy and inter-rater reliability (IRR) scores greater than 95% for the PCI registry in 50 PCI registry cases in 2021. Conclusion: The tool is being used internally for surgical societies and across hospital systems. The audit tool enables the abstractor to be assessed by an external abstractor and includes all of the data dictionary fields for each registry.
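
The abstract does not name the IRR statistic used; for two abstractors, percent agreement and Cohen's kappa are common choices, and a minimal sketch of both is:

```python
def agreement_and_kappa(rater1, rater2):
    """Percent agreement and Cohen's kappa for two abstractors' field values."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    labels = set(rater1) | set(rater2)
    # chance agreement from each rater's marginal label frequencies
    pe = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    return po, kappa

# toy demo on one categorical field abstracted twice
po, kappa = agreement_and_kappa(["yes", "no", "yes"], ["yes", "no", "no"])
print(f"agreement={po:.2f}, kappa={kappa:.2f}")
```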

Keywords: abstraction, cardiac registry, cardiovascular registry, registry, data

Procedia PDF Downloads 100
24611 Artificial Intelligence Based Comparative Analysis for Supplier Selection in Multi-Echelon Automotive Supply Chains via GEP and ANN Models

Authors: Seyed Esmail Seyedi Bariran, Laysheng Ewe, Amy Ling

Abstract:

Since supplier selection is a vital decision, selecting suppliers in the best and most accurate way is highly important for enterprises. In this study, a new artificial intelligence approach is applied to remove the weaknesses of supplier selection. The paper has three parts. The first is choosing appropriate criteria for assessing the suppliers' performance. The second is collecting a data set based on expert judgment. Afterwards, the data set is divided into two parts, a training set and a testing set. The training set is used to select the best structures of the GEP and ANN models, and the testing set is used to evaluate the predictive power of the two methods. The results obtained show that the accuracy of GEP is higher than that of ANN. Moreover, unlike ANN, GEP produces an explicit mathematical equation for supplier selection.
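
As a rough sketch of the ANN half of such a comparison (the GEP half would normally use a dedicated gene expression programming library and is omitted), a train/test evaluation over supplier criteria might look like the following; the criteria, data, and network size are invented placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# X: supplier scores on criteria (e.g., cost, quality, delivery); y: expert rating
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.02, 200)

# hold out a testing set, fit the ANN on the training set, score on the test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```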

Keywords: supplier selection, automotive supply chains, ANN, GEP

Procedia PDF Downloads 626
24610 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of an 180-Degree Interpolation Method

Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito

Abstract:

In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method. This method is already used for reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the 180-degree interpolation method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom and a patient study, the data points from the interpolated images fell in good agreement with the data points tracking the accumulation of 99mTc activity over time for an appropriate region of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
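
The combination step lends itself to a compact array manipulation. The sketch below assumes projections are stored as (rotation, angle, row, column) and builds the interleaved half-rotation data sets described above; the storage layout is an assumption, not the authors' stated format:

```python
import numpy as np

def interpolate_half_rotations(proj):
    """proj: array of shape (n_rot, n_ang, h, w), angles ordered 0..359 degrees.
    Between consecutive rotations, inserts a 360-degree set built from the
    second half of rotation r and the first half of rotation r + 1, which
    corresponds to the time halfway between the two rotations."""
    n_rot, n_ang = proj.shape[:2]
    half = n_ang // 2
    frames = [proj[0]]
    for r in range(n_rot - 1):
        mid = np.concatenate(
            [proj[r + 1, :half],   # angles 0..179 from the next rotation
             proj[r, half:]],      # angles 180..359 from the current rotation
            axis=0)
        frames.append(mid)
        frames.append(proj[r + 1])
    return np.stack(frames)  # effectively doubles the temporal sampling
```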

Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA

Procedia PDF Downloads 491
24609 Memory and Myth in Future Cities Case Study: Walking to Imam Reza Holy Shrine of Mashhad, Iran

Authors: Samaneh Eshraghi Ivari, Torkild Thellefsen

Abstract:

The article discusses the significance of understanding the semiotics of future cities and recognizing the signs of cultural identity as contributions to the preservation of citizens' memories. The identities of citizens are conveyed through memories in urban planning, and with the rapid advancement of technology, cities are constantly changing. Therefore, preserving memories in the design of future cities is essential to maintaining a quality environment that reflects the citizens' identities. The article focuses on the semiotics of the morphology of movement patterns in Mashhad city's historical area, using the historical interpretation method. The practice of walking to the shrine of Imam Reza, a religious building, has been a historical and religious custom among Shiites from the past until now. By recognizing the signs that this religious and cultural practice has left on the morphology of the city, the research aims to preserve the place of memories in future cities. Overall, the article highlights the importance of recognizing the cultural and historical significance of cities in designing future urban spaces. By doing so, it is possible to preserve the memories and identities of citizens, ensuring that the urban environment reflects the unique cultural heritage of a place.

Keywords: memories, future cities, movement pattern, Mashhad, semiotics

Procedia PDF Downloads 78
24608 Discussing Embedded versus Central Machine Learning in Wireless Sensor Networks

Authors: Anne-Lena Kampen, Øivind Kure

Abstract:

Machine learning (ML) can be implemented in wireless sensor networks (WSNs) as a central solution or as a distributed solution where the ML is embedded in the nodes. Embedding improves privacy and may reduce prediction delay; in addition, the number of transmissions is reduced. However, quality factors such as prediction accuracy, fault detection efficiency, and coordinated control of the overall system suffer. Here, we discuss and highlight the trade-offs that should be considered when choosing between embedded and centralized ML, especially for multihop networks. In addition, we present estimations that demonstrate the energy trade-offs between embedded and centralized ML. Although the total network energy consumption is lower with central prediction, central prediction makes the network more prone to partitioning due to the high forwarding load on the one-hop nodes. Moreover, the continuous improvements in the number of operations per joule for embedded devices will move the energy balance toward embedded prediction.
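
A back-of-the-envelope version of the energy estimation discussed here can be written down directly; all constants below (radio cost per bit, CPU cost per operation, hop count, model size) are illustrative assumptions rather than the paper's measured values:

```python
def central_energy(n_nodes, samples, bits, hops_avg, e_tx, e_rx):
    """Every sample is forwarded hop by hop to the sink for central prediction."""
    return n_nodes * samples * bits * hops_avg * (e_tx + e_rx)

def embedded_energy(n_nodes, samples, ops_per_inference, e_op):
    """Each node predicts locally; transmissions of rare events are neglected."""
    return n_nodes * samples * ops_per_inference * e_op

# illustrative numbers: 50 nJ/bit radio, 1 nJ/op CPU, 500-bit samples, 4 hops
central = central_energy(100, 1000, 500, 4, 50e-9, 50e-9)
embedded = embedded_energy(100, 1000, 500_000, 1e-9)
print(f"central {central:.1f} J vs embedded {embedded:.1f} J")
# as embedded ops/joule improve (e_op shrinks), the balance shifts toward embedding
```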

Keywords: central machine learning, embedded machine learning, energy consumption, local machine learning, wireless sensor networks, WSN

Procedia PDF Downloads 148
24607 AI-Driven Solutions for Optimizing Master Data Management

Authors: Srinivas Vangari

Abstract:

In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is essential for data-driven enterprises, and Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of artificial intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize the various stages of the master data lifecycle. By integrating AI into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
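
The abstract does not describe the de-duplication mechanics; as a toy illustration of the kind of fuzzy duplicate detection such pipelines automate, consider the following sketch, where the field name, sample records, and threshold are invented:

```python
from difflib import SequenceMatcher

def find_duplicates(records, key, threshold=0.9):
    """Flag likely duplicate master-data records by fuzzy string similarity."""
    dupes = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            score = SequenceMatcher(None, records[i][key].lower(),
                                    records[j][key].lower()).ratio()
            if score >= threshold:
                dupes.append((records[i], records[j], round(score, 2)))
    return dupes

records = [{"name": "Acme Corp."}, {"name": "ACME Corp"}, {"name": "Globex"}]
print(find_duplicates(records, "name", threshold=0.8))
```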

Keywords: artificial intelligence, master data management, data governance, data quality

Procedia PDF Downloads 9
24606 Genetic Data of Deceased People: Solving the Gordian Knot

Authors: Inigo de Miguel Beriain

Abstract:

Genetic data of deceased persons are of great interest for both biomedical research and clinical use, for several reasons. On the one hand, many of our diseases have a genetic component; on the other hand, we share genes with a good part of our biological family. Therefore, it would be possible to improve our response to these pathologies considerably if we could use these data. Unfortunately, at present, the status of data on the deceased is far from satisfactorily resolved by the EU data protection regulation. Indeed, the General Data Protection Regulation has explicitly excluded these data from the category of personal data. This decision has given rise to a fragmented legal framework on this issue, and consequently each EU member state offers very different solutions. For instance, Denmark treats the data as personal data of the deceased person for a set period of time, while others, such as Spain, do not consider the data as such but have introduced some specifically focused regulations on this type of data and its access by relatives. This is an extremely dysfunctional scenario from multiple angles, not least scientific cooperation at the EU level. This contribution attempts to outline a solution to this dilemma through an alternative proposal. Its main hypothesis is that health data are, in a sense, a rara avis among data in general because they do not refer to one person but to several. Hence, it is possible to think that all of them can be considered data subjects (although not all of them can exercise the corresponding rights in the same way). When the person from whom the data were obtained dies, the data remain personal data of his or her biological relatives, and the general regime provided for in the GDPR may apply to them. As these are personal data, we could go back to thinking in terms of a general prohibition of data processing, with the exceptions provided for in Article 9.2 and the legal bases included in Article 6. This may be complicated in practice, given that, since we are dealing with data that refer to several data subjects, it may be complex to rely on some of these bases, such as consent. Furthermore, there are theoretical arguments that may oppose this hypothesis. In this contribution, it is shown, however, that none of these objections is of sufficient substance to delegitimize the argument presented. Therefore, the conclusion of this contribution is that we can indeed build a general framework on the processing of personal data of deceased persons in the context of the GDPR. This would constitute a considerable improvement over the current regulatory framework, although some clarifications will be necessary for its practical application.

Keywords: collective data conceptual issues, data from deceased people, genetic data protection issues, GDPR and deceased people

Procedia PDF Downloads 152
24605 Steps towards the Development of National Health Data Standards in Developing Countries

Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian Murray

Abstract:

The health data standards proliferating today are somewhat overlapping and conflicting, resulting in market confusion and growing proprietary interests. The government's role and support in the standardization of health data are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with the implementation of non-standard systems. The normative literature has not explored the different steps the government must undertake towards the development of national health data standards. Based on the lessons learned from a qualitative study investigating the issues in the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, and on the opinions and feedback of experts in the areas of data exchange, standards, and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan, and a national accreditation body; more important still is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners in planning health data standards, in particular in developing countries.

Keywords: interoperability, medical data exchange, health data standards, case study, Saudi Arabia

Procedia PDF Downloads 336
24604 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map

Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo

Abstract:

Recently, technologies based on three-dimensional (3D) space information are being developed, and quality of life is improving as a result. Research on the real-time digital map (RDM) is now being conducted to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research subjects on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis using 3D space information in a U-City and for other space information utilization technologies.

Keywords: RDM, multi-source data, big data, U-City

Procedia PDF Downloads 429
24603 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources, and systems. Part of it comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level these organizations present some deficiencies, partly because there is little interest in extracting knowledge from their data sources, and partly because they lack the operational capabilities to tackle such projects. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are the repository basis for creating models or patterns (of the behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP, and AUP. It also draws on object-relational models, spatial data models, and the baseline of data modeling under UML and big data, seeking to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and models derived from the structured fact objects.

Keywords: data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse

Procedia PDF Downloads 401
24602 Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images

Authors: Haoqi Gao, Koichi Ogawara

Abstract:

Retinal vascular segmentation of color fundus images is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening for fundus diseases has great value for clinical medical diagnosis. Traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new generative adversarial network based on CycleGAN for retinal fundus images. This method can generate not only synthetic fundus images but also the corresponding segmentation masks, which has clear application value and poses challenges in computer vision and computer graphics. In the results, we evaluate our proposed method both quantitatively and qualitatively. For generated segmented images, our method achieves a Dice coefficient of 0.81 and a PR of 0.89 on the DRIVE dataset. For generated synthetic fundus images, we use a "toy experiment" to verify the state-of-the-art performance of our method.
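
The method builds on CycleGAN, whose textbook generator objective combines a least-squares adversarial term with a cycle-consistency term; the sketch below (in PyTorch, with trivial stand-in networks) shows that standard loss rather than the authors' exact variant:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cycle_gan_generator_loss(G_AB, G_BA, D_A, D_B, real_A, real_B, lam=10.0):
    """Least-squares adversarial loss plus cycle-consistency (textbook CycleGAN)."""
    fake_B, fake_A = G_AB(real_A), G_BA(real_B)
    # generators try to make the discriminators output 'real' (i.e., 1)
    adv = F.mse_loss(D_B(fake_B), torch.ones_like(D_B(fake_B))) + \
          F.mse_loss(D_A(fake_A), torch.ones_like(D_A(fake_A)))
    # translating there and back should reproduce the input image
    cyc = F.l1_loss(G_BA(fake_B), real_A) + F.l1_loss(G_AB(fake_A), real_B)
    return adv + lam * cyc

# smoke test with trivial stand-in networks (1x1 convolutions)
G_AB, G_BA = nn.Conv2d(3, 3, 1), nn.Conv2d(3, 3, 1)
D_A, D_B = nn.Conv2d(3, 1, 1), nn.Conv2d(3, 1, 1)
loss = cycle_gan_generator_loss(G_AB, G_BA, D_A, D_B,
                                torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```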

Keywords: retinal vascular segmentation, generative adversarial network, CycleGAN, fundus images

Procedia PDF Downloads 139
24601 Knowledge Management for Competitiveness and Performances in Higher Educational Institutes

Authors: Jeyarajan Sivapathasundram

Abstract:

Knowledge management has been recognised as an emerging factor in competitiveness among institutions and in firm performance. As knowledge-rich institutions, higher education institutes therefore have to recognise knowledge management as a resource for achieving competitive advantage. The present research draws its results from postgraduate research on knowledge management conducted at non-state higher educational institutes of Sri Lanka, and it aims to discover, from the results of that postgraduate study, how knowledge management serves competition and firm performance in higher educational institutes. The results come in pairs, each consisting of a knowledge management practice and the reason behind the existence of that practice. The present research therefore developed a filter to pick the pairs that satisfy its conditions of competition and firm performance. One such pair is benchmarking, which is practised in order to compete ethically through the conduct of courses. As the postgraduate research tested the results of foreign studies in a qualitative paradigm, the findings of the present research are generalised facts about knowledge management for competitiveness and performance in higher educational institutes. Further, the research method used attributes that explain competition and performance in its filter to discover the relevant pairs. The facts regarding knowledge management for competition and performance in higher educational institutes presented in this publication are thus drawn from the generalised results.

Keywords: competition in and among higher educational institutes, performances of higher educational institutes, noun based filtering, production out of generalisation of a research

Procedia PDF Downloads 133
24600 Identifying Model to Predict Deterioration of Water Mains Using Robust Analysis

Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee

Abstract:

In South Korea, it is difficult to obtain data for statistical pipe assessment. In this paper, to address this issue, we examine how the various statistical models presented in earlier work behave when the data are mixed with noise, and whether they can be applied in South Korea. Three major types of model are studied; where data are presented in the original paper, we add noise to those data and observe how the model response changes. Moreover, we generate data from the models in the papers and analyse the effect of noise. From this, we can determine the robustness of each model and its applicability in Korea.
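
The perturbation procedure described here can be framed generically; in the sketch below, `simulate` and `fit` are placeholders for a deterioration model's data generator and parameter estimator, and the linear toy curve and noise levels are arbitrary:

```python
import numpy as np

def noise_sensitivity(fit, simulate, theta_true, sigmas, n_rep=100, seed=0):
    """Refit a model on noise-perturbed data; report mean parameter error per level."""
    rng = np.random.default_rng(seed)
    drift = {}
    for s in sigmas:
        errs = []
        for _ in range(n_rep):
            y = simulate(theta_true, rng)                   # clean data from the model
            y_noisy = y + rng.normal(0.0, s, size=y.shape)  # contaminate it
            errs.append(np.linalg.norm(fit(y_noisy) - theta_true))
        drift[s] = float(np.mean(errs))
    return drift

# toy demo with a linear 'deterioration' curve y = a + b*t
t = np.arange(30, dtype=float)
simulate = lambda th, rng: th[0] + th[1] * t
fit = lambda y: np.polynomial.polynomial.polyfit(t, y, 1)
print(noise_sensitivity(fit, simulate, np.array([1.0, 0.5]), sigmas=[0.1, 1.0]))
```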

Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences

Procedia PDF Downloads 739
24599 Environmental Law and Payment for Environmental Services: Perceptions of the Family Farmers of the Federal District, Brazil

Authors: Kever Bruno Paradelo Gomes, Rosana Carvalho Cristo Martins

Abstract:

Payment for Environmental Services (PSA) has been a strategy used since the late 1990s by Latin American countries to finance environmental conservation, and it has been absorbing a growing amount of time in discussions around environmentally sustainable development strategies in the world. In Brazil, this theme has permeated the discussions since the publication of the new Forest Code. The objective of this work was to verify the perception of the farmers resident in the region of Ponte Alta, Gama, Federal District, Brazil, of environmental legislation and payments for environmental services. The work was carried out on 99 rural properties of the family farmers of the Rural Nucleus Ponte Alta, Administrative Region of Gama, in the city of Brasília, Federal District, Brazil. The present research is methodologically characterized as quantitative, exploratory, and descriptive in nature. The data treatment was performed through descriptive statistical analysis and hypothesis testing. The perceptions of environmental legislation among the respondents in the rural area of Ponte Alta were positive. Although most of the family farmers interviewed have some knowledge of environmental legislation, it is perceived that, in practice, the environmental adequacy of properties is ineffective given the current situation of sustainable rural development; there is an abyss between what is envisaged by legislation and the reality in the field. Thus, as in the reports of other researchers, the majority of respondents were found not to be aware of PSA (62.62%). Among those interviewed who were aware of the subject, two learned of it through a course, three through the university, two through TV, and five through other people. Planting native forest species on the rural property was the practice most often cited by farmers in case they received a payment for environmental services. Reflections on the environment allow us to infer that the effectiveness and fulfillment of the incentives and rewards within public policies encouraging the maintenance of environmental services, already existing in all spheres of government, are of great relevance to the process of environmental sustainability of rural properties. The present research is an important tool to promote the discussion and formulation of public policies focused on sustainable rural development, especially payments for environmental services, a space of great interest for strengthening the social group dedicated to production. Public policies that are efficient and accessible to small rural producers become decisive elements in promoting changes in behavior in the field, be they economic, social, or environmental.

Keywords: forest code, public policy, rural development, sustainable agriculture

Procedia PDF Downloads 145
24598 Automated Testing to Detect Instance Data Loss in Android Applications

Authors: Anusha Konduru, Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai

Abstract:

Mobile applications are increasing in number significantly, each addressing the requirements of many users. However, quick development and enhancement cycles are resulting in many underlying defects. Android apps create and handle a large variety of 'instance' data that has to persist across runs, such as the current navigation route, workout results, antivirus settings, or game state. Due to the nature of Android, an app can be paused, sent into the background, or killed at any time. If the instance data is not saved and restored between runs, then in addition to data loss, partially saved or corrupted data can crash the app upon resume or restart. However, it is difficult for the programmer to manually test this issue for all activities. The result is data loss: the data entered by the user are not saved when there is an interruption. This issue can degrade the user experience because the user needs to re-enter the information after each interruption. Automated testing to detect such data loss is important to improve the user experience. This research proposes DroidDL, a data loss detector for Android, which detects instance data loss in a given Android application. We have tested 395 applications and found 12 with the data loss issue. The approach proved highly accurate and reliable in finding apps with this defect, and it can be used by Android developers to avoid such errors.

Keywords: Android, automated testing, activity, data loss

Procedia PDF Downloads 232
24597 Big Data: Appearance and Disappearance

Authors: James Moir

Abstract:

The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based upon the analysis of large and varied sources of data. These can range from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and to show their appearance through causal mechanisms, while twentieth-century science attempted to save the appearances and relinquish causal explanations. Now twenty-first-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed reality model of science and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.

Keywords: big data, appearance, disappearance, surface, epistemology

Procedia PDF Downloads 415
24596 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., the data which do almost nothing to reduce the confidence interval of the estimated parameters and thus could be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem establishing the lower precision of the integrated data approach compared to the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
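
The link between sensitivities and confidence intervals exploited here is the standard linearized (Fisher-information) approximation; a generic sketch, with an exponential toy curve standing in for the actual FRAP recovery model, is:

```python
import numpy as np

def confidence_halfwidths(model, theta, t_pts, sigma, eps=1e-6):
    """95% half-widths from the linearized sensitivity matrix S_ij = dy_i/dtheta_j."""
    y0 = model(theta, t_pts)
    S = np.empty((y0.size, theta.size))
    for j in range(theta.size):  # forward-difference sensitivities
        d = np.zeros_like(theta)
        d[j] = eps * max(1.0, abs(theta[j]))
        S[:, j] = (model(theta + d, t_pts) - y0) / d[j]
    cov = sigma ** 2 * np.linalg.inv(S.T @ S)  # Fisher-information approximation
    return 1.96 * np.sqrt(np.diag(cov))

# toy recovery curve y(t) = A * (1 - exp(-k t)) with parameters theta = (A, k);
# comparing a dense and a reduced sampling shows how data selection widens the CIs
model = lambda th, t: th[0] * (1.0 - np.exp(-th[1] * t))
theta = np.array([1.0, 0.8])
print(confidence_halfwidths(model, theta, np.linspace(0.1, 10, 200), sigma=0.02))
print(confidence_halfwidths(model, theta, np.linspace(0.1, 10, 20), sigma=0.02))
```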

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 273
24595 Exploring the Feasibility of Utilizing Blockchain in Cloud Computing and AI-Enabled BIM for Enhancing Data Exchange in Construction Supply Chain Management

Authors: Tran Duong Nguyen, Marwan Shagar, Qinghao Zeng, Aras Maqsoodi, Pardis Pishdad, Eunhwa Yang

Abstract:

Construction supply chain management (CSCM) involves the collaboration of many disciplines and actors, which generates vast amounts of data. However, inefficient, fragmented, and non-standardized data storage often hinders this data exchange. The industry has adopted building information modeling (BIM), a digital representation of a facility's physical and functional characteristics, to improve collaboration, enhance transmission security, and provide a common data exchange platform. Still, the volume and complexity of the data require tailored information categorization aligned with stakeholders' preferences and demands. To address this, artificial intelligence (AI) can be integrated to handle the data's magnitude and complexity. This research aims to develop an integrated and efficient approach for data exchange in CSCM by utilizing AI. The paper covers five main objectives: (1) investigate existing frameworks and BIM adoption; (2) identify challenges in data exchange; (3) propose an integrated framework; (4) enhance data transmission security; and (5) develop data exchange in CSCM. The proposed framework demonstrates how integrating BIM with other technologies, such as cloud computing, blockchain, and AI applications, can significantly improve the efficiency and accuracy of data exchange in CSCM.
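
On the blockchain side, the property the framework leans on for transmission security is tamper evidence through hash chaining; a minimal platform-independent sketch (the record contents are invented) is:

```python
import hashlib, json, time

def add_block(chain, payload):
    """Append a tamper-evident record; each block hashes its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"ts": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Recompute every hash and link; any edit to a past record breaks the chain."""
    for i, b in enumerate(chain):
        body = {k: b[k] for k in ("ts", "payload", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != b["hash"] or (i and b["prev"] != chain[i - 1]["hash"]):
            return False
    return True

chain = []
add_block(chain, {"doc": "BIM model rev 42", "actor": "contractor-A"})
add_block(chain, {"doc": "schedule update", "actor": "supplier-B"})
assert verify(chain)
```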

Keywords: construction supply chain management, BIM, data exchange, artificial intelligence

Procedia PDF Downloads 15
24594 Representation Data without Lost Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data are believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate the uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain data condition while minimizing the loss of compression properties.

Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction

Procedia PDF Downloads 423
24593 Efficient GIS Based Public Health System for Disease Prevention

Authors: K. M. G. T. R. Waidyarathna, S. M. Vidanagamachchi

Abstract:

The public health system that exists in Sri Lanka has a satisfactorily complete information flow when compared with systems in other developing countries. The availability of a good health information system has contributed immensely to achieving health indices in line with those of developed countries like the US and the UK. The health information flow at the moment, however, is completely paper based. In Sri Lanka, fields like banking, accounting, and engineering have incorporated information and communication technology to the same extent as in any other country. The field of medicine has lagged behind those fields throughout the world, mainly due to its complexity, issues like privacy and confidentiality, and a lack of people with knowledge of both information technology (IT) and medicine. Sri Lanka's situation is much worse, and the gap is rapidly increasing as huge IT initiatives are launched by public-private partnerships in other countries. The major goal of the framework is to help minimize the spread of diseases. To achieve that, a web-based framework with web mapping should be implemented for this application domain. The aim of this GIS-based public health system is a secure, flexible, easy-to-maintain environment for creating and maintaining public health records that is easy for the relevant parties to interact with.

Keywords: DHIS2, GIS, public health, Sri Lanka

Procedia PDF Downloads 561
24592 Data Mining As A Tool For Knowledge Management: A Review

Authors: Maram Saleh

Abstract:

Knowledge has become an essential resource in today's economy and the most important asset for maintaining competitive advantage in organizations. The importance of knowledge has led organizations to manage their knowledge assets and resources through all the knowledge management stages: knowledge creation, knowledge storage, knowledge sharing, and knowledge use. Research on data mining has continued to grow in recent years in both business and educational fields. Data mining is one of the most important steps of the knowledge discovery in databases process, aiming to extract implicit, unknown, but useful knowledge, and it is considered a significant subfield of knowledge management. Data mining has great potential to help organizations focus on extracting the most important information from their data warehouses. Data mining tools and techniques can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This review paper explores the applications of data mining techniques in supporting the knowledge management process as an effective knowledge discovery technique. In this paper, we identify the relationship between data mining and knowledge management and then focus on introducing some applications of data mining techniques in knowledge management for some real-life domains.

Keywords: data mining, knowledge management, knowledge discovery, knowledge creation

Procedia PDF Downloads 202
24591 Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s, and over time a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies. It is the process of grouping data into clusters such that the data within a cluster are as similar as possible while distinct clusters are as dissimilar as possible. Many of the traditional cluster algorithms have limitations in dealing with data sets containing categorical attributes. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-modes (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm, has been reported for clustering datasets with categorical values. It is a form of soft clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated data sets using the FKM cluster algorithm. As a significant feature of the study, the FKM cluster algorithm can determine anomalies together with their degree of abnormality, in contrast to numerous anomaly detection algorithms. According to the results, the FKM cluster algorithm showed good performance in anomaly detection, for data containing both a single anomaly and multiple anomalies.
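
The scoring idea can be sketched compactly: given cluster modes, fuzzy memberships follow from the simple-matching dissimilarity, and a point that belongs strongly to no cluster is anomalous. The sketch below omits the iterative mode-update loop of full fuzzy k-modes, and the fuzzifier value and toy data are assumptions:

```python
import numpy as np

def matching_dissim(a, b):
    """Simple matching dissimilarity: number of mismatched categorical attributes."""
    return np.sum(a != b, axis=-1)

def fuzzy_memberships(X, modes, m=1.5, eps=1e-9):
    """Fuzzy membership of each point in each cluster mode (fuzzifier m > 1)."""
    d = np.stack([matching_dissim(X, mode) for mode in modes], axis=1) + eps
    inv = d ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def anomaly_scores(X, modes, m=1.5):
    """1 minus the strongest membership: high score = belongs nowhere strongly."""
    u = fuzzy_memberships(np.asarray(X), [np.asarray(c) for c in modes], m)
    return 1.0 - u.max(axis=1)

# toy categorical data: the last point matches neither mode and scores highest
X = [["a", "x", "p"], ["a", "x", "q"], ["b", "y", "q"], ["z", "z", "z"]]
modes = [["a", "x", "q"], ["b", "y", "q"]]
print(anomaly_scores(X, modes))
```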

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 48
24590 Enhancing Seismic Performance of Ductile Moment Frames with Delayed Wire-Rope Bracing Using Middle Steel Plate

Authors: Babak Dizangian, Mohammad Reza Ghasemi, Akram Ghalandari

Abstract:

Moment frames have considerable ductility against cyclic lateral loads and displacements; however, if this feature causes the relative displacement to exceed the permissible limit, it can impose unfavorable hysteretic behavior on the frame. Therefore, adding a bracing system capable of preserving the capacity for high energy absorption and controlling displacements without a considerable increase in stiffness is quite important. This paper investigates the retrofitting of a single-storey steel moment frame through a delayed wire-rope bracing system using a middle steel plate. In this model, the steel plate lies where the wire ropes meet, and the model geometry is such that the cables are continuously under tension, so they can take full advantage of their inherent capacity to tolerate tensile stress. Using the steel plate also reduces the system stiffness considerably compared with cross-bracing systems and preserves the ductile frame's energy absorption capacity. In this research, software models of the delayed wire-rope bracing system have been studied, validated, and compared with other researchers' laboratory test results.

Keywords: cyclic loading, delayed wire rope bracing, ductile moment frame, energy absorption, hysteresis curve

Procedia PDF Downloads 285
24589 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 383