Search results for: housing data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25017

24387 The Planning Criteria of Block-Unit Redevelopment to Improve Residential Environment: Focused on Redevelopment Project in Seoul

Authors: Hong-Nam Choi, Hyeong-Wook Song, Sungwan Hong, Hong-Kyu Kim

Abstract:

In Korea, the elements that determine the quality of the residential environment are not only diverse but also vary considerably. These elements are often disregarded, however, in favor of a uniform style of residential environment that centers on apartment construction and business-driven plans. Recently, block-unit redevelopment has emerged as the standout alternative to standardized redevelopment projects, but construction remains inefficient because the planning criteria are ill-defined. This research therefore analyzes and categorizes the development methods and legal grounds of redevelopment project districts, their plan determinants, and the applicable standards. The purpose of this study is to provide a basis for the comparative analysis of planning standards in the future.

Keywords: shape restrictions, improvement of regulation, diversity of residential environment, classification of redevelopment project, planning criteria of redevelopment, special architectural district (SAD)

Procedia PDF Downloads 474
24386 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process

Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek

Abstract:

As big data analysis grows in importance, yield prediction using data from the semiconductor process becomes essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Much research has been conducted using die data from the semiconductor test process, but such analyses are limited because the test data are only indirectly related to the final test results. This study therefore proposes a framework for analysis through clustering that uses more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering is performed on the map data. In the third phase, patterns that affect the final test result are identified. Finally, the proposed three steps are applied to actual industrial data, and the experimental results demonstrate the potential for field application.
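
To make the three phases concrete, here is a minimal, hypothetical sketch (not the authors' implementation): simulated fail-bit counts over an assumed 4x4 grid of sub-areas per die are flattened into feature vectors and clustered with k-means; the grid size, cluster count, and injected failure pattern are all assumptions.

```python
# Hypothetical sketch of feature-based die-map clustering (not the authors'
# code). Each die is summarized by fail-bit counts over an assumed 4x4 grid of
# sub-areas; the cluster count and injected pattern are also assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Phase 1: build die maps -- one 4x4 fail-bit map per die (simulated here).
die_maps = rng.poisson(lam=3, size=(200, 4, 4)).astype(float)
die_maps[:50, :, :2] += 10          # inject an "edge failure" pattern

# Phase 2: flatten each map into a feature vector and cluster.
features = die_maps.reshape(len(die_maps), -1)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Phase 3: inspect the mean map per cluster to relate patterns to test results.
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} dies")
    print(features[labels == k].mean(axis=0).reshape(4, 4).round(1))
```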

Keywords: die-map clustering, feature extraction, pattern recognition, semiconductor manufacturing process

Procedia PDF Downloads 390
24385 Spatial Integrity of Seismic Data for Oil and Gas Exploration

Authors: Afiq Juazer Rizal, Siti Zaleha Misnan, M. Zairi M. Yusof

Abstract:

Seismic data is the fundamental tool used by exploration companies to identify potential hydrocarbons. However, the importance of seismic trace data is undermined unless the geospatial component of the data is understood. Deriving a proposed well from data with positional ambiguity jeopardizes business decisions and the millions of dollars of investment that every oil and gas company would like to protect. A spatial integrity QC workflow has been introduced in PETRONAS to ensure that positional errors within seismic data are recognized throughout the exploration lifecycle, from acquisition through processing to seismic interpretation. This includes, amongst other tests, verifying that the data are referenced to the appropriate coordinate reference system, validating the survey configuration, and verifying the geometry loading. The direct outcome of implementing the workflow is improved reliability and integrity of the subsurface geological models produced by geoscientists, and it provides important input to potential hazard assessments where positional accuracy is crucial. The development of this workflow is part of a larger geospatial integrity management effort, in which nearly eighty percent of oil and gas data are location-dependent.
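
One test in such a workflow can be sketched as follows; this is an illustrative check, not PETRONAS' actual code, and the EPSG code, bounding box, and function names are assumptions.

```python
# Illustrative sketch of one spatial-integrity test (not PETRONAS' actual code):
# confirm the declared CRS and check that every trace coordinate falls inside
# the declared survey bounding box. EPSG:32647 (WGS 84 / UTM 47N) is only an
# example; the function and field names are assumptions.
def check_survey_bounds(traces, bbox, declared_epsg, expected_epsg=32647):
    """traces: iterable of (x, y); bbox: (xmin, ymin, xmax, ymax)."""
    issues = []
    if declared_epsg != expected_epsg:
        issues.append(f"CRS mismatch: declared EPSG:{declared_epsg}, "
                      f"expected EPSG:{expected_epsg}")
    xmin, ymin, xmax, ymax = bbox
    for i, (x, y) in enumerate(traces):
        if not (xmin <= x <= xmax and ymin <= y <= ymax):
            issues.append(f"trace {i}: ({x}, {y}) outside survey bounding box")
    return issues

# One trace is deliberately placed outside the box to trigger a finding.
print(check_survey_bounds([(5.0e5, 9.2e6), (9.9e5, 9.9e6)],
                          bbox=(4.0e5, 9.0e6, 6.0e5, 9.5e6),
                          declared_epsg=32647))
```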

Keywords: oil and gas exploration, PETRONAS, seismic data, spatial integrity QC workflow

Procedia PDF Downloads 208
24384 The Early Stages of the Standardisation of Finnish Building Sector

Authors: Anu Soikkeli

Abstract:

Early 20th century functionalism aimed at generalising living and rationalising construction, thus laying the foundation for the standardisation of construction components and products. From the 1930s onwards, all measurement and quality instructions for building products, different types of building components, descriptions of working methods complying with advisable building practices, planning, measurement and calculation guidelines, terminology, etc. were called standards. Standardisation was regarded as a necessary prerequisite for the mass production of housing. This article examines the early stages of standardisation in Finland in the 1940s and 1950s, as reflected in the working history of an individual architect, Erkki Koiso-Kanttila (1914-2006). In 1950 Koiso-Kanttila was appointed Head of Design of the Finnish Association of Architects' Building Standards Committee, a position he held until 1958. His main responsibilities were the development of the RT Building Information File and the compilation of its files.

Keywords: architecture, post WWII period, reconstruction, standardisation

Procedia PDF Downloads 406
24383 Single-Cell Visualization with Minimum Volume Embedding

Authors: Zhenqiu Liu

Abstract:

Visualizing the heterogeneity within cell populations in single-cell RNA-seq data is crucial for studying the functional diversity of cells. However, because of the high levels of noise, outliers, and dropouts, it is very challenging to measure cell-to-cell similarity (distance) and to visualize and cluster the data in a low-dimensional space. Minimum volume embedding (MVE) projects the data into a lower-dimensional space and is a promising tool for data visualization. However, solving the underlying semidefinite program (SDP) is computationally inefficient when the sample size is large, so MVE is not directly applicable to single-cell RNA-seq data with thousands of samples. In this paper, we develop an efficient algorithm based on an accelerated proximal gradient method and use it to visualize single-cell RNA-seq data efficiently. We demonstrate that the proposed approach separates known subpopulations in single-cell data sets more accurately than other existing dimension reduction methods.
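
The accelerated proximal gradient iteration the abstract refers to can be illustrated on a toy problem; the sketch below applies a FISTA-style iteration to a small lasso objective rather than the MVE semidefinite objective, so it shows only the optimizer, not the paper's algorithm.

```python
# Sketch of an accelerated proximal gradient (FISTA-style) iteration, shown on
# a toy lasso problem rather than the MVE semidefinite objective in the paper.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2        # acceleration weights
        y = x_new + ((t - 1) / t_new) * (x_new - x)     # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A, x_true = rng.normal(size=(50, 20)), np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(fista(A, b).round(2))   # recovers a sparse vector close to x_true
```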

Keywords: single-cell RNA-seq, minimum volume embedding, visualization, accelerated proximal gradient method

Procedia PDF Downloads 217
24382 Cloud Data Security Using Map/Reduce Implementation of Secret Sharing Schemes

Authors: Sara Ibn El Ahrache, Tajje-eddine Rachidi, Hassan Badir, Abderrahmane Sbihi

Abstract:

Recently, there has been growing confidence in the favorable use of big data drawn from the vast amounts of information stored in cloud computing systems. Data kept on such systems can be retrieved through the network at the user's convenience. However, the data that users send include private information, so information leakage from these data is now a major social problem. Secret sharing schemes have lately proven relevant to cloud computing: users distribute their data across several servers. Notably, in a (k, n) threshold scheme, data security is assured as long as, throughout the whole life of the secret, an opponent compromises fewer than k of the n servers. A number of secret sharing algorithms have been suggested to deal with these security issues. In this paper, we present a MapReduce implementation of Shamir's secret sharing scheme to increase its performance and to achieve optimal security for cloud data. Several tests were run, demonstrating the contributions of the proposed approach, which are considerable in terms of both security and performance.
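
Shamir's scheme itself is easy to sketch; the following minimal, illustrative implementation shows share generation and reconstruction over a prime field (a MapReduce deployment, as in the paper, would parallelize these steps across mappers and reducers).

```python
# Minimal sketch of Shamir's (k, n) threshold scheme over a prime field
# (illustrative only; the MapReduce deployment in the paper would parallelize
# share generation and reconstruction).
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, k, n):
    # A random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret from any k shares.
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice -> 123456789
```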

Keywords: cloud computing, data security, MapReduce, Shamir's secret sharing

Procedia PDF Downloads 291
24381 Contestation of Local and Non-Local Knowledge in Developing Bali Cattle at Barru Regency, Province of South Sulawesi, Indonesia

Authors: A. Amidah Amrawaty, M. Saleh S. Ali, Darmawan Salman

Abstract:

The aims of this study were to identify local and non-local knowledge in Bali cattle development and to analyze the contestation between local and non-local knowledge. The study adopted a constructivist paradigm with a qualitative, descriptive approach using the case study method. It was conducted in four villages subject to the Agropolitan Program, i.e. Palakka, Tompo, Galung and Anabanua in Barru District, province of South Sulawesi. The results indicated that the local knowledge of the farmers covered: a) animal housing, b) disease prevention and control, c) feed, d) breed selection, e) sharing arrangements, and f) marketing. Generally, there are three patterns of knowledge contestation, namely coexistence, 'zero-sum game' and hybridization, but in this research only the coexistence and zero-sum-game patterns took place, while hybridization did not occur.

Keywords: contestation, local knowledge, non-local knowledge, developing of Bali cattle

Procedia PDF Downloads 389
24380 A Modular Framework for Enabling Analysis for Educators with Different Levels of Data Mining Skills

Authors: Kyle De Freitas, Margaret Bernard

Abstract:

Enabling data mining analysis among a wider audience of educators is an active area of research within the educational data mining (EDM) community. This paper proposes a framework for developing an environment that caters both to educators who have little technical data mining skill and to more advanced users with some data mining expertise. The framework architecture was developed through a review of the strengths and weaknesses of existing models in the literature. The proposed framework provides a modular architecture that lets future researchers focus on the development of specific areas within the EDM process. Finally, the paper highlights a strategy for enabling analysis through either predefined questions or a guided data mining process, and shows how the questions developed and the analyses conducted can be reused and extended over time.
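
The "predefined questions" strategy can be pictured as a registry mapping plain-language questions to reusable analysis functions. The sketch below is a hypothetical illustration; the question text, record fields, and threshold are invented.

```python
# Hypothetical sketch of the "predefined questions" idea: a registry that maps
# plain-language questions to reusable analysis functions. The question text,
# record fields, and threshold are all invented for illustration.
from typing import Callable, Dict, List

QUESTIONS: Dict[str, Callable[[List[dict]], str]] = {}

def predefined(question: str):
    """Register an analysis function under a plain-language question."""
    def register(fn):
        QUESTIONS[question] = fn
        return fn
    return register

@predefined("Which students are at risk of failing?")
def at_risk(records):
    flagged = [r["student"] for r in records if r["avg_grade"] < 50]
    return f"At-risk students: {flagged}"

# A novice educator picks a predefined question; an advanced user can register
# new questions (or run a guided mining process) without touching existing code.
records = [{"student": "A", "avg_grade": 72}, {"student": "B", "avg_grade": 41}]
print(QUESTIONS["Which students are at risk of failing?"](records))
```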

Keywords: educational data mining, learning management system, learning analytics, EDM framework

Procedia PDF Downloads 313
24379 Using Audit Tools to Maintain Data Quality for ACC/NCDR PCI Registry Abstraction

Authors: Vikrum Malhotra, Manpreet Kaur, Ayesha Ghotto

Abstract:

Background: Cardiac registries such as the ACC Percutaneous Coronary Intervention (PCI) Registry require high-quality data to be abstracted, including data elements such as nuclear cardiology, diagnostic coronary angiography, and PCI. Introduction: The audit tool created here is used by data abstractors to perform data audits and to assess the accuracy and inter-rater reliability of the abstraction performed for a health system. This audit tool solution has been developed across 13 registries, including the ACC/NCDR registries, PCI, STS, and Get With The Guidelines. Methodology: The data audit tool was used to audit internal registry abstraction for all data elements, including stress test performed, type of stress test, date of stress test, results of stress test, risk/extent of ischemia, diagnostic catheterization detail, and the PCI data elements of the ACC/NCDR PCI registry. It is used across 20 hospital systems internally, providing abstraction and audit services for them. Results: The data audit tool yielded accuracy and inter-rater reliability (IRR) scores greater than 95% for the PCI registry across 50 PCI registry cases in 2021. Conclusion: The tool is used internally for surgical societies and across hospital systems. It enables an abstractor to be assessed by an external abstractor and includes all of the data dictionary fields for each registry.
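
A core computation of such an audit tool, per-case agreement between abstractor and auditor, can be sketched as simple percent agreement; the field names below are hypothetical, and real NCDR audits follow the registry data dictionary.

```python
# Sketch of how an audit tool might score agreement between an abstractor and
# an auditor as percent agreement per data element. Field names are
# hypothetical; real NCDR audits use the registry data dictionary fields.
def percent_agreement(abstractor: dict, auditor: dict) -> float:
    fields = abstractor.keys() & auditor.keys()
    matches = sum(abstractor[f] == auditor[f] for f in fields)
    return 100.0 * matches / len(fields)

case_abstractor = {"stress_test": "yes", "test_type": "nuclear", "ischemia": "low"}
case_auditor    = {"stress_test": "yes", "test_type": "nuclear", "ischemia": "moderate"}
print(f"{percent_agreement(case_abstractor, case_auditor):.1f}% agreement")  # 66.7%
```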

Keywords: abstraction, cardiac registry, cardiovascular registry, registry, data

Procedia PDF Downloads 94
24378 Artificial Intelligence Based Comparative Analysis for Supplier Selection in Multi-Echelon Automotive Supply Chains via GEP and ANN Models

Authors: Seyed Esmail Seyedi Bariran, Laysheng Ewe, Amy Ling

Abstract:

Since supplier selection is a vital decision, selecting suppliers in the best and most accurate way is highly important for enterprises. In this study, a new artificial intelligence approach is applied to address the weaknesses of supplier selection. The paper has three parts. The first is choosing appropriate criteria for assessing supplier performance. The next is collecting the data set based on expert input. The data set is then divided into two parts, a training set and a testing set. The training set is used to select the best structures of the gene expression programming (GEP) and artificial neural network (ANN) models, and the testing set is used to evaluate the power of these methods. The results show that GEP is more accurate than the ANN. Moreover, unlike the ANN, GEP produces an explicit mathematical equation for supplier selection.
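
The ANN half of the comparison can be sketched with scikit-learn; GEP needs a dedicated library, so only the data split and ANN training/testing are shown, and the criteria, weights, and scores are simulated.

```python
# Sketch of the ANN half of the comparison using scikit-learn. GEP (gene
# expression programming) needs a dedicated library, so only the data split and
# ANN training/testing are shown; criteria, weights, and scores are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(120, 5))            # 5 supplier-assessment criteria
y = X @ np.array([0.4, 0.3, 0.1, 0.1, 0.1])      # toy expert score

# Divide the expert data set into training and testing parts, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print(f"ANN R^2 on the testing set: {ann.score(X_te, y_te):.3f}")
```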

Keywords: supplier selection, automotive supply chains, ANN, GEP

Procedia PDF Downloads 615
24377 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of a 180-Degree Interpolation Method

Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito

Abstract:

In general, dynamic SPECT data acquisition needs a few minutes for one rotation, so the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method, which is already used in the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the two rotations. In both a phantom and a patient study, the data points from the interpolated images fell into good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
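
The combination step can be sketched on simulated projection data; the view count and array sizes below are assumptions.

```python
# Sketch of the 180-degree interpolation on simulated projections: the second
# half of rotation r is combined with the first half of rotation r+1 to form an
# extra 360-degree set centred between the two rotations. Sizes are assumptions.
import numpy as np

n_views = 60                                            # projections per rotation
rotations = [np.random.rand(n_views, 64) for _ in range(3)]  # toy sinograms

interpolated = []
for first, second in zip(rotations, rotations[1:]):
    half = n_views // 2
    extra = np.vstack([first[half:], second[:half]])    # 180 + 180 degrees
    interpolated.append(extra)

# The TAC can now be sampled at roughly twice the original rate.
print(len(rotations) + len(interpolated), "data sets instead of", len(rotations))
```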

Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA

Procedia PDF Downloads 487
24376 Genetic Data of Deceased People: Solving the Gordian Knot

Authors: Inigo de Miguel Beriain

Abstract:

Genetic data of deceased persons are of great interest for both biomedical research and clinical use. This is due to several reasons. On the one hand, many of our diseases have a genetic component; on the other hand, we share genes with a good part of our biological family. Therefore, it would be possible to improve our response to these pathologies considerably if we could use these data. Unfortunately, at the present moment, the status of data on the deceased is far from being satisfactorily resolved by the EU data protection regulation. Indeed, the General Data Protection Regulation has explicitly excluded these data from the category of personal data. This decision has given rise to a fragmented legal framework on this issue; consequently, each EU member state offers very different solutions. For instance, Denmark considers the data as personal data of the deceased person for a set period of time, while others, such as Spain, do not consider these data as such but have introduced specific regulations on this type of data and its access by relatives. This is an extremely dysfunctional scenario from multiple angles, not least of which is scientific cooperation at the EU level. This contribution attempts to outline a solution to this dilemma through an alternative proposal. Its main hypothesis is that, in reality, health data are, in a sense, a rara avis within data in general because they do not refer to one person but to several. Hence, it is possible to think that all of them can be considered data subjects (although not all of them can exercise the corresponding rights in the same way). When the person from whom the data were obtained dies, the data remain as personal data of his or her biological relatives. Hence, the general regime provided for in the GDPR may apply to them. As these are personal data, we could go back to thinking in terms of a general prohibition of data processing, with the exceptions provided for in Article 9.2 and the legal bases included in Article 6. This may be complicated in practice: since the data refer to several data subjects, it may be difficult to rely on some of these bases, such as consent. Furthermore, there are theoretical arguments that may oppose this hypothesis. In this contribution, it is shown, however, that none of these objections is of sufficient substance to delegitimize the argument presented. The conclusion of this contribution is therefore that we can indeed build a general framework for the processing of personal data of deceased persons in the context of the GDPR. This would constitute a considerable improvement over the current regulatory framework, although some clarifications will be necessary for its practical application.

Keywords: collective data conceptual issues, data from deceased people, genetic data protection issues, GDPR and deceased people

Procedia PDF Downloads 141
24375 Evaluation of the Impact of Community-Based Disaster Risk Management Applied in a Landslide-Prone Area: Reference to Badulla District

Authors: S. B. D. Samarasinghe, Malini Herath

Abstract:

Participatory planning is a very important process for decision-making and for choosing the best alternatives for community welfare and the development of society, as well as for the interactions between the community and professionals. People's involvement is the key to participatory planning. Participatory planning is now used in many fields; it is not limited to planning but extends to disaster management, poverty, housing, etc. In the past, disaster management was practiced as a top-down approach, but the many issues this raised drove its conversion into a bottom-up approach. Several approaches can aid disaster management. Community-Based Disaster Risk Management (CBDRM) is a participatory approach to risk management that other disaster-prone countries have often applied successfully. In the local context, CBDRM has been applied to prevent disease as well as disasters such as landslides, tsunamis and floods. Three years ago, Sri Lanka initiated the CBDRM approach to minimize landslide vulnerability. This study therefore focuses on the impact of CBDRM approaches on landslide hazards, and on identifying their successes and failures from the perspectives of both the implementing parties and the community. The research is based on a qualitative method combined with a descriptive research approach. A framework was prepared via a literature review. Case studies were selected from the landslide CBDRM programs implemented in Badulla by the Disaster Management Center and the National Building Research Organization, and their processes were evaluated. Data were collected through interviews and informal discussions, and the responses were quantified using a relative effectiveness index; the resulting values were used to rank the programs' effectiveness and their successes, failures and impacting factors. The results show several failures on the part of both the implementing parties and the community; overcoming these factors can pave the way for better conduct of future CBDRM programs.
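
The paper does not state the exact formula of its relative effectiveness index; a common relative-index form, the sum of ratings divided by the maximum possible sum, is assumed in the sketch below, and the factors and ratings are invented.

```python
# Sketch of a relative index of the kind described above; the paper does not
# give its formula, so the common form RI = sum(ratings) / (max_rating * n)
# is assumed, and the factors and ratings are invented.
def relative_index(ratings, max_rating=5):
    return sum(ratings) / (max_rating * len(ratings))

factors = {
    "community participation": [5, 4, 4, 5, 3],
    "early-warning drills":    [3, 3, 4, 2, 3],
    "hazard-map awareness":    [2, 3, 2, 2, 1],
}
for name, ratings in sorted(factors.items(),
                            key=lambda kv: -relative_index(kv[1])):
    print(f"{name}: RI = {relative_index(ratings):.2f}")
```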

Keywords: community-based disaster risk management, disaster management, preparedness, landslide

Procedia PDF Downloads 128
24374 Steps towards the Development of National Health Data Standards in Developing Countries

Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian Murray

Abstract:

The health data standards proliferating today overlap and conflict somewhat, resulting in market confusion and growing proprietary interests. The government's role in, and support for, the standardization of health data are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with implementing non-standard systems. The normative literature has not explored the different steps a government must undertake to develop national health data standards. Based on the lessons learned from a qualitative study investigating the issues around the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, and on the opinions and feedback of experts in data exchange, standards, and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the establishment of a formal national reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan, and a national accreditation body; more important still is change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners in planning health data standards, in particular in developing countries.

Keywords: interoperability, medical data exchange, health data standards, case study, Saudi Arabia

Procedia PDF Downloads 327
24373 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map

Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo

Abstract:

Recently, technologies based on three-dimensional (3D) spatial information have been developed, and quality of life is improving as a result. Research on the real-time digital map (RDM) is now being conducted to provide 3D spatial information. RDM is a service that creates and supplies 3D spatial information in real time based on location/shape detection. Research topics on RDM include the construction of 3D spatial information by matching image data, complementing the weaknesses of image acquisition with multi-source data, and data collection methods using big data. Using RDM will be effective for spatial analysis with 3D spatial information in a U-City and for other spatial information utilization technologies.

Keywords: RDM, multi-source data, big data, U-City

Procedia PDF Downloads 419
24372 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources, and systems. Part of it comes from ERP systems under OLTP processing that support the information system; at the OLAP processing level, however, these organizations present deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as in the absence of the operational capability to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are the base repositories for creating models or patterns (of the behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision making and research. This paper presents a structured methodology that is simple and inspired by agile development models such as Scrum, XP, and AUP. It also draws on object-relational models, spatial data models, and the baseline of data modeling under UML and big data. In this way it seeks to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and for models derived from structured fact objects.

Keywords: data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse

Procedia PDF Downloads 392
24371 Parental Investment in Education: A Pathway for the Children's Access to Quality Education

Authors: Tukur Husaini Nahuche

Abstract:

Parental resources play a vital role in the lives of offspring. They help give children the basic necessities of life, such as food, clothing, and housing. Similarly, financial assets allow parents to move into neighborhoods with more affluent school systems, pay school bills, purchase expensive technologies such as personal computers, and save money for tutoring, books, magazines, journals, newspapers, etc. Making proper provision in a home environment conducive to learning after school hours and creating other outdoor activities for children enhance and accelerate children's learning opportunities. This paper discusses parental investment in education and how parental education, occupation, and income influence children's access to quality education, with the hope that families will provide equal opportunities for children irrespective of their sex, intelligence, subject choice, etc.

Keywords: parental investment, children's access, quality education

Procedia PDF Downloads 540
24370 Identifying Model to Predict Deterioration of Water Mains Using Robust Analysis

Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee

Abstract:

In South Korea, it is difficult to obtain data for statistical pipe assessment. To address this issue, this paper examines how the various statistical models presented in earlier work behave when the data are mixed with noise, and whether they are applicable in South Korea. Three major types of model are studied. Where data are available in the original papers, we add noise to the data and observe how the model response changes; we also generate data from the models in those papers and analyse the effect of noise. From this, the robustness of each model, and its applicability in Korea, can be assessed.
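
The noise-injection test described above can be sketched on a toy deterioration relation; the linear model below is a stand-in for the survival models studied in the paper, and all numbers are invented.

```python
# Sketch of the noise-injection test: fit a simple deterioration relation,
# perturb the data with increasing Gaussian noise, and watch the fitted
# coefficient drift. The linear model is a stand-in for the survival models
# in the paper; all numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
age = rng.uniform(0, 50, size=200)        # pipe age in years
rate = 0.02 * age + 0.1                   # toy deterioration relation

for sigma in [0.0, 0.05, 0.2]:
    noisy = rate + rng.normal(0, sigma, size=age.shape)
    slope, intercept = np.polyfit(age, noisy, deg=1)
    print(f"noise sigma={sigma}: fitted slope={slope:.4f} (true 0.02)")
```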

Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences

Procedia PDF Downloads 730
24369 Automated Testing to Detect Instance Data Loss in Android Applications

Authors: Anusha Konduru, Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai

Abstract:

Mobile applications are increasing significantly in number, each addressing the requirements of many users. However, quick development and enhancement result in many underlying defects. Android apps create and handle a large variety of 'instance' data that has to persist across runs, such as the current navigation route, workout results, antivirus settings, or game state. Due to the nature of Android, an app can be paused, sent into the background, or killed at any time. If the instance data is not saved and restored between runs, then in addition to data loss, partially saved or corrupted data can crash the app upon resume or restart. It is difficult for the programmer to test this issue manually for all activities. The result is data loss: the data entered by the user are not saved when there is an interruption. This can degrade the user experience, because the user needs to re-enter the information after each interruption, so automated testing to detect such data loss is important. This research proposes DroidDL, a data loss detector for Android, which detects instance data loss in a given Android application. We tested 395 applications and found 12 with data loss issues. The approach proved highly accurate and reliable in finding apps with this defect and can be used by Android developers to avoid such errors.

Keywords: Android, automated testing, activity, data loss

Procedia PDF Downloads 224
24368 Big Data: Appearance and Disappearance

Authors: James Moir

Abstract:

The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based on the analysis of large and varied sources of data, ranging from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanation to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and to show their appearance through causal mechanisms, while twentieth-century science attempted to save the appearances and relinquish causal explanations. Now twenty-first-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back towards a rule- or law-governed model of reality and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.

Keywords: big data, appearance, disappearance, surface, epistemology

Procedia PDF Downloads 402
24367 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique for determining the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection to enhance the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., data that contribute almost nothing to reducing the confidence interval of the estimated parameters and can thus be neglected. Based on sensitivity analysis, we solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, we prove a theorem stating that the integrated data approach is less precise than the full data case; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval than the spatio-temporal (full) data.
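
The "irrelevant data" idea can be sketched by approximating the parameter confidence interval from local sensitivities (Fisher information) with and without a subset of the data; the toy exponential-recovery model below is an assumption, not the paper's FRAP model.

```python
# Sketch of the "irrelevant data" idea: approximate the 95% confidence-interval
# half-width of a parameter from local sensitivities (Fisher information) and
# compare the full data set against a trimmed one. The exponential-recovery
# model is a toy assumption, not the actual FRAP model.
import numpy as np

def ci_halfwidth(t, p=1.5, sigma=0.05):
    s = t * np.exp(-p * t)                 # sensitivity of 1 - exp(-p t) w.r.t. p
    fisher = np.sum(s ** 2) / sigma ** 2
    return 1.96 / np.sqrt(fisher)          # asymptotic 95% half-width

t_full = np.linspace(0.01, 5, 100)
t_trim = t_full[t_full < 2.5]              # drop late, low-sensitivity points
print(f"full data CI half-width:    {ci_halfwidth(t_full):.4f}")
print(f"trimmed data CI half-width: {ci_halfwidth(t_trim):.4f}")
```

Dropping the late time points barely widens the interval, which is exactly what marks them as an irrelevant data set in the sense defined above.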

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 264
24366 Representation Data without Lost Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data are believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate uncertain data so as to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain-data condition while minimizing the loss of compression properties.

Keywords: compression properties, uncertainty, uncertain time series, mining technique, weather prediction

Procedia PDF Downloads 417
24365 The Effect of Climatic and Cultural Conditions in Increasing the Sense of Community in Residential Complexes (Case Study: Saedyeh Residential Complex)

Authors: Razieh Esfandiarisedgh

Abstract:

Community architecture has been proposed as an alternative approach in architecture, with three strands: political, sociological, and psychological. In community architecture, the psychological strand, as the only one directly related to community design, has an important index called the sense of community. Changes in today's modern society, such as the shrinking of families, reduce the sense of community and people's willingness to be present in the public spaces of residential complexes. This presence can be increased by using design to motivate people's attendance and participation in public spaces and by taking advantage of the facilities and quality of these spaces. This research used a qualitative method: information was studied and collected, and observation and interviews were used in the selected sample. Through targeted sampling and matching against the extracted design table, it was concluded that climate and culture are two important factors in the collective view of housing in Hamedan.

Keywords: community architecture, sense of community, environmental psychology, architecture

Procedia PDF Downloads 48
24364 Data Mining As A Tool For Knowledge Management: A Review

Authors: Maram Saleh

Abstract:

Knowledge has become an essential resource in today's economy and the most important asset for maintaining competitive advantage in organizations. The importance of knowledge has led organizations to manage their knowledge assets and resources through all the stages of knowledge management: knowledge creation, storage, sharing, and use. Research on data mining has continued to grow over recent years in both the business and educational fields. Data mining is one of the most important steps of the knowledge discovery in databases (KDD) process, aiming to extract implicit, unknown but useful knowledge, and it is considered a significant subfield of knowledge management. Data mining has great potential to help organizations focus on extracting the most important information from their data warehouses. Data mining tools and techniques can predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. This review paper explores the application of data mining techniques in supporting the knowledge management process as an effective knowledge discovery technique. We identify the relationship between data mining and knowledge management and then introduce some applications of data mining techniques in knowledge management for real-life domains.

Keywords: data mining, knowledge management, knowledge discovery, knowledge creation

Procedia PDF Downloads 196
24363 Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s, and over time a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies: it is the process of grouping data into clusters whose members are as similar as possible while the clusters themselves are dissimilar from each other. Many traditional clustering algorithms have limitations in dealing with data sets containing categorical attributes. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-modes (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm, has been reported for clustering data sets with categorical values. It is a form of fuzzy clustering in which each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated data sets using the FKM clustering algorithm. A significant feature of the study is that, in contrast to numerous anomaly detection algorithms, the FKM algorithm determines anomalies together with their degree of abnormality. According to the results, the FKM algorithm performed well in detecting anomalies in data containing both a single anomaly and multiple anomalies.
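
A minimal sketch of fuzzy k-modes on categorical records follows; it is a simplified illustration of the algorithm, with hand-picked initialization and invented data, and it flags as anomalous any record whose largest membership degree is low.

```python
# Minimal sketch of fuzzy k-modes on categorical records (simplified from the
# full FKM algorithm): memberships come from simple-matching dissimilarity, and
# a record whose largest membership is low is reported with that degree of
# abnormality.
import numpy as np

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def fuzzy_k_modes(X, modes, m=2.0, n_iter=20):
    k = len(modes)
    for _ in range(n_iter):
        d = np.array([[hamming(x, z) + 1e-9 for z in modes] for x in X])
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (1 / (m - 1))).sum(axis=2)
        for j in range(k):  # update each mode attribute-by-attribute
            modes[j] = tuple(
                max(set(col), key=lambda v: sum(u[i, j] ** m
                    for i, x in enumerate(X) if x[a] == v))
                for a, col in enumerate(zip(*X)))
    return u

X = [("red", "A"), ("red", "A"), ("red", "B"),
     ("blue", "C"), ("blue", "C"), ("green", "Z")]  # last record is the oddball
u = fuzzy_k_modes(X, modes=[("red", "A"), ("blue", "C")])
for x, row in zip(X, u):
    print(x, "abnormality degree:", round(1 - row.max(), 2))
```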

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 40
24362 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara

Abstract:

This paper describes the problem of building secure computational services for encrypted information in the cloud: computing without decrypting the encrypted data. This meets the aspiration for a computational-encryption model that could enhance the security of big data with respect to the privacy or confidentiality, availability, and integrity of the data and the user's security. The cryptographic model applied to computation over encrypted data is the fully homomorphic encryption scheme. We contribute a theoretical presentation of high-level computational processes based on number theory, derivable from abstract algebra, which can easily be integrated and leveraged in a cloud computing interface, together with detailed mathematical concepts underlying fully homomorphic encryption models. This contribution supports the full implementation of big data analytics based on a cryptographic security algorithm.
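
Fully homomorphic schemes are too involved to sketch briefly, but the core property, computing on ciphertexts without decryption, can be demonstrated with the additively homomorphic Paillier scheme; this toy (with deliberately tiny primes) is far weaker than the FHE scheme the paper discusses.

```python
# Toy demonstration of computing on ciphertexts without decryption, using the
# Paillier scheme (additively homomorphic only -- far weaker than a fully
# homomorphic scheme, but it shows the core property). Tiny primes for clarity;
# real keys are thousands of bits.
import random

p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)
g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    mu = pow(lam, -1, n)              # valid because L(g^lam) = lam mod n for g = n+1
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
print(decrypt((c1 * c2) % n2))        # 42: the sum, computed on ciphertexts
```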

Keywords: big data analytics, security, privacy, bootstrapping, Fully Homomorphic Encryption Scheme

Procedia PDF Downloads 466
24361 From 'Segregation' to 'Integration': The Dynamic Mechanism of Residential Segregation and the Responsive Sustainable Regeneration Methods in China

Authors: Yang Chen

Abstract:

Property-led regeneration has played an important role in the rapid urbanization of the past twenty years in China, but it has also been criticized as unsustainable, because it focuses on the economic aspect and overlooks social issues; in particular, it has exacerbated residential segregation in the inner city. Based on the author's study of the area around Nanjing railway station, this paper demonstrates through a synthetic analysis of residents' patterns of living, consumption, and welfare that residential segregation does exist in the inner city, and that to some extent its distribution follows a concentric ring model. Based on the author's further investigation of the property rights and ages of the dwelling buildings, housing-commercialization-led regeneration is identified as the mainspring of the segregation. To solve these problems, a system of sustainable communities should be established in both policy and practice; above all, well-designed public facilities, including green infrastructure, would be appropriate for promoting residential integration and sustainable development in contemporary China.

Keywords: China, dynamic mechanism, residential segregation, sustainable regeneration

Procedia PDF Downloads 439
24360 A Challenge to Conserve Moklen Ethnic House: Case Study in Tubpla Village, Phang Nga Province, Southern Thailand

Authors: M. Attavanich, H. Kobayashi

Abstract:

Moklen is an ethnic minority sub-group in Thailand. In the past, they were nomads of the sea: their livelihood relied on the sea, and they built temporary shelters to avoid strong wind and waves during the monsoon season. They have since settled permanently on land along the coastal areas and mangrove forests of Phang Nga and Phuket Provinces, Southern Thailand. The Moklen have their own housing culture: the Moklen ethnic house is built from local natural materials, with a unique structure and design. Its wooden structure is joined with rattan ropes, and the construction process is distinctive in its use of body-based units of measurement for design and construction. However, these unique structures face several threats. One of the most important is tsunami. The 2004 Indian Ocean Tsunami in particular caused widespread damage in Southern Thailand, with Phang Nga the most affected province. Moklen villages located along the coast were affected calamitously, and in the recovery, aid agencies mostly provided new modern-style houses, a process that significantly impacted Moklen housing culture. Not only the tsunami but also modernization has influenced the changing appearance of Moklen houses, and the effects of modernization were being felt even before the tsunami. As a result, local construction knowledge is now very limited, because the number of Moklen elders has decreased drastically. Last but not least, restrictions on construction materials, which were originally gathered from accessible mangroves, limit the building of Moklen houses; in particular, after the Reserved Forest Act, chopping wood without permission became illegal. These are some of the most important reasons Moklen ethnic houses are disappearing. Nevertheless, field surveys conducted in Phang Nga Province in 2013 found that some Moklen ethnic houses remain in Tubpla Village, though only a few, and a follow-up survey in the same area in 2014 showed that the number of Moklen houses in the village had begun to increase significantly, which indicates a high potential for conserving them. Our research team's project of February 2014 also contributed to the continuation of the Moklen ethnic house: in cooperation with the village leader, a Moklen house was constructed with the help of local participants. For the project, villagers shared their building knowledge and techniques, and in the end the project helped the community understand the value of their houses; it was also a good opportunity for Moklen children to learn about their culture. In addition, NGOs have recently started to support ecotourism projects in the village, which helps preserve not only a way of life but also the indigenous knowledge and techniques of the Moklen ethnic house. Such supporting activities are important for the conservation of Moklen ethnic houses.

Keywords: conservation, construction project, Moklen Ethnic House, 2004 Indian Ocean tsunami

Procedia PDF Downloads 298
24359 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach

Authors: Sarisa Pinkham, Kanyarat Bussaban

Abstract:

This research aims to approximate the amount of daily rainfall using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with an RMSE of 3.343.
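
The pixel-value approach can be sketched as mapping legend colors to rainfall amounts and scoring against gauge observations with RMSE; the legend, pixel samples, and observations below are invented.

```python
# Sketch of the pixel-value approach: map each rainfall-map pixel value to a
# rainfall amount via the map legend, then score against gauge observations
# with RMSE. The legend, pixel samples, and observations are invented.
import numpy as np

legend = {0: 0.0, 64: 5.0, 128: 20.0, 192: 50.0, 255: 90.0}  # pixel -> mm/day

def approximate_rainfall(pixels):
    keys = np.array(sorted(legend))
    nearest = keys[np.abs(pixels[:, None] - keys[None, :]).argmin(axis=1)]
    return np.array([legend[k] for k in nearest])

pixels   = np.array([3, 61, 130, 200, 250])         # sampled from a daily map
observed = np.array([0.0, 6.1, 18.3, 52.9, 88.0])   # gauge measurements (mm)
estimate = approximate_rainfall(pixels)
print(f"RMSE = {np.sqrt(np.mean((estimate - observed) ** 2)):.3f} mm")
```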

Keywords: daily rainfall, image processing, approximation, pixel value data

Procedia PDF Downloads 380
24358 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we consider and apply parametric modeling to experimental data from a dynamical system. We investigate the different distributions of output measurements from several dynamical systems. By processing the variance in the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, we explain the effect of the spread of the measurements, such as their variance, on identification, and the limitations of this approach.
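
The variance-processing step can be sketched as a sliding-window variance scan that flags a candidate nonlinear region; the signal, window length, and threshold below are assumptions.

```python
# Sketch of the variance-processing step: a sliding-window variance scan flags
# a candidate nonlinear region of the output signal. The signal, window length,
# and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)
y = np.sin(t) + 0.05 * rng.normal(size=t.size)
y[300:380] += 0.8 * np.sin(8 * t[300:380]) ** 2   # injected nonlinear segment

window = 25
var = np.array([y[i:i + window].var() for i in range(y.size - window)])
threshold = var.mean() + 2 * var.std()
flagged = np.where(var > threshold)[0]
if flagged.size:
    print(f"candidate nonlinear region: samples {flagged.min()}..{flagged.max() + window}")
```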

Keywords: Gaussian process, nonlinearity distribution, particle filter, system identification

Procedia PDF Downloads 498