Search results for: data consistency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24697


24187 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering

Authors: K. Umbleja, M. Ichino

Abstract:

Symbolic data mining has been developed to analyze very large datasets. It is also useful when entry-specific details must remain hidden. Symbolic data mining is quickly gaining popularity as the datasets in need of analysis grow ever larger. One type of symbolic data is the histogram, which makes it possible to store a huge amount of information in a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, which makes the histogram a very important and general symbolic data type: a method developed for histograms can also be applied to other types of symbolic data. Due to its complex structure, analyzing histograms is complicated. This paper proposes a method for comparing two histogram-valued variables and thereby finding the dissimilarity between two histograms. The proposed method takes the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data that can compare histograms with different numbers of bins and bin widths (so-called general histograms). The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, together with a cluster-compactness concept for measuring the quality of the clustering. The method is validated on real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results for general histograms without loss of detail or the need to transform the data.
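
As one illustration of the comparison problem, the sketch below aligns two general histograms (different bin counts and widths) on the union of their bin edges and computes a simple dissimilarity. The rebinning step assumes a uniform density within each original bin, and the half-L1 distance is only a stand-in for the paper's Ichino-Yaguchi-based measure; all names and values are hypothetical.

```python
import numpy as np

def rebin(edges, masses, new_edges):
    # Redistribute histogram mass onto new bin edges, assuming the
    # density is uniform (piecewise-constant) within each original bin.
    out = np.zeros(len(new_edges) - 1)
    for lo, hi, m in zip(edges[:-1], edges[1:], masses):
        for j, (nlo, nhi) in enumerate(zip(new_edges[:-1], new_edges[1:])):
            overlap = max(0.0, min(hi, nhi) - max(lo, nlo))
            if overlap > 0:
                out[j] += m * overlap / (hi - lo)
    return out

def histogram_dissimilarity(edges_a, mass_a, edges_b, mass_b):
    # The union of both edge sets gives a common support for comparison.
    common = np.union1d(edges_a, edges_b)
    a = rebin(edges_a, mass_a, common)
    b = rebin(edges_b, mass_b, common)
    # Stand-in dissimilarity: half the L1 distance between bin masses.
    return 0.5 * np.abs(a - b).sum()

# Two "general" histograms: different bin counts and bin widths.
d = histogram_dissimilarity([0, 1, 2, 4], [0.2, 0.5, 0.3],
                            [0, 2, 3, 4], [0.6, 0.3, 0.1])
print(d)  # 0.25 for these toy values
```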

Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis

Procedia PDF Downloads 143
24186 WiFi Data Offloading: Bundling Method in a Canvas Business Model

Authors: Majid Mokhtarnia, Alireza Amini

Abstract:

Mobile operators face the steady growth of data traffic as a critical issue, so a vital responsibility of an operator is to manage this trend in a way that creates added value. This paper addresses a bundling method in a Canvas business model within a WiFi Data Offloading (WDO) strategy, through which some elements of the model may be affected. In the proposed method, a number of data packages are sold to subscribers, some of which include a complimentary volume of WiFi-offloaded data. The paper at hand analyses this method from the viewpoints of attractiveness and profitability. The results demonstrate that the quality of the WDO implementation strongly affects the final result and helps the decision maker choose the best option.

Keywords: bundling, canvas business model, telecommunication, WiFi data offloading

Procedia PDF Downloads 175
24185 Distributed Perceptually Important Point Identification for Time Series Data Mining

Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung

Abstract:

In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. It was originally developed for financial time series pattern matching and was later found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of a time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in Internet of Things (IoT) environments. Given the nature of PIP identification and its successful applications, it is worth further exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification has always been considered a limitation when dealing with 'big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck of running the PIP identification process on a standalone computer, and a clear speed improvement is obtained by the distributed versions.
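
For context, a minimal single-machine sketch of PIP identification is shown below: starting from the endpoints, it repeatedly adds the point with the largest vertical distance to the line between its neighboring selected points. This greedy O(kn) loop is exactly the bottleneck the paper attacks with the SB-Tree and distribution; the code is illustrative, not the paper's implementation.

```python
import numpy as np

def vertical_distance(x, y, x1, y1, x2, y2):
    # Vertical distance from (x, y) to the line through two adjacent PIPs.
    if x2 == x1:
        return abs(y - y1)
    slope = (y2 - y1) / (x2 - x1)
    return abs(y - (y1 + slope * (x - x1)))

def pip_identify(series, k):
    # Select k perceptually important points from a 1-D series (k >= 2).
    n = len(series)
    pips = [0, n - 1]                     # the endpoints are always PIPs
    while len(pips) < k:
        best_i, best_d = None, -1.0
        ordered = sorted(pips)
        for a, b in zip(ordered[:-1], ordered[1:]):
            for i in range(a + 1, b):
                d = vertical_distance(i, series[i], a, series[a], b, series[b])
                if d > best_d:
                    best_i, best_d = i, d
        pips.append(best_i)
    return sorted(pips)

print(pip_identify(np.sin(np.linspace(0, 6, 50)), 7))
```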

Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining

Procedia PDF Downloads 410
24184 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks

Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam

Abstract:

In recent years, convolutional neural networks (CNN) have demonstrated high performance in image analysis, but oftentimes only structured data is available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. Applying a single neural network to multimodal data, e.g., both structured and unstructured information, offers significant advantages in time complexity and energy efficiency. Converting structured data into images and merging them with existing visual material is a promising way to apply CNNs to multimodal datasets, which often occur in medical contexts. Using suitable preprocessing techniques, the structured data is transformed into image representations in which the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with the existing images to incorporate both types of information. The resulting image is then analyzed using a CNN.
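
A minimal sketch of the preprocessing idea, assuming one simple encoding: a feature row is min-max scaled into a grayscale tile and stacked with an existing image as a second channel for a CNN. The paper's richer color-and-shape encodings are not reproduced, and the sample data is random.

```python
import numpy as np

def row_to_image(row, size=8):
    # Map a 1-D feature vector to a size x size grayscale tile by
    # min-max scaling and zero-padding (one simple encoding choice).
    v = (row - row.min()) / (row.max() - row.min() + 1e-9)
    tile = np.zeros(size * size)
    tile[: len(v)] = v
    return tile.reshape(size, size)

# Hypothetical sample: 10 tabular features plus an 8x8 grayscale image.
features = np.random.rand(10)
image = np.random.rand(8, 8)

# Fuse by stacking as channels; a CNN then consumes the 2-channel input.
fused = np.stack([image, row_to_image(features)], axis=0)
print(fused.shape)  # (2, 8, 8) -> channels-first input for a CNN
```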

Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion

Procedia PDF Downloads 95
24183 A Study on the Iterative Scheme for Stratified Shields Gamma Ray Buildup Factors Using Layer-Splitting Technique in Double-Layer Shields

Authors: Sari F. Alkhatib, Chang Je Park, Gyuhong Roh

Abstract:

The iterative scheme used to treat buildup factors for stratified shields is investigated here using the layer-splitting technique. A simple formalism for the scheme, based on Kalos' formula, is introduced, and the testing technique is implemented on top of it. The second layer in a double-layer shield was split into two equivalent layers, and the scheme (with the suggested formalism) was applied to the new "three-layer" shield configuration. The results of this manipulation on water-lead and water-iron shield combinations are presented for 1 MeV photons. It was found that splitting the second layer introduces some deviation in the overall buildup factor value. As expected, this deviation is higher in the case of a low-Z layer followed by a high-Z layer. However, the overall performance of the iterative scheme remained highly consistent and coherent even with the introduced changes. The layer-splitting technique thus shows promise as a way to test the iterative scheme against a wide range of formalisms.
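
The layer-splitting test can be expressed as a small harness, sketched below. Kalos' formula is not reproduced here; the `combine` argument stands in for the paper's formalism (a crude multiplicative toy rule is used only to make the example run), and `b_single` supplies single-layer buildup curves, here a made-up linear toy.

```python
def iterative_buildup(thicknesses, b_single, combine):
    # Fold layers left to right: start from the first layer's single-layer
    # buildup, then combine in each subsequent layer. b_single[i](x) gives
    # the single-layer buildup of material i at thickness x (in mean free
    # paths); `combine` is a placeholder for the Kalos-formula-based rule.
    b = b_single[0](thicknesses[0])
    for i in range(1, len(thicknesses)):
        b = combine(b, b_single[i], thicknesses[i])
    return b

def layer_splitting_deviation(x1, x2, b1, b2, combine):
    # Replace the second layer by two half layers of the same material;
    # a robust scheme should give nearly the same overall factor.
    b_two = iterative_buildup([x1, x2], [b1, b2], combine)
    b_three = iterative_buildup([x1, x2 / 2, x2 / 2], [b1, b2, b2], combine)
    return abs(b_two - b_three) / b_two

toy_b = lambda x: 1.0 + 0.8 * x            # made-up single-layer buildup curve
product_rule = lambda b, bf, x: b * bf(x)  # crude stand-in combination rule
print(layer_splitting_deviation(2.0, 2.0, toy_b, toy_b, product_rule))
```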

Keywords: buildup factor, iterative scheme, stratified shields, layer-splitting technique

Procedia PDF Downloads 394
24164 An Aesthetic Spatial Turn: AI and Aesthetics in the Physical, Psychological, and Symbolic Spaces of Brand Advertising

Authors: Yu Chen

Abstract:

In line with existing philosophical approaches, this research proposes a conceptual model with an innovative spatial vision and aesthetic principles for Artificial Intelligence (AI) application in brand advertising. The model first identifies the major constituencies of contemporary advertising on three spatial levels: physical, psychological, and symbolic. It further incorporates the relationships among AI, aesthetics, branding, and advertising and their interactions with the major actors in all spaces. It illustrates that AI may follow the aesthetic principles of beauty, elegance, and simplicity to reinforce brand identity and consistency in advertising, to collaborate with stakeholders, and to satisfy different advertising objectives at each level. It proposes that, guided by aesthetics, AI may help consumers immerse themselves in the physical, psychological, and symbolic advertising spaces and help transform tangible advertising messages into meaningful brand symbols. Conceptually, the research illustrates that even though consumers' engagement with a brand mostly begins with physical advertising and later moves to the psychological and symbolic, AI-assisted advertising should start from an understanding of the brand's symbolic-psychological dimensions and consumers' aesthetic preferences before the physical design, in order to resonate better. The limits of AI and future AI functions in advertising are discussed.

Keywords: AI, spatial, aesthetic, brand advertising

Procedia PDF Downloads 55
24181 Knowledge Discovery and Data Mining Techniques in Textile Industry

Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler

Abstract:

This paper addresses issues and techniques for the textile industry using data mining. Data mining was applied to garment-stitching data obtained from a textile company. The techniques applied to the data were the CHAID algorithm, the CART algorithm, regression analysis, and artificial neural networks. Classification-based analyses were used, and this method yielded a decision model for production per person and identified the variables affecting production. The results show that as daily working time increases, production per person decreases. In addition, total daily working time and production per person show a negative relationship, and it is the strongest relationship observed.
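
A minimal CART-style sketch with scikit-learn's DecisionTreeRegressor, mirroring the kind of decision model described for production per person. The features, values, and the negative working-time effect are hypothetical stand-ins for the company data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical records: [daily_working_hours, operator_experience_years]
X = np.array([[8, 1], [9, 2], [10, 3], [11, 2], [12, 1], [13, 4]])
y = np.array([55, 54, 50, 47, 43, 40])   # production per person (toy values)

# CART regression tree: splits that best reduce variance in the target.
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(tree.predict([[12, 2]]))           # predicted output at 12 h/day
```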

Keywords: data mining, textile production, decision trees, classification

Procedia PDF Downloads 329
24180 Psychometric Properties of Several New Positive Psychology Measures

Authors: Lauren Benyo Linford, Jared Warren, Jeremy Bekker, Gus Salazar

Abstract:

In order to accurately identify areas needing improvement and to track growth, the availability of valid and reliable measures of different facets of well-being is vital. Because no specific measures currently exist for many facets of well-being, the purpose of this study was to construct and validate measures of the following constructs: Purpose, Values, Mindfulness, Savoring, Gratitude, Optimism, Supportive Relationships, Interconnectedness, Compassion, Community, Contribution, Engaged Living, Personal Growth, Flow Experiences, Self-Compassion, Exercise, Meditation, and an overall measure of subjective well-being, the Survey on Flourishing (SURF). To assess their psychometric properties, each measure was examined for internal consistency, and items with poor item-test correlations were dropped. Additionally, the convergent validity of SURF was assessed: total score correlations between SURF and commonly used measures of well-being, such as the Positive and Negative Affect Schedule (PANAS), the Satisfaction with Life Scale (SWLS), and the PERMA Profiler (a measure of Positive Emotion, Engagement, Relationships, Meaning, and Achievement), were examined to establish convergent validity. The Kessler Psychological Distress Scale (K6) was also included to determine the divergent validity of SURF. Three-week test-retest reliability of SURF was assessed as well. Additionally, normative data from general population samples were collected for both the Self-Compassion measure and SURF. The purpose of this study is to introduce each of these measures, report the psychometric findings, and explore additional psychometric properties of SURF in particular. The study highlights how these measures can be used in future research exploring these positive psychology constructs, and discusses their utility in guiding individuals through the online, self-directed, self-administered My Best Self 101 positive psychology resources developed by the researchers. The goal of My Best Self 101 is to disseminate research-based measures and tools to individuals seeking to increase their well-being.
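
The internal consistency workflow described here can be sketched as follows: Cronbach's alpha per scale, plus item-test (item-total) correlations used to flag weak items for removal. The study's exact estimators are not stated, so treat this as one standard reading; the score matrix is made up.

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scale scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def item_test_correlations(items):
    # Correlation of each item with the total score (uncorrected variant;
    # items with low values would be dropped, as in the study).
    total = items.sum(axis=1)
    return [np.corrcoef(items[:, j], total)[0, 1]
            for j in range(items.shape[1])]

scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3], [1, 2, 2]])
print(cronbach_alpha(scores), item_test_correlations(scores))
```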

Keywords: measurement, psychometrics, test validation, well-being

Procedia PDF Downloads 167
24179 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network

Authors: Ashima Anurag Sharma

Abstract:

Optical fiber-based networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video) and presents a comparison between various data rates. It is shown that as the data rate increases, the number of supportable users decreases due to the rising bit error rate.
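
The trade-off reported here is usually quantified through the Gaussian-noise approximation linking the received Q-factor to BER; the snippet below evaluates that standard relation for a few illustrative Q values. It is background material, not the paper's simulation.

```python
import math

def ber_from_q(q):
    # Standard Gaussian-noise approximation used in optical link budgets:
    # BER = 0.5 * erfc(Q / sqrt(2)).
    return 0.5 * math.erfc(q / math.sqrt(2))

# Higher data rates typically lower the received Q-factor, raising the BER
# and shrinking the supportable user count (illustrative values only).
for q in (7.0, 6.0, 5.0):
    print(q, ber_from_q(q))
```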

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 507
24178 Microarray Gene Expression Data Dimensionality Reduction Using PCA

Authors: Fuad M. Alkoot

Abstract:

Different experimental technologies, such as microarray sequencing, have been proposed to generate high-resolution genetic data in order to understand the complex dynamic interactions between complex diseases and the biological system components of genes and gene products. However, the generated samples have a very high dimensionality, reaching thousands of features, which hinders attempts to design a classifier system that can identify diseases from such data. Additionally, the high overlap in the class distributions makes the task more difficult. The data we experiment with were generated for the identification of autism and include 142 samples, which is small compared to the dimensionality of the data. Classifier systems trained on these data yield classification rates barely better than guessing. We aim to reduce the data dimensionality and make the data better suited for classification. Here, we experiment with applying a multistage PCA to the genetic data to reduce its dimensionality. Results show a significant improvement in the classification rates, which increases the possibility of building an automated system for autism detection.
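
The abstract does not define the stages, so the sketch below shows one plausible multistage PCA: PCA per feature block, then PCA on the concatenated block scores. The data shape mimics the 142-sample, thousands-of-features setting with random stand-in values.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 5000))      # 142 samples, thousands of features

# Stage 1: PCA within each feature block; Stage 2: PCA on the stacked scores.
blocks = np.array_split(np.arange(X.shape[1]), 10)
stage1 = np.hstack([PCA(n_components=10).fit_transform(X[:, b]) for b in blocks])
X_reduced = PCA(n_components=20).fit_transform(stage1)
print(X_reduced.shape)                # (142, 20), ready for a classifier
```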

Keywords: PCA, gene expression, dimensionality reduction, classification, autism

Procedia PDF Downloads 536
24177 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetes

Authors: Fei Gao, Rodolfo C. Raga Jr.

Abstract:

This research ascertains the major risk factors for diabetes and designs a predictive model for risk assessment. The project aims to improve the early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. Using the Diabetes Health Indicators Dataset from Kaggle as the research data, the relation values of each attribute were used to analyze and select the attributes that might influence diabetes risk. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as the foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and performance is assessed using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
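
A compact sketch of the described evaluation protocol (cross-validation over the five stated metrics) using scikit-learn. The synthetic data stands in for the Kaggle Diabetes Health Indicators set, and a single classifier represents the eight compared in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Stand-in data shaped roughly like a health-indicators table.
X, y = make_classification(n_samples=1000, n_features=21, random_state=0)

scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]
cv = cross_validate(RandomForestClassifier(random_state=0), X, y,
                    cv=5, scoring=scoring)
for m in scoring:
    print(m, cv[f"test_{m}"].mean().round(3))
```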

Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle

Procedia PDF Downloads 52
24176 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance, and integrating data from multiple systems and technologies is a further challenge. Despite these pains, companies still pursue digitalization, because embracing advanced technologies lets them improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue of data being stored in silos with different schemas and structures. Conventional approaches to this issue rely on data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily address the grammar and structure of the data and neglect semantic modeling and semantic standardization, which are essential for achieving data interoperability. Here, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach facilitates the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company's data, and utilizes a knowledge graph for data storage and exploration.
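
A toy illustration of semantic lifting into a knowledge graph, sketched with rdflib. The namespaces, the IRDI-style property identifier, and the sensor are all placeholders; a real deployment would use the AAS submodel IRIs and actual ECLASS identifiers.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespaces standing in for AAS submodel and ECLASS IRIs.
PLANT = Namespace("http://example.org/plant/")
ECLASS = Namespace("http://example.org/eclass/")

g = Graph()
sensor = PLANT["sensor42"]
g.add((sensor, RDF.type, PLANT.TemperatureSensor))
# Semantic lifting: bind a raw column value to a dictionary concept
# (placeholder IRDI-style identifier, not a verified ECLASS entry).
g.add((sensor, ECLASS["0173-1-placeholder"], Literal(21.7)))

# The knowledge graph is then explored with SPARQL.
for row in g.query("SELECT ?s ?p ?o WHERE { ?s ?p ?o }"):
    print(row)
```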

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 71
24175 Laboratory Investigation of Expansive Soil Stabilized with Calcium Chloride

Authors: Magdi M. E. Zumrawi, Khalid A. Eltayeb

Abstract:

Chemical stabilization is a technique commonly used to improve the properties of expansive soils. In this regard, an attempt has been made to evaluate the influence of a calcium chloride (CaCl2) stabilizer on the engineering properties of an expansive soil. A series of laboratory experiments, including consistency limits, free swell, compaction, and shear strength tests, was performed to investigate the effect of the CaCl2 additive at percentages of 0%, 2%, 5%, 10%, and 15%. The results show that increasing the percentage of CaCl2 decreased the liquid limit and plasticity index, leading to a significant reduction in the free swell index. This, in turn, increased the maximum dry density and decreased the optimum moisture content, resulting in greater strength. The unconfined compressive strength of soil stabilized with 5% CaCl2 increased by approximately 50% compared to the virgin soil. It can be concluded that CaCl2 has a promising influence on the strength and swelling properties of expansive soil, giving it an advantage in improving problematic expansive soils.

Keywords: calcium chloride, chemical stabilization, expansive soil, soil improvement

Procedia PDF Downloads 309
24174 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption

Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses

Abstract:

This paper addresses the problem of building secure computational services that operate on encrypted information in cloud computing without decrypting the data, thereby enhancing the security of big data with respect to users' privacy, confidentiality, and availability. The cryptographic model applied to computation on the encrypted data is the fully homomorphic encryption scheme. We contribute theoretical presentations of high-level computational processes based on number theory and algebra that can be integrated and leveraged in cloud computing, together with the detailed mathematical concepts underlying fully homomorphic encryption models. This contribution supports the full implementation of cryptographically secured big data analytics.
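
Fully homomorphic schemes are heavyweight, so as an accessible stand-in the sketch below uses the additively (partially) homomorphic Paillier scheme from the python-paillier library (`phe`) to show computation on ciphertexts without decryption. It illustrates the homomorphic principle only; it is not the paper's FHE construction.

```python
# pip install phe  (python-paillier)
from phe import paillier

pub, priv = paillier.generate_paillier_keypair()

# Encrypt values client-side; the cloud sums them without ever decrypting.
enc = [pub.encrypt(v) for v in (12, 30, 7)]
enc_total = enc[0] + enc[1] + enc[2]      # homomorphic addition on ciphertexts

print(priv.decrypt(enc_total))            # 49, recovered only by the key holder
```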

Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme

Procedia PDF Downloads 355
24173 Protecting Privacy and Data Security in Online Business

Authors: Bilquis Ferdousi

Abstract:

With the exponential growth of online business, threats to consumers' privacy and data security have become a serious challenge. This literature-review-based study focuses on developing a better understanding of those threats and of the legislative measures taken to address them. Research shows that people are increasingly engaged in online business using different digital devices and platforms, although this practice varies across age groups. The threat to consumers' privacy and data security is a serious hindrance to developing consumer trust in online business. Some legislative measures have been taken at the federal and state levels to protect consumers' privacy and data security. The study is based on an extensive review of the current literature on these protections and the relevant legislation.

Keywords: privacy, data security, legislation, online business

Procedia PDF Downloads 81
24172 Flowing Online Vehicle GPS Data Clustering Using a New Parallel K-Means Algorithm

Authors: Orhun Vural, Oguz Bayat, Rustu Akay, Osman N. Ucan

Abstract:

This study presents a new parallel approach to clustering GPS data, evaluated by comparing the execution times of various clustering algorithms on such data. The paper proposes a neighborhood-based parallel K-means algorithm to make clustering faster. The proposed parallelization approach assumes that each GPS record represents a vehicle and that vehicles close to each other communicate after the vehicles are clustered. The approach was examined on continuously changing GPS data of different sizes and compared with the serial K-means algorithm and other serial clustering algorithms. The results demonstrate that the proposed parallel K-means algorithm runs much faster than the other clustering algorithms.
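
As a rough illustration of the data-parallel idea (not the paper's neighborhood-based algorithm), the sketch below shards the K-means assignment step across worker processes and recomputes centers serially; the data and parameters are hypothetical, and empty-cluster handling is omitted for brevity.

```python
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    # Assign each point in the chunk to its nearest center.
    points, centers = args
    d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(points, k, iters=10, workers=4):
    # Data-parallel K-means: assignments are sharded across processes.
    centers = points[np.random.choice(len(points), k, replace=False)]
    for _ in range(iters):
        chunks = np.array_split(points, workers)
        with Pool(workers) as pool:
            labels = np.concatenate(
                pool.map(assign_chunk, [(c, centers) for c in chunks]))
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers

if __name__ == "__main__":
    gps = np.random.rand(10_000, 2)       # hypothetical lat/lon pairs
    labels, centers = parallel_kmeans(gps, k=5)
    print(centers)
```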

Keywords: parallel k-means algorithm, parallel clustering, clustering algorithms, clustering on flowing data

Procedia PDF Downloads 196
24171 An Analysis of Privacy and Security for Internet of Things Applications

Authors: Dhananjay Singh, M. Abdullah-Al-Wadud

Abstract:

The Internet of Things (IoT) is a concept of a large-scale ecosystem of wireless sensors and actuators, the 'things' that contribute data to the ecosystem. Ubiquitous data collection, data security, privacy preservation, large-volume data processing, and intelligent analytics are some of the key challenges in IoT technologies. To address the security requirements, challenges, and threats in the IoT, we discuss a message authentication mechanism for IoT applications. Finally, we discuss a data encryption mechanism that authenticates messages before they propagate into IoT networks.
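
The abstract does not specify the authentication scheme, so the sketch below uses a standard HMAC-SHA256 tag over a device message as one common way to provide message authentication in IoT applications; the key and payload are made up.

```python
import hashlib
import hmac

SECRET = b"device-shared-key"            # hypothetical pre-shared device key

def sign(payload: bytes) -> bytes:
    # Attach an HMAC-SHA256 tag so the gateway can verify integrity
    # and authenticity of a device message before accepting it.
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"sensor": "temp-17", "value": 21.4}'
tag = sign(msg)
print(verify(msg, tag), verify(msg + b"x", tag))   # True False
```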

Keywords: Internet of Things (IoT), message authentication, privacy, security

Procedia PDF Downloads 353
24170 The Application of Article 111 of the Constitution of Bangladesh in the Criminal Justice System as a Sentencing Guideline

Authors: Sadiya S. Silvee

Abstract:

Generally, the decision of a higher court is binding on its subordinate courts. As provided in Article 111 of the Constitution, 'the law declared by the Appellate Division (AD) shall be binding on the High Court Division (HCD) and the law declared by either division of the Supreme Court shall be binding on all courts subordinate to it.' This means judicial discipline requires the HCD to follow the decisions of the AD, and the lower tiers of courts must accept the decisions of the higher tiers as binding precedent. Analyzing the application of Article 111 of the Constitution in the criminal justice system as a sentencing guideline, the paper explores whether the HCD may depart from its previous decisions per incuriam, by examining whether there is any consistency in decisions between one HC Bench and another. In doing so, the Death Reference (DR) cases are examined. Furthermore, the paper examines whether the Court of Session follows the decisions of the HCD when exercising its discretion to choose between death and imprisonment for life under section 302 of the Penal Code. The paper argues that, due to the absence of specific sentencing directions and the inconsistency of jurisprudence within the HCD, the subordinate courts are left in a dilemma.

Keywords: death reference, sentencing factor, sentencing guideline, criminal justice system, constitution

Procedia PDF Downloads 148
24169 Research on Robot Adaptive Polishing Control Technology

Authors: Yi Ming Zhang, Zhan Xi Wang, Hang Chen, Gang Wang

Abstract:

Manual polishing suffers from high labor intensity, low production efficiency, and difficulty in guaranteeing consistent polishing quality, so it is increasingly necessary to replace manual polishing with robot polishing. Polishing force directly affects polishing quality, so accurate tracking and control of the polishing force is one of the most important conditions for improving the accuracy of robot polishing. Traditional force control strategies struggle with the strong coupling between force control and position control during the robot polishing process. Therefore, based on an analysis of force-based and position-based impedance control, this paper proposes a new type of adaptive controller. Built on the force-feedback loop of active compliance control, the controller adaptively estimates the stiffness and position of the external environment and eliminates the steady-state force error produced by traditional impedance control. Simulation results show that the adaptive controller adapts well to changing environment positions and stiffnesses and can accurately track and control the polishing force.
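
The sketch below is a toy position-based impedance loop with a naive environment-stiffness estimate, meant only to illustrate the idea of adapting to an unknown environment. It is not the paper's controller, and all gains and values are made up.

```python
M, B, K = 1.0, 40.0, 400.0      # target impedance: inertia, damping, stiffness
dt, f_ref = 0.001, 10.0         # control period [s], desired polishing force [N]

k_env, x_env = 2000.0, 0.0      # true environment (unknown to the controller)
k_hat = 1000.0                  # adaptive estimate of environment stiffness
x, v = 0.005, 0.0               # tool position [m] and velocity [m/s]

for _ in range(3000):
    pen = max(x - x_env, 1e-6)              # penetration into the surface
    f = k_env * pen                         # measured contact force
    k_hat += 0.05 * (f / pen - k_hat)       # crude stiffness adaptation
    x_ref = x_env + f_ref / k_hat           # reference yielding f_ref if k_hat is right
    a = (f_ref - f - B * v - K * (x - x_ref)) / M   # impedance control law
    v += a * dt
    x += v * dt

print(round(k_hat), round(k_env * (x - x_env), 2))  # ~2000 and force near 10 N
```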

Keywords: robot polishing, force feedback, impedance control, adaptive control

Procedia PDF Downloads 176
24168 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows large volumes of distributed data from multiple locations to be deployed toward a common goal. Scheduling data-intensive applications becomes challenging as data sets grow very large. Only two solutions exist for this issue: the computation requiring huge data sets can be transferred to the data site, or the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred, since the servers are storage/data servers with little or no computational capability; hence, the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires substantial network bandwidth. To mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities, and current research mainly focuses on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. A cognitive engine (CE) is designed and deployed at the various grid sites. Intelligent agents in the CE help analyze requests and build the knowledge base. Depending on the link capacity, a decision is taken on whether to transfer the data sets or to partition them. The agents also predict the next request so that the requesting site can be served with data sets in advance, reducing data availability and transfer times. A replica catalog and a metadata catalog created by the agents assist in the decision-making process.
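
The transfer-or-partition decision can be sketched as a toy policy: ship the whole data set when the link can meet the deadline, otherwise split it across replica sites. The thresholds, shard rule, and site names are hypothetical; the paper's cognitive engine bases this decision on learned knowledge rather than a fixed formula.

```python
def plan_transfer(dataset_gb, link_gbps, deadline_s, sites):
    # Toy version of the CE's choice between transferring and partitioning.
    seconds = dataset_gb * 8 / link_gbps           # naive transfer-time estimate
    if seconds <= deadline_s:
        return {"action": "transfer", "eta_s": seconds}
    shards = min(len(sites), -(-int(seconds) // deadline_s))  # ceiling division
    return {"action": "partition", "shards": shards, "eta_s": seconds / shards}

print(plan_transfer(dataset_gb=500, link_gbps=1, deadline_s=1200,
                    sites=["siteA", "siteB", "siteC", "siteD"]))
```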

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 374
24167 Heritage and Tourism in the Era of Big Data: Analysis of Chinese Cultural Tourism in Catalonia

Authors: Xinge Liao, Francesc Xavier Roige Ventura, Dolores Sanchez Aguilera

Abstract:

With the development of the Internet, the study of tourism behavior has rapidly expanded from the traditional physical market to the online market. Data on the Internet are characterized by dynamic change, with new data appearing all the time. In recent years, large volumes of data from forums, blogs, and other sources have expanded over time and space; together they constitute large-scale Internet data, known as Big Data. These data of technological origin, derived from the use of devices and the activity of multiple users, are becoming a source of great importance for the study of geography and tourist behavior. This study focuses on cultural heritage tourism practices in the context of Big Data and explores the characteristics and behavior of Chinese tourists in relation to the cultural heritage of Catalonia. Geographical information, destination image, and perceptions in user-generated content are studied through the analysis of data from Weibo, the largest microblogging social network in China. By analyzing the behavior of heritage tourists in the Big Data environment, this study aims to understand the practices (activities, motivations, perceptions) of cultural tourists, and thus their needs and preferences, in order to better guide the sustainable development of tourism at heritage sites.

Keywords: Barcelona, Big Data, Catalonia, cultural heritage, Chinese tourism market, tourists’ behavior

Procedia PDF Downloads 116
24166 Towards A Framework for Using Open Data for Accountability: A Case Study of A Program to Reduce Corruption

Authors: Darusalam, Jorish Hulstijn, Marijn Janssen

Abstract:

The media have revealed a variety of corruption cases in regional and local governments all over the world. Many governments have pursued anti-corruption reforms and created systems of checks and balances. Citizens face three types of corruption: administrative corruption, collusion, and extortion. Accountability is one of the benchmarks for building transparent government: the public sector is required to report the results of the programs it has implemented so that citizens can judge whether the institution has worked economically, efficiently, and effectively. Open Data offers solutions for implementing good governance in organizations that want to be more transparent; in addition, Open Data can create transparency and accountability toward the community. The objective of this paper is to build a framework for using open data for accountability in combating corruption. The paper investigates the relationship between open data and accountability as part of anti-corruption initiatives, and examines the impact of open data implementation on public organizations.

Keywords: open data, accountability, anti-corruption, framework

Procedia PDF Downloads 307
24165 Vertical Urban Design Guideline and Its Application to Measure Human Cognition and Emotions

Authors: Hee Sun (Sunny) Choi, Gerhard Bruyns, Wang Zhang, Sky Cheng, Saijal Sharma

Abstract:

This research addresses the need for a comprehensive framework that can guide the design and assessment of multi-level public spaces and public realms and their impact on the built environment. The study aims to understand and measure the neural mechanisms involved in this process. By doing so, it can lay the foundation for vertical and volumetric urbanism and ensure consistency and excellence in the field while also supporting scientific research methods for urban design with cognitive neuroscientists. To investigate these aspects, the paper focuses on the neighborhood scale in Hong Kong, specifically examining multi-level public spaces and quasi-public spaces within both commercial and residential complexes. The researchers use predictive Artificial Intelligence (AI) as a methodology to assess and comprehend the applicability of the urban design framework for vertical and volumetric urbanism. The findings aim to identify the factors that contribute to successful public spaces within a vertical living environment, thus introducing a new typology of public spaces.

Keywords: vertical urbanism, scientific research methods, spatial cognition, urban design guideline

Procedia PDF Downloads 52
24164 Syndromic Surveillance Framework Using Tweets Data Analytics

Authors: David Ming Liu, Benjamin Hirsch, Bashir Aden

Abstract:

Syndromic surveillance detects or predicts disease outbreaks through the analysis of medical data sources. Using social media data such as tweets for syndromic surveillance is becoming increasingly popular, aided by open data-collection platforms and the advantages of microblogging text and mobile geolocation features. This paper presents a syndromic surveillance framework with a machine learning kernel based on tweet data analytics. Influenza is used as the test disease, with the three United Arab Emirates cities of Abu Dhabi, Al Ain, and Dubai as trial areas. Hospital case data provided by the Health Authority of Abu Dhabi (HAAD) are used for correlation. In our model, a Latent Dirichlet Allocation (LDA) engine is adapted for supervised classification, and N-fold cross-validation confusion matrices are reported as simulation results, achieving an overall system recall of 85.595%.
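
Adapting LDA for supervised classification can be read as using LDA topic proportions as features for a downstream classifier. The sketch below implements that reading with scikit-learn; the tweets, labels, and topic count are made up, and the paper's actual engine may differ.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tweets (1 = influenza-related).
tweets = ["fever and chills all week", "sunny day in Abu Dhabi",
          "terrible flu cough fever", "traffic on the way to Dubai mall",
          "aching, fever, staying in bed", "great coffee in Al Ain"]
labels = [1, 0, 1, 0, 1, 0]

# LDA turns word counts into topic proportions; a classifier sits on top.
clf = make_pipeline(CountVectorizer(),
                    LatentDirichletAllocation(n_components=2, random_state=0),
                    LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["high fever and a bad cough"]))
```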

Keywords: syndromic surveillance, tweets, machine learning, data mining, Latent Dirichlet Allocation (LDA), influenza

Procedia PDF Downloads 92
24163 Analysis of Urban Population Using Twitter Distribution Data: Case Study of Makassar City, Indonesia

Authors: Yuyun Wabula, B. J. Dewancker

Abstract:

In the past decade, social networking apps have grown very rapidly. Geolocation data is one of the important features of social media that attaches a user's real-world location coordinates. This paper proposes the use of geolocation data from the Twitter social media application to gain knowledge about urban dynamics, especially human mobility behavior, and aims to explore the relation between Twitter geolocation and the presence of people in urban areas. First, the study analyzes the distribution of people in particular areas within the city using Twitter data. Second, we match and categorize places based on visits by the same individuals. Then, we combine the Twitter tracking results with questionnaire data to capture the Twitter user profile, using frequency distribution analysis to determine visitor percentages. To validate the hypothesis, we compare the results with local population statistics and land use maps released by the city planning department of the Makassar local government. The results show a correlation between the Twitter geolocation and questionnaire data. Thus, integrating Twitter data and survey data can reveal the profile of social media users.

Keywords: geolocation, Twitter, distribution analysis, human mobility

Procedia PDF Downloads 293
24162 The Evaluation of Shear Modulus (Go) Consistency State of Consolidated Cohesive Soils and Seismic Reflection Survey Using the Degree of Soil Consolidation

Authors: Abdul Halim Abdul, Wan Ismail Wan Yusoff

Abstract:

The geological formation in the Limau Manis Besar area consists of low-grade metamorphic rock in undulating, mountainous, rugged terrain with slope gradients as steep as 45 degrees. This paper presents the methods and devices used to measure P-wave velocity in order to estimate the initial shear modulus (Go) in steady-state and critical-state soil consolidation. The relationship between SPT-N values and Go at very small strain is also evaluated. Based on the seismic reflection survey and the constant (K) of poroelastic theory, the mean effective stress and wave velocity increase with soil depth. The degree of soil consolidation (U) concept, in the steady and critical states, is used to interpret the behavior of Go. The relationship between the consolidation test and the seismic reflection survey is also discussed.
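
For reference, the small-strain shear modulus is commonly obtained from measured wave velocity and bulk density via the standard isotropic-elasticity relations below (background material, not taken from the paper); here ρ is bulk density, V_s shear-wave velocity, V_p compressional-wave velocity, and ν Poisson's ratio.

```latex
G_0 = \rho V_s^2, \qquad
V_p = V_s \sqrt{\frac{2(1-\nu)}{1-2\nu}}
```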

Keywords: geological setting, shear modulus, poroelastic theory, steady state and none steady state degree of soil consolidation, consolidation test

Procedia PDF Downloads 454
24161 Analysis and Rule Extraction of Coronary Artery Disease Data Using Data Mining

Authors: Rezaei Hachesu Peyman, Oliyaee Azadeh, Salahzadeh Zahra, Alizadeh Somayyeh, Safaei Naser

Abstract:

Coronary Artery Disease (CAD) is a major cause of disability in adults and a main cause of death in developed countries. In this study, data mining techniques including decision trees, artificial neural networks (ANNs), and Support Vector Machines (SVM) are used to analyze CAD data. Data from 4948 patients who had suffered from heart disease were included in the analysis. CAD is the target variable, and 24 input or predictor variables are used for the classification. The performance of these techniques is compared in terms of sensitivity, specificity, and accuracy. The most significant factor influencing CAD is chest pain. Elderly males (age > 53) have a high probability of being diagnosed with CAD. The SVM algorithm is the most effective at distinguishing CAD patients from non-CAD ones. Applying data mining techniques to coronary artery disease data is a good method for investigating the existing relationships between variables.
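
The reported comparison in terms of sensitivity, specificity, and accuracy can be computed from a confusion matrix, as sketched below with an SVM on synthetic stand-in data shaped like the study's (4948 patients, 24 predictors).

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for the 4948-patient, 24-predictor CAD data set.
X, y = make_classification(n_samples=4948, n_features=24, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

y_hat = SVC().fit(X_tr, y_tr).predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("sensitivity", tp / (tp + fn),    # true-positive rate
      "specificity", tn / (tn + fp),    # true-negative rate
      "accuracy", (tp + tn) / len(y_te))
```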

Keywords: classification, coronary artery disease, data mining, knowledge discovery, rule extraction

Procedia PDF Downloads 634
24160 Sensor Data Analysis for a Large Mining Major

Authors: Sudipto Shanker Dasgupta

Abstract:

One of the largest mining companies wanted to look at health analytics for its driverless trucks, which are key to its supply chain logistics. The automated trucks have multi-level subassemblies that send out sensor information. The use case was to capture the sensor signals from the truck subcomponents and analyze truck health from a repair-and-replacement perspective. Open source software was used to stream the data into a clustered Hadoop setup in the Amazon Web Services cloud, and Apache Spark SQL was used to analyze the data. All of this ran on a 10-node Amazon setup (32 cores and 64 GB RAM per node), achieving real-time analytics on 300 million records. To check the scalability of the system, the cluster was then increased to a 100-node setup. This talk highlights how open source software was used to implement the above use case and shares insights on achieving high data throughput in a cloud setup.
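
A minimal Spark SQL sketch of the kind of health query described: landed sensor JSON is loaded from the cluster, registered as a view, and aggregated per subassembly. The path, schema (truck_id, subassembly, vibration), and threshold are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("truck-health").getOrCreate()

# Hypothetical layout: streamed sensor readings landed in HDFS as JSON.
readings = spark.read.json("hdfs:///trucks/sensor_readings/")
readings.createOrReplaceTempView("readings")

# Flag subassemblies whose average vibration exceeds a service threshold.
flagged = spark.sql("""
    SELECT truck_id, subassembly, AVG(vibration) AS avg_vib
    FROM readings
    GROUP BY truck_id, subassembly
    HAVING AVG(vibration) > 0.8
""")
flagged.show()
```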

Keywords: streaming analytics, data science, big data, Hadoop, high throughput, sensor data

Procedia PDF Downloads 386
24159 Data-Centric Anomaly Detection with Diffusion Models

Authors: Sheldon Liu, Gordon Wang, Lei Liu, Xuefeng Liu

Abstract:

Anomaly detection, also referred to as one-class classification, plays a crucial role in identifying product images that deviate from the expected distribution. This study introduces Data-centric Anomaly Detection with Diffusion Models (DCADDM), presenting a systematic strategy for data collection and further diversifying the data through image generation with diffusion models. The algorithm addresses data collection challenges in real-world scenarios and points toward data augmentation through the integration of generative AI capabilities. The paper explores the generation of normal images using diffusion models. The experiments demonstrate that with 30% of the original normal-image set, unsupervised modeling with state-of-the-art approaches can achieve equivalent performance. With the addition of images generated by diffusion models (equivalent to 10% of the original dataset size), the proposed algorithm achieves better or equivalent anomaly localization performance.

Keywords: diffusion models, anomaly detection, data-centric, generative AI

Procedia PDF Downloads 64
24158 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions

Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier

Abstract:

Ice slurries are considered a promising phase-changing secondary fluid for air-conditioning, packaging, and industrial cooling processes. An experimental study has been carried out to measure their rheological characteristics. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase, the latter composed of a mixture of liquid water and an additive, either (1) propylene glycol (PG) or (2) ethylene glycol (EG), used to lower the freezing point of water. Additive concentrations of 5%, 14%, and 24% are investigated, with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 rheometer with a vane-in-concentric-cylinder geometry with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. To determine the consistency and flow indices, the Herschel-Bulkley model is used to describe the behavior of the ice slurries. The results are finally validated against an experimental database from the literature and against the predictions of an artificial neural network model.
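
Fitting the Herschel-Bulkley model tau = tau0 + K * gamma_dot**n to flow-curve data yields the yield stress tau0, consistency index K, and flow index n. The sketch below does this with SciPy on made-up rheometer readings (n < 1 would indicate shear-thinning).

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    # tau = tau0 + K * gamma_dot**n (yield stress, consistency, flow index)
    return tau0 + K * gamma_dot ** n

# Hypothetical rheometer readings: shear rate [1/s] vs shear stress [Pa].
gamma_dot = np.array([1, 5, 10, 25, 50, 100], dtype=float)
tau = np.array([12.1, 15.8, 18.4, 23.9, 30.2, 40.5])

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[10, 1, 0.8])
print(f"tau0={tau0:.2f} Pa, K={K:.2f}, n={n:.2f}")  # n < 1 -> shear-thinning
```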

Keywords: ice slurry, propylene glycol, ethylene glycol, rheology

Procedia PDF Downloads 242