Search results for: housing data
24559 Reimagining Urban Food Security Through Informality Practices: The Case of Street Food Vending in Johannesburg, South Africa
Authors: Blessings Masuku
Abstract:
This study positions itself within the nascent literature on street food vending, which plays a crucial role in addressing urban household food security across the urban landscape of South Africa. The study aimed to understand how various infrastructure systems (i.e., energy, water and sanitation, housing, and transport, among others) intersect with food and urban informality, and how the choices and decisions that vendors and households make around food are influenced by infrastructure assemblages. The study notes that most of the literature on food security has focused mainly on the rural agricultural sector, with limited attention to urban food security, notably the role of informality practices in addressing urban food insecurity at the household level. It pays close attention to how informality practices such as street food vending can be used as a catalyst to address urban poverty and household food security and to steer local economies towards sustainable livelihoods for the urban poor who live on the periphery of Johannesburg. The study deconstructs the infrastructure needs of street food vendors, aiming to understand how those needs intersect with food and with the policy that governs urban informality practices. It argues that the decisions and choices of informality actors in the city of Johannesburg are chiefly determined by assemblages of infrastructure, including the regulatory frameworks that govern the informal sector in the city. A qualitative approach was adopted, comprising surveys (open-ended questions), archival research (i.e., policy and other key document reviews), and key-informant interviews, mainly with city officials and informality actors. A thematic analysis was used to analyze the data collected.
This study contributes to broader debates in urban studies and to the burgeoning literature on urban food security in several ways. Firstly, it highlights the pivotal role that the informal food sector, notably street food vending, plays within the urban economy in addressing urban poverty and household food security, thereby questioning conservative perspectives that view the informal sector as a hindrance to a ‘modern city’ and an annoyance in ‘modern’ urban spaces. Secondly, it contributes to the literature on the livelihood and coping strategies of the urban poor who, despite harsh and restrictive regulatory frameworks, devise various agentive ways to generate income and address urban poverty and food insecurity.
Keywords: urban food security, street food vending, informal food sector, infrastructure systems, livelihood strategies, policy framework and governance
Procedia PDF Downloads 60
24558 Variance-Aware Routing and Authentication Scheme for Harvesting Data in Cloud-Centric Wireless Sensor Networks
Authors: Olakanmi Oladayo Olufemi, Bamifewe Olusegun James, Badmus Yaya Opeyemi, Adegoke Kayode
Abstract:
The wireless sensor network (WSN) has made a significant contribution to the emergence of various intelligent services and cloud-based applications. Most of the time, these data are stored on a cloud platform for efficient management and sharing among different services or users. However, the sensitivity of the data makes it prone to various confidentiality and performance-related attacks during and after harvesting. Various security schemes have been developed to ensure the integrity and confidentiality of WSN data. However, their specificity towards particular attacks, together with the resource constraints and heterogeneity of WSNs, makes most of these schemes imperfect. In this paper, we propose a secure variance-aware routing and authentication scheme with two-tier verification to collect, share, and manage WSN data. The scheme is capable of classifying a WSN into different subnets, detecting any attempt at a wormhole or black hole attack during harvesting, and enforcing access control on the harvested data stored in the cloud. The results of the analysis showed that the proposed scheme has more security functionalities than other related schemes, solves most WSN and cloud security issues, prevents wormhole and black hole attacks, identifies attackers during data harvesting, and enforces access control on the harvested data stored in the cloud at low computational, storage, and communication overheads.
Keywords: data block, heterogeneous IoT network, data harvesting, wormhole attack, black hole attack, access control
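The abstract does not spell out the two-tier verification construction. As an illustrative sketch only, the fragment below shows one conventional way to protect harvested blocks in transit: binding a keyed MAC to a node identifier and sequence number, so that a block replayed or re-routed (e.g. through a wormhole) fails verification. The function names and fields are assumptions, not the authors' scheme.

```python
import hmac
import hashlib

def tag_block(key: bytes, node_id: str, seq: int, payload: bytes) -> bytes:
    # Bind the MAC to a node identifier and a sequence number so that a
    # block replayed under another identity or out of order fails to verify
    msg = node_id.encode() + seq.to_bytes(4, "big") + payload
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_block(key: bytes, node_id: str, seq: int,
                 payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(tag_block(key, node_id, seq, payload), tag)
```

A second verification tier at the cloud gateway would repeat the same check under a separate key before admitting the block to storage.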
Procedia PDF Downloads 83
24557 A Comparative Study on the Effects of Different Clustering Layouts and Geometry of Urban Street Canyons on Urban Heat Island in Residential Neighborhoods of Kolkata
Authors: Shreya Banerjee, Roshmi Sen, Subrata Chattopadhyay
Abstract:
Urbanization during the second half of the last century has created many serious environment-related issues leading to global warming and climate change. India is no exception, as the country is also facing the problems of global warming and urban heat islands (UHI) in all of its major metropolises. This paper discusses the effect of different housing cluster layouts, site geometry, and the geometry of urban street canyons on the urban heat island profile. The study is carried out using the three-dimensional microclimatic computational fluid dynamics model ENVI-met version 3.1. Simulations are run for a typical summer day, 21st June 2015, in four different residential neighborhoods in the city of Kolkata, which predominantly has a warm-humid monsoon climate. The results show the changing pattern of the urban heat island profile with respect to different clustering layouts, geometry, and morphology of urban street canyons. The comparison between the four neighborhoods shows that the different microclimatic variables are strongly dependent on the neighborhood layout pattern and geometry. The inferences obtained from this study can inform the formulation of neighborhood design by-laws that will attenuate the urban heat island effect.
Keywords: urban heat island, neighborhood morphology, site microclimate, ENVI-met, numerical analysis
Procedia PDF Downloads 366
24556 Enhancing Seismic Resilience in Urban Environments
Authors: Beatriz González-Rodrigo, Diego Hidalgo-Leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-Martínez, Laura Navas-Sánchez, Belén Orta, Nicola Tarque, Orlando Hernández-Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-Oterino
Abstract:
Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at the neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables the consideration of behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions.
Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.
Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability
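The abstract mentions adaptive fragility curve adjustments. Seismic fragility curves are conventionally modelled as a lognormal CDF of the ground-motion intensity measure; a minimal sketch follows, where the median theta and dispersion beta values used below are placeholders, not results from this study:

```python
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """P(damage state reached | intensity measure im) for a lognormal
    fragility curve with median capacity theta and log-std dispersion beta."""
    return norm.cdf(np.log(np.asarray(im) / theta) / beta)
```

Adjusting a curve for behaviour modifiers then amounts to shifting theta (capacity) and/or beta (uncertainty) per building class before re-evaluating damage probabilities.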
Procedia PDF Downloads 66
24555 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)
Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang
Abstract:
This article analyzes insurance data containing information on customer decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets their needs as closely as possible. The basic insurance pay package data were collected for data mining, thus reducing the scattering of information. The data were then classified to obtain a decision model, or decision tree, using algorithm C4.5 (J-48). For the classification, the WEKA tools were used to build the model, and testing datasets were used to evaluate the accuracy of the decision tree. Validation of the model's classification showed an accurate prediction rate of 68.43%, with 31.25% errors. The same dataset was then tested with other models, i.e., Naive Bayes and ZeroR. The results showed that the J-48 method predicted more accurately. The researchers therefore applied the decision tree in a program used to introduce the product to new customers, to support customers' decision-making in purchasing the insurance package that best meets their needs.
Keywords: decision tree, data mining, customers, life insurance pay package
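As a hedged illustration of the workflow described above: scikit-learn does not ship C4.5/J-48 (J-48 being WEKA's Java implementation of C4.5), but its CART tree with the entropy criterion mirrors C4.5's information-gain splitting. The customer features and labels below are invented for demonstration, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Invented customer features: [age, income_band, has_children, current_policies]
X = np.array([[25, 1, 0, 0], [40, 3, 1, 2], [33, 2, 1, 1], [52, 3, 0, 3],
              [29, 1, 0, 0], [45, 2, 1, 1], [61, 3, 0, 2], [37, 2, 1, 0]])
y = np.array([0, 1, 1, 1, 0, 1, 1, 0])   # 1 = bought the pay package

# "entropy" mirrors C4.5's information-gain criterion; sklearn's tree is CART
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X, y)
```

In practice, J-48 itself would be driven from WEKA (or via a Python wrapper); the sketch above only mirrors the splitting criterion, and real evaluation would use held-out test data as the paper describes.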
Procedia PDF Downloads 425
24554 Assessing the Community Change Effects of Transit Oriented Development in Jabodetabek, Indonesia
Authors: Hayati Sari Hasibuan, Tresna P. Soemardi, Raldi H. Koestoer, Setyo S. Moersidik
Abstract:
Facing severe daily transportation problems, the government of Indonesia has been searching for alternative solutions to combat acute traffic jams and the resulting negative socio-economic effects and pollution. Transit-oriented development, as a strategy for reformulating and restructuring urban land uses as well as the transport system, will be implemented in many urban areas in Indonesia, especially in Jabodetabek. Jabodetabek is the largest metropolitan area in Indonesia, with 27.9 million inhabitants. It is also the center of economic activity, with a gross domestic product of around 22 percent of the gross national product. This study aims to assess the potential economic development and community change effects of implementing transit-oriented development. The study found that using transit-oriented development as an alternative approach to restructuring urban land uses in a metropolitan region affects urban mobility behaviour, housing choices, and the cost of transportation. The sustainability of the socio-economic outcomes of transit-oriented development is the main focus of this paper. The challenge is to explore the characteristics of transit-oriented development suitable for a metropolitan region in a developing country, considering the unique natural and socio-cultural conditions that shape these urban areas.
Keywords: economic development, community change, restructuring, land use, transportation, environment
Procedia PDF Downloads 406
24553 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including k-means clustering, Naïve Bayes, k-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
Keywords: data mining, classification algorithm, naïve Bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
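A small sketch of the kind of model comparison such reviews discuss, using scikit-learn stand-ins for two of the techniques named above (Gaussian Naïve Bayes and k-nearest neighbour); the dataset is synthetically generated here, not real crime data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a labelled crime-record feature matrix
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           random_state=0)

# Cross-validated accuracy lets the two classifiers be compared fairly
accs = {}
for name, model in [("Naive Bayes", GaussianNB()),
                    ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    accs[name] = cross_val_score(model, X, y, cv=5).mean()
```

Real studies would add the unstructured-data pipeline (text features from social media) and temporal features before this comparison step.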
Procedia PDF Downloads 62
24552 Assessing Supply Chain Performance through Data Mining Techniques: A Case of Automotive Industry
Authors: Emin Gundogar, Burak Erkayman, Nusret Sazak
Abstract:
Providing effective management performance throughout the whole supply chain is a critical issue and hard to achieve in practice. Proper evaluation of integrated data can yield accurate information. Analysing supply chain data through OLAP (On-Line Analytical Processing) technologies can provide a multi-angle, consolidated view of the operation. In this study, association rules and classification techniques are applied to measure the supply chain performance metrics of an automotive manufacturer in Turkey. The main criteria and important rules are determined, and a comparison of the algorithms' results is presented.
Keywords: supply chain performance, performance measurement, data mining, automotive
Procedia PDF Downloads 512
24551 Multimodal Data Fusion Techniques in Audiovisual Speech Recognition
Authors: Hadeer M. Sayed, Hesham E. El Deeb, Shereen A. Taie
Abstract:
In the big data era, we are facing a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities into a joint representation with the goal of predicting an outcome through a classification or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. The paper provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each. Furthermore, the audiovisual speech recognition task is presented as a case study of multimodal data fusion approaches, and the open issues arising from the limitations of the current studies are discussed. This paper can be considered a powerful guide for researchers interested in the field of multimodal data fusion and audiovisual speech recognition in particular.
Keywords: multimodal data, data fusion, audio-visual speech recognition, neural networks
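The two fusion families can be illustrated on synthetic stand-ins for the audio and video modalities: early (feature-level) fusion concatenates modality features before a single classifier, while late (decision-level) fusion trains one model per modality and averages their posteriors. The data below are simulated, not real audiovisual features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)
# Two synthetic "modalities": each is a noisy view of the same label
audio = labels[:, None] + rng.normal(0.0, 1.0, (n, 4))
video = labels[:, None] + rng.normal(0.0, 1.0, (n, 6))

Xa_tr, Xa_te, Xv_tr, Xv_te, y_tr, y_te = train_test_split(
    audio, video, labels, random_state=0)

# Early (feature-level) fusion: concatenate modalities, one classifier
early = LogisticRegression().fit(np.hstack([Xa_tr, Xv_tr]), y_tr)
early_acc = early.score(np.hstack([Xa_te, Xv_te]), y_te)

# Late (decision-level) fusion: per-modality models, averaged posteriors
ma = LogisticRegression().fit(Xa_tr, y_tr)
mv = LogisticRegression().fit(Xv_tr, y_tr)
proba = (ma.predict_proba(Xa_te) + mv.predict_proba(Xv_te)) / 2
late_acc = float((proba.argmax(axis=1) == y_te).mean())
```

Model-based approaches (e.g. joint neural architectures) learn the fusion itself rather than fixing it as concatenation or averaging; the sketch covers only the model-agnostic end of the taxonomy.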
Procedia PDF Downloads 109
24550 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic
Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam
Abstract:
In recent years, research on knowledge sources, decision support systems, data mining, and the process of knowledge discovery in databases has grown in importance, and each of these aspects affects the others. In this article, we merge an information source and a knowledge source to propose a knowledge-based system, within the limits of management based on the storing and restoring of knowledge, to manage information and improve decision-making and resources. We use data mining and the Apriori algorithm in the knowledge discovery process. One of the problems of the Apriori algorithm is that the user must specify the minimum support threshold for the rules. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. The user certainly does not have knowledge of all the transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to put the data into different clusters before applying the Apriori algorithm to the data in the database, and we also suggest the most suitable threshold to the user automatically.
Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic
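The fuzzy-logic clustering step and the automatic threshold suggestion are the paper's contribution and are not reproduced here. As background, a minimal baseline Apriori frequent-itemset miner, with the user-supplied minimum support that the paper seeks to automate, might look like this (the toy transactions are invented):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) with support >= min_support."""
    n = len(transactions)
    tsets = [set(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in tsets) / n

    items = {frozenset([i]) for t in tsets for i in t}
    frequent = {}
    current = {s for s in items if support(s) >= min_support}
    k = 1
    while current:
        frequent.update({s: support(s) for s in current})
        k += 1
        # Join step: unions of frequent (k-1)-itemsets of size k
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Prune step: all (k-1)-subsets must themselves be frequent
        current = {c for c in candidates
                   if all(frozenset(s) in frequent for s in combinations(c, k - 1))
                   and support(c) >= min_support}
    return frequent

txns = [["bread", "milk"], ["bread", "diapers", "beer"],
        ["milk", "diapers", "beer"], ["bread", "milk", "diapers"],
        ["bread", "milk", "beer"]]
freq = apriori(txns, min_support=0.6)
```

The paper's improvement would run this per fuzzy cluster, with a cluster-specific support threshold derived automatically rather than passed in by the user.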
Procedia PDF Downloads 334
24549 Spatial Variation in Urbanization and Slum Development in India: Issues and Challenges in Urban Planning
Authors: Mala Mukherjee
Abstract:
Background: India is urbanizing very fast, and urbanisation in India is treated as one of the most crucial components of economic growth. Though the pace of urbanisation (31.6 per cent in 2011) is slower and lower than the average for Asia, the absolute number of people residing in cities and towns has increased substantially. Rapid urbanization leads to urban poverty, and this is well represented in slums. Currently, India has four metropolises and 53 million-plus cities. All of them have significant slum populations, but the standard of living and the success of slum development programmes vary across regions. Objectives: The objectives of the paper are to show how urbanisation and slum development vary across space; to show the spatial variation in the standard of living in Indian slums; and to analyse how the implementation of slum development policies like JNNURM and Rajiv Awas Yojana varies across cities, why it brings different results in different regions, and which factors are responsible for such variation. Data Sources and Methodology: Census 2011 data on urban population and slum households and amenities have been used for analysing the regional variation of urbanisation in the 53 million-plus cities of India. Special focus has been put on the Kolkata Metropolitan Area. Statistical techniques like z-scores and PCA have been employed to work out a Standard of Living Deprivation score for the slums of all 53 metropolises. ARC-GIS software is used for making maps. Standard of living has been measured in terms of access to basic amenities, infrastructure, and assets like drinking water, sanitation, housing condition, bank accounts, and so on. Findings: 1. The first finding reveals that migration and urbanization are very high in Greater Mumbai, Delhi, Bangaluru, Chennai, Hyderabad, and Kolkata, but slum population is high in Greater Mumbai (50% of the population live in slums), Meerut, Faridabad, Ludhiana, Nagpur, Kolkata, etc.
Though the rate of urbanization is high in the southern and western states, the percentage of slum population is high in the northern states (except Greater Mumbai). 2. Standard of living also varies widely. Slums of Greater Mumbai and north Indian cities score fairly high in the index, indicating that the standard of living in those slums is high compared to the slums in eastern India (Dhanbad, Jamshedpur, Kolkata). Therefore, though Kolkata has a relatively lower percentage of slum population compared to north and south Indian cities, the standard of living in Kolkata's slums is deplorable. 3. It is interesting to note that even within the Kolkata Metropolitan Area, slums located in the southern and eastern municipal towns like Rajpur-Sonarpur, Pujali, Diamond Harbour, Baduria, and Dankuni have a lower standard of living compared to the slums located in the Hooghly industrial belt like Titagarh, Rishra, Serampore, etc. Slums of the Hooghly industrial belt are older than the slums located in the eastern and southern parts of the urban agglomeration. 4. Therefore, urban development and the emergence of slums should not be the only issues of urban governance; the standard of living should be the main focus. Slums located in the main cities like Delhi, Mumbai, and Kolkata get more attention from urban planners; similarly, older slums in a city receive greater political attention compared to the slums of smaller cities and the newly emerged slums of the peripheral parts.
Keywords: urbanisation, slum, spatial variation, India
Procedia PDF Downloads 359
24548 The Study of Dengue Fever Outbreak in Thailand Using Geospatial Techniques, Satellite Remote Sensing Data and Big Data
Authors: Tanapat Chongkamunkong
Abstract:
The objective of this paper is to present a practical use of a Geographic Information System (GIS) for public health, based on the spatial correlation between multiple factors and dengue fever outbreaks. Meteorological, demographic, and environmental factors are compiled using GIS techniques along with Global Satellite Mapping remote sensing (RS) data. We use monthly dengue fever cases, population density, precipitation, and Digital Elevation Model (DEM) data. The scope covers the study area under the climate variability of the El Niño–Southern Oscillation (ENSO), indicated by sea surface temperature (SST), across 12 provinces of Thailand, with remote sensing data from January 2007 to December 2014.
Keywords: dengue fever, sea surface temperature, Geographic Information System (GIS), remote sensing
Procedia PDF Downloads 197
24547 Model of Optimal Centroids Approach for Multivariate Data Classification
Authors: Pham Van Nha, Le Cam Binh
Abstract:
Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm. PSO was inspired by the natural behavior of birds and fish in migration and foraging for food. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. PSO's ideas are simple and easy to understand, but PSO has mostly been applied to simple model problems. We believe that, in order to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO's general mathematical model (MPSO) is analyzed as a universal optimization model. Then, a Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several benchmark data sets to demonstrate the effectiveness of MOC compared with several previously proposed schemes.
Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization
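The MPSO/MOC formulations themselves are not given in the abstract. As a sketch of the building block MOC rests on, a minimal global-best PSO can be written as follows; the inertia and acceleration coefficients are common defaults, not the authors' settings, and for classification the objective `f` would be replaced by a clustering criterion over candidate centroids.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best PSO minimizing f over a box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()          # global best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

# Sanity check on the 3-D sphere function, whose minimum is 0 at the origin
best, val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

In a MOC-style use, each particle would encode a full set of class centroids and `f` would score how well those centroids separate the training data.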
Procedia PDF Downloads 206
24546 Study of Inhibition of the End Effect Based on AR Model Predict of Combined Data Extension and Window Function
Authors: Pan Hongxia, Wang Zhenhua
Abstract:
In this paper, the end effect that arises during EMD decomposition is suppressed by combining two methods: data extension based on AR model prediction and a window function. Simulations on a synthetic signal showed that the approach achieves the desired effect, and applying the method to gearbox test data also produced good results, improving the calculation accuracy of subsequent data processing. The work thus lays a good foundation for gearbox fault diagnosis under various working conditions.
Keywords: gearbox, fault diagnosis, AR model, end effect
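A sketch of the extension idea, under the assumption that "AR model predict of combined data extension" means forecasting the series beyond its endpoints with a fitted AR model and then tapering with a window before decomposition; the EMD step itself is omitted, and the AR order, window, and signal below are illustrative choices, not the paper's.

```python
import numpy as np

def ar_fit(x, order):
    """Least-squares AR fit: x[t] ~ sum_i a[i] * x[t-1-i]."""
    X = np.column_stack([x[order - 1 - i:len(x) - 1 - i] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def ar_extend(x, order, n_ahead):
    """Extend the series forward with recursive AR predictions
    (the same idea, applied to the reversed series, extends it backward)."""
    a = ar_fit(np.asarray(x, dtype=float), order)
    out = list(x)
    for _ in range(n_ahead):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)

t = np.linspace(0, 4 * np.pi, 200)
sig = np.sin(t)
ext = ar_extend(sig, order=10, n_ahead=40)
# Window the extended record so residual edge energy is suppressed before EMD
tapered = ext * np.hanning(len(ext))
```

Because the end effect stems from EMD's spline envelopes misbehaving at the record boundaries, pushing the boundaries outward with predicted samples (and tapering) keeps the distortion away from the data of interest.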
Procedia PDF Downloads 365
24545 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act
Authors: Maria Jędrzejczak, Patryk Pieniążek
Abstract:
The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, driven by massive data resources linked to powerful algorithms and vast computing capacity. The above is closely linked to technological developments in the area of artificial intelligence, which has prompted an analysis covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and widely held, at both the European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years. It would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks of an unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems.
The adopted axis of consideration is a preliminary assessment of two issues: 1) which data protection principles should be applied when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches is regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation regarding artificial intelligence. The main question the authors want to answer is how the European Union's regulation of data protection breaches in artificial intelligence systems is shaping up. The answer to this question will include examples to illustrate the practical implications of these legal regulations.
Keywords: data protection law, personal data, AI law, personal data breach
Procedia PDF Downloads 63
24544 A Method for Identifying Unusual Transactions in E-commerce Through Extended Data Flow Conformance Checking
Authors: Handie Pramana Putra, Ani Dijah Rahajoe
Abstract:
The proliferation of smart devices and advancements in mobile communication technologies have permeated various facets of life with the widespread influence of e-commerce. Detecting abnormal transactions holds paramount significance in this realm due to the potential for substantial financial losses. Moreover, the fusion of data flow and control flow assumes a critical role in the exploration of process modeling and data analysis, contributing significantly to the accuracy and security of business processes. This paper introduces an alternative approach to identify abnormal transactions through a model that integrates both data and control flows. Referred to as the Extended Data Petri net (DPNE), our model encapsulates the entire process, encompassing user login to the e-commerce platform and concluding with the payment stage, including the mobile transaction process. We scrutinize the model's structure, formulate an algorithm for detecting anomalies in pertinent data, and elucidate the rationale and efficacy of the comprehensive system model. A case study validates the responsive performance of each system component, demonstrating the system's adeptness in evaluating every activity within mobile transactions. Ultimately, the results of anomaly detection are derived through a thorough and comprehensive analysis.
Keywords: database, data analysis, DPNE, extended data flow, e-commerce
Procedia PDF Downloads 53
24543 Child of the Dark by Carolina Maria de Jesus in a Fundamental Rights Perspective
Authors: Eliziane Navarro, Aparecida Citta
Abstract:
Child of the Dark is a work by the Brazilian author Carolina Maria de Jesus, first published by Ática & Francisco Alves in 1960. It is, above all, a story of the lack of rights. Those who live in the slums lack what is essential in order to take advantage of the privilege of rationality and develop as civilized human beings. It is, therefore, in the withholding of basic rights that inequality finds the space to establish itself as the main misery on Earth. Antonio Candido, a Brazilian sociologist, claims that the right to literature has the ability to humanize people, since the aptitude to create fiction and fable is essential to social balance. Hence, given the formative role that literature holds, it must be counted among the rights that assure human dignity, such as housing, education, health, and freedom. In writing about her routine, Carolina puts in evidence something that has great influence over the formation of human beings and the way they live: the slum. Even though she does so in a distinct way, using her own linguistic variety, Carolina writes about something that would only be discussed later, in Brazil's City Statute and by Ermínia Maricato: the right to the city, and how the slums are, although inserted in the city, an appendage, an illegal city, a dismissed space. What interests us in this work, therefore, is to analyse how the deprivation of the rights to the city and to literature, detailed in Carolina's journal, conditions human beings to a life in which instinct overcomes social values.
Keywords: Child of the Dark, slum, Brazil, architecture and literature
Procedia PDF Downloads 252
24542 Advanced Analytical Competency Is Necessary for Strategic Leadership to Achieve High-Quality Decision-Making
Authors: Amal Mohammed Alqahatni
Abstract:
This paper is a non-empirical analysis of the existing literature on digital leadership competency, data-driven organizations, and dealing with AI technology (big data). It provides insights into the importance of developing leaders' analytical skills and styles so that they can make higher-quality decisions in a data-driven organization and sustain creativity as the organization is digitalized. Despite the enormous potential of big data, there are not enough experts in the field. Many organizations have faced issues with leadership style, which has been considered an obstacle to organizational improvement. The paper investigates the obstacles that leadership style poses in this context and the challenges leaders face in coaching and development. Leaders' lack of analytical skill with AI technology, such as big data tools, was noted, as was their limited understanding of the value of that data, resulting in poor communication with others, especially in meetings where decisions must be made. By acknowledging the different dynamics of work competency and of organizational structure and culture, organizations can make the necessary adjustments to best support their leaders. This paper reviews prior research and applies what is known to current obstacles, addressing how analytical leadership can help overcome challenges in a data-driven organization's work environment.
Keywords: digital leadership, big data, leadership style, digital leadership challenge
Procedia PDF Downloads 68
24541 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions
Authors: Chaitanya Varma, Arpan Mehar
Abstract:
The present study demonstrates a procedure for analysing speed data collected on several four-lane divided sections in India. Field data were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to the speed distributions were estimated. Different statistical distributions were fitted to vehicle-type speed data and to mixed-traffic speed data. It was found that vehicle-type speed data follow either the normal or the log-normal distribution, whereas mixed-traffic speed data follow more than one type of statistical distribution; the most common fits observed for mixed-traffic speed data were the beta and Weibull distributions. Separate operating speed models based on traffic and roadway geometric parameters were proposed, including models with traffic parameters and with curve geometry parameters. Two different operating speed models, with the variables 1/R and ln(R), were established and found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
Keywords: highway, mixed traffic flow, modeling, operating speed
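The distribution-fitting step described in this abstract can be sketched with standard tools. The sketch below uses simulated speeds, not the study's field observations, and a reduced candidate set (the beta fit is omitted since it requires rescaling speeds to a bounded interval first); the Kolmogorov-Smirnov statistic is used as an illustrative goodness-of-fit measure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated spot-speed sample (km/h); real data would come from radar-gun
# observations at a study section.
speeds = rng.lognormal(mean=np.log(65), sigma=0.25, size=500)

# A subset of the candidate distributions named in the abstract.
candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
}

# Fit each candidate by maximum likelihood and compare Kolmogorov-Smirnov
# statistics: smaller means a closer fit to the empirical distribution.
ks_results = {}
for name, dist in candidates.items():
    params = dist.fit(speeds)
    ks_results[name] = stats.kstest(speeds, dist.cdf, args=params).statistic
    print(f"{name:10s} KS statistic = {ks_results[name]:.4f}")
```

In practice the same loop would be run per vehicle class and again on the pooled mixed-traffic sample, which is where multiple distribution families start to compete.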
Procedia PDF Downloads 459
24540 Accurate HLA Typing at High-Digit Resolution from NGS Data
Authors: Yazhi Huang, Jing Yang, Dingge Ying, Yan Zhang, Vorasuk Shotelersuk, Nattiya Hirankarn, Pak Chung Sham, Yu Lung Lau, Wanling Yang
Abstract:
Human leukocyte antigen (HLA) typing from next generation sequencing (NGS) data has potential applications in clinical laboratories and population genetic studies. Here we introduce a novel technique for HLA typing from NGS data based on read mapping, using a comprehensive reference panel containing all known HLA alleles, and de novo assembly of the gene-specific short reads. Accurate HLA typing at high-digit resolution was achieved when the method was tested on publicly available NGS data, outperforming other newly developed tools such as HLAminer and PHLAT.
Keywords: human leukocyte antigens, next generation sequencing, whole exome sequencing, HLA typing
Procedia PDF Downloads 661
24539 Early Childhood Education: Teachers Ability to Assess
Authors: Ade Dwi Utami
Abstract:
Pedagogic competence is the basic competence teachers need to perform their tasks as educators, and the ability to assess has become one of its demands. Teachers' ability to assess is related to curriculum instructions and applications. This research is aimed at obtaining data concerning teachers' ability to assess, which comprises understanding assessment; determining assessment types, tools, and procedures; conducting the assessment process; and using the information that assessment results provide. It uses a mixed-methods explanatory technique in which qualitative data are used to verify the quantitative data obtained through a survey. Quantitative data were collected by means of a test, whereas qualitative data were collected through observation, interviews, and documentation. The analysed data were then processed through a proportion study technique and categorized as high, medium, or low. The results show that teachers' ability to assess falls into three groups: 2% high, 4% medium, and 94% low. The data show that teachers' ability to assess is still relatively low; teachers lack knowledge of, and proficiency in, the application of assessment. This finding is verified by the qualitative data, which show that teachers did not state which aspect was assessed in learning, did not record children's behavior, and did not use the resulting data when designing a program. Teachers keep assessment documents, yet these serve only to complete teachers' administrative requirements for the certification program and were not used on the basis of acquired knowledge. This situation should prompt teacher-education institutions and the government to improve teachers' pedagogic competence, including the ability to assess.
Keywords: assessment, early childhood education, pedagogic competence, teachers
Procedia PDF Downloads 244
24538 Statistical Analysis for Overdispersed Medical Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models for over-dispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. Those studies indicate that ZIP and ZINB consistently provide a better fit than the ordinary Poisson and negative binomial models for such data. In this study, we propose the use of the zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG), and zero-inflated strict arcsine (ZISA) models for over-dispersed medical count data. These models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternatives for modeling over-dispersed medical count data, which is supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian, and strict arcsine distributions are discrete distributions with a cubic variance function of the mean; ZIIT, ZIPIG, and ZISA are therefore able to accommodate data with excess zeros and very heavy tails, and they are recommended for modeling over-dispersed medical count data when ZIP and ZINB are inadequate.
Keywords: zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson's goodness of fit
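The ZIIT, ZIPIG, and ZISA models this abstract proposes are not available in standard statistics libraries, so the sketch below illustrates the simpler ZIP baseline they are compared against: a direct maximum-likelihood fit of the zero-inflated Poisson to simulated counts with structural zeros. All data and parameter values are illustrative, not from the paper's medical data set.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(0)
# Simulated over-dispersed counts: a Poisson(2.5) process in which 30% of
# subjects are structural zeros (e.g. never at risk).
n = 2000
structural_zero = rng.random(n) < 0.3
counts = np.where(structural_zero, 0, rng.poisson(2.5, size=n))

def zip_negloglik(theta, y):
    """Negative log-likelihood of the zero-inflated Poisson model."""
    pi = 1 / (1 + np.exp(-theta[0]))   # logit-parameterised inflation prob.
    lam = np.exp(theta[1])             # log-parameterised Poisson mean
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))      # P(Y = 0)
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)    # P(Y = k), k >= 1
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,),
               method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(f"estimated zero-inflation = {pi_hat:.3f}, Poisson mean = {lam_hat:.3f}")
```

A ZIIT, ZIPIG, or ZISA fit would follow the same pattern, with the Poisson log-pmf replaced by the corresponding cubic-variance-function distribution.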
Procedia PDF Downloads 541
24537 Monotone Rational Trigonometric Interpolation
Authors: Uzma Bashir, Jamaludin Md. Ali
Abstract:
This study is concerned with the visualization of monotone data using a piecewise C1 rational trigonometric interpolating scheme. Four positive shape parameters are incorporated in the structure of the rational trigonometric spline. Conditions on two of these parameters are derived to attain monotonicity for monotone data, and the other two are left free. Figures are used throughout to show that the proposed scheme produces graphically smooth monotone curves.
Keywords: trigonometric splines, monotone data, shape preserving, C1 monotone interpolant
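The paper's rational trigonometric spline is not reproduced here, but the shape-preservation property it targets can be illustrated with a standard C1 monotone interpolant, SciPy's PCHIP, on illustrative monotone data: the interpolant passes through the knots and introduces no overshoot between them.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Illustrative monotone sample data (not from the paper).
x = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 8.0])
y = np.array([0.0, 0.2, 1.5, 1.6, 1.7, 4.0])

# PCHIP is a standard C1 shape-preserving interpolant: if the data are
# monotone, the interpolant is monotone too.
f = PchipInterpolator(x, y)

xs = np.linspace(x[0], x[-1], 1000)
ys = f(xs)
# Monotonicity check on a dense grid (tolerance for floating-point noise).
assert np.all(np.diff(ys) >= -1e-12)
```

A cubic spline without shape constraints would typically overshoot between the flat-ish knots at x = 2, 3, and 5; schemes like the paper's use free shape parameters to control that behaviour while keeping C1 smoothness.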
Procedia PDF Downloads 269
24536 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels
Authors: Joshua Buli, David Pietrowski, Samuel Britton
Abstract:
Processing SAR data usually requires constraints on the extent in the Fourier domain, as well as approximations and interpolations onto a planar surface, to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground-plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data are then range compressed, and lastly the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values of each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization
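The per-point reflectivity calculation described above can be sketched in a toy form: one point scatterer, idealised range-compressed profiles, and a small voxel line instead of a full 3D grid. The carrier frequency, collection geometry, and range axis below are all assumptions for illustration, not a real flight configuration; on a GPU, the outer per-voxel loop is what gets spread across threads.

```python
import numpy as np

c = 3e8
fc = 10e9                          # assumed X-band carrier frequency
wavelength = c / fc
n_pulses = 64
# Platform positions along a straight track (hypothetical geometry, metres).
platform = np.stack([np.linspace(-50, 50, n_pulses),
                     np.full(n_pulses, -500.0),
                     np.full(n_pulses, 300.0)], axis=1)
target = np.array([0.0, 0.0, 0.0])  # single point scatterer at the origin

# Range axis of the idealised range-compressed profiles.
r_axis = np.linspace(400.0, 800.0, 2048)
dr = r_axis[1] - r_axis[0]

# Simulate ideal compressed pulses: one phase-coded peak at the target range.
ranges_to_target = np.linalg.norm(platform - target, axis=1)
pulses = np.zeros((n_pulses, r_axis.size), dtype=complex)
for p, r in enumerate(ranges_to_target):
    idx = int(round((r - r_axis[0]) / dr))
    pulses[p, idx] = np.exp(-1j * 4 * np.pi * r / wavelength)

# Backproject onto a line of voxels: each voxel independently sums the
# nearest range sample of every pulse, phase-corrected for two-way travel.
xs = np.linspace(-2, 2, 21)
voxels = np.array([[x, 0.0, 0.0] for x in xs])
image = np.zeros(len(voxels), dtype=complex)
for v, pos in enumerate(voxels):
    r = np.linalg.norm(platform - pos, axis=1)          # per-pulse range
    idx = np.clip(np.round((r - r_axis[0]) / dr).astype(int),
                  0, r_axis.size - 1)
    samples = pulses[np.arange(n_pulses), idx]
    image[v] = np.sum(samples * np.exp(1j * 4 * np.pi * r / wavelength))

peak_voxel = voxels[np.argmax(np.abs(image))]
print("peak at", peak_voxel)   # coherent sum peaks at the scatterer
```

Only at the true scatterer position do the phase corrections cancel the pulse phases for every pulse, so the coherent sum peaks there; elsewhere the contributions decohere. A production implementation would additionally interpolate between range bins rather than take the nearest sample.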
Procedia PDF Downloads 82
24535 Study of the Benefit Analysis Using Vertical Farming Method in Urban Renewal within the Older City of Taichung
Authors: Hsu Kuo-Wei, Tan Roon Fang, Chao Jen-chih
Abstract:
Cities face environmental challenges, including over-urbanization, air and water quality issues, lack of green space, excess heat capture, polluted stormwater runoff, and lack of ecological biodiversity. Vertical farming holds the potential to address these issues by enabling more food to be produced with fewer resources and less space. Most existing research concerns the agricultural technology industry, from plant factories to vertical greening, both of which involve high costs and high technology. Related research has developed a sustainable model for the construction and operation of vertical farms in urban housing, aiming to revolutionize daily food production and urban development; that research, however, focused on quantitative analysis. This study drew on related research to identify the key variables of the benefits of vertical farming. In the second stage, the Fuzzy Delphi Method was used to obtain the critical factors of the benefits of using vertical farming in urban renewal by interviewing the foregoing experts. The Analytic Hierarchy Process was then applied to find the importance degree of each criterion as a measurable index of the vertical farming method in urban renewal within the older city of Taichung.
Keywords: urban renewal, vertical farming, urban agriculture, benefit analysis, the older city of Taichung
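The Analytic Hierarchy Process step named above can be sketched as follows: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the expert judgments. The matrix, the four benefit criteria it stands for, and all judgment values are hypothetical, not the study's survey results.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four benefit criteria
# (e.g. food supply, green space, heat mitigation, stormwater reuse);
# entries use Saaty's 1-9 scale and are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# AHP weights: principal eigenvector of A, normalised to sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.10 is the usual acceptance threshold).
lambda_max = eigvals.real[k]
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)   # consistency index
ri = 0.90                         # Saaty's random index for n = 4
cr = ci / ri
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```

In a Fuzzy Delphi + AHP workflow, the screened critical factors from the Delphi round would form the rows of this matrix, and each expert's comparisons would be aggregated before the eigenvector is computed.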
Procedia PDF Downloads 465
24534 Integration of Knowledge and Metadata for Complex Data Warehouses and Big Data
Authors: Jean Christian Ralaivao, Fabrice Razafindraibe, Hasina Rakotonirainy
Abstract:
This document resumes work carried out in the field of complex data warehouses (DW) on the management and formalization of knowledge and metadata. It offers a methodological approach for integrating the two concepts, knowledge and metadata, within the framework of a complex DW architecture. The work considers the use of knowledge representation by description logics and the extension of the Common Warehouse Metamodel (CWM) specifications, which should yield gains in the performance of a complex DW. Three essential outcomes are expected: the representation of knowledge in description logics, the declination of this knowledge into consistent UML diagrams while respecting or extending the CWM specifications, and the use of XML as a pivot. The field of application is large but will be adapted to systems with heterogeneous, complex, and unstructured content that, moreover, require extensive (re)use of knowledge, such as medical data warehouses.
Keywords: data warehouse, description logics, integration, knowledge, metadata
Procedia PDF Downloads 136
24533 Data Analytics in Energy Management
Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair
Abstract:
With increasing energy costs and their impact on business, sustainability has evolved from a social expectation into an economic imperative, and finding methods to reduce cost has become a critical directive for industry leaders. Effective energy management is the only way to cut costs, yet it has been a challenge because it requires a change in old habits and in legacy systems followed for decades. Industries today capture and store exorbitant volumes of energy and operational data but are unable to convert these structured and unstructured data sets into meaningful business intelligence. For quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets but is also instrumental in the transformation from old approaches to energy management to new ones, which in turn assists effective decision-making for implementation. Organizations need an established corporate strategy for reducing operational costs through visibility and optimization of energy usage, and energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is now used extensively in scenarios such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, and improving customer and device data insights. It also highlights how analytics helps transform insights obtained from energy data into sustainable solutions, drawing on data from an array of segments such as the retail, transportation, and water sectors.
Keywords: energy analytics, energy management, operational data, business intelligence, optimization
Procedia PDF Downloads 363
24532 Efficient Frequent Itemset Mining Methods over Real-Time Spatial Big Data
Authors: Hamdi Sana, Emna Bouazizi, Sami Faiz
Abstract:
In recent years, there has been a huge increase in the use of spatio-temporal applications in which data and queries are continuously moving. As a result, the need to process real-time spatio-temporal data is clear, and real-time stream data management has become a hot topic. The sliding window model and frequent itemset mining over dynamic data are among the most important problems in data mining. The sliding window model is widely used for frequent itemset mining over data streams because of its emphasis on recent data and its bounded memory requirement. Existing methods use the traditional transaction-based sliding window model, in which the window size is a fixed number of transactions. This model assumes that all transactions arrive at a constant rate, which is not suited to real-time applications, and its use in such applications endangers their performance. Based on these observations, this paper relaxes the notion of window size and proposes a timestamp-based sliding window model. In our proposed frequent itemset mining algorithm, support conditions are used to differentiate frequent and infrequent patterns, and a tree is developed to incrementally maintain the essential information. We evaluate our contribution, and the preliminary results are quite promising.
Keywords: real-time spatial big data, frequent itemset, transaction-based sliding window model, timestamp-based sliding window model, weighted frequent patterns, tree, stream query
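The contrast between the two window models can be sketched as follows: a timestamp-based window keeps every transaction from the last `span` time units, however many arrive, instead of a fixed transaction count. This is a minimal illustration of the windowing idea only; the paper's support conditions and incremental tree are not reproduced, and the brute-force subset enumeration here is exponential in transaction size.

```python
from collections import Counter, deque
from itertools import combinations

class TimestampSlidingWindow:
    """Minimal sketch of a timestamp-based sliding window itemset counter."""

    def __init__(self, span, min_support):
        self.span = span                 # window length in time units
        self.min_support = min_support
        self.window = deque()            # (timestamp, frozenset) pairs
        self.counts = Counter()          # itemset -> support in window

    def _itemsets(self, tx):
        # Brute-force enumeration of all non-empty subsets (sketch only).
        for r in range(1, len(tx) + 1):
            yield from combinations(sorted(tx), r)

    def add(self, timestamp, items):
        tx = frozenset(items)
        self.window.append((timestamp, tx))
        for s in self._itemsets(tx):
            self.counts[s] += 1
        # Evict transactions that fell out of the time window.
        while self.window and self.window[0][0] <= timestamp - self.span:
            _, old = self.window.popleft()
            for s in self._itemsets(old):
                self.counts[s] -= 1
                if self.counts[s] == 0:
                    del self.counts[s]

    def frequent(self):
        return {s: c for s, c in self.counts.items() if c >= self.min_support}

w = TimestampSlidingWindow(span=10, min_support=2)
w.add(1, {"a", "b"})
w.add(3, {"a", "c"})
w.add(5, {"a", "b"})
print(w.frequent())     # ('a',) appears 3 times, ('b',) and ('a','b') twice
w.add(14, {"c"})        # evicts the transactions at t=1 and t=3
```

Under a transaction-based window of size 3, the burst at t=1..5 and the lull before t=14 would be treated identically; the timestamp-based model instead adapts the window population to the arrival rate.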
Procedia PDF Downloads 160
24531 The Extent of Big Data Analysis by the External Auditors
Authors: Iyad Ismail, Fathilatul Abdul Hamid
Abstract:
This research investigated the extent of big data analysis by external auditors. The paper adopts grounded theory as a framework for conducting a series of semi-structured interviews with eighteen external auditors. The findings cover the extent of big data availability and of big data analysis usage by external auditors in the Gaza Strip, Palestine. The study's outcomes lead to a series of auditing procedures intended to improve external auditing techniques and, in turn, the quality of the audit process. The research is also valuable for auditing firms: it gives insight into their mechanisms and identifies the most important strategies for achieving competitive audit quality. The results are intended to guide academic and professional auditing institutions in developing big data analysis techniques for external auditors. The paper provides appropriate information for the decision-making process and a source of future information that affects technological auditing.
Keywords: big data analysis, external auditors, audit reliance, internal audit function
Procedia PDF Downloads 68
24530 A Model of Teacher Leadership in History Instruction
Authors: Poramatdha Chutimant
Abstract:
The objective of the research was to propose a model of teacher leadership in history instruction for practical use. Everett M. Rogers' Diffusion of Innovations theory is applied as the theoretical framework. A qualitative method is used in the study, with an interview protocol as the instrument for collecting primary data from best-practice teachers recognized by the Office of the National Education Commission (ONEC). Open-ended questions are used in the interview protocol in order to gather varied data. Information on the international context of history instruction serves as the secondary data supporting the summarizing process (content analysis). A dendrogram is the key to interpreting and synthesizing the primary data, with the secondary data supporting explanation and elaboration. In-depth interviews are used to collect information from seven experts in the educational field. The focal point, finally, is to validate the draft model in terms of future utilization.
Keywords: history study, nationalism, patriotism, responsible citizenship, teacher leadership
Procedia PDF Downloads 278