Search results for: ignorable missing data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24379

24049 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories

Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan

Abstract:

In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, recent work in deep learning has shown that this approach is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use “fading.” We find that this approach is superior to a traditional feed-forward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Last, we find that a combined FFN+CNN is the best-performing network, with an error rate of 39%.
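
As a minimal sketch of the kind of input representation described above (assuming 11-frame trajectories normalized to half-court coordinates; the grid size, channel layout, and fading factor are illustrative assumptions, not the authors' exact setup), the offensive players, defensive players, and ball can each be rasterized into a separate channel, with older frames drawn at lower intensity ("fading"):

```python
import numpy as np

def trajectories_to_image(offense, defense, ball, grid=(47, 50), fade=0.9):
    """Rasterize multiagent trajectories into a 3-channel image.

    offense, defense: arrays of shape (T, n_players, 2) with court
    coordinates already scaled to [0, 1); ball: array of shape (T, 2).
    Older frames are drawn with exponentially decaying intensity so the
    temporal order of the trajectory is encoded in pixel brightness.
    """
    channels = np.zeros((3, *grid), dtype=np.float32)
    T = offense.shape[0]
    for t in range(T):
        weight = fade ** (T - 1 - t)          # most recent frame is brightest
        for ch, pts in enumerate((offense[t], defense[t], ball[t][None, :])):
            rows = (pts[:, 0] * grid[0]).astype(int).clip(0, grid[0] - 1)
            cols = (pts[:, 1] * grid[1]).astype(int).clip(0, grid[1] - 1)
            channels[ch, rows, cols] = np.maximum(channels[ch, rows, cols], weight)
    return channels

# Toy usage: 11 frames, 5 offensive and 5 defensive players plus the ball.
rng = np.random.default_rng(0)
img = trajectories_to_image(rng.random((11, 5, 2)), rng.random((11, 5, 2)), rng.random((11, 2)))
print(img.shape)  # (3, 47, 50) -> ready to feed into a CNN
```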

Keywords: basketball, computer vision, image processing, convolutional neural network

Procedia PDF Downloads 124
24048 Framework for Incorporating Environmental Performance in Network-Level Pavement Maintenance Program

Authors: Jessica Achebe, Susan Tighe

Abstract:

Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can deliver added benefits, including improved life-cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to an optimal allocation of resources, and reduced road user costs. This is the essence of incorporating environmental sustainability into pavement management. The performance measurement approach has become one of the most valuable tools in Pavement Management Systems (PMSs) for accounting for different criteria in the decision-making process. However, measuring the environmental performance of a road network is still rare in practice, and explicit agency-wide environmental sustainability or sustainable maintenance specifications are largely missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability into pavement management systems for network-level maintenance programming. As a first step towards this goal, this paper reviews previous studies that employed environmental performance measures and examines the suitability of environmental performance indicators for evaluating the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, the paper also reports pavement managers' motivations for and barriers to making more sustainable decisions, and the data needed to support network-level environmental sustainability. The trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for network-level sustainable maintenance and rehabilitation programming.

Keywords: pavement management, environment sustainability, network-level evaluation, performance measures

Procedia PDF Downloads 276
24047 Emotional Labour and Employee Performance Appraisal: The Missing Link in Some Hotels in South East Nigeria

Authors: Polycarp Igbojekwe

Abstract:

The main objective of this study was to determine whether emotional labour has become a criterion in performance appraisal, job description, selection, and training schemes in the hotel industry in Nigeria. Our main assumption was that the majority of hotel organizations have not built emotional labour into their human resources management schemes. Data were gathered using structured Likert-format questionnaires and interviews. The focus group comprised managers of the selected hotels. Analyses revealed that the majority of the hotels have not built emotional labour into their human resources schemes, particularly the 1, 2, and 3-star hotels. It was observed that service employees of 1, 2, and 3-star hotels have not been adequately trained to perform emotional labour, a critical factor in quality service delivery. Managers of 1, 2, and 3-star hotels have not given serious thought to emotional labour as a critical factor in quality service delivery. The study revealed that the suitability of an individual’s characteristics is not being considered as a criterion for selection and performance appraisal of service employees. The implication of this is that person-job-fit is not seriously considered. It was observed that there has been a disconnect between required emotional competency and its recognition, evaluation, and training. Based on the findings of this study, it is concluded that the selection, training, job description and performance appraisal instruments in use in hotels in Nigeria are inadequate. Human resource implications of the findings are presented. It is recommended that hotel organizations re-design and plan the emotional content and context of their human resources practices to reflect the emotional demands of front-line jobs in the hotel industry and the crucial role emotional labour plays during service encounters.

Keywords: emotional labour, employee selection, job description, performance appraisal, person-job-fit, employee compensation

Procedia PDF Downloads 173
24046 The Educational, Social and Cultural Significance of Boys Choirs

Authors: Johannes Van Der Sandt

Abstract:

Worldwide, there are many boys choirs, but the Drakensberg Boys Choir is one of only a few of its kind: selected from a residential boys choir school that uses choral music as a significant vehicle for holistic education. With ongoing debates as to whether single-gender education is advantageous for boys, and research on the 'missing males' problem in choirs, this presentation's purpose is to explore the perceived benefits and values for boys singing in the world-renowned Drakensberg Boys Choir, and to establish educational grounds for the existence of boys choirs. Semi-structured questionnaires were given to choristers, known as Drakies, to ascertain their perceptions of their choir membership. Their experiences are noted in terms of musical, social and behavioral skills gained. The main emerging themes in each category are discussed in order to support the assumption that boys choirs exist not only to entertain, nor are their goals purely musical or pedagogical, but that they can be regarded as unique cultural artifacts that aid boys' development into well-equipped and well-rounded young men.

Keywords: boys, choirs, choral, education, skills, values

Procedia PDF Downloads 178
24045 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, can increase data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance through better insights delivered by better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization. This can lead to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts mentioned above are illustrated by AEKIDEN's experience feedback. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 94
24044 Design and Implementation of Testable Reversible Sequential Circuits with Optimized Power

Authors: B. Manikandan, A. Vijayaprabhu

Abstract:

Conservative reversible gates are used to design reversible sequential circuits. The sequential circuits considered are flip-flops and latches. The conservative logic gates used are the Feynman, Toffoli, and Fredkin gates. We present the design of two-vector testable sequential circuits based on conservative logic gates. All sequential circuits based on conservative logic gates can be tested for classical unidirectional stuck-at faults using only two test vectors: all 1s and all 0s. Designs of two-vector testable latches, master-slave flip-flops, and double edge triggered (DET) flip-flops are presented. We also show the application of the proposed approach toward 100% fault coverage for single missing/additional cell defects in the quantum-dot cellular automata (QCA) layout of the Fredkin gate. The conservative logic gates are also compared in terms of complexity, speed, and area.
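
A small simulation can illustrate the conservative-logic property the abstract relies on (a hypothetical sketch, not the authors' circuit designs): a Fredkin gate preserves the number of 1s between input and output, so the all-0s and all-1s vectors expose any single output stuck-at fault simply by counting ones at the outputs.

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: conservative and reversible."""
    return (c, b, a) if c == 1 else (c, a, b)

def with_stuck_at(gate, line, value):
    """Model a stuck-at fault on one output line of the gate."""
    def faulty(*inputs):
        out = list(gate(*inputs))
        out[line] = value
        return tuple(out)
    return faulty

def detects_fault(gate_under_test):
    """Apply only the all-0s and all-1s test vectors and compare the number
    of 1s at the outputs with the number of 1s at the inputs; any mismatch
    flags the fault, since conservative logic preserves the number of ones."""
    for vector in [(0, 0, 0), (1, 1, 1)]:
        if sum(gate_under_test(*vector)) != sum(vector):
            return True
    return False

# Every single stuck-at-0 / stuck-at-1 fault on an output line is caught
# by just the two vectors, as claimed for conservative gates.
for line in range(3):
    for value in (0, 1):
        assert detects_fault(with_stuck_at(fredkin, line, value))
print("all single output stuck-at faults detected with two vectors")
```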

Keywords: DET, QCA, reversible logic gates, POS, SOP, latches, flip flops

Procedia PDF Downloads 276
24043 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which is increasing exponentially, with traditional technologies. Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis on different sizes of actual data. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package based on bigmemory. The results showed that our RHadoop was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the data size grows.
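
The RHadoop implementation itself is written in R, but the underlying decomposition can be sketched in a few lines (a hedged illustration of the idea, not the authors' code): each map task computes the partial sums XᵀX and Xᵀy on its own block of rows, and the reduce step adds the blocks and solves the normal equations, which is why adding data nodes and map tasks speeds up the regression.

```python
import numpy as np

def map_partial_sums(block_X, block_y):
    """'Map' step: each data node summarizes its own block of rows."""
    return block_X.T @ block_X, block_X.T @ block_y

def reduce_and_solve(partials):
    """'Reduce' step: add the per-block summaries and solve X'X b = X'y."""
    XtX = sum(p[0] for p in partials)
    Xty = sum(p[1] for p in partials)
    return np.linalg.solve(XtX, Xty)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 3))])
beta_true = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=10_000)

# Split the data across four "nodes" and combine their partial sums.
blocks = zip(np.array_split(X, 4), np.array_split(y, 4))
beta_hat = reduce_and_solve([map_partial_sums(bx, by) for bx, by in blocks])
print(np.round(beta_hat, 3))  # close to [2, -1, 0.5, 3]
```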

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 407
24042 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively mitigate memorization overfitting in the meta-learning MAML algorithm.
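
A minimal sketch of the core idea (hypothetical code; the label-permutation scheme is only one assumed way of realizing "one feature mapped to multiple labels"): each generated task reassigns the class labels of the same examples under a different random permutation, so no single feature-to-label mapping works across tasks and the meta-learner cannot simply memorize it.

```python
import numpy as np

def make_mutex_tasks(features, labels, n_tasks, seed=0):
    """Generate mutually exclusive tasks by mapping the same features to
    different labels in each task via a random permutation of the classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(classes)               # class c -> perm[c]
        remap = {c: p for c, p in zip(classes, perm)}
        shuffled = np.array([remap[c] for c in labels])
        tasks.append((features, shuffled))
    return tasks

# Toy usage: 6 examples, 3 classes; the same example receives different
# labels across tasks, making the tasks mutually exclusive by construction.
X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 2, 0, 1, 2])
for i, (_, y_task) in enumerate(make_mutex_tasks(X, y, n_tasks=3)):
    print(f"task {i}: {y_task}")
```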

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 64
24041 Development of Industry Sector Specific Factory Standards

Authors: Peter Burggräf, Moritz Krunke, Hanno Voet

Abstract:

Due to shortening product and technology lifecycles, many companies use standardization approaches in product development and factory planning to reduce costs and time to market. Unlike large companies, where modular systems are already widely used, small and medium-sized companies often show a much lower degree of standardization due to lower scale effects and missing capacities for the development of these standards. To overcome these challenges, the development of industry sector specific standards in cooperations or by third parties is an interesting approach. This paper analyzes which industry sectors that are mainly dominated by small or medium-sized companies might be especially interesting for the development of factory standards, using the example of the German industry. For this, a key performance indicator based approach was developed, which is presented in detail together with its specific results for the German industry structure.

Keywords: factory planning, factory standards, industry sector specific standardization, production planning

Procedia PDF Downloads 369
24040 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks. One of the important tasks in data aggregation is the positioning of the aggregator points. A lot of work has been done on data aggregation, but the efficient positioning of the aggregator points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network and propose an algorithm to select the aggregator positions for a scenario where aggregator nodes are more powerful than sensor nodes.
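
As an illustrative sketch of the placement problem (a generic 1-median-style heuristic under assumed coordinates, not the authors' algorithm): among the candidate sites for the more powerful aggregator node, pick the one that minimizes the total distance over which sensor data must travel.

```python
import numpy as np

def best_aggregator_position(sensors, candidates):
    """Pick the candidate aggregator location minimizing total sensor distance.

    sensors: (n, 2) array of sensor coordinates.
    candidates: (m, 2) array of feasible aggregator positions.
    Returns the index of the best candidate and the total distance."""
    # Pairwise Euclidean distances between candidates and sensors.
    dists = np.linalg.norm(candidates[:, None, :] - sensors[None, :, :], axis=2)
    totals = dists.sum(axis=1)
    best = int(np.argmin(totals))
    return best, float(totals[best])

rng = np.random.default_rng(2)
sensors = rng.uniform(0, 100, size=(50, 2))       # 50 sensor nodes in a 100x100 field
candidates = rng.uniform(0, 100, size=(5, 2))     # 5 possible sites for the powerful node
idx, cost = best_aggregator_position(sensors, candidates)
print(f"place aggregator at candidate {idx}, total distance {cost:.1f}")
```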

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 126
24039 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches that are available to model data collected with reference to location in space, from classical spatial econometrics to recent developments for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures for different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for new possible directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
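
One simple way to make the spatial dependence concrete (a hedged sketch of an SLX-type Poisson specification on simulated data, not the Bayesian hierarchical models reviewed in the paper): include the spatially lagged covariates W·x, built from a row-standardized neighbourhood matrix W, as extra regressors in a Poisson GLM.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 100

# Row-standardized spatial weights for units on a line (neighbours = adjacent units).
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W = W / W.sum(axis=1, keepdims=True)

x = rng.normal(size=n)
eta = 0.5 + 0.8 * x + 0.6 * (W @ x)          # direct effect plus spillover from neighbours
y = rng.poisson(np.exp(eta))

# SLX-type Poisson regression: counts regressed on x and its spatial lag W x.
X = sm.add_constant(np.column_stack([x, W @ x]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)   # roughly [0.5, 0.8, 0.6]
```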

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 556
24038 A NoSQL-Based Approach for Real-Time Management of Robotics Data

Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir

Abstract:

This paper deals with the continual growth of data and the new data management solutions that have emerged to handle it: NoSQL databases. They have spread across several areas, such as personalization, profile management, real-time big data, content management, catalogues, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. Nowadays, the use of these database management systems is increasing. These systems store data very well, and with the trend of big data, new storage challenges demand new structures and methods for managing enterprise data. New intelligent machines, for example in the e-learning sector, thrive on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests. The implementation of NoSQL for robotics wrestles all the data robots acquire into usable form, because with ordinary approaches to robotics we face severe limits in managing and finding the exact information in real time. Our proposed approach is demonstrated by experimental studies and a running example used as a use case.
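
A minimal sketch of what storing and querying robot telemetry in a document store can look like (hypothetical collection and field names, assuming a local MongoDB instance is running; not the authors' schema):

```python
from datetime import datetime, timezone
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
db = client["robotics"]

# Schema-less documents: each robot can report whatever sensors it has.
db.telemetry.insert_one({
    "robot_id": "arm-01",
    "ts": datetime.now(timezone.utc),
    "joint_angles": [0.12, 1.57, -0.33],
    "gripper_force_n": 4.2,
})

# Index on (robot_id, ts) so "latest reading" queries stay fast in real time.
db.telemetry.create_index([("robot_id", 1), ("ts", DESCENDING)])

latest = db.telemetry.find_one({"robot_id": "arm-01"}, sort=[("ts", DESCENDING)])
print(latest)
```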

Keywords: NoSQL databases, database management systems, robotics, big data

Procedia PDF Downloads 320
24037 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study

Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar

Abstract:

Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. Whilst the intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue surrounds symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients for weight management, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, with the main criteria being class 3 obesity or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients undergoing conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period will be identified using the Trust’s health records database. Pre-operative patient records will be used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information will be compared with NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of the initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had BMI data available or data allowing BMI to be estimated. Of those 133, 40 patients (30%) had a BMI greater than 30kg/m², and 7 (5.3%) had BMI >35kg/m². One patient (0.75%) had a BMI >40 and would therefore be automatically eligible according to NICE guidelines. 4 further patients had significant co-morbidities, such as hypertension and osteoarthritis, which would likely be improved by weight management surgery and therefore also indicated eligibility for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, and the average age was 60.4 years. Conclusions: Based on this Trust’s experience, around 4% of patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, in class 2/3 obese patients, there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefitted more from undergoing bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and GORD. Use of patients' written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis to determine outcomes following fundoplication in all patients, assessing for the incidence of recurrent disease, will be undertaken to strengthen the conclusions.

Keywords: bariatric surgery, GORD, Nissen fundoplication, nice guidelines

Procedia PDF Downloads 37
24036 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multiple sources is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of multi-source data analysis. Ensembles are a versatile machine learning approach in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of a multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
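
The FOMOCE algorithm itself is not reproduced here, but the general clustering-ensemble mechanism it builds on can be sketched (a single-objective illustration on assumed toy data, not the proposed fuzzy multi-objective method): run a base clusterer on each source, accumulate a co-association matrix over the base clusterings, and derive the consensus partition from it.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Two "sources": the same 150 objects described by two different feature views.
X_full, y_true = make_blobs(n_samples=150, centers=3, n_features=4, random_state=4)
sources = [X_full[:, :2], X_full[:, 2:]]

# Base clusterings, one per source (plus restarts), accumulated into a co-association
# matrix: entry (i, j) = fraction of base clusterings that put i and j together.
n = X_full.shape[0]
coassoc = np.zeros((n, n))
runs = 0
for X_src in sources:
    for seed in range(5):
        labels = KMeans(n_clusters=3, n_init=10, random_state=seed).fit_predict(X_src)
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
        runs += 1
coassoc /= runs

# Consensus partition: hierarchical clustering on the co-association distances.
dist = 1.0 - coassoc
np.fill_diagonal(dist, 0.0)
consensus = fcluster(linkage(squareform(dist), method="average"), t=3, criterion="maxclust")
print(np.bincount(consensus))   # sizes of the consensus clusters
```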

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 143
24035 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data

Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim

Abstract:

Smart-card data are expected to provide information on activity patterns as an alternative to conventional person trip surveys. The focus of this study is to propose a method that uses person trip surveys to supplement smart-card data, which do not contain the purpose of each trip. We selected only the features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, and trained the model on the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution with good performance. The validation results showed that the proposed method efficiently estimated the trip purpose. GIS data around the stations and the duration of stay at the destination were significant features in modeling trip purpose.
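
A hedged sketch of the modeling step (hypothetical feature and label names on synthetic data; the real study trains on survey-labelled trips with smart-card and GIS features): XGBoost's regularized gradient-boosted trees are fit on the labelled survey trips and then used to predict the purpose of unlabelled smart-card trips.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(5)

# Hypothetical features per trip: departure hour, trip duration (min),
# dwell time at destination (min), and a land-use code near the alighting station.
X_survey = np.column_stack([
    rng.integers(5, 23, 2000),
    rng.uniform(5, 90, 2000),
    rng.uniform(10, 600, 2000),
    rng.integers(0, 4, 2000),
])
y_survey = rng.integers(0, 3, 2000)     # 0 = work, 1 = school, 2 = leisure (labels from surveys)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      objective="multi:softprob", eval_metric="mlogloss")
model.fit(X_survey, y_survey)

# Smart-card trips have the same features but no recorded purpose; predict it.
X_smartcard = X_survey[:5] + rng.normal(0, 1, (5, 4))
print(model.predict(X_smartcard))
```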

Keywords: activity pattern, data fusion, smart-card, XGboost

Procedia PDF Downloads 214
24034 Disclosure Extension of Oil and Gas Reserve Quantum

Authors: Ali Alsawayeh, Ibrahim Eldanfour

Abstract:

This paper examines the extent of disclosure of oil and gas reserve quantum in the annual reports of international oil and gas exploration and production companies, particularly companies in untested international markets such as Canada, the UK and the US, and seeks to determine the underlying factors that affect the level of disclosure of oil reserve quantum. The study is concerned with the usefulness of disclosure of oil and gas reserve quantum to investors and other users. Given the primacy of the annual report (10-K) as a source of supplemental reserves data about the company and as the channel through which companies disseminate information about their performance, the annual reports for one year (2009) were the central focus of the study. This comparative study seeks to establish whether differences exist between the sample companies, based on new disclosure requirements by the Securities and Exchange Commission (SEC) in respect of reserves classification and definition. The extent of reserve disclosure is reported and compared among the selected companies. Statistical analysis is performed to determine whether any differences exist in the extent of reserve disclosure under the determinant variables. This study shows that some factors affect the extent of disclosure of reserve quantum in the above-mentioned countries, namely: company size, leverage and quality of auditor. Companies that provide reserve quantum in detail appear to be larger. The findings also show that the level of leverage affects companies’ reserve quantum disclosure. Indeed, companies that provide detailed reserve quantum disclosure tend to employ a ‘high-quality auditor’. In addition, the study found a further significant independent variable, Profit Sharing Contracts (PSC). This factor could explain variations in the level of disclosure of oil reserve quantum between contractors and host governments. The implementation of the SEC oil and gas reporting requirements does not enhance companies’ valuation because the new rules are based only on past and present reserves information (proven reserves); hence, the future valuation of oil and gas companies is missing for the market.

Keywords: comparison, company characteristics, disclosure, reserve quantum, regulation

Procedia PDF Downloads 379
24033 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to an exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively mitigate memorization overfitting in the meta-learning MAML algorithm.

Keywords: mutex task generation, data augmentation, meta-learning, text classification

Procedia PDF Downloads 97
24032 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming

Authors: Milind Chaudhari, Suhail Balasinor

Abstract:

Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. Because of this, there is a real risk that worldwide food production will drop by 40% in the next two decades. Vertical farming can help aid food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations are controlling the indoor environment by leveraging big data to enhance food quantity and quality.

Keywords: big data, IoT, vertical farming, indoor farming

Procedia PDF Downloads 143
24031 Hydatid Disease of the Liver at a Tertiary Care Center: A 7-Year Experience

Authors: Jibran Abbasy, Rizwan Sultan, Ammar Humayun, Tabish Chawla

Abstract:

Background: Hydatid disease, caused by Echinococcus granulosus, affects the liver in 70-90% of cases. Dogs are the definitive host, while humans are accidental hosts. The modalities used for its treatment are especially important for our population, as the disease is endemic in many Asian countries. The aim of the study was to perform an audit of the various modalities used for the treatment of hydatid disease of the liver and the response to each modality at a tertiary care center in Pakistan. Materials and Methods: A retrospective audit of patients diagnosed with and treated for hydatid disease of the liver at Aga Khan University Hospital from 1st January 2007 to 31st December 2014 was completed. All patients aged 16 and above were included. Patients who had extra-hepatic disease or missing records were excluded. Outcome measures were morbidity, mortality and recurrence of the disease. Results: During the study period, 56 patients were treated for isolated hepatic hydatid disease and were included. The mean age was 39 years, with 48% females and 52% males. The most common presenting complaint was abdominal pain, seen in 53% of patients (n=41). The duration of symptoms was less than 6 months in 74% (n=38). The right lobe was most commonly involved, in 69% (n=38). The most common treatment modality was surgery (34 patients), followed by PAIR (14 patients), while 8 patients were treated medically. At a median follow-up of 34 months, recurrence was seen in 2 patients treated with PAIR, while no patient treated with surgery had recurrence at a median follow-up of 20 months. No morbidity or mortality was observed with PAIR, whereas with surgery 5 patients had morbidity and 1 patient died. Conclusion: Our data are comparable to other studies in terms of morbidity, mortality, and recurrence, and follow-up was adequate. In our study, both PAIR and surgery are effective, with few complications and a low recurrence rate, although surgery is still the gold standard in terms of recurrence.

Keywords: echinococcous granulosus, puncture aspiration irrigation reaspiration (PAIR), surgery, hydatid disease

Procedia PDF Downloads 237
24030 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE

Authors: Oualid Walid Ben Ali

Abstract:

Big Data has become one of the buzzwords of today. The recent explosion of digital data has led organizations, both private and public, into a new era of more efficient decision making. At some point, businesses decided to use the concept to learn what makes their clients tick, with phrases like ‘sales funnel’ analysis, ‘actionable insights’, and ‘positive business impact’. So, it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data doesn’t have to be for business purposes only; it can also be used to assist law enforcement, to improve policing, or for road safety. This paper briefly presents how Big Data has been used in policing in order to improve decision making in the daily operations of the police. As an example, we present a big-data-driven system which is used to accurately dispatch patrol cars in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.
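
A minimal sketch of the dispatch rule described (assumed coordinates and haversine great-circle distance; a production GIS system would presumably use road-network travel times instead): given an incident location, allocate the nearest available patrol car.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def dispatch(incident, cars):
    """Return the id of the nearest available car to the incident location."""
    available = [c for c in cars if c["available"]]
    return min(available, key=lambda c: haversine_km(*incident, c["lat"], c["lon"]))["id"]

cars = [
    {"id": "P-101", "lat": 24.4539, "lon": 54.3773, "available": True},
    {"id": "P-102", "lat": 24.4667, "lon": 54.3667, "available": False},
    {"id": "P-103", "lat": 24.4200, "lon": 54.4400, "available": True},
]
incident = (24.4512, 54.3969)   # illustrative coordinates in Abu Dhabi
print(dispatch(incident, cars))
```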

Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE

Procedia PDF Downloads 457
24029 Missing Narratives and Their Potential Impact on Resettlement Strategies

Authors: Natina Roberts, Hanhee Lee

Abstract:

The existing and emerging refugee research reports unfavorable resettlement outcomes in multiple domains. The proposed paper highlights trends in refugee research in which empirical studies investigate the resettlement of former refugees from individual and culturally homogeneous perspectives. The proposed paper then aims to examine the reality of the lived experience of resettlement from family and cross-cultural viewpoints. Proponents of this focus include the United Nations High Commissioner for Refugees (UNHCR). The UNHCR is responsible for leading resettlement efforts for refugees through the durable solutions of repatriation, local integration and resettlement. Life experiences with refugee families and a report of literature findings on former refugee resettlement from various cultural backgrounds, highlighting similarities and differences among ethnic groups, will be discussed. The proposed paper is expected to frame underrepresented refugee perspectives and review policy implications in healthcare, education, and public support systems.

Keywords: refugee, cross-cultural, families, resettlement policy

Procedia PDF Downloads 239
24028 Mining Multicity Urban Data for Sustainable Population Relocation

Authors: Xu Du, Aparna S. Varde

Abstract:

In this research, we propose to conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends as land use change patterns from a variety of data sources. The results are treated as part of urban big data, together with other information such as population change and economic conditions. Multiple data mining methods are deployed on these data to analyze nonlinear relationships between parameters. The results determine the driving forces of population relocation with respect to urban sprawl, urban sustainability, and their related parameters. Experiments so far reveal that the data mining methods discover useful knowledge from the multicity urban data. This work sets the stage for developing a comprehensive urban simulation model that caters to specific questions from targeted users. It contributes towards achieving sustainability as a whole.

Keywords: data mining, environmental modeling, sustainability, urban planning

Procedia PDF Downloads 266
24027 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate linear and nonlinear aspects of some experimental data. The method can be used to obtain an offline reduced model that approximates the experimental data, reproduces and follows the data and the order of the system, and matches the experimental data at some frequency ratios. In this study, the method is compared on different experimental data sets, and the influence of the chosen reduction order on obtaining the best and sufficient matching condition for following the data is investigated for the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter, on nonlinear experimental data is explained further.

Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data

Procedia PDF Downloads 370
24026 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems and provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), with suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common use of Web-based DSS has been to help customers configure products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. Because of this necessity, this paper surveys recent articles on DSS. The literature has been reviewed in depth, and by classifying previous studies according to their focus, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments in the subject, this study aims to analyze the future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 246
24025 An Empirical Study of the Impacts of Big Data on Firm Performance

Authors: Thuan Nguyen

Abstract:

In the present time, data is to a data-driven, knowledge-based economy what oil was to the industrial age hundreds of years ago. Data is everywhere in vast volumes! Big data analytics is expected to help firms not only efficiently improve performance but also completely transform how they run their business. However, employing the emergent technology successfully is not easy, and assessing the role of big data in improving firm performance is even harder. There has been a lack of studies examining the impacts of big data analytics on organizational performance, and this study aimed to fill that gap. The present study suggested using firms’ intellectual capital as a proxy for big data in evaluating its impact on organizational performance. It employed the Value Added Intellectual Coefficient method to measure firm intellectual capital via its three main components: human capital efficiency, structural capital efficiency, and capital employed efficiency, and then used the structural equation modeling technique to model the data and test the models. The financial fundamentals and market data of 100 randomly selected publicly listed firms were collected. The results of the tests showed that only human capital efficiency had a significant positive impact on firm profitability, which highlights the prominent human role in the impact of big data technology.
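
For concreteness, the Value Added Intellectual Coefficient components used as the big-data proxy can be computed as follows (a sketch using the standard VAIC definitions and made-up figures, not the study's data or exact variable definitions):

```python
def vaic(operating_profit, employee_costs, depreciation_amortization, capital_employed):
    """Value Added Intellectual Coefficient and its three components.

    VA  = operating profit + employee costs + depreciation & amortization
    HCE = VA / HC, with human capital HC = employee costs
    SCE = SC / VA, with structural capital SC = VA - HC
    CEE = VA / CE, with CE = capital employed (book value of net assets)
    """
    va = operating_profit + employee_costs + depreciation_amortization
    hce = va / employee_costs
    sce = (va - employee_costs) / va
    cee = va / capital_employed
    return {"HCE": hce, "SCE": sce, "CEE": cee, "VAIC": hce + sce + cee}

# Made-up fundamentals (in millions) for one firm-year.
print(vaic(operating_profit=120.0, employee_costs=80.0,
           depreciation_amortization=30.0, capital_employed=900.0))
```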

Keywords: big data, big data analytics, intellectual capital, organizational performance, value added intellectual coefficient

Procedia PDF Downloads 212
24024 Automated Test Data Generation for Some Types of Algorithms

Authors: Hitesh Tahbildar

Abstract:

The cost of test data generation for a program is computationally very high. In the general case, no algorithm to generate test data for all types of algorithms has been found. The cost of generating test data differs for different types of algorithms. To date, the emphasis has been on generating test data for different types of programming constructs rather than different types of algorithms. Test data generation methods have been implemented to find heuristics for different types of algorithms. Some algorithm types, including divide and conquer, backtracking, greedy approaches, and dynamic programming, have been tested to find the minimum cost of test data generation. Our experimental results indicate that some of these algorithm types can be used as a necessary condition for selecting heuristics, while programming constructs are a sufficient condition for selecting our heuristics. Finally, we recommend which heuristics for test data generation should be selected for different types of algorithms.

Keywords: longest path, saturation point, lmax, kL, kS

Procedia PDF Downloads 371
24023 Understanding the Qualities of Indian Neighborhoods: Understanding of Social Spaces

Authors: Venkata Ravi Kumar Veluru

Abstract:

Traditional Indian neighborhoods are socially active and sometimes intrusive communities that are losing their qualities due to western influences, which undermine traditional Indian values through the blind adoption of western neighborhood concepts whose scale is not suited to the Indian context. This paper aims to understand the qualities of traditional Indian neighborhoods by evaluating a traditional neighborhood of Jaipur and comparing it with a modern planned neighborhood of Chandigarh, designed by a foreign planner on the neighborhood concept of the western world, in order to find out the special qualities of traditional Indian neighborhoods as compared to western concepts in terms of social spaces. This is done by way of physical observation of the selected neighborhoods and a structured questionnaire survey of residents. The combined analysis found that social spaces, which are abundantly available in traditional neighborhoods, are missing in modern neighborhoods; these are the main settings where interactions happen and where social capital is formed. The qualities of traditional neighborhoods have to be considered while designing new neighborhoods in India.

Keywords: Indian neighborhoods, modern neighborhoods, neighborhood planning, social spaces, traditional neighborhoods

Procedia PDF Downloads 88
24022 Support for Reporting Guidelines in Surgical Journals Needs Improvement: A Systematic Review

Authors: Riaz A. Agha, Ishani Barai, Shivanchan Rajmohan, Seon Lee, Mohammed O. Anwar, Alex J. Fowler, Dennis P. Orgill, Douglas G. Altman

Abstract:

Introduction: Medical knowledge is growing fast. Evidence-based medicine works best if the evidence is reported well. Past studies have shown reporting quality to be lacking in the field of surgery. Reporting guidelines are an important tool for authors to optimize the reporting of their research. The objective of this study was to analyse the frequency and strength of recommendation for such reporting guidelines within surgical journals. Methods: A systematic review of the 198 journals within the Journal Citation Report 2014 (surgery category) published by Thomson Reuters was undertaken. The online guide for authors for each journal was screened by two independent groups and results were compared. Data regarding the presence and strength of recommendation to use reporting guidelines was extracted. Results: 193 journals were included (as five appeared twice having changed their name). These had a median impact factor of 1.526 (range 0.047 to 8.327), with a median of 145 articles published per journal (range 29-659), with 34,036 articles published in total over the two-year window 2012-2013. The majority (62%) of surgical journals made no mention of reporting guidelines within their guidelines for authors. Of the journals (38%) that did mention them, only 14% (10/73) required the use of all relevant reporting guidelines. The most frequently mentioned reporting guideline was CONSORT (46 journals). Conclusion: The mention of reporting guidelines within the guide for authors of surgical journals needs improvement. Authors, reviewers and editors should work to ensure that research is reported in line with the relevant reporting guidelines. Journals should consider hard-wiring adherence to them. This will allow peer-reviewers to focus on what is present, not what is missing, raising the level of scholarly discourse between authors and the scientific community and reducing frustration amongst readers.

Keywords: CONSORT, guide for authors, PRISMA, reporting guidelines, journal impact factor, citation analysis

Procedia PDF Downloads 442
24021 The Perspective on Data Collection Instruments for Younger Learners

Authors: Hatice Kübra Koç

Abstract:

For academia, collecting reliable and valid data is one of the most significant issues for researchers. However, the procedure is not the same for all target groups: when collecting data from teenagers, young adults, or adults, researchers can use common data collection tools such as questionnaires, interviews, and semi-structured interviews; yet, for young learners and very young ones, such reliable and valid data collection tools cannot easily be designed or applied by researchers. In this study, firstly, common data collection tools are examined for ‘very young’ and ‘young learner’ participant groups, since the quality and efficiency of an academic study are mainly based on valid and correct data collection and data analysis procedures. Secondly, two different data collection instruments for very young and young learners are presented and their efficacy is discussed. Finally, a suggested data collection tool, a performance-based questionnaire specifically developed for ‘very young’ and ‘young learner’ participant groups in the field of teaching English to young learners as a foreign language, is presented. The design procedure and suggested items/factors for the proposed data collection tool are revealed at the end of the study to help researchers who study young and very young learners.

Keywords: data collection instruments, performance-based questionnaire, young learners, very young learners

Procedia PDF Downloads 49
24020 Integrating Dependent Material Planning Cycle into Building Information Management: A Building Information Management-Based Material Management Automation Framework

Authors: Faris Elghaish, Sepehr Abrishami, Mark Gaterell, Richard Wise

Abstract:

The collaboration and integration between all building information management (BIM) processes and tasks are necessary to ensure that all project objectives can be delivered. A literature review has been used to explore state-of-the-art BIM technologies for managing construction materials, as well as the challenges the construction process has faced when using traditional methods. Thus, this paper aims to articulate a framework that integrates traditional material planning methods, such as ABC analysis (the Pareto principle) for analysing and categorising project materials, together with independent material planning methods such as Economic Order Quantity (EOQ) and Fixed Order Point (FOP), into BIM 4D and 5D capabilities, in order to embed a dependent material planning cycle into BIM that relies on the constructability method. Moreover, we build a model to connect the material planning outputs with the BIM 4D and 5D data, to ensure that all project information will be accurately presented throughout integrated and complementary BIM reporting formats. Furthermore, this paper presents a method to integrate the risk management output with the material management process to ensure that all critical materials are monitored and managed across all project stages. The paper includes browsers which are proposed to be embedded in any 4D BIM platform in order to predict the EOQ as well as the FOP and to alert the user during the construction stage. This enables the planner to check the status of the materials on site as well as to be alerted when a new order should be requested. Therefore, this will allow all project information to be managed in a single context and avoid missing any information at the early design stage. Subsequently, the planner will be capable of building a more reliable 4D schedule by allocating the categorised materials with the required EOQ, and of checking the optimum locations for inventory and the temporary construction facilities.
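
The planning quantities the proposed browsers would compute can be illustrated with the textbook formulas (a sketch with assumed demand, cost, and lead-time figures, not project data): ABC classification ranks materials by annual consumption value, EOQ = sqrt(2DS/H), and the fixed order point is the average demand over the lead time plus safety stock.

```python
from math import sqrt

def abc_classify(materials):
    """Rank materials by annual consumption value; top ~80% of value = A, next ~15% = B, rest = C."""
    ranked = sorted(materials, key=lambda m: m["annual_demand"] * m["unit_cost"], reverse=True)
    total = sum(m["annual_demand"] * m["unit_cost"] for m in ranked)
    cumulative, classes = 0.0, {}
    for m in ranked:
        cumulative += m["annual_demand"] * m["unit_cost"]
        classes[m["name"]] = "A" if cumulative <= 0.8 * total else "B" if cumulative <= 0.95 * total else "C"
    return classes

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def fixed_order_point(daily_demand, lead_time_days, safety_stock):
    """Reorder when inventory falls to demand over the lead time plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

materials = [
    {"name": "rebar", "annual_demand": 12_000, "unit_cost": 9.5},
    {"name": "cement", "annual_demand": 8_000, "unit_cost": 6.0},
    {"name": "sealant", "annual_demand": 9_000, "unit_cost": 4.0},
]
print(abc_classify(materials))
print(round(eoq(annual_demand=12_000, order_cost=150, holding_cost=2.5)))   # 1200 units per order
print(fixed_order_point(daily_demand=40, lead_time_days=7, safety_stock=120))
```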

Keywords: building information management, BIM, economic order quantity, EOQ, fixed order point, FOP, BIM 4D, BIM 5D

Procedia PDF Downloads 142