Search results for: data acquisition (DAQ)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25431

24411 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by the financial industry, with an estimated 708.5 billion global non-cash transactions recorded. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data still operate in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework for a secure decentralised marketplace that allows (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to them. The platform also integrates downstream transaction-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. 
This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform’s key features is enabling individuals to participate in and manage the personal data that they generate. A proof of concept for this framework was developed on the Ethereum blockchain, where an individual can securely manage access to their own personal data and their identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour correlated with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 72
24410 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, in general, k-ary trees, are essential in computer science. Applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions. In these cases, a more advanced representation of the data structure is essential. In this paper, we present the design of a compact (succinct) ternary tree data structure and report the results of an experimental evaluation using the static dictionary problem. We compare these results with those for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree, and in certain configurations offers performance comparable to regular ternary trees. We evaluated the performance of the algorithms on both 32-bit and 64-bit operating systems.
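For orientation, the regular (pointer-based) ternary search tree that serves as the paper's baseline can be sketched as below; the succinct variant replaces these per-node objects with flat, bit-vector-indexed arrays. Names here are illustrative, not the authors' implementation.

```python
class Node:
    """One node of a pointer-based ternary search tree (the baseline design)."""
    __slots__ = ("ch", "lo", "eq", "hi", "is_word")  # __slots__ trims per-node overhead

    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.is_word = False


def insert(root, word):
    """Insert a word; returns the (possibly new) subtree root."""
    if not word:
        return root
    ch, rest = word[0], word[1:]
    if root is None:
        root = Node(ch)
    if ch < root.ch:
        root.lo = insert(root.lo, word)
    elif ch > root.ch:
        root.hi = insert(root.hi, word)
    elif rest:
        root.eq = insert(root.eq, rest)
    else:
        root.is_word = True
    return root


def contains(root, word):
    """Membership test, i.e. the static dictionary query."""
    node, i = root, 0
    while node is not None:
        if word[i] < node.ch:
            node = node.lo
        elif word[i] > node.ch:
            node = node.hi
        elif i == len(word) - 1:
            return node.is_word
        else:
            node, i = node.eq, i + 1
    return False
```

Each node stores three child pointers plus a character, which is exactly the per-node overhead a succinct encoding aims to eliminate.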

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 158
24409 Mergers and Acquisitions in the Banking Sector: The West African Experience

Authors: Sunday Odunaiya

Abstract:

The number of banks in operation today, compared with some decades ago, reflects profound changes in the financial system. Customer demand, technological advancement, and government policies, among other factors, have put pressure on the financial sector’s growth, sustenance, and survival. This paper discusses mergers and acquisitions (M&A) in the banking sector, using West Africa as the basis of evaluation. It examines the conditions that warrant mergers and acquisitions in the banking sector, their effects, and how to ensure their effectiveness. The conceptual and empirical review of the relevant literature was done systematically, while value-increasing and value-decreasing theories were used to substantiate the discourse. The findings show that M&A has been a practical and deliberate activity in Nigeria, Ghana, and Ivory Coast from the earliest period to date, with a tremendous turnaround in the financial sector. It was found that M&A is consensually agreed upon by the target and the acquirer on a value-based account; in other words, a merger or acquisition is a deliberate decision reached by the management of the banks involved for a ‘just cause’.

Keywords: acquisitions, merger, management, financial sector

Procedia PDF Downloads 271
24408 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have grown continuously in size and demand over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning: data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted with little thought is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories, and then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
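The abstract does not state the exact loss formulation, but quantile regression is conventionally fit by minimising the pinball loss, which for a high quantile (say the 0.9 quantile of resource demand, useful for SLA headroom) penalises under-prediction more heavily than over-prediction. A minimal sketch:

```python
def pinball_loss(y_true, y_pred, q):
    """Average pinball (quantile) loss for quantile level q in (0, 1)."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        err = yt - yp
        # Under-prediction (err > 0) is weighted by q, over-prediction by 1 - q.
        total += q * err if err > 0 else (q - 1) * err
    return total / len(y_true)
```

At q = 0.9, under-predicting demand by 2 units costs 1.8 while over-predicting by the same amount costs only 0.2, which is exactly the asymmetry an SLA-aware predictor wants.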

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 103
24407 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed in such a way that they can easily be replicated without error, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated intense research in the field of watermarking. A watermark is a form, image, or text that is impressed onto paper and provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding watermarks in images using a combined DCT-DWT approach. The main consideration for any watermarking scheme is its robustness to various attacks.
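As a toy illustration of transform-domain embedding (using only a one-level Haar DWT and omitting the DCT stage that the paper combines with it), one bit can be hidden per 2x2 block by forcing the sign of the diagonal (HH) detail coefficient; all names and the strength parameter `alpha` are illustrative assumptions:

```python
def embed_watermark(img, bits, alpha=8.0):
    """Hide one bit per 2x2 block: forward one-level Haar DWT, overwrite the
    HH (diagonal detail) coefficient's sign, then inverse-transform."""
    out = [row[:] for row in img]
    k = 0
    for i in range(0, len(img), 2):
        for j in range(0, len(img[0]), 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            # Forward one-level Haar on the 2x2 block.
            ll = (a + b + c + d) / 2
            lh = (a - b + c - d) / 2
            hl = (a + b - c - d) / 2
            hh = alpha if bits[k] else -alpha  # overwrite diagonal detail
            # Inverse Haar.
            out[i][j] = (ll + lh + hl + hh) / 2
            out[i][j + 1] = (ll - lh + hl - hh) / 2
            out[i + 1][j] = (ll + lh - hl - hh) / 2
            out[i + 1][j + 1] = (ll - lh - hl + hh) / 2
            k += 1
    return out


def extract_watermark(img):
    """Recover the bits from the sign of each block's HH coefficient."""
    bits = []
    for i in range(0, len(img), 2):
        for j in range(0, len(img[0]), 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            bits.append((a - b - c + d) / 2 > 0)
    return bits
```

A real DCT-DWT scheme would embed into mid-frequency DCT coefficients of selected DWT subbands for robustness; the sketch above only shows why the transform domain, rather than raw pixels, is used as the embedding site.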

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 418
24406 Talent-Priority: Exploring the Human Resource Reengineering Model in Digital Transformation of a Benchmark Company

Authors: Hsiu Hua Hu

Abstract:

Digital transformation has widely affected various industries. It provides technological innovation, process redesign, new business model construction, and talent value creation. This transformation not only allows organizations to obtain and deploy specific technologies and methods suitable for organizational reengineering but is also an important way to solve management problems in human resource (HR) reengineering, business efficiency, and process redesign. In this study, we present the results of a qualitative study that offers insight into a series of key features of reengineering related to digital transformation, and into how companies create talent value when they successfully perform digital transformation and HR reengineering, led by business digitalization strategies including talent planning, talent acquisition, talent adjustment, and talent development. Drawing on the qualitative findings, we built an inductive model of HR reengineering, which aims to provide research and practical references for future inquiry into digital transformation and management.

Keywords: talent value creation, digital transformation, HR reengineering, qualitative study

Procedia PDF Downloads 150
24405 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. No methodological instruments are currently available in Chile to measure the workforce’s digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To develop a model of digital literacy indicators and a corresponding research instrument, along with an exploratory factor analysis of the data, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also carried out with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) items on a four-value Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 Likert-scale items. The KMO measure showed a value of 0.626, indicating a medium level of correlation, whereas Bartlett’s test yielded a significance value of less than 0.05, and Cronbach’s alpha was 0.618. 
Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups, selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately acts as the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills differs radically when sociodemographic variables are considered. This widens socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
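The reliability figure reported above (Cronbach's alpha of 0.618) follows the standard internal-consistency formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A minimal sketch of the computation, assuming item scores arranged one list per item:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    Population variance is used, as is conventional for this statistic.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly correlated items drive alpha to 1, while weakly related items pull it down, which is why 0.618 signals only moderate internal consistency for the 12-item scale.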

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 66
24404 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and vending of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists who specialize in building and optimizing models invest significant effort into building the other components of the architecture, which we do not believe is the best use of scientists’ bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists’ work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using terraform, cdk, cloudformation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
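The stage sequence named above can be sketched as a simple orchestrated chain; the stage names and stand-in functions below are illustrative only, since in the authors' setup each stage would be a managed AWS service wired together by infrastructure-as-code rather than an in-process function:

```python
def run_pipeline(stages, payload):
    """Run batch-pipeline stages in order, recording each stage for lineage."""
    lineage = []
    for name, stage in stages:
        payload = stage(payload)
        lineage.append(name)
    return payload, lineage


# Illustrative stand-ins for the stages named in the abstract.
source_data = lambda _: [1.0, 2.0, 3.0, 4.0]                      # data sourcing
engineer_feature = lambda xs: [(x, x * x) for x in xs]            # add a squared feature
train_model = lambda feats: sum(y for _, y in feats) / len(feats) # a trivial "model"
vend_output = lambda model: {"model_output": model}               # write to a "data store"

stages = [("source", source_data), ("features", engineer_feature),
          ("train", train_model), ("vend", vend_output)]
```

The point of the proposed architecture is that scientists only author the `train_model` step, while the surrounding chain is owned by data engineers and replicated via code.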

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 60
24403 Ecological Systems Theory, the SCERTS Model, and the Autism Spectrum, Node and Nexus

Authors: C. Surmei

Abstract:

Autism Spectrum Disorder (ASD) is a complex developmental disorder that can affect (but is not limited to) an individual’s cognitive development, emotional development, language acquisition, and capability to relate to others. Ecological Systems Theory is a sociocultural theory that focuses on the environmental systems with which an individual interacts. The SCERTS Model is an educational approach and multidisciplinary framework that addresses the challenges confronted by individuals on the autism spectrum and with other developmental disabilities. To aid the understanding of ASD and educational philosophies for families, educators, and the global community alike, a comparative analysis was undertaken to examine key variables (the child, society, education, nurture/care, relationships, communication). The results indicated that Ecological Systems Theory and the SCERTS Model are comparable in focus, motivation, and application, pointing to a viable and notable relationship between the two frameworks. This paper unpacks these two child development philosophies and their relationship to each other.

Keywords: autism spectrum disorder, ecological systems theory, education, SCERTS model

Procedia PDF Downloads 577
24402 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, despite having been studied for several decades, continues to be an active area of research. The goal is to find correspondences between elements in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well these representations reduce the cost for the correct correspondence relative to other possible matches.
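A typical local matching cost of the kind described here is the sum of absolute differences (SAD) over a small window, with winner-takes-all disparity selection; this is a generic sketch of the technique, not the paper's specific cost, and it works on whatever per-pixel representation (grayscale below) is chosen:

```python
def sad_cost(left, right, row, col, disparity, radius=1):
    """Sum of absolute differences between a window in the left image and the
    same window shifted left by `disparity` in the right image."""
    cost = 0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            cost += abs(left[row + dr][col + dc] -
                        right[row + dr][col + dc - disparity])
    return cost


def best_disparity(left, right, row, col, max_disp, radius=1):
    """Winner-takes-all: pick the disparity with the lowest window cost."""
    return min(range(max_disp + 1),
               key=lambda d: sad_cost(left, right, row, col, d, radius))
```

Swapping the scalar pixel values for colour triples or gradient features changes only the per-pixel difference, which is precisely the representation choice the paper evaluates.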

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 367
24401 Making Use of Content and Language Integrated Learning for Teaching Entrepreneurship and Neuromarketing to Master Students: Case Study

Authors: Svetlana Polskaya

Abstract:

The study deals with the issue of using the Content and Language Integrated Learning (CLIL) concept when teaching Master’s program students majoring in neuromarketing and entrepreneurship. Present-day employers expect young graduates to conduct professional communication with their English-speaking peers and to demonstrate proper knowledge of the industry’s terminology and jargon. The idea of applying CLIL arose because these students already possessed high proficiency in English and thus did not require further instruction in traditional grammar or lexis. A CLIL-type program was therefore devised, allowing learners to acquire new knowledge of entrepreneurship and neuromarketing while simultaneously honing their practical use of English. The case study analyzes the application of CLIL within this particular program, as well as the experience accumulated in the process.

Keywords: CLIL, entrepreneurship, neuromarketing, foreign language acquisition, proficiency level

Procedia PDF Downloads 83
24400 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high volumes and a broad variety of data from the application of new technologies poses challenges for generating business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. Data management environments have therefore become increasingly decentralized, relying on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques remains a challenge; the aim of these techniques is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 426
24399 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore’s law. Designers encounter numerous report files during design iterations when using timing and noise analysis tools. This paper presents our work using data mining techniques, combined with HTML tables, to extract and represent critical timing and noise data. When this data-mining tool is applied in real projects, running speed is important: the software employs table look-up techniques in its implementation to achieve reasonable running speed, as confirmed by performance testing. We added several advanced features for the application to one industrial chip design.
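Extracting rows from HTML-table reports of the kind described can be done with the standard library alone; the report format below (path/slack columns) is a hypothetical example, not the paper's actual tool output:

```python
from html.parser import HTMLParser


class TimingTableParser(HTMLParser):
    """Collect cell text, row by row, from an HTML timing/noise report table."""

    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())
```

Once rows are collected, critical paths can be found by a simple look-up or sort over the slack column, which is the kind of fast table-driven post-processing the abstract alludes to.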

Keywords: VLSI design, data mining, big data, HTML forms, web, EDA, timing, noise

Procedia PDF Downloads 251
24398 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In its simplest form, data quality can be defined as 'fitness for use', and it is a multi-dimensional concept. Emergency Departments (EDs) require information to treat patients and, at the same time, are the primary source of information regarding accidents, injuries, emergencies, etc. The ED is also the starting point of various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL), the premier trauma care centre in Sri Lanka, by introducing an e-health solution. The study consisted of three components. A research study was conducted to assess the quality of data in relation to five selected dimensions of data quality: accuracy, completeness, timeliness, legibility, and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). Post-intervention assessment confirmed that all five dimensions of data quality had improved, with the most significant improvements in the accuracy and timeliness dimensions.

Keywords: electronic health records, electronic emergency department information system, emergency department, data quality

Procedia PDF Downloads 270
24397 Using Pump as Turbine in Drinking Water Networks to Monitor and Control Water Processes Remotely

Authors: Sara Bahariderakhshan, Morteza Ahmadifar

Abstract:

Leakage is one of the most important problems faced by water distribution networks, and its primary cause is excess pressure. There are many approaches to controlling this excess pressure; using pressure reducing valves (PRVs) or reducing pipe diameter are two of them. Pumps, on the other hand, consume electricity or fossil fuels to supply the needed pressure in distribution networks, yet excess pressure still arises in some branches due to network topology and operating variables, so pressure valves remain inevitable. Using PRVs, however, effectively wastes the electricity or fuel consumed by the pumps, because PRVs simply dissipate the excess hydraulic pressure. Pumps working in reverse, or pumps as turbines (PaT), are readily available and effective means of reducing equipment cost in small hydropower plants. Urban areas of developing countries are growing and may face water scarcity in the near future. These cities need wider water networks, which makes it harder to predict, control, and operate the urban water cycle. Higher energy use and therefore more pollution, slower repair services, greater user dissatisfaction, and more leakage are serious problems of such networks, so more effective systems are needed to monitor and act in these complicated networks than those used today. In this article a new approach is proposed and evaluated: using PaT to produce enough energy for remote valves and sensors in the water network. These sensors can be used to determine discharge, pressure, water quality, and other important network characteristics, and with the help of remote valves, pipeline discharge can be controlled. Instead of wasting excess hydraulic pressure, which may even be destructive in some cases, this article’s goal is to harvest that extra pressure from the pipeline and produce clean electricity for remote instruments. 
Furthermore, as the network area grows, unwanted high pressure appears at some critical points; it is not destructive, but lowering it extends the lifetime of pipeline networks without causing user dissatisfaction. The strategy proposed in this article leads to the wide use of PaT for pressure containment and for producing the energy needed by remote valves and sensors, as in supervisory control and data acquisition (SCADA) systems, making it easy to monitor the urban water cycle, receive data from it, and make any needed changes in pipeline discharge and pressure easily and remotely. This is a clean energy production scheme without significant environmental impacts; it can be used in urban drinking water networks without any problems for consumers, leading to a stable and dynamic network with lower leakage and pollution.
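The power recoverable at a PaT installation can be estimated from the standard hydropower relation P = η·ρ·g·Q·H (efficiency, water density, gravitational acceleration, flow rate, head). The efficiency value below is an assumed ballpark, since PaT efficiency varies strongly with the operating point:

```python
def pat_power_watts(flow_m3_s, head_m, efficiency=0.65, rho=1000.0, g=9.81):
    """Electrical power recoverable by a pump running as a turbine (PaT).

    P = efficiency * rho * g * Q * H, with Q in m^3/s and H (excess head) in
    metres. The default efficiency is an illustrative assumption.
    """
    return efficiency * rho * g * flow_m3_s * head_m
```

Even a modest 50 L/s flow over 20 m of excess head yields several kilowatts, orders of magnitude more than the few watts a remote sensor or valve actuator needs, which supports the article's premise.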

Keywords: new energies, pump as turbine, drinking water, distribution network, remote control equipments

Procedia PDF Downloads 460
24396 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing event data on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which provides microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R for processing these data. We analyze the involvement and relationship of different variables for each parameter of the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the maneuver it undertook (overtaking or falling back).

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 251
24395 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 553
24394 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment barcodes themselves; we then augment images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations with our original data show that this approach achieves state-of-the-art performance in various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications.
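Photometric augmentation of the kind described (making clean synthetic renders look like real captures) can be sketched as below; the parameter ranges are illustrative assumptions, and a full pipeline would add geometric warps, blur, and background compositing:

```python
import random


def augment(image, rng):
    """One synthetic-to-real style augmentation pass on a grayscale image
    (a list of pixel rows): random brightness shift, contrast scale, noise."""
    brightness = rng.uniform(-20, 20)
    contrast = rng.uniform(0.8, 1.2)
    out = []
    for row in image:
        new_row = []
        for p in row:
            v = (p - 128) * contrast + 128 + brightness + rng.gauss(0, 3)
            new_row.append(min(255.0, max(0.0, v)))  # clamp to the valid range
        out.append(new_row)
    return out
```

Applying many such randomized passes to each rendered barcode produces the "large variety of data close to the actual shooting environments" that the detector is trained on.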

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 162
24393 Using Pump as Turbine in Urban Water Networks to Control, Monitor, and Simulate Water Processes Remotely

Authors: Morteza Ahmadifar, Sarah Bahari Derakhshan

Abstract:

Leakage is one of the most important problems faced by water distribution networks, and its primary cause is excess pressure. There are many approaches to controlling this excess pressure; using pressure reducing valves (PRVs) or reducing pipe diameter are two of them. Pumps, on the other hand, consume electricity or fossil fuels to supply the needed pressure in distribution networks, yet excess pressure still arises in some branches due to network topology and operating variables, so pressure valves remain inevitable. Using PRVs, however, effectively wastes the electricity or fuel consumed by the pumps, because PRVs simply dissipate the excess hydraulic pressure. Pumps working in reverse, or pumps as turbines (PAT), are readily available and effective means of reducing equipment cost in small hydropower plants. Urban areas of developing countries are growing and may face water scarcity in the near future. These cities need wider water networks, which makes it harder to predict, control, and operate the urban water cycle. Higher energy use and therefore more pollution, slower repair services, greater user dissatisfaction, and more leakage are serious problems of such networks, so more effective systems are needed to monitor and act in these complicated networks than those used today. In this article a new approach is proposed and evaluated: using PAT to produce enough energy for remote valves and sensors in the water network. These sensors can be used to determine discharge, pressure, water quality, and other important network characteristics, and with the help of remote valves, pipeline discharge can be controlled. Instead of wasting excess hydraulic pressure, which may even be destructive in some cases, this article’s goal is to harvest that extra pressure from the pipeline and produce clean electricity for remote instruments. 
Furthermore, as the network area grows, unwanted high pressure appears at some critical points; it is not destructive, but lowering it extends the lifetime of pipeline networks without causing user dissatisfaction. The strategy proposed in this article leads to the wide use of PAT for pressure containment and for producing the energy needed by remote valves and sensors, as in supervisory control and data acquisition (SCADA) systems, making it easy to monitor the urban water cycle, receive data from it, and make any needed changes in pipeline discharge and pressure easily and remotely. This is a clean energy production scheme without significant environmental impacts; it can be used in urban drinking water networks without any problems for consumers, leading to a stable and dynamic network with lower leakage and pollution.

Keywords: clean energies, pump as turbine, remote control, urban water distribution network

Procedia PDF Downloads 391
24392 Analysis of Delivery of Quad Play Services

Authors: Rahul Malhotra, Anurag Sharma

Abstract:

Fiber-based access networks can deliver the performance needed to support increasing demand for high-speed connections. Passive Optical Networks (PONs) are one of the technologies that have emerged in recent years. This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the rising bit error rate.
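The rate-versus-users trade-off can be sketched with the standard BER expression for on-off keying, BER = ½·erfc(Q/√2). The assumption below that the Q-factor degrades with the product of data rate and user count is a simplification for illustration, not the paper's simulation model:

```python
import math

def ber(q_factor):
    """Bit error rate for on-off keying: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2))

def users_supported(q_single_user, rate_gbps, max_ber=1e-9):
    """Count users until BER crosses the threshold, assuming (illustratively)
    that Q shrinks as 1/sqrt(rate * n_users) as optical power is shared."""
    n = 0
    while ber(q_single_user / math.sqrt(rate_gbps * (n + 1))) <= max_ber:
        n += 1
    return n

# Raising the line rate shrinks the number of users that stay under the
# 1e-9 BER target, mirroring the trend reported in the abstract.
print(users_supported(40.0, 2.5), users_supported(40.0, 10.0))
```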

Keywords: FTTH, quad play, play service, access networks, data rate

Procedia PDF Downloads 405
24391 Locomotion, Object Exploration, Social Communicative Skills, and Improvement in Language Abilities

Authors: Wanqing He

Abstract:

The current study explores aspects of exploratory behaviors and social capacities in urban Chinese infants to examine whether these factors mediate the link between infant walking and receptive and productive vocabularies. The linkage between the onset of walking and language attainment is well established, but little is known about the factors that drive it. This study examined whether joint attention, gesture use, and object activities mediate the association between locomotion and language development. Results showed that both the frequency (p = .05) and duration (p = .03) of carrying an object are strong mediators that afford opportunities for word comprehension. Accessing distal objects may also benefit infants’ language expression. Further studies on why object carrying may account for word comprehension, and why infants with autism do not benefit from walking onset in terms of language development, may yield valuable clinical implications.

Keywords: exploratory behaviors, infancy, language acquisition, motor development, social communicative skills

Procedia PDF Downloads 115
24390 Social Media Consumption Habits within the Millennial Generation: A Comparison between the U.S. and Bangladesh

Authors: Didarul Islam Manik

Abstract:

The study was conducted to determine social media usage by the Millennial/young-adult generation in the U.S. and Bangladesh. It investigated what types of social media Millennials/young adults use in their everyday lives; for what purposes they use social media; what significant differences exist between the two cultures in terms of social media use; and how the age of the respondents correlates with differences in social media use. Among the 409 respondents, 200 were selected from the University of South Dakota and 209 from the University of Dhaka, Bangladesh. The convenience sampling method was used to select the samples. A four-page questionnaire instrument was constructed with 19 closed-ended questions that collected 87 data points. The study used the uses-and-gratifications and domestication-of-technology models as theoretical frameworks. The study found that the Millennials spend an average of 4.5 hours on the Internet daily and an average of 134 minutes on social media every day. However, the U.S. Millennials spend more time (141 minutes) on social media than the Bangladeshis (127 minutes). The U.S. Millennials use various types of social media, including Facebook, Twitter, YouTube, Instagram, Pinterest, Snapchat, Reddit, and Imgur. In contrast, Bangladeshis use Facebook, YouTube, and Google Plus. The Bangladeshis tended to spend more time on Facebook (107 minutes) than the Americans (57 minutes). The study found that the Millennials of the two countries use Facebook to fill their free time, acquire information, seek entertainment, and maintain existing relationships. However, Bangladeshis are more likely to use Facebook for the acquisition of information, entertainment, educational purposes, and connecting with the people closest to them. Millennials also use Twitter to fill their free time, acquire information, and for entertainment. The study found a statistically significant difference between female and male social media use. It also found significant correlations between age and using Facebook for educational purposes; between age and discussing and posting religious issues; and between age and meeting new people. There is also a correlation between age and the use of Twitter for spending time and seeking entertainment.

Keywords: American study, social media, millennial generation, South Asian studies

Procedia PDF Downloads 228
24389 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, quick response (QR) code payment systems have become popular. Many companies introduce new QR code payment services, and the services compete with each other to increase their number of users. To grow a user base, a provider needs to understand how the demographic information, usage information, and value of users differ between services. In this study, we analyze real-world data provided by Nomura Research Institute, including the demographic data of users and information on users’ usage of two services, LINE Pay and PayPay. Non-negative Matrix Factorization (NMF) is widely used for analyzing and interpreting such data; however, the target data suffers from missing values. We apply EM-algorithm NMF (EMNMF) to complete the unknown values and so understand the features of the data presented in matrix form. Moreover, for comparing the results of NMF analysis of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the matrices. In this study, we combine EMNMF and DNMF to analyze the target data. As the interpretation, we show the differences in the features of users between LINE Pay and PayPay.
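The completion step can be illustrated with a plain masked matrix factorization on a toy user-by-feature matrix with a missing entry. This is a simplified stand-in for EMNMF: non-negativity is not enforced and plain gradient descent replaces the EM updates:

```python
import random

def masked_mf(X, rank=2, steps=3000, lr=0.01):
    """Factorise X ~ W @ H using only observed entries (None = missing).
    Simplified stand-in for EM-style NMF completion: non-negativity is
    not enforced and SGD replaces the EM updates."""
    random.seed(0)
    n, m = len(X), len(X[0])
    W = [[random.random() for _ in range(rank)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if X[i][j] is None:
                    continue  # skip missing cells; they get filled by W @ H
                pred = sum(W[i][k] * H[k][j] for k in range(rank))
                err = X[i][j] - pred
                for k in range(rank):
                    W[i][k] += lr * err * H[k][j]
                    H[k][j] += lr * err * W[i][k]
    return W, H

# Hypothetical usage matrix: rows are users, columns are features.
X = [[5.0, 3.0, None],
     [4.0, 2.0, 4.0],
     [1.0, 1.0, 5.0]]
W, H = masked_mf(X)
estimate = sum(W[0][k] * H[k][2] for k in range(2))  # filled-in missing cell
```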

Keywords: data science, non-negative matrix factorization, missing data, quality of services

Procedia PDF Downloads 126
24388 Developing Guidelines for Public Health Nurse Data Management and Use in Public Health Emergencies

Authors: Margaret S. Wright

Abstract:

Background/Significance: During many recent public health emergencies and disasters, public health nursing data has been missing or delayed, potentially impacting decision-making and response. Data used as evidence for decision-making in response, planning, and mitigation has been erratic and slow to arrive, reducing the ability to respond. Methodology: Applying best practices in data management and data use in public health settings, guided by the concepts outlined in ‘Disaster Standards of Care’ models, leads to recommendations for a model of best practices in data management and use by public health nurses in public health disasters/emergencies. As the ‘patient’ in a public health disaster/emergency is the community (local, regional, or national), guidelines for patient documentation are incorporated into the recommendations. Findings: Using this model, public health nurses could better plan how to prepare for, respond to, and mitigate disasters in their communities, and better participate in decision-making in all three phases, bringing public health nursing data to the discussion as part of the evidence base for decision-making.

Keywords: data management, decision making, disaster planning documentation, public health nursing

Procedia PDF Downloads 219
24387 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data

Authors: Sachin Nagargoje

Abstract:

Completely labeled data is often difficult to obtain in practice, and even when it can be obtained, its quality is always in question. In the shopping vertical, the input data are offers, supplied by advertisers with or without good-quality information. In this paper, the author investigated the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with badly written titles or partial product details) in the shopping vertical. The semi-supervised method improved recall in the Smart Phone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. It also produced a significant increase in revenue, which cannot be publicly disclosed.
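The offer data itself is not public, but the flavour of such a self-training loop can be sketched with a toy nearest-centroid classifier over a one-dimensional feature. The thresholds, labels, and classifier are all hypothetical, not the paper's method:

```python
def centroid_classify(x, centroids):
    """Nearest-centroid label for a 1-D feature value (toy classifier)."""
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

def self_train(labeled, unlabeled, threshold=1.0, rounds=3):
    """Minimal self-training loop: pseudo-label confident unlabeled points,
    then retrain (here, recompute centroids) and repeat."""
    labeled = dict(labeled)  # feature value -> label
    for _ in range(rounds):
        centroids = {}
        for lbl in set(labeled.values()):
            pts = [x for x, l in labeled.items() if l == lbl]
            centroids[lbl] = sum(pts) / len(pts)
        for x in list(unlabeled):
            lbl = centroid_classify(x, centroids)
            if abs(x - centroids[lbl]) < threshold:  # confident -> pseudo-label
                labeled[x] = lbl
                unlabeled.remove(x)
    return labeled

# Two seed labels and a pool of unlabeled offers (1-D "quality score").
seed = {0.0: "healthy", 10.0: "unhealthy"}
pool = [0.5, 1.2, 9.1, 8.4, 5.0]
result = self_train(seed, pool)
```

Points near a centroid get absorbed round by round, while ambiguous ones (like 5.0 here) stay unlabeled, which is how recall grows without sacrificing precision.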

Keywords: semi-supervised learning, clustering, recall, coverage

Procedia PDF Downloads 117
24386 Genodata: The Human Genome Variation Using BigData

Authors: Surabhi Maiti, Prajakta Tamhankar, Prachi Uttam Mehta

Abstract:

Since the completion of the Human Genome Project, there has been an unparalleled escalation in the sequencing of genomic data. The project was the first major vault in the field of medical research, especially in genomics, and it won accolades by using the concept of Big Data, which had earlier been used extensively to create business value. Big Data involves data sets in files of terabytes, petabytes, or exabytes in size, whereas data sets were traditionally managed using Excel sheets and RDBMSs. The volume of data made processing tedious and time-consuming, so a stronger framework, Hadoop, was introduced in the genetic sciences to make data processing faster and more efficient. This paper focuses on using SPARK, which is gaining momentum with the advancement of Big Data technologies. Cloud storage is an effective medium for storing the large data sets generated by genetic research and the result sets produced by SPARK analysis.
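Spark itself is out of scope for a short sketch, but the map-reduce pattern it applies to genomic records can be illustrated with the standard library alone; the record format and variant labels below are made up:

```python
# Stdlib stand-in for the map-reduce pattern a Spark job would run across
# cluster partitions: count variant types across file chunks in parallel.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_variants(chunk):
    """Map step: tally variant types (SNP, INS, DEL, ...) in one chunk."""
    return Counter(record.split(",")[1] for record in chunk)

chunks = [  # hypothetical "position,variant_type" records, pre-partitioned
    ["101,SNP", "205,SNP", "310,INS"],
    ["412,DEL", "518,SNP"],
]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(count_variants, chunks))
totals = sum(partials, Counter())  # reduce step: merge per-chunk counters
```

In Spark the same shape appears as a `map` over partitions followed by a `reduceByKey`, with the partitions living in cloud storage rather than in memory.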

Keywords: human genome project, Bigdata, genomic data, SPARK, cloud storage, Hadoop

Procedia PDF Downloads 255
24385 Ontology for a Voice Transcription of OpenStreetMap Data: The Case of Space Apprehension by Visually Impaired Persons

Authors: Said Boularouk, Didier Josselin, Eitan Altman

Abstract:

In this paper, we present a vocal ontology of OpenStreetMap data for the apprehension of space by visually impaired people. The platform, based on produsage, gives data producers the freedom to choose the descriptors of geocoded locations. Unfortunately, this freedom, also called folksonomy, complicates subsequent searches of the data. We address this issue with a simple but usable method to extract data from OSM databases and deliver it to visually impaired people using text-to-speech (TTS) technology. We focus on helping people with visual disabilities to plan their itinerary and to comprehend a map by querying the computer and receiving information about the surrounding environment through a mono-modal human-computer dialogue.
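One way to tame folksonomy before speech synthesis is a small mapping from OSM tags to a controlled spoken vocabulary. The tag keys below follow OSM conventions, but the ontology entries and phrasing are illustrative, not the paper's actual ontology:

```python
# Hypothetical mapping from raw OSM tags to a controlled spoken vocabulary,
# so a TTS engine reads a consistent phrase regardless of how the data
# producer tagged the location.
ONTOLOGY = {  # (OSM key, value) -> spoken category
    ("amenity", "pharmacy"): "a pharmacy",
    ("amenity", "cafe"): "a cafe",
    ("highway", "crossing"): "a pedestrian crossing",
}

def describe(node_tags, distance_m, bearing):
    """Turn a tagged OSM node into a sentence for text-to-speech output."""
    for key, value in node_tags.items():
        label = ONTOLOGY.get((key, value))
        if label:
            return f"There is {label} {distance_m} meters to your {bearing}."
    return "There is an unidentified place nearby."

print(describe({"amenity": "pharmacy", "name": "Central"}, 40, "left"))
```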

Keywords: TTS, ontology, open street map, visually impaired

Procedia PDF Downloads 294
24384 Design and Development of a Platform for Analyzing Spatio-Temporal Data from Wireless Sensor Networks

Authors: Walid Fantazi

Abstract:

The development of sensor technology (microelectromechanical systems (MEMS), wireless communications, embedded systems, distributed processing, and wireless sensor applications) has contributed to a broad range of WSN applications capable of collecting large amounts of spatiotemporal data in real time. These systems require real-time processing to manage storage and query the data as they arrive. To cover these needs, we propose in this paper a snapshot spatiotemporal data model based on object-oriented concepts. This model reduces storage requirements and data redundancy, which makes it easier to execute spatiotemporal queries and saves analysis time. Further, to ensure the robustness of the system and to eliminate congestion in main memory, we propose an in-RAM spatiotemporal indexing technique called Captree*. On top of this, we offer an RIA (Rich Internet Application)-based SOA application architecture that allows remote monitoring and control.
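The details of Captree* are not given in the abstract, but the bucketing idea behind an in-RAM spatiotemporal index can be sketched as follows; the cell and window sizes are arbitrary illustrative choices:

```python
from collections import defaultdict

class SnapshotIndex:
    """Minimal in-RAM spatiotemporal index: sensor readings are bucketed
    by (time window, grid cell). A toy sketch in the spirit of the snapshot
    model, not a reconstruction of the Captree* structure."""

    def __init__(self, cell=10.0, window=60):
        self.cell, self.window = cell, window
        self.buckets = defaultdict(list)

    def _key(self, t, x, y):
        return (int(t // self.window), int(x // self.cell), int(y // self.cell))

    def insert(self, t, x, y, value):
        self.buckets[self._key(t, x, y)].append((t, x, y, value))

    def query(self, t, x, y):
        """All readings in the same time window and grid cell."""
        return self.buckets.get(self._key(t, x, y), [])

idx = SnapshotIndex()
idx.insert(5, 12.0, 3.0, "temp=21C")
idx.insert(30, 14.5, 7.0, "temp=22C")
idx.insert(90, 12.0, 3.0, "temp=20C")
hits = idx.query(10, 13.0, 4.0)  # same minute, same 10x10 cell -> 2 readings
```

Queries become dictionary lookups instead of scans, which is the point of keeping the index entirely in RAM.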

Keywords: WSN, indexing data, SOA, RIA, geographic information system

Procedia PDF Downloads 250
24383 Prediction of Marine Ecosystem Changes Based on the Integrated Analysis of Multivariate Data Sets

Authors: D. Prozorkevitch, A. Mishurov, K. Sokolov, L. Karsakov, L. Pestrikova

Abstract:

The current body of knowledge about the marine environment and the dynamics of marine ecosystems includes a huge amount of heterogeneous data collected over decades, generally spanning a wide range of hydrological, biological, and fishery data. Marine researchers collect these data and analyze how and why the ecosystem has changed from past to present. Based on these historical records and the linkages between processes, it is possible to predict future changes. Multivariate analysis of trends and their interconnections in the marine ecosystem may thus serve as an instrument for predicting further ecosystem evolution. Information about the components of the marine ecosystem spanning more than 50 years needs to be brought together to investigate how these data arrays can help to predict the future.
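Reduced to a single variable, the trend-extrapolation idea is ordinary least squares over a yearly series; the series below is fabricated for illustration, and the real analysis is multivariate:

```python
def linear_trend(series):
    """Ordinary least-squares slope and intercept for a yearly series."""
    n = len(series)
    mx, my = (n - 1) / 2, sum(series) / n
    slope = (sum((x - mx) * (y - my) for x, y in enumerate(series))
             / sum((x - mx) ** 2 for x in range(n)))
    return slope, my - slope * mx

def forecast(series, steps):
    """Extrapolate the fitted trend `steps` years past the series end."""
    slope, intercept = linear_trend(series)
    return [intercept + slope * (len(series) + s) for s in range(steps)]

# Fabricated example: mean annual water temperature over five years.
temps = [3.1, 3.3, 3.2, 3.6, 3.8]
print(forecast(temps, 2))
```

A real prediction would fit many such series jointly and model their interconnections, but each univariate trend is a building block of exactly this form.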

Keywords: Barents Sea ecosystem, abiotic, biotic, data sets, trends, prediction

Procedia PDF Downloads 112
24382 Optical Fiber Data Throughput in a Quantum Communication System

Authors: Arash Kosari, Ali Araghi

Abstract:

A mathematical model for an optical-fiber communication channel is developed, yielding an expression for the throughput and loss of the corresponding link. The data are assumed to be transmitted using separate photons with different polarizations. The derived model also shows the dependence of data throughput on the channel length and the depolarization factor. It is observed that absorption of photons affects the throughput more strongly than depolarization does. In addition, the probability of depolarization and the absorption of radiated photons are obtained.
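A hedged sketch of such a link budget: suppose photons survive absorption with probability 10^(−αL/10) and keep their polarization with probability (1−p)^L. Both functional forms and the parameter values are assumptions for illustration, not the paper's derived expression:

```python
def link_throughput(rate_bps, length_km, alpha_db_per_km=0.2,
                    depol_prob_per_km=0.001):
    """Illustrative single-photon link model: throughput = transmit rate
    times the probability a photon is neither absorbed nor depolarized."""
    survives = 10 ** (-alpha_db_per_km * length_km / 10)   # fiber attenuation
    keeps_polarization = (1 - depol_prob_per_km) ** length_km
    return rate_bps * survives * keeps_polarization

t_short = link_throughput(1e6, 10)   # 10 km link
t_long = link_throughput(1e6, 100)   # 100 km link
# Over 100 km, attenuation (factor 0.01) dominates depolarization
# (factor ~0.9), consistent with the abstract's observation that
# absorption affects throughput more strongly.
```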

Keywords: absorption, data throughput, depolarization, optical fiber

Procedia PDF Downloads 284