Search results for: NFC data exchange format
25410 Medical Experience: Usability Testing of Displaying Computed Tomography Scans and Magnetic Resonance Imaging in Virtual and Augmented Reality for Accurate Diagnosis
Authors: Alyona Gencheva
Abstract:
The most common way to study diagnostic results is using specialized programs at a stationary workplace. Magnetic Resonance Imaging is presented in a two-dimensional (2D) format, and Computed Tomography sometimes looks like a three-dimensional (3D) model that can be interacted with. The main idea of the research is to compare ways of displaying diagnostic results in virtual reality that can help a surgeon during or before an operation in augmented reality. During the experiment, the medical staff examined liver vessels in the abdominal area and heart boundaries. The search time and detection accuracy were measured on black-and-white and coloured scans. Usability testing in virtual reality shows convenient ways of interaction like hand input, voice activation, displaying risk to the patient, and the required number of scans. The results of the experiment will be used in the new C# program based on Magic Leap technology.
Keywords: augmented reality, computed tomography, magic leap, magnetic resonance imaging, usability testing, VTE risk
Procedia PDF Downloads 112
25409 Data Access, AI Intensity, and Scale Advantages
Authors: Chuping Lo
Abstract:
This paper presents a simple model demonstrating that, ceteris paribus, countries with lower barriers to accessing global data tend to earn higher incomes than other countries. Large countries, which inherently have greater data resources, therefore tend to have higher incomes than smaller countries, and the former may be more hesitant than the latter to liberalize cross-border data flows in order to maintain this advantage. Furthermore, countries with higher artificial intelligence (AI) intensity in production technologies tend to benefit more from economies of scale in data aggregation, leading to higher income and more trade, as they are better able to utilize global data.
Keywords: digital intensity, digital divide, international trade, economies of scale
Procedia PDF Downloads 68
25408 Secured Transmission and Reserving Space in Images Before Encryption to Embed Data
Authors: G. R. Navaneesh, E. Nagarajan, C. H. Rajam Raju
Abstract:
Nowadays, multimedia data are often used to carry secure information. Previous methods allocate space in an image for data embedding after encryption. In this paper, we propose a novel method that reserves room in the image, surrounded by a boundary, before encryption using a traditional reversible data hiding (RDH) algorithm, which makes it easy for the data hider to reversibly embed data in the encrypted image. The proposed method achieves real-time performance; that is, data extraction and image recovery are free of any error. A secure transmission process is also discussed, which improves efficiency roughly tenfold compared to the other processes discussed.
Keywords: secure communication, reserving room before encryption, least significant bits, image encryption, reversible data hiding
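The keyword list mentions least significant bits; a minimal Python sketch of LSB substitution, the embedding primitive behind many such schemes, is shown below. It is a hypothetical illustration only, not the authors' reserving-room-before-encryption RDH method, which additionally guarantees exact recovery of the cover image.

```python
def embed_lsb(cover_pixels, payload_bits):
    """Hide payload bits in the least significant bit of each pixel value."""
    if len(payload_bits) > len(cover_pixels):
        raise ValueError("payload too large for cover image")
    stego = list(cover_pixels)
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return stego

def extract_lsb(stego_pixels, n_bits):
    """Read back the first n_bits hidden bits."""
    return [p & 1 for p in stego_pixels[:n_bits]]
```

Each pixel changes by at most 1, so the distortion is visually negligible; a truly reversible scheme would also store the original LSBs in the reserved room so the cover image can be restored bit-for-bit.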
Procedia PDF Downloads 412
25407 Causal Relationship between Macro-Economic Indicators and Fund Unit Price Behaviour: Evidence from Malaysian Equity Unit Trust Fund Industry
Authors: Anwar Hasan Abdullah Othman, Ahamed Kameel, Hasanuddeen Abdul Aziz
Abstract:
In this study, an attempt has been made to investigate the relationship, specifically the causal relationship, between the unit prices of Islamic equity unit trust funds, measured by fund net asset value (NAV), and selected macroeconomic variables of the Malaysian economy, using the VECM causality test and the Granger causality test. Monthly data from January 2006 to December 2012 were used for all variables. The findings show that the industrial production index, political elections, and the financial crisis are the only variables having a unidirectional causal relationship with fund unit price, while global oil prices have a bidirectional causal relationship with fund NAV. It is therefore concluded that the equity unit trust fund industry in Malaysia is an inefficient market with respect to the industrial production index, global oil prices, political elections, and the financial crisis. However, the market is approaching informational efficiency with respect to at least four macroeconomic variables: the treasury bill rate, money supply, foreign exchange rate, and corruption index.
Keywords: fund unit price, unit trust industry, Malaysia, macroeconomic variables, causality
Procedia PDF Downloads 470
25406 Identity Verification Using k-NN Classifiers and Autistic Genetic Data
Authors: Fuad M. Alkoot
Abstract:
DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We aim to determine whether, and how accurately, gene data initially used for autism detection can serve identification applications, and in particular whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and remains close to optimal at noise standard deviations as high as 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
Keywords: biometrics, genetic data, identity verification, k-nearest neighbor
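The experiment described, classifying a noise-corrupted feature vector with a nearest neighbor rule, can be sketched in a few lines of Python. The two-subject gallery and the feature vectors below are invented for illustration, not the autism gene data used in the paper.

```python
import math
import random

def knn_predict(gallery, query, k=1):
    """gallery: list of (feature_vector, subject_id); majority vote of k nearest."""
    nearest = sorted(gallery, key=lambda item: math.dist(item[0], query))[:k]
    votes = {}
    for _, subject in nearest:
        votes[subject] = votes.get(subject, 0) + 1
    return max(votes, key=votes.get)

def corrupt(vector, sigma, rng):
    """Add zero-mean Gaussian noise, mirroring the paper's robustness test."""
    return [x + rng.gauss(0.0, sigma) for x in vector]
```

With well-separated subjects, classification survives noise with a standard deviation around 1, consistent with the reported optimum.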
Procedia PDF Downloads 258
25405 A Review on Intelligent Systems for Geoscience
Authors: R. Palson Kennedy, P. Kiran Sai
Abstract:
This article introduces machine learning (ML) researchers to the hurdles that geoscience problems present, as well as the opportunities for improvement in both ML and the geosciences, and presents a review from the data life cycle perspective to meet that need. Numerous facets of the geosciences present unique difficulties for the study of intelligent systems: geoscience data are notoriously difficult to analyze since they are frequently unpredictable, intermittent, sparse, multi-resolution, and multi-scale. The first half addresses data science's essential concepts and theoretical underpinnings, while the second section covers key themes and shared experiences from current publications focused on each stage of the data life cycle. Finally, themes such as open science, smart data, and team science are considered.
Keywords: data science, intelligent systems, machine learning, big data, data life cycle, recent developments, geoscience
Procedia PDF Downloads 135
25404 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact with surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient, and more scalable Smart Grid network that bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grids achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large-scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
Procedia PDF Downloads 300
25403 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh
Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila
Abstract:
Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. Therefore, it is important for an organization to ensure that the data they use is of high quality. This is where the concept of data mesh comes in. Data mesh is an organizational and architectural decentralized approach to data management that can help organizations improve the quality of data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can help organizations improve data quality by reducing the reliance on centralized data teams and allowing domain experts to take charge of their data. This paper intends to discuss how a set of elements, including data mesh, are tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata, thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which can help provide clarity in roles and responsibilities and improve data quality. Additionally, data mesh can also contribute to a new form of organization that is more agile and adaptable. 
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance by enabling better insights into the business through better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can help enhance data quality by automating many data-related tasks, such as data cleaning and data validation; by integrating AI into data mesh, organizations can further enhance the quality of their data. The concepts above are illustrated by feedback from the experience of AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
Keywords: data culture, data-driven organization, data mesh, data quality for business success
Procedia PDF Downloads 136
25402 Preparing Education Enter the ASEAN Community: The Case Study of Suan Sunandha Rajabhat University
Authors: Sakapas Saengchai, Vilasinee Jintalikhitdee, Mathinee Khongsatid, Nattapol Pourprasert
Abstract:
This paper studied preparations for education to enter the ASEAN Community. By the year 2015, the Ministry of Education had a policy on the ASEAN Charter, including the dissemination of information to create a positive attitude toward ASEAN, appropriate development of students' skills, development of educational standards to prepare for the liberalization of education in the region, and youth development as a vital resource in advancing the ASEAN Community. In preparing for the liberalization of education, the Commission on Higher Education (CHE) prepared a Thai strategy to join ASEAN and support free trade in higher education services: increasing graduate capability to reach international standards, strengthening higher education institutions, and enhancing the roles of educational institutions in the ASEAN Community. These are the main factors in setting up the 15-year long-term education framework, volume no. 2, as well as in promoting Thailand as a center of education for neighboring countries and developing data centers for higher education institutions in the region. Most of the short-term plan supplements curricula with ASEAN Community content. Moreover, it provides for the teaching of English and other languages used in the region, and for creating partnerships with the ASEAN countries to exchange academic staff and students, research, training, development of joint programs, and system tools in higher education.
Keywords: ASEAN community, education, institution, dissemination of information
Procedia PDF Downloads 472
25401 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland
Authors: Raptis Sotirios
Abstract:
Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven methods can help improve policies and plan and design service provision schedules, using algorithms that assist healthcare managers in facing unexpected demands with fewer resources. The paper discusses services packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The significance tests, the chi-squared test and Student's t-test, are used on data over a 39-year span for which HSc data exist for services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that assume, as a null hypothesis, that the target service's demands are statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations and to extend the linear dependence of the target's demand to independent demands, thus forming groups of services. The statistical tests confirm the ML couplings, making the predictions also statistically meaningful and proving that a target service can be matched reliably to other services, while ML shows that these indicated relationships can also be linear. Zero padding was used for missing years' records and better illustrated such relationships, both for limited years and over the entire span, offering long-term data visualizations; limited-year groups explained how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years.
The prediction performance of the associations is measured using the Receiver Operating Characteristic (ROC) AUC and ACC metrics, as well as the statistical tests (chi-squared and Student's t-test). Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and Information Exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and the behavior under different learning ratios. The impact of k-NN, cross-correlation, and C-Means first groupings is also studied over limited years and over the entire span. It was found that CART generally lagged behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, three linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be packed well into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
Keywords: class, cohorts, data frames, grouping, prediction, probability, services
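The chi-squared test used above to couple service demands can be sketched from first principles. The contingency table here is invented for illustration (rows: one service's demand class, columns: another's); in practice the statistic is compared against a chi-squared critical value, e.g. 3.84 for one degree of freedom at the 0.05 level.

```python
def chi2_independence(table):
    """Pearson chi-squared statistic and degrees of freedom
    for a contingency table of two categorical demand variables."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    dof = (len(row_totals) - 1) * (len(col_totals) - 1)
    return stat, dof
```

A large statistic relative to the critical value rejects independence, i.e. indicates the two demand series are coupled and candidates for packing.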
Procedia PDF Downloads 234
25400 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques
Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah
Abstract:
Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming, and error-prone. This has partly led to most cost estimates being unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules understandable by all those involved in a construction project has not totally solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological, machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual editing into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges of this exploratory study are also reported and recommendations for future studies proposed.
Keywords: BIM, construction projects, cost estimation, NRM, ontology
Procedia PDF Downloads 551
25399 Big Data Analysis with RHadoop
Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim
Abstract:
It is almost impossible to store or analyze big data, which increases exponentially, with traditional technologies; Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis on actual data of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package based on bigmemory. The results showed that our RHadoop system was faster than the other packages, owing to parallel processing with an increasing number of map tasks as the size of the data grows.
Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop
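The map-reduce structure behind such parallel regression can be illustrated compactly (in Python here, since no R code appears in the text): each map task emits the sufficient statistics of its data chunk, and the reduce step combines them into the least-squares fit. The chunking and the simple one-predictor model are illustrative assumptions, not the paper's multiple-regression setup.

```python
def map_stats(chunk):
    """Map step: sufficient statistics of one chunk of (x, y) pairs."""
    n = len(chunk)
    sx = sum(x for x, _ in chunk)
    sy = sum(y for _, y in chunk)
    sxx = sum(x * x for x, _ in chunk)
    sxy = sum(x * y for x, y in chunk)
    return n, sx, sy, sxx, sxy

def reduce_fit(parts):
    """Reduce step: combine per-chunk statistics into slope and intercept."""
    n, sx, sy, sxx, sxy = (sum(vals) for vals in zip(*parts))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n
```

Because the statistics are additive, chunks can be processed on separate nodes and merged in any order, which is exactly what makes the computation scale with the number of map tasks.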
Procedia PDF Downloads 437
25398 Participation of Juvenile with Driven of Tobacco Control in Education Institute: Case Study of Suan Sunandha Rajabhat University
Authors: Sakapas Saengchai
Abstract:
This paper studied the participation of juveniles in driving tobacco control in an educational institute through a case study of Suan Sunandha Rajabhat University. This qualitative research has the objective of studying the participation of juveniles in driving tobacco control in the university, as guidance for developing such participation in educational institutes, the university also being a cigarette-free university. Qualitative data were collected through participant observation, in-depth interviews, group conversations with student representatives of each faculty and college, and exchanges of student opinions. The results show that participation in tobacco control has three parts: 1) participation in tobacco control campaigns, 2) academic training and cigarette-free activities of the university, and 3) acting as a juvenile role model in tobacco control. As guidelines for youth involvement in driving tobacco control, universities should promote tobacco control activities and continue the smoking reduction campaign, including a clearly signed designated area for smokers within faculties and colleges, and should develop a network of model non-smoking students. This plays a key role in coordinating university students to drive toward a cigarette-free university, as well as strengthening communities inside and outside the area for a good society and quality of life in the country.
Keywords: participation, juvenile, tobacco control, institute
Procedia PDF Downloads 273
25397 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutex task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex task. Experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
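One common way to make a single feature correspond to multiple labels across tasks is to permute the class labels per task, so memorizing the original feature-to-label mapping cannot help the learner. The sketch below assumes this label-permutation reading of the method, since the abstract does not give the exact construction.

```python
import random

def make_mutex_task(examples, n_classes, rng):
    """Relabel every class via a random permutation, yielding a task whose
    feature-to-label mapping contradicts the initial dataset's distribution."""
    perm = list(range(n_classes))
    rng.shuffle(perm)
    return [(features, perm[label]) for features, label in examples]
```

Drawing a fresh permutation per task means the same feature vector carries different labels in different tasks, which is what breaks the memorization shortcut in MAML.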
Procedia PDF Downloads 94
25396 Engaged Employee: Re-Examine the Effects of Psychological Conditions on Employee Outcomes
Authors: Muncharee Phaobthip
Abstract:
In this research, the researcher re-examines the mediating effect of employee engagement between its antecedents and consequences, investigating the relations among leadership practices, employment branding, and employee engagement based on social exchange theory. The researcher has four objectives: first, to study the effects of leadership practices on employment branding, employee engagement, and work intention; second, to examine the effects of employer brand perception on employee engagement and work intention; third, to examine the effects of employee engagement on work intention; and fourth, to inquire into responses concerning work intention. The researcher constituted a sample population of 535 employees of a Thai hotel chain located in four regions of Thailand. The researcher utilized a mixed-methods approach divided into quantitative and qualitative investigatory phases. In the quantitative phase, the researcher collected germane data from the 535 members of the sample population through the use of a questionnaire. In the qualitative phase, relevant data were obtained through in-depth interviews with three subgroups of the sample population: twelve hotelier experts, six employees at the administrator level, and operational-level employees. Focus group discussions were held with discussants from these three subgroups. The findings are as follows: leadership practices showed positive effects on employment branding, employee engagement, and work intention; employment branding displayed positive effects on employee engagement and work intention; and employee engagement had positive effects on work intention.
However, in the analysis of the equations, the researcher confirmed the important role of employee engagement as a mediating factor between its antecedent and consequence factors. This provides benefits in that it augments the body of knowledge devoted to the fostering of employee engagement with respect to psychological conditions. In conclusion, the researcher found that value co-creation among leaders, employers, and employees had positive effects on employee outcomes, leading to business outcomes according to the rule of reciprocity.
Keywords: antecedents, employee engagement, psychological conditions, work intention
Procedia PDF Downloads 111
25395 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network
Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan
Abstract:
Data aggregation is a helpful technique for reducing the data communication overhead in a wireless sensor network. One of the important tasks of data aggregation is the positioning of the aggregator points. There is a lot of work on data aggregation, but efficient positioning of the aggregator points has received little attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network, and propose an algorithm to select the aggregator positions for a scenario where aggregator nodes are more powerful than sensor nodes.
Keywords: aggregation point, data communication, data aggregation, wireless sensor network
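A minimal placement rule consistent with this setting is to pick, among the powerful candidate nodes, the one minimising the total sensor-to-aggregator distance, a simple proxy for communication cost. This greedy sketch is an assumption for illustration, not the authors' algorithm.

```python
import math

def place_aggregator(candidate_nodes, sensor_nodes):
    """Choose the powerful node with the smallest total Euclidean distance
    to all sensors, approximating minimum communication overhead."""
    return min(candidate_nodes,
               key=lambda c: sum(math.dist(c, s) for s in sensor_nodes))
```

For multiple aggregators, the same cost function could seed a k-medoids-style assignment of sensors to their nearest chosen aggregator.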
Procedia PDF Downloads 160
25394 Spatial Econometric Approaches for Count Data: An Overview and New Directions
Authors: Paula Simões, Isabel Natário
Abstract:
This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available to model data collected with reference to location in space, from the classical spatial econometric approaches to recent developments in spatial econometrics for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data
Procedia PDF Downloads 594
25393 A NoSQL Based Approach for Real-Time Managing of Robotics's Data
Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir
Abstract:
This paper deals with the continual progression of data, from which new data management solutions have emerged: NoSQL databases. They span several areas, such as personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. Nowadays, these database management systems are increasing in number. These systems store data very well, and with the trend of big data, a new challenge arises: storage demands new structures and methods for managing enterprise data. The new intelligent machines in the e-learning sector thrive on more data, so smart machines can learn more and faster. Robotics is the use case on which we focus our tests. We implement NoSQL for robotics to wrestle all the data robots acquire into usable form, because with ordinary approaches to robotics we face severe limits in managing and finding the exact information in real time. Our proposed approach was demonstrated by experimental studies and a running example used as a use case.
Keywords: NoSQL databases, database management systems, robotics, big data
Procedia PDF Downloads 355
25392 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis
Authors: C. B. Le, V. N. Pham
Abstract:
In modern data analysis, multi-source data appears more and more in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data; therefore, linking multi-source data is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. Ensemble is a versatile machine learning model in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble method, called FOMOCE. First, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering
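A standard building block for clustering ensembles is the co-association matrix: the fraction of base clusterings (here, one per source) that place two items in the same cluster, which a consensus step then thresholds. The sketch below shows that generic mechanism with invented partitions, not the FOMOCE method itself.

```python
def co_association(partitions, n):
    """m[i][j]: fraction of base clusterings placing items i and j together."""
    m = [[0.0] * n for _ in range(n)]
    frac = 1.0 / len(partitions)
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += frac
    return m

def consensus_labels(partitions, n, threshold=0.5):
    """Merge items whose co-association exceeds the threshold (single-link style)."""
    m = co_association(partitions, n)
    labels = list(range(n))
    for i in range(n):
        for j in range(n):
            if m[i][j] > threshold and labels[i] != labels[j]:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels
```

Soft (fuzzy) variants replace the hard 0/1 co-membership with membership-degree products, which is closer in spirit to the fuzzy optimization described above.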
Procedia PDF Downloads 189
25391 Determinants of Standard Audit File for Tax Purposes Accounting Legal Obligation Compliance Costs: Empirical Study for Portuguese SMEs of Leiria District
Authors: Isa Raquel Alves Soeiro, Cristina Isabel Branco de Sá
Abstract:
In Portugal, since 2008, there has been a requirement to export the Standard Audit File for Tax Purposes (SAF-T) standard file (in XML format). This file thus gathers tax-relevant information from a company relating to a specific period of taxation. There are two types of SAF-T files that serve different purposes: the SAF-T of revenues and the SAF-T of accounting, which requires taxpayers and accounting firms to invest in order to adapt the accounting programs to the legal requirements. The implementation of the SAF-T accounting file aims to facilitate the collection of relevant tax data by tax inspectors as support of taxpayers' tax returns for the analysis of accounting records or other information with tax relevance (Portaria No. 321-A/2007 of March 26 and Portaria No. 302/2016 of December 2). The main objective of this research project is to verify, through quantitative analysis, what is the cost of compliance of Small and Medium Enterprises (SME) in the district of Leiria in the introduction and implementation of the tax obligation of SAF-T - Standard Audit File for Tax Purposes of accounting. The information was collected through a questionnaire sent to a population of companies selected through the SABI Bureau Van Dijk database in 2020. Based on the responses obtained to the questionnaire, the companies were divided into two groups: Group 1 -companies who are self-employed and whose main activity is accounting services; and Group 2 -companies that do not belong to the accounting sector. In general terms, the conclusion is that there are no statistically significant differences in the costs of complying with the accounting SAF-T between the companies in Group 1 and Group 2 and that, on average, the internal costs of both groups represent the largest component of the total cost of compliance with the accounting SAF-T. 
The results show that, in both groups, the total costs of complying with the accounting SAF-T are regressive, which appears consistent with international studies, although those studies relate to different tax obligations. Additionally, we verified that the variables business volume, software used, number of employees, and legal form explain the differences in the costs of complying with the accounting SAF-T among Leiria district SMEs.
Keywords: compliance costs, SAF-T accounting, SMEs, Portugal
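The abstract above concerns exporting accounting records as an XML audit file. As a minimal sketch of what such an export involves, the snippet below builds a small SAF-T-like XML document with Python's standard library; the element names (`AuditFile`, `GeneralLedgerEntries`, `Journal`) are illustrative placeholders only and are not the official Portuguese SAF-T (PT) schema defined in the Portarias cited above.

```python
import xml.etree.ElementTree as ET

def build_saft_stub(fiscal_year, entries):
    """Build a minimal SAF-T-like XML document.

    Element names are hypothetical placeholders, not the official
    Portuguese SAF-T (PT) schema.
    """
    root = ET.Element("AuditFile")
    header = ET.SubElement(root, "Header")
    ET.SubElement(header, "FiscalYear").text = str(fiscal_year)
    ledger = ET.SubElement(root, "GeneralLedgerEntries")
    for journal_id, debit, credit in entries:
        journal = ET.SubElement(ledger, "Journal")
        ET.SubElement(journal, "JournalID").text = journal_id
        ET.SubElement(journal, "DebitAmount").text = f"{debit:.2f}"
        ET.SubElement(journal, "CreditAmount").text = f"{credit:.2f}"
    return ET.tostring(root, encoding="unicode")

# Two balancing ledger entries for fiscal year 2020 (made-up figures)
xml_text = build_saft_stub(2020, [("J001", 150.00, 0.00), ("J002", 0.00, 150.00)])
```

The real schema is far richer (customer, supplier, product, and tax tables), which is precisely why adapting accounting software carries the compliance costs the study measures.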
Procedia PDF Downloads 78
25390 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data
Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim
Abstract:
Smart-card data are expected to provide information on activity patterns as an alternative to conventional person-trip surveys. The focus of this study is a method that uses person-trip surveys as training data to supplement smart-card data, which do not record the purpose of each trip. We selected only features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, for training against the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution with strong performance. The validation results showed that the proposed method efficiently estimated trip purpose. GIS data for the station and the duration of stay at the destination were significant features in modeling trip purpose.
Keywords: activity pattern, data fusion, smart-card, XGBoost
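The data-fusion step described above (joining smart-card trip records with GIS attributes near the destination station to build classifier features) can be sketched as follows; the station attributes, feature names, and trip values are hypothetical illustrations, not the study's actual dataset, and the resulting feature rows would then be fed to a classifier such as XGBoost.

```python
# Hypothetical land-use attributes near each station (GIS side of the fusion)
GIS = {
    "S1": {"offices": 120, "shops": 15},
    "S2": {"offices": 10, "shops": 200},
}

def make_features(trips):
    """Fuse smart-card trip records with GIS data of the destination
    station to produce one feature row per trip."""
    rows = []
    for trip in trips:
        station = GIS[trip["dest"]]
        rows.append({
            "depart_hour": trip["depart_hour"],
            "stay_min": trip["stay_min"],        # duration of stay at destination
            "offices_nearby": station["offices"],
            "shops_nearby": station["shops"],
        })
    return rows

trips = [
    {"dest": "S1", "depart_hour": 8, "stay_min": 540},   # long stay near offices
    {"dest": "S2", "depart_hour": 14, "stay_min": 90},   # short stay near shops
]
features = make_features(trips)
```

Trip-purpose labels for such rows would come from the person-trip survey, matching the study's finding that stay duration and station GIS data are the most informative features.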
Procedia PDF Downloads 247
25389 Prediction of Boundary Shear Stress with Flood Plains Enlargements
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Rivers, our main source of water, are a form of open channel flow, and flow in an open channel presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends to a large extent on the flow of rivers, which are major sources of sediments and specific ingredients essential for human beings. During floods, part of a river's discharge is carried by the main channel and the rest by the flood plains. For such compound asymmetric channels, the flow structure becomes complicated due to momentum exchange between the main channel and the adjoining flood plains. The distribution of boundary shear among subsections provides insight into momentum transfer across the interface between the main channel and the flood plains. Obtaining accurate experimental data is very difficult because of the complexity of the problem. Hence, the CES software has been used to determine the shear stresses at different sections of an open channel having asymmetric flood plains on both sides of the main channel, and the results are compared with those for symmetric flood plains for various geometric shapes and flow conditions. Error analysis is also performed to assess the accuracy of the implemented model.
Keywords: depth-averaged velocity, non-prismatic compound channel, relative flow depth, velocity distribution
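As background to the boundary shear calculations discussed above, a first-order estimate for a subsection comes from the uniform-flow relation τ₀ = ρgRS, where R is the hydraulic radius of the subsection and S the bed slope. The sketch below applies this to a made-up main channel and flood plain geometry; it is only the textbook approximation, not the more detailed subsection method the CES software implements, and all dimensions are hypothetical.

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravitational acceleration (m/s^2)

def hydraulic_radius(area, wetted_perimeter):
    """Hydraulic radius R = A / P for a channel subsection."""
    return area / wetted_perimeter

def boundary_shear(area, wetted_perimeter, slope):
    """Average boundary shear stress tau0 = rho*g*R*S (Pa), uniform flow."""
    return RHO * G * hydraulic_radius(area, wetted_perimeter) * slope

# Hypothetical laboratory-scale subsections (areas in m^2, perimeters in m)
tau_main = boundary_shear(area=0.06, wetted_perimeter=0.50, slope=0.001)
tau_fp   = boundary_shear(area=0.02, wetted_perimeter=0.40, slope=0.001)
```

The main channel, with its larger hydraulic radius, carries the higher average shear; the momentum exchange at the main-channel/flood-plain interface is exactly what makes the real subsection distribution deviate from this simple estimate.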
Procedia PDF Downloads 177
25388 Performance of Osmotic Microbial Fuel Cell in Wastewater Treatment and Electricity Generation: A Critical Review
Authors: Shubhangi R. Deshmukh, Anupam B. Soni
Abstract:
Clean water and electricity are vital services needed in all communities. Biodegradation of wastewater contaminants and desalination technologies are the best available alternatives for addressing the global shortage of fresh water. The osmotic microbial fuel cell (OMFC) is a versatile technology that uses microorganisms (for biodegradation of organic waste) and membrane technology (for water purification) to treat wastewater and generate energy simultaneously. This technology combines the microbial fuel cell (MFC) and forward osmosis (FO) processes. An OMFC can deliver more electricity and clean water than an MFC with a regular proton exchange membrane. FO offers many improvements, such as high contaminant removal, lower operating energy, and a higher proton flux than pressure-driven membrane technologies. Lower concentration polarization reduces membrane fouling, providing osmotic water recovery without extra cost. In this review paper, we discuss the principle, mechanism, limitations, and applications of OMFC technology reported to date. We also interpret experimental data from the literature on water recovery and electricity generation as assessed for the different components of an OMFC. Producing electricity using OMFCs offers further scope for research and seems a promising route to wastewater treatment.
Keywords: forward osmosis, microbial fuel cell, osmotic microbial fuel cell, wastewater treatment
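The osmotic water recovery mentioned above is driven by the osmotic pressure difference across the FO membrane. A minimal sketch of the ideal driving force, using the van't Hoff relation π = iMRT and an idealized flux Jw = A(π_draw − π_feed), is shown below; the membrane permeability A, the draw and feed concentrations, and the neglect of concentration polarization are all simplifying assumptions, not measured OMFC values.

```python
R = 8.314  # universal gas constant, J/(mol*K)

def vant_hoff_pressure(molarity_mol_m3, ions, temp_k=298.0):
    """Osmotic pressure (Pa) from the van't Hoff relation pi = i*M*R*T."""
    return ions * molarity_mol_m3 * R * temp_k

def fo_water_flux(a_lm2h_bar, pi_draw_pa, pi_feed_pa):
    """Ideal FO water flux Jw = A*(pi_draw - pi_feed), with A in
    L/(m^2*h*bar); ignores concentration polarization and fouling."""
    return a_lm2h_bar * (pi_draw_pa - pi_feed_pa) / 1e5  # Pa -> bar

pi_draw = vant_hoff_pressure(1000.0, ions=2)  # ~1 M NaCl draw solution
pi_feed = vant_hoff_pressure(10.0, ions=2)    # dilute feed (wastewater side)
jw = fo_water_flux(1.0, pi_draw, pi_feed)     # assumed A = 1 L/(m^2*h*bar)
```

In a real OMFC the effective driving force is smaller than this ideal value because internal concentration polarization dilutes the draw side of the membrane, which is why the review highlights low concentration polarization as a key FO advantage.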
Procedia PDF Downloads 182
25387 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
To address memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutually exclusive (mutex) task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth in computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate the mutex tasks. Experiments show that generating mutually exclusive tasks effectively mitigates memorization overfitting in the MAML meta-learning algorithm.
Keywords: mutex task generation, data augmentation, meta-learning, text classification
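The relabeling idea described above can be illustrated with a small sketch: for each example, assign a label it does not carry in the original dataset, so the resulting task conflicts with the original data distribution and the meta-learner cannot rely on memorized input-label pairs. This is a simplified illustration of the principle, not the paper's full method (which couples one feature to multiple labels across tasks and adds key-data extraction).

```python
import random

def make_mutex_task(dataset, num_classes, seed=0):
    """Create a mutually exclusive task by giving each example a label
    drawn from the labels it does NOT have in the original dataset,
    making the task inconsistent with the original distribution."""
    rng = random.Random(seed)
    task = []
    for features, label in dataset:
        other_labels = [c for c in range(num_classes) if c != label]
        task.append((features, rng.choice(other_labels)))
    return task

# Toy text-classification data: (text, sentiment label)
data = [("cheap fast shipping", 1), ("terrible quality", 0)]
mutex = make_mutex_task(data, num_classes=2)
```

With two classes this simply flips every label; with more classes each example receives one of several conflicting labels, and the key-data extraction step would restrict relabeling to a selected subset to keep the computation bounded.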
Procedia PDF Downloads 143
25386 Production of Human BMP-7 with Recombinant E. coli and B. subtilis
Authors: Jong Il Rhee
Abstract:
The polypeptide representing the mature part of human BMP-7 was cloned and efficiently expressed in Escherichia coli and Bacillus subtilis, each of which showed a clear band for hBMP-7, a homodimeric protein with an apparent molecular weight of 15.4 kDa. Recombinant E. coli produced 111 pg hBMP-7/mg of protein upon IPTG induction. Recombinant B. subtilis produced 350 pg hBMP-7/ml of culture medium. The hBMP-7 was purified in two steps using an FPLC system with an ion exchange column and a gel filtration column. The hBMP-7 produced in this work stimulated alkaline phosphatase (ALP) activity in a dose-dependent manner, i.e., 2.5- and 8.9-fold at 100 and 300 ng hBMP-7/ml, respectively, demonstrating intact biological activity.
Keywords: B. subtilis, E. coli, fermentation, hBMP-7
Procedia PDF Downloads 441
25385 Efficient Utilization of Biomass for Bioenergy in Environmental Control
Authors: Subir Kundu, Sukhendra Singh, Sumedha Ojha, Kanika Kundu
Abstract:
The continuous decline of petroleum and natural gas reserves and the nonlinear rise of oil prices have brought about a realisation of the need for a change in our perpetual dependence on fossil fuels. Day-to-day increases in the consumption of crude and petroleum products have had a considerable impact on our foreign exchange reserves. Hence, an alternative resource for energy conversion (both liquid and gas) is essential as a substitute for conventional fuels. Biomass is that alternative for the present scenario: it can be converted into both liquid and gaseous fuels as well as other feedstocks for industry.
Keywords: bioenergy, biomass conversion, biorefining, efficient utilisation of night soil
Procedia PDF Downloads 406
25384 Exploring Digital Media's Impact on Sports Sponsorship: A Global Perspective
Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter
Abstract:
With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms on which to create brand associations, the changing media landscape and the shift of message control also mean that sports sponsors have to take into account the nature of, and consumer responses toward, these emerging digital media to devise effective marketing strategies. Utilizing a personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academia, the sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and consequences of engagement on brand behaviors to one-on-one micro-targeting; engagement by context, time, and space; and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged. The study observed an imbalanced strategic leveraging of technology and digital infrastructure.
While sports leagues have placed less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies like mobile/LBS tools, big data/user information, real-time and programmatic marketing, and social media activation. Regardless of the new media/platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology: that is, how sponsors effectively integrate social media/mobile/second screen into their existing legacy media sponsorship plan so that technology works for the experience/message instead of distracting fans. Additionally, technological advancement and the attention economy amplify the importance of consumer data gathering, but sports consumer data do not equate to loyalty or engagement. This study also affirms the benefit of digital media in offering viral and pre-event activations through storytelling well before the actual event, which is critical for leveraging brand association before and after. That is, sponsors now have multiple opportunities and platforms to tell stories about their brands over a longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today's sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels/platforms. The big challenge for sponsors is to play to each venue's or medium's specificity and its fit with the target audience, and not uniformly deliver the same message in the same format on different platforms/channels.
Keywords: digital media, mobile media, social media, technology, sports sponsorship
Procedia PDF Downloads 294
25383 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming
Authors: Milind Chaudhari, Suhail Balasinor
Abstract:
Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. Because of this, there is a significant chance that worldwide food production will drop by 40% in the next two decades. Vertical farming can aid food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight determined by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations are controlling the indoor environment by leveraging big data to enhance food quantity and quality.
Keywords: big data, IoT, vertical farming, indoor farming
Procedia PDF Downloads 175
25382 Empirical Evidence on the Need for Harmonization of Audit Criteria for Small Enterprises in India
Authors: Satinder Bhatia
Abstract:
The Limited Liability Partnership (LLP) was a concept introduced in India in 2009. Ever since, there has been rapid growth in the number of organizations registered as LLPs, outpacing the number of registrations as private companies. Among the benefits extended to LLPs, the fact that an audit is mandated only for LLPs with a turnover of at least Rs 40 lakhs or a capital contribution of Rs 25 lakhs has been a major attraction. As a result, only about 10 per cent of LLPs come under mandatory audit. Even for such companies, the accounting standards to be followed in preparing financial statements have not been specified. The Revised Indian Accounting Standards (Revised Ind AS), which are aligned with IFRS to a great extent, may apply to LLPs only under limited conditions. Thus, the veracity of even the audited financial statements of LLPs can be questioned. If, in future, these LLPs would like to list on a stock exchange to raise capital, there can be serious hurdles if investors do not find the financial statements reliable and consistent. LLPs are generally governed by country-specific rules in the area of accounts and audit; such rules vary across the UK, the EU, and the USA. Some countries have adopted the IFRS for SMEs, and since LLPs can be regarded as SMEs, they would come under the ambit of these IFRS provisions. Besides, as the scope of audit widens to cover qualitative information in addition to quantitative data, the audit of LLPs has acquired a new meaning and a new urgency, as demands for at least limited-purpose audits are arising from different stakeholders, including lenders, suppliers, customers, and joint venture partners.
Keywords: audit disclosures, audit quality, guidance for SMEs, non-audit services
Procedia PDF Downloads 156
25381 China Pakistan Economic Corridor: A Changing Mechanism in Pakistan
Authors: Komal Niazi, He Guoqiang
Abstract:
This paper focuses on the China Pakistan Economic Corridor (CPEC) as a changing mechanism in Pakistan. CPEC is an initiative under the larger One Belt One Road (OBOR) umbrella; it aims to provide a new corridor of trade for China and Pakistan and is expected to benefit the whole of the South Asian region. This study reveals that the significance of acculturation can never be overemphasized in the investigation of cross-cultural influences and the ways peoples of different ethnic identities learn to adjust to and accept the social attributes of a majority group in a multiethnic society. The study also examines the effects of acculturation, which can be seen at multiple levels through CPEC for both the Pakistani people and the Chinese people working on the project. China and Pakistan exchanged cultural and social patterns with each other. Some of the most perceptible group-level effects of acculturation include changes in food, clothing, and language. At the individual level, the process of acculturation refers to the socialization process by which local Pakistani people and the Chinese working in Pakistan adopted values, traditions, attitudes, states of mind, and practices. However, China has imposed discourse through economic power and language; CPEC dominates Pakistan's poor areas and changes their living conditions and their social and cultural values. People also claimed this acculturation was a great threat to their cultural values and religious beliefs. The main findings clearly ascertain that the research sought a conceptual understanding of how people perceive the acculturation process through CPEC. At the cultural level, collective activities and social institutions become adapted, and at the behavioral level, there are changes in a person's daily behavioral repertoire and sometimes experienced stress.
Anthropological data-collection methods were used, such as snowball and judgmental sampling and case studies.
Keywords: CPEC, acculturation process, language discourse, social norms, cultural values, religious beliefs
Procedia PDF Downloads 292