Search results for: patent analytics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 504

294 Microclimate Variations in Rio de Janeiro Related to Massive Public Transportation

Authors: Marco E. O. Jardim, Frederico A. M. Souza, Valeria M. Bastos, Myrian C. A. Costa, Nelson F. F. Ebecken

Abstract:

Urban public transportation in Rio de Janeiro is based on diesel-powered bus lines and four limited metro lines that serve only some neighborhoods. This work presents an infrastructure built to better understand microclimate variations related to massive urban transportation in specific areas of the city. The use of sensor nodes with small analytics capacity provides environmental information to the population and public services. Analysis of the data collected from a few small sensors positioned near heavy-traffic streets shows the harmful impact of poor bus route planning.

Keywords: big data, IoT, public transportation, public health system

Procedia PDF Downloads 215
293 An Analysis of Privacy and Security for Internet of Things Applications

Authors: Dhananjay Singh, M. Abdullah-Al-Wadud

Abstract:

The Internet of Things is a concept of a large-scale ecosystem of wireless actuators. The actuators are defined as the things in the IoT: those which contribute or produce data within the ecosystem. However, ubiquitous data collection, data security, privacy preservation, large-volume data processing, and intelligent analytics are some of the key challenges in IoT technologies. To address the security requirements, challenges, and threats in the IoT, we discuss a message authentication mechanism for IoT applications. Finally, we discuss a data encryption mechanism for authenticating messages before they propagate into IoT networks.
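
The abstract does not specify the exact authentication scheme; as a minimal sketch, the snippet below shows keyed-hash message authentication (HMAC-SHA256), a common choice for constrained IoT devices, with a hypothetical pre-shared device key:

```python
import hmac, hashlib, os, json

SECRET_KEY = os.urandom(32)  # hypothetical pre-shared key provisioned on the device

def sign_message(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can verify integrity and origin."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_message(msg: dict) -> bool:
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])  # constant-time comparison

signed = sign_message({"device": "sensor-17", "temp_c": 31.4})
assert verify_message(signed)
```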

Keywords: Internet of Things (IoT), message authentication, privacy, security

Procedia PDF Downloads 346
292 Value Chain Based New Business Opportunity

Authors: Seonjae Lee, Sungjoo Lee

Abstract:

Discovering new business opportunities is necessary to remain competitive in the current business environment. Companies survive rapidly changing industry conditions by adopting new business strategies and overcoming technological challenges. Traditionally, two methods are used to discover new businesses: the first gathers expert opinions on opportunities through qualitative analysis, and the second discovers new technologies through quantitative analysis of patent data. The second method, however, increases time and cost, and patent data is of restricted use for the purpose of discovering business opportunities. This study presents a model that provides new business opportunities in a form customized to a company's characteristics (sector, size, etc.) by adopting a value chain perspective, thereby contributing to the creation of new business opportunities. It utilizes the trademark database of the Korean Intellectual Property Office (KIPO) and the proprietary company information database of the Korea Enterprise Data (KED). These data are key to discovering new business opportunities through analysis of competitors' and advanced companies' trademarks (Module 1) and trading analysis of the competitors found in the KED (Module 2).

Keywords: value chain, trademark, trading analysis, new business opportunity

Procedia PDF Downloads 345
291 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is developing new features from stable soccer matches. The statistics of soccer matches and their odds, before and in-play, are represented in image format versus time, including halftime. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” implies that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then the match will end in their favor in a stable match. We could also accurately predict, before the match, whether fewer or more than 2.5 goals would be scored. We use Gradient Boosting Trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors' and punters' behavior and its statistical data, before issuing the prediction. The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can obtain over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market; top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
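
A minimal sketch of the LBP feature-extraction step, assuming (as the abstract describes) that match statistics and odds are laid out as a 2-D image over time; the array contents, sizes, and LBP parameters here are placeholders:

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Hypothetical stand-in: one match's statistics/odds rendered as a 2-D image,
# rows = tracked quantities, columns = match minutes (including halftime)
match_image = np.random.rand(16, 96)

P, R = 8, 1  # 8 neighbours at radius 1 (parameter choices are assumptions)
lbp = local_binary_pattern(match_image, P, R, method="uniform")

# The histogram of LBP codes becomes the feature vector that would later be
# fed to the Gradient Boosting Trees / decision-tree stability classifier
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(hist)
```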

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 212
290 Focusing of Technology Monitoring Activities Using Indicators

Authors: Günther Schuh, Christina König, Toni Drescher

Abstract:

One of the key factors for the competitiveness and market success of technology-driven companies is the timely provision of information about emerging technologies, changes in existing technologies, and relevant related changes in market structures and participants. Therefore, many companies conduct technology intelligence (TI) activities to ensure early identification of appropriate technologies and other (weak) signals. One base activity of TI is technology monitoring, defined as the systematic tracking of developments within a specified topic of interest, as well as related trends, over a long period of time. Due to the very large number of dynamically changing parameters within a company's technological and market environment, as well as their possible interdependencies, it is necessary to focus technology monitoring on specific indicators or other criteria that can point out technological developments and market changes. In addition to a literature review of existing approaches, which mainly propose patent-based indicators, this paper examines whether indicator systems from other fields, such as risk management or economic research, could be transferred to technology monitoring in order to enable efficient and focused technology monitoring for companies.

Keywords: technology forecasting, technology indicator, technology intelligence, technology management, technology monitoring

Procedia PDF Downloads 446
289 The Role of Technology in Transforming the Finance, Banking, and Insurance Sectors

Authors: Farid Fahami

Abstract:

This article explores the transformative role of technology in the finance, banking, and insurance sectors. It examines key technological trends such as AI, blockchain, data analytics, and digital platforms and their impact on operations, customer experiences, and business models. The article highlights the benefits of technology adoption, including improved efficiency, cost reduction, enhanced customer experiences, and expanded financial inclusion. It also addresses challenges like cybersecurity, data privacy, and the need for upskilling. Real-world case studies demonstrate successful technology integration, and recommendations for stakeholders emphasize embracing innovation and collaboration. The article concludes by emphasizing the importance of technology in shaping the future of these sectors.

Keywords: banking, finance, insurance, technology

Procedia PDF Downloads 48
288 The Synergistic Effects of Blockchain and AI on Enhancing Data Integrity and Decision-Making Accuracy in Smart Contracts

Authors: Sayor Ajfar Aaron, Sajjat Hossain Abir, Ashif Newaz, Md Mushfiqur Rahman

Abstract:

Investigating the convergence of blockchain technology and artificial intelligence, this paper examines their synergistic effects on data integrity and decision-making within smart contracts. By implementing AI-driven analytics on blockchain-based platforms, the research identifies improvements in automated contract enforcement and decision accuracy. The paper presents a framework that leverages AI to enhance transparency and trust, while blockchain ensures immutable record-keeping, culminating in significantly optimized operational efficiencies in various industries.

Keywords: artificial intelligence, blockchain, data integrity, smart contracts

Procedia PDF Downloads 10
287 Entropy Risk Factor Model of Exchange Rate Prediction

Authors: Darrol Stanley, Levan Efremidze, Jannie Rossouw

Abstract:

We investigate the predictability of the USD/ZAR (South African rand) exchange rate with sample entropy analytics for the period 2004-2015. We calculate sample entropy based on the daily data of the exchange rate and conduct an empirical implementation of several market timing rules based on these entropy signals. The dynamic investment portfolio based on entropy signals produces better risk-adjusted performance than a buy-and-hold strategy. The returns are estimated on the portfolio values in U.S. dollars. These results are preliminary and do not yet account for reasonable transaction costs, although these are very small in currency markets.
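
The paper's exact entropy parameters are not given; below is a minimal sample entropy implementation with the common defaults m = 2 and r = 0.2·std, plus a purely hypothetical threshold-based timing signal:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template pairs of length m and
    A pairs of length m+1 within Chebyshev tolerance r (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def pair_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        return sum(
            int(np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r))
            for i in range(len(t) - 1)
        )

    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A and B else np.inf

# Hypothetical timing rule: trade only when the series looks less random
returns = np.diff(np.log(np.random.rand(500) + 1.0))  # placeholder for USD/ZAR data
entropy = sample_entropy(returns)
signal = "enter" if entropy < 1.5 else "stay out"     # threshold is illustrative only
```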

Keywords: currency trading, entropy, market timing, risk factor model

Procedia PDF Downloads 245
286 Predicting the Success of Bank Telemarketing Using Artificial Neural Network

Authors: Mokrane Selma

Abstract:

The shift towards decision making (DM) based on artificial intelligence (AI) techniques will change the way in which consumer markets and our societies function. Through AI, predictive analytics is being used by businesses to identify patterns and major trends with the objective of improving DM and influencing future business outcomes. This paper proposes an Artificial Neural Network (ANN) approach to predict the success of telemarketing calls for selling bank long-term deposits. To validate the proposed model, we use bank marketing data from 41,188 phone calls. The ANN attains 98.93% accuracy, which outperforms other conventional classifiers and confirms that it is a credible and valuable approach for telemarketing campaign managers.
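
The abstract does not name the dataset or the network architecture, but the 41,188-call size matches the public UCI Bank Marketing data; a minimal sketch of a comparable ANN baseline (file path and hidden-layer sizes are assumptions):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import make_column_transformer
from sklearn.neural_network import MLPClassifier

# Hypothetical file path; 41,188 rows matches the UCI "bank-additional-full" set
df = pd.read_csv("bank-additional-full.csv", sep=";")
X, y = df.drop(columns="y"), (df["y"] == "yes")

cat_cols = X.select_dtypes(include="object").columns
pre = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), cat_cols),  # encode categoricals
    remainder=StandardScaler(),                          # scale numeric columns
)
model = make_pipeline(pre, MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
print(model.fit(X_tr, y_tr).score(X_te, y_te))  # held-out accuracy
```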

Keywords: bank telemarketing, prediction, decision making, artificial intelligence, artificial neural network

Procedia PDF Downloads 115
285 Social Data Aggregator and Locator of Knowledge (STALK)

Authors: Rashmi Raghunandan, Sanjana Shankar, Rakshitha K. Bhat

Abstract:

Social media contributes a vast amount of data and information about individuals to the internet. This project greatly reduces the need for manual analysis of large and diverse social media profiles by filtering out and combining the useful information from various profiles and eliminating irrelevant data. It differs from existing social media aggregators in that it does not provide a consolidated view of various profiles; instead, it provides consolidated information derived from the subject's posts and other activities. It also supports analysis and analytics across multiple profiles. We further provide a query system that returns a natural-language answer to questions when a user does not wish to go through the entire profile. The information provided can be filtered according to the use case at hand.

Keywords: social network, analysis, Facebook, LinkedIn, git, big data

Procedia PDF Downloads 417
284 Pharmaceutical Evaluation of Five Different Generic Brands of Prednisolone

Authors: Asma A. Ben Ahmed, Hajer M. Alborawy, Alaa A. Mashina, Pradeep K. Velautham, Abdulmonem Gobassa, Emhemmed Elgallal, Mohamed N. El Attug

Abstract:

Generic medicines are those for which patent protection has expired and which may be produced by manufacturers other than the innovator company. The use of generic medicines has been increasing in recent years, primarily as a cost-saving measure in healthcare provision; generic medicines are typically 20-90% cheaper than their originator equivalents. Physicians often continue to prescribe brand-name drugs to their patients even when less expensive, pharmacologically equivalent generic drugs are available. Because generics are less expensive than their brand-name counterparts, cost savings to the patient are not the only factor physicians consider when choosing between generic and brand-name drugs. Unfortunately, physicians in general, and Libyan physicians in particular, tend to prescribe brand-name drugs even without evidence of their therapeutic superiority, because neither they nor their insured patients bear these drugs' increased cost relative to generic substitutes. This study compares the quality of five different prednisolone tablets of the same strength, produced by different companies under different trade names (Julphar, October Pharma, Akums, Actavis, and Pfizer), against a pure prednisolone reference standard (BPCRS).

Keywords: quality control, pharmaceutical analysis, generic medicines, prednisolone

Procedia PDF Downloads 483
283 Big Data Strategy for Telco: Network Transformation

Authors: F. Amin, S. Feizi

Abstract:

Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. The analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big data presents new methods to reverse this trend and improve profitability. The benefits of big data and next-generation networks, however, extend well beyond improved customer relationship management. Next-generation networks are in a prime position to monetize rich supplies of customer information, while being mindful of legal and privacy issues. Transforming data assets into new revenue streams will become integral to high performance.

Keywords: big data, next generation networks, network transformation, strategy

Procedia PDF Downloads 331
282 Clinical Study of the Prunus dulcis (Almond) Shell Extract on Tinea capitis Infection

Authors: Nasreen Thebo, W. Shaikh, A. J. Laghari, P. Nangni

Abstract:

Prunus dulcis (almond) shell extract is demonstrated for its biomedical applications. The shell extract was prepared by the Soxhlet method and further characterized by UV-visible spectrophotometry, atomic absorption spectrophotometry (AAS), FTIR, and GC-MS. In this study, the antifungal activity of the almond shell extract was observed against clinically isolated pathogenic fungi by the strip method. The antioxidant potential of the crude shell extract was evaluated using the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging system. Short-term therapy required only 20 days. The total antioxidant activity varied from 94.38 to 95.49%, and the total phenolic content was found to be 4.455 mg/g in the almond shell extract. Finally, the results indicate great therapeutic potential against Tinea capitis infection of the scalp. The study provides scientific evidence for the clinical efficacy of the shell extract, which was also found useful in the treatment of dermatologic disorders and can be recommended for patenting.

Keywords: Tinea capitis, DPPH, FTIR, GC-MS, therapeutic treatment

Procedia PDF Downloads 349
281 PM Electrical Machines Diagnostic: Methods Selected

Authors: M. Barański

Abstract:

This paper presents several diagnostic methods designed for electrical machines, especially permanent magnet (PM) machines. Such machines are commonly used in small wind and water systems and in vehicle drives, and these methods are preferred by the author for the periodic diagnostics of electrical machines. Special attention is paid to the diagnostic methods for turn-to-turn insulation and for vibrations, both of which were created at the Institute of Electrical Drives and Machines Komel. The vibration diagnostic method is the main thesis of the author's doctoral dissertation; this method of determining the technical condition of a PM electrical machine based on its own signals is the subject of patent application No. P.405669. The method exploits a specific structural property of machines excited by permanent magnets: the electromotive force (EMF) generated due to vibrations. A number of publications describing vibration diagnostic methods and tests of electrical machines with permanent magnets were analysed, and no existing method was found that determines the technical condition of such a machine based on its own signals.
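
The patented method's details are not given in the abstract; as a generic illustration of the idea, here is a sketch of inspecting the spectrum of a vibration-induced EMF signal for components beyond the fundamental (all frequencies, amplitudes, and thresholds are assumptions):

```python
import numpy as np

fs = 10_000                       # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Hypothetical EMF signal: 50 Hz fundamental plus a small vibration-induced component
emf = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 137 * t)

spectrum = np.abs(np.fft.rfft(emf)) / len(t)   # normalized magnitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Flag components away from the fundamental that exceed a simple amplitude threshold
suspect = [(f, a) for f, a in zip(freqs, spectrum) if a > 0.01 and abs(f - 50) > 1]
print(suspect)  # -> the 137 Hz vibration-related component
```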

Keywords: electrical vehicle, generator, main insulation, permanent magnet, thermography, traction drive, turn-to-turn insulation, vibrations

Procedia PDF Downloads 366
280 The Relevance of Smart Technologies in Learning

Authors: Rachael Olubukola Afolabi

Abstract:

Immersive technologies, known as X Reality or Cross Reality and including virtual reality, augmented reality, and mixed reality, have pervaded the education system at all levels, from elementary school to adult learning. Instructors, instructional designers, and learning experience specialists continue to find new ways to engage students in the learning process using technology. While the progression of web technologies has enhanced digital learning experiences, analytics on learning outcomes continue to be explored to determine the relevance of these technologies in learning. Digital learning has evolved from web 1.0 (static) to 4.0 (dynamic and interactive), and this evolution of technologies has also advanced teaching methods and approaches. This paper explores how these technologies are being utilized in learning and the results that educators and learners have identified as effective learning opportunities and approaches.

Keywords: immersive technologies, virtual reality, augmented reality, technology in learning

Procedia PDF Downloads 115
279 Control the Flow of Big Data

Authors: Shizra Waris, Saleem Akhtar

Abstract:

Big data is a research area receiving attention from academia and the IT communities. In the digital world, the amounts of data produced and stored have grown enormously within a short period of time. Consequently, this fast-increasing rate of data has created many challenges. In this paper, we use the functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. This paper presents a complete discussion of state-of-the-art big data technologies based on group and stream data processing, and analyzes the strengths and weaknesses of these technologies. The study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies, based on important limitations, are also investigated. Emerging technologies are suggested as a solution for big data problems.

Keywords: computer, IT community, industry, big data

Procedia PDF Downloads 161
278 R Data Science for Technology Management

Authors: Sunghae Jun

Abstract:

Technology management (TM) is an important issue for a company seeking to improve its competitiveness. Among the many activities of TM, technology analysis (TA) is an important factor, because most technology management decisions are based on the results of TA. TA analyzes the developed results of a target technology using statistics or the Delphi method. Delphi-based TA depends on the experts' domain knowledge; in comparison, TA by statistics and machine learning algorithms uses objective data such as patents or papers instead of the experts' knowledge. Many quantitative TA methods based on statistics and machine learning have been studied and used for technology forecasting, technological innovation, and the management of technology, applying diverse computing tools and analytical methods case by case. It is not easy to select the suitable software and statistical method for a given TA task. So, in this paper, we propose a methodology for quantitative TA using the statistical computing software R and data science to construct a general framework for TA. Through a case study, we also show how our methodology is applied in a real field. This research contributes to R&D planning and technology valuation in TM areas.
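
The paper's framework is built in R; a typical first step in quantitative TA is extracting weighted terms from patent documents, sketched here in Python (the patent abstracts are invented examples, not from the study):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical patent abstracts for a target technology
patents = [
    "A lithium-ion battery electrode with silicon nanoparticles for higher capacity.",
    "Solid-state electrolyte composition for rechargeable lithium cells.",
    "Anode coating process reducing degradation in lithium-ion batteries.",
]

vec = TfidfVectorizer(stop_words="english", max_features=20)
tfidf = vec.fit_transform(patents)

# Rank terms by total TF-IDF weight across the corpus
for term, score in sorted(zip(vec.get_feature_names_out(), tfidf.sum(axis=0).A1),
                          key=lambda p: -p[1]):
    print(f"{term:20s}{score:.3f}")
```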

Keywords: technology management, R system, R data science, statistics, machine learning

Procedia PDF Downloads 432
277 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinder adoption into production. A common practice to mitigate the hallucination problem is a Retrieval Augmented Generation (RAG) system, which grounds LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts using vector similarity between the user's query and the documents, and then generates a response that is based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, a RAG system is not suitable for tabular data and subsequent data analysis tasks, for reasons such as information loss, data format, and the retrieval mechanism. In this study, we explore a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, convert them into executable segments of code, and, in the final step, generate the complete response from the output of the executed code. When deployed as a beta version on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it served market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
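
The paper's implementation details are not given; the sketch below outlines the described plan → generate code → execute → respond loop, with `llm()` as a stand-in for whatever chat-completion call is actually used:

```python
import subprocess

def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call; the paper's model is not specified."""
    raise NotImplementedError

def answer(question: str, csv_path: str) -> str:
    # 1. Planning: decompose the analytical question into ordered sub-tasks
    plan = llm(f"Break this analysis of {csv_path} into numbered steps: {question}")
    # 2. Code generation: turn the plan into an executable pandas script
    code = llm(f"Write a self-contained pandas script for these steps:\n{plan}\n"
               f"Read the data from {csv_path} and print the result.")
    # 3. Execution: run the generated code and capture its output
    result = subprocess.run(["python", "-c", code], capture_output=True, text=True)
    # 4. Response generation: compose the final answer from the execution output
    return llm(f"Question: {question}\nExecution output:\n{result.stdout}\n"
               f"Write the final answer for a non-technical user.")
```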

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, DataSense, PropertyGuru

Procedia PDF Downloads 42
276 Examining the Modular End of Line Control Unit Design Criteria for Vehicle Sliding Door System Slide Profile

Authors: Orhan Kurtuluş, Cüneyt Yavuz

Abstract:

End-of-line control of finished products in the automotive industry is important. Manual control methods for sliding door tracks are not sufficient: faulty products cannot be identified and, as a result, reach the customer. Within the scope of this study, the design criteria of a PLC-integrated modular end-of-line control unit were examined, and the unit was designed and manufactured to inspect 10 different track profiles for 2 different vehicles, with the objective of minimizing salvage costs by obtaining more sensitive, reliable, and accurate measurement results. The study began with a literature and patent review; the design inputs were then specified, the technical concept developed, and computer-aided mechanical design, control system and automation design, design review, and design improvement carried out. High-sensitivity analog laser sensors, probes, and modular blocks were used in the unit. Measurements conducted with the system show that the results are more sensitive than those of the previous methods.

Keywords: control unit design, end of line, modular design, sliding door system

Procedia PDF Downloads 408
275 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid

Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang

Abstract:

Ecosystems are networks of organisms and populations that form a community of various species interacting within their habitats. Such habitats are defined by abiotic and biotic conditions that establish the initial limits to a population's growth, development, and reproduction. The habitat's conditions explain the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite the importance of dispersal in population dynamics and survival, understanding the mechanisms underpinning the dispersal of organisms remains challenging. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by extrinsic factors such as its physiological state, climatic variables, and its ability to evade predation. Therefore, greater spatial detail is necessary to understand organism dispersal dynamics. Organism dispersal can be addressed using empirical and mechanistic modelling approaches, with the adopted approach depending on the study's purpose. Cellular automata (CA) are one such approach that has been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a grid of cells, each of which may be occupied by an individual, that evolves according to a set of neighbourhood rules. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms. The term "visual analytics" (VA) describes a semiautomated approach to electronic data processing that is guided by users who can interact with the data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized to the operator's needs. Additionally, this approach can enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of organism dispersal at the landscape scale, we therefore designed Pydisp, a free visual data analytics tool for spatiotemporal dispersal modelling built in Python. Its user interface allows users to perform quick, interactive spatiotemporal analyses of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through simple principles such as fuzzy cellular automata algorithms. The potential of dispersal modelling is demonstrated in a case study predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae), in Kenya. The results obtained from our example clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed using mechanistic modelling approaches. Furthermore, as demonstrated in the example, the software is highly effective in portraying the dispersal of organisms despite the unavailability of detailed data on the species' dispersal mechanisms.
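
Pydisp itself is not shown in the abstract; as a minimal sketch of the underlying idea, here is a fuzzy cellular automaton in which each cell carries an occupancy degree in [0, 1] and habitat suitability scales the spread to neighbours (the update rule and parameters are assumptions, not the tool's actual algorithm):

```python
import numpy as np

def step(occupancy: np.ndarray, suitability: np.ndarray, spread: float = 0.2):
    """One fuzzy-CA step: inflow = spread * mean neighbour occupancy * suitability
    (a product used as a fuzzy 'AND'); occupancy persists and never exceeds 1."""
    padded = np.pad(occupancy, 1)  # zero boundary instead of wraparound
    neighbours = sum(
        np.roll(np.roll(padded, dy, 0), dx, 1)[1:-1, 1:-1]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    ) / 8.0
    inflow = spread * neighbours * suitability
    return np.clip(np.maximum(occupancy, inflow), 0, 1)

grid = np.zeros((50, 50)); grid[25, 25] = 1.0   # single parasitoid release point
suit = np.random.rand(50, 50)                    # placeholder habitat-suitability layer
for _ in range(30):
    grid = step(grid, suit)
```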

Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal

Procedia PDF Downloads 54
274 Formulation of Optimal Shifting Sequence for Multi-Speed Automatic Transmission

Authors: Sireesha Tamada, Debraj Bhattacharjee, Pranab K. Dan, Prabha Bhola

Abstract:

The most important component in an automotive transmission system is the gearbox, which controls the speed of the vehicle. In an automatic transmission, the right positioning of actuators ensures an efficient transmission mechanism embodiment, and the challenge lies in formulating the number of actuators associated with modelling a gearbox. Data on actuation and gear shifting sequences were retrieved from the available literature, including patent documents, and used in the proposed heuristics-based methodology for modelling the actuation sequence in a gearbox. This paper presents a methodological approach to designing a gearbox so as to obtain an optimal shifting sequence. The computational model takes the number of stages and the gear tooth counts as input parameters, since these two determine the gear ratios in an epicyclic gear train. The proposed transmission schematic, or stick diagram, aids in developing the gearbox layout design. The number of iterations and the development time required to design a gearbox layout are reduced by using this approach.
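
To illustrate why the stage count and tooth counts determine the available ratios, here is a sketch of the standard Willis relation for a single epicyclic stage (the function and example numbers are illustrative, not from the paper):

```python
def planetary_ratio(z_sun: int, z_ring: int, fixed: str) -> float:
    """Speed ratio (input/output) of a simple epicyclic stage via the Willis equation:
    (w_sun - w_carrier) / (w_ring - w_carrier) = -z_ring / z_sun."""
    if fixed == "ring":      # sun in, carrier out
        return 1 + z_ring / z_sun
    if fixed == "sun":       # ring in, carrier out
        return 1 + z_sun / z_ring
    if fixed == "carrier":   # sun in, ring out (direction reverses)
        return -z_ring / z_sun
    raise ValueError(fixed)

# Example: a 30-tooth sun with a 90-tooth ring, ring held -> 4:1 reduction
print(planetary_ratio(30, 90, fixed="ring"))  # 4.0
```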

Keywords: automatic transmission, gear-shifting, multi-stage planetary gearbox, rank ordered clustering

Procedia PDF Downloads 291
273 Blame Classification through N-Grams in E-Commerce Customer Reviews

Authors: Subhadeep Mandal, Sujoy Bhattacharya, Pabitra Mitra, Diya Guha Roy, Seema Bhattacharya

Abstract:

E-commerce firms allow customers to evaluate and review the things they buy as a positive or negative experience. E-commerce transaction processes are made up of a variety of diverse organizations and activities that operate independently but are connected together to complete the transaction (from placing an order to the goods reaching the client). After a negative shopping experience, clients frequently disregard any critical assessment of these individual businesses and submit blanket feedback on the transaction as a whole, which benefits certain enterprises but is unfair to others. In this article, we deal solely with negative reviews and attempt to distinguish negative reviews in which the e-commerce firm is explicitly blamed by customers for a bad purchasing experience from other negative reviews.
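
A minimal sketch of n-gram-based blame classification; the reviews, labels, classifier choice, and n-gram range are all illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled negative reviews: 1 = platform explicitly blamed, 0 = other
reviews = ["courier lost my parcel and support never replied",
           "the shirt fabric feels cheap and thin",
           "site cancelled my order twice without reason",
           "product stopped working after a week"]
blame = [1, 0, 1, 0]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),   # unigrams through trigrams as features
    LogisticRegression(),
)
clf.fit(reviews, blame)
print(clf.predict(["order was delayed and nobody from the site helped"]))
```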

Keywords: e-commerce, online shopping, customer reviews, customer behaviour, text analytics, n-grams classification

Procedia PDF Downloads 227
272 Emerging Technology for Business Intelligence Applications

Authors: Hsien-Tsen Wang

Abstract:

Business Intelligence (BI) has long helped organizations make informed decisions based on data-driven insights and gain competitive advantages in the marketplace. In the past two decades, businesses have witnessed not only a dramatic increase in the volume and heterogeneity of business data but also the emergence of new technologies such as Artificial Intelligence (AI), the Semantic Web (SW), Cloud Computing, and Big Data. It is plausible that the convergence of these technologies would bring more value out of business data by establishing linked data frameworks and connecting data in ways that enable advanced analytics and improved data utilization. In this paper, we first review and summarize current BI applications and methodology. Emerging technologies that can be integrated into BI applications are then discussed. Finally, we conclude with a proposed synergy framework that aims at achieving a more flexible, scalable, and intelligent BI solution.

Keywords: business intelligence, artificial intelligence, semantic web, big data, cloud computing

Procedia PDF Downloads 69
271 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual's personality. Today's students will be tomorrow's citizens of the global society, and the education of the student is the edifice on which his or her future will be built; schools should therefore provide an all-round development of students so as to foster a healthy society. The behaviors and attitudes of students in school play an essential role in the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster negative attitudes and increase delinquent behavior, so the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students' behaviors in school, give necessary feedback, and mentor them to develop a positive attitude and become successful grownups. Nevertheless, measuring students' behavior and attitude is extremely challenging; no present technology has proven effective in this measurement process, because the actions, reactions, interactions, and responses of students are rarely captured as data, owing to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of students. This can be achieved by equipping schools with CCTV cameras that capture the facial expressions and interactions of the students inside and outside their classrooms. The real-time raw videos captured by the CCTV can be uploaded to the cloud over a network, where the video feeds are distributed across nodes in the same rack or on different racks of a Hadoop HDFS cluster. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce algorithm. The video frames are then compared with a benchmark database of good behavior. When misbehavior is detected, an alert message can be sent to the counseling department, which helps in mentoring the students and improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools in different parts of the world), big data techniques help with real-time analysis, computationally revealing patterns, trends, and associations, especially those relating to human behavior and interactions. They also handle data that cannot be analyzed by traditional software applications such as RDBMSs and OODBMSs, and have proven successful in handling human reactions with ease. Big data could therefore play a vital role in addressing this issue, and the effectiveness of the education process can be enhanced with the help of video analytics using the latest big data technology.
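
The proposal does not specify the MapReduce jobs; as a toy illustration of the frame-classification-and-count pattern it describes, with a stand-in classifier and fabricated frame records:

```python
from collections import Counter
from functools import reduce

# Hypothetical stand-in: classify_frame would wrap the pattern-recognition model
# that compares a frame against the "good behavior" benchmark database
def classify_frame(frame) -> str:
    return "misbehavior" if frame["agitation"] > 0.8 else "normal"

def map_phase(record):
    school, frame = record
    return ((school, classify_frame(frame)), 1)   # emit key-value pair

def reduce_phase(counts: Counter, keyed_one) -> Counter:
    counts[keyed_one[0]] += keyed_one[1]          # sum counts per (school, label)
    return counts

feed = [("school-A", {"agitation": 0.9}), ("school-A", {"agitation": 0.2}),
        ("school-B", {"agitation": 0.85})]
totals = reduce(reduce_phase, map(map_phase, feed), Counter())
print(totals)  # e.g. Counter({('school-A', 'misbehavior'): 1, ...})
```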

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 218
270 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects on the error between the calibration data and the model-generated outputs of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions were also illustrated. The solution spaces of the three methods were then examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends highly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs; DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE. In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction; in the PV case study, it generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making. While the methodology does not guarantee a more accurate result than those generated by the life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
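
A minimal Mamdani-type FIS of the kind described, built here with the scikit-fuzzy control API (an assumed library choice); the variable ranges, the three auto-generated membership functions, and the two rules are illustrative, not the paper's calibrated model:

```python
import numpy as np
from skfuzzy import control as ctrl

# Hypothetical universes: map irradiation and module efficiency to GWP
irradiation = ctrl.Antecedent(np.arange(800, 2401, 10), "irradiation")    # kWh/m2/yr
efficiency = ctrl.Antecedent(np.arange(0.10, 0.251, 0.005), "efficiency")
gwp = ctrl.Consequent(np.arange(10, 101, 1), "gwp")                        # g CO2-eq/kWh

for var in (irradiation, efficiency, gwp):
    var.automf(3)  # auto-generates 'poor', 'average', 'good' membership functions

rules = [
    # note: for GWP, 'poor' is the low end of the universe, i.e. a desirable outcome
    ctrl.Rule(irradiation["good"] & efficiency["good"], gwp["poor"]),
    ctrl.Rule(irradiation["poor"] | efficiency["poor"], gwp["good"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["irradiation"] = 1800
sim.input["efficiency"] = 0.20
sim.compute()
print(sim.output["gwp"])  # defuzzified GWP estimate
```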

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 130
269 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without data quality, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. It is therefore important for an organization to ensure that the data it uses is of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept was first introduced in 2020; its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their data. This paper discusses how a set of elements, including data mesh, serve as tools capable of increasing data quality. One of the key benefits of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit of data mesh is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity in responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable. By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn improves overall performance by enabling better business insights through improved reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data; this helps identify and address data quality problems quickly. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years: AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation, and integrating AI into data mesh can further enhance the quality of an organization's data. The concepts discussed above are illustrated by AEKIDEN's experience feedback. AEKIDEN is an international data-driven consultancy that has successfully implemented a data mesh approach; by sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 97
268 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world's data is generated by the financial industry, with global non-cash transactions estimated to have reached 708.5 billion. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. The integration and sharing of anonymised transactional data still operate in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also integrates downstream transaction-related data from merchants, enriching the available data product to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component is developed as a decentralised blockchain contract, with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features of user interactions on the platform. One of the platform's key features is enabling individuals to participate in and manage the personal data they generate. The framework was developed as a proof of concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and that individual's identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour correlated with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
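
The contract itself is not reproduced in the abstract; below is a minimal off-chain sketch of the consent-and-access rule the market layer enforces (all names and fields are hypothetical; the actual proof of concept is an Ethereum contract):

```python
from dataclasses import dataclass, field

@dataclass
class Listing:
    provider: str
    dataset_id: str
    price: float
    consenting_subjects: set = field(default_factory=set)
    buyers: set = field(default_factory=set)

    def grant_consent(self, subject_id: str):
        self.consenting_subjects.add(subject_id)   # data subject opts in

    def purchase(self, consumer_id: str):
        self.buyers.add(consumer_id)               # payment logic omitted

    def can_access(self, consumer_id: str, subject_id: str) -> bool:
        # A consumer sees a subject's records only if both sides have agreed
        return consumer_id in self.buyers and subject_id in self.consenting_subjects

listing = Listing("bank-x", "card-tx-2023", price=100.0)
listing.grant_consent("subject-42")
listing.purchase("fintech-a")
assert listing.can_access("fintech-a", "subject-42")
```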

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 47
267 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach

Authors: Jerry Q. Cheng

Abstract:

Currently, in analyzing large-scale recurrent event data, there are many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data is randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method, and the approach is applied to a large real dataset of repeated heart failure hospitalizations.
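
A minimal sketch of the split-fit-combine pattern with inverse-variance weighting; the per-subset "fit" here is a simple event-rate estimate standing in for the paper's parametric frailty model:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_subset(y: np.ndarray):
    """Stand-in for a parametric frailty model fit: returns a point estimate
    (event count per subject) and its estimated variance."""
    return y.mean(), y.var(ddof=1) / len(y)

data = rng.poisson(2.0, size=1_000_000)              # hypothetical event counts
subsets = np.array_split(rng.permutation(data), 20)  # random division into subsets

fits = [fit_subset(s) for s in subsets]
weights = np.array([1 / v for _, v in fits])         # inverse-variance weighting
estimates = np.array([e for e, _ in fits])
combined = np.sum(weights * estimates) / weights.sum()
print(combined)  # close to the full-data estimate of 2.0
```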

Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing

Procedia PDF Downloads 136
266 Inferring Cognitive Skill in Concept Space

Authors: Rania A. Aboalela, Javed I. Khan

Abstract:

This research presents a learning assessment theory, Cognitive Skill in Concept Space (CS2), to measure assessed knowledge in terms of the cognitive skill levels of concepts. Cognitive skill levels indicate, for example, whether a student has acquired a concept at the level of understanding, applying, or analyzing. The theory comprises three constructions: a graph paradigm of a semantic/ontological scheme, the concept states of the theory, and the assessment analytics, i.e., the process of estimating the sets of concept states at a certain skill level. A concept state indicates whether a student has already learned, is ready to learn, or is not ready to learn a concept at a certain skill level. An experiment is conducted to validate the CS2 theory.

Keywords: cognitive skill levels, concept states, concept space, knowledge assessment theory

Procedia PDF Downloads 286
265 A Case Study on the Impact of Technology Readiness in a Department of Clinical Nurses

Authors: Julie Delany

Abstract:

To thrive in today’s digital climate, it is vital that organisations adopt new technology and prepare for rising digital trends. This proves more difficult in government, where, traditionally, people lack change readiness. While individuals may have a desire to work smarter, this does not necessarily mean embracing technology. This paper discusses the rollout of an application into a small department of highly experienced nurses. The goal was both to streamline the department's workflow and to provide a platform for gathering essential business metrics. The biggest challenges were adoption and motivating the nurses to change their routines and learn new computer skills. Two-thirds struggled with the change and, as a result, some jeopardised the validity of the business metrics. The paper concludes with lessons learned and recommendations for similar projects.

Keywords: change ready, information technology, end-user, iterative method, rollout plan, data analytics

Procedia PDF Downloads 110