Search results for: real time data processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 40633

38353 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero-carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings via the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature, and iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks, specifically recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. First, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are threefold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
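
As a point of reference, a minimal sketch of this two-stage set-up (a temperature model feeding a demand model), assuming hourly data arranged into sliding windows; the synthetic series, window length, and layer sizes are illustrative stand-ins, not the paper's configuration:

```python
# Sketch of the two-stage idea: (1) forecast outdoor temperature 24 h
# ahead, (2) forecast thermal demand from past demand plus the
# temperature signal. Window sizes and layer widths are illustrative.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=48, horizon=24):
    """Slice a 1-D series into (lookback -> horizon) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback:i + lookback + horizon])
    return np.array(X)[..., None], np.array(y)

def lstm_forecaster(n_features, horizon=24):
    return tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(None, n_features)),
        tf.keras.layers.Dense(horizon),   # one output per forecast hour
    ])

# Synthetic stand-ins for hourly temperature and thermal demand.
temp = np.sin(np.linspace(0, 60, 2000)) + np.random.normal(0, 0.1, 2000)
demand = 10.0 - 5.0 * temp + np.random.normal(0, 0.2, 2000)

# Stage 1: temperature model trained on local weather history.
Xt, yt = make_windows(temp)
temp_model = lstm_forecaster(n_features=1)
temp_model.compile(optimizer="adam", loss="mse")
temp_model.fit(Xt, yt, epochs=2, verbose=0)

# Stage 2: demand model fed past demand plus the temperature channel;
# in deployment the future part of that channel comes from stage 1.
Xd, yd = make_windows(demand)
demand_model = lstm_forecaster(n_features=2)
demand_model.compile(optimizer="adam", loss="mse")
demand_model.fit(np.concatenate([Xd, Xt], axis=2), yd, epochs=2, verbose=0)
```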

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 142
38352 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates

Authors: Abdelaziz Fellah, Allaoua Maamir

Abstract:

We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm, extended with an affine variant of the Smith-Waterman similarity measure. We then present constant-, variable-, and function-threshold algorithms that work conceptually in a divide-merge filtering fashion for detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
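
A minimal sketch of the Monge-Elkan scoring structure underlying the MCF approach; here Python's standard-library SequenceMatcher stands in for the affine Smith-Waterman token measure used in the paper:

```python
# Monge-Elkan similarity: each token of one string is matched to its
# best-scoring token in the other, and the maxima are averaged.
# difflib.SequenceMatcher is a stand-in for the affine Smith-Waterman
# measure used in the paper; the outer structure is the same.
from difflib import SequenceMatcher

def token_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def monge_elkan(s1: str, s2: str) -> float:
    tokens1, tokens2 = s1.lower().split(), s2.lower().split()
    return sum(max(token_sim(t1, t2) for t2 in tokens2)
               for t1 in tokens1) / len(tokens1)

# Approximate duplicates score high despite token order and typos.
print(monge_elkan("John A. Smith", "Smith, John"))   # high
print(monge_elkan("John A. Smith", "Mary K. Jones")) # low
```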

Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery

Procedia PDF Downloads 387
38351 Application of Signature Verification Models for Document Recognition

Authors: Boris M. Fedorov, Liudmila P. Goncharenko, Sergey A. Sybachin, Natalia A. Mamedova, Ekaterina V. Makarenkova, Saule Rakhimova

Abstract:

In modern economic conditions, the correct recognition of a signature on digital documents, in order to verify an expression of will or to confirm a certain operation, is a relevant question. Additional complexity in processing arises from the dynamic variability of the signature for each individual, as well as from the way the information must be handled, because a signature constitutes biometric data. The article discusses the use of artificial intelligence models to improve the quality of signature confirmation in document recognition. An analysis of several possible options for using such models is carried out. The results of the study are given, showing that it is possible to correctly determine the authenticity of a signature even on small samples.

Keywords: signature recognition, biometric data, artificial intelligence, neural networks

Procedia PDF Downloads 148
38350 An Evaluation of the Impact of E-Banking on Operational Efficiency of Banks in Nigeria

Authors: Ibrahim Rabiu Darazo

Abstract:

The research was conducted on the impact of e-banking on the operational efficiency of banks in Nigeria, taking as a case study some selected banks (Diamond Bank Plc, GTBank Plc, and Fidelity Bank Plc). The research is a quantitative study which uses both primary and secondary sources of data. Questionnaires were used to obtain accurate data: 150 questionnaires were distributed among staff and customers of the three banks, and the data collected were analysed using the chi-square test, whereas the secondary data were obtained from relevant textbooks, journals and websites. It is clear from the findings that the use of e-banking has improved the efficiency of these banks in terms of providing efficient services to customers electronically, using internet banking, telephone banking and ATMs, and reducing the time taken to serve customers; e-banking allows new customers to open an account online, and customers have access to their accounts at all times, 24/7. E-banking provides access to customer information from the database, and the costs of cheques and postage were eliminated. The recommendations at the end of the research include: the banks should keep their electronic equipment up to date; e-fraud (internal and external) should be controlled; banks should employ qualified manpower; and biometric ATMs should be introduced to reduce fraud with ATM cards, as is done in other countries such as the USA.

Keywords: banks, electronic banking, operational efficiency of banks, biometric ATMs

Procedia PDF Downloads 333
38349 Valorization Cascade Approach of Fish By-Products towards a Zero-Waste Future: A Review

Authors: Joana Carvalho, Margarida Soares, André Ribeiro, Lucas Nascimento, Nádia Valério, Zlatina Genisheva

Abstract:

Following the exponential growth of the human population, a remarkable increase in the amount of fish waste has been produced worldwide. The fish processing industry generates a considerable amount of by-products, which represent a considerable environmental problem as well as an economic inefficiency, underscoring the urgent need for the adoption of sustainable practices. Accordingly, the reuse and valorisation of these by-products is a key process for marine resource preservation, and the transformative potential of utilizing fish processing waste to create industrial value is gaining recognition. Different added-value products can be recovered by the valorisation industries, whereas fishing companies can save the costs associated with the management of those wastes, with associated advantages not only in terms of economic income but also in terms of environmental impact. Fish processing by-products have numerous applications; the target portfolio of products includes fish oil, fish protein hydrolysates, bacteriocins, pigments, vitamins, collagen, and calcium-rich powder, targeting food products, additives, supplements, and nutraceuticals. This literature review focuses on the main valorisation routes for fish wastes and the different compounds with high commercial value obtained from fish by-products, together with their possible applications in different fields. Highlighting this potential within sustainable resource management strategies can play an important role in reshaping the fish processing industry, driving it towards a circular economy and, consequently, a more sustainable future.

Keywords: fish process industry, fish wastes, by-products, circular economy, sustainability

Procedia PDF Downloads 17
38348 Text Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking

Authors: Zayar Phyo, Ei Chaw Htoon

Abstract:

In a New-LSB-based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, the performance of that algorithm becomes slower, and memory overflow problems occur, when processing images of large dimensions. Therefore, the text-based shuffling algorithm aims to select only the necessary pixels, as positions for the hidden characters, at specific positions of an image according to the length of the input text. In this paper, an enhanced text-based shuffling algorithm is presented that uses the power of the GPU to achieve better performance. The proposed algorithm employs the OpenCL Aparapi framework, along with an XORShift kernel that includes a pseudo-random number generator (PRNG); the PRNG is applied to produce random numbers inside the OpenCL kernel. Experiments with the proposed algorithm, carried out on the GPU, show that it achieves faster processing speed and better efficiency, without disruption from unnecessary operating system tasks.
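
For reference, a minimal sketch of the two building blocks the abstract names, an XORShift PRNG and a Fisher-Yates shuffle driven by it, written in plain Python rather than as the OpenCL/Aparapi kernels used in the paper:

```python
# Fisher-Yates shuffle driven by a 32-bit XORShift PRNG (Marsaglia's
# 13/17/5 variant). Plain Python illustration, not the GPU kernels.
def xorshift32(state: int):
    """Yield an endless stream of 32-bit pseudo-random integers."""
    while True:
        state ^= (state << 13) & 0xFFFFFFFF
        state ^= state >> 17
        state ^= (state << 5) & 0xFFFFFFFF
        yield state

def fisher_yates(items, seed=2463534242):
    rng = xorshift32(seed)
    a = list(items)
    for i in range(len(a) - 1, 0, -1):   # walk backwards, swap with j <= i
        j = next(rng) % (i + 1)
        a[i], a[j] = a[j], a[i]
    return a

# Select as many pseudo-random pixel positions as there are characters
# to hide -- the selection idea behind the text-based shuffling scheme.
positions = fisher_yates(range(64 * 64))[:len("secret text")]
print(positions)
```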

Keywords: LSB-based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShift kernel

Procedia PDF Downloads 151
38347 A Study of the Trade-off Energy Consumption-Performance-Schedulability for DVFS Multicore Systems

Authors: Jalil Boudjadar

Abstract:

Dynamic Voltage and Frequency Scaling (DVFS) multicore platforms are promising execution platforms that enable high computational performance, lower energy consumption and flexibility in scheduling the system processes. However, the resulting interleaving and memory interference, together with per-core frequency tuning, make real-time guarantees hard to deliver. Besides, energy consumption represents a strong constraint for the deployment of such systems in energy-limited settings. Identifying the system configurations that achieve high performance and consume less energy while guaranteeing the system schedulability is a complex task in the design of modern embedded systems. This work studies the trade-off between energy consumption, core utilization and memory bottlenecks, and their impact on the schedulability of DVFS multicore time-critical systems with a hierarchy of shared memories. We build a model-based framework using the parameterized timed automata of UPPAAL to analyze the mutual impact of performance, energy consumption and schedulability of DVFS multicore systems, and demonstrate the trade-off on an actual case study.
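
The tension itself can be illustrated with the textbook CMOS approximation (dynamic power roughly proportional to V²f, execution time inversely proportional to f); the toy sketch below is not the paper's UPPAAL model, only the underlying trade-off it explores, with illustrative numbers:

```python
# Toy DVFS trade-off: raising frequency shortens task execution
# (helping schedulability) but raises dynamic power roughly as V^2 * f.
# All numbers are illustrative, not from the paper's case study.
cycles = 2.0e9          # work of a periodic task, in CPU cycles
period = 2.0            # task period / deadline, in seconds
# (frequency GHz, supply voltage V) pairs for the available DVFS states
dvfs_states = [(0.8, 0.9), (1.2, 1.0), (1.6, 1.1), (2.0, 1.25)]

for f_ghz, volt in dvfs_states:
    exec_time = cycles / (f_ghz * 1e9)          # seconds per job
    utilization = exec_time / period            # schedulable if <= 1
    power = volt**2 * f_ghz                     # relative dynamic power
    energy = power * exec_time                  # relative energy per job
    ok = "schedulable" if utilization <= 1.0 else "deadline miss"
    print(f"{f_ghz:.1f} GHz: U={utilization:.2f} ({ok}), "
          f"energy/job={energy:.2f} (relative)")
# The lowest frequency that still meets the deadline minimizes energy.
```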

Keywords: time-critical systems, multicore systems, schedulability analysis, energy consumption, performance analysis

Procedia PDF Downloads 107
38346 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - Rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is considered to be food which can be satisfied through farming. Farming is one of the major revenue generators for the Indian economy. Agriculture is not only considered a source of employment but also fulfils humans’ basic needs. So, agriculture is considered to be the source of employment and a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing Machine Learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities to make their availability in the market faster and more effective. This paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.).Due to climate changes, crop production is affected. Machine learning can analyse the changing patterns and come up with a suitable approach to minimize loss and maximize yield. Machine Learning algorithms/ models (regression, support vector machines, bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes which can be vital in increasing the productivity of the Agricultural Food Industry. It is to demonstrate vividly agricultural works under machine learning to sensor data. Machine Learning is the ongoing technology benefitting farmers to improve gains in agriculture and minimize losses. This paper discusses how the irrigation and farming management systems evolve in real-time efficiently. Artificial Intelligence (AI) enabled programs to emerge with rich apprehension for the support of farmers with an immense examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting

Procedia PDF Downloads 105
38345 Design and Development of a Computerized Medical Record System for Hospitals in Remote Areas

Authors: Grace Omowunmi Soyebi

Abstract:

A computerized medical record system is a collection of medical information about a person that is stored on a computer. One principal problem of most hospitals in rural areas is the use of a paper file management system for keeping records. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, and the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; this delay may cause something unexpected to happen to the patient. This data mining application is to be designed using a structured system analysis and design method, which will support a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of the computerized medical record system. The computerized system will replace the file management system and help to quickly retrieve a patient's record, with increased data security, access to clinical records for decision-making, and a reduced time for a patient to be attended to.

Keywords: programming, computing, data, innovation

Procedia PDF Downloads 119
38344 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database

Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang

Abstract:

For the past decades, immunohistochemistry (IHC) has been playing an important role in the diagnosis of human neoplasms by helping pathologists make clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and, finally, prognosis prediction. However, the IHC performed on various tumors in daily practice often shows conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For these reasons, we conceived the idea of developing an expert supporting system to help pathologists make better decisions in diagnosing human neoplasms with IHC results. We designed a probabilistic decision tree algorithm and tested it on real case data of lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. The probabilistic decision tree was designed based on Bayes' theorem, the computational process was programmed using MATLAB (The MathWorks, Inc., USA), and an IHC profile database (about 104 disease categories and 88 IHC antibodies) was prepared based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set using epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 of the 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for discordant cases was the similarity of the IHC profiles of two or three different neoplasms. The expert supporting system algorithm presented in this study is at an elementary stage and needs further optimization using more advanced technology, such as deep learning with data from real cases, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application, determining the IHC antibodies to use for a certain subset of differential diagnoses, might also be possible in the near future.
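
A minimal sketch of the Bayesian update at the heart of such a decision tree: starting from epidemiological priors, each IHC result rescales the probability of every candidate diagnosis. The disease names, markers, and likelihood values below are illustrative placeholders, not entries from the paper's database:

```python
# Bayes-theorem update over IHC results, the core step of the
# probabilistic decision tree. Priors and likelihoods are illustrative,
# not values from the paper's 104-category database.
priors = {"DLBCL": 0.30, "Follicular lymphoma": 0.20, "T-cell lymphoma": 0.10}
# P(marker positive | disease) for a few markers:
likelihood = {
    "CD20": {"DLBCL": 0.95, "Follicular lymphoma": 0.95, "T-cell lymphoma": 0.02},
    "CD3":  {"DLBCL": 0.02, "Follicular lymphoma": 0.02, "T-cell lymphoma": 0.95},
    "CD10": {"DLBCL": 0.40, "Follicular lymphoma": 0.85, "T-cell lymphoma": 0.05},
}

def update(posterior, marker, positive=True):
    for disease in posterior:
        p = likelihood[marker][disease]
        posterior[disease] *= p if positive else (1.0 - p)
    total = sum(posterior.values())
    return {d: v / total for d, v in posterior.items()}  # renormalize

post = dict(priors)
for marker, result in [("CD20", True), ("CD3", False), ("CD10", True)]:
    post = update(post, marker, result)

# Ranked presumptive diagnoses, as in the paper's top-three output:
for d, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"{d}: {p:.2f}")
```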

Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree

Procedia PDF Downloads 224
38343 Effect of Different Processing Methods on the Quality Attributes of Pigeon Pea Used in Bread Production

Authors: B. F. Olanipekun, O. J. Oyelade, C. O. Osemobor

Abstract:

Pigeon pea is a very good source of protein and micronutrients, but it is underutilized in Nigeria because of several constraints. This research considered the effect of different processing methods on the quality attributes of pigeon pea used in bread production, towards enhancing its utility. Pigeon pea was obtained at a local market and processed into flour using three processing methods: soaking, sprouting and roasting, and the flours were used to bake bread in different proportions. The chemical composition and sensory attributes of the breads were thereafter determined. The highest values of protein and ash contents were obtained from 20% substitution of sprouted pigeon pea in wheat flour, which may be attributable to complex biochemical changes occurring during hydration that lead to the breakdown of protein constituents. Hydrolytic activity of the enzymes in the sprouted sample improved the total protein content, probably due to a reduction in the carbohydrate content. Sensory analyses showed that bread produced with soaked and roasted pigeon pea flours at 5 and 10% inclusion, respectively, was more accepted than other blends, and products with sprouted pigeon pea flour were least accepted. The findings of this research suggest that supplementing wheat flour with sprouted pigeon pea has more nutritional potential. However, according to the sensory indices, soaked and roasted pigeon pea flours at up to 10% inclusion are widely accepted and can also improve nutritional status. Overall, this will be very beneficial to populations dependent on plant protein in combating malnutrition.

Keywords: pigeon pea, processing, protein, malnutrition

Procedia PDF Downloads 250
38342 Impacts of Building Design Factors on Auckland School Energy Consumptions

Authors: Bin Su

Abstract:

This study focuses on the impact of school building design factors on winter extra energy consumption, which mainly includes space heating, water heating and other appliances related to winter indoor thermal conditions. A number of Auckland schools were randomly selected for the study, which introduces a method of using real monthly energy consumption data for a year to calculate the winter extra energy data of school buildings. The study seeks to identify the relationships between the winter extra energy data and school building design data related to the main architectural features, building envelope and elements of the sample schools. These relationships can be used to estimate the approximate saving in winter extra energy consumption that would result from a changed design datum in future school development, and to identify any major energy-efficient design problems. The relationships are also valuable for developing passive design guides for school energy efficiency.

Keywords: building energy efficiency, building thermal design, building thermal performance, school building design

Procedia PDF Downloads 444
38341 Intelligent Chatbot Generating Dynamic Responses Through Natural Language Processing

Authors: Aarnav Singh, Jatin Moolchandani

Abstract:

The proposed research work aims to build a query-based AI chatbot that can answer questions on any topic. A chatbot is software that converses with users via text messages. The proposed system generates a response based on the user's query: natural language processing is used to analyze the query and a set of texts, from which a concise answer is formed. The texts are obtained through web scraping, filtering a web search for credible sources. The objective of this project is to provide a chatbot that gives simple and accurate answers without the user having to read through a large number of articles and websites in detail. In addition to exploring the reasons for the broad acceptance of chatbots and their usefulness across many industries, the article also offers an overview of the interest in chatbots throughout the world.
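
A minimal sketch of the retrieve-and-answer idea, using scikit-learn TF-IDF similarity to pick the text that best matches the query; the scraping and credibility-filtering stage described above is assumed to have already produced the `documents` list:

```python
# Pick the text most relevant to a user query from scraped sources,
# using TF-IDF cosine similarity. The web-scraping / source-filtering
# stage is assumed to have produced `documents` already.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "LSTM networks capture long-range temporal dependencies.",
    "District heating distributes heat from a central plant.",
    "TF-IDF weighs terms by frequency and rarity across documents.",
]

def answer(query: str) -> str:
    vec = TfidfVectorizer().fit(documents + [query])
    doc_matrix = vec.transform(documents)
    q_vector = vec.transform([query])
    best = cosine_similarity(q_vector, doc_matrix).argmax()  # closest text
    return documents[best]

print(answer("How does TF-IDF work?"))
```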

Keywords: chatbot, artificial intelligence, natural language processing, web scraping

Procedia PDF Downloads 66
38340 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall (RF) records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in RF intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of the hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. According to the results of the applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
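
A minimal sketch of the warning layer described above: each predicted hourly flow is mapped onto the three severity levels. The threshold values are placeholders; the paper's AI model and its calibrated thresholds for the Selangor basin would take their place:

```python
# Map a predicted hourly stream flow onto the three severity levels
# (alert, warning, danger) used for early warning. Threshold values
# are placeholders, not the paper's calibrated figures.
THRESHOLDS = [(300.0, "danger"), (200.0, "warning"), (120.0, "alert")]

def severity(q_predicted: float) -> str:
    """Return the warning level for a predicted flow (m^3/s)."""
    for limit, level in THRESHOLDS:     # ordered from most to least severe
        if q_predicted >= limit:
            return level
    return "normal"

# e.g. hourly predictions from the AI model over the forecast horizon
for q in [95.0, 150.0, 240.0, 330.0]:
    print(f"Q={q:6.1f} m^3/s -> {severity(q)}")
```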

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 248
38339 A Hybrid Data Mining Algorithm Based System for Intelligent Defence Mission Readiness and Maintenance Scheduling

Authors: Shivam Dwivedi, Sumit Prakash Gupta, Durga Toshniwal

Abstract:

Keeping defence forces in the highest state of combat readiness under budgetary constraints is a challenging task today. A huge amount of time and money is squandered on unnecessary and expensive traditional maintenance activities. To overcome this limitation, the Defence Intelligent Mission Readiness and Maintenance Scheduling System has been proposed, which ameliorates the maintenance system by diagnosing equipment condition and predicting maintenance requirements. Based on new data mining algorithms, this system intelligently optimises mission readiness for imminent operations and maintenance scheduling in repair echelons. With modified data mining algorithms such as the Weighted Feature Ranking Genetic Algorithm and an SVM-Random Forest linear ensemble, it improves reliability, availability and safety, alongside reducing maintenance cost and Equipment Out of Action (EOA) time. The results clearly show that the introduced algorithms have an edge over conventional data mining algorithms. By utilizing the intelligent condition-based maintenance approach, the system improves the operational and maintenance decision strategy of the defence force.

Keywords: condition based maintenance, data mining, defence maintenance, ensemble, genetic algorithms, maintenance scheduling, mission capability

Procedia PDF Downloads 297
38338 A Comparative Assessment of Membrane Bioscrubber and Classical Bioscrubber for Biogas Purification

Authors: Ebrahim Tilahun, Erkan Sahinkaya, Barış Çallı

Abstract:

Raw biogas is a valuable renewable energy source; however, it usually requires the removal of impurities. The presence of hydrogen sulfide (H2S) in biogas has detrimental corrosion effects on cogeneration units. Removal of H2S from the biogas can therefore significantly improve the biogas quality. In this work, a conventional bioscrubber (CBS) and a dense membrane bioscrubber (DMBS) were comparatively evaluated in terms of H2S removal efficiency (RE), CH4 enrichment and alkaline consumption, at gas residence times ranging from 5 to 20 min. Both bioscrubbers were fed with a synthetic biogas containing H2S (1%), CO2 (39%) and CH4 (60%). The results show that a high RE (98%) was obtained in the DMBS when the gas residence time was 20 min, whereas a slightly lower CO2 RE was observed. In the CBS system, the outlet H2S concentration was always lower than 250 ppmv and the H2S RE remained higher than 98% regardless of the gas residence time, although the high alkaline consumption and frequent absorbent replacement limited its cost-effectiveness. The results also indicate that in the DMBS, when the gas residence time increased to 20 min, the CH4 content of the treated biogas was enriched up to 80%. However, while operating the CBS unit, the CH4 content of the raw biogas (60%) decreased threefold. The lower CH4 content in the CBS was probably caused by extreme dilution of the biogas with air (N2 and O2). According to the results obtained here, the DMBS system is a robust and effective biotechnology in comparison with the CBS. Hence, the DMBS has better potential for real-scale applications.

Keywords: biogas, bioscrubber, desulfurization, PDMS membrane

Procedia PDF Downloads 226
38337 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight stems mostly from cognitive research and the psychological literature; discussion of it in transport-adjacent fields has been scarce. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, social requirements, etc. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior and thus have an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be incurred essentially because travelers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding, non-compensatory heuristic models are considered as an alternative to simulate travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? Which behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. The data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario is created comprising multiple uncertain events affecting the activity or the travel. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling, considering the factor of time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in an inaccurate forecast of choice probability and overestimate responsiveness to policy changes.

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 112
38336 Integrating RAG with Prompt Engineering for Dynamic Log Parsing and Anomaly Detections

Authors: Liu Lin Xin

Abstract:

With the increasing complexity of systems, log parsing and anomaly detection have become crucial for maintaining system stability. However, traditional methods often struggle with adaptability and accuracy, especially when dealing with rapidly evolving log content and unfamiliar domains. To address these challenges, this paper proposes an approach that integrates Retrieval-Augmented Generation (RAG) technology with prompt engineering for large language models, applied specifically in LogPrompt. This approach enables dynamic log parsing and intelligent anomaly detection by combining real-time information retrieval with prompt optimization. The proposed method significantly enhances the adaptability of log analysis and improves the interpretability of results. Experimental results on several public datasets demonstrate the method's superior performance, particularly in scenarios lacking training data, where it significantly outperforms traditional methods. This paper thus introduces a novel technical pathway for log parsing and anomaly detection, with substantial theoretical value and practical potential.
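
A minimal sketch of the RAG-plus-prompt-engineering loop described above: retrieve the stored log templates most similar to an incoming line, then fold them into a structured prompt for an LLM. The embedding function, template store, and `llm` call are placeholder assumptions, not the paper's LogPrompt implementation:

```python
# Retrieval-augmented prompt for log parsing / anomaly detection:
# fetch the stored templates most similar to the incoming log line and
# embed them in the prompt. `embed` and `llm` are placeholders for a
# real embedding model and LLM client, not the paper's code.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash tokens into a fixed-size bag vector."""
    v = np.zeros(64)
    for tok in text.lower().split():
        v[hash(tok) % 64] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

template_store = [
    "Connection from <IP> closed (normal)",
    "Failed password for <USER> from <IP> (suspicious)",
    "Block <ID> replicated to <NODE> (normal)",
]
index = np.stack([embed(t) for t in template_store])

def build_prompt(log_line: str, k: int = 2) -> str:
    scores = index @ embed(log_line)           # cosine; vectors are unit norm
    top = [template_store[i] for i in np.argsort(-scores)[:k]]
    context = "\n".join(f"- {t}" for t in top)
    return (f"Known templates:\n{context}\n\n"
            f"Log line: {log_line}\n"
            f"Task: give the matching template (or NEW) and label it "
            f"normal or anomalous. Answer in JSON.")

print(build_prompt("Failed password for root from 10.0.0.5"))
# response = llm(build_prompt(line))   # hypothetical LLM call
```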

Keywords: log parsing, anomaly detection, RAG, prompt engineering, LLMs

Procedia PDF Downloads 35
38335 U.S. Trade and Trade Balance with China: Testing for Marshall-Lerner Condition and the J-Curve Hypothesis

Authors: Anisul Islam

Abstract:

The U.S. has a very strong trade relationship with China, but with a large and persistent trade deficit. Some have argued that the undervalued Chinese yuan is to blame for the persistent deficit; the empirical results are mixed at best. This paper empirically estimates the U.S. export function along with the U.S. import function for its trade with China, with the purpose of testing for the existence of the Marshall-Lerner (ML) condition as well as for the possible existence of the J-curve hypothesis. Annual export and import data will be utilized for as long a period as the time series data exist. The export and import functions will be estimated using advanced econometric techniques, with appropriate diagnostic tests performed to examine the validity and reliability of the estimated results. The annual time series covers 1975 to 2022, a sample of 48 years, the longest period utilized in any previous study. The data are collected from several sources, such as the World Bank's World Development Indicators, IMF Financial Statistics, IMF Direction of Trade Statistics, and several other sources. The paper is expected to shed important light on the ongoing debate regarding the persistent U.S. trade deficit with China and the policies that may be useful in reducing such deficits over time. As such, the paper will be of great interest to academics, researchers, think tanks, global organizations, and policy makers in both China and the U.S.
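
For reference, the Marshall-Lerner condition being tested can be stated in its standard textbook form (the notation below is generic, not taken from the paper):

```latex
% Marshall-Lerner condition (standard textbook form): starting from
% balanced trade, a real depreciation improves the trade balance iff
% the export and import demand elasticities are jointly large enough.
\[
  \lvert \varepsilon_X \rvert + \lvert \varepsilon_M \rvert > 1,
  \qquad
  \varepsilon_X = \frac{\partial \ln X}{\partial \ln e}, \qquad
  \varepsilon_M = \frac{\partial \ln M}{\partial \ln e},
\]
% where X and M are export and import volumes and e is the real exchange
% rate. The J-curve hypothesis adds a time dimension: the balance worsens
% at first and improves only as these elasticities grow in the long run.
```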

Keywords: exports, imports, Marshall-Lerner condition, J-curve hypothesis, United States, China

Procedia PDF Downloads 64
38334 Exchange Rate Forecasting by Econometric Models

Authors: Zahid Ahmad, Nosheen Imran, Nauman Ali, Farah Amir

Abstract:

The objective of the study is to forecast the US dollar and Pakistani rupee exchange rate using time series models. For this purpose, daily US-Pakistan exchange rates for the period January 1, 2007 - June 2, 2017 are employed. The data set is divided into an in-sample and an out-of-sample part, where the in-sample data are used to estimate the models and generate forecasts, whereas the out-of-sample data set is used to evaluate the exchange rate forecasts. The ADF and PP tests are used to check the stationarity of the time series. To forecast the exchange rate, the ARIMA model and the GARCH model are applied. Among the different Autoregressive Integrated Moving Average (ARIMA) models, the best model is selected on the basis of selection criteria. Due to volatility clustering and the ARCH effect, the GARCH(1,1) model is also applied. The results of the analysis showed that ARIMA(0,1,1) and GARCH(1,1) are the most suitable models for forecasting the future exchange rate. Further, the GARCH(1,1) model captured the volatility, with non-constant conditional variance in the exchange rate, and showed good forecasting performance. This study is very useful for researchers, policymakers, and businesses, as accurate and timely forecasting of the exchange rate supports their decisions and helps them in devising their policies.
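
A minimal sketch of the two models named above, fitted on synthetic data with statsmodels and the arch package; order selection, the ADF/PP pretests, and the actual PKR/USD series are omitted:

```python
# ARIMA(0,1,1) on a (synthetic) exchange-rate level and GARCH(1,1) on
# its returns -- the two models the abstract selects. Real use would
# first run ADF/PP tests and choose orders by information criteria.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(0)
# Synthetic stand-in for the daily PKR/USD rate: a random walk.
rate = pd.Series(100 + np.cumsum(rng.normal(0, 0.3, 1500)))

# Mean model: ARIMA(0,1,1), i.e. MA(1) on the differenced series.
arima_fit = ARIMA(rate, order=(0, 1, 1)).fit()
print(arima_fit.forecast(steps=5))          # 5-day-ahead point forecasts

# Volatility model: GARCH(1,1) on percentage returns.
returns = 100 * rate.pct_change().dropna()
garch_fit = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
print(garch_fit.forecast(horizon=5).variance.iloc[-1])  # conditional variance
```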

Keywords: exchange rate, ARIMA, GARCH, PAK/USD

Procedia PDF Downloads 562
38333 An ERP Study of Chinese Pseudo-Object Structures

Authors: Changyin Zhou

Abstract:

Verb-argument relations are a very important aspect of syntax-semantics interaction in sentence processing. Previous ERP (event-related potentials) studies in this field mainly concentrated on the relation between the verb and its core arguments. The present study aims to reveal the ERP pattern of Chinese pseudo-object structures (SOSs), in which a peripheral argument is promoted to occupy the position of the patient object, as compared with patient object structures (POSs). The ERP data were collected while participants performed acceptability judgments on Chinese phrases. Our results show that, similar to previous studies of number-of-argument violations, Chinese SOSs elicit a bilaterally distributed N400 effect. But unlike all previous studies of verb-argument relations, Chinese SOSs also elicit a sustained anterior positivity (SAP). This SAP, the first reported effect related to the complexity of argument structure operations, reflects the integration difficulty of the newly promoted arguments and the progressive nature of well-formedness checking in the processing of Chinese SOSs.

Keywords: Chinese pseudo-object structures, ERP, sustained anterior positivity, verb-argument relation

Procedia PDF Downloads 434
38332 Creative Potential of Children with Learning Disabilities

Authors: John McNamara

Abstract:

Growing up creative is an important idea in today's classrooms. As education seeks to prepare children for their futures, it is important that the system considers non-traditional as well as traditional pathways. This poster describes the findings of a research study investigating creative potential in children with learning disabilities. Children with learning disabilities were administered the Torrance Test of Creative Problem Solving along with subtests from the Comprehensive Test of Phonological Processing. A quantitative comparative analysis was computed using paired-sample t-tests. Results indicated a statistically significant difference between children's creative problem-solving skills and their reading-based skills. The results lend support to the idea that children with learning disabilities have inherent strengths in the area of creativity. It can be hypothesized that the success of these children may be associated with their use of a type of neurological processing that is not otherwise used in academic tasks. Children with learning disabilities, presumed to have a left-hemisphere processing problem, process information with the right side of the brain, even for tasks that would normally be processed on the left (i.e., language). In over-using the right hemisphere, it is hypothesized, children with learning disabilities develop well-exercised right-hemisphere capacities and, as such, have strengths associated with this type of processing, such as innovation and creativity. The current study lends support to the notion that children with learning disabilities may be particularly primed to succeed in areas that call on creativity and creative thinking.

Keywords: learning disabilities, educational psychology, education, creativity

Procedia PDF Downloads 70
38331 Radio Frequency Identification (RFID) Cost-Effective, Location-Based System for Managing Construction Materials

Authors: Mourad Bakouka, Abdelaziz Rabehi

Abstract:

Companies need logistics and transportation arrangements that can adapt to the changing nature of construction sites, ensuring they can react quickly when needed. A study was conducted to develop a way to locate and track materials on construction sites, based on a system integrating RFID and GPS. The study also reports how the platform has been used in construction, where it showed many advantages, including reductions in both time and cost as well as improved management of material orders. For example, the time in which a project could start up was shortened from two weeks to three days with just a single digital order. As of now, widespread adoption of the technology is still limited, due largely to an overall lack of awareness and the difficulty of connecting to it. However, as more and more construction companies embrace it, the technology is expected to become ubiquitous. The developed platform provides contractors and construction managers with real-time information about the status of materials and work, allowing them to better manage the workflow in a project. The study sheds new light on this subject and reflects the growing awareness of the use of smart tools in building construction.

Keywords: materials management, internet of things (IoT), radio frequency identification (RFID), construction site, supply chain management

Procedia PDF Downloads 82
38330 Improving Cost and Time Control of Construction Projects Management Practices in Nigeria

Authors: Mustapha Yakubu, Ahmed Usman, Hashim Ambursa

Abstract:

This paper presents the findings of research which sought to investigate techniques used to improve cost and time control in construction project management practice in Nigeria; there is limited research on issues surrounding the practical usage of these techniques. Data were collected through a questionnaire distributed to construction experts in a survey of 100 construction organisations and 50 construction consultancy firms in Nigeria, aimed at identifying common project cost and time control practices and the factors inhibiting effective project control in practice. The study reveals that, despite the vast application of control techniques, a high proportion of respondents still experienced cost and time overruns on a significant proportion of their projects. Analysis of the survey results concluded that more effort should be directed at the management of the identified top project-control-inhibiting factors. This paper outlines some measures for mitigating these inhibiting factors so that the outcome of project time and cost control can be improved in practice.

Keywords: construction project, cost control, Nigeria, time control

Procedia PDF Downloads 314
38329 Effect of Pack Aluminising Conditions on βNiAl Coatings

Authors: A. D. Chandio, P. Xiao

Abstract:

In this study, nickel aluminide coatings were deposited onto CMSX-4 single crystal superalloy and pure Ni substrates using an in-situ chemical vapour deposition (CVD) technique. The microstructural evolution and coating thickness (CT) were studied upon variation of the processing conditions, i.e., time and temperature. The results demonstrated (under identical conditions) that the coating formed on pure Ni contains no substrate entrapments and has a lower CT in comparison to the one deposited on the CMSX-4 counterpart. In addition, the interdiffusion zone (IDZ) of the Ni substrate is γ'-Ni3Al, whereas in the CMSX-4 alloy it is the βNiAl phase. The higher CT on the CMSX-4 superalloy is attributed to the presence of the γ-Ni/γ'-Ni3Al structure, which contains ~15 at.% Al before deposition (already present in the superalloy). The two main deposition parameters (time and temperature) were also studied, in addition to the standard comparison of substrate effects. The coating formation time was found to have a profound effect on CT, whilst temperature was found to change the coating activities. In addition, the CT showed a linear trend from 800 to 1000 °C; thereafter a reduction was observed, attributed to the change in coating activities.

Keywords: βNiAl, in-situ CVD, CT, CMSX-4, Ni, microstructure

Procedia PDF Downloads 240
38328 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: ’Reddit’

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification (NLI) is one of the growing subfields in natural language processing (NLP). The task of NLI is mainly concerned with predicting the native language of an author writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (using social media data from Reddit). In this NLI task, the predefined models are trained on one corpus (TOEFL), and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used in this task: a baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within the corpus and across corpora.
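
A minimal sketch of this cross-corpus set-up with scikit-learn: train an SVM and logistic regression on one corpus, evaluate on another. The tiny inline sentences stand in for the TOEFL (train) and Reddit (test) corpora, and character n-grams stand in for the paper's feature sets:

```python
# Train-on-one-corpus, test-on-another NLI set-up with scikit-learn.
# Inline sentences and native-language labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["I am agree with this idea", "He have many informations",
               "The weather is more better today", "She gave me good advices"]
train_langs = ["L1-A", "L1-B", "L1-A", "L1-B"]   # authors' native languages
test_texts = ["I am agree that it is true", "He have made many mistake"]
test_langs = ["L1-A", "L1-B"]

for clf in (LinearSVC(), LogisticRegression(max_iter=1000)):
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)), clf)
    model.fit(train_texts, train_langs)          # train on corpus A (TOEFL)
    acc = model.score(test_texts, test_langs)    # evaluate on corpus B (Reddit)
    print(type(clf).__name__, "cross-corpus accuracy:", acc)
```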

Keywords: NLI, NLP, content-based features, content-independent features, social media corpus, ML

Procedia PDF Downloads 137
38327 Development of New Technology Evaluation Model by Using Patent Information and Customers' Review Data

Authors: Kisik Song, Kyuwoong Kim, Sungjoo Lee

Abstract:

Many global firms and corporations derive new technologies and opportunities by identifying vacant technologies through patent analysis. However, previous studies failed to focus on technologies that promise continuous growth in industrial fields, and most studies that derive new technology opportunities do not test practical effectiveness. Since previous studies depended on expert judgment, evaluating new technologies based on patent analysis was costly and time-consuming. Therefore, this research suggests a quantitative and systematic approach to technology evaluation indicators, using patent data and data from customer communities. The first step involves collecting these two types of data; the data are then used to construct evaluation indicators, and the indicators are applied to the evaluation of new technologies. This type of data mining allows a new method of technology evaluation and better prediction of how new technologies will be adopted.

Keywords: data mining, evaluating new technology, technology opportunity, patent analysis

Procedia PDF Downloads 377
38326 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The importance of the correct allocation of improvement programs has attracted growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and to survive in the face of strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap which is studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair; lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity constrained resources, and (x) time between failures improvement directed at the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid. Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results; when it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academic literature is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as is the treatment of the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity constrained resources. Finally, it is possible to state that both the hybrid time to repair improvement and the hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the particular flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
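
The mechanism these strategies act on can be illustrated with the standard Factory Physics availability relation (a textbook identity, not the paper's simulation model): improving MTTR or MTBF raises a station's effective capacity, which lowers utilization and hence queueing delay, a major component of lead time:

```python
# Textbook Factory Physics relations (not the paper's simulation):
# availability A = MTBF / (MTBF + MTTR) scales a station's effective
# rate, and a Kingman-style approximation links utilization to delay.
def availability(mtbf_h: float, mttr_h: float) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

def queue_time(rate_per_h: float, demand_per_h: float, cv2: float = 1.0) -> float:
    """Approximate wait (hours): cv2 * u/(1-u) * effective process time."""
    u = demand_per_h / rate_per_h
    return cv2 * (u / (1.0 - u)) * (1.0 / rate_per_h)

base_rate, demand = 20.0, 15.0       # parts/hour at a constrained station
for label, mtbf, mttr in [("baseline", 40.0, 4.0),
                          ("MTTR improved", 40.0, 2.0),
                          ("MTBF improved", 80.0, 4.0)]:
    eff = base_rate * availability(mtbf, mttr)
    print(f"{label}: effective rate {eff:.1f}/h, "
          f"queue ~{queue_time(eff, demand):.2f} h")
```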

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 124
38325 On the Design of a Secure Two-Party Authentication Scheme for Internet of Things Using Cancelable Biometrics and Physically Unclonable Functions

Authors: Behnam Zahednejad, Saeed Kosari

Abstract:

Widespread deployment of the Internet of Things (IoT) has raised security and privacy issues in this environment. Designing a secure two-factor authentication scheme between the user and the server is still a challenging task. In this paper, we focus on cancelable biometrics (CB) as an authentication factor in IoT. We show that a previous CB-based scheme fails to provide real two-factor security and perfect forward secrecy (PFS), and suffers from database attacks and traceability of the user. We then propose an improved scheme based on CB and physically unclonable functions (PUFs), which provides real two-factor security, PFS, user unlinkability, and resistance to database attacks. In addition, key compromise impersonation (KCI) resilience is achieved in our scheme. We also formally prove the security of our proposed scheme using both the real-or-random (RoR) model and the ProVerif analysis tool. Regarding the usability of our scheme, we conducted a performance analysis and showed that our scheme has the lowest communication cost compared to the previous CB-based scheme. The computational cost of our scheme is also acceptable for the IoT environment.

Keywords: IoT, two-factor security, cancelable biometric, key compromise impersonation resilience, perfect forward secrecy, database attack, real-or-random model, ProVerif

Procedia PDF Downloads 102
38324 An Intelligent Baby Care System Based on IoT and Deep Learning Techniques

Authors: Chinlun Lai, Lunjyh Jiang

Abstract:

Due to the heavy burden and pressure of caring for infants, an integrated automatic baby monitoring system based on IoT smart sensing and deep learning machine vision techniques is proposed in this paper. By monitoring infant body conditions, such as heartbeat, breathing, body temperature and sleeping posture, as well as the surrounding conditions, such as dangerous/sharp objects, light, noise, humidity and temperature, the proposed system can analyze and predict obvious or potential dangerous conditions according to the observed data and then take suitable actions in real time to protect the infant from harm, thus reducing the burden on the caregiver and improving the safety and efficiency of the caring work. The experimental results show that the proposed system works successfully for infant care and thus can be implemented practically in various fields of life.

Keywords: baby care system, Internet of Things, deep learning, machine vision

Procedia PDF Downloads 224