Search results for: real time simulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20541

19761 Cybersecurity Assessment of Decentralized Autonomous Organizations in Smart Cities

Authors: Claire Biasco, Thaier Hayajneh

Abstract:

A smart city is the integration of digital technologies in urban environments to enhance the quality of life. Smart cities capture real-time information from devices, sensors, and network data to analyze and improve city functions such as traffic analysis, public safety, and environmental impacts. Current smart cities face controversy due to their reliance on real-time data tracking and surveillance. Internet of Things (IoT) devices and blockchain technology are converging to reshape smart city infrastructure away from its centralized model. Connecting IoT data to blockchain applications would create a peer-to-peer, decentralized model. Furthermore, blockchain technology enables ownership and control of IoT device data to shift from centralized entities to individuals or communities through Decentralized Autonomous Organizations (DAOs). In the context of smart cities, DAOs can govern cyber-physical systems and thereby gain greater influence over how urban services are provided. This paper explores how the core components of a smart city apply to DAOs. We also analyze different definitions of DAOs to determine their most important aspects in relation to smart cities. Both categorizations provide a solid foundation for a cybersecurity assessment of DAOs in smart cities, which identifies the benefits and risks of adopting DAOs as they currently operate. The paper then provides several mitigation methods to combat the cybersecurity risks of DAO integrations. Finally, we give several insights into the challenges the DAO and blockchain spaces will face in the coming years before achieving a higher level of maturity.

Keywords: blockchain, IoT, smart city, DAO

Procedia PDF Downloads 98
19760 Prophylactic Effects of Dairy Kluyveromyces marxianus YAS through Overexpression of BAX, CASP 3, CASP 8 and CASP 9 on Human Colon Cancer Cell Lines

Authors: Amir Saber Gharamaleki, Beitollah Alipour, Zeinab Faghfoori, Ahmad YariKhosroushahi

Abstract:

Colorectal cancer (CRC) is one of the most prevalent cancers, and the intestinal microbial community plays an important role in colorectal tumorigenesis. Probiotics have recently been assessed as effective anti-proliferative agents, and thus this study was performed to examine whether CRC cells undergo apoptosis when treated with the secretion metabolites of an isolated Iranian native dairy yeast, Kluyveromyces marxianus YAS. Cytotoxicity assessments on cells (HT-29, Caco-2) were accomplished through the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay as well as qualitative (DAPI, 4',6-diamidino-2-phenylindole, staining) and quantitative (flow cytometry) evaluations of apoptosis. To evaluate the main mechanism of apoptosis, a real-time PCR method was applied. Kluyveromyces marxianus YAS secretions (IC50) showed significant cytotoxicity against HT-29 and Caco-2 cancer cell lines (66.57% and 66.34% apoptosis), similar to 5-Fluorouracil (5-FU), while apoptosis developed in only 27.57% of KDR normal cells. The prophylactic effects of Kluyveromyces marxianus (PTCC 5195), a reference yeast, were not similar to those of Kluyveromyces marxianus YAS, indicating that the preventive bioactivity against CRC is strain dependent. Based on real-time PCR results, the main cytotoxic mechanism is apoptosis, driven by overexpression of the apoptosis-inducing genes BAX, CASP 9, CASP 8, and CASP 3. However, further investigations should be conducted to precisely determine the effective compounds for use as anticancer therapeutics in the future.
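The relative overexpression reported above is typically quantified from real-time PCR cycle-threshold (Ct) values with the 2^-ΔΔCt (Livak) method; a minimal sketch, with hypothetical Ct values since the abstract reports no raw data:

```python
# Hypothetical Ct values; the 2**(-DeltaDeltaCt) (Livak) method is the standard
# way to express relative overexpression of a target gene (e.g. BAX) measured
# by real-time PCR against a reference (housekeeping) gene.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene versus a reference gene."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: target amplifies 2 cycles earlier in treated cells -> ~4-fold overexpression
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```

A fold change above 1 corresponds to overexpression of the target gene in the treated cells.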

Keywords: anticancer, anti-proliferative, apoptosis, cytotoxicity, yeast

Procedia PDF Downloads 324
19759 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network

Authors: Jia Xin Low, Keng Wah Choo

Abstract:

This paper presents an automatic normal and abnormal heart sound classification model based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heart beat; each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to hand-craft classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training on the time series provided. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., in remote monitoring applications of the PCG signal.
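The per-beat preprocessing described above — reshaping each heart-beat sub-segment into a square intensity matrix for the CNN — can be sketched as follows; the trimming policy (dropping trailing samples beyond the largest fitting square) is an assumption, not necessarily the paper's exact procedure:

```python
import math

def beat_to_square(segment):
    """Convert one heart-beat PCG sub-segment into a square intensity matrix."""
    n = math.isqrt(len(segment))       # side of the largest square that fits
    trimmed = segment[:n * n]          # assumption: drop trailing samples
    return [trimmed[i * n:(i + 1) * n] for i in range(n)]

beat = list(range(10))     # toy 10-sample beat
m = beat_to_square(beat)   # 3x3 matrix, last sample dropped
print(m)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
```

Each such matrix can then be fed to a 2-D CNN exactly like a small grayscale image.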

Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification

Procedia PDF Downloads 332
19758 Orientia tsutsugamushi: An Emerging Etiology of Acute Encephalitis Syndrome in the Northern Part of India

Authors: Amita Jain, Shantanu Prakash, Suruchi Shukla

Abstract:

Introduction: Acute encephalitis syndrome (AES) is a complex multi-etiology syndrome posing a great public health problem in the northern part of India. Japanese encephalitis (JE) virus is an established etiology of AES in this region. Recently, scrub typhus (ST) has been recognized as an emerging etiology of AES in the JE endemic belt. This study was conducted to establish direct evidence of central nervous system invasion by Orientia tsutsugamushi leading to AES. Methodology: A total of 849 cases with a clinical diagnosis of AES were enrolled from six districts (Deoria and its adjoining areas) of the traditional north Indian JE belt. Serum and cerebrospinal fluid (CSF) samples were collected and tested for the major agents causing acute encephalitis. AES cases either positive for anti-ST IgM antibodies or negative for all tested etiologies were investigated for ST-DNA by real-time PCR. Results: Of the 505 cases so investigated, 250 patients were laboratory confirmed for O. tsutsugamushi infection, either by anti-ST IgM antibody positivity on serum samples (n=206), by ST-DNA detection with a real-time PCR assay on CSF samples (n=2), or by both (n=42). In total, 29 isolates could be sequenced for the 56 kDa gene. Conclusion: All the strains were found to cluster with Gilliam strains. The majority of the isolates showed 97-99% sequence similarity with Thailand and Cambodian strains. The Gilliam strain of O. tsutsugamushi is emerging as one of the major etiologies leading to AES in the northern part of India.

Keywords: acute encephalitis syndrome, O. tsutsugamushi, Gilliam strain, North India, cerebrospinal fluid

Procedia PDF Downloads 235
19757 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

Higher academic institutions, small to large companies, and public and private sector organizations alike experience inventory or asset shrinkage due to theft, loss, or inventory tracking errors, often because of weak or absent security systems and measures. Implementing Radio Frequency Identification (RFID) technology into a manual or existing web-based system or web application can deter such losses and improve data retrieval and data access. Moreover, such a system can be enhanced into a mobile-based system or application, and the availability of internet connections can aid better service delivery. The combination of these technologies offers individuals and organizations advantages in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information, and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and utilization of MongoDB as the main database to store data and its association with RFID technology. Finally, a web-based system viewable on mobile devices is developed with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript, and AJAX.
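One way the RFID-to-MongoDB association could look is a document per tag read; the schema below (field names, collection name, connection string) is entirely hypothetical, not the paper's design:

```python
from datetime import datetime, timezone

def make_tag_read(tag_id, reader_id, location):
    """Build the document stored in MongoDB for one RFID tag read (hypothetical schema)."""
    return {
        "tag_id": tag_id,
        "reader_id": reader_id,
        "location": location,
        "read_at": datetime.now(timezone.utc).isoformat(),
    }

doc = make_tag_read("E200-3412-0123", "reader-01", "Library, Level 2")

# Inserting into MongoDB would look like this (requires a running server):
# from pymongo import MongoClient
# MongoClient("mongodb://localhost:27017")["assets"]["tag_reads"].insert_one(doc)
print(doc["tag_id"])
```

Because MongoDB is schemaless, extra attributes (asset category, owner) can be added per document without migrations, which suits heterogeneous asset inventories.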

Keywords: RFID, asset tracking system, MongoDB, NoSQL

Procedia PDF Downloads 289
19756 The Impact of Behavioral Factors on the Decision Making of Real Estate Investors in Pakistan

Authors: Khalid Bashir, Hammad Zahid

Abstract:

Most investors consider economic and financial information the most important input when making investment decisions. However, over the past two decades, behavioral aspects and behavioral biases have gained an important place in the decision-making process of an investor, and this study is built on that fact. Its purpose is to examine the impact of behavioral factors on the decision-making of individual real estate investors in Pakistan. Some important behavioral factors, namely overconfidence, anchoring, gambler's fallacy, home bias, loss aversion, regret aversion, mental accounting, herding, and representativeness, are used in this study to find their impact on the psychology of individual investors. The target population is real estate investors in Pakistan, and a sample of 650 investors was selected on the basis of a convenience sampling technique. The data were collected through a questionnaire with a response rate of 46.15%. Descriptive statistical techniques and structural equation modeling (SEM) were used to analyze the data with statistical software. The results revealed that some behavioral factors have a significant impact on the decision-making of investors. Among all the behavioral biases, overconfidence, anchoring, gambler's fallacy, loss aversion, and representativeness have a significant positive impact on the decision-making of the individual investor, while the remaining biases (home bias, regret aversion, mental accounting, and herding) have less impact on the decision-making process of an individual.

Keywords: behavioral finance, anchoring, gambler’s fallacy, loss aversion

Procedia PDF Downloads 55
19755 Modeling Route Selection Using Real-Time Information and GPS Data

Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento

Abstract:

Understanding the behavior of individuals and the human factors that influence choice in a complex system such as transportation is one of the most complicated aspects of route choice modeling, because various behaviors and driving styles directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual service transport applications, among others. This has generated interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid discrete choice model that integrates route choice models and latent variables, based on observations of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving style. The set of choice options includes the routes generated by the individual service transport applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.

Keywords: behavior choice model, human factors, hybrid model, real time data

Procedia PDF Downloads 136
19754 Government Size and Economic Growth: Testing the Non-Linear Hypothesis for Nigeria

Authors: R. Santos Alimi

Abstract:

Using time-series techniques, this study empirically tested the validity of the existing theory which stipulates that there is a nonlinear relationship between government size and economic growth, such that government spending is growth-enhancing at low levels but growth-retarding at high levels, with the optimal size occurring somewhere in between. This study employed three estimation equations. First, for the size of government, two measures are considered: (i) the share of total expenditures in gross domestic product, and (ii) the share of recurrent expenditures in gross domestic product. Second, the study adopted real GDP (without the government expenditure component) as a variant measure of economic growth, besides real total GDP, in estimating the optimal level of government expenditure. The study is based on annual Nigeria country-level data for the period 1970 to 2012. Estimation results show that the inverted U-shaped curve exists for the two measures of government size, with estimated optimum shares of 19.81% and 10.98%, respectively. Finally, with the adoption of real GDP (without the government expenditure component), the optimum government size was found to be 12.58% of GDP. Our analysis shows that the actual share of government spending on average (2000-2012) is about 13.4%. This study adds to the literature confirming that an optimal government size exists not only for developed economies but also for a developing economy like Nigeria. Thus, a public intervention threshold level that fosters economic growth is a reality; beyond this point, economic growth should be left in the hands of the private sector. This finding has significant implications for the appraisal of government spending and budgetary policy design.
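The optimum in an inverted-U specification follows directly from the quadratic form: with growth modeled as a + b*g + c*g**2 in the government share g, the turning point is g* = -b/(2c). The coefficients below are hypothetical, chosen only to illustrate how a 12.58% optimum would arise:

```python
def optimal_share(b, c):
    """Turning point of growth = a + b*g + c*g**2; an inverted U requires c < 0."""
    return -b / (2.0 * c)

# Hypothetical coefficients (not the paper's estimates) yielding a 12.58% optimum
b, c = 0.5032, -0.02
print(round(optimal_share(b, c), 2))  # 12.58
```

In practice b and c would come from a regression of the growth rate on the government share and its square.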

Keywords: public expenditure, economic growth, optimum level, fully modified OLS

Procedia PDF Downloads 404
19753 Foot-and-Mouth Virus Detection in Asymptomatic Dairy Cows without Foot-and-Mouth Disease Outbreak

Authors: Duanghathai Saipinta, Tanittian Panyamongkol, Witaya Suriyasathaporn

Abstract:

Animal management aims to provide a suitable environment for animals that allows maximal productivity, and prevention of disease is an important part of it. Foot-and-mouth disease (FMD) is a highly contagious viral disease in cattle and an economically important animal disease worldwide. Monitoring the FMD virus on farms is therefore useful for preventing FMD outbreaks. A recent publication indicated that samples collected by nasal swab can be used for monitoring FMD in symptomatic cows. The objective of this study was thus to determine the presence of the FMD virus in asymptomatic dairy cattle using nasal swab samples during the absence of an FMD outbreak. The study was conducted from December 2020 to June 2021 using 185 dairy cattle with no clinical signs of FMD in Chiang Mai Province, Thailand. Cows were selected at random, nasal mucosal swabs were collected from the selected cows, and the samples were evaluated for the presence of FMD virus using a real-time RT-PCR assay. In total, FMD virus was detected in 4.9% of the dairy cattle, on two dairy farms in Mae-on (8 samples; 9.6%) and one farm in the Chai-Prakan district (1 sample; 1.2%). Interestingly, both farms in Mae-on experienced an FMD outbreak within six months after this detection. This indicates that FMD virus present in asymptomatic cattle might be related to a subsequent FMD outbreak, and the outbreak demonstrates the presence of the virus in the environment. In conclusion, monitoring of FMD can be performed by nasal swab collection. Further investigation is needed to show whether FMD virus present in asymptomatic cattle could be the cause of a subsequent FMD outbreak.

Keywords: cattle, foot-and-mouth disease, nasal swab, real-time RT-PCR assay

Procedia PDF Downloads 214
19752 Robotic Arm Control with Neural Networks Using Genetic Algorithm Optimization Approach

Authors: Arbnor Pajaziti, Hasan Cana

Abstract:

In this paper, a structural genetic algorithm is used to optimize a neural network that controls the joint movements of a robotic arm. The robotic arm has also been modeled in 3D and simulated in real time in MATLAB. It is found that neural networks provide a simple and effective way to control the robot's tasks, and computer simulation examples are given to illustrate the significance of this method. By combining the genetic algorithm optimization method with neural networks for the given robotic arm with 5 degrees of freedom, the results obtained show that the overshooting time of the base joint movements without a controller was about 0.5 seconds, while with the neural network controller (optimized with the genetic algorithm) it was about 0.2 seconds; a population size of 150 gave the best results.
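The GA-over-network idea can be reduced to a toy sketch: evolve the weights of a one-neuron "controller" so it tracks a target command. The one-neuron model and the target y = 2x + 1 are stand-ins for the paper's 5-DOF MATLAB model; only the GA loop (selection, crossover, mutation) mirrors the described method:

```python
import random

random.seed(0)

# Toy task: evolve weights [w, b] of a one-neuron model y = w*x + b
# so that it tracks a hypothetical joint command y = 2*x + 1.
samples = [(x / 10.0, 2 * x / 10.0 + 1) for x in range(11)]

def fitness(genome):
    w, b = genome
    return -sum((w * x + b - y) ** 2 for x, y in samples)  # higher is better

def evolve(pop_size=150, generations=200):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, c = random.sample(parents, 2)
            child = [(a[0] + c[0]) / 2, (a[1] + c[1]) / 2]      # crossover
            child = [g + random.gauss(0, 0.1) for g in child]   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

w, b = evolve()
print(round(w, 1), round(b, 1))  # close to 2.0 and 1.0
```

Keeping the parents in the next generation (elitism) guarantees the best genome never degrades, which is why this simple scheme converges reliably.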

Keywords: robotic arm, neural network, genetic algorithm, optimization

Procedia PDF Downloads 504
19751 Automatic Detection and Update of Region of Interest in Vehicular Traffic Surveillance Videos

Authors: Naydelis Brito Suárez, Deni Librado Torres Román, Fernando Hermosillo Reynoso

Abstract:

Automatic detection and generation of a dynamic ROI (region of interest) in vehicle traffic surveillance videos from a static camera in intelligent transportation systems is challenging for computer vision-based systems. The dynamic ROI, being a changing ROI, should capture any moving object located outside of the static ROI. In this work, the video is represented by a tensor model composed of a background tensor and a foreground tensor, the latter containing all moving vehicles or objects. The values of each pixel over a time interval are represented as time series, and some pixel rows were selected. This paper proposes a pixel entropy-based algorithm for the automatic detection and generation of a dynamic ROI in traffic videos, under the assumption of two types of theoretical pixel entropy behavior: (1) a pixel located on the road shows a high entropy value due to disturbances caused by vehicle traffic in this zone; (2) a pixel located outside the road shows a relatively low entropy value. To study the statistical behavior of the selected pixels, and thereby detect entropy changes and consequently moving objects, Shannon, Tsallis, and approximate entropies were employed. Although Tsallis entropy achieved very good results in real time, approximate entropy showed slightly better results at a greater computational cost.
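Behaviors (1) and (2) can be illustrated with the Shannon entropy of two synthetic pixel time series; the histogram binning and the sample values below are hypothetical, not the paper's data:

```python
import math
from collections import Counter

def shannon_entropy(series, bins=8):
    """Shannon entropy (in bits) of a pixel intensity time series, via histogram binning."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0   # constant series falls into a single bin
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in series)
    n = len(series)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A road pixel is disturbed by passing vehicles; a background pixel is nearly constant.
road = [50, 200, 55, 180, 60, 210, 52, 190, 58, 205, 49, 201, 57, 185, 61, 195]
background = [80, 80, 81, 80, 80, 80, 79, 80, 80, 80, 80, 81, 80, 80, 80, 80]
print(shannon_entropy(road) > shannon_entropy(background))  # True
```

Thresholding such per-pixel entropies over the frame is one way to separate road (high entropy) from background (low entropy) pixels.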

Keywords: convex hull, dynamic ROI detection, pixel entropy, time series, moving objects

Procedia PDF Downloads 56
19750 Moving Target Defense against Various Attack Models in Time Sensitive Networks

Authors: Johannes Günther

Abstract:

Time Sensitive Networking (TSN), standardized in the IEEE 802.1 standard, has received increasing attention in the context of mission-critical systems. Such mission-critical systems, e.g., in the automotive, aviation, industrial, and smart factory domains, are responsible for coordinating complex functionalities in real time. In many of these contexts, a reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid in the development of new technologies which rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time focused capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline, or even execute a denial of service (DoS) attack if the delays lead to packets being dropped. However, security concerns have thus far not played a major role in the design of such standards. So while TSN does provide valuable additional characteristics to existing common Ethernet protocols, it also opens new attack vectors on networks and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy, whose core idea relies on reducing the attackers' knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage: DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker may learn about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics on links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against different attacker models of varying capabilities and budgets, as well as collaborative attacks by multiple attackers within a network, all within the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNET++ testbench, with networks of different sizes and topologies, ranging from a couple of dozen hosts and switches to significantly larger set-ups.
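The route-randomization flavour of MTD can be reduced to a toy sketch: periodically move a stream onto a different equivalent path so that an attacker's reconnaissance of one link goes stale. The topology, path set, and epoch structure below are invented; a real TSN deployment would reconfigure schedules through the management plane:

```python
import random

# Hypothetical topology: three equivalent talker-to-listener paths.
PATHS = [
    ["talker", "sw1", "sw3", "listener"],
    ["talker", "sw1", "sw4", "listener"],
    ["talker", "sw2", "sw4", "listener"],
]

def reschedule(current, rng):
    """Pick a new path different from the current one (the 'moving' in MTD)."""
    options = [p for p in PATHS if p != current]
    return rng.choice(options)

rng = random.Random(42)
path = PATHS[0]
for epoch in range(3):            # one re-randomization per defense epoch
    path = reschedule(path, rng)
    print(epoch, "->".join(path))
```

The defender's cost is the reconfiguration overhead per epoch; the shorter the epoch, the less reconnaissance time an attacker gets on any single path.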

Keywords: network security, time sensitive networking, moving target defense, cyber security

Procedia PDF Downloads 60
19749 A Modified Estimating Equation in the Derivation of the Causal Effect on Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research, and at the heart of most scientific and medical research inquiries lies causality analysis. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors; causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimated an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. The finite sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases arising from the high correlation between the left-truncation and the possibly time-varying covariates. The bias in covariates was corrected by estimating a density function for the left-truncation, and, to relax the independence assumption between failure time and truncation time, the model incorporated the left-truncation variable as a covariate. Moreover, an expectation-maximization (EM) algorithm iteratively obtained the unknown parameters and unspecified monotone transformation functions. In summary, the ratio of the cumulative hazard functions between the treated and untreated experimental groups gives a sense of the average causal effect for the entire population.
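One way to write the causal contrast summarized above, assuming Λ₁ and Λ₀ denote the cumulative hazards of the treated and untreated arms (the notation is ours, not necessarily the paper's):

```latex
% \Lambda_1(t), \Lambda_0(t): cumulative hazards of the treated and untreated
% arms; \lambda_1, \lambda_0 the corresponding hazard rates. The average
% causal effect over [0, t] is then their ratio:
\theta(t) \;=\; \frac{\Lambda_1(t)}{\Lambda_0(t)}
\;=\; \frac{\int_0^t \lambda_1(s)\,\mathrm{d}s}{\int_0^t \lambda_0(s)\,\mathrm{d}s}
```

A value θ(t) < 1 indicates that treatment lowers the accumulated risk of failure up to time t.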

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 162
19748 Effect of Outliers in Assessing Significant Wave Heights Through a Time-Dependent GEV Model

Authors: F. Calderón-Vega, A. D. García-Soto, C. Mösso

Abstract:

Recorded significant wave heights sometimes exhibit uncommonly large values (outliers) that can be associated with extreme phenomena such as hurricanes and cold fronts. In this study, some extremely large wave heights recorded by NOAA buoys (National Data Buoy Center, noaa.gov) are used to investigate their effect on the prediction of future wave heights associated with given return periods. Extreme waves are predicted through a time-dependent model based on the so-called generalized extreme value (GEV) distribution. It is found that the outliers do affect the estimated wave heights. It is concluded that a detailed inspection of outliers is warranted to determine whether they are genuine recorded values, since this will impact the definition of design wave heights for coastal protection purposes.
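For context, the return level implied by a fitted GEV(μ, σ, ξ) for return period T follows the standard closed form below; the parameter values are hypothetical, not fitted to the NOAA records discussed above:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level for return period T under a GEV(mu, sigma, xi), xi != 0."""
    y = -math.log(1.0 - 1.0 / T)                 # Gumbel reduced variate
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical parameters for illustration (wave heights in metres)
mu, sigma, xi = 4.0, 0.8, 0.1
for T in (10, 50, 100):
    print(T, round(gev_return_level(mu, sigma, xi, T), 2))
```

Because ξ > 0 (heavy tail) here, return levels grow quickly with T, which is exactly where a single outlier in the record can move the fitted parameters and hence the design wave height.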

Keywords: GEV model, non-stationary, seasonality, outliers

Procedia PDF Downloads 182
19747 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products

Authors: Maciej Jedrzejczyk, Karolina Marzantowicz

Abstract:

The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by smart grid networks. New sources of energy introduce volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used in this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient, and scalable smart grid network that bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New smart grid platforms achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.

Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids

Procedia PDF Downloads 285
19746 An Investigation on Smartphone-Based Machine Vision System for Inspection

Authors: They Shao Peng

Abstract:

A machine vision system for inspection is an automated technology normally utilized to analyze items on the production line for quality control purposes; it is also known as an automated visual inspection (AVI) system. By applying automated visual inspection, the existence of items, defects, contaminants, flaws, and other irregularities in manufactured products can be detected quickly and accurately. However, AVI systems are still inflexible and expensive because each is unique to a specific task and consumes considerable set-up time and space. With the rapid development of mobile devices, smartphones can serve as an alternative device for the vision system and address the existing problems of AVI. Since smartphone-based AVI systems are still at a nascent stage, this motivated the investigation of such a system. This study aims to provide a low-cost AVI system with high efficiency and flexibility. In this project, two object detection models, You Only Look Once (YOLO) and Single Shot MultiBox Detector (SSD), are trained, evaluated, and integrated with smartphone and webcam devices. The performance of the smartphone-based AVI is compared with the webcam-based AVI according to precision and inference time. Additionally, a mobile application is developed which allows users to perform real-time object detection as well as object detection on stored images.

Keywords: automated visual inspection, deep learning, machine vision, mobile application

Procedia PDF Downloads 111
19745 Urban Logistics Dynamics: A User-Centric Approach to Traffic Modelling and Kinetic Parameter Analysis

Authors: Emilienne Lardy, Eric Ballot, Mariam Lafkihi

Abstract:

Efficient urban logistics requires a comprehensive understanding of traffic dynamics, particularly as it pertains to the kinetic parameters influencing energy consumption and trip duration estimations. While real-time traffic information is increasingly accessible, current high-precision forecasting services embedded in route planning often function as opaque 'black boxes' for users. These services, typically relying on AI-processed counting data, fall short in accommodating open design parameters essential for management studies, notably within supply chain management. This work revisits the modelling of traffic conditions in the context of city logistics, emphasizing its significance from the user's point of view, with two focuses. Firstly, the focus is not on the vehicle flow but on the vehicles themselves and the impact of traffic conditions on their driving behaviour. This means opening the range of studied indicators beyond vehicle speed, to describe extensively the kinetic and dynamic aspects of driving behaviour. To achieve this, we leverage the Art.Kinema parameters, which are designed to characterize driving cycles. Secondly, this study examines how the driving context (i.e., factors exogenous to the traffic flow) determines the mentioned driving behaviour. Specifically, we explore how accurately the kinetic behaviour of a vehicle can be predicted based on a limited set of exogenous factors, such as time, day, road type, orientation, slope, and weather conditions. To answer this question, statistical analysis was conducted on real-world driving data, which includes high-frequency measurements of vehicle speed. A factor analysis and a generalized linear model have been established to link kinetic parameters with independent categorical contextual variables. The results include an assessment of the adjustment quality and the robustness of the models, as well as an overview of the models' outputs.
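The linking of a kinetic parameter to categorical contextual factors can be illustrated with a minimal stand-in for the paper's generalized linear model: two invented factors (road type, peak/off-peak) are dummy-coded and mean speed is fitted by ordinary least squares via the normal equations, all in pure Python with made-up data:

```python
observations = [  # (road type, period, mean speed in km/h) -- invented data
    ("urban", "peak", 22), ("urban", "peak", 24), ("urban", "off", 34),
    ("urban", "off", 36), ("ring", "peak", 48), ("ring", "peak", 52),
    ("ring", "off", 68), ("ring", "off", 72),
]

# Design matrix: intercept, dummy for ring road, dummy for off-peak
X = [[1.0, float(r == "ring"), float(p == "off")] for r, p, _ in observations]
y = [float(v) for _, _, v in observations]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Normal equations: (X'X) beta = X'y
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(3)] for i in range(3)]
Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(3)]
beta = solve(XtX, Xty)
print([round(v, 2) for v in beta])  # [intercept, ring-road effect, off-peak effect]
```

With a Gaussian response and identity link, this OLS fit coincides with the GLM; for other Art.Kinema-style indicators (e.g. positive accelerations), a non-identity link would be needed.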

Keywords: factor analysis, generalised linear model, real world driving data, traffic congestion, urban logistics, vehicle kinematics

Procedia PDF Downloads 53
19744 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality issues, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels.
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021) under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
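To make the two parallel 2D views concrete, the sketch below builds a toy spectrogram (frequency view, via a naive windowed DFT) and a derivative map (time view, first and second differences) from a 1D series. It is illustrative only; Times2D's exact construction, window sizes, and normalization are assumptions here.

```python
# Two complementary 2D representations of a 1D series: a spectrogram
# capturing periodicity, and a derivative "heatmap" capturing sharp
# fluctuations and turning points.
import cmath
import math

def spectrogram(x, win=8, hop=4):
    """Magnitudes of a naive windowed DFT -> one spectrum per window."""
    frames = []
    for s in range(0, len(x) - win + 1, hop):
        w = x[s:s + win]
        spectrum = [abs(sum(w[n] * cmath.exp(-2j * math.pi * k * n / win)
                            for n in range(win))) for k in range(win // 2 + 1)]
        frames.append(spectrum)
    return frames

def derivative_map(x):
    """First and second differences, stacked as a 2-row map."""
    d1 = [b - a for a, b in zip(x, x[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    return [d1, d2]

# Toy series: a period-8 oscillation plus a mild trend.
series = [math.sin(2 * math.pi * t / 8) + 0.05 * t for t in range(32)]
spec = spectrogram(series)       # frequency-domain view
deriv = derivative_map(series)   # time-domain view
```

Once the series is in 2D form like this, standard computer-vision backbones can be applied to both views, which is the core idea the abstract describes.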

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 25
19743 Real Energy Performance Study of Large-Scale Solar Water Heater by Using Remote Monitoring

Authors: F. Sahnoune, M. Belhamel, M. Zelmat

Abstract:

Solar thermal systems available today provide reliability, efficiency and significant environmental benefits. In housing, they can satisfy the hot water demand and reduce energy bills by 60% or more. Additionally, collective or large-scale solar thermal systems are increasingly used under different conditions for hot water applications and space heating in hotels, multi-family homes, hospitals, nursing homes and sports halls, as well as in commercial and industrial buildings. However, in situ real performance data for collective solar water heating systems have not been extensively reported. This paper focuses on the study of the real energy performance of a collective solar water heating system using the remote monitoring technique under Algerian climatic conditions. This is to ensure proper operation of the system at any time, determine the system performance, and check to what extent the solar performance guarantee can be achieved. The measurements are performed on an active indirect heating system with 12 m² of flat-plate collector surface installed in Algiers and equipped with various sensors. The sensors transmit measurements to a local station which controls the pumps, valves, electrical auxiliaries, etc. The simulation of the installation was developed using the software SOLO 2000. The system provides a yearly solar yield of 6277.5 kWh for an estimated annual need of 7896 kWh; the yearly average solar cover rate amounted to 79.5%. The productivity is in the order of 523.13 kWh/m²/year. Simulation results are compared to measured results and to the guaranteed solar performances. The remote monitoring shows that 90% of the expected solar results can be easily guaranteed over a long period. Furthermore, the installed remote monitoring unit was able to detect some dysfunctions. It follows that remote monitoring is an important tool in the energy management of building equipment.
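The two headline figures follow directly from the reported yield, demand, and collector area; the short check below reproduces them from the values quoted in the abstract.

```python
# Back-of-envelope check of the reported performance figures
# (all input values taken from the abstract).
solar_yield_kwh = 6277.5     # measured yearly solar yield
annual_need_kwh = 7896.0     # estimated annual hot-water demand
collector_area_m2 = 12.0     # flat-plate collector surface

solar_cover_rate = solar_yield_kwh / annual_need_kwh    # ~0.795 (79.5%)
productivity = solar_yield_kwh / collector_area_m2      # ~523.13 kWh/m²/year
print(f"cover rate: {solar_cover_rate:.1%}, "
      f"productivity: {productivity:.2f} kWh/m²/yr")
```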

Keywords: large-scale solar water heater, real energy performance, remote monitoring, solar performance guarantee, tool to promote solar water heater

Procedia PDF Downloads 219
19742 Long-Baseline Single-Epoch RTK Positioning Method Based on BDS-3 and Galileo Penta-Frequency Ionosphere-Reduced Combinations

Authors: Liwei Liu, Shuguo Pan, Wang Gao

Abstract:

In order to take full advantage of the BDS-3 penta-frequency signals in long-baseline RTK positioning, a long-baseline RTK positioning method based on the BDS-3 penta-frequency ionospheric-reduced (IR) combinations is proposed. First, the low-noise and weak-ionospheric-delay characteristics of the multi-frequency combined observations of BDS-3 are analyzed. Second, the multi-frequency extra-wide-lane (EWL)/wide-lane (WL) combinations with long wavelengths are constructed. Third, the fixed IR EWL combinations are used to constrain the IR WL combinations, which in turn constrain the narrow-lane (NL) ambiguities, and multi-epoch filtering is started. There is no need to consider the influence of ionospheric parameters in the third step. Compared with the estimated ionospheric model, the proposed method reduces the number of parameters by half, so it is suitable for multi-frequency and multi-system real-time RTK. The results using real data show that the stepwise fixing model of the IR EWL/WL/NL combinations can realize long-baseline instantaneous centimeter-level positioning.
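The advantage of EWL/WL combinations comes from their long effective wavelengths: differencing two carriers yields an ambiguity spacing far larger than the raw ~19-25 cm carrier wavelengths, making integer fixing much easier. The sketch below computes this for two example BDS-3 signal pairs; the carrier frequencies are published BDS-3 values, but the specific pairs shown are illustrative and may differ from the combinations the paper actually uses.

```python
# Effective wavelength of a (1, -1) dual-frequency carrier combination:
# lambda = c / |f1 - f2|. Longer wavelength -> easier ambiguity fixing.
C = 299_792_458.0  # speed of light, m/s

freqs_hz = {           # published BDS-3 carrier frequencies
    "B1C": 1575.42e6,
    "B1I": 1561.098e6,
    "B3I": 1268.52e6,
    "B2b": 1207.14e6,
    "B2a": 1176.45e6,
}

def combo_wavelength(f1, f2):
    """Wavelength (m) of the between-frequency difference combination."""
    return C / abs(f1 - f2)

wl = combo_wavelength(freqs_hz["B1C"], freqs_hz["B2a"])   # wide-lane, ~0.75 m
ewl = combo_wavelength(freqs_hz["B2b"], freqs_hz["B2a"])  # extra-wide-lane, ~9.8 m
print(f"WL: {wl:.3f} m, EWL: {ewl:.3f} m")
```

With a ~9.8 m EWL wavelength, the ambiguities can often be fixed in a single epoch even over long baselines, which is what enables the stepwise EWL-to-WL-to-NL constraint chain.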

Keywords: penta-frequency, ionospheric-reduced (IR), RTK positioning, long-baseline

Procedia PDF Downloads 142
19741 LabVIEW-Based System for Fiber Link Events Detection

Authors: Bo Liu, Qingshan Kong, Weiqing Huang

Abstract:

With the rapid development of modern communication, diagnosing fiber-optic quality and faults in real time has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. Then a method based on the Gabor representation is used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average, compared with using the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy. The maximum detectable fiber length of the proposed LabVIEW-based system is 65 km.
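To illustrate the wavelet-threshold half of the pipeline, the sketch below applies a one-level Haar transform, soft-thresholds the detail coefficients with the universal threshold, and reconstructs. This is a deliberately minimal stand-in: the paper combines thresholding with EMD and presumably deeper wavelet decompositions, both of which are omitted here.

```python
# Minimal one-level Haar wavelet soft-threshold denoising of a noisy,
# OTDR-like exponentially decaying trace.
import math
import random

def haar_denoise(x, sigma):
    assert len(x) % 2 == 0
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    t = sigma * math.sqrt(2 * math.log(len(x)))       # universal threshold
    detail = [math.copysign(max(abs(d) - t, 0.0), d)  # soft thresholding
              for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

random.seed(0)
sigma = 0.1
clean = [math.exp(-i / 200) for i in range(256)]       # idealized OTDR decay
noisy = [c + random.gauss(0, sigma) for c in clean]
denoised = haar_denoise(noisy, sigma)
```

Because the smooth OTDR backscatter trace has tiny detail coefficients while white noise spreads evenly across them, thresholding the details removes noise while largely preserving the trace and its event signatures.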

Keywords: empirical mode decomposition, events detection, Gabor transform, optical time domain reflectometer, wavelet threshold denoising

Procedia PDF Downloads 114
19740 Option Pricing Theory Applied to the Service Sector

Authors: Luke Miller

Abstract:

This paper develops an options pricing methodology to value strategic pricing strategies in the services sector. More specifically, this study provides a unifying taxonomy of current service sector pricing practices, frames these pricing decisions as strategic real options, demonstrates accepted option valuation techniques to assess service sector pricing decisions, and suggests future research areas where pricing decisions and real options overlap. Enhancing revenue in the service sector requires proactive decision making in a world of uncertainty. In an effort to strategically price service products, revenue enhancement necessitates a careful study of the service costs, customer base, competition, legalities, and shared economies with the market. Pricing decisions involve the quality of inputs, manpower, and best practices to maintain superior service. These decisions further hinge on identifying relevant pricing strategies and understanding how these strategies impact a firm’s value. A relatively new area of research applies option pricing theory to investments in real assets and is commonly known as real options. The real options approach is based on the premise that many corporate decisions to invest or divest in assets are simply an option wherein the firm has the right to make an investment without any obligation to act. The decision maker, therefore, has more flexibility and the value of this operating flexibility should be taken into consideration. The real options framework has already been applied to numerous areas including manufacturing, inventory, natural resources, research and development, strategic decisions, technology, and stock valuation. Additionally, numerous surveys have identified a growing need for the real options decision framework within all areas of corporate decision-making. Despite the wide applicability of real options, no study has been carried out linking service sector pricing decisions and real options. 
This is surprising given that the service sector comprises 80% of US employment and Gross Domestic Product (GDP). Identifying real options as a practical tool to value different service sector pricing strategies is believed to have a significant impact on firm decisions. This paper identifies and discusses four distinct pricing strategies available to the service sector from an options perspective: (1) Cost-based profit margin, (2) Increased customer base, (3) Platform pricing, and (4) Buffet pricing. Within each strategy lie several pricing tactics available to the service firm. These tactics can be viewed as options the decision maker has to best manage a strategic position in the market. To demonstrate the effectiveness of including flexibility in the pricing decision, a series of pricing strategies were developed and valued using a real options binomial lattice structure. The options pricing approach discussed in this study allows service firms to directly incorporate market-driven perspectives into the decision process and thus synchronize service operations with organizational economic goals.
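The binomial lattice valuation mentioned above can be sketched as follows: project value evolves on a Cox-Ross-Rubinstein lattice, and the firm holds an American-style option to commit an investment at any step. The numbers and parameterization below are hypothetical, not drawn from the paper; they simply show how flexibility to wait adds value over a commit-now NPV.

```python
# Binomial-lattice valuation of a deferrable investment (real option to
# invest): backward induction with early exercise at every node.
import math

def real_option_value(V0, I, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor (CRR lattice)
    d = 1 / u                             # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral probability
    disc = math.exp(-r * dt)
    # Terminal payoffs: invest only if project value exceeds the cost.
    values = [max(V0 * u**j * d**(steps - j) - I, 0.0)
              for j in range(steps + 1)]
    # Roll back, comparing immediate exercise with the value of waiting.
    for n in range(steps - 1, -1, -1):
        values = [max(V0 * u**j * d**(n - j) - I,
                      disc * (p * values[j + 1] + (1 - p) * values[j]))
                  for j in range(n + 1)]
    return values[0]

npv = 100.0 - 95.0   # value of committing immediately
flex = real_option_value(V0=100, I=95, r=0.05, sigma=0.3, T=2, steps=100)
print(f"NPV now: {npv:.2f}, value with flexibility: {flex:.2f}")
```

The gap between the two numbers is the value of operating flexibility that the abstract argues pricing decisions should capture.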

Keywords: option pricing theory, real options, service sector, valuation

Procedia PDF Downloads 342
19739 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of the utmost importance for reliable, profitable and sustainable operation. Trips and failed starts have a major impact on plant reliability, so plant operators focus their efforts on minimising them. The performance of these CTQs (critical-to-quality characteristics) is measured with two metrics, MTBT (Mean Time Between Trips) and SR (Starting Reliability). These metrics help to identify top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips through: 1. Real-time machine operational parameters available remotely, capturing the signature of a malfunction together with the related boundary conditions. 2. A real-time alerting system based on analytics, available remotely. 3. Remote access to trip logs and alarms from the control system to identify the cause of events. 4. Continuous support to field engineers by remotely connecting them with subject matter experts. 5. Live tracking of key CTQs. 6. Benchmarking against the fleet. 7. Breaking down the cause of failure to component level. 8. Investigating top contributors and identifying design and operational root causes. 9. Implementing corrective and preventive actions. 10. Assessing the effectiveness of implemented solutions using reliability growth models. 11. Developing analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with large cost savings for plant operators. This presentation explains the approach through successful case studies, in particular one in which 12 LNG and pipeline operators with about 140 gas compression line-ups adopted these techniques and significantly reduced the number of trips and improved MTBT.
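The two CTQ metrics named above reduce to simple ratios; the sketch below shows their computation on hypothetical fleet data (the figures are illustrative, not from the presentation).

```python
# The two trip-reduction CTQ metrics: MTBT (Mean Time Between Trips)
# and SR (Starting Reliability).
def mtbt(operating_hours, trips):
    """Average operating hours accumulated between trips."""
    return operating_hours / trips if trips else float("inf")

def starting_reliability(successful_starts, attempted_starts):
    """Fraction of start attempts that succeed."""
    return successful_starts / attempted_starts

# Hypothetical monitored unit over one year:
print(f"MTBT: {mtbt(8000, 4):.0f} h")               # 2000 h between trips
print(f"SR:   {starting_reliability(47, 50):.1%}")  # 94.0%
```

Tracked per unit and benchmarked against the fleet, these ratios are what identify which line-ups need the most improvement effort.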

Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start

Procedia PDF Downloads 60
19738 Experiences of Timing Analysis of Parallel Embedded Software

Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah

Abstract:

The execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. The WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on WCET analysis assumes sequential code running on single-core platforms. However, as computation is steadily moving towards using a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing the WCET analysis of Parallel Embedded Software (PES) running on a multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically, and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, possible solutions to these challenges, and the workarounds that were developed. This article also provides observations on the benefits and drawbacks of deriving the WCET estimates using the said methods and provides useful recommendations for further research in this area.
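The dynamic (measurement-based) side of WCET analysis can be sketched very simply: execute the task repeatedly over varied inputs, take the high-water mark, and pad it with a safety margin. The code below is a minimal, hypothetical illustration of that workflow; crucially, such an estimate is not a guarantee, since the true WCET may exceed any observed time, which is one of the limitations this kind of study examines.

```python
# Measurement-based WCET estimation: high-water-mark timing plus margin.
import time

def task(data):
    """Stand-in for the embedded task under test."""
    return sorted(data)

def measured_wcet_ns(fn, inputs, runs_per_input=50, margin=1.20):
    worst = 0
    for data in inputs:
        for _ in range(runs_per_input):
            t0 = time.perf_counter_ns()
            fn(list(data))
            worst = max(worst, time.perf_counter_ns() - t0)
    return int(worst * margin)  # observed maximum plus a safety margin

# Exercise the task over inputs of increasing size (worst-case-ish ordering).
inputs = [list(range(n, 0, -1)) for n in (10, 100, 1000)]
print(f"estimated WCET: {measured_wcet_ns(task, inputs)} ns")
```

On parallel software the picture is harder than this sketch suggests: interference on shared caches, buses, and memory controllers means the measured maximum depends on what the other cores are doing, which is exactly where the challenges reported above arise.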

Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing

Procedia PDF Downloads 310
19737 Q-Learning-Based Path Planning Approach for Unmanned Aerial Vehicles in a Dynamic Environment

Authors: Raja Jarray, Imen Zaghbani, Soufiene Bouallègue

Abstract:

Path planning for Unmanned Aerial Vehicles (UAVs) in dynamic environments poses a significant challenge. Adapting planning algorithms to these complex environments with moving obstacles is a major task in real-world robotics. This article introduces a path-planning strategy based on a Q-learning algorithm, which enables an effective response to avoid moving obstacles while ensuring mission feasibility. A dynamic reward function is introduced, causing the UAV to use the real-time distance between its current position and the destination as training data. The objective of the proposed Q-learning-based path planning algorithm is to guide the drone through an optimal flight itinerary in a dynamic, collision-free environment. The proposed Q-learning-based UAV planner is evaluated considering numerous commonly used performance metrics. Demonstrative results are provided and discussed to show the effectiveness and practicability of such an artificial intelligence-based path planning approach.
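The dynamic reward idea described above, using the real-time distance between the current position and the destination, can be illustrated with a minimal Q-learning agent on a grid. This toy stand-in (grid size, rewards, and hyperparameters are all assumptions) is far simpler than the paper's UAV environment, but it shows the mechanism: every step is rewarded by progress toward the goal.

```python
# Tabular Q-learning on a 5x5 grid with a distance-based dynamic reward.
import random

random.seed(1)
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = {((x, y), a): 0.0 for x in range(SIZE) for y in range(SIZE)
     for a in range(len(ACTIONS))}

def step(s, a):
    dx, dy = ACTIONS[a]
    nx = min(max(s[0] + dx, 0), SIZE - 1)
    ny = min(max(s[1] + dy, 0), SIZE - 1)
    dist = abs(GOAL[0] - nx) + abs(GOAL[1] - ny)   # real-time distance
    reward = 10.0 if (nx, ny) == GOAL else -float(dist)
    return (nx, ny), reward

alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):                               # training episodes
    s = (0, 0)
    for _ in range(50):
        if random.random() < eps:
            a = random.randrange(4)                # explore
        else:
            a = max(range(4), key=lambda k: Q[(s, k)])  # exploit
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, k)] for k in range(4))
                              - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# Greedy rollout with the learned policy:
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    s, _ = step(s, max(range(4), key=lambda k: Q[(s, k)]))
    path.append(s)
```

In the paper's setting the same update rule operates over a richer state space with moving obstacles; the dense distance-based reward is what lets the agent adapt its itinerary online rather than only being rewarded at the destination.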

Keywords: unmanned aerial vehicles, dynamic path planning, moving obstacles, reinforcement-learning, Q-learning

Procedia PDF Downloads 33
19736 Automated Detection of Targets and Retrieve the Corresponding Analytics Using Augmented Reality

Authors: Suvarna Kumar Gogula, Sandhya Devi Gogula, P. Chanakya

Abstract:

Augmented reality is defined as the overlay of digital (computer-generated) information, such as images, audio, video, and 3D models, over the real-time environment. Augmented reality can be thought of as a blend between the completely synthetic and the completely real. Augmented reality provides scope in a wide range of industries like manufacturing, retail, gaming, advertisement, tourism, etc., and brings out new dimensions in the modern digital world. As it overlays the content, it enhances users' knowledge by providing content blended with the real world. In this application, we integrated augmented reality with data analytics and with the cloud, so that the virtual content is generated on the basis of the data present in the database; we used marker-based augmented reality, where every marker is stored in the database with a corresponding unique ID. This application can be used in a wide range of industries for different business processes, but in this paper, we mainly focus on the marketing industry, helping customers gain knowledge about the products in the market, mainly focusing on their prices, customer feedback, quality, and other benefits. This application also focuses on providing better market strategy information for marketing managers, who obtain data about stocks, sales, customer responses to the product, etc. In this paper, we also include reports on the feedback gathered from different people after the demonstration, and finally, we present the future scope of augmented reality in different business processes through integration with new technologies like cloud, big data, artificial intelligence, etc.
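The marker-to-analytics flow described above can be sketched as a simple lookup: each detected marker carries a unique ID that keys into a product-analytics record, from which the overlay content is generated. The table layout and field names below are assumptions for illustration, not taken from the paper.

```python
# Hypothetical marker-ID -> analytics lookup behind a marker-based AR overlay.
PRODUCT_DB = {  # stand-in for the cloud-hosted database
    "MRK-001": {"name": "Widget A", "price": 19.99,
                "avg_feedback": 4.3, "stock": 120},
    "MRK-002": {"name": "Widget B", "price": 34.50,
                "avg_feedback": 3.8, "stock": 15},
}

def overlay_for_marker(marker_id):
    """Build the virtual content to render over the detected marker."""
    rec = PRODUCT_DB.get(marker_id)
    if rec is None:
        return {"text": "Unknown product"}
    return {"text": f"{rec['name']}: ${rec['price']:.2f}, "
                    f"rated {rec['avg_feedback']}/5, {rec['stock']} in stock"}

print(overlay_for_marker("MRK-001")["text"])
```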

Keywords: augmented reality, data analytics, catch room, marketing and sales

Procedia PDF Downloads 221
19735 Quartz Crystal Microbalance Based Hydrophobic Nanosensor for Lysozyme Detection

Authors: F. Yılmaz, Y. Saylan, A. Derazshamshir, S. Atay, A. Denizli

Abstract:

Quartz crystal microbalance (QCM), a high-resolution mass-sensing technique, measures changes in mass on an oscillating quartz crystal surface by tracking changes in the oscillation frequency of the crystal in real time. Protein adsorption techniques based on hydrophobic interaction between a protein and a solid support, called hydrophobic interaction chromatography (HIC), can be favorable in many cases. Some nanoparticles can be effectively applied for HIC. HIC takes advantage of the hydrophobicity of proteins by promoting their separation on the basis of hydrophobic interactions between immobilized hydrophobic ligands and nonpolar regions on the surface of the proteins. Lysozyme is found in a variety of vertebrate cells and secretions, such as spleen, milk, tears, and egg white. Its common applications are as a cell-disrupting agent for extraction of bacterial intracellular products, as an antibacterial agent in ophthalmologic preparations, as a food additive in milk products, and as a drug for the treatment of ulcers and infections. Lysozyme has also been used in cancer chemotherapy. The aim of this study is the synthesis of hydrophobic nanoparticles for lysozyme detection. For this purpose, methacryloyl-L-phenylalanine was chosen as the hydrophobic matrix. The hydrophobic nanoparticles were synthesized by the micro-emulsion polymerization method. Then, the hydrophobic QCM nanosensor was characterized by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy, atomic force microscopy (AFM), and zeta size analysis. The hydrophobic QCM nanosensor was tested for real-time detection of lysozyme from aqueous solution. The kinetic and affinity studies were carried out using lysozyme solutions of different concentrations. The responses related to mass (Δm) and frequency (Δf) shifts were used to evaluate the adsorption properties.
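The link between the measured frequency shift (Δf) and adsorbed mass (Δm) in rigid-film QCM analysis is commonly taken from the Sauerbrey relation, sketched below. The quartz constants are standard AT-cut values, but the 5 MHz fundamental frequency is an assumption for illustration; the abstract does not state the crystal used.

```python
# Sauerbrey relation for a rigid adsorbed film on a QCM crystal:
#   delta_f = -(2 * f0**2 / (A * sqrt(rho_q * mu_q))) * delta_m
import math

RHO_Q = 2.648      # quartz density, g/cm^3
MU_Q = 2.947e11    # quartz shear modulus, g/(cm*s^2)

def mass_from_frequency_shift(delta_f_hz, f0_hz=5e6, area_cm2=1.0):
    """Adsorbed mass (g) inferred from a frequency shift (Hz)."""
    sensitivity = 2 * f0_hz**2 / (area_cm2 * math.sqrt(RHO_Q * MU_Q))
    return -delta_f_hz / sensitivity

# A -10 Hz shift on a 5 MHz crystal corresponds to roughly 177 ng/cm^2:
dm_ng = mass_from_frequency_shift(-10.0) * 1e9
print(f"{dm_ng:.0f} ng adsorbed per cm^2")
```

Real protein films are often viscoelastic, so Sauerbrey gives a lower bound; kinetic and affinity constants are then extracted by fitting the Δf-versus-time sensorgrams at the different lysozyme concentrations.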

Keywords: nanosensor, HIC, lysozyme, QCM

Procedia PDF Downloads 335
19734 Identifying Pathogenic Mycobacterium Species Using Multiple Gene Phylogenetic Analysis

Authors: Lemar Blake, Chris Oura, Ayanna C. N. Phillips Savage

Abstract:

Improved DNA sequencing technology has greatly enhanced bacterial identification, especially for organisms that are difficult to culture. Mycobacteriosis, with consistent hyphema, bilateral exophthalmia, open-mouth gape, and ocular lesions, was observed in various fish populations at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Objective: To identify the species of Mycobacterium affecting aquarium fish at the School of Veterinary Medicine, Aquaculture/Aquatic Animal Health Unit. Method: A total of 13 fish samples were collected and analyzed via Ziehl-Neelsen staining, conventional polymerase chain reaction (PCR), and real-time PCR. These tests were carried out simultaneously for confirmation. The following combination of conventional primers was used: 16S rRNA (564 bp), rpoB (396 bp), and sod (408 bp). Concatenation of the gene fragments was carried out to phylogenetically classify the organism. Results: Acid-fast non-branching bacilli were detected in all samples from homogenized internal organs. All 13 acid-fast samples were positive for Mycobacterium via real-time PCR. Partial gene sequences using all three primer sets were obtained from two samples and demonstrated a novel strain. A strain 99% related to Mycobacterium marinum was also confirmed in one sample, using the 16S rRNA and rpoB genes. The two novel strains clustered with the rapid growers and with strains that are known to affect humans. Conclusions: Phylogenetic analysis demonstrated two novel Mycobacterium strains with the potential of being zoonotic and one strain 99% related to Mycobacterium marinum.
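The concatenation step mentioned above (joining the per-gene fragments into one "supermatrix" per isolate before tree building) can be sketched as follows; the sequences are placeholders, with lengths matching the amplicon sizes quoted in the abstract.

```python
# Multi-gene concatenation ("supermatrix") prior to phylogenetic analysis.
aligned = {  # per-gene aligned sequences, keyed by isolate (toy data)
    "16S_rRNA": {"isolate1": "A" * 564, "isolate2": "G" * 564},
    "rpoB":     {"isolate1": "C" * 396, "isolate2": "T" * 396},
    "sod":      {"isolate1": "G" * 408, "isolate2": "A" * 408},
}

def concatenate(genes):
    """Join each isolate's gene alignments into one supermatrix row."""
    isolates = set.intersection(*(set(g) for g in genes.values()))
    return {iso: "".join(genes[g][iso] for g in sorted(genes))
            for iso in isolates}

supermatrix = concatenate(aligned)  # 564 + 396 + 408 = 1368 bp per isolate
```

Concatenating genes with different evolutionary rates in this way generally yields better-resolved trees than any single marker, which is why the multi-gene approach discriminates Mycobacterium species that 16S alone cannot.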

Keywords: polymerase chain reaction, phylogenetic, DNA sequencing, zoonotic

Procedia PDF Downloads 127
19733 Implementation of an IoT Sensor Data Collection and Analysis Library

Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee

Abstract:

Due to the development of information technology and wireless Internet technology, various data are being generated in various fields. These data are advantageous in that they provide real-time information to the users themselves. However, when the data are accumulated and analyzed, much more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it possible to easily test various sensors, and it is possible to collect sensor data directly by using database application tools such as MySQL. These directly collected data can be used for various research purposes and can be useful as data for data mining. However, collecting data with such boards involves many difficulties, especially when the user is not a computer programmer or is using them for the first time. Even if data are collected, a lack of expert knowledge or experience may cause difficulties in data analysis and visualization. In this paper, we aim to construct a library for sensor data collection and analysis to overcome these problems.
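On the analysis side, k-means is one of the clustering methods listed in the keywords; a minimal, self-contained sketch of how such a library might cluster collected sensor readings is shown below (pure Python, illustrative only, not the authors' library).

```python
# Toy k-means clustering of (temperature, humidity) sensor readings,
# of the kind an analysis library could offer over collected data.
import math
import random

def kmeans(points, k, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

# Readings from two distinct regimes (e.g. indoor vs. outdoor sensor):
readings = [(20.1, 30.2), (19.8, 31.0), (20.5, 29.5),
            (34.9, 70.1), (35.2, 69.4), (34.5, 71.0)]
centers, clusters = kmeans(readings, k=2)
```

Wrapping steps like this behind simple library calls is exactly the kind of barrier-lowering the abstract targets for users without programming or data-mining experience.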

Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data

Procedia PDF Downloads 360
19732 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: We applied the solid statistical principles of Design of Experiments (DoE), particularly a randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of the data input scale on costs, although it notably impacts execution time.
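The design-generation step behind such a study can be sketched as follows: a two-level 2^(5-1) fractional factorial over the five factors halves the number of runs relative to the full factorial while keeping main effects estimable. The generator E = ABCD used below is the standard resolution-V choice; the paper's exact design matrix and aliasing structure may differ.

```python
# Generate a two-level 2^(5-1) fractional factorial design: four factors
# vary freely, the fifth is set by the generator E = ABCD (so I = ABCDE).
from itertools import product

FACTORS = ["data_size", "nodes", "cores", "memory", "disks"]

def fractional_factorial_2_5_1():
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c * d  # fifth factor level from the defining relation
        runs.append(dict(zip(FACTORS, (a, b, c, d, e))))
    return runs

design = fractional_factorial_2_5_1()
print(f"{len(design)} runs instead of {2**5} for the full factorial")
```

Each run is then executed (with replications and randomized order) on a real cluster, and the -1/+1 columns become the regressors of the linear screening models for time and cost.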

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 101