Search results for: cloud monitoring
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3563

3353 Scheduling in Cloud Networks Using Chakoos Algorithm

Authors: Masoumeh Ali Pouri, Hamid Haj Seyyed Javadi

Abstract:

Nowadays, cloud computing is one of the important issues in information technology. Since the scheduling of a task graph is an NP-hard problem, approaches based on non-deterministic methods, such as evolutionary computation, most notably genetic and cuckoo algorithms, are effective. Therefore, an efficient algorithm has been proposed for task-graph scheduling to obtain an appropriate schedule with minimum time. In this algorithm, the new approach is based on shortening the length of the critical path and reducing the communication cost. Finally, the results obtained from the implementation of the presented method show that this algorithm performs the same as other algorithms on graphs without communication costs, and it performs more quickly and better than algorithms such as DSC and MCP on graphs involving communication costs.
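
The abstract gives no code, so as a loose illustration only: the sketch below pairs a cuckoo-style search over task priorities with list scheduling that charges communication costs for cross-processor edges. The task graph, costs, and search parameters are invented, not taken from the paper.

```python
import random

# Toy task graph: node -> (compute cost, [(child, communication cost), ...]).
GRAPH = {
    "A": (2, [("B", 1), ("C", 3)]),
    "B": (3, [("D", 2)]),
    "C": (4, [("D", 1)]),
    "D": (2, []),
}
PARENTS = {n: [] for n in GRAPH}
for n, (_, kids) in GRAPH.items():
    for child, comm in kids:
        PARENTS[child].append((n, comm))
PROCS = 2

def makespan(priority):
    """List-schedule ready tasks by priority; cross-processor edges pay comm cost."""
    ready = [n for n in GRAPH if not PARENTS[n]]
    proc_free = [0.0] * PROCS
    finish, proc_of, done = {}, {}, set()
    while ready:
        task = max(ready, key=lambda n: priority[n])
        ready.remove(task)
        starts = []
        for p in range(PROCS):
            data_ready = max([0.0] + [finish[u] + (c if proc_of[u] != p else 0)
                                      for u, c in PARENTS[task]])
            starts.append((max(proc_free[p], data_ready), p))
        start, p = min(starts)
        finish[task], proc_of[task] = start + GRAPH[task][0], p
        proc_free[p] = finish[task]
        done.add(task)
        for child, _ in GRAPH[task][1]:
            if all(u in done for u, _ in PARENTS[child]):
                ready.append(child)
    return max(finish.values())

def cuckoo_search(iters=200, n_nests=8, p_abandon=0.25):
    nests = [{n: random.random() for n in GRAPH} for _ in range(n_nests)]
    best = min(nests, key=makespan)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Levy-like heavy-tailed step perturbs the priority vector.
            step = 0.1 * random.paretovariate(1.5)
            cand = {n: v + step * random.gauss(0, 1) for n, v in nest.items()}
            if makespan(cand) < makespan(nest):
                nests[i] = cand
        nests.sort(key=makespan)          # abandon the worst nests
        for i in range(1, int(p_abandon * n_nests) + 1):
            nests[-i] = {n: random.random() for n in GRAPH}
        best = min([best] + nests, key=makespan)
    return best, makespan(best)

print("best makespan:", cuckoo_search()[1])
```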

Keywords: cloud computing, scheduling, tasks graph, chakoos algorithm

Procedia PDF Downloads 33
3352 Optimizing Telehealth Internet of Things Integration: A Sustainable Approach through Fog and Cloud Computing Platforms for Energy Efficiency

Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo

Abstract:

The swift proliferation of telehealth Internet of Things (IoT) devices has sparked concerns regarding energy consumption and the need for streamlined data processing. This paper presents an energy-efficient model that integrates telehealth IoT devices into a platform based on fog and cloud computing. This integrated system provides a sustainable and robust solution to address the challenges. Our model strategically utilizes fog computing as a localized data processing layer and leverages cloud computing for resource-intensive tasks, resulting in a significant reduction in overall energy consumption. The incorporation of adaptive energy-saving strategies further enhances the efficiency of our approach. Simulation analysis validates the effectiveness of our model in improving energy efficiency for telehealth IoT systems, particularly when integrated with localized fog nodes and both private and public cloud infrastructures. Subsequent research endeavors will concentrate on refining the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability across various healthcare and industry sectors.
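
As a rough sketch of the placement idea described above (not the authors' model), one can imagine a rule that keeps small, latency-sensitive jobs on a nearby fog node and ships heavy jobs to the cloud; the energy figures, capacities, and job names below are invented placeholders.

```python
# Assumed energy figures and thresholds; the paper's model is not reproduced.
FOG_J_PER_MB, CLOUD_J_PER_MB = 0.8, 2.5   # processing energy per MB
WAN_J_PER_MB = 1.2                        # uplink transfer energy per MB
FOG_CAPACITY_MB = 50                      # fog budget per scheduling interval

def place(jobs):
    """jobs: list of (name, size_mb, heavy). Returns placement and energy (J)."""
    fog_load, plan, energy = 0.0, {}, 0.0
    for name, size, heavy in jobs:
        if not heavy and fog_load + size <= FOG_CAPACITY_MB:
            plan[name] = "fog"                  # local, latency-sensitive path
            fog_load += size
            energy += size * FOG_J_PER_MB
        else:
            plan[name] = "cloud"                # resource-intensive path
            energy += size * (WAN_J_PER_MB + CLOUD_J_PER_MB)
    return plan, energy

jobs = [("ecg-window", 2, False), ("gait-batch", 10, False),
        ("model-retrain", 120, True)]
print(place(jobs))
```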

Keywords: energy-efficient, fog computing, IoT, telehealth

Procedia PDF Downloads 39
3351 Hierarchical Queue-Based Task Scheduling with CloudSim

Authors: Wanqing You, Kai Qian, Ying Qian

Abstract:

The concept of cloud computing provides users with infrastructure, platform, and software as a service, which makes those services more accessible to people via the Internet. To better analyze the performance of cloud computing provisioning policies as well as resource allocation strategies, a toolkit named CloudSim was proposed. With CloudSim, a cloud computing environment can be easily constructed by modelling and simulating cloud computing components, such as datacenters, hosts, and virtual machines. A good scheduling strategy is the key to achieving load balancing among different machines as well as to improving the utilization of basic resources. Existing scheduling algorithms may work well in some presumptive cases on a single machine; however, they are unable to make the best decision for an unforeseen future. In a real-world scenario, there are numerous tasks as well as several virtual machines working in parallel. Based on the concept of multiple queues, this paper presents a new scheduling algorithm that schedules tasks with CloudSim by taking into account several parameters: the machines' capacity, the priority of tasks, and the history log.
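
A minimal sketch of the multi-queue idea, assuming per-priority queues drained in order with each task going to the least-loaded machine; the Java-based CloudSim toolkit itself is not shown, and all task lengths, priorities, and machine capacities are illustrative.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    capacity: float = 1000.0   # MIPS, assumed
    load: float = 0.0          # normalized busy time

def schedule(tasks, machines, levels=3):
    """tasks: (name, length, priority); queue 0 is the highest priority."""
    queues = [[] for _ in range(levels)]
    for name, length, prio in tasks:
        queues[min(prio, levels - 1)].append((name, length))
    heap = [(m.load, id(m), m) for m in machines]   # least-loaded machine first
    heapq.heapify(heap)
    plan = []
    for q in queues:                                # drain queues in priority order
        for name, length in sorted(q, key=lambda t: -t[1]):  # longest job first
            load, _, m = heapq.heappop(heap)
            m.load = load + length / m.capacity
            plan.append((name, m.name))
            heapq.heappush(heap, (m.load, id(m), m))
    return plan

tasks = [("t1", 400, 0), ("t2", 900, 1), ("t3", 300, 0), ("t4", 700, 2)]
print(schedule(tasks, [Machine("vm-a"), Machine("vm-b", capacity=2000.0)]))
```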

Keywords: hierarchical queue, load balancing, CloudSim, information technology

Procedia PDF Downloads 393
3350 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for the construction of DTMs. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by obtaining numerous point data. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25, and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud data sets to a 50% density level while still maintaining the quality of the DTM.
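
The random reduction step itself is simple to reproduce; a minimal numpy sketch (with a synthetic point cloud standing in for the photogrammetric output) might look like this.

```python
import numpy as np

# Synthetic stand-in for the image-based point cloud; a real one would be
# loaded from the photogrammetric processing output (e.g., an XYZ file).
rng = np.random.default_rng(42)
points = rng.uniform(0, 1000, size=(1_000_000, 3))   # x, y, z in metres

for fraction in (0.75, 0.50, 0.25, 0.05):
    keep = rng.choice(len(points), size=int(fraction * len(points)),
                      replace=False)           # uniform random subset
    subset = points[keep]
    print(f"{fraction:4.0%} subset -> {len(subset):,} points")
    # Each subset would then be interpolated (e.g., by Kriging) to a DTM
    # and compared against the DTM from the full data set.
```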

Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging

Procedia PDF Downloads 124
3349 Monitoring the Railways by Means of C-OTDR Technology

Authors: Andrey V. Timofeev

Abstract:

This paper presents the development results of a method for monitoring seismoacoustic activity based on the vibrosensitive properties of optical fibers. Analysis of the changes in the parameters of Rayleigh backscattered radiation, which take place due to microscopic seismoacoustic impacts on the optical fiber, makes it possible to determine the positions of seismoacoustic emission sources and to identify their types. The results of using this approach for the complex monitoring of railways have been successful.
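
A loose, simplified illustration of the detection principle (not the authors' classifier): compare each backscatter trace against a quiet reference and flag range bins whose windowed energy spikes. The traces and thresholds below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000                                    # range bins along the fibre
reference = rng.normal(0, 1.0, n)           # quiet-state backscatter trace
trace = reference + rng.normal(0, 0.05, n)
trace[3200:3260] += rng.normal(0, 3.0, 60)  # disturbance ~3.2 km down the fibre

diff = np.abs(trace - reference)
window = 50
energy = np.convolve(diff**2, np.ones(window) / window, mode="same")
threshold = energy.mean() + 5 * energy.std()   # illustrative control limit
hits = np.flatnonzero(energy > threshold)
if hits.size:
    print(f"disturbance near bins {hits.min()}-{hits.max()}")
```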

Keywords: C-OTDR systems, monitoring of railways, Rayleigh backscattering, seismoacoustic activity

Procedia PDF Downloads 360
3348 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
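
As one concrete, hedged illustration of the transfer-optimization theme (not a method from the paper): a route chooser that trades egress cost against bandwidth under a time budget, with invented tariffs.

```python
# Invented prices and link speeds; real provider tariffs differ.
ROUTES = {
    ("cloud-a", "cloud-b", "direct"):   {"usd_per_gb": 0.09, "gbps": 1.0},
    ("cloud-a", "cloud-b", "via-edge"): {"usd_per_gb": 0.05, "gbps": 0.4},
}

def best_route(src, dst, gigabytes, max_hours=None):
    """Return the cheapest (cost, hours, name) route meeting the deadline."""
    candidates = []
    for (s, d, name), r in ROUTES.items():
        if (s, d) != (src, dst):
            continue
        hours = gigabytes * 8 / (r["gbps"] * 3600)
        cost = gigabytes * r["usd_per_gb"]
        if max_hours is None or hours <= max_hours:
            candidates.append((cost, hours, name))
    return min(candidates) if candidates else None

print(best_route("cloud-a", "cloud-b", gigabytes=500, max_hours=2.0))
```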

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 39
3347 Conducting Computational Physics Laboratory Course Using Cloud Storage Space

Authors: Ajay Wadhwa

Abstract:

A laboratory course on computational physics is different from conventional lab courses on other topics of physics, like mechanics, heat, or optics, because it involves active participation of the teacher as well as one-to-one interaction between teacher and student. The course content requires the teacher to teach a programming language as well as numerical methods, along with their applications in physics. The task becomes more daunting when about 90% of the students in the class have no previous experience with any programming language. In the presented work, we describe a methodology for conducting the computational physics course by using the Google Drive and Dropitto.me cloud storage services. We evaluated the performance in a class of sixty students by dividing them equally into four groups. One of the groups was made the test group, on which the presented methodology was tried; the other groups were taught by the conventional method of classroom lectures. In order to assess our methodology, we analyzed the performance of the students in four class tests. A study of certain statistical parameters, like the mean, the standard deviation, and a Z-test of hypotheses, revealed that the cloud-storage-based methodology is more efficient than the conventional method of teaching.
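
For readers unfamiliar with the Z-test used here, a worked two-sample sketch follows; the group statistics are invented for illustration, not the paper's data.

```python
from math import erf, sqrt

# Invented sample statistics: mean, standard deviation, group size.
m1, s1, n1 = 72.4, 8.1, 15   # cloud-storage group
m2, s2, n2 = 66.9, 9.0, 15   # conventional-lecture group

z = (m1 - m2) / sqrt(s1**2 / n1 + s2**2 / n2)
p_one_sided = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # normal upper-tail area
print(f"z = {z:.2f}, one-sided p = {p_one_sided:.3f}")
# z > 1.645 would reject the null hypothesis of equal means at the 5% level.
```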

Keywords: computational physics, Z-test hypothesis, cloud storage, Google Drive

Procedia PDF Downloads 276
3346 Data Confidentiality in Public Cloud: A Method for Inclusion of ID-PKC Schemes in OpenStack Cloud

Authors: N. Nalini, Bhanu Prakash Gopularam

Abstract:

The term data security refers to the degree of resistance or protection given to information against unintended or unauthorized access. The core principles of information security are confidentiality, integrity, and availability, also referred to as the CIA triad. Cloud computing services are classified as SaaS, IaaS, and PaaS services. With cloud adoption, confidential enterprise data are moved from the organization's premises to an untrusted public network, and because of this, the attack surface has increased manifold. Several cloud computing platforms, like OpenStack, Eucalyptus, and Amazon EC2, allow users to build and configure public, hybrid, and private clouds. While traditional encryption based on a PKI infrastructure still works in the cloud scenario, the management of public-private keys and trust certificates is difficult. Identity-based Public Key Cryptography (also referred to as ID-PKC) overcomes this problem by using publicly identifiable information for generating the keys and works well with decentralized systems. Users can exchange information securely without having to manage any trust information. Another advantage is that access control (role-based access control policy) information can be embedded into the data, unlike in PKI, where it is handled by a separate component or system. In the OpenStack cloud platform, the Keystone service acts as the identity service for authentication and authorization and has support for a public key infrastructure for its services. In this paper, we explain the OpenStack security architecture and evaluate its PKI infrastructure piece for data confidentiality. We provide a method to integrate ID-PKC schemes for securing data in transit and at rest and explain the key measures for safeguarding data against security attacks. The proposed approach uses the JPBC crypto library for key-pair generation based on the IEEE P1363.3 standard, along with secure communication to other cloud services.
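
Real ID-PKC (e.g., pairing-based schemes implemented with JPBC per IEEE P1363.3) lets a sender encrypt from the identity string and public parameters alone. The toy Python sketch below is not that: it only illustrates the key-issuance workflow — a key generation centre deriving a per-identity key with the access-control role bound into the identity — using symmetric primitives (and the third-party 'cryptography' package) so the demo runs.

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# NOT real pairing-based ID-PKC: a stand-in for key issuance only. Both
# sides here use a KGC-derived symmetric key, which collapses the
# public-key property a real scheme provides.
MASTER_SECRET = os.urandom(32)     # held by the Key Generation Centre (KGC)

def issue_key(identity: str) -> bytes:
    """KGC derives a per-identity key; the RBAC role is bound into it."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

identity = "alice@tenant-1|role=storage-admin"   # policy embedded in identity
key = issue_key(identity)

nonce = os.urandom(12)
ct = AESGCM(key).encrypt(nonce, b"volume snapshot", identity.encode())
# Decryption fails if either the key or the identity/role string differs.
print(AESGCM(issue_key(identity)).decrypt(nonce, ct, identity.encode()))
```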

Keywords: data confidentiality, identity-based cryptography, secure communication, OpenStack Keystone, token scoping

Procedia PDF Downloads 348
3345 Process Mining as an Ecosystem Platform to Mitigate a Deficiency of Processes Modelling

Authors: Yusra Abdulsalam Alqamati, Ahmed Alkilany

Abstract:

The teaching staff is a distinct group whose impact on the educational process plays an important role in enhancing the quality of academic education. To improve the management effectiveness of the academy, the Teaching Staff Management System (TSMS) proposes that all teacher processes be digitized. While the BPMN approach can accurately describe processes, it lacks a clear picture of the process flow map, something that the process mining approach provides by extracting information from event logs for discovery, monitoring, and model enhancement. Therefore, these two methodologies were combined to create the most accurate representation of system operations: extracting data records, mining the processes, recreating them in the form of a Petri net, and then generating a BPMN model for a more in-depth view of the process flow. Additionally, the TSMS processes will be orchestrated to handle all requests within a guaranteed short time, thanks to the integration of the Google Cloud Platform (GCP) with the BPM engine, allowing business owners to take part throughout the entire TSMS project development lifecycle.
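
Assuming an event log exported from the TSMS, the discovery-then-convert step reads roughly as follows with the pm4py library's simplified interface (the file name is a placeholder).

```python
import pm4py

# Load an event log exported from the TSMS (placeholder file name).
log = pm4py.read_xes("tsms_event_log.xes")

# Discover a Petri net from the log, then convert it to BPMN for the
# process-flow view the abstract mentions.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
bpmn_model = pm4py.convert_to_bpmn(net, initial_marking, final_marking)

pm4py.view_petri_net(net, initial_marking, final_marking)
pm4py.view_bpmn(bpmn_model)
```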

Keywords: process mining, BPM, business process model and notation, Petri net, teaching staff, Google Cloud Platform

Procedia PDF Downloads 113
3344 Secure Hashing Algorithm and Advanced Encryption Algorithm in Cloud Computing

Authors: Jaimin Patel

Abstract:

Cloud computing is one of the most significant movements across computing technologies. It provides flexibility to users, cost effectiveness, location independence, easy maintenance, multitenancy, drastic performance improvements, and increased productivity. On the other hand, there are also major issues, such as security. Since the cloud is a shared server, security is a major issue; it is important to protect users' private data, especially in e-commerce and social networks. In this paper, encryption algorithms such as the Advanced Encryption Standard, their vulnerabilities, risks of attacks, optimal time and complexity management, and a comparison with other algorithms based on software implementation are presented. Encryption techniques to improve the performance of the AES algorithm and to reduce risk are given. Secure Hash Algorithms, their vulnerabilities, software implementations, risks of attacks, and a comparison with other hashing algorithms, as well as the advantages and disadvantages of hashing techniques versus encryption, are given.
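
A minimal sketch of the two primitive families the paper surveys, using Python's hashlib and the third-party 'cryptography' package: SHA-256 for integrity and AES in GCM mode for confidentiality with tamper detection. The payload is a placeholder.

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data = b"customer order #1234"               # placeholder payload

digest = hashlib.sha256(data).hexdigest()    # integrity fingerprint

key = AESGCM.generate_key(bit_length=256)    # 256-bit key resists brute force
nonce = os.urandom(12)                       # never reuse a nonce per key
ciphertext = AESGCM(key).encrypt(nonce, data, None)

assert AESGCM(key).decrypt(nonce, ciphertext, None) == data
print("sha256:", digest[:16], "... ciphertext bytes:", len(ciphertext))
```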

Keywords: cloud computing, encryption algorithm, secure hashing algorithm, brute force attack, birthday attack, plaintext attack, man-in-the-middle attack

Procedia PDF Downloads 251
3343 Factors Affecting the Adoption of Cloud Business Intelligence among Healthcare Sector: A Case Study of Saudi Arabia

Authors: Raed Alsufyani, Hissam Tawfik, Victor Chang, Muthu Ramachandran

Abstract:

This study investigates the factors that influence the decision by players in the healthcare sector to embrace cloud business intelligence technology, with a focus on healthcare organizations in Saudi Arabia. To bring this matter into perspective, the study primarily considers the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology (HOT) fit model. A survey was designed around hypotheses drawn from the literature review and was carried out online. The quantitative data obtained were processed, from descriptive and one-way frequency statistics to inferential and regression analysis, and analysed to establish the factors that influence the decision to adopt cloud business intelligence technology in the healthcare sector. The implication of the identified factors was measured, and all assumptions were tested. 66.70% of participants in healthcare organizations backed the intention to adopt a cloud business intelligence system. 99.4% of these participants considered security concerns and privacy risks to be the most significant factors in the adoption of a cloud Business Intelligence (CBI) system. Hypothesis testing through regression analysis indicates that usefulness, service quality, relative advantage, IT infrastructure preparedness, organization structure, vendor support, perceived technical competence, government support, and top management support positively and significantly influence the adoption of a CBI system. The paper presents the quantitative phase of an ongoing project, which will build on the lessons learned from this study.

Keywords: cloud computing, business intelligence, HOT-fit model, TOE, healthcare and innovation adoption

Procedia PDF Downloads 140
3342 Analysis of Energy Flows as an Approach for the Formation of a Monitoring System in Sustainable Regional Development

Authors: Inese Trusina, Elita Jermolajeva

Abstract:

Global challenges require a transition from the existing linear economic model to a model that considers nature as a life-support system for development on the way to social well-being, within the frame of the ecological economics paradigm. The article presents basic definitions for the development of a formalized description of sustainable development monitoring. It provides examples of calculating the monitoring parameters for the Baltic Sea region countries and their primary interpretation.

Keywords: sustainability, development, power, ecological economics, regional economy, monitoring

Procedia PDF Downloads 93
3341 Simulation-Based Unmanned Surface Vehicle Design Using PX4 and Robot Operating System With Kubernetes and Cloud-Native Tooling

Authors: Norbert Szulc, Jakub Wilk, Franciszek Górski

Abstract:

This paper presents an approach for simulating and testing robotic systems based on PX4, using a local Kubernetes cluster. The approach leverages modern cloud-native tools and runs on single-board computers. Additionally, this solution enables the creation of datasets for computer vision and the end-to-end evaluation of control-system algorithms. The paper compares this approach to the commonly used Docker-based approach. The approach was used to develop a simulation environment for an unmanned surface vehicle (USV) for RoboBoat 2023 by running a containerized configuration of the PX4 open-source autopilot connected to ROS and the Gazebo simulation environment.

Keywords: cloud computing, Kubernetes, single board computers, simulation, ROS

Procedia PDF Downloads 45
3340 Intelligent Process Data Mining for the Monitoring of Fault-Free Operation of Industrial Processes

Authors: Hyun-Woo Cho

Abstract:

Real-time fault monitoring and diagnosis of large-scale production processes is helpful and necessary in order to operate industrial processes safely and efficiently while producing good final product quality. Unusual and abnormal events may have a serious impact on the process, such as malfunctions or breakdowns. This work seeks to utilize process measurement data obtained on an on-line basis for the safe, fault-free operation of industrial processes. To this end, the proposed intelligent process data monitoring framework was evaluated on a simulated process. The monitoring scheme extracts the fault pattern in a reduced space for reliable data representation. Moreover, this work shows the results of using linear and nonlinear techniques for the monitoring purpose. The nonlinear technique produced more reliable monitoring results and outperforms the linear methods. The adoption of the qualitative monitoring model helps to reduce the sensitivity of the fault pattern to noise.
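
The linear-versus-nonlinear comparison can be imitated in a few lines with scikit-learn: fit PCA and kernel PCA on fault-free data, set an empirical control limit on a Hotelling-style statistic, and check how many shifted samples are flagged. Data, limits, and the fault pattern below are invented, not the paper's simulation process.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, (500, 10))                  # fault-free training data
faulty = normal[:50] + np.r_[np.zeros(5), 3 * np.ones(5)]  # shift last 5 sensors

def t2(model, train, test):
    """Hotelling-style statistic on the reduced-space scores."""
    z_train, z_test = model.fit_transform(train), model.transform(test)
    var = z_train.var(axis=0) + 1e-12
    return ((z_test**2) / var).sum(axis=1), ((z_train**2) / var).sum(axis=1)

for name, model in [("PCA", PCA(n_components=3)),
                    ("kernel PCA", KernelPCA(n_components=3, kernel="rbf"))]:
    t2_test, t2_train = t2(model, normal, faulty)
    limit = np.percentile(t2_train, 99)               # empirical control limit
    print(f"{name}: {np.mean(t2_test > limit):.0%} of faulty samples flagged")
```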

Keywords: process data, data mining, process operation, real-time monitoring

Procedia PDF Downloads 609
3339 Towards Resource Provisioning for Dynamic Workflows in the Cloud

Authors: Fairouz Fakhfakh, Hatem Hadj Kacem, Ahmed Hadj Kacem

Abstract:

Cloud computing offers a new model of service provisioning for workflow applications, thanks to its elasticity and its pay-per-use model. However, it presents various challenges that need to be addressed in order for it to be utilized efficiently. The resource provisioning problem for workflow applications has been widely studied. Nevertheless, existing works did not consider changes in workflow instances while they are being executed. This functionality has become a major requirement for dealing with unusual situations and evolution. This paper presents a first step towards resource provisioning for dynamic workflows. In fact, we propose a provisioning algorithm that minimizes the overall workflow execution cost while meeting a deadline constraint. Then, we extend it to support the dynamic addition of tasks. Experimental results show that our proposed heuristic achieves a significant reduction in resource cost by using a consolidation process.
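
The paper's algorithm is not reproduced here; as a rough stand-in for the idea — meet the deadline first, then minimize cost, reusing idle leases for consolidation — consider this greedy sketch with invented VM prices and a sequential task list.

```python
VM_TYPES = {"small": (0.05, 1.0), "large": (0.20, 4.0)}  # $/h, speed factor

def provision(tasks, deadline_h):
    """tasks: (name, work hours at unit speed), assumed sequential."""
    leases = []                      # [vm_type, busy_until_h]
    cost, clock = 0.0, 0.0
    for name, work_h in tasks:
        slack = deadline_h - clock
        # Cheapest type fast enough to finish within the remaining slack.
        ok = [(price, t) for t, (price, speed) in VM_TYPES.items()
              if work_h / speed <= slack]
        if not ok:
            raise RuntimeError(f"deadline impossible at task {name}")
        price, vm_type = min(ok)
        run_h = work_h / VM_TYPES[vm_type][1]
        # Consolidation: reuse an idle lease of the same type if one exists.
        for lease in leases:
            if lease[0] == vm_type and lease[1] <= clock:
                lease[1] = clock + run_h
                break
        else:
            leases.append([vm_type, clock + run_h])
        cost += price * run_h
        clock += run_h
    return cost, clock, leases

print(provision([("align", 2.0), ("render", 6.0), ("pack", 1.0)], deadline_h=4.0))
```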

Keywords: cloud computing, resources provisioning, dynamic workflow, workflow applications

Procedia PDF Downloads 255
3338 Causal Inference Engine between Continuous Emission Monitoring Systems and Air Pollution Forecast Modeling

Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng

Abstract:

This paper develops a data-driven model to deal with the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy burden of traditional numerical models for regional weather and air pollution simulation, the lightweight burden of the proposed model can provide hourly forecasting from current observations of weather, air pollution, and emissions from factories. The observation data include wind speed, wind direction, relative humidity, temperature, and others. The observations can be collected in real time from the Open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine that gives an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The elaborated procedures to generate the forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulates, TSP) sensors and the aforementioned forecasted air pollution map, this study also discloses the converting mechanism between TSP and PM2.5/PM10 for different regions and industrial characteristics, according to long-term data observation and calibration. Together, these time-series qualitative and quantitative data achieve a practicable causal inference engine in the cloud for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to suppress their operation and reduce emissions in advance.
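
The random-walk treatment of advection and diffusion mentioned above follows a standard Lagrangian update, x += u*dt + sqrt(2*D*dt)*N(0,1); the sketch below applies it to synthetic particles with invented wind and diffusivity values, then grids the result at the 1 km resolution the abstract cites.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt, steps = 10_000, 60.0, 60            # particles, seconds per step, 1 hour
u, v = 2.0, 0.5                            # wind components, m/s (assumed)
D = 15.0                                   # eddy diffusivity, m^2/s (assumed)
pos = np.zeros((n, 2))                     # all particles start at the stack

for _ in range(steps):
    # Advection by the wind plus a diffusive random-walk kick.
    pos[:, 0] += u * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)
    pos[:, 1] += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n)

# Grid particle counts at 1 km resolution to mimic the forecast raster.
counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                              bins=20, range=[[-2e3, 18e3], [-8e3, 12e3]])
print("peak cell holds", int(counts.max()), "of", n, "particles")
```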

Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT

Procedia PDF Downloads 55
3337 System and Method for Providing Web-Based Remote Application Service

Authors: Shuen-Tai Wang, Yu-Ching Lin, Hsi-Ya Chang

Abstract:

With the development of virtualization technologies, a new type of service named cloud computing has been produced. Cloud users usually encounter the problem of how to use the virtualized platform easily over the web without requiring plug-ins or the installation of special software. The object of this paper is to develop a system and a method enabling process interfacing within an automation scenario for accessing a remote application by using a web browser. To meet this challenge, we have devised a web-based interface that allows the system to shift GUI applications from the traditional local environment to the cloud platform, stored on a remote virtual machine. We designed the sketch of the web interface following the cloud virtualization concept, which seeks to enable communication and collaboration among users. We describe the design requirements of the remote application technology and present implementation details of the web application and its associated components. We conclude that this effort has the potential to provide an elastic and resilient environment for several application services. Users no longer have to bear the burden of system maintenance, and the overall cost of software licenses and hardware is reduced. Moreover, this remote application service represents the next step towards the mobile workplace, as it lets users use the remote application virtually from anywhere.

Keywords: virtualization technology, virtualized platform, web interface, remote application

Procedia PDF Downloads 253
3336 Foot Self-Monitoring Knowledge, Attitude, Practice, and Related Factors among Diabetic Patients: A Descriptive and Correlational Study in a Taiwan Teaching Hospital

Authors: Li-Ching Lin, Yu-Tzu Dai

Abstract:

Recurrent foot ulcers or foot amputation have a major impact on patients with diabetes mellitus (DM), medical professionals, and society. A critical procedure for foot care is foot self-monitoring. Medical professionals' understanding of patients' foot self-monitoring knowledge, attitude, and practice is beneficial for raising patients' disease awareness. This study investigated these and related factors among patients with DM through a descriptive, correlational study. A scale for measuring the foot self-monitoring knowledge, attitude, and practice of patients with DM was used. Purposive sampling was adopted, and 100 samples were collected from the respondents' self-reports or from interviews. The statistical methods employed were an independent-sample t-test, one-way analysis of variance, Pearson correlation coefficients, and multivariate regression analysis. The findings were as follows: the respondents scored an average of 12.97 on foot self-monitoring knowledge, and the correct answer rate was 68.26%. The respondents performed relatively poorly in foot health screenings and recording and in awareness of neuropathy in the foot. The respondents held a positive attitude toward self-monitoring their feet and a negative attitude toward having others check the soles of their feet. The respondents scored an average of 12.64 on foot self-monitoring practice. Their scores were lower in the frequency of self-monitoring their feet, recording their self-monitoring results, checking their pedal pulse, and examining whether their soles were red immediately after taking off their shoes. Significant positive correlations were observed among foot self-monitoring knowledge, attitude, and practice. The correlation coefficient between self-monitoring knowledge and self-monitoring practice was 0.20, and that between self-monitoring attitude and self-monitoring practice was 0.44. Stepwise regression analysis revealed that the main predictive factors of foot self-monitoring practice in patients with DM were foot self-monitoring attitude, prior experience in foot care, and an educational attainment of college or higher. These factors predicted 33% of the variance. This study concludes that patients with DM lacked foot self-monitoring practice and advises that patients' self-monitoring abilities be evaluated first, including whether they have poor eyesight or difficulty bending forward due to obesity, and whether there are people who can assist them in self-monitoring. In addition, patient education should emphasize self-monitoring knowledge and practice, such as perceptions regarding the symptoms of foot neurovascular lesions, pulse monitoring methods, and new foot self-monitoring equipment. By doing so, new or recurring ulcers may be discovered in their early stages.

Keywords: diabetic foot, foot self-monitoring attitude, foot self-monitoring knowledge, foot self-monitoring practice

Procedia PDF Downloads 169
3335 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm

Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra

Abstract:

With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes, and some make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable, because other existing systems working on the site could be blinded at most spectral levels. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy, even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
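
A heavily simplified sketch of the passive stereo depth step (the rotational head, sub-pixel refinement, and point-cloud fusion are out of scope) using OpenCV's semi-global matcher; file names and rig parameters are assumptions.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

f_px = 4200.0        # focal length in pixels (assumed calibration)
baseline_m = 1.5     # camera separation (assumed)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]   # Z = f * B / d
print("median range of valid pixels:", float(np.median(depth_m[valid])), "m")
```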

Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction

Procedia PDF Downloads 94
3334 Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study

Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah

Abstract:

Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, thereby making it difficult for their businesses to thrive competitively in the same market environment as their large-enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis, which does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with a Quality of Service (QoS) that meets the organization's customized requirements. The proposed model takes care of that and goes a step further to select the most affordable among a selected few of the CSPs. In the earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted a structured interview with three experts to validate the proposed model. In conclusion, the validity and reliability of the instrument were tested through experts and typical respondents and analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.
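
The published model's criteria and weights are not reproduced here; purely as a shape of the computation, a weighted-sum QoS score followed by a price tie-break could look like this, with invented providers, weights, and figures.

```python
WEIGHTS = {"availability": 0.4, "support": 0.3, "compliance": 0.3}  # assumed

providers = {                    # criterion scores on a 0-1 scale, $/month
    "csp-x": ({"availability": 0.99, "support": 0.7, "compliance": 0.9}, 120),
    "csp-y": ({"availability": 0.95, "support": 0.9, "compliance": 0.8}, 95),
    "csp-z": ({"availability": 0.90, "support": 0.6, "compliance": 0.7}, 60),
}

def select(providers, shortlist=2):
    scored = sorted(providers.items(),
                    key=lambda kv: sum(WEIGHTS[c] * kv[1][0][c] for c in WEIGHTS),
                    reverse=True)
    top = scored[:shortlist]                  # meet QoS first...
    return min(top, key=lambda kv: kv[1][1])  # ...then choose by price

name, (qos, price) = select(providers)
print(f"selected {name} at ${price}/month")
```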

Keywords: cloud service provider, enterprise application, quality of service, selection criteria, small and medium enterprise

Procedia PDF Downloads 156
3333 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual effort. Implementing scalable CI/CD for development using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), Serverless Computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker Containerization (Docker-based images and container techniques), Jenkins (CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and is capable of accommodating dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, Serverless Computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources that are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
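
As a toy stand-in for the parallel-testing idea (in the setup above each worker would be an ECS/Fargate container rather than a local process), fanning test cases out across workers might look like this; the test paths are placeholders.

```python
import concurrent.futures as cf
import subprocess, sys

TESTS = [f"tests/case_{i:03d}.py" for i in range(500)]   # placeholder test IDs

def run_one(test_path: str) -> tuple[str, int]:
    """Run a single test file and report its exit status."""
    proc = subprocess.run([sys.executable, "-m", "pytest", "-q", test_path],
                          capture_output=True)
    return test_path, proc.returncode

with cf.ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(run_one, TESTS))

failed = [t for t, rc in results if rc != 0]
print(f"{len(TESTS) - len(failed)} passed, {len(failed)} failed")
```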

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 8
3332 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data

Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira

Abstract:

Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ, and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, combining data from Copernicus satellites and drones/unmanned aerial vehicles, validated by existing online in situ data. Since WORSICA operates on the European Open Science Cloud (EOSC) computational infrastructure, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may apply, depending on the type of computational resources needed by each application/user. The service has three main sub-services: i) coastline detection; ii) inland water detection; and iii) water leak detection in irrigation networks. In the present study, an application of the service to the Óbidos lagoon in Portugal is shown, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from the satellite image processing. In conjunction with the tidal data obtained from the FES model, the system can estimate a coastline at the corresponding water level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas, such as: i) emergency response, by providing fast access to information on inundated areas to support rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
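
Of the indexes listed, NDWI is the simplest to show: McFeeters' formulation uses the green and NIR bands, and thresholding it yields a first-pass water mask. The band arrays below are synthetic stand-ins for, e.g., Sentinel-2 B03/B08 reflectances.

```python
import numpy as np

rng = np.random.default_rng(3)
green = rng.uniform(0.02, 0.30, (512, 512))   # stand-in for Sentinel-2 B03
nir = rng.uniform(0.01, 0.40, (512, 512))     # stand-in for Sentinel-2 B08

ndwi = (green - nir) / (green + nir + 1e-9)   # NDWI = (G - NIR) / (G + NIR)
water_mask = ndwi > 0.0                       # common first-pass threshold
print(f"water fraction: {water_mask.mean():.1%}")
# In WORSICA-like processing, the mask edge combined with the tide level at
# acquisition time yields a coastline at a known elevation.
```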

Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC

Procedia PDF Downloads 96
3331 Implementation of Clinical Monitoring System of Physiological Parameters

Authors: Abdesselam Babouri, Ahcène Lemzadmi, M. Rahmane, B. Belhadi, N. Abouchi

Abstract:

Medical monitoring aims at monitoring and remotely controlling the vital physiological parameters of the patient. Physiological sensors provide repetitive measurements of these parameters in the form of electrical signals that vary continuously over time. Various measures inform us about the person's physiological data (weight, blood pressure, heart rate, or disease-specific parameters), environmental conditions (temperature, humidity, light, noise level), and displacement and movements (physical efforts and the completion of major daily living activities). The collected data allow monitoring of the patient's condition and alerting in case of modification. They are also used in diagnosis and decision-making regarding medical treatment and the health of the patient. This work presents the implementation of a monitoring system to be used for the control of physiological parameters.

Keywords: clinical monitoring, physiological parameters, biomedical sensors, personal health

Procedia PDF Downloads 438
3330 Rapid Monitoring of Earthquake Damages Using Optical and SAR Data

Authors: Saeid Gharechelou, Ryutaro Tateishi

Abstract:

An earthquake is an inevitable catastrophic natural disaster. Damage to buildings and man-made structures, where most human activities occur, is the major cause of earthquake casualties. A comparison of optical and SAR data is presented for the case of the Kathmandu valley, which was hit hard by the 2015 Nepal earthquake. Many existing studies have conducted optical-data-based estimation or suggested the combined use of optical and SAR data for improved accuracy; however, finding cloud-free optical images when they are urgently needed is not assured. Therefore, this research specializes in developing a SAR-based technique with the target of rapid and accurate geospatial reporting, considering that the limited time available in a post-disaster situation calls for quick computation based exclusively on two pairs of pre-seismic and co-seismic single look complex (SLC) images. The pre-seismic, co-seismic, and post-seismic InSAR coherence was used to detect changes in the damaged area. In addition, ground truth data from the field were applied to the optical data through random forest classification for the detection of damaged areas; the ground truth data collected in the field were also used to assess the accuracy of the supervised classification approach. A higher accuracy was obtained from the optical data than from the integration of optical and SAR data. Since the availability of cloud-free images when urgently needed after an earthquake event is not assured, further research on improving SAR-based damage detection is suggested. It is expected that quick reporting of the post-disaster damage situation, quantified by the rapid earthquake assessment and providing very accurate damage information, will assist in channelling rescue and emergency operations and in informing the public about the scale of the damage.
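
A conceptual sketch of coherence-based change detection (not the authors' processing chain): pixels whose coherence was high in the pre-seismic pair but drops in the co-seismic pair are flagged as candidate damage. The rasters and thresholds below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
coh_pre = np.clip(rng.normal(0.7, 0.1, (300, 300)), 0, 1)  # pre-seismic pair
coh_co = coh_pre.copy()
coh_co[120:180, 90:160] -= 0.45          # a simulated damaged block

drop = coh_pre - coh_co
# Require previously coherent ground plus a large coherence loss.
damage_mask = (coh_pre > 0.5) & (drop > 0.3)   # thresholds are illustrative
print(f"flagged pixels: {damage_mask.sum()}")
```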

Keywords: Sentinel-1A data, Landsat-8, earthquake damage, InSAR, rapid damage monitoring, 2015-Nepal earthquake

Procedia PDF Downloads 140
3329 A Survey on Internet of Things and Fog Computing as a Platform for Internet of Things

Authors: Samira Kalantary, Sara Taghipour, Mansoure Ghias Abadi

Abstract:

The Internet of Things (IoT) is a technological revolution that represents the future of computing and communications. The IoT is the convergence of the Internet with RFID, NFC, sensors, and smart objects. Fog computing is the natural platform for the IoT. At present, the IoT, as a new network communication technology, has rapidly shifted from concept to application on the fog computing virtual storage and computing platform. In this paper, we give an overview of the IoT and the differences between cloud computing and fog computing.

Keywords: cloud computing, fog computing, Internet of Things (IoT), IoT application

Procedia PDF Downloads 548
3328 Optimization of Monitoring Networks for Air Quality Management in Urban Hotspots

Authors: Vethathirri Ramanujam Srinivasan, S. M. Shiva Nagendra

Abstract:

Air quality management in urban areas is a serious concern in both developed and developing countries. In this regard, more air quality monitoring stations are planned to mitigate air pollution in urban areas. In India, the Central Pollution Control Board has set up 574 air quality monitoring stations across the country and proposes to set up another 500 stations in the next few years. The number of monitoring stations for each city has been decided based on population data. Setting up ambient air quality monitoring stations and their operation and maintenance are highly expensive; therefore, there is a need to optimize monitoring networks for air quality management. The present paper discusses various methods, such as the Indian Standards (IS) method, the US EPA method, and the European Union (EU) method, to arrive at the minimum number of air quality monitoring stations. In addition, optimization by the rain-gauge method and the Inverse Distance Weighted (IDW) method using a Geographical Information System (GIS) is also explored in the present work for the design of the air quality network in Chennai city. In summary, an additional 18 stations are required for Chennai city, and the potential monitoring locations, with their corresponding land-use patterns, are ranked and identified from the 1 km x 1 km grids.
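
The IDW step is easy to make concrete: each grid cell is estimated as a 1/d^p weighted average of the station readings. Station locations and values below are invented placeholders.

```python
import numpy as np

stations = np.array([[2.0, 3.0], [8.0, 1.5], [5.0, 9.0]])   # station x, y in km
values = np.array([42.0, 65.0, 51.0])                       # readings, ug/m3

def idw(points, p=2.0, eps=1e-9):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p
    return (w * values).sum(axis=1) / w.sum(axis=1)

gx, gy = np.meshgrid(np.linspace(0, 10, 11), np.linspace(0, 10, 11))
surface = idw(np.c_[gx.ravel(), gy.ravel()]).reshape(gx.shape)
print(f"estimated field: min {surface.min():.1f}, max {surface.max():.1f}")
```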

Keywords: air quality monitoring network, inverse distance weighted method, population based method, spatial variation

Procedia PDF Downloads 156
3327 Design and Implementation of a Monitoring System Using Arduino and MATLAB

Authors: Jonas P. Reges, Jessyca A. Bessa, Auzuir R. Alexandria

Abstract:

The research arose from the need to monitor temperature and relative humidity, identified in past work that involved the study of a greenhouse located in the Research and Extension Unit (UEPE). This research raised several unknowns that were resolved through bibliographical research. Based on the studies performed, some monitoring methods were found, including serial communication between the Arduino and MATLAB, which proved a great option due to its low cost. The project was conducted in two stages: in the first, an algorithm was developed for the Arduino and MATLAB; in the second, the circuits were assembled and monitoring tests were performed on the following variables: moisture, temperature, and distance. During testing, it was possible to observe momentary changes in the levels of the monitored variables. The project showed satisfactory results, such as real-time verification of changes in the state variables, the low acquisition cost of the prototype, and the possibility of easily changing the programming to monitor other variables. Therefore, the project showed the possibility of monitoring through software and hardware that are easy to program and can be used in several areas. However, there is also the possibility of improving the project with remote monitoring via Bluetooth or a web server and through control of the monitored variables.
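
The original work reads the Arduino from MATLAB; a Python stand-in for the same serial link, assuming the pyserial package and an Arduino sketch printing comma-separated "humidity,temperature,distance" lines (the port name is a placeholder), might look like this.

```python
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:  # placeholder port
    for _ in range(10):                       # read ten samples
        line = port.readline().decode("ascii", errors="ignore").strip()
        parts = line.split(",")
        if len(parts) != 3:                   # skip incomplete lines
            continue
        humidity, temperature, distance = map(float, parts)
        print(f"RH={humidity}%  T={temperature}C  d={distance}cm")
```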

Keywords: automation, monitoring, programming, arduino, matlab

Procedia PDF Downloads 481
3326 Remote Wireless Patient Monitoring System

Authors: Sagar R. Patil, Dinesh R. Gawade, Sudhir N. Divekar

Abstract:

One of the medical devices we find when we visit a hospital care unit is the 'patient monitoring system'. This device informs doctors and nurses about a patient's physiological signals. However, it does not have a remote monitoring capability, which necessitates constant onsite attendance by support personnel (doctors and nurses). Thus, we have developed a remote wireless patient monitoring system, a portable patient monitor using biomedical sensors and the Android OS. This device monitors the biomedical signals of patients in real time and sends them to remote stations (doctors' and nurses' Android smartphones and the web) for display, with alerts when necessary. The wireless patient monitoring system differs from the conventional device in two respects: first, its wireless communication capability allows physiological signals to be monitored remotely; second, it is portable, so patients can move while their biomedical signals are being monitored. The system is also notable because of its implementation. We integrated sensors for pulse oximetry (SpO2), temperature, respiration, blood pressure (BP), heart rate, and electrocardiography (ECG) into this device, and the monitoring and communication applications are implemented on the Android OS using threads, which facilitate the stable and timely manipulation of signals and the appropriate sharing of resources. The biomedical data are displayed on an Android smartphone as well as on the web. Using a web server and database system, these physiological signals can be shared with medical personnel at remote locations or anywhere in the world. We verified that the multitasking implementation used in the system was suitable for patient monitoring and for other healthcare applications.

Keywords: patient monitoring, wireless patient monitoring, bio-medical signals, physiological signals, embedded system, Android OS, healthcare, pulse oximeter (SPO2), thermometer, respiration, blood pressure (BP), heart rate, electrocardiogram (ECG)

Procedia PDF Downloads 541
3325 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process

Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum

Abstract:

Due to the increasing demand for quality assurance and reliability in additive manufacturing, the development of advanced in-situ monitoring systems is required to monitor process anomalies as input for further process control. Optical-based monitoring systems, such as CMOS and NIR cameras, have proven effective for monitoring geometrical distortion and exceptional thermal distribution. Therefore, many studies and applications focus on the suitability of optical-based monitoring systems for detecting various types of defects. However, the capability of the monitoring setup is usually not quantified. In this study, a quantification model for evaluating the capability of monitoring setups for the LPBF machine, based on monitoring data acquired from a designed test artifact, is presented, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of its integration, and the light conditions. The methodology of data processing to quantify the capability for each aspect is discussed. The minimal detectable feature size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD-camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, which makes it possible to evaluate monitoring systems with the same concept but different setups for the LPBF process, and it provides direction for improving the setups.

Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact

Procedia PDF Downloads 170
3324 Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud

Authors: Yoji Yamato

Abstract:

In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS cloud, to decrease the verification costs of patches. These days, IaaS services have spread, and many users can customize virtual machines on IaaS clouds like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to apply and verify these patches by themselves, which increases users' operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for the user virtual environments from a test case DB, distributes patches to the virtual machines in the replicated environments, and conducts those test cases automatically in the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness of the test case creation effort through our proposed idea of a two-tier abstraction of software functions and test cases. We also evaluated the automatic verification performance of environment replication, test case extraction, and test case execution.

Keywords: OpenStack, cloud computing, automatic verification, Jenkins

Procedia PDF Downloads 457