Search results for: cloud computing network

5398 Misleading Node Detection and Response Mechanism in Mobile Ad-Hoc Network

Authors: Earleen Jane Fuentes, Regeene Melarese Lim, Franklin Benjamin Tapia, Alexis Pantola

Abstract:

Mobile Ad-hoc Network (MANET) is an infrastructure-less network of mobile devices, also known as nodes. These nodes heavily rely on each other’s resources such as memory, computing power, and energy. Thus, some nodes may become selective in forwarding packets so as to conserve their resources. These nodes are called misleading nodes. Several reputation-based techniques (e.g. CORE, CONFIDANT, LARS, SORI, OCEAN) and acknowledgment-based techniques (e.g. TWOACK, S-TWOACK, EAACK) have been proposed to detect such nodes. These techniques do not appropriately punish misleading nodes. Hence, this paper addresses the limitations of these techniques using a system called MINDRA.
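
To illustrate the reputation-based family of techniques listed above, the sketch below implements a hypothetical watchdog-style reputation table that flags a neighbor whose forwarding ratio decays below a threshold. It is a minimal illustration of the general approach (the CORE/OCEAN family), not the MINDRA mechanism itself, whose details the abstract does not give; the threshold and decay values are assumptions.

```python
# Hypothetical watchdog-style reputation table; illustrates the general
# reputation-based approach, not the MINDRA protocol itself.

class ReputationTable:
    def __init__(self, threshold=0.5, decay=0.9):
        self.threshold = threshold   # below this forwarding ratio a node is flagged
        self.decay = decay           # older observations count less
        self.stats = {}              # node_id -> [forwarded, observed]

    def observe(self, node_id, forwarded):
        fwd, obs = self.stats.get(node_id, [0.0, 0.0])
        # exponential decay so a node cannot build credit and then misbehave
        self.stats[node_id] = [fwd * self.decay + (1.0 if forwarded else 0.0),
                               obs * self.decay + 1.0]

    def is_misleading(self, node_id):
        fwd, obs = self.stats.get(node_id, [0.0, 0.0])
        return obs > 0 and (fwd / obs) < self.threshold


table = ReputationTable()
for pkt in range(20):
    table.observe("node_7", forwarded=(pkt % 3 != 0))  # drops every 3rd packet
print(table.is_misleading("node_7"))  # ratio ~0.66 stays above the 0.5 threshold
```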

Keywords: acknowledgment-based techniques, mobile ad-hoc network, selfish nodes, reputation-based techniques

Procedia PDF Downloads 350
5397 Development of a Shape Based Estimation Technology Using Terrestrial Laser Scanning

Authors: Gichun Cha, Byoungjoon Yu, Jihwan Park, Minsoo Park, Junghyun Im, Sehwan Park, Sujung Sin, Seunghee Park

Abstract:

The goal of this research is to estimate structural shape change using terrestrial laser scanning. The study develops a data-reduction and shape-change estimation algorithm for large-capacity scan data. The point cloud of the scan data is converted to voxels and sampled. A shape-estimation technique is studied to detect changes in structures such as skyscrapers, bridges, and tunnels based on large point cloud data. The point cloud analysis applies the octree data structure to speed up the post-processing for change detection. The reduced point cloud is a representative value of the shape information and is used as a model for detecting point cloud changes within the data structure. The shape-estimation model aims at a technology that can detect not only gradual but also immediate structural changes in the event of disasters such as earthquakes, typhoons, and fires, thereby preventing major accidents caused by aging and disasters. The study is expected to improve the efficiency of structural health monitoring and maintenance.
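
A minimal sketch of the voxel-based reduction step described above, in plain NumPy: every occupied voxel is replaced by the centroid of its points, which then serves as the representative value for change detection. The voxel size is an assumed parameter, and the octree indexing and the change-detection logic themselves are omitted.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Reduce a point cloud by replacing all points in a voxel with their centroid.

    points: (N, 3) array of x, y, z coordinates from the scanner.
    Returns one representative point per occupied voxel.
    """
    # integer voxel index of every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # group points that share a voxel index
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    # centroid per voxel: sum member coordinates, divide by member count
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

cloud = np.random.rand(100_000, 3)        # stand-in for a terrestrial scan
reduced = voxel_downsample(cloud, 0.05)
print(cloud.shape, "->", reduced.shape)
```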

Keywords: terrestrial laser scanning, point cloud, shape information model, displacement measurement

Procedia PDF Downloads 200
5396 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing

Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng

Abstract:

Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) decides how to dedicate nodes to specific applications so as to mitigate this interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming to minimize communication hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two special allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer programming model is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed; it extends simulated annealing (SA) by learning a repair operator and employing it in a guided heuristic search. The efficacy of NSA is demonstrated in a computational study against SA and SCIP. The results of the numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
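
For concreteness, here is a plain simulated annealing baseline for the window-based assignment, i.e., the classical algorithm that NSA extends; the learned repair operator is not shown. The hop-cost function and the flat topology description (one leaf switch per group of nodes) are simplified assumptions.

```python
import math
import random

def hop_cost(assign, switch_of):
    """Assumed interference metric: node pairs of the same job placed under
    different leaf switches pay one extra communication hop."""
    cost = 0
    for nodes in assign:
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                cost += switch_of[a] != switch_of[b]
    return cost

def anneal(job_sizes, free_nodes, switch_of, t0=5.0, alpha=0.995, steps=5000):
    # random initial mapping of the queued jobs onto the free nodes
    nodes = random.sample(free_nodes, sum(job_sizes))
    assign, k = [], 0
    for size in job_sizes:
        assign.append(nodes[k:k + size])
        k += size
    cur = best = hop_cost(assign, switch_of)
    t = t0
    for _ in range(steps):
        # neighbor move: swap two nodes between two (possibly equal) jobs
        j1, j2 = random.randrange(len(assign)), random.randrange(len(assign))
        i1, i2 = random.randrange(len(assign[j1])), random.randrange(len(assign[j2]))
        assign[j1][i1], assign[j2][i2] = assign[j2][i2], assign[j1][i1]
        new = hop_cost(assign, switch_of)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new                     # accept the move
            best = min(best, cur)
        else:                             # reject: undo the swap
            assign[j1][i1], assign[j2][i2] = assign[j2][i2], assign[j1][i1]
        t *= alpha                        # cool down
    return assign, best

# 8 free nodes, 4 per leaf switch; two queued jobs needing 3 and 4 nodes
switch_of = {n: n // 4 for n in range(8)}
mapping, cost = anneal([3, 4], list(range(8)), switch_of)
print(mapping, "hop cost:", cost)
```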

Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware

Procedia PDF Downloads 69
5395 Factors Affecting U-Computing Use

Authors: Shui Lien Chen, Chen-Yin Kuo

Abstract:

U-computing has brought many new commerce services that can provide new experiences for customers. Location-Based Services (LBS) are one such u-computing service. With the increase in smartphone and mobile internet users, many small and medium-sized enterprises (SMEs) in Taiwan have adopted LBS in their marketing strategies. For example, they offer a benefit (e.g., a discount, free dessert, or coupon) for a Facebook check-in to encourage customer purchases. Therefore, this study examines which factors affect SMEs' adoption of u-computing and their performance after adoption. The study collected 187 usable responses, which were analyzed with SmartPLS 2.0 software. The results of this study are as follows. First, entrepreneurial orientation and market orientation positively affect innovation. Second, business resources and innovation positively affect u-computing use. Finally, u-computing use positively affects both business value and customer value.

Keywords: entrepreneurial orientation, market orientation, innovation, business resources, u-computing use, LBS

Procedia PDF Downloads 548
5394 Utilizing Grid Computing to Enhance Power Systems Performance

Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima

Abstract:

Power load is one of the most important controlling factors: it determines power demand and illustrates power usage, shaping the power market. Hence, power load forecasting facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load to find an accurate simulated solution with minimum error. The aim of this paper is an algorithm that makes the load forecasting application faster. The algorithm enables the MATLAB power application to be executed by multiple machines in a grid computing system, accomplishing it in much less time, at lower cost, and with high accuracy and quality. Grid computing, the modern computational distribution technology, has been used to enhance the performance of power applications by utilizing idle and willing grid contributor(s) that share computational power resources.
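
The paper's implementation runs in MATLAB over DeskGrid; as a rough stand-in for the forecasting core, the sketch below fits a small neural network to a lagged load series in Python with scikit-learn. The synthetic data, lag window, and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# synthetic hourly load: daily cycle plus noise (stand-in for real demand data)
hours = np.arange(24 * 60)
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 3, hours.size)

# supervised framing: predict the next hour from the previous 24 hours
lag = 24
X = np.array([load[i:i + lag] for i in range(load.size - lag)])
y = load[lag:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"test MAPE: {mape:.2f}%")
```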

Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting

Procedia PDF Downloads 447
5393 Performance Evaluation of Task Scheduling Algorithm on LCQ Network

Authors: Zaki Ahmad Khan, Jamshed Siddiqui, Abdus Samad

Abstract:

The scheduling and mapping of tasks on a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. This multiprocessor is a hybrid network that combines features of both linear and cube-based architectures. Two standard dynamic scheduling schemes, namely Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS), are implemented on the LCQ network. Parallel tasks are mapped, and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are thoroughly analyzed to obtain the best solution for the given network in terms of residual load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated for the given dynamic algorithms.
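
The abstract does not state the exact load imbalance formula; a commonly used definition, assumed here purely for illustration, is the spread between the busiest and lightest processor normalized by the mean load:

```python
def load_imbalance(loads):
    """Load imbalance factor: (max - min) / mean of per-processor loads.
    This is a common generic definition, assumed here since the abstract
    does not state the exact metric used for the LCQ network."""
    mean = sum(loads) / len(loads)
    return (max(loads) - min(loads)) / mean if mean else 0.0

# task counts per processor after a scheduling round on a 16-node network
after_round = [9, 8, 8, 7, 9, 8, 7, 8, 9, 8, 8, 7, 8, 8, 9, 7]
print(f"load imbalance: {load_imbalance(after_round):.3f}")
```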

Keywords: dynamic algorithm, load imbalance, mapping, task scheduling

Procedia PDF Downloads 420
5392 Clouds Influence on Atmospheric Ozone from GOME-2 Satellite Measurements

Authors: S. M. Samkeyat Shohan

Abstract:

This study focuses on the determination and analysis of the photolysis rate of atmospheric, specifically tropospheric, ozone as a function of cloud properties throughout the year 2007. The observational basis for the ozone concentrations and cloud properties is the measurement data set of the Global Ozone Monitoring Experiment-2 (GOME-2) sensor on board the polar-orbiting Metop-A satellite. Two different spectral ranges are used: ozone total columns are calculated from the wavelength window 325-335 nm, while cloud properties, such as cloud top height (CTH) and cloud optical thickness (COT), are derived from the absorption band of molecular oxygen centered at 761 nm. Cloud fraction (CF) is derived from measurements in the ultraviolet, visible, and near-infrared range of GOME-2. First, ozone concentrations above clouds are derived from ozone total columns, subtracting the contribution of stratospheric ozone and filtering out those satellite measurements which have thin and low clouds. Then, the values of ozone photolysis derived from observations are compared with theoretical model results, in the latitudinal belts 5°N-5°S and 20°N-20°S, as functions of CF and COT. In general, good agreement is found between the data and the model, confirming both the quality of the space-borne ozone and cloud properties and the modeling theory of the ozone photolysis rate. The remaining discrepancies can, however, amount to approximately 15%. Latitudinal seasonal changes in the photolysis rate of ozone are found to be negatively correlated with changes in upper-tropospheric ozone concentrations only in the autumn and summer months within the northern and southern tropical belts, respectively. This fact points to the entangled roles of temperature and nitrogen oxides in ozone production, which are superimposed on its sole photolysis induced by thick and high clouds in the tropics.

Keywords: cloud properties, photolysis rate, stratospheric ozone, tropospheric ozone

Procedia PDF Downloads 183
5391 EECS: Reimagining the Future of Technology Education through Electrical Engineering and Computer Science Integration

Authors: Yousef Sharrab, Dimah Al-Fraihat, Monther Tarawneh, Aysh Alhroob, Ala’ Khalifeh, Nabil Sarhan

Abstract:

This paper explores the evolution of Electrical Engineering (EE) and Computer Science (CS) education in higher learning, examining the feasibility of unifying them into Electrical Engineering and Computer Science (EECS) for the technology industry. It delves into the historical reasons for their separation and underscores the need for integration. Emerging technologies such as AI, Virtual Reality, IoT, Cloud Computing, and Cybersecurity demand an integrated EE and CS program to enhance students' understanding. The study evaluates curriculum integration models, drawing from prior research and case studies, demonstrating how integration can provide students with a comprehensive knowledge base for industry demands. Successful integration necessitates addressing administrative and pedagogical challenges. For academic institutions considering merging EE and CS programs, the paper offers guidance, advocating for a flexible curriculum encompassing foundational courses and specialized tracks in computer engineering, software engineering, bioinformatics, information systems, data science, AI, robotics, IoT, virtual reality, cybersecurity, and cloud computing. Elective courses are emphasized to keep pace with technological advancements. Implementing this integrated approach can prepare students for success in the technology industry, addressing the challenges of a technologically advanced society reliant on both EE and CS principles. Integrating EE and CS curricula is crucial for preparing students for the future.

Keywords: electrical engineering, computer science, EECS, curriculum integration of EE and CS

Procedia PDF Downloads 19
5390 A Novel Computer-Generated Hologram (CGH) Achieved Scheme Generated from Point Cloud by Using a Lens Array

Authors: Wei-Na Li, Mei-Lan Piao, Nam Kim

Abstract:

We propose a novel computer-generated hologram (CGH) scheme, wherein the CGH is generated from a point cloud that is obtained, through a mapping relationship, from a series of elemental images captured from a real three-dimensional (3D) object using a lens array. The scheme is composed of three procedures: mapping from elemental images to a point cloud, hologram generation, and hologram display. A mapping method is devised to obtain virtual volume data (a point cloud) from a series of elemental images. This mapping method consists of two steps. First, the coordinate (x, y) pairs and their numbers of appearances are calculated from the series of sub-images, which are generated from the elemental images. Second, a series of corresponding coordinates (x, y, z) is calculated from the elemental images. A hologram is then generated from the volume data produced by the previous two steps. Finally, a spatial light modulator (SLM) and a green laser beam are utilized to display this hologram and reconstruct the original 3D object. In this paper, in order to achieve a more autostereoscopic display of a real 3D object, we successfully obtained the actual depth data of every discrete point of the real 3D object and overcame the inherent drawbacks of depth cameras by obtaining the point cloud from the elemental images.
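
The hologram-generation step can be sketched as a textbook point-source CGH: the complex field on the hologram plane is the superposition of spherical waves emitted by each point of the cloud. The wavelength matches the green laser mentioned above; the SLM pitch and resolution and the toy point cloud are assumptions, and the elemental-image mapping stage is omitted.

```python
import numpy as np

wavelength = 532e-9                 # green laser, as in the display setup
k = 2 * np.pi / wavelength
pitch, nx, ny = 8e-6, 512, 512      # assumed SLM pixel pitch and resolution

# hologram-plane pixel coordinates, centered on the optical axis
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

# toy point cloud: (x, y, z, amplitude); z is distance from the hologram plane
points = [(0.0, 0.0, 0.10, 1.0), (1e-3, -1e-3, 0.12, 0.8)]

field = np.zeros((ny, nx), dtype=complex)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp * np.exp(1j * k * r) / r   # spherical wave from the point

hologram = np.angle(field)   # phase-only pattern for a phase SLM
```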

Keywords: elemental image, point cloud, computer-generated hologram (CGH), autostereoscopic display

Procedia PDF Downloads 550
5389 Influence of Model Hydrometeor Form on Probability of Discharge Initiation from Artificial Charged Water Aerosol Cloud

Authors: A. G. Temnikov, O. S. Belova, L. L. Chernensky, T. K. Gerastenok, N. Y. Lysov, A. V. Orlov, D. S. Zhuravkova

Abstract:

Hypotheses of lightning initiation on arrays of large hydrometeors are under consideration. There is no agreement about which hydrometeor form is best suited for lightning initiation from a thundercloud. Artificial charged water aerosol clouds of positive or negative polarity can help investigate the possible influence of hydrometeor form on the peculiarities and probability of lightning discharge initiation between a thundercloud and the ground. Artificial charged aerosol clouds that can create electric field strengths in the range of 5-6 kV/cm to 16-18 kV/cm were used in the experiments. An array of model hydrometeors of volume or plate form was placed near the bottom cloud boundary. It was established that different kinds of discharge can be initiated in the presence of the model hydrometeor array, from cloud discharges up to diffuse and channel discharges between the charged cloud and the ground. It was found that the form of the model hydrometeors can significantly influence channel discharge initiation from artificial charged aerosol clouds of negative and positive polarity, respectively. Analysis and generalization of the experimental results show that the maximal probability of channel discharge initiation and propagation was observed for an artificial charged cloud of positive polarity when arrays of model hydrometeors in the form of cylinders of revolution were used. At the same time, for artificial charged clouds of negative polarity, a model hydrometeor array of plate rhombus form provided the maximal probability of channel discharge formation between the charged cloud and the ground. The established influence of model hydrometeor form on channel discharge initiation from the artificial charged water aerosol cloud, and on its subsequent successful propagation, is related to the different character of positive and negative streamer and volume leader development on the model hydrometeor array near the bottom boundary of the charged cloud. The experimental results show the potentially important role of the form of large hail particles precipitated in a thundercloud on discharge initiation.

Keywords: cloud and channel discharges, hydrometeor form, lightning initiation, negative and positive artificial charged aerosol cloud

Procedia PDF Downloads 287
5388 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard of the International Automotive Task Force (which replaces ISO/TS 16949), defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, companies' measurement systems are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces manual data input and therefore errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. First, a benchmarking analysis of current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, the size of the target market for the MSA tool was analyzed. Afterwards, data flow and traceability requirements were analyzed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
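
As one representative computation among the MSA tests mentioned, here is a simplified average-and-range Gage R&R estimate of repeatability (EV) and reproducibility (AV). The K1/K2 constants shown are the commonly tabulated values for 3 trials and 3 appraisers; a real study should take them, and the full procedure, from the AIAG MSA-4 manual.

```python
import numpy as np

# measurements[appraiser, part, trial]; synthetic stand-in for a 3 x 10 x 3 study
rng = np.random.default_rng(0)
true_part = rng.normal(50, 2, size=10)
measurements = (true_part[None, :, None]
                + rng.normal(0, 0.3, size=(3, 10, 3)))  # gauge noise

ranges = measurements.max(axis=2) - measurements.min(axis=2)
r_bar = ranges.mean()                            # average within-appraiser range
x_diff = np.ptp(measurements.mean(axis=(1, 2)))  # spread of appraiser averages

K1, K2 = 0.5908, 0.5231   # commonly tabulated constants (3 trials, 3 appraisers)
n_parts, n_trials = 10, 3

EV = r_bar * K1                                  # equipment variation (repeatability)
AV = max((x_diff * K2) ** 2 - EV ** 2 / (n_parts * n_trials), 0.0) ** 0.5
GRR = (EV ** 2 + AV ** 2) ** 0.5                 # combined gauge R&R
print(f"EV={EV:.3f}  AV={AV:.3f}  GRR={GRR:.3f}")
```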

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 178
5387 A Pervasive System Architecture for Smart Environments in Internet of Things Context

Authors: Patrick Santos, João Casal, João Santos Luis Varandas, Tiago Alves, Carlos Romeiro, Sérgio Lourenço

Abstract:

Nowadays, technology makes it possible, on the one hand, to communicate with various objects of daily life through the Internet and, on the other, to have these objects interact with each other through the same channel. Simultaneously, with the rise of smartphones as the most ubiquitous technology in people's lives, new agents for these devices emerge: Intelligent Personal Assistants. These agents have the goal of helping users manage and organize their information, as well as supporting them in their day-to-day tasks. Moreover, another emergent concept is cloud computing, which moves computation and storage off users' devices, bringing benefits in terms of performance, security, interoperability, and more. Connecting these three paradigms, in this work we propose an architecture for an intelligent system that provides an interface to assist the user in smart environments, informing, suggesting actions, and allowing the user to manage the objects of daily life.

Keywords: internet of things, cloud, intelligent personal assistant, architecture

Procedia PDF Downloads 480
5386 BigCrypt: A Probable Approach of Big Data Encryption to Protect Personal and Business Privacy

Authors: Abdullah Al Mamun, Talal Alkharobi

Abstract:

As data sizes grow, people increasingly store large amounts of secret information in cloud storage, and companies routinely need to transfer massive business files from one end to another. Privacy is lost if such data is transmitted as-is, and repeating this scenario without securing the communication mechanism, i.e., proper encryption, compounds the risk. Although asymmetric-key encryption solves the key-distribution problem of symmetric-key encryption, it can only encrypt a limited amount of data, which makes it inapplicable on its own for large data. In this paper, we propose a Pretty Good Privacy style approach for encrypting big data using both symmetric and asymmetric keys. Our goal is to encrypt huge collections of information and transmit them through a secure communication channel, preserving business and personal privacy. To justify our method, an experimental dataset from three different platforms is provided. We show that our approach works efficiently and reliably for various data of massive size.
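
The hybrid pattern described above, bulk data under a symmetric key with only the small session key protected by the asymmetric key, is the classic PGP construction. Below is a minimal sketch with the Python cryptography library, as an illustration of the pattern rather than the BigCrypt implementation itself.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# recipient's RSA key pair (in practice the public key is shared beforehand)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

big_data = b"..." * 1_000_000            # large payload: symmetric crypto handles it

# 1. encrypt the bulk data with a fresh symmetric key (fast, no size limit)
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(big_data)

# 2. encrypt only the small session key with RSA-OAEP (size-limited, but keys are tiny)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# receiver: unwrap the session key, then decrypt the bulk data
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == big_data
```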

Keywords: big data, cloud computing, cryptography, hadoop, public key

Procedia PDF Downloads 297
5385 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment

Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang

Abstract:

2016 became the year of the Artificial Intelligence explosion. AI technologies are maturing to the point that most of the world's well-known tech giants are making large investments to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks can differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network, and we analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
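
The headline numbers in this kind of benchmark are throughput, speedup, and scaling efficiency across GPU counts; the helper below shows how they are conventionally derived. The throughput figures in the example are illustrative, not measurements from the paper.

```python
def scaling_report(images_per_sec):
    """images_per_sec: {gpu_count: measured throughput} for one framework."""
    base = images_per_sec[1]                  # single-GPU baseline
    for n, rate in sorted(images_per_sec.items()):
        speedup = rate / base
        efficiency = speedup / n              # 1.0 would be perfect linear scaling
        print(f"{n:3d} GPUs  {rate:8.1f} img/s  speedup {speedup:5.2f}  "
              f"efficiency {efficiency:5.1%}")

# illustrative ResNet-50 throughputs, NOT results from the paper
scaling_report({1: 360.0, 2: 700.0, 4: 1340.0, 8: 2500.0})
```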

Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks

Procedia PDF Downloads 174
5384 Grid Computing for Multi-Objective Optimization Problems

Authors: Aouaouche Elmaouhab, Hassina Beggar

Abstract:

Solving multi-objective discrete optimization applications has always been limited by the resources of a single machine: by computing power or by memory, most often both. To speed up the calculations, grid computing represents a primary solution, parallelizing the resolution methods. In this work, we are interested in the study of methods for solving the multi-objective integer linear programming problem based on Branch-and-Bound, and in the study of grid computing technology. This study allowed us to propose an implementation of the method of Abbas et al. on the grid, reducing the execution time. Finally, the main results are presented.

Keywords: multi-objective optimization, integer linear programming, grid computing, parallel computing

Procedia PDF Downloads 455
5383 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

This paper highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, reskilling and upskilling employees, and establishing robust data management training programs are an integral part of this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in an ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud computing, data optimization

Procedia PDF Downloads 40
5382 IoT Based Information Processing and Computing

Authors: Mannan Ahmad Rasheed, Sawera Kanwal, Mansoor Ahmad Rasheed

Abstract:

The Internet of Things (IoT) has revolutionized the way we collect and process information, making it possible to gather data from a wide range of connected devices and sensors. This has led to the development of IoT-based information processing and computing systems that are capable of handling large amounts of data in real time. This paper provides a comprehensive overview of the current state of IoT-based information processing and computing, as well as the key challenges and gaps that need to be addressed. This paper discusses the potential benefits of IoT-based information processing and computing, such as improved efficiency, enhanced decision-making, and cost savings. Despite the numerous benefits of IoT-based information processing and computing, several challenges need to be addressed to realize the full potential of these systems. These challenges include security and privacy concerns, interoperability issues, scalability and reliability of IoT devices, and the need for standardization and regulation of IoT technologies. Moreover, this paper identifies several gaps in the current research related to IoT-based information processing and computing. One major gap is the lack of a comprehensive framework for designing and implementing IoT-based information processing and computing systems.

Keywords: IoT, computing, information processing, IoT computing

Procedia PDF Downloads 144
5381 Functional Neural Network for Decision Processing: A Racing Network of Programmable Neurons Where the Operating Model Is the Network Itself

Authors: Frederic Jumelle, Kelvin So, Didan Deng

Abstract:

In this paper, we introduce a model of artificial general intelligence (AGI), the functional neural network (FNN), for modeling human decision-making processes. The FNN is composed of multiple artificial mirror neurons (AMN) racing in the network. Each AMN has a similar structure, programmed independently by the users, and is composed of an intention wheel, a motor core, and a sensory core racing at a specific velocity. The mathematics of the node's formulation and the racing mechanism of multiple nodes in the network are discussed, and the group decision process with fuzzy logic and the transformation of these conceptual methods into practical methods of simulation and operation are developed. Finally, we describe some possible future research directions in the fields of finance, education, and medicine, including the opportunity to design an intelligent learning agent with applications in AGI. We believe that the FNN has promising potential to transform the way we compute decision-making and to lead to a new generation of AI chips for seamless human-machine interactions (HMI).

Keywords: neural computing, human-machine interaction, artificial general intelligence, decision processing

Procedia PDF Downloads 94
5380 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas

Authors: Ahmet Kayabasi, Ali Akdagli

Abstract:

In this study, three robust prediction methods, namely the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS), and the support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. First, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with IE3D™, which is based on the method of moments (MoM). ANN, ANFIS, and SVM models for computing the resonant frequency were then built from the simulation data. 124 simulated ACMAs were used for training, and the remaining 20 ACMAs were used for testing the models. The performance of the ANN, ANFIS, and SVM models is compared for the training and test processes. The average percentage errors (APE) of the computed resonant frequencies in training were 0.457% for the ANN, 0.399% for the ANFIS, and 0.600% for the SVM. The constructed models were then tested, achieving APE values of 0.601% for the ANN, 0.744% for the ANFIS, and 0.623% for the SVM. These results show that ANN, ANFIS, and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
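
A minimal sketch of the SVM stage with scikit-learn, using the same 124/20 train/test split; the geometry features and target values are synthetic stand-ins, since the actual data come from the paper's IE3D simulations, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# features: assumed geometry/electrical parameters (lengths in mm, permittivity)
X = rng.uniform([20, 10, 0.8, 2.2], [60, 30, 3.2, 4.4], size=(144, 4))
# stand-in target: resonant frequency roughly inverse to patch length (toy values)
y = 3e2 / X[:, 0] * (1 + 0.05 * rng.normal(size=144))

train, test = slice(0, 124), slice(124, 144)   # same 124/20 split as the study
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[train], y[train])

ape = np.abs((y[test] - model.predict(X[test])) / y[test]) * 100
print(f"test APE: {ape.mean():.3f}%")
```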

Keywords: a-shaped compact microstrip antenna, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM)

Procedia PDF Downloads 414
5379 Analyzing the Quality of Cloud-Based E-Learning Systems on the Perception of the Learners and the Teachers

Authors: R. W. C. Devindi, S. M. Buddika Harshanath

Abstract:

E-learning is a widely used technology for learning in the modern world, and with the pandemic its popularity has increased considerably. E-learning systems usually require both software and hardware resources, which are hard for most educational institutions to afford, and massive user loads also force e-learning platforms to expand server-side resources. Therefore, cloud computing has been adopted to make e-learning systems more efficient. This study analyzes the quality of e-learning systems as perceived by learners and teachers with the aid of hypothesis testing and presents the results and discussion. Future research can build on these findings to further increase the quality of online learning systems. In the case of e-learning, quality assurance and cost effectiveness are essential, and a complex quality assurance system is used in the stated project. There are no well-defined standard evaluation measures in this field, so accurately assessing an e-learning system's overall quality is challenging. The analysis was performed with standard methods and software.

Keywords: LMS–learning management system, SPSS–statistical package for the social sciences (software), eigenvalue, hypothesis

Procedia PDF Downloads 83
5378 Mobile Cloud Application in Design Build Bridge Construction

Authors: Meng Sun, Bin Wei

Abstract:

In the past decades, design-build has become a popular project delivery system, especially for large-scale infrastructure projects in North America. It provides a one-stop shop for the client, improving construction efficiency and reducing risks and overall cost. Compared to projects with a traditional delivery method, a design-build project requires the contractor and designer to work together efficiently to deliver the best-value solutions throughout the construction process. How well the integration and interaction between contractor and designer are facilitated often affects the schedule, budget, and quality of construction, and therefore becomes a key success factor for a design-build project. This paper presents a concept of using modern mobile cloud technology to provide an integrated solution during design-build construction. It uses a mobile cloud architecture to provide a platform for real-time field progress, change request approval, job progress logs, and project time entry, with device integration for field information and communications. The paper uses a real field change notice as an example to demonstrate how mobile cloud technology applies in a design-build project and how it can improve project efficiency.

Keywords: cloud, design-build, field change notice, mobile application

Procedia PDF Downloads 218
5377 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up, but even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as current work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. But it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work explores Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a representation of VQC. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

Procedia PDF Downloads 92
5376 Design and Implementation of Security Middleware for Data Warehouse Signature, Framework

Authors: Mayada Al Meghari

Abstract:

Recently, grid middleware has enabled the large-scale integrated use of network resources, such as shared data and CPUs, to form a virtual supercomputer. In this work, we present the design and implementation of the middleware for the Data Warehouse Signature (DWS) framework. The aim of using the middleware in our DWS framework is to achieve high performance through parallel computing. The middleware is developed on the Alchemi.Net framework and increases security among network nodes through an authentication and group-key distribution model. This model secures the keys and prevents man-in-the-middle attacks on the middleware. This paper presents the process-flow structures of the middleware design and details the security implementation of the DWS middleware with the authentication and group-key distribution model. Finally, analysis against other middleware approaches indicates that the developed DWS middleware provides the most complete coverage of security issues.

Keywords: middleware, parallel computing, data warehouse, security, group-key, high performance

Procedia PDF Downloads 87
5375 Enhancing Security and Privacy Protocols in Telehealth: A Comprehensive Approach across IoT/Fog/Cloud Environments

Authors: Yunyong Guo, Man Wang, Bryan Guo, Nathan Guo

Abstract:

This paper introduces an advanced security and privacy model tailored for Telehealth systems, emphasizing end-to-end protection across IoT, Fog, and Cloud components. The proposed model integrates encryption, key management, intrusion detection, and privacy-preserving measures to safeguard patient data. A comprehensive simulation study evaluates the model's effectiveness in scenarios such as unauthorized access, physical breaches, and insider threats. Results indicate notable success in detecting and mitigating threats yet underscore areas for refinement. The study contributes insights into the intricate balance between security and usability in Telehealth environments, setting the stage for continued advancements.

Keywords: cloud, enhancing security, fog, IoT, telehealth

Procedia PDF Downloads 35
5374 Improving Fault Tolerance and Load Balancing in Heterogeneous Grid Computing Using Fractal Transform

Authors: Saad M. Darwish, Adel A. El-Zoghabi, Moustafa F. Ashry

Abstract:

The popularity of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components are changing the way we use computers today. These technical opportunities have led to the possibility of using geographically distributed and multi-owner resources to solve large-scale problems in science, engineering, and commerce. Recent research on these topics has led to the emergence of a new paradigm known as Grid computing. To achieve the promising potential of tremendous distributed resources, effective and efficient load balancing algorithms are fundamentally important. Unfortunately, load balancing algorithms designed for traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, cannot work well in the new circumstances. In this paper, the concept of a fast fractal transform in heterogeneous grid computing, based on an R-tree and the domain-range entropy, is proposed to improve fault tolerance and load balancing by accounting for connectivity, communication delay, network bandwidth, resource availability, and resource unpredictability. A novel two-dimensional figure of merit is suggested to describe the network effects on load balance and fault tolerance estimation. Fault tolerance is enhanced by adaptively decreasing replication time and message cost, while load balance is enhanced by adaptively decreasing mean job response time. Experimental results show that the proposed method yields superior performance over other methods.

Keywords: Grid computing, load balancing, fault tolerance, R-tree, heterogeneous systems

Procedia PDF Downloads 457
5373 MLOps Scaling Machine Learning Lifecycle in an Industrial Setting

Authors: Yizhen Zhao, Adam S. Z. Belloum, Goncalo Maia Da Costa, Zhiming Zhao

Abstract:

Machine learning has evolved from an area of academic research to a real-world applied field. This change comes with challenges: gaps and differences exist between common practice in academic environments and in production environments. Following the continuous integration, development, and delivery practices of software engineering, similar trends have emerged for machine learning (ML) systems, called MLOps. In this paper, we propose a framework that helps to streamline and introduce best practices that facilitate the ML lifecycle in an industrial setting. This framework can be used as a template that can be customized to implement various machine learning experiments. The proposed framework is modular and can be recomposed to adapt to various use cases (e.g., data versioning, remote training on the cloud). The framework inherits practices from DevOps and introduces other practices that are unique to machine learning systems (e.g., data versioning). Our MLOps practices automate the entire machine learning lifecycle and bridge the gap between development and operations.

Keywords: cloud computing, continuous development, data versioning, DevOps, industrial setting, MLOps

Procedia PDF Downloads 230
5372 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean

Authors: Sofiya Rao, Sagnik Dey

Abstract:

Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying the aerosol climate forcing, and the uncertainty increases from the global to the regional scale. This problem remains unresolved due to the large discrepancies in the representation of cloud processes in climate models. Most studies of aerosol-cloud-climate and aerosol-cloud-precipitation interactions over the Indian Ocean (e.g., the INDOEX and CAIPEEX campaigns) are restricted to one season or one region. Here we develop a theoretical framework to quantify aerosol indirect radiative forcing using Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products covering the 15-year period 2000-2015 over the Indian Ocean. This framework relies on an observationally constrained estimate of the aerosol-induced change in cloud albedo. We partition the change in cloud albedo into the change in Liquid Water Path (LWP) and Effective Radius of Clouds (Reff) in response to aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive for LWP between 120-300 g/m² and Reff between 8-24 micrometers. Using this framework, aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The analysis performs best over the Arabian Sea, in comparison with the Bay of Bengal and the South Indian Ocean, because of the heterogeneity in aerosol species over the Arabian Sea: absorbing aerosols dominate there during winter, while dust (coarse-mode particles) dominates during the pre-monsoon. In winter and pre-monsoon, aerosol forcing dominates, while during the monsoon and post-monsoon seasons, meteorological forcing dominates. Over the South Indian Ocean, largely the same aerosol type (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is about -5 ± 4.5 W/m² in winter and weaker in the other seasons. The results provide observationally constrained estimates of aerosol indirect forcing over the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.
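
The LWP/Reff partitioning rests on standard two-stream relations linking the two quantities to cloud optical depth and albedo; below is a worked numeric example over the sensitive range identified above, using textbook approximations rather than the study's retrieval code.

```python
import numpy as np

def cloud_optical_depth(lwp, r_eff):
    """tau = 3*LWP / (2*rho_w*r_eff); LWP in g/m^2, r_eff in micrometers."""
    rho_w = 1.0e6                 # density of liquid water, g/m^3
    return 3.0 * lwp / (2.0 * rho_w * r_eff * 1e-6)

def cloud_albedo(tau):
    """Common two-stream approximation: A = tau / (tau + 7.7)."""
    return tau / (tau + 7.7)

# sweep the sensitive range: LWP 120-300 g/m^2, r_eff 8-24 micrometers
lwp = np.linspace(120, 300, 4)
for r in (8.0, 16.0, 24.0):
    tau = cloud_optical_depth(lwp, r)
    print(f"r_eff={r:4.1f} um  tau={np.round(tau, 1)}  "
          f"A={np.round(cloud_albedo(tau), 2)}")
```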

Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model

Procedia PDF Downloads 142
5371 Network Security Attacks and Defences

Authors: Ranbir Singh, Deepinder Kaur

Abstract:

Network security is important in every field: government offices, educational institutions, and business organizations. Network security consists of the policies adopted to prevent and monitor unauthorized access, misuse, modification, or denial of a computer network. Network security is a complicated subject, historically handled only by well-trained and experienced people. However, as more and more people become wired, an increasing number of people need to understand the basics of security in a networked world. The history of network security begins with an introduction to TCP/IP and internetworking. Network security starts with authentication, commonly with a username and a password. In this paper, we study various types of attacks on network security and how to handle or prevent them.

Keywords: network security, attacks, denial, authenticating

Procedia PDF Downloads 370
5370 Factorization of Computations in Bayesian Networks: Interpretation of Factors

Authors: Linda Smail, Zineb Azouz

Abstract:

Given a Bayesian network over a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) as a product of factors, where each factor is easy to compute. More importantly, it is very useful to give an interpretation of each factor in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such semantic interpretations is indeed interesting and relevant in the case of large Bayesian networks.
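
For a small network, the factored computation of P(S) can be written out directly; the toy sketch below computes a marginal by summing, over the eliminated variables, a product of factors, one conditional-probability table per node. It is a brute-force enumeration meant to expose the factor structure, not an efficient elimination algorithm.

```python
from itertools import product

# toy chain network A -> B -> C over binary variables; CPTs as dictionaries
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key: (b, a)
P_C_given_B = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # key: (c, b)

def joint(a, b, c):
    # chain-rule factorization encoded by the network structure:
    # P(a, b, c) = P(a) * P(b | a) * P(c | b)
    return P_A[a] * P_B_given_A[(b, a)] * P_C_given_B[(c, b)]

# marginal P(C): sum the factored joint over the eliminated variables A and B
P_C = {c: sum(joint(a, b, c) for a, b in product((0, 1), repeat=2))
       for c in (0, 1)}
print(P_C)   # each summand is a product of factors, one per network node
```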

Keywords: Bayesian networks, D-Separation, level two Bayesian networks, factorization of computation

Procedia PDF Downloads 490
5369 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work well in some situations, such as fractured zones. Cutting-edge technologies have emerged to address and optimize these issues. The Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the newest and most widely used digital methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, covering quantities such as temperature, movement, pressure, or stress levels. Structural integrity assessment, especially for cap rocks within hydrocarbon systems, and rock mass behavior assessment in support of activities such as enhanced oil recovery (EOR) and underground gas storage (UGS), or to improve safety risk management (SRM) and potential hazards identification (PHI), are other benefits of IoT technologies. EC techniques can process, aggregate, and analyze data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. This useful, state-of-the-art technology can therefore support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; this feature is very important for environmental goals. More often than not, rock mechanical studies consist of different data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very valuable in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable medium for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have empowered real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing for timely and informed actions based on real-time data insights. The successful implementation of IoT, CCt, and ECt has led to optimized and safer operations, streamlined processes, and environmentally conscious approaches in underground geological endeavors.

Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations

Procedia PDF Downloads 20