Search results for: cloud computing privacy
1148 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management
Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li
Abstract:
Demand-Side Management (DSM) has the potential to reduce the electricity costs and carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities that serves as an energy audit for residential DSM. Implementing fault detection and classification for the monitored, controlled, and optimized domestic appliances is one of the most important steps toward preventive maintenance, such as air-conditioning and heating preventive maintenance in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack is developed for residential DSM. In the EMS, Arduino MEGA Ethernet-based smart sockets, each incorporating a Real-Time Clock chip synchronized via the Network Time Protocol to timestamp readings, are designed and implemented to capture load phenomena reflected in the sensed voltage and current signals. A Network-Attached Storage, providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods, is configured as the data store for parsed sensor readings. A desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, MySQL relational database management system, and PHP programming language) serves as the data science analytics engine for a dynamic web app/REpresentational State Transfer-ful web service of the residential DSM with globally advanced Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is configured for the AI in this study. To send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. To realize edge intelligence, whereby edge devices avoid the network latency and connectivity requirements of Internet connections for Internet of Services, support secure access to data stores, and provide immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). To realize edge analytics with the proposed embedded AIduino, an Arduino Ethernet shield (WIZnet W5100) with a micro SD card connector is used; the SD library is included for reading parsed data from and writing parsed data to the SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported for the locally embedded AI implementation. The embedded AIduino can be developed further for applications in manufacturing-industry energy management and sustainable energy management, where, for example, rotating machinery diagnostics identifies energy loss from gross misalignment and unbalance of rotating machines in power plants.
Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification
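As a rough illustration of the edge-analytics step, the following sketch mimics the feed-forward inference that a small on-socket neural network (such as one built with the ArduinoANN library) would run on sensed load features. The layer sizes, weights, and feature encoding are hypothetical, and NumPy stands in for the Arduino C implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def edge_infer(features, w_hidden, b_hidden, w_out, b_out):
    """One feed-forward pass of a small MLP, as an embedded socket would run
    it locally on parsed voltage/current features (all weights hypothetical)."""
    hidden = sigmoid(features @ w_hidden + b_hidden)
    return sigmoid(hidden @ w_out + b_out)          # e.g., P(appliance fault)

rng = np.random.default_rng(7)
w_h, b_h = rng.normal(size=(4, 6)), rng.normal(size=6)   # 4 inputs, 6 hidden units
w_o, b_o = rng.normal(size=(6, 1)), rng.normal(size=1)   # single output unit
x = np.array([0.82, 0.10, 0.55, 0.33])   # normalized RMS voltage/current features
print(edge_infer(x, w_h, b_h, w_o, b_o))
```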
Procedia PDF Downloads 250

1147 Application of 3-6 Years Old Children Basketball Appropriate Forms of Teaching Auxiliary Equipment in Early Childhood Basketball Game
Authors: Hai Zeng, Anqing Liu, Shuguang Dan, Ying Zhang, Yan Li, Zihang Zeng
Abstract:
When children are strong, the country is strong; developing children's basketball is therefore a strategic advantage. Common forms of basketball equipment have proven unable to meet the needs of teaching the game of basketball to young children, and age-appropriate teaching aids for 3-6-year-old children are the breakthrough for the bottlenecks of teaching basketball to children, a critical path to more enjoyable teaching, and a necessary requirement for developing early childhood basketball. In this study, using literature review, questionnaires, focus group interviews, and comparative analysis, 12 kinds of basketball teaching aids used at home and abroad (cloud computing MINI basketball, adjustable MINI basketball hoop, MINI basketball court, paw-print shooting-assist ball, dribble goggles, dribbling machine, cartoon shooting machine, rebounding machine, contact mat, elastic belt, ladder, and fitness ball) are examined with respect to fun and to improving young children's shooting, dribbling, offensive and defensive rebounding, contact, and transition techniques. The results show that using appropriate forms of children's basketball teaching aids can effectively increase the fun of the children's basketball game and improve a given technique in a targeted way, and that different types of aids enrich the content of the children's basketball game from different perspectives. It is recommended to produce aids based on children's color psychology, cartoons, and environmentally friendly materials, to increase research efforts on children's basketball aids, and to encourage sports teachers to apply aids with children.
Keywords: appropriate forms of children basketball, auxiliary equipment, appli, MINI basketball, 3-6 years old children, teaching
Procedia PDF Downloads 385

1146 New Technologies in Corporate Finance Management in the Digital Economy: Case of Kyrgyzstan
Authors: Marat Kozhomberdiev
Abstract:
The research will investigate the modern corporate finance management technologies used in the era of digitalization of the global economy and the degree to which financial institutions in Kyrgyzstan are utilizing these new technologies in corporate finance management. The main purpose of the research is to reveal the role of financial management technologies, such as joint service centers, intercompany banks, and specialized payment centers, in a third-world country. In particular, the applicability of automated corporate finance management systems such as enterprise resource planning systems (ERP) and treasury management systems (TMS) will be analyzed. Moreover, the research will investigate the role of cloud accounting systems in corporate finance management in Kyrgyz banks and whether they have any impact on improving corporate finance management. The study will collect data by surveying three banks in Kyrgyzstan, namely Mol-Bulak, RSK, and KICB, chosen for their different ownership structures: a state bank, a private bank with local authorized capital, and a private bank with international capital. Regression analysis will be utilized to reveal the correlation between bank ownership and the use of new financial management technologies. The research will provide policy recommendations to both private and state banks on developing strategies for switching to and utilizing modern corporate finance management technologies in their daily operations.
Keywords: digital economy, corporate finance, digital environment, digital technologies, cloud technologies, financial management
Procedia PDF Downloads 70

1145 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves
Authors: Dmytro Zubov, Francesco Volponi
Abstract:
In this paper, the D-Wave quantum computing Ising model is used for forecasting positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data, respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level data (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predicting heat wave values because it does not require estimating a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (the ratio of successful predictions to the total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant compared to other methods; however, the number of identified heat waves is small (only one out of nineteen in this case). The models with 2, 4, and 5 qubits have 20%, 3.8%, and 3.8% accuracy, respectively. The three-qubit forecast model is then applied to predict heat waves at five other locations: Aurel Vlaicu, Romania (28.6% accuracy); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); and Akhisar, Turkey (21.4%). These predictions are not ideal, but they are not zero either; they can be used independently or together with predictions generated by other methods. The loss of human life, as well as the environmental, economic, and material damage caused by extreme air temperatures, could be reduced if some heat waves are predicted; even a small success rate implies a large socio-economic benefit.
Keywords: heat wave, D-wave, forecast, Ising model, quantum computing
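For intuition, here is a toy sketch of how an Ising-type model can act as a binary heat-wave forecaster: recent days are encoded as spins, and the configuration energy drives the prediction. The couplings, fields, threshold, and sign convention are invented placeholders, not the paper's fitted values.

```python
import numpy as np

def ising_energy(spins, J, h):
    """Ising configuration energy: E = -s.J.s - h.s."""
    return -spins @ J @ spins - h @ spins

def forecast_heat_wave(last_temps, threshold, J, h):
    # Encode each recent day as a spin: +1 if above the heat-wave threshold.
    spins = np.where(np.array(last_temps) > threshold, 1, -1)
    # Predict a positive extreme tomorrow when the configuration energy is
    # low for a "hot" pattern (decision rule and sign convention assumed).
    return ising_energy(spins, J, h) < 0

# Hypothetical 3-qubit parameters; the paper's fitted values are not given.
J = np.array([[0.0, 0.4, 0.2],
              [0.4, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
h = np.array([0.1, 0.2, 0.3])
print(forecast_heat_wave([33.0, 33.5, 34.2], threshold=32.0, J=J, h=h))  # True
```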
Procedia PDF Downloads 497

1144 Audio Information Retrieval in Mobile Environment with Fast Audio Classifier
Authors: Bruno T. Gomes, José A. Menezes, Giordano Cabral
Abstract:
With the popularity of smartphones, mobile apps have emerged to meet diverse needs; however, the resources at their disposal are limited, whether in hardware, due to low computing power, or in software, which lacks the robustness of the desktop environment. For example, automatic audio classification (AC), a subarea of musical information retrieval (MIR), requires fast processing and a good success rate, yet the mobile platform has limited computing power and the best AC tools are only available for desktop. To solve these problems, the fast classifier adapts the most widespread MIR technologies to mobile environments, seeking a balance between speed and robustness. In the end, we found that it is possible to enjoy the best of MIR in mobile environments. This paper presents the results obtained and the difficulties encountered.
Keywords: audio classification, audio extraction, mobile environment, musical information retrieval
Procedia PDF Downloads 544

1143 A Novel Approach to Design and Implement Context Aware Mobile Phone
Authors: G. S. Thyagaraju, U. P. Kulkarni
Abstract:
Context-aware computing refers to a general class of computing systems that can sense their physical environment and adapt their behaviour accordingly. Context-aware computing makes systems aware of situations of interest, enhances services to users, automates systems, and personalizes applications. Context-aware services have been introduced into mobile devices such as PDAs and mobile phones. In this paper, we present a novel approach used to realize a context-aware mobile phone. The context-aware mobile phone (CAMP) proposed in this paper senses the user's situation automatically and provides the services required by the user's context. The proposed system is developed using artificial intelligence techniques: a Bayesian network, fuzzy logic, and a rough-set-theory-based decision table. The Bayesian network classifies incoming calls (high-priority, low-priority, and unknown calls); fuzzy linguistic variables and membership degrees define the context situations; and decision-table-based rules drive service recommendation. To exemplify and demonstrate the effectiveness of the proposed methods, the context-aware mobile phone is tested in a college campus scenario covering different locations such as the library, classroom, meeting room, administrative building, and college canteen.
Keywords: context aware mobile, fuzzy logic, decision table, Bayesian probability
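The fragment below sketches how the fuzzy-membership and decision-table pieces could fit together. The linguistic variable, membership parameters, and table entries are invented for illustration and are not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic variable: ambient noise level in decibels.
noise = 42
quiet = triangular(noise, 0, 30, 50)    # membership in "quiet"
loud = triangular(noise, 40, 70, 100)   # membership in "loud"

# Toy decision table mapping (location context, call priority) -> phone action.
decision_table = {
    ("classroom", "high"): "vibrate",
    ("classroom", "low"): "silent",
    ("canteen", "high"): "ring",
    ("canteen", "low"): "vibrate",
}
context = "classroom" if quiet > loud else "canteen"
print(decision_table[(context, "high")])   # -> vibrate
```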
Procedia PDF Downloads 365

1142 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach
Authors: Jerry Q. Cheng
Abstract:
Analyzing large-scale recurrent event data currently faces many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual subset. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. The approach is applied to a large real dataset of repeated heart failure hospitalizations.
Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing
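A minimal sketch of the split-fit-combine pattern follows. Inverse-variance weighting is shown as one standard choice of weights, and the toy mean-estimation example merely stands in for fitting a parametric frailty model to a subset.

```python
import numpy as np

def divide_and_conquer_estimate(data, fit, n_subsets=10, seed=None):
    """Split the data, fit each subset, and combine the subset estimators
    with inverse-variance weights (one standard weighting choice)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    estimates, weights = [], []
    for chunk in np.array_split(idx, n_subsets):
        theta_hat, var_hat = fit(data[chunk])    # subset MLE and its variance
        estimates.append(theta_hat)
        weights.append(1.0 / var_hat)
    w = np.array(weights)
    return np.dot(w, estimates) / w.sum()

# Toy stand-in for a frailty-model fit: estimating a mean via the sample average.
data = np.random.default_rng(0).exponential(2.0, size=100_000)
print(divide_and_conquer_estimate(data, lambda x: (x.mean(), x.var() / len(x))))
```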
Procedia PDF Downloads 165

1141 Geomatic Techniques to Filter Vegetation from Point Clouds
Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades
Abstract:
More and more frequently, geomatics techniques such as terrestrial laser scanning and digital photogrammetry, either terrestrial or from drones, are being used to obtain the digital terrain models (DTM) used for monitoring geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, the point clouds must be filtered of all elements that do not belong to the slopes. Among these elements, vegetation stands out: it has the greatest presence and changes constantly, both seasonally and daily, as it is affected by factors such as wind. One of the best-known indices for detecting vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras generally have lower resolution than conventional RGB cameras, while their cost is much higher, so we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019-103974RB-I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green index), as well as the conversion to the Hue-Saturation-Value (HSV) color space, in which the H coordinate gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification on the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is contrast between the color of the slope lithology and the vegetation. As noted above, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the multiple returns of the TLS signal allows filtering with some limitations.
Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud
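The RGB indices named above are simple per-pixel formulas; the sketch below shows one way to turn GLI and ExG into a binary vegetation mask. The thresholds are illustrative and would be tuned per site in practice, as the shadow and lithology caveats above suggest.

```python
import numpy as np

def gli_mask(rgb, threshold=0.05):
    """Binary vegetation mask from the Green Leaf Index,
    GLI = (2G - R - B) / (2G + R + B); the threshold is illustrative."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    gli = (2 * g - r - b) / np.maximum(2 * g + r + b, 1e-9)
    return gli > threshold

def excess_green(rgb):
    """ExG = 2g - r - b on chromaticity-normalized channels."""
    s = np.maximum(rgb.astype(float).sum(axis=-1), 1e-9)
    r, g, b = [rgb[..., i].astype(float) / s for i in range(3)]
    return 2 * g - r - b

# Toy 1x2 image: one green (vegetation-like) pixel, one grey (rock-like) pixel.
img = np.array([[[60, 160, 50], [120, 120, 118]]], dtype=np.uint8)
print(gli_mask(img))        # [[ True False]]
print(excess_green(img))
```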
Procedia PDF Downloads 154

1140 Social Media Resignation the Only Way to Protect User Data and Restore Cognitive Balance, a Literature Review
Authors: Rajarshi Motilal
Abstract:
The birth of the Internet and the rise of social media marked an important chapter in the history of humankind. Often termed the fourth scientific revolution, the Internet has changed human lives and cognition. The birth of Web 2.0, followed by the launch of social media and social networking sites, added another milestone to these technological advancements, in which connectivity and the influx of information became dominant. With billions of individuals using the Internet and social media sites in the 21st century, 'users' became 'consumers', and orthodox marketing reshaped itself into digital marketing. Furthermore, organisations started using sophisticated algorithms to predict consumer purchase behaviour and manipulate it to sustain themselves in such a competitive environment. The rampant storage and analysis of individual data became the new normal, raising many questions about data privacy. Excessive Internet usage also brought other problems: individuals became addicted to it, scavenging for societal approval and instant gratification, subsequently leading to a collective dualism, isolation, and, finally, depression. This study aims to determine the relationship between social media usage in the modern age and the rise of psychological and cognitive imbalances in human minds. The literature review is timely as an addition to the existing work, at a moment when the world is constantly debating whether social media resignation is the only way to protect user data and restore the decaying cognitive balance.
Keywords: social media, digital marketing, consumer behaviour, internet addiction, data privacy
Procedia PDF Downloads 76

1139 Management Software for the Elaboration of an Electronic File in the Pharmaceutical Industry Following Mexican Regulations
Authors: M. Peña Aguilar Juan, Ríos Hernández Ezequiel, R. Valencia Luis
Abstract:
Certification of certain goods of public interest, such as medicines and food, requires the preparation and delivery of a dossier. Its elaboration demands legal and administrative knowledge, organization of the process documents, and an ordering that allows the file to be verified. Therefore, a virtual platform was developed to support the process of managing and elaborating the dossier, providing accessibility to the information and interfaces that let the user know the status of projects. Developing the dossier system in the cloud allows the inclusion of the technical requirements for software management, including validation and manufacturing in the field industry. The platform guides and facilitates the elaboration of the dossier (report, file, or history) in accordance with Mexican legislation and regulations, and it also has auxiliary tools for its management. This technological alternative provides organizational support for documents and accessibility to the information required for the successful development of a dossier. The platform is divided into the following modules: system control, catalog, dossier, and enterprise management. The modules are designed according to the structure required in a dossier in those areas; however, the structure allows for flexibility, as the goal is a tool that facilitates processes rather than obstructing them. The architecture and development of the software allow for future expansion to other fields, which would imply feeding the system with new regulations.
Keywords: electronic dossier, cloud management software, pharmaceutical industry, sanitary registration
Procedia PDF Downloads 294

1138 A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics
Authors: Nadir A. Carreon, Christa Sonderer, Aakarsh Rao, Roman Lysecky
Abstract:
With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly given their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact, on both security and safety, of exploiting a vulnerability in such critical medical systems. The common vulnerability scoring system (CVSS) calculates the severity of exploitable vulnerabilities; however, for medical devices, it does not consider the unique challenges of impacts on human health and privacy. Thus, a medical device on which human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we propose a medical vulnerability scoring system (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate fifteen different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical-device-oriented vulnerability scoring systems and the foundational CVSS.
Keywords: common vulnerability scoring system, medical devices, medical device security, vulnerabilities
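To make the idea concrete, here is a sketch of how a CVSS base score might be adjusted by health and sensitivity parameters. The weights, grading scale, and multiplicative combination are assumptions chosen for illustration; the abstract does not give the actual MVSS formula.

```python
def mvss_score(cvss_base, health_impact, sensitivity_impact,
               w_health=0.6, w_sensitivity=0.4):
    """Illustrative severity adjustment: impacts are graded from 0.0 (none)
    to 1.0 (life-threatening / most sensitive data), and the result is
    capped at the usual 10.0 ceiling."""
    adjustment = 1.0 + w_health * health_impact + w_sensitivity * sensitivity_impact
    return min(10.0, cvss_base * adjustment)

# A pacemaker flaw: moderate CVSS base, but exploitation is life-threatening.
print(mvss_score(cvss_base=5.0, health_impact=1.0, sensitivity_impact=0.3))  # -> 8.6
# A hospital archive flaw: same CVSS base, no direct health impact.
print(mvss_score(cvss_base=5.0, health_impact=0.0, sensitivity_impact=0.8))  # -> ~6.6
```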
Procedia PDF Downloads 166

1137 i2kit: A Tool for Immutable Infrastructure Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and time to market of the business logic. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, and data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, affecting running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
Keywords: container, deployment, immutable infrastructure, microservice
Procedia PDF Downloads 179

1136 Inclusion and Changes of a Research Criterion in the Institute for Quality and Accreditation of Computing, Engineering and Technology Accreditation Model
Authors: J. Daniel Sanchez Ruiz
Abstract:
The paper explains why and how a research criterion was included in an accreditation system for undergraduate engineering programs, despite this not being a common practice among accreditation agencies at a global level. The paper is divided into three parts. The first presents the context and the motivations that led the Institute for Quality and Accreditation of Computing, Engineering and Technology Programs (ICACIT) to add a research criterion. The second describes the criterion adopted and the feedback received during the 2017 accreditation cycle. In the third, the author proposes changes to the accreditation criteria that respond pertinently to the outcome-based accreditation model and the national context. The author seeks to reconcile an outcome-based accreditation model, aligned with what is established by the International Engineering Alliance, with the particular context of higher education in Peru.
Keywords: accreditation, engineering education, quality assurance, research
Procedia PDF Downloads 281

1135 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling
Authors: Masoud Safdari, Jacob Fish
Abstract:
Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific and closed-source, and they demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, the tools that currently exist for Atomistic-to-Continuum (AtC) multiscaling are developed under assumptions such as user access to high-performance computing facilities. These and many other challenges have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort toward making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the universally acclaimed scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report the key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.
Keywords: atomistic, continuum, coupling, multiscale
Procedia PDF Downloads 177

1134 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing
Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng
Abstract:
Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) is the problem of deciding how to dedicate nodes to specific applications so as to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing the cost of communication hops, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two allocation strategies are considered: the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, we propose an approach called neural simulated annealing (NSA), an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated with a computational study against SA and SCIP. The results of numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware
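As a reference point for what NSA extends, the sketch below implements a plain simulated-annealing loop for job placement, with a toy switch-count cost standing in for the paper's fat-tree hop metric. In NSA, the random re-placement move would be replaced by a learned repair operator.

```python
import math
import random

def sa_allocate(job_sizes, nodes, cost, iters=20_000, t0=1.0, alpha=0.9995):
    """Plain simulated annealing for job placement; NSA would learn the
    neighbour/repair move instead of sampling it uniformly at random.
    (For brevity, this toy allows jobs to share nodes.)"""
    cur = [random.sample(nodes, k) for k in job_sizes]
    cur_c = cost(cur)
    best, best_c, t = list(cur), cur_c, t0
    for _ in range(iters):
        cand = list(cur)
        j = random.randrange(len(job_sizes))          # re-place one job
        cand[j] = random.sample(nodes, job_sizes[j])
        c = cost(cand)
        if c < cur_c or random.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha                                    # geometric cooling
    return best, best_c

# Toy fat-tree proxy: 16 nodes in "switches" of 4; a job pays one cost unit per
# extra switch it spans (an assumption standing in for the paper's metric).
def hops(assign):
    return sum(len({n // 4 for n in placement}) - 1 for placement in assign)

print(sa_allocate(job_sizes=[4, 4, 8], nodes=list(range(16)), cost=hops)[1])
```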
Procedia PDF Downloads 116

1133 Secure and Privacy-Enhanced Blockchain-Based Authentication System for University User Management
Authors: Ali El Ksimi
Abstract:
In today's digital academic environment, secure authentication methods are essential for managing sensitive user data, including that of students and faculty. The rise in cyber threats and data breaches has exposed the vulnerabilities of traditional authentication systems used in universities. Passwords, often the first line of defense, are particularly susceptible to hacking, phishing, and brute-force attacks. While multi-factor authentication (MFA) provides an additional layer of security, it can still be compromised and often adds complexity and inconvenience for users. As universities seek more robust security measures, blockchain technology emerges as a promising solution. Renowned for its decentralization, immutability, and transparency, blockchain has the potential to transform how user management is conducted in academic institutions. In this article, we explore a system that leverages blockchain technology specifically for managing user accounts within a university setting. The system enables the secure creation and management of accounts for different roles, such as administrators, teachers, and students. Each user is authenticated through a decentralized application (DApp) that ensures their data is securely stored and managed on the blockchain. By eliminating single points of failure and utilizing cryptographic techniques, the system enhances the security and integrity of user management processes. We will delve into the technical architecture, security benefits, and implementation considerations of this approach. By integrating blockchain into user management, we aim to address the limitations of traditional systems and pave the way for the future of digital security in education.
Keywords: blockchain, university, authentication, decentralization, cybersecurity, user management, privacy
Procedia PDF Downloads 23

1132 Study and Simulation of a Dynamic System Using Digital Twin
Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli
Abstract:
Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as cloud computing, the Internet of Things, augmented reality, artificial intelligence, and additive manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the digital twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a digital twin. In the first part of the work, the level plant is identified at a fixed operating point using the classical least-squares method. The linearized model is embedded in a digital twin using Automation Studio® from Famic Technologies. To validate the use of the digital twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the digital twin. Next, to develop the nonlinear model on a digital twin, the didactic level plant is identified using the method proposed by Hammerstein: different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As in the linear approach, the nonlinear model is embedded in the digital twin, and the dynamic response is compared to the real system at different operating points. Finally, and importantly, from the practical results obtained, one can conclude that using a digital twin to study dynamic systems is extremely useful in the industrial environment, taking into account that it is possible to develop and tune controllers using the virtual model of the real system.
Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models
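The linear identification step amounts to a least-squares fit around the operating point; the sketch below shows the idea for a first-order discrete-time model. The model order, sampling, and noise level are assumptions for illustration, not the actual plant parameters.

```python
import numpy as np

def identify_first_order(u, y):
    """Least-squares fit of a first-order ARX model y[k] = a*y[k-1] + b*u[k-1]
    (the model order and structure are assumptions for this sketch)."""
    phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta                                      # [a, b]

# Toy data from a known plant, y[k] = 0.9*y[k-1] + 0.2*u[k-1], plus noise.
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 1.0, 500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.2 * u[k - 1] + 0.005 * rng.standard_normal()
print(identify_first_order(u, y))                    # close to [0.9, 0.2]
```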
Procedia PDF Downloads 148

1131 A Review of Fractal Dimension Computing Methods Applied to Wear Particles
Authors: Manish Kumar Thakur, Subrata Kumar Ghosh
Abstract:
The various types of particles found in lubricants may be characterized by their fractal dimension. Some of the available methods are the yard-stick (structured walk) method and the box-counting method. This paper presents a review of the developments and progress in fractal dimension computing methods as applied to characterizing the surfaces of wear particles. An overview of these methods, their implementation, their advantages, and their limits is also presented. It is accepted that wear particles carry major information about the wear and friction of materials, and morphological analysis of wear particles from a lubricant is a very effective way of monitoring machine condition. Fractal dimension methods are used to characterize the morphology of the particles found and are very useful in analyzing the complexity of irregular shapes. The aim of this review is to bring together the fractal methods applicable to wear particles.
Keywords: fractal dimension, morphological analysis, wear, wear particles
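Of the methods reviewed, box counting is the simplest to state in code: count the occupied boxes N(s) at several box sizes s and estimate the dimension D from the slope of log N(s) versus log s. Below is a minimal NumPy version for a binary particle image; the box sizes are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    fit log N(s) = -D log s + c over the chosen box sizes s."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes N(s)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a straight line should have dimension close to 1.
img = np.zeros((256, 256), dtype=bool)
np.fill_diagonal(img, True)
print(box_counting_dimension(img))   # ~1.0
```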
Procedia PDF Downloads 490

1130 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry, and keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to relate these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is then used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request-relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems, so the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the rapidness of the burst is high enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
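The sizing rule at the heart of such a methodology can be sketched as follows: provision for the arrival rate expected after the MAPE-K reaction time, given the measured growth acceleration. The linear extrapolation and the 80% saturation headroom are assumptions, not the paper's exact model.

```python
import math

def instances_needed(arrival_rate, per_instance_rate, accel, lead_time,
                     headroom=0.8):
    """Provision for the arrival rate expected once new capacity is usable:
    current rate plus growth over the MAPE-K reaction time (sampling, boot,
    cooldown), with instances kept below a saturation headroom."""
    expected_rate = arrival_rate + accel * lead_time
    return math.ceil(expected_rate / (per_instance_rate * headroom))

# 120 req/s now, growing by 4 req/s^2, 30 s until new instances serve traffic,
# 10 req/s per instance at most: provision for 240 req/s -> 30 instances.
print(instances_needed(120, 10, accel=4, lead_time=30))
```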
Procedia PDF Downloads 93

1129 Web Service Architectural Style Selection in Multi-Criteria Requirements
Authors: Ahmad Mohsin, Syda Fatima, Falak Nawaz, Aman Ullah Khan
Abstract:
Selection of an appropriate architectural style is vital to the success of a target web service under development. The nature of architecture design and selection for service-oriented computing applications is quite different from that of traditional software, and web services have complex and rigorous architectural styles to choose from. Because of this, selecting an accurate architectural style for web service development has become a complex decision for architects. Architectural style selection is a multi-criteria decision and demands much experience in service-oriented computing; decision support systems are a good way to simplify the selection of a particular architectural style. Our research suggests a new approach that uses a DSS for selecting architectural styles while developing a web service, catering to functional requirements (FRs) and non-functional requirements (NFRs). Our proposed DSS helps architects select the right web service architectural pattern according to the domain and the non-functional requirements. In this paper, a rule-based DSS has been developed using CLIPS (C Language Integrated Production System) to support decisions with multi-criteria requirements. The DSS takes architectural characteristics, domain requirements, and the software architect's preferences for NFRs as input for the different architectural styles in use today in service-oriented computing. A weighted sum model is applied to prioritize quality attributes and domain requirements, and scores are calculated over multiple criteria to choose the final architectural style.
Keywords: software architecture, web-service, rule-based, DSS, multi-criteria requirements, quality attributes
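The weighted sum model itself is straightforward; the sketch below applies it to a hypothetical candidate set. The styles, criteria, scores, and weights are invented for illustration; in the actual DSS, they come from CLIPS rules and the architect's stated preferences.

```python
# Hypothetical candidate styles and criterion scores (0-10); in the real DSS
# these come from CLIPS rules plus the architect's NFR preferences.
scores = {
    "REST":        {"scalability": 9, "security": 6, "interoperability": 8},
    "SOAP/WS-*":   {"scalability": 6, "security": 9, "interoperability": 7},
    "Message bus": {"scalability": 8, "security": 7, "interoperability": 6},
}
weights = {"scalability": 0.5, "security": 0.3, "interoperability": 0.2}

def weighted_sum(style_scores, weights):
    """Weighted sum model: S(a) = sum over criteria j of w_j * score_j(a)."""
    return sum(weights[c] * v for c, v in style_scores.items())

ranking = sorted(scores, key=lambda s: weighted_sum(scores[s], weights),
                 reverse=True)
print(ranking[0], {s: round(weighted_sum(scores[s], weights), 2) for s in scores})
```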
Procedia PDF Downloads 364

1128 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks
Authors: Adrian Ionita, Ana-Maria Ghimes
Abstract:
The lack of features and design, together with the absence of an integrated booking application, are some of the reasons why most online travel platforms only automate old booking processes, limiting themselves to integrating a smaller number of services without addressing the user experience. This paper is a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in the 'social' network context can be considered input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and deliver more features to users. The paper aims to highlight a broader range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific approach remains the technical implementation of the neural network solution. The motivation for the technologies used is also related to the initiative of some online booking providers that have made public the fact that they use neural-network-related designs. These companies use similar Big Data technologies to provide recommendations for hotels, restaurants, and cinemas with a neural-network-based recommendation engine that builds a user 'DNA profile'. This implementation of the 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling
Procedia PDF Downloads 162

1127 Factors Affecting M-Government Deployment and Adoption
Authors: Saif Obaid Alkaabi, Nabil Ayad
Abstract:
Governments constantly seek to offer faster, more secure, efficient, and effective services to their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their internal operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on utilizing the most innovative structures of data techniques, mainly in web-dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. In practice, m-government models, techniques, and methods have become the improved version of e-government. M-government offers the potential for applications that work better, providing citizens with services that utilize mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation because a large percentage of their population is young and can adapt to new technology, and because mobile computing devices are more affordable. The use of mobile transaction models encourages effective participation through mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. These include the implementation of wireless networks and the related communications, the encouragement of mobile diffusion, the administration of complicated security-protection tasks (including the ability to keep information private), and the management of legal issues concerning mobile applications and the utilization of services.
Keywords: e-government, m-government, system dependability, system security, trust
Procedia PDF Downloads 381

1126 Parallel Random Number Generation for the Modern Supercomputer Architectures
Authors: Roman Snytsar
Abstract:
Pseudo-random numbers are often used in scientific computing, for example in Monte Carlo simulations or quantum-inspired optimization. The requirements for a parallel random number generator running in the modern multi-core vector environment are more stringent than those for sequential random number generators: as well as passing the usual quality tests, the output of the parallel random number generator must be verifiable and reproducible throughout the concurrent execution. We propose a family of vectorized Permuted Congruential Generators. Implementations are available for multiple modern vector computer architectures. Besides demonstrating good single-core performance, the generators scale easily across many processor cores and multiple distributed nodes. We provide performance and parallel speedup analysis and comparisons between the implementations.
Keywords: pseudo-random numbers, quantum optimization, SIMD, parallel computing
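For readers unfamiliar with the PCG family, the sketch below shows a scalar PCG32 (XSH-RR) step together with stream-per-worker seeding, one standard way to keep parallel output reproducible; in a vectorized implementation, each SIMD lane advances such a state in lockstep. The multiplier is the reference PCG constant, but the initialization shown is a simplified assumption.

```python
MASK64 = (1 << 64) - 1
MULT = 6364136223846793005   # reference PCG 64-bit LCG multiplier

class PCG32:
    """Minimal PCG32 (XSH-RR) sketch: a 64-bit LCG state plus an output
    permutation. Distinct odd increments give independent streams, so each
    parallel worker (or SIMD lane) can produce reproducible output."""
    def __init__(self, seed, stream):
        self.inc = ((stream << 1) | 1) & MASK64   # per-stream odd increment
        self.state = (seed + self.inc) & MASK64
        self.next()                               # simplified mixing step
    def next(self):
        x = self.state
        self.state = (x * MULT + self.inc) & MASK64          # LCG transition
        xorshifted = (((x >> 18) ^ x) >> 27) & 0xFFFFFFFF    # XSH: xorshift high bits
        rot = x >> 59                                        # RR: rotation amount
        return ((xorshifted >> rot) | (xorshifted << ((32 - rot) & 31))) & 0xFFFFFFFF

# Four "lanes": same seed, different streams -> independent reproducible output.
lanes = [PCG32(seed=42, stream=s) for s in range(4)]
print([lane.next() for lane in lanes])
```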
Procedia PDF Downloads 120

1125 The Technophobia among Older Adults in China
Authors: Erhong Sun, Xuchun Ye
Abstract:
Technophobia, namely the fear or dislike of modern advanced technologies, plays a central role in age-related digital divides and is considered a new risk factor for older adults, as it can affect people's daily lives through low adherence to digital living. Indeed, there is considerable heterogeneity within the group of older adults who experience technophobia. Therefore, the aim of this study was to identify different technophobia typologies of older people and to examine their associations with subjective age. A sample of 704 retired elderly people over the age of 55 was recruited in China. Technophobia and subjective age were each assessed with a questionnaire. Latent profile analysis was used to identify technophobia subgroups, using three dimensions as indicators: techno-anxiety, techno-paranoia, and privacy concerns. The association between the identified technophobia subgroups and subjective age was then explored. In summary, four different technophobia typologies were identified among older adults in China. Combined with an investigation of personal background characteristics and subjective age, this draws a more nuanced image of the technophobia phenomenon among older adults in China. First, not all older adults suffer from technophobia: about half of the elderly subjects belong to the 'low-technophobia' and 'medium-technophobia' profiles. Second, privacy concern plays an important role in the classification of technophobia among older adults. Third, subjective age might be a protective factor against technophobia in older adults. Although the causal direction between the identified technophobia typologies and subjective age remains uncertain, our findings suggest that future interventions should focus on subjective age by breaking the age stereotype of technology, to reduce the negative effect of technophobia on older adults. Future development of this research will involve extensive investigation of the detailed impact of technophobia on senior populations, measurement of the negative outcomes, and the formulation of innovative educational and clinical pathways.
Keywords: technophobia, older adults, latent profile analysis, subjective age
Procedia PDF Downloads 72

1124 3D Dentofacial Surgery Full Planning Procedures
Authors: Oliveira M., Gonçalves L., Francisco I., Caramelo F., Vale F., Sanz D., Domingues M., Lopes M., Moreia D., Lopes T., Santos T., Cardoso H.
Abstract:
The ARTHUR project consists of a platform that allows maxillofacial surgeries to be performed virtually, offering, in a photorealistic way, the possibility for patients to get an idea of the surgical changes before they are performed on their face. For this, the system brings together several image formats, DICOMs and OBJs, which, after loading, generate the bone volume, soft tissues, and hard tissues. The system also incorporates the patient's stereophotogrammetry, in addition to their data and clinical history. After loading and inserting the data, the clinician can virtually perform the surgical operation and present the final result to the patient, generating a new facial surface that reflects the changes made to the bone and tissues of the maxillary area. This tool is applicable to different situations that require facial reconstruction; however, this project focuses specifically on two types of use cases: congenital bone disfigurement and acquired disfigurement, such as oral cancer with bone involvement. Developed as a cloud-based solution with mobile support, the tool aims to shorten the patient's decision window. Because current simulations are either not realistic or, if realistic, need time for plaster models to be built, patients' decisions rely on a long time window (1-2 months), because they do not identify with the presented surgical outcome. Moreover, such planning has been performed based on average estimated values of the position of the maxilla and mandible, relying on averages of the facial measurements of the population without accounting for racial variability, so it was not adjusted to real individual physiognomic needs.
Keywords: 3D computing, image processing, image registry, image reconstruction
Procedia PDF Downloads 206

1123 A Study on the Korean Connected Industrial Parks Smart Logistics It Financial Enterprise Architecture
Authors: Ilgoun Kim, Jongpil Jeong
Abstract:
Recently, a connected industrial parks (CIPs) architecture has been proposed that uses new technologies such as RFID, cloud computing, CPS, Big Data, 5G, IIoT, VR-AR, and ventral AI algorithms based on the IoT. This researcher noted the vehicle junction problem (VJP) as a more specific detail of the CIPs architectural models. The VJP noted by this researcher includes 'efficient AI physical connection challenges for vehicles' through ventilation, 'financial issues with complex vehicle physical connections', and 'the welfare and working conditions of the personnel involved in complex vehicle physical connections'. In this paper, we propose a public solution architecture for the 'electronic financial problem of complex vehicle physical connections' as a detailed task within the vehicle junction problem (VJP). The researcher sought solutions to business, consumer, and Korean social problems through technological advancement, studying how the beneficiaries of technological development can include the many consumers in Korean society and the many managers of small Korean companies, not only specific companies. To implement the connected industrial parks (CIPs) architecture using the new technologies more concretely, we noted the vehicle junction problem (VJP) within the smart factory industrial complex and the process of achieving vehicle junction performance among several electronic processes. This researcher proposes a more detailed, integrated public finance enterprise architecture within the overall CIPs architecture. The main details of the public integrated financial enterprise architecture are organized into four main categories: 'business', 'data', 'technique', and 'finance'.
Keywords: enterprise architecture, IT Finance, smart logistics, CIPs
Procedia PDF Downloads 166

1122 Predicting Photovoltaic Energy Profile of Birzeit University Campus Based on Weather Forecast
Authors: Muhammad Abu-Khaizaran, Ahmad Faza’, Tariq Othman, Yahia Yousef
Abstract:
This paper presents a study to provide sufficient and reliable information for constructing a photovoltaic energy profile of the Birzeit University campus (BZU) based on weather forecasts. The developed photovoltaic energy profile helps predict the energy yield of photovoltaic systems from the weather forecast and hence helps in planning energy production and consumption. Two models are developed in this paper: a Clear-Sky Irradiance model and a Cloud-Cover Radiation model, to predict the irradiance for a clear-sky day and a cloudy day, respectively. The adopted procedure for developing these models considers two levels of abstraction. First, irradiance and weather data were acquired by a sensory (measurement) system installed on the rooftop of the Information Technology College building at the Birzeit University campus. Second, power readings of a fully operational 51 kW commercial photovoltaic system, installed on the rooftop of the adjacent College of Pharmacy-Nursing and Health Professions building, are used to validate the output of a simulation model and to help refine its structure. A comparison between a mathematical model that calculates the clear-sky irradiance for the University's location and two sets of accumulated measured data shows that the simulation system accurately resembles the installed PV power station on clear-sky days. However, these comparisons show a divergence between the expected and actual energy yield in extreme weather conditions, including clouding and soiling effects. Therefore, a more accurate irradiance prediction model, the Cloud-Cover Radiation Model (CRM), was developed; it takes into consideration weather factors that affect irradiance, such as relative humidity and cloudiness. The equivalent mathematical formulas implement corrections that provide more accurate inputs to the simulation system. The results of the CRM show a very good match with the actual measured irradiance during a cloudy day. The developed photovoltaic profile helps in predicting the output energy yield of the photovoltaic system installed at the University campus based on the predicted weather conditions; the simulation and practical results for both models are in very good agreement.
Keywords: clear-sky irradiance model, cloud-cover radiation model, photovoltaic, weather forecast
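To give a feel for the two model families, the sketch below pairs a Haurwitz-style clear-sky estimate with a Kasten-Czeplak-style cloud-cover correction. Both are standard published approximations used here as stand-ins; the paper's actual models, fitted to the BZU measurements, are not reproduced in the abstract.

```python
import math

def clear_sky_ghi(solar_elevation_deg):
    """Haurwitz-style clear-sky model (a stand-in assumption):
    GHI = 1098 * sin(h) * exp(-0.057 / sin(h))  [W/m^2]."""
    sin_h = math.sin(math.radians(solar_elevation_deg))
    return 1098.0 * sin_h * math.exp(-0.057 / sin_h) if sin_h > 0 else 0.0

def cloudy_ghi(ghi_clear, cloud_fraction):
    """Kasten-Czeplak-style cloud-cover correction, another common choice:
    GHI = GHI_clear * (1 - 0.75 * C**3.4), with C in [0, 1]."""
    return ghi_clear * (1.0 - 0.75 * cloud_fraction ** 3.4)

ghi = clear_sky_ghi(55.0)                      # sun at ~55 degrees elevation
print(round(ghi), round(cloudy_ghi(ghi, cloud_fraction=0.6)))   # ~839 ~728
```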
Procedia PDF Downloads 132

1121 Combination between Intrusion Systems and Honeypots
Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal
Abstract:
Today, security is a major concern, and intrusion detection systems, intrusion prevention systems, and honeypots can be used to moderate attacks. Many researchers have proposed IDSs (Intrusion Detection Systems) over time, and some combine the features of two or more IDSs into what are called hybrid intrusion detection systems. Most researchers combine the features of the signature-based and anomaly-based detection methodologies. With a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may pass through the IDS undetected, since signatures include factors based on the duration of events that the attacker's actions do not match. Sometimes there is no updated signature for an unknown attack, or the attacker strikes while the signature database is being updated; thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs, in turn, suffer from many false-positive readings. So there is a need to hybridize IDSs so they can overcome each other's shortcomings. In this paper, we propose a new approach to intrusion detection that is more efficient than the traditional IDS: a hybrid IDS based on honeypot technology and the anomaly-based detection methodology. We designed an architecture for the IDS in Packet Tracer and then implemented it in real time. We discuss the experimental results obtained: both the honeypot and the anomaly-based IDS have some shortcomings, but when these two technologies are hybridized, the newly proposed Hybrid Intrusion Detection System (HIDS) is capable of overcoming these shortcomings with much enhanced performance. The HIDS combines the positive features of two different detection methodologies: the honeypot methodology and the anomaly-based intrusion detection methodology. In the experiment, we first ran both intrusion detection systems individually and then together, recording data over time; from these data, we conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor
Procedia PDF Downloads 382

1120 Computing Customer Lifetime Value in E-Commerce Websites with Regard to Returned Orders and Payment Method
Authors: Morteza Giti
Abstract:
As online shopping becomes increasingly popular, computing customer lifetime value to better understand customers is also gaining importance. Two distinct factors that can affect the value of a customer in the context of online shopping are the number of returned orders and the payment method. Returned orders are those that have been shipped but not collected by the customer and are returned to the store. The payment method refers to the way customers choose to pay for the order, of which there are usually two: pre-payment and cash-on-delivery. In this paper, a novel model called RFMSP is presented to calculate customer lifetime value, taking these two parameters into account. The RFMSP model is based on the common RFM model with two extra parameters: S represents the order status, and P indicates the payment method. As a case study for this model, the purchase history of customers in an online shop is used to compute customer lifetime value over a period of twenty months.
Keywords: RFMSP model, AHP, customer lifetime value, k-means clustering, e-commerce
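A pandas sketch of RFMSP-style scoring follows. The rank-quintile 1-5 scale and the way S and P are folded in (share of uncollected orders, share of pre-paid orders) are assumptions for illustration; the paper's exact scoring and any AHP or k-means steps are not reproduced here.

```python
import numpy as np
import pandas as pd

def quintile(series, ascending=True):
    """1-5 score from rank quintiles (5 = best for the customer's value)."""
    ranks = series.rank(ascending=ascending, method="first")
    return np.ceil(5 * ranks / len(ranks)).astype(int)

def rfmsp_scores(orders, now):
    g = orders.groupby("customer_id").agg(
        recency=("date", lambda d: (now - d.max()).days),
        frequency=("order_id", "count"),
        monetary=("amount", "sum"),
        returned_share=("returned", "mean"),   # S: shipped but not collected
        prepaid_share=("prepaid", "mean"),     # P: pre-paid vs cash-on-delivery
    )
    return pd.DataFrame({
        "R": quintile(g.recency, ascending=False),        # more recent is better
        "F": quintile(g.frequency),
        "M": quintile(g.monetary),
        "S": quintile(g.returned_share, ascending=False), # fewer returns is better
        "P": quintile(g.prepaid_share),                   # pre-payment is better
    })

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "order_id": [10, 11, 12, 13, 14],
    "date": pd.to_datetime(["2024-01-05", "2024-03-01", "2024-02-10",
                            "2024-03-20", "2024-04-02"]),
    "amount": [50.0, 70.0, 20.0, 30.0, 25.0],
    "returned": [False, False, True, False, True],
    "prepaid": [True, True, False, False, False],
})
print(rfmsp_scores(orders, now=pd.Timestamp("2024-05-01")))
```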
Procedia PDF Downloads 318

1119 Big Data Analysis with Rhipe
Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim
Abstract:
Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on actual data of various sizes. Experimental results comparing the performance of Rhipe with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the size of the data grows. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes of configuring a Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and its computing speed increased further as the number of data nodes grew.
Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe
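The distributed regression above decomposes cleanly into map and reduce steps because the normal equations only need the sums of X'X and X'y over data chunks. The sketch below shows that decomposition in plain Python/NumPy as a stand-in for the Rhipe map/reduce pair; chunk sizes and coefficients are toy values.

```python
import numpy as np

def mapreduce_regression(chunks):
    """Each "map" task computes its chunk's X'X and X'y; the "reduce" step
    sums them; the driver then solves the normal equations once."""
    xtx, xty = 0, 0
    for X, y in chunks:                       # map over data chunks
        xtx = xtx + X.T @ X                   # per-chunk sufficient statistics
        xty = xty + X.T @ y
    return np.linalg.solve(xtx, xty)          # reduced result -> coefficients

# Toy check: y = 1 + 2*x1 - 3*x2, recovered from four chunks.
rng = np.random.default_rng(0)
chunks = []
for _ in range(4):
    X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
    y = X @ np.array([1.0, 2.0, -3.0]) + 0.01 * rng.normal(size=1000)
    chunks.append((X, y))
print(mapreduce_regression(chunks))           # close to [1, 2, -3]
```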
Procedia PDF Downloads 497