Search results for: ICT Deployment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 468

168 To Cloudify or Not to Cloudify

Authors: Laila Yasir Al-Harthy, Ali H. Al-Badi

Abstract:

As an emerging business model, cloud computing has been initiated to satisfy the needs of organizations and to deliver Information Technology as a utility. The shift to the cloud has changed the way Information Technology departments are traditionally managed and has raised many concerns for both the public and private sectors. The purpose of this study is to investigate the possibility of cloud computing services replacing services provided traditionally by IT departments. Therefore, it aims to 1) explore whether organizations in Oman are ready to move to the cloud; 2) identify the deciding factors leading to the adoption or rejection of cloud computing services in Oman; and 3) provide two case studies, one of a successful cloud provider and another of a successful adopter. This paper is based on multiple research methods, including a set of interviews with cloud service providers and current cloud users in Oman, and data collected through questionnaires from experts in the field and potential users of cloud services. Despite the limited bandwidth capacity and Internet coverage in Oman, which create a challenge in adopting the cloud, it was found that many information technology professionals are encouraged to move to the cloud, while few are resistant to change. The recent launch of a new Omani cloud service provider and the entrance of other international cloud service providers into the Omani market make this research extremely valuable, as it provides real-life experience as well as two case studies on the successful provision of cloud services and the successful adoption of these services.

Keywords: cloud computing, cloud deployment models, cloud service models, deciding factors

Procedia PDF Downloads 297
167 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
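
The encoder training step can be illustrated with a minimal PyTorch-style sketch of a batch-hard triplet loss with online hard negative mining; the margin value and the batch-hard mining variant are illustrative assumptions, not the paper's actual settings:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    # Pairwise Euclidean distances between all embeddings in the batch.
    dists = torch.cdist(embeddings, embeddings, p=2)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: the farthest sample of the same class.
    pos = dists.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    # Hardest negative: the closest sample of a different class (online hard mining).
    neg = dists.masked_fill(same, float('inf')).min(dim=1).values
    return F.relu(pos - neg + margin).mean()
```

Here `embeddings` would be the ResNet-18 encodings of detected rack regions and augmented gallery images, with `labels` their product classes.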

Keywords: retail stores, faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition

Procedia PDF Downloads 156
166 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has advanced in recent years to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into the features that contribute most to failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of more important features over less important ones.
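
As a sketch of the final interpretation step, the shap library's linear explainer can be applied to a fitted classifier; the data here are synthetic stand-ins for the study's sensor-derived maintenance features, and a binary failure label is assumed for simplicity:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data; in the study these would be sensor-derived maintenance features.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

explainer = shap.LinearExplainer(clf, X_train)   # linear explainer, as in the abstract
shap_values = explainer.shap_values(X_test)      # per-feature contributions (local)
shap.summary_plot(shap_values, X_test)           # global feature-importance view
```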

Keywords: automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation

Procedia PDF Downloads 133
165 Towards the Modeling of Lost Core Viability in High-Pressure Die Casting: A Fluid-Structure Interaction Model with 2-Phase Flow Fluid Model

Authors: Sebastian Kohlstädt, Michael Vynnycky, Stephan Goeke, Jan Jäckel, Andreas Gebauer-Teichmann

Abstract:

This paper summarizes the progress of recent computational fluid dynamics research towards the modeling of lost core viability in high-pressure die casting. High-pressure die casting is a process that is widely employed in the automotive and neighboring industries due to its advantages in casting quality and cost efficiency. The degrees of freedom are, however, somewhat limited, as it has so far been difficult to use lost cores in the process. This is now changing, and the deployment of lost cores is considered a future growth opportunity for high-pressure die casting companies. Using this technology is nevertheless difficult: the strength of the core material, chiefly salt, is limited, and experiments have shown that the cores will not hold under all circumstances and process designs. For this purpose, the publicly available CFD library foam-extend (OpenFOAM) is used, and two additional fluid models for incompressible and compressible two-phase flow are implemented as fluid solver models in the FSI library, using the volume-of-fluid (VOF) methodology. The necessity of the fluid-structure interaction (FSI) approach is shown with a simple CFD model geometry. The model is benchmarked against analytical models and experimental data. Sufficient agreement is found with the analytical models and good agreement with the experimental data. An outlook on future developments concludes the paper.
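
For reference, the volume-of-fluid methodology tracks a phase fraction field alpha advected with the flow, from which the mixture properties used by the solver follow; in standard form (not quoted from the paper itself):

```latex
\frac{\partial \alpha}{\partial t} + \nabla\!\cdot\!(\alpha\,\mathbf{u}) = 0,
\qquad
\rho = \alpha\,\rho_1 + (1-\alpha)\,\rho_2,
\qquad
\mu = \alpha\,\mu_1 + (1-\alpha)\,\mu_2
```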

Keywords: CFD, fluid-structure interaction, high-pressure die casting, multiphase flow

Procedia PDF Downloads 332
164 Automated Distribution System Management: Substation Remote Diagnostic and Operation Solution for Obafemi Awolowo University

Authors: Aderonke Oluseun Akinwumi, Olusola A. Komolaf

Abstract:

This paper describes the wide array of challenges facing both electric utilities and consumers in the distribution systems of developing countries, using Obafemi Awolowo University, Ile-Ife, Nigeria as a case study. It also proffers a cost-effective solution through remote monitoring, diagnosis and operation of distribution networks without compromising system reliability. As utilities move from manned and unintelligent networks to completely unmanned smart grids, switching activities at substations and feeders will be managed and controlled remotely by dedicated systems; hence this design. The Substation Remote Diagnostic and Operation Solution (sRDOs) remotely monitors the load on Medium Voltage (MV) and Low Voltage (LV) feeders as well as distribution transformers, and allows the utility to disconnect non-paying customers with absolutely no extra resource deployment and without interrupting supply to paying customers. Implementing this design improved the lifetime of key distribution infrastructure by automatically isolating feeders during overload conditions and, more importantly, isolating erring consumers. This increased the ratio of revenue generated on electricity bills to total network load.
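
The core supervision logic of such a solution can be sketched as a simple rule set; all names and thresholds below are illustrative, not taken from the deployed system:

```python
from dataclasses import dataclass

@dataclass
class Feeder:
    feeder_id: str
    load_kw: float
    rated_kw: float
    connected: bool = True

def supervise(feeders, unpaid_customers, open_breaker, disconnect_customer):
    for f in feeders:
        # Isolate a feeder automatically under overload to protect equipment.
        if f.connected and f.load_kw > f.rated_kw:
            open_breaker(f.feeder_id)
            f.connected = False
    # Disconnect non-paying customers individually; paying customers stay supplied.
    for customer_id in unpaid_customers:
        disconnect_customer(customer_id)
```

`open_breaker` and `disconnect_customer` stand in for the remote switching commands the dedicated system would issue.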

Keywords: electric utility, consumers, remote monitoring, diagnostic, system reliability, manned and unintelligent networks, unmanned smart grids, switching activities, medium voltage, low voltage, distribution transformer

Procedia PDF Downloads 130
163 Using LTE-Sim in New Handover Decision Algorithm for 2-Tier Macrocell-Femtocell LTE Network

Authors: Umar D. M., Aminu A. M., Izaddeen K. Y.

Abstract:

Deployments of miniature base stations, referred to as femtocells, improve the quality of service of indoor and outdoor users. Nevertheless, mobility management remains a key issue with regard to their deployment. This paper addresses this issue, with an in-depth focus on the most important aspect of mobility management: handover. In handover management, making a handover decision in the LTE two-tier macrocell-femtocell network is a crucial research area. Decision algorithms in this research are classified and comparatively analyzed according to received signal strength, user equipment speed, cost function, and interference. It was observed, however, that most of the discussed decision algorithms fail to consider cell selection with a hybrid access policy in a single-macrocell, multiple-femtocell scenario, and that a majority of these algorithms do not incorporate a user-equipment residence parameter. Omitting this parameter increases the number of unnecessary handovers. To deal with these issues, a sophisticated handover decision algorithm is proposed. The proposed algorithm considers the user's velocity, received signal strength, residence time, as well as the femtocell base station's access policy. Simulation results show that the proposed algorithm reduces the number of unnecessary handovers when compared to a conventional received-signal-strength-based handover decision algorithm.
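
A minimal sketch of such a multi-criteria handover decision is given below; the thresholds, hysteresis margin and exact weighting are illustrative, since the paper's parameter values are not reproduced here:

```python
def admit_handover(rss_target_dbm, rss_serving_dbm, speed_mps,
                   expected_residence_s, target_admits_user,
                   hysteresis_db=3.0, max_speed_mps=8.0, min_residence_s=10.0):
    """Decide whether to hand over to a candidate femtocell."""
    if not target_admits_user:                   # hybrid access policy check
        return False
    if speed_mps > max_speed_mps:                # fast users stay on the macrocell
        return False
    if expected_residence_s < min_residence_s:   # avoid ping-pong handovers
        return False
    # Require the target to beat the serving cell by a hysteresis margin.
    return rss_target_dbm > rss_serving_dbm + hysteresis_db
```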

Keywords: user-equipment, radio signal service, long term evolution, mobility management, handoff

Procedia PDF Downloads 125
162 Modeling Battery Degradation for Electric Buses: Assessment of Lifespan Reduction from In-Depot Charging

Authors: Anaissia Franca, Julian Fernandez, Curran Crawford, Ned Djilali

Abstract:

A methodology to estimate the state of charge (SOC) of battery electric buses, including degradation effects, for a given driving cycle is presented to support long-term techno-economic analysis integrating electric buses and charging infrastructure. The degradation mechanisms, characterized by both capacity and power fade with time, have been modeled using an electrochemical model for Li-ion batteries. Iterative changes in the negative electrode film resistance and the decrease in available lithium as a function of utilization are simulated for every cycle. The cycles are formulated to follow typical transit bus driving patterns. The power and capacity decay resulting from the degradation model are introduced as inputs to a longitudinal chassis dynamics analysis that calculates the power consumption of the bus for a given driving cycle to find the state of charge of the battery as a function of time. The method is applied to an in-depot charging scenario, in which the bus is charged exclusively at the depot, overnight and to its full capacity. This scenario is run both with and without degradation effects over time to illustrate the significant impact of degradation mechanisms on bus performance when conducting feasibility studies for a fleet of electric buses. The impact of battery degradation on battery lifetime is also assessed. The modeling tool can further be used to optimize component sizing and charging locations for electric bus deployment projects.
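
The coupling between degradation and the SOC calculation can be sketched minimally as follows: the SOC update divides by the faded, not the nominal, capacity, so degradation directly shrinks usable range. The fade rate here is a placeholder, not an output of the electrochemical model:

```python
def soc_after_cycle(soc0, power_demand_kw, dt_h, capacity_kwh_nominal,
                    cycles_done, capacity_fade_per_cycle=2e-4):
    """Integrate SOC over one driving cycle with a degraded capacity."""
    capacity = capacity_kwh_nominal * (1.0 - capacity_fade_per_cycle * cycles_done)
    soc = soc0
    for p_kw in power_demand_kw:              # power trace from chassis dynamics
        soc -= (p_kw * dt_h) / capacity       # discharge reduces SOC
        soc = min(max(soc, 0.0), 1.0)
    return soc
```

Running this with `cycles_done = 0` and again after many cycles reproduces, in miniature, the with/without-degradation comparison of the in-depot scenario.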

Keywords: battery electric bus, E-bus, in-depot charging, lithium-ion battery, battery degradation, capacity fade, power fade, electric vehicle, SEI, electrochemical models

Procedia PDF Downloads 325
161 Development of a Flexible LoRa-Based Wireless Sensory System for Long-Time Health Monitoring of Civil Structures

Authors: Hui Zhang, Sherif Beskhyroun

Abstract:

In this study, a highly flexible LoRa-based wireless sensing system was used to assess the strain performance of building structures. The system was developed to address the limitation of structural health monitoring (SHM) systems in capturing local damage. It is part of an intelligent SHM system designed to monitor, collect and transmit strain changes in key structural components. The main purpose of the wireless sensor system is to reduce development and installation costs and to reduce the power consumption of the system, so as to achieve long-term monitoring. The highly stretchable flexible strain gauge is mounted on the surface of the structure and is waterproof, heat resistant, and low-temperature resistant, greatly reducing the installation and maintenance costs of the sensor. The system was also developed with the aim of using LoRa wireless communication technology to achieve both low power consumption and long-distance transmission, thereby solving the problem of large-scale deployment of sensors to cover more area in large structures. In long-term monitoring of the building structure, the system shows very high performance, very low actual power consumption, and stable wireless transmission. The results show that the developed system has high resolution and sensitivity and is well suited to long-term monitoring.
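
A minimal sketch of the duty-cycled sense-and-transmit loop such a node runs is shown below; `read_adc` and `lora_send` are hypothetical hardware callbacks, since the paper's firmware interface is not given, and the quarter-bridge conversion is a textbook approximation:

```python
import time

GAUGE_FACTOR = 2.0          # typical value for foil/flexible gauges (assumed)
SAMPLE_PERIOD_S = 600       # long sleep between readings to conserve power

def read_strain(adc_ratio):
    """Convert a bridge ADC ratio to microstrain (quarter-bridge approximation)."""
    return 4.0 * adc_ratio / GAUGE_FACTOR * 1e6

def main_loop(read_adc, lora_send, node_id=1):
    while True:
        microstrain = read_strain(read_adc())
        payload = f"{node_id},{microstrain:.1f}".encode()
        lora_send(payload)            # long-range, low-power uplink
        time.sleep(SAMPLE_PERIOD_S)   # real firmware would deep-sleep here
```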

Keywords: LoRa, SHM system, strain measurement, civil structures, flexible sensing system

Procedia PDF Downloads 103
160 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact

Authors: Abdullah Almowanes

Abstract:

The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological-innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and the deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival work aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia had to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.

Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia

Procedia PDF Downloads 245
159 A Literature Review on Emotion Recognition Using Wireless Body Area Network

Authors: Christodoulou Christos, Politis Anastasios

Abstract:

The utilization of Wireless Body Area Network (WBAN) is experiencing a notable surge in popularity as a result of its widespread implementation in the field of smart health. WBANs utilize small sensors implanted within the human body to monitor and record physiological indicators. These sensors transmit the collected data to hospitals and healthcare facilities through designated access points. Bio-sensors exhibit a diverse array of shapes and sizes, and their deployment can be tailored to the condition of the individual. Multiple sensors may be strategically placed within, on, or around the human body to effectively observe, record, and transmit essential physiological indicators. These measurements serve as a basis for subsequent analysis, evaluation, and therapeutic interventions. In conjunction with physical health concerns, numerous smartwatches are engineered to employ artificial intelligence techniques for the purpose of detecting mental health conditions such as depression and anxiety. The utilization of smartwatches serves as a secure and cost-effective solution for monitoring mental health. Physiological signals are widely regarded as a highly dependable method for the recognition of emotions due to the inherent inability of individuals to deliberately influence them over extended periods of time. The techniques that WBANs employ to recognize emotions are thoroughly examined in this article.

Keywords: emotion recognition, wireless body area network, WBAN, ERC, wearable devices, physiological signals, emotion, smart-watch, prediction

Procedia PDF Downloads 50
158 Digital Manufacturing: Evolution and a Process Oriented Approach to Align with Business Strategy

Authors: Abhimanyu Pati, Prabir K. Bandyopadhyay

Abstract:

The paper highlights the significance of a Digital Manufacturing (DM) strategy in supporting and achieving the business strategy and goals of any manufacturing organization. Towards this end, DM initiatives are given a process perspective, without undermining their technological significance, with a view to linking their benefits directly to the fulfilment of customer needs and expectations in a responsive and cost-effective manner. A digital process model is proposed to categorize digitally enabled organizational processes into synergistic groups that adopt and use digital tools with similar characteristics and functionalities. This opens up future opportunities for researchers and developers to create a unified technology environment for the integration and orchestration of processes. Secondly, an effort is made to apply the 'what' and 'how' features of the Quality Function Deployment (QFD) framework to establish the relationship between customers' needs (both external and internal customers) and the features of the various digital processes that support the achievement of these customer expectations. The paper finally concludes that in the present highly competitive environment, business organizations cannot thrive, let alone sustain themselves, unless they understand the significance of digital strategy and integrate it with their business strategy along a clearly defined implementation roadmap. A process-oriented approach to DM strategy will help business executives and leaders appreciate its value propositions and its direct link to an organization's competitiveness.
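
The QFD 'what vs. how' computation referred to above reduces to weighting a relationship matrix by customer-need importance; a minimal sketch with illustrative numbers:

```python
import numpy as np

# Rows: customer needs ("whats"); columns: digital processes ("hows").
importance = np.array([5, 3, 4])        # importance of each need, 1-5 (illustrative)
relationship = np.array([               # support strength: 0/1/3/9, QFD convention
    [9, 3, 0],
    [1, 9, 3],
    [0, 3, 9],
])

priority = importance @ relationship    # technical priority of each digital process
print(priority, priority.argsort()[::-1])   # rank processes for the DM roadmap
```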

Keywords: knowledge management, cloud computing, knowledge management approaches, cloud-based knowledge management

Procedia PDF Downloads 309
157 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, comprising thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
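
The signal-handling idea can be illustrated with a minimal Python sketch; the actual tool belongs to a C++/Qt system and would also trap fault signals such as SIGSEGV with a native backtrace. This only shows generating a report from a handler without stopping the process:

```python
import datetime
import signal
import traceback

def write_report(signum, frame):
    """On a signal, append a report with the interrupted stack; keep running."""
    with open("daq_debug_report.txt", "a") as fh:
        fh.write(f"--- signal {signum} at {datetime.datetime.now()} ---\n")
        traceback.print_stack(frame, file=fh)   # stack of the interrupted code

# Register handlers for signals used to request diagnostics or shut down cleanly.
for sig in (signal.SIGUSR1, signal.SIGTERM):
    signal.signal(sig, write_report)
```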

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 284
156 Numerical Investigation of a Spiral Bladed Tidal Turbine

Authors: Mohammad Fereidoonnezhad, Seán Leen, Stephen Nash, Patrick McGarry

Abstract:

From the perspective of research innovation, the tidal energy industry is still in its early stages. While a very small number of turbines have progressed to utility-scale deployment, blade breakage is commonly reported due to the enormous hydrodynamic loading applied to the devices. The aim of this study is the development of computer simulation technologies for the design of next-generation fibre-reinforced composite tidal turbines. This will require significant technical advances in the areas of tidal turbine testing and multi-scale computational modelling. The complex turbine blade profiles are designed to incorporate non-linear distributions of airfoil sections to optimize power output and self-starting capability while reducing power fluctuations. A number of candidate blade geometries are investigated, ranging from spiral geometries to parabolic geometries, with blades arranged in both cylindrical and spherical configurations on a vertical-axis turbine. A combined blade element theory and start-up model (BET start-up model) is developed in MATLAB to perform computationally efficient parametric design optimisation for a range of turbine blade geometries. Finite element models are developed to identify optimal fibre-reinforced composite designs to increase blade strength and fatigue life. Advanced fluid-structure interaction models are also used to compute blade deflections following design optimisation.
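
A minimal sketch of the single-streamtube blade-element calculation underlying such a start-up model is shown below, using thin-airfoil lift, no induction correction and illustrative parameters; the paper's MATLAB model is considerably richer:

```python
import numpy as np

def mean_torque_coeff(tsr, n_theta=360, cd0=0.02):
    """Revolution-averaged tangential force measure for a vertical-axis blade."""
    theta = np.linspace(0, 2*np.pi, n_theta, endpoint=False)
    # Relative flow seen by the blade (single streamtube, induction neglected).
    w2 = (tsr + np.cos(theta))**2 + np.sin(theta)**2          # (W/U)^2
    alpha = np.arctan2(np.sin(theta), tsr + np.cos(theta))    # angle of attack
    cl = 2*np.pi*np.sin(alpha)                                # thin-airfoil lift
    ct = cl*np.sin(alpha) - cd0*np.cos(alpha)                 # tangential projection
    return np.mean(w2 * ct)                                   # proportional to torque

for tsr in (0.5, 1.0, 2.0, 3.0):
    # Positive values at low tip-speed ratio indicate self-starting capability.
    print(tsr, mean_torque_coeff(tsr))
```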

Keywords: tidal turbine, composite materials, fluid-structure-interaction, start-up capability

Procedia PDF Downloads 122
155 Integrating Cost-Benefit Assessment and Contract Design to Support Industrial Symbiosis Deployment

Authors: Robin Molinier

Abstract:

Industrial symbiosis (I.S.) is the realization of Industrial Ecology (I.E.) principles in operating production systems. I.S. consists of using waste materials, unavoidable waste energy, recirculated utilities and shared infrastructure/services as resources for production. Environmental benefits can be achieved through resource conservation, but economic profitability is required by the participating actors. I.S. indeed involves several actors, each with their own objectives and resources, so that every one of them must be satisfied by ex-ante arrangements before committing to I.S. execution (investments and transactions). Following the resource-based view of transactions, we build a modular framework to assess global I.S. profitability and to specify each actor's contributions to costs and benefits in line with their resource endowments and performance requirements. The specificities of I.S. projects implied by the need for customization (asset specificity, non-homogeneity) induce the use of long-term contracts for transactions, following transaction-cost economics arguments. We therefore first propose a taxonomy of cost and value drivers for I.S. and assign to each actor the I.S.-specific risks we identified: load profile mismatch, quality problems and value fluctuations. Appropriate contractual guidelines (pricing, cost sharing and warranties) that support mutual profitability are then derived from the detailed identification of contributions in the cost-benefit model. This analytical framework helps identify what to focus on when bargaining over contracts for transactions and investments. Our methodology is applied to I.S. archetypes drawn from a literature survey of eco-industrial park initiatives and from practitioner interviews.
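
The ex-ante satisfaction requirement at the core of the framework can be sketched as a per-actor discounted cost-benefit check: if any actor's net value is negative, the contractual terms (pricing, cost sharing) must be rebalanced. All figures below are illustrative:

```python
def actor_npv(benefits, costs, rate=0.05):
    """Discounted net value of one actor's yearly benefit/cost streams."""
    return sum((b - c) / (1 + rate)**t
               for t, (b, c) in enumerate(zip(benefits, costs)))

actors = {
    # Yearly benefits and costs over the contract horizon (illustrative).
    "waste_producer": ([0, 40, 40, 40], [60, 10, 10, 10]),
    "waste_user":     ([0, 70, 70, 70], [90, 25, 25, 25]),
}

for name, (b, c) in actors.items():
    npv = actor_npv(b, c)
    # A negative NPV means this actor will not commit ex ante.
    print(name, round(npv, 1), "commits" if npv >= 0 else "needs rebalancing")
```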

Keywords: contracts, cost-benefit analysis, industrial symbiosis, risks

Procedia PDF Downloads 340
154 Use of Galileo Advanced Features in Maritime Domain

Authors: Olivier Chaigneau, Damianos Oikonomidis, Marie-Cecile Delmas

Abstract:

GAMBAS (Galileo Advanced features for the Maritime domain: Breakthrough Applications for Safety and security) is a project funded by the European Union Agency for the Space Programme (EUSPA) that aims to identify the search-and-rescue and ship security alert system needs of maritime users (including operators and fishing stakeholders) and to develop operational concepts answering these needs. The general objective of the GAMBAS project is to support the deployment of Galileo exclusive features in the maritime domain in order to improve safety and security at sea, the detection of illegal activities and the associated surveillance means, and resilience to natural and human-induced emergency situations, and to develop, integrate, demonstrate, standardize and disseminate these new associated capabilities. The project aims to demonstrate: improvement of SAR (Search And Rescue) and SSAS (Ship Security Alert System) detection and response to maritime distress, through the integration of new features into the SSAS beacon, in terms of cost optimization, user-friendliness, integration of Galileo and OS-NMA (Open Service Navigation Message Authentication) reception for improved authenticated localization performance and reliability, and at-sea triggering capabilities; optimization of the responsiveness of RCCs (Rescue Co-ordination Centres) to distress situations affecting vessels; and the adaptation of the MCCs (Mission Control Centres) and MEOLUT (Medium Earth Orbit Local User Terminal) to the distribution of SSAS alert data.

Keywords: Galileo new advanced features, maritime, safety, security

Procedia PDF Downloads 92
153 Digital Platform for Psychological Assessment Supported by Sensors and Efficiency Algorithms

Authors: Francisco M. Silva

Abstract:

Technology is evolving, creating an impact on our everyday lives and on the telehealth industry. Telehealth encapsulates the provision of healthcare services and information via a technological approach. There are several benefits to using web-based methods to provide healthcare help; nonetheless, few health and psychological help approaches combine this method with wearable sensors. This paper aims to create an online platform for users to receive self-care help and information using wearable sensors. In addition, researchers developing a similar project obtain a solid foundation as a reference. This study provides descriptions and analyses of the software and hardware architecture. It exhibits and explains a dynamic and efficient heart-rate algorithm that continuously calculates the desired sensor values, and presents diagrams illustrating the website deployment process and how the web server handles the sensor data. The goal is to create a working project using Arduino-compatible hardware. Heart rate sensors send their data values to an online platform: a microcontroller board uses an algorithm to calculate the heart rate from the sensor values and outputs it to a web server. The platform visualizes the sensor data, summarizes it in a report, and creates alerts for the user. Results showed a solid project structure and communication between the hardware and software. The web server displays the conveyed heart rate sensor data on the online platform, presenting observations and evaluations.
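
One simple way such a continuous heart-rate computation can work is sketched below, deriving BPM from smoothed inter-beat intervals; this is an illustrative algorithm, not necessarily the paper's exact one:

```python
def make_bpm_estimator(alpha=0.2):
    """Return a callback fed with beat timestamps (s) that yields a smoothed BPM."""
    state = {"last_beat": None, "bpm": None}
    def on_beat(t):
        if state["last_beat"] is not None:
            ibi = t - state["last_beat"]          # inter-beat interval, seconds
            bpm = 60.0 / ibi
            # Exponential moving average keeps the estimate stable beat to beat.
            state["bpm"] = bpm if state["bpm"] is None else (
                alpha * bpm + (1 - alpha) * state["bpm"])
        state["last_beat"] = t
        return state["bpm"]                       # None until two beats are seen
    return on_beat

est = make_bpm_estimator()
for t in (0.0, 0.8, 1.62, 2.41, 3.2):   # simulated beat times, roughly 75 BPM
    print(est(t))
```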

Keywords: Arduino, heart rate BPM, microcontroller board, telehealth, wearable sensors, web-based healthcare

Procedia PDF Downloads 126
152 Load Balancing Technique for Energy - Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction; this cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: the dynamic workload must be distributed across multiple nodes to ensure that no single node is overloaded. It helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, a direct need of cloud computing. This determines the need for new metrics, energy consumption and carbon emission, in energy-efficient load balancing techniques for cloud computing. Existing load balancing techniques mainly focus on reducing overhead and response time and on improving performance, but none of them have considered energy consumption and carbon emission. In this paper, we therefore introduce a load balancing technique aimed at energy efficiency. This energy-efficient load balancing technique can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent that will help to achieve green computing.
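
A minimal sketch of an energy-aware placement rule in the spirit of such a technique: each task goes to the node with the lowest marginal energy cost, subject to capacity. The linear power model and the greedy rule are illustrative assumptions, as the paper's algorithm is not detailed here:

```python
def place_tasks(tasks, nodes):
    """Greedy energy-aware load balancing over nodes with idle/busy power draw."""
    for demand in sorted(tasks, reverse=True):          # largest tasks first
        def marginal_energy(n):
            # Extra power from raising utilization by demand/capacity.
            return demand / n["capacity"] * (n["p_busy"] - n["p_idle"])
        fits = [n for n in nodes if n["load"] + demand <= n["capacity"]]
        best = min(fits, key=marginal_energy)           # assumes a feasible node
        best["load"] += demand
    return nodes

nodes = [{"capacity": 100, "load": 0, "p_idle": 70, "p_busy": 200},
         {"capacity": 50,  "load": 0, "p_idle": 30, "p_busy": 90}]
print(place_tasks([30, 20, 20, 10], nodes))
```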

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 449
151 Sustainability of Photovoltaic Recycling Planning

Authors: Jun-Ki Choi

Abstract:

The use of valuable resources and the potential for waste generation at the end of the life cycle of photovoltaic (PV) technologies necessitate proactive planning for a PV recycling infrastructure. To ensure the sustainability of PV at large scales of deployment, it is vital to develop and institute low-cost recycling technologies and infrastructure for the emerging PV industry in parallel with the rapid commercialization of these new technologies. There are various issues involved in the economics of PV recycling, and this research examines them at the macro and micro levels, developing a holistic interpretation of the economic viability of PV recycling systems. The study develops mathematical models to analyze the profitability of recycling technologies and to guide tactical decisions for allocating the optimal locations of PV take-back centers (PVTBC), necessary for the collection of end-of-life products. The economic decision is usually based on the marginal capital cost of each PVTBC, the cost of reverse logistics, the distance traveled, and the amount of PV waste collected from various locations. Results illustrate that reverse logistics costs comprise a major portion of the cost of a PVTBC; PV recycling centers should be constructed in optimally selected locations to minimize the total reverse logistics cost of transporting PV waste from the various collection facilities to the recycling center. At the micro process level, automated recycling processes should be developed to handle the large amount of growing PV waste economically. The market prices of the reclaimed materials are important factors in deciding the profitability of the recycling process, which illustrates the importance of recovering the glass and expensive metals from PV modules.
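
The macro-level location decision can be sketched as picking the candidate PVTBC site that minimizes capital cost plus mass-weighted transport cost; all coordinates, masses and rates below are illustrative:

```python
import math

def total_cost(site, collection_points, cost_per_kg_km, capital_cost):
    """Capital cost plus reverse-logistics cost (mass x distance x unit rate)."""
    logistics = sum(mass * cost_per_kg_km * math.dist(site, point)
                    for point, mass in collection_points)
    return capital_cost + logistics

# (x, y) grid coordinates in km and collected PV waste in kg (illustrative).
collection_points = [((0, 0), 800), ((40, 10), 1500), ((10, 60), 600)]
candidates = {(20, 20): 5_000, (35, 15): 7_500}   # site -> marginal capital cost

best = min(candidates, key=lambda s: total_cost(s, collection_points,
                                                cost_per_kg_km=0.02,
                                                capital_cost=candidates[s]))
print("optimal PVTBC site:", best)
```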

Keywords: photovoltaic, recycling, mathematical models, sustainability

Procedia PDF Downloads 255
150 Coherent All-Fiber and Polarization Maintaining Source for CO2 Range-Resolved Differential Absorption Lidar

Authors: Erwan Negre, Ewan J. O'Connor, Juha Toivonen

Abstract:

The need for CO2 monitoring technologies grows simultaneously with worldwide concern over environmental challenges. To that purpose, we developed a compact coherent all-fiber range-resolved Differential Absorption Lidar (RR-DIAL). It is designed around a tunable 2×1 fiber-optic switch, set to a frequency of 1 Hz, between two Distributed Feedback (DFB) lasers emitting in continuous-wave mode at 1571.41 nm (an absorption line of CO2) and 1571.25 nm (a CO2 absorption-free line), with a linewidth of 1 MHz and a tuning range of 3 nm over the operating wavelength. Three-stage amplification through erbium and erbium-ytterbium doped fibers, coupled to a radio-frequency (RF) driven acousto-optic modulator (AOM), generates 100 ns pulses at a repetition rate of 10 to 30 kHz with a peak power of up to 2.5 kW and a spatial resolution of 15 m, allowing fast and highly resolved CO2 profiles. The same afocal collection system is used for the output of the laser source and for the backscattered light, which is then directed to a circulator before being mixed with the local oscillator for heterodyne detection. Packaged in an easily transportable box which also includes a server and a Field Programmable Gate Array (FPGA) card for online data processing and storage, our setup allows quick and effective deployment for versatile in-situ analysis, whether vertical atmospheric monitoring, large-field mapping or continuous oversight of sequestration sites. Setup operation and results from initial field measurements are discussed.
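
These on/off-line returns feed the standard range-resolved DIAL retrieval, in which the CO2 number density over each range gate follows from the ratio of the two profiles; this is the textbook formula, not the paper's full processing chain, and the values below are illustrative:

```python
import numpy as np

def co2_number_density(p_on, p_off, delta_r_m, delta_sigma_m2):
    """n(R) = ln[(P_off(R+dR) * P_on(R)) / (P_on(R+dR) * P_off(R))] / (2 * dsigma * dR)"""
    ratio = (p_off[1:] * p_on[:-1]) / (p_on[1:] * p_off[:-1])
    return np.log(ratio) / (2.0 * delta_sigma_m2 * delta_r_m)

# Illustrative returns in arbitrary units over 15 m range gates.
p_on = np.array([1.00, 0.80, 0.63])    # on-line backscatter power
p_off = np.array([1.00, 0.90, 0.81])   # off-line backscatter power
print(co2_number_density(p_on, p_off, delta_r_m=15.0, delta_sigma_m2=1e-27))
```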

Keywords: CO2 profiles, coherent DIAL, in-situ atmospheric sensing, near infrared fiber source

Procedia PDF Downloads 128
149 Flirting with Ephemerality and the Daily Production of the Fleeting City

Authors: Rafael Martinez

Abstract:

Our view of cities is dominated by the built environment. Buildings, streets, avenues, bridges, flyovers, and so on virtually exclude anything that is not fixed, permanently alterable or indefinitely temporal. Yet city environments can also be shaped by temporarily produced structures which, regardless of their transience, act as thresholds separating or segregating people and spaces. Academic works on cities conceptualize them, whether temporary or permanent, as tangible environments. This paper considers the idea of the ephemeral city, a city purposely produced and lived in as an impermanent, fluid and transitional environment resulting from an alignment of different forces. In particular, the paper proposes to observe how certain performative practices inform the emergence of ephemeral spaces in the city's daily life. With Singapore as its backdrop and focusing on foreign workers, the paper aims at documenting how everyday-life practices, such as flirting, result in the production of transitional space, informed by semiotic blurs and yet material, perceptible, human and tangible for some. In this paper, it is argued that flirting, for Singapore's foreign workers, entails a skillful understanding of what is proposed here as the 'flirting cartography.' Spatially, flirtation thus becomes not only a matter to be taken for granted but also a form of producing a fleeting space, one requiring the deployment of various techniques drawn from a particular knowledge. The paper is based upon a performative methodology which seeks to understand the praxis and rationale of the ephemerality of some spaces produced by foreign workers within this cosmopolitan city. By resorting to this methodological approach, the paper aims to establish the connection between the visibility gained by usually marginalized populations and their ephemeral reclamation of public spaces in the city.

Keywords: ephemeral, flirting, Singapore, space

Procedia PDF Downloads 107
148 Development of Mobile Application for Internship Program Management Using the Concept of Model View Controller (MVC) Pattern

Authors: Shutchapol Chopvitayakun

Abstract:

Nowadays, and especially over the last five years, mobile devices, mobile applications and mobile users, supported by the deployment of wireless communication and cellular networks, have been growing significantly bigger and stronger. They are being integrated with each other for multiple purposes and deployed pervasively into every business and non-business sector, such as education, medicine, traveling, finance, real estate and many more. The objective of this study was to develop a mobile application for seniors, i.e., final-year students, who enroll in the internship program of a tertiary (undergraduate) school and practice on-site at real field sites, organizations and workspaces. During the internship session, all students, as interns, are required to exercise, drill and train on-site at specific locations and on specific tasks, possibly with assignments from their supervisors. Their workplaces include both private and government corporations and enterprises. The mobile application is developed as a transactional processing system that enables users to keep a daily work or practice log, monitors true working locations, and makes it possible to follow the daily tasks of each trainee. Moreover, it provides useful guidance from each intern's advisor in case of emergency. Finally, it can summarize all transactional data and calculate each intern's cumulative hours from the field practice session.
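
The MVC separation the application is built on can be reduced, for the daily-log use case, to the following Python stand-in for the Android implementation; the class names are illustrative:

```python
class LogModel:                       # Model: owns the internship log data
    def __init__(self):
        self.entries = []             # (date, hours, note) tuples
    def add(self, date, hours, note):
        self.entries.append((date, hours, note))
    def total_hours(self):
        return sum(h for _, h, _ in self.entries)

class LogView:                        # View: renders, knows nothing of storage
    def show(self, entries, total):
        for date, hours, note in entries:
            print(f"{date}: {hours}h - {note}")
        print(f"cumulative internship hours: {total}")

class LogController:                  # Controller: mediates user actions
    def __init__(self, model, view):
        self.model, self.view = model, view
    def log_day(self, date, hours, note):
        self.model.add(date, hours, note)
        self.view.show(self.model.entries, self.model.total_hours())

LogController(LogModel(), LogView()).log_day("2016-06-01", 8, "onsite training")
```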

Keywords: internship, mobile application, Android OS, smart phone devices, mobile transactional processing system, guidance and monitoring, tertiary education, senior students, model view controller (MVC)

Procedia PDF Downloads 315
147 Simulation, Optimization, and Analysis Approach of Microgrid Systems

Authors: Saqib Ali

Abstract:

Energy sources are classified into two types depending upon whether they can be replenished. Sources which cannot be restored to their original state once consumed are considered non-renewable energy resources (e.g., coal, fuel oil), whereas those which are replenished even after being consumed are known as renewable energy resources (e.g., wind, solar, hydel). Renewable energy is a cost-effective way to generate clean and green electrical energy, and nowadays the majority of countries are paying heed to energy generation from renewable energy sources (RES). Pakistan mostly relies on conventional energy resources, which are largely non-renewable in nature; coal and fuel oil are among the major resources, and with time their prices keep increasing. On the other hand, RES have great potential in the country, and with their deployment greater reliability and a more effective power system can be obtained. In this thesis, a similar concept is used and a hybrid power system is proposed, composed of a mix of renewable and non-renewable sources. The source side is composed of solar, wind and fuel cells, which are used in an optimal manner to serve the load. The goal is to provide an economical, reliable, uninterruptible power supply. This is achieved by optimal controllers (PI, PD, PID, FOPID). Optimization techniques are applied to the controllers to achieve the desired results, and advanced algorithms (particle swarm optimization, the flower pollination algorithm) are used to extract the desired output from the controllers. A detailed comparison in the form of tables and results is provided, highlighting the efficiency of the proposed system.
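
A minimal sketch of the tuning idea, particle swarm optimization searching PID gains that minimize the tracking error of a toy first-order plant, is given below; the plant, cost function and PSO settings are illustrative, not the thesis's actual models:

```python
import math
import random

def cost(gains, dt=0.01, steps=500):
    """Integral absolute error of a unit-step PID loop around dy/dt = -y + u."""
    kp, ki, kd = gains
    y = integ = prev_e = 0.0
    iae = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp*e + ki*integ + kd*(e - prev_e)/dt
        prev_e = e
        y += dt * (-y + u)              # explicit Euler step of the plant
        if not math.isfinite(y):
            return 1e9                  # diverged: penalize heavily
        iae += abs(e) * dt
    return iae

def pso(n=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 10) for _ in range(3)] for _ in range(n)]
    vel = [[0.0]*3 for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n):
            for d in range(3):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w*vel[i][d] + c1*r1*(pbest[i][d] - pos[i][d])
                             + c2*r2*(gbest[d] - pos[i][d]))
                pos[i][d] = max(0.0, pos[i][d] + vel[i][d])
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest

print("tuned (Kp, Ki, Kd):", pso())
```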

Keywords: distributed generation, demand-side management, hybrid power system, micro grid, renewable energy resources, supply-side management

Procedia PDF Downloads 97
146 Global Healthcare Village Based on Mobile Cloud Computing

Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar

Abstract:

Cloud computing, being the use of hardware and software delivered as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers call for an efficient scheme that makes health data available with low response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models, namely country, continent and global health clouds, to help solve the problem mentioned above. In the continent model, two data centers are created, of which one is local and the other global. The local data center serves requests originating within the continent, whereas the global one serves the requests of all others. With the methods adopted, the availability of relevant medical data to patients, specialists, and emergency staff is assured regardless of location and time. From our intensive experiments using a simulation approach, it was observed that a service broker policy optimized for response time yields very good performance in terms of response-time reduction. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines); the proportional increase in response time is within 9%. The results obtained from our simulation experiments show that utilizing MGHV leads to a reduction in health care expenditures and helps address the shortage of qualified medical staff faced by both developed and developing countries.
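
A response-time-optimized broker policy of the kind evaluated can be sketched as routing each request to the data center with the lowest estimated response time; the latency model and figures below are illustrative:

```python
def pick_datacenter(request_continent, datacenters):
    """datacenters: dicts with continent, base latency, queueing delay, WAN penalty (ms)."""
    def est_response(dc):
        wan_penalty = 0 if dc["continent"] == request_continent else dc["wan_ms"]
        return dc["base_ms"] + dc["queue_ms"] + wan_penalty
    return min(datacenters, key=est_response)

datacenters = [
    {"name": "local-eu", "continent": "EU", "base_ms": 20, "queue_ms": 35, "wan_ms": 120},
    {"name": "global",   "continent": "US", "base_ms": 25, "queue_ms": 10, "wan_ms": 120},
]
print(pick_datacenter("EU", datacenters)["name"])   # local center wins for EU requests
```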

Keywords: cloud computing (MCC), e-healthcare, availability, response time, service broker policy

Procedia PDF Downloads 377
145 A Novel Rapid Well Control Technique Modelled in Computational Fluid Dynamics Software

Authors: Michael Williams

Abstract:

The ability to control a flowing well is of the utmost importance. During the kill phase, heavyweight kill mud is circulated around the well. While this increases the bottomhole pressure, it also increases damage to the near-wellbore formation. The addition of high-density spherical objects has the potential to minimise this near-wellbore damage, increase bottomhole pressure and reduce the operational time needed to kill the well. The time saving comes from the rapid deployment of high-density spherical objects instead of building up a high-density drilling fluid. The research aims to model the well kill process using Computational Fluid Dynamics software. A model has been created as a proof of concept to analyse the flow of micron-sized spherical objects in the drilling fluid. Initial results show that this new methodology of spherical objects in drilling fluid agrees with the traditional streamlines seen in particle-free flow. Additional models demonstrate that areas of higher flow rate around the bit can increase the probability of washout of formations but do not affect the flow of the micron-sized spherical objects. Interestingly, areas that experience dimensional changes, such as tool joints and various BHA components, do not appear at this initial stage to experience increased velocity or to create areas of turbulent flow, which bodes well for borehole stability. In conclusion, the initial models of this novel well control methodology have not revealed any adverse flow patterns, suggesting that the model may be viable under field conditions.
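
Why the spheres raise bottomhole pressure can be seen from a simple hydrostatic estimate: the mixture density, and with it the static BHP, grows with the volume fraction of dense spheres added to the mud. This back-of-the-envelope model (no friction, no influx) is not the paper's CFD model, and the values are illustrative:

```python
G = 9.81  # gravitational acceleration, m/s^2

def bottomhole_pressure_pa(depth_m, rho_mud, rho_sphere, sphere_fraction):
    """Hydrostatic BHP of a mud/sphere mixture."""
    rho_mix = sphere_fraction * rho_sphere + (1 - sphere_fraction) * rho_mud
    return rho_mix * G * depth_m

base = bottomhole_pressure_pa(3000, rho_mud=1200, rho_sphere=7800, sphere_fraction=0.0)
with_spheres = bottomhole_pressure_pa(3000, 1200, 7800, sphere_fraction=0.10)
# Spheres raise BHP (here ~35 -> ~55 MPa) without mixing a heavier mud first.
print(base / 1e6, with_spheres / 1e6)
```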

Keywords: well control, fluid mechanics, safety, environment

Procedia PDF Downloads 171
144 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in daily life, computer vision, and especially deep convolutional neural network models, has developed rapidly. With the increasing complexity of real-world visual object detection tasks and rising recognition accuracy requirements, object detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, comprehensively transferring the knowledge contained in the complex network model to a lightweight one. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation approach that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for object detection, the soft-target outputs of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, an intermediate guidance layer, between the teacher network and the student network to make up for the large difference between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in speed and accuracy.
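
A minimal PyTorch-style sketch of a multi-faceted distillation loss in the spirit of M-KD is shown below, combining the ground-truth term, the soft-target term and hidden-layer attention transfer; the temperature, weights, and the paper's layer-relationship and exploration terms are not reproduced:

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Spatial attention: channel-wise mean of squared activations, L2-normalized."""
    a = feat.pow(2).mean(dim=1).flatten(1)     # (N, C, H, W) -> (N, H*W)
    return F.normalize(a, dim=1)

def mkd_loss(student_logits, teacher_logits, student_feat, teacher_feat,
             targets, T=4.0, alpha=0.5, beta=100.0):
    # Assumes student/teacher feature maps share spatial size (else interpolate).
    hard = F.cross_entropy(student_logits, targets)            # ground-truth term
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),  # soft-target term
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    attn = F.mse_loss(attention_map(student_feat),             # attention transfer
                      attention_map(teacher_feat))
    return hard + alpha * soft + beta * attn
```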

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 278
143 Russia’s Role in Resolving the Nagorno-Karabakh Conflict 1990-2020

Authors: Friba Haidari

Abstract:

The aim of the study is to identify Russia's role in managing the Nagorno-Karabakh conflict between Armenia and Azerbaijan during the years 1990 to 2020. The Nagorno-Karabakh crisis cannot be considered a mere territorial conflict; it is also a crossroads of the interests of foreign actors. Geopolitical rivalries and access to energy for regional and trans-regional actors have complicated the crisis and created a security challenge in the region, one liable to escalate into a full-blown war between the parties involved. The geopolitical position of Nagorno-Karabakh and its current situation have affected all peripheral states in some way. Russia, as one of the main actors on this scene, has been actively involved since the beginning of the crisis. The Russians have always sought to strengthen their influence and presence in the Nagorno-Karabakh crisis. Russia's efforts to weaken the role of the Minsk Group, the presence of Western actors, and the deployment of Russian forces in the disputed area can all be assessed in this context. This study thus seeks to answer the question: what role did Russia play in managing the Nagorno-Karabakh conflict between Armenia and Azerbaijan between 1990 and 2020? The study hypothesizes that Russia prevented the escalation of the conflict through mediation and a measure of coercion. The study is divided into four parts: conflict management as a theoretical framework; an examination of the competition and the roles of actors in the Caucasus region, especially the Minsk Group; the approaches, tools and methods Russia has used in its foreign policy to manage the conflict; and, finally, the relations between the countries involved and Russia's likely future role. The study analyzes and synthesizes ideas and information from authoritative international sources using an explanatory method and shares its results.

Keywords: Russia, conflict, Nagorno-Karabakh, management

Procedia PDF Downloads 91
142 Monitoring of the Chillon Viaducts after Rehabilitation with Ultra High Performance Fiber Reinforced Cement-Based Composite

Authors: Henar Martín-Sanz García, Eleni Chatzi, Eugen Brühwiler

Abstract:

Located on the shore of Lake Geneva in Switzerland, the Chillon Viaducts are two parallel structures consisting of post-tensioned concrete box girders, with a total length of 2 kilometres and 100 m spans. Built in 1969, the bridges currently accommodate a traffic load of 50,000 vehicles per day, thereby holding a key role both in terms of historic value and socio-economic significance. Although several improvements have been carried out in the past two decades, recent inspections revealed an alkali-aggregate reaction in the concrete deck and piers, reducing the concrete strength. To prevent further development of this issue, a 40 mm layer of Ultra-High-Performance Fibre-Reinforced cement-based Composite (UHPFRC), incorporating rebars, was cast over the slabs, acting as a waterproofing membrane and providing a significant increase in the resistance of the bridge structure through composite UHPFRC-RC action, in particular of the deck slab. After completion of the rehabilitation works, a structural monitoring campaign was installed on the deck slab of one representative span, based on accelerometers, strain gauges, and thermal and humidity sensors. This campaign seeks to reveal information on the behavior of UHPFRC-concrete composite systems, such as the increase in stiffness, fatigue strength, durability and long-term performance. Consequently, the structural monitoring is expected to last for at least three years. A first insight into the analyzed results from the initial months of measurements is presented herein, along with future improvements or necessary changes to the deployment.

Keywords: composite materials, rehabilitation, structural health monitoring, UHPFRC

Procedia PDF Downloads 279
141 Human Vibrotactile Discrimination Thresholds for Simultaneous and Sequential Stimuli

Authors: Joanna Maj

Abstract:

Body-machine interfaces (BMIs) afford users a non-invasive way to coordinate movement. Vibrotactile stimulation has been incorporated into BMIs to provide real-time feedback and guide movement control, to the benefit of patients with cognitive deficits, such as stroke survivors. To advance research in this area, we examined vibrational discrimination thresholds at four body locations to determine suitable application sites for future multi-channel BMIs using vibration cues to guide movement planning and control. Twelve healthy adults had a pair of small vibrators (tactors) affixed to the skin at each location: forearm, shoulders, torso, and knee. A 'standard' stimulus (186 Hz; 750 ms) and 'probe' stimuli (11 levels ranging from 100 Hz to 235 Hz; 750 ms) were delivered. Probe and standard stimulus pairs could occur sequentially or simultaneously (timing). Participants verbally indicated which stimulus felt more intense. Stimulus order was counterbalanced across tactors and body locations. The probabilities that probe stimuli felt more intense than the standard stimulus were computed and fit with a cumulative Gaussian function; the discrimination threshold was defined as one standard deviation of the underlying distribution. Threshold magnitudes depended on stimulus timing and location. Discrimination thresholds were better for stimuli applied sequentially rather than simultaneously at the torso as well as at the knee, whereas thresholds at the shoulder were small (better) and relatively insensitive to timing differences. BMI applications requiring multiple channels of simultaneous vibrotactile stimulation should therefore consider the shoulder as a deployment site for a vibrotactile BMI interface.
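
The threshold estimation can be sketched with a cumulative-Gaussian fit, reading the threshold off as one standard deviation; the data points below are illustrative, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(freq, mu, sigma):
    return norm.cdf(freq, loc=mu, scale=sigma)

# Probe frequencies (Hz) and proportion judged more intense than the 186 Hz standard.
probe_hz = np.array([100, 127, 154, 172, 186, 200, 217, 235], float)
p_more = np.array([0.05, 0.10, 0.25, 0.40, 0.50, 0.65, 0.85, 0.95])

(mu, sigma), _ = curve_fit(psychometric, probe_hz, p_more, p0=[186.0, 20.0])
print(f"point of subjective equality: {mu:.1f} Hz, threshold (1 SD): {sigma:.1f} Hz")
```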

Keywords: electromyography, electromyogram, neuromuscular disorders, biomedical instrumentation, controls engineering

Procedia PDF Downloads 64
140 The Security Trade-Offs in Resource Constrained Nodes for IoT Application

Authors: Sultan Alharby, Nick Harris, Alex Weddell, Jeff Reeve

Abstract:

The concept of the Internet of Things (IoT) has received much attention over the last five years. It is predicted that the IoT will influence every aspect of our lifestyles in the near future. Wireless sensor networks (WSNs) are one of the key enablers of the operation of IoTs, allowing data to be collected from the surrounding environment. However, due to limited resources, the nature of deployment and unattended operation, a WSN is vulnerable to various types of attack. Security is paramount for reliable and safe communication between IoT embedded devices, but it comes at a cost to resources. Nodes are usually equipped with small batteries, which makes energy conservation crucial to IoT devices. Nevertheless, the security cost in terms of energy consumption has not been studied sufficiently. Previous research has used the security specification of IEEE 802.15.4 for IoT applications, but the energy cost of each security level and the impact on quality-of-service (QoS) parameters remain unknown. This research focuses on the cost of security at the IoT medium access control (MAC) layer. It begins by studying the energy consumption of the IEEE 802.15.4 security levels, followed by an evaluation of the impact of security on data latency and throughput; it then presents the impact of transmission power on security overhead, and finally shows the effects of security on memory footprint. The results show that the security overhead in terms of energy consumption with a payload of 24 bytes fluctuates between 31.5% over non-secure packets at the minimum security level and 60.4% at the top security level of the 802.15.4 specification. They also show that the security cost has less impact at longer packet lengths and more with smaller packet sizes. In addition, the results depict a significant impact on data latency and throughput: overall, maximum-length authentication decreases throughput by almost 53%, and encryption plus authentication together by almost 62%.
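
A first-order view of where the energy overhead comes from: securing an 802.15.4 frame adds an auxiliary security header plus a MIC of 4, 8 or 16 bytes, and radio energy scales roughly with bytes sent. The sketch below ignores crypto-processing energy (so it understates the measured overheads above) and assumes one common header size:

```python
MIC_BYTES = {"MIC-32": 4, "MIC-64": 8, "MIC-128": 16}
AUX_HEADER = 6       # auxiliary security header, bytes (one common size, assumed)
MAC_OVERHEAD = 11    # unsecured MAC header + FCS, bytes (illustrative)

def tx_energy_overhead(payload, level):
    """Relative TX-energy overhead of securing a frame vs. sending it plain."""
    plain = MAC_OVERHEAD + payload
    secured = plain + AUX_HEADER + MIC_BYTES[level]
    return (secured - plain) / plain

for payload in (24, 100):           # overhead shrinks as payload grows
    for level in MIC_BYTES:
        print(payload, level, f"{tx_energy_overhead(payload, level):.1%}")
```

Even this byte-count model reproduces the qualitative result that small packets pay proportionally more for security than large ones.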

Keywords: energy consumption, IEEE 802.15.4, IoT security, security cost evaluation

Procedia PDF Downloads 168
139 Fast Aerodynamic Evaluation of Transport Aircraft in Early Phases

Authors: Xavier Bertrand, Alexandre Cayrel

Abstract:

The early phase of an aircraft development programme is instrumental, as it really drives the potential of a new concept. Any weakness in the high-level design (wing planform, movable surfaces layout, etc.) will be extremely difficult and expensive to recover from later in the aircraft development process. Aerodynamic evaluation in this very early development phase is driven by two main criteria: a short lead time, to allow quick iterations of the geometrical design, and a high quality of calculation, to get an accurate and reliable assessment of the current status. These two criteria are usually quite contradictory. A short end-to-end lead time of a couple of hours can be obtained with very simple tools (semi-empirical methods, for instance), although their accuracy is limited, whereas higher-quality calculations require heavier, more complex tools, which need more complex inputs as well and a significantly longer lead time. At this point, a choice has to be made between accuracy and lead time. A brand new approach has been developed within Airbus, aiming at obtaining high-quality evaluations of the aerodynamics of an aircraft quickly. This methodology is based on the joint use of surrogate modelling and a lifting line code. The surrogate modelling is used to get the wing section characteristics (e.g., lift coefficient vs. angle of attack), whatever the airfoil geometry, the status of the movable surfaces (ailerons/spoilers) or the high-lift device deployment. From these characteristics, the lifting line code is used to get the 3D effects on the wing, whatever the flow conditions (low/high Mach numbers, etc.). This methodology has been applied successfully to a concept for a medium-range aircraft.
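
The lifting-line half of the method can be sketched with a classical Prandtl Fourier-series solve, where the sectional lift slope and zero-lift angle, which the surrogate model would supply per airfoil and surface setting, are the inputs; constant chord and all numbers are illustrative:

```python
import numpy as np

def lifting_line_CL(alpha_deg, b=30.0, S=100.0, a0=6.0, alpha0_deg=-2.0, N=20):
    """Prandtl lifting line via Fourier collocation for a rectangular wing.

    a0 (1/rad) and alpha0 are the sectional characteristics that the surrogate
    model would provide for the current airfoil/surface configuration.
    """
    c = S / b                                        # constant chord
    theta = np.arange(1, N + 1) * np.pi / (2 * N)    # stations on the half-span
    n = 2 * np.arange(N) + 1                         # odd harmonics (symmetric flight)
    mu = a0 * c / (4.0 * b)
    alpha = np.radians(alpha_deg - alpha0_deg)
    # Monoplane equation: sum_n A_n sin(n theta) (sin theta + n mu) = mu alpha sin theta
    M = np.sin(np.outer(theta, n)) * (np.sin(theta)[:, None] + mu * n[None, :])
    A = np.linalg.solve(M, mu * alpha * np.sin(theta))
    return np.pi * (b * b / S) * A[0]                # CL = pi * AR * A1

print(lifting_line_CL(5.0))                          # wing CL at 5 deg angle of attack
```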

Keywords: aerodynamics, lifting line, surrogate model, CFD

Procedia PDF Downloads 359