Search results for: vision computing
1452 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements
Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo
Abstract:
Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and magnetotelluric (MT) measurements, and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust the dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea behind the adjoint-state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite-difference approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to the 3D Maxwell’s equations. Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation versus those obtained with a traditional finite-difference approach. Numerical results shall show that our proposed adjoint-based technique produces solutions of enhanced accuracy at negligible cost, as opposed to the finite-difference approach, which requires the solution of one additional problem per derivative.
Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation
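To make the cost contrast concrete, the sketch below shows the finite-difference baseline the authors compare against: each derivative with respect to a bed boundary position requires one additional forward solve, whereas the adjoint approach obtains all derivatives from a single auxiliary solve. The `forward_solve` function and its toy response are hypothetical placeholders, not the authors' solver.

```python
import numpy as np

def forward_solve(resistivities, boundaries):
    """Hypothetical stand-in for a 1.5D forward simulator returning one measurement;
    a real solver would assemble and solve the variational problem here."""
    thickness = np.diff(boundaries)
    return float(np.sum(thickness / resistivities))   # toy layered response

def fd_boundary_derivatives(resistivities, boundaries, h=1e-4):
    """Finite-difference baseline: one *extra* forward solve per bed boundary,
    which is exactly the cost the adjoint formulation avoids."""
    base = forward_solve(resistivities, boundaries)
    grads = np.empty(len(boundaries))
    for k in range(len(boundaries)):
        shifted = boundaries.astype(float).copy()
        shifted[k] += h                                # perturb one boundary
        grads[k] = (forward_solve(resistivities, shifted) - base) / h
    return grads

print(fd_boundary_derivatives(np.array([1.0, 10.0, 2.0]),
                              np.array([0.0, 5.0, 20.0, 50.0])))
```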
Procedia PDF Downloads 178
1451 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
Procedia PDF Downloads 67
1450 Yawning Computing Using Bayesian Networks
Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube
Abstract:
Road crashes kill over a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers. This percentage is higher than the combined percentage of fatal crashes due to illegal/unsafe U-turns and illegal/unsafe reversing. Although a relatively small percentage of police reports on road accidents highlights drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of their occurrence. Some scenarios show that these factors are significant in accidents involving fatalities and injuries. Hence the need for an automatic driver fatigue detection system in order to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers’ faces and the environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers’ faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony in the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers’ faces and external cues from the environment are combined using machine learning algorithms to automatically detect fatigue.
Keywords: intelligent transportation systems, Bayesian networks, yawning computing, machine learning algorithms
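As an illustration of how facial and environmental cues can be fused in a Bayesian network, the minimal sketch below computes the posterior probability of fatigue from a yawning observation and a road-monotony indicator. The network structure (monotony → fatigue → yawning) and all probability values are illustrative assumptions, not figures from the study.

```python
# Toy Bayesian network: Monotony -> Fatigue -> Yawning.
# All conditional probabilities below are illustrative assumptions.
p_fatigue_given_m = {1: 0.35, 0: 0.10}   # P(F=1 | M), monotony from environment sensors
p_yawn_given_f = {1: 0.70, 0: 0.15}      # P(Y=1 | F), yawning from face analysis

def posterior_fatigue(yawning: int, monotony: int) -> float:
    """P(F=1 | Y=y, M=m) by direct enumeration over the fatigue variable F."""
    def joint(f: int) -> float:
        p_f = p_fatigue_given_m[monotony] if f else 1 - p_fatigue_given_m[monotony]
        p_y = p_yawn_given_f[f] if yawning else 1 - p_yawn_given_f[f]
        return p_f * p_y
    return joint(1) / (joint(0) + joint(1))

print(posterior_fatigue(yawning=1, monotony=1))  # ~0.72 with these toy numbers
```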
Procedia PDF Downloads 455
1449 The Sublimation Of Personal Drama Into Mythological Tale: ‘‘The Search Of Golden Fleece’’ By Alexander Mcqueen, Givenchy
Authors: Ani Hambardzumyan
Abstract:
The influence of Greek culture and Greek mythology on the fashion industry is enormous. The first reason behind this is that Greek culture is one of the core elements shaping the clothing tradition in Europe. French fashion houses have always been considered among the leading clothing representatives in the world. As shown in the first chapter, they are among the first to draw inspiration from Greek cultural heritage and apply it when creating their garments. The French fashion industry kept traditional classical elements in clothes for decades. However, from the second half of the 20th century, this idea started to alter step by step. Society was transforming its vision under the influence of avant-garde movements. Hence, the fashion industry needed to transform its conception as well. However, it should be mentioned that fashion brands never stopped looking at the past when creating a new perspective or vision. Paradoxically, Greek mythology and clothing tradition continued to be applied even in the search for new ideas or new interpretations. In 1997 Alexander McQueen presented his first Haute Couture collection for the French fashion house Givenchy, inspired by Greek mythology and titled ‘‘Search for The Golden Fleece.’’ Perhaps this was one of the most controversial Haute Couture shows that a French audience could expect to see and French media could capture and write about. The paper discusses the Spring/Summer 1997 collection ‘‘The Search of the Golden Fleece’’ by Alexander McQueen. It should be mentioned that no research has yet been conducted to analyze the mythological and archetypal nature of the collection, and general observations that go beyond traditional historical reviews are few in number. Here we will observe the designer’s transformative new approach to Greek heritage and the media’s perception of it when the collection was presented. On top of that, we will observe Alexander McQueen’s life in parallel with the fashion show, since the collection is nothing else but the sublimation of his personal journey and drama.
Keywords: mythology, McQueen, the argonaut, french fashion, golden fleece, givenchy
Procedia PDF Downloads 116
1448 FRATSAN: A New Software for Fractal Analysis of Signals
Authors: Hamidreza Namazi
Abstract:
Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed which extracts information from signals through three measures. These measures are the fractal dimension, Jeffrey’s measure, and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing these three measures, the software can classify whether or not a signal is fractal. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of this software, a set of EEG signals was given as input and the results were computed and plotted. This software is useful not only for fundamental fractal analysis of signals but can also be used for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of seizure can be predicted by noticing sudden changes in the plot.
Keywords: EEG signals, fractal analysis, fractal dimension, Hurst exponent, Jeffrey’s measure
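As a minimal sketch of the sliding-window Hurst estimation described above (not FRATSAN's actual Visual C++ implementation), the following Python uses classical rescaled-range (R/S) analysis with a window of 10% of the data moved one sample at a time:

```python
import numpy as np

def rescaled_range_hurst(x):
    """Estimate the Hurst exponent of a 1-D signal via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = [s for s in (n // 2, n // 4, n // 8) if s >= 8]
    log_rs, log_n = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            chunk = x[start:start + s]
            dev = np.cumsum(chunk - chunk.mean())   # mean-adjusted cumulative deviate
            r, sd = dev.max() - dev.min(), chunk.std()
            if sd > 0:
                rs_vals.append(r / sd)
        if rs_vals:
            log_rs.append(np.log(np.mean(rs_vals)))
            log_n.append(np.log(s))
    # The Hurst exponent is the slope of log(R/S) against log(n).
    return np.polyfit(log_n, log_rs, 1)[0]

def sliding_hurst(signal, frac=0.10):
    """Move a window of ~10% of the data one sample at a time, as the abstract describes."""
    w = max(int(len(signal) * frac), 32)
    return [rescaled_range_hurst(signal[i:i + w]) for i in range(len(signal) - w + 1)]
```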
Procedia PDF Downloads 467
1447 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning
Authors: Richard O’Riordan, Saritha Unnikrishnan
Abstract:
Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, varying from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functional capacities. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. The current literature details existing computer vision and deep learning algorithms, their methodologies, and individual results, but it also details the challenges the algorithms face, the resources they need to operate, and the shortcomings experienced when detecting lanes in certain weather and lighting conditions. This paper will explore these shortcomings and attempt to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. This paper uses a pre-trained LaneNet model to classify lane and non-lane pixels, with binary segmentation as the base detection method, on the existing BDD100k dataset, followed by a locally generated custom dataset. The first selected roads are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is older, with infrastructure and lane markings reflecting its age. The performance of the proposed method will be evaluated on the custom dataset and compared to its performance on the BDD100k dataset. In summary, this paper will use Transfer Learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection
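A minimal transfer-learning sketch in the spirit of the approach described: a pretrained segmentation backbone is frozen and only the head is fine-tuned for binary lane/non-lane masks. It uses a generic torchvision FCN rather than the actual LaneNet checkpoint, and the data loaders are assumed.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Pretrained segmentation network; only the classifier head is adapted and trained.
model = fcn_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(512, 1, kernel_size=1)   # 1 channel: lane vs non-lane
for p in model.backbone.parameters():
    p.requires_grad = False                               # freeze the encoder

criterion = nn.BCEWithLogitsLoss()
optim = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B,3,H,W) float tensors; masks: (B,1,H,W) binary lane labels."""
    logits = model(images)["out"]
    loss = criterion(logits, masks)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()
```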
Procedia PDF Downloads 104
1446 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company
Authors: Rahma Al Balushi
Abstract:
Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the GIS data submission process and the journey to develop the current process. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, due to this process, GIS has been and is ready to be integrated with other systems, as well as being the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based upon GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations also helped to reduce the missing attributes of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2021. Overall, it is concluded that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.
Keywords: asset management ISO55001, standard procedures process, governance, geodatabase, NOC, CMMS
Procedia PDF Downloads 207
1445 Aesthetic and Social Vision in Abubakar Gimba’s a Toast in the Cemetery
Authors: James Funsho Tope
Abstract:
A prolific writer, Gimba brings out in his collection of short stories, A Toast in the Cemetery, the themes of decay and corruption in the urban setting through the use of images, symbols, setting, and character. Through these media, Gimba seeks to reveal the decay and corruption in society. Gimba uses aesthetics to convey his message, thus making a call for change in the fabric of society.
Keywords: corruption, decay, character, setting, symbolism, images, society
Procedia PDF Downloads 606
1444 Vieira Da Silva's Tiles at Universidade Federal Rural Do Rio de Janeiro: A Conservation and Restoration Project
Authors: Adriana Anselmo Oliveira
Abstract:
The present project showcases a tile work by the Franco-Portuguese artist Maria Helena Vieira da Silva (1908-1992): a set of 8 panels composed of figurative and geometric tiles, with extra tiles framing nearby doors and windows in a study room at the Universidade Federal Rural do Rio de Janeiro (UFRRJ). This one-of-a-kind tileset was designed and made by Vieira da Silva between 1942 and 1943, during the artist's six-year exile in Brazil. Over the years, several units were lost, which led to their replacement in the nineties. However, these replacements do not do justice to the original work of art. In 2007, a project was initiated to fully repair and maintain the set. Three panels were removed and restored, but the project was halted; to this day, the three fully restored panels remain in boxes. In 2016 a new restoration project was submitted by the Faculdade de Belas Artes da Universidade de Lisboa in collaboration with the Fundação Árpád Szenes-Vieira da Silva. There are many varied opinions on restoring and conserving older pieces of art; however, we have the moral duty to safeguard the original materials used by the artist along with the artist's original vision, and also to care for the future generations of students who will use the space in which the tile work was inserted. Many tiles have been replaced by white tiles, by tiles with a divergent colour palette and technique, and, in a few cases, set in the incorrect place or the wrong way around. These many factors make it increasingly difficult to maintain the artist's original vision and destroy any chance of coherence within the artwork itself. The conservation technician cannot make new images to fill the empty spaces or mark the remaining images with their own creative input. With reliable photographic documentation that can provide us with the necessary vision to proceed with an accurate reconstruction, we have the obligation to proceed and return the work of art to its true form, as in its current state it is impossible to maintain its original glory. Using the information we have, we must find a way to differentiate the original tiles from the reconstructions in order to recreate and reclaim the original message of the artist. The objective of this project is to understand the significance of tiles in Vieira da Silva's art as well as the influence they had on the artist's pictorial language, since colour definition in tile work is vastly different from the painting process, as the materials change during their fusion. Another primary goal is to understand what the previous interventions achieved besides increasing the artwork's durability. The main objective is to submit a proposal that can salvage the artist's visual intention and support it for posterity. In summary, this proposal goes further than the usual conservation interventions, as it intends to recreate the original artistic worth, prioritising the aesthetics and keeping its soul alive.
Keywords: Vieira da Silva, tiles, conservation, restoration
Procedia PDF Downloads 154
1443 Image Based Landing Solutions for Large Passenger Aircraft
Authors: Thierry Sammour Sawaya, Heikki Deschacht
Abstract:
In commercial aircraft operations, almost half of all accidents happen during the approach or landing phases. Automatic guidance and automatic landing have proven to bring significant added safety value to this challenging landing phase. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, to further expand the possibility of performing automatic approach and landing on runways where the current guiding systems are either not fitted or not optimal. Current systems for automated landing often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. In addition, these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they are leading the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to demonstrate, develop, validate and verify a certifiable automatic landing system guiding an airplane during the approach and landing phases based on an onboard camera system capturing images, enabling automatic landing independent of radio signals and without precision landing instruments. In the frame of this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as for the testing and integration of the developed equipment in a Large Passenger Aircraft representative environment. The aim of this paper is to describe the system as well as the associated methods and tools developed for validation and verification.
Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing
Procedia PDF Downloads 101
1442 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized processes for machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing by resources collectively stored within a data lake. Being involved in this digital transformation has necessitated standardizing the software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 112
1441 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model
Authors: Anshika Kankane, Dongshik Kang
Abstract:
Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective, and automatic approach has been used to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize seven kinds of marine debris at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape-matching network called HOGShape, so that debris can be cleaned up in time by clean-up organizations using the system’s warning notifications. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used, along with multiple template matching between the HOG maps of images and the HOG maps of templates, to improve the predicted mask images obtained via Mask R-CNN training. This system intends to alert the clean-up organizations in a timely manner with warning notifications using live recorded beach debris data. The suggested network improves the misclassified debris masks for debris objects with different illuminations, shapes, and viewpoints, and for litter with occlusions that have vague visibility.
Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching
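As a minimal sketch of the HOG-descriptor matching step described above (the HOGShape network itself and the dataset are not public, so this is a generic illustration using scikit-image):

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_descriptor(image_gray, shape=(128, 128)):
    """Resize a crop to a common shape and extract its HOG feature vector."""
    patch = resize(image_gray, shape, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def hog_similarity(candidate_gray, template_gray):
    """Cosine similarity between HOG maps of a detected mask crop and a template."""
    a, b = hog_descriptor(candidate_gray), hog_descriptor(template_gray)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_template(candidate_gray, templates):
    """Match a Mask R-CNN crop against several class templates; return the best class."""
    scores = {name: hog_similarity(candidate_gray, t) for name, t in templates.items()}
    return max(scores, key=scores.get)
```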
Procedia PDF Downloads 106
1440 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also presents the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA with a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
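A minimal self-contained sketch of the idea: a small statevector simulation of depth-1 QAOA for Max-Cut, with a truncation-selection EA mutating the (γ, β) parameters instead of a gradient or COBYLA optimizer. The toy graph and EA settings are illustrative assumptions, not the paper's benchmarks.

```python
import numpy as np

# Toy 4-node graph (an illustrative assumption, not the paper's benchmark set).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, p = 4, 1
dim = 2 ** n

# Diagonal of the Max-Cut cost operator: the cut value of every bitstring.
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
cut = sum(bits[:, i] ^ bits[:, j] for i, j in edges).astype(float)

def qaoa_expectation(params):
    """Statevector simulation of depth-p QAOA; returns the expected cut size."""
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)      # |+...+> start
    for layer in range(p):
        gamma, beta = params[2 * layer], params[2 * layer + 1]
        state = np.exp(-1j * gamma * cut) * state              # cost unitary (diagonal)
        c, s = np.cos(beta), -1j * np.sin(beta)                # RX(2*beta) mixer
        for q in range(n):
            psi = state.reshape(-1, 2, 2 ** q)                 # axis 1 = qubit q
            a0 = psi[:, 0, :].copy()
            psi[:, 0, :] = c * a0 + s * psi[:, 1, :]
            psi[:, 1, :] = s * a0 + c * psi[:, 1, :]
            state = psi.reshape(dim)
    return float(np.real(np.sum(cut * np.abs(state) ** 2)))

# Evolutionary optimization of (gamma, beta) in place of a gradient-based loop.
rng = np.random.default_rng(0)
pop = rng.uniform(0, np.pi, size=(20, 2 * p))
for gen in range(50):
    fitness = np.array([qaoa_expectation(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-10:]]                   # truncation selection
    pop = np.vstack([parents, parents + rng.normal(0, 0.1, parents.shape)])
best = max(pop, key=qaoa_expectation)
print("best expected cut:", qaoa_expectation(best))
```

Because each individual's fitness is an independent cost-function evaluation, the fitness loop is exactly the part that can be farmed out across workers, which is the parallelization argument the abstract makes.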
Procedia PDF Downloads 60
1439 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman electricity Transmission Company
Authors: Rahma Saleh Hussein Al Balushi
Abstract:
Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOP). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed as such by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, due to this process, GIS has been and is ready to be integrated with other systems, as well as being the source of data for all OETC users. Some decisions related to issuing No Objection Certificates (NOC) for excavation permits and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based upon GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has also contributed to reducing the missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, it is concluded that through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.
Keywords: asset management ISO55001, standard procedures process, governance, CMMS
Procedia PDF Downloads 125
1438 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations and not during component failure, or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric assessing these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined. A machine learning, reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, to create a metric rating their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will create an improvement in the matching of resilient critical infrastructure system requirements to the correct architectures and implementations that will support improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
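As a minimal sketch of the kind of degradation-spectrum metric described (a random node-failure model rather than the paper's reinforcement-learning attacker; the graph, critical function pairs, and degradation levels are illustrative assumptions):

```python
import random
import networkx as nx

def availability_curve(g, critical_pairs, fracs=(0.0, 0.1, 0.2, 0.3, 0.4, 0.5),
                       trials=200, seed=0):
    """Fraction of critical source->sink functions that survive random node loss,
    evaluated across a spectrum of degradation levels."""
    rnd = random.Random(seed)
    curve = []
    for frac in fracs:
        ok = 0
        for _ in range(trials):
            h = g.copy()
            doomed = rnd.sample(list(h.nodes), int(frac * h.number_of_nodes()))
            h.remove_nodes_from(doomed)
            ok += sum(1 for s, t in critical_pairs
                      if s in h and t in h and nx.has_path(h, s, t))
        curve.append(ok / (trials * len(critical_pairs)))
    return curve  # one functional-availability score per degradation level

# Usage: a 30-node random architecture with three critical function paths.
g = nx.erdos_renyi_graph(30, 0.15, seed=1)
print(availability_curve(g, [(0, 10), (5, 20), (3, 25)]))
```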
Procedia PDF Downloads 108
1437 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding in driving autonomous vehicles, through the use of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and in a timely manner. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network also have a short lifespan, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access/manage the data stored in the fog devices, as a conventional, static way of managing the data does not work in fog networks. The proposed solution discusses a protocol that acts by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog node through the application of reinforcement learning, so that access to the data is determined dynamically based on the requests.
Keywords: IoT, fog networks, data stewardship, dynamic access policy
Procedia PDF Downloads 59
1436 The Saudi Arabia 2030 Strategy: Translation Reception and Translator Readiness
Authors: Budur Alsulami
Abstract:
One of the aims of the recently implemented Saudi Arabia Vision 2030 strategy is to strengthen education, entertainment, and tourism to attract international visitors to the country. To promote and grow the tourism sector, tourism translation can serve the tourism industry by translating various materials that promote the country’s tourism, such as brochures, catalogues, and websites. In order to achieve the goal of enhancing tourism in Saudi Arabia, promotional texts related to tourism and Saudi culture will need to be translated into English and addressed to non-Arabic-speaking potential tourists. This research aims to measure students’ readiness to be professional translators who can introduce and promote Saudi Arabia to non-Arabic-speaking tourists. The study will also evaluate students’ abilities to promote and convey Saudi culture to non-Arabic-speaking tourists by translating tourism texts. Translating tourism materials demands considerable effort and specific translation skills to capture tourists’ interest and encourage visits. Numerous scholars have explored challenges in translating tourism promotional materials, focusing on translation methods, cultural issues, course design, and the knowledge necessary for tourism translation. Based on these insights, experts recommend that translators prioritize audience expectations, cultural appropriateness, and linguistic conventions while revising course syllabi to include practical skills. This research aims to assess students’ readiness to become professional translators aligned with Vision 2030 tourism goals. To accomplish this, in the first stage of the project, twenty students from two Saudi Arabian universities who have completed at least two years of Translation Studies were invited to translate two tourism texts of 300 words each. These tourism texts contain information about famous tourist sights and traditional food in Saudi Arabia, including cultural terms and heritage information. The students then completed a questionnaire about the challenges of the texts and the process of their translation, and then participated in a semi-structured interview. In the second stage of the project, the students’ translations will be evaluated by a qualified National Accreditation Authority for Translators and Interpreters (NAATI) examiner applying the NAATI rubrics. Finally, these translations will be read and assessed by fifteen to twenty native and near-native readers of English, who will evaluate the quality of the translations based on their understanding and perception of these texts. Results analysed to date suggest that a number of student translators faced challenges such as choosing a suitable translation method, omitting some key terms or words during the translation process, and managing their time, all of which may indicate a lack of practice in translating texts of this nature and a lack of awareness regarding the translation strategies most suitable for the genre.
Keywords: Saudi Arabia Vision 2030, translation, tourism, reader reception, culture, heritage, translator training/competencies
Procedia PDF Downloads 7
1435 Pattern of Anisometropia, Management and Outcome of Anisometropic Amblyopia
Authors: Husain Rajib, T. H. Sheikh, D. G. Jewel
Abstract:
Background: Amblyopia is a frequent cause of monocular blindness in children. It is a unilateral or bilateral reduction of best-corrected visual acuity associated with decrements in visual processing, accommodation, motility, spatial perception, or spatial projection. Anisometropia is an important risk factor for amblyopia; it develops when unequal refractive error causes the image to be blurred in the critical developmental period, with central inhibition of the visual signal originating from the affected eye, and is associated with significant visual problems including aniseikonia, strabismus, and reduced stereopsis. Methods: This is a prospective, hospital-based study of newly diagnosed amblyopia seen at the pediatric clinic of Chittagong Eye Infirmary & Training Complex. Fifty anisometropic amblyopia subjects were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 3 and 13 years of age, without previous amblyopia treatment, and whose parents were interested in participating in the study. Patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best correction for a month. When the visual acuity in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was carried out daily for 6-8 hours (full time), together with vision therapy, for 3 months. Results: In this study, about 8% of subjects had anisometropia from myopia, 18% from hyperopia, and 74% from astigmatism. The initial mean visual acuity was 0.74 ± 0.39 logMAR, and after intervention with amblyopia therapy and active vision therapy, the mean visual acuity was 0.34 ± 0.26 logMAR. About 94% of subjects improved by at least two lines. The depth of amblyopia was associated with the type of anisometropic refractive error and the magnitude of anisometropia (p<0.005). The study found 10% mild, 64% moderate, and 26% severe amblyopia. Binocular function also decreased with the magnitude of anisometropia. Conclusion: Anisometropic amblyopia is a most important condition in the pediatric age group because it can lead to visual impairment. Occlusion therapy, with at least one instructed hour of active visual activity practiced out of school hours, was effective in anisometropic amblyopes who were diagnosed at the age of 8 years and older, and the patients complied well with the treatment.
Keywords: refractive error, anisometropia, amblyopia, strabismic amblyopia
Procedia PDF Downloads 275
1434 Recurrence of Pterygium after Surgery and the Effect of Surgical Technique on the Recurrence of Pterygium in Patients with Pterygium
Authors: Luksanaporn Krungkraipetch
Abstract:
A pterygium is an ocular surface lesion that begins in the limbal conjunctiva and progresses onto the cornea. The lesion is more common at the nasal limbus than the temporal, and it has a distinctive wing-like appearance. Indications for surgery, in decreasing order of significance, are growth over the corneal center, decreased vision due to corneal deformation, documented growth, sensations of discomfort, and aesthetic concerns. Recurrent pterygium results in lost time, the expense of therapy, and the potential for vision impairment. The objective of this study is to determine how often pterygium recurs after surgery, what effect the surgical technique has, and what causes recurrence in people with pterygium. Materials and Methods: This is a retrospective observational case-control study involving 164 patient samples. Data analysis uses descriptive statistics, i.e., basic details about pterygium surgery and the risk of recurrent pterygium. For factor analysis, the inferential statistics odds ratio (OR) with 95% confidence interval (CI) and ANOVA are utilized. A p-value of 0.05 was deemed statistically significant. Results: The majority of patients were female (60.4%). Twenty-four of the 164 (14.6%) patients who underwent surgery exhibited recurrent pterygium. The average age was 55.33 years. Postoperative recurrence was reported in 19 cases (79.2%) of bare sclera techniques and five cases (20.8%) of conjunctival autograft techniques. The mean recurrence interval was 10.25 months, with the most common (54.17%) being 12 months. In 91.67% of cases, all follow-ups were completed. The most common recurrence grade was 1 (25%). The most common surgical complication was subconjunctival hemorrhage (33.33%). Comparing the surgeries performed on people with recurrent pterygium showed no significant difference (F = 1.13, p = 0.339). Age significantly affected the recurrence of pterygium (OR = 20.78, 95% CI 6.79-63.56, p < 0.001). Conclusion: This study found a 14.6% rate of pterygium recurrence after pterygium surgery. Across all surgeries and patients, the rate of recurrence was four times higher with the bare sclera method than with conjunctival autograft. The researchers advise selecting a more conventional surgical technique to avoid recurrence.
Keywords: pterygium, recurrence pterygium, pterygium surgery, excision pterygium
Procedia PDF Downloads 88
1433 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments
Authors: Rahul Paul, Peter Mctaggart, Luke Skinner
Abstract:
Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk-mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry
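A minimal sketch of the least-squares catenary fit described, assuming the points of one conductor cluster have already been projected onto the vertical plane through the pole pair; the parameterisation and initial guesses are illustrative, not the paper's exact scheme.

```python
import numpy as np
from scipy.optimize import least_squares

def catenary(params, x):
    """z(x) = a*cosh((x - x0)/a) + z0 : the hanging-conductor profile."""
    a, x0, z0 = params
    return a * np.cosh((x - x0) / a) + z0

def fit_catenary(x, z):
    """Robust least-squares fit of one conductor cluster (x: horizontal, z: height)."""
    guess = [50.0, float(x.mean()), float(z.min()) - 50.0]   # a, x0, z0
    result = least_squares(lambda p: catenary(p, x) - z, guess,
                           loss="soft_l1")    # soft_l1 down-weights LiDAR outliers
    return result.x

# Usage on synthetic noisy points:
x = np.linspace(0.0, 120.0, 300)
z = catenary([60.0, 60.0, -40.0], x) + np.random.normal(0, 0.05, x.size)
print(fit_catenary(x, z))   # recovers approximately [60, 60, -40]
```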
Procedia PDF Downloads 99
1432 Psychodiagnostic Tool Development for Measurement of Social Responsibility in Ukrainian Organizations
Authors: Olena Kovalchuk
Abstract:
How Ukrainian companies understand social responsibility issues is a contentious question. Thus, one practical application of social responsibility research is the development of a diagnostic tool for educational, business, or scientific purposes. The purpose of this research is therefore to develop a tool for the measurement of social responsibility in organizations. Methodology: A 21-item questionnaire, the “Organization Social Responsibility Scale”, was developed. This tool was adapted for the Ukrainian sample and based on the questionnaire “Perceived Role of Ethics and Social Responsibility”, which connects ethical and socially responsible behavior to different aspects of organizational effectiveness. After surveying the respondents, factor analysis was performed by the method of principal components with orthogonal VARIMAX rotation. On the basis of the obtained results, the 21-item questionnaire was developed (Cronbach’s alpha – 0.768; inter-item correlations – 0.34). Participants: 121 managers at all levels of Ukrainian organizations (57 males; 65 females) took part in the research. Results: Factor analysis revealed five ethical dilemmas concerning the compatibility of social responsibility and profit in Ukrainian organizations. Below we attempt to interpret them: — Social responsibility vs. profit. Corporate social responsibility can be a way to reduce operational costs. A firm’s first priority is employees’ morale. Being ethical and socially responsible is the priority of the organization. The most heavily loaded question is "Corporate social responsibility can reduce operational costs". The significant loading of this factor is 0.768. — Profit vs. social responsibility. Efficiency is much more important to a firm than ethics or social responsibility. Making a profit is the most important concern for a firm. The dominant question is "Efficiency is much more important to a firm than whether or not the firm is seen as ethical or socially responsible". The significant loading of this factor is 0.793. — A balanced combination of social responsibility and profit. An organization with a social responsibility policy is more attractive to its stakeholders. The most heavily loaded question is "Social responsibility and profitability can be compatible". The significant loading of this factor is 0.802. — The role of social responsibility in successful organizational performance. Understanding the value of social responsibility and business ethics. The well-being and welfare of society. The dominant question is "Good ethics is often good business". The significant loading of this factor is 0.727. — A global vision of social responsibility. Issues related to global social responsibility and sustainability. Innovative approaches to poverty reduction. Awareness of climate change problems. A global vision for successful business. The dominant question is "The overall effectiveness of a business can be determined to a great extent by the degree to which it is ethical and socially responsible". The significant loading of this factor is 0.842. The theoretical contribution: the perspective of the study is to develop a tool for measuring social responsibility in organizations and to test the questionnaire’s adequacy for the social and cultural context. Practical implications: the research results can be applied to designing a training programme for business school students to form their global vision for successful business as well as the ability to solve ethical dilemmas in managerial practice.
Researchers interested in social responsibility issues are welcome to join the project.
Keywords: corporate social responsibility, Cronbach’s alpha, ethical behaviour, psychodiagnostic tool
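For reference, the internal-consistency statistic reported above (Cronbach's alpha = 0.768) can be computed from a respondents-by-items score matrix as in this minimal sketch; the data here is synthetic, not the study's responses.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, n_items) matrix of questionnaire item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Synthetic example: 121 respondents answering 21 Likert items (1-5 scale).
rng = np.random.default_rng(1)
latent = rng.normal(0, 1, (121, 1))                       # shared attitude factor
items = np.clip(np.round(3 + latent + rng.normal(0, 1, (121, 21))), 1, 5)
print(cronbach_alpha(items))
```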
Procedia PDF Downloads 363
1431 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions (like major disasters, changes in policies, exogenous shocks, etc.) on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
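A minimal mpi4py sketch of the communication pattern described: mutually exclusive agent partitions per rank, with a non-blocking collective overlapped with local computation. The market-clearing rule and all numbers are illustrative assumptions, not the paper's model.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Mutually exclusive agent partition: each rank owns a disjoint block of agents.
n_total = 1_000_000
n_local = n_total // size
rng = np.random.default_rng(rank)
cash = rng.uniform(100.0, 1000.0, n_local)

for step in range(10):
    # Each rank aggregates its agents' demand for a market spanning all ranks.
    local_demand = np.array([0.1 * cash.sum()])
    global_demand = np.empty(1)
    # Non-blocking collective: the reduction travels while local work continues,
    # overlapping communication with computation as the abstract describes.
    req = comm.Iallreduce(local_demand, global_demand, op=MPI.SUM)
    cash *= 1.01                         # purely local agent updates meanwhile
    req.Wait()
    price = global_demand[0] / n_total   # toy market-clearing price

# Run with, e.g.: mpiexec -n 4 python abem_sketch.py
```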
Procedia PDF Downloads 127
1430 Well-Being and Helping Technology for Retired Population in Finland
Authors: R. Pääkkönen, L. Korpinen
Abstract:
This study aimed to evaluate parameters influencing well-being and how to maintain well-being as long as possible after retirement. There is contradictory information on health changes after retirement in Finland. This work is based on interviews, statistics, and literature evaluation in Finland. Most often, balance, multitasking reaction time, and adaptation of vision in dim and dark areas worsen. Slowing is one characteristic that is difficult to measure properly. The most important thing is to determine ways to manage daily activities and symptoms of disease after retirement. Medicine is advancing, but problems often also lie on the economic side. Information about technical aids is important. It is worth planning for the retirement age.
Keywords: retirement, working, aging, wellness
Procedia PDF Downloads 238
1429 Local Homology Modules
Authors: Fatemeh Mohammadi Aghjeh Mashhad
Abstract:
In this paper, we give several ways of computing generalized local homology modules by using Gorenstein flat resolutions. Also, we find some bounds for the vanishing of generalized local homology modules.
Keywords: a-adic completion functor, generalized local homology modules, Gorenstein flat modules
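For context, the i-th local homology module of an R-module M with respect to an ideal 𝔞 of a commutative Noetherian ring R is standardly defined via an inverse limit; this background definition is assumed, since the abstract itself does not state it:

```latex
H_i^{\mathfrak{a}}(M) \;=\; \varprojlim_{t}\, \operatorname{Tor}_i^{R}\!\left( R/\mathfrak{a}^{t},\, M \right), \qquad i \ge 0,
% In degree zero this recovers the a-adic completion functor named in the keywords:
H_0^{\mathfrak{a}}(M) \;\cong\; \varprojlim_{t}\, M/\mathfrak{a}^{t} M \;=\; \Lambda_{\mathfrak{a}}(M).
```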
Procedia PDF Downloads 419
1428 Heat Transfer and Diffusion Modelling
Authors: R. Whalley
Abstract:
The heat transfer modelling for a diffusion process will be considered. Difficulties in computing the time-distance dynamics of the representation will be addressed. Incomplete and irrational Laplace functions will be identified as the computational issue. Alternative approaches to the response evaluation process will be provided. An illustrative application problem will be presented. Graphical results confirming the theoretical procedures employed will be provided.
Keywords: heat, transfer, diffusion, modelling, computation
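As background for the "irrational Laplace function" difficulty mentioned, consider the standard one-dimensional diffusion equation; transforming in time yields a transfer function containing √s, which admits no exact finite rational realisation. This textbook illustration is assumed, not taken from the paper itself:

```latex
% 1-D diffusion and its Laplace-domain form (zero initial condition assumed):
\frac{\partial T}{\partial t} = \alpha \frac{\partial^{2} T}{\partial x^{2}}
\quad\Longrightarrow\quad
s\,\bar{T}(x,s) = \alpha\,\frac{d^{2}\bar{T}}{dx^{2}}(x,s).
% For a semi-infinite medium with a bounded response this gives the
% irrational transfer function
\bar{T}(x,s) = \bar{T}(0,s)\, e^{-x\sqrt{s/\alpha}},
% whose \sqrt{s} term is the source of the computational difficulty.
```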
Procedia PDF Downloads 5531427 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” employees, and establishing robust data management training programs play an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.Keywords: master data management, IoT, AI&ML, cloud computing, data optimization
Procedia PDF Downloads 701426 The Effectiveness of Guest Lecturers with Disabilities in the Classroom
Authors: Afshin Gharib
Abstract:
Often, instructors prefer to bring into class a guest lecturer who can provide an “experiential” perspective on a particular topic. The assumption is that the personal experience brought into the classroom makes the material resonate more with students and that students would prefer material taught from an experiential perspective. The question we asked in the present study was whether a guest lecture from an “experiential” expert with a disability (e.g. a guest suffering from cone-rod dystrophy lecturing on vision, or a dyslexic lecturing on the psychology of reading) would be more effective than the course instructor in capturing students’ attention and conveying information in an Introduction to Psychology class. Students in two sections of Introduction to Psychology (N = 25 in each section) listened to guest lecturers with disabilities lecturing on a topic related to their disability, one in the area of Sensation and Perception (the guest lecturer is vision impaired) and one in the area of Language Development (the guest lecturer is dyslexic). The guest lecturers lectured on the same topic in both sections; however, each lecturer used their own experiences to highlight the topics they covered in one section but not the other (counterbalanced between sections), providing students in one section with experiential testimony. Following each of the 4 lectures (two experiential, two non-experiential), students rated the lecture on several dimensions, including overall quality, level of engagement, and performance. In addition, students in both sections were tested on the same test items from the lecture material to ascertain the degree of learning, and given identical “pop” quizzes two weeks after the exam to measure retention. It was hypothesized that students would find the experiential lectures from lecturers talking about their disabilities more engaging, learn more from them, and retain the material for longer. We found that students in fact preferred the course instructor to the guests, regardless of whether the guests included a discussion of their own disability in their lectures. Performance on the exam questions and the pop quiz items did not differ between “experiential” and “non-experiential” lectures, suggesting that guest lecturers who discuss their own disabilities in lecture are not more effective in conveying material and that students are not more likely to retain material delivered by “experiential” guests. In future research, we hope to explore the reasons for students’ preference for their regular instructor over guest lecturers.Keywords: guest lecturer, student perception, retention, experiential
Procedia PDF Downloads 171425 Evolving Urban Landscapes: Smart Cities and Sustainable Futures
Authors: Mehrzad Soltani, Pegah Rezaei
Abstract:
In response to the escalating challenges posed by resource scarcity, urban congestion, and the dearth of green spaces, contemporary urban areas have undergone a remarkable transformation into smart cities. This evolution necessitates a strategic and forward-thinking approach to urban development, with the primary objective of diminishing and eventually eradicating dependence on non-renewable energy sources. This commitment to sustainable development is geared toward the continual enhancement of our global urban milieu, ensuring a healthier and more prosperous environment for forthcoming generations. This transformative vision has been shaped by an extensive research framework, incorporating in-depth field studies and investigations conducted at both the neighborhood and city levels. Our holistic strategy extends to major cities and states, advocating for development firmly rooted in the principles of sustainable intelligence. At its core, this approach places a paramount emphasis on stringent pollution control measures, while concurrently safeguarding ecological equilibrium and regional cohesion. Central to the realization of this vision are the widespread adoption of environmentally friendly materials and components, the cultivation of plant life and harmonious green spaces, and the seamless integration of intelligent lighting and irrigation systems. These systems, together with solar panels and solar energy utilization, are deployed wherever feasible, effectively meeting the essential lighting and irrigation needs of these dynamic urban ecosystems. Overall, the transformation of urban areas into smart cities necessitates a holistic and innovative approach to urban development. By actively embracing sustainable intelligence and adhering to strict environmental standards, these cities pave the way for a brighter and more sustainable future, one marked by resilient, thriving, and eco-conscious urban communities.Keywords: smart city, green urban, sustainability, urban management
Procedia PDF Downloads 721424 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover’s algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles; a toy illustration of the underlying Grover iteration is sketched below. N log₂ K qubits are put into an equal superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to the set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
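The paper’s Q# circuits are not reproduced here; as a toy illustration of the Grover iteration into which such an oracle plugs, a small statevector simulation in Python (a marked-item search over 2**n states, with a simple phase-flip oracle standing in for the cycle-cost comparator):

# Toy statevector simulation of Grover's algorithm: amplify one marked
# basis state out of 2**n. Illustrative only -- the paper's oracle marks
# Hamiltonian cycles cheaper than the current best cost instead.
import numpy as np

n = 6                                    # qubits; search space of N = 64
N = 2**n
marked = 42                              # index of the "solution" state

state = np.full(N, 1.0 / np.sqrt(N))     # uniform superposition (Hadamards)

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1.0                # oracle: phase-flip the solution
    mean = state.mean()
    state = 2.0 * mean - state           # diffusion: inversion about the mean

print(f"P(solution) after {iterations} iterations: {state[marked]**2:.3f}")

With n = 6, the optimal ⌊(π/4)√N⌋ = 6 iterations raise the success probability above 0.99, illustrating the quadratic speed-up the TSP oracle inherits.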
Procedia PDF Downloads 1901423 A Study on Shear Field Test Method in Timber Shear Modulus Determination Using Stereo Vision System
Authors: Niaz Gharavi, Hexin Zhang
Abstract:
In structural timber design, the shear modulus of a timber beam is an important factor that needs to be determined accurately. According to BS EN 408, the shear modulus can be determined using the torsion test or the shear field test method. Although the torsion test creates a pure shear state in the beam, it does not represent the real-life situation when the beam is in service. On the other hand, the shear field test method creates a loading situation similar to reality. The latter method is based on measuring the shear distortion of the beam in the zone with a constant transverse load in the standardized four-point bending test, as indicated in BS EN 408. The current testing practice code advises using two metallic arms as an instrument to measure the diagonal displacement of the constructed square. Timber is not a homogeneous material but a heterogeneous one, and this characteristic makes timber undergo non-uniform deformation. Therefore, the dimensions and the location of the constructed square in the area with the constant transverse force might alter the shear modulus determination. This study aimed to investigate the impact of the shape, size, and location of the square in the shear field test method. A binocular stereo vision system was developed to capture the 3D displacement of a grid of target points. This approach is an accurate, non-contact method to extract the 3D coordinates of a targeted object using two cameras. Two groups of three glue-laminated beams were produced and tested by means of the four-point bending test according to BS EN 408. Group one was constructed using two materials, laminated bamboo lumber and structurally graded C24 timber, while group two consisted only of structurally graded C24 timber. Analysis of Variance (ANOVA) was performed on the acquired data to evaluate the significance of the size and location of the square in the determination of the shear modulus of the beam; the data reduction from diagonal displacements to a shear modulus is sketched below. The results have shown that the size of the square is an influential factor in shear modulus determination. However, the location of the square in the area with the constant shear force does not affect the shear modulus.Keywords: shear field test method, BS EN 408, timber shear modulus, photogrammetry approach
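A hedged sketch of that data reduction: under pure shear, the strains along the two ±45° diagonals of the square are +γ/2 and −γ/2, so the shear strain is γ = ε₁ − ε₂, and G = τ/γ. The corner coordinates, shear force, section sizes, and the 3/2 rectangular-section stress factor below are illustrative assumptions, not values or formulas taken from the study or from BS EN 408:

# Hedged sketch: shear strain of the measuring square from the two
# diagonal length changes measured by the stereo rig, then G = tau/gamma.
import numpy as np

def diag_length(p, q):
    """Euclidean distance between two 3D target points from the stereo rig."""
    return np.linalg.norm(np.asarray(q) - np.asarray(p))

# Corner coordinates of the square before and after loading (metres);
# values are purely illustrative.
d1_0 = diag_length((0, 0, 0), (0.10, 0.10, 0))    # diagonal 1, unloaded
d2_0 = diag_length((0.10, 0, 0), (0, 0.10, 0))    # diagonal 2, unloaded
d1_1 = diag_length((0, 0, 0), (0.1003, 0.1003, 0))
d2_1 = diag_length((0.10, 0, 0), (0.0003, 0.0997, 0))

# Strains along the +/-45 degree diagonals; gamma = eps1 - eps2.
eps1 = (d1_1 - d1_0) / d1_0
eps2 = (d2_1 - d2_0) / d2_0
gamma = eps1 - eps2

V, b, h = 5_000.0, 0.045, 0.145    # assumed shear force (N) and section (m)
tau = 1.5 * V / (b * h)            # peak shear stress, rectangular section
print(f"gamma = {gamma:.5e}, G ~ {tau / gamma / 1e6:.0f} MPa")

The stereo vision system's contribution is simply that the corner coordinates come from triangulated 3D target points rather than from mechanical arms, so any square size or location can be evaluated from the same grid of targets.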
Procedia PDF Downloads 211