Search results for: scalable multi-qubit teleportation
Advancements in Quantum Teleportation: Exploring Hardware, Software, Mathematical Formalism, and New Versions
Authors: Siddharth Chander, Carmen Constantin
Abstract:
Quantum computing is on the verge of surpassing classical computing, unlocking a realm of possibilities. One ground-breaking application is quantum teleportation, which allows the transmission of quantum information between device users, regardless of distance. One of the first experimental implementations of quantum teleportation was carried out by a Rome-based group following a scheme proposed by Sandu Popescu, validating that classical channels alone could not achieve it [1]. More recently, in 2017, Chinese scientists demonstrated successful teleportation from Tibet to a satellite [2]. This paper comprehensively explores quantum teleportation from various angles, covering hardware, software, and mathematical formalism, including traditional linear algebra and cutting-edge string diagrams. It also introduces novel adaptations of the classical quantum teleportation protocol. The paper integrates these topics to elucidate quantum teleportation's creation, operation, and applications, and thoroughly examines the protocol's architecture, functionality, and practical implications. In summary, this cohesive framework provides a comprehensive understanding of quantum teleportation, addressing its theoretical underpinnings, hardware requirements, and the motivation behind adapting the protocol.
A new perspective on the Spectral Theorem through linear algebra, a new scalable multi-qubit protocol, and the integration of string diagrams for quantum teleportation comprehension are included in this paper.
Keywords: quantum teleportation, quantum mechanics, communication, information processing, mathematical formalism, quantum bits, superposition, entanglement, quantum circuits, quantum algorithms, qiskit, quantum computers, linear algebra, hilbert spaces, quantum measurement, eigenvalues, eigenvectors, mixed states, density matrices, quantum gates, bell measurement, non-maximally entangled states, hybrid quantum teleportation, diagrammatic reasoning, string diagrams, scalable multi-qubit teleportation, quantum networks, quantum internet, secure communication, quantum applications
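The single-qubit protocol underlying this paper can be checked directly in the linear-algebra formalism it discusses. The following sketch is ours, not the authors' code (qiskit appears in the keywords, but plain NumPy keeps it self-contained); it uses the deferred-measurement form, in which Bob's classically controlled X/Z corrections become CX and CZ gates, so the outcome can be verified deterministically:

```python
import numpy as np

# Single-qubit gates and projectors
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1, 0]).astype(complex)   # |0><0|
P1 = np.diag([0, 1]).astype(complex)   # |1><1|

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Random state |psi> to teleport, held on qubit 0 (Alice)
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Qubits 1 (Alice) and 2 (Bob) share the Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)              # 8-dim state, qubit 0 leftmost

# Alice's operations: CNOT(0 -> 1), then H on qubit 0
CNOT01 = kron(P0, I2, I2) + kron(P1, X, I2)
H0 = kron(H, I2, I2)
state = H0 @ (CNOT01 @ state)

# Deferred measurement: the classically controlled corrections
# become CX(1 -> 2) and CZ(0 -> 2)
CX12 = kron(I2, P0, I2) + kron(I2, P1, X)
CZ02 = kron(P0, I2, I2) + kron(P1, I2, Z)
state = CZ02 @ (CX12 @ state)

# Qubit 2 now carries |psi> exactly; qubits 0 and 1 end in |+>|+>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
expected = np.kron(np.kron(plus, plus), psi)
```

After Alice's CNOT and Hadamard, the four measurement branches carry |ψ⟩, X|ψ⟩, Z|ψ⟩, and XZ|ψ⟩; the deferred CX and CZ undo these corrections coherently, which is why the final state factorizes exactly.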
Procedia PDF Downloads 3
Science behind Quantum Teleportation
Authors: Ananya G., B. Varshitha, Shwetha S., Kavitha S. N., Praveen Kumar Gupta
Abstract:
Teleportation is the ability to travel by simply reappearing at some other spot. Though teleportation of this kind has never been achieved, quantum teleportation is possible. Quantum teleportation is the process of transferring the quantum state of one particle onto another, under the condition that no information about the state is learned in the process of transfer. This paper presents a brief overview of quantum teleportation, discussing topics such as entanglement, the EPR paradox, Bell's theorem, qubits, the elements required for a successful teleport, some examples of advanced teleportation systems (including a few ongoing experiments), applications (including quantum cryptography), and the current hurdles for future scientists interested in this field. Finally, major advantages and limitations of the existing teleportation theory are discussed.
Keywords: teleportation, quantum teleportation, quantum entanglement, qubits, EPR paradox, bell states, quantum particles, spooky action at a distance
Procedia PDF Downloads 125
Quantum Computing with Qudits on a Graph
Authors: Aleksey Fedorov
Abstract:
Building a scalable platform for quantum computing remains one of the most challenging tasks in quantum science and technologies. The implementation of the most important quantum operations with qubits (quantum analogues of classical bits), such as the multiqubit Toffoli gate, requires either a polynomial number of operations or a linear number of operations with the use of ancilla qubits. Therefore, reducing the number of operations while preserving scalability is a crucial goal in quantum information processing. One of the most elegant ideas in this direction is to use qudits (multilevel systems) instead of qubits and to rely on the additional levels of qudits instead of ancillas. Although some existing results demonstrate a reduction in the number of operations, they suffer from high complexity and/or a lack of scalability. We show a strong reduction in the number of operations for the realization of the Toffoli gate by using qudits in a scalable multi-qudit processor. This is done on the basis of a general relation, which we derived, between the dimensionality of the qudits and their connection topology.
Keywords: quantum computing, qudits, Toffoli gates, gate decomposition
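The trade of ancillas for extra qudit levels can be illustrated with a purely classical truth-table sketch (our illustration, not the authors' derived construction): make the second control a qutrit, promote its |1⟩ to the hidden level |2⟩ when the first control is 1, flip the target only on |2⟩, then undo the promotion, so three two-body gates realize a Toffoli:

```python
from itertools import product

def controlled_promote(c1, c2):
    # If qubit c1 is 1, swap the qutrit levels |1> <-> |2> on c2
    if c1 == 1:
        c2 = {0: 0, 1: 2, 2: 1}[c2]
    return c2

def flip_on_level2(c2, t):
    # Flip the target qubit only when the qutrit sits in the hidden level |2>
    if c2 == 2:
        t ^= 1
    return t

def qutrit_toffoli(c1, c2, t):
    c2 = controlled_promote(c1, c2)   # gate 1: promote
    t = flip_on_level2(c2, t)         # gate 2: the only gate touching the target
    c2 = controlled_promote(c1, c2)   # gate 3: undo the promotion
    return c1, c2, t

# Restricted to the qubit subspace (c2 in {0, 1}), this reproduces
# the Toffoli truth table with no ancilla
table = {s: qutrit_toffoli(*s) for s in product((0, 1), repeat=3)}
```

The target flips only when the qutrit reaches level |2⟩, which happens exactly when both controls were 1; the third level plays the role the ancilla qubit would otherwise play.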
Procedia PDF Downloads 155
Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation
Authors: Constantin Z. Leshan
Abstract:
Hole Vacuum theory is based on discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and they allow teleportation of matter. All massive bodies emit a flux of holes which curve the spacetime; if we increase the concentration of holes, it leads to length contraction and time dilation, because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between every two points is equal to zero and time stops; outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties only. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle, and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls'; simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in discontinuous spacetime because it contains the impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation).
It is shown that Hole Teleportation does not violate causality or special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from a vessel without permitting another body to occupy its volume.
Keywords: border of the Universe, causality violation, perfect isolation, quantum jumps
Procedia PDF Downloads 432
Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
Preliminary tests of a space-time control circuit, using a two-level system circuit with a 4-5 cm diameter microstrip, have been designed and performed toward realistic teleportation. The work begins by calculating the parameters of a circuit that uses an alternating current (AC) at a specified frequency as the input signal. A method causes electrons to move along the circuit perimeter, starting at the speed of light, which was found satisfactory on the basis of wave-particle duality. It is able to establish a superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the speed of the electrons travels close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to larger-scale circuits, such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation-system environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, increasing the frequency makes the black holes and white holes cancel each other out into a balanced environment. Therefore, life can safely teleport to the destination. There must therefore be the same system at the origin and the destination, which could form a network. Moreover, it can also be applied to space travel as well.
The designed system will be tested on a small scale using a microstrip circuit system that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.
Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
Procedia PDF Downloads 77
Saving Energy through Scalable Architecture
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
In this paper, we focus on the importance of scalable architecture for data centers and buildings in general to help an enterprise achieve environmental sustainability. Scalable architecture helps in many ways: it adapts to business and user requirements and promotes high-availability and disaster recovery solutions that are cost-effective and low-maintenance. Scalable architecture also plays a vital role in the three core areas of sustainability: economic, environmental, and social, also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages. A few examples are that scalable architecture helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, having a scalable architecture helps industries bring in cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps in the reduction of carbon emissions with advanced monitoring and metering capabilities. Scalable architectures help in reducing waste by optimizing designs to utilize materials efficiently, minimize resources, and decrease carbon footprints by using environmentally friendly, low-impact materials. In this paper, we also emphasize the importance of a cultural shift toward the reuse and recycling of natural resources to maintain a balanced ecosystem and a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied is related to data centers.
Keywords: scalable architectures, sustainability, application design, disruptive technology, machine learning and natural language processing, AI, social media platform, cloud computing, advanced networking and storage devices, advanced monitoring and metering infrastructure, climate change
Procedia PDF Downloads 115
Performance Analysis of Scalable Secure Multicasting in Social Networking
Authors: R. Venkatesan, A. Sabari
Abstract:
Developments in the social networking Internet scenario call for a scalable, authenticated, secure group communication model such as multicasting. Multicasting is an inter-network service that offers efficient delivery of data from a source to multiple destinations. Even though multicast has been very successful at providing an efficient, best-effort data delivery service for huge groups, it has proved complex to extend other features to multicast in a scalable way. Separately, the requirement for securing electronic information has become gradually more apparent. As multicast applications are deployed for mainstream purposes, the need to secure multicast communications will become significant.
Keywords: multicasting, scalability, security, social network
Procedia PDF Downloads 295
Temporal Conundrums: Navigating the Gravitational Time of Flow
Authors: Ogaeze Onyedikachukwu Francis
Abstract:
Let’s embark on a microcosmic exploration of the universe to delve into the gravitational time flow and its profound implications for manipulating temporal distances, ushering in the possibilities of time travel and inter-universe leaps with instantaneous teleportation. Envision the universe reduced to a minimalist scenario—two perfectly identical mass spheres intricately entwined in a manner where any alteration affecting one sphere instantaneously impacts the other. However, the complexity deepens: despite their indistinguishable nature, the gravitational pull between these spheres—coined the “gravitational time of flow” in this research—remains constant, ensuring universal stability. Consider now tampering with one of these spheres to test the veracity of their entanglement and sameness. Introducing a third body disrupts the equilibrium, complicating gravitational laws while maintaining their essence. This interference alters the gravitational time flow between the spheres, unraveling their initial entanglement as they diverge into distinct entities owing to the influence of the additional body. Yet, a reaffirmation of their initial entwined state becomes feasible by recalibrating the spatial arrangement and gravitational dynamics among the three bodies and beyond. This contemplation underscores the gravitational law as the linchpin connecting and anchoring the universe’s fabric, cocooning all within its omnipresent grasp. Our focal point—the gravitational time of flow—emerges as a gateway to unraveling the mysteries behind temporal-distance manipulation, offering tantalizing prospects for traversing realms of time and space with unprecedented fluidity and expanding horizons in the realms of scientific inquiry and exploration.
Keywords: time, space, gravity, gravitational time flow, temporal leap, temporal-distance manipulation, multi-verse, teleportation, gravitational time flow device, time travel, distance
Procedia PDF Downloads 13
Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion
Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro
Abstract:
In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and deliver faster without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual effort. Implementing scalable CI/CD for development using cloud services like ECS (Elastic Container Service), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), CloudWatch Logs (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and accommodate dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as it scales based on need.
Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources as they are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment
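As one concrete illustration of how several hundred test cases might be fanned out to parallel container tasks (a sketch under our own assumptions; the function and names are illustrative, not the authors' pipeline code), cases can be round-robin sharded across workers:

```python
def shard(test_cases, num_workers):
    """Round-robin partition of test cases across parallel CI workers."""
    shards = [[] for _ in range(num_workers)]
    for i, case in enumerate(test_cases):
        shards[i % num_workers].append(case)
    return shards

# Hypothetical suite of 500 cases spread over 25 container tasks
cases = [f"test_case_{i:03d}" for i in range(500)]
shards = shard(cases, num_workers=25)   # 25 tasks, 20 cases each
```

Each shard would then be handed to one container task; wall-clock time falls roughly from the whole-suite duration to the duration of the largest shard.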
Procedia PDF Downloads 52
An Approach to Highly Scalable Production Capacity by Adaptation of the Concept 'Everything as a Service'
Authors: Johannes Atug, Stefan Braunreuther, Gunther Reinhart
Abstract:
Volatile markets, as well as increasing global competition in manufacturing, lead to a high demand for flexible and agile production systems. These advanced production systems in turn lead to high capital expenditure along with high investment risks. Developments in production regarding digitalization and cyber-physical systems result in a merger of informational and operational technology. The approach of this paper is to benefit from this merger and present a framework for a production network with scalable production capacity and low capital expenditure by adapting the IT concept 'everything as a service' to the production environment.
Keywords: digital manufacturing system, everything as a service, reconfigurable production, value network
Procedia PDF Downloads 347
Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionalities. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases in the development process to confirm there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient flow when introducing scalability into a traditional test automation environment, so a scripting language was adopted to introduce scalability efficiently. The scalability mechanism is mainly implemented with AWS serverless technology, the Elastic Container Service. The definition of scalability here is the ability to automatically set up computers for test automation and increase or decrease the number of computers running those tests. This means the scalable mechanism lets test cases run in parallel, so test execution time is dramatically decreased. Introducing scalable test automation does more than just reduce test execution time: there is a possibility that some challenging bugs, such as race conditions, are detected, since test cases can be executed at the same time.
If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of challenging issue detection.
Keywords: aws, elastic container service, scalability, serverless, ui automation test
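When per-case durations are known from previous runs, shards can also be balanced so the slowest worker finishes as early as possible. This is a hedged sketch of one standard approach, longest-processing-time-first greedy scheduling, not the framework's actual scheduler; the durations are made up:

```python
import heapq

def balance(durations, num_workers):
    """Greedy LPT scheduling: assign the longest remaining test case to the
    least-loaded worker; returns per-worker (load, worker_id, [test indices])."""
    heap = [(0.0, w, []) for w in range(num_workers)]
    heapq.heapify(heap)
    order = sorted(range(len(durations)), key=lambda i: -durations[i])
    for i in order:
        load, w, tests = heapq.heappop(heap)
        tests.append(i)
        heapq.heappush(heap, (load + durations[i], w, tests))
    return sorted(heap, key=lambda entry: entry[1])

# Illustrative per-case durations in minutes
durations = [30, 30, 25, 20, 20, 15, 10, 10, 5, 5]
plan = balance(durations, num_workers=4)
# Wall-clock time drops from the serial sum to the heaviest worker's load
makespan = max(load for load, _, _ in plan)
```

Under these numbers, 170 serial minutes compress to the heaviest worker's share, illustrating why parallel execution also surfaces timing-dependent bugs: cases really do run concurrently.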
Procedia PDF Downloads 111
Scalable Learning of Tree-Based Models on Sparsely Representable Data
Authors: Fares Hedayatit, Arnauld Joly, Panagiotis Papadimitriou
Abstract:
Many machine learning tasks, such as text annotation, usually require training over very big datasets, e.g., millions of web documents, that can be represented in a sparse input space. State-of-the-art tree-based ensemble algorithms cannot scale to such datasets, since they include operations whose running time is a function of the input space size rather than a function of the number of non-zero input elements. In this paper, we propose an efficient splitting algorithm to leverage input sparsity within decision tree methods. Our algorithm improves training time over sparse datasets by more than two orders of magnitude, and it has been incorporated in the current version of scikit-learn, the most popular open-source Python machine learning library.
Keywords: big data, sparsely representable data, tree-based models, scalable learning
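The core idea, making split finding depend on the stored non-zero entries rather than the full input space, can be sketched as follows (a toy illustration of ours, not the algorithm merged into scikit-learn):

```python
def candidate_thresholds_dense(column):
    """Scan every row of a feature column: O(n) in the input-space size."""
    values = sorted(set(column))
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

def candidate_thresholds_sparse(nnz_values, n_rows):
    """Scan only the stored nonzeros: O(nnz log nnz). All implicit zeros
    contribute a single 0.0 value when the column has any of them."""
    values = set(nnz_values)
    if len(nnz_values) < n_rows:   # implicit zeros are present
        values.add(0.0)
    values = sorted(values)
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

# A column with 6 rows, stored sparsely as (row, value) pairs
n_rows = 6
nnz = [(1, 2.0), (4, -1.0)]
dense = [0.0] * n_rows
for r, v in nnz:
    dense[r] = v

sparse_thresholds = candidate_thresholds_sparse([v for _, v in nnz], n_rows)
```

Both routines produce the same candidate split thresholds, but the sparse one touches only the two stored entries instead of all six rows, which is where the claimed speedup on sparse data comes from.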
Procedia PDF Downloads 270
A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models
Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti
Abstract:
This work is the first step in a rather wide research activity, in collaboration with the Euro-Mediterranean Center on Climate Change, aimed at introducing scalable approaches into Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm
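For readers unfamiliar with 3DVar, the analysis state minimizes J(x) = (x - x_b)^T B^-1 (x - x_b) + (Hx - y)^T R^-1 (Hx - y). A small NumPy sketch (illustrative sizes and operators of our own choosing, not the DD-DA code) solves the normal equations and cross-checks against the equivalent Kalman-gain form:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5                      # state and observation sizes (illustrative)
xb = rng.normal(size=n)          # background state
y = rng.normal(size=m)           # observations
H = rng.normal(size=(m, n))      # linear observation operator
B = np.eye(n) * 0.5              # background-error covariance
R = np.eye(m) * 0.2              # observation-error covariance

# Setting grad J = 0 gives (B^-1 + H^T R^-1 H) x = B^-1 xb + H^T R^-1 y
Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
A = Binv + H.T @ Rinv @ H
xa = np.linalg.solve(A, Binv @ xb + H.T @ Rinv @ y)

# Equivalent gain form: xa = xb + K (y - H xb), K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa_gain = xb + K @ (y - H @ xb)

# Gradient of J at the analysis state should vanish
grad = 2 * (Binv @ (xa - xb) + H.T @ Rinv @ (H @ xa - y))
```

The domain decomposition in the paper splits this minimization over subdomains so the pieces can be mapped onto separate GPU workloads; the sketch shows only the single-domain problem being solved.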
Procedia PDF Downloads 416
Large-Area Film Fabrication for Perovskite Solar Cell via Scalable Thermal-Assisted and Meniscus-Guided Bar Coating
Authors: Gizachew Belay Adugna
Abstract:
Scalable and cost-effective device fabrication techniques are urgently needed to commercialize perovskite solar cells (PSCs) as the next photovoltaic (PV) technology. Herein, large-area films of perovskite and hole-transporting materials (HTMs) were developed via a rapid and scalable thermal-assisted bar-coating process in open air. High-quality, large crystalline grains of MAPbI₃ with homogeneous morphology and thickness were obtained on a large-area (10 cm×10 cm) solution-sheared mp-TiO₂/c-TiO₂/FTO substrate. An encouraging photovoltaic performance of 19.02% was achieved for devices fabricated from the bar-coated perovskite film, compared to that from the small-scale spin-coated film (17.27%), with 2,2′,7,7′-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) as the HTM, whereas a higher power conversion efficiency of 19.89% with improved device stability was achieved by capping with a fluorinated HTM (HYC-2) as an alternative to the traditional spiro-OMeTAD. The fluorinated HTM exhibited better molecular packing in the HTM film and a deeper HOMO level compared to the nonfluorinated counterpart; thus, improved hole mobility and overall charge extraction in the device were demonstrated. Furthermore, excellent film processability and an impressive PCE of 18.52% were achieved with the large-area bar-coated HYC-2 prepared sequentially on the perovskite underlayer in the open atmosphere, compared to the bar-coated spiro-OMeTAD/perovskite (17.51%). This all-solution approach demonstrates the feasibility of high-quality films on large-area substrates for PSCs, a vital step toward industrial-scale PV production.
Keywords: perovskite solar cells, hole transporting materials, up-scaling process, power conversion efficiency
Procedia PDF Downloads 75
A Scalable Model of Fair Socioeconomic Relations Based on Blockchain and Machine Learning Algorithms-1: On Hyperinteraction and Intuition
Authors: Merey M. Sarsengeldin, Alexandr S. Kolokhmatov, Galiya Seidaliyeva, Alexandr Ozerov, Sanim T. Imatayeva
Abstract:
This series of interdisciplinary studies is an attempt to investigate and develop a scalable model of fair socioeconomic relations on the basis of blockchain, using positive psychology techniques and machine learning algorithms for data analytics. In this particular study, we use a hyperinteraction approach and intuition to investigate their influence on the 'wisdom of crowds' via a mobile application created for the purpose of this research. Along with a public blockchain and a private Decentralized Autonomous Organization (DAO), which we built on the Ethereum blockchain, a model of fair financial relations among members of the DAO was developed. We developed a smart contract, the so-called Fair Price Protocol, and used it to implement the model. The data obtained from the mobile application were analyzed by ML algorithms. The model was tested on football matches.
Keywords: blockchain, Naïve Bayes algorithm, hyperinteraction, intuition, wisdom of crowd, decentralized autonomous organization
Procedia PDF Downloads 178
Scalable Cloud-Based LEO Satellite Constellation Simulator
Authors: Karim Sobh, Khaled El-Ayat, Fady Morcos, Amr El-Kadi
Abstract:
Distributed applications deployed on LEO satellites and ground stations require substantial communication between different members of a constellation to overcome the earth coverage barriers imposed by GEOs. Applications running on LEO constellations suffer from the earth line-of-sight blockage effect and need adequate lab testing before launching to space. We propose a scalable cloud-based network simulation framework to simulate the problems created by earth line-of-sight blockage. The framework utilizes cloud IaaS virtual machines to simulate the distributed software of LEO satellites and ground stations. A factorial ANOVA statistical analysis is conducted to measure the simulator's overhead on overall communication performance. The results showed a very low simulator communication overhead. Consequently, the simulation framework is proposed as a candidate for lab-testing LEO constellations with distributed software before space launch.
Keywords: LEO, cloud computing, constellation, satellite, network simulation, netfilter
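Per satellite pair, the earth line-of-sight blockage the framework simulates reduces to a simple geometric test (our own illustrative sketch, not the simulator's code): the link is clear iff the segment joining the two positions stays farther from the Earth's center than the Earth's radius:

```python
import math

R_EARTH = 6371.0   # mean Earth radius in km

def min_distance_to_origin(p1, p2):
    """Minimum distance from the origin (Earth's center) to segment p1-p2."""
    d = [b - a for a, b in zip(p1, p2)]
    dd = sum(c * c for c in d)
    if dd == 0:
        return math.dist(p1, (0, 0, 0))
    # Clamp the projection of -p1 onto d to stay within the segment
    t = max(0.0, min(1.0, -sum(a * c for a, c in zip(p1, d)) / dd))
    closest = [a + t * c for a, c in zip(p1, d)]
    return math.dist(closest, (0, 0, 0))

def has_line_of_sight(p1, p2, radius=R_EARTH):
    """True if the Earth does not block the inter-satellite link."""
    return min_distance_to_origin(p1, p2) > radius

# Illustrative LEO satellites at ~700 km altitude
r = R_EARTH + 700.0
a = (r, 0.0, 0.0)
b = (-r, 0.0, 0.0)                 # diametrically opposite: blocked
theta = math.radians(30.0)
c = (r * math.cos(theta), r * math.sin(theta), 0.0)   # 30 deg apart: clear
```

At 700 km altitude the chord stays above the surface only for fairly small angular separations (roughly 51 degrees here), which is why LEO constellations need the multi-hop routing the simulator exercises.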
Procedia PDF Downloads 391
Influence of Scalable Energy-Related Sensor Parameters on Acoustic Localization Accuracy in Wireless Sensor Swarms
Authors: Joyraj Chakraborty, Geoffrey Ottoy, Jean-Pierre Goemaere, Lieven De Strycker
Abstract:
Sensor swarms can be a cost-effective and more user-friendly alternative for location-based service systems in different applications, such as health care. To increase the lifetime of such swarm networks, the energy consumption should be scaled to the required localization accuracy. In this paper, we investigate parameters for an energy model that couples localization accuracy to energy-related sensor parameters such as signal length, bandwidth, and sample frequency. The goal is to use the model for the localization of undetermined environmental sounds by means of wireless acoustic sensors. We first give an overview of TDOA-based localization together with the primary sources of TDOA error (including reverberation effects and noise). Then we show that, in localization, the signal sample rate can be under the Nyquist frequency, provided that enough frequency components remain present in the undersampled signal. The resulting localization error is comparable with that of similar localization systems.
Keywords: sensor swarms, localization, wireless sensor swarms, scalable energy
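The TDOA step itself can be sketched in a few lines (our illustration, with made-up signal parameters; it does not reproduce the paper's sub-Nyquist analysis): the lag maximizing the cross-correlation of two sensor signals estimates the time difference of arrival:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 8000                      # sample rate in Hz (illustrative)
n = 1024
true_delay = 37                # delay in samples between the two sensors

source = rng.normal(size=n)    # broadband "environmental" sound
mic_a = source
# Sensor B hears the same sound shifted by the propagation delay
mic_b = np.concatenate([np.zeros(true_delay), source[:n - true_delay]])

# TDOA step: the lag maximizing the cross-correlation estimates the delay
corr = np.correlate(mic_b, mic_a, mode="full")
lag = int(np.argmax(corr)) - (n - 1)
tdoa_seconds = lag / fs
```

With the TDOA in hand, a position estimate follows by intersecting the hyperbolae that each sensor pair's delay defines, which is where the reverberation and noise errors the paper surveys enter.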
Procedia PDF Downloads 427
Establishing Multi-Leveled Computability as a Living-System Evolutionary Context
Authors: Ron Cottam, Nils Langloh, Willy Ranson, Roger Vounckx
Abstract:
We start by formally describing the requirements for environmental-reaction survival computation in a natural temporally-demanding medium, and develop this into a more general model of the evolutionary context as a computational machine. The effect of this development is to replace deterministic logic by a modified form which exhibits a continuous range of dimensional fractal diffuseness between the isolation of perfectly ordered localization and the extended communication associated with nonlocality as represented by pure causal chaos. We investigate the appearance of life and consciousness in the derived general model, and propose a representation of Nature within which all localizations have the character of quasi-quantal entities. We compare our conclusions with Heisenberg’s uncertainty principle and nonlocal teleportation, and maintain that computability is the principal influence on evolution in the model we propose.
Keywords: computability, evolution, life, localization, modeling, nonlocality
Procedia PDF Downloads 402
Repeatable Scalable Business Models: Can Innovation Drive an Entrepreneur's Un-Validated Business Model?
Authors: Paul Ojeaga
Abstract:
Can the level of innovation use drive un-validated business models across regions? To what extent does industrial sector attractiveness drive firms' success across regions at the time of start-up? This study examines the role of innovation in start-up success in six regions of the world (namely Sub-Saharan Africa, the Middle East and North Africa, Latin America, South East Asia Pacific, the European Union, and the United States, representing North America) using macroeconomic variables. While there have been studies using firm-level data, results from such studies are not suitable for national policy decisions. The need to drive a regional innovation policy also begs for an answer, therefore providing room for this study. Results using dynamic panel estimation show that innovation counts in the early infancy stage of the new-business life cycle. The results are robust even after controlling for time fixed effects, and the study presents variance-covariance estimation robust standard errors.
Keywords: industrial economics, un-validated business models, scalable models, entrepreneurship
Procedia PDF Downloads 284
Ultrafine Non-Water-Soluble Drug Particles
Authors: Shahnaz Mansouri, David Martin, Xiao Dong Chen, Meng Wai Woo
Abstract:
Ultrafine hydrophobic, non-water-soluble drugs can increase the percentage of drug absorbed relative to the initial dosage. This paper provides a scalable new method for making ultrafine particles of substantially water-insoluble compounds; specifically, submicron particles of ethanol-soluble, water-insoluble pharmaceutical materials are produced by steaming an ethanol droplet to form a suspension, followed by immediate drying. The suspension is formed by adding evaporated water molecules as an anti-solvent to the dissolved solute and, at the early stage of precipitation, drying is continued by evaporating both the solvent and the anti-solvent. The resulting fine particles form powders that disperse rapidly in water. The new method is an extension of the antisolvent vapour precipitation technique, which exposes a droplet to a vapour that acts as an antisolvent with respect to the materials dissolved within the droplet. Ultrafine vitamin D3 and ibuprofen particles in the submicron range were produced. This work will form the basis for using spray dryers as high-throughput, scalable micro-precipitators.
Keywords: single droplet drying, nano size particles, non-water-soluble drugs, precipitators
Procedia PDF Downloads 487
Petra: Simplified, Scalable Verification Using an Object-Oriented, Compositional Process Calculus
Authors: Aran Hakki, Corina Cirstea, Julian Rathke
Abstract:
Formal methods are yet to be utilized in mainstream software development due to issues of scaling and implementation cost. This work develops a scalable, simplified, pragmatic formal software development method with strong correctness properties and guarantees that are easy to prove. The method aims to be easy to learn, use and apply without extensive training and experience in formal methods. Petra is proposed as an object-oriented process calculus with composable data types and sequential/parallel processes. Petra has a simple denotational semantics, which includes a definition of Correct by Construction. The aim is for Petra to be a standard that can be implemented to execute on various mainstream programming platforms such as Java. Work towards an implementation of Petra as a Java EDSL (Embedded Domain Specific Language) is also discussed.
Keywords: compositionality, formal method, software verification, Java, denotational semantics, rewriting systems, rewriting semantics, parallel processing, object-oriented programming, OOP, programming language, correct by construction
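To make the compositional idea concrete, here is a minimal, hypothetical sketch of sequential and parallel process composition in Python. The names and the merge semantics are illustrative assumptions only, not Petra's actual calculus or its Java EDSL.

```python
# Hypothetical illustration of composable processes in the spirit of the
# calculus described above; NOT Petra's actual API (which targets a Java EDSL).
class Process:
    """A process maps a state (dict) to a dict of updated keys."""

    def __init__(self, fn):
        self.fn = fn

    def run(self, state):
        out = dict(state)
        out.update(self.fn(state))
        return out

    def then(self, other):
        # Sequential composition: run self, then feed the result to other.
        return Process(lambda s: other.run(self.run(s)))

    def par(self, other):
        # Parallel composition: both read the same input state; their
        # (assumed disjoint) updates are merged.
        def fn(s):
            updates = dict(self.fn(s))
            updates.update(other.fn(s))
            return updates
        return Process(fn)

inc_x = Process(lambda s: {"x": s["x"] + 1})
dbl_y = Process(lambda s: {"y": s["y"] * 2})

prog = inc_x.par(dbl_y).then(inc_x)
print(prog.run({"x": 0, "y": 3}))  # {'x': 2, 'y': 6}
```

The appeal of such a calculus is that properties proved of the parts (here, which keys each process may write) compose into properties of the whole program.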
Procedia PDF Downloads 148
Simple and Scalable Thermal-Assisted Bar-Coating Process for Perovskite Solar Cell Fabrication in Open Atmosphere
Authors: Gizachew Belay Adugna
Abstract:
Perovskite solar cells (PSCs) have developed rapidly as an emerging photovoltaic technology; however, fast device degradation due to their organic components, mainly the hole-transporting material (HTM), and the lack of a robust, reliable upscaling process for photovoltaic modules have hindered commercialization. Herein, HTM molecules with and without fluorine-substituted cyclopenta[2,1-b;3,4-b’]dithiophene derivatives (HYC-oF, HYC-mF, and HYC-H) were developed for PSC applications. The fluorinated HTM molecules exhibited better hole mobility and overall charge extraction in the devices, mainly due to strong molecular interaction and packing in the film. Thus, the highest power conversion efficiency (PCE) of 19.64%, with improved long-term stability, was achieved for PSCs based on the HYC-oF HTM. Moreover, the fluorinated HYC-oF demonstrated excellent film processability on a larger-area substrate (10 cm×10 cm), prepared sequentially with the perovskite absorber underlayer via a scalable bar-coating process in ambient air, and achieved a higher PCE of 18.49% compared to the conventional spiro-OMeTAD (17.51%). The result demonstrates a facile route to HTMs for stable, efficient PSCs and future industrial-scale PV modules.
Keywords: perovskite solar cells, upscaling film coating, power conversion efficiency, solution processing
Procedia PDF Downloads 79
Scalable Systolic Multiplier over Binary Extension Fields Based on Two-Level Karatsuba Decomposition
Authors: Chiou-Yng Lee, Wen-Yo Lee, Chieh-Tsai Wu, Cheng-Chen Yang
Abstract:
Shifted polynomial basis (SPB) is a variation of polynomial basis representation. SPB has potential for efficient bit-level and digit-level implementations of multiplication over binary extension fields with subquadratic space complexity. For efficient implementation of pairing computation with large finite fields, this paper presents a new SPB multiplication algorithm based on Karatsuba schemes and uses it to derive a novel scalable multiplier architecture. Analytical results show that the proposed multiplier provides a trade-off between space and time complexities. Our proposed multiplier is modular, regular, and suitable for very-large-scale integration (VLSI) implementation. It involves less area complexity than multipliers based on traditional decomposition methods. It is, therefore, more suitable for efficient hardware implementation of pairing-based cryptography and elliptic curve cryptography (ECC) in constraint-driven applications.
Keywords: digit-serial systolic multiplier, elliptic curve cryptography (ECC), Karatsuba algorithm (KA), shifted polynomial basis (SPB), pairing computation
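As a software analogue of the decomposition (a sketch only; the paper's contribution is a hardware systolic architecture, and this sketch is not its SPB algorithm), Karatsuba multiplication over GF(2)[x] replaces four half-size products with three. Since addition in GF(2) is XOR, the recombination is XOR-only, which is where the subquadratic cost comes from. Polynomials are represented as Python ints whose bits are coefficients; with 32-bit operands and an 8-bit base case, the recursion below applies the decomposition at two levels.

```python
def clmul(a, b):
    """Schoolbook carry-less (GF(2)[x]) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, bits):
    # Split each operand into high/low halves and recurse; over GF(2) the
    # middle term is (a0+a1)(b0+b1) - hi - lo, with +/- both being XOR.
    if bits <= 8:
        return clmul(a, b)
    half = bits // 2
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = karatsuba_gf2(a0, b0, half)
    hi = karatsuba_gf2(a1, b1, half)
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1, half) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

# Two-level decomposition on 32-bit operands (32 -> 16 -> 8-bit base case).
a, b = 0xDEADBEEF, 0x0BADF00D
assert karatsuba_gf2(a, b, 32) == clmul(a, b)
```

A field multiplier would follow this with reduction modulo the field polynomial; the trade-off the paper analyzes is between this XOR-count saving and the extra routing the recombination requires in silicon.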
Procedia PDF Downloads 367
Persistent Homology of Convection Cycles in Network Flows
Authors: Minh Quang Le, Dane Taylor
Abstract:
Convection is a well-studied topic in fluid dynamics, yet it is less understood in the context of network flows. Here, we incorporate techniques from topological data analysis (namely, persistent homology) to automate the detection and characterization of convective/cyclic/chiral flows over networks, particularly those that arise for irreversible Markov chains (MCs). As two applications, we study convection cycles arising under the PageRank algorithm, and we investigate chiral edge flows for a stochastic model of a bi-monomer's configuration dynamics. Our experiments highlight how system parameters (e.g., the teleportation rate for PageRank and the transition rates of external and internal state changes for a monomer) can act as homology regularizers of convection, which we summarize with persistence barcodes and homological bifurcation diagrams. Our approach establishes a new connection between the study of convection cycles and homology, the branch of mathematics that formally studies cycles, and has diverse potential applications throughout the sciences and engineering.
Keywords: homology, persistent homology, Markov chains, convection cycles, filtration
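For readers unfamiliar with the teleportation rate mentioned above, a minimal PageRank power-iteration sketch (toy graph; illustrative only, not the paper's experimental setup) shows where the parameter enters: with probability alpha the walker follows an out-link, and with probability 1 - alpha it teleports to a uniformly random node.

```python
def pagerank(adj, alpha=0.85, iters=200):
    """Power iteration on a dict-of-lists adjacency.

    alpha is the probability of following an out-link; 1 - alpha is the
    teleportation probability, the knob the paper varies as a regularizer.
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - alpha) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            # Dangling nodes distribute their mass uniformly (a common choice).
            targets = out if out else nodes
            share = rank[v] / len(targets)
            for w in targets:
                new[w] += alpha * share
        rank = new
    return rank

adj = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
r = pagerank(adj, alpha=0.85)
print({v: round(p, 3) for v, p in r.items()})
```

Because the chain is irreversible, the stationary flow on edges such as a→b→c→a need not cancel pairwise; those residual circulations are the convection cycles the paper detects with persistent homology.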
Procedia PDF Downloads 142
Mining Coupled to Agriculture: Systems Thinking in Scalable Food Production
Authors: Jason West
Abstract:
Low profitability in agricultural production, along with increasing scrutiny of environmental effects, is limiting food production at scale. In contrast, the mining sector offers access to resources including energy, water, transport and chemicals for food production at low marginal cost. Scalable agricultural production can benefit from the nexus of resources (water, energy, transport) offered by mining activity in remote locations. A decision-support bioeconomic model for controlled-environment vertical farms was used, comprising four submodels: crop structure, nutrient requirements, resource-crop integration, and economics. These are aggregated into a macro-level mathematical model. A demonstrable dynamic systems framework is needed to prove that productive outcomes are feasible. We demonstrate a generalized bioeconomic macro model for controlled-environment production systems at mine sites using systems dynamics modeling methodology. Despite the complexity of bioeconomic modelling of resource-agricultural dynamic processes and interactions, the economic potential is greater than general economic models would assume. Scalability of production as an input becomes a key success feature.
Keywords: crop production systems, mathematical model, mining, agriculture, dynamic systems
Procedia PDF Downloads 81
Three-Dimensional Carbon Foam Based Asymmetric Assembly of Metal Oxides Electrodes for High-Performance Solid-State Micro-Supercapacitor
Authors: Sumana Kumar, Abha Misra
Abstract:
Micro-supercapacitors have attracted considerable attention as promising energy storage devices meeting the growing demand for miniaturized and portable devices. Despite impressive power density, superior cyclic lifetime, and high charge-discharge rates, micro-supercapacitors still suffer from low energy density, which limits their practical application. The energy density (E=1/2CV²) can be increased either by increasing the specific capacitance (C) or the voltage range (V). Asymmetric micro-supercapacitors have attracted great attention by using two different electrode materials to expand the voltage window and thus increase the energy density. While versatile fabrication technologies such as inkjet printing, lithography, laser scribing, etc., are used to directly or indirectly pattern the electrode material, these techniques still struggle with scalable production and cost efficiency. Here, we demonstrate the scalable production of a three-dimensional (3D) carbon foam (CF) based asymmetric micro-supercapacitor by a spray printing technique on an array of interdigital electrodes. The solid-state asymmetric micro-supercapacitor, comprising a CF-MnO positive electrode and a CF-Fe₂O₃ negative electrode, achieves a high areal capacitance of 18.4 mF/cm² (2326.8 mF/cm³) at 5 mV/s and a wider potential window of 1.4 V. Consequently, a superior energy density of 5 µWh/cm² is obtained, and high cyclic stability is confirmed, with 86.1% of the initial capacitance retained after 10000 electrochemical cycles. The optimized decoration of pseudocapacitive metal oxides in the 3D carbon network enables high electrochemical utilization of the materials, while the 3D interconnected carbon network provides overall electrical conductivity and structural integrity.
The research provides a simple and scalable spray printing method to fabricate an asymmetric micro-supercapacitor using a custom-made mask that can be integrated on a large scale.
Keywords: asymmetric micro-supercapacitors, high energy-density, hybrid materials, three-dimensional carbon-foam
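The reported figures are internally consistent: plugging the quoted areal capacitance and voltage window into E = 1/2 C V² reproduces the stated energy density.

```python
# Consistency check of the abstract's numbers using E = 1/2 * C * V^2:
# areal capacitance 18.4 mF/cm^2 over a 1.4 V window.
C = 18.4e-3                   # F/cm^2
V = 1.4                       # V
E_joule = 0.5 * C * V**2      # J/cm^2
E_uwh = E_joule / 3600 * 1e6  # J -> Wh, then Wh -> uWh
print(round(E_uwh, 2))        # 5.01, matching the quoted ~5 uWh/cm^2
```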
Procedia PDF Downloads 119
An Agile, Intelligent and Scalable Framework for Global Software Development
Authors: Raja Asad Zaheer, Aisha Tanveer, Hafza Mehreen Fatima
Abstract:
Global Software Development (GSD) is becoming a common norm in the software industry, despite the fact that global distribution of teams presents special issues for their effective communication and coordination. Trends are now changing, and project management for distributed teams is no longer in limbo. GSD can be effectively established using agile methods, and project managers can use different agile techniques/tools to solve the problems associated with distributed teams. Agile methodologies like Scrum and XP have been successfully used with distributed teams. We employed an exploratory research method to analyze recent studies on the challenges of GSD and their proposed solutions. In our study, we examined six commonly faced challenges in depth: communication and coordination, temporal differences, cultural differences, knowledge sharing/group awareness, speed, and communication tools. We establish that none of these challenges can be neglected for distributed teams of any kind; they are interlinked and, as an aggregated whole, can cause the failure of projects. In this paper we focus on creating a scalable framework for detecting and overcoming these commonly faced challenges. In the proposed solution, our objective is to suggest agile techniques/tools relevant to a particular problem faced by organizations managing distributed teams. We focus mainly on Scrum and XP techniques/tools because they are widely accepted and used in the industry. Our solution identifies the problem and, drawing on a globally shared knowledgebase, establishes a cause-and-effect relationship using a fishbone diagram based on the inputs provided for commonly faced issues. Based on the identified cause, our framework suggests a suitable technique/tool to help solve the problem.
Hence, the proposed scalable, extensible, self-learning, intelligent framework will help implement and assess GSD and get the maximum out of it. The globally shared knowledgebase will help new organizations easily adopt best practices set forth by practicing organizations.
Keywords: agile project management, agile tools/techniques, distributed teams, global software development
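The core lookup step of such a framework can be sketched as a challenge-to-practice mapping. The mapping below is a hypothetical illustration, assembled from generally cited Scrum/XP practices; it is not the paper's actual knowledgebase.

```python
# Hypothetical sketch of the framework's suggestion step: map an identified
# GSD challenge (the fishbone's root cause) to a candidate Scrum/XP practice.
SUGGESTIONS = {
    "communication and coordination": "daily scrum over video plus a shared backlog board",
    "temporal differences": "overlapping core hours and asynchronous sprint reviews",
    "cultural differences": "rotating retrospective facilitators across sites",
    "knowledge sharing": "cross-site pair programming and a shared project wiki",
    "speed": "continuous integration with automated test gates",
    "communication tools": "one agreed toolchain (chat, board, CI) per project",
}

def suggest(challenge):
    # Unknown causes fall back to a generic escalation step.
    return SUGGESTIONS.get(challenge.lower(),
                           "record the cause on the fishbone diagram and escalate")

print(suggest("Temporal differences"))
```

A self-learning variant would update this table from outcome feedback, which is the role the globally shared knowledgebase plays in the proposed framework.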
Procedia PDF Downloads 325
Homogeneous Anti-Corrosion Coating of Spontaneously Dissolved Defect-Free Graphene
Authors: M. K. Bin Subhan, P. Cullen, C. Howard
Abstract:
A recent study by the World Corrosion Organization estimated that corrosion-related damage costs $2.5tr every year. As such, a low-cost, easily scalable, economically viable solution to the corrosion problem is required. Graphene is an ideal anti-corrosion barrier material due to its excellent barrier properties and chemical stability, which make it impermeable to all molecules. However, attempts to employ graphene as a barrier layer have been hampered by the fact that defect sites in graphene accelerate corrosion: graphene's inertness promotes galvanic corrosion at the expense of the metal. The recent discovery of the spontaneous dissolution of charged graphite intercalation compounds in aprotic solvents enables defect-free graphene platelets to be employed for anti-corrosion applications. These ‘inks’ of defect-free charged graphene platelets in solution can be coated onto metallic surfaces via electroplating to form a homogeneous barrier layer. In this paper, initial data showing homogeneous coatings of graphene barrier layers on steel coupons via electroplating will be presented. This easily scalable technique also provides a controllable method for applying different barrier thicknesses, from ultra-thin layers to thick opaque coatings, making it useful for a wide range of applications.
Keywords: anti-corrosion, defect-free, electroplating, graphene
Procedia PDF Downloads 134
Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement such a framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools such as JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a shared file storage system), and security groups offers several key benefits: optimized resource utilization, effective benchmarking, increased reliability, and cost savings from resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which consolidates them into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
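The master's division of labor described above can be sketched as two small functions: one splitting a total request budget across slave nodes, and one consolidating the per-node results into a single report. This is an illustrative sketch of the orchestration pattern, not JMeter's own implementation.

```python
# Illustrative sketch of the master/slave orchestration pattern: the master
# splits a request budget across slaves and consolidates their results.
def split_load(total_requests, n_slaves):
    """Divide total_requests as evenly as possible across n_slaves."""
    base, extra = divmod(total_requests, n_slaves)
    return [base + (1 if i < extra else 0) for i in range(n_slaves)]

def consolidate(results):
    """Each slave reports (requests_sent, errors, total_latency_ms)."""
    sent = sum(r[0] for r in results)
    errors = sum(r[1] for r in results)
    avg_latency = sum(r[2] for r in results) / sent
    return {"sent": sent, "errors": errors, "avg_latency_ms": avg_latency}

shares = split_load(5_000_000, 8)      # e.g. 8 slave EC2 instances
assert sum(shares) == 5_000_000        # no requests lost in the split

# Toy results: each slave averages 120 ms per request with zero errors.
report = consolidate([(n, 0, n * 120.0) for n in split_load(1000, 4)])
print(report)
```

In the real framework, the split is carried by the JMeter test plan the master ships to each slave over the network, and the consolidation step is what produces the throughput, latency, and error metrics in the final report.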
Procedia PDF Downloads 34
Flame Spray Pyrolysis as a High-Throughput Method to Generate Gadolinium Doped Titania Nanoparticles for Augmented Radiotherapy
Authors: Malgorzata J. Rybak-Smith, Benedicte Thiebaut, Simon Johnson, Peter Bishop, Helen E. Townley
Abstract:
Gadolinium doped titania (TiO2:Gd) nanoparticles (NPs) can be activated by X-ray radiation to generate Reactive Oxygen Species (ROS), which can be effective in killing cancer cells. As such, treatment with these NPs can be used to enhance the efficacy of conventional radiotherapy. Incorporation of the NPs into tumour tissue will permit the extension of radiotherapy to currently untreatable tumours deep within the body, and also reduce damage to neighbouring healthy cells. In an attempt to find a fast and scalable method for the synthesis of the TiO2:Gd NPs, the use of Flame Spray Pyrolysis (FSP) was investigated. A series of TiO2 NPs were generated with 1, 2, 5 and 7 mol% gadolinium dopant. Post-synthesis, the TiO2:Gd NPs were silica-coated to improve their biocompatibility. Physico-chemical characterisation was used to determine the size and stability of the NPs in aqueous suspension. All analysed TiO2:Gd NPs were shown to have relatively high photocatalytic activity. Furthermore, the FSP-synthesized silica-coated TiO2:Gd NPs generated enhanced ROS in chemico. Studies on rhabdomyosarcoma (RMS) cell lines (RD & RH30) demonstrated that in the absence of irradiation all TiO2:Gd NPs were inert. However, application of TiO2:Gd NPs to RMS cells, followed by irradiation, showed a significant decrease in cell proliferation. Consequently, our studies showed that X-ray-activatable TiO2:Gd NPs can be prepared by a high-throughput, scalable technique to provide a novel and affordable anticancer therapy.
Keywords: cancer, gadolinium, ROS, titania nanoparticles, X-ray
Procedia PDF Downloads 434