Search results for: edge computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1779

339 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact

Authors: Abdullah Almowanes

Abstract:

The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological-innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival work aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia had to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.

Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia

Procedia PDF Downloads 245
338 An Investigation of the Structural and Microstructural Properties of Zn1-xCoxO Thin Films Applied as Gas Sensors

Authors: Ariadne C. Catto, Luis F. da Silva, Khalifa Aguir, Valmor Roberto Mastelaro

Abstract:

Pure or doped zinc oxide (ZnO) is one of the most promising metal oxide semiconductors for gas sensing applications due to its well-known high surface-to-volume ratio and surface conductivity. ZnO has been shown to be an excellent gas-sensing material for different gases such as CO, O2, NO2, and ethanol. In this context, pure and doped ZnO exhibiting different morphologies and a high surface/volume ratio can be a good option given the limitations of current commercial sensors. Different studies have shown that doping ZnO with metals (e.g., Co, Fe, Mn) enhances its gas sensing properties. Motivated by these considerations, this study investigated the role of Co ions in the structural, morphological, and gas sensing properties of nanostructured ZnO samples. ZnO and Zn1-xCoxO (0 < x < 5 wt%) thin films were obtained via the polymeric precursor method. The sensitivity, selectivity, response time, and long-term stability were investigated when the samples were exposed to a range of ozone (O3) concentrations at different working temperatures. The gas sensing properties were probed by electrical resistance measurements. The long- and short-range order structures around Zn and Co atoms were investigated by X-ray diffraction and X-ray absorption spectroscopy. X-ray photoelectron spectroscopy (XPS) measurements were performed to identify the elements present on the film surface and to determine the sample composition. Microstructural characteristics of the films were analyzed by field-emission scanning electron microscopy (FE-SEM). The Zn1-xCoxO XRD patterns were indexed to the wurtzite ZnO structure, and no second phase was observed even at the highest cobalt content. Co K-edge XANES spectra revealed the predominance of Co2+ ions. XPS characterization revealed that the Co-doped ZnO samples possessed a higher percentage of oxygen vacancies than the undoped ZnO samples, which also contributed to their excellent gas sensing performance. Gas sensor measurements showed that the ZnO and Co-doped ZnO samples exhibit good reproducibility and a fast response time (around 10 s). Furthermore, the Co addition helped reduce the working temperature for ozone detection and improve the selectivity.

Keywords: cobalt-doped ZnO, nanostructured, ozone gas sensor, polymeric precursor method

Procedia PDF Downloads 245
337 Field-Free Orbital Hall Current-Induced Deterministic Switching in the Mo/Co₇₁Gd₂₉/Ru Structure

Authors: Zelalem Abebe Bekele, Kun Lei, Xiukai Lan, Xiangyu Liu, Hui Wen, Kaiyou Wang

Abstract:

Spin-polarized currents offer an efficient means of manipulating the magnetization of a ferromagnetic layer for big data and neuromorphic computing. Research has shown that the orbital Hall effect (OHE) can produce orbital currents that potentially surpass the counterpart spin currents induced by the spin Hall effect. However, it is essential to note that orbital currents alone cannot exert torque directly on a ferromagnetic layer, necessitating a conversion process from orbital to spin currents. Here, we present an efficient method for achieving perpendicularly magnetized spin-orbit torque (SOT) switching by harnessing the localized orbital Hall current generated in the Mo layer of a Mo/CoGd device. Our investigation reveals a remarkable enhancement of the interface-induced planar Hall effect (PHE) in the Mo/CoGd bilayer, resulting in the generation of a z-polarized planar current that manipulates the magnetization of the CoGd layer without the need for an in-plane magnetic field. Furthermore, the Mo layer induces an out-of-plane orbital current, boosting the in-plane and out-of-plane spin polarization through the conversion of the orbital current into spin current within the dual-property CoGd layer. At the optimal Mo layer thickness, a low critical magnetization switching current density of 2.51×10⁶ A cm⁻² is achieved. This breakthrough opens avenues for all-electrical, energy-efficient control of magnetization switching through orbital currents, advancing the field of spin-orbitronics.

Keywords: spin-orbit torque, orbital Hall effect, spin Hall current, orbital Hall current, interface-generated planar Hall current, anisotropic magnetoresistance

Procedia PDF Downloads 53
336 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture

Authors: Sajjad Akbar, Rabia Bashir

Abstract:

With the growth in the number of users, Internet usage has evolved. Due to its key design principle, there has been an incredible expansion in its size. This tremendous growth of the Internet has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., content distribution, mobility, ubiquity, security, and trust. Users are more interested in contents than in their communicating peer nodes. The current Internet architecture is a host-centric networking approach, which is not well suited to these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient as it depends on physical location; for this reason, Information Centric Networking (ICN) is considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented rather than a sender-oriented approach and introduces a naming-based information system at the network layer. Although ICN is considered a future Internet architecture, there is much criticism of it, mainly concerning how ICN will manage the most relevant content. Web Content Mining (WCM) approaches can help with the appropriate data management of ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas an agent-based approach from Web Content Mining is selected to find the most appropriate data.

Keywords: agent based web content mining, content centric networking, information centric networking

Procedia PDF Downloads 473
335 Embedded System of Signal Processing on FPGA: Underwater Application Architecture

Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad

Abstract:

The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (the fast Fourier transform (FFT), inverse fast Fourier transform (iFFT), and Bessel functions) is widely applied to obtain information with high precision and accuracy. Signal processing is most commonly implemented on general-purpose processors, which are not efficient for this workload. Our interest was therefore focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to minimize the computational complexity of the single-processor architecture, accelerate the processing in hardware, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on the Altera DE1-SoC board and compared it to the Odroid XU4. By comparison, the computing latencies of the Odroid XU4 and the FPGA are 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system has shown that acoustic spectra are computed up to 20 times faster than in the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms achieves an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged shells. We have thus achieved good experimental results in terms of real-time performance and energy efficiency.
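The core of the processing chain described above, transforming a backscattered echo into an acoustic spectrum via the FFT, can be sketched on a host processor in a few lines. This is an illustrative sketch with assumed signal parameters, not the authors' FPGA implementation:

```python
import numpy as np

# Simulated backscattered echo: a decaying sinusoid standing in for the
# response of a submerged shell. All parameters below are assumptions
# chosen for illustration only.
fs = 1e6                          # sampling rate, Hz (assumed)
t = np.arange(0, 1e-3, 1 / fs)    # 1 ms record (1000 samples)
echo = np.exp(-5e3 * t) * np.sin(2 * np.pi * 50e3 * t)

# FFT stage: normalized magnitude spectrum of the echo.
spectrum = np.abs(np.fft.rfft(echo)) / echo.size
freqs = np.fft.rfftfreq(echo.size, 1 / fs)

peak = freqs[np.argmax(spectrum)]  # peak sits near the 50 kHz resonance
```

On the FPGA, the same FFT butterfly computations are mapped to hardware logic, which is where the reported 20x latency reduction over the Odroid XU4 comes from.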

Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing

Procedia PDF Downloads 75
334 Bioethanol Production from Wild Sorghum (Sorghum arundinacieum) and Spear Grass (Heteropogon contortus)

Authors: Adeyinka Adesanya, Isaac Bamgboye

Abstract:

There is a growing need to develop processes to produce renewable fuels and chemicals due to the economic, political, and environmental concerns associated with fossil fuels. Lignocellulosic biomass is an excellent renewable feedstock because it is both abundant and inexpensive. This project aims at producing bioethanol from lignocellulosic plants (Sorghum arundinaceum and Heteropogon contortus) by biochemical means, computing the energy audit of the process, and determining the fuel properties of the produced ethanol. Acid pretreatment (0.5% H2SO4 solution) and enzymatic hydrolysis (using malted barley as the enzyme source) were employed. The ethanol yield of wild sorghum was found to be 20%, while that of spear grass was 15%. The fuel properties of the bioethanol from wild sorghum were 1.227 centipoise for viscosity, 1.10 g/cm3 for density, 0.90 for specific gravity, and 78 °C for boiling point; the cloud point was found to be below -30 °C. Those of spear grass were 1.206 centipoise for viscosity, 0.93 g/cm3 for density, 1.08 for specific gravity, and 78 °C for boiling point; the cloud point was also found to be below -30 °C. The energy audit shows that about 64% of the total energy was used during pretreatment, while product recovery, which was done manually, demanded about 31% of the total energy. Enzymatic hydrolysis, fermentation, and distillation accounted for 1.95%, 1.49%, and 1.04% of the total energy input, respectively. The alcoholometric strength of the bioethanol from wild sorghum was found to be 47%, and that of the bioethanol from spear grass was 72%. The overall energy efficiency of bioethanol production for both grasses was 3.85%.

Keywords: lignocellulosic biomass, wild sorghum, spear grass, biochemical conversion

Procedia PDF Downloads 234
333 City Buses and Sustainable Urban Mobility in Kano Metropolis 1967-2015: An Historical Perspective

Authors: Yusuf Umar Madugu

Abstract:

Since its creation in 1967, Kano has undergone tremendous political, social, and economic transformations. Public urban transportation has played a vital role in sustaining the economic growth of the Kano metropolis, especially with the existence of modern buses and a regular network of roads serving all the main centers of trade. This study, therefore, centers on the role of intra-city buses in molding the economy of Kano. Its main focus is post-colonial Kano (i.e., 1967-2015), a period that witnessed rapid expansion of commercial activities and ever-increasing urbanization, along with the population explosion that accompanied it. Commuters patronized urban transport, a situation that made the business lucrative. Moreover, traders who came from within and outside Kano relied heavily on commercial vehicles to transport their merchandise to their various destinations. The commercial road transport system therefore became well organized in Kano, with a significant number of people earning their livelihood from it. It also serves as a source of revenue to governments at different levels. The study of transport and development as an academic discipline is, however, inter-disciplinary in nature. This study therefore employs the methodologies of other disciplines such as Geography, History, Urban and Regional Planning, Engineering, Computer Science, and Economics to provide a comprehensive picture of the issues under investigation. The source materials for this study included extensive use of written literature and oral information. In view of the crucial importance of intra-city commercial transport services, this study demonstrates their role in the overall economic transformation of the study area and contributes to opening up new ground in the history of the commercial transport system. 
At present, the Kano metropolitan area is located between latitudes 11°50′ and 12°07′ and longitudes 8°22′ and 8°47′, within the semi-arid Sudan Savannah zone of West Africa, about 840 kilometers from the edge of the Sahara Desert. The metropolitan area has expanded over the years and has become the third largest conurbation in Nigeria, with a population of about 4 million. It is made up of eight local government areas: Kano Municipal, Gwale, Dala, Tarauni, Nasarawa, Fage, Ungogo, and Kumbotso.

Keywords: assessment, buses, city, mobility, sustainable

Procedia PDF Downloads 223
332 Digital Manufacturing: Evolution and a Process Oriented Approach to Align with Business Strategy

Authors: Abhimanyu Pati, Prabir K. Bandyopadhyay

Abstract:

The paper highlights the significance of a Digital Manufacturing (DM) strategy in supporting and achieving the business strategy and goals of a manufacturing organization. To this end, DM initiatives are given a process perspective, without undermining their technological significance, with a view to linking their benefits directly to the fulfilment of customer needs and expectations in a responsive and cost-effective manner. A digital process model is proposed to categorize digitally enabled organizational processes into synergistic groups that adopt and use digital tools with similar characteristics and functionalities. This opens future opportunities for researchers and developers to create a unified technology environment for the integration and orchestration of processes. Secondly, an effort is made to apply the “what” and “how” features of the Quality Function Deployment (QFD) framework to establish the relationship between customers’ needs, both for external and internal customers, and the features of the various digital processes that support the achievement of these customer expectations. The paper concludes that in the present highly competitive environment, business organizations cannot survive unless they understand the significance of digital strategy and integrate it with their business strategy through a clearly defined implementation roadmap. A process-oriented approach to DM strategy will help business executives and leaders appreciate its value propositions and its direct link to an organization’s competitiveness.

Keywords: knowledge management, cloud computing, knowledge management approaches, cloud-based knowledge management

Procedia PDF Downloads 307
331 Computerized Analysis of Phonological Structure of 10,400 Brazilian Sign Language Signs

Authors: Wanessa G. Oliveira, Fernando C. Capovilla

Abstract:

Capovilla and Raphael’s Libras Dictionary documents a corpus of 4,200 Brazilian Sign Language (Libras) signs. Duduchi and Capovilla’s software SignTracking permits users to retrieve signs even when the corresponding gloss is unknown and to discover the meaning of any of the 4,200 signs simply by clicking on graphic menus of the sign characteristics (phonemes). Duduchi and Capovilla discovered that the ease with which any given sign can be retrieved is an inverse function of the average popularity of its component phonemes. Thus, signs composed of rare (distinct) phonemes are easier to retrieve than those composed of common phonemes. SignTracking offers a means of computing the average popularity of the phonemes that make up each of the 4,200 signs, providing a precise measure of the ease with which signs can be retrieved and sign meanings discovered. Duduchi and Capovilla’s logarithmic model proved valid: the ease with which any given sign can be retrieved is an inverse function of the arithmetic mean of the logarithm of the popularity of each component phoneme. Capovilla, Raphael, and Mauricio’s New Libras Dictionary documents a corpus of 10,400 Libras signs. The present analysis revealed the phonological structure of Libras by mapping the incidence of 501 sign phonemes resulting from the layered distribution of five parameters: 163 handshape phonemes (CherEmes-ManusIculi); 34 finger shape phonemes (DactilEmes-DigitumIculi); 55 hand placement phonemes (ArtrotoToposEmes-ArticulatiLocusIculi); 173 movement dimension phonemes (CinesEmes-MotusIculi) pertaining to direction, frequency, and type; and 76 facial expression phonemes (MascarEmes-PersonalIculi).
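The logarithmic retrievability model above can be illustrated with a short sketch; the phoneme popularity values are hypothetical, not data from the dictionary corpus:

```python
import math

def retrievability(phoneme_popularities):
    """Score a sign by the negated arithmetic mean of the log of its
    phonemes' popularity: higher scores mean easier retrieval."""
    return -sum(math.log(p) for p in phoneme_popularities) / len(phoneme_popularities)

# Hypothetical signs: one built from rare phonemes, one from common ones
# (popularity = assumed fraction of corpus signs using the phoneme).
rare_sign = retrievability([0.01, 0.02, 0.05])
common_sign = retrievability([0.40, 0.55, 0.30])
# rare_sign scores higher: rare (distinct) phonemes are easier to retrieve
```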

Keywords: Brazilian sign language, lexical retrieval, libras sign, sign phonology

Procedia PDF Downloads 344
330 Micro-Milling Process Development of Advanced Materials

Authors: M. A. Hafiz, P. T. Matevenga

Abstract:

Micro-level machining of metals is a developing field which has been shown to be a promising approach to producing features on parts in the range of a few to a few hundred microns with acceptable machining quality. It is known that the mechanics (i.e., the material removal mechanism) of micro-machining and conventional machining differ significantly due to the scaling effects associated with tool geometry, tool material, and workpiece material characteristics. Shape memory alloys (SMAs) are metal alloys that display two exceptional properties: pseudoelasticity and the shape memory effect (SME). Nickel-titanium (NiTi) alloys are one such unique family of metal alloys. NiTi alloys are known to be difficult-to-cut materials, specifically by conventional machining techniques, owing to their distinctive properties. Their high ductility, high degree of strain hardening, and unusual stress-strain behaviour are the main properties responsible for their poor machinability in terms of tool wear and workpiece quality. The motivation of this research was to address the challenges of micro-machining combined with those of machining NiTi alloys, both of which can affect the desired performance of the machining outputs. To explore the significance of a range of cutting conditions on surface roughness and tool wear, machining tests were conducted on NiTi. The influence of different cutting conditions and cutting tools on surface and sub-surface deformation in the workpiece was investigated. A design-of-experiments strategy (L9 array) was applied to determine the key process variables, and the dominant cutting parameters were determined by analysis of variance. The findings showed that feed rate was the dominant factor for surface roughness, whereas depth of cut was the dominant factor for tool wear. 
The lowest surface roughness was achieved at a feed rate equal to the cutting edge radius, whereas the lowest flank wear was observed at the lowest depth of cut. Repeated machining trials have yet to be carried out in order to observe tool life, sub-surface deformation, and strain-induced hardening, which are also expected to be among the critical issues in the micro-machining of NiTi. Machining performance using different cutting fluids and strategies has also yet to be studied.

Keywords: nickel titanium, micro-machining, surface roughness, machinability

Procedia PDF Downloads 339
329 Designing Electrically Pumped Photonic Crystal Surface Emitting Lasers Based on a Honeycomb Nanowire Pattern

Authors: Balthazar Temu, Zhao Yan, Bogdan-Petrin Ratiu, Sang Soon Oh, Qiang Li

Abstract:

Photonic crystal surface emitting lasers (PCSELs) have recently become an area of active research because of the advantages these lasers have over edge-emitting lasers and vertical cavity surface emitting lasers (VCSELs). PCSELs can emit laser beams with high power (from the order of a few milliwatts to watts or even tens of watts) that scales with the emission area, while maintaining single-mode operation even at large emission areas. Most PCSELs reported in the literature are air-hole based, with only a few demonstrations of nanowire-based PCSELs. We previously reported an optically pumped, nanowire-based PCSEL operating in the O band using a honeycomb lattice. Nanowire-based PCSELs have the advantage of being able to grow on a silicon platform without threading dislocations. It is desirable to extend their operating wavelength to the C band to open up more applications, including eye-safe sensing, lidar, and long-haul optical communications. In this work, we first analyze how the lattice constant, nanowire diameter, nanowire height, and side length of the hexagon in the honeycomb pattern can be changed to increase the operating wavelength of honeycomb-based PCSELs to the C band. Then, as a step towards making our device electrically pumped, we present finite-difference time-domain (FDTD) simulation results with metals on the nanowires. Results for different metals on the nanowires are presented in order to choose the metal that gives the device the best quality factor. The metals under consideration are those that form good ohmic contacts with p-type doped InGaAs, with low contact resistivity and a decent sticking coefficient to the semiconductor; such metals include tungsten, titanium, palladium, and platinum. Using the chosen metal, we demonstrate the impact of the metal thickness, for a given nanowire height, on the quality factor of the device. 
We also investigate how the height of the nanowire affects the quality factor for a fixed metal thickness. Finally, the main steps in fabricating a practical device are discussed.

Keywords: designing nanowire PCSEL, designing PCSEL on silicon substrates, low threshold nanowire laser, simulation of photonic crystal lasers

Procedia PDF Downloads 12
328 Cryptographic Resource Allocation Algorithm Based on Deep Reinforcement Learning

Authors: Xu Jie

Abstract:

As a key network security method, cryptographic services must cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workload. This complexity and dynamism also make it difficult for traditional static security policies to cope with ever-changing cyber threats and environments, and traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective job flow scheduling problem and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that the algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
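As a toy illustration of the multi-objective optimization described (not the authors' algorithm), one can fold energy, migration cost, and security fitness into a single scalar reward and let a simple epsilon-greedy learner pick a cryptographic resource per job. All weights and cost figures below are assumptions:

```python
import random

def reward(energy, migration_cost, fitness, w=(0.4, 0.3, 0.3)):
    # Lower energy and migration cost are better; higher security fitness is better.
    return -w[0] * energy - w[1] * migration_cost + w[2] * fitness

def train(jobs, resources, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Stateless epsilon-greedy value learning over (job, resource) pairs,
    a stand-in for the multi-agent scheduler described in the abstract."""
    rng = random.Random(seed)
    names = list(resources)
    Q = {(j, r): 0.0 for j in jobs for r in names}
    for _ in range(episodes):
        for j in jobs:
            r = rng.choice(names) if rng.random() < eps else max(
                names, key=lambda n: Q[(j, n)])
            Q[(j, r)] += alpha * (reward(*resources[r]) - Q[(j, r)])
    return {j: max(names, key=lambda n: Q[(j, n)]) for j in jobs}

# Assumed cost model: resource -> (energy, migration cost, security fitness).
resources = {"hsm": (0.2, 0.1, 0.9), "vm": (0.8, 0.5, 0.4)}
best = train(["sign", "encrypt"], resources)
# best maps each job to the resource with the highest learned value
```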

Keywords: cloud computing, cryptography on-demand service, reinforcement learning, workflow scheduling

Procedia PDF Downloads 10
327 Transforming Healthcare with Immersive Visualization: An Analysis of Virtual and Holographic Health Information Platforms

Authors: Hossein Miri, Zhou YongQi, Chan Bormei-Suy

Abstract:

The development of advanced technologies and innovative solutions has opened up exciting new possibilities for revolutionizing healthcare systems. One such emerging concept is the use of virtual and holographic health information platforms that aim to provide interactive and personalized medical information to users. This paper provides a review of notable virtual and holographic health information platforms. It begins by highlighting the need for information visualization and 3D representation in healthcare. It then provides background knowledge on information visualization and historical developments in 3D visualization technology. Additional domain knowledge concerning holography, holographic computing, and mixed reality is then introduced, followed by some of their common applications and use cases. After setting the scene and defining the context, the need for and importance of virtual and holographic visualization in medicine are discussed. Subsequently, some of the current research areas and applications of digital holography and holographic technology are explored, alongside the importance and role of virtual and holographic visualization in genetics and genomics. An analysis of the key principles and concepts underlying virtual and holographic health information systems is presented, and their potential implications for healthcare are pointed out. The paper concludes by examining the most notable existing mixed-reality applications and systems that help doctors visualize diagnostic and genetic data and assist in patient education and communication. This paper is intended to be a valuable resource for researchers, developers, and healthcare professionals who are interested in the use of virtual and holographic technologies to improve healthcare.

Keywords: virtual, holographic, health information platform, personalized interactive medical information

Procedia PDF Downloads 86
326 Development of an Integrated Route Information Management Software

Authors: Oluibukun G. Ajayi, Joseph O. Odumosu, Oladimeji T. Babafemi, Azeez Z. Opeyemi, Asaleye O. Samuel

Abstract:

The need for the complete automation of every procedure of surveying, and especially of its engineering applications, cannot be overemphasized, given the many demerits of the conventional manual or analogue approach. This paper presents the summarized details of the development of a Route Information Management (RIM) software package. The software, codenamed ‘AutoROUTE’, was coded using the Microsoft Visual Studio Visual Basic package, and it offers complete automation of the computational procedures and plan production involved in route surveying. It was tested using route survey data (longitudinal profile and cross sections) of a 2.7 km road stretching from Dama to Lunko village in Minna, Niger State, acquired with a Hi-Target DGPS receiver. The developed software (AutoROUTE) is capable of computing the various simple curve parameters, horizontal curves, and vertical curves, and it can also plot the road alignment, longitudinal profile, and cross sections, with the capability to store these in the SQL database incorporated into Microsoft Visual Basic. The plans plotted with AutoROUTE were compared with plans produced with the conventional AutoCAD Civil 3D software, and AutoROUTE proved to be more user-friendly and more precise because it plots to three decimal places, whereas AutoCAD plots to two decimal places. It was also found that AutoROUTE is faster in plotting, and the stages involved are less cumbersome compared to AutoCAD Civil 3D.
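The simple curve parameters mentioned above follow from standard route-surveying relations; a minimal sketch, with an assumed radius and deflection angle rather than values from the Dama-Lunko survey:

```python
import math

def simple_curve(radius, deflection_deg):
    """Standard circular (simple) curve elements from radius R and
    deflection angle Delta."""
    d = math.radians(deflection_deg)
    return {
        "tangent": radius * math.tan(d / 2),            # T = R*tan(D/2)
        "length": radius * d,                           # L = R*D (D in radians)
        "chord": 2 * radius * math.sin(d / 2),          # C = 2R*sin(D/2)
        "external": radius * (1 / math.cos(d / 2) - 1), # E = R*(sec(D/2) - 1)
        "mid_ordinate": radius * (1 - math.cos(d / 2)), # M = R*(1 - cos(D/2))
    }

params = simple_curve(300.0, 40.0)  # assumed: R = 300 m, Delta = 40 degrees
```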

Keywords: automated systems, cross sections, curves, engineering construction, longitudinal profile, route surveying

Procedia PDF Downloads 147
325 A Next-Generation Blockchain-Based Data Platform: Leveraging Decentralized Storage and Layer 2 Scaling for Secure Data Management

Authors: Kenneth Harper

Abstract:

The rapid growth of data-driven decision-making across various industries necessitates advanced solutions to ensure data integrity, scalability, and security. This study introduces a decentralized data platform built on blockchain technology to improve data management processes in high-volume environments such as healthcare and financial services. The platform integrates blockchain networks built with the Cosmos SDK and Polkadot Substrate alongside decentralized storage solutions like IPFS and Filecoin, coupled with decentralized computing infrastructure built on top of Avalanche. By leveraging advanced consensus mechanisms, we create a scalable, tamper-proof architecture that supports both structured and unstructured data. Key features include secure data ingestion, cryptographic hashing for robust data lineage, and Zero-Knowledge Proof mechanisms that enhance privacy while ensuring compliance with regulatory standards. Additionally, we implement performance optimizations through Layer 2 scaling solutions, including ZK-Rollups, which provide low-latency data access and trustless data verification across a distributed ledger. The findings from this exercise demonstrate significant improvements in data accessibility, reduced operational costs, and enhanced data integrity when tested in real-world scenarios. This platform reference architecture offers a decentralized alternative to traditional centralized data storage models, providing scalability, security, and operational efficiency.

Keywords: blockchain, cosmos SDK, decentralized data platform, IPFS, ZK-Rollups

Procedia PDF Downloads 24
324 Development of a Matlab® Program for the Bi-Dimensional Truss Analysis Using the Stiffness Matrix Method

Authors: Angel G. De Leon Hernandez

Abstract:

A structure is defined as a physical system or, in certain cases, an arrangement of connected elements capable of bearing certain loads. Structures are present in every part of daily life, e.g., in the design of buildings, vehicles, and mechanisms. The main goal of a structural designer is to develop a secure, aesthetic, and maintainable system, considering the constraints imposed in every case. With the advances in technology during the last decades, the capability of solving engineering problems has increased enormously. Nowadays computers play a critical role in structural analysis; unfortunately, for university students the vast majority of this software is inaccessible due to the high complexity and cost it represents, even when the software manufacturers offer student versions. This is exactly the reason for developing a more accessible and easy-to-use computing tool. The program is designed as a tool for university students enrolled in courses related to structural analysis and design, as a complementary instrument to achieve a better understanding of this area and to avoid tedious calculations. The program can also be useful for graduate engineers in the field of structural design and analysis. A graphical user interface is included to make the program even simpler to operate and to clarify the information requested and the results obtained. The present document includes the theoretical basics on which the program relies to solve the structural analysis, the logical path followed in developing the program, the theoretical results, a discussion of the results, and the validation of those results.
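The stiffness matrix method the program is based on can be sketched compactly. The following illustration (in Python rather than the program's Visual Basic-style environment, with assumed member properties) assembles a single-bar 2D truss and solves K·u = F, so the result is checkable by hand:

```python
import numpy as np

def bar_stiffness(E, A, xi, yi, xj, yj):
    """4x4 element stiffness matrix of a 2D truss bar in global axes,
    for DOFs [uix, uiy, ujx, ujy]."""
    L = np.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L
    return (E * A / L) * np.array([[ c*c,  c*s, -c*c, -c*s],
                                   [ c*s,  s*s, -c*s, -s*s],
                                   [-c*c, -c*s,  c*c,  c*s],
                                   [-c*s, -s*s,  c*s,  s*s]])

# One horizontal bar: node 0 at (0,0) fully fixed, node 1 at (2,0) loaded
# axially. Material and section properties are assumed for illustration.
E, A = 200e9, 1e-4                 # steel, 1 cm^2 cross-section
K = np.zeros((4, 4))
K += bar_stiffness(E, A, 0, 0, 2, 0)

free = [2]                         # only u1x is free (u1y restrained)
F = np.array([1000.0])             # 1 kN axial load
u = np.linalg.solve(K[np.ix_(free, free)], F)
# Hand check: u = F*L/(E*A) = 1000*2 / (200e9*1e-4) = 1.0e-4 m
```

A multi-member truss follows the same pattern: each element's 4x4 matrix is scattered into the global matrix at its nodes' DOF indices before the supports are applied.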

Keywords: stiffness matrix method, structural analysis, Matlab® applications, programming

Procedia PDF Downloads 121
323 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism

Authors: Lizhi Ma, Dan Liu

Abstract:

Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory. However, in highly heterogeneous distributed clusters, traditional model partitioning methods cannot achieve load balancing, and overlapping communication with computation is also a major challenge. In this paper, HePipe is proposed, an efficient pipeline parallel training method for highly heterogeneous clusters. Based on the characteristics of neural network pipeline training tasks, and oriented to a 2-level heterogeneous cluster computing topology, a training method using 2-level stage division of the neural network model partitioning is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward-propagation communication and backward-propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fit between task memory demand and available memory, which improves cluster throughput and solves the memory shortfall caused by memory differences across heterogeneous devices. Empirical results show that HePipe improves training speed by 1.6×–2.2× over existing asynchronous pipeline baselines.
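The stage-division idea, assigning contiguous blocks of layers to devices of different speeds so that no stage becomes a straggler, can be illustrated with a toy partitioner that minimizes the slowest stage time. This brute-force sketch is hypothetical and not HePipe's actual algorithm; it is only practical for small layer counts.

```python
from itertools import combinations

def partition_stages(layer_costs, device_speeds):
    """Contiguous layer-to-stage partition minimizing the slowest stage.

    Stage time = (sum of its layer costs) / device speed. Brute force
    over all split points; fine for the small models sketched here."""
    n, d = len(layer_costs), len(device_speeds)
    best, best_bounds = float("inf"), None
    for cuts in combinations(range(1, n), d - 1):
        bounds = (0,) + cuts + (n,)
        t = max(sum(layer_costs[a:b]) / s
                for (a, b), s in zip(zip(bounds, bounds[1:]), device_speeds))
        if t < best:
            best, best_bounds = t, bounds
    return best, best_bounds
```

For costs [2, 1, 1, 2] on devices with speeds [2, 1], the balanced split gives the fast device the first three layers, illustrating why heterogeneous clusters need cost-aware rather than equal-sized stages.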

Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning

Procedia PDF Downloads 16
322 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)

Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy

Abstract:

The drug discovery and development process is generally known to be a very lengthy and labor-intensive process. Therefore, in order to be able to deliver prompt and effective responses to cure certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) is able to alleviate this issue by applying computational power in order to streamline the whole drug discovery process, starting from target identification to lead optimization. This drug design approach can be predominantly applied to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no concrete cure for this disease, especially with the continuing emergence of drug resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium’s metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B enzyme (LipB), which is a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug resistant tuberculosis (MDR-TB) and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme in order to rank them based on their binding affinities. Compounds which have better binding affinities than LipB’s known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 of these showed better binding energies than decanoic acid. Furthermore, 12 out of the 112 compounds showed highly promising ADMET and TOPKAT properties. 
Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
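The ranking-and-filtering step of the screening described above amounts to keeping compounds whose docking score is lower (more negative, i.e. stronger predicted binding) than that of the reference inhibitor, decanoic acid, and ordering the survivors best-first. A minimal sketch with hypothetical compound names:

```python
def screen_hits(binding_energies, reference_energy):
    """Keep compounds that bind more strongly (lower docking score) than
    the reference ligand, returned best binder first."""
    hits = {name: e for name, e in binding_energies.items() if e < reference_energy}
    return sorted(hits, key=hits.get)
```

In the study this filter reduced 31,692 docked compounds to the 112 that outperformed decanoic acid, before the ADMET/TOPKAT toxicity screens were applied.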

Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT

Procedia PDF Downloads 422
321 Durability of Functionally Graded Concrete

Authors: Prasanna Kumar Acharya, Mausam Kumari Yadav

Abstract:

Cement concrete has emerged as the most consumed construction material. It has also dominated all other construction materials because of its versatility. Apart from its numerous advantages, it has a disadvantage concerning durability. Large structures constructed with cement concrete, involving the consumption of huge quantities of natural materials, remain in serviceable condition for only 5–7 decades, while structures made with stone stand for many centuries. The short life span of structures affects not only the economy but also the ecology greatly. As such, improving the durability of cement concrete is a global concern, and scientists around the globe are working toward this goal. Functionally graded concrete (FGC) is an exciting development. In contrast to conventional concrete, FGC demonstrates different characteristics depending on its thickness, which enables it to conform to particular structural specifications. The purpose of FGC is to improve the performance and longevity of conventional concrete structures with cutting-edge building materials. This variety is produced by carefully distributing various kinds and amounts of reinforcements, additives, mix designs and/or aggregates throughout the concrete matrix. A key component of functionally graded concrete's performance is its durability, which governs the material's capacity to tolerate aggressive environmental influences and load-bearing circumstances. This paper reports the durability of FGC made using Portland slag cement (PSC). For this purpose, control concretes (CC) of M20, M30 and M40 grades were designed. Single-layered samples were prepared using each grade of concrete. Further, using combinations of M20 + M30, M30 + M40 and M40 + M20, double-layered concrete samples with a depth ratio of 1:1 were prepared; these are herein called FGC samples.
The efficiency of the FGC samples was compared with that of the higher-grade parent concrete in terms of compressive strength, water absorption, sorptivity, acid resistance, sulphate resistance, chloride resistance and abrasion resistance. The properties were checked at the ages of 28 and 91 days. Apart from the strength and durability parameters, the microstructures of the CC and FGC were studied by X-ray diffraction, scanning electron microscopy and energy-dispersive X-ray spectroscopy. The results of the study revealed an increase in the efficiency of concrete, evaluated in terms of strength and durability, when it is made functionally graded using a layered technology with different grades of concrete in the layers. The results may help to enhance the efficiency and durability of structural concrete.
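One of the durability measures listed above, sorptivity, is conventionally obtained as the slope of cumulative water absorption against the square root of elapsed time (I = S·√t). A small least-squares sketch of that reduction, with illustrative units (mm of uptake, seconds of time):

```python
import math

def sorptivity(times_s, uptake_mm):
    """Sorptivity S from I = S * sqrt(t): least-squares slope through the
    origin of cumulative absorption against the square root of time."""
    roots = [math.sqrt(t) for t in times_s]
    return sum(i * r for i, r in zip(uptake_mm, roots)) / sum(r * r for r in roots)
```

A lower fitted S for the FGC samples than for the parent concrete would indicate the improved resistance to capillary water ingress that the study reports.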

Keywords: fresh on compacted, functionally graded concrete, acid, chloride, sulphate test, sorptivity, abrasion, water absorption test

Procedia PDF Downloads 16
320 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for automatic classification of tumor stage and subtype.
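The GLCM stage of the feature extraction can be illustrated with a tiny pure-Python stand-in that builds a normalized co-occurrence matrix for one pixel offset and derives two classic Haralick-style features, contrast and energy. The study itself used MATLAB; this sketch is illustrative only.

```python
def glcm_features(image, levels, dx=1, dy=0):
    """Contrast and energy from a normalized GLCM at one (dx, dy) offset.

    `image` is a 2D list of integer gray levels in [0, levels)."""
    h, w = len(image), len(image[0])
    glcm = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                glcm[image[y][x]][image[y2][x2]] += 1  # count the co-occurring pair
                total += 1
    contrast = energy = 0.0
    for i in range(levels):
        for j in range(levels):
            p = glcm[i][j] / total       # normalize to joint probabilities
            contrast += (i - j) ** 2 * p  # penalizes large gray-level jumps
            energy += p * p               # high for uniform textures
    return contrast, energy
```

Features like these (51 in the study, across FOS, GLCM, GLRLM and Laws' filters) form the input vectors that SFS prunes before the k-NN and SVM classifiers are trained.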

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 325
319 Continuous FAQ Updating for Service Incident Ticket Resolution

Authors: Kohtaroh Miyamoto

Abstract:

As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. A major problem, however, is that over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates, in response to new insights or new trends in the problems addressed, in order to maintain its usefulness for problem resolution. These updates require a systematic approach to defining the exact portion of the FAQ to change and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text information, together with the timelines of the information, in the service incident tickets. We cluster the tickets by structured category information, and by keywords and keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year period. After the first 6 months, we started to create FAQs and confirmed that they improved resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. The average reduction in ticket resolution time was between 31.6% and 43.9%, and in subjective analysis more than 75% of respondents reported that the FAQ system was useful in reducing ticket resolution times.
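The urgency score combining trends, resolution times, and priorities can be sketched as a weighted sum over pre-normalized cluster features. The weights, threshold, and cluster names below are hypothetical illustrations, not the paper's actual scoring model.

```python
def urgency_score(trend, avg_resolution, priority, w=(0.5, 0.3, 0.2)):
    """Weighted urgency of an FAQ cluster; each feature pre-normalized to [0, 1]."""
    wt, wr, wp = w
    return wt * trend + wr * avg_resolution + wp * priority

def clusters_to_update(clusters, threshold=0.6):
    """Return ids of ticket clusters whose urgency exceeds the threshold,
    most urgent first; these are the FAQ sections queued for an update."""
    scored = {cid: urgency_score(*feats) for cid, feats in clusters.items()}
    return sorted((c for c, s in scored.items() if s > threshold),
                  key=lambda c: -scored[c])
```

A score like this lets the update pipeline prioritize the FAQ portions whose tickets are both trending upward and slow to resolve.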

Keywords: FAQ system, resolution time, service incident tickets, IT system maintenance

Procedia PDF Downloads 338
318 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps (traditional apps), deep learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be used directly on DL-based apps because they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace has more robust performance than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
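At its core, this kind of leak detection is a reachability question: can data from a sensitive source method flow to a sink method? A toy flow-insensitive sketch over a method-level call graph, with hypothetical method names; DLtrace's actual trace and detection algorithms are far more precise than this.

```python
def propagate_taint(call_graph, sources, sinks):
    """Report sinks reachable from any sensitive source via a
    depth-first walk of a method-level call graph (method -> callees)."""
    tainted, stack = set(), list(sources)
    while stack:
        m = stack.pop()
        if m in tainted:
            continue
        tainted.add(m)
        stack.extend(call_graph.get(m, ()))  # taint flows into every callee
    return sorted(tainted & set(sinks))
```

Real analyzers additionally track data-flow facts per statement and must resolve the polymorphism and anonymous inner-class cases the abstract mentions, which is exactly where plain reachability over-approximates.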

Keywords: mobile computing, deep learning apps, sensitive information, static analysis

Procedia PDF Downloads 175
317 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River

Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko

Abstract:

Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River, situated in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems, and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with the tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate the contaminant concentrations over space within the river.
The modeling results helped demonstrate that because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
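The tidal filtering step, removing the tide-induced oscillation from measured water levels before aquifer parameters are estimated, can be crudely approximated with a centered moving average spanning roughly two semidiurnal tidal cycles. The study used a more advanced algorithm, so this is only a stand-in sketch of the idea.

```python
def detide(levels, window=25):
    """Smooth an hourly water-level series with a centered moving average
    (default 25 h, about two semidiurnal cycles), suppressing the tidal
    signal while preserving the slower pumping-induced trend."""
    half = window // 2
    out = []
    for i in range(len(levels)):
        lo, hi = max(0, i - half), min(len(levels), i + half + 1)
        out.append(sum(levels[lo:hi]) / (hi - lo))  # shorter window near the edges
    return out
```

Applied to a pumping-test record, the residual (raw minus smoothed) isolates the tidal component, while the smoothed series is what conventional curve-matching is run against.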

Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling

Procedia PDF Downloads 257
316 3D Simulation of Orthodontic Tooth Movement in the Presence of Horizontal Bone Loss

Authors: Azin Zargham, Gholamreza Rouhi, Allahyar Geramy

Abstract:

One of the most prevalent types of alveolar bone loss is horizontal bone loss (HBL), in which the bone height around teeth is reduced homogenously. In the presence of HBL, the magnitudes of forces during orthodontic treatment should be altered according to the degree of HBL, in such a way that the desired tooth movement can be obtained without further bone loss. In order to investigate the appropriate orthodontic force system in the presence of HBL, a three-dimensional numerical model capable of simulating orthodontic tooth movement was developed. The main goal of this research was to evaluate the effect of different degrees of HBL on long-term orthodontic tooth movement. Moreover, the effect of different force magnitudes on orthodontic tooth movement in the presence of HBL was studied. Five three-dimensional finite element models of a maxillary lateral incisor with 0 mm, 1.5 mm, 3 mm, 4.5 mm and 6 mm of HBL were constructed. The long-term orthodontic tooth tipping movements were attained over a 4-week period in an iterative process through external remodeling of the alveolar bone, with strains in the periodontal ligament as the mechanical stimulus for bone remodeling. To obtain long-term orthodontic tooth movement, in each iteration the strains in the periodontal ligament under a 1-N tipping force were first calculated using finite element analysis. Then, bone remodeling and the subsequent tooth movement were computed in post-processing software using a custom-written program. Incisal edge, cervical, and apical area displacements in the models with different alveolar bone heights (0, 1.5, 3, 4.5, 6 mm bone loss) in response to a 1-N tipping force were calculated. The maximum tooth displacement was found to be 2.65 mm at the top of the crown of the model with 6 mm of bone loss, and the minimum tooth displacement was 0.45 mm at the cervical level of the model with normal bone support.
Tooth tipping degrees of the models in response to different tipping force magnitudes were also calculated for the different degrees of HBL. The degree of tipping movement increased as the force level increased, and this increase was more prominent in the models with smaller degrees of HBL. Using the finite element method and bone remodeling theories, this study indicated that in the presence of HBL, under the same load, long-term orthodontic tooth movement increases. The simulation also revealed that even though tooth movement increases with increasing force, this increase is only prominent in the models with smaller degrees of HBL; tooth models with greater degrees of HBL are less affected by the magnitude of the orthodontic force. Based on our results, the applied force magnitude must be reduced in proportion to the degree of HBL.
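The iterative scheme, periodontal ligament strain driving external bone remodeling, which in turn advances tooth displacement, can be caricatured in a few lines. The threshold, rate and relief factor below are purely illustrative stand-ins for the paper's calibrated finite element computations.

```python
def simulate_tipping(initial_strain, threshold, rate, weeks=4):
    """Toy external-remodeling loop: each iteration, PDL strain above a
    remodeling threshold (a 'lazy zone') drives bone resorption, which
    increases tooth displacement and partially relieves the strain."""
    displacement, strain = 0.0, initial_strain
    for _ in range(weeks):
        stimulus = max(0.0, strain - threshold)  # no response inside the lazy zone
        displacement += rate * stimulus          # resorbed bone -> tooth advances
        strain *= 0.9                            # movement relieves some strain
    return displacement
```

Even this caricature reproduces the qualitative finding that a larger initial strain (e.g. from reduced bone support under the same 1-N force) yields larger cumulative movement over the simulated weeks.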

Keywords: bone remodeling, finite element method, horizontal bone loss, orthodontic tooth movement

Procedia PDF Downloads 341
315 Study of Durability of Porous Polymer Materials, Glass-Fiber-Reinforced Polyurethane Foam (R-PUF) in MarkIII Containment Membrane System

Authors: Florent Cerdan, Anne-Gaëlle Denay, Annette Roy, Jean-Claude Grandidier, Éric Laine

Abstract:

The insulation of the MarkIII membrane of Liquefied Natural Gas Carriers (LNGC) consists of a load-bearing system made of panels of glass-fiber-reinforced polyurethane foam (R-PUF). During shipping, the cargo containment is potentially subject to risk events such as water leakage through the ballast tank wall. The aim of the present work is to further develop the understanding of water transfer mechanisms and of the effect of water on the properties of R-PUF; this multi-scale approach contributes to improving durability. Macroscale/mesoscale: firstly, the gravimetric technique was used to define, at room temperature, the water transfer mechanisms and diffusion kinetics in the R-PUF. Water uptake follows a first, fast-growing kinetic stage connected to water absorption by the micro-porosity, and then evolves slowly and linearly; this second stage is connected to molecular diffusion and dissolution of water in the dense polyurethane membranes. Secondly, with the purpose of improving the understanding of the transfer mechanism, the evolution of the buoyant force was studied, which allowed identification of the effect of the balance of total and partial pressures of the gas mixture contained in the surface pores. Mesoscale/microscale: differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA) were used to investigate the hydration of the hard and soft segments of the polyurethane matrix, in order to identify the sensitivity of these two phases. It was shown that the glass transition temperature shifts towards lower temperatures as the solubility of water increases; these observations lead to the conclusion that the polymer matrix is plasticized. Microscale: Fourier transform infrared (FTIR) spectroscopy was used to characterize the functional groups at the edge, at the center and mid-way through the sample as a function of the duration of submersion.
The more water there is in the material, the more water binds to the urethane groups, and more specifically to the amide groups. The C=O urethane peak shifts to lower frequencies rapidly during the first 24 hours of submersion and then more slowly; after that, the peak intensity decreases more gradually.
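The two-stage uptake kinetics described above (fast filling of the micro-porosity followed by slow, linear diffusion/dissolution in the dense membranes) can be written as a simple additive model. The parameters are illustrative fitting constants, not the study's measured values.

```python
import math

def dual_stage_uptake(t_h, k_fast, m_fast, k_lin):
    """Water uptake after t_h hours of submersion: a saturating fast stage
    (micro-porosity, asymptote m_fast, rate k_fast) plus a slow linear
    stage (diffusion/dissolution, slope k_lin)."""
    return m_fast * (1.0 - math.exp(-k_fast * t_h)) + k_lin * t_h
```

Fitted to gravimetric data, the crossover between the two terms locates where absorption by the micro-porosity saturates and molecular diffusion takes over.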

Keywords: porous materials, water sorption, glass transition temperature, DSC, DMA, FTIR, transfer mechanisms

Procedia PDF Downloads 526
314 Numerical Investigation on the Influence of Incoming Flow Conditions on the Rotating Stall in Centrifugal Pump

Authors: Wanru Huang, Fujun Wang, Chaoyue Wang, Yuan Tang, Zhifeng Yao, Ruofu Xiao, Xin Chen

Abstract:

Rotating stall in a centrifugal pump is an unsteady flow phenomenon that causes instabilities and high hydraulic losses. It typically occurs at low flow rates due to large flow separation in the impeller blade passages. In order to reveal the influence of incoming flow conditions on rotating stall in a centrifugal pump, a numerical method for investigating rotating stall was established. This method is based on a modified SST k-ω turbulence model, and a fine mesh model was adopted. The flow velocity in the impeller calculated by this method was in good agreement with PIV results. The effects of flow rate and sealing-ring leakage on the stall characteristics of the centrifugal pump were studied using the proposed numerical approach, and the flow structures in the impeller under typical flow rates and typical sealing-ring leakages were analyzed. It is found that the stall vortex frequency and circumferential propagation velocity increase as the flow rate decreases. As the flow rate decreases from 0.40Qd to 0.30Qd, the stall vortex frequency increases from 1.50 Hz to 2.34 Hz, and the circumferential propagation velocity of the stall vortex increases from 3.14 rad/s to 4.90 rad/s. Under almost all flow rate conditions where rotating stall is present, there is low-frequency pressure pulsation between 0 and 5 Hz, whose amplitude increases as the flow rate decreases. Taking the measuring point at the leading edge of the blade pressure surface as an example, as the flow rate decreases from 0.40Qd to 0.30Qd, the pressure fluctuation amplitude increases by 86.9%. With increasing leakage, the flow structure in the impeller becomes more complex and the 8-shaped stall vortex is no longer stable: under large leakage, new vortex cores are constantly generated and merged with the original cores of the 8-shaped stall vortex.
The upstream and downstream structures of the 8-shaped stall vortex migrate to different degrees in the flow passage, with the downstream vortex migrating more noticeably. The results show that the proposed numerical approach captures detailed vortex characteristics, and that the incoming flow conditions have significant effects on the stall vortex in centrifugal pumps.

Keywords: centrifugal pump, rotating stall, numerical simulation, flow condition, vortex frequency

Procedia PDF Downloads 136
313 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 370
312 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"

Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo

Abstract:

DWDM (Dense Wavelength Division Multiplexing) is in constant growth around the world, driven by consumer demand. From its inception, this technology has answered the need for a system that enables the communication of an entire nation to expand, improving the computing trends of its society according to its customs and geographical location. The Honduran Telecommunications Company (HONDUTEL) provides internet services and data transport with PDH and SDH technology, which in the Republic of Honduras, C.A., represents a viable option for the consumer in terms of purchase value and ease of acquisition, but lacks efficiency in terms of technological advancement. This is an obstacle that limits long-term socio-economic development in comparison with other countries in the region, and hinders the establishment of competition between the telecommunications companies engaged in this sector. For that reason, we propose to establish in our country a newer technology already implemented in Europe, DWDM, which allows broadband data transfer. In this way we will have a stable, high-quality service that will allow us to compete in this globalized world, replacing the current system with one that provides a better service and keeps the network at the forefront. Once implemented, DWDM builds upon existing resources, such as the equipment already in use, and opens a new stage that provides a business image for the Republic of Honduras, C.A., as a nation able to guarantee data transport and broadband internet in a meaningful way. This benefits, in the first instance, existing customers, as well as all the public and private institutions in need of such services.

Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH

Procedia PDF Downloads 262
311 Design and Implementation of Low-code Model-building Methods

Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu

Abstract:

This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive way to drag, drop, and connect components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency, but also enhances the flexibility of algorithm integration and simplifies the model deployment process. The core strength of this method lies in its ease of use and efficiency. Users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability, and can adapt to the needs of different application scenarios.
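Behind such a drag-and-drop builder, the connected components form a directed acyclic graph that must be executed in dependency order, with each component fed its predecessors' outputs. A minimal sketch of that wiring with a hypothetical API, not the system's actual code:

```python
def run_pipeline(components, preds, source_values):
    """Execute a component graph in topological (Kahn) order.

    components: name -> callable; preds: name -> list of predecessor names;
    source_values: initial inputs for components with no predecessors."""
    indeg = {n: len(p) for n, p in preds.items()}
    succs = {n: [] for n in preds}
    for n, ps in preds.items():
        for p in ps:
            succs[p].append(n)
    ready = [n for n, d in indeg.items() if d == 0]
    out = {}
    while ready:
        n = ready.pop()
        # Source components receive their configured input; others receive
        # the outputs of their predecessors, in declared order.
        args = [out[p] for p in preds[n]] if preds[n] else [source_values[n]]
        out[n] = components[n](*args)
        for s in succs[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return out
```

In a real low-code system each component would also carry metadata (ports, parameter forms, resource requests) and the final node's output would be wrapped behind the generated model service API.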

Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment

Procedia PDF Downloads 27
310 Comparative Evaluation of Vanishing Interfacial Tension Approach for Minimum Miscibility Pressure Determination

Authors: Waqar Ahmad Butt, Gholamreza Vakili Nezhaad, Ali Soud Al Bemani, Yahya Al Wahaibi

Abstract:

Minimum miscibility pressure (MMP) plays a great role in determining the displacement efficiency of different gas injection processes. Experimental techniques for MMP determination include the industrially recommended slim tube, vanishing interfacial tension (VIT) and the rising bubble apparatus (RBA). In this paper, an MMP measurement study using the slim tube and VIT experimental techniques for two different crude oil samples (M and N), both in live and stock tank oil forms, is presented. The VIT-measured MMP values for both 'M' and 'N' live crude oils were close to the slim-tube-determined values, with 6.4% and 5% deviation, respectively. For both oil samples in stock tank oil form, however, the VIT-measured MMP showed an unacceptably high deviation from the slim-tube-determined MMP. This higher difference appears to be related to the high content of stabilized heavier fractions in the crude oil and the lack of multiple-contact miscibility. None of the nine deployed crude oil and CO2 MMP correlations could produce a reliable MMP close to the slim-tube-determined value. Since the VIT-determined MMP values for both considered live crude oils closely match the slim-tube-determined values, VIT is confirmed as a reliable, reproducible, rapid and cheap alternative for live crude oil MMP determination, whereas VIT MMP determination for the stock tank oil case needs further investigation into the stabilization/destabilization mechanism of the heavier oil ends and the development of multiple-contact miscibility.
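The VIT technique estimates MMP by measuring interfacial tension at several pressures and linearly extrapolating to the zero-IFT intercept, the pressure at which the gas-oil interface vanishes. A minimal least-squares sketch with made-up data, not the paper's measurements:

```python
def vit_mmp(pressures, ifts):
    """MMP estimate from the vanishing-interfacial-tension idea:
    fit IFT = a + b*P by ordinary least squares and return the
    pressure at which the fitted line reaches zero IFT."""
    n = len(pressures)
    mp = sum(pressures) / n
    mi = sum(ifts) / n
    b = (sum((p - mp) * (i - mi) for p, i in zip(pressures, ifts))
         / sum((p - mp) ** 2 for p in pressures))
    a = mi - b * mp
    return -a / b  # P where a + b*P = 0
```

The stock-tank-oil discrepancy reported above is precisely a case where this linear extrapolation misleads: stabilized heavy ends distort the IFT-pressure trend, so the zero-IFT intercept no longer tracks the slim-tube MMP.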

Keywords: minimum miscibility pressure, interfacial tension, multiple contacts miscibility, heavier ends

Procedia PDF Downloads 267