Search results for: Discrete Cuckoo Optimization Algorithm (DCOA)

2977 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes for which data are reported differ with business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form a dataset constructed for each time point, containing all the information required for freight-moving decisions. Since a significant amount of these data is used for various purposes, an integrated methodological approach must be developed in response to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study uses Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been applied widely; one exception used the Grubbs test for outlier detection to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of genetic algorithm construction that are more likely to extract the best solution. For freight delivery management, genetic algorithm structures are used as the more effective technique; accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and to select an appropriate transport corridor. The authors suggest a methodology for multi-objective analysis that evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. The multi-objective analysis includes safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for the management of multi-modal transportation processes.
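
For illustration, a minimal Python sketch of the Grubbs step at the 99% confidence level described above; the delivery-time readings are hypothetical, and this is not the authors' implementation:

```python
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.01):
    """Grubbs test for a single outlier (two-tailed form).

    Returns (index, is_outlier) for the value farthest from the mean.
    alpha=0.01 corresponds to the 99% confidence level used in the study.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))           # most extreme value
    g = abs(x[idx] - mean) / sd                      # Grubbs statistic
    # Critical value from the t-distribution (standard Grubbs formula)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return idx, g > g_crit

# Hypothetical delivery-time readings (hours); 19.0 is a suspect report
times = [4.1, 3.9, 4.3, 4.0, 4.2, 19.0, 4.1, 3.8]
print(grubbs_outlier(times))   # -> (5, True) if 19.0 exceeds the bound
```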

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 166
2976 Sharing Tacit Knowledge: The Essence of Knowledge Management

Authors: Ayesha Khatun

Abstract:

In the 21st century, where markets are unstable, technologies proliferate rapidly, competitors multiply, products and services become obsolete almost overnight, and customers demand low-cost, high-value products, leveraging and harnessing knowledge is not just a potential source of competitive advantage but a necessity in technology-based and information-intensive industries. Knowledge management focuses on leveraging the available knowledge and sharing it among individuals in the organization so that employees can make the best use of it towards achieving organizational goals. Knowledge is not a discrete object. It is embedded in people and is so difficult to transfer outside its immediate context that it becomes a major competitive advantage. However, internal transfer of knowledge among employees is essential to maximize the use of knowledge that exists in the organization in an unstructured manner. But just as knowledge is a source of competitive advantage for the organization, it is also a source of competitive advantage for individuals. People believe that knowledge is power and that sharing it may cause them to lose their competitive position. Moreover, the very nature of tacit knowledge poses many difficulties in sharing it. Yet sharing tacit knowledge is the vital part of the knowledge management process, because it is tacit knowledge that is inimitable. Knowledge management has often been made synonymous with the use of software and technology, leading to the management of explicit knowledge only and ignoring personal interaction and the formation of informal networks, which are considered the most successful means of sharing tacit knowledge. The factors responsible for effective sharing of tacit knowledge are grouped into individual, organizational, and technological factors, and different factors under each category have been identified. Creating a positive organizational culture, encouraging personal interaction, and practicing a reward system are some of the strategies that can help overcome many of the barriers to effective sharing of tacit knowledge. The methodology applied here is entirely secondary: an extensive review of the relevant literature has been undertaken for the purpose.

Keywords: knowledge, tacit knowledge, knowledge management, sustainable competitive advantage, organization, knowledge sharing

Procedia PDF Downloads 386
2975 Multiple Negative-Differential Resistance Regions Based on AlN/GaN Resonant Tunneling Structures by the Vertical Growth of Molecular Beam Epitaxy

Authors: Yao Jiajia, Wu Guanlin, Liu Fang, Xue Junshuai, Zhang Jincheng, Hao Yue

Abstract:

Resonant tunneling diodes (RTDs) based on GaN have been extensively studied. However, no multiple logic states achieved by epitaxially grown RTDs in GaN materials have been reported. In this paper, multiple negative-differential-resistance regions obtained by combining two discrete double-barrier RTDs in series are demonstrated for the first time. Plasma-assisted molecular beam epitaxy (PA-MBE) was used to grow structures consisting of two vertically stacked RTDs. The substrate was a GaN-on-sapphire template. Each resonant tunneling structure was composed of an AlN double barrier and a single GaN well, with undoped 4-nm GaN spacer layers on each side. The AlN barriers were 1.5 nm thick, and the GaN well was 2 nm thick. The resonant tunneling structures were separated from each other by a 30-nm-thick n+ GaN layer. The bottom and top layers of the structure, grown adjacent to the spacer layers, consist of 200-nm-thick n+ GaN. Devices with two tunneling structures exhibited uniform peak and valley currents and two negative-differential-resistance (NDR) regions equally spaced in bias voltage. The current-voltage (I-V) characteristics of resonant tunneling structures with diameters of 1 and 2 μm were analyzed in this study. These structures exhibit three stable operating points, which are investigated in detail. This research demonstrates that using molecular beam epitaxy (MBE) to vertically grow multiple resonant tunneling structures is a promising method for achieving multiple negative-differential-resistance regions and stable logic states. These findings have significant implications for the development of digital circuits capable of multi-valued logic with a small number of devices.

Keywords: GaN, AlN, RTDs, MBE, logic state

Procedia PDF Downloads 75
2974 Experimental-Numerical Inverse Approaches in the Characterization and Damage Detection of Soft Viscoelastic Layers from Vibration Test Data

Authors: Alaa Fezai, Anuj Sharma, Wolfgang Mueller-Hirsch, André Zimmermann

Abstract:

Viscoelastic materials have been widely used in the automotive industry over the last few decades with different functionalities. Besides their main application as a simple and efficient surface damping treatment, they may ensure optimal operating conditions for on-board electronics as thermal interface or sealing layers. The dynamic behavior of viscoelastic materials generally depends on many environmental factors, the most important being temperature and strain rate or frequency. Prior to the reliability analysis of systems including viscoelastic layers, it is therefore crucial to accurately predict the dynamic and lifetime behavior of these materials. This includes identifying the dynamic material parameters under critical temperature and frequency conditions, along with a precise damage localization and identification methodology. The goal of this work is twofold. The first part applies an inverse viscoelastic material-characterization approach over a wide frequency range and under different temperature conditions. To this end, dynamic measurements are carried out on a single lap joint specimen using an electrodynamic shaker and an environmental chamber. The specimen consists of aluminum beams assembled to adapter plates through a viscoelastic adhesive layer. The experimental setup is reproduced in finite element (FE) simulations, and frequency response functions (FRF) are calculated. The parameters of both the generalized Maxwell model and the fractional derivatives model are identified through an optimization algorithm minimizing the difference between the simulated and the measured FRFs. The second goal of the current work is to guarantee on-line detection of damage, i.e., delamination in the viscoelastic bonding of the described specimen, during frequency-monitored end-of-life testing. For this purpose, an inverse technique is presented that determines the damage location and size based on the modal frequency shift and the change of the mode shapes. This includes a preliminary FE model-based study correlating the delamination location and size to the change in the modal parameters, and a subsequent experimental validation achieved through dynamic measurements of specimens with different pre-generated crack scenarios, compared to the virgin specimen. The main advantage of the inverse characterization approach presented in the first part is its ability to adequately identify the damping and stiffness behavior of soft viscoelastic materials over a wide frequency range and under critical temperature conditions. Classic forward characterization techniques such as dynamic mechanical analysis usually face limitations under critical temperature and frequency conditions due to the behavior of soft viscoelastic materials. Furthermore, the inverse damage detection described in the second part guarantees an accurate prediction not only of the damage size but also of its location using a simple test setup, and therefore underlines the significance of inverse numerical-experimental approaches in predicting the dynamic behavior of soft bonding layers applied in automotive electronics.
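
To picture the FRF-based identification step, the sketch below fits a small generalized Maxwell model to a synthetic complex-modulus curve by least squares; the two-term model, frequency range, and data are assumptions for illustration, not the authors' setup:

```python
import numpy as np
from scipy.optimize import least_squares

def maxwell_modulus(omega, e_inf, e_i, tau_i):
    """Complex modulus of a generalized Maxwell model:
    E*(w) = E_inf + sum_i E_i * (1j*w*tau_i) / (1 + 1j*w*tau_i)"""
    iwt = 1j * np.outer(omega, tau_i)
    return e_inf + (np.asarray(e_i) * iwt / (1 + iwt)).sum(axis=1)

def residual(p, omega, e_meas):
    e_inf, e1, e2, t1, t2 = p
    e_model = maxwell_modulus(omega, e_inf, [e1, e2], [t1, t2])
    # Stack real and imaginary parts so the optimizer sees real residuals
    return np.concatenate([(e_model - e_meas).real, (e_model - e_meas).imag])

omega = np.logspace(0, 4, 60)                       # rad/s, assumed range
e_true = maxwell_modulus(omega, 1.0, [2.0, 0.5], [1e-3, 1e-1])
fit = least_squares(residual, x0=[0.5, 1.0, 1.0, 1e-2, 1e-2],
                    args=(omega, e_true), bounds=(1e-6, np.inf))
print(fit.x)   # recovers roughly (1.0, 2.0, 0.5, 1e-3, 1e-1), up to mode ordering
```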

Keywords: damage detection, dynamic characterization, inverse approaches, vibration testing, viscoelastic layers

Procedia PDF Downloads 194
2973 Block Mining: Block Chain Enabled Process Mining Database

Authors: James Newman

Abstract:

Process mining is an emerging technology that seeks to serialize enterprise data into time-series data. It has been used by many companies and has been the subject of a variety of research papers. However, the majority of current efforts have looked at how best to build process mining on standard relational databases. This paper is a first pass at outlining a database custom-built for a minimum viable product of process mining. We present Block Miner, a blockchain protocol for storing process mining data across a distributed network, and demonstrate the feasibility of storing process mining data on the blockchain. We present a proof of concept and show how the intersection of these two technologies helps to solve a variety of issues, including but not limited to ransomware attacks, tax documentation, and conflict resolution.

Keywords: blockchain, process mining, memory optimization, protocol

Procedia PDF Downloads 80
2972 A Combined Meta-Heuristic with Hyper-Heuristic Approach to Single Machine Production Scheduling Problem

Authors: C. E. Nugraheni, L. Abednego

Abstract:

This paper is concerned with minimizing mean tardiness and mean flow time in a real single-machine production scheduling problem. Two variants of a genetic algorithm, used as meta-heuristics and combined with a hyper-heuristic approach, are proposed to solve this problem. These methods are applied to instances generated from real-world data from a company. Encouraging results are reported.
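
For reference, the two objectives have simple closed forms for a given job sequence; a minimal sketch with made-up processing times and due dates:

```python
def mean_flow_time_and_tardiness(sequence, proc, due):
    """Mean flow time and mean tardiness of a job sequence on one machine.
    proc[j] = processing time of job j, due[j] = due date of job j."""
    t, flow, tardy = 0.0, 0.0, 0.0
    for j in sequence:
        t += proc[j]                      # completion time of job j
        flow += t                         # flow time (all jobs released at time 0)
        tardy += max(0.0, t - due[j])     # tardiness of job j
    n = len(sequence)
    return flow / n, tardy / n

proc = {0: 3, 1: 5, 2: 2}                 # hypothetical data
due  = {0: 4, 1: 6, 2: 10}
print(mean_flow_time_and_tardiness([2, 0, 1], proc, due))
```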

Keywords: hyper-heuristics, evolutionary algorithms, production scheduling, meta-heuristic

Procedia PDF Downloads 370
2971 A Parking Demand Forecasting Method for Making Parking Policy in the Center of Kabul City

Authors: Roien Qiam, Shoshi Mizokami

Abstract:

Parking demand in the Central Business District (CBD) has grown with the increasing number of private vehicles that has accompanied rapid economic growth, and with the lack of an efficient public transport and traffic management system. This has resulted in low mobility, poor accessibility, serious congestion, high rates of traffic accident fatalities and injuries, and air pollution, mainly because people have to drive around slowly to find a vacant spot. With a parking pricing and enforcement policy, considerable improvement could be achieved, and on-street parking spaces could be managed efficiently and effectively. To evaluate parking demand and formulate parking policy, one must understand the current parking conditions and driver behavior: how drivers choose their parking type and location, and how they behave when searching for a vacant spot under given parking charges and search times. This study presents results from observational, revealed-preference, and stated-preference surveys and an experiment. The data obtained show a gap between parking supply and demand that has reached its maximum. To model the parking decision, a choice model was constructed based on discrete choice modeling theory; a multinomial logit model was estimated using the SP survey data. The model represents the choice among three alternatives: priced on-street parking, off-street parking, and illegal parking. Individuals choose a parking type based on their preferences concerning parking charges, search times, access times, and waiting times. The parking assignment model was obtained directly from the behavioral model and is used in the parking simulation. The study concludes with an evaluation of parking policy.
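
The multinomial logit model mentioned above assigns choice probabilities P(i) = exp(V_i) / sum_j exp(V_j). A minimal sketch with hypothetical utility coefficients and attribute values (the estimated coefficients of the study are not reproduced here):

```python
import numpy as np

# Hypothetical taste coefficients: parking charge, search time, access time
beta = {"charge": -0.08, "search": -0.05, "access": -0.03}

# Assumed attributes (charge in local currency units, times in minutes)
alts = {
    "on_street":  {"charge": 50, "search": 10, "access": 2},
    "off_street": {"charge": 80, "search": 2,  "access": 6},
    "illegal":    {"charge": 0,  "search": 15, "access": 1},
}

v = np.array([sum(beta[k] * a[k] for k in beta) for a in alts.values()])
p = np.exp(v) / np.exp(v).sum()          # multinomial logit probabilities
print(dict(zip(alts, p.round(3))))
```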

Keywords: CBD, parking demand forecast, parking policy, parking choice model

Procedia PDF Downloads 179
2970 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized, because agencies lack the resources to create and implement timing plans. In this work, we discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and to collect accurate traffic data 24/7/365 using a vehicle detection system. We discuss recent advances in AI technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and the best workflow for doing so. This paper also shows how AI makes signal timing affordable. We introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans, and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes of the visual cortex. A neural net is modeled after the human brain and consists of millions of densely connected processing nodes; it is a form of machine learning in which the network learns to recognize vehicles through training, called deep learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans, and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, yet they often lack the studies and data to support their decisions; currently, one-third of transportation agencies do not collect pedestrian and bike data. We discuss how the use of AI for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that unleash their potential, instead of leaving them to deal with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional adaptation work. The methodology used and proposed in this research contains a camera-model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on data sets acquired under a variety of daily real-world road conditions and compared with the performance of commonly used methods that require collecting data by counting, evaluating, and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores how technologies powered by AI can benefit a community and how to translate the complex and often overwhelming benefits into language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that AI brings to traffic signal control and data collection are unsurpassed.

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 147
2969 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time: a coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is microstructure-independent, i.e., it depends only on the bulk properties of the disordered granular sample: the sound wave velocity and hence the bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. These attenuation and broadening effects are affected by disorder (polydispersity, i.e., contrast in the sizes of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial pulse amplitude (non-linearity) on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The particle sizes are selected randomly from a Gaussian distribution, whose standard deviation is the parameter quantifying the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging is used to improve the signal quality of the coherent pulse and to remove the multiply scattered waves. The results concerning the width of the coherent wavefront are formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
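
A minimal version of such a simulation can be sketched with a polydisperse chain and the Hertzian overlap law F = k * delta^(3/2); the stiffness, masses, time step, and pulse amplitude below are placeholders rather than the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dt, steps = 50, 1.0e3, 1e-4, 20000
radii = rng.normal(1.0, 0.05, n)          # Gaussian polydispersity (std = disorder)
mass = radii**3                           # mass ~ r^3 (constant density, prefactor dropped)
x = np.cumsum(2 * radii) - radii          # particle centers, grains just touching
v = np.zeros(n)
v[0] = 0.1                                # initial pulse imparted to the first grain

def forces(x):
    # Overlap of neighbors i and i+1: (r_i + r_{i+1}) - (x_{i+1} - x_i), if positive
    overlap = np.maximum(0.0, (x[:-1] + radii[:-1] + radii[1:]) - x[1:])
    fc = k * overlap**1.5                 # Hertzian contact force
    f = np.zeros(n)
    f[:-1] -= fc                          # push left grain back
    f[1:]  += fc                          # push right grain forward
    return f

for _ in range(steps):                    # velocity-Verlet integration
    a = forces(x) / mass
    x += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + forces(x) / mass) * dt
print(v[-5:])                             # velocities near the far end of the chain
```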

Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation

Procedia PDF Downloads 187
2968 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core

Authors: Yashas Bedre Raghavendra, Pim Vullers

Abstract:

This study presents the design and implementation of an abstract cryptographic coprocessor that leverages the AMBA (Advanced Microcontroller Bus Architecture) protocols APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) to enable seamless integration with the main CPU (central processing unit) and to enhance the coprocessor's algorithmic flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows new cryptographic algorithms to be integrated easily in the future; as the security landscape evolves, the coprocessor can incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores adding custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions tailored specifically for cryptographic algorithms, the coprocessor achieves higher efficiency and fewer cycles per instruction (CPI) than traditional instruction sets. The adoption of the 128-bit RISC-V architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons with 32-bit and 64-bit architectures highlight the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in algorithm flexibility, security, and integration with the main CPU. By leveraging the AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor can securely execute a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.

Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction

Procedia PDF Downloads 58
2967 Layersomes for Oral Delivery of Amphotericin B

Authors: A. C. Rana, Abhinav Singh Rana

Abstract:

Layer-by-layer coating with biocompatible polyelectrolytes converts liposomes into a stable version, i.e., 'layersomes'. This system was further used to deliver Amphotericin B through the oral route. Extensive optimization of different process variables resulted in the formation of layersomes with a particle size of 238.4±5.1, a PDI of 0.24±0.16, a zeta potential of 34.6±1.3, and an entrapment efficiency of 71.3±1.2. TEM analysis further confirmed the formation of spherical particles. Trehalose (10% w/w) resulted in the formation of a fluffy, easy-to-redisperse cake in the freeze-dried layersomes. Controlled release of up to 50% within 24 h was observed in the case of layersomes. The layersomes were found to be stable in simulated biological fluids and resulted in 3.59-fold higher bioavailability in comparison to free Amp-B. Furthermore, the developed formulation was found to be safe in comparison to Fungizone, as indicated by blood urea nitrogen (BUN) and creatinine levels.

Keywords: amphotericin B, layersomes, liposomes, toxicity

Procedia PDF Downloads 516
2966 Determination of the Minimum Time and the Optimal Trajectory of a Moving Robot Using Picard's Method

Authors: Abbes Lounis, Kahina Louadj, Mohamed Aidene

Abstract:

This paper presents an optimal control problem applied to a robot: determine a control that makes it possible to reach a final state from a given initial state in minimum time. The approach followed to solve this optimization problem with constraints on the control starts by presenting the equations of motion of the dynamic system, then applies Pontryagin's maximum principle (PMP) to determine the optimal control, and finally uses Picard's successive approximation method combined with the shooting method to solve the resulting differential system.
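
Picard's method builds successive approximations x_{k+1}(t) = x_0 + \int_0^t f(s, x_k(s)) ds. A minimal numerical sketch on a scalar test problem (the robot dynamics themselves are not given in the abstract):

```python
import numpy as np

def picard(f, x0, t, iterations=8):
    """Successive approximations x_{k+1}(t) = x0 + int_0^t f(s, x_k(s)) ds,
    with the integral evaluated by the cumulative trapezoidal rule."""
    x = np.full_like(t, x0, dtype=float)
    for _ in range(iterations):
        fx = f(t, x)
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (fx[1:] + fx[:-1]) * np.diff(t))))
        x = x0 + integral
    return x

t = np.linspace(0.0, 1.0, 101)
x = picard(lambda s, y: y, x0=1.0, t=t)   # x' = x, x(0) = 1  ->  exp(t)
print(np.max(np.abs(x - np.exp(t))))      # small residual after 8 sweeps
```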

Keywords: robotics, Pontryagin's Maximum Principle, PMP, Picard's method, shooting method, non-linear differential systems

Procedia PDF Downloads 245
2965 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. If river water levels rise abruptly, especially in winter, one can expect massive extra impact loads on a textile protective structure from floating debris and even tree trunks. Estimating this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact motivates the numerical analysis of a fluid-structure interaction application comprising a flexible, slender-shaped structure and free-surface water flow, in which an accelerated heavy piece of flotsam approaches the membrane. In this context, the behavior of the flexible membrane and its interaction with the moving flotsam are analyzed with the finite-element-based explicit and implicit solvers of Abaqus, available as products of the SIMULIA software suite. The response of the free-surface water flow to moving structures, in turn, is investigated using the finite volume solver of Star CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and an effective partitioned strategy in the form of an implicit coupling algorithm allow the partitioned domains to be interconnected powerfully. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain where the two structures approach each other. This contribution presents the approaches used to establish a convergent numerical solution and compares the results with experimental findings.

Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam

Procedia PDF Downloads 376
2964 Subjective Evaluation of Mathematical Morphology Edge Detection on Computed Tomography (CT) Images

Authors: Emhimed Saffor

Abstract:

In this paper, the problem of edge detection in digital images is considered. Three methods of edge detection based on a mathematical morphology algorithm were applied to two sets of CT images (brain and chest): a 3x3 filter for the first method, a 5x5 filter for the second, and a 7x7 filter for the third, under the MATLAB programming environment. The results of the above-mentioned methods were evaluated subjectively. They show that these methods are efficient and suitable for medical images and can be used in various other applications.
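
One standard mathematical-morphology edge detector is the morphological gradient (dilation minus erosion). A sketch with the three window sizes compared above, using a synthetic image as a stand-in for the CT data:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morph_gradient(img, size):
    """Morphological gradient: dilation - erosion with a size x size window."""
    return grey_dilation(img, size=(size, size)) - grey_erosion(img, size=(size, size))

# Synthetic stand-in for a CT slice: a bright disc on a dark background
y, x = np.ogrid[:128, :128]
img = ((x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2).astype(float)

for s in (3, 5, 7):                  # the three filter sizes compared in the paper
    edges = morph_gradient(img, s)
    print(s, int((edges > 0).sum())) # larger windows yield thicker edge bands
```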

Keywords: CT images, Matlab, medical images, edge detection

Procedia PDF Downloads 315
2963 Review of Suitable Advanced Oxidation Processes for Degradation of Organic Compounds in Produced Water during Enhanced Oil Recovery

Authors: Smita Krishnan, Krittika Chandran, Chandra Mohan Sinnathambi

Abstract:

Produced water and its treatment and management are growing challenges in all producing regions. This water is generally considered a non-revenue product, but it can have significant value in enhanced oil recovery techniques if it meets the required quality standards. There is also interest in beneficial uses of produced water for agricultural and industrial applications. Advanced Oxidation Processes (AOPs) are a chemical technology whose use in the wastewater treatment industry has grown recently, and they are highly recommended for organic compounds that are not easily removed. The efficiency of AOPs is compound-specific; therefore, each process should be optimized based on different aspects.

Keywords: advanced oxidation process, photochemical processes, degradation, organic contaminants

Procedia PDF Downloads 493
2962 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations that enhance oil recovery while reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset: K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. The recognized knowledge and patterns are finally integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research advances knowledge in the development of unconventional oil reserves and bridges the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guiding field operations, leading to better designs, higher oil recovery, and greater economic return for future wells in unconventional oil reserves.
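
As a rough illustration of the clustering and dimensionality-reduction steps, the sketch below runs standardization, PCA, and K-means on synthetic data standing in for the well database; the feature list and cluster count are assumptions:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic stand-in for the well database: rows = wells, columns = features
# (e.g., porosity, permeability, stage count, proppant mass, lateral length)
X = rng.normal(size=(500, 5))

X_std = StandardScaler().fit_transform(X)        # put features on one scale
pcs = PCA(n_components=2).fit_transform(X_std)   # emphasize dominant variation
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)

for c in range(4):                               # cluster centroids in PC space
    print(c, pcs[labels == c].mean(axis=0).round(2))
```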

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 271
2961 Optimization of Human Hair Concentration for a Natural Rubber Based Composite

Authors: Richu J. Babu, Sony Mathew, Sharon Rony Jacob, Soney C. George, Jibin C. Jacob

Abstract:

Human hair is a non-biodegradable waste available in plenty throughout the world, but it is rarely explored for applications in engineering fields. The tensile strength of human hair ranges from 170 to 220 MPa, a property that can be exploited in making bio-composites [1]. The composite is prepared by mixing human hair and natural rubber in a two-roll mill along with additives, followed by vulcanization. Here, the concentration of human hair is varied, with the fiber length fixed at 20 mm, and tests such as tensile, abrasion, tear, and hardness tests were conducted. Up to a certain range, increasing the fiber loading markedly improves the mechanical properties.

Keywords: human hair, natural rubber, composite, vulcanization, fiber loading

Procedia PDF Downloads 369
2960 Matrix Completion with Heterogeneous Cost

Authors: Ilqar Ramazanli

Abstract:

The matrix completion problem has been studied broadly under many underlying conditions: adaptive or non-adaptive, exact or estimated, single-phase or multi-phase, and many other categories. In most of these cases, the observation cost of each entry is uniform and the same across columns. However, in many real-life scenarios, we can expect entries from distinct columns or distinct positions to have different costs. In this paper, we explore this generalization under adaptive conditions. We approach the problem under two cost models. In the first, entries from different columns have different observation costs, but within the same column each entry has a uniform cost. In the second, any two entries may have different observation costs, whether they lie in the same column or not. We provide a complexity analysis of our algorithms together with tightness guarantees.

Keywords: matroid optimization, matrix completion, linear algebra, algorithms

Procedia PDF Downloads 89
2959 Motion Planning and Posture Control of the General 3-Trailer System

Authors: K. Raghuwaiya, B. Sharma, J. Vanualailai

Abstract:

This paper presents a set of artificial potential field functions that improves, in general, the motion planning and posture control of the general 3-trailer system in an a priori known environment, with theoretically guaranteed point and posture stabilities, convergence, and collision avoidance properties. We design and inject two new concepts, ghost walls and the distance optimization technique (DOT), to strengthen the point and posture stabilities of our dynamical model in the sense of Lyapunov. This new combination of techniques emerges as a convenient mechanism for obtaining feasible orientations at the target positions with an overall reduction in the complexity of the navigation laws. Simulations are provided to demonstrate the effectiveness of the control laws.
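
For context, the classical artificial-potential-field update, an attractive well at the target plus repulsive barriers near obstacles, looks like the sketch below; the ghost-wall and DOT constructions proposed in the paper are not reproduced here, and all gains are illustrative:

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=10.0, rho0=2.0, step=0.01):
    """One gradient-descent step on a classical artificial potential field:
    quadratic attraction to the goal, repulsion active within range rho0."""
    grad = k_att * (q - goal)                     # gradient of 0.5*k*|q-goal|^2
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0 < d < rho0:                          # gradient of 0.5*k*(1/d - 1/rho0)^2
            grad += k_rep * (1.0 / rho0 - 1.0 / d) * (q - obs) / d**3
    return q - step * grad

q = np.array([0.0, 0.0])
goal = np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]
for _ in range(3000):
    q = apf_step(q, goal, obstacles)
print(q.round(2))                  # moves toward the goal while skirting the obstacle
```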

Keywords: artificial potential fields, 3-trailer systems, motion planning, posture

Procedia PDF Downloads 412
2958 ANAC-id - Facial Recognition to Detect Fraud

Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira

Abstract:

This article presents a case study of ANAC-id, developed at the National Civil Aviation Agency (ANAC) in Brazil. ANAC-id is an artificial intelligence algorithm for image analysis that recognizes standard images of an unobstructed, upright face without sunglasses, making it possible to identify potential inconsistencies. It combines the YOLO architecture with three Python libraries (face recognition, face comparison, and deepface), providing robust analysis with a high level of accuracy.
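
Of the libraries mentioned, deepface exposes a one-call verification API; a minimal sketch (the file paths are placeholders, not ANAC data):

```python
from deepface import DeepFace  # pip install deepface

# Compare a submitted document photo against a reference image.
# The file names below are placeholders for illustration only.
result = DeepFace.verify(img1_path="submitted_photo.jpg",
                         img2_path="reference_photo.jpg")

print(result["verified"], result["distance"])  # match flag and embedding distance
```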

Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision

Procedia PDF Downloads 142
2957 Reservoir Potential, Net Pay Zone and 3D Modeling of Cretaceous Clastic Reservoir in Eastern Sulieman Belt Pakistan

Authors: Hadayat Ullah, Pervez Khalid, Saad Ahmed Mashwani, Zaheer Abbasi, Mubashir Mehmood, Muhammad Jahangir, Ehsan ul Haq

Abstract:

The aim of this study is to explore subsurface structures through seismic survey data and to delineate reservoir characteristics through petrophysical analysis. The Ghazij Shale of Eocene age is the regional seal rock in this field. In this research work, 3D property models of the subsurface were prepared in the Petrel software to identify the various lithologies and the distribution of reservoir fluids throughout the field. The 3D static modeling gives a better picture of the distribution of discrete and continuous properties in the field; this model helps in understanding the reservoir properties and in enhancing production by selecting the best locations for future drilling. A complete workflow is proposed for formation evaluation, electrofacies modeling, and structural interpretation of the subsurface geology. Based on wireline logs, the thickness of the Pab Sandstone is interpreted to vary from 250 m to 350 m across the study area. The sandstone is massive, with high porosity and intercalated shale layers. Faulted anticlinal structures, favorable for the accumulation of hydrocarbons, are present in the study area. 3D structural models and various seismic attribute models were prepared to analyze the reservoir character of this clastic reservoir. Based on wireline logs and seismic data, clean sand, shaly sand, and shale are the dominant facies in the study area; the clean sand facies are the most favorable candidate for the net pay zone.

Keywords: cretaceous, pab sandstone, petrophysics, electrofacies, hydrocarbon

Procedia PDF Downloads 127
2956 Finite Element Model to Evaluate Gas Coning Phenomenon in Naturally Fractured Oil Reservoirs

Authors: Reda Abdel Azim

Abstract:

The gas coning phenomenon is considered one of the prevalent issues in oil field applications, as it significantly affects the amount of produced oil, increases the cost of production operations, and has a direct effect on the recovery efficiency of oil reservoirs. Therefore, evaluating this phenomenon and studying the reservoir mechanisms that may strongly drive gas invasion into the producing formation is crucial. Gas coning results from an imbalance between the two major forces controlling oil production, gravitational and viscous forces, especially in naturally fractured reservoirs, where capillary pressure forces are negligible. Once gas invades the producing formation near the wellbore due to a high oil production rate, the gas-oil contact moves, and such reservoirs become prone to gas coning. Moreover, producing the expected oil volume requires long horizontal perforated wells. This work presents a numerical simulation study to predict, and propose solutions to, gas coning in naturally fractured oil reservoirs. The simulation work is based on discrete fracture and permeability tensor approaches. The governing equations are discretized using a finite element approach, and the Galerkin least-squares (GLS) technique is employed to stabilize the equation solutions. The developed simulator is validated against Eclipse-100 using horizontal fractures. The matrix and fracture properties are modeled, and the critical rate, breakthrough time, and GOR are determined and used to investigate the effect of matrix and fracture properties on gas coning. Results show that the fracture distribution, in terms of diverse dips and azimuths, has a great effect on the occurrence of coning; fracture porosity, anisotropy ratio, and fracture aperture also play a role.

Keywords: gas coning, finite element, fractured reservoirs, multiphase

Procedia PDF Downloads 185
2955 Relevant LMA Features for Human Motion Recognition

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
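
Feature selection with a Random Forest typically ranks features by impurity-based importance and keeps the strongest; a generic sketch on synthetic data standing in for the LMA descriptor vector:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))                 # 20 candidate LMA-style features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # only features 0 and 3 matter here

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(forest.feature_importances_)[::-1]
keep = ranked[:5]                              # retain the top-ranked features
print(keep, forest.feature_importances_[keep].round(3))
```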

Keywords: discriminative LMA features, features reduction, human motion recognition, random forest

Procedia PDF Downloads 180
2954 The Application of Data Mining Technology in Building Energy Consumption Data Analysis

Authors: Liang Zhao, Jili Zhang, Chongquan Zhong

Abstract:

Energy consumption data, in particular those involving public buildings, are impacted by many factors: the building structure, climate and environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods of data analysis are insufficient. This paper examines data mining technology and its application to the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and directions for future studies are given.

Keywords: data mining, data analysis, prediction, optimization, building operational performance

Procedia PDF Downloads 835
2953 Radar-Based Classification of Pedestrian and Dog Using High-Resolution Raw Range-Doppler Signatures

Authors: C. Mayr, J. Periya, A. Kariminezhad

Abstract:

In this paper, we develop a learning framework for the classification of vulnerable road users (VRUs) by their range-Doppler signatures. The frequency-modulated continuous-wave (FMCW) radar raw data is first pre-processed to obtain robust object range-Doppler maps per coherent time interval. The complex-valued range-Doppler maps captured from our outdoor measurements are then fed into a convolutional neural network (CNN) that learns the classification; this CNN has gone through a hyperparameter optimization process for improved learning. By learning VRU range-Doppler signatures, the three classes 'pedestrian', 'dog', and 'noise' are classified with an average accuracy of almost 95%. Interestingly, this classification accuracy holds for combined longitudinal and lateral object trajectories.
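
A minimal stand-in for such a classifier is sketched below in PyTorch; the 64x64 map size, the magnitude-only input, and the layer sizes are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Tiny CNN over range-Doppler magnitude maps; the input shape and channel
# counts are illustrative placeholders.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 3),       # classes: pedestrian, dog, noise
)

# Complex-valued range-Doppler maps (random stand-ins for measurements)
rd_complex = torch.complex(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64))
logits = model(rd_complex.abs())      # feed the magnitude of the complex map
print(logits.shape)                   # torch.Size([8, 3])
```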

Keywords: machine learning, radar, signal processing, autonomous driving

Procedia PDF Downloads 225
2952 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times; these repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, the method's computational efficiency is complemented by its simplicity and portability: written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insight into the impact of triangle orientation on scan-line-based voxelization methods and aids in understanding how the Gap Detection technique improves results by targeting the specific areas where simple scan-line-based methods fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods; by addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization, combining computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
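
As a two-dimensional analogue of the equidistant scan-line traversal, the sketch below rasterizes a triangle by sweeping parallel lines and filling between edge intersections, visiting each cell once; the full 3D voxelization and Gap Detection logic of the paper are not reproduced:

```python
import numpy as np

def scanline_fill(tri, step=1.0):
    """Fill a 2D triangle with equidistant horizontal scan-lines.
    Returns the set of (ix, iy) cells covered, each visited exactly once."""
    cells = set()
    (x0, y0), (x1, y1), (x2, y2) = sorted(tri, key=lambda p: p[1])
    for y in np.arange(np.ceil(y0), y2 + 1e-9, step):
        xs = []
        for (ax, ay), (bx, by) in (((x0, y0), (x1, y1)),
                                   ((x1, y1), (x2, y2)),
                                   ((x0, y0), (x2, y2))):
            if min(ay, by) <= y <= max(ay, by) and ay != by:
                # x-coordinate where the scan-line crosses this edge
                xs.append(ax + (bx - ax) * (y - ay) / (by - ay))
        if len(xs) >= 2:
            for x in np.arange(np.ceil(min(xs)), max(xs) + 1e-9, step):
                cells.add((int(x), int(y)))
    return cells

print(len(scanline_fill([(0, 0), (10, 2), (4, 9)])))
```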

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 55
2951 A Time-Reducible Approach to Compute Determinant |I-X|

Authors: Wang Xingbo

Abstract:

Computation of determinants of the form |I-X| is primary and fundamental because it can help to compute many other determinants. This article puts forward a time-reducible approach to computing the determinant |I-X|. The approach is derived from Newton's identity, and its time complexity is no more than that of computing the eigenvalues of the square matrix X. Mathematical deductions and a numerical example are presented in detail. Comparison with classical approaches shows the new approach to be superior, and it naturally reduces the computational time as the efficiency of computing the eigenvalues of the square matrix improves.
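
The identity the approach rests on can be checked directly: det(I - X) = prod_i (1 - lambda_i), where lambda_i are the eigenvalues of X. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 6))

eig = np.linalg.eigvals(X)                    # eigenvalues of X
det_via_eig = np.prod(1.0 - eig).real         # det(I - X) = prod_i (1 - lambda_i)
det_direct = np.linalg.det(np.eye(6) - X)     # classical direct computation

print(det_via_eig, det_direct)                # agree to floating-point accuracy
```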

Keywords: algorithm, determinant, computation, eigenvalue, time complexity

Procedia PDF Downloads 401
2950 Structural Analysis of an Active Morphing Wing for Enhancing UAV Performance

Authors: E. Kaygan, A. Gatto

Abstract:

A numerical study of a design concept for actively controlling wing twist is described in this paper. The concept consists of morphing elements designed to provide a rigid and seamless skin while maintaining structural rigidity. The wing structure is first modeled in CATIA V5 and then imported into ANSYS for structural analysis. The Athena Vortex Lattice method (AVL) is used to estimate the aerodynamic response and the aerodynamic loads of the morphing wings, after which a structural optimization is performed via ANSYS Static. Overall, the results presented in this paper show that the concept provides efficient wing twist while preserving an aerodynamically smooth and compliant surface. Sufficient structural rigidity in bending is also obtained. This concept is suggested as a possible alternative for morphing skin applications.

Keywords: aircraft, morphing, skin, twist

Procedia PDF Downloads 381
2949 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and the system's performance. The system further uses a relative index as a confidence measure in cases where the GMM and VQ identifications contradict each other. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
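
A bare-bones version of the MFCC-plus-GMM pipeline (without the VAD, VQ, and confidence-measure stages) can be sketched as follows; the file paths are placeholders:

```python
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_model(wav_path, n_components=16):
    """Fit a GMM (via EM) to the MFCC frames of one speaker's training audio."""
    y, sr = librosa.load(wav_path, sr=None)                 # placeholder path
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T    # frames x 13
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(mfcc)

def identify(wav_path, models):
    """Closed-set identification: pick the model with the highest log-likelihood."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    return max(models, key=lambda name: models[name].score(mfcc))

# models = {"speaker_a": train_speaker_model("a_train.wav"), ...}
# print(identify("unknown.wav", models))
```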

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 294
2948 Graphene Materials for Efficient Hybrid Solar Cells: A Spectroscopic Investigation

Authors: Mohammed Khenfouch, Fokotsa V. Molefe, Bakang M. Mothudi

Abstract:

Nowadays, graphene and its composites are universally known as promising materials. They show their potential in a large range of applications, including photovoltaics. This study reports on the role of nanohybrids and nanosystems known as strong light harvesters in the efficiency of graphene hybrid solar cells. Our system comprised Graphene/ZnO/Porphyrin/P3HT layers. Moreover, the physical properties, including surface/interface, optical, and vibrational properties, were also studied. Our investigations confirmed the interaction between the different components as well as the sensitivity of their photonics to the synthesis conditions. Remarkable energy and charge transfer were detected and investigated in depth. Hence, optimization of the conditions will lead to the fabrication of graphene solar cells with higher conversion efficiency.

Keywords: graphene, optoelectronics, nanohybrids, solar cells

Procedia PDF Downloads 156