Search results for: Artificial Bee Colony algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5420

1040 Revolutionizing Autonomous Trucking Logistics with Customer Relationship Management Cloud

Authors: Sharda Kumari, Saiman Shetty

Abstract:

Autonomous trucking is just one of the numerous significant shifts impacting fleet management services. The Society of Automotive Engineers (SAE) has defined six levels of vehicle automation that have been adopted internationally, including by the United States Department of Transportation. On public highways in the United States, organizations are testing driverless vehicles with at least Level 4 automation, which indicates that a human is present in the vehicle and can disable the automation; this is usually done while the trucks are not engaged in highway driving. However, completely driverless vehicles are presently being tested in the state of California. While autonomous trucking can increase safety, decrease trucking costs, provide solutions to trucker shortages, and improve efficiencies, logistics, too, requires advancements to keep up with trucking innovations. Given that artificial intelligence, machine learning, and automated procedures enable people in other sectors to do their duties with fewer resources, CRM (Customer Relationship Management) can be applied to the autonomous trucking business to provide the same level of efficiency. In a society witnessing significant digital disruptions, fleet management is likewise being transformed by technology. Utilizing strategic alliances to enhance core services is an effective technique for capitalizing on innovations and delivering enhanced services. Applying analytics to CRM systems improves cost control of fuel strategy, fleet maintenance, driver behavior, route planning, road safety compliance, and capacity utilization. Integration of autonomous trucks with automated fleet management, yard/terminal management, and customer service is possible, and thus has significant power to redraw the lines between the public and private spheres in autonomous trucking logistics.

Keywords: autonomous vehicles, customer relationship management, customer experience, autonomous trucking, digital transformation

Procedia PDF Downloads 91
1039 Real-Time Path Planning for Unmanned Air Vehicles Using Improved Rapidly-Exploring Random Tree and Iterative Trajectory Optimization

Authors: A. Ramalho, L. Romeiro, R. Ventura, A. Suleman

Abstract:

A real-time path planning framework for Unmanned Air Vehicles, and in particular multi-rotors, is proposed. The framework is designed to provide feasible trajectories from the current UAV position to a goal state, taking into account constraints such as obstacle avoidance, problem kinematics, and vehicle limitations such as maximum speed and maximum acceleration. The framework computes feasible paths online, allowing the vehicle to avoid new, unknown, dynamic obstacles without fully re-computing the trajectory. These features are achieved using an iterative process in which the robot computes and optimizes the trajectory while performing the mission objectives. A first trajectory is computed using a modified Rapidly-Exploring Random Tree (RRT) algorithm that provides trajectories respecting a maximum curvature constraint. The trajectory optimization is accomplished using the Interior Point Optimizer (IPOPT) as a solver. The framework has proven able to compute a trajectory and optimize it to a local optimum with a computational efficiency that makes it feasible for real-time operations.
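As a rough illustration of the sampling-based first stage, the sketch below grows a plain 2-D RRT toward a goal. The obstacle test, step size, and workspace bounds are placeholder assumptions, and the maximum-curvature steering and IPOPT smoothing described in the abstract are not reproduced here.

```python
import numpy as np

def rrt(start, goal, is_free, bounds, step=0.5, max_iters=2000, goal_tol=0.5):
    """Minimal 2-D RRT: repeatedly extend the tree toward random samples."""
    nodes = [np.asarray(start, float)]
    parents = [-1]
    for _ in range(max_iters):
        sample = np.random.uniform(bounds[0], bounds[1])
        nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[nearest]
        new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not is_free(new):          # skip extensions that hit an obstacle
            continue
        nodes.append(new)
        parents.append(nearest)
        if np.linalg.norm(new - goal) < goal_tol:   # goal reached: backtrack the path
            path, i = [], len(nodes) - 1
            while i != -1:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

# toy usage: one circular obstacle of radius 1.5 centred at (5, 5)
free = lambda p: np.linalg.norm(p - np.array([5.0, 5.0])) > 1.5
path = rrt([0, 0], np.array([9.0, 9.0]), free,
           (np.array([0.0, 0.0]), np.array([10.0, 10.0])))
print("path length:", None if path is None else len(path))
```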

Keywords: interior point optimization, multi-rotors, online path planning, rapidly exploring random trees, trajectory optimization

Procedia PDF Downloads 129
1038 Humans, Social Robots, and Mutual Love: An Application of Aristotle’s Nicomachean Ethics

Authors: Ruby Jean Hornsby

Abstract:

In our rapidly advancing techno-moral world, human-robot relationships are increasingly becoming a part of intimate human life. Indeed, social robots - that is, autonomous or semi-autonomous embodied artificial agents that generally possess human or animal-like qualities (such as responding to environmental stimuli, communicating, learning, performing human tasks, and making autonomous decisions) - have been designed to function as human friends. In light of such advances, immediate philosophical scrutiny is imperative in order to examine the extent to which human-robot interactions constitute genuine friendship and therefore contribute towards the good human life. Aristotle's conception of friendship is philosophically illuminating and sufficiently broad in scope to guide such analysis. On his account, it is necessary (though not sufficient) that for a friendship to exist between two agents - A and B - both agents must have a mutual love for one another. Aristotle claims that A loves B if: Condition 1: A desires those apparent good (qua pleasant, useful, or virtuous) properties attributable to B, and Condition 2: A has goodwill (wishes what is best) for B. This paper argues that human-robot interaction can (and does) successfully meet both conditions; as such, it demonstrates that robots and humans can reciprocally love one another. It will argue for this position by first justifying the claim that a human can desire apparent good features attributable to a robot (i.e., by taking them to be pleasant and/or useful) and outlining how it is that a human can wish a robot well in light of that robot's (quasi-) interests. Next, the paper will argue that a robot can (quasi-)desire certain properties that are attributable to a human before elucidating how it is possible for a robot to act in the interests of a human. Accordingly, this paper will conclude that it is already the case that humans can formulate relationships with robots that involve reciprocated love. This is significant because it suggests that social robots are candidates for human friendship and can therefore contribute toward flourishing human futures.

Keywords: ancient philosophy, friendship, inter-disciplinary applied ethics, love, social robotics

Procedia PDF Downloads 94
1037 Development of a Shape Based Estimation Technology Using Terrestrial Laser Scanning

Authors: Gichun Cha, Byoungjoon Yu, Jihwan Park, Minsoo Park, Junghyun Im, Sehwan Park, Sujung Sin, Seunghee Park

Abstract:

The goal of this research is to estimate structural shape change using terrestrial laser scanning. The study develops a data reduction and shape change estimation algorithm for large-capacity scan data. The point cloud of the scan data is converted to voxels and sampled. A shape estimation technique is studied to detect changes in structure patterns, such as skyscrapers, bridges, and tunnels, based on large point cloud data. The point cloud analysis applies the octree data structure to speed up the post-processing for change detection. The point cloud data serve as representative values of shape information and are used as a model for detecting point cloud changes within the data structure. The aim of the shape estimation model is to develop a technology that can detect not only gradual but also immediate structural changes in the event of disasters such as earthquakes, typhoons, and fires, thereby preventing major accidents caused by aging and disasters. The study is expected to improve the efficiency of structural health monitoring and maintenance.
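A minimal sketch of the voxel-based point cloud reduction step is given below: points are binned into voxels and each occupied voxel is replaced by its centroid. The voxel size and the random cloud are assumptions for illustration; the octree indexing and the change-detection logic of the paper are not reproduced.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Replace all points falling in each occupied voxel with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)       # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)   # voxel id per point
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# toy usage: 100k random points reduced to voxel centroids
cloud = np.random.rand(100_000, 3)
reduced = voxel_downsample(cloud, voxel_size=0.1)
print(cloud.shape, "->", reduced.shape)
```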

Keywords: terrestrial laser scanning, point cloud, shape information model, displacement measurement

Procedia PDF Downloads 223
1036 Offline Signature Verification Using Minutiae and Curvature Orientation

Authors: Khaled Nagaty, Heba Nagaty, Gerard McKee

Abstract:

A signature is a behavioral biometric that is used for authenticating users in most financial and legal transactions. Signatures can be easily forged by skilled forgers. Therefore, it is essential to verify whether a signature is genuine or forged. The aim of any signature verification algorithm is to accommodate the differences between signatures of the same person and increase the ability to discriminate between signatures of different persons. The work presented in this paper proposes an automatic signature verification system to indicate whether a signature is genuine or not. The system comprises four phases: (1) the pre-processing phase, in which image scaling, binarization, image rotation, dilation, thinning, and connecting ridge breaks are applied; (2) the feature extraction phase, in which global and local features are extracted (the local features are minutiae points, curvature orientation, and curve plateau; the global features are signature area, signature aspect ratio, and Hu moments); (3) the post-processing phase, in which false minutiae are removed; and (4) the classification phase, in which features are enhanced before feeding them into the classifier. k-nearest neighbors and support vector machines are used. The classifier was trained on a benchmark dataset to compare the performance of the proposed offline signature verification system against the state-of-the-art. The accuracy of the proposed system is 92.3%.
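A minimal sketch of the final classification phase is shown below, training both classifiers named in the abstract on a placeholder feature matrix. The preprocessing and minutiae/curvature feature extraction are not reproduced, and the feature dimensions and labels are assumed for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# X: one row of global + local features per signature (e.g. area, aspect ratio,
# Hu moments, minutiae/curvature statistics); y: 1 = genuine, 0 = forged.
X = np.random.rand(400, 12)          # placeholder feature matrix
y = np.random.randint(0, 2, 400)     # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```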

Keywords: signature, ridge breaks, minutiae, orientation

Procedia PDF Downloads 136
1035 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment can take months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skillset required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 198
1034 Analysis of Brain Signals Using Neural Networks Optimized by Co-Evolution Algorithms

Authors: Zahra Abdolkarimi, Naser Zourikalatehsamad

Abstract:

Until about 40 years ago, after the recognition of epilepsy, it was generally believed that epileptic attacks occurred randomly and suddenly. However, thanks to advances in mathematics and engineering, such attacks can now be predicted within a few minutes or hours, and various algorithms for long-term prediction of the time and frequency of the first attack have been presented. In this paper, considering the nonlinear and dynamic nature of recorded brain signals, an ANFIS model is presented to predict the brain signals, since, according to the physiological structure of attack onset, more complex neural structures can better model the signal during attacks. The contribution of this work is a co-evolution algorithm for the optimization of the ANFIS network parameters. Our objective is to predict brain signals based on time series obtained from the brain signals of people suffering from epilepsy using ANFIS. Results reveal that, compared to other methods, this method is less sensitive to uncertainties such as the presence of noise and interruptions in the recorded brain signals, and is more accurate. The long-term prediction capacity of the model illustrates the potential of implanted systems for warning, medication, and seizure prevention.

Keywords: co-evolution algorithms, brain signals, time series, neural networks, ANFIS model, physiologic structure, time prediction, epilepsy suffering, illustrates model

Procedia PDF Downloads 264
1033 Design and Implementation of a Lab Bench for Synthetic Aperture Radar Imaging System

Authors: Karthiyayini Nagarajan, P. V. RamaKrishna

Abstract:

Radar imaging techniques provide extensive applications in the field of remote sensing, most notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward an effective and realizable signal generation and processing scheme for SAR images. The major units in the system include the camera, signal generation unit, signal processing unit, and display screen. The real radio channel is replaced by its mathematical model, based on an optical image, to calculate a reflected signal model in real time. The signal generation unit realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time, to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.
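The matched filtering (range compression) step can be illustrated with the short sketch below: a linear FM chirp is correlated with a simulated echo, and the correlation peak marks the target delay. All signal parameters and target positions are illustrative assumptions, not values from the lab bench.

```python
import numpy as np

fs = 100e6              # sampling rate [Hz]
T = 10e-6               # chirp duration [s]
B = 30e6                # chirp bandwidth [Hz]
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)        # transmitted LFM pulse

# Simulated echo: two point targets at different delays and amplitudes, plus noise
rx = np.zeros(4096, dtype=complex)
for delay, amp in [(400, 1.0), (950, 0.6)]:
    rx[delay:delay + len(chirp)] += amp * chirp
rx += 0.05 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

# Matched filtering: correlate the echo with the reference chirp (range compression)
corr = np.abs(np.correlate(rx, chirp, mode="full"))
strongest_delay = int(np.argmax(corr)) - (len(chirp) - 1)
print("strongest echo at delay ~", strongest_delay, "samples")   # expect ~400
```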

Keywords: synthetic aperture radar, radio reflection model, lab bench

Procedia PDF Downloads 456
1032 Stock Market Prediction Using Convolutional Neural Network That Learns from a Graph

Authors: Mo-Se Lee, Cheol-Hwi Ahn, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images, has been popularly applied to classification and prediction problems in various fields. In this study, we try to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. Specifically, we propose to apply CNN as a binary classifier that predicts the stock market direction (up or down) by using a graph as its input. That is, our proposal is to build a machine learning algorithm that mimics a person who looks at the graph and predicts whether the trend will go up or down. Our proposed model consists of four steps. In the first step, it divides the dataset into intervals of 5, 10, 15, and 20 days. It then creates graphs for each interval in step 2. In the next step, CNN classifiers are trained using the graphs generated in the previous step. In step 4, it optimizes the hyperparameters of the trained model by using the validation dataset. To validate our model, we will apply it to the prediction of KOSPI200 over 1,986 days in eight years (from 2009 to 2016). The experimental dataset will include 14 technical indicators, such as CCI, Momentum, and ROC, and the daily closing price of KOSPI200 of the Korean stock market.
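A minimal sketch of such a graph-image binary classifier is given below in PyTorch. The network size, the 64x64 grayscale input, and the random batch are assumptions for illustration and do not reflect the architecture actually used in the study.

```python
import torch
import torch.nn as nn

class TrendCNN(nn.Module):
    """Small CNN that classifies a chart image as 'up' (1) or 'down' (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):                 # x: (batch, 1, 64, 64) grayscale charts
        return self.classifier(self.features(x))

model = TrendCNN()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# toy batch: random "chart images" and up/down labels
x = torch.rand(8, 1, 64, 64)
y = torch.randint(0, 2, (8,))
loss = loss_fn(model(x), y)
optim.zero_grad(); loss.backward(); optim.step()
print("training loss:", loss.item())
```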

Keywords: convolutional neural network, deep learning, Korean stock market, stock market prediction

Procedia PDF Downloads 417
1031 A Fuzzy Analytic Hierarchy Process Approach for the Decision of Maintenance Priorities of Building Entities: A Case Study in a Facilities Management Company

Authors: Wai Ho Darrell Kwok

Abstract:

Building entities are valuable assets of a society; however, all of them suffer from the ravages of weather and time. Carrying out onerous maintenance activities is the only way to maintain or enhance the value and contemporary standard of the premises. However, the maintenance budget is always bounded by a corresponding threshold limit. In order to optimize the allocation of limited resources in carrying out maintenance, there is a substantial need to prioritize maintenance work. This paper reveals the application of Fuzzy AHP in a facilities management company for determining maintenance priorities on the basis of predetermined criteria, viz., Building Status (BS), Effects on Fabrics (EF), Effects on Sustainability (ES), Effects on Users (EU), Importance of Usage (IU), and Physical Condition (PC), in dealing with eight categorized predominant building component maintenance aspects for building premises. From the case study, it is found that ‘building exterior repainting or re-tiling’, ‘spalling concrete repair works in exterior areas’ and ‘lobby renovation’ are the top three maintenance priorities according to the facilities manager and maintenance expert personnel. Through the application of Fuzzy AHP to the maintenance priority decision, more systematic and easily compared scalar priority factors can be explored, even when considering other multi-criteria decision scenarios of the building maintenance issue.
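For reference, the core of (crisp) AHP prioritization is the principal eigenvector of a pairwise comparison matrix; a minimal sketch over the six stated criteria is given below with purely illustrative Saaty-scale judgments. The fuzzy extension (triangular judgments and defuzzification) actually used in the paper is not reproduced.

```python
import numpy as np

# Pairwise comparison matrix for the six criteria (BS, EF, ES, EU, IU, PC);
# the entries are illustrative judgments, not the case-study values.
A = np.array([
    [1,   3,   5,   1,   1/3, 1],
    [1/3, 1,   3,   1/3, 1/5, 1/3],
    [1/5, 1/3, 1,   1/5, 1/7, 1/5],
    [1,   3,   5,   1,   1/3, 1],
    [3,   5,   7,   3,   1,   3],
    [1,   3,   5,   1,   1/3, 1],
], dtype=float)

# Criterion weights = normalized principal eigenvector
vals, vecs = np.linalg.eig(A)
k = np.argmax(np.real(vals))
w = np.real(vecs[:, k])
w = w / w.sum()

# Consistency ratio (random index RI = 1.24 for n = 6)
lam_max = np.real(vals[k])
ci = (lam_max - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), " CR:", round(ci / 1.24, 3))
```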

Keywords: building maintenance, fuzzy AHP, maintenance priority, multi-criteria decision making

Procedia PDF Downloads 229
1030 Using Closed Frequent Itemsets for Hierarchical Document Clustering

Authors: Cheng-Jhe Lee, Chiun-Chieh Hsu

Abstract:

Due to the rapid development of the Internet and the increased availability of digital documents, the excessive information on the Internet has led to the information overflow problem. In order to solve this problem for effective information retrieval, document clustering in text mining has become a popular research topic. Clustering is the unsupervised classification of data items into groups without the need for training data. Many conventional document clustering methods perform inefficiently for large document collections because they were originally designed for relational databases; therefore, they are impractical in real-world document clustering and require special handling for high dimensionality and high volume. We propose the FIHC (Frequent Itemset-based Hierarchical Clustering) method, a hierarchical clustering method developed for document clustering, where the intuition of FIHC is that there exist some common words for each cluster. FIHC uses such words to cluster documents and builds a hierarchical topic tree. In this paper, we combine the FIHC algorithm with an ontology to solve the semantic problem and mine the meaning behind the words in documents. Furthermore, we use closed frequent itemsets instead of all frequent itemsets, which increases efficiency and scalability. The experimental results show that our method is more accurate than well-known document clustering algorithms.
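The sketch below illustrates the notion of closed frequent itemsets on toy "documents": term itemsets are mined with apriori and any itemset that has a proper superset with the same support is discarded. It assumes the mlxtend library and toy term sets; the FIHC tree construction and the ontology step are not reproduced.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

# toy "documents" represented by their key terms (placeholder data)
docs = [
    {"cluster", "frequent", "itemset"},
    {"cluster", "frequent", "itemset", "ontology"},
    {"ontology", "semantic", "mining"},
    {"semantic", "mining", "cluster"},
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(docs).transform(docs), columns=te.columns_)
freq = apriori(onehot, min_support=0.5, use_colnames=True)

# Keep only CLOSED itemsets: no proper superset has the same support
closed = [
    row for _, row in freq.iterrows()
    if not any((row["itemsets"] < other["itemsets"]) and
               (row["support"] == other["support"])
               for _, other in freq.iterrows())
]
for row in closed:
    print(set(row["itemsets"]), "support:", row["support"])
```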

Keywords: FIHC, documents clustering, ontology, closed frequent itemset

Procedia PDF Downloads 381
1029 Controller Design for Highly Maneuverable Aircraft Technology Using Structured Singular Value and Direct Search Method

Authors: Marek Dlapa

Abstract:

The algebraic approach is applied to the control of the HiMAT (Highly Maneuverable Aircraft Technology). The objective is to find a robust controller which guarantees robust stability and decoupled control of the longitudinal model of a scaled, remotely controlled vehicle version of the advanced fighter HiMAT. Control design is performed by decoupling the nominal MIMO (multi-input multi-output) system into two identical SISO (single-input single-output) plants which are approximated by a 4th order transfer function. The algebraic approach is then used for pole placement design, and the nominal closed-loop poles are tuned so that the peak of the µ-function is minimal. As an optimization tool, the evolutionary algorithm Differential Migration is used in order to overcome the multimodality of the cost function, yielding a simple controller with decoupling for the nominal plant. The controller is compared with the D-K iteration through simulations of standard longitudinal manoeuvres, documenting the decoupled control obtained from the algebraic approach for the nominal plant as well as for the worst-case perturbation.
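As a stand-in for the evolutionary tuning step, the sketch below uses SciPy's differential evolution (not the Differential Migration algorithm used in the paper) to minimize a placeholder surrogate of the µ-peak over candidate closed-loop pole locations; the cost function, bounds, and target poles are purely illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def mu_peak_surrogate(poles):
    """Placeholder cost standing in for the peak of the mu-function evaluated
    for a closed loop with the given nominal pole locations (assumed surrogate)."""
    target = np.array([-2.0, -3.5, -5.0, -8.0])
    return np.sum((np.sort(poles) - target) ** 2) + 0.1 * np.sum(np.abs(poles))

bounds = [(-10.0, -0.5)] * 4          # search nominal closed-loop poles in the LHP
result = differential_evolution(mu_peak_surrogate, bounds, seed=1,
                                maxiter=200, tol=1e-8, polish=True)
print("tuned poles:", np.round(result.x, 3), " cost:", round(result.fun, 4))
```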

Keywords: algebraic approach, evolutionary computation, genetic algorithms, HiMAT, robust control, structured singular value

Procedia PDF Downloads 129
1028 Real-Time Classification of Hemodynamic Response by Functional Near-Infrared Spectroscopy Using an Adaptive Estimation of General Linear Model Coefficients

Authors: Sahar Jahani, Meryem Ayse Yucel, David Boas, Seyed Kamaledin Setarehdan

Abstract:

Near-infrared spectroscopy allows monitoring of oxy- and deoxy-hemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is usually affected by natural physiological hemodynamics (systemic interference), which occurs in all body tissues, including brain tissue. This makes HRF extraction a very challenging task. In this study, we used a Kalman filter based on a general linear model (GLM) of brain activity to determine the proportion of systemic interference in the brain hemodynamics. The performance of the proposed algorithm is evaluated in terms of the peak-to-peak error (Ep), mean square error (MSE), and Pearson’s correlation coefficient (R2) between the estimated and the simulated hemodynamic responses. This technique is also capable of real-time estimation of single-trial functional activations, as it was applied to classify finger tapping versus the resting state. The average real-time classification accuracy of 74% over 11 subjects demonstrates the feasibility of developing an effective functional near-infrared spectroscopy based brain-computer interface (fNIRS-BCI).
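A minimal sketch of recursively estimating GLM coefficients with a Kalman filter (random-walk state model) is shown below; the regressors, noise variances, and data are illustrative assumptions rather than the fNIRS design actually used in the study.

```python
import numpy as np

def kalman_glm(y, X, q=1e-5, r=1e-2):
    """Recursively estimate GLM coefficients beta_t for y_t = x_t.beta_t + noise.
    X: (T, p) design matrix (e.g. HRF regressor plus drift/systemic regressors)."""
    T, p = X.shape
    beta = np.zeros(p)
    P = np.eye(p)
    history = np.zeros((T, p))
    for t in range(T):
        x = X[t]
        P = P + q * np.eye(p)                         # predict (random-walk model)
        S = x @ P @ x + r                             # innovation variance
        K = (P @ x) / S                               # Kalman gain
        beta = beta + K * (y[t] - x @ beta)           # update coefficients
        P = P - np.outer(K, x) @ P                    # update covariance
        history[t] = beta
    return history

# toy usage: one "HRF-like" regressor plus a slow drift regressor (assumed design)
T = 500
X = np.column_stack([np.sin(np.linspace(0, 10, T)), np.linspace(0, 1, T)])
y = X @ np.array([0.8, 0.3]) + 0.05 * np.random.randn(T)
betas = kalman_glm(y, X)
print("final coefficient estimate:", np.round(betas[-1], 3))
```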

Keywords: hemodynamic response function, functional near-infrared spectroscopy, adaptive filter, Kalman filter

Procedia PDF Downloads 142
1027 AI-Assisted Business Chinese Writing: Comparing the Textual Performances Between Independent Writing and Collaborative Writing

Authors: Stephanie Liu Lu

Abstract:

With the proliferation of artificial intelligence tools in the field of education, it is crucial to explore their impact on language learning outcomes. This paper examines the use of AI tools, such as ChatGPT, in practical writing within business Chinese teaching to investigate how AI can enhance practical writing skills and teaching effectiveness. The study involved third and fourth-year university students majoring in accounting and finance from a university in Hong Kong within the context of a business correspondence writing class. Students were randomly assigned to a control group, who completed business letter writing independently, and an experimental group, who completed the writing with the assistance of AI. In the latter, the AI-assisted business letters were initially drafted by the students issuing commands and interacting with the AI tool, followed by the students' revisions of the draft. The paper assesses the performance of both groups in terms of grammatical expression, communicative effect, and situational awareness. Additionally, the study collected dialogue texts from interactions between students and the AI tool to explore factors that affect text generation and the potential impact of AI on enhancing students' communicative and identity awareness. By collecting and comparing textual performances, it was found that students assisted by AI showed better situational awareness, as well as more skilled organization and grammar. However, the research also revealed that AI-generated articles frequently lacked a proper balance of identity and writing purpose due to limitations in students' communicative awareness and expression during the instruction and interaction process. Furthermore, the revision of drafts also tested the students' linguistic foundation, logical thinking abilities, and practical workplace experience. Therefore, integrating AI tools and related teaching into the curriculum is key to the future of business Chinese teaching.

Keywords: AI-assistance, business Chinese, textual analysis, language education

Procedia PDF Downloads 45
1026 Effect of pH-Dependent Surface Charge on the Electroosmotic Flow through Nanochannel

Authors: Partha P. Gopmandal, Somnath Bhattacharyya, Naren Bag

Abstract:

In this article, we have studied the effect of pH-regulated surface charge on the electroosmotic flow (EOF) through a nanochannel filled with a binary symmetric electrolyte solution. The channel wall possesses either an acidic or a basic functional group. Going beyond the widely employed Debye-Huckel linearization, we develop a mathematical model based on the Nernst-Planck equations for the charged species, the Poisson equation for the induced potential, and the Stokes equations for fluid flow. A finite volume based numerical algorithm is adopted to study the effect of key parameters on the EOF. We have computed the coupled governing equations through the finite volume method, and our results are found to be in good agreement with the analytical solution obtained from the corresponding linear model based on the low surface charge condition or a strong electrolyte solution. The influence of the surface charge density, the reaction constant of the functional groups, the bulk pH, and the concentration of the electrolyte solution on the overall flow rate is studied extensively. We find that the effect of surface charge diminishes with an increase in electrolyte concentration. In addition, for a strong electrolyte, the surface charge becomes independent of pH due to complete dissociation of the functional groups.
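For reference, a standard steady-state form of the coupled system named above (written here from textbook formulations, not copied from the paper) is:

```latex
% Nernst-Planck ion transport for each species i, Poisson equation for the potential,
% and incompressible Stokes flow driven by the electric body force:
\nabla\cdot\Big( c_i\,\mathbf{u} \;-\; D_i\nabla c_i \;-\; \frac{z_i F D_i}{RT}\, c_i \nabla\phi \Big) = 0, \qquad i = 1,2
\varepsilon\,\nabla^{2}\phi \;=\; -\,F\sum_i z_i c_i
-\nabla p \;+\; \mu\,\nabla^{2}\mathbf{u} \;-\; F\Big(\sum_i z_i c_i\Big)\nabla\phi \;=\; \mathbf{0}, \qquad \nabla\cdot\mathbf{u} = 0
```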

Keywords: electroosmosis, finite volume method, functional group, surface charge

Procedia PDF Downloads 408
1025 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had a great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and the electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides, and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy, and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in the biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and for distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas' disease and Leishmaniasis. Optimization of biosensing may include a combination of another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 321
1024 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk

Authors: Ariful Islam, Showkat Ahmad Lone

Abstract:

Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and can frequently be preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests for competing risks based on an adaptive type-I progressive hybrid censoring criterion. The life data of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum-likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by using a simulation algorithm.

Keywords: adaptive progressive hybrid censoring, competing risk, Mukherjee-Islam distribution, partially accelerated life testing, simulation study

Procedia PDF Downloads 338
1023 Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances

Authors: Violeta Damjanovic-Behrendt

Abstract:

This paper presents an approach for optimal cyber security decisions to protect instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called the Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) for formulating a subjective utility function, which is based on the evaluations of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforcement learning algorithm called Naïve Q-Learning. Naïve Q-Learning belongs to the category of active and model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game-theoretic (GT) and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform that is based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.
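A minimal sketch of a stateless ("naive") Q-learning loop for a defender choosing which target to protect is given below. The toy attacker model is a crude stand-in for the SHARP/SUQR behaviour model, and the payoffs, learning rate, and number of targets are illustrative assumptions.

```python
import numpy as np

n_targets = 4                     # assets the defender can protect (assumed)
q_table = np.zeros(n_targets)     # stateless Q-value per defended target
alpha, eps = 0.1, 0.2             # learning rate and exploration rate

def attacker_response(defended):
    """Toy attacker: prefers undefended targets (stand-in for SHARP/SUQR)."""
    probs = np.ones(n_targets)
    probs[defended] *= 0.2
    probs /= probs.sum()
    return np.random.choice(n_targets, p=probs)

def defender_payoff(defended, attacked):
    return 1.0 if defended == attacked else -1.0

for episode in range(5000):
    # epsilon-greedy choice of which target to defend
    a = np.random.randint(n_targets) if np.random.rand() < eps else int(q_table.argmax())
    attacked = attacker_response(a)
    r = defender_payoff(a, attacked)
    q_table[a] += alpha * (r - q_table[a])    # stateless Q-learning update

print("learned defense preferences:", np.round(q_table, 3))
```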

Keywords: security, internet of things, cloud computing, stackelberg game, machine learning, naive q-learning

Procedia PDF Downloads 343
1022 Optimization of Lubricant Distribution with Alternative Coordinates and Number of Warehouses Considering Truck Capacity and Time Windows

Authors: Taufik Rizkiandi, Teuku Yuri M. Zagloel, Andri Dwi Setiawan

Abstract:

Distribution and growth in the transportation and warehousing business sector decreased by 15.04%, and the sector's contribution to Gross Domestic Product (GDP) fell from 4.41% (rank 7) in 2019 to 3.81% (rank 8) in 2020. This decline in the transportation and warehousing sector's contribution to GDP has led oil and gas companies to implement an efficient supply chain strategy to ensure the availability of goods, especially lubricants. Fluctuating demand for lubricants and warehouse service time limits are essential factors taken into account in determining an efficient route. Adding depot points is one solution to ensure that the demand for lubricants is fulfilled (no stock-outs). However, adding a depot will increase operating costs and storage costs. Therefore, it is necessary to optimize the addition of depots using the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). This research case study was conducted at an oil and gas company that produces lubricants, covering 2019 to 2021. The study obtained the optimal route and the addition of a depot with a minimum additional cost. The total cost remains efficient with the addition of a depot when compared to a single depot in Jakarta.
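A building block of any CVRPTW search (including tabu search) is checking a candidate route against truck capacity and customer time windows and costing it; a minimal sketch is given below with toy travel times, demands, and windows that are purely illustrative.

```python
def route_is_feasible(route, depot, demands, tw, service, travel, capacity):
    """Check one truck route against capacity and time-window constraints.
    route: ordered customer ids; tw[c] = (earliest, latest); travel[(a, b)] = minutes."""
    if sum(demands[c] for c in route) > capacity:
        return False, None
    time, cost, prev = 0.0, 0.0, depot
    for c in route:
        time += travel[(prev, c)]
        cost += travel[(prev, c)]
        earliest, latest = tw[c]
        if time > latest:                 # arrived after the window closed
            return False, None
        time = max(time, earliest)        # wait if arriving early
        time += service[c]
        prev = c
    cost += travel[(prev, depot)]         # return to the depot
    return True, cost

# toy instance with one depot (0) and three customers
travel = {(a, b): abs(a - b) * 10 for a in range(4) for b in range(4)}
demands = {1: 4, 2: 3, 3: 5}
tw = {1: (0, 60), 2: (20, 90), 3: (30, 120)}
service = {1: 10, 2: 10, 3: 10}
print(route_is_feasible([1, 2, 3], 0, demands, tw, service, travel, capacity=15))
```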

Keywords: CVRPTW, optimal route, depot, tabu search algorithm

Procedia PDF Downloads 126
1021 Cost Analysis of Optimized Fast Network Mobility in IEEE 802.16e Networks

Authors: Seyyed Masoud Seyyedoshohadaei, Borhanuddin Mohd Ali

Abstract:

To support group mobility, the NEMO Basic Support Protocol has been standardized as an extension of Mobile IP that enables an entire network to change its point of attachment to the Internet. Using NEMO in IEEE 802.16e (WiMAX) networks causes latency in the handover procedure and affects the seamless communication of real-time applications. To decrease handover latency and service disruption time, an integrated scheme named Optimized Fast NEMO (OFNEMO) was introduced by the authors of this paper. In OFNEMO, a pre-established multi-tunnel concept, cross-function optimization, and cross-layer design are used. In this paper, an analytical model is developed to evaluate the total cost, consisting of signaling and packet delivery costs, of OFNEMO compared with RFC 3963. Results show that OFNEMO increases the probability of predictive mode compared with RFC 3963 due to its smaller handover latency. Even though OFNEMO needs extra signalling to pre-establish multiple tunnels, it has a lower total cost thanks to its optimized algorithm. OFNEMO can minimize handover latency for supporting real-time applications in moving networks.

Keywords: fast mobile IPv6, handover latency, IEEE802.16e, network mobility

Procedia PDF Downloads 182
1020 Investigation of the Factors Influencing the Construction Planning Process Using Participant Observation Method

Authors: Ashokkumar Subbiah

Abstract:

This study investigates the impact of factors that influenced the success of construction planning for a major construction project in Qatar. An approach of participant observation is adopted which is informed by the principles of ethnography: one that reports the participants’ view of their world rather than imposing an artificial theoretical framework upon it. As a participant observer, the author observed and identified key factors that had an impact on the management and execution of the construction planning. It is found that a ‘shadow culture’ exists between the project participants which, it is argued, is only observable from the perspective of an embedded participant observer. The shadow culture acts to enable the management of the planning process, and its efficacy relates to the ‘quality’ of human inter-relationships amongst immediate stakeholders. Whilst this study uses the concept of shadow culture, it is treated as both a methodological stance and one of the findings of this research in the context of the major construction project in Qatar. The concept of shadow culture is not imposed upon the findings, but instead is used as a research tool: respondents report their own worldview, and this is reported from the perspective of a participant observer in a manner that is understandable and useful to those who are not part of the construction project. The findings of this study identify similar factors influencing the planning process of the Qatar project, but the shadow culture predominantly influences these factors towards the failure of the planning process. The research concludes by questioning the assumption that construction planning is a mechanistic process that has to be conducted solely by the planning team. Instead, it is a highly social phenomenon in which the seemingly mechanistic process is made workable by the quality of relationships that exist in the project. Drawing on this, the final section provides a series of recommendations that may be helpful in enhancing the efficacy of project planning; these include better training/education at the pre-construction phase, recognition of the importance of shadow processes at management levels, and better appreciation of the impact of contract type and chosen procurement route.

Keywords: construction planning, participant observation, project participants, shadow culture

Procedia PDF Downloads 285
1019 Calculating Asphaltenes Precipitation Onset Pressure by Using Cardanol as Precipitation Inhibitor: A Strategy to Increment the Oil Well Production

Authors: Camilo A. Guerrero-Martin, Erik Montes Paez, Marcia C. K. Oliveira, Jonathan Campos, Elizabete F. Lucas

Abstract:

Asphaltene precipitation is considered a formation damage problem, which can reduce the oil recovery factor. It fouls piping and surface installations, causes serious flow assurance complications, and reduces oil well production. Therefore, researchers have shown an interest in chemical treatments to control this phenomenon. The aim of this paper is to assess the asphaltene precipitation onset of crude oils in the presence of cardanol by titrating the crude with n-heptane. Moreover, based on the results obtained at atmospheric pressure, the asphaltene precipitation onset pressures were calculated to predict asphaltene precipitation in the reservoir, using differential liberation and refractive index data of the oils. The influence of cardanol concentration on the asphaltene stabilization of three Brazilian crude oil samples (with similar API densities) was studied. Formulations of cardanol in toluene were prepared at 0, 3, 5, 10, and 15 m/m%, and each formulation was added to the crude at a 2:98 ratio. The petroleum samples were characterized by API density, elemental analysis, and differential liberation tests. The asphaltene precipitation onset (APO) was determined by titrating with n-heptane and monitoring with near-infrared (NIR) spectroscopy. UV-Vis spectroscopy experiments were also carried out to assess the precipitated asphaltene content. The asphaltene precipitation envelopes (APE) were also determined by numerical simulation (Multiflash). In addition, adequate artificial lift systems (ALS) for the oils were selected, based on the downhole well profile and a screening methodology. Finally, the oil flow rates were modelled by NODAL analysis of the production system in the PIPESIM software. The results of this study show that the asphaltene precipitation onsets of the crude oils were 2.2, 2.3, and 6.0 mL of n-heptane/g of oil. Cardanol was an effective inhibitor of asphaltene precipitation for the crude oils used in this study, since it displaces the precipitation pressure of the oil to lower values. This indicates that cardanol can increase oil well productivity.

Keywords: asphaltenes, NODAL analysis production system, precipitation pressure onset, inhibitory molecule

Procedia PDF Downloads 163
1018 The Mechanism of Design and Analysis Modeling of Performance of Variable Speed Wind Turbine and Dynamical Control of Wind Turbine Power

Authors: Mohammadreza Heydariazad

Abstract:

Growing the productivity of wind energy as a clean source requires improved strategies for the production, transmission, and management of wind resources in order to increase power quality and reduce costs. New technologies based on power converters, which change the turbine speed to suit the wind speed blowing on the turbine, improve the efficiency of power extraction from the wind. This article introduces variable speed wind turbines and power optimization, presents methods for using a superconducting inductor in the composition of the power converter, proposes DC measurement for the wind farm, and in particular considers the techniques available to them. The article reviews the mechanisms and functions of turbine speed changes according to the speed control strategies of various types of wind turbines, and examines possible AC power transmission from the production location to a location suitable for a strong connection integrating the wind farm generators, without additional cost or equipment. It also covers the main objectives of the dynamic control of wind turbines and the methods of operating and using them, including the unique processes of these components. An effective algorithm is presented for power control in order to extract maximum active power and maintain the power factor at the desired value.
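One common form of the power-control objective mentioned above is optimal tip-speed-ratio tracking; the short sketch below computes the reference rotor speed and the corresponding maximum extractable power from the standard wind power equation. The rotor radius, power coefficient, and optimal tip-speed ratio are assumed values, not parameters from the article.

```python
import numpy as np

RHO = 1.225                        # air density [kg/m^3]
R = 40.0                           # rotor radius [m] (assumed)
AREA = np.pi * R**2
CP_MAX, LAMBDA_OPT = 0.45, 8.0     # assumed optimal power coefficient and tip-speed ratio

def reference_rotor_speed(wind_speed):
    """Optimal tip-speed-ratio control: keep lambda = omega*R/v at its optimum."""
    return LAMBDA_OPT * wind_speed / R          # [rad/s]

def max_extractable_power(wind_speed):
    """Available mechanical power at the optimal operating point."""
    return 0.5 * RHO * AREA * CP_MAX * wind_speed**3   # [W]

for v in (5.0, 8.0, 12.0):
    print(f"v = {v:4.1f} m/s -> omega_ref = {reference_rotor_speed(v):.2f} rad/s, "
          f"P_max = {max_extractable_power(v) / 1e6:.2f} MW")
```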

Keywords: wind energy, generator, superconducting inductor, wind turbine power

Procedia PDF Downloads 317
1017 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and minimal interface requirements to the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit aims to deploy a taped tether that can be activated at the spacecraft's end of life to perform autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. Besides, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of length, width, and thickness of the tether will be evaluated for various scenarios (i.e., different initial orbital parameters). The results will be compared to other end-of-life disposal methods with similar allocated resources. The performance of a more innovative system, in which the tape is coated with a low work-function thermionic material (LWT) so that no active cathode component is required, will also be briefly discussed. The results show that the electrodynamic tether option can be a competitive and performant solution for satellite disposal compared to other deorbit technologies.

Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers

Procedia PDF Downloads 158
1016 Fuzzy Logic Based Ventilation for Controlling Harmful Gases in Livestock Houses

Authors: Nuri Caglayan, H. Kursat Celik

Abstract:

There are many factors that influence the health and productivity of the animals in livestock production facilities, including temperature, humidity, carbon dioxide (CO2), ammonia (NH3), hydrogen sulfide (H2S), physical activity, and particulate matter. High NH3 concentrations reduce feed consumption and daily weight gain. At high concentrations, H2S causes respiratory problems, and CO2 displaces oxygen, which can cause suffocation or asphyxiation. Good air quality in livestock facilities can have an impact on the health and well-being of animals and humans. Air quality assessment basically depends on strictly given limits without taking into account specific local conditions among harmful gases and other meteorological factors. These limitations may be eliminated by using control systems based on neural networks and fuzzy logic. This paper describes a fuzzy logic based ventilation algorithm, which can calculate different fan speeds under pre-defined boundary conditions, for removing harmful gases from the production environment. In the paper, a fuzzy logic model has been developed based on Mamdani's fuzzy inference method. The model has been built in MATLAB software. As a result, optimum fan speeds under pre-defined boundary conditions are presented.
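The paper's model is built in MATLAB; as a rough illustration of the same Mamdani pattern, the sketch below builds a two-input fuzzy controller for fan speed with scikit-fuzzy. The universes, membership functions, and rules are illustrative assumptions, not the ones developed in the paper.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes of discourse (ranges are illustrative assumptions)
nh3 = ctrl.Antecedent(np.arange(0, 51, 1), 'nh3_ppm')       # ammonia concentration
temp = ctrl.Antecedent(np.arange(0, 41, 1), 'temp_c')       # indoor temperature
fan = ctrl.Consequent(np.arange(0, 101, 1), 'fan_speed')    # fan speed [% of max]

nh3['low'] = fuzz.trimf(nh3.universe, [0, 0, 20])
nh3['high'] = fuzz.trimf(nh3.universe, [10, 50, 50])
temp['cool'] = fuzz.trimf(temp.universe, [0, 0, 22])
temp['warm'] = fuzz.trimf(temp.universe, [18, 40, 40])
fan['slow'] = fuzz.trimf(fan.universe, [0, 0, 50])
fan['fast'] = fuzz.trimf(fan.universe, [40, 100, 100])

rules = [
    ctrl.Rule(nh3['low'] & temp['cool'], fan['slow']),   # clean, cool air: low ventilation
    ctrl.Rule(nh3['high'] | temp['warm'], fan['fast']),  # gas build-up or heat: high ventilation
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['nh3_ppm'] = 28
sim.input['temp_c'] = 25
sim.compute()
print('fan speed [%]:', round(sim.output['fan_speed'], 1))
```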

Keywords: air quality, fuzzy logic model, livestock housing, fan speed

Procedia PDF Downloads 359
1015 Convergence and Stability in Federated Learning with Adaptive Differential Privacy Preservation

Authors: Rizwan Rizwan

Abstract:

This paper provides an overview of Federated Learning (FL) and its application in enhancing data security, privacy, and efficiency. FL utilizes three distinct architectures to ensure privacy is never compromised. It involves training individual edge devices and aggregating their models on a server without sharing raw data. This approach not only provides secure models without data sharing but also offers a highly efficient privacy-preserving solution with improved security and data access. We also discuss various frameworks used in FL and its integration with machine learning, deep learning, and data mining. In order to address the challenges of multi-party collaborative modeling scenarios, we briefly review an FL scheme combined with an adaptive gradient descent strategy and a differential privacy mechanism. The adaptive learning rate algorithm adjusts the gradient descent process to avoid issues such as model overfitting and fluctuations, thereby enhancing modeling efficiency and performance in multi-party computation scenarios. Additionally, to cater to ultra-large-scale distributed secure computing, the research introduces a differential privacy mechanism that defends against various background knowledge attacks.
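A minimal sketch of one federated round with clipped, noise-perturbed aggregation (a simple differential-privacy-style mechanism, not the adaptive scheme reviewed in the paper) is shown below; the local model, clients, clipping norm, and noise level are all illustrative assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local SGD on a logistic-regression model (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        w -= lr * grad
    return w

def fedavg_with_dp(global_w, client_data, clip=1.0, noise_std=0.1):
    """Aggregate norm-clipped client updates and add Gaussian noise before applying."""
    updates = []
    for X, y in client_data:
        delta = local_update(global_w, X, y) - global_w
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip / (norm + 1e-12))   # clip the update norm
        updates.append(delta)
    mean_update = np.mean(updates, axis=0)
    noisy = mean_update + np.random.normal(
        0, noise_std * clip / len(updates), size=mean_update.shape)
    return global_w + noisy

# toy usage with synthetic clients and a shared global model
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(5)]
w = np.zeros(3)
for rnd in range(10):
    w = fedavg_with_dp(w, clients)
print("global weights after 10 rounds:", np.round(w, 3))
```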

Keywords: federated learning, differential privacy, gradient descent strategy, convergence, stability, threats

Procedia PDF Downloads 12
1014 Utilization of Pozzolanic Material for the Enhancement of the Concrete Strength: A Comprehensive Review Paper

Authors: M. Parvez Alam, M. Bilal Khan

Abstract:

Concrete is the material of choice where strength, performance, durability, impermeability, fire resistance, and abrasion resistance are required. The hunger for higher strength has led to other materials being used to achieve the desired results, and thus emerged the contribution of cementitious materials to the strength of concrete. In present-day construction, concrete is chosen by civil engineers as one of the best construction materials. The concept of sustainability is touching new heights, and many pozzolanic materials have been tried and tested as partial replacements for cement. In this paper, a comprehensive review of the available literature is carried out to evaluate the performance of pozzolanic materials such as ceramic waste powder, copper slag, and silica fume on the strength of concrete by the partial replacement of ordinary materials such as cement, fine aggregate, and coarse aggregate at different percentages of composition. From the study, we conclude that ceramic wastes are suitable to be used in the construction industry, and more significantly in the making of concrete. Ceramic wastes are found to be suitable for use as a substitution for fine and coarse aggregates and as a partial substitution in cement production. They were found to perform better than normal concrete in properties such as density, durability, permeability, and compressive strength. Copper slag is the waste material of the matte smelting and refining of copper, such that each ton of copper generates approximately 2.5 tons of copper slag. Copper slag is one of the materials considered a waste which could have a promising future in the construction industry as a partial or full substitute for aggregates. Silica fume, also known as micro silica or condensed silica fume, is a relatively new material compared to fly ash. It is another material that is used as an artificial pozzolanic admixture. High strength concrete made with silica fume provides high abrasion/corrosion resistance.

Keywords: concrete, pozzolanic materials, ceramic waste powder, copper slag

Procedia PDF Downloads 304
1013 Upcoming Fight Simulation with Smart Shadow

Authors: Ramiz Kuliev, Fuad Kuliev-Smirnov

Abstract:

The 'Shadow Sparring' training exercise is widely used in the training of boxers and martial artists. The main disadvantage of the usual shadow sparring is that the trainer cannot fully control such training or evaluate its results. During the competition, the athlete, preparing for the upcoming fight, imagines the Shadow (the upcoming opponent) in accordance with his own imagination. 'Smart-Shadow Sparring' (SSS) is an innovative version of the 'Shadow Sparring'. During SSS, the fighter will see the Shadow (a virtual opponent that moves, defends, and punches) and understand when he misses punches from the Shadow. The task of the real athlete is to spar with the virtual one: move around, punch in the direction of unprotected areas of the Shadow, and dodge its punches. The moves and punches of the Shadow are set up before each training session. The system will give the coach full information about the virtual sparring: (i) how many and what types of punches the fighter has landed, (ii) the accuracy of these punches, (iii) how many and what types of virtual punches (punches of the Smart-Shadow) the fighter has missed, etc. SSS will be recorded as an animated fight of two fighters and will help the coach analyze past training. SSS can be configured to fit the physical and technical characteristics of the next real opponent (size, techniques, speed, missed and landed punches, etc.). This will make it possible to simulate and rehearse the upcoming fight and improve readiness for the next opponent. For amateur fighters, SSS will be reconfigured several times during a tournament, as each real opponent becomes known. SSS can be used in three versions: (1) Digital Shadow: the athlete sees the Shadow on a monitor; (2) VR-Shadow: the athlete sees the Shadow in VR glasses; (3) Smart Shadow: the Shadow is controlled by artificial intelligence. These technologies are based on the 'semi-real simulation' method. The technology allows coaches to train athletes remotely. Simulation of different opponents will help athletes better prepare for competition. Repeated rehearsals of the upcoming fight will help improve results. SSS can improve results in Boxing, Taekwondo, Karate, and Fencing. 41 sets of medals will be awarded in these sports at the 2020 Olympic Games.

Keywords: boxing, combat sports, fight simulation, shadow sparring

Procedia PDF Downloads 114
1012 Non-Contact Measurement of Soil Deformation in a Cyclic Triaxial Test

Authors: Erica Elice Uy, Toshihiro Noda, Kentaro Nakai, Jonathan Dungca

Abstract:

Deformation in a conventional cyclic triaxial test is normally measured using a point-wise measuring device. In this study, a non-contact measurement technique was applied to monitor and measure the occurrence of non-homogeneous behavior of the soil under cyclic loading. Non-contact measurement is executed through image processing. Two-dimensional measurements were performed using the Lucas-Kanade optical flow algorithm, implemented in LabVIEW. In this technique, the non-homogeneous deformation was monitored using a mirrorless camera. A mirrorless camera was used because it is economical and has the capacity to take pictures at a fast rate. The camera was first calibrated to remove the distortion brought about by the lens and the testing environment. Calibration was divided into two phases. The first phase was the calibration of the camera parameters and the distortion caused by the lens. The second phase was for eliminating the distortion brought about by the triaxial cell plexiglass, from which a correction factor was established. A series of consolidated undrained cyclic triaxial tests was performed on a coarse soil. The results from the non-contact measurement technique were compared to the deformation measured with a linear variable displacement transducer. It was observed that deformation was higher in the area where failure occurred.
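A minimal sketch of Lucas-Kanade tracking between two frames with OpenCV is given below. The study implements the algorithm in LabVIEW; this Python version is only illustrative, and the frame paths, tracking parameters, and pixel-to-millimetre factor are assumptions.

```python
import cv2
import numpy as np

# Two consecutive frames of the specimen (paths are placeholders)
prev = cv2.imread('frame_000.png', cv2.IMREAD_GRAYSCALE)
curr = cv2.imread('frame_001.png', cv2.IMREAD_GRAYSCALE)

# Detect trackable texture points on the specimen surface
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade optical flow from frame 0 to frame 1
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None, **lk_params)

good_old = p0[status.flatten() == 1].reshape(-1, 2)
good_new = p1[status.flatten() == 1].reshape(-1, 2)
disp_px = good_new - good_old

# Convert pixel displacement to millimetres with a calibration factor (assumed)
MM_PER_PX = 0.05
print("mean axial displacement [mm]:", disp_px[:, 1].mean() * MM_PER_PX)
```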

Keywords: cyclic loading, non-contact measurement, non-homogeneous, optical flow

Procedia PDF Downloads 291
1011 Gaits Stability Analysis for a Pneumatic Quadruped Robot Using Reinforcement Learning

Authors: Soofiyan Atar, Adil Shaikh, Sahil Rajpurkar, Pragnesh Bhalala, Aniket Desai, Irfan Siddavatam

Abstract:

Deep reinforcement learning (deep RL) algorithms harness the power of complex controllers by automating their design, mapping sensory inputs to low-level actions. Deep RL handles complex robot dynamics with minimal engineering. However, deep RL carries high risk when implemented directly in real-world scenarios and is also highly sensitive to hyperparameters. Tuning hyperparameters on a pneumatic quadruped robot becomes very expensive through trial-and-error learning. This paper presents an automated learning controller for a pneumatic quadruped robot using sample-efficient deep Q-learning, enabling minimal tuning and very few trials to learn the neural network. Long training hours may degrade the pneumatic cylinders due to jerky actions originating from stochastic weights. We applied this method to the pneumatic quadruped robot, which resulted in a hopping gait. In our process, we eliminated the use of a simulator and acquired a stable gait. This approach evolves so that the resultant gait becomes more robust to stochastic changes in the environment. We further show that our algorithm performed very well compared to a gait programmed using robot dynamics.
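A minimal sketch of one deep Q-learning update on a replay batch is shown below in PyTorch; the network size, observation/action dimensions, and toy batch are assumptions for illustration and do not represent the robot's actual controller.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small fully connected Q-network mapping observations to action values."""
    def __init__(self, n_obs=8, n_act=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_act))
    def forward(self, x):
        return self.net(x)

def dqn_update(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference update on a sampled replay batch."""
    obs, act, rew, next_obs, done = batch
    q_sa = q(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1 - done) * q_target(next_obs).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy batch of 32 transitions (assumed observation/action sizes)
q, q_t = QNet(), QNet()
q_t.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
batch = (torch.randn(32, 8), torch.randint(0, 4, (32,)), torch.randn(32),
         torch.randn(32, 8), torch.zeros(32))
print("TD loss:", dqn_update(q, q_t, opt, batch))
```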

Keywords: model-based reinforcement learning, gait stability, supervised learning, pneumatic quadruped

Procedia PDF Downloads 301