Search results for: Whale Optimization Algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6093

3873 Location Management in Wireless Sensor Networks with Mobility

Authors: Amrita Anil Agashe, Sumant Tapas, Ajay Verma, Yogesh Sonavane, Sourabh Yeravar

Abstract:

Owing to advances in MEMS technology, wireless sensor networks have gained considerable importance today. Their wide range of applications includes environmental and habitat monitoring, object localization, target tracking, security surveillance, etc. Wireless sensor networks consist of tiny sensor devices called motes. The constrained computation power, battery power, storage capacity and communication bandwidth of these tiny motes pose challenging problems in the design and deployment of such systems. In this paper, we propose a ubiquitous framework for a Real-Time Tracking, Sensing and Management System using IITH motes. We also explain the algorithm we have developed for location management in wireless sensor networks under mobility. The developed framework and algorithm can be used to detect emergency events and safety threats and provide warning signals to handle the emergency.

Keywords: mobility management, motes, multihop, wireless sensor networks

Procedia PDF Downloads 419
3872 Study on Sharp V-Notch Problem under Dynamic Loading Condition Using Symplectic Analytical Singular Element

Authors: Xiaofei Hu, Zhiyu Cai, Weian Yao

Abstract:

The V-notch problem under dynamic loading is considered in this paper. In the time domain, the precise time-domain expanding algorithm is employed, in which a self-adaptive technique is used to improve computing accuracy. By expanding the variables in each time interval, recursive finite element formulas are derived. In the space domain, a Symplectic Analytical Singular Element (SASE) for the V-notch problem is constructed to address the stress singularity at the notch tip. Combined with conventional finite elements, the proposed SASE can be used to solve the dynamic stress intensity factors (DSIFs) in a simple way. Numerical results show that the proposed SASE for the V-notch problem under dynamic loading is effective and efficient.

Keywords: V-notch, dynamic stress intensity factor, finite element method, precise time domain expanding algorithm

Procedia PDF Downloads 172
3871 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

The present research addresses the empirical validation of three option valuation models: the ad hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis was conducted on a sample of 26,974 options written on three indexes, the S&P 500, Nasdaq 100 and Russell 2000, traded during 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indexes. Indeed, the double-exponential distribution exhibits three interesting properties: the leptokurtic feature, the memoryless property and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to overreact to good news and underreact to bad news. Despite these advantages, there are few empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to use nonlinear curve fitting via the trust-region-reflective algorithm on cross-sectional option data to estimate the structural parameters of the Kou jump-diffusion model.
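As an aside for readers who want to experiment, the sketch below shows how a trust-region-reflective least-squares calibration of the five Kou parameters against a cross-section of option quotes could look in Python with SciPy. The pricing routine is only a crude placeholder (Kou's closed-form series is omitted), and the strikes, maturities and quotes are synthetic, so this is a minimal sketch of the fitting machinery, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def kou_price(params, strikes, maturities, spot=100.0, rate=0.03):
    """Placeholder standing in for Kou's (2002) closed-form call price.

    params = (sigma, lam, p, eta1, eta2): diffusion volatility, jump
    intensity, up-jump probability and the two exponential jump-size
    rates. A real calibration would evaluate Kou's series formula here.
    """
    sigma, lam, p, eta1, eta2 = params
    jump_var = lam * (p / eta1**2 + (1.0 - p) / eta2**2)
    total_vol = np.sqrt(sigma**2 + jump_var)
    intrinsic = np.maximum(spot - strikes * np.exp(-rate * maturities), 0.0)
    return intrinsic + 0.4 * spot * total_vol * np.sqrt(maturities)

# Synthetic cross-section of quotes generated from "true" parameters.
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
maturities = np.full(5, 0.5)
true_params = (0.15, 2.0, 0.4, 12.0, 8.0)
market = kou_price(true_params, strikes, maturities) \
    + 0.05 * np.random.default_rng(0).standard_normal(5)

def residuals(params):
    return kou_price(params, strikes, maturities) - market

x0 = np.array([0.3, 1.0, 0.5, 10.0, 5.0])
bounds = ([1e-4, 0.0, 0.0, 1.0, 1.0],      # eta1, eta2 > 1 keeps jump
          [2.0, 50.0, 1.0, 100.0, 100.0])  # moments finite
fit = least_squares(residuals, x0, bounds=bounds, method='trf')
print(np.round(fit.x, 3))
```

The box bounds are the reason a trust-region-reflective solver is a natural choice here: the jump-size rates must stay above one for the model's moments to exist, and method='trf' in SciPy handles such bound-constrained nonlinear least squares directly.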

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 429
3870 Smooth Second Order Nonsingular Terminal Sliding Mode Control for a 6 DOF Quadrotor UAV

Authors: V. Tabrizi, A. Vali, R. Ghasemi, V. Behnamgol

Abstract:

In this article, a nonlinear model of an underactuated six degrees of freedom (6 DOF) quadrotor UAV is derived on the basis of the Newton-Euler formulation. The derivation comprises determining the equations of motion of the quadrotor in three dimensions and approximating the actuation forces through the modeling of aerodynamic coefficients and electric motor dynamics. The robust nonlinear control strategy is a smooth second-order nonsingular terminal sliding mode controller, which is applied to stabilize this model. The control method is based on the super-twisting algorithm, which removes chattering and produces a smooth control signal. The nonsingular terminal sliding mode idea is used to introduce a nonlinear sliding variable that guarantees finite-time convergence in the sliding phase. Simulation results show that the proposed algorithm is robust against uncertainty and disturbance and guarantees a fast and precise control signal.
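For illustration only, the following minimal Python sketch implements the super-twisting law on a double-integrator stand-in for one quadrotor axis; the gains, the disturbance and the linear sliding variable (the paper uses a nonsingular terminal one) are all assumptions made here for brevity:

```python
import numpy as np

def super_twisting_step(s, z, k1, k2, dt):
    """One integration step of the super-twisting algorithm.

    s : sliding variable, z : internal controller state.
    Returns the continuous control u and the updated state z.
    """
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + z
    z = z - k2 * np.sign(s) * dt
    return u, z

# Double integrator x'' = u + d with a bounded matched disturbance.
dt, k1, k2, c = 1e-3, 5.0, 10.0, 2.0
x, xdot, z = 1.0, 0.0, 0.0
for k in range(10000):                      # 10 s of simulated time
    t = k * dt
    d = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # disturbance, |d'| bounded
    s = xdot + c * x                        # linear sliding variable
    u, z = super_twisting_step(s, z, k1, k2, dt)
    xdot += (u + d) * dt
    x += xdot * dt
print(f"final |x| = {abs(x):.2e}, |s| = {abs(xdot + c * x):.2e}")
```

Because the discontinuous sign term sits inside the integrator state z, the applied control u is continuous, which is exactly how the super-twisting structure suppresses chattering.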

Keywords: quadrotor UAV, nonsingular terminal sliding mode, second order sliding mode, electronics, control, signal processing

Procedia PDF Downloads 441
3869 Privacy-Preserving Model for Social Network Sites to Prevent Unwanted Information Diffusion

Authors: Sanaz Kavianpour, Zuraini Ismail, Bharanidharan Shanmugam

Abstract:

Social Network Sites (SNSs) can serve as an invaluable platform for transferring information across a large number of individuals. A substantial component of communicating and managing information is identifying which individuals will influence others in propagating information, and whether information will be disseminated in the absence of social signals about it. Classifying the final audience of social data is difficult, as the social contexts in which data is transferred among individuals cannot be completely controlled. Hence, undesirable diffusion of information to an unauthorized individual on SNSs can threaten individuals' privacy. This paper highlights information diffusion in SNSs and emphasizes the most significant privacy issues for individuals on SNSs. The goal of this paper is to propose a privacy-preserving model that safeguards individuals' data in order to control the availability of data and improve privacy by providing access to the data only for appropriate third parties, without compromising the advantages of information sharing through SNSs.

Keywords: anonymization algorithm, classification algorithm, information diffusion, privacy, social network sites

Procedia PDF Downloads 321
3868 Hardware Implementation and Real-time Experimental Validation of a Direction of Arrival Estimation Algorithm

Authors: Nizar Tayem, AbuMuhammad Moinuddeen, Ahmed A. Hussain, Redha M. Radaydeh

Abstract:

This research paper introduces an approach for estimating the direction of arrival (DOA) of multiple noncoherent RF sources with a uniform linear array (ULA). The proposed method utilizes a Capon-like estimation algorithm and incorporates LU decomposition to enhance the accuracy of DOA estimation while significantly reducing computational complexity compared to existing methods such as the Capon method. Notably, the proposed method does not require prior knowledge of the number of sources. Its effectiveness is validated through both software simulations and practical experimentation on a prototype testbed constructed using a software-defined radio (SDR) platform and GNU Radio software. The results obtained from MATLAB simulations and real-time experiments provide compelling evidence of the proposed method's efficacy.
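To make the baseline concrete, here is a minimal NumPy sketch of the standard Capon (MVDR) spatial spectrum for a ULA; the proposed LU-decomposition-based variant and the SDR front end are not reproduced, and the array geometry and SNR are assumptions for illustration:

```python
import numpy as np

def capon_spectrum(X, d=0.5, grid_deg=np.linspace(-90, 90, 361)):
    """Capon (MVDR) spatial spectrum for a uniform linear array.

    X        : (num_sensors, num_snapshots) complex baseband snapshots
    d        : element spacing in wavelengths
    grid_deg : candidate arrival angles in degrees
    """
    M, N = X.shape
    R = X @ X.conj().T / N                             # sample covariance
    R += 1e-3 * np.real(np.trace(R)) / M * np.eye(M)   # diagonal loading
    Rinv = np.linalg.inv(R)
    P = np.empty(grid_deg.size)
    for i, theta in enumerate(np.deg2rad(grid_deg)):
        a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))
        P[i] = 1.0 / np.real(a.conj() @ Rinv @ a)      # Capon power estimate
    return grid_deg, P

# Two uncorrelated sources at -20 and 35 degrees on an 8-element ULA.
rng = np.random.default_rng(0)
M, N = 8, 500
angles = np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
grid_deg, P = capon_spectrum(X)
print("strongest peak near", grid_deg[np.argmax(P)], "deg")  # -20 or 35
```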

Keywords: DOA estimation, real-time validation, software defined radio, computational complexity, Capon's method, GNU radio

Procedia PDF Downloads 75
3867 Development, Optimization and Characterization of Gastroretentive Multiparticulate Drug Delivery System

Authors: Swapnila V. Vanshiv, Hemant P. Joshi, Atul B. Aware

Abstract:

The current study illustrates the formulation of floating microspheres for the gastroretention of Dipyridamole, which shows pH-dependent solubility, with the highest solubility in acidic pH. The formulation involved hollow microsphere preparation using the solvent evaporation technique. The concentrations of the rate-controlling polymer and the hydrophilic polymer, the internal phase ratio and the stirring speed were optimized to obtain the desired responses, namely Dipyridamole release, microsphere buoyancy and entrapment efficiency. In the formulation, the floating microspheres were prepared using ethyl cellulose as the release retardant and HPMC as a low-density hydrophilic swellable polymer. The formulated microspheres were evaluated for physical properties such as particle size and surface morphology by optical microscopy and SEM, as well as for entrapment efficiency, floating behavior and drug release; the formulation was also evaluated for in vivo gastroretention in rabbits using gamma scintigraphy. The formulation showed 75% drug release over 10 hr, with an entrapment efficiency of 91% and 88% buoyancy after 10 hr. Gamma scintigraphic studies revealed that the optimized system was retained in the gastric region (stomach) for a prolonged period, i.e., more than 5 hr.

Keywords: Dipyridamole microspheres, gastroretention, HPMC, optimization method

Procedia PDF Downloads 385
3866 Performance Evaluation of Task Scheduling Algorithm on LCQ Network

Authors: Zaki Ahmad Khan, Jamshed Siddiqui, Abdus Samad

Abstract:

The scheduling and mapping of tasks on a set of processors is considered a critical problem in parallel and distributed computing systems. This paper deals with the problem of dynamic scheduling on a special type of multiprocessor architecture known as the Linear Crossed Cube (LCQ) network. This proposed multiprocessor is a hybrid network which combines the features of both linear and cube-based architectures. Two standard dynamic scheduling schemes, namely Minimum Distance Scheduling (MDS) and Two Round Scheduling (TRS), are implemented on the LCQ network. Parallel tasks are mapped and the load imbalance is evaluated on different sets of processors in the LCQ network. The simulation results are evaluated, and a thorough analysis is made to obtain the best solution for the given network in terms of residual load imbalance and execution time. Other performance metrics, such as speedup and efficiency, are also evaluated for the given dynamic algorithms.
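As a rough illustration of the minimum-distance idea (not the authors' LCQ implementation), the sketch below greedily places each arriving task on the least-loaded processor within one hop of its source node, using a toy ring graph in place of the LCQ topology:

```python
def minimum_distance_schedule(tasks, adjacency, load):
    """Sketch of Minimum Distance Scheduling (MDS).

    Each task arriving at node `src` is placed on the least-loaded
    processor among `src` and its direct neighbours, so the migration
    distance is at most one hop.
    adjacency : dict mapping node -> list of neighbouring nodes
    load      : dict mapping node -> current number of tasks
    tasks     : list of (task_id, src) pairs
    """
    placement = {}
    for task_id, src in tasks:
        candidates = [src] + adjacency[src]
        target = min(candidates, key=lambda n: load[n])  # nearest, least loaded
        load[target] += 1
        placement[task_id] = target
    return placement

# Toy 4-node ring standing in for a fragment of the LCQ topology.
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
load = {n: 0 for n in adjacency}
tasks = [(t, t % 2) for t in range(8)]       # tasks arrive only at nodes 0 and 1
print(minimum_distance_schedule(tasks, adjacency, load))
print("load imbalance:", max(load.values()) - min(load.values()))
```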

Keywords: dynamic algorithm, load imbalance, mapping, task scheduling

Procedia PDF Downloads 451
3865 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. The top-rated wells, identified by sustained high ROP, are then distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not previously explored through the offset wells. The data is then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and incorporated into the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built using the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. These factors position the system to work for any newly drilled well in a developing field.
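The IDW step of phase one is simple to sketch. In the hypothetical example below, the (WOB, RPM, GPM) vectors recorded at nearby offset wells are combined into a conditioned mean weighted by inverse distance to the planned well location; the well coordinates and parameter values are invented for illustration:

```python
import numpy as np

def idw(query, points, values, power=2.0, eps=1e-9):
    """Inverse Distance Weighting: conditioned mean of offset-well
    parameter values, weighted by inverse distance to the query point.

    query  : (2,) easting/northing of the planned well section
    points : (n, 2) locations of the offset-well observations
    values : (n, k) parameter vectors, e.g. columns (WOB, RPM, GPM)
    """
    d = np.linalg.norm(points - query, axis=1) + eps   # avoid divide-by-zero
    w = 1.0 / d**power
    return (w[:, None] * values).sum(axis=0) / w.sum()

# Hypothetical offsets: location plus (WOB [klbf], RPM, GPM) at best ROP.
points = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 0.9], [2.0, 2.0]])
values = np.array([[25, 120, 800], [28, 130, 820], [24, 118, 790], [35, 150, 900]])
print(idw(np.array([0.4, 0.4]), points, values))   # suggested (WOB, RPM, GPM)
```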

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 131
3864 Internet Optimization by Negotiating Traffic Times

Authors: Carlos Gonzalez

Abstract:

This paper describes a system to optimize the use of the internet by clients who require video downloads at peak hours. The system consists of a web server belonging to a provider of video content, a provider of internet communications, and a software application running on a client's computer. The client, through the application software, communicates to the video provider a list of the client's future video demands. The video provider calculates which videos are going to be most in demand for download in the immediate future and proceeds to request from the internet provider the optimal hours for downloading them. The download times are sent to the application software, which uses these pre-established hours, negotiated between the video provider and the internet provider, to download the videos. The videos are saved in a special protected section of the user's hard disk, accessed only by the application software on the client's computer. When the client is ready to watch a video, the application searches the list of videos currently stored in that area of the hard disk; if the video exists, it is used directly, without the need for internet access. We found that the best way to optimize video download traffic is through negotiation between the internet communication provider and the video content provider.

Keywords: internet optimization, video download, future demands, secure storage

Procedia PDF Downloads 136
3863 Remote Sensing through Deep Neural Networks for Satellite Image Classification

Authors: Teja Sai Puligadda

Abstract:

Detailed satellite images can serve an important role in geographic studies. The quantitative and qualitative information provided by satellite and remote sensing images reduces the complexity of work and saves time. Data/images are captured at regular intervals by satellite remote sensing systems; the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and studying distinct vegetation types such as agriculture and forests are all part of satellite image categorization. One of the biggest challenges data scientists face when classifying satellite images is finding the most suitable classification algorithm among those available, one able to classify images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects important features without any human supervision, and the ANN algorithm stores information on the entire network (Gupta, 2020), these two deep learning algorithms were used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. The ANN and CNN algorithms are implemented, evaluated and compared, and their performance is analyzed through the evaluation metrics accuracy and loss. Additionally, the neural network algorithm that gives the lowest bias and lowest variance in solving multi-class satellite image classification is identified.
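A small Keras sketch of a CNN suited to the SAT-4 input shape (28x28 pixels, 4 spectral bands, 4 classes) is shown below; the layer sizes are assumptions chosen for illustration rather than the architecture used in the project:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(28, 28, 4), num_classes=4):
    """Minimal CNN for SAT-4-shaped patches (R, G, B, NIR bands)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])   # training tracks accuracy and loss
    return model

model = build_cnn()
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=10, batch_size=128)
```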

Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss

Procedia PDF Downloads 159
3862 Influence of Fermentation Conditions on Humic Acids Production by Trichoderma viride Using an Oil Palm Empty Fruit Bunch as the Substrate

Authors: F. L. Motta, M. H. A. Santana

Abstract:

Humic acids (HA) were produced by a Trichoderma viride strain under submerged fermentation in a medium based on oil palm Empty Fruit Bunch (EFB), and the main variables of the process were optimized using response surface methodology. A temperature of 40°C and concentrations of 50 g/L EFB, 5.7 g/L potato peptone and 0.11 g/L (NH4)2SO4 were the optimum levels of the variables that maximize HA production, within the physicochemical and biological limits of the process. The optimized conditions led to an experimental HA concentration of 428.4±17.5 mg/L, which validated the prediction of 412.0 mg/L from the statistical model. This optimization increased HA production about 7-fold over that previously reported in the literature. Additionally, the time profiles of HA production and fungal growth confirmed our previous findings that HA production preferentially occurs during fungal sporulation. The present study demonstrated that T. viride successfully produced HA via the submerged fermentation of EFB, and the process parameters were successfully optimized using a statistics-based response surface model. To the best of our knowledge, the present work is the first report on the optimization of HA production from EFB by a biotechnological process, whose feasibility was only pointed out in previous works.

Keywords: empty fruit bunch, humic acids, submerged fermentation, Trichoderma viride

Procedia PDF Downloads 306
3861 A Discrete Logit Survival Model with a Smooth Baseline Hazard for Age at First Alcohol Intake among Students at Tertiary Institutions in Thohoyandou, South Africa

Authors: A. Bere, H. G. Sithuba, K. Kyei, C. Sigauke

Abstract:

We employ a discrete logit survival model to investigate the risk factors for early alcohol intake among students at two tertiary institutions in Thohoyandou, South Africa. Data were collected from a sample of 744 students using a self-administered questionnaire. Significant covariates were selected through a regularization algorithm implemented using the glmmLasso package. The tuning parameter was determined using a five-fold cross-validation algorithm. The baseline hazard was modelled as a smooth function of time through the use of spline functions. The results show that the hazard of initial alcohol intake peaks at the age of about 16 years and that, at any given age, being male, prior use of other drugs, having drinking peers, and having experienced negative life events or physical abuse are associated with a higher risk of alcohol intake debut.
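The analysis itself was run in R with glmmLasso; purely as an illustration of the discrete-time logit hazard idea, the Python sketch below expands hypothetical survival records into person-period format and fits a logistic regression, with a linear age term standing in for the smooth spline baseline:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical records: age at first drink (event=1) or at censoring
# (event=0), plus one binary covariate (male).
df = pd.DataFrame({
    'age':   [14, 16, 17, 19, 21, 15, 18, 20],
    'event': [1,  1,  0,  1,  0,  1,  1,  0],
    'male':  [1,  0,  1,  1,  0,  1,  0,  0],
})

# Person-period expansion: one row per subject per year at risk; the
# response is 1 only in the interval where the event occurs.
rows = []
for _, r in df.iterrows():
    for t in range(12, int(r.age) + 1):
        rows.append({'t': t, 'male': r.male,
                     'y': int(r.event and t == r.age)})
pp = pd.DataFrame(rows)

# Discrete logit hazard: logit P(T = t | T >= t) = baseline(t) + covariates.
# A smooth baseline would replace 't' by spline basis columns.
X = pp[['t', 'male']]
fit = LogisticRegression().fit(X, pp['y'])
print(dict(zip(X.columns, fit.coef_[0])))
```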

Keywords: cross-validation, discrete hazard model, LASSO, smooth baseline hazard

Procedia PDF Downloads 192
3860 Efficient Recommendation System for Frequent and High Utility Itemsets over Incremental Datasets

Authors: J. K. Kavitha, D. Manjula, U. Kanimozhi

Abstract:

Mining frequent and high-utility itemsets has gained much significance in recent years. When data arrives sporadically, incremental and interactive rule mining and utility mining approaches can be adopted to handle users' dynamic environmental needs and avoid redundancies by reusing previous data structures and mining results. Dependence on recommendation systems has risen exponentially since the advent of search engines. This paper proposes a model for building a recommendation system that suggests frequent and high-utility itemsets over dynamic datasets for a cluster-based location prediction strategy, predicting users' trajectories using the Efficient Incremental Rule Mining (EIRM) algorithm and the Fast Update Utility Pattern Tree (FUUP) algorithm. Comprehensive experimental evaluations show that this scheme delivers excellent performance.
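For readers unfamiliar with the underlying notion, the sketch below mines frequent itemsets with a plain level-wise (Apriori-style) search over a toy transaction list; the EIRM and FUUP algorithms used in the paper additionally update such results incrementally as new data arrives, which this sketch does not attempt:

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Level-wise frequent-itemset search (Apriori-style, full rescan)."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    result = {}
    current = [frozenset([i]) for i in sorted(items)]
    k = 1
    while current:
        counts = Counter()
        for t in transactions:
            tset = set(t)
            for c in current:
                if c <= tset:
                    counts[c] += 1
        frequent = {c: counts[c] / n for c in current
                    if counts[c] / n >= min_support}
        result.update(frequent)
        # candidates one item larger, built from this level's survivors
        keys = list(frequent)
        current = sorted({a | b for a, b in combinations(keys, 2)
                          if len(a | b) == k + 1}, key=sorted)
        k += 1
    return result

tx = [['milk', 'bread'], ['milk', 'beer'], ['milk', 'bread', 'beer'], ['bread']]
print(frequent_itemsets(tx, min_support=0.5))
```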

Keywords: data sets, recommendation system, utility item sets, frequent item sets mining

Procedia PDF Downloads 293
3859 A Fast Convergence Subband BSS Structure

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed that uses a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive subband schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the input signal at full bandwidth, which promotes better rates of convergence.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 555
3858 Targeting Mineral Resources of the Upper Benue Trough, Northeastern Nigeria Using Linear Spectral Unmixing

Authors: Bello Yusuf Idi

Abstract:

The Gongola arm of the Upper Benue Trough, Northeastern Nigeria is predominantly covered by outcrops of limestone-bearing rocks in the form of sandstone, with intercalations of carbonate clay, shale, basaltic, feldspathic and migmatite rocks at subpixel dimension. In this work, a subpixel classification algorithm was used to classify the data acquired from the Landsat 7 Enhanced Thematic Mapper (ETM+) satellite system, with the aim of producing fractional distribution images for the three most economically important solid minerals of the area: limestone, basalt and migmatite. The Linear Spectral Unmixing (LSU) algorithm was used to produce fractional abundance images of the three mineral resources within a 100 km2 portion of the area. The results show that the minerals occur in different proportions across the area. The fractional maps could therefore serve as a guide for the ongoing reconnaissance of the economic potential of the formation.
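A minimal sketch of linear spectral unmixing for a single pixel is given below, using non-negative least squares with a post hoc sum-to-one rescaling; the six-band endmember spectra are invented for illustration and are not the study's field spectra:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Linear spectral unmixing of one pixel.

    Solves pixel ~= E @ f with f >= 0 (non-negativity constraint), then
    rescales so the fractions sum to one.
    endmembers : (n_bands, n_materials) pure spectra, e.g. columns for
                 limestone, basalt and migmatite.
    """
    f, _ = nnls(endmembers, pixel)
    return f / f.sum() if f.sum() > 0 else f

# Hypothetical 6-band endmember reflectances (columns: limestone,
# basalt, migmatite).
E = np.array([[0.55, 0.08, 0.30],
              [0.60, 0.10, 0.35],
              [0.65, 0.12, 0.33],
              [0.70, 0.13, 0.40],
              [0.62, 0.20, 0.38],
              [0.50, 0.25, 0.30]])
true_f = np.array([0.6, 0.1, 0.3])
pixel = E @ true_f + 0.005 * np.random.default_rng(0).standard_normal(6)
print(np.round(unmix(pixel, E), 2))   # approximately recovers [0.6, 0.1, 0.3]
```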

Keywords: linear spectral un-mixing, upper benue trough, gongola arm, geological engineering

Procedia PDF Downloads 375
3857 Neural Networks and Genetic Algorithms Approach for Word Correction and Prediction

Authors: Rodrigo S. Fonseca, Antônio C. P. Veiga

Abstract:

Aiming to help people with movement limitations that make typing and communication difficult, there is a need for an assistive tool with a learning environment that optimizes text input, identifying errors and providing corrections and choices in the Portuguese language. This work presents an orthographic and grammatical system that can be incorporated into writing environments, improving and facilitating the use of an alphanumeric keyboard. The prototype performs correction using a genetic algorithm and carries out prediction based on the quantity and position of the inserted letters and their placement in the sentence, preserving the sequence of ideas with a Long Short-Term Memory (LSTM) neural network. The prototype optimizes data entry as a component of assistive technology for text formulation, detecting errors, seeking solutions, and informing the user of accurate predictions quickly and effectively through machine learning.
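As a simplified stand-in for the correction step (the prototype's genetic algorithm and LSTM predictor are not reproduced here), the sketch below ranks dictionary candidates by Levenshtein edit distance to the typed word:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via a rolling-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def correct(word, dictionary, k=3):
    """Return the k dictionary entries closest to the typed word."""
    return sorted(dictionary, key=lambda w: edit_distance(word, w))[:k]

vocab = ['casa', 'caso', 'cansada', 'carta', 'massa']   # toy Portuguese words
print(correct('cassa', vocab))   # e.g. ['casa', 'massa', 'caso']
```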

Keywords: genetic algorithm, neural networks, word prediction, machine learning

Procedia PDF Downloads 194
3856 Time-Series Load Data Analysis for User Power Profiling

Authors: Mahdi Daghmhehci Firoozjaei, Minchang Kim, Dima Alhadidi

Abstract:

In this paper, we present a power profiling model for smart grid consumers based on real-time load data acquired by smart meters. It profiles consumers' power consumption behaviour using the dynamic time warping (DTW) clustering algorithm. Because this algorithm is invariant to warping along the time axis, time-disordered load data can be profiled and consumption features extracted. Two load types are defined, and the related load patterns are extracted for classifying consumption behaviour by DTW. The classification methodology is discussed in detail. To evaluate the performance of the method, we analyze the time-series load data measured by a smart meter in a real case. The results verify the effectiveness of the proposed profiling method, with a 90.91% true positive rate for load type clustering in the best case.
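The core of the method, the DTW distance, is easy to sketch. The toy example below compares two hypothetical daily load profiles whose evening peaks are shifted by one hour; the DTW cost stays small where a pointwise (L1) comparison does not:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two load profiles.

    The DP alignment absorbs shifts along the time axis, so two days
    with the same usage pattern at slightly different hours are still
    recognised as similar.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two hypothetical daily profiles: same evening peak, shifted by an hour.
day1 = np.array([0.2, 0.2, 0.3, 1.5, 2.0, 1.0, 0.4])
day2 = np.array([0.2, 0.3, 1.5, 2.0, 1.0, 0.4, 0.3])
print(dtw_distance(day1, day2), np.abs(day1 - day2).sum())  # DTW << L1
```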

Keywords: power profiling, user privacy, dynamic time warping, smart grid

Procedia PDF Downloads 151
3855 Performance Comparison of ADTree and Naive Bayes Algorithms for Spam Filtering

Authors: Thanh Nguyen, Andrei Doncescu, Pierre Siegel

Abstract:

Classification is an important data mining technique and can be used for data filtering in artificial intelligence. The broad applicability of classification to all kinds of data means it is used in nearly every field of modern life. Classification groups items according to the features deemed interesting and useful. In this paper, we compare two classification methods, Naïve Bayes and ADTree, used to detect spam e-mail. This choice is motivated by the fact that the Naïve Bayes algorithm is based on probability calculus, while the ADTree algorithm is based on decision trees. The classifiers' parameters are set to maximize the true positive rate and minimize the false positive rate. The experimental results present classification accuracy and cost analysis in view of the optimal classifier choice for spam detection, and point out the number of attributes that yields a tradeoff between attribute count and classification accuracy.
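Only the Naïve Bayes side is easy to sketch in a few lines of Python (ADTree is typically run in Weka); the corpus below is a toy stand-in for a labelled e-mail dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real evaluation would use a labelled
# e-mail dataset and report true/false positive rates as in the paper.
mails = ['win money now', 'cheap pills offer', 'meeting at noon',
         'project report attached', 'free prize claim now', 'lunch tomorrow?']
labels = ['spam', 'spam', 'ham', 'ham', 'spam', 'ham']

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(mails, labels)
print(clf.predict(['claim your free money', 'report for the meeting']))
# expected: ['spam' 'ham']
```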

Keywords: classification, data mining, spam filtering, naive bayes, decision tree

Procedia PDF Downloads 411
3854 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide a clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to dealing with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized that aims to define simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
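A heavily simplified sketch of the alternating idea, imputing each missing entry from its best-matching unit and then applying a Kohonen-style update, is shown below; the map topology, neighbourhood function and the paper's joint criterion are all omitted, so this illustrates the alternation only, not missSOM itself:

```python
import numpy as np

def alternating_som_impute(X, n_units=9, n_iter=50, lr=0.5, seed=0):
    """Toy alternation: BMU-based imputation + Kohonen-style update.

    The BMU is found on observed coordinates only; missing coordinates
    are then filled from the BMU's codebook vector before the update.
    """
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(X)
    Xc = np.where(obs, X, np.nanmean(X, axis=0))   # initial mean imputation
    W = Xc[rng.choice(len(Xc), n_units, replace=False)].copy()
    for it in range(n_iter):
        for i in rng.permutation(len(Xc)):
            # distance to each unit, ignoring the missing coordinates
            d = np.nansum((np.where(obs[i], X[i], np.nan) - W) ** 2, axis=1)
            b = np.argmin(d)
            Xc[i, ~obs[i]] = W[b, ~obs[i]]                    # imputation
            W[b] += lr * (1 - it / n_iter) * (Xc[i] - W[b])   # update
    return W, Xc

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (30, 3)) for m in (0, 3)])  # two clusters
X[rng.random(X.shape) < 0.15] = np.nan                        # 15% missing
W, X_imputed = alternating_som_impute(X)
print(np.isnan(X_imputed).any())   # False: every entry has been imputed
```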

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 151
3853 Optimization of Cacao Fermentation in Davao, Philippines Using a Sustainable Method

Authors: Ian Marc G. Cabugsa, Kim Ryan Won, Kareem Mamac, Manuel Dee, Merlita Garcia

Abstract:

An optimized cacao fermentation technique was developed for the cacao farmers of Davao City, Philippines. Cacao samples with weights ranging from 150-250 kilograms were collected from various cacao farms in Davao City and Zamboanga City, Philippines. Different fermentation techniques were tested, varying the design of the sweat box, pre-fermentation conditioning, the number of days for fermentation, and the number of turns. As the beans fermented, their temperature was regularly monitored using a digital thermometer. The resultant cacao beans were assessed by physical and chemical means. For the physical assessment, the bean cut test, bean count test and sensory test were used; for the chemical assessment, theobromine, caffeine and antioxidants (as quercetin equivalents) were quantified. Theobromine and caffeine were analyzed by an HPLC method, while the antioxidants were analyzed spectrometrically. To arrive at the best fermentation procedure, the different assessments were given priority coefficients, with the physical tests (taste, cut and bean count) given priority over the results of the chemical tests. The result of the study is an optimized fermentation protocol that is readily adaptable and transferable to any cacao cooperative or group in Mindanao, or in the Philippines as a whole.

Keywords: cacao, fermentation, HPLC, optimization, Philippines

Procedia PDF Downloads 452
3852 An Indoor Guidance System Combining Near Field Communication and Bluetooth Low Energy Beacon Technologies

Authors: Rung-Shiang Cheng, Wei-Jun Hong, Jheng-Syun Wang, Kawuu W. Lin

Abstract:

Users rely increasingly on Location-Based Services (LBS) and automated navigation/guidance systems. However, while such services are easily implemented in outdoor environments using Global Positioning System (GPS) technology, a requirement still exists for accurate localization and guidance schemes in indoor settings. Accordingly, the present study proposes a methodology based on GPS, Bluetooth Low Energy (BLE) beacons, and Near Field Communication (NFC) technology. By establishing map information and designing the routing algorithm, this study develops a smartphone guidance system for both indoor and outdoor environments, with the aim of providing users a smarter everyday experience. The presented system is implemented on a smartphone and evaluated in a student campus environment. The experimental results confirm the ability of the presented app to switch automatically from outdoor mode to indoor mode and to guide the user to the requested target destination via the shortest possible route.
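The shortest-route step can be sketched directly. Below is a standard Dijkstra implementation over a toy waypoint graph in which nodes stand in for beacon/NFC-tagged locations and edge weights for walking distances (all values hypothetical):

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path by Dijkstra's algorithm on a weighted graph given
    as {node: [(neighbour, weight), ...]}. Returns (path, distance)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target          # reconstruct route backwards
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]

# Toy corridor graph: nodes are beacon/NFC waypoints, weights are metres.
g = {'entrance': [('hall', 10), ('stairs', 4)],
     'stairs':   [('hall', 3)],
     'hall':     [('room101', 7)],
     'room101':  []}
print(dijkstra(g, 'entrance', 'room101'))
# (['entrance', 'stairs', 'hall', 'room101'], 14.0)
```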

Keywords: beacon, indoor, BLE, Dijkstra algorithm

Procedia PDF Downloads 302
3851 Optimal Design of InGaP/GaAs Heterojunction Solar Cell

Authors: Djaafar F., Hadri B., Bachir G.

Abstract:

We studied the influence of temperature, thickness, molar fraction and doping of the various layers (emitter, base, BSF and window) on the performance of a photovoltaic solar cell. In a first stage, we optimized the performance of the InGaP/GaAs dual-junction solar cell while varying its operating temperature from 275 K to 375 K in increments of 25 K, using the Silvaco TCAD virtual wafer fabrication tool. The optimization at 300 K led to the following results: Icc = 14.22 mA/cm2, Voc = 2.42 V, FF = 91.32%, η = 22.76%, which are close to those found in the literature. In a second stage, we varied the molar fraction of the different layers as well as their thickness and the doping of both emitters and bases, and we recorded the result of each variation until obtaining an optimal efficiency of the proposed solar cell at 300 K of Icc = 14.35 mA/cm2, Voc = 2.47 V, FF = 91.34%, and η = 23.33% for an In(1-x)Ga(x)P molar fraction x = 0.5. Eliminating the BSF layer on the back face of the cell produced a remarkable improvement in the short-circuit current (Icc = 14.70 mA/cm2), but the open-circuit voltage Voc and the efficiency η decreased to 1.46 V and 11.97%, respectively. We could therefore determine the critical parameters of the cell and optimize its various technological parameters to obtain the best performance for a dual-junction solar cell. This work opens new prospects in the photovoltaic field. Such structures will simplify the manufacturing processes of the cells and reduce costs while producing high photovoltaic conversion efficiency.

Keywords: modeling, simulation, multijunction, optimization, silvaco ATLAS

Procedia PDF Downloads 622
3850 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof for digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and proving the ownership of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and is hence adopted for the embedding process. An option to encrypt the text watermark before embedding is also suggested (in case this is required by some applications); the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed improvement of more than double the speed of the other systems considered (such as the least-significant-bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR), with low mean square error values, for watermarking purposes.
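For reference, the sketch below converts RGB to the YIQ space with the usual (rounded) NTSC transform and back; embedding would then scatter the text bits in the chrominance planes, a step not shown here:

```python
import numpy as np

# Rounded NTSC RGB -> YIQ transform: Y is luminance, I and Q carry
# chrominance, where noise-like perturbations are least perceptible.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """img: (H, W, 3) float RGB in [0, 1] -> (H, W, 3) YIQ."""
    return img @ RGB2YIQ.T

def yiq_to_rgb(img):
    return img @ np.linalg.inv(RGB2YIQ).T

rng = np.random.default_rng(0)
rgb = rng.random((4, 4, 3))
back = yiq_to_rgb(rgb_to_yiq(rgb))
print(np.allclose(rgb, back))   # True: the transform round-trips exactly
```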

Keywords: steganography, watermarking, time complexity measurements, private keys

Procedia PDF Downloads 143
3849 Optimization of Alkali Assisted Microwave Pretreatments of Sorghum Straw for Efficient Bioethanol Production

Authors: Bahiru Tsegaye, Chandrajit Balomajumder, Partha Roy

Abstract:

The limited supply of fossil fuels and their related negative environmental consequences are driving researchers to find sustainable sources of energy. Lignocellulosic biomass such as sorghum straw is considered among the cheap, renewable and abundantly available sources of energy. However, the conversion of lignocellulosic biomass to bioenergy such as bioethanol is hindered by the recalcitrant nature of lignin in the biomass. Therefore, removal of lignin is a vital step in converting lignocellulose to renewable energy. The aim of this study is to optimize microwave pretreatment conditions using Design-Expert software to remove lignin and to release the maximum possible polysaccharides from sorghum straw for efficient hydrolysis and fermentation. A sodium hydroxide concentration of 0.5-1.5% (v/v), a pretreatment time of 5-25 minutes and a pretreatment temperature of 120-200°C were considered for depolymerizing sorghum straw. The effect of pretreatment was studied by analyzing the compositional changes before and after pretreatment, following the renewable energy laboratory procedure. Analysis of variance (ANOVA) was used to test the significance of the model used for optimization. About 32.8%-48.27% hemicellulose solubilization, 53%-82.62% cellulose release, and 49.25%-78.29% lignin solubilization were observed during microwave pretreatment. Pretreatment for 10 minutes with an alkali concentration of 1.5% and a temperature of 140°C released the maximum cellulose and lignin. At this optimal condition, a maximum of 82.62% cellulose release and 78.29% lignin removal was achieved. Sorghum straw pretreated at the optimal condition was subjected to enzymatic hydrolysis and fermentation. The efficiency of hydrolysis was measured by analyzing reducing sugars with the 3,5-dinitrosalicylic acid method. Reducing sugars of about 619 mg/g of sorghum straw were obtained after enzymatic hydrolysis. This study showed a significant amount of lignin removal and cellulose release at the optimal condition, which enhances the yield of reducing sugars as well as the ethanol yield. The study demonstrates the potential of microwave pretreatment for enhancing bioethanol yield from sorghum straw.

Keywords: cellulose, hydrolysis, lignocellulose, optimization

Procedia PDF Downloads 271
3848 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend hugely on sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome the disadvantages of conventional solar dryers. For the development of such robust hybrid dryers, optimization of the process parameters becomes extremely critical to ensure the quality and shelf life of the paddy grains. Investigating the moisture distribution profile within the grains becomes necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims to optimize the process parameters and to develop a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model for predicting the moisture profile. Furthermore, optimization of the process parameters (power level, air velocity and moisture content) was done using response surface methodology in Design-Expert software. The 3D finite element model for predicting moisture migration in a single kernel at every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W power, 2.75 m/s air velocity and 13% (wb) moisture content, with an optimum temperature, milling yield and drying time of 42°C, 62% and 86 min, respectively, at a desirability of 0.905. The above optimized conditions can be successfully used to dry paddy in the PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of the product. PV-integrated hybrid solar dryers can be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
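Although the study solves the full 3D FEM problem in COMSOL, the governing mass-transfer idea can be illustrated with an explicit finite-difference solution of Fickian moisture diffusion in a kernel idealized as a 1D slab; the diffusivity, geometry and moisture values below are assumptions chosen only to make the sketch run:

```python
import numpy as np

# Explicit finite-difference sketch of Fickian moisture diffusion in a
# kernel idealized as a 1-D slab. COMSOL solves the full 3-D FEM
# problem; all values here are illustrative assumptions.
D = 1e-10              # effective moisture diffusivity, m^2/s (assumed)
L = 1e-3               # half-thickness of the kernel, m (assumed)
nx, dt = 51, 0.05      # grid points and time step, s
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5, "explicit scheme stability limit"

M = np.full(nx, 0.31)  # initial moisture, kg water / kg dry matter
M_eq = 0.13            # surface moisture in equilibrium with drying air
for _ in range(int(86 * 60 / dt)):        # 86 min drying time
    M[-1] = M_eq                          # surface boundary condition
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2 * M[1:-1] + M[:-2])
    M[0] = M[1]                           # symmetry at the kernel centre
print(f"mean kernel moisture after drying: {M.mean():.3f} kg/kg")
```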

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 150
3847 Improving Human Hand Localization in Indoor Environment by Using Frequency Domain Analysis

Authors: Wipassorn Vinicchayakul, Pichaya Supanakoon, Sathaporn Promwong

Abstract:

Human hand localization is revisited using radar cross section (RCS) measurements with a minimum root mean square (RMS) error matching algorithm on a touchless keypad mock-up model. RCS and frequency transfer function measurements are carried out in an indoor environment over the frequency range from 3.0 to 11.0 GHz to cover Federal Communications Commission (FCC) standards. The touchless keypad model is tested at two different distances between the hand and the keypad. The initial distance of 19.50 cm is identical to the heights of the transmitting (Tx) and receiving (Rx) antennas, while the second distance is 29.50 cm from the keypad. Moreover, the effects of the Rx angle relative to the hand are considered as a human factor. The RCS input parameters are compared with power loss parameters at each frequency. The results show that the performance of the RCS input parameters at the second distance (29.50 cm) and 3 GHz is better than the others.
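The matching step is straightforward to sketch: given stored per-key RCS fingerprints, the measured vector is assigned to the key with the smallest RMS error. The fingerprints below are random stand-ins for measured templates:

```python
import numpy as np

def min_rms_match(measured, fingerprints):
    """Minimum-RMS-error matching: compare a measured RCS (or transfer
    function) vector against stored per-key templates and return the
    key whose template gives the smallest RMS error."""
    best_key, best_rms = None, np.inf
    for key, template in fingerprints.items():
        rms = np.sqrt(np.mean((measured - template) ** 2))
        if rms < best_rms:
            best_key, best_rms = key, rms
    return best_key, best_rms

# Hypothetical fingerprints: mean RCS per frequency bin for three keys.
rng = np.random.default_rng(3)
fingerprints = {k: rng.normal(size=64) for k in ('key1', 'key2', 'key3')}
measured = fingerprints['key2'] + rng.normal(scale=0.1, size=64)
print(min_rms_match(measured, fingerprints))   # ('key2', small RMS)
```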

Keywords: radar cross section, fingerprint-based localization, minimum root mean square (RMS) error matching algorithm, touchless keypad model

Procedia PDF Downloads 342
3846 Applying Hybrid Graph Drawing and Clustering Methods on Stock Investment Analysis

Authors: Mouataz Zreika, Maria Estela Varua

Abstract:

Stock investment decisions are often made based on current events in the global economy and the analysis of historical data. Visual representation could assist investors in gaining a deeper understanding of and better insight into stock market trends more efficiently. The trend analysis is based on long-term data collection. This study adopts a hybrid method that combines a clustering algorithm and a force-directed layout algorithm to overcome the scalability problem when visualizing large data. The method exemplifies the potential relationships between stocks and determines the degree of their strength and connectivity, providing investors with another view of stock relationships for reference. Information derived from the visualization will also help them make informed decisions. The results of the experiments show that the proposed method is able to produce aesthetically pleasing visualizations, providing clearer views of connectivity and edge weights.
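A minimal sketch of the layout side using NetworkX is given below: stocks become nodes, sufficiently correlated pairs become weighted edges, and a force-directed (Fruchterman-Reingold) layout pulls strongly related stocks together; the correlation matrix is random stand-in data:

```python
import networkx as nx
import numpy as np

# Hypothetical correlation matrix between five stocks; an edge is kept
# only when the long-term return correlation exceeds a threshold, and
# its weight encodes the strength of the relationship.
tickers = ['AAA', 'BBB', 'CCC', 'DDD', 'EEE']
rng = np.random.default_rng(1)
R = rng.uniform(-1, 1, (5, 5))
R = (R + R.T) / 2
np.fill_diagonal(R, 1.0)

G = nx.Graph()
G.add_nodes_from(tickers)
for i in range(5):
    for j in range(i + 1, 5):
        if R[i, j] > 0.2:
            G.add_edge(tickers[i], tickers[j], weight=R[i, j])

# Force-directed layout: strongly correlated stocks land close together.
# A clustering step could then colour the resulting communities.
pos = nx.spring_layout(G, weight='weight', seed=42)
for node, xy in pos.items():
    print(node, np.round(xy, 2))
```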

Keywords: clustering, force-directed, graph drawing, stock investment analysis

Procedia PDF Downloads 302
3845 Non-Centrifugal Cane Sugar Production: Heat Transfer Study to Optimize the Use of Energy

Authors: Fabian Velasquez, John Espitia, Henry Hernadez, Sebastian Escobar, Jader Rodriguez

Abstract:

Non-centrifugal cane sugar (NCS) is a concentrated product obtained through the evaporation of the water content of sugarcane juice in open heat exchangers (OE). The heat supplied to the evaporation stages is obtained from cane bagasse through the thermochemical process of combustion, where the released thermal energy is transferred to the OE by the flue gas. Therefore, the optimization of energy usage becomes essential for the proper design of the production process. To optimize energy use, it is necessary to model and simulate the heat transfer between the combustion gases and the juice and to understand the major mechanisms involved in this heat transfer. The main objective of this work was to simulate the heat transfer phenomena between the flue gas and the open heat exchangers using a Computational Fluid Dynamics (CFD) model. The simulation results were compared to field-measured data. The numerical results for the temperature profile along the flue gas pipeline at the measurement points are in good accordance with field measurements. Thus, this study could be of special interest in the design of the NCS production process and the optimization of energy use.

Keywords: mathematical modeling, design variables, computational fluid dynamics, overall thermal efficiency

Procedia PDF Downloads 125
3844 Bi-objective Network Optimization in Disaster Relief Logistics

Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann

Abstract:

Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria like response time, meeting demand and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and their magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need to provide robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies, extending the general form of a covering location problem. The proposed model aims to minimize the underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions to address the risk of disruptions. We provide an empirical case study of the public authorities' emergency food storage strategy in Germany to illustrate the potential applicability of the model and to provide implications for decision-makers in a real-world setting. We also conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory levels determined by minimizing costs and maximizing demand satisfaction. The strategy has potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network, and it provides recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future works, such as additional network constraints and heuristic algorithms.
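To make the modelling idea concrete, the sketch below scalarizes the two objectives (a weighted sum of depot cost minus covered demand) in a toy covering-location model with PuLP; the sets, costs and coverage pairs are hypothetical, and the published model's scenarios, inventory and distribution layers are omitted:

```python
import pulp

# Toy bi-objective covering location model, scalarized by a weight lam
# that trades depot opening cost against covered demand.
depots = ['D1', 'D2', 'D3']
regions = ['R1', 'R2', 'R3', 'R4']
cost = {'D1': 100, 'D2': 80, 'D3': 120}
demand = {'R1': 50, 'R2': 30, 'R3': 40, 'R4': 20}
covers = {'D1': ['R1', 'R2'], 'D2': ['R2', 'R3'], 'D3': ['R3', 'R4']}
lam = 2.0

m = pulp.LpProblem('relief_covering', pulp.LpMinimize)
open_ = pulp.LpVariable.dicts('open', depots, cat='Binary')
served = pulp.LpVariable.dicts('served', regions, cat='Binary')

# Weighted-sum objective: opening cost minus lam * covered demand.
m += pulp.lpSum(cost[d] * open_[d] for d in depots) \
   - lam * pulp.lpSum(demand[r] * served[r] for r in regions)
for r in regions:
    # a region counts as served only if some open depot covers it
    m += served[r] <= pulp.lpSum(open_[d] for d in depots if r in covers[d])

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([d for d in depots if open_[d].value() > 0.5],
      sum(demand[r] * served[r].value() for r in regions))
```

Sweeping lam over a range of values traces an approximation of the Pareto frontier between cost and coverage, which is one standard way to present bi-objective trade-offs to decision-makers.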

Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks

Procedia PDF Downloads 79