Search results for: total capacity algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15727

14707 A Design of Elliptic Curve Cryptography Processor based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of modern communication, and improving the speed of encryption and decryption is a long-standing research problem. In this paper, we propose an elliptic curve cryptography processor architecture based on the SM2 prime field. In the hardware implementation, we optimized the algorithms at each stage of the architecture. For finite-field modular multiplication, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm reduces the 512-bit-wide product from the multiplication unit. A radix-4 extended Euclidean algorithm realizes the conversion between the affine coordinate system and the Jacobian projective coordinate system. For the parallel scheduling of point operations on the elliptic curve, we propose a three-level parallel structure for point addition and point doubling in the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we add mutual pre-operations to point addition and point doubling to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms; each 256-bit scalar multiplication took 0.275 ms, 32 times the performance of a CPU implementation (dual-core ARM Cortex-A9).
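Scalar point multiplication of the kind the processor accelerates can be sketched in software with the textbook double-and-add method. The curve below (y² = x³ + 2x + 2 over GF(17), a standard teaching example) is a hypothetical stand-in, not the SM2 curve, and affine coordinates are used for clarity where the hardware would use Jacobian coordinates:

```python
# Double-and-add scalar multiplication on a toy affine curve
# y^2 = x^3 + a*x + b over GF(p). Illustrative only -- real SM2
# hardware works on a 256-bit prime in projective coordinates.
P_MOD, A, B = 17, 2, 2          # textbook curve, not SM2 parameters

def point_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                          # P + (-P) = infinity
    if P == Q:                               # point doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add: computes k*P."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)                  # double
        if bit == '1':
            R = point_add(R, P)              # add
    return R
```

On this curve the base point G = (5, 1) has order 19, so `scalar_mult(19, G)` returns the point at infinity.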

Keywords: Elliptic curve cryptosystems, SM2, modular multiplication, point multiplication.

Procedia PDF Downloads 99
14706 Production and Quality Assessment of Antioxidant-Rich Biscuit Produced from Pearl Millet and Orange Peel Flour Blends

Authors: Oloniyo Rebecca Olajumoke

Abstract:

Unstable free-radical molecules oxidize cells throughout the body and cause oxidative stress, which has been implicated in the pathogenesis of many chronic diseases. Consumption of antioxidant-rich snacks could therefore help reduce free-radical activity in the body. This study aimed to produce antioxidant-rich biscuits from blends of underutilized pearl millet flour and orange peel flour (PMF and OPF, respectively), the latter an agricultural waste product. Biscuits were produced from PMF and OPF blends in the proportions 95:05, 90:10, 85:15, and 80:20, with 100% PMF as the control. The functional properties of the flours, as well as the antioxidant properties, physical characteristics, and consumer acceptability of the biscuits, were evaluated. The functional properties of the composite flours showed increases in oil absorption capacity (7.73-8.80 g/ml), water absorption capacity (6.82-7.21 g/ml), foaming (3.91-5.88 g/ml), and emulsification (52.85-58.82 g/ml). Increasing the proportion of OPF significantly (p<0.05) increased the antioxidant properties of the biscuits: the ferric reducing power (0.10-0.40 mg AAE/g), total flavonoids (1.20-8.12 mg QE/g), and ABTS radical scavenging (1.17-2.19 mmol TEAC/g) of the composite flours all rose relative to 100% PMF. The physical parameters of the biscuits differed significantly (p<0.05) from one another. Adding OPF to PMF reduced the weight, diameter, and spread ratio of the biscuits while increasing their height. OPF substitution at 5% (95:05) yielded an acceptable biscuit product. The significant increase in antioxidant properties with increasing OPF would therefore raise the nutritional value and potential health benefits of the biscuits.

Keywords: orange peel, biscuit, antioxidant, pearl millet

Procedia PDF Downloads 95
14705 Pale, Firm and Non-Exudative (PFN): An Emerging Major Broiler Breast Meat Group

Authors: Cintia Midori Kaminishikawahara, Fernanda Jéssica Mendonça, Moisés Grespan, Elza Iouko Ida, Massami Shimokomaki, Adriana Lourenço Soares

Abstract:

The quality of broiler breast meat is changing as a result of the continuing emphasis on genetic selection of birds for higher meat production efficiency, and consumers increasingly experience cooked products that are drier and less juicy. Breast meat has traditionally been classified as PSE (pale, soft, exudative), DFD (dark, firm, dry), or normal-color meat. Recently, however, color variations have been observed that do not match the functional properties expected of these classes. The objective of this work was therefore to report a new pale meat color group, characterized as Pale, Firm and Non-exudative (PFN), based on pH, color, functional properties, and microstructural evaluation. Breast fillet samples (n=1045) from a commercial line were classified into PSE (pH ≤ 5.8, L* ≥ 53.0), PFN (pH > 5.8 and L* ≥ 53.0), and Normal (pH > 5.8 and L* < 53.0) based on pH and L* values. A total of 30 samples of each group were then analyzed for water holding capacity (WHC) and shear force (SF). The incidence was 9.1% for PSE meat, 85.7% for PFN, and 5.2% for Normal meat. PSE meat presented lower WHC values (P ≤ 0.05), followed in sequence by PFN and Normal samples; the SF of fresh PFN was higher than that of PSE meat (P ≤ 0.05) and similar to Normal samples. Under optical microscopy, the cell diameter of PFN was 10% larger than that of PSE meat and similar to Normal meat. These preliminary results indicate an emerging group of breast meat, and Pale, Firm and Non-exudative meat should be considered the ideal broiler breast meat quality.
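The pH/L* classification rule used above is simple enough to express directly in code. This is a sketch of the stated thresholds only, not the authors' analysis pipeline; the handling of the pale/low-pH combination outside the three named groups is an assumption:

```python
def classify_breast_meat(pH, lightness):
    """Classify a fillet by the pH and L* thresholds given above."""
    if lightness >= 53.0:
        return "PSE" if pH <= 5.8 else "PFN"
    # Low-pH, dark fillets fall outside the three groups defined above.
    return "Normal" if pH > 5.8 else "unclassified"
```

For example, a fillet with pH 6.0 and L* 54.0 falls into the dominant PFN group.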

Keywords: broiler PSE meat, light microscopy, texture, water holding capacity

Procedia PDF Downloads 354
14704 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals of bioinformatics and theoretical chemistry and is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be inferred from secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the secondary structure from the primary structure, we require a set of parameters: any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. Of the many algorithms available for estimating these model parameters, the most popular is the Expectation Maximization (EM) algorithm. The model parameters are estimated from protein datasets such as RS126 using a Bayesian probabilistic method, the data being categorical. This work can be extended to compare the efficiency of the EM algorithm with that of other estimation algorithms, which would in turn lead to an efficient component for protein secondary structure prediction. The paper further provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to obtain accuracy greater than previously achieved.
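As a minimal illustration of how EM estimates parameters from categorical data with hidden labels, the classic two-coin example alternates an expectation step (softly assigning each trial to a hidden coin) with a maximization step (re-estimating each coin's heads probability from the expected counts). The trial data and initial values below are hypothetical, not drawn from RS126:

```python
# Each trial: (heads, flips) produced by one of two hidden coins A and B.
trials = [(5, 10), (9, 10), (8, 10), (4, 10), (7, 10)]  # hypothetical data

def em_two_coins(theta_a, theta_b, n_iter=50):
    """EM for two binomial components with unknown trial labels."""
    for _ in range(n_iter):
        sa_h = sa_n = sb_h = sb_n = 0.0
        for h, n in trials:
            # E-step: posterior responsibility of coin A for this trial
            # (binomial coefficients cancel in the ratio)
            la = theta_a ** h * (1 - theta_a) ** (n - h)
            lb = theta_b ** h * (1 - theta_b) ** (n - h)
            ra = la / (la + lb)
            sa_h += ra * h
            sa_n += ra * n
            sb_h += (1 - ra) * h
            sb_n += (1 - ra) * n
        # M-step: re-estimate heads probabilities from expected counts
        theta_a, theta_b = sa_h / sa_n, sb_h / sb_n
    return theta_a, theta_b
```

Starting from (0.6, 0.5), the estimates separate into a high-bias and a low-bias coin; the same E/M alternation underlies transition and emission probability estimation in sequence models.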

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 481
14703 Design of EV Steering Unit Using AI Based on Estimate and Control Model

Authors: Seong Jun Yoon, Jasurbek Doliev, Sang Min Oh, Rodi Hartono, Kyoojae Shin

Abstract:

Electric power steering (EPS), now commonly used in electric vehicles, is an electrically driven steering device. Compared to hydraulic systems, EPS offers advantages such as simpler system components, easier maintenance, and improved steering performance. However, because the EPS system is a nonlinear model, controller design is difficult. To address this, various machine learning and artificial intelligence approaches, notably artificial neural networks (ANN), have been applied; an ANN can effectively determine relationships between inputs and outputs in a data-driven manner. This research explores two main areas: designing an EPS identifier using an ANN trained with the backpropagation (BP) algorithm, and enhancing the EPS system controller with an ANN trained with the Levenberg-Marquardt (LM) algorithm. The proposed ANN-based BP identifier shows superior performance and accuracy compared to linear transfer function estimators, while the LM-based controller offers better input angle reference tracking and faster response times than traditional PID controllers. Overall, the proposed ANN methods show significant promise for improving EPS system performance.
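The backpropagation identifier idea can be sketched with a one-hidden-layer network trained by stochastic gradient descent. The toy target function, network size, and learning rate below are assumptions for illustration, not the paper's EPS plant model:

```python
import math
import random

random.seed(0)
H = 8                                             # hidden units (assumed)
w1 = [random.uniform(-1, 1) for _ in range(H)]    # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]    # hidden -> output weights
b2 = 0.0
# Hypothetical plant samples: identify y = x^2 on [-1, 1]
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]

def forward(x):
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

def train(epochs=800, lr=0.02):
    global b2
    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            e = out - y                           # output error
            for i in range(H):
                # hidden-layer gradient uses the pre-update w2[i]
                grad_h = e * w2[i] * (1 - h[i] ** 2)
                w2[i] -= lr * e * h[i]
                w1[i] -= lr * grad_h * x
                b1[i] -= lr * grad_h
            b2 -= lr * e
```

Calling `train()` drives the mean squared identification error down from its random-initialization value, which is the same criterion a transfer-function estimator would be judged on.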

Keywords: ANN backpropagation modelling, electric power steering, transfer function estimator, electrical vehicle driving system

Procedia PDF Downloads 44
14702 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The correct allocation of improvement programs has attracted growing interest in recent years. Because their resources are limited, companies must ensure that financial resources are directed to the right workstations in order to remain effective and survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources; this is the research gap studied in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, owing to their importance for lead time reduction, were the mean time between failures and the mean time to repair, with lead time reduction as the output measure of the simulations. Ten strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity constrained resources, and (x) time between failures improvement directed at the two capacity constrained resources. The ten strategies are variations of the three main strategies for improvement programs: focused, distributed, and hybrid.
Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results; when a large investment in the capacity constrained resources is not possible, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provide the best improvement strategies for the different allocation strategies and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair and the hybrid time between failures strategies delivered better results than the corresponding distributed strategies. The main limitations of this study concern the particular flow shop analyzed; future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
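The link between MTBF, MTTR, and station capacity can be sketched through the standard availability formula A = MTBF / (MTBF + MTTR), which scales a station's effective rate. The two-CCR line below is a hypothetical illustration of comparing a focused against a distributed MTTR improvement, not the company's data:

```python
def availability(mtbf, mttr):
    """Fraction of time a station is up: MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def effective_rate(base_rate, mtbf, mttr):
    """Effective capacity of a station after accounting for failures."""
    return base_rate * availability(mtbf, mttr)

# Hypothetical two-CCR line (base rate 10/h, MTBF 90 h, MTTRs 20 h and 8 h).
# Focused: halve the MTTR of the worst CCR (20 -> 10).
focused = min(effective_rate(10.0, 90.0, 10.0),
              effective_rate(10.0, 90.0, 8.0))
# Distributed: cut both MTTRs by 25% (20 -> 15, 8 -> 6).
distributed = min(effective_rate(10.0, 90.0, 15.0),
                  effective_rate(10.0, 90.0, 6.0))
```

Because the line rate is the minimum of the two effective rates, concentrating the improvement on the worse CCR lifts the bottleneck more in this example, which mirrors the focused-strategy finding above.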

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 124
14701 Hybrid Structure Learning Approach for Assessing the Phosphate Laundries Impact

Authors: Emna Benmohamed, Hela Ltifi, Mounir Ben Ayed

Abstract:

The Bayesian Network (BN) is one of the most efficient classification methods and is widely used in several fields (e.g., medical diagnosis, risk analysis, bioinformatics research). A BN is a probabilistic graphical model that provides a formalism for reasoning under uncertainty, and this classification method has a high performance rate in extracting new knowledge from data. Constructing the model consists of two phases: structure learning and parameter learning. For structure learning, the K2 algorithm is one of the representative data-driven algorithms, based on a score-and-search approach; in addition, integrating expert knowledge into the structure learning process allows higher accuracy to be obtained. In this paper, we propose a hybrid approach combining an improvement of the K2 algorithm, called K2 for Parents and Children search (K2PC), with an expert-driven method for learning the structure of the BN. Evaluation of the experimental results on well-known benchmarks shows that our K2PC algorithm performs better in terms of correct structure detection. A real application of our model demonstrates its efficiency in analyzing the impact of phosphate laundry effluents on the watershed in the Gafsa area (southwestern Tunisia).
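The score-and-search idea behind K2 can be sketched with the Cooper-Herskovits score and a greedy parent search over a fixed node ordering. This is the classic K2 baseline, not the K2PC variant proposed here, and the tiny binary dataset is hypothetical:

```python
from itertools import product
from math import lgamma

# Tiny categorical dataset (rows = cases, columns = binary variables).
# Variable 1 copies variable 0, so K2 should pick 0 as its parent.
DATA = [(a, a, c) for a in (0, 1) for c in (0, 1) for _ in range(5)]
ARITY = [2, 2, 2]

def k2_log_score(node, parents):
    """Cooper-Herskovits log marginal likelihood of one family."""
    r = ARITY[node]
    score = 0.0
    for cfg in product(*(range(ARITY[p]) for p in parents)):
        rows = [row for row in DATA
                if all(row[p] == v for p, v in zip(parents, cfg))]
        counts = [sum(1 for row in rows if row[node] == k) for k in range(r)]
        score += lgamma(r) - lgamma(len(rows) + r)
        score += sum(lgamma(c + 1) for c in counts)
    return score

def k2_parents(node, order, max_parents=2):
    """Greedily add the predecessor that most improves the score."""
    parents, best = [], k2_log_score(node, [])
    candidates = order[:order.index(node)]
    while len(parents) < max_parents:
        scored = [(k2_log_score(node, parents + [p]), p)
                  for p in candidates if p not in parents]
        if not scored:
            break
        s, p = max(scored)
        if s <= best:
            break
        parents.append(p)
        best = s
    return parents
```

With ordering [0, 1, 2], the search recovers variable 0 as the sole parent of variable 1 and leaves the independent variable 2 parentless.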

Keywords: Bayesian network, classification, expert knowledge, structure learning, surface water analysis

Procedia PDF Downloads 128
14700 A Supervised Goal Directed Algorithm in Economical Choice Behaviour: An Actor-Critic Approach

Authors: Keyvanl Yahya

Abstract:

This paper aims to find an algorithmic structure that can predict and explain economic choice behaviour, particularly under uncertainty (random policies), by adapting the prevalent actor-critic learning method to the requirements of neuroeconomics. After skimming some basics of neuroeconomics relevant to our discussion, we outline some of the important work done so far on simulating choice-making processes. Motivated by neurological findings suggesting that two specific functions, 'rewards' and 'beliefs', are executed through the basal ganglia down to sub-cortical areas, we offer a modified version of the actor-critic algorithm to shed light on the relation between these functions and, most importantly, to address a known challenge for actor-critic algorithms: the lack of inheritance or hierarchy, which prevents the system from evolving in continuous-time tasks where convergence might not emerge.
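A minimal actor-critic loop can be sketched on a two-choice task: the critic maintains a value estimate (a 'belief' about expected reward) and the actor adjusts softmax action preferences in proportion to the critic's prediction error. The task, payoffs, and learning rates are hypothetical, not the modified algorithm proposed here:

```python
import math
import random

random.seed(0)
REWARD = [1.0, 0.0]            # hypothetical payoffs of two choices

def softmax(h):
    z = [math.exp(v) for v in h]
    s = sum(z)
    return [v / s for v in z]

def actor_critic(steps=2000, alpha=0.1, beta=0.1):
    h = [0.0, 0.0]             # actor: action preferences
    v = 0.0                    # critic: expected-reward estimate
    for _ in range(steps):
        pi = softmax(h)
        a = 0 if random.random() < pi[0] else 1
        r = REWARD[a]
        delta = r - v          # reward prediction error from the critic
        v += beta * delta      # critic update
        for b in range(2):     # actor update (policy-gradient form)
            grad = (1.0 if b == a else 0.0) - pi[b]
            h[b] += alpha * delta * grad
    return softmax(h)
```

After training, the policy strongly prefers the rewarded choice; the prediction error `delta` plays the role ascribed to dopaminergic reward signals in the basal ganglia account above.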

Keywords: neuroeconomics, choice behaviour, decision making, reinforcement learning, actor-critic algorithm

Procedia PDF Downloads 397
14698 Towards Balancing a Medical Database by Using the Least Mean Square Algorithm

Authors: Kamel Belammi, Houria Fatrim

Abstract:

An imbalanced data set, a problem often found in real-world applications, can have a seriously negative effect on the classification performance of machine learning algorithms, and there have been many attempts at dealing with the classification of imbalanced data sets. In medical diagnosis classification, we often face an imbalanced number of data samples between classes, with too few samples in the rare classes. In this paper, we propose a learning method based on a cost-sensitive extension of the Least Mean Square (LMS) algorithm that penalizes the errors of different samples with different weights, together with some rules of thumb to determine those weights. After the balancing phase, we apply different classifiers (support vector machine (SVM), k-nearest neighbor (KNN), and multilayer neural networks (MNN)) to the balanced data set, and we compare the results obtained before and after the balancing method.
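The cost-sensitive LMS idea can be sketched as the standard LMS weight update scaled by a per-class cost, so errors on the minority class move the weights more. The data, inverse-frequency costs, and learning rate below are hypothetical illustrations, not the paper's rules of thumb:

```python
# Cost-sensitive LMS: w <- w + mu * cost(y) * error * x
# Hypothetical 1-D data with a bias feature: nine majority samples (-1)
# and one minority sample (+1) that overlaps the majority at x = 2.
DATA = ([([1.0, 0.0], -1)] * 3 +
        [([1.0, 1.0], -1)] * 3 +
        [([1.0, 2.0], -1)] * 3 +
        [([1.0, 2.0], +1)])

def train_lms(cost, mu=0.01, epochs=2000):
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in DATA:
            err = y - sum(wi * xi for wi, xi in zip(w, x))
            for i in range(len(w)):
                w[i] += mu * cost[y] * err * x[i]   # class-weighted step
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```

With inverse-frequency costs ({-1: 1, +1: 9}) the single minority sample at x = 2 ends up on the positive side, whereas with unit costs the majority samples at the same point dominate and it is misclassified.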

Keywords: multilayer neural networks, k-nearest neighbor, support vector machine, imbalanced medical data, least mean square algorithm, diabetes

Procedia PDF Downloads 532
14698 Influence of the Line Parameters in Transmission Line Fault Location

Authors: Marian Dragomir, Alin Dragomir

Abstract:

In this paper, two fault location algorithms for transmission lines are presented, both of which use the line parameters to estimate the distance to the fault. The first algorithm uses measurements from only one end of the line together with the positive- and zero-sequence parameters of the line, while the second uses measurements from both ends of the line and only the positive-sequence parameters. The algorithms were tested on a transmission grid implemented in MATLAB. In the first stage, a fault location baseline was established in which the algorithms estimate the fault locations using the exact line parameters. The positive- and zero-sequence resistance and reactance of the line were then recalculated for different ground resistivity values, and the fault locations were estimated again and compared with the baseline results. The results show that the algorithm using the zero-sequence impedance of the line is the most sensitive to modifications of the line parameters; the other algorithm is less sensitive.
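A single-ended estimate of the kind described can be sketched with the simple reactance method: for a fault at distance d through a resistance whose current is in phase with the measured current, d ≈ Im(V/I) / x₁, where x₁ is the per-km positive-sequence reactance. The line constants and phasors below are hypothetical, and zero-sequence compensation for ground faults is omitted:

```python
# Simple reactance method for single-ended fault location.
# Assumes the fault current is in phase with the measured current,
# so the fault resistance drops out of the imaginary part of V/I.
Z1_PER_KM = 0.03 + 0.4j      # hypothetical positive-sequence impedance, ohm/km

def locate_fault(v_meas, i_meas):
    """Distance to fault in km from one-end voltage/current phasors."""
    z_apparent = v_meas / i_meas
    return z_apparent.imag / Z1_PER_KM.imag

# Synthetic check: build the measured voltage for a fault 80 km away
# through 5 ohm of fault resistance, with the fault current equal to
# the measured current (the method's core assumption).
d_true, r_fault = 80.0, 5.0
i_meas = 1.2 - 0.5j                                  # measured current phasor
v_meas = d_true * Z1_PER_KM * i_meas + r_fault * i_meas   # V = d*z1*I + Rf*If
```

Under these assumptions `locate_fault(v_meas, i_meas)` recovers 80 km exactly; errors in the assumed sequence impedances (e.g., from ground resistivity) shift the estimate, which is the sensitivity studied above.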

Keywords: estimation algorithms, fault location, line parameters, simulation tool

Procedia PDF Downloads 355
14697 Computational Aerodynamic Shape Optimisation Using a Concept of Control Nodes and Modified Cuckoo Search

Authors: D. S. Naumann, B. J. Evans, O. Hassan

Abstract:

This paper outlines the development of an automated aerodynamic optimisation algorithm using a novel method of parameterising a computational mesh by employing user-defined control nodes. The shape boundary movement is coupled to the movement of the control nodes via a quasi-1D linear deformation. Additionally, a second-order smoothing step has been integrated to act on the boundary during the mesh movement, based on the change in its second derivative; this allows both linear and non-linear shape transformations, depending on the preference of the user. The domain mesh movement is then coupled to the shape boundary movement via a Delaunay graph mapping. A Modified Cuckoo Search (MCS) algorithm is used for optimisation within the prescribed design space, defined by the allowed range of control node displacement, and a finite volume compressible Navier-Stokes solver is used for aerodynamic modelling to predict design fitness. The resulting coupled algorithm is applied to a range of two-dimensional test cases, including the design of subsonic, transonic and supersonic intakes, and the optimisation approach is compared with more conventional optimisation strategies. Finally, the algorithm is tested on a three-dimensional wing optimisation case.
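The cuckoo search family of optimisers can be sketched as Lévy-flight perturbations of candidate solutions plus abandonment of the worst nests. This is the basic algorithm on a toy objective, not the Modified Cuckoo Search or the CFD-coupled fitness used here, and the parameter values are assumptions:

```python
import math
import random

random.seed(1)
DIM, LO, HI = 2, -5.0, 5.0

def fitness(x):                       # toy objective (sphere), not a CFD solve
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(n_nests=20, iters=300, pa=0.25, alpha=0.05):
    nests = [[random.uniform(LO, HI) for _ in range(DIM)]
             for _ in range(n_nests)]
    best = min(nests, key=fitness)
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around the nest, scaled by distance to the best
            trial = [min(HI, max(LO, x + alpha * levy_step() * (x - b)))
                     for x, b in zip(nests[i], best)]
            if fitness(trial) < fitness(nests[i]):
                nests[i] = trial
        nests.sort(key=fitness)
        for i in range(int(pa * n_nests)):        # abandon the worst nests
            nests[-1 - i] = [random.uniform(LO, HI) for _ in range(DIM)]
        best = min(nests + [best], key=fitness)
    return best
```

In the paper's setting, each `fitness` call would be a flow solve over the mesh deformed by the control-node displacements, which is why a low-evaluation-count optimiser matters.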

Keywords: mesh movement, aerodynamic shape optimization, cuckoo search, shape parameterisation

Procedia PDF Downloads 337
14696 Breast Cancer Diagnosing Based on Online Sequential Extreme Learning Machine Approach

Authors: Musatafa Abbas Abbood Albadr, Masri Ayob, Sabrina Tiun, Fahad Taha Al-Dhief, Mohammad Kamrul Hasan

Abstract:

Breast cancer (BC) is considered one of the most frequent causes of cancer death in women between the ages of 40 and 55. BC is diagnosed using digital images of fine needle aspirates (FNA) of both benign and malignant breast mass tumors. This work therefore proposes the Online Sequential Extreme Learning Machine (OSELM) algorithm for diagnosing BC from the tumor features of the breast mass. The work uses the Wisconsin Diagnosis Breast Cancer (WDBC) dataset, which contains 569 samples (357 in the benign class and 212 in the malignant class). Numerous assessment metrics were used to evaluate the proposed OSELM algorithm, including specificity, precision, F-measure, accuracy, G-mean, MCC, and recall. According to the experimental outcomes, the highest performance of the proposed OSELM was 97.66% accuracy, 98.39% recall, 95.31% precision, 97.25% specificity, 96.83% F-measure, 95.00% MCC, and 96.84% G-mean. The proposed OSELM algorithm demonstrates promising results in diagnosing BC, and its classification rate was superior to all the methods it was compared against.

Keywords: breast cancer, machine learning, online sequential extreme learning machine, artificial intelligence

Procedia PDF Downloads 111
14695 Optimal Bayesian Control of the Proportion of Defectives in a Manufacturing Process

Authors: Viliam Makis, Farnoosh Naderkhani, Leila Jafari

Abstract:

In this paper, we present a model and an algorithm for calculating the optimal control limit, average cost, sample size, and sampling interval of an optimal Bayesian chart for controlling the proportion of defective items, using a semi-Markov decision process approach. The traditional p-chart has been widely used for controlling the proportion of defectives in many kinds of production processes for years. It is well known that traditional non-Bayesian charts are not optimal, but very few optimal Bayesian control charts have been developed in the literature, mostly considering a finite horizon. The objective of this paper is to develop a fast computational algorithm to obtain the optimal parameters of a Bayesian p-chart. The decision problem is formulated in the partially observable framework, and the developed algorithm is illustrated by a numerical example.
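The Bayesian ingredient of such a chart can be sketched as a Beta-binomial update of the belief about the defective proportion, signalling when the posterior mean crosses a control limit. The prior, limit, and sample stream below are hypothetical illustrations, not the optimal parameters the paper's algorithm computes:

```python
# Beta-binomial belief update for the proportion of defectives.
# With prior Beta(a, b), observing x defectives in n items gives the
# posterior Beta(a + x, b + n - x).
def update(a, b, defectives, n):
    return a + defectives, b + n - defectives

def posterior_mean(a, b):
    return a / (a + b)

def run_chart(samples, a=1.0, b=19.0, limit=0.05):
    """Return the 1-based index of the first out-of-control sample, or None."""
    for t, (x, n) in enumerate(samples, start=1):
        a, b = update(a, b, x, n)
        if posterior_mean(a, b) > limit:
            return t
    return None
```

In the chart proper, the control limit, sample size, and sampling interval would be chosen by the semi-Markov decision process to minimise average cost rather than fixed a priori as here.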

Keywords: Bayesian control chart, semi-Markov decision process, quality control, partially observable process

Procedia PDF Downloads 319
14694 The Effects of Nanoemulsions Based on Commercial Oils for the Quality of Vacuum-Packed Sea Bass at 2±2°C

Authors: Mustafa Durmuş, Yesim Ozogul, Esra Balıkcı, Saadet Gokdoğan, Fatih Ozogul, Ali Rıza Köşker, İlknur Yuvka

Abstract:

Food scientists and researchers have devoted attention to developing new ways to improve the nutritional value of foods. Applying nanotechnology techniques to the food industry may allow the modification of food texture, taste, sensory attributes, coloring strength, processability, and stability during the shelf life of products. In this research, the effects of nanoemulsions based on commercial oils on vacuum-packed sea bass fillets stored at 2±2°C were investigated in terms of sensory quality, chemical quality (total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA), peroxide value (PV), free fatty acids (FFA), pH, and water holding capacity (WHC)), and microbiological quality (total anaerobic bacteria and total lactic acid bacteria). The physical properties of the emulsions (viscosity, droplet particle size, thermodynamic stability, refractive index, and surface tension) were determined. The nanoemulsion preparation method was based on the high-energy principle, using an ultrasonic homogenizer. Sensory analyses of the raw fish showed that the demerit points of the control group were higher than those of the treated groups. The sensory scores (odour, taste, and texture) of the cooked fillets decreased with storage time, especially in the control. Results of the chemical and microbiological analyses also showed that the nanoemulsions significantly (p<0.05) decreased the biochemical parameter values and bacterial growth during the storage period, thus improving the quality of vacuum-packed sea bass.

Keywords: quality parameters, nanoemulsion, sea bass, shelf life, vacuum packing

Procedia PDF Downloads 459
14693 Refactoring Object Oriented Software through Community Detection Using Evolutionary Computation

Authors: R. Nagarani

Abstract:

An intrinsic property of software in a real-world environment is its need to evolve, which is usually accompanied by increasing software complexity and deteriorating software quality, making software maintenance a tough problem. Refactoring is regarded as an effective way to address this problem, and many refactoring approaches at the method and class level have been proposed; research on refactoring at the package level, however, is comparatively scarce. This work presents a novel approach to refactoring the package structure of object-oriented software using genetic algorithm based community detection. It uses software networks to represent classes and their dependencies, and a constrained community detection algorithm to obtain optimized community structures in those networks, which correspond to optimized package structures. It finally provides a list of classes as refactoring candidates by comparing the optimized package structures with the real ones.
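The community-detection fitness a genetic algorithm would maximize can be sketched with Newman's modularity Q over a class-dependency graph. The tiny graph below (two 'packages' of three classes joined by one cross-package dependency) is hypothetical, and the constrained search itself is omitted:

```python
# Newman modularity of a partition of an undirected graph:
# Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * [c_i == c_j]
EDGES = [(0, 1), (0, 2), (1, 2),        # "package" {0, 1, 2}
         (3, 4), (3, 5), (4, 5),        # "package" {3, 4, 5}
         (2, 3)]                        # one cross-package dependency

def modularity(edges, community):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # fraction of edges internal to a community ...
    intra = sum(1 for u, v in edges if community[u] == community[v])
    q = intra / m
    # ... minus the fraction expected under random rewiring
    for c in set(community.values()):
        d_c = sum(d for n, d in degree.items() if community[n] == c)
        q -= (d_c / (2 * m)) ** 2
    return q
```

The natural two-community split scores Q = 5/14 ≈ 0.357, while lumping all six classes into one community scores 0; a genetic algorithm would evolve partitions toward the higher-Q structure, subject to the approach's constraints.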

Keywords: community detection, complex network, genetic algorithm, package, refactoring

Procedia PDF Downloads 418
14692 Improving Load Frequency Control of Multi-Area Power Systems by Considering Uncertainty Using an Optimized Type 2 Fuzzy PID Controller with the Harmony Search Algorithm

Authors: Mehrdad Mahmudizad, Roya Ahmadi Ahangar

Abstract:

This paper presents a method for designing type 2 fuzzy PID controllers to solve the Load Frequency Control (LFC) problem. The Harmony Search (HS) algorithm is used to tune the measurement factors and the uncertainty of the membership functions of Interval Type 2 Fuzzy Proportional Integral Differential (IT2FPID) controllers in order to reduce the frequency deviation resulting from load oscillations. The simulation results show that the performance of the proposed IT2FPID LFC, in terms of error, settling time, and robustness against different load oscillations, is superior to that of PID and Type 1 Fuzzy Proportional Integral Differential (T1FPID) controllers.
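Harmony search itself can be sketched independently of the fuzzy controller: each iteration builds a new candidate by drawing components from harmony memory with occasional pitch adjustment or random re-initialisation, keeping the candidate if it beats the worst stored harmony. The two-parameter objective below is a hypothetical tuning surrogate, not an LFC simulation:

```python
import random

random.seed(0)
LO, HI = -10.0, 10.0

def objective(x):              # surrogate cost, not a frequency-deviation model
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def harmony_search(dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000):
    memory = [[random.uniform(LO, HI) for _ in range(dim)]
              for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # draw from harmony memory
                v = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment
                    v += random.uniform(-bw, bw)
            else:                                 # random re-initialisation
                v = random.uniform(LO, HI)
            new.append(min(HI, max(LO, v)))
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):     # replace the worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=objective)
```

In the paper's setting, the decision vector would hold the IT2FPID tuning parameters and the objective would be the simulated frequency deviation under load oscillations.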

Keywords: load frequency control, fuzzy-pid controller, type 2 fuzzy system, harmony search algorithm

Procedia PDF Downloads 278
14691 Textile Cottage Industry: A Facilitator for Capacity Building and Youth Empowerment

Authors: Salihu Maiwada

Abstract:

The large-scale textile industry in Nigeria was at one time the second largest employer of labor after the government. Recent developments and changing circumstances have caused a serious decline in this sector, which consequently forced local textile factories to close and their workers to be retrenched; the categories of people worst hit were the youths and the middle-aged. This paper examines the potential of the textile cottage industry as a facilitator of capacity building and economic empowerment among Nigerian youths, focusing on its economic viability, persistence, and, above all, its potential for poverty reduction and self-employment. The methodology used in the study is the survey method, and the instrument used to collect the necessary information is the field interview. The results showed that textile cottage industries are flourishing and that Nigerian youths are engaged in the practice. In addition, the paper suggests areas requiring government financial intervention to facilitate the establishment and sustainability of the textile cottage industry, and it concludes with recommendations for the youths and for the government.

Keywords: capacity building, economic, empowerment, persistence, sustainability, youths

Procedia PDF Downloads 590
14690 Ovarian Hormones and Antioxidants Biomarkers in Dromedary Camels Synchronized with Controlled Intravaginal Drug Release/Ovsynch GPG Program during Breeding Season

Authors: Heba Hozyen, Ragab Mohamed, Amal Abd El Hameed, Amal Abo El-Maaty

Abstract:

This study aimed to investigate the effect of CIDR (controlled intravaginal drug release) and Ovsynch (gonadotropin-prostaglandin-gonadotropin, GPG) protocols for synchronizing the follicular waves of dromedary camels on ovarian hormones, oxidative stress, and conception during the breeding season. Twelve dark-colored dromedary camels were divided into two equal groups. The first group received a CIDR insert for 7 days, and blood samples were collected every other day from the day of CIDR insertion (day 0) until day 21. The other group was subjected to the GPG (Ovsynch) system, and blood samples were collected daily for 11 days. Progesterone (P4) and estradiol were assayed using commercial ELISA diagnostic EIA kits. Catalase (CAT), total antioxidant capacity (TAC), reduced glutathione (GHD), the lipid peroxidation product malondialdehyde (MDA), and nitric oxide (NO) were measured colorimetrically using a spectrophotometer. Results revealed that CIDR-treated camels had significantly higher P4 (P=0.0001), estradiol (P=0.0001), CAT (P=0.034), NO (P=0.016), and TAC (P=0.04) but significantly lower MDA (P=0.001) and GHD (P=0.003) than GPG-treated ones. Camels receiving CIDR had a higher conception rate (66.7%) than those treated with GPG (33%). In conclusion, camels treated with CIDR had a higher hormonal response and antioxidant capacity than those synchronized with GPG, which was positively reflected in their conception rate. The better response to CIDR and the higher conception rate compared to the GPG protocol recommend its use for future reproductive management in camels.

Keywords: antioxidants, camel, CIDR, season, steroid hormones

Procedia PDF Downloads 291
14689 Objects Tracking in Catadioptric Images Using Spherical Snake

Authors: Khald Anisse, Amina Radgui, Mohammed Rziza

Abstract:

Tracking objects in video sequences is a very challenging task in many computer vision applications, yet no prior article treats this topic in catadioptric vision. This paper is an attempt to describe a new approach to omnidirectional image processing based on inverse stereographic projection onto the half-sphere, using the spherical model proposed by Gayer et al. For object tracking, our work is based on the snake method, optimized with the greedy algorithm and with its operators adapted accordingly. The algorithm respects the deformed geometry of omnidirectional images through a spherical neighborhood, a spherical gradient, and a reformulation of the optimization algorithm on the spherical domain. This tracking method, which we call the "spherical snake", makes it possible to follow changes in the shape and size of an object across its displacements in the spherical image.
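The inverse stereographic lift from the image plane to the unit sphere can be sketched directly. This is the standard formula for projection from the north pole, given as an illustration of the spherical-domain reformulation rather than the paper's full camera model:

```python
def inverse_stereographic(u, v):
    """Lift an image-plane point (u, v) to the unit sphere.

    Inverse of stereographic projection from the north pole (0, 0, 1)
    onto the plane z = 0; the result always has unit norm.
    """
    s = u * u + v * v
    return (2 * u / (s + 1), 2 * v / (s + 1), (s - 1) / (s + 1))
```

Once contour points are lifted this way, neighborhoods and gradients for the snake's energy terms are evaluated on the sphere instead of the distorted image plane.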

Keywords: computer vision, spherical snake, omnidirectional image, object tracking, inverse stereographic projection

Procedia PDF Downloads 402
14688 Signal Restoration Using Neural Network Based Equalizer for Nonlinear Channels

Authors: Z. Zerdoumi, D. Benatia, D. Chicouche

Abstract:

This paper investigates the application of artificial neural networks to the problem of nonlinear channel equalization. The difficulties caused by channel distortions such as intersymbol interference (ISI) and nonlinearity can be overcome by nonlinear equalizers employing neural networks. It has been shown that multilayer perceptron (MLP) based equalizers significantly outperform linear equalizers. We present a multilayer perceptron based equalizer with decision feedback (MLP-DFE) trained with the back-propagation algorithm. The capacity of the MLP-DFE to deal with nonlinear channels is evaluated. Simulation results show that the MLP-based DFE significantly improves the restored signal quality, the steady-state mean square error (MSE), and the minimum bit error rate (BER) compared with its conventional counterpart.
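For reference, the kind of conventional linear counterpart the MLP-DFE is compared against can be sketched as a transversal equalizer trained with the LMS rule (an illustrative baseline only, not the paper's MLP-DFE; the tap count and step size are assumptions):

```python
def lms_equalizer(received, desired, taps=5, mu=0.05):
    """Train a linear transversal equalizer with the LMS update rule.
    Returns the final tap weights and the per-sample squared error."""
    w = [0.0] * taps          # tap weights, initialised to zero
    buf = [0.0] * taps        # delay line of recent received samples
    errors = []
    for x, d in zip(received, desired):
        buf = [x] + buf[:-1]                                 # shift new sample in
        y = sum(wi * bi for wi, bi in zip(w, buf))           # filter output
        e = d - y                                            # estimation error
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]     # LMS weight update
        errors.append(e * e)
    return w, errors
```

On a trivial identity channel the squared error decays toward zero as the first tap converges to one, which gives a quick sanity check before moving to nonlinear channels.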

Keywords: artificial neural network, signal restoration, nonlinear channel equalization, equalization

Procedia PDF Downloads 497
14687 Inventory Management to Minimize Storage Costs and Improve Delivery Time in a Pharmaceutical Industry

Authors: Israel Becerril Rosales, Manuel González De La Rosa, Gerardo Villa Sánchez

Abstract:

In this work, the effects of poor inventory management are analyzed, along with how it affects storage costs. The research began with a historical analysis of the stored products, their storage capacity, and their distribution. The results were not optimal: all raw materials (RM) were overstocked, only 61% of the warehouse capacity was used, and there was no specific place for each RM, which increased delivery times and made cycle counting difficult. These shortcomings led to selecting ABC inventory classification as the design alternative, so that each RM would be redistributed according to its consumption using the economic order quantity. The Delphi method was also used to ensure the practical applicability of the proposed tool, taking into account the comments and suggestions of the experts involved, as well as compliance with NOM-059-SSA1-2015 on good manufacturing practices for drugs. With the actions implemented, the utilization rate dropped from 61% to 32% of capacity, which shows that the warehouse had not been designed properly because there was no industrial engineering area.
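The ABC redistribution step can be sketched as a classification by cumulative consumption value (a minimal illustration; the 80%/95% cut-offs and the item values are assumptions, not the plant's actual data):

```python
def abc_classify(usage_value, a_cut=0.8, b_cut=0.95):
    """Classify items into A/B/C classes by cumulative share of total
    consumption value: A covers the top a_cut, B up to b_cut, C the rest."""
    items = sorted(usage_value.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(usage_value.values())
    classes, cum = {}, 0.0
    for name, value in items:
        cum += value
        share = cum / total
        if share <= a_cut:
            classes[name] = 'A'
        elif share <= b_cut:
            classes[name] = 'B'
        else:
            classes[name] = 'C'
    return classes
```

A-class materials (the few items concentrating most consumption value) then get priority locations and tighter cycle counts.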

Keywords: lead time, improve delivery, storage costs, inventory management

Procedia PDF Downloads 232
14686 Analyze and Visualize Eye-Tracking Data

Authors: Aymen Sekhri, Emmanuel Kwabena Frimpong, Bolaji Mubarak Ayeyemi, Aleksi Hirvonen, Matias Hirvonen, Tedros Tesfay Andemichael

Abstract:

Fixation identification, which involves isolating and identifying fixations and saccades in eye-tracking protocols, is an important aspect of eye-movement data processing that can have a big impact on higher-level analyses. However, fixation identification techniques are frequently discussed informally and rarely compared in any meaningful way. In this work, we implement fixation detection and analysis with two state-of-the-art algorithms. The first is the velocity-threshold fixation algorithm, which identifies fixations by thresholding sample-to-sample velocity. The second is U'n'Eye, a deep neural network algorithm for eye-movement detection. The goal of this project is to analyze and visualize eye-tracking data from a provided eye-gaze dataset. The data were collected in a scenario in which individuals were shown photos and asked whether or not they recognized them. The results of the two fixation detection approaches are contrasted and visualized in this paper.
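The velocity-threshold idea can be sketched in a few lines (an illustrative version; the units and the threshold value used in the actual study are assumptions):

```python
def classify_ivt(xs, ys, ts, v_thresh):
    """Velocity-threshold fixation detection: label each inter-sample
    interval as fixation (True) when point-to-point velocity is below
    v_thresh, else as saccade (False)."""
    labels = []
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        dist = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
        labels.append(dist / dt < v_thresh)
    return labels
```

Consecutive True intervals are then merged into fixation events whose duration and centroid feed the higher-level analysis.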

Keywords: human-computer interaction, eye-tracking, CNN, fixations, saccades

Procedia PDF Downloads 135
14685 Environmental Potential of Biochar from Wood Biomass Thermochemical Conversion

Authors: Cora Bulmău

Abstract:

Soil polluted by hydrocarbon spills is a major global concern today. In response to this issue, our experimental study compares an environmentally friendly method, the use of biochar, against a classical procedure, incineration of contaminated soil. Biochar is the solid product obtained through the pyrolysis of biomass; it can additionally be used as an additive to improve soil quality. The positive effect of adding biochar to soil comes from its capacity to adsorb and contain petroleum products within its pores. Taking into consideration the capacity of biochar to interact with organic contaminants, the purpose of the present study was to experimentally establish the effects of adding wood-biomass-derived biochar to a soil contaminated with oil. The contaminated soil was amended with biochar (10%) produced by pyrolysis under different operating conditions of the thermochemical process. After 25 days, the concentration of petroleum hydrocarbons in the soil treated with biochar was measured. Soxhlet extraction was adopted as the analytical method to estimate the concentrations of total petroleum hydrocarbons (TPH) in the soil samples; this technique was applied to the contaminated soil as well as to soils remediated by incineration or by adding biochar. Treating the soil with biochar obtained from the pyrolysis of birch wood led to a considerable decrease in the concentrations of petroleum products. The incineration treatments conducted under the experimental program to clean up the same oil-contaminated soil used temperatures of about 600°C, 800°C, and 1000°C and treatment times of 30 and 60 minutes. The experimental results revealed that the biochar method achieved efficiencies comparable to those of all the incineration processes applied for the shortest treatment time.

Keywords: biochar, biomass, remediation, soil, TPH

Procedia PDF Downloads 236
14684 A Hybrid Based Algorithm to Solve the Multi-objective Minimum Spanning Tree Problem

Authors: Boumesbah Asma, Chergui Mohamed El-amine

Abstract:

Since the multi-objective minimum spanning tree problem (MOST) has been shown to be NP-hard even with two criteria, we propose in this study a hybrid NSGA-II algorithm with an exact mutation operator, used only with low probability, to find an approximation to the Pareto front of the problem. In a connected graph G, a spanning tree T of G is a connected, cycle-free subgraph; if k edges of G\T are added to T, we obtain a partial graph H of G inducing a multi-objective spanning tree problem of reduced size compared to the initial one. With a low probability for the mutation operator, an exact method for solving the reduced MOST problem on the graph H is then used to generate several mutated solutions from a spanning tree T. Then, the selection operator of NSGA-II is activated to obtain the Pareto front approximation. Finally, an adaptation of the VNS metaheuristic is applied for further improvements on this front; it finds good individuals that balance diversification and intensification during the optimization search process. Experimental comparisons with an exact method show promising results and indicate that the proposed algorithm is efficient.
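The selection machinery of NSGA-II starts from a Pareto dominance test like the following (a minimal first-front extraction for criteria to be minimized; the full algorithm additionally ranks later fronts and computes crowding distances):

```python
def pareto_front(points):
    """Return the indices of non-dominated objective vectors,
    assuming all objectives are minimized."""
    def dominates(a, b):
        # a dominates b: no worse on every objective, strictly better on one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```

Applied to the objective vectors of candidate spanning trees, this yields the approximation front that the exact mutation and VNS steps then refine.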

Keywords: minimum spanning tree, multiple objective linear optimization, combinatorial optimization, non-sorting genetic algorithm, variable neighborhood search

Procedia PDF Downloads 91
14683 Numerical Analysis of Shallow Footing Rested on Geogrid Reinforced Sandy Soil

Authors: Seyed Abolhasan Naeini, Javad Shamsi Soosahab

Abstract:

The use of geosynthetic reinforcement within footing soils is a very effective and useful method to avoid the construction of costly deep foundations. This study investigated the use of geosynthetics for soil improvement based on numerical modeling using FELA software. The pressure-settlement behavior and bearing capacity ratio of a foundation on geogrid-reinforced sand are investigated, and the effects of parameters such as the number of geogrid layers and the vertical distance between them are studied for three soils of different relative density. The effects of the geometrical parameters of the reinforcement layers were studied to determine the optimal values for reaching the maximum bearing capacity. The results indicated that the optimum range of the distance ratio between the reinforcement layers is 0.5 to 0.6, and that beyond four geogrid layers, adding more layers has no significant effect on increasing the bearing capacity of the footing on geogrid-reinforced sand.

Keywords: geogrid, reinforced sand, FELA software, distance ratio, number of geogrid layers

Procedia PDF Downloads 148
14682 Ensuring Uniform Energy Consumption in Non-Deterministic Wireless Sensor Network to Protract Networks Lifetime

Authors: Vrince Vimal, Madhav J. Nigam

Abstract:

Wireless sensor networks have attracted much of the spotlight from researchers all around the world, owing to their extensive applicability in agricultural, industrial, and military fields. Energy-conserving node deployment strategies play a notable role in the effective implementation of wireless sensor networks. Clustering is the approach in wireless sensor networks that improves the energy efficiency of the network; the clustering algorithm needs an optimum size and number of clusters, as clustering, if not implemented properly, cannot effectively increase the life of the network. In this paper, an algorithm is proposed to address connectivity issues with the aim of ensuring uniform energy consumption of nodes in every part of the network. Simulation results showed that the proposed algorithm has an edge over existing algorithms in terms of throughput and network lifetime.
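A minimal sketch of energy-aware clustering is given below (illustrative only: it elects the highest-residual-energy nodes as cluster heads and assigns every other node to its nearest head; the proposed algorithm's actual head-election and connectivity rules are not reproduced here):

```python
import math

def cluster_nodes(nodes, energies, k):
    """Elect the k highest-energy nodes as cluster heads, then assign
    every remaining node to its nearest head by Euclidean distance."""
    heads = sorted(range(len(nodes)), key=lambda i: energies[i], reverse=True)[:k]
    assignment = {}
    for i, pos in enumerate(nodes):
        if i in heads:
            continue  # heads are not members of another cluster
        assignment[i] = min(heads, key=lambda h: math.dist(pos, nodes[h]))
    return heads, assignment
```

Rotating the head role as residual energies change is what spreads the relaying load and avoids isolated, prematurely drained nodes.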

Keywords: wireless sensor network (WSN), random deployment, clustering, isolated nodes, network lifetime

Procedia PDF Downloads 336
14681 An Anode Based on Modified Silicon Nanostructured for Lithium – Ion Battery Application

Authors: C. Yaddaden, M. Berouaken, L. Talbi, K. Ayouz, M. Ayat, A. Cheriet, F. Boudeffar, A. Manseri, N. Gabouze

Abstract:

Lithium-ion batteries (LIBs) are widely used in various electronic devices due to their high energy density. However, the performance of the anode material in LIBs is crucial for enhancing the battery's overall efficiency. This research focuses on developing a new anode material by modifying silicon nanostructures, specifically porous silicon nanowires (PSiNWs) and porous silicon nanoparticles (NPSiP), with silver nanoparticles (Ag) to improve the performance of LIBs. The aim of this research is to investigate the potential application of PSiNWs/Ag and NPSiP/Ag as anodes in LIBs and evaluate their performance in terms of specific capacity and Coulombic efficiency. The research methodology involves the preparation of PSiNWs and NPSiP using metal-assisted chemical etching and electrochemical etching techniques, respectively. The Ag nanoparticles are introduced onto the nanostructures through electrodissolution of the porous film and ultrasonic treatment. Galvanostatic charge/discharge measurements are conducted between 1 and 0.01 V to evaluate the specific capacity and Coulombic efficiency of both PSiNWs/Ag and NPSiP/Ag electrodes. The specific capacity of the PSiNWs/Ag electrode is approximately 1800 mAh g⁻¹, with a Coulombic efficiency of 98.8% at the first charge/discharge cycle. On the other hand, the NPSiP/Ag electrode exhibits a specific capacity of 2600 mAh g⁻¹. Both electrodes show a slight increase in capacity retention after 80 cycles, attributed to the high porosity and surface area of the nanostructures and the stabilization of the solid electrolyte interphase (SEI). This research highlights the potential of using modified silicon nanostructures as anodes for LIBs, which can pave the way for the development of more efficient lithium-ion batteries.

Keywords: porous silicon nanowires, silicon nanoparticles, lithium-ion batteries, galvanostatic charge/discharge

Procedia PDF Downloads 63
14680 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete

Authors: Farzad Danaei, Yilmaz Akkaya

Abstract:

In many civil engineering applications, especially the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variation of these properties under site conditions. Therefore, the specific heat capacity and the heat conductivity coefficient are two variables that are treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and that their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take fixed values for these two thermal properties. The study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that assuming a constant conductivity coefficient does not change the predicted maximum temperature, but a variable specific heat capacity must be taken into account; the variable specific heat capacity can also have a considerable effect on the time at which the central node of the concrete reaches its maximum temperature. In addition, GGBFS has more influence than fly ash.
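The effect described, a specific heat that decreases linearly with the degree of hydration, can be sketched with a lumped adiabatic model (illustrative numbers only; the baseline capacity, the slope, and the heat of hydration below are assumptions, not the study's calibrated values):

```python
def adiabatic_temperature_rise(q_total, alpha_steps, rho, c0, c_slope):
    """Lumped adiabatic temperature rise (K) with a hydration-dependent
    specific heat c(alpha) = c0 - c_slope * alpha, in J/(kg.K).
    q_total is the total heat of hydration per unit volume (J/m^3)."""
    rise, prev_alpha = 0.0, 0.0
    for alpha in alpha_steps:
        dq = q_total * (alpha - prev_alpha)   # heat released this step (J/m^3)
        c = c0 - c_slope * alpha              # current specific heat
        rise += dq / (rho * c)                # incremental temperature rise
        prev_alpha = alpha
    return rise
```

Comparing `c_slope = 0` (constant capacity) with a nonzero slope shows why the variable capacity matters: the decreasing capacity yields a higher predicted rise late in hydration.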

Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient

Procedia PDF Downloads 77
14679 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction

Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour

Abstract:

In this work we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired by airborne scanners over densely built urban areas. On one hand, high-resolution image data corrupted by noise from lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other, low-resolution LiDAR data in the form of a normalized Digital Surface Model (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece, results in a significant visual improvement of the 3D building block model.
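The mode-seeking step underlying mean shift can be sketched in one dimension (a toy illustration with a Gaussian kernel; the paper applies the joint, multi-channel version to RGB-z data):

```python
import math

def mean_shift_1d(points, x0, bandwidth, iters=50):
    """Iteratively move x toward the Gaussian-weighted mean of the data,
    the basic mode-seeking update of the mean shift algorithm."""
    x = x0
    for _ in range(iters):
        weights = [math.exp(-((p - x) / bandwidth) ** 2) for p in points]
        x = sum(w * p for w, p in zip(weights, points)) / sum(weights)
    return x
```

Because the update is pulled only by nearby samples, values near an intensity edge converge to their own side's mode, which is what gives mean shift its edge-preserving smoothing behavior.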

Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift

Procedia PDF Downloads 315
14678 Optimal MPPT Charging Battery System for Photovoltaic Standalone Applications

Authors: Kelaiaia Mounia Samira, Labar Hocine, Mesbah Tarek, Kelaiaia samia

Abstract:

Photovoltaic panels produce green power and, because of their availability across the globe, can supply isolated loads (sites far from the electrical network or difficult to access). Unfortunately, this energy remains very expensive. Most applications of this type of power need storage devices; lithium batteries are commonly used because of their high storage capability. Using a solar panel or an array of panels without a controller that can perform MPPT will often result in wasted power, which in turn requires installing more panels for the same power requirement. For devices that have the battery connected directly to the panel, this will also result in premature battery failure or capacity loss. In this paper, a modified P&O algorithm is proposed for the MPPT, which takes into account the battery's internal resistance versus temperature and its state of charge. Temperature variation and irradiation of the PV panel are, of course, also introduced.
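The perturb-and-observe principle behind the modified algorithm can be sketched as a simple hill climb on the panel's power-voltage curve (illustrative only; the paper's modification additionally folds in battery internal resistance and state of charge, which this sketch does not model):

```python
def perturb_and_observe(power_at, v0, step=0.1, iters=200):
    """Basic P&O MPPT: perturb the operating voltage by a fixed step and
    keep the direction while measured power increases; reverse otherwise."""
    v = v0
    p_prev = power_at(v)
    direction = 1
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:
            direction = -direction  # overshot the peak, reverse direction
        p_prev = p
    return v
```

On a concave P-V curve the operating point climbs to the maximum power point and then oscillates within one step of it, which is the well-known steady-state ripple of fixed-step P&O.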

Keywords: modeling, battery, MPPT, charging, PV panel

Procedia PDF Downloads 525