Search results for: invasive weed optimization algorithm
6001 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization
Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang
Abstract:
The product quality of multi-point dieless forming (MDF) is known to depend on the process parameters. Moreover, variation in friction and material properties may substantially worsen the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations. This is attained by a reliability-based robust optimization (RRO) technique that obtains the optimal process settings of the controllable parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed substantial dimpling, wrinkling, and shape error. FE analyses were then performed in the commercial software ABAQUS to obtain the correlation between the control process settings, the noise variation, and the product defects. The best prediction models are chosen from a family of metamodels to replace the computationally expensive FE simulations. A genetic algorithm (GA) is applied to determine the optimal settings of the control parameters. Monte Carlo analysis (MCA) is executed to determine how noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental results show that adjusting the control parameters in the final forming process leads to a considerably better-quality product.
Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling
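The MCA step propagates noise-parameter scatter through a surrogate of the process. A minimal sketch follows, assuming a hypothetical quadratic surrogate and Gaussian noise on friction and thickness; none of these functions or numbers are from the paper:

```python
import random
import statistics

def response(control, friction, thickness):
    # Hypothetical quadratic surrogate: "shape error" as a function of one
    # control parameter and two noise parameters (illustrative only).
    return (control - 2.0) ** 2 + 0.5 * friction + 0.3 * (thickness - 1.0) ** 2

def monte_carlo_spread(control, n_samples=10000, seed=42):
    """Estimate the mean and spread of the response under noise variation."""
    rng = random.Random(seed)
    samples = [
        response(control,
                 friction=rng.gauss(0.15, 0.02),    # assumed friction scatter
                 thickness=rng.gauss(1.0, 0.05))    # assumed thickness scatter
        for _ in range(n_samples)
    ]
    return statistics.mean(samples), statistics.stdev(samples)

mean_robust, std_robust = monte_carlo_spread(control=2.0)  # robust setting
mean_off, std_off = monte_carlo_spread(control=3.0)        # sensitive setting
```

A robust optimizer would then pick the control setting that keeps both the mean defect measure and its spread small.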
Procedia PDF Downloads 254

6000 Investigation of Soil Slopes Stability
Authors: Nima Farshidfar, Navid Daryasafar
Abstract:
In this paper, the seismic stability of reinforced soil slopes is studied using pseudo-dynamic analysis. Equilibrium equations applicable to every kind of failure surface are written using the horizontal slices method. In these equations, the balance of vertical and horizontal forces and moment equilibrium are fully satisfied. The failure surface is assumed to be log-spiral, and the non-linear equilibrium equations obtained for the system are solved using the Newton-Raphson method. Earthquake effects are applied to the problem as horizontal and vertical pseudo-static coefficients. To solve this problem, a code was developed in MATLAB, and the critical failure surface is found using a genetic algorithm. Finally, the results obtained in this paper are compared, and the effects of various parameters and of using pseudo-dynamic analysis to model seismic forces are presented.
Keywords: soil slopes, pseudo-dynamic, genetic algorithm, optimization, limit equilibrium method, log-spiral failure surface
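The core numerical step, solving the non-linear equilibrium equations with Newton-Raphson, can be sketched generically. The cubic residual below is a toy stand-in for the log-spiral equilibrium equations, which are not given in the abstract:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    raise RuntimeError("Newton-Raphson did not converge")

# Toy equilibrium residual: find the root of f(x) = x^3 - 2x - 5
root = newton_raphson(lambda x: x**3 - 2*x - 5,
                      lambda x: 3*x**2 - 2,
                      x0=2.0)
```

In the paper's setting, the unknowns would be the interslice forces and the safety factor, and the genetic algorithm would drive the search over candidate log-spiral surfaces.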
Procedia PDF Downloads 339

5999 Parameter Identification Analysis in the Design of Rock Fill Dams
Authors: G. Shahzadi, A. Soulaimani
Abstract:
This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis was utilized for numerical simulation. Polynomial and neural-network-based response surfaces were generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models was analyzed and compared by evaluating the root mean square error. A comparative study was done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without instrument uncertainty, defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro Quebec provided the data sets for the measured values of the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and readily solve non-convex and non-differentiable problems, was used to obtain an optimum value. Genetic algorithm (GA), particle swarm optimization (PSO) and differential evolution (DE) were compared on the minimization problem; although all these techniques take time to converge to an optimum value, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers for assessing dam performance and dam safety.
Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis
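The PSO minimization of the least-squares misfit can be sketched as follows. The two-parameter objective, bounds, and "measured" values are illustrative assumptions, not the Plaxis soil model or the Romaine-2 data:

```python
import random

def pso(objective, bounds, n_particles=30, n_iter=200, seed=1):
    """Minimal particle swarm optimization for a box-constrained problem."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # keep the particle inside the box constraints
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Least-squares misfit between "measured" and model-predicted displacements
measured = [1.2, 3.4]
best, err = pso(lambda p: sum((m - p[k]) ** 2 for k, m in enumerate(measured)),
                bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the actual study, `objective` would evaluate the surrogate response surface rather than this direct quadratic misfit.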
Procedia PDF Downloads 146

5998 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant mouse model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the end-points examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a "critical period" exists between 4 and 6 weeks in the MDX mouse, when there is extensive muscle damage that is largely subclinical but evident with sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to aggravate the muscle damage in the MDX mouse and thereby create a wider, more readily discernible and translatable therapeutic window for the testing of potential therapies for DMD. The study consisted of subjecting 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime consisting of twice-weekly treadmill sessions over a 12-month period. Each session was 30 minutes in duration, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance and locomotor activity were measured after the "critical period" at around 10 weeks of age, and again at 14 weeks, 6 months, 9 months and 12 months of age.
In addition, kinematic gait analysis was employed using a novel analysis algorithm in order to compare changes in gait and fine motor skills in diseased exercised MDX mice against exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including a lipid profile) of the most severely affected muscles, the gastrocnemius and the tibialis anterior, was also measured at the same time intervals. The results indicate that by aggravating or exacerbating the underlying muscle damage in the MDX mouse by exercise, a more pronounced and severe phenotype comes to light, and this can be picked up earlier by kinematic gait analysis. A reduction in mobility as measured in the open field is apparent neither at younger ages nor during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this damage through non-invasive means is an important factor to consider when performing preclinical efficacy studies in the MDX mouse.
Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
Procedia PDF Downloads 276

5997 Impact of Lobular Carcinoma in situ on Local Recurrence in Breast Cancer Treated with Breast Conservation Therapy: A Systematic Review and Meta-Analysis
Authors: Christopher G. Harris, Guy D. Eslick
Abstract:
Purpose: Lobular carcinoma in situ (LCIS) is a known risk factor for breast cancer whose significance is unclear when it is detected in association with invasive carcinoma. This meta-analysis aims to determine the impact of LCIS on local recurrence risk for individuals with breast cancer treated with breast conservation therapy, to help guide appropriate treatment strategies. Methods: We identified relevant studies from five electronic databases. Studies were deemed suitable for inclusion where they compared patients with invasive breast cancer and concurrent LCIS to those with breast cancer alone, all patients underwent breast conservation therapy (lumpectomy with adjuvant radiation therapy), and local recurrence was evaluated. Recurrence data were pooled by use of a random effects model. Results: From 1488 citations screened by our search, 8 studies were deemed suitable for inclusion. These studies comprised 908 cases and 10638 controls. Median follow-up time was 90 months. There was a significantly increased overall risk of local breast cancer recurrence for individuals with LCIS in association with breast cancer following breast conservation therapy [pOR 1.87; 95% CI 1.14-3.04; p = 0.012]. The risk of local recurrence was non-significantly increased at 5 years [pOR 1.09; 95% CI 0.48-2.48; p = 0.828] and 10 years [pOR 1.90; 95% CI 0.89-4.06; p = 0.096]. Conclusions: Individuals with LCIS in association with invasive breast cancer have an increased risk of local recurrence following breast conservation therapy. This supports consideration of aggressive local control of LCIS by way of completion mastectomy or re-excision for certain high-risk patients.
Keywords: breast cancer, breast conservation therapy, lobular carcinoma in situ, lobular neoplasia, local recurrence, meta-analysis
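Random-effects pooling of odds ratios is commonly done with inverse-variance weighting and a DerSimonian-Laird estimate of between-study variance. The sketch below uses that standard method with illustrative study-level odds ratios and variances, not the paper's data:

```python
import math

def pooled_or_random_effects(odds_ratios, variances):
    """DerSimonian-Laird random-effects pooling of log odds ratios.

    `variances` are the within-study variances of the log odds ratios.
    """
    y = [math.log(o) for o in odds_ratios]
    w = [1.0 / v for v in variances]
    # Fixed-effect (inverse-variance) pooled estimate and heterogeneity Q
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    # Re-weight including tau^2 and pool again
    w_re = [1.0 / (v + tau2) for v in variances]
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return math.exp(y_re)

# Illustrative study-level odds ratios and log-OR variances
pooled = pooled_or_random_effects([1.5, 2.1, 1.2], [0.04, 0.09, 0.06])
```

The pooled estimate always lies between the smallest and largest study-level odds ratios, since it is a weighted mean on the log scale.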
Procedia PDF Downloads 160

5996 Olive-Mill Wastewater and Organo-Mineral Fertilizers Application for the Control of Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato
Authors: Grazia Disciglio, Francesco Lops, Annalisa Tarantino, Emanuele Tarantino
Abstract:
The parasitic weed species Phelipanche ramosa (L.) Pomel is one of the major constraints on the tomato crop in the Apulia region (southern Italy). An experiment was conducted to investigate the effect of six organic compounds (olive-mill wastewater, Allil isothiocyanate®, Alfa plus K®, Radicon®, Rizosum Max®, Kendal Nem®) in a naturally infested field during the 2016 tomato growing season. A randomized block design with 3 replicates was adopted. Tomato seedlings were transplanted on 19 May 2016. During the growing cycle of the tomato, at 74, 81, 93 and 103 days after transplantation (DAT), the number of parasitic shoots (branched plants) that had emerged in each plot was determined. At harvest, on 13 September 2016, the major quantitative and qualitative yield parameters were determined, including marketable yield, mean fruit weight, dry matter, soluble solids, fruit colour, pH and titratable acidity. The results show that none of the treatments provided complete control of P. ramosa. However, among the products tested, the olive-mill wastewater, Alfa plus K®, Rizosum Max® and Kendal Nem® treatments applied to the soil gave numbers of emerged shoots significantly lower than Radicon®, and especially than the Allil isothiocyanate® treatment and the untreated control. Regarding the effect of the different treatments on the tomato productive parameters, the marketable yield was significantly higher in the same treatments that gave the lowest P. ramosa infestation. No significant differences in the other fruit characteristics were observed.
Keywords: processing tomato crop, Phelipanche ramosa, olive-mill wastewater, organic fertilizers
Procedia PDF Downloads 325

5995 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf
Authors: Simran Gill, Samuel Bartels
Abstract:
Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clover (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed; however, their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient management methods of weed control. The first phase of a two-phase experiment was conducted under controlled greenhouse conditions on a blend of cool-season turfgrasses (perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis) and creeping red fescue (Festuca rubra)) grown in plastic containers. It involved the application of nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, resulting in a total of 30 treatment combinations. The parameters assessed during weekly data collection included a visual quality rating of the weeds (nominal scale of 0-9), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of the grass, a visual quality rating of the grass (0-9), and the weight of dried grass clippings.
The results of this experiment, conducted over a period of 12 weeks with three applications at intervals of 4 weeks, showed that the combination of ammonium sulphate and iron sulphate appeared to be most effective in halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase of the experiment, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the results were comparable among the treatments in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of the improved visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.
Keywords: broadleaf weeds, nitrogen, iron, turfgrass
Procedia PDF Downloads 72

5994 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm
Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan
Abstract:
Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-means cluster-based heuristic approach, a hybrid genetic algorithm (GA) - simulated annealing (SA) approach, a hybrid K-means cluster-based heuristic-GA, and a hybrid K-means cluster-based heuristic-GA-SA are proposed for the open loop supply chain network problem. The study incorporated a uniform crossover operator and a combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms are tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-means cluster-based heuristic-GA-SA shows superior performance to the other methods for solving the open loop supply chain distribution network problem.
Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing
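The simulated-annealing half of the hybrid can be sketched generically. The tiny distribution-network instance below (four customers, three centres, made-up distances) is an illustrative stand-in for the paper's randomly generated data sets:

```python
import math
import random

def simulated_annealing(cost, state, neighbor, t0=10.0, t_min=1e-3,
                        alpha=0.95, steps_per_t=50, seed=7):
    """Generic simulated annealing: accept worse moves with prob exp(-d/T)."""
    rng = random.Random(seed)
    best = current = state
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = neighbor(current, rng)
            d = cost(cand) - cost(current)
            if d < 0 or rng.random() < math.exp(-d / t):
                current = cand
                if cost(current) < cost(best):
                    best = current
        t *= alpha   # geometric cooling schedule
    return best

# Toy assignment: pick, for each of four customers, one of three
# distribution centres so total distance is minimised (distances made up).
dist = [[4, 9, 2], [8, 1, 6], [3, 7, 5], [2, 6, 9]]
cost = lambda assign: sum(dist[i][c] for i, c in enumerate(assign))

def neighbor(assign, rng):
    # Reassign one randomly chosen customer to a random centre.
    a = list(assign)
    a[rng.randrange(len(a))] = rng.randrange(3)
    return tuple(a)

best = simulated_annealing(cost, (0, 0, 0, 0), neighbor)
```

In the hybrid schemes of the paper, the K-means clustering would first group customers, and the GA/SA search would then refine the assignment.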
Procedia PDF Downloads 165

5993 A Low-Latency Quadratic Extended Domain Modular Multiplier for Bilinear Pairing Based on Non-Least Positive Multiplication
Authors: Yulong Jia, Xiang Zhang, Ziyuan Wu, Shiji Hu
Abstract:
The calculation of bilinear pairings is the core of the SM9 algorithm, which relies on the underlying prime field algorithms and the quadratic extension field algorithms. Among the field algorithms, the modular multiplication operation is the most time-consuming part. Therefore, the underlying modular multiplication algorithm is optimized to maximize the operation speed of the bilinear pairings. This paper uses a modular multiplication method based on non-least-positive (NLP) representation, combined with Karatsuba and schoolbook multiplication, to improve the Montgomery algorithm. At the same time, according to the characteristics of multiplication in the quadratic extension field, a quadratic extension field Fp₂-NLP modular multiplication algorithm for bilinear pairings is proposed, which effectively reduces the operation time of modular multiplication in the quadratic extension field. The multiplication unit in the quadratic extension field is implemented using the SMIC 55 nm process, and two different implementation architectures are designed to cope with different application scenarios. Compared with the existing related literature, the output latency of this design can reach a minimum of 15 cycles. The shortest time for calculating the (AB+CD)r⁻¹ mod p form is 37.5 ns, and the overall area-time product (AT) is 11400. The final R-ate pairing hardware accelerator consumes 2670k equivalent logic gates and 1.8 ms of computing time in the 55 nm process.
Keywords: SM9, hardware, NLP, Montgomery
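The Karatsuba step the abstract refers to trades one recursive multiplication for a few additions (three half-size products instead of four). A pure-software sketch on arbitrary-precision integers, not the paper's hardware NLP/Montgomery datapath, looks like this:

```python
def karatsuba(x, y):
    """Karatsuba multiplication: 3 recursive products instead of 4."""
    if x < 10 or y < 10:
        return x * y
    # Split both operands around a common power of two.
    n = max(x.bit_length(), y.bit_length()) // 2
    base = 1 << n
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    # One extra product recovers both cross terms: xh*yl + xl*yh.
    mid = karatsuba(xh + xl, yh + yl) - hh - ll
    return (hh << (2 * n)) + (mid << n) + ll

p = karatsuba(123456789, 987654321)
```

In the hardware design, this splitting is applied at fixed word widths and the schoolbook method handles the small base-case multiplications.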
Procedia PDF Downloads 7

5992 A New Class of Conjugate Gradient Methods Based on a Modified Search Direction for Unconstrained Optimization
Authors: Belloufi Mohammed, Sellami Badreddine
Abstract:
Conjugate gradient methods have played a special role in solving large-scale optimization problems due to the simplicity of their iteration, their convergence properties and their low memory requirements. In this work, we propose a new class of conjugate gradient methods which ensures sufficient descent. Moreover, we propose a new search direction combined with the Wolfe line search technique for solving unconstrained optimization problems; a global convergence result for general functions is established provided that the line search satisfies the Wolfe conditions. Our numerical experiments indicate that the proposed methods are preferable and in general superior to the classical conjugate gradient methods in terms of efficiency and robustness.
Keywords: unconstrained optimization, conjugate gradient method, sufficient descent property, numerical comparisons
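As an illustration of the conjugate gradient iteration itself (using an exact line search on a quadratic, rather than the Wolfe line search the paper pairs with its new direction), a minimal linear CG looks like:

```python
def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Linear CG minimising 0.5*x'Ax - b'x for symmetric positive definite A."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = x0[:]
    r = [b[i] - Av_i for i, Av_i in enumerate(matvec(x))]  # residual = -gradient
    d = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs < tol:
            break
        Ad = matvec(d)
        alpha = rs / sum(d[i] * Ad[i] for i in range(n))   # exact line search
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        beta = rs_new / rs                                  # Fletcher-Reeves update
        d = [r[i] + beta * d[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

Nonlinear variants of the kind studied in the paper replace the exact step `alpha` with a Wolfe line search and modify the `beta` formula to guarantee sufficient descent.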
Procedia PDF Downloads 405

5991 Enhancing the Bionic Eye: A Real-time Image Optimization Framework to Encode Color and Spatial Information Into Retinal Prostheses
Authors: William Huang
Abstract:
Retinal prostheses are currently limited to low-resolution grayscale images that lack color and spatial information. This study develops a novel real-time image optimization framework and tools to encode maximum information into prostheses that are constrained by the number of electrodes. One key idea is to localize the main objects in images while reducing unnecessary background noise through region-contrast saliency maps. A novel color depth mapping technique was developed using MiniBatchKMeans clustering and color space selection. The resulting image was downsampled using bicubic interpolation to reduce image size while preserving color quality. Compared to current schemes, the proposed framework demonstrated better visual quality on the tested images. The use of the region-contrast saliency map improved efficacy by up to 30%. Finally, the computational time of this algorithm is less than 380 ms on the tested cases, making real-time retinal prostheses feasible.
Keywords: retinal implants, virtual processing unit, computer vision, saliency maps, color quantization
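The color-quantization step can be sketched with a plain k-means in place of scikit-learn's MiniBatchKMeans: cluster the pixels in RGB space, then map each pixel to its nearest palette colour. The pixel data and the simple deterministic initialization below are illustrative assumptions:

```python
def kmeans_palette(pixels, k, n_iter=20):
    """Plain k-means on RGB pixels (a stand-in for MiniBatchKMeans)."""
    # Simple deterministic init: k evenly spaced pixels.
    centers = [pixels[i * len(pixels) // k] for i in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            j = min(range(k),
                    key=lambda c: sum((p[i] - centers[c][i]) ** 2 for i in range(3)))
            clusters[j].append(p)
        centers = [tuple(sum(p[i] for p in cl) / len(cl) for i in range(3))
                   if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers

def quantize(pixels, centers):
    """Map each pixel to its nearest palette colour."""
    return [min(centers, key=lambda c: sum((p[i] - c[i]) ** 2 for i in range(3)))
            for p in pixels]

# Two obvious colour groups, reds and blues (illustrative data)
pixels = [(250, 10, 10), (240, 20, 5), (255, 0, 0),
          (10, 10, 250), (0, 0, 255), (20, 5, 240)]
palette = kmeans_palette(pixels, k=2)
quantized = quantize(pixels, palette)
```

In the framework described, `k` would be chosen from the electrode budget, and the quantized image would then be downsampled with bicubic interpolation.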
Procedia PDF Downloads 153

5990 An Inviscid Compressible Flow Solver Based on Unstructured OpenFOAM Mesh Format
Authors: Utkan Caliskan
Abstract:
Two types of numerical codes based on the finite volume method are developed to solve the compressible Euler equations and simulate the flow through a forward-facing step channel. Both algorithms use the AUSM+-up (Advection Upstream Splitting Method) scheme for flux splitting and a two-stage Runge-Kutta scheme for time stepping. In this study, the flux calculations differentiate between the algorithm based on the OpenFOAM mesh format, called the 'face-based' algorithm, and the basic algorithm, called the 'element-based' algorithm. The face-based algorithm avoids redundant flux computations and is also more flexible with hybrid grids. Moreover, some of OpenFOAM's preprocessing utilities can be used on the mesh. Parallelization of the face-based algorithm, for which atomic operations are needed due to the shared memory model, is also presented. For several mesh sizes, a 2.13x speed-up is obtained with the face-based approach over the element-based approach.
Keywords: cell centered finite volume method, compressible Euler equations, OpenFOAM mesh format, OpenMP
Procedia PDF Downloads 319

5989 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute by using vehicles with the ability to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within large cities and between cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own vertical take-off and landing (VTOL) designs, seeking to meet comfort, safety, low cost and flight time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to handle batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of a genetic optimization algorithm, and the final program can be adapted for take-off and flight level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are applied to give representativeness to the solution; results are highly dependent on these constraints. For the tested cases, the performance improvement ranged from 5 to 10% when changing the initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
Procedia PDF Downloads 115

5988 Passive Vibration Isolation Analysis and Optimization for Mechanical Systems
Authors: Ozan Yavuz Baytemir, Ender Cigeroglu, Gokhan Osman Ozgen
Abstract:
Vibration is an important issue in the design of various components in aerospace, marine and vehicular applications. In order not to compromise a component's function and operational performance, vibration isolation design, involving the selection of optimum isolator properties and isolator positioning, is a critical study. Given the growing need for vibration isolation system design, this paper presents two types of software capable of performing modal analysis, response analysis for both random and harmonic excitations, static deflection analysis and Monte Carlo simulations, in addition to parameter and location optimization for different types of isolation problem scenarios. In the literature, there is no study developing a software-based tool capable of performing all of these analysis, simulation and optimization studies in one platform simultaneously. In this paper, the theoretical system model is generated for a 6-DOF rigid body. The vibration isolation system of any mechanical structure can be optimized using a hybrid method involving both global search and gradient-based methods. After defining the optimization design variables, different types of optimization scenarios are listed in detail. Aware of the need for a user-friendly vibration isolation problem solver, two graphical user interfaces (GUIs) were prepared and verified using a commercial finite element analysis program, Ansys Workbench 14.0. Using the analysis and optimization capabilities of these GUIs, a real application used in an air platform is also presented as a case study at the end of the paper.
Keywords: hybrid optimization, Monte Carlo simulation, multi-degree-of-freedom system, parameter optimization, location optimization, passive vibration isolation analysis
Procedia PDF Downloads 565

5987 Accurate Algorithm for Selecting Ground Motions Satisfying Code Criteria
Authors: S. J. Ha, S. J. Baik, T. O. Kim, S. W. Han
Abstract:
For computing the seismic responses of structures, current seismic design provisions permit response history analysis (RHA), which can be used without limitations on height, seismic design category, and building irregularity. In order to obtain accurate seismic responses using RHA, it is important to use adequate input ground motions, and current seismic design provisions provide criteria for selecting them. In this study, an accurate and computationally efficient algorithm is proposed for selecting ground motions that satisfy the requirements specified in current seismic design provisions. The accuracy of the proposed algorithm is verified using single-degree-of-freedom systems with various natural periods and yield strengths. This study shows that the mean seismic responses obtained from RHA with seven and ten ground motions selected using the proposed algorithm produce errors within 20% and 13%, respectively.
Keywords: algorithm, ground motion, response history analysis, selection
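The abstract does not spell out the selection algorithm itself. One common baseline is a greedy search that grows the record set so the running mean spectrum tracks the target (code) spectrum; the sketch below, with toy three-period spectra, is that baseline rather than the authors' method:

```python
def select_motions(spectra, target, n_select):
    """Greedily pick records whose running mean spectrum best matches the target."""
    mse = lambda mean: sum((m - t) ** 2 for m, t in zip(mean, target)) / len(target)
    chosen = []
    remaining = list(range(len(spectra)))
    for _ in range(n_select):
        def score(idx):
            # Mean spectrum of the already-chosen records plus the candidate.
            picked = [spectra[i] for i in chosen] + [spectra[idx]]
            mean = [sum(col) / len(picked) for col in zip(*picked)]
            return mse(mean)
        best = min(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy response spectra sampled at three periods (illustrative values)
target = [1.0, 0.8, 0.5]
spectra = [[1.1, 0.9, 0.6],   # slightly above target
           [0.5, 0.4, 0.2],   # too weak
           [0.9, 0.7, 0.4],   # slightly below target
           [2.0, 1.6, 1.0]]   # too strong
picked = select_motions(spectra, target, n_select=2)
```

Here the two records bracketing the target are chosen, since their mean matches it; code criteria additionally impose lower bounds on the mean spectrum over a period range rather than a plain least-squares fit.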
Procedia PDF Downloads 286

5986 Study on Dynamic Stiffness Matching and Optimization Design Method of a Machine Tool
Authors: Lu Xi, Li Pan, Wen Mengmeng
Abstract:
The stiffness of each component has a different influence on the stiffness of the machine tool. Taking a five-axis gantry machining center as an example, we performed a modal analysis of the machine tool, then raised and lowered the stiffness of the pillar, slide plate, beam, ram and saddle to study the stiffness matching among these components, using as the criterion whether the stiffness of the modified machine tool changes by more than 50% relative to the stiffness of the original machine tool. The structural optimization of the machine tool can be realized by changing the stiffness of the components whose stiffness is mismatched. For example, the stiffness of the beam was mismatched. The natural frequencies of the first six orders of the beam increased by 7.70%, 0.38%, 6.82%, 7.96%, 18.72% and 23.13%, with the weight increased by 28 kg, leading to increases of 1.44%, 0.43% and 0.065% in the natural frequencies of the orders that had a great influence on the dynamic performance of the whole machine, which verified the correctness of the optimization method based on stiffness matching proposed in this paper.
Keywords: machine tool, optimization, modal analysis, stiffness matching
Procedia PDF Downloads 102

5985 Parameter Selection for Computationally Efficient Use of the Bfvrns Fully Homomorphic Encryption Scheme
Authors: Cavidan Yakupoglu, Kurt Rohloff
Abstract:
In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security and ciphertext expansion factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF and security at the same time. As a result of this optimization, we obtain an improved parameter set with better performance at a given security level, ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables on average ~5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, a well-known HE library.
Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE
Procedia PDF Downloads 157

5984 Wireless Battery Charger with Adaptive Rapid-Charging Algorithm
Authors: Byoung-Hee Lee
Abstract:
A wireless battery charger with an adaptive rapid-charging algorithm is proposed. The proposed wireless charger adopts a voltage regulation technique to reduce the number of power conversion steps. Moreover, based on battery models, an adaptive rapid-charging algorithm for Li-ion batteries is obtained. The rapid-charging performance of the proposed wireless battery charger and rapid-charging algorithm has been experimentally verified, showing more than 70% charging time reduction compared to conventional constant-current constant-voltage (CC-CV) charging, without degradation of battery lifetime.
Keywords: wireless, battery charger, adaptive, rapid-charging
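The conventional CC-CV baseline the charger is compared against can be simulated with a first-order cell model: constant current until the terminal voltage hits the limit, then constant voltage until the current tapers below a cutoff. The linear open-circuit-voltage curve and the cell parameters below are illustrative assumptions, not measured values:

```python
def cc_cv_charge(capacity_ah=3.0, i_cc=1.5, v_max=4.2, i_cutoff=0.15,
                 r_internal=0.05, dt_h=0.01):
    """Simulate CC-CV charging on a first-order battery model.

    Terminal voltage = OCV(soc) + I * r_internal, with a linear OCV
    (an illustrative model, not a characterised cell).
    Returns final state of charge and elapsed time in hours.
    """
    ocv = lambda soc: 3.0 + 1.2 * soc          # assumed linear OCV curve
    soc, i, t = 0.0, i_cc, 0.0
    while soc < 1.0:
        if ocv(soc) + i * r_internal >= v_max:
            # CV phase: reduce current to hold the terminal voltage at v_max
            i = (v_max - ocv(soc)) / r_internal
            if i <= i_cutoff:                  # taper-current termination
                break
        soc = min(1.0, soc + i * dt_h / capacity_ah)
        t += dt_h
    return soc, t

soc, hours = cc_cv_charge()
```

An adaptive scheme like the paper's would modulate the current profile from a battery model rather than holding a single CC level, which is where the reported charging time reduction comes from.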
Procedia PDF Downloads 378
5983 Frequency- and Content-Based Tag Cloud Font Distribution Algorithm
Authors: Ágnes Bogárdi-Mészöly, Takeshi Hashimoto, Shohei Yokoyama, Hiroshi Ishikawa
Abstract:
The spread of Web 2.0 has caused an explosion of user-generated content. Users can tag resources to describe and organize them. Tag clouds provide a rough impression of the relative importance of each tag within the overall cloud in order to facilitate browsing among numerous tags and resources. The goal of our paper is to enrich the visualization of tag clouds. A font distribution algorithm has been proposed to calculate a novel metric based on frequency and content, and to classify tags into classes from this metric based on a power law distribution and percentages. The suggested algorithm has been validated and verified on the tag cloud of a real-world thesis portal.
Keywords: tag cloud, font distribution algorithm, frequency-based, content-based, power law
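The classification idea can be illustrated with a frequency-only toy: bucket tags into font-size classes on a logarithmic (power-law-style) scale, so a few very frequent tags do not crush the rest into the smallest size. The paper's actual metric also incorporates tag content, which this sketch omits.

```python
import math

def font_class(freq, max_freq, n_classes=5):
    """Map a tag frequency to a font-size class in 1..n_classes using a
    log scale relative to the most frequent tag."""
    if max_freq <= 1 or freq <= 1:
        return 1
    scaled = math.log(freq) / math.log(max_freq)     # in (0, 1]
    return 1 + min(n_classes - 1, int(scaled * n_classes))

# Illustrative tag frequencies from a hypothetical portal
tags = {"optimization": 120, "algorithm": 60, "web": 9, "cloud": 3, "rare": 1}
top = max(tags.values())
sizes = {t: font_class(f, top) for t, f in tags.items()}
```

With these frequencies, "optimization" lands in the largest class and "rare" in the smallest, with the mid-frequency tags spread between.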
Procedia PDF Downloads 505
5982 Spare Part Inventory Optimization Policy: A Study Literature
Authors: Zukhrof Romadhon, Nani Kurniati
Abstract:
Availability of spare parts is critical to support maintenance tasks and the production system. Managing spare part inventory involves several parameters and objective functions, as well as the tradeoff between inventory costs and spare part availability. Several mathematical models and methods have been developed to optimize the spare part policy, and the many optimization models proposed by researchers need to be considered in order to identify other potential models. This work presents a review of pertinent literature on spare part inventory optimization and analyzes the gaps for future research. An initial search of scholarly journal databases under specific keywords related to spare parts found about 17K papers. Filtering was conducted based on five main aspects, i.e., replenishment policy, objective function, echelon network, lead time, and model solving, with the additional aspect of part classification. Future topics could be identified based on the number of papers that have not addressed specific aspects, including joint optimization of spare part inventory and maintenance.
Keywords: spare part, spare part inventory, inventory model, optimization, maintenance
Procedia PDF Downloads 62
5981 Fragment Domination for Many-Objective Decision-Making Problems
Authors: Boris Djartov, Sanaz Mostaghim
Abstract:
This paper presents a number-based dominance method. The main idea is to fragment the many attributes of the problem into subsets suitable for the well-established concept of Pareto dominance. Although similar methods can be found in the literature, they focus on comparing the solutions one objective at a time, while the focus of this method is to compare entire subsets of the objective vector. Given the nature of the method, it is computationally costlier than other methods and is thus geared more towards selecting an option from a finite set of alternatives, where each solution is defined by multiple objectives. The need for this method was motivated by dynamic alternate airport selection (DAAS). In DAAS, pilots, while en route to their destination, can find themselves in a situation where they need to select a new landing airport. In such a predicament, they need to consider multiple alternatives with many different characteristics, such as wind conditions, available landing distance, the fuel needed to reach it, etc. Hence, this method is primarily aimed at human decision-makers. Many methods within the field of multi-objective and many-objective decision-making rely on the decision-maker to initially provide the algorithm with preference points and weight vectors; however, this method aims to omit this very difficult step, especially when the number of objectives is large. The proposed method will be compared to the Favour (1 − k)-Dom and L-dominance (LD) methods. The test will be conducted using well-established test problems from the literature, such as the DTLZ problems. The proposed method is expected to outperform the currently available methods in the literature and hopefully provide future decision-makers and pilots with support when dealing with many-objective optimization problems.
Keywords: multi-objective decision-making, many-objective decision-making, multi-objective optimization, many-objective optimization
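The core idea can be sketched as follows: split the objective vector into index subsets ("fragments") and compare whole fragments with standard Pareto dominance. The tie-breaking rule used here (the alternative dominating on more fragments wins) is an illustrative assumption, not necessarily the paper's exact rule, and the airport objectives are invented.

```python
def pareto_dominates(u, v):
    """u Pareto-dominates v (minimization): no worse in every objective
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def fragment_dominates(u, v, fragments):
    """Compare u and v fragment by fragment; u wins overall if it
    Pareto-dominates v on more fragments than the reverse."""
    u_wins = sum(pareto_dominates([u[i] for i in f], [v[i] for i in f])
                 for f in fragments)
    v_wins = sum(pareto_dominates([v[i] for i in f], [u[i] for i in f])
                 for f in fragments)
    return u_wins > v_wins

# Airport-style alternatives, all objectives minimized:
# [extra fuel, wind risk, runway deficit, expected delay]
a = [500.0, 0.2, 0.0, 10.0]
b = [650.0, 0.4, 0.0, 12.0]
frags = [(0, 1), (2, 3)]
```

Here airport `a` dominates `b` on both fragments, so it is preferred without any preference weights from the decision-maker.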
Procedia PDF Downloads 91
5980 Classifying and Analysis 8-Bit to 8-Bit S-Boxes Characteristic Using S-Box Evaluation Characteristic
Authors: Muhammad Luqman, Yusuf Kurniawan
Abstract:
The S-Box is one of the non-linear parts of a cryptographic algorithm: its presence is needed to maintain the non-linearity of the algorithm, and modern cryptographic algorithms use an S-Box as part of their processing. Although several cryptographic algorithms today reuse theoretically secure and carefully constructed S-Boxes, there are evaluation characteristics that can measure the security properties of S-Boxes and hence of the corresponding primitives. Analysis of an S-Box is usually done by manual mathematical calculation. Several S-Boxes are presented only as a truth table, without any underlying mathematical algorithm, and it is then rather difficult to determine the strength of a truth-table S-Box. A comprehensive analysis should therefore be applied to the truth-table S-Box to determine its characteristics. Several important characteristics should be possessed by S-Boxes: nonlinearity, balancedness, algebraic degree, LAT, DAT, differential delta uniformity, correlation immunity, and the global avalanche criterion. We present a comprehensive tool that automatically calculates these characteristics and determines the strength of an S-Box. The comprehensive analysis is done as a deterministic process, producing a sequence of S-Box characteristics and giving advice for better S-Box construction.
Keywords: cryptographic properties, Truth Table S-Boxes, S-Boxes characteristic, deterministic process
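Two of the listed characteristics, balancedness and differential (delta) uniformity, can be computed directly from a truth table. The sketch below uses the well-known 4-bit PRESENT S-box as a small stand-in for an 8-bit table; the same loops work unchanged on 256-entry tables.

```python
def is_balanced(sbox, n_out):
    """Balancedness: every n_out-bit output value occurs equally often."""
    counts = [0] * (1 << n_out)
    for y in sbox:
        counts[y] += 1
    expected = len(sbox) >> n_out
    return all(c == expected for c in counts)

def differential_uniformity(sbox):
    """Maximum entry of the difference distribution table (DDT) over
    nonzero input differences; lower is stronger against differential
    cryptanalysis."""
    n = len(sbox)
    worst = 0
    for dx in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ dx] ^ sbox[x]] += 1
        worst = max(worst, max(counts))
    return worst

# The 4-bit PRESENT S-box, given as a truth table
PRESENT = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
           0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
```

The PRESENT S-box is a permutation (hence balanced) with differential uniformity 4, the optimum for a 4-bit bijective S-box.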
Procedia PDF Downloads 363
5979 Cash Flow Optimization on Synthetic CDOs
Authors: Timothée Bligny, Clément Codron, Antoine Estruch, Nicolas Girodet, Clément Ginet
Abstract:
Collateralized Debt Obligations are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge to optimize the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated on a single bought or sold tranche alone but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the correlation of defaults and the maturities. The Gaussian model used is not realistic in crisis situations. Besides, the present system does not handle buying or selling a portion of a tranche, only the whole tranche. However, the work provides the investor with relevant elements on what to buy and sell, and when.
Keywords: synthetic collateralized debt obligation (CDO), credit default swap (CDS), cash flow optimization, probability of default, default correlation, strategies, simulation, simplex
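The default-correlation machinery underlying such simulations can be sketched with the standard one-factor Gaussian model: conditional on a common market factor, names default independently, and tranche losses follow from the pool loss. All parameters below are illustrative; the premium-leg cash flows and the simplex tranche-combination step of the paper are not reproduced here.

```python
import random
from statistics import NormalDist

ND = NormalDist()

def simulate_tranche_loss(n_names=100, p=0.02, rho=0.3,
                          attach=0.03, detach=0.07, n_sims=5000, seed=1):
    """Monte Carlo expected loss (fraction of tranche notional) of a
    synthetic CDO tranche under the one-factor Gaussian copula, with
    zero recovery for simplicity."""
    rng = random.Random(seed)
    k = ND.inv_cdf(p)                      # default threshold Phi^{-1}(p)
    width = detach - attach
    total = 0.0
    for _ in range(n_sims):
        m = rng.gauss(0.0, 1.0)            # common market factor
        # Conditional default probability given the factor realization
        cond_p = ND.cdf((k - rho ** 0.5 * m) / (1.0 - rho) ** 0.5)
        defaults = sum(rng.random() < cond_p for _ in range(n_names))
        pool_loss = defaults / n_names
        tranche_loss = min(max(pool_loss - attach, 0.0), width) / width
        total += tranche_loss
    return total / n_sims

loss = simulate_tranche_loss()
```

Raising `rho` fattens the tail of the pool-loss distribution, which is exactly why default correlation drives the relative value of junior versus senior tranches.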
Procedia PDF Downloads 275
5978 Second Order Optimality Conditions in Nonsmooth Analysis on Riemannian Manifolds
Authors: Seyedehsomayeh Hosseini
Abstract:
Much attention has been paid over centuries to understanding and solving the problem of minimization of functions. Compared to linear programming and nonlinear unconstrained optimization problems, nonlinear constrained optimization problems are much more difficult. Since the procedure of finding an optimizer is a search based on the local information of the constraints and the objective function, it is very important to develop techniques using the geometric properties of the constraints and the objective function. In fact, differential geometry provides a powerful tool to characterize and analyze these geometric properties. Thus, there is clearly a link between the techniques of optimization on manifolds and standard constrained optimization approaches. Furthermore, there are manifolds that are not defined as constrained sets in R^n; an important example is the Grassmann manifolds. Hence, to solve optimization problems on these spaces, intrinsic methods are used. In a nondifferentiable problem, the gradient information of the objective function generally cannot be used to determine the direction in which the function is decreasing. Therefore, techniques of nonsmooth analysis are needed to deal with such a problem. As a manifold, in general, does not have a linear structure, the usual techniques, which are often used in nonsmooth analysis on linear spaces, cannot be applied, and new techniques need to be developed. This paper presents necessary and sufficient conditions for a strict local minimum of extended real-valued, nonsmooth functions defined on Riemannian manifolds.
Keywords: Riemannian manifolds, nonsmooth optimization, lower semicontinuous functions, subdifferential
Procedia PDF Downloads 361
5977 A Non-Invasive Method for Assessing the Adrenocortical Function in the Roan Antelope (Hippotragus equinus)
Authors: V. W. Kamgang, A. Van Der Goot, N. C. Bennett, A. Ganswindt
Abstract:
The roan antelope (Hippotragus equinus) is the second largest antelope species in Africa. Over the past decades, populations of roan antelope have declined drastically throughout Africa. This situation has resulted in the development of intensive breeding programmes for this species in Southern Africa, where they are popular game-ranching herbivores with increasing numbers in captivity. Nowadays, avoidance of stress is important when managing wildlife to ensure animal welfare. In this regard, a non-invasive approach to monitor the adrenocortical function as a measure of stress would be preferable, since animals are not disturbed during sample collection. However, to date, such a method has not been established for the roan antelope. In this study, we validated a non-invasive technique to monitor the adrenocortical function in this species. We performed an adrenocorticotropic hormone (ACTH) stimulation test at the Lapalala Wilderness reserve, South Africa, using adult captive roan antelope to determine the stress-related physiological responses. Two individually housed roan antelope (a male and a female) received an intramuscular injection of Synacthen Depot (Novartis) loaded into a 3 ml syringe (Pneu-Dart) at an estimated dose of 1 IU/kg. A total of 86 faecal samples (male: 46, female: 40) were collected from 5 days before until 3 days post-injection. All samples were then lyophilised, pulverised, and extracted with 80% ethanol (0.1 g/3 ml), and the resulting faecal extracts were analysed for immunoreactive faecal glucocorticoid metabolite (fGCM) concentrations using five enzyme immunoassays (EIAs): (i) 11-oxoaetiocholanolone I (detecting 11,17-dioxoandrostanes), (ii) 11-oxoaetiocholanolone II (detecting fGCMs with a 5α-pregnane-3α-ol-11-one structure), (iii) 5α-pregnane-3β,11β,21-triol-20-one (measuring 3β,11β-diol CM), (iv) cortisol and (v) corticosterone.
In both animals, all five EIAs detected an increase in fGCM concentrations post-ACTH administration. However, the 11-oxoaetiocholanolone I EIA performed best, with a 20-fold increase in the male (baseline: 0.384 µg/g DW; peak: 8.585 µg/g DW) and a 17-fold increase in the female (baseline: 0.323 µg/g DW; peak: 7.276 µg/g DW), measured 17 hours and 12 hours post-administration, respectively. These results are important, as the ability to assess adrenocortical function non-invasively in roan can now be used as an essential prerequisite to evaluate the effects of stressful circumstances, such as variation in environmental conditions or reproduction, in order to improve management strategies for the conservation of this iconic antelope species.
Keywords: adrenocorticotropic hormone challenge, adrenocortical function, captive breeding, non-invasive method, roan antelope
Procedia PDF Downloads 145
5976 Evaluation of Invasive Tree Species for Production of Phosphate Bonded Composites
Authors: Stephen Osakue Amiandamhen, Schwaller Andreas, Martina Meincken, Luvuyo Tyhoda
Abstract:
Invasive alien tree species are currently being cleared in South Africa as a result of forest and water imbalances. These species grow wildly, constituting about 40% of the total forest area. They compete with the ecosystem for natural resources and are considered ecosystem engineers, rapidly changing disturbance regimes. As such, they are harvested for commercial uses, but much of the harvest is wasted because of their form and structure, and the waste is sold to local communities as fuel wood. These species can be considered a potential feedstock for the production of phosphate bonded composites. The presence of bark in wood-based composites leads to undesirable properties, and debarking as an option can be cost-implicative. This study investigates the effect of processing these invasive species without debarking on some fundamental properties of wood-based panels. Invasive alien tree species were collected from EC Biomass, Port Elizabeth, South Africa: Acacia mearnsii (Black wattle), A. longifolia (Long-leaved wattle), A. cyclops (Red-eyed wattle), A. saligna (Golden-wreath wattle) and Eucalyptus globulus (Blue gum). The logs were chipped as received, and the chips were hammer-milled and screened through a 1 mm sieve. The wood particles were conditioned, and the quantity of bark in the wood was determined. The binding matrix was prepared using a reactive magnesia, phosphoric acid and class S fly ash. The materials were mixed and poured into a metallic mould, and the composite within the mould was compressed at room temperature at a pressure of 200 kPa. After initial setting, which took about 5 minutes, the composite board was demoulded and air-cured for 72 h. The cured product was thereafter conditioned at 20°C and 70% relative humidity for 48 h. Tests of physical and strength properties were conducted on the composite boards.
The effect of binder formulation and fly ash content on the properties of the boards was studied using fitted response surface methodology, according to a central composite experimental design (CCD), at a fixed wood loading of 75% (w/w) of total inorganic contents. The results showed that a phosphate/magnesia ratio of 3:1 and a fly ash content of 10% were required to obtain a product with good properties and sufficient strength for the intended applications. The proposed products can be used for ceilings, partitioning, and insulating wall panels.
Keywords: invasive alien tree species, phosphate bonded composites, physical properties, strength
Procedia PDF Downloads 295
5975 A Model for Diagnosis and Prediction of Coronavirus Using Neural Network
Authors: Sajjad Baghernezhad
Abstract:
Meta-heuristic and hybrid algorithms have proven highly capable of modeling medical problems. In this study, a neural network was used to predict COVID-19 among high-risk and low-risk patients. Data were collected on a target population of 550 high-risk and low-risk patients from the Kerman University of Medical Sciences medical center. A memetic algorithm, which is a combination of a genetic algorithm and a local search algorithm, was used to update the weights of the neural network and improve its accuracy. Initially, the accuracy of the neural network was 88%; after updating the weights, the memetic algorithm increased it to 93%. For the proposed model, sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 97.4, 92.3, 95.8, 96.2, and 0.918, respectively; for the genetic algorithm model, 87.05, 92.07, 89.45, 97.30, and 0.967; and for the logistic regression model, 87.40, 95.20, 93.79, 0.87, and 0.916. Based on the findings of this study, neural network models have a lower error rate in the diagnosis of patients based on individual variables and vital signs compared to the regression model. The findings of this study can help planners and health care providers in designing programs for the early diagnosis of COVID-19.
Keywords: COVID-19, decision support technique, neural network, genetic algorithm, memetic algorithm
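The memetic idea (a genetic algorithm whose offspring are refined by a short local search) can be sketched on a toy objective. This stands in for the weight-update step only; it is not the authors' COVID-19 network, data, or exact operators.

```python
import random

def memetic_minimize(f, dim=4, pop_size=20, gens=40, seed=0):
    """Minimal memetic algorithm: selection + averaging crossover,
    followed by greedy Gaussian local search on each child."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    def local_search(x, steps=10, step=0.5):
        best, best_f = x[:], f(x)
        for _ in range(steps):
            cand = [v + rng.gauss(0, step) for v in best]
            cf = f(cand)
            if cf < best_f:
                best, best_f = cand, cf
        return best

    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append(local_search(child))  # the "memetic" refinement
        pop = parents + children
    return min(pop, key=f)

# Toy stand-in for a training loss: the sphere function
best = memetic_minimize(lambda w: sum(v * v for v in w))
```

The local-search step is what distinguishes the memetic algorithm from a plain GA: each offspring is pushed downhill before competing for survival.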
Procedia PDF Downloads 67
5974 New Iterative Algorithm for Improving Depth Resolution in Ionic Analysis: Effect of Iterations Number
Authors: N. Dahraoui, M. Boulakroune, D. Benatia
Abstract:
In this paper, the improvement by deconvolution of the depth resolution in Secondary Ion Mass Spectrometry (SIMS) analysis is considered. We have developed a new Tikhonov-Miller deconvolution algorithm in which an a priori model of the solution is included. This model is a denoised and pre-deconvolved signal obtained by, firstly, applying a wavelet shrinkage algorithm and, secondly, feeding the resulting denoised signal into an iterative deconvolution algorithm. In particular, we have focused on the effect of the number of iterations on the evolution of the deconvolved signals. The SIMS profiles are boron multilayers in a silicon matrix.
Keywords: DRF, in-depth resolution, multiresolution deconvolution, SIMS, wavelet shrinkage
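The wavelet-shrinkage pre-processing step can be illustrated with a one-level Haar transform and soft thresholding of the detail coefficients. This is a minimal stand-in for the idea, not the authors' multiresolution scheme or the Tikhonov-Miller stage, and the sine-plus-noise signal is an invented test profile.

```python
import math
import random

def haar_shrink(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising for an
    even-length signal; noise concentrates in the detail coefficients."""
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    # Soft-threshold the details: shrink toward zero, clip small ones
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])  # inverse Haar step
    return out

random.seed(0)
clean = [math.sin(i / 8.0) for i in range(64)]
noisy = [c + random.gauss(0, 0.2) for c in clean]
denoised = haar_shrink(noisy, threshold=0.2)
```

Because the slowly varying profile contributes little to the detail coefficients while white noise splits evenly between approximation and detail, thresholding the details removes roughly half the noise energy with little signal distortion.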
Procedia PDF Downloads 418
5973 A Second Order Genetic Algorithm for Traveling Salesman Problem
Authors: T. Toathom, M. Munlin, P. Sugunnasil
Abstract:
The traveling salesman problem (TSP) is one of the best-known problems in combinatorial optimization, and there is a large body of research regarding it. One of the most widely used tools for this problem is the genetic algorithm (GA). The chromosome of the GA for the TSP is normally encoded by the order of the visited cities. However, the traditional chromosome encoding scheme has two limitations: a large solution space and an inability to encapsulate some information. The number of solutions grows exponentially with the number of cities. Moreover, the traditional chromosome encoding scheme fails to recognize correct relations that are merely misplaced, which implies that the traditional method focuses only on the exact solution. In this work, we relax the exactness of the solution in the GA for the TSP. The proposed work exploits the relations between cities in order to reduce the solution space in the chromosome encoding. In this paper, a second-order GA is proposed to solve the TSP, where "second order" refers to how the solution is encoded into the chromosome. The chromosome is divided into two types: the high-order chromosome and the low-order chromosome. The high-order chromosome focuses on the relations between cities, such as "city A should be visited before city B". The low-order chromosome is derived from a high-order chromosome; in other words, it is encoded by the traditional chromosome encoding scheme. The genetic operations, mutation and crossover, are performed on the high-order chromosome. The high-order chromosome is then mapped to a group of low-order chromosomes whose characteristics satisfy the high-order chromosome. From the mapped set of chromosomes, the champion chromosome is selected based on the fitness value and later used as a representative for the high-order chromosome. The experiment is performed on city data from TSPLIB.
Keywords: genetic algorithm, traveling salesman problem, initial population, chromosomes encoding
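The high-order-to-low-order mapping can be sketched as follows: model a high-order chromosome as a set of precedence relations (a, b) meaning "visit city a before city b", sample permutations that satisfy all relations, and return the shortest one as the champion low-order chromosome. The rejection sampling and the toy coordinates are illustrative simplifications, not the paper's exact procedure.

```python
import math
import random

def tour_length(tour, coords):
    """Closed-tour Euclidean length."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def champion_for(high_order, coords, n_samples=200, seed=0):
    """Sample permutations consistent with the precedence relations in
    high_order and return the shortest (champion) one with its length."""
    rng = random.Random(seed)
    n = len(coords)
    best, best_len = None, float("inf")
    for _ in range(n_samples):
        perm = list(range(n))
        rng.shuffle(perm)
        pos = {c: i for i, c in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in high_order):
            length = tour_length(perm, coords)
            if length < best_len:
                best, best_len = perm, length
    return best, best_len

# Five toy cities; require city 0 to be visited before city 2
coords = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 1.5)]
best, best_len = champion_for([(0, 2)], coords)
```

In the full algorithm, crossover and mutation would act on the precedence set itself, and this champion's fitness would represent the high-order chromosome in selection.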
Procedia PDF Downloads 272
5972 Description of the Non-Iterative Learning Algorithm of Artificial Neuron
Authors: B. S. Akhmetov, S. T. Akhmetova, A. I. Ivanov, T. S. Kartbayev, A. Y. Malygin
Abstract:
The problem with training a network of artificial neurons in biometric applications is that the process has to be completely automatic, i.e., a human operator should not participate in it. Therefore, this article discusses the issues of training a network of artificial neurons and describes a non-iterative learning algorithm for an artificial neuron.
Keywords: artificial neuron, biometrics, biometrical applications, learning of neuron, non-iterative algorithm
Procedia PDF Downloads 496