Search results for: optimization procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5194

3664 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance: high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers’ hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
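
As a rough illustration of the “optimizations as metaprograms” idea discussed above, the following Python sketch matches rewrite rules against expression trees with a small unification routine. It is a generic toy, not Peridot’s two-level language or its higher-order abstract syntax; the rules and expression encoding are invented for the example.

```python
# Sketch: rewrite rules with pattern variables matched by a small unification
# routine over expression trees. Generic toy illustration, not Peridot itself.
def match(pattern, expr, subst=None):
    """Return a substitution unifying pattern with expr, or None."""
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.startswith("?"):   # pattern variable
        if pattern in subst:
            return subst if subst[pattern] == expr else None
        subst[pattern] = expr
        return subst
    if isinstance(pattern, tuple) and isinstance(expr, tuple) and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            subst = match(p, e, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == expr else None

def substitute(template, subst):
    if isinstance(template, str) and template.startswith("?"):
        return subst[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, subst) for t in template)
    return template

# Each "optimization" is written as data: (pattern, replacement).
RULES = [
    (("add", "?x", 0), "?x"),                        # x + 0 -> x
    (("mul", "?x", 1), "?x"),                        # x * 1 -> x
    (("mul", "?x", 0), 0),                           # x * 0 -> 0
    (("mul", "?x", ("add", "?y", "?z")),             # distribute for later folding
     ("add", ("mul", "?x", "?y"), ("mul", "?x", "?z"))),
]

def optimize(expr):
    """Apply rules bottom-up until a fixed point is reached."""
    if isinstance(expr, tuple):
        expr = tuple(optimize(e) for e in expr)
    for pattern, replacement in RULES:
        s = match(pattern, expr)
        if s is not None:
            return optimize(substitute(replacement, s))
    return expr

print(optimize(("add", ("mul", "x", ("add", "y", 0)), 0)))    # -> ('mul', 'x', 'y')
```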

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 69
3663 Optimizing Pavement Construction Procedures in the Southern Desert of Libya

Authors: Khlifa El Atrash, Gabriel Assaf

Abstract:

Libya uses volumetric analysis in designing asphalt mixtures, an approach that can be upgraded for hot, arid weather. To be effective, however, it should account for several important aspects: materials, environment, and method of construction. In practice, the quality of some roads has been below a satisfactory level. This paper examines the factors that contribute to the low quality of road performance in Libya. To evaluate these factors, a questionnaire survey and a comparative laboratory study were performed on a few mixes under representative temperature and traffic load conditions. In the laboratory, a rutting test was conducted on two different asphalt mixtures: an asphalt concrete mix using local aggregate and asphalt binder B(60/70) at the optimum Marshall asphalt content, and another mix designed using the Superpave design procedure with the same materials and a performance-graded asphalt binder PG (70-10). In the survey, the questionnaire was distributed to 55 engineers and specialists in this field, and a few others were interviewed. The factors leading to poor performance of asphalt roads were listed as: 1) owner experience and technical staff, 2) asphalt characteristics, 3) updating and development of asphalt mix design methods, 4) lack of data collection by the authorizing agency, 5) construction and compaction process, and 6) monitoring and control of the mixing procedure. Considering and improving these factors will play an important role in improving pavement performance, extending service life, and lowering maintenance costs. This research summarizes some recommendations for asphalt mixtures used in hot, dry areas. Such mixtures should use an asphalt binder that is less affected by changes in pavement temperature and traffic load. The properties of the mixture, such as durability, deformation, air voids, and performance, largely depend on the type of materials, the environment, and the mixing method. These properties, in turn, affect pavement performance.

Keywords: volumetric analysis, pavement performances, hot climate, traffic load, pavement temperature, asphalt mixture, environment, design and construction

Procedia PDF Downloads 245
3662 Correlations in the Ising Kagome Lattice

Authors: Antonio Aguilar Aguilar, Eliezer Braun Guitler

Abstract:

Using a previously developed procedure and with the aid of algebraic software, starting from a two-dimensional generalized Ising model with a 4×2 unit cell (UC), we obtain a Kagome lattice with twelve different spin-spin interaction values, in order to determine the partition function per spin L(T). From the partition function we can study the magnetic behavior of the system. Because of the competition phenomenon between spins, a very complex behavior among them, in a variety of magnetic states, can be observed.

Keywords: correlations, Ising, Kagome, exact functions

Procedia PDF Downloads 352
3661 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods

Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar

Abstract:

Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain isolated from Marsa Alam, Red Sea, Egypt, showed high resistance to high lead concentrations and was identified by 16S rRNA gene sequencing as Halomonas sp. ES015. Medium optimization was carried out using a Plackett-Burman design, and the most significant factors were yeast extract, casamino acid, and inoculum size. The optimized medium obtained from the statistical design raised the removal efficiency from 84% to 99% at an initial lead concentration of 250 ppm. Moreover, a Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration, and inoculum size. The optimized medium increased removal efficiency to 97% at an initial lead concentration of 500 ppm. Halomonas sp. ES015 cells immobilized on sponge cubes, using the optimized medium in a loop bioremediation column, showed relatively constant lead removal efficiency when reused for six successive cycles over the time interval studied. Metal removal efficiency was also unaffected by changes in flow rate. Finally, the results of this research point to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015, and show that bioremediation can be carried out in batch cultures and in semi-continuous cultures using column technology.
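
The following sketch illustrates the kind of second-order response-surface fit that typically accompanies a Box-Behnken design; the design points, factor coding, and removal-efficiency values are hypothetical placeholders rather than the study’s data.

```python
# Sketch: fitting a second-order response surface to Box-Behnken results.
# The design points and removal efficiencies below are hypothetical
# placeholders, not the paper's data.
import itertools
import numpy as np

# Box-Behnken design for 3 coded factors (yeast extract, casamino acid,
# inoculum size): edge midpoints of the cube plus center replicates.
points = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1, 1), repeat=2):
        p = [0, 0, 0]
        p[i], p[j] = a, b
        points.append(p)
points += [[0, 0, 0]] * 3          # center replicates
X = np.array(points, dtype=float)

rng = np.random.default_rng(0)
y = 90 - 3*X[:, 0]**2 - 2*X[:, 1]**2 - X[:, 2]**2 + 2*X[:, 0] + rng.normal(0, 0.5, len(X))

def quad_terms(X):
    """Build [1, x1, x2, x3, x1^2, x2^2, x3^2, x1x2, x1x3, x2x3]."""
    cols = [np.ones(len(X))] + [X[:, k] for k in range(3)]
    cols += [X[:, k]**2 for k in range(3)]
    cols += [X[:, i]*X[:, j] for i, j in itertools.combinations(range(3), 2)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))

# Locate the coded optimum on a coarse grid inside the design region.
grid = np.array(list(itertools.product(np.linspace(-1, 1, 21), repeat=3)))
pred = quad_terms(grid) @ coef
print("predicted optimum (coded):", grid[pred.argmax()], "->", pred.max().round(2))
```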

Keywords: bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman

Procedia PDF Downloads 177
3660 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is a problem that assigns available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both the static and the dynamic environment (denoted by SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from limited solution efficiency, and as a result SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered through the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently for large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, obtained efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve with the optimization model, several greedy-based algorithms are proposed. These show lower performance values than the decomposed opt-opt algorithm but require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, so that more practical and effective methods can be developed for using TWTA on the battlefield.
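
As a point of comparison for the heuristics described above, the sketch below implements a plain greedy assignment for a static WTA instance; the kill probabilities and target values are invented, and this is not the authors’ decomposed opt-opt or opt-greedy algorithm.

```python
# Sketch: a greedy baseline for static weapon-target assignment (SWTA).
# Kill probabilities and target values below are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_weapons, n_targets = 6, 4
value = rng.uniform(5, 10, n_targets)          # value of each target
p_kill = rng.uniform(0.3, 0.9, (n_weapons, n_targets))

survival = np.ones(n_targets)                  # running prob. a target survives
assignment = []
for w in range(n_weapons):
    # Expected reduction in surviving target value for each candidate target.
    gain = value * survival * p_kill[w]
    t = int(gain.argmax())
    assignment.append((w, t))
    survival[t] *= 1.0 - p_kill[w, t]

expected_damage = float(np.sum(value * (1.0 - survival)))
print("assignment (weapon, target):", assignment)
print("expected destroyed value:", round(expected_damage, 2))
```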

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 322
3659 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization

Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi

Abstract:

Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, the current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. Therefore, it is essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address issues related to unreliable part identification techniques caused by the high-mixture, low-volume nature of the industry. The proposed solution involves using a spray gun and a UR10 robotic arm with a vision system that utilizes one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest—specifically, the identified part, rather than uniformly covering the entire painting tray.

Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm

Procedia PDF Downloads 63
3658 Stability Optimization of NaBH₄ via pH and H₂O:NaBH₄ Ratios for Large Scale Hydrogen Production

Authors: Parth Mehta, Vedasri Bai Khavala, Prabhu Rajagopal, Tiju Thomas

Abstract:

There is an increasing need for alternative clean fuels, and hydrogen (H₂) has long been considered a promising solution with a high calorific value (142 MJ/kg). However, the storage of H₂ and expensive processes for its generation have hindered its usage. Sodium borohydride (NaBH₄) can potentially be used as an economically viable means of H₂ storage. Thus far, there have been attempts to optimize the life of NaBH₄ (half-life) in aqueous media by stabilizing it with sodium hydroxide (NaOH) at various pH values. Other reports have shown that H₂ yield and reaction kinetics remained constant for all ratios of H₂O to NaBH₄ > 30:1, without any acidic catalysts. Here we highlight the importance of pH and the H₂O:NaBH₄ ratio (80:1, 40:1, 20:1 and 10:1 by weight) for NaBH₄ stabilization (half-life reaction time at room temperature) and corrosion minimization of H₂ reactor components. It is interesting to observe that at any particular pH ≥ 10 (e.g., pH = 10, 11 and 12), the H₂O:NaBH₄ ratio does not show the expected linear dependence with stability. On the contrary, high stability was observed at the 10:1 H₂O:NaBH₄ ratio across all pH ≥ 10. When the H₂O:NaBH₄ ratio is increased from 10:1 to 20:1 and beyond (up to 80:1), constant stability (% degradation) is observed with respect to time. For practical usage (consumption within 6 hours of making the NaBH₄ solution), 15% degradation at pH 11 and an H₂O:NaBH₄ ratio of 10:1 is recommended. Increasing this ratio demands a higher NaOH concentration at the same pH, thus requiring a higher concentration or volume of acid (e.g., HCl) for H₂ generation. The reactions are done with tap water to render the results useful from an industrial standpoint. The observed stability regimes are rationalized based on complexes associated with NaBH₄ when solvated in water, which depend sensitively on both pH and the H₂O:NaBH₄ ratio.
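
A quick back-of-the-envelope conversion helps put the weight ratios above in context, assuming the ideal hydrolysis reaction NaBH₄ + 2H₂O → NaBO₂ + 4H₂ (a textbook relation, not a result of this study):

```python
# Sketch: converting the reported H2O:NaBH4 weight ratios to molar ratios
# and to theoretical H2 yield, assuming the ideal hydrolysis reaction
# NaBH4 + 2 H2O -> NaBO2 + 4 H2 (textbook relation, not the paper's data).
M_NABH4 = 22.99 + 10.81 + 4 * 1.008   # g/mol
M_H2O = 2 * 1.008 + 16.00             # g/mol
M_H2 = 2 * 1.008                      # g/mol

for w_ratio in (10, 20, 40, 80):      # g water per g NaBH4
    molar = (w_ratio / M_H2O) / (1 / M_NABH4)
    print(f"H2O:NaBH4 = {w_ratio}:1 by weight  ->  {molar:.1f}:1 by mole")

# Theoretical H2 per kg of NaBH4 (ignoring the water mass carried along):
h2_per_kg = 1000 / M_NABH4 * 4 * M_H2
print(f"ideal yield: {h2_per_kg:.0f} g H2 per kg NaBH4")
```

Even the smallest reported ratio (10:1 by weight) corresponds to roughly 21 moles of water per mole of NaBH₄, far above the stoichiometric 2:1, so all tested compositions operate with a large water excess.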

Keywords: hydrogen, sodium borohydride, stability optimization, H₂O:NaBH₄ ratio

Procedia PDF Downloads 102
3657 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection

Authors: Rubin Dan, Xingcai Wang, Ziyang Chen

Abstract:

A chaotic time series noise reduction method based on the fusion of the local projection method, wavelet transform, and the particle swarm algorithm (referred to as the LW-PSO method) is proposed to address the problem of false recognition caused by noise in the recognition of noisy chaotic time series. The method first uses phase space reconstruction to recover the original dynamical system characteristics and removes the noise subspace by selecting the neighborhood radius; it then uses the wavelet transform to remove the D1-D3 high-frequency components so as to maximize the retention of signal information, while least-squares optimization is performed by the particle swarm algorithm. The Lorenz system containing 30% Gaussian white noise is simulated and verified, and the phase space, SNR value, RMSE value, and K value of the 0-1 test method before and after noise reduction are compared and analyzed for the Schreiber method, the local projection method, the wavelet transform method, and the LW-PSO method, which shows that the LW-PSO method has a better noise reduction effect than the other three common methods. The methods are also applied to a classical system to evaluate the noise reduction effect of the four methods and the identification of the original system, which further verifies the superiority of the LW-PSO method. Finally, the method is applied to the Chengdu rainfall chaotic sequence, and the results prove that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.
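
The “K value of the 0-1 test” mentioned above can be computed with the standard Gottwald-Melbourne procedure; the sketch below is a generic implementation run on a logistic-map series as a stand-in signal, not on the paper’s Lorenz or rainfall data.

```python
# Sketch: the Gottwald-Melbourne 0-1 test used as the "K value" above.
# The logistic-map input is a stand-in signal, not the paper's data.
import numpy as np

def zero_one_test(phi, n_c=50, seed=0):
    """Return K in [0, 1]; values near 1 indicate chaos."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    N = len(phi)
    ncut = N // 10
    j = np.arange(1, N + 1)
    n = np.arange(1, ncut + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                      for k in n])
        # Subtract the oscillatory term (modified mean-square displacement).
        D = M - phi.mean() ** 2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(n, D)[0, 1])
    return float(np.median(Ks))

# Logistic map in its chaotic regime as a quick self-check.
x = np.empty(2000); x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 3.97 * x[i - 1] * (1 - x[i - 1])
print("K for chaotic logistic map:", round(zero_one_test(x), 2))
```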

Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising

Procedia PDF Downloads 177
3656 Case Report and Discussion of Natural History of Bouveret Syndrome

Authors: Parul Garg

Abstract:

Bouveret Syndrome is a rare presentation described as gastric outlet obstruction secondary to gallstone ileus. Here we describe the 3-year progression of disease from cholelithiasis to gallstone ileus with relevant imaging findings. The patient was treated under an Upper Gastrointestinal Surgery service with surgical intervention in the form of a laparoscopic-assisted procedure with midline laparotomy. She recovered well and was discharged 1 week postoperatively. No complications occurred.

Keywords: cholelithiasis, Bouveret syndrome, gallstone ileus, gastric outlet obstruction

Procedia PDF Downloads 102
3655 Pupil Size: A Measure of Identification Memory in Target Present Lineups

Authors: Camilla Elphick, Graham Hole, Samuel Hutton, Graham Pike

Abstract:

Pupil size has been found to change irrespective of luminosity, suggesting that it can be used to make inferences about cognitive processes, such as cognitive load. To see whether identifying a target requires a different cognitive load than rejecting distractors, the effect of viewing a target (compared with viewing distractors) on pupil size was investigated using a sequential video lineup procedure with two lineup sessions. Forty-one participants were chosen randomly via the university. Pupil sizes were recorded when viewing pre-target distractors and post-target distractors and compared to pupil size when viewing the target. Overall, pupil size was significantly larger when viewing the target compared with viewing distractors. In the first session, pupil size changes were significantly different between participants who identified the target (Hits) and those who did not. Specifically, the pupil size of Hits reduced significantly after viewing the target (by 26%), suggesting that cognitive load reduced following identification. The pupil sizes of Misses (who made no identification) and False Alarms (who misidentified a distractor) did not reduce, suggesting that the cognitive load remained high in participants who failed to make the correct identification. In the second session, pupil sizes were smaller overall, suggesting that cognitive load was smaller in this session, and there was no significant difference between Hits, Misses and False Alarms. Furthermore, while the frequency of Hits increased, so did False Alarms. These two findings suggest that the benefits of including a second session remain uncertain, as the second session neither provided greater accuracy nor a reliable way to measure it. It is concluded that pupil size is a measure of face recognition strength in the first session of a target-present lineup procedure. However, it is still not known whether cognitive load is an adequate explanation for this, or whether cognitive engagement might describe the effect more appropriately. If cognitive load and cognitive engagement can be teased apart with further investigation, this would have positive implications for understanding eyewitness identification. Nevertheless, this research has the potential to provide a tool for improving the reliability of lineup procedures.

Keywords: cognitive load, eyewitness identification, face recognition, pupillometry

Procedia PDF Downloads 383
3654 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects in Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it is related to manufacturing costs as well as the performance after the ageing of components like shock absorbers. In this paper, an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking the random factors into consideration. The adaptive-sparse polynomial chaos expansion method (PCE) with Chebyshev polynomial series has been applied to predict the responses’ uncertainty intervals of a system according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which represent statistically a black-box system. Secondly, within several iterations, an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameters’ tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter car model has been tested as an example by applying the road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin’s law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has been finally obtained, and the reference tests prove a good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
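
A much-reduced illustration of the surrogate idea is sketched below: a one-dimensional Chebyshev fit over a bounded parameter, used to bound a toy response. The response function, bounds, and polynomial degree are assumptions, not the quarter-car model or the adaptive-sparse PCE of the paper.

```python
# Sketch: a 1-D Chebyshev surrogate for an uncertain-but-bounded parameter
# (toy response, not the paper's quarter-car model).
import numpy as np
from numpy.polynomial import chebyshev as C

def response(k):
    """Hypothetical black-box response, e.g. a comfort indicator vs. stiffness."""
    return 1.0 / (1.0 + 0.5 * k) + 0.05 * np.sin(5 * k)

# Parameter bounded in [k_lo, k_hi]; sample at Chebyshev nodes mapped to it.
k_lo, k_hi = 0.5, 2.0
nodes = np.cos((2 * np.arange(1, 13) - 1) * np.pi / 24)      # 12 nodes in [-1, 1]
k_samples = 0.5 * (k_hi - k_lo) * (nodes + 1) + k_lo
coef = C.chebfit(nodes, response(k_samples), deg=8)

# Bound the response over the interval by scanning the cheap surrogate.
xs = np.linspace(-1, 1, 2001)
ys = C.chebval(xs, coef)
print("predicted response interval: [%.4f, %.4f]" % (ys.min(), ys.max()))
```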

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 142
3653 Genetically Modified Organisms

Authors: Mudrika Singhal

Abstract:

The research paper is basically about how genetically modified organisms evolved and their significance in today’s world. It also highlights the various pros and cons of genetically modified organisms and the progress of India in this field. A genetically modified organism is one whose genetic material has been altered using genetic engineering techniques. They have a wide range of uses, such as transgenic plants, genetically modified mammals such as mice, and also insects and aquatic life. Their use is rooted back to around 12,000 B.C., when humans domesticated plants and animals. At that time, humans used genetically modified organisms produced by selective breeding rather than by genetic engineering techniques. Selective breeding is the procedure in which selected traits are bred into plants and animals, which are then domesticated. Domestication of wild plants into a suitable cultigen is a well-known example of this technique. GMOs have uses in varied fields, ranging from biological and medical research and the production of pharmaceutical drugs to agriculture. The first organisms to be genetically modified were microbes because of their simpler genetics. At present, the genetically modified protein insulin is used to treat diabetes. In the case of plants, transgenic plants, genetically modified crops, and cisgenic plants are examples of genetic modification. In the case of mammals, transgenic animals such as mice and rats serve various purposes, such as research on human diseases and improvement in animal health. Turning to the pros and cons of genetically modified organisms, the pros include crops with higher yield, shorter growth time, and greater predictability in comparison to traditional breeding. The cons include potential danger to mammals such as rats, and products containing proteins that could trigger allergic reactions. In India, at present, the group of GMOs includes GM microorganisms, transgenic crops, and animals, with varied applications in the fields of healthcare and agriculture. In a nutshell, the research paper is about the progress in the field of genetic modification, along with its effects in today’s world.

Keywords: applications, mammals, transgenic, engineering and technology

Procedia PDF Downloads 579
3652 Effects of Umbilical Cord Clamping on Puppies Neonatal Vitality

Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, Joao C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado

Abstract:

In veterinary medicine, the standard procedure during a cesarean section is clamping the umbilical cord immediately after birth. In human neonates, when the umbilical cord is kept intact after birth, blood continues to flow from the cord to the newborn, but this procedure may prove difficult in dogs due to the shorter umbilical cord and the number of newborns in the litter. However, a possible detachment of the placenta while keeping the umbilical cord intact may allow the residual blood to flow to the neonate. This study compared the effects on neonatal vitality of clamping versus not clamping the umbilical cord of dogs born through cesarean section, assessed through Apgar and reflex scores. Fifty puppies delivered from 16 bitches were randomly allocated to receive clamping of the umbilical cord immediately (n=25) or not until the onset of breathing (n=25). The neonates were assessed during the first five minutes of life and once again 10 minutes after the first assessment. The differences observed between the two moments were significant (p < 0.01) for both the Apgar and reflex scores. The differences observed between the groups (clamped vs. not clamped) were not significant for the Apgar score at the 1st moment (p=0.1) but were significant (p < 0.01) at the 2nd moment in favor of the non-clamped group, and were significant (p < 0.05) for the reflex score at both the 1st and 2nd moments, revealing higher neonatal vitality in the non-clamped group. The differences observed between the moments (1st vs. 2nd) of each group were significant (p < 0.01), revealing higher neonatal vitality at the 2nd moment. In the non-clamped group, after removing the neonates together with the umbilical cord and the placenta, we observed that the umbilical cords were full of blood at the time of birth and later became whitish and collapsed, demonstrating the blood transfer. The results suggest that keeping the umbilical cord intact for at least three minutes after the onset of breathing is not detrimental and may contribute to increased neonate vitality in puppies delivered by cesarean section.

Keywords: puppy vitality, newborn dog, cesarean section, Apgar score

Procedia PDF Downloads 132
3651 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 47
3650 Rapid Algorithm for GPS Signal Acquisition

Authors: Fabricio Costa Silva, Samuel Xavier de Souza

Abstract:

A Global Positioning System (GPS) receiver is responsible for determining position, velocity, and timing information by using satellite information. To obtain this information, it is necessary to combine an incoming signal with a locally generated one. The procedure called acquisition needs to find two pieces of information: the frequency and the phase of the incoming signal. This is very time consuming, so there are several techniques to reduce the computational complexity, but each of them puts project requirements in conflict. In this paper, we present a method that can reduce the computational complexity by reducing the search space and parallelizing the search.

Keywords: GPS, acquisition, complexity, parallelism

Procedia PDF Downloads 518
3649 The Benefits of a Totally Autologous Breast Reconstruction Technique Using Extended Latissimus Dorsi Flap with Lipo-Modelling: A Seven Years United Kingdom Tertiary Breast Unit Results

Authors: Wisam Ismail, Brendan Wooler, Penelope McManus

Abstract:

Introduction: The public perception of implants has been damaged in the wake of recent negative publicity, and increasingly we find patients wanting to avoid them. Planned lipo-modelling to enhance the volume of a latissimus dorsi flap is a viable alternative to silicone implants and maintains a Totally Autologous Technique (TAT). Here we demonstrate that, when compared to an Implant Assisted Technique (IAT), a TAT offers patients many benefits that offset the requirement of more operations initially, with reduced short- and long-term complications, reduced symmetrisation surgery, and reduced revision rates. Methods: Data were collected prospectively over 7 years. The minimum follow-up was 3 years. The technique was generally standardized in the hands of one surgeon. All flaps were extended LD flaps (ELD). Lipo-modelling was performed using standard techniques. Outcome measures were unplanned secondary procedures, complication rates, and contralateral symmetrisation surgery rates. Key results were: lower complication rates in the TAT group (18.5% vs. 33.3%), despite higher radiotherapy rates (TAT=49%, IAT=36.8%); TAT was associated with lower subsequent symmetrisation rates (30.6% vs. 50.9%); IAT had a relative risk of 3.1 for subsequent unplanned procedures; autologous patients required an average of 1.76 sessions of lipo-modelling. Conclusions: Using lipo-modelling to enable totally autologous LD reconstruction offers significant advantages over an implant-assisted technique. We have shown a lower rate of subsequent unplanned procedures, less revision surgery, and less contralateral symmetrisation surgery. We anticipate that a TAT will be supported by patient satisfaction surveys and long-term patient-reported cosmetic outcome data and intend to study this.

Keywords: breast, Latissimus dorsi, lipomodelling, reconstruction

Procedia PDF Downloads 313
3648 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron

Authors: Filippo Portera

Abstract:

Usually, standard algorithms employ a loss in which each error is the mere absolute difference between the true value and the prediction, in the case of a regression task. In the present work, we present several error weighting schemes that are a generalization of this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression net for a Multilayer Perceptron. Results show that the error is never worse than with the standard procedure, and several times it is better.
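
One possible per-sample weighting scheme for a regression loss is sketched below; the weighting rule and the data are illustrative assumptions, not the scheme proposed in the paper.

```python
# Sketch: one possible per-sample error weighting for a regression loss.
# The weighting rule and data are illustrative, not the paper's scheme.
import numpy as np

def weighted_absolute_loss(y_true, y_pred, weights=None):
    """Standard L1 loss when weights is None, weighted mean otherwise."""
    err = np.abs(y_true - y_pred)
    if weights is None:
        return float(err.mean())
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * err) / np.sum(weights))

y_true = np.array([1.0, 2.0, 3.0, 10.0])
y_pred = np.array([1.1, 1.8, 3.5, 7.0])

# Example scheme: emphasize samples with large targets (e.g. rare extremes).
w = 1.0 + y_true / y_true.max()
print("unweighted:", round(weighted_absolute_loss(y_true, y_pred), 3))
print("weighted:  ", round(weighted_absolute_loss(y_true, y_pred, w), 3))
```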

Keywords: loss, binary-classification, MLP, weights, regression

Procedia PDF Downloads 76
3647 The Effect of Primary Treatment on Histopathological Patterns and Choice of Neck Dissection in Regional Failure of Nasopharyngeal Carcinoma Patients

Authors: Ralene Sim, Stefan Mueller, N. Gopalakrishna Iyer, Ngian Chye Tan, Khee Chee Soo, R. Shetty Mahalakshmi, Hiang Khoon Tan

Abstract:

Background: Regional failure in nasopharyngeal carcinoma (NPC) is managed by salvage treatment in the form of neck dissection. Radical neck dissection (RND) is preferred over modified radical neck dissection (MRND), since it is traditionally believed to offer better long-term disease control. However, with the advent of more advanced imaging modalities like high-resolution Magnetic Resonance Imaging, Computed Tomography, and Positron Emission Tomography-CT scans, earlier detection is achieved. Additionally, concurrent chemotherapy also contributes to reduced tumour burden. Hence, there may be a lesser need for RND and a greater role for MRND. With this retrospective study, the primary aim is to ascertain whether MRND, as opposed to RND, has similar outcomes, and hence whether there would be more grounds to offer a less aggressive procedure to achieve lower patient morbidity. Methods: This is a retrospective study of 66 NPC patients treated at Singapore General Hospital between 1994 and 2016 for histologically proven regional recurrence, of whom 41 underwent RND and 25 underwent MRND, based on surgeon preference. The type of ND performed, primary treatment mode, adjuvant treatment, and pattern of recurrence were reviewed. Overall survival (OS) was calculated using the Kaplan-Meier estimate and compared. Results: Overall, disease parameters such as nodal involvement and extranodal extension were comparable between the two groups. Comparing MRND and RND, the median (IQR) OS was 1.76 (0.58 to 3.49) and 2.41 (0.78 to 4.11), respectively; however, the p-value was 0.5301 and hence not statistically significant. Conclusion: RND is more aggressive and has been associated with greater morbidity. Hence, with similar outcomes, MRND could be an alternative salvage procedure for regional failure in selected NPC patients, allowing similar salvage rates with less mortality and morbidity.

Keywords: nasopharyngeal carcinoma, neck dissection, modified neck dissection, radical neck dissection

Procedia PDF Downloads 149
3646 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons

Authors: Kristopher A. Pruitt, Justin M. Hill

Abstract:

The purpose of this research is to determine the pacing and nutrition strategies which minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.
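
A toy continuous relaxation of the first-phase idea can be sketched as follows: choose per-kilometer speeds to minimize total time under a fixed energy budget. The cost model, grades, and budget are invented assumptions, not the paper's physiological model or its integer formulation.

```python
# Sketch: a toy continuous relaxation of the first-phase pacing problem --
# choose a speed for each kilometer to minimize total time under a fixed
# energy budget. Numbers and cost model are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

km = 10                                   # toy race length
grade = np.linspace(-0.03, 0.05, km)      # per-km average slope (assumed)
budget = 1000.0                           # available energy units (assumed)

def total_time(v):                        # hours, v in km/h
    return np.sum(1.0 / v)

def energy(v):                            # crude cost: rises with speed and climb
    return np.sum(v ** 2 * (1.0 + 8.0 * np.maximum(grade, 0.0)))

cons = [{"type": "ineq", "fun": lambda v: budget - energy(v)}]
res = minimize(total_time, x0=np.full(km, 8.0), bounds=[(3.0, 20.0)] * km,
               constraints=cons)

print("per-km speeds (km/h):", np.round(res.x, 2))
print("total time (h):", round(total_time(res.x), 3))
```

The optimizer naturally allocates more speed to flat and downhill kilometers, where the assumed energy cost is lower, which mirrors the intuition behind terrain-aware pacing.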

Keywords: nutrition, optimization, pacing, ultramarathons

Procedia PDF Downloads 172
3645 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer

Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın

Abstract:

We present a new viscometer based on a microfluidic chip with elastic high-aspect-ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for the microfluidic chips. The microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10 to 100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design optimization is still in progress to achieve better sensitivity and accuracy. Our microfluidic viscometer platform has the potential for tailor-made microfluidic chips to enable real-time observation and control of viscosity changes in biological or chemical reactions.

Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer

Procedia PDF Downloads 229
3644 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Therefore, microfluidic devices make an important scientific contribution, e.g., in the field of synthetic chemistry. Herein, the rapid generation of analytical data can be applied to the optimization of chemical reactions. These microfluidic devices enable a fast change of reaction conditions as well as a resource-saving method of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming or too slow. An exception is electrospray mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. The development of a droplet-based microfluidic device, which enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run, is shown. By common soft- and photolithographic techniques, a polydimethylsiloxane (PDMS) microfluidic chip with different functionalities is developed. As an interface for the MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing, which is inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is shown exemplarily for a Domino Knoevenagel Hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations and solvent compositions are investigated. Due to the high repetition rate of the droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent and the most effective catalyst concentration. The developed system can help to determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we make an important step in the field of combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 279
3643 Use of Galileo Advanced Features in Maritime Domain

Authors: Olivier Chaigneau, Damianos Oikonomidis, Marie-Cecile Delmas

Abstract:

GAMBAS (Galileo Advanced features for the Maritime domain: Breakthrough Applications for Safety and security) is a project funded by the European Union Agency for the Space Programme (EUSPA), aiming at identifying the search-and-rescue and ship security alert system needs of maritime users (including operators and fishing stakeholders) and developing operational concepts to answer these needs. The general objective of the GAMBAS project is to support the deployment of Galileo exclusive features in the maritime domain in order to improve safety and security at sea, detection of illegal activities and associated surveillance means, and resilience to natural and human-induced emergency situations, and to develop, integrate, demonstrate, standardize and disseminate these new associated capabilities. The project aims to demonstrate: improvement of the SAR (Search And Rescue) and SSAS (Ship Security Alert System) detection and response to maritime distress through the integration of new features into the beacon for SSAS, in terms of cost optimization, user-friendliness, integration of Galileo and OSNMA (Open Service Navigation Message Authentication) reception for improved authenticated localization performance and reliability, and at-sea triggering capabilities; and optimization of the responsiveness of RCCs (Rescue Co-ordination Centres) towards distress situations affecting vessels, through the adaptation of the MCCs (Mission Control Centres) and MEOLUT (Medium Earth Orbit Local User Terminal) to the data distribution of SSAS alerts.

Keywords: Galileo new advanced features, maritime, safety, security

Procedia PDF Downloads 79
3642 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process

Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar

Abstract:

In the present analysis, laser micro machining (LMM) of silicon carbide particle (SiCp) reinforced Aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. While machining, because of the intense heat generated, a layer forms on the workpiece surface; this layer, called the recast layer, is detrimental to the surface quality of the component. The recast layer needs to be as small as possible for precision applications. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered as the significant manufacturing criteria that determine the quality of the machining process in LMM of the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of the recast layer in relation to the machining parameters using Response Surface Methodology (RSM), and the formulated mathematical models were then used for optimization. Since the effects of the machining parameters on the depth of groove and the height of the recast layer were contradictory, the problem was formulated as a multi-objective optimization problem. An evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the models established by RSM, and this algorithm was also adapted to obtain the Pareto optimal set of solutions, which provides a detailed illustration for selecting the optimal solution. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II.
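
The core of NSGA-II is non-dominated sorting; the sketch below extracts a Pareto front for illustrative (depth of groove, recast layer height) pairs, with made-up values rather than the RSM models of the study.

```python
# Sketch: the non-dominated sorting step at the heart of NSGA-II, applied to
# illustrative (depth-of-groove, recast-layer-height) pairs -- maximize the
# first, minimize the second. Values are made up, not the paper's RSM models.
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points (maximize col 0, minimize col 1)."""
    idx = []
    for i, (d_i, h_i) in enumerate(points):
        dominated = any(
            (d_j >= d_i and h_j <= h_i) and (d_j > d_i or h_j < h_i)
            for j, (d_j, h_j) in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

rng = np.random.default_rng(2)
depth = rng.uniform(20, 80, 30)                     # um, to maximize
recast = 2.0 + 0.05 * depth + rng.normal(0, 1, 30)  # um, to minimize

pts = list(zip(depth, recast))
front = pareto_front(pts)
for i in sorted(front, key=lambda k: depth[k]):
    print(f"depth = {depth[i]:5.1f} um, recast height = {recast[i]:4.2f} um")
```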

Keywords: Laser Micro Machining (LMM), depth of groove, height of recast layer, Response Surface Methodology (RSM), non-dominated sorting genetic algorithm

Procedia PDF Downloads 331
3641 An Efficient and Green Procedure for the Synthesis of Highly Substituted Polyhydronaphthalene Derivatives via a One-Pot, Multi-Component Reaction in Aqueous Media

Authors: Adeleh Moshtaghi Zonouz, Issa Eskandari

Abstract:

A simple, efficient, and green one-pot, four-component synthesis of highly substituted polyhydronaphthalenes in aqueous media is described. The method has such advantages as short reaction times, high yields, mild reaction conditions, operational simplicity, and an environmentally benign procedure.

Keywords: polyhydronaphthalene, 2, 6-dicyanoanilines, multi-component reaction, aqueous media

Procedia PDF Downloads 357
3640 Optimal Energy Management System for Electrical Vehicles to Further Extend the Range

Authors: M. R. Rouhi, S. Shafiei, A. Taghavipour, H. Adibi-Asl, A. Doosthoseini

Abstract:

This research targets alleviating the problem of range anxiety associated with battery electric vehicles (BEVs) by considering the mechanical and control aspects of the powertrain. In this way, all the energy-consuming components and their effects on reducing the range of the BEV and the battery life index are identified. On the other hand, an appropriate control strategy is designed to guarantee the performance of the BEV and the extended electric range, which is evaluated by an extensive simulation procedure and a real-world driving schedule.

Keywords: battery, electric vehicles, ultra-capacitor, model predictive control

Procedia PDF Downloads 245
3639 Rapid Parallel Algorithm for GPS Signal Acquisition

Authors: Fabricio Costa Silva, Samuel Xavier de Souza

Abstract:

A Global Positioning System (GPS) receiver is responsible for determining position, velocity, and timing information by using satellite information. To obtain this information, it is necessary to combine an incoming signal with a locally generated one. The procedure called acquisition needs to find two pieces of information: the frequency and the phase of the incoming signal. This is very time consuming, so there are several techniques to reduce the computational complexity, but each of them puts project requirements in conflict. In this paper, we present a method that can reduce the computational complexity by reducing the search space and parallelizing the search.
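
The standard way to parallelize the code-phase dimension of acquisition is the FFT-based circular correlation shown below; this is a generic textbook sketch with a random stand-in code, not necessarily the authors' algorithm.

```python
# Sketch: textbook FFT-based "parallel code phase search" for GPS
# acquisition -- one circular correlation per Doppler bin instead of a
# search over every code phase. A random +/-1 sequence stands in for a
# real C/A code; generic illustration only.
import numpy as np

fs = 2.046e6                       # sampling rate (assumed), Hz
n = 2046                           # samples per 1 ms code period
rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], n)  # stand-in spreading code

# Simulated incoming signal: code delayed by 700 samples, 2 kHz Doppler, noise.
t = np.arange(n) / fs
incoming = (np.roll(code, 700) * np.exp(2j * np.pi * 2000.0 * t)
            + rng.normal(0, 1, n))

code_fft = np.conj(np.fft.fft(code))
best = (0.0, None, None)
for doppler in np.arange(-5000, 5001, 500):        # Doppler bins, Hz
    wiped = incoming * np.exp(-2j * np.pi * doppler * t)
    corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))
    if corr.max() > best[0]:
        best = (corr.max(), doppler, int(corr.argmax()))

print(f"peak at Doppler = {best[1]} Hz, code phase = {best[2]} samples")
```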

Keywords: GPS, acquisition, low complexity, parallelism

Procedia PDF Downloads 473
3638 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized to be an imperative issue in acquiring accurate loss estimates. Dependencies among component damage costs can be taken into account by considering the two distinct limiting states of independent or perfectly dependent component damage states; however, to the best of our knowledge, there is no available procedure to take account of loss dependencies at the story level. This paper presents a method called the "modal cost superposition method" for decoupling story damage costs subjected to earthquake ground motions, based on closed-form differential equations between damage cost and engineering demand parameters, which are solved as a coupled system considering all stories' cost equations by means of the introduced "substituted matrices of mass and stiffness". Costs are treated as probabilistic variables with defined statistics, median and standard deviation, and a presumed probability distribution. To supplement the proposed procedure and to demonstrate the straightforwardness of its application, one benchmark study has been conducted. Acceptable compatibility has been demonstrated between the damage costs estimated by the proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, the insufficiency of employing a modification factor to incorporate occurrence probability dependencies between stories has been revealed, due to the discrepant amounts of dependency between the damage costs of different stories. Also, a larger dependency contribution to the occurrence probability of loss can be concluded from the greater compatibility of loss results in the higher stories than in the lower ones, whereas reducing the portion of incorporated cost modes, i.e., including only a limited number of cost modes in a high-mode situation, provides an acceptable level of accuracy and avoids time-consuming calculations.

Keywords: dependency, story-cost, cost modes, engineering demand parameter

Procedia PDF Downloads 160
3637 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet it requires comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which type IV pressure vessels are manufactured, the design and optimization are a nuanced subject. The manifold of stacking sequence and fiber orientation variation possibilities has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and results in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process of different tank designs regarding various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. It is worth mentioning that the model of the composite overwrap is generated automatically using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling, and is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared regarding the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of the tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis; subsequently, e.g., machine learning can be used to obtain the optimum directly from the data pool without the simulation process.
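
One common analytical treatment of the dome region uses the geodesic (Clairaut) winding relation together with a fiber-continuity thickness estimate; the sketch below applies these textbook formulas with assumed dimensions and may differ from the paper's own analytical method.

```python
# Sketch: geodesic (Clairaut) winding angle and a simple thickness build-up
# estimate for the dome of a filament-wound vessel. Textbook relations with
# assumed dimensions; the paper's analytical method may differ.
import numpy as np

r_cyl = 0.175        # cylinder radius, m (assumed)
r_polar = 0.035      # polar opening radius, m (assumed)
t_cyl = 0.0012       # layer thickness on the cylinder, m (assumed)

r = np.linspace(r_polar * 1.01, r_cyl, 8)

# Clairaut's relation: r * sin(alpha) = r_polar (fiber tangent at the opening).
alpha = np.degrees(np.arcsin(r_polar / r))

# Fiber-volume continuity gives a first-order thickness estimate on the dome.
alpha_cyl = np.arcsin(r_polar / r_cyl)
thickness = t_cyl * (r_cyl * np.cos(alpha_cyl)) / (r * np.cos(np.radians(alpha)))

for ri, ai, ti in zip(r, alpha, thickness):
    print(f"r = {ri:.3f} m  winding angle = {ai:5.1f} deg  thickness = {ti*1e3:.2f} mm")
```

As expected from the continuity estimate, the predicted winding angle and layer thickness both grow rapidly toward the polar opening, which is why the dome discretization is the critical part of the automated model.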

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 111
3636 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to achieve results. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the required computational time of the seismic analysis makes the optimization algorithms more practical. The available approximate methods inevitably produce some errors in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of the main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure for which the modal displacements have already been calculated. Steel shear structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by applying three types of earthquakes (classified in view of the time of peak ground acceleration).
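
For reference, the SRSS and CQC combination rules mentioned above can be written in a few lines; the modal peaks, frequencies, and damping below are made-up values, not the paper's case-study results.

```python
# Sketch: SRSS and CQC modal combination rules applied to illustrative peak
# modal displacements (numbers are made up, not the paper's results).
import numpy as np

peaks = np.array([0.120, 0.045, 0.018])     # peak modal displacements, m
freqs = np.array([1.2, 3.5, 6.1])           # modal frequencies, Hz
zeta = 0.05                                 # damping ratio

# SRSS: square root of the sum of squares of the modal peaks.
srss = np.sqrt(np.sum(peaks ** 2))

# CQC: cross-modal correlation coefficients (Der Kiureghian's expression).
r = freqs[None, :] / freqs[:, None]
rho = (8 * zeta**2 * (1 + r) * r**1.5) / ((1 - r**2)**2 + 4 * zeta**2 * r * (1 + r)**2)
cqc = np.sqrt(peaks @ rho @ peaks)

print(f"SRSS estimate: {srss:.4f} m")
print(f"CQC estimate:  {cqc:.4f} m")
```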

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 342
3635 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.

Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization

Procedia PDF Downloads 354