Search results for: inverse optimization approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16669

15589 Deep Learning-Based Object-Classes Semantic Classification of Arabic Texts

Authors: Imen Elleuch, Wael Ouarda, Gargouri Bilel

Abstract:

In this paper, we propose a deep learning-based approach to text classification aimed at enriching an Arabic ontology built on the object classes of Gaston Gross. These object classes are defined by taking into account the syntactic and semantic features of the language under study. Our proposed approach is thus a hybrid one: on the one hand, it relies on the object classes, which represent a knowledge-based approach to text classification; on the other hand, it uses deep learning with word embeddings to classify text. We applied the proposed approach to a corpus constructed from an Arabic dictionary. The resulting semantic classification of text will enrich the Arabic object-classes ontology: new classes can be added, or the set of features characterizing each object class can be expanded. The results are compared with a similar work that treats the same objects using a classical linguistic approach to semantic text classification. This comparison highlights our hybrid approach, which can be improved by broadening the dataset used in the deep learning process.
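
As a concrete illustration of the word-embedding classification stage, the following minimal PyTorch sketch trains an embedding-plus-pooling classifier over object classes. The vocabulary size, class count, and random batch are placeholders, not the paper's corpus or architecture.

```python
# Minimal sketch of an embedding-based text classifier, assuming token ids
# are already produced by an upstream Arabic tokenizer (hypothetical data).
import torch
import torch.nn as nn

class ObjectClassClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)                  # (batch, seq, dim)
        mask = (token_ids != 0).unsqueeze(-1).float()    # ignore padding
        pooled = (emb * mask).sum(1) / mask.sum(1).clamp(min=1)
        return self.fc(pooled)                           # logits over classes

model = ObjectClassClassifier(vocab_size=20000, embed_dim=128, num_classes=12)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batch = torch.randint(1, 20000, (32, 40))   # 32 dummy sentences, 40 tokens
labels = torch.randint(0, 12, (32,))
loss = loss_fn(model(batch), labels)
loss.backward()
optimizer.step()
```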

Keywords: deep-learning approach, object-classes, semantic classification, Arabic

Procedia PDF Downloads 85
15588 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks

Authors: Christina Kirsch, Adam Hatzigiannis

Abstract:

Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols (essentially data capture on trains) and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance regarding accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know what workload assessment methodologies will provide reliable and valid data to optimize resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision, i.e., accurately detected defects rather than false positives. Conversely, high temporal demand was negatively correlated with sensitivity, the percentage of detected existing defects. Review speed was significantly correlated with false negatives: with an increase in review speed, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates in accordance with the efficiency-thoroughness trade-off.
Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures to ensure that recommendations for work system optimization are evidence-based and reliable.
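
To illustrate the kind of workload-performance analysis described above, the sketch below computes Pearson correlations between NASA TLX subscales, review speed, and accuracy measures. The per-session values are invented placeholders, not the study's data.

```python
# Illustrative sketch (not the study's actual data): correlating NASA TLX
# subscale ratings with review performance using Pearson's r.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-session records: TLX subscales and performance measures.
df = pd.DataFrame({
    "mental_demand":   [55, 70, 60, 80, 65, 75],
    "temporal_demand": [40, 65, 50, 85, 55, 70],
    "review_speed":    [3.2, 4.1, 3.5, 5.0, 3.8, 4.6],   # km/hour
    "sensitivity":     [0.92, 0.85, 0.90, 0.74, 0.88, 0.80],
    "precision":       [0.88, 0.91, 0.89, 0.93, 0.90, 0.92],
})

for workload in ["mental_demand", "temporal_demand", "review_speed"]:
    for perf in ["sensitivity", "precision"]:
        r, p = pearsonr(df[workload], df[perf])
        print(f"{workload} vs {perf}: r={r:+.2f}, p={p:.3f}")
```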

Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis

Procedia PDF Downloads 118
15587 Machine Learning and Metaheuristic Algorithms in Short Femoral Stem Custom Design to Reduce Stress Shielding

Authors: Isabel Moscol, Carlos J. Díaz, Ciro Rodríguez

Abstract:

Hip replacement becomes necessary when a person suffers severe pain or considerable functional limitations, and the best option to enhance their quality of life is the replacement of the damaged joint. One of the main components of femoral prostheses is the stem, which distributes the loads from the joint to the proximal femur. To preserve more bone stock and avoid weakening the diaphysis, a short starting stem was selected, generated from the intramedullary morphology of the patient's femur. This ensures the implantability of the design and leads to a geometric delimitation for personalized optimization with machine learning (ML) and metaheuristic algorithms. The present study attempts to design a cementless short stem that brings the strain deviation before and after implantation close to zero, promoting fixation and durability. Regression models developed to estimate the percentage change of maximum principal stresses were used as objective functions by the metaheuristic algorithm, which evaluated different short-stem geometries by modifying certain parameters in oblique sections from the osteotomy plane. The optimized geometry reached a global stress shielding (SS) of 18.37% with a coefficient of determination (R²) of 0.667. The predicted results favour integrating implantability into the short-stem optimization to effectively reduce SS in the proximal femur.
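
The abstract's surrogate-plus-metaheuristic loop can be sketched as follows: a regression model fitted to sampled geometry-response pairs stands in for the stress-shielding objective, and a metaheuristic searches the stem parameters. The parameter names, bounds, and mock response are illustrative assumptions, and differential evolution is used here merely as a representative metaheuristic.

```python
# Hedged sketch: a regression surrogate trained on FEM-style samples serves
# as the objective for a metaheuristic search over stem-geometry parameters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.uniform([8, 90, 1.0], [14, 140, 2.5], size=(200, 3))  # stem params
y = ((X[:, 0] - 11)**2 + 0.001*(X[:, 1] - 115)**2
     + (X[:, 2] - 1.8)**2) + rng.normal(0, 0.05, 200)  # mock SS(%) response

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def objective(params):
    # Predicted stress-shielding change for one candidate geometry.
    return surrogate.predict(params.reshape(1, -1))[0]

result = differential_evolution(objective,
                                bounds=[(8, 14), (90, 140), (1.0, 2.5)],
                                seed=0, maxiter=50)
print("optimal geometry:", result.x, "predicted SS:", result.fun)
```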

Keywords: machine learning techniques, metaheuristic algorithms, short-stem design, stress shielding, hip replacement

Procedia PDF Downloads 194
15586 Optimization Techniques of Doubly-Fed Induction Generator Controller Design for Reliability Enhancement of Wind Energy Conversion Systems

Authors: Om Prakash Bharti, Aanchal Verma, R. K. Saket

Abstract:

The Doubly-Fed Induction Generator (DFIG) is suggested for Wind Energy Conversion Systems (WECS) to extract wind power. DFIG is preferred due to its robustness to variations in wind and rotor speed. It is also highly adaptable, since its system parameters, including real power, reactive power, DC-link voltage, and the transient and dynamic responses, can be handled smoothly and need to be analyzed continuously. Such analysis becomes especially important during abnormal conditions in the electrical power system. Hence, the system parameters and the transient response performance of the DFIG need to be studied and improved using suitable control techniques. To this end, the present work implements and compares optimization methods for the design of the DFIG controller for WECS. Bio-inspired optimization techniques are applied to obtain the optimal controller design parameters for the DFIG-based WECS. The optimized DFIG controllers are then used to retrieve the transient response performance of the sixth-order DFIG model with a step input. Results obtained using MATLAB/Simulink show the superiority of the Firefly Algorithm (FFA) over the other controller design methods.
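
A compact sketch of the Firefly Algorithm applied to controller-gain tuning is shown below. The quadratic cost is a stand-in for the transient-response measure (e.g., ISE/ITAE) that would in practice be computed by simulating the sixth-order DFIG model; gains and bounds are illustrative.

```python
# Firefly Algorithm sketch for tuning two controller gains against a
# stand-in cost; the real cost would score the simulated step response.
import numpy as np

def cost(gains):
    kp, ki = gains
    return (kp - 2.0)**2 + (ki - 0.5)**2  # stand-in for an ISE/ITAE measure

def firefly(cost, bounds, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    rng = np.random.default_rng(1)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, (n, len(lo)))
    f = np.array([cost(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:  # firefly i moves toward brighter firefly j
                    r2 = np.sum((X[i] - X[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) \
                            + alpha * (rng.random(len(lo)) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    f[i] = cost(X[i])
    best = np.argmin(f)
    return X[best], f[best]

gains, best_cost = firefly(cost, np.array([[0.0, 10.0], [0.0, 5.0]]))
print("tuned gains:", gains, "cost:", best_cost)
```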

Keywords: doubly-fed induction generator, wind turbine, wind energy conversion system, induction generator, transfer function, proportional, integral, derivatives

Procedia PDF Downloads 92
15585 Wind Turbine Optimization: Shield Structure for High Wind Speed Conditions

Authors: Daniyar Seitenov, Nazim Mir-Nasiri

Abstract:

Optimization of a horizontal-axis, semi-exposed wind turbine has been performed using a shield that automatically protects the generator shaft from over-speeding and mechanical damage at extreme wind speeds while allowing the turbine to continue generating electricity during high-wind conditions. A generator semi-exposed to the wind has been designed, and its structure is described in this paper. A simplified point-force dynamic load model on the blades has been derived for normal and extreme wind conditions, with and without the shield. Numerical simulation has been conducted at different wind speeds to study the efficiency of the shield. The results show that the maximum power generated by the wind turbine with the shield does not exceed the rated value of the generator, the shield serving as an automatic brake at extreme wind speeds of 15 m/s and above. Meanwhile, the wind turbine without the shield produced power much larger than the rated value. The optimized horizontal-axis semi-exposed wind turbine with shield protection is suitable for low- and medium-power generation when installed on the roofs of high-rise buildings for harvesting wind energy. The wind shield works automatically with no power consumption. The structure of the generator with the protection and the mathematical simulation of the kinematics and dynamics of power generation are described in detail in this paper.

Keywords: renewable energy, wind turbine, wind turbine optimization, high wind speed

Procedia PDF Downloads 177
15584 Developing a Machine Learning-Based Cost Prediction Model for Construction Projects Using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
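
A minimal sketch of the PSO-ANN idea follows: PSO searches the flattened weights of a small feed-forward network against mock project features. The data, network size, and PSO constants are illustrative assumptions, not the study's configuration.

```python
# PSO tuning a tiny neural network for cost prediction on mock data;
# feature meanings and dimensions are illustrative, not the study's dataset.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(0, 1, (150, 4))           # e.g. estimate, resources, progress
y = 2*X[:, 0] + 0.5*X[:, 1] - X[:, 2] + 0.3*X[:, 3] + rng.normal(0, 0.05, 150)

N_IN, N_HID = 4, 6
DIM = N_IN*N_HID + N_HID + N_HID + 1       # weights and biases, flattened

def predict(w, X):
    W1 = w[:N_IN*N_HID].reshape(N_IN, N_HID)
    b1 = w[N_IN*N_HID:N_IN*N_HID + N_HID]
    W2 = w[N_IN*N_HID + N_HID:-1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y)**2)

# Standard PSO update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, iters = 30, 300
pos = rng.normal(0, 0.5, (n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("final RMSE:", np.sqrt(mse(gbest)))
```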

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 56
15583 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The goal of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities by integrating RFM analysis, K-means clustering, and LSTM networks. RFM analysis, traditionally used in customer value assessment, is applied to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks toward this goal. The proposed approach presents an efficient and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, contributing to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities.
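
The RFM-plus-clustering stage might look like the following sketch, where components are scored by failure recency, frequency, and monetary impact and then grouped with K-means. Column names and records are placeholders, not the utility datasets used in the study.

```python
# Hedged sketch of RFM scoring plus K-means segmentation of components.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

failures = pd.DataFrame({
    "component_id": [1, 1, 2, 3, 3, 3],
    "days_since_failure": [12, 340, 45, 5, 60, 200],
    "repair_cost": [1200.0, 800.0, 300.0, 5000.0, 2500.0, 900.0],
})

rfm = failures.groupby("component_id").agg(
    recency=("days_since_failure", "min"),    # most recent failure
    frequency=("component_id", "size"),       # number of failures
    monetary=("repair_cost", "sum"),          # total failure cost
)

scaled = StandardScaler().fit_transform(rfm)
rfm["cluster"] = KMeans(n_clusters=2, n_init=10,
                        random_state=0).fit_predict(scaled)
print(rfm)
```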

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 55
15582 Polymer Patterning by Dip Pen Nanolithography

Authors: Ayse Cagil Kandemir, Derya Erdem, Markus Niederberger, Ralph Spolenak

Abstract:

Dip-pen nanolithography (DPN), a tip-based method, offers a novel approach to producing nano- and micro-scale patterns owing to its high resolution and pattern flexibility. It was introduced as a new constructive scanning probe lithography (SPL) technique. DPN delivers materials in the form of an ink, using the tip of a cantilever as the pen and the substrate as the paper to form surface architectures. The first studies relied on the delivery of small organic molecules onto gold substrates under ambient conditions. Over time, different inks, such as polymers, colloidal particles, oligonucleotides, and metallic salts, were examined on a variety of surfaces. The advent of DPN also enabled patterning with multiple inks, using multiple cantilevers, for the first time in SPL history. In particular, polymer inks, which constitute a flexible matrix for various materials, have potential in MEMS, NEMS, and drug delivery applications. Our study aims to construct polymer patterns using DPN by studying the wetting behavior of polymers on semiconductor, metal, and polymer surfaces. The optimum viscosity range of the polymer and the effects of environmental conditions such as humidity and temperature are examined. An inverse relation between ink viscosity and depletion time is observed. This study also yields the optimal writing conditions for producing consistent patterns with DPN: written dot sizes are shown to increase with dwell time, indicating that the examined writing conditions yield repeatable patterns.

Keywords: dip pen nanolithography, polymer, surface patterning, surface science

Procedia PDF Downloads 396
15581 Design and Optimization of Soil Nailing Construction

Authors: Fereshteh Akbari, Farrokh Jalali Mosalam, Ali Hedayatifar, Amirreza Aminjavaheri

Abstract:

Soil nailing is an effective method to stabilize slopes and retaining structures. The lateral and vertical displacements of retaining walls are therefore important criteria for evaluating the safety risks to adjacent structures. This paper is devoted to the optimization of retaining walls using the ABAQUS software. The effects of various parameters, such as nail length, orientation, arrangement, horizontal spacing, and bond skin friction, on the lateral and vertical displacement of retaining walls are investigated. To ensure accuracy, the mobilized shear stress acting around the perimeter of the nail-soil interface is also modeled in ABAQUS. The observed trends in the results are compared to previous research.

Keywords: retaining walls, soil nailing, ABAQUS software, lateral displacement, vertical displacement

Procedia PDF Downloads 128
15580 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers’ hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
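
Peridot expresses optimizations as logic metaprograms; the Python sketch below illustrates only the general idea of user-supplied rewrite rules applied to a program term, not Peridot's actual language, unification machinery, or semantics.

```python
# Toy analogy of "optimizations as metaprograms": user-defined algebraic
# rewrite rules applied recursively to an expression tree. This is not
# Peridot code; it only illustrates the concept of library-author rules.
def simplify(expr):
    """Rewrite an expression tree of tuples like ('mul', a, b)."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    args = [simplify(a) for a in args]
    # Library-author-supplied rules: algebraic identities as transformations.
    if op == "mul" and 1 in args:
        return args[0] if args[1] == 1 else args[1]
    if op == "mul" and 0 in args:
        return 0
    if op == "add" and 0 in args:
        return args[0] if args[1] == 0 else args[1]
    return (op, *args)

# ((x * 1) + 0) simplifies to x.
print(simplify(("add", ("mul", "x", 1), 0)))   # -> 'x'
```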

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 85
15579 Review on Quaternion Gradient Operator with Marginal and Vector Approaches for Colour Edge Detection

Authors: Nadia Ben Youssef, Aicha Bouzid

Abstract:

Gradient estimation is one of the most fundamental tasks in image processing in general, and particularly for color images, since research on color image gradients remains limited. The most widely used gradient method is Di Zenzo’s gradient operator, which is based on a measure of the squared local contrast of color images. The gradient mechanism proposed in this paper is based on the principle of Di Zenzo’s approach using a quaternion representation. This edge detector is compared to a marginal approach based on the multiscale product of the wavelet transform, and to another vector approach based on quaternion convolution and a vector gradient. The experimental results indicate that the proposed color gradient operator outperforms the marginal approach; however, it is less efficient than the second vector approach.
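
For reference, Di Zenzo's operator, against which the quaternion detector is compared, can be sketched as the largest eigenvalue of the per-pixel color structure tensor; the NumPy code below is an illustrative implementation on a random image.

```python
# Sketch of the Di Zenzo structure-tensor gradient for a color image.
import numpy as np

def di_zenzo_gradient(img):
    """img: float array (H, W, 3). Returns per-pixel edge strength."""
    gxx = gyy = gxy = 0.0
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[:, :, c])   # gradients along rows, cols
        gxx = gxx + gx * gx
        gyy = gyy + gy * gy
        gxy = gxy + gx * gy
    # Largest eigenvalue of the 2x2 structure tensor at each pixel.
    lam = 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2))
    return np.sqrt(lam)

edges = di_zenzo_gradient(np.random.rand(64, 64, 3))
print(edges.shape)
```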

Keywords: gradient, edge detection, color image, quaternion

Procedia PDF Downloads 233
15578 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity for information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient encoding and decoding algorithms, but they concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by using perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. Wavelet transforms are applied in various fields of science; some applications include removing noise from signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling-function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first is based on the multiplicative inverse in a finite field; in the second, the redundancy part is the cube of the information part. The paper also investigates the characteristics of the proposed robust and linear codes.

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 488
15577 Pilot Scale Deproteinization Study on Fish Scale Using Response Surface Methodology

Authors: Fatima Bellali, Mariem Kharroubi

Abstract:

Fish scale waste is one of the main sources for the production of value-added products such as collagen. The main aim of this study is to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM) at pilot scale. To search for the optimal conditions, a Box-Behnken design of experiments (DOE) was carried out. The model-predicted values of the product's ash content were in good agreement with the experimental values (R² = 0.9813). Finally, model-based optimization was carried out to identify the operating parameters (reaction time = 4 h and solid-liquid ratio = 1/10) that yield the lowest collagen content.
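
The RSM step can be illustrated with a small sketch: fit a second-order polynomial to design-of-experiment runs and locate the optimum over the coded factors. The two factors and response values below are mock numbers, not the pilot-scale measurements.

```python
# Illustrative RSM sketch: fit a quadratic response surface to DOE runs
# and find the optimum. Factor codings and responses are placeholders.
import numpy as np
from scipy.optimize import minimize

# Coded levels (x1 = reaction time, x2 = solid-liquid ratio) and response.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
y = np.array([8.2, 6.1, 7.5, 5.9, 4.8, 5.0, 6.6, 5.5, 6.9, 6.2])

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1*x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def model(x):
    return design_matrix(np.atleast_2d(x))[0] @ beta

opt = minimize(model, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
print("coded optimum:", opt.x, "predicted response:", opt.fun)
```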

Keywords: pilot scale, Plackett and Burman design, fish waste, deproteinization

Procedia PDF Downloads 159
15576 Enhanced Dielectric Properties of La Substituted CoFe2O4 Magnetic Nanoparticles

Authors: M. Vadivel, R. Ramesh Babu

Abstract:

Spinel ferrite magnetic nanomaterials have received a great deal of attention in recent years due to their wide range of potential applications in fields such as magnetic data storage and microwave devices. Among the family of spinel ferrites, cobalt ferrite (CoFe2O4) has been widely used in high-frequency applications because of its remarkable material qualities, such as moderate saturation magnetization, high coercivity, large permeability at high frequency, and high electrical resistivity. For the aforementioned applications, the material should have improved electrical properties, especially enhanced dielectric properties. It is well known that substituting rare-earth cations on the Fe3+ site of CoFe2O4 nanoparticles leads to structural distortion that significantly influences the structural and morphological properties and greatly modifies the electrical and magnetic properties of the material. In the present investigation, we report on the influence of lanthanum (La3+) ion substitution on the structural, morphological, dielectric, and magnetic properties of CoFe2O4 magnetic nanoparticles prepared by the co-precipitation method. Powder X-ray diffraction patterns reveal the formation of an inverse cubic spinel structure, with the signature of a LaFeO3 phase at higher La3+ ion concentrations. Raman and Fourier transform infrared spectral analyses also confirm the formation of the inverse cubic spinel structure and the Fe-O symmetrical stretching vibrations of the CoFe2O4 nanoparticles, respectively. Transmission electron microscopy reveals that the particle size gradually increases with increasing La3+ ion concentration, whereas agglomeration is slightly reduced in La3+-substituted CoFe2O4 nanoparticles compared to undoped ones. Dielectric properties, such as the dielectric constant and dielectric loss, were recorded as functions of frequency and temperature, revealing that the dielectric constant gradually increases with increasing temperature as well as with La3+ ion concentration. The increased dielectric constant may be attributed to the formation of the LaFeO3 secondary phase at higher La3+ ion concentrations. Magnetic measurements demonstrate that the saturation magnetization gradually decreases from 61.45 to 25.13 emu/g with increasing La3+ ion concentration, which is due to the nonmagnetic nature of the substituted La3+ ions.

Keywords: cobalt ferrite, co-precipitation, dielectric properties, saturation magnetization

Procedia PDF Downloads 315
15575 FEM for Stress Reduction by Optimal Auxiliary Holes in a Uniaxially Loaded Plate

Authors: Basavaraj R. Endigeri, Shriharsh Desphande

Abstract:

Optimization and reduction of the stress concentration around holes in a uniaxially loaded plate is an important design criterion in many engineering applications. These stress risers can lead to failure of the component at the region of high stress concentration, which can be avoided by providing auxiliary holes on either side of the parent hole. A literature survey shows that, to date, no analytical solution has been documented for reducing stress concentration by providing auxiliary holes, except for a few geometries. In the present work, a plate with a hole subjected to uniaxial load is analyzed numerically to determine the optimum sizes and locations of the auxiliary holes for different ratios of center hole diameter to plate width. The effect of introducing auxiliary holes at optimum locations and with optimum radii on the stress concentration is also represented graphically. The finite element analysis package ANSYS 8.0 is used to carry out the analysis, and optimization is performed to determine the locations and radii of the auxiliary holes that minimize stress concentration. All results for the different diameter-to-plate-width ratios are presented graphically. The work shows that introducing auxiliary holes on either side of the central circular hole reduces the stress concentration factor by 19 to 21 percent.

Keywords: finite element method, optimization, stress concentration factor, auxiliary holes

Procedia PDF Downloads 438
15574 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang

Abstract:

Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To solve the ambiguity of symmetric pose, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
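
The pose-from-three-points step can be sketched with the Kabsch/orthogonal-Procrustes alignment below; the deep prediction network and the iterative refinement are not reproduced, and the keypoints are synthetic.

```python
# Hedged sketch: recover a rigid 6D pose from three predicted keypoints
# via the Kabsch algorithm (SVD-based orthogonal Procrustes).
import numpy as np

def pose_from_points(model_pts, pred_pts):
    """Rigid transform (R, t) aligning model_pts (3x3) onto pred_pts (3x3)."""
    cm, cp = model_pts.mean(0), pred_pts.mean(0)
    H = (model_pts - cm).T @ (pred_pts - cp)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cp - R @ cm
    return R, t

model = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pred = model @ R_true.T + np.array([0.1, -0.2, 0.5])  # synthetic prediction
R, t = pose_from_points(model, pred)
print(np.allclose(R, R_true), np.round(t, 3))
```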

Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision

Procedia PDF Downloads 159
15573 The Design, Development, and Optimization of a Capacitive Pressure Sensor Utilizing an Existing 9DOF Platform

Authors: Andrew Randles, Ilker Ocak, Cheam Daw Don, Navab Singh, Alex Gu

Abstract:

Nine Degrees of Freedom (9 DOF) systems are already in development in many areas. In this paper, an integrated pressure sensor is proposed that will make use of an already existing monolithic 9 DOF inertial MEMS platform. Capacitive pressure sensors can suffer from limited sensitivity for a given size of membrane. This novel pressure sensor design increases the sensitivity by over 5 times compared to a traditional array of square diaphragms while still fitting within a 2 mm x 2 mm chip and maintaining a fixed static capacitance. The improved design uses one large diaphragm supported by pillars with fixed electrodes placed above the areas of maximum deflection. The design optimization increases the sensitivity from 0.22 fF/kPa to 1.16 fF/kPa. Temperature sensitivity was also examined through simulation.

Keywords: capacitive pressure sensor, 9 DOF, 10 DOF, sensor, capacitive, inertial measurement unit, IMU, inertial navigation system, INS

Procedia PDF Downloads 544
15572 Optimization of Fenton Process for the Treatment of Young Municipal Leachate

Authors: Bouchra Wassate, Younes Karhat, Khadija El Falaki

Abstract:

Leachate is a source of surface water and groundwater contamination if it is not pretreated. Indeed, its complex composition and pollution load make treating it to the required standard limits extremely difficult. The objective of this work is to show the value of advanced oxidation processes in treating leachate from urban waste containing high concentrations of organic pollutants. The efficiency of the Fenton reagent (Fe2+ + H2O2 + H+) for young leachate recovered from household-waste collection trucks in the city of Casablanca, Morocco, was evaluated with the objectives of reducing chemical oxygen demand (COD) and color. The optimization of certain physicochemical parameters (initial pH value, reaction time, [Fe2+], and the [H2O2]/[Fe2+] ratio) yielded good results in terms of COD reduction and discoloration of the leachate.

Keywords: COD removal, color removal, Fenton process, oxidation process, leachate

Procedia PDF Downloads 286
15571 Heat Transfer Process Parameter Optimization in Si/Ge Using the Taguchi Method

Authors: Evln Ranga Charyulu, S. P. Venu Madhavarao, S. Udaya kumar, S. V. S. S. N. V. G. Krishna Murthy

Abstract:

With the advent of new nanometer process technologies, it is possible to integrate a billion transistors on a single substrate. As more functionality is included, multi-million transistors may switch simultaneously, consuming and dissipating more power, along with more current leaking into the substrate of porous silicon or germanium material. This results in substrate heating and the generation of thermal noise that couples to signals of interest. The heating process is represented by coupled nonlinear partial differential equations (PDEs) in porous silicon and germanium. Identifying heat sources and heat fluxes may aid the design of ultra-low-power circuits. The PDEs are solved by a finite difference scheme under boundary-layer assumptions in porous silicon and germanium. Local heat fluxes along a vertical isothermal surface immersed in porous Si/Ge are considered. The parameters considered for optimization are thermal diffusivity, thermal expansion coefficient, thermal diffusion ratio, permeability, specific heat at constant temperature, Rayleigh number, amplitude of the wavy surface, and mass expansion coefficient. The diffusion of heat is caused by the concentration gradient, and the thermophysical properties are homogeneous and isotropic. The parameters are optimized using the Taguchi method with an L8 orthogonal array.
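
The Taguchi analysis can be sketched as follows: a two-level array over three of the listed factors, smaller-the-better signal-to-noise ratios per run, and main-effect ranking. Factor names, levels, and responses are illustrative placeholders, and the array shown is the 2^3 fragment rather than a full seven-column L8.

```python
# Illustrative Taguchi sketch: three two-level factors, eight runs,
# smaller-the-better S/N ratios, and main-effect ranking (mock responses).
import numpy as np

runs = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
                 [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
y = np.array([2.1, 2.4, 1.8, 2.0, 3.0, 3.3, 2.6, 2.9])  # e.g. peak heating

# Smaller-the-better signal-to-noise ratio per run.
sn = -10 * np.log10(y ** 2)

factors = ["thermal diffusivity", "permeability", "Rayleigh number"]
for k, name in enumerate(factors):
    lo = sn[runs[:, k] == 0].mean()
    hi = sn[runs[:, k] == 1].mean()
    print(f"{name}: effect = {abs(hi - lo):.2f} dB, best level = {int(hi > lo)}")
```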

Keywords: heat transfer, PDE, Taguchi optimization, Si/Ge

Procedia PDF Downloads 335
15570 Optimisation of B2C Supply Chain Resource Allocation

Authors: Firdaous Zair, Zoubir Elfelsoufi, Mohammed Fourka

Abstract:

Resource allocation is an issue that arises at both the tactical and operational levels of strategic planning. This work considers resource allocation in the case of pure players, manufacturers, and click-and-mortar companies that have launched online sales. The aim is to improve customer satisfaction, maintain the benefits for the e-retailer and its partners, and reduce costs and risks. Our contribution is a decision support system and tool for improving resource allocation in B2C e-commerce logistics chains. We first model the B2C chain with all the operations it integrates and the possible scenarios, since online retailers offer a wide selection of personalized services. The personalized services that online shopping companies offer to clients span many aspects, such as customized payment, distribution methods, and after-sales service choices, and every aspect of customized service has several modes. We then analyze the optimization problems of supply chain resource allocation in the customized online shopping service mode, which differ from supply chain resource allocation under traditional manufacturing or service circumstances. Finally, we develop an optimization model and algorithm based on this analysis of B2C supply chain resource allocation. It is a multi-objective optimization that considers the collaboration of resources in operations, time, and costs, but also the risks and quality of service, as well as the dynamic and uncertain character of demand.

Keywords: e-commerce, supply chain, B2C, optimisation, resource allocation

Procedia PDF Downloads 271
15569 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata

Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen

Abstract:

This article introduces a computationally efficient method for stacking sequence blending of composite structures. The computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automaton scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.

Keywords: composite, blending, optimization, lamination parameters

Procedia PDF Downloads 225
15568 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization

Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder

Abstract:

In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g., in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems, in which every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming, or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). Combining a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring their effect on the reaction within a single run. A polydimethylsiloxane (PDMS) microfluidic chip with different functionalities is developed using common soft- and photolithographic techniques. As an interface for MS detection, we use a steel capillary for ESI and improve spray stability with Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions, shown exemplarily for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations, and solvent compositions are investigated. Due to the high repetition rate of droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help determine important information about the optimal parameters of a reaction within a short time. With this novel tool, we make an important step in the field of combining droplet-based microfluidics with organic reaction screening.

Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening

Procedia PDF Downloads 298
15567 Numerical Investigation of a Supersonic Ejector for Refrigeration System

Authors: Karima Megdouli, Bourhan Taschtouch

Abstract:

Supersonic ejectors have many applications in refrigeration systems, and improving ejector performance is key to improving the efficiency of these systems. One of the main advantages of the ejector is its geometric simplicity and the absence of moving parts. This paper presents a theoretical model for evaluating the performance of a new supersonic ejector configuration for refrigeration system applications. The relationship between the flow field and the key parameters of the new configuration is illustrated by analyzing the Mach number and flow velocity contours. The method of characteristics (MOC) is used to design the supersonic nozzle of the ejector, and the results obtained are compared with those from CFD. The ejector is optimized by minimizing the exergy destruction due to irreversibility and shock waves. The optimization converges to an efficient optimum solution, ensuring improved and stable performance over the whole considered range of uncertain operating conditions.

Keywords: supersonic ejector, theoretical model, CFD, optimization, performance

Procedia PDF Downloads 74
15566 A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory

Authors: Siavash Eftekharifar, Tohid Yousefi Rezaii, Mahdi Shamsi

Abstract:

The purpose of this paper is to exploit the compressed sensing (CS) method in order to model and compress electrocardiogram (ECG) signals at a high compression ratio. To obtain a sparse representation of the ECG signals, a suitable basis matrix of Gaussian kernels, which are shown to fit the ECG signals well, is first constructed. The sparse model is then extracted by applying an optimization technique. Finally, CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also performed to demonstrate the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal, and the Mean Square Error (MSE) is calculated to evaluate the precision of the proposed compression method.
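
The reconstruction stage can be sketched with orthogonal matching pursuit, a standard greedy technique, over a Gaussian-kernel dictionary; the synthetic signal, dictionary width, and measurement count below are illustrative assumptions rather than the paper's exact settings.

```python
# Greedy CS reconstruction sketch: a signal sparse in a Gaussian-kernel
# dictionary, recovered from random compressed measurements via OMP.
import numpy as np

n, n_atoms, m, sparsity = 256, 128, 64, 5
t = np.linspace(0, 1, n)
rng = np.random.default_rng(7)

# Dictionary of Gaussian kernels at shifted centers (columns normalized).
centers = np.linspace(0, 1, n_atoms)
D = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * 0.01 ** 2))
D /= np.linalg.norm(D, axis=0)

x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.normal(0, 1, sparsity)
signal = D @ x_true                           # synthetic sparse "ECG"

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random sensing matrix
yc = Phi @ signal                             # compressed measurements
A = Phi @ D

# Orthogonal matching pursuit.
residual, support = yc.copy(), []
for _ in range(sparsity):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], yc, rcond=None)
    residual = yc - A[:, support] @ coef

x_hat = np.zeros(n_atoms)
x_hat[support] = coef
print("reconstruction MSE:", np.mean((D @ x_hat - signal) ** 2))
```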

Keywords: compressed sensing, ECG compression, Gaussian kernel, sparse representation

Procedia PDF Downloads 462
15565 Static Study of Piezoelectric Bimorph Beams with Delamination Zone

Authors: Zemirline Adel, Ouali Mohammed, Mahieddine Ali

Abstract:

First-order shear deformation theory (FOSDT) is taken into consideration to study the static behavior of a bimorph beam with a delamination zone between the upper and lower layers. The effects of boundary conditions and of the length of the delamination zone are presented in this paper, with an application to the PVDF piezoelectric material. The finite element method (FEM) is used to discretize the beam. In the axial displacement, a displacement field with an inverse effect between the upper and lower layers was observed in the debonded zone.

Keywords: static, piezoelectricity, beam, delamination

Procedia PDF Downloads 415
15564 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
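
The weighted-loss idea can be sketched schematically in PyTorch as below, with separate adjustable weights on the data and residual terms; the residual shown is a toy real-valued transport equation standing in for the complex-valued FLE residual, and only the ADAM stage is shown.

```python
# Schematic weighted-PINN loss: adjustable weights on the initial/boundary
# data term and the PDE-residual term. The residual is a placeholder
# (u_t + u_x = 0), not the actual Fokas-Lenells residual.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(tx):
    tx = tx.requires_grad_(True)
    u = net(tx)
    grads = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = grads[:, :1], grads[:, 1:]
    return u_t + u_x            # placeholder PDE

tx_data = torch.rand(100, 2)                 # initial/boundary points
u_data = torch.sin(tx_data[:, 1:])           # mock boundary values
tx_coll = torch.rand(1000, 2)                # residual collocation points

w_data, w_pde = 10.0, 1.0                    # adjustable loss weights
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = (w_data * torch.mean((net(tx_data) - u_data) ** 2)
            + w_pde * torch.mean(pde_residual(tx_coll) ** 2))
    loss.backward()
    opt.step()
```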

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 69
15563 Ficus Carica as Adsorbent for Removal of Phenol from Aqueous Solutions: Modelling and Optimization

Authors: Tizi Hayet, Berrama Tarek, Bounif Nadia

Abstract:

Phenol and its derivatives are organic compounds used in the chemical industry. They are introduced into the environment by accidental spills and the illegal release of industrial and municipal wastewater. Phenols are organic intermediates that are considered potential pollutants. Adsorption is one of the purification and separation techniques used in this area. Algeria produces 131,000 tonnes of figs annually; a large amount of fig leaves is therefore generated, and converting this waste into an adsorbent allows the valorization of an agricultural residue. The main purpose of the present work is to describe an application of statistical methods for modeling and optimizing the conditions of phenol (Ph) adsorption onto a locally available agricultural by-product (fig leaves). The best experimental performance of Ph removal on the adsorbent was obtained with: adsorbent concentration (X2) = 0.2 g L-1, initial concentration (X3) = 150 mg L-1, and agitation speed (X1) = 300 rpm.

Keywords: low-cost adsorbents, fig leaves, full factorial design, phenol, biosorption

Procedia PDF Downloads 96
15562 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

The main concern about deep-fat-fried foods is their high final oil content, absorbed during the frying process and/or the subsequent cooling period, since a diet high in oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs; one promising method is partially drying the food material before frying. The present study aimed to control and decrease the final oil content of zucchini slices by means of partial drying and to optimize the process conditions. Conventional oven drying was used to decrease the moisture content of the zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that of directly fried ones. For the predrying and frying processes, the controlled variables were oven temperature and weight loss, and frying oil temperature and time, respectively. Zucchini slices were also directly fried for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture, and taste. The properties of the directly fried zucchini slices receiving the highest scores in the sensory evaluation were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization: the properties determined from the sensory evaluation were selected as targets, while the oil content was to be minimized. Results indicated that the final oil content of the zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. It is therefore suggested that predrying could be one option for reducing the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.

Keywords: health process, optimization, response surface methodology, oil uptake, conventional oven

Procedia PDF Downloads 365
15561 First Investigation on CZTS Electron Affinity and Thickness Optimization Using SILVACO-Atlas 2D Simulation

Authors: Zeineb Seboui, Samar Dabbabi

Abstract:

In this paper, we study the performance of a Cu₂ZnSnS₄ (CZTS)-based solar cell. To our knowledge, this is the first time the FTO/ZnO:Co/CZTS structure has been simulated using SILVACO-Atlas 2D. Cu₂ZnSnS₄ (CZTS), ZnO:Co, and FTO (SnO₂:F) layers were deposited on glass substrates by the spray pyrolysis technique. The extracted physical properties, such as the thickness and optical parameters of the CZTS layer, are used to create new input data for the CZTS-based solar cell. The electron affinity and thickness of the CZTS layer are optimized to obtain the best FTO/ZnO:Co/CZTS efficiency. Using a CZTS absorber layer with an electron affinity of 3.99 eV and a thickness of 3.2 µm leads to the highest efficiency, 16.86%, which is important for the development of new technologies and new solar cell devices.

Keywords: CZTS solar cell, characterization, electron affinity, thickness, SILVACO-atlas 2D simulation

Procedia PDF Downloads 75
15560 The Design Optimization for Sound Absorption Material of Multi-Layer Structure

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Tae-Hyeon Oh, Dae-Kyu Park

Abstract:

Sound-absorbing materials are used as automotive interior materials, and their sound absorption coefficient should be predicted at the design stage. This is difficult because such materials comprise several layers, so design targets are typically reached through many experimental tunings, which costs considerable time and money. In this paper, we propose a process to estimate the sound absorption coefficient of a multi-layer structure. The physical properties of each material are used to estimate the coefficient; these properties are themselves predicted by the Foam-X software from sound absorption coefficient data measured with an impedance tube. Since there are many physical properties and the measurement equipment is expensive, the software-predicted values are used. The physical properties of each material are calculated inversely from its measured sound absorption coefficient and then used to calculate the sound absorption coefficient of the multi-layer material. Since the absorption coefficient of the multi-layer structure can be calculated, design optimization becomes possible through simulation. We then compare and analyze the calculated sound absorption coefficient with data measured in a scaled reverberation chamber and with impedance tubes for a prototype. If this method is used when developing automotive interior materials with multi-layer structures, development effort can be reduced because the design can be optimized by simulation, saving cost and time.
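
A hedged sketch of one common way to compute a multi-layer absorption coefficient is given below: chain 2x2 transfer matrices for each porous layer, described here by the empirical Delany-Bazley model, over a rigid backing. This is a generic illustration under assumed flow resistivities and thicknesses, not the Foam-X workflow or the paper's materials.

```python
# Transfer-matrix sketch of normal-incidence absorption for a rigid-backed
# multi-layer porous absorber using the Delany-Bazley empirical model.
import numpy as np

rho0, c0 = 1.21, 343.0                     # air density, speed of sound
f = np.linspace(100, 5000, 200)            # frequency grid in Hz

def delany_bazley(f, sigma):
    """Characteristic impedance Zc and wavenumber k of a porous layer."""
    X = rho0 * f / sigma
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f / c0) * (1 + 0.0978 * X**-0.700
                                - 1j * 0.189 * X**-0.595)
    return Zc, k

layers = [(20000.0, 0.010), (10000.0, 0.020)]  # (flow resistivity, thickness)

T = np.broadcast_to(np.eye(2), (len(f), 2, 2)).copy().astype(complex)
for sigma, d in layers:                    # chain matrices front to back
    Zc, k = delany_bazley(f, sigma)
    Tl = np.empty((len(f), 2, 2), complex)
    Tl[:, 0, 0] = np.cos(k * d)
    Tl[:, 0, 1] = 1j * Zc * np.sin(k * d)
    Tl[:, 1, 0] = 1j * np.sin(k * d) / Zc
    Tl[:, 1, 1] = np.cos(k * d)
    T = T @ Tl

Zs = T[:, 0, 0] / T[:, 1, 0]               # surface impedance, rigid backing
alpha = 1 - np.abs((Zs - rho0 * c0) / (Zs + rho0 * c0))**2
print("absorption near 1 kHz:",
      round(float(alpha[np.argmin(np.abs(f - 1000))]), 3))
```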

Keywords: sound absorption material, sound impedance tube, sound absorption coefficient, optimization design

Procedia PDF Downloads 287