Search results for: inverse optimization approach
15559 A Game-Theory-Based Price-Optimization Algorithm for the Simulation of Markets Using Agent-Based Modelling
Authors: Juan Manuel Sanchez-Cartas, Gonzalo Leon
Abstract:
A price competition algorithm for ABMs based on game theory principles is proposed to deal with the simulation of theoretical market models. The algorithm is applied to the classical Hotelling’s model and to a two-sided market model to show that it leads to the optimal behavior predicted by theoretical models. However, when theoretical models fail to predict the equilibrium, the algorithm is capable of reaching a feasible outcome. Results highlight that the algorithm can be implemented in other simulation models to guarantee rational users and endogenous optimal behaviors. Also, it can be applied as a verification tool, given that it is theoretically based.
Keywords: agent-based models, algorithmic game theory, multi-sided markets, price optimization
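As a companion illustration, the sketch below runs an iterated best-response price search for the classical Hotelling linear-city model; the transport cost, marginal cost, and price grid are illustrative assumptions rather than the authors' agent-based implementation.

```python
import numpy as np

# Hotelling linear city: firms at 0 and 1, unit transport cost t, marginal cost c.
t, c = 1.0, 0.5
prices = np.array([2.0, 0.8])            # arbitrary starting prices
grid = np.linspace(c, c + 3 * t, 601)    # candidate prices each firm may try

def share_firm0(p0, p1):
    """Location of the indifferent consumer, clipped to the unit interval."""
    return float(np.clip((p1 - p0 + t) / (2 * t), 0.0, 1.0))

def profit(i, p_own, p_other):
    share = share_firm0(p_own, p_other) if i == 0 else 1.0 - share_firm0(p_other, p_own)
    return (p_own - c) * share

for _ in range(100):                      # iterated best responses
    for i in range(2):
        other = prices[1 - i]
        prices[i] = grid[np.argmax([profit(i, p, other) for p in grid])]

print(prices)                             # approaches the theoretical equilibrium p* = c + t
```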
Procedia PDF Downloads 454
15558 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gain insights into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of this work is to develop a numerical framework that describes and analyses fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates the derivation of the Poisson equation for pressure from the Navier-Stokes equation and the use of Chebyshev-finite difference techniques to resolve it effectively. The mathematical analysis considers bounded domains with smooth solutions and non-periodic boundary conditions. To address this, a hybrid computational approach combining direct numerical simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large eddy simulation with wall models, direct numerical simulation
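For illustration, the following minimal sketch solves a 2D pressure-Poisson problem ∇²p = f with a plain second-order finite-difference Jacobi iteration; it is a simplified stand-in and does not reproduce the Chebyshev-finite difference scheme or the turbulence source term used in the work.

```python
import numpy as np

# Minimal 2D Poisson solve, nabla^2 p = f, by Jacobi sweeps on a uniform grid.
# Illustrative only: the study uses a Chebyshev-finite difference scheme instead.
N, L = 64, 1.0
h = L / (N - 1)
x = np.linspace(0.0, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.sin(np.pi * X) * np.sin(np.pi * Y)          # stand-in source term

p = np.zeros((N, N))                                # Dirichlet p = 0 on the boundary
for _ in range(5000):
    p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                            + p[1:-1, 2:] + p[1:-1, :-2]
                            - h**2 * f[1:-1, 1:-1])

exact = -f / (2.0 * np.pi**2)                       # analytic solution for this f
print("max error:", float(np.max(np.abs(p - exact))))
```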
Procedia PDF Downloads 92
15557 In-situ Observations Using SEM-EBSD for Bending Deformation in Single-Crystal Materials
Authors: Yuko Matayoshi, Takashi Sakai, Yin-Gjum Jin, Jun-ichi Koyama
Abstract:
To elucidate the material characteristics of single crystals of pure aluminum and copper, the respective relations between crystallographic orientations and microstructures were examined, along with bending and mechanical properties. The texture distribution was also analysed. Bending tests were performed in an SEM apparatus while the deformation behavior was observed. Some analytical results related to crystal direction maps, inverse pole figures, and textures were obtained from electron back scatter diffraction (EBSD) analyses.
Keywords: pure aluminum, pure copper, single crystal, bending, SEM-EBSD analysis, texture, microstructure
Procedia PDF Downloads 367
15556 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market
Authors: Ioannis P. Panapakidis, Marios N. Moschakis
Abstract:
The Greek Energy Market is structured as a mandatory pool where the producers make their bid offers on a day-ahead basis. The System Operator solves an optimization routine aiming at the minimization of the cost of produced electricity. The solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatilities of complex time series.
Keywords: deregulated energy market, forecasting, machine learning, system marginal price
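A minimal sketch of the kind of machine-learning forecast described, using a feed-forward neural network on lagged prices; the synthetic hourly series and model settings are assumptions, not the Greek market data or the models compared in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for an hourly SMP series (EUR/MWh); real data would come
# from the market operator.
rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
smp = 50 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Use the previous 24 hourly prices as features to predict the next hour.
lags = 24
X = np.array([smp[i - lags:i] for i in range(lags, smp.size)])
y = smp[lags:]
split = int(0.8 * len(y))

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X[:split], y[:split])
print("MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```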
Procedia PDF Downloads 214
15555 Optimum Design of Grillage Systems Using Firefly Algorithm Optimization Method
Authors: F. Erdal, E. Dogan, F. E. Uz
Abstract:
In this study, a firefly-optimization-based optimum design algorithm is presented for grillage systems. The algorithm is named after fireflies, whose movement behavior is taken as a model in its development; the assumptions that fireflies are unisex and attracted to one another form the basis of the algorithm. The design algorithm considers the displacement and strength constraints which are implemented from LRFD-AISC (Load and Resistance Factor Design-American Institute of Steel Construction). It selects the appropriate W (Wide Flange)-sections for the transverse and longitudinal beams of the grillage system among 272 discrete W-section designations given in LRFD-AISC so that the design limitations described in LRFD are satisfied and the weight of the system is minimized. A number of design examples are considered to demonstrate the efficiency of the algorithm presented.
Keywords: firefly algorithm, steel grillage systems, optimum design, stochastic search techniques
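The following is a minimal sketch of the standard continuous firefly algorithm on a toy objective; the actual design algorithm searches discrete W-section designations under LRFD-AISC constraints, which is not reproduced here.

```python
import numpy as np

# Minimal continuous firefly algorithm on a toy objective; the weight function
# of the grillage problem is replaced by a simple quadratic stand-in.
rng = np.random.default_rng(1)

def objective(x):                 # stand-in for the system weight to be minimized
    return np.sum(x**2)

n, dim, beta0, gamma, alpha = 20, 5, 1.0, 1.0, 0.1
pop = rng.uniform(-5, 5, (n, dim))

for _ in range(200):
    light = np.array([objective(x) for x in pop])   # lower objective = brighter firefly here
    for i in range(n):
        for j in range(n):
            if light[j] < light[i]:                 # move firefly i toward brighter j
                r2 = np.sum((pop[i] - pop[j])**2)
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(0, 1, dim)

best = pop[np.argmin([objective(x) for x in pop])]
print("best objective:", float(objective(best)))
```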
Procedia PDF Downloads 431
15554 Applications of Artificial Neural Networks in Civil Engineering
Authors: Naci Büyükkaracığan
Abstract:
An artificial neural network (ANN) is a computational model based on the structure and working principles of the human brain's nervous system. Artificial neural networks have been the subject of an active field of research that has matured greatly over the past 55 years. ANNs are now used in many fields, but they have been observed to give particularly good results in optimization and control systems. Optimization and control systems are required in many of the areas forming the subject of civil engineering applications. In this study, the artificial intelligence systems widely used in the solution of civil engineering problems were first examined in terms of basic principles and technical aspects. Finally, a literature review of applications in the field of civil engineering was conducted, and the artificial intelligence techniques used in these studies and their results are reported.
Keywords: artificial neural networks, civil engineering, fuzzy logic, statistics
Procedia PDF Downloads 409
15553 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems
Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó
Abstract:
Non-linear FEM calculations are indispensable when important technical information, like the operating performance of a rubber component, is desired. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself shows non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linear behavior further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts led to the study of FEM-based calculation processes. This type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and the possibilities for time reduction are presented.
Keywords: rubber bumper, data acquisition, finite element analysis, support vector regression
Procedia PDF Downloads 469
15552 Electro-Fenton Degradation of Erythrosine B Using Carbon Felt as a Cathode: Doehlert Design as an Optimization Technique
Authors: Sourour Chaabane, Davide Clematis, Marco Panizza
Abstract:
This study investigates the oxidation of Erythrosine B (EB) food dye by a homogeneous electro-Fenton process using iron (II) sulfate heptahydrate as a catalyst, carbon felt as the cathode, and Ti/RuO₂ as the anode. The treated synthetic wastewater contains 100 mg L⁻¹ of EB and has a pH = 3. The effects of three independent variables have been considered for process optimization, namely applied current intensity (0.1 – 0.5 A), iron concentration (1 – 10 mM), and stirring rate (100 – 1000 rpm). Their interactions were investigated using response surface methodology (RSM) based on a Doehlert design as the optimization method. EB removal efficiency and energy consumption were considered model responses after 30 minutes of electrolysis. Analysis of variance (ANOVA) revealed that the quadratic model was adequately fitted to the experimental data: R² (0.9819), adj-R² (0.9276) and a low Fisher probability (< 0.0181) for the EB removal model, and R² (0.9968), adj-R² (0.9872) and a low Fisher probability (< 0.0014) for the energy consumption model reflected robust statistical significance. The energy consumption model significantly depends on current density, as expected. The foregoing results obtained by RSM led to the following optimal conditions for EB degradation: current intensity of 0.2 A, iron concentration of 9.397 mM, and stirring rate of 500 rpm, which gave a maximum decolorization rate of 98.15 % with a minimum energy consumption of 0.74 kWh m⁻³ at 30 min of electrolysis.
Keywords: electro-Fenton, erythrosine B, dye, response surface methodology, carbon felt
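A minimal sketch of the response-surface step: fitting a full quadratic model to coded factors by least squares and reporting R²; the design points and removal values are synthetic stand-ins, not the Doehlert runs reported above.

```python
import numpy as np

# Fit a full quadratic response-surface model to coded factors by least squares;
# the design points and responses are synthetic stand-ins, not the Doehlert runs.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (16, 3))                    # coded current, [Fe2+], stirring
y = 90 - 5 * X[:, 0]**2 + 3 * X[:, 1] + rng.normal(0, 0.5, 16)   # fake removal %

def quad_terms(x):
    x1, x2, x3 = x
    return [1.0, x1, x2, x3, x1 * x1, x2 * x2, x3 * x3, x1 * x2, x1 * x3, x2 * x3]

A = np.array([quad_terms(x) for x in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 3))
print("R2 =", round(float(r2), 4))
```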
Procedia PDF Downloads 70
15551 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which includes the difficult conformal mapping procedure, can be applied to simple and symmetrical problems only. Therefore, the objectives of this paper are to: 1) develop a surrogate model based on numerically simulated data using SEEPW software to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The result shows that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis
Procedia PDF Downloads 222
15550 Harmonizing Cities: Integrating Land Use Diversity and Multimodal Transit for Social Equity
Authors: Zi-Yan Chao
Abstract:
With the rapid development of urbanization and increasing demand for efficient transportation systems, the interaction between land use diversity and transportation resource allocation has become a critical issue in urban planning. Achieving a balance of land use types, such as residential, commercial, and industrial areas, plays a crucial role in ensuring social equity and sustainable urban development. Simultaneously, optimizing multimodal transportation networks, including bus, subway, and car routes, is essential for minimizing total travel time and costs, while ensuring fairness for all social groups, particularly in meeting the transportation needs of low-income populations. This study develops a bilevel programming model to address these challenges, with land use diversity as the foundation for measuring equity. The upper-level model maximizes land use diversity for balanced land distribution across regions. The lower-level model optimizes multimodal transportation networks to minimize travel time and costs while maintaining user equilibrium. The model also incorporates constraints to ensure fair resource allocation, such as balancing transportation accessibility and cost differences across various social groups. A solution approach is developed to solve the bilevel optimization problem, ensuring efficient exploration of the solution space for land use and transportation resource allocation. This study maximizes social equity by maximizing land use diversity and achieving user equilibrium with optimal transportation resource distribution. The proposed method provides a robust framework for addressing urban planning challenges, contributing to sustainable and equitable urban development.
Keywords: bilevel programming model, genetic algorithms, land use diversity, multimodal transportation optimization, social equity
Procedia PDF Downloads 21
15549 Design of a Thrust Vectoring System for an Underwater ROV
Authors: Isaac Laryea
Abstract:
Underwater remote-operated vehicles (ROVs) are highly useful in aquatic research and underwater operations. Unfortunately, unsteady and unpredictable conditions underwater make it difficult for underwater vehicles to maintain a steady attitude during motion. Existing underwater vehicles make use of multiple thrusters positioned at specific locations on their frame to maintain a certain pose. This study proposes an alternative way of maintaining a steady attitude during horizontal motion at low speeds by making use of a thrust vector-controlled propulsion system. The study began by carrying out some preliminary calculations to get an idea of a suitable shape and form factor. Flow simulations were carried out to ensure that enough thrust could be generated to move the system. Using the Lagrangian approach, a mathematical model was developed for the ROV, and this model was used to design a control system. A PID controller was initially selected for the control system. However, after tuning, it was realized that a PD controller satisfied the design specifications. The designed control system produced an overshoot of 6.72%, with a settling time of 0.192 s. To achieve the effect of thrust vectoring, an inverse kinematics synthesis was carried out to determine what angle the actuators need to move to. After building the system, intermittent angular displacements of 10°, 15°, and 20° were given during bench testing, and the response of the control system as well as the servo motor angle was plotted. The final design was able to move in water but was not able to handle large angular displacements as a result of the small angle approximation used in the mathematical model.
Keywords: PID control, thrust vectoring, parallel manipulators, ROV, underwater, attitude control
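A minimal sketch of checking overshoot and settling time for a PD-controlled second-order plant under a step command; the inertia term and gains are illustrative assumptions, not the ROV's identified model or tuned controller.

```python
import numpy as np

# Step response of a PD-controlled second-order plant with overshoot and 2%
# settling time extracted; plant parameters and gains are illustrative only.
J, Kp, Kd = 0.02, 20.5, 0.84             # inertia-like term and PD gains (assumed)
dt, T = 1e-4, 1.0
ref = np.deg2rad(15)                      # 15 degree step command
theta, omega = 0.0, 0.0

ts = np.arange(0.0, T, dt)
hist = np.zeros_like(ts)
for k in range(ts.size):
    torque = Kp * (ref - theta) - Kd * omega      # PD law (derivative on measurement)
    omega += (torque / J) * dt                    # explicit Euler integration
    theta += omega * dt
    hist[k] = theta

overshoot = (hist.max() - ref) / ref * 100.0
outside = np.where(np.abs(hist - ref) > 0.02 * ref)[0]
settling_time = (outside[-1] + 1) * dt if outside.size else 0.0
print(f"overshoot = {overshoot:.2f}%, settling time = {settling_time:.3f} s")
```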
Procedia PDF Downloads 66
15548 The Optimization of an Industrial Recycling Line: Improving the Durability of Recycled Polyethylene Blends
Authors: Alae Lamtai, Said Elkoun, Hniya Kharmoudi, Mathieu Robert, Carl Diez
Abstract:
This study applies Taguchi's design of experiments methodology and grey relational analysis (GRA) for the multi-objective optimization of an industrial recycling line. The line is composed mainly of a mono- and a twin-screw extruder and a filtration system. Experiments were performed according to the L₁₆ standard orthogonal array based on five process parameters, namely: mono-screw design, screw speed of the mono and twin-screw extruders, melt pump pressure, and filter mesh size. The objective of this optimization is to improve the durability of the polyethylene (PE) blend by decreasing the loss of stress crack resistance (SCR), measured with the Notched Crack Ligament Stress (NCLS) and Unnotched Crack Ligament Stress (UCLS) tests, in parallel with increasing the gain in Izod impact strength of the blend before and after recycling. Based on grey relational analysis (GRA), the optimal setting of process parameters was identified, and the results indicated that the mono-screw design and the screw speeds of both the mono and twin-screw extruders significantly impact the mechanical properties of the recycled polyethylene (PE) blend.
Keywords: Taguchi, recycling line, polyethylene, stress crack resistance, Izod impact strength, grey relational analysis
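A minimal sketch of the grey relational analysis step that turns several responses into a single grade per run; the response table and optimization directions are synthetic stand-ins for the L₁₆ results.

```python
import numpy as np

# Grey relational analysis on a small synthetic response table (rows = runs,
# columns = responses); values are stand-ins, not the actual L16 results.
responses = np.array([
    [0.80, 0.60, 12.0],
    [0.85, 0.55, 14.0],
    [0.78, 0.70, 11.0],
    [0.90, 0.50, 15.0],
])
larger_is_better = np.array([True, False, True])   # e.g. SCR retention, SCR loss, impact gain

span = np.ptp(responses, axis=0)
norm = np.where(larger_is_better,
                (responses - responses.min(axis=0)) / span,
                (responses.max(axis=0) - responses) / span)

delta = 1.0 - norm                                 # deviation from the ideal sequence
zeta = 0.5                                         # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grades = coeff.mean(axis=1)                        # grey relational grade per run
print("grades:", np.round(grades, 3), "best run:", int(np.argmax(grades)) + 1)
```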
Procedia PDF Downloads 82
15547 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities in class samples. In coral reef image classification, texture features are extracted using the proposed method called local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts the edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. The process of integrating edge responses along with the local binary pattern achieves a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) method with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
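A minimal sketch of the basic 3×3 local binary pattern computation that the LDEDBP descriptor builds on; it does not implement the directional or derivative encoding of the proposed method.

```python
import numpy as np

# Basic 3x3 local binary pattern (LBP) over a grayscale image; only the plain
# LBP step is sketched here, not the full LDEDBP descriptor.
def lbp(image):
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    # 8 neighbours in a fixed clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes += (neighbour >= center).astype(int) * (1 << bit)
    return codes

rng = np.random.default_rng(3)
patch = rng.integers(0, 256, (8, 8))
hist, _ = np.histogram(lbp(patch), bins=256, range=(0, 256))  # texture feature vector
print(hist.sum())   # (8-2)*(8-2) = 36 coded pixels
```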
Procedia PDF Downloads 163
15546 Model Based Optimization of Workplace Ergonomics by Workpiece and Resource Positioning
Authors: Edward Hage, Pieter Lietaert, Gabriel Abedrabbo
Abstract:
Musculoskeletal disorders are an important category of work-related diseases. They are often caused by working in non-ergonomic postures and are preventable with proper workplace design, possibly including human-machine collaboration. This paper presents a methodology and a supporting software prototype to design a simple assembly cell with minimal ergonomic risk. The methodology helps to determine the optimal position and orientation of workpieces and workplace resources for specific operator assembly actions. The methodology is tested on an industrial use case: a collaborative robot (cobot) assisted assembly of a clamping device. It is shown that the automated methodology results in a workplace design with significantly reduced ergonomic risk to the operator compared to a manual design of the cell.
Keywords: ergonomics optimization, design for ergonomics, workplace design, pose generation
Procedia PDF Downloads 122
15545 Optimization of Hepatitis B Surface Antigen Purifications to Improving the Production of Hepatitis B Vaccines on Pichia pastoris
Authors: Rizky Kusuma Cahyani
Abstract:
Hepatitis B is a liver inflammatory disease caused by the hepatitis B virus (HBV). This infection can be prevented by vaccination, which contains the HBV surface protein (sHBsAg). However, vaccine supply is limited. Several attempts have been made to produce local sHBsAg; however, the degree of purity and the protein yield are still inadequate. Therefore, optimization of the HBsAg purification steps is required to obtain a high yield with better purification fold. In this study, optimization of purification was done in two steps: precipitation using variations of NaCl concentration (0.3 M; 0.5 M; 0.7 M) and PEG (3%, 5%, 7%), and ion exchange chromatography (IEC) using an NaCl 300-500 mM elution buffer concentration. To determine HBsAg protein, the bicinchoninic acid assay (BCA) and enzyme-linked immunosorbent assay (ELISA) were used in this study. Visualization of the HBsAg protein was done by SDS-PAGE analysis. Based on quantitative analysis, the optimal condition at the precipitation step was 0.3 M NaCl and 3% PEG, while in the ion exchange chromatography step, the optimum condition was elution with 500 mM NaCl. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis indicates the presence of HBsAg protein with molecular weights of 25 kDa (monomer) and 50 kDa (dimer). The optimum condition for purification of sHBsAg produced in Pichia pastoris gave a yield of 47% and a purification fold of 17x, which should make the production of hepatitis B vaccine more optimal.
Keywords: hepatitis B virus, HBsAg, hepatitis B surface antigen, Pichia pastoris, purification
Procedia PDF Downloads 151
15544 Determination of the Minimum Time and the Optimal Trajectory of a Moving Robot Using Picard's Method
Authors: Abbes Lounis, Kahina Louadj, Mohamed Aidene
Abstract:
This paper presents an optimal control problem applied to a robot; the problem is to determine a command which makes it possible to reach a final state from a given initial state in minimum time. The approach followed to solve this optimization problem with constraints on the control starts by presenting the equations of motion of the dynamic system, then applies Pontryagin's maximum principle (PMP) to determine the optimal control, and finally uses Picard's successive approximation method combined with the shooting method to solve the resulting differential system.
Keywords: robotics, Pontryagin's Maximum Principle, PMP, Picard's method, shooting method, non-linear differential systems
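A minimal sketch of Picard's successive approximation on a simple scalar ODE; the dynamics is an illustrative stand-in for the state/costate system produced by the maximum principle.

```python
import numpy as np

# Picard successive approximations for x'(t) = f(t, x), x(0) = x0, on a grid.
def picard(f, x0, t, iterations=20):
    x = np.full_like(t, x0)                 # initial guess: constant x0
    for _ in range(iterations):
        integrand = f(t, x)
        # x_{k+1}(t) = x0 + integral_0^t f(s, x_k(s)) ds  (cumulative trapezoid)
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        x = x0 + integral
    return x

t = np.linspace(0.0, 1.0, 201)
x = picard(lambda t, x: -2 * x + 1, x0=0.0, t=t)   # toy dynamics, not the robot model
exact = 0.5 * (1 - np.exp(-2 * t))
print("max error:", float(np.max(np.abs(x - exact))))   # small after a few iterations
```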
Procedia PDF Downloads 253
15543 Simulation of Wet Scrubbers for Flue Gas Desulfurization
Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra
Abstract:
Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model, where all three main contributions are taken into account and resolved using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy, and momentum, which affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer, and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have all been resolved using numerous sub-models as well. The continuous gas phase has been coupled with the liquid phases using source terms, in an approach where the two software packages are coupled using a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm has been used to tune numerous model parameters to match the experimental results. The CFD model needed to be fast to evaluate in order to apply this optimization routine, where approximately 1000 simulations were needed. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, where the data and source term exchange increases the computational requirements by approximately 5%. This allows for exploiting the benefits of both software programs.
Keywords: desulfurization, discrete phase, scrubber, wall film
Procedia PDF Downloads 260
15542 Chemical Oxygen Demand Fractionation of Primary Wastewater Effluent for Process Optimization and Modelling
Authors: Thandeka Y. S. Jwara, Paul Musonge
Abstract:
Traditionally, the complexity associated with implementing and controlling biological nutrient removal (BNR) in wastewater works (WWW) has been primarily in terms of balancing competing requirements for nitrogen and phosphorus removal, particularly with respect to the use of influent chemical oxygen demand (COD) as a carbon source for the microorganisms. Successful BNR optimization and modelling using WEST (Worldwide Engine for Simulation and Training) depend largely on the accurate fractionation of the influent COD. The different COD fractions have differing effects on the BNR process, and therefore, the influent characteristics need to be well understood. This study presents the fractionation results of primary wastewater effluent COD at one of South Africa's wastewater works treating 65 ML/day of mixed industrial and domestic effluent. The method used for COD fractionation was the oxygen uptake rate/respirometry method. The breakdown of the results of the analysis is as follows: 70.5% biodegradable COD (bCOD) and 29.5% non-biodegradable COD (iCOD) in terms of the total COD. Further fractionation led to a readily biodegradable soluble fraction (SS) of 75%, a slowly degradable particulate fraction (XS) of 24%, a particulate non-biodegradable fraction (XI) of 50.8%, and a non-biodegradable soluble fraction (SI) of 49.2%. The fractionation results demonstrate that the primary effluent has good COD characteristics, as shown by the high level of the bCOD fraction, with SS being higher than XS. This means that the microorganisms have sufficient substrate for the BNR process and that these components can now serve as inputs to the WEST model for the plant under study.
Keywords: chemical oxygen demand, COD fractionation, wastewater modelling, wastewater optimization
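A short worked example applying the reported percentage split to an assumed total COD value (the abstract gives only percentages, so the 600 mg/L figure below is purely illustrative):

```python
# Worked example of the reported COD fractionation applied to an assumed total
# COD of 600 mg/L (the abstract reports only percentages, not the absolute value).
total_cod = 600.0                      # mg/L, illustrative assumption

b_cod = 0.705 * total_cod              # biodegradable fraction of total COD
i_cod = 0.295 * total_cod              # non-biodegradable (inert) fraction

ss = 0.75 * b_cod                      # readily biodegradable soluble
xs = 0.24 * b_cod                      # slowly biodegradable particulate
xi = 0.508 * i_cod                     # non-biodegradable particulate
si = 0.492 * i_cod                     # non-biodegradable soluble

for name, value in [("bCOD", b_cod), ("iCOD", i_cod),
                    ("SS", ss), ("XS", xs), ("XI", xi), ("SI", si)]:
    print(f"{name}: {value:.1f} mg/L")
```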
Procedia PDF Downloads 142
15541 Observation of Inverse Blech Length Effect during Electromigration of Cu Thin Film
Authors: Nalla Somaiah, Praveen Kumar
Abstract:
Scaling of transistors and, hence, interconnects is very important for the enhanced performance of microelectronic devices. Scaling of devices creates significant complexity, especially in multilevel interconnect architectures, wherein current crowding occurs at the corners of interconnects. Such current crowding creates hot-spots at the respective corners, resulting in a non-uniform temperature distribution in the interconnect as well. This non-uniform temperature distribution, which is exacerbated with continued scaling of devices, creates a temperature gradient in the interconnect. In particular, the increased current density at corners and the associated temperature rise due to Joule heating accelerate electromigration-induced failures in interconnects, especially at corners. This has been the classic reliability issue associated with metallic interconnects. Herein, it is generally understood that electromigration-induced damage can be avoided if the length of the interconnect is smaller than a critical length, often termed the Blech length. Interestingly, the effect of the non-negligible temperature gradients generated at these corners, in terms of thermomigration and electromigration-thermomigration coupling, has not attracted enough attention. Accordingly, in this work, the interplay between electromigration and temperature-gradient-induced mass transport was studied using the standard Blech structure. In this particular sample structure, the majority of the current is forcefully directed into the low-resistivity metallic film from a high-resistivity underlayer film, resulting in current crowding at the edges of the metallic film. In this study, a 150 nm thick Cu metallic film was deposited on a 30 nm thick W underlayer film in the configuration of the Blech structure. A series of Cu thin strips, with lengths of 10, 20, 50, 100, 150, and 200 μm, were fabricated. A current density of ≈ 4 × 10¹⁰ A/m² was passed through the Cu and W films at a temperature of 250ºC. Herein, along with the expected forward migration of Cu atoms from the cathode to the anode at the cathode end of the Cu film, backward migration from the anode towards the center of the Cu film was also observed. Interestingly, smaller length samples consistently showed enhanced migration at the cathode end, thus indicating the existence of an inverse Blech length effect in the presence of a temperature gradient. A finite element based model showing the interplay between electromigration and thermomigration driving forces has been developed to explain this observation.
Keywords: Blech structure, electromigration, temperature gradient, thin films
Procedia PDF Downloads 253
15540 Using a Strength Based Approach to Teaching Children with Special Needs
Authors: Eunice Tan
Abstract:
The purpose of this presentation is to look at an alternative to the approach and methodologies of working with a child with special needs. The strength-based approach to education embodies a paradigm shift. It is a strategy to move away from a deficit-based methodology which inadvertently may lead to an extensive list of things that the child cannot do or is unable to do. Today, many parents of individuals with special needs are focused on the individual’s deficits rather than on his or her strengths. Even when parents recognise and identify their child’s savant strengths to be valuable and wish to develop their abilities, they face the challenge that there are insufficient programs committed to supporting the development and improvement of such abilities. What is a strength-based approach in education? A strength-based approach in education focuses on students' positive qualities and contributions to class instead of the skills and abilities they may not have. Many schools are focused on the child’s special educational needs rather than the whole child. Parents interviewed have said that they have to engage external tutors to help hone in on their child’s interests and strengths. The strength-based approach to writing statements encourages educators to find out:
• What a child can do
• What a child can do when he or she is given educational support
Learning more about children with special needs and their strengths and talents will broaden our understanding of how we can help them with language acquisition, social skills, as well as self-help and independence skills.
Keywords: special needs, strengths, and talents, alternative educational approach, strength based approach
Procedia PDF Downloads 287
15539 Matrix Completion with Heterogeneous Cost
Authors: Ilqar Ramazanli
Abstract:
The matrix completion problem has been studied broadly under many underlying conditions. The problem has been explored under adaptive or non-adaptive, exact or estimation, single-phase or multi-phase, and many other categories. In most of these cases, the observation cost of each entry is uniform and is the same across the columns. However, in many real-life scenarios, we could expect elements from distinct columns or distinct positions to have different costs. In this paper, we explore this generalization under adaptive conditions. We approach the problem under two different cost models. The first one is that entries from different columns have different observation costs, but within the same column, each entry has a uniform cost. The second one is that any two entries may have different observation costs, whether they lie in the same column or in different columns. We provide a complexity analysis of our algorithms together with tightness guarantees.
Keywords: matroid optimization, matrix completion, linear algebra, algorithms
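As a toy illustration of cost-aware adaptive sampling (not the algorithms analyzed in the paper), the sketch below completes an exactly rank-1 matrix by observing the cheapest full column and then one affordable entry per remaining column.

```python
import numpy as np

# Toy cost-aware adaptive sampling for an exactly rank-1 matrix M = u v^T:
# observe the cheapest full column to learn the column space, then one affordable
# entry per remaining column to fix its scale.
rng = np.random.default_rng(4)
m, n = 6, 5
M = rng.normal(size=(m, 1)) @ rng.normal(size=(1, n))   # ground-truth rank-1 matrix
cost = rng.uniform(1, 10, size=(m, n))                   # heterogeneous observation cost

j0 = int(np.argmin(cost.sum(axis=0)))                    # cheapest column to observe fully
col = M[:, j0].copy()
total_cost = cost[:, j0].sum()

M_hat = np.zeros_like(M)
M_hat[:, j0] = col
for j in range(n):
    if j == j0:
        continue
    usable = np.where(np.abs(col) > 1e-8)[0]             # rows giving a stable ratio
    i = usable[np.argmin(cost[usable, j])]                # cheapest usable entry in column j
    total_cost += cost[i, j]
    M_hat[:, j] = col * (M[i, j] / col[i])                # rescale the known column

print("max error:", float(np.max(np.abs(M_hat - M))), "cost:", round(float(total_cost), 2))
```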
Procedia PDF Downloads 108
15538 Optimal Production and Maintenance Policy for a Partially Observable Production System with Stochastic Demand
Authors: Leila Jafari, Viliam Makis
Abstract:
In this paper, the joint optimization of the economic manufacturing quantity (EMQ), safety stock level, and condition-based maintenance (CBM) is presented for a partially observable, deteriorating system subject to random failure. The demand is stochastic and it is described by a Poisson process. The stochastic model is developed and the optimization problem is formulated in the semi-Markov decision process framework. A modification of the policy iteration algorithm is developed to find the optimal policy. A numerical example is presented to compare the optimal policy with the policy considering zero safety stock.
Keywords: condition-based maintenance, economic manufacturing quantity, safety stock, stochastic demand
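A minimal sketch of standard policy iteration on a small discrete-time deterioration/maintenance MDP; the paper's model is semi-Markov with partial observability, so the states, costs, and transition probabilities below are illustrative assumptions only.

```python
import numpy as np

# Policy iteration on a small deterioration/maintenance MDP (illustrative data).
states, actions = 3, 2                    # deterioration levels; action 0 = produce, 1 = maintain
gamma = 0.95                              # discount factor

# P[a, s, s'] transition probabilities and immediate costs c[s, a]
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]],   # keep producing, system degrades
    [[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],   # maintain -> as good as new
])
c = np.array([[1.0, 6.0], [3.0, 6.0], [20.0, 6.0]])

policy = np.zeros(states, dtype=int)
for _ in range(50):
    # policy evaluation: solve (I - gamma * P_pi) V = c_pi
    P_pi = P[policy, np.arange(states)]
    c_pi = c[np.arange(states), policy]
    V = np.linalg.solve(np.eye(states) - gamma * P_pi, c_pi)
    # policy improvement: greedy action per state (cost minimization)
    Q = c.T + gamma * (P @ V)             # shape (actions, states)
    new_policy = np.argmin(Q, axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy (0=produce, 1=maintain):", policy)
```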
Procedia PDF Downloads 461
15537 Optimization of Multistage Extractor for the Butanol Separation from Aqueous Solution Using Ionic Liquids
Authors: Dharamashi Rabari, Anand Patel
Abstract:
n-Butanol can be regarded as a potential biofuel. Being resistant to corrosion and having a high calorific value, butanol is a very attractive energy source as opposed to ethanol. Bio-butanol can be produced by a fermentation process called ABE (acetone, butanol, ethanol), carried out mostly by the bacterium Clostridium acetobutylicum. The major drawback of the process is that butanol concentrations higher than 10 g/L delay the growth of the microbes, resulting in a low yield. This motivates the simultaneous separation of butanol from the fermentation broth. Two hydrophobic ionic liquids (ILs), 1-butyl-1-methylpiperidinium bis (trifluoromethylsulfonyl)imide [bmPIP][Tf₂N] and 1-hexyl-3-methylimidazolium bis (trifluoromethylsulfonyl)imide [hmim][Tf₂N], were chosen. The binary interaction parameters for both ternary systems, i.e. [bmPIP][Tf₂N] + water + n-butanol and [hmim][Tf₂N] + water + n-butanol, were taken from the literature and had been generated with the NRTL model. Particle swarm optimization (PSO) with the isothermal sum rate (ISR) method was used to optimize the cost of the liquid-liquid extractor. For the [hmim][Tf₂N] + water + n-butanol system, PSO shows an 84% success rate with the number of stages equal to eight and a solvent flow rate equal to 461 kmol/hr. The number of stages was three with a 269.95 kmol/hr solvent flow rate for the [bmPIP][Tf₂N] + water + n-butanol system. Moreover, both ILs were very efficient, as the loss of ILs in the raffinate phase was negligible.
Keywords: particle swarm optimization, isothermal sum rate method, success rate, extraction
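A minimal sketch of particle swarm optimization over two design variables (stages, solvent flow); the quadratic stand-in objective replaces the actual extractor cost evaluated via the isothermal sum rate method.

```python
import numpy as np

# Minimal particle swarm optimization; the real objective (extractor cost via the
# isothermal sum rate method) is replaced by an illustrative quadratic stand-in.
rng = np.random.default_rng(5)

def cost(x):                               # x = [number of stages, solvent flow rate]
    return (x[..., 0] - 3) ** 2 + 0.01 * (x[..., 1] - 270) ** 2

n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
lo, hi = np.array([1.0, 50.0]), np.array([20.0, 1000.0])
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best design:", np.round(gbest, 2), "cost:", round(float(cost(gbest)), 4))
```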
Procedia PDF Downloads 121
15536 Sensitivity Analysis Optimization of a Horizontal Axis Wind Turbine from Its Aerodynamic Profiles
Authors: Kevin Molina, Daniel Ortega, Manuel Martinez, Andres Gonzalez-Estrada, William Pinto
Abstract:
Due to increasing environmental impact concerns, wind energy is growing strongly. This research studied the relationship between the power produced by a horizontal axis wind turbine (HAWT) and the aerodynamic profiles used for its construction. The analysis is carried out using Computational Fluid Dynamics (CFD), presenting a parallel between the energy generated by a turbine designed with the selected profiles and another one designed with optimized profiles. For the study, a selection process was carried out among the NACA 6-digit profiles recommended by the National Renewable Energy Laboratory (NREL) for the construction of this type of turbine. The selection took into account different characteristics of the wind (speed and density) and of the profiles (aerodynamic coefficients Cl and Cd at different Reynolds numbers and incidence angles). From the selected profiles, a sensitivity analysis optimization process was carried out between the blade geometry and the aerodynamic forces induced on it. The 3D model of the turbines was realized using the Blade Element Momentum (BEM) method and both sets of profiles. The flow fields on the turbines were simulated, obtaining the forces induced on the blade, the torques produced, and an increase of 3% in power due to the optimized profiles. Therefore, the results show that the sensitivity analysis optimization process can help increase the wind turbine power.
Keywords: blade element momentum, blade, fluid structure interaction, horizontal axis wind turbine, profile design
Procedia PDF Downloads 257
15535 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians in understanding the pathophysiological mechanisms, in diagnosis and prognosis, and in choosing treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
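A minimal sketch of the clustering step on a synthetic gene-expression matrix using K-means and a silhouette check; the data and cluster count are assumptions, not the datasets analyzed in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Minimal clustering of a synthetic gene-expression matrix (samples x genes);
# the data is a random stand-in, not an actual microarray dataset.
rng = np.random.default_rng(6)
healthy = rng.normal(0.0, 1.0, (30, 200))
disease = rng.normal(1.5, 1.0, (30, 200))      # shifted expression profile
X = np.vstack([healthy, disease])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
print("silhouette:", round(silhouette_score(X, kmeans.labels_), 3))
```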
Procedia PDF Downloads 323
15534 Self Determination Theory and Trauma Informed Approach in Women's Shelters: A Common Ground
Authors: Gamze Dogan Birer
Abstract:
Women’s shelters provide services to women who have been subjected to physical, psychological, economic, and sexual violence. It is proposed that adopting a trauma-informed approach in these shelters would contribute to the ‘woman-defined’ success of the service. This includes reshaping the physical qualities of the shelter, and the contacts and interventions that women face during their stay, in a way that accepts and addresses their traumatic experiences. It is stated in this paper that the trauma-informed approach has commonalities with the basic psychological needs proposed by self-determination theory. Therefore, it is proposed that self-determination theory can be used as a theoretical background for the trauma-informed approach.
Keywords: self determination theory, trauma informed approach, violence against women, women's shelters
Procedia PDF Downloads 158
15533 Revalidation and Harmonization of Existing IFCC Standardized Hepatic, Cardiac, and Thyroid Function Tests by Precision Optimization and External Quality Assurance Programs
Authors: Junaid Mahmood Alam
Abstract:
Revalidating and harmonizing clinical chemistry analytical principles and optimizing methods through quality control programs and assessments is the preeminent means to attain optimal outcomes within clinical laboratory services. The present study reports revalidation of our existing IFCC-standardized analytical methods, particularly hepatic and thyroid function tests, by optimization of precision analyses and processing through external and internal quality assessments and regression determination. Parametric components of hepatic (bilirubin, ALT, γGT, ALP), cardiac (LDH, AST, Trop I), and thyroid/pituitary (T3, T4, TSH, FT3, FT4) function tests were used to validate analytical techniques on automated chemistry and immunological analyzers, namely Hitachi 912, Cobas 6000 e601, Cobas c501, and Cobas e411, with UV kinetic and colorimetric dry chemistry principles and electro-chemiluminescence immunoassay (ECLi) techniques. The process of validation and revalidation was completed by evaluating and assessing the precision-analyzed Preci-control data of the various instruments, plotted against each other with regression analysis (R²). Results showed that revalidation and optimization of the respective parameters, accredited through CAP, CLSI, and NEQAPP assessments, depicted 99.0% to 99.8% optimization, in addition to the methodology and instruments used for analyses. The regression R² for BilT was 0.996, whereas ALT, ALP, γGT, LDH, AST, Trop I, T3, T4, TSH, FT3, and FT4 exhibited R² values of 0.998, 0.997, 0.993, 0.967, 0.970, 0.980, 0.976, 0.996, 0.997, 0.997, and 0.990, respectively. This confirmed marked harmonization of analytical methods and instrumentation, thus revalidating optimized precision standardization as per IFCC-recommended guidelines. It is concluded that the practice of revalidating and harmonizing existing or any new services should be followed by all clinical laboratories, especially those associated with tertiary care hospitals. This will ensure the delivery of standardized, proficiency-tested, optimized services for prompt and better patient care, guaranteeing maximum patient confidence.
Keywords: revalidation, standardized, IFCC, CAP, harmonized
Procedia PDF Downloads 268
15532 Optimization of Air Pollution Control Model for Mining
Authors: Zunaira Asif, Zhi Chen
Abstract:
Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy by developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly focus on two factors: the production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open pit metal mine in Utah, USA. The method simultaneously uses meteorological data as a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained to select the optimized treatment technology for PM2.5, PM10, NOx, and SO2. Additional comparison analysis shows that the baghouse is the least-cost option compared to the electrostatic precipitator and wet scrubbers for particulate matter, whereas non-selective catalytic reduction and dry flue gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
Keywords: air pollution, linear programming, mining, optimization, treatment technologies
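A minimal sketch of the kind of cost-minimizing linear program described, allocating particulate removal across candidate devices; the costs, capacities, and removal target are illustrative assumptions, not the case-study values.

```python
import numpy as np
from scipy.optimize import linprog

# Toy cost-minimization LP: choose how many tonnes of PM each device removes per
# year. Costs, capacities and the removal target are illustrative assumptions.
cost_per_tonne = np.array([120.0, 200.0, 260.0])     # baghouse, ESP, wet scrubber
capacity = np.array([800.0, 600.0, 500.0])           # max tonnes/year per device
required_removal = 1000.0                             # tonnes/year to meet the standard

# minimize c @ x  subject to  -sum(x) <= -required_removal,  0 <= x_i <= capacity_i
res = linprog(
    c=cost_per_tonne,
    A_ub=np.array([[-1.0, -1.0, -1.0]]),
    b_ub=np.array([-required_removal]),
    bounds=list(zip(np.zeros(3), capacity)),
    method="highs",
)
print("tonnes per device:", np.round(res.x, 1), "total cost:", round(res.fun, 0))
```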
Procedia PDF Downloads 206
15531 A User Interface for Easiest Way Image Encryption with Chaos
Authors: D. López-Mancilla, J. M. Roblero-Villa
Abstract:
Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 90's as a new approach for signal encoding that differs from the conventional methods, which use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need for security in real-time image transmission over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of a low level of efficiency when the image is large. Encryption based on chaos offers a new and efficient way to obtain fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the vector image can be hidden within the chaotic vector. Once this is done, the vector is reshaped back into an array with the original dimensions of the image. A statistical analysis of the encryption security is made, and an optimization stage is used to improve the security of the image encryption while, at the same time, allowing the image to be accurately recovered. The user interface uses the algorithms designed for the encryption of images, allowing the user to read an image from the hard drive or another external device. The user interface encrypts the image, allowing three modes of encryption. These modes are given by three different chaotic systems that the user can choose. Once the image is encrypted, it is possible to view the security analysis and save the result to the hard disk. The main results of this study show that this simple method of encryption, using the optimization stage, provides encryption security that is competitive with the complicated encryption methods used in other works. In addition, the user interface allows encrypting an image with chaos and submitting it through any public communication channel, including the internet.
Keywords: image encryption, chaos, secure communications, user interface
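A minimal sketch of the reshape-and-combine idea using a logistic-map keystream XORed with the flattened image; this is an illustrative scheme only and omits the authors' chaotic systems and optimization stage.

```python
import numpy as np

# Minimal chaos-based image encryption: a logistic-map keystream XORed with the
# flattened (vectorized) image, then reshaped back to the original dimensions.
def logistic_keystream(length, x0=0.3141592, r=3.99, burn_in=1000):
    x, out = x0, np.empty(length)
    for i in range(burn_in + length):
        x = r * x * (1.0 - x)              # chaotic logistic map iteration
        if i >= burn_in:
            out[i - burn_in] = x
    return (out * 256).astype(np.uint8)    # quantize to a byte keystream

rng = np.random.default_rng(7)
image = rng.integers(0, 256, (32, 32), dtype=np.uint8)   # stand-in grayscale image

key = logistic_keystream(image.size)
cipher = (image.reshape(-1) ^ key).reshape(image.shape)      # encrypt
recovered = (cipher.reshape(-1) ^ key).reshape(image.shape)  # decrypt with same key
print("exact recovery:", bool(np.all(recovered == image)))
```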
Procedia PDF Downloads 489
15530 Investigation and Optimization of DNA Isolation Efficiency Using Ferrite-Based Magnetic Nanoparticles
Authors: Tímea Gerzsenyi, Ágnes M. Ilosvai, László Vanyorek, Emma Szőri-Dorogházi
Abstract:
DNA isolation is a crucial step in many molecular biological applications for diagnostic and research purposes. However, traditional extraction requires toxic reagents, and commercially available kits are expensive, which has led to the recently widespread method of magnetic nanoparticle (MNP)-based DNA isolation. Different ferrite-containing MNPs were examined and compared in terms of their plasmid DNA isolation efficiency. Among the tested MNPs, one had never been used for the extraction of plasmid molecules, marking a distinct application. The pDNA isolation process was optimized for each type of nanoparticle, and the best protocol was selected based on different criteria: DNA quantity, quality, and integrity. With the best-performing magnetic nanoparticle, which excelled in all aspects, further tests were performed to recover genomic DNA from bacterial cells, and a protocol was developed.
Keywords: DNA isolation, nanobiotechnology, magnetic nanoparticles, protocol optimization, pDNA, gDNA
Procedia PDF Downloads 4