Search results for: optimal parameters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3001

2371 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices, relative to passive viscous damping, in seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is evaluated against passive linear viscous damping and an optimal non-causal semi-active control strategy. The control strategy requires optimization; Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dash-pot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performances of the passive and semi-active control cases, frequency-dependent acceleration, velocity, and displacement response transmissibility ratios Ta(ω), Tv(ω), and Td(ω) are defined. To fully investigate the behavior of the structure subjected to sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control exhibited better structural performance than the passive bridge deck case.
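
To make the transmissibility idea concrete, here is a minimal sketch (not the authors' code) evaluating the classical displacement transmissibility of a base-excited single-degree-of-freedom system with linear viscous damping; the formula and the damping ratios scanned are standard textbook assumptions, not values from the paper.

```python
import numpy as np

def displacement_transmissibility(r, zeta):
    """Classical Td for a base-excited SDOF system with viscous damping.
    r = omega / omega_n (frequency ratio), zeta = damping ratio."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

r = np.linspace(0.0, 3.0, 301)        # frequency ratios to scan
for zeta in (0.05, 0.20, 0.50):       # assumed damping levels
    Td = displacement_transmissibility(r, zeta)
    print(f"zeta={zeta:.2f}: peak Td={Td.max():.2f} at r={r[Td.argmax()]:.2f}")
```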

Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping

Procedia PDF Downloads 228
2370 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocation, and facility configuration to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to these. To address this problem, three major tasks should be undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties involved in finding the optimal solution, the integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) engine within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.
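
As a rough illustration of the kind of genetic-algorithm search described above (not the authors' GASG method), the sketch below evolves an assignment of consumers to candidate distribution centres to minimize a toy total cost; all cost figures and GA settings are invented for the example.

```python
import random

random.seed(1)
N_CONSUMERS, N_DCS = 12, 3
open_cost = [100, 120, 90]                      # hypothetical DC opening costs
transport = [[random.randint(5, 40) for _ in range(N_DCS)]
             for _ in range(N_CONSUMERS)]       # hypothetical unit transport costs

def total_cost(assign):
    used = set(assign)                          # DCs actually opened
    return sum(open_cost[d] for d in used) + \
           sum(transport[i][d] for i, d in enumerate(assign))

def mutate(assign, rate=0.1):
    return [random.randrange(N_DCS) if random.random() < rate else d for d in assign]

def crossover(a, b):
    cut = random.randrange(1, N_CONSUMERS)      # one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_DCS) for _ in range(N_CONSUMERS)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=total_cost)
    elite = pop[:10]                            # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = min(pop, key=total_cost)
print("best cost:", total_cost(best), "assignment:", best)
```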

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 300
2369 Urban Neighborhood Center Location Evaluation Method Based on UNA GIS Spatial Analysis Tools: The Case of the Kerman Neighborhood in Tehran

Authors: Sepideh Jabbari Behnam, Shadabeh Gashtasbi Iraei, Elnaz Mohsenin, MohammadAli Aghajani

Abstract:

Urban neighborhoods, as important urban forming cells, play a key role in creating urban texture and an integrated form. Nowadays, most neighborhood divisions are based on urban management systems, without considering social issues and other aspects of urban life. This can cause problems such as inappropriate services for city dwellers, the loss of local identity, etc. Regenerating such neighborhoods therefore requires locating neighborhood centers with appropriate access and services for all residents. The main objective of this article is to locate neighborhood centers in a way that addresses most issues relating to physical features (such as the form of the access network and texture permeability) and other qualities (such as land uses, densities, and social and economic features) simultaneously. This paper uses methods of spatial analysis to survey the spatial structure and space syntax of urban textures with an urban network analysis system. This is done with the GIS toolbox UNA (Urban Network Analysis) and its five functions (Reach, Betweenness, Gravity, Closeness, and Straightness), which were written according to space syntax theory and offer the corresponding outputs. On this basis, the paper locates and evaluates the optimal locations of neighborhood centers in order to create local centers, by weighting each of these functions and taking spatial features into account.
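
For readers who want to experiment with comparable measures outside the UNA toolbox, a minimal sketch using networkx approximates two of the five functions (Closeness and Betweenness) on a toy street network; Reach, Gravity, and Straightness are UNA-specific and are not reproduced here, and the 0.5/0.5 weighting is purely illustrative.

```python
import networkx as nx

# Toy street network: nodes are junctions, edge weights are segment lengths (m).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 80), ("C", "D", 150),
    ("B", "E", 60), ("E", "D", 90), ("A", "E", 200),
])

closeness = nx.closeness_centrality(G, distance="weight")
betweenness = nx.betweenness_centrality(G, weight="weight", normalized=True)

# A candidate neighborhood-center score as an assumed weighted combination.
score = {n: 0.5 * closeness[n] + 0.5 * betweenness[n] for n in G}
print("candidate center:", max(score, key=score.get), score)
```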

Keywords: evaluating optimal location, local centers, location of neighborhood centers, spatial analysis, urban network

Procedia PDF Downloads 445
2368 Integrating Explicit Instruction and Problem-Solving Approaches for Efficient Learning

Authors: Slava Kalyuga

Abstract:

There are two major opposing points of view on the optimal degree of initial instructional guidance, usually discussed in the literature by the advocates of the corresponding learning approaches. Using unguided or minimally guided problem-solving tasks prior to explicit instruction has been suggested by productive failure and several other instructional theories, whereas an alternative approach, using fully guided worked examples followed by problem solving, has been demonstrated to be the most effective strategy within the framework of cognitive load theory. An integrated approach discussed in this paper could combine the above frameworks within a broader theoretical perspective that would allow bringing together their best features and advantages in the design of learning tasks for STEM education. This paper presents a systematic review of the available empirical studies comparing the above alternative sequences of instructional methods, in order to explore the effects of several possible moderating factors. The paper concludes that different approaches and instructional sequences should coexist within complex learning environments. Selecting optimal sequences depends on such factors as the specific goals of learner activities, the types of knowledge to learn, levels of element interactivity (task complexity), and levels of learner prior knowledge. This paper offers an outline of a theoretical framework for the design of complex learning tasks in STEM education that would integrate explicit instruction and inquiry (exploratory, discovery) learning approaches in ways that depend on a set of defined specific factors.

Keywords: cognitive load, explicit instruction, exploratory learning, worked examples

Procedia PDF Downloads 104
2367 Multidimensional Modeling of Solidification Process of Multi-Crystalline Silicon under Magnetic Field for Solar Cell Technology

Authors: Mouhamadou Diop, Mohamed I. Hassan

Abstract:

Molten metal flow in metallurgical plants is highly turbulent and exhibits complex coupling with heat transfer, phase change, chemical reaction, momentum transport, etc. Molten silicon flow has a significant effect on the directional solidification of multicrystalline silicon, as it affects the temperature field and the emerging crystallization interface, as well as the transport of species and impurities during the casting process. Owing to the complexity and the limits of reliable measuring techniques, computational models of fluid flow are useful tools for studying and quantifying these problems. The overall objective of this study is to investigate the potential of a traveling magnetic field for efficient operating control of the molten metal flow. A multidimensional numerical model is developed for the calculation of the Lorentz force, the molten metal flow, and the related phenomena, and is applied to a laboratory-scale silicon crystallization furnace. The model is used to study the effects of the magnetic force applied to the molten flow and their interdependencies. Coupled and decoupled, steady and unsteady models of the molten flow and the crystallization interface are compared. This study allows us to retrieve the optimal traveling magnetic field parameter range for crystallization furnaces and the optimal numerical simulation strategy for industrial application.

Keywords: multidimensional, numerical simulation, solidification, multicrystalline, traveling magnetic field

Procedia PDF Downloads 225
2366 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region

Authors: Ahmed O. Apampa, Yinusa A. Jimoh

Abstract:

The potential economic viability and environmental benefits of using a biomass waste, such as corn cob ash (CCA), as a pozzolan in stabilizing soils for road pavement construction in a sub-tropical region were investigated. Corn cob was obtained from Maya in South West Nigeria and processed into ash with characteristics similar to the Class C fly ash pozzolan specified in ASTM C618-12. This was then blended with ordinary Portland cement (OPC) in CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with a lateritic soil of AASHTO classification A-2-6(3) in varying percentages from 0 to 7.5% at 1.5% intervals. The soil-CCA-cement mixtures were thereafter subjected to geotechnical index tests, including BS Proctor compaction, California Bearing Ratio (CBR), and the unconfined compression strength test. The tests were repeated for the soil-cement mix without any CCA blending. The cost of the binder inputs and the optimal CCA:OPC blends in the stabilized soil were thereafter analyzed by developing algorithms that relate the experimental strength parameters (unconfined compression strength, UCS, and California Bearing Ratio, CBR) to the bivariate independent variables of CCA and OPC content, using Matlab R2011b. An optimization problem was then set up to minimize the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of minimum strength specifications. The Evolutionary engine as well as the Generalized Reduced Gradient option of the Solver in MS Excel 2010 were used separately on the cells to obtain the optimal CCA:OPC blend. The optimal blend attaining the required strength of 1800 kN/m² was determined for the 1:2 CCA:OPC as a 5.4% mix (OPC content 3.6%), compared with 4.2% for the OPC-only option, and as a 6.2% mix for the 1:1 blend (OPC content 3%). The 2:1 blend did not attain the required strength, though a gain of over 100% in UCS value over the control sample with 0% binder was obtained. Given that 0.97 tonne of CO2 is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement to attain the same result indicates the possibility of reducing the net CO2 contribution of the construction industry to the environment by 14 to 28.5% if CCA:OPC blends are widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with an abundant stock of biomass waste should look toward intensifying the use of biomass waste as fuel and of the derived ash for the production of pozzolans for road-works, thereby reducing overall greenhouse gas emissions, in compliance with the objectives of the United Nations Framework Convention on Climate Change.
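
The reported 14-28.5% CO2 reduction can be reproduced directly from the figures in the abstract; the short sketch below redoes that arithmetic (0.97 t CO2 per tonne of cement, OPC contents of 4.2% for the cement-only mix versus 3.6% and 3.0% for the blends).

```python
CO2_PER_T_CEMENT = 0.97   # tonnes CO2 per tonne of OPC (OEE, 2001)
opc_only = 4.2            # % OPC in the cement-only mix
blends = {"1:2 CCA:OPC": 3.6, "1:1 CCA:OPC": 3.0}  # % OPC in each blend

for name, opc in blends.items():
    reduction = (opc_only - opc) / opc_only * 100.0
    print(f"{name}: OPC cut from {opc_only}% to {opc}% "
          f"-> CO2 reduction {reduction:.1f}%")
# Per tonne of stabilized soil, cement emissions scale as (OPC%/100) * 0.97 t CO2.
```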

Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission

Procedia PDF Downloads 363
2365 Efficient Implementation of a Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids

Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao

Abstract:

An easy-to-implement and robust finite volume multi-resolution Weighted Essentially Non-Oscillatory (WENO) scheme on adaptive Cartesian grids is proposed in this paper. The multi-resolution WENO scheme is combined with the ghost-cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeated solution of the matrices generated by a least-squares method, or the calculation of optimal linear weights on adaptive Cartesian grids, the present methodology adds very little overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Moreover, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers, provided that their sum is one. This bypasses the calculation of the optimal linear weights, and the multi-resolution WENO scheme thus avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented on an adaptive Cartesian grid with slight modification for high-Reynolds-number problems.
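
A minimal sketch of the weight construction used by WENO-type schemes in general (classical WENO-JS-style weighting, not the authors' multi-resolution implementation, which differs in detail): arbitrary positive linear weights summing to one are turned into nonlinear weights via smoothness indicators. All numbers are illustrative.

```python
def weno_weights(beta, gamma, eps=1e-12, p=2):
    """Nonlinear weights from smoothness indicators beta[k] and arbitrary
    positive linear weights gamma[k] with sum(gamma) == 1."""
    alpha = [g / (eps + b) ** p for g, b in zip(gamma, beta)]
    s = sum(alpha)
    return [a / s for a in alpha]

# Illustrative smoothness indicators: low-order stencil (smooth region)
# versus high-order stencil (oscillatory data near a discontinuity).
print(weno_weights(beta=[1e-4, 3e-1], gamma=[0.2, 0.8]))
```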

Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique

Procedia PDF Downloads 136
2364 Optimal Construction Using Multi-Criteria Decision-Making Methods

Authors: Masood Karamoozian, Zhang Hong

Abstract:

The complexity of the decision-making process, the interplay of various factors in making decisions, and the need to consider all relevant factors in a problem are very evident nowadays. Hence, researchers have shown increasing interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) multi-criteria decision-making methods were used to solve the problem of selecting optimal construction systems. The systems evaluated include Light Steel Frame (LSF), Insulating Concrete Form (ICF), Ordinary Construction System (OCS), and Prefabricated Concrete System (PRCS), drawing on case-study designs from the Zhang Hong studio at Southeast University, Nanjing. Data were crowdsourced using a questionnaire administered to a sample of 200 people, distributed among experts, university centers, and conferences. According to the results of the research, the different decision-making methods led to broadly similar results: with all three multi-criteria decision-making methods mentioned above, the Prefabricated Concrete System (PRCS) ranked first and the Light Steel Frame (LSF) system ranked second. Moreover, the Prefabricated Concrete System (PRCS) ranked first in terms of performance and economic criteria, while the Light Steel Frame (LSF) system ranked first in terms of environmental criteria.
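
A compact sketch of the TOPSIS ranking step used in such comparisons; the score matrix and weights below are invented for the example, not the study's data.

```python
import numpy as np

# Rows: alternatives (LSF, ICF, OCS, PRCS); columns: criteria scores (hypothetical).
X = np.array([[7.0, 6.0, 8.0],
              [6.0, 7.0, 6.0],
              [5.0, 5.0, 7.0],
              [9.0, 8.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])           # criteria weights, e.g. from AHP
benefit = np.array([True, True, True])  # all criteria treated as benefits here

V = X / np.linalg.norm(X, axis=0) * w           # normalized, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)       # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)        # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)             # higher = better rank
for name, c in zip(["LSF", "ICF", "OCS", "PRCS"], closeness):
    print(name, round(c, 3))
```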

Keywords: multi-criteria decision making, AHP, SAW, TOPSIS

Procedia PDF Downloads 91
2363 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

The problem of finding optimal schedules for each piece of equipment in a production process is considered. The process consists of a single manufacturing stage that can handle different types of products, where changeover from one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each product in a setup. The changeover costs increase with the number of setups; hence, to minimize the costs associated with product changeovers, the planning should be such that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated setup costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of setups or, equivalently, maximizing the capacity utilization between setups, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and different customers' orders are generally assigned one of two priorities: "normal" or "priority". The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled while keeping customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities, and the application of the model is demonstrated through numerical examples.
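
The maximal-flow machinery underlying the MAN formulation can be illustrated with a generic max-flow computation (networkx); the priority-arc extension described by the authors is not reproduced here, and the network and capacities are invented for the example.

```python
import networkx as nx

# Toy production network: source -> setup arcs -> orders -> sink.
# Capacities are hypothetical setup quantity limits and order sizes.
G = nx.DiGraph()
G.add_edge("src", "setup_A", capacity=50)
G.add_edge("src", "setup_B", capacity=40)
G.add_edge("setup_A", "order_1", capacity=30)
G.add_edge("setup_A", "order_2", capacity=30)
G.add_edge("setup_B", "order_3", capacity=40)
for order, qty in [("order_1", 25), ("order_2", 20), ("order_3", 35)]:
    G.add_edge(order, "sink", capacity=qty)

flow_value, flow_dict = nx.maximum_flow(G, "src", "sink")
print("total quantity scheduled:", flow_value)   # capacity utilization proxy
```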

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 389
2362 Evaluating the Relationship between Overconfidence of Senior Managers and Abnormal Cash Fluctuations with Respect to Financial Flexibility in Companies Listed in Tehran Stock Exchange

Authors: Hadi Mousavi, Majid Davoudi Nasr

Abstract:

Executives can maximize profits by recognizing the factors that affect investment and using them to obtain the optimal level of investment. Inefficient markets have shortcomings that can affect the optimal level of investment, leading to over-investment or under-investment. In the present study, the relationship between the overconfidence of senior managers and abnormal cash fluctuations, with respect to financial flexibility, was evaluated for companies listed on the Tehran Stock Exchange from 2009 to 2013. The sample consists of 84 companies selected by a systematic elimination method, comprising 420 company-years in total. EVIEWS software was used to test the research hypotheses through linear regression and correlation analysis. After designing and testing the research hypotheses, it was concluded that there is a significant relationship between the overconfidence of senior managers and abnormal cash fluctuations, and that this relationship is not significant at any level of financial flexibility. Moreover, the findings showed a significant relationship between senior managers' overconfidence and positive abnormal cash flow fluctuations in firms, a relationship that is significant only for companies with high financial flexibility. Finally, the results indicate no significant relationship between senior managers' overconfidence and negative abnormal cash flow fluctuations overall, whereas the relationship between senior managers' overconfidence and negative cash flow fluctuations was confirmed for companies with high financial flexibility.

Keywords: abnormal cash fluctuations, overconfidence of senior managers, financial flexibility, accounting

Procedia PDF Downloads 111
2361 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network

Authors: Tzu-Yang Lin, Chuan-Ching Sue

Abstract:

Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU) under a single-class delay constraint in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. In particular, an analytical model can be derived to determine the optimal sleep time of ONUs in every cycle without violating the maximum class delay constraint. Bandwidth allocation considering this optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that the different class delay constraints do not violate the maximum class delay constraint, packets with a more relaxed delay constraint are treated like those with the most stringent delay constraint and may be sent early. This means that the ONU wastes energy in active mode sending packets in advance that did not need to be sent at the current time. Accordingly, we propose a QoS-aware GBA using a novel intra-ONU scheduling scheme that controls which packets are sent according to their respective delay constraints, thereby enhancing energy efficiency without degrading delay performance. If packets are not explicitly classified but carry different packet delay constraints, the intra-ONU scheduling can be modified to classify packets according to their packet delay constraints rather than their classes. Moreover, we propose a switchable ONU architecture in which the ONU switches its architecture according to the sleep time length, further improving energy efficiency in the QoS-aware GBA. The simulation results show that the QoS-aware GBA ensures that packets in different classes or with different delay constraints do not violate their respective delay constraints, while consuming less power than the original GBA.
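
One way to picture the proposed intra-ONU scheduling is an earliest-deadline-first queue that sends only the packets whose delay bounds expire within the current active window and holds the rest through the sleep period. The sketch below is our toy model of that idea, not the authors' algorithm; packet deadlines and window length are invented.

```python
import heapq

def schedule_active_window(packets, now, active_window):
    """packets: list of (deadline, pkt_id). Send only what must go now;
    defer the rest so the ONU can sleep longer without deadline misses."""
    heap = list(packets)
    heapq.heapify(heap)                       # earliest deadline first
    sent, deferred = [], []
    while heap:
        deadline, pkt = heapq.heappop(heap)
        if deadline <= now + active_window:   # would miss its bound if held
            sent.append(pkt)
        else:
            deferred.append(pkt)              # safe to hold during sleep
    return sent, deferred

sent, deferred = schedule_active_window(
    [(12.0, "p1"), (3.5, "p2"), (30.0, "p3")], now=0.0, active_window=10.0)
print("send now:", sent, "| defer:", deferred)
```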

Keywords: passive optical networks (PONs), optical network unit (ONU), energy efficiency, delay constraint

Procedia PDF Downloads 266
2360 Effects of Lung-Protective Ventilation Strategies on Postoperative Pulmonary Complications After Noncardiac Surgery: A Network Meta-Analysis of Randomized Controlled Trials

Authors: Ran An, Dang Wang

Abstract:

Background: Mechanical ventilation has been confirmed to increase the incidence of postoperative pulmonary complications (PPCs), and several studies have shown that low tidal volumes combined with positive end-expiratory pressure (PEEP) and recruitment manoeuvres (RM) reduce the incidence of PPCs. However, the optimal lung-protective ventilatory strategy remains unclear. Methods: Multiple databases were searched for randomized controlled trials (RCTs) published prior to October 2023. The association between individual PEEP (iPEEP) or other forms of lung-protective ventilation and the incidence of PPCs was evaluated by Bayesian network meta-analysis. Results: We included 58 studies (11,610 patients) in this meta-analysis. The network meta-analysis showed that low tidal volume (LVt) combined with iPEEP and RM was associated with significantly lower incidences of PPCs [vs. HVt: OR = 0.38, 95% CrI (0.19, 0.75); vs. LVt alone: OR = 0.33, 95% CrI (0.12, 0.82)], postoperative atelectasis, and pneumonia than HVt or LVt alone. In abdominal surgery, LVt combined with iPEEP or medium-to-high PEEP and RM was associated with significantly lower incidences of PPCs, postoperative atelectasis, and pneumonia. LVt combined with iPEEP and RM was ranked highest, based on SUCRA scores. Conclusion: LVt combined with iPEEP and RM decreased the incidences of PPCs, postoperative atelectasis, and pneumonia in noncardiac surgery patients. iPEEP-guided ventilation was the optimal lung-protective ventilation strategy. The quality of evidence was moderate.

Keywords: lung-protective ventilation strategies, postoperative pulmonary complications, network meta-analysis, noncardiac surgery

Procedia PDF Downloads 20
2359 Magnetic Resonance Imaging for Assessment of the Quadriceps Tendon Cross-Sectional Area as an Adjunctive Diagnostic Parameter in Patients with Patellofemoral Pain Syndrome

Authors: Jae Ni Jang, SoYoon Park, Sukhee Park, Yumin Song, Jae Won Kim, Keum Nae Kang, Young Uk Kim

Abstract:

Objectives: Patellofemoral pain syndrome (PFPS) is a common clinical condition characterized by anterior knee pain. Here, we investigated the quadriceps tendon cross-sectional area (QTCSA) as a novel predictor for the diagnosis of PFPS. By examining the association between the QTCSA and PFPS, we aimed to provide a more valuable diagnostic parameter and a less equivocal assessment of the diagnostic potential for PFPS by comparing the QTCSA with the quadriceps tendon thickness (QTT), a traditional measure of quadriceps tendon hypertrophy. Patients and Methods: This retrospective study included 30 patients with PFPS and 30 healthy participants who underwent knee magnetic resonance imaging. T1-weighted turbo spin echo transverse magnetic resonance images were obtained. The QTCSA was measured on the axial-angled phases of the images by drawing outlines, and the QTT was measured at the most hypertrophied part of the quadriceps tendon. Results: The average QTT and QTCSA for patients with PFPS (6.33±0.80 mm and 155.77±36.60 mm², respectively) were significantly greater than those for healthy participants (5.77±0.36 mm and 111.90±24.10 mm², respectively; both P<0.001). We used receiver operating characteristic curves to confirm the sensitivities and specificities of both the QTT and the QTCSA as predictors of PFPS. The optimal diagnostic cutoff value for QTT was 5.98 mm, with a sensitivity of 66.7%, a specificity of 70.0%, and an area under the curve of 0.75 (0.62–0.88). The optimal diagnostic cutoff value for QTCSA was 121.04 mm², with a sensitivity of 73.3%, a specificity of 70.0%, and an area under the curve of 0.83 (0.74–0.93). Conclusion: The QTCSA was a more reliable diagnostic indicator for PFPS than the QTT.
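
The cutoff selection described here (a sensitivity/specificity trade-off on an ROC curve) can be sketched with scikit-learn on synthetic data; the QTCSA values generated below are hypothetical stand-ins drawn from distributions loosely matching the reported means, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical QTCSA (mm^2): controls ~N(112, 24), PFPS patients ~N(156, 37).
controls = rng.normal(112, 24, 30)
patients = rng.normal(156, 37, 30)
y = np.r_[np.zeros(30), np.ones(30)]
x = np.r_[controls, patients]

fpr, tpr, thresholds = roc_curve(y, x)
youden = tpr - fpr                       # Youden's J statistic per threshold
best = np.argmax(youden)
print(f"AUC={roc_auc_score(y, x):.2f}, "
      f"optimal cutoff={thresholds[best]:.1f} mm^2, "
      f"sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```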

Keywords: patellofemoral pain syndrome, quadriceps muscle, hypertrophy, magnetic resonance imaging

Procedia PDF Downloads 23
2358 Simulation, Optimization, and Analysis Approach of Microgrid Systems

Authors: Saqib Ali

Abstract:

Energy sources are classified into two categories depending on whether they can be replenished. Sources that cannot be restored to their original form once consumed are considered nonrenewable energy resources (e.g., coal, fuel), whereas energy resources that return to their original condition even after being consumed are known as renewable energy resources (e.g., wind, solar, hydel). Renewable energy is a cost-effective way to generate clean and green electrical energy, and nowadays the majority of countries are paying heed to energy generation from renewable energy sources (RES). Pakistan mostly relies on conventional energy resources, which are largely nonrenewable in nature; coal and fuel are among the major resources, and their prices are increasing with time. On the other hand, RES have great potential in the country, and with the deployment of RES, greater reliability and a more effective power system can be obtained. In this thesis, a similar concept is used and a hybrid power system is proposed, composed of an intermix of renewable and nonrenewable sources. The source side comprises solar, wind, and fuel cells, which will be used in an optimal manner to serve the load. The goal is to provide an economical, reliable, uninterruptible power supply. This is achieved by an optimal controller (PI, PD, PID, FOPID). Optimization techniques are applied to the controllers to achieve the desired results, and advanced algorithms (particle swarm optimization, the flower pollination algorithm) are used to extract the desired output from the controller. A detailed comparison in the form of tables and results is provided, highlighting the efficiency of the proposed system.
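
To illustrate the controller-tuning idea, here is a minimal particle swarm optimization of PI gains for a discretized first-order plant, minimizing the integral squared error of the step response; the plant model and all PSO parameters are our assumptions, not the thesis values.

```python
import random

def ise(kp, ki, dt=0.01, t_end=5.0, tau=1.0):
    """Integral squared error of a unit-step response for a first-order
    plant dy/dt = (u - y)/tau under PI control (Euler discretization)."""
    y, integ, err_sq = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += (u - y) / tau * dt
        err_sq += e * e * dt
    return err_sq

random.seed(0)
swarm = [{"x": [random.uniform(0, 10), random.uniform(0, 10)],
          "v": [0.0, 0.0], "best": None, "best_f": float("inf")}
         for _ in range(15)]
g_best, g_f = None, float("inf")
for _ in range(40):
    for p in swarm:                      # evaluate and update personal/global bests
        f = ise(*p["x"])
        if f < p["best_f"]:
            p["best"], p["best_f"] = p["x"][:], f
        if f < g_f:
            g_best, g_f = p["x"][:], f
    for p in swarm:                      # standard PSO velocity/position update
        for i in range(2):
            p["v"][i] = (0.7 * p["v"][i]
                         + 1.5 * random.random() * (p["best"][i] - p["x"][i])
                         + 1.5 * random.random() * (g_best[i] - p["x"][i]))
            p["x"][i] += p["v"][i]
print(f"tuned gains: Kp={g_best[0]:.2f}, Ki={g_best[1]:.2f}, ISE={g_f:.4f}")
```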

Keywords: distributed generation, demand-side management, hybrid power system, microgrid, renewable energy resources, supply-side management

Procedia PDF Downloads 84
2357 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when experts and their knowledge are involved in estimating the MAGDM parameters. We consider an emergency decision-making model where expert assessments of humanitarian aid from distribution centers (HADCs) are represented as q-rung orthopair fuzzy numbers and the data structure is described within the theory of the body of evidence. Based on focal probability construction and the experts' evaluations, an objective function, a ranking index for distribution center selection, is constructed. Our approach for solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs among the service centers; some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) that correspond to the Pareto-optimal solutions. To illustrate the obtained results, a numerical example is given for the facility location-selection problem.

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 75
2356 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation

Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir

Abstract:

It is clear that quality concrete must be made with selected materials chosen in optimum proportions that leave, after placement, a minimum of voids in the produced material. The different formulation methods in use are based, for the most part, on a granular curve that describes an 'optimal granularity'. Many authors have engaged in fundamental research on granular arrangements. Comparisons of mathematical models reproducing these granular arrangements with experimental measurements of compactness verify that the minimum porosity P over a given granular extent follows a power law. The best compactness in a finite medium is thus obtained with power laws, such as those of Furnas, Fuller, or Talbot, each preferring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that the optimal granularity of Caquot can be approximated by a power law. By analogy, it can then be analyzed as a fractal-type granular structure, since the internal self-similarity properties that characterize fractal objects are also expressed by a power law. Optimized mixtures may be described as a succession of granular classes filling the available space according to a regular hierarchical distribution, which would give the mix, by cascading effects, the same structure at different scales. This model is likely appropriate over the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions to chippings of sometimes several tens of millimeters. As part of this research, the aim is to illustrate the application of fractal analysis to characterize optimized granular concrete mixtures through a so-called fractal dimension; different concretes were studied, and a fractal structure of their granular mixtures was demonstrated regardless of the formulation method or the type of concrete.

Keywords: concrete formulation, fractal character, granular packing, method of formulation

Procedia PDF Downloads 237
2355 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications

Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero

Abstract:

Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches as the concentration in aqueous solutions increases, systematically ranging from 0.1 to 10 mg/mL. Our results demonstrated non-linear fluorescence quenching with increasing concentration for both NV zero-phonon lines, with the 5 mg/mL concentration showing the maximum fluorescence emission. This behaviour is theoretically explained as an electronic recombination process that modulates the intensity of the NV centres. Finally, to gain more insight, the FRET methodology is used to determine the fluorescence efficiency in terms of the separation distance between fluorophores. The concentration level is thus simulated as follows: a small distance between nanodiamonds is considered a highly concentrated system, whereas a large distance corresponds to a low-concentration one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is the 0.5 mg/mL concentration, for which our studies demonstrate optimal human cell viability (99%). In this respect, this concentration has the feature of being as biocompatible as water, making it possible to internalize the nanodiamonds in cells without harming the living medium. To this end, not only can we track nanodiamonds on the surface or inside the cell with excellent precision thanks to their fluorescence intensity, but we can also perform thermometry tests, transforming a fluorescence contrast image into a temperature contrast image.
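
The distance dependence invoked here is the standard FRET efficiency relation E = 1 / (1 + (r/R0)^6); a few illustrative values can be computed directly, with the Förster radius R0 below being an assumed value, not one from the study.

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Standard FRET efficiency; r0_nm is an assumed Forster radius."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 5.0, 8.0, 12.0):   # donor-acceptor separations in nm
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
# Small separations (i.e., high 'concentration') give near-unity efficiency.
```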

Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry

Procedia PDF Downloads 386
2354 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific

Authors: Giuseppe Timperio, Robert De Souza

Abstract:

The combination of factors such as volatile climate change, rampant urbanization of risk-exposed areas, and political and social instabilities is creating an alarming basis for further growth in the number and magnitude of humanitarian crises worldwide. Given the unique features of the humanitarian supply chain, such as the unpredictability of demand in space, time, and geography, the spike in the number of requests for relief items in the first days after a calamity, the uncertain state of logistics infrastructure, and large volumes of unsolicited low-priority items, a proactive approach to the design of disaster response operations is needed to achieve high agility in the mobilization of emergency supplies in the immediate aftermath of an event. This paper is an attempt in that direction, providing decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case concerning Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model built upon a rich set of quantitative data. To this end, a network optimization approach was implemented, with several what-if scenarios carefully developed and tested. The findings of this study can support decision makers facing challenges related to the resilience of disaster relief chains, particularly regarding the optimal configuration of supply chain facilities and the optimal flows across the nodes, while considering the network structure from an end-to-end in-country distribution perspective.

Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience

Procedia PDF Downloads 160
2353 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory

Authors: Hiba El Assibi

Abstract:

This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with alpha-beta pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project explores additional enhancements, such as symmetry checking and code optimizations, to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and may help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights into performance and computational efficiency. We also discuss the scalability of our approach, considering different board sizes (numbers of pits and stones) and rules (different variations) and studying how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
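
A generic minimax-with-alpha-beta skeleton of the kind the study optimizes, shown with a toy Nim-like game rather than Kalah (the game-specific rules, symmetry checks, and transposition tables are omitted):

```python
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Generic minimax with alpha-beta pruning over a game object that
    provides is_terminal, evaluate, moves and apply."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state, maximizing)
    if maximizing:
        value = float("-inf")
        for move in game.moves(state):
            value = max(value, alphabeta(game.apply(state, move),
                                         depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:
                break              # beta cutoff: opponent avoids this line
        return value
    value = float("inf")
    for move in game.moves(state):
        value = min(value, alphabeta(game.apply(state, move),
                                     depth - 1, alpha, beta, True, game))
        beta = min(beta, value)
        if alpha >= beta:
            break                  # alpha cutoff
    return value

class Nim:
    """Toy stand-in for Kalah: take 1-3 stones; taking the last stone wins."""
    def is_terminal(self, n): return n == 0
    def evaluate(self, n, maximizing):  # the player to move at n == 0 has lost
        return -1 if maximizing else 1
    def moves(self, n): return [m for m in (1, 2, 3) if m <= n]
    def apply(self, n, m): return n - m

print(alphabeta(10, 20, float("-inf"), float("inf"), True, Nim()))  # 1: first player wins
```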

Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory

Procedia PDF Downloads 33
2352 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis

Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman

Abstract:

Introduction: In the realm of public health, the threat posed by Monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the outbreak observed in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of Monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of Monkeypox, with a particular focus on a variable population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data on the cases recorded in Nigeria in 2022 for epidemiological weeks 1–52. Transitioning from qualitative to quantitative analysis, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), the Average Cost-Effectiveness Ratio (ACER), and the Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions' economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating Monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation. The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts to combat infectious diseases.
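
The cost-effectiveness metrics named here follow standard definitions; the sketch below computes IAR, ACER, and ICER for two hypothetical intervention strategies, with all numbers invented for illustration (the IAR denominator follows the common convention of dividing infections averted by recoveries).

```python
# Hypothetical outcomes for two control strategies vs. no intervention.
strategies = {
    "A (awareness only)":       {"cost": 50_000.0, "averted": 1_200},
    "B (awareness + tracing)":  {"cost": 90_000.0, "averted": 1_500},
}
recovered = 4_000  # hypothetical total recoveries, for the IAR denominator

for name, s in strategies.items():
    print(f"{name}: IAR={s['averted'] / recovered:.3f}, "
          f"ACER={s['cost'] / s['averted']:.2f} per infection averted")

a, b = strategies.values()
icer = (b["cost"] - a["cost"]) / (b["averted"] - a["averted"])
print(f"ICER of B vs A: {icer:.2f} per additional infection averted")
```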

Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness

Procedia PDF Downloads 51
2351 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses cost-benefit analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was carried out for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms occurs at cell lengths of R ≈ 10 m for the millimetre wavebands, whereas for longer distances an optimum of the revenue can be observed at R ≈ 550 m for 5.62 GHz. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R; it starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum for values of R approximately equal to 550 m.
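
The two propagation models being contrasted can be sketched generically: free-space (Friis-type) loss for the mmWave bands versus a two-slope model with a break-point distance for the SHF band. The exponents and break-point distance below are illustrative assumptions, not the paper's calibrated values.

```python
import math

C = 3e8  # speed of light, m/s

def friis_db(d_m, f_hz):
    """Free-space path loss (dB), the basis of modified-Friis models."""
    return 20 * math.log10(4 * math.pi * d_m * f_hz / C)

def two_slope_db(d_m, f_hz, d_bp=80.0, n1=2.0, n2=4.0):
    """Two-slope model: exponent n1 before the break point d_bp, n2 after."""
    pl_1m = friis_db(1.0, f_hz)
    if d_m <= d_bp:
        return pl_1m + 10 * n1 * math.log10(d_m)
    return pl_1m + 10 * n1 * math.log10(d_bp) + 10 * n2 * math.log10(d_m / d_bp)

for d in (10, 100, 550):
    print(f"d={d:4d} m: 73 GHz Friis {friis_db(d, 73e9):6.1f} dB | "
          f"5.62 GHz two-slope {two_slope_db(d, 5.62e9):6.1f} dB")
```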

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 125
2350 Designing Agricultural Irrigation Systems Using Drone Technology and Geospatial Analysis

Authors: Yongqin Zhang, John Lett

Abstract:

Geospatial technologies have been increasingly used in agriculture for various applications and purposes in recent years. Unmanned aerial vehicles (drones) fit the needs of farmers in farming operations, from field spraying to monitoring growth cycles and crop health. In this research, we conducted a practical project that used drone technology to design and map optimal locations and layouts of irrigation systems for agricultural fields. We flew a DJI Mavic 2 Pro drone to acquire aerial remote sensing images over two agricultural fields in Forest, Mississippi, in 2022. Flight plans were first designed to capture multiple high-resolution images via a 20-megapixel RGB camera mounted on the drone over the fields. The Drone Deploy web application was then utilized for flight planning and subsequent image processing and measurements. The images were orthorectified and processed to estimate the field areas and measure the locations of the water lines and sprinkler heads. Field measurements were conducted to measure the ground targets and validate the aerial measurements. Geospatial analysis and photogrammetric measurements were performed for the study area to determine the optimal layout and quantitative estimates for the irrigation systems. We created maps and tabular estimates to demonstrate the locations, spacing, number, and layout of sprinkler heads and water lines needed to cover the agricultural fields. This project provides scientific guidance to Mississippi farmers for precision agricultural irrigation practice.

Keywords: drone images, agriculture, irrigation, geospatial analysis, photogrammetric measurements

Procedia PDF Downloads 60
2349 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization

Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur

Abstract:

In-situ remediation is a technique that can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating aquifers contaminated with LNAPL (Light Non-Aqueous Phase Liquid). Benzene, toluene, ethylbenzene, and xylene are the main components of the LNAPL contaminant; collectively, these contaminants are known as BTEX. In the in-situ bioremediation process, a set of injection and extraction wells is installed. Injection wells supply oxygen and other nutrients, with whose help indigenous soil bacteria convert BTEX into carbon dioxide and water. Extraction wells, on the other hand, check the movement of the plume downstream. In this study, the optimal design of the system has been carried out using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for the pumping of the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises the determination of the pumping rates, the total pumping volume, and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for the injection wells during the initial management period, since it facilitates the availability of the oxygen and other nutrients necessary for biodegradation; however, the rate is low during the third year on account of sufficient oxygen availability. This is because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.

Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization

Procedia PDF Downloads 419
2348 Reusability of Coimmobilized Enzymes

Authors: Aleksandra Łochowicz, Daria Świętochowska, Loredano Pollegioni, Nazim Ocal, Franck Charmantray, Laurence Hecquet, Katarzyna Szymańska

Abstract:

Multienzymatic cascade reactions are nowadays widely used in the pharmaceutical, chemical, and cosmetics industries to produce high-value compounds. They can be carried out in two ways: step by step or one-pot. If two or more enzymes are in the same reaction vessel, it is necessary to work out a compromise in order to run the reaction under conditions close to optimal for each enzyme. So far, most reports of multienzymatic cascades concern the use of free enzymes. Unfortunately, using free enzymes as reaction catalysts entails high costs. What is more, free enzymes are soluble in the reaction medium, which makes reuse impossible. To overcome this obstacle, enzymes can be immobilized, which provides a heterogeneous biocatalyst that enables reuse and easy separation of the enzyme from solvents and reaction products. Immobilization usually also increases the thermal and operational stability of the enzyme. The advantages of using immobilized multienzyme systems are enhanced enzyme stability, improved cascade enzymatic activity via substrate channeling, and ease of recovery for reuse. A one-pot immobilized multienzymatic cascade can be carried out in a mixed or a coimmobilized format. When biocatalysts are coimmobilized on the same carrier, they are in close contact with each other, which increases the reaction rate and catalytic efficiency and eliminates the lag time. However, in this format, providing optimal conditions for each enzyme, both during immobilization and during the cascade reaction, is complicated. Herein, we examined the immobilization of three enzymes: D-amino acid oxidase from Rhodotorula gracilis, commercially available catalase, and transketolase from Geobacillus stearothermophilus. As a support, we used silica monoliths with a hierarchical pore structure. We then checked their stability and reusability in the one-pot cascade synthesis of L-erythrulose from hydroxypyruvate.

Keywords: biocatalysts, enzyme immobilization, multienzymatic reaction, silica carriers

Procedia PDF Downloads 133
2347 Optimum Design of Support and Care Home for the Elderly

Authors: P. Shahabi

Abstract:

The increase in average human life expectancy has led to a growing elderly population. This demographic shift has brought various challenges related to the mental and physical well-being of the elderly, often resulting in a lack of dignity and respect for this valuable segment of society. These emerging social issues have cast a shadow on the lives of families, prompting the need for innovative solutions to enhance the lives of the elderly. In this study, within the context of architecture, we aim to create a pleasant and nurturing environment that combines traditional Iranian and modern architectural elements to cater to the unique needs of the elderly. Our primary research objectives encompass the following: recognizing the societal demand for nursing homes due to the increasing elderly population; addressing the need for an environment conducive to physical and mental well-being among the elderly; and developing spatial designs specifically tailored to the elderly population, ensuring their comfort and convenience. To achieve these objectives, we have undertaken a comprehensive exploration of the challenges and issues faced by the elderly. We have also laid the groundwork for the architectural design of nursing homes, culminating in the presentation of an architectural plan aimed at minimizing the difficulties faced by the elderly and enhancing their quality of life. Notably, many of the existing nursing homes in Iran lack the welfare and safety conditions required for the elderly. Hence, our research aims to establish comprehensive and suitable criteria for the optimal design of nursing homes. We believe that through optimal design, we can create spaces that are not only diverse, attractive, and dynamic but that also significantly improve the quality of life of the elderly. We hope that these homes will serve as beacons of hope and tranquility for all individuals in their later years.

Keywords: care home, elderly, optimum design, support

Procedia PDF Downloads 55
2346 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs

Authors: H. M. Soroush

Abstract:

The problem of scheduling products and services for on-time delivery is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers must frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and we introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
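
For the fully deterministic, unweighted special case, the classical Moore-Hodgson algorithm minimizes the number of late jobs on a single machine in O(n log n); the sketch below illustrates that baseline, which the stochastic and weighted variants discussed above generalize.

```python
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date).
    Returns the maximum number of on-time jobs (EDD order with evictions)."""
    jobs = sorted(jobs, key=lambda j: j[1])   # earliest due date first
    on_time, t = [], 0                        # max-heap (negated processing times)
    for p, d in jobs:
        heapq.heappush(on_time, -p)
        t += p
        if t > d:                             # current set misses a due date:
            t += heapq.heappop(on_time)       # evict the longest job (adds -p)
    return len(on_time)

print(moore_hodgson([(2, 3), (4, 5), (3, 6), (5, 8)]))  # -> 2 jobs on time
```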

Keywords: number of late jobs, scheduling, single server, stochastic

Procedia PDF Downloads 482
2345 The Interventricular Septum as a Site for Implantation of Electrocardiac Devices - Clinical Implications of Topography and Variation in Position

Authors: Marcin Jakiel, Maria Kurek, Karolina Gutkowska, Sylwia Sanakiewicz, Dominika Stolarczyk, Jakub Batko, Rafał Jakiel, Mateusz K. Hołda

Abstract:

Proper imaging of the interventricular septum during endocavitary lead implantation is essential for a successful procedure. The interventricular septum lies oblique to the three main body planes and forms angles of 44.56° ± 7.81°, 45.44° ± 7.81°, and 62.49° (IQR 58.84°–68.39°) with the sagittal, frontal, and transverse planes, respectively. The optimal left anterior oblique (LAO) projection, which aligns the septum along the radiation beam, is obtained at an angle of 53.24° ± 9.08°, while the best visualization of the septal surface in the right anterior oblique (RAO) projection is obtained at an angle of 45.44° ± 7.81°. In addition, the RAO angle (p=0.003) and the septal inclination to the transverse plane (p=0.002) are larger in the male group, whereas the LAO angle (p=0.003) and the dihedral angle that the septum forms with the sagittal plane (p=0.003) are smaller, compared with the female group. Analyzing the optimal RAO angle in cross-sections lying at the level of the anterior and posterior junctions of the septum with the free wall of the right ventricle, we obtain slightly smaller angles, i.e., 41.11° ± 8.51° and 43.94° ± 7.22°, respectively. As the septum is directed leftward in the apical region, the optimal RAO angle for this area decreases (16.49° ± 7.07°) and does not differ significantly between the male and female groups (p=0.23). Within the right ventricular apex, there is a recess formed by the apical segment of the interventricular septum and the free wall of the right ventricle, with a depth of 12.35 mm (IQR 11.07 mm–13.51 mm). The length of the septum measured in the longitudinal four-chamber section is 73.03 mm ± 8.06 mm, while the left ventricular septal wall formed by the interventricular septum in the apical region already lies outside the right ventricle over a length of 10.06 mm (IQR 8.86–11.07 mm). Both of these lengths are significantly larger in the male group (p<0.001). For proper imaging of the septum from the right ventricular side, an oblique position of the visualization devices is necessary. Correct determination of the RAO and LAO angles during the procedure improves the intervention, and appropriate modification of the visual field when moving in the anterior, posterior, and apical directions of the septum helps avoid complications. Overlooking the change in the direction of the interventricular septum in the apical region, and the marked decrease in the RAO angle there, can result in implantation of the lead into the free wall of the right ventricle, with less effective pacing and even complications such as wall perforation and cardiac tamponade. The demonstrated sex differences can also be helpful in setting the right projections. A necessary addition to this analysis will be a description of the surface area of the ventricular septum, on which we are currently working using autopsy material.

Keywords: anatomical variability, angle, electrocardiological procedure, interventricular septum

Procedia PDF Downloads 87
2344 Optimizing Volume Fraction Variation Profile of Bidirectional Functionally Graded Circular Plate under Mechanical Loading to Minimize Its Stresses

Authors: Javad Jamali Khouei, Mohammadreza Khoshravan

Abstract:

Considering that the application of functionally graded materials is increasing in most industries, it seems necessary to present a methodology for designing the optimal profile of widely used structures, such as plates under mechanical loading. Therefore, the volume fraction variation profile of a functionally graded circular plate, considered here to be two-directional, is optimized so that the stress in the structure is minimized. For this purpose, the equilibrium equations of the two-directional functionally graded circular plate are solved by applying a semi-analytical-numerical method under mechanical loading and support conditions. By solving the equilibrium equations, deflections and stresses are obtained in terms of the control variables of the volume fraction variation profile. As a result, the problem can be formulated as an optimization problem aiming at the minimization of the critical von Mises stress under constraints on deflections and stresses and a physical constraint relating to the structure of the material. The resulting problem can then be solved with the help of a metaheuristic algorithm such as the genetic algorithm. The optimization results for the applied model, under the given constraints, loadings, and boundary conditions, show that the functionally graded plate should be graded only in the radial direction, and there is no need for volume fraction variation of the constituent particles in the thickness direction. To validate the results, the optimal values of the obtained design variables are evaluated graphically.

Keywords: two-directional functionally graded material, single objective optimization, semi analytical-numerical solution, genetic algorithm, graphical solution with contour

Procedia PDF Downloads 264
2343 Basins of Attraction for Quartic-Order Methods

Authors: Young Hee Geum

Abstract:

We compare optimal quartic-order methods for the multiple zeros of nonlinear equations by illustrating their basins of attraction. To construct the basins of attraction effectively, we take a 600×600 uniform grid centered at the origin of the complex plane and paint each initial value in the basins of attraction with a different color according to the iteration number required for convergence.
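
A minimal sketch of the grid-coloring procedure described, shown here with plain Newton iteration on f(z) = z³ − 1 rather than the paper's quartic-order multiple-root methods; grid extent and tolerances are our assumptions.

```python
import numpy as np

ROOTS = [1, complex(-0.5, 3**0.5 / 2), complex(-0.5, -(3**0.5) / 2)]  # z^3 = 1

def basin_index(z, max_iter=40, tol=1e-6):
    """Newton iteration for f(z) = z^3 - 1; returns (root index, iterations)."""
    for k in range(max_iter):
        for i, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return i, k
        z -= (z**3 - 1) / (3 * z**2)   # Newton step
    return -1, max_iter                # treated as non-convergent

# 600x600 uniform grid centered at the origin, as in the abstract.
n = 600
xs = np.linspace(-2.0, 2.0, n)
img = np.empty((n, n), dtype=np.int8)
for i, y in enumerate(xs):
    for j, x in enumerate(xs):
        img[i, j], _ = basin_index(complex(x, y) or 1e-9)  # avoid z = 0
print("pixels per basin:", [(img == b).sum() for b in (-1, 0, 1, 2)])
```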

Keywords: basins of attraction, convergence, multiple-root, nonlinear equation

Procedia PDF Downloads 239
2342 Integrated Simulation and Optimization for Carbon Capture and Storage System

Authors: Taekyoon Park, Seokgoo Lee, Sungho Kim, Ung Lee, Jong Min Lee, Chonghun Han

Abstract:

CO2 capture and storage/sequestration (CCS) is a key technology for addressing the global warming issue. This paper proposes an integrated model for the whole chain of CCS, from a power plant to a reservoir. The integrated model is further utilized to determine optimal operating conditions and study responses to various changes in input variables.

Keywords: CCS, carbon dioxide, carbon capture and storage, simulation, optimization

Procedia PDF Downloads 334