Search results for: input constraints
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3257

3077 Consideration of Uncertainty in Engineering

Authors: A. Mohammadi, M. Moghimi, S. Mohammadi

Abstract:

Engineers need computational methods that provide solutions which are less sensitive to environmental effects, so techniques that take uncertainty into account should be used to control and minimize the risk associated with design and operation. To consider uncertainty in an engineering problem, the optimization problem should be solved over a suitable range of each uncertain input variable instead of at a single estimated point. With a deterministic optimization problem, a large computational burden is required to consider every possible and probable combination of the uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different methods are presented and analyzed.
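
To make the contrast concrete, the sketch below (not part of the paper) compares a single deterministic evaluation of a hypothetical design cost with a Monte Carlo evaluation over assumed ranges of the uncertain inputs; the objective function, ranges and distributions are placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def design_cost(x1, x2):
    # Hypothetical objective of two input variables (placeholder physics).
    return (x1 - 2.0) ** 2 + 0.5 * (x2 - 1.0) ** 2 + 0.1 * x1 * x2

# Deterministic evaluation: a single estimated point for each input.
nominal = design_cost(2.1, 0.9)

# Uncertainty-aware evaluation: sample each input over its plausible range
# and look at the spread of outcomes instead of one number.
x1_samples = rng.uniform(1.8, 2.4, size=10_000)   # assumed range of x1
x2_samples = rng.normal(1.0, 0.15, size=10_000)   # assumed distribution of x2
costs = design_cost(x1_samples, x2_samples)

print(f"nominal cost            : {nominal:.3f}")
print(f"mean cost under samples : {costs.mean():.3f}")
print(f"95th-percentile cost    : {np.percentile(costs, 95):.3f}")
```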

Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method

Procedia PDF Downloads 387
3076 Modelling and Optimization of Laser Cutting Operations

Authors: Hany Mohamed Abdu, Mohamed Hassan Gadallah, El-Giushi Mokhtar, Yehia Mahmoud Ismail

Abstract:

Laser beam cutting is a nontraditional machining process. This paper optimizes the laser beam cutting parameters for stainless steel (316L) by considering the effect of the input parameters, viz. power, oxygen pressure, frequency and cutting speed. Statistical design of experiments is carried out at three different levels, and process responses such as average kerf taper (Ta) and surface roughness (Ra) are measured accordingly. A quadratic mathematical model (RSM) for each of the responses is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 orthogonal array) are employed to search for an optimal parametric combination that achieves the desired yield of the process. RSM models are developed for the mean responses, the S/N ratio, and the standard deviation of the responses. Optimization models are formulated as single-objective problems subject to process constraints. The models are formulated based on Analysis of Variance (ANOVA) in the MATLAB environment. The optimum solutions are compared with the Taguchi methodology results.
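
As a rough illustration of the RSM step, the following Python sketch fits a quadratic response surface to a synthetic three-level design in two coded factors; the factor names, data and coefficients are invented stand-ins, not values from the study.

```python
import numpy as np

# Synthetic stand-in data: two coded factors (e.g. power, cutting speed) at
# three levels each, and a measured response such as surface roughness Ra.
levels = np.array([-1.0, 0.0, 1.0])
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()
ra = (1.8 + 0.4 * x1 - 0.3 * x2 + 0.15 * x1 * x2 + 0.2 * x1**2
      + 0.05 * np.random.default_rng(1).normal(size=x1.size))  # placeholder response

# Quadratic RSM design matrix: intercept, linear, interaction and square terms.
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coeffs, *_ = np.linalg.lstsq(A, ra, rcond=None)

def ra_model(x1, x2):
    """Predict the response from the fitted quadratic surface."""
    b0, b1, b2, b12, b11, b22 = coeffs
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2 + b11 * x1**2 + b22 * x2**2

print("fitted coefficients:", np.round(coeffs, 3))
print("predicted Ra at (x1=0.5, x2=-0.5):", round(ra_model(0.5, -0.5), 3))
```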

Keywords: optimization, laser cutting, robust design, kerf width, Taguchi method, RSM and DOE

Procedia PDF Downloads 594
3075 Dimensioning of a Solar Dryer with Application of an Experiment Design Method for Drying Food Products

Authors: B. Touati, A. Saad, B. Lips, A. Abdenbi, M. Mokhtari

Abstract:

The purpose of this study is to apply the experiment design method to the dimensioning of a solar drying system. The NIMROD software was used to build the matrix of experiments and to analyze the results. The software has the advantage of being easy to use: it follows a guided procedure, with a few choices about the number of parameters, their ranges of variation, and the desired polynomial form. The first design of experiments concerns drying with constant input characteristics of the hot air in the dryer, and the second a configuration in which the drying chamber is coupled with a solar collector. The first design of experiments allows us to study the influence of the various parameters and to express the studied responses in polynomial form. The correspondence between the polynomials thus determined and the model results was good. The results of the polynomials of the second design of experiments agree less well with the model than those obtained for drying with constant input conditions. This is due to the strong coupling between the input parameters, especially the collector surface, the drying chamber and the mass of the product.

Keywords: solar drying, experiment design method, NIMROD, mint leaves

Procedia PDF Downloads 477
3074 The Impact of Space Charges on the Electromechanical Constraints in HVDC Power Cable Containing Defects

Authors: H. Medoukali, B. Zegnini

Abstract:

Insulation techniques in high-voltage cables rely heavily on chemically cross-linked polyethylene (XLPE). The latter may contain manufacturing defects such as small cavities. The presence of a cavity affects the distribution of the electric field within the insulating layer, and this change in the electric field is itself affected by the presence of different space charge densities within the insulating material. This study is carried out by performing simulations to determine the distribution of the electric field inside the insulation. The simulations are based on a two-dimensional model of a 154 kV high-voltage cable built in the COMSOL Multiphysics software. In each case, we study the effect of changing the space charge density on the electromechanical constraints.
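
The kind of field computation involved can be illustrated outside COMSOL with a one-dimensional radial Poisson solve; the sketch below assumes a uniform space charge density and rough cable dimensions (all numbers are assumptions, not taken from the paper).

```python
import numpy as np

# Geometry and material (assumed values for a 154 kV class XLPE cable).
ri, ro = 0.012, 0.030          # conductor screen / insulation screen radii [m]
U = 154e3 / np.sqrt(3)         # assumed phase-to-ground voltage [V]
eps = 2.3 * 8.854e-12          # permittivity of XLPE [F/m]
rho = 2e-5                     # assumed uniform space charge density [C/m^3]

N = 400
r = np.linspace(ri, ro, N + 1)
h = r[1] - r[0]

# Finite-difference Poisson equation in cylindrical coordinates:
#   V'' + V'/r = -rho/eps,   V(ri) = U,  V(ro) = 0.
A = np.zeros((N - 1, N - 1))
b = np.full(N - 1, -rho / eps)
for k in range(1, N):
    rk = r[k]
    lo = 1.0 / h**2 - 1.0 / (2 * h * rk)   # coefficient of V[k-1]
    di = -2.0 / h**2                       # coefficient of V[k]
    hi = 1.0 / h**2 + 1.0 / (2 * h * rk)   # coefficient of V[k+1]
    A[k - 1, k - 1] = di
    if k > 1:
        A[k - 1, k - 2] = lo
    else:
        b[k - 1] -= lo * U                 # boundary condition V(ri) = U
    if k < N - 1:
        A[k - 1, k] = hi
    # V(ro) = 0 contributes nothing to the right-hand side.

V = np.concatenate(([U], np.linalg.solve(A, b), [0.0]))
E = -np.gradient(V, r)                     # radial electric field [V/m]
print(f"E at inner screen: {E[0]/1e6:.2f} MV/m, at outer screen: {E[-1]/1e6:.2f} MV/m")
```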

Keywords: COMSOL multiphysics, electric field, HVDC, microcavities, space charges, XLPE

Procedia PDF Downloads 106
3073 Performance Analysis of M-Ary Pulse Position Modulation in Multihop Multiple Input Multiple Output-Free Space Optical System over Uncorrelated Gamma-Gamma Atmospheric Turbulence Channels

Authors: Hechmi Saidi, Noureddine Hamdi

Abstract:

The performance of a Decode-and-Forward (DF) multihop Free Space Optical (FSO) scheme deploying a Multiple Input Multiple Output (MIMO) configuration under the Gamma-Gamma (GG) statistical distribution, adopting M-ary Pulse Position Modulation (MPPM) coding, is investigated. We extract exact and approximate values of the Symbol Error Rate (SER). A closed-form formula for the Probability Density Function (PDF) is expressed for the designed system. Thanks to the use of the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is combatted; hence the quality of the transmitted signal is improved.
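
For reference, the Gamma-Gamma irradiance PDF used in such analyses can be evaluated directly; the short sketch below implements the standard closed-form expression with assumed turbulence parameters (the alpha and beta values are illustrative only).

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.integrate import trapezoid

def gamma_gamma_pdf(I, alpha, beta):
    """PDF of the Gamma-Gamma irradiance model with fading parameters alpha, beta."""
    I = np.asarray(I, dtype=float)
    coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    return coef * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

# Example: moderate-turbulence parameters (assumed values).
I = np.linspace(0.01, 3.0, 300)
pdf = gamma_gamma_pdf(I, alpha=4.0, beta=1.9)
print("PDF integral over [0.01, 3]:", round(float(trapezoid(pdf, I)), 3))  # should be close to 1
```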

Keywords: free space optical, multiple input multiple output, M-ary pulse position modulation, multihop, decode and forward, symbol error rate, gamma-gamma channel

Procedia PDF Downloads 181
3072 A Subband BSS Structure with Reduced Complexity and Fast Convergence

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation method is proposed that uses a non-uniform filter bank and a novel normalisation. This method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Recently, adaptive subband schemes have been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each subband than the full-bandwidth input signal, which promotes better rates of convergence.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 556
3071 Number of Parameters of Anantharam's Model with Single-Input Single-Output Case

Authors: Kazuyoshi Mori

Abstract:

In this paper, we consider the parametrization of Anantharam's model within the framework of the factorization approach. In the parametrization, we investigate the number of parameters required by Anantharam's model. We consider single-input single-output systems. The investigation identifies three cases: (1) there exist plants which require only one parameter, (2) there exist plants which require two parameters, and (3) the number of parameters is at most three.

Keywords: linear systems, parametrization, coprime factorization, number of parameters

Procedia PDF Downloads 192
3070 Decision Support System for Optimal Placement of Wind Turbines in Electric Distribution Grid

Authors: Ahmed Ouammi

Abstract:

This paper presents an integrated decision framework to support decision makers in the selection and optimal allocation of wind power plants in the electric grid. The developed approach intends to maximize the benefit related to the project investment during the planning period. The proposed decision model considers the main cost components, meteorological data, environmental impacts, operation and regulation constraints, and territorial information. The decision framework is expressed as a stochastic constrained optimization problem with the aim of identifying the suitable locations and the related optimal wind turbine technology, considering the operational constraints and maximizing the benefit. The developed decision support system is applied to a case study to demonstrate and validate its performance.

Keywords: decision support systems, electric power grid, optimization, wind energy

Procedia PDF Downloads 132
3069 Urban Governance in Major Development Projects: Challenges, Issues and Constraints - Case of Constantine

Authors: Chouabbia Khedidja, Lazri Youcef, Mouhoubi Nedjima

Abstract:

With the ambition to break into the ranks of international metropolises, Constantine, a regional metropolis of eastern Algeria, is facing multiple challenges, divided between responding to the urban crisis plaguing the city and creating territorial attractiveness in the metropolisation process. This ambition cannot be achieved under conditions of poor governance and lack of cooperation, especially between the actors involved in major development projects, the latter regarded as carriers of change and hope that make the city more attractive and pleasant. Thus, governance, or good governance, has become not only a necessity but also a challenge for the city of Constantine. Through the example of Constantine, we analyze the challenges facing a metropolis, among them urban governance, and the constraints that affect the smooth running of major development projects when governance is missing or inoperative.

Keywords: urban governance, metropolis, big development project, actors, Constantine

Procedia PDF Downloads 438
3068 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they can allow faults to be more clearly discriminated from environmental disturbances.
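
A toy version of the idea, reduced to a single first-order drive model rather than the rover model used in the paper, is sketched below: the input recovered by inverting the model is compared with the commanded input, and an actuator fault appears as a step in the input residual. All numbers are illustrative assumptions.

```python
import numpy as np

# Toy longitudinal model of a rover wheel drive: m * dv/dt = u - c * v
m, c, dt = 20.0, 5.0, 0.01
t = np.arange(0.0, 10.0, dt)

u_cmd = np.full_like(t, 15.0)          # commanded drive force [N]
u_act = u_cmd.copy()
u_act[t >= 5.0] *= 0.6                 # actuator fault: 40 % loss of effort at t = 5 s

# Forward simulation of the faulty vehicle (explicit Euler).
v = np.zeros_like(t)
for k in range(1, t.size):
    v[k] = v[k - 1] + dt * (u_act[k - 1] - c * v[k - 1]) / m

# Inverse simulation: recover the input that must have produced the measured
# velocity, then compare it with the commanded input.
u_inv = m * np.gradient(v, dt) + c * v
input_residual = u_cmd - u_inv

i_before, i_after = int(round(2.5 / dt)), int(round(7.5 / dt))
print(f"input residual before fault: {input_residual[i_before]:+.2f} N")
print(f"input residual after  fault: {input_residual[i_after]:+.2f} N")
```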

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 281
3067 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables which depend on ground behavior are required for geotechnical analyses. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes on the system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method to the reliability analysis of deep excavations is investigated through three deep excavation projects which were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered as the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
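
The belief/plausibility bookkeeping behind the random set approach can be sketched with two uncertain inputs, each described by two focal intervals with basic probability assignments; the response function, intervals and masses below are invented placeholders, not data from the monitored projects.

```python
import itertools

# Focal intervals and basic probability assignments for two inputs
# (e.g. a soil stiffness E and a friction angle phi), all numbers assumed.
E_sets   = [((30e6, 60e6), 0.6), ((45e6, 80e6), 0.4)]       # [Pa]
phi_sets = [((28.0, 34.0), 0.7), ((30.0, 38.0), 0.3)]       # [deg]

def wall_displacement(E, phi):
    """Placeholder response: horizontal displacement of the excavation top [mm]."""
    return 4.0e9 / E - 0.5 * (phi - 25.0)

threshold = 100.0   # admissible displacement [mm]
belief = plausibility = 0.0

for (E_lo, E_hi), mE in E_sets:
    for (p_lo, p_hi), mP in phi_sets:
        mass = mE * mP
        # Response bounds over the focal box, taken from its corners
        # (adequate here because the response is monotone in both inputs).
        corners = [wall_displacement(E, p)
                   for E, p in itertools.product((E_lo, E_hi), (p_lo, p_hi))]
        lo, hi = min(corners), max(corners)
        if hi <= threshold:
            belief += mass          # the whole focal element satisfies the limit
        if lo <= threshold:
            plausibility += mass    # part of the focal element may satisfy it

print(f"Belief(displacement <= {threshold} mm)       = {belief:.2f}")
print(f"Plausibility(displacement <= {threshold} mm) = {plausibility:.2f}")
```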

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 247
3066 Entrepreneurship under the Effect of Information Technology

Authors: Mohammad Hadi Khorashadi Zadeh ‎

Abstract:

An entrepreneur is a manager or the owner of a commercial company who creates resources and money through risk-taking and initiative. The netpreneur has the capability to run an online business; all that is needed is connectivity. An entrepreneur, as long as he has a service which the market demands, can set up a feasible and viable trade with his intellectual capital as the principal input and the connectivity infrastructure as the only physical input. The internet is possibly the most significant revolution in science and technology that our generation could imagine. It has brought various benefits to society, culture, economics and politics. The entrepreneur is a premium member of the community; she/he provides services to the society and community, including employment.

Keywords: entrepreneur, Netpreneur, intellectual capital, infrastructure

Procedia PDF Downloads 295
3065 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting PM₂.₅ concentration in UB is of great significance for the health of the local people and environmental management. As of yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we proposed two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by using the AOD input was tested, and the performance of the models was evaluated using mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement of the prediction accuracy of the CNN-LSTM model was particularly enhanced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while the prediction accuracy of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB: Bayes-LSTM and CNN-LSTM. This work pioneers the use of AOD data from H8 and demonstrates that including AOD input data improves the performance of the two proposed deep learning models.
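
A minimal PyTorch sketch of a CNN-LSTM of the general kind described is given below; the architecture, layer sizes and feature count are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Minimal CNN-LSTM for hourly PM2.5 forecasting (illustrative architecture only)."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution over the time axis extracts short-term patterns.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM captures longer temporal dependencies in the convolved features.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-hour PM2.5 concentration

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        z = self.conv(x.permute(0, 2, 1))  # -> (batch, 32, seq_len)
        z = z.permute(0, 2, 1)             # -> (batch, seq_len, 32)
        out, _ = self.lstm(z)
        return self.head(out[:, -1, :]).squeeze(-1)

# Shape check with dummy data: 24-hour windows of meteorology + AOD + past PM2.5.
model = CNNLSTM(n_features=6)
dummy = torch.randn(8, 24, 6)
print(model(dummy).shape)   # torch.Size([8])
```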

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 25
3064 Energy Management System Based on Voltage Fluctuations Minimization for Droop-Controlled Islanded Microgrid

Authors: Zahra Majd, Mohsen Kalantar

Abstract:

Power management and voltage regulation is one of the most important issues in microgrid (MG) control and scheduling. This paper proposes a multiobjective scheduling formulation that consists of active power costs, the summation of voltage fluctuations, and the technical constraints of the MG. Furthermore, load flow and reserve constraints are considered to achieve proper voltage regulation. A modified Jacobian matrix is presented for calculating voltage variations, and Monte Carlo simulation is used for generating and reducing scenarios. To convert the problem to a mixed-integer linear program, a linearization procedure for the nonlinear equations is presented. The proposed model is applied to a typical low-voltage MG, and two different cases are investigated. The results show the effectiveness of the proposed model.

Keywords: microgrid, energy management system, voltage fluctuations, modified Jacobian matrix

Procedia PDF Downloads 70
3063 Impedance Matching of Axial Mode Helical Antennas

Authors: Hossein Mardani, Neil Buchanan, Robert Cahill, Vincent Fusco

Abstract:

In this paper, we study the input impedance characteristics of axial mode helical antennas to find an effective way of matching them to 50 Ω. The study focuses on the important matching parameters, such as the wire diameter and the helix-to-ground-plane gap. It is intended that these parameters control the matching without detrimentally affecting the radiation pattern. Using transmission line theory, a simple broadband technique is proposed, which is applicable for perfect matching of antennas with similar design parameters. We provide design curves to help choose the proper dimensions of the matching section based on the antenna's unmatched input impedance. Finally, using the proposed technique, a 4-turn axial mode helix is designed at a 2.5 GHz centre frequency, and the measurement results of the manufactured antenna are included. This parametric study gives good insight into the input impedance characteristics of axial mode helical antennas, and the proposed impedance matching approach provides a simple, useful method for matching these types of antennas.
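
As a simple transmission-line-theory illustration (not the authors' specific matching section), a quarter-wave transformer sized for an assumed helix impedance can be checked over a band around 2.5 GHz:

```python
import numpy as np

Z0 = 50.0          # system impedance [ohm]
ZA = 140.0 + 0j    # assumed (mostly resistive) input impedance of the helix [ohm]

# Quarter-wave transformer: characteristic impedance of the matching section.
Zt = np.sqrt(Z0 * ZA.real)

def z_in(Zl, Zc, beta_l):
    """Input impedance of a lossless line of electrical length beta_l (radians)."""
    return Zc * (Zl + 1j * Zc * np.tan(beta_l)) / (Zc + 1j * Zl * np.tan(beta_l))

# Check the match over a band around the 2.5 GHz design frequency.
f0 = 2.5e9
for f in (2.2e9, 2.5e9, 2.8e9):
    beta_l = (np.pi / 2) * f / f0          # section is a quarter wave at f0
    Zin = z_in(ZA, Zt, beta_l)
    gamma = (Zin - Z0) / (Zin + Z0)        # reflection coefficient at the feed
    print(f"{f/1e9:.1f} GHz: Zin = {Zin.real:.1f}{Zin.imag:+.1f}j ohm, |Gamma| = {abs(gamma):.2f}")
```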

Keywords: antenna, helix, helical, axial mode, wireless power transfer, impedance matching

Procedia PDF Downloads 285
3062 A Case Study of Bee Algorithm for Ready Mixed Concrete Problem

Authors: Wuthichai Wongthatsanekorn, Nuntana Matheekrieangkrai

Abstract:

This research proposes the Bee Algorithm (BA) to optimize the Ready Mixed Concrete (RMC) truck scheduling problem from a single batch plant to multiple construction sites. This problem is considered an NP-hard constrained combinatorial optimization problem. This paper provides the details of the RMC dispatching process and its related constraints. BA was then developed to minimize the total waiting time of RMC trucks while satisfying all constraints. The performance of BA is evaluated on two benchmark problems (3 and 5 construction sites) taken from previous research. The simulation results of BA are compared in terms of efficiency and accuracy with a Genetic Algorithm (GA), and for all problems the BA approach outperforms GA in terms of efficiency and accuracy in obtaining the optimal solution. Hence, the BA approach could be practically implemented to obtain the best schedule.

Keywords: bee colony optimization, ready mixed concrete problem, truck scheduling, multiple construction sites

Procedia PDF Downloads 362
3061 System Identification and Controller Design for a DC Electrical Motor

Authors: Armel Asongu Nkembi, Ahmad Fawad

Abstract:

The aim of this paper is to determine, in a concise way, the transfer function that characterizes a DC electrical motor with a helix. In practice, it can be obtained by applying a particular input to the system and then, based on the observation of its output, determining an approximation to the transfer function of the system. In our case, we use a step input and find the transfer function parameters that give the simulated first-order time response. The simulation of the system is done using MATLAB/Simulink. In order to determine the parameters, we assume a first-order system and use the Broida approximation to determine the parameters and then its Mean Square Error (MSE). Furthermore, we design a PID controller for the process, first in the continuous time domain, and tune it using the Ziegler-Nichols open-loop method. We then digitize the controller to obtain a digital controller, since most systems are implemented using computers, which are digital in nature.
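
A compact sketch of the identification-plus-tuning chain, using the commonly cited Broida 28%/40% crossing formulas and the Ziegler-Nichols open-loop PID rules on a simulated step response, is given below; the motor parameters are assumed, not those of the paper.

```python
import numpy as np

# Simulated step response of the motor (first order + small delay as ground truth).
K_true, T_true, L_true = 2.0, 0.8, 0.1        # gain, time constant [s], delay [s]
t = np.linspace(0.0, 6.0, 6001)
y = np.where(t < L_true, 0.0, K_true * (1.0 - np.exp(-(t - L_true) / T_true)))
du, dy = 1.0, y[-1]                           # step amplitude and final output change

# Broida approximation: use the 28 % and 40 % crossing times of the response.
t1 = t[np.searchsorted(y, 0.28 * dy)]
t2 = t[np.searchsorted(y, 0.40 * dy)]
K = dy / du
T = 5.5 * (t2 - t1)          # identified time constant
L = 2.8 * t1 - 1.8 * t2      # identified apparent delay
print(f"identified model: K={K:.2f}, T={T:.2f} s, L={L:.2f} s")

# Ziegler-Nichols open-loop (reaction-curve) PID tuning.
Kp = 1.2 * T / (K * L)
Ti = 2.0 * L
Td = 0.5 * L
print(f"PID settings: Kp={Kp:.2f}, Ti={Ti:.2f} s, Td={Td:.3f} s")
```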

Keywords: transfer function, step input, MATLAB, Simulink, DC electrical motor, PID controller, open-loop process, mean square error, digital controller, Ziegler-Nichols

Procedia PDF Downloads 25
3060 Municipalities as Enablers of Citizen-Led Urban Initiatives: Possibilities and Constraints

Authors: Rosa Nadine Danenberg

Abstract:

In recent years, bottom-up urban development has started growing as an alternative to conventional top-down planning. Citizens and communities initiate small-scale interventions in large proportions, which suddenly seem to form a trend. As a result, more and more cities are witnessing not only the growth of but also an interest in these initiatives, as they bear the potential to reshape urban spaces. Such alternative city-making efforts cause new dynamics in urban governance, with inevitable consequences for controlled city planning and its administration. The emergence of enabling relationships between top-down and bottom-up actors signals an increasingly common urban practice. Various case studies show that an enabling relationship is possible, yet how it can be optimally realized remains rather underexamined. Therefore, the seemingly growing worldwide phenomenon of 'municipal bottom-up urban development' necessitates an adequate governance structure. As such, the aim of this research is to contribute knowledge on how municipalities can enable citizen-led urban initiatives from a governance innovation perspective. Empirical case-study research in Stockholm and Istanbul, derived from interviews with founders of four citizen-led urban initiatives and one municipal representative in each city, provided valuable insights into possibilities and constraints for enabling practices. On the one hand, diverging outcomes emphasize the extreme oppositional features of the two cases (Stockholm and Istanbul). Firstly, the two cities' characteristics are drastically different. Secondly, the ideologies and motifs for the initiatives to emerge vary widely. Thirdly, the major constraints for citizen-led urban initiatives to relate to the municipality are considerably different. Two types of municipal organizational structures produce different underlying mechanisms which demonstrate the constraints. The first organizational structure is steered by bureaucracy (Stockholm). It produces an administrative division that brings up constraints such as the lack of responsibility, transparency and continuity by municipal representatives. The second structure is dominated by municipal politics and governmental hierarchy (Istanbul). It produces informality, lack of transparency and a fragmented civil society. In order to cope with the constraints produced by both types of organizational structures, the initiatives have adjusted their organization to the municipality's underlying structures. On the other hand, this paper has in fact also come to a rather unifying conclusion. Interestingly, the suggested possibilities for an enabling relationship underline converging new urban governance arrangements. This could imply that for the two varying types of municipal organizational structures there is an appropriate governance structure, namely the combination of a neighborhood council with a municipal guide, with allowance for the initiatives to adopt a politicizing attitude; this combination is found to coincide across the two cases, and it appears key to overcoming the varying constraints. A municipal guide steers the initiatives through bureaucratic struggles, supported by co-production methods, while balancing out municipal politics. Next, a neighborhood council that is politically neutral and run by local citizens can function as an umbrella for citizen-led urban initiatives. What is crucial is that it should cater for a more entangled relationship between municipalities and initiatives, with enhanced involvement of the initiatives in decision-making processes and limited influence of the prevailing constraints pointed out in this research.

Keywords: bottom-up urban development, governance innovation, Istanbul, Stockholm

Procedia PDF Downloads 199
3059 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging

Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati

Abstract:

Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. Materials and Methods: the idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = argmin_q D(y, ỹ) (1), where D is a loss function which typically contains a discrepancy (data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y; ỹ is the computable approximation of y, which may be obtained from a neural network or in a classic way via the resolution of a PDE with given input coefficients (the forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: (i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks, a procedure that forms priors on the solution (Fig. 1); (ii) we use regularization procedures of the type q̂* = argmin_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1). Discussion and Conclusion: DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
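
A minimal sketch of the variational formulation in (1) with a fixed Tikhonov regularizer R(q), using a toy linear forward operator in place of the PDE and omitting the learned priors described above, is given below.

```python
import torch

torch.manual_seed(0)

# Toy linear forward operator standing in for the discretized DOT forward model:
# y = A q + noise, with q the optical coefficients and y the boundary measurements.
n_meas, n_coef = 60, 120
A = torch.randn(n_meas, n_coef)
q_true = torch.zeros(n_coef)
q_true[40:55] = 1.0                              # a localized inclusion
y = A @ q_true + 0.01 * torch.randn(n_meas)

# Variational reconstruction: q* = argmin_q D(y, A q) + lambda * R(q),
# here a data-fidelity term plus simple Tikhonov regularization.
q = torch.zeros(n_coef, requires_grad=True)
opt = torch.optim.Adam([q], lr=0.05)
lam = 1e-2
for step in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ q - y) ** 2) + lam * torch.sum(q ** 2)
    loss.backward()
    opt.step()

print(f"relative reconstruction error: {torch.norm(q - q_true) / torch.norm(q_true):.2f}")
```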

Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization

Procedia PDF Downloads 55
3058 2L1, a Bridge between L1 and L2

Authors: Elena Ginghina

Abstract:

There are two major categories of language acquisition: first and second language acquisition, which differ in their learning process and in their ultimate attainment. However, in the case of a bilingual child, one of the languages he grows up with gradually receives the features of a second language. This phenomenon characterizes successive first language acquisition, when the initial state of the child is already marked by another language. Nevertheless, the dominance of the languages can change throughout life if the exposure to language and the quality of the input are better in the 2L1. Related to the exposure to language and the quality of the input, there are cases, even in simultaneous bilingualism, where the two languages, although learned from birth on, differ from one another at some point. This paper aims to examine what makes a 2L1 become a second language and under what circumstances an L2 learner can reach a native or near-native speaker level.

Keywords: bilingualism, first language acquisition, native speakers of German, second language acquisition

Procedia PDF Downloads 547
3057 Thermodynamic Analysis of a Vapor Absorption System Using Modified Gouy-Stodola Equation

Authors: Gulshan Sachdeva, Ram Bilash

Abstract:

In this paper, the exergy analysis of a vapor absorption refrigeration system using LiBr-H2O as the working fluid is carried out with the modified Gouy-Stodola approach rather than the classical Gouy-Stodola equation, and the effect of varying input parameters on the performance of the system is also studied. As the modified approach uses the concept of effective temperature, mathematical expressions for the effective temperature have been formulated and calculated for each component of the system. Various constraints and equations are used to develop a program in EES to solve these equations. The main aim of this analysis is to determine the performance of the system and the components having the major irreversible losses. Results show that the exergy destruction rate is considerable in the absorber and generator, followed by the evaporator and condenser. There is an increase in exergy destruction in the generator, absorber and condenser and a decrease in the evaporator with the modified approach as compared to the conventional approach. The value of exergy determined by the modified Gouy-Stodola equation deviates by a maximum of 26%, in the generator, from the exergy calculated by the classical Gouy-Stodola method.
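
For orientation, the classical Gouy-Stodola relation used as the baseline here is simply exergy destruction = T0 x entropy generation; a minimal sketch with assumed component data (not the paper's EES model or its effective-temperature formulation) is given below.

```python
# Classical Gouy-Stodola relation: exergy destruction = T0 * entropy generation.
T0 = 298.15   # dead-state (ambient) temperature [K]

def exergy_destruction(m_dot, s_in, s_out, Q_dot, T_boundary):
    """Entropy balance of a steady-flow component with one inlet, one outlet and
    one heat interaction at boundary temperature T_boundary (assumed data)."""
    S_gen = m_dot * (s_out - s_in) - Q_dot / T_boundary   # [kW/K]
    return T0 * S_gen                                     # [kW]

# Illustrative numbers for an absorber-like component (not taken from the paper):
X_dest = exergy_destruction(m_dot=0.05, s_in=0.42, s_out=0.55, Q_dot=1.2, T_boundary=308.0)
print(f"exergy destruction ~ {X_dest:.2f} kW")
```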

Keywords: exergy analysis, Gouy-Stodola, refrigeration, vapor absorption

Procedia PDF Downloads 380
3056 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context

Authors: Rit M., Girard R., Villot J., Thorel M.

Abstract:

In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies (Energy Performance Certificates F&G)). This differs from traditional approaches that focus only on a few buildings or archetypes. This model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets or the financial support currently available to households and the remaining costs. In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
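
A toy MILP of the kind the abstract describes, selecting renovation packages to meet an energy-saving target under an annual renovation cap, can be sketched with PuLP; the building data, packages and limits below are invented for illustration.

```python
import pulp

# Toy building stock: per-building renovation packages with cost [kEUR] and
# annual energy saving [MWh] (illustrative numbers, not from the study).
packages = {
    "B1": {"light": (12, 3.0), "deep": (40, 9.5)},
    "B2": {"light": (10, 2.2), "deep": (35, 8.0)},
    "B3": {"light": (15, 3.5), "deep": (50, 12.0)},
}
saving_target = 14.0     # MWh/year to be saved across the stock
max_renovations = 2      # territorial constraint: renovations allowed this year

prob = pulp.LpProblem("renovation_planning", pulp.LpMinimize)
x = {(b, p): pulp.LpVariable(f"x_{b}_{p}", cat="Binary")
     for b, opts in packages.items() for p in opts}

# Objective: minimize total renovation cost.
prob += pulp.lpSum(packages[b][p][0] * x[b, p] for b, p in x)

# At most one package per building.
for b, opts in packages.items():
    prob += pulp.lpSum(x[b, p] for p in opts) <= 1

# Reach the energy-saving target and respect the annual renovation capacity.
prob += pulp.lpSum(packages[b][p][1] * x[b, p] for b, p in x) >= saving_target
prob += pulp.lpSum(x[b, p] for b, p in x) <= max_renovations

prob.solve(pulp.PULP_CBC_CMD(msg=0))
chosen = [(b, p) for (b, p), var in x.items() if var.value() == 1]
print("selected renovations:", chosen, "| total cost:", pulp.value(prob.objective), "kEUR")
```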

Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology

Procedia PDF Downloads 48
3055 Strategic Citizen Participation in Applied Planning Investigations: How Planners Use Etic and Emic Community Input Perspectives to Fill-in the Gaps in Their Analysis

Authors: John Gaber

Abstract:

Planners regularly use citizen input as empirical data to help them better understand community issues they know very little about. This type of community data is based on the lived experiences of local residents and is known as "emic" data. What is becoming more common practice for planners is their use of data from local experts and stakeholders (known as "etic" data or the outsider perspective) to help them fill in the gaps in their analysis of applied planning research projects. Utilizing international Health Impact Assessment (HIA) data, I look at who planners invite to their citizen input investigations. Research presented in this paper shows that planners access a wide range of emic and etic community perspectives in their search for the “community’s view.” The paper concludes with how planners can chart out a new empirical path in their execution of emic/etic citizen participation strategies in their applied planning research projects.

Keywords: citizen participation, emic data, etic data, Health Impact Assessment (HIA)

Procedia PDF Downloads 469
3054 CIPP Evaluation of Online Broadcasting of Suan Dusit Rajabhat University

Authors: Somkiat Korbuakaew, Winai Mankhatitham, Anchan Chongcharoen, Wichar Kunkum

Abstract:

The objective of this research is to evaluate the online broadcasting of Suan Dusit Rajabhat University using the CIPP model. The evaluation was separated into 4 parts: context factor, input factor, process factor and product factor. The sample group in this research consisted of 399 participants who were university executives, staff and students. Questionnaires and interviews were the research tools. Data were analyzed by computer program. The statistics used were percentage, mean, and standard deviation. Findings are as follows: 1. Context factor: the context factor in this research was the university's executives, staff and students; the study shows that they would like online broadcasting to be used as an educational tool and for IT development. 2. Input factor: the input factor was the modern IT equipment needed to create interesting teaching materials and develop education in general. 3. Process factor: regarding the publication of the program, it should be promoted more among students and should be more objective. 4. Product factor: the product factor in this study was the purpose of the program, namely that it expands the educational channel for students.

Keywords: evaluation, project, internet, online broadcasting

Procedia PDF Downloads 502
3053 Potentials of Ecotourism to Nature Conservation and Improvement of Livelihood of People around Ayikunnugba Waterfalls, Oke-Ila Orangun, Nigeria

Authors: Funmilola Ajani, I. A. Ayodele, O.A. Filade

Abstract:

Tourism has direct, indirect and induced impacts on economic development, and the industry is one of the most crucial tradable sectors in the world. The study was therefore carried out to assess the potentials of ecotourism for nature conservation and its contributions to the improvement of the livelihood of the Oke-Ila Orangun community. One hundred and fifty residents were chosen by stratified random sampling as respondents. Respondents' awareness of ecotourism was assessed using an 8-point scale, while respondents' acceptance of ecotourism was assessed using a 14-point scale. Contributions to the improvement of residents' livelihoods, the constraints to the development of the waterfall perceived by residents, and socio-economic variables, among others, were also obtained. In addition, an in-depth interview was conducted with the king of Ayikunnugba. The data were analyzed using descriptive statistics such as frequency count, mean and percentages. Correlation analysis was used to determine whether or not a relationship exists between two variables at the 0.05 level of significance. Respondents' perception based on awareness of ecotourism and its contributions to livelihood development was high (78.3%). A significant relationship exists between acceptance of ecotourism and its contributions to people's livelihood. Also, the relationship between the constraints encountered by respondents and the contributions to people's livelihood is highly significant (r = 0.546; P = 0.00). The majority (71.3%) of the respondents believed that the development of the area will not lead to environmental pollution. A Public-Private Partnership (PPP) is therefore recommended to enable the recreation site to meet international standards in terms of development and management.

Keywords: Ayikunnugba waterfall, ecotourism constraints, nature conservation, awareness

Procedia PDF Downloads 134
3052 Gray Level Image Encryption

Authors: Roza Afarin, Saeed Mozaffari

Abstract:

The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Then, pixel values are changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population in the GA includes the rows and columns of the input image. Instead of subjective selection of parents from this initial population, a random generator with a predefined key is utilized, since it is necessary to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average of transitions from 0 to 1 in the LBP image and histogram uniformity in the modification and diffusion phases, respectively. The randomness of the encrypted image is measured by entropy, correlation coefficients and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.
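
For reference, the LBP operator and the entropy measure mentioned above can be computed as in the sketch below; this is a generic 3x3 LBP and Shannon entropy, not the authors' GA-based encryption pipeline.

```python
import numpy as np

def lbp_image(img: np.ndarray) -> np.ndarray:
    """Basic 3x3 Local Binary Pattern: each pixel is encoded by comparing its
    8 neighbours to the centre value (1 if neighbour >= centre, else 0)."""
    img = img.astype(np.int32)
    centre = img[1:-1, 1:-1]
    out = np.zeros_like(centre, dtype=np.int32)
    # Neighbour offsets in clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = img[1 + dr : img.shape[0] - 1 + dr, 1 + dc : img.shape[1] - 1 + dc]
        out += (neighbour >= centre).astype(np.int32) << bit
    return out.astype(np.uint8)

def entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image, used to assess encryption randomness."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

gray = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
print("LBP sample:", lbp_image(gray)[0, :5], "| entropy:", round(entropy(gray), 3))
```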

Keywords: correlation coefficients, genetic algorithm, image encryption, image entropy

Procedia PDF Downloads 305
3051 Singular Value Decomposition Based Optimisation of Design Parameters of a Gearbox

Authors: Mehmet Bozca

Abstract:

Singular value decomposition based optimisation of the geometric design parameters of a 5-speed gearbox is studied. During the optimisation, a four-degree-of-freedom torsional vibration model of the pinion gear-wheel gear system is obtained, and the minimum singular value of the transfer matrix is considered as the objective function. The computational cost of the associated singular value problems is quite low for this objective function, because it is only necessary to compute the largest and smallest singular values (µmax and µmin), which can be achieved by using selective eigenvalue solvers; the other singular values are not needed. The design parameters are optimised under several constraints that include the bending stress, the contact stress and a constant distance between gear centres. Thus, by optimising the geometric parameters of the gearbox, such as the module, the number of teeth and the face width, it is possible to obtain a lightweight gearbox structure. It is concluded that all the optimised geometric design parameters also satisfy all constraints.

Keywords: singular value, optimisation, gearbox, torsional vibration

Procedia PDF Downloads 337
3050 Particle Swarm Optimization Based Method for Minimum Initial Marking in Labeled Petri Nets

Authors: Hichem Kmimech, Achref Jabeur Telmoudi, Lotfi Nabli

Abstract:

The estimation of the minimum initial marking (MIM) is a crucial problem in labeled Petri nets. In the case of multiple choices, the search for the initial marking leads to an optimization problem of minimum resource allocation with two constraints. The first concerns the firing sequence, which must be legal on the initial marking with respect to the firing vector. The second deals with the total number of tokens, which must be minimal. In this article, the MIM problem is solved by the meta-heuristic particle swarm optimization (PSO). The proposed approach exploits the advantages of PSO to satisfy the two previous constraints and find all possible combinations of the minimum initial marking with the best computing time. This method, more efficient than conventional ones, has an excellent impact on the resolution of the MIM problem. We prove the effectiveness of our approach through a set of definitions, lemmas and examples.
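
A minimal particle swarm optimizer, applied to a toy token-minimization objective with a penalty standing in for the legal-firing-sequence constraint (the Petri net itself is not modeled), is sketched below; all problem data are assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a box-constrained minimization problem."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_val.argmin()].copy()                 # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy stand-in for the MIM objective: total number of tokens in a 3-place marking,
# with a penalty when the marking cannot fire a required transition (assumption).
def objective(m):
    m = np.round(m)                                       # integer marking
    penalty = 100.0 * max(0.0, 2.0 - (m[0] + m[2]))       # need >= 2 tokens in p1 + p3
    return m.sum() + penalty

best, val = pso(objective, bounds=[(0, 5)] * 3)
print("best marking:", np.round(best).astype(int), "objective:", val)
```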

Keywords: marking, production system, labeled Petri nets, particle swarm optimization

Procedia PDF Downloads 152
3049 Optimization of Roster Construction In Sports

Authors: Elijah Cavan

Abstract:

In Major League Sports (MLB, NBA, NHL, NFL), it is the Front Office Staff (FOS) who make decisions about who plays for their respective team. The FOS bear the brunt of the responsibility for acquiring players through drafting, trading and signing players in free agency, while typically contending with maximum roster salary constraints. The players themselves are volatile assets of these teams: their value fluctuates with age and performance. A simple comparison can be made when viewing players as assets; the problem here is similar to that of optimizing an investment portfolio. The goal is ultimately to maximize periodic returns while tolerating a fixed risk (degree of uncertainty / potential loss). Each franchise may value assets differently, and some may only tolerate lower risk levels; these are examples of factors that introduce additional constraints into the model. In this talk, we detail the mathematical formulation of this problem as a constrained optimization problem, which can be solved with classical machine learning methods but is also well posed as a problem to be solved on quantum computers.
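
A small simulated-annealing sketch of the roster problem, maximizing a player-value metric under a salary cap with invented player data, is given below; it illustrates the constrained formulation rather than any franchise's actual valuation model.

```python
import math
import random

random.seed(1)

# Candidate players: (expected value metric, salary in $M), illustrative numbers.
players = [(7.1, 9.5), (6.4, 8.0), (5.9, 6.5), (5.2, 5.0), (4.8, 4.2),
           (4.5, 3.8), (4.1, 3.0), (3.6, 2.5), (3.2, 2.0), (2.9, 1.5)]
CAP, ROSTER_SIZE = 25.0, 5      # salary cap [$M] and required roster size

def score(roster):
    """Total expected value, heavily penalized if the cap or roster size is violated."""
    value = sum(players[i][0] for i in roster)
    salary = sum(players[i][1] for i in roster)
    penalty = 50.0 * max(0.0, salary - CAP) + 50.0 * abs(len(roster) - ROSTER_SIZE)
    return value - penalty

# Simulated annealing: swap one rostered player for one free agent per step.
current = set(random.sample(range(len(players)), ROSTER_SIZE))
best, best_score = set(current), score(current)
T = 5.0
for step in range(5000):
    out_p = random.choice(sorted(current))
    in_p = random.choice([i for i in range(len(players)) if i not in current])
    candidate = (current - {out_p}) | {in_p}
    delta = score(candidate) - score(current)
    if delta > 0 or random.random() < math.exp(delta / T):
        current = candidate
    if score(current) > best_score:
        best, best_score = set(current), score(current)
    T *= 0.999                     # geometric cooling schedule

salary = sum(players[i][1] for i in best)
print("best roster:", sorted(best), "value:", round(best_score, 2), "salary:", salary)
```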

Keywords: optimization, financial mathematics, sports analytics, simulated annealing

Procedia PDF Downloads 99
3048 A Review on Robot Trajectory Optimization and Process Validation through off-Line Programming in Virtual Environment Using Robcad

Authors: Ashwini Umale

Abstract:

Trajectory planning and optimization is a fundamental problem in articulated robotics. It is often viewed as a two-phase problem of initial feasible path planning around obstacles and subsequent optimization of a trajectory satisfying dynamical constraints. An optimized trajectory of a multi-axis robot is important and directly influences the performance of the executed task. Optimal is defined to be the minimum time to transition from the current speed to the set speed. Optimization of the trajectory in a virtual environment explores the most suitable way to transfer robot motion from the virtual environment to the real environment. This paper aims to review the research on trajectory optimization in a virtual environment using the simulation software Robcad. Improvements are to be expected in trajectory optimization to generate smooth and collision-free trajectories while minimizing the overall robot cycle time.

Keywords: trajectory optimization, forward kinematics and reverse kinematics, dynamic constraints, robcad simulation software

Procedia PDF Downloads 482