Search results for: ecological planning method
21162 Approximations of Fractional Derivatives and Its Applications in Solving Non-Linear Fractional Variational Problems
Authors: Harendra Singh, Rajesh Pandey
Abstract:
The paper presents a numerical method based on the operational matrix of integration and the Rayleigh-Ritz method for the solution of a class of non-linear fractional variational problems (NLFVPs). Chebyshev polynomials of the first kind are used for the construction of the operational matrix. Using the operational matrix and the Rayleigh-Ritz method, the NLFVP is converted into a system of non-linear algebraic equations, and by solving these equations an approximate solution of the NLFVP is obtained. Convergence analysis of the proposed method is provided. Numerical experiments are carried out to show the applicability of the proposed numerical method. The obtained numerical results are compared with the exact solution and with the solution obtained using Chebyshev polynomials of the third kind. Further, the results are shown graphically for the different fractional orders involved in the problems. Keywords: non-linear fractional variational problems, Rayleigh-Ritz method, convergence analysis, error analysis
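The key construction described here, an operational matrix that maps Chebyshev coefficients of a function to Chebyshev coefficients of its integral, can be sketched numerically. The following is a minimal illustration assuming NumPy, working on the standard interval [-1, 1] (the paper presumably uses shifted polynomials) and omitting the Rayleigh-Ritz stage; the function names are our own.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def integration_operational_matrix(n):
    """Operational matrix P of integration for the first-kind Chebyshev basis T_0..T_{n-1}
    on [-1, 1]: integral_{-1}^{x} T_k(t) dt ~= sum_j P[k, j] T_j(x).
    The degree-n term produced by integrating T_{n-1} is truncated."""
    P = np.zeros((n, n))
    for k in range(n):
        e_k = np.zeros(n)
        e_k[k] = 1.0
        anti = C.chebint(e_k, lbnd=-1.0)  # antiderivative coefficients (length n + 1)
        P[k, :] = anti[:n]                # truncate to the working basis
    return P

# Quick check: integrate exp(x) through the matrix and compare with the exact antiderivative.
n = 8
P = integration_operational_matrix(n)
x = np.linspace(-1, 1, 200)
c = C.chebfit(x, np.exp(x), n - 1)        # exp(x) expressed in the Chebyshev basis
c_int = P.T @ c                           # coefficients of its integral from -1
print(np.max(np.abs(C.chebval(x, c_int) - (np.exp(x) - np.exp(-1)))))  # small residual
```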
Procedia PDF Downloads 299
21161 Leadership in the Emergence Paradigm: A Literature Review on the Medusa Principles
Authors: Everard van Kemenade
Abstract:
Many quality improvement activities are planned. Leaders are strongly involved in missions, visions and strategic planning. They use, consciously or unconsciously, the PDCA cycle, also known as the Deming cycle. After the planning, the plans are carried out and the results or effects are measured. If the results show that the goals in the plan have not been achieved, adjustments are made in the next plan or in the execution of the processes. Then, the cycle is run through again. Traditionally, the PDCA cycle is advocated as a means to an end. However, PDCA is especially fit for planned, ordered, certain contexts. It fits with the empirical and referential quality paradigm. For uncertain, unordered, unplanned processes, something else might be needed instead of Plan-Do-Check-Act. Due to the complexity of our society, the influence of the context, and the uncertainty in our world nowadays, not every activity can be planned anymore. At the same time, organisations need to be more innovative than ever. That provides leaders with ‘wicked tendencies’. However, that raises the question of how one can innovate without being able to plan. Complexity science studies the interactions of a diverse group of agents that bring about change in times of uncertainty, e.g. when radical innovation is co-created. This process is called emergence. This research study explores the role of leadership in the emergence paradigm. The aim of the article is to study the way that leadership can support the emergence of innovation in a complex context. First, clarity is given on the concepts used in the research question: complexity, emergence, innovation and leadership. Thereafter, a literature search is conducted to answer the research question. The topics ‘emergent leadership’ and ‘complexity leadership’ were chosen for an exploratory search in Google and Google Scholar using the berry-picking method. The exclusion criterion was emergence in disciplines other than organizational development, or emergence in the mere sense of ‘arising’. The literature search gave 45 hits. Twenty-seven articles were excluded after reading the title and abstract because they did not research the topic of emergent leadership and complexity. After reading the remaining articles in full, one more was excluded because it used emergent in the limited meaning of ‘arising’, and eight more were excluded because the topic did not match the research question of this article. That brings the total of the search to 17 articles. The useful conclusions from the articles are merged and grouped together under overarching topics, using thematic analysis. The findings are that five topics prevail when looking at possibilities for leadership to facilitate innovation: enabling, sharing values, dreaming, interacting, and context sensitivity and adaptivity. Together they form, in Dutch, the acronym Medusa. Keywords: complexity science, emergence, leadership in the emergence paradigm, innovation, the Medusa principles
Procedia PDF Downloads 31
21160 An Approximation Method for Exact Boundary Controllability of Euler-Bernoulli
Authors: A. Khernane, N. Khelil, L. Djerou
Abstract:
The aim of this work is to study the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the Euler-Bernoulli beam equation. This implementation can be difficult, depending on the problem under consideration (geometry, control, and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation, and this idea is developed in the study. As a first step, we characterize the solution by a minimization principle; secondly, we propose a method for its resolution to approximate the control steering the considered system to rest at time T. Keywords: boundary control, exact controllability, finite difference methods, functional optimization
Procedia PDF Downloads 348
21159 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method
Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage
Abstract:
Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a method for online ECM parameter identification using a continuous-time (CT) estimation approach. The CT estimation method has several advantages over discrete-time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium-ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least squares method. The effectiveness of the presented method for online SOC estimation is also verified on test data. Keywords: electric circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square
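For context, the conventional discrete-time baseline mentioned above, recursive least squares applied to an ECM written in regression form, can be sketched in a few lines. This is an illustrative sketch assuming NumPy and a generic first-order RC model discretised to ARX form; it is not the paper's continuous-time linear integral filter.

```python
import numpy as np

def rls_identify(phi, y, lam=0.999):
    """Recursive least squares with forgetting factor lam: y_k ~= phi_k . theta.
    Returns the history of parameter estimates."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e3
    history = []
    for phi_k, y_k in zip(phi, y):
        K = P @ phi_k / (lam + phi_k @ P @ phi_k)
        theta = theta + K * (y_k - phi_k @ theta)
        P = (P - np.outer(K, phi_k @ P)) / lam
        history.append(theta.copy())
    return np.array(history)

# First-order RC ECM written in ARX form (an illustrative discretisation):
#   V_k = a1*V_{k-1} + b0*I_k + b1*I_{k-1} + c,  regressor phi_k = [V_{k-1}, I_k, I_{k-1}, 1]
# with V, I the measured terminal voltage and current arrays:
# phi = np.column_stack([V[:-1], I[1:], I[:-1], np.ones(len(V) - 1)])
# theta_history = rls_identify(phi, V[1:])
```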
Procedia PDF Downloads 384
21158 A Risk-Based Approach to Construction Management
Authors: Chloe E. Edwards, Yasaman Shahtaheri
Abstract:
Risk management plays a fundamental role in project planning and delivery. The purpose of incorporating risk management into project management practices is to identify and address uncertainties related to key project-related activities. The uncertainties, known as risk events, can relate to quantifiable project deliverables and are often measured by their impact on project schedule, cost, or the environment. Risk management should be incorporated as an iterative practice throughout the planning, execution, and commissioning phases of a project. This paper specifically examines how risk management contributes to effective project planning and delivery through a case study of a transportation project. The case study focused solely on impacts to the project schedule regarding three milestones: readiness for delivery, readiness for testing and commissioning, and completion of the facility. The case study followed the ISO 31000: Risk Management – Guidelines. The key factors outlined by these guidelines include understanding the scope and context of the project; conducting a risk assessment including identification, analysis, and evaluation; and lastly, risk treatment through mitigation measures. This process requires continuous consultation with subject matter experts and monitoring to iteratively update the risks accordingly. The risk identification process led to a total of fourteen risks related to design, permitting, construction, and commissioning. The analysis involved running 1,000 Monte Carlo simulations through @RISK 8.0 Industrial software to determine potential milestone completion dates based on the project baseline schedule. These dates include the best case, most likely case, and worst case to provide an estimated delay for each milestone. Evaluation of these results provided insight into which risks were the highest contributors to the projected milestone completion dates. Based on the analysis results, the risk management team was able to provide recommendations for mitigation measures to reduce the likelihood of risks occurring. The risk management team also provided recommendations for managing the identified risks and project activities moving forward to meet the most likely or best-case milestone completion dates. Keywords: construction management, Monte Carlo simulation, project delivery, risk assessment, transportation engineering
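The Monte Carlo step described here can be illustrated with a small open-source sketch in place of the @RISK workflow. The risk events, probabilities and delay distributions below are invented placeholders (the paper's fourteen risks are not reproduced in the abstract); only the mechanics of sampling risk occurrences and triangular delays over 1,000 runs and reading off percentile dates mirror the description.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000  # number of Monte Carlo runs, as in the abstract

# Hypothetical risk events: (probability of occurring, (min, most likely, max) delay in days).
risks = [
    (0.30, (5, 10, 30)),    # e.g. a permitting delay
    (0.20, (10, 20, 60)),   # e.g. a design change during construction
    (0.50, (2, 5, 15)),     # e.g. weather-driven productivity loss
]

baseline_days = 365.0       # baseline duration to the milestone
samples = np.full(N, baseline_days)
for p, (lo, mode, hi) in risks:
    occurs = rng.random(N) < p
    samples += np.where(occurs, rng.triangular(lo, mode, hi, N), 0.0)

print("best case   (P10):", round(np.percentile(samples, 10), 1), "days")
print("most likely (P50):", round(np.percentile(samples, 50), 1), "days")
print("worst case  (P90):", round(np.percentile(samples, 90), 1), "days")
```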
Procedia PDF Downloads 108
21157 Digital Curriculum Preservation Planning, Actions, and Challenges
Authors: Misook Ahn
Abstract:
This study examined the Digital Curriculum Repository (DCR) project initiated at the Defense Language Institute Foreign Language Center (DLIFLC). The purpose of the DCR is to build a centralized curriculum infrastructure, preserve all curriculum materials, and provide academic service to users (faculty, students, or other agencies). The DCR collection includes core language curriculum materials developed by each language school—foreign language textbooks, language survival kits, and audio files currently in or not in use at the schools. All core curriculum materials with audio and video files have been coded, collected, and preserved in the DCR. The DCR website was designed with MS SharePoint for easy accessibility by the DLIFLC’s faculty and students. All metadata for the collected curriculum materials have been input by language, code, year, book type, level, user, version, and current status (in use/not in use). The study documents digital curriculum preservation planning, actions, and challenges, including collecting, coding, collaborating, designing the DCR SharePoint site, and policymaking. DCR survey data are also collected and analyzed for this research. Based on the findings, the study concludes that a mandatory policy for the DCR system and collaboration with school leadership are critical elements of a successful repository system. The sample collected items, metadata, and DCR SharePoint site are presented in the evaluation section. Keywords: MS SharePoint, digital preservation, repository, policy
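The metadata fields listed above suggest a simple record structure. The sketch below is purely illustrative; the field names, types and example values are assumptions, not the DLIFLC schema.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    IN_USE = "in use"
    NOT_IN_USE = "not in use"

@dataclass
class CurriculumItem:
    """One DCR record, mirroring the metadata fields listed in the abstract."""
    language: str
    code: str
    year: int
    book_type: str      # e.g. textbook, language survival kit, audio
    level: str
    user: str
    version: str
    status: Status

item = CurriculumItem("Korean", "KOR-101", 2019, "textbook", "intermediate",
                      "faculty", "v3", Status.IN_USE)
print(item)
```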
Procedia PDF Downloads 160
21156 The Culex Pipiens Niche: Assessment with Climatic and Physiographic Variables via a Geographic Information System
Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, João Casaca
Abstract:
Using a geographic information system (GIS), the relations between a georeferenced data set of Culex pipiens s.l. mosquitoes collected in mainland Portugal during seven years (2006-2012) and meteorological and physiographic parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures), daily total rainfall, altitude, land use/land cover and proximity to water bodies are evaluated. The focus is on the mosquito females; the characterization of their habitat is the key to planning surgical, non-aggressive prophylactic countermeasures that avoid ambient degradation. The GIS allows for the spatial determination of the zones where mean mosquito captures have been above average; using the meteorological values at these coordinates, the limits of each parameter are identified/computed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the thresholds obtained for each parameter. The intersection of the maps obtained for each month shows the evolution of the area favorable to the species through the mosquito season, which is from May to October at these latitudes. In parallel, mean and above-average captures were related to the physiographic parameters. Three levels of risk could be identified for each parameter, using above-average captures as an index. The results were applied to the suitability meteorological maps of each month. The Culex pipiens critical niche is delimited, reflecting the critical areas and the level of risk for transmission of the pathogens for which they are competent vectors (West Nile virus, iridoviruses, reoviruses and parvoviruses). Keywords: Culex pipiens, ecological niche, risk assessment, risk management
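The segmentation-and-intersection step can be sketched with plain array operations: each monthly raster is masked by the thresholds derived from the above-average capture sites, and the masks are intersected. This is an illustrative sketch assuming NumPy; the raster names and threshold values are placeholders, not the study's values.

```python
import numpy as np

def suitability_mask(rasters: dict, thresholds: dict) -> np.ndarray:
    """Intersect per-parameter masks: a cell is favourable only if every interpolated
    parameter raster falls inside its [lo, hi] interval for that month."""
    mask = np.ones(next(iter(rasters.values())).shape, dtype=bool)
    for name, grid in rasters.items():
        lo, hi = thresholds[name]
        mask &= (grid >= lo) & (grid <= hi)
    return mask

# Hypothetical usage (array names and threshold values are placeholders):
# monthly_rasters["June"] = {"t_mean": t_mean_grid, "rh": rh_grid, "rain": rain_grid}
# thresholds = {"t_mean": (18.0, 28.0), "rh": (55.0, 85.0), "rain": (0.0, 8.0)}
# june_favourable = suitability_mask(monthly_rasters["June"], thresholds)
```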
Procedia PDF Downloads 545
21155 Numerical Investigation of Embankment Settlement Improved by Method of Preloading by Vertical Drains
Authors: Seyed Abolhasan Naeini, Saeideh Mohammadi
Abstract:
Time-dependent settlement due to loading on soft saturated soils produces many problems, such as high consolidation settlements and low consolidation rates. Moreover, long-term consolidation settlement of the soft soil underlying an embankment leads to unpredicted settlements and cracks on the soil surface. Preloading is an effective improvement method for solving this problem, and using vertical drains within the preloading method improves soft soils further. Applying the deep soil mixing method to soft soils is another effective improvement technique. There are few studies on using preloading and deep soil mixing simultaneously. In this paper, the concurrent effect of preloading with vertical drains and deep soil mixing is investigated through a finite element code, Plaxis2D. The influence of parameters such as the spacing of the deep soil mixing columns, the existence of vertical drains and the distance between them, on the settlement and the stability factor of safety of an embankment founded on soft soil is investigated in this research. Keywords: preloading, soft soil, vertical drains, deep soil mixing, consolidation settlement
Procedia PDF Downloads 217
21154 The Role of Evaluation for Effective and Efficient Change in Higher Education Institutions
Authors: Pattaka Sa-Ngimnet
Abstract:
That the university as we have known it is no longer serving the needs of the vast majority of students and potential students has been a topic of much discussion. Institutions of higher education, in this age of global culture, are in a process of metamorphosis. Technology is being used to allow more students, including older students, working students and disabled students who cannot attend conventional classes, to have greater access to higher education through the internet. But change must come about only after much evaluation and experimentation, or education will simply become a commodity as, in some cases, it already has. This paper is concerned with the meaning and methods of change and evaluation as they are applied to institutions of higher education. Organizations generally have different goals and different approaches in order to be successful. However, the means of reaching those goals requires rational and effective planning. Any plan for successful change in any institution must take into account both effectiveness and efficiency and the differences between them. “Effectiveness” refers to an adequate means of achieving an objective. “Efficiency” refers to the ability to achieve an objective without waste of time or resources (The Free Dictionary). So an effective means may not be efficient, and an efficient means may not be effective. The goal is to reach a synthesis of effectiveness and efficiency that will maximize both to the extent each is limited by the other. The focus of this paper, then, is to determine how an educational institution can become either successful or oppressive depending on the kinds of planning, evaluating and changes that operate by and on the administration. If the plan is concerned only with efficiency, the institution can easily become oppressive and lose sight of its purpose of educating students. If it is overly concentrated on effectiveness, the students may receive a superior education in the short run, but the institution will face operating difficulties. In becoming only goal-oriented, institutions also face problems. Simply stated, if the institution reaches its goals, the stakeholders may become satisfied and fail to change and keep up with the needs of the times. So goals should be seen only as benchmarks in a process of becoming even better at providing quality education. Constant and consistent evaluation is the key to making all these factors come together in a successful process of planning, testing and changing the plans as needed. The focus of the evaluation has to be considered. Evaluations must take into account the progress and needs of students, the methods and skills of instructors, the resources available from the institution and the styles and objectives of administrators. Thus the role of evaluation is pivotal in providing for the maximum of both effective and efficient change in higher education institutions. Keywords: change, effectiveness, efficiency, education
Procedia PDF Downloads 320
21153 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network
Authors: Abdolreza Memari
Abstract:
In this article, a numerical method has been used in order to estimate the viscosity of crude oil. We use this method to measure the crude oil viscosity for three states: saturated oil viscosity, viscosity above the bubble point, and viscosity below the saturation pressure. The crude oil viscosity is then estimated by using the KHAN model and the roller ball method. These data, which include the relevant conditions for measuring viscosity together with the viscosities estimated by the presented method, are used to train a radial basis neural network. This network is a kind of two-layered artificial neural network whose hidden-layer activation function is a Gaussian function, and appropriate training algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence approach are compared. Once trained, the network is able to estimate the crude oil viscosity with acceptable accuracy, without using the KHAN model or the experimental conditions, and under any other condition. Results show that the radial basis neural network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation. Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model
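The network described here, a Gaussian hidden layer followed by a linear output layer, can be sketched in a few lines. This is a minimal illustrative sketch assuming NumPy; the centre selection, widths and feature names are assumptions, not the training scheme or data of the paper.

```python
import numpy as np

class GaussianRBFNet:
    """Minimal radial basis function network: Gaussian hidden layer, linear output layer.
    Centres are a random subset of the training inputs."""

    def __init__(self, n_centres=20, sigma=1.0, seed=0):
        self.n_centres, self.sigma = n_centres, sigma
        self.rng = np.random.default_rng(seed)

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        idx = self.rng.choice(len(X), self.n_centres, replace=False)
        self.centres = X[idx]
        Phi = np.hstack([self._phi(X), np.ones((len(X), 1))])  # bias column
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        Phi = np.hstack([self._phi(X), np.ones((len(X), 1))])
        return Phi @ self.w

# Hypothetical usage with measurement conditions as inputs (placeholder feature names):
# X = np.column_stack([pressure, temperature, solution_gor])
# model = GaussianRBFNet(n_centres=30, sigma=0.5).fit(X_train, viscosity_train)
# mu_predicted = model.predict(X_test)
```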
Procedia PDF Downloads 503
21152 A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea
Authors: L. I. Izhar, T. Stathaki, K. Howell
Abstract:
The analysis and interpretation of thermograms have been increasingly employed in the diagnosis and monitoring of diseases thanks to their non-invasive, non-harmful nature and low cost. In this paper, a novel system is proposed to improve the diagnosis and monitoring of the morphoea skin disorder based on integration with the published lines of Blaschko. In the proposed system, image registration based on global and local registration methods is found to be inevitable. For the global registration approach, this paper presents a modified normalized gradient cross-correlation (NGC) method to reduce large geometrical differences between two multimodal images, which are represented by smooth gray edge maps. This method is improved further by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration part of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the translational misregistration suffered by the modified NGC method alone for thermograms with an ill-defined jawline. This also demonstrates that by using the gradients of the gray edge maps and a hybrid technique, the performance of the PC-based image registration method can be greatly improved. Keywords: Blaschko’s lines, image registration, morphoea, thermal imaging
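As background for the comparison above, the phase correlation (PC) baseline that recovers a translation in the Fourier domain can be sketched as follows. This is a generic, NumPy-only illustration on synthetic data, not the authors' hybrid NGC/NCC pipeline.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two same-sized images by phase correlation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                      # keep only the phase difference
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                    # map wrap-around indices to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# Synthetic check: shift an image by (7, -3) and recover the translation.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
shifted = np.roll(img, (7, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, img))    # -> (7, -3)
```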
Procedia PDF Downloads 311
21151 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings
Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada
Abstract:
The seismic design method of buildings is classified into two types: static design and dynamic design. Static design applies a static force as the seismic load; it is a relatively simple method created on the basis of the experience of seismic motion over the past 100 years. At present, static design is used for most Japanese buildings. Dynamic design mainly refers to time history response analysis. It is a comparatively difficult design method in which the earthquake motion assumed for the building model is input and the response is examined. Currently, it is only used for skyscrapers and specific buildings. Under the present design standard in Japan, either the static or the dynamic design method may be used for medium- and high-rise buildings. However, when medium- and high-rise buildings are actually designed by the two methods, the relatively simple static design method satisfies the criteria, whereas the somewhat more difficult dynamic design method often does not. This is because the dynamic design method was built with the intention of designing super high-rise buildings; in short, higher safety is required compared with general buildings, and the criteria become stricter. The authors consider applying the dynamic design method to general buildings that have so far been designed by the static design method. The reason is that application of the dynamic design method is reasonable for buildings that fall outside the conventional standard structural forms, such as those with an emphasized architectural design. For this purpose, it is important to compare the design results when the criteria of both design methods are arranged side by side. In this study, we performed time history response analysis on medium-rise buildings that were actually designed with the allowable stress method. A quantitative comparison between static design and dynamic design was conducted, and the characteristics of both design methods were examined. Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code
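To make "time history response analysis" concrete, here is a minimal sketch of the standard Newmark-beta (average acceleration) integration of a linear single-degree-of-freedom oscillator under a synthetic ground motion. It is a generic textbook illustration assuming NumPy; the building models, ground motions and criteria used by the authors are not reproduced.

```python
import numpy as np

def newmark_beta(m, c, k, ag, dt, beta=0.25, gamma=0.5):
    """Time history response of a linear SDOF oscillator to ground acceleration ag."""
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    p = -m * ag                                  # effective earthquake force
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
    for i in range(n - 1):
        dp = (p[i + 1] - p[i]
              + (m / (beta * dt) + gamma / beta * c) * v[i]
              + (m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c) * a[i])
        du = dp / k_eff
        dv = gamma / (beta * dt) * du - gamma / beta * v[i] + dt * (1 - gamma / (2 * beta)) * a[i]
        da = du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

# Example: 1-s period oscillator, 5% damping, synthetic decaying sinusoidal ground motion.
m, T, zeta = 1.0, 1.0, 0.05
k = m * (2 * np.pi / T) ** 2
c = 2 * zeta * np.sqrt(k * m)
dt = 0.01
t = np.arange(0, 20, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.1 * t)
u, v, a = newmark_beta(m, c, k, ag, dt)
print("peak displacement [m]:", np.abs(u).max())
```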
Procedia PDF Downloads 157
21150 Second Order Analysis of Frames Using Modified Newmark Method
Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi
Abstract:
The main purpose of this paper is to present the Modified Newmark Method as a method of non-linear frame analysis that considers the effect of the axial load (second-order analysis). The discussion will be restricted to plane frameworks containing a constant cross-section for each element. In addition, it is assumed that the frames are prevented from out-of-plane deflection. This part of the investigation is performed to generalize the established method to assemblage structures such as frameworks. As explained, the governing differential equations are non-linear and cannot be formulated easily due to the unknown axial loads of the struts in the frame. By the assumption of constant axial load, the governing equations are changed to linear ones in most methods. Since the modeling and the solution of the non-linear form of the governing equations are cumbersome, the linear form of the equations is used in the established method. However, owing to the ability of the method to reconsider the minor parameters omitted in modeling during the solution procedure, the axial load in the elements at each stage of the iteration can be computed and applied in the next stage. Therefore, the ability of the method to provide an accurate approach to the solution of non-linear equations is demonstrated again in this paper. Keywords: nonlinear, stability, buckling, modified newmark method
Procedia PDF Downloads 427
21149 Reliability-Based Method for Assessing Liquefaction Potential of Soils
Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty
Abstract:
This paper explores a probabilistic method for assessing the liquefaction potential of sandy soils. The current simplified methods for assessing soil liquefaction potential use a deterministic safety factor in order to determine whether liquefaction will occur or not. However, these methods are unable to determine the liquefaction probability related to a safety factor. A solution to this problem can be found by reliability analysis. This paper presents a reliability analysis method based on a popular deterministic liquefaction analysis method. The proposed probabilistic method is formulated based on the results of reliability analyses of 190 field records and observations of soil performance against liquefaction. The results of the present study show that a safety factor greater or smaller than 1 does not in itself indicate safety or liquefaction, and that to establish the liquefaction probability a reliability-based analysis should be used. This reliability method uses the empirical acceleration attenuation law in the Chalos area to derive the probability density function and the statistics for the earthquake-induced cyclic shear stress ratio (CSR). The CSR and CRR statistics are then used with the first-order second-moment method to calculate the relation between the liquefaction probability, the safety factor and the reliability index. Based on the proposed method, the liquefaction probability related to a safety factor can be easily calculated. The influence of some of the soil parameters on the liquefaction probability can be quantitatively evaluated. Keywords: liquefaction, reliability analysis, Chalos area, civil and structural engineering
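The first-order second-moment step can be illustrated with a small sketch relating the mean safety factor to the reliability index and the liquefaction probability. The lognormal distribution assumption and the numbers below are ours, purely for illustration; the paper derives its CSR statistics from the site-specific attenuation law.

```python
import numpy as np
from math import erf, sqrt, log

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def fosm_liquefaction(mu_crr, cov_crr, mu_csr, cov_csr):
    """First-order second-moment estimate of the liquefaction probability, assuming
    CRR and CSR are independent and lognormally distributed (illustrative assumption)."""
    # limit state g = ln(CRR) - ln(CSR); lognormal parameters from means and COVs
    s_lnR = sqrt(log(1 + cov_crr ** 2)); m_lnR = log(mu_crr) - 0.5 * s_lnR ** 2
    s_lnS = sqrt(log(1 + cov_csr ** 2)); m_lnS = log(mu_csr) - 0.5 * s_lnS ** 2
    beta = (m_lnR - m_lnS) / sqrt(s_lnR ** 2 + s_lnS ** 2)   # reliability index
    return beta, phi(-beta)                                   # P(liquefaction)

beta, p_liq = fosm_liquefaction(mu_crr=0.25, cov_crr=0.35, mu_csr=0.20, cov_csr=0.30)
print(f"FS(mean) = {0.25 / 0.20:.2f},  beta = {beta:.2f},  P_liquefaction = {p_liq:.2%}")
```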
Procedia PDF Downloads 470
21148 Parallelizing the Hybrid Pseudo-Spectral Time Domain/Finite Difference Time Domain Algorithms for the Large-Scale Electromagnetic Simulations Using Message Passing Interface Library
Authors: Donggun Lee, Q-Han Park
Abstract:
Due to its coarse grid, the Pseudo-Spectral Time Domain (PSTD) method has advantages over the Finite Difference Time Domain (FDTD) method in terms of memory requirement and operation time. However, since its efficiency of parallelization is much lower than that of FDTD, PSTD is not a useful method for large-scale electromagnetic simulation on a parallel platform. In this paper, we propose a parallelization technique for the hybrid PSTD-FDTD (HPF) method, which simultaneously possesses the efficient parallelizability of FDTD and the speed and low memory requirement of PSTD. The parallelization cost of the HPF method is exactly the same as that of parallel FDTD, but it still occupies much less memory space and has a faster operation speed than parallel FDTD. Experiments in distributed memory systems have shown that the parallel HPF method saves up to 96% of the operation time and reduces the memory requirement by 84%. Also, by combining the OpenMP library with the MPI library, we further reduced the operation time of the parallel HPF method by 50%. Keywords: FDTD, hybrid, MPI, OpenMP, PSTD, parallelization
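The MPI domain-decomposition pattern underlying such parallel FDTD/HPF codes can be sketched in one dimension with mpi4py: each rank updates its own slab of cells and exchanges one ghost cell with each neighbour per field update. This is a generic illustrative sketch (field names, cell counts and the soft source are assumptions), not the authors' implementation.

```python
# Run with e.g.:  mpiexec -n 4 python fdtd_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n, steps = 200, 500                       # cells per rank, time steps
ez = np.zeros(n + 2)                      # indices 0 and -1 are ghost cells
hy = np.zeros(n + 2)
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for t in range(steps):
    # ez[k] += 0.5*(hy[k] - hy[k-1]) needs the left neighbour's last hy value
    comm.Sendrecv(hy[-2:-1], dest=right, recvbuf=hy[0:1], source=left)
    ez[1:-1] += 0.5 * (hy[1:-1] - hy[:-2])
    if rank == 0:
        ez[1] += np.exp(-((t - 30.0) / 10.0) ** 2)   # soft Gaussian source
    # hy[k] += 0.5*(ez[k+1] - ez[k]) needs the right neighbour's first ez value
    comm.Sendrecv(ez[1:2], dest=left, recvbuf=ez[-1:], source=right)
    hy[1:-1] += 0.5 * (ez[2:] - ez[1:-1])

print(f"rank {rank}: max |Ez| = {np.abs(ez).max():.3f}")
```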
Procedia PDF Downloads 148
21147 Visualizing the Future of New York’s Southern Tier: Engaging Students to Help Create Sustainable Communities
Authors: William C. Dean
Abstract:
In the pedagogical sequence of the four- and five-year architectural programs at Alfred State, the fourth-year Urban Design Studio constitutes the first course where students directly explore design issues in the urban context. It is the first large-scale, community-based service-learning project for most of the participating students. The students learn key lessons that include the benefits of working both individually and in groups of different sizes toward a common goal, accepting, and responding creatively to, criticism from stakeholders at different points in the project, and recognizing the role that local politics and activism can play in planning for community development. Above all, students are exposed to the importance of good planning in relation to preservation and community revitalization. The purpose of this paper is to discuss the use of community-based service-learning projects in undergraduate architectural education to promote student civic engagement as a means of helping communities visualize potential solutions for revitalizing their neighborhoods and business districts. A series of case studies will be presented in terms of challenges that were encountered, opportunities for student engagement and leadership, and the feasibility of sustainable community development resulting from those projects. The reader will be encouraged to consider how they can recognize needs within their own communities that could benefit from the assistance of architecture students and faculty. Keywords: urban design, service-learning, civic engagement, community revitalization
Procedia PDF Downloads 95
21146 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
A method of modelling the topography used in the simulation of riverbeds is proposed in this paper, which removes the need for data points and measurements of the physical terrain. While complex scans of the contours of a surface can be achieved with other methods, these require specialised tools; the proposed method overcomes this by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces the resources spent on modelling bed topography. By updating its parameters and generating a new bed, the method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors that could affect the topography of the ground. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the bed topography generated by the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow automatic generation of topography for the given situation in future research, removing the need for bed data to be specified. Keywords: bed topography, FBM, LBM, shallow water, simulations
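The generation step can be illustrated with the standard spectral-synthesis route to fractional Brownian motion, shown here for a one-dimensional bed profile. The Hurst exponent, amplitude and length below are placeholders; the paper's calibration against real terrain (the quoted ~15% error margin) is not reproduced.

```python
import numpy as np

def fbm_profile(n=1024, hurst=0.7, amplitude=1.0, seed=0):
    """Generate a 1-D fractional-Brownian-motion bed profile by spectral synthesis:
    random phases with a power spectrum proportional to 1/f^(2H+1)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    spectrum = np.zeros(len(freqs), dtype=complex)
    spectrum[1:] = (freqs[1:] ** (-(2 * hurst + 1) / 2.0)
                    * np.exp(2j * np.pi * rng.random(len(freqs) - 1)))
    profile = np.fft.irfft(spectrum, n=n)
    return amplitude * profile / np.abs(profile).max()

bed = fbm_profile(n=2048, hurst=0.8, amplitude=0.05)   # e.g. 5 cm bed-height variation
print(bed[:5])
```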
Procedia PDF Downloads 99
21145 Kernel Parallelization Equation for Identifying Structures under Unknown and Periodic Loads
Authors: Seyed Sadegh Naseralavi
Abstract:
This paper presents a kernel parallelization equation for damage identification in structures under unknown periodic excitations. Herein, the dynamic differential equation of motion of the structure is viewed as a mapping from displacements to external forces. Utilizing this viewpoint, a new method for damage detection in structures under periodic loads is presented. The developed method requires only two periods of load. The method detects the damage without finding the input loads. The method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Based on this concept, the kernel parallelization equation (KPE) is derived for damage detection under unknown periodic loads. The method is verified for a case study under periodic loads. Keywords: Kernel, unknown periodic load, damage detection, Kernel parallelization equation
Procedia PDF Downloads 287
21144 The Social Impact of Green Buildings
Authors: Elise Machline
Abstract:
Policy instruments have been developed worldwide to reduce the energy demand of buildings. Two types of such instruments have been green building rating systems and energy efficiency standards for buildings, such as Green Star (Australia), LEED (United States, Leadership in Energy and Environmental Design), Energy Star (United States), and BREEAM (United Kingdom, Building Research Establishment Environmental Assessment Method). The popularity of the idea of sustainable development has allowed the actors to consider the potential value generated by the environmental performance of buildings, labeled “green value” in the literature. The sustainable performance of buildings is expected to improve their attractiveness, increasing their value. A growing number of empirical studies demonstrate that green buildings yield rental/sale premia, as well as higher occupancy rates and thus higher asset values. The results suggest that green buildings are not affordable to all and that their construction tends to have a gentrifying effect. An increasing number of countries are institutionalizing green strategies for affordable housing. In that sense, making green buildings affordable to all will depend on government policies. This research aims to investigate whether green building fosters inequality in Israel under the banner of sustainability. The method is a comparison of market values. It involves comparing the sale prices of green buildings with those of non-certified buildings of the same type that have undergone recent transactions. The “market value” is deduced from those sources by analogy. The results show that, in Israel, green building projects are usually aimed at the middle and upper classes. The sale premium for a green apartment is about 19% (compared to a non-certified dwelling). There is a link between energy and/or environmental performance and the financial value of the dwellings. Moreover, the price differential is much higher than the value of the energy savings. This perpetuates socio-spatial and socio-economic inequality as well as ecological vulnerability for the poor and other socially marginal groups. Moreover, there is no green affordable housing, and the authorities do not subsidize green building or retrofitting. Keywords: green building, gentrification, social housing, green value, green building certification
Procedia PDF Downloads 420
21143 Support for Planning of Mobile Personnel Tasks by Solving Time-Dependent Routing Problems
Authors: Wlodzimierz Ogryczak, Tomasz Sliwinski, Jaroslaw Hurkala, Mariusz Kaleta, Bartosz Kozlowski, Piotr Palka
Abstract:
Implementation concepts of a decision support system for the planning and management of mobile personnel tasks (sales representatives and others) are discussed. Large-scale periodic time-dependent vehicle routing and scheduling problems with complex constraints are solved for this purpose. Complex non-uniform constraints with respect to frequency, time windows, working time, etc. are taken into account, with additional fast adaptive procedures for operational rescheduling of plans in the presence of various disturbances. Five individual solution quality indicators with respect to a single member of the personnel are considered. This paper deals with the modeling issues corresponding to the problem and with general solution concepts. The research was supported by the European Union through the European Regional Development Fund under the Operational Programme ‘Innovative Economy’ for the years 2007-2013; Priority 1 Research and development of modern technologies under the project POIG.01.03.01-14-076/12: 'Decision Support System for Large-Scale Periodic Vehicle Routing and Scheduling Problems with Complex Constraints.' Keywords: mobile personnel management, multiple criteria, time dependent, time windows, vehicle routing and scheduling
Procedia PDF Downloads 323
21142 MATLAB Supported Learning and Students' Conceptual Understanding of Functions of Two Variables: Experiences from Wolkite University
Authors: Eyasu Gemech, Kassa Michael, Mulugeta Atnafu
Abstract:
A non-equivalent groups quasi-experimental study was conducted at Wolkite University to investigate MATLAB-supported learning and students' conceptual understanding in learning Applied Mathematics II using four different comparative instructional approaches: the MATLAB-supported traditional lecture method, the MATLAB-supported collaborative method, the collaborative method only, and the traditional lecture method only. Four intact classes of mechanical engineering groups 1 and 2, garment engineering and textile engineering students were randomly selected out of eight departments. The first three classes were considered as treatment groups, and the fourth one, textile engineering, was assigned as the comparison group. The classes had 30, 29, 35 and 32 students respectively. The results of the study show that there is a significant mean difference in students' conceptual understanding between students learning through the MATLAB-supported collaborative method and those following the other learning approaches. Students who learned through MATLAB technology-supported learning in combination with the collaborative method were found to understand concepts of functions of two variables better than students learning through the other methods. These findings are hence informative of the potential approaches universities could follow for a better conceptual understanding by students. Keywords: MATLAB supported collaborative method, MATLAB supported learning, collaborative method, conceptual understanding, functions of two variables
Procedia PDF Downloads 281
21141 Forecasting Amman Stock Market Data Using a Hybrid Method
Authors: Ahmad Awajan, Sadam Al Wadi
Abstract:
In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the component forecasts are aggregated to obtain the forecast of the stock market data. Empirical results showed that EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need to use any transformation method. Moreover, EMD-HW has a relatively high accuracy compared with eight existing forecasting methods, based on five forecast error measures. Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series
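The hybrid scheme described here (decompose, forecast each component, aggregate) can be sketched as follows. It assumes the PyEMD (EMD-signal) and statsmodels packages; the Holt-Winters configuration and forecast horizon are illustrative choices, not the paper's settings, and the Amman stock series itself is not reproduced.

```python
import numpy as np
from PyEMD import EMD
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def emd_hw_forecast(y, horizon=10):
    """Decompose y into IMFs plus a residue, forecast each with Holt-Winters, and sum."""
    emd = EMD()
    emd.emd(np.asarray(y, dtype=float))
    imfs, residue = emd.get_imfs_and_residue()
    total = np.zeros(horizon)
    for component in list(imfs) + [residue]:
        fit = ExponentialSmoothing(component, trend="add", seasonal=None).fit()
        total += fit.forecast(horizon)
    return total

# y = daily closing values of an Amman Stock Exchange series (not reproduced here)
# print(emd_hw_forecast(y, horizon=5))
```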
Procedia PDF Downloads 132
21140 Assessment of ASEI-PDSI Method on Students’ Attitude and Achievement in Junior Secondary Schools Mathematics in FCT-Abuja
Authors: Amenaghawon Clement Osemwinyen
Abstract:
The Activity, Student-centred, Experiment, Improvisation - Plan, Do, See, Improve (ASEI-PDSI) method championed by the Strengthening Mathematics And Science Education (SMASE) - Nigeria Project is an attempt to improve the quality of mathematics education, which has consistently declined over the years in both public primary and secondary schools across the country. The study thus assessed the ASEI-PDSI method with respect to students' attitudes and achievement in junior secondary school (JSS) mathematics in FCT-Abuja. A survey research design was adopted, and 100 mathematics teachers, selected using a stratified random sampling method, took part in the study. The data were collected using structured questionnaires and analyzed using descriptive statistics. The findings showed that the ASEI-PDSI method had significantly improved students' attitudes toward mathematics. The study also revealed that the ASEI-PDSI method significantly influenced junior secondary school (JSS) students' mathematics achievement. Amongst the recommendations were that teachers should be encouraged to adopt the ASEI-PDSI method in teaching and learning mathematics in order to create a mathematically stimulating classroom environment, which could in turn influence junior secondary school (JSS) students' attitude and academic performance in mathematics. Also, regular in-service training programs should be organized by stakeholders (government and other interest groups) so as to improve the teaching strategies of teachers, particularly as they affect the ASEI-PDSI method. Keywords: achievement, ASEI-PDSI method, attitude, mathematics, SMASE
Procedia PDF Downloads 119
21139 Finite Element Method for Calculating Temperature Field of Main Cable of Suspension Bridge
Authors: Heng Han, Zhilei Liang, Xiangong Zhou
Abstract:
In this paper, the finite element method is used to study the temperature field of the main cable of a suspension bridge, and a method for calculating the average temperature of the cross-section of the main cable, suitable for the construction control of the cable system, is proposed. By comparing and analyzing the temperature fields of main cables with five different diameters, a reasonable diameter limit for calculating the average cross-section temperature of the main cable by the finite element method is proposed. The results show that the maximum error of this method is less than 1 ℃, which meets the requirements of construction control accuracy. For main cables with a diameter greater than 400 mm, surface temperature measuring points combined with the finite element method should be used to calculate the average cross-section temperature. Keywords: suspension bridge, main cable, temperature field, finite element
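The idea of averaging the temperature over the cable cross-section can be illustrated with a small conduction model. The sketch below uses a simple explicit finite-difference grid (not a finite-element mesh) on a circular section with a prescribed surface temperature; the diameter matches the 400 mm limit mentioned above, but the material properties, temperatures and step counts are assumptions.

```python
import numpy as np

D = 0.4                        # cable diameter [m]
n = 81                         # grid points per side
dx = D / (n - 1)
alpha = 1.2e-5                 # effective thermal diffusivity [m^2/s] (assumed)
dt = 0.2 * dx ** 2 / alpha     # stable explicit time step

x = np.linspace(-D / 2, D / 2, n)
X, Y = np.meshgrid(x, x)
inside = X ** 2 + Y ** 2 <= (D / 2) ** 2
surface = inside & (X ** 2 + Y ** 2 >= (D / 2 - 1.5 * dx) ** 2)

T = np.full((n, n), 20.0)      # initial uniform temperature [deg C]
T_surface = 35.0               # measured surface temperature [deg C] (assumed)

for _ in range(20000):         # march toward steady conditions
    T[surface] = T_surface
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T[inside & ~surface] += alpha * dt * lap[inside & ~surface]

print("cross-section average temperature [deg C]:", round(T[inside].mean(), 2))
```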
Procedia PDF Downloads 163
21138 A Stable Method for Determination of the Number of Independent Components
Authors: Yuyan Yi, Jingyi Zheng, Nedret Billor
Abstract:
Independent component analysis (ICA) is one of the most commonly used blind source separation (BSS) techniques for signal pre-processing, such as noise reduction and feature extraction. The main parameter in the ICA method is the number of independent components (ICs). Although there have been several methods for the determination of the number of ICs, sufficient attention has not been given to this important parameter. In this study, we review the most used methods for determining the number of ICs and provide their advantages and disadvantages. Further, we propose an improved version of the column-wise ICAByBlock method for the determination of the number of ICs. To assess the performance of the proposed method, we compare the column-wise ICAByBlock with several existing methods through different ICA methods by using simulated and real signal data. Results show that the proposed column-wise ICAByBlock is an effective and stable method for determining the optimal number of components in ICA. This method is simple, and its results can be demonstrated intuitively with good visualizations. Keywords: independent component analysis, optimal number, column-wise, correlation coefficient, cross-validation, ICAByBlock
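One plausible reading of the block-wise idea (fit ICA on separate blocks and keep only components that reproduce across blocks, judged by the correlation coefficient) can be sketched as below. This is our own interpretation for illustration, assuming NumPy and scikit-learn, not the authors' exact column-wise ICAByBlock procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def stable_ic_count(X, n_candidates, threshold=0.9, seed=0):
    """Fit ICA separately on two random halves of the observations, apply both unmixing
    models to the full data, and count components whose best absolute correlation across
    the two fits exceeds `threshold`."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    half = len(X) // 2
    ica1 = FastICA(n_components=n_candidates, max_iter=1000, random_state=0).fit(X[idx[:half]])
    ica2 = FastICA(n_components=n_candidates, max_iter=1000, random_state=1).fit(X[idx[half:]])
    S1, S2 = ica1.transform(X), ica2.transform(X)
    corr = np.corrcoef(S1.T, S2.T)[:n_candidates, n_candidates:]
    return int((np.abs(corr).max(axis=1) > threshold).sum())

# Example: three genuine sources mixed into six observed channels plus noise.
rng = np.random.default_rng(42)
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(7 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)])
X = S @ rng.normal(size=(3, 6)) + 0.05 * rng.normal(size=(t.size, 6))
print(stable_ic_count(X, n_candidates=6))   # expected to report about 3 stable components
```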
Procedia PDF Downloads 100
21137 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity
Authors: Julia Baker
Abstract:
Development plays a major role in stopping biodiversity loss. But the ‘silo species’ protection of legislation (where certain species are protected while many are not) means that development can be ‘legally compliant’ and still result in biodiversity loss. ‘Net Gain’ (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial because people suspect ‘offsetting’ to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on the GPP. It regards major upgrades of the UK’s transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not be. Consequently, achieving NG within the transport corridor was not possible, and offsetting was required. The first lessons learnt were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed the GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily, unlike other sustainability factors such as carbon and water that have metrics for target-setting and measuring progress; and the mindset that biodiversity costs money and does not generate cash in return, which is the opposite of carbon or waste, for example, where people can see how ‘sustainability’ actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boosts industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting NG progress whilst ensuring that NG decision-making was based on rich ecological data. Institutional change was best achieved by supporting, mentoring and training sustainability/environmental managers, for these ‘frontline’ staff to embed the GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups and landowners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. From this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between:
- delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another;
- achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning;
- not just protecting the extent and condition of existing biodiversity but ‘doing more’;
- the multi-sector collaborations identified practical, workable solutions to ‘in perpetuity’.
But key was strengthening the linkages between biodiversity measures implemented for development and the conservation work undertaken by local organizations, so that developers support NG initiatives that really count. Keywords: biodiversity offsetting, development, nature conservation planning, net gain
Procedia PDF Downloads 196
21136 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition
Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can
Abstract:
To effectively combat climate change, many countries around the world have committed to decarbonising their electricity supply, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to effectively combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need for efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and therefore solving them requires significant amounts of time and resources. Hence, in this proposal we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than the integer programming counterpart, the integration of the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models. Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning
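The machine-learning component, a classifier that predicts the binary investment decisions from the solution of the linearly relaxed subproblem, can be sketched as below. The features, training data and choice of logistic regression are assumptions for illustration; the paper's feature set, classifier and the mechanism that preserves the optimality guarantee are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data harvested from previously solved subproblems:
# one row per candidate line, e.g. [relaxed build variable, capacity utilisation, cost share]
X_train = np.array([[0.92, 0.81, 0.10], [0.07, 0.15, 0.45],
                    [0.63, 0.70, 0.22], [0.01, 0.05, 0.60],
                    [0.85, 0.95, 0.12], [0.30, 0.40, 0.35]])
y_train = np.array([1, 0, 1, 0, 1, 0])      # build / no-build in the integer optimum

clf = LogisticRegression().fit(X_train, y_train)

# New relaxed subproblem solution -> predicted binary investment decisions
X_new = np.array([[0.78, 0.66, 0.18], [0.12, 0.20, 0.50]])
print(clf.predict(X_new))          # predicted binary values
print(clf.predict_proba(X_new))    # confidence, useful for deciding when to fall back to the MIP
```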
Procedia PDF Downloads 86
21135 The Use of Stochastic Gradient Boosting Method for Multi-Model Combination of Rainfall-Runoff Models
Authors: Phanida Phukoetphim, Asaad Y. Shamseldin
Abstract:
In this study, the novel Stochastic Gradient Boosting (SGB) combination method is applied to produce daily river flows from four different rainfall-runoff models of the Ohinemuri catchment, New Zealand. The selected rainfall-runoff models are two empirical black-box models (the linear perturbation model and the linear varying gain factor model) and two conceptual models (the soil moisture accounting and routing model and the Nedbør-Afstrømnings model). In this study, the simple average combination method and the weighted average combination method were used as benchmarks for comparing the results of the novel SGB combination method. The models and the combination results are evaluated using statistical and graphical criteria. The overall results of this study show that the use of a combination technique can certainly improve the simulated river flows of the four selected models for the Ohinemuri catchment, New Zealand. The results also indicate that the novel SGB combination method is capable of accurate prediction when used to combine the simulated river flows in New Zealand. Keywords: multi-model combination, rainfall-runoff modeling, stochastic gradient boosting, bioinformatics
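The combination step can be sketched with scikit-learn's stochastic gradient boosting regressor, here fed with synthetic stand-ins for the observed flow and the four model simulations (the real Ohinemuri data and the paper's hyper-parameters are not reproduced). The simple and weighted average benchmarks mentioned above are included for comparison.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000
q_obs = 5 + 3 * np.sin(np.linspace(0, 20, n)) + rng.gamma(2.0, 1.0, n)   # synthetic 'observed' flow
# synthetic 'simulated' flows from four imperfect rainfall-runoff models
X = np.column_stack([q_obs * s + rng.normal(0, 0.8, n) for s in (0.9, 1.1, 0.8, 1.05)])

split = 700
sgb = GradientBoostingRegressor(n_estimators=400, learning_rate=0.05, max_depth=3,
                                subsample=0.7,   # the 'stochastic' part: each tree sees 70% of rows
                                random_state=0)
sgb.fit(X[:split], q_obs[:split])

def rmse(pred):
    return np.sqrt(np.mean((pred - q_obs[split:]) ** 2))

w = np.linalg.lstsq(X[:split], q_obs[:split], rcond=None)[0]
print("simple average RMSE  :", rmse(X[split:].mean(axis=1)))
print("weighted average RMSE:", rmse(X[split:] @ w))
print("SGB combination RMSE :", rmse(sgb.predict(X[split:])))
```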
Procedia PDF Downloads 339
21134 Stable Tending Control of Complex Power Systems: An Example of Localized Design of Power System Stabilizers
Authors: Wenjuan Du
Abstract:
The phase compensation method was proposed on the basis of the concept of damping torque analysis (DTA). It is a method for the design of a PSS (power system stabilizer) to suppress local-mode power oscillations in a single-machine infinite-bus power system. This paper presents the application of the phase compensation method to the design of a PSS in a multi-machine power system. The application is achieved by examining the direct damping contribution of the stabilizer to the power oscillations. By using the linearized equal-area criterion, a theoretical proof of the application for PSS design is presented. Hence, the PSS design in the paper is an example of stable tending control by a localized method. Keywords: phase compensation method, power system small-signal stability, power system stabilizer
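The essence of the phase compensation idea, choosing lead-lag stages whose phase lead at the oscillation frequency cancels the phase lag of the forward path so the stabilizer contributes pure damping torque, can be sketched as follows. The 60-degree lag, 1 Hz mode and two-stage layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lead_lag_params(phase_lag_deg, f_osc, n_stages=2):
    """Choose identical lead-lag stages (1+sT1)/(1+sT2) whose combined phase lead at the
    oscillation frequency matches the forward-path phase lag to be compensated."""
    w = 2 * np.pi * f_osc
    phi = np.radians(phase_lag_deg) / n_stages           # lead required per stage
    alpha = (1 + np.sin(phi)) / (1 - np.sin(phi))         # T1/T2 ratio of a lead block
    T2 = 1.0 / (w * np.sqrt(alpha))                       # centre the maximum lead at w
    T1 = alpha * T2
    return T1, T2

# Example: forward path lags 60 degrees at a 1.0 Hz local mode; use two stages.
T1, T2 = lead_lag_params(60.0, 1.0, n_stages=2)
w = 2 * np.pi * 1.0
lead = np.degrees(np.angle(((1 + 1j * w * T1) / (1 + 1j * w * T2)) ** 2))
print(f"T1 = {T1:.3f} s, T2 = {T2:.3f} s, phase lead at 1 Hz = {lead:.1f} deg")
```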
Procedia PDF Downloads 641
21133 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand
Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo
Abstract:
An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of reliable estimates of a project’s likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between the design-stage ECPs and FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of the study participants (consultant quantity surveyors), and document analysis was employed to complement the responses from the interviews. The study findings revealed discrepancies between ECPs and FTS in the region of -14% to +16%. It is opined in this study that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by quantity surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and successful delivery of construction projects. The findings contribute significantly by providing quantitative confirmation to justify the theoretical conclusions generated in the literature from around the world. This, therefore, adds to and consolidates existing knowledge. Keywords: accuracy, design-stage, elemental cost plan, final tender sum
Procedia PDF Downloads 270