Search results for: interpolation constraints
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1353

483 Quantifying Meaning in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady-state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered as an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function then conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward a system's steady state if the driving force is dissipating. By contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.

Keywords: meaning, information, Lyapunov, living systems

Procedia PDF Downloads 115
482 Improving the Global Competitiveness of SMEs by Logistics Transportation Management: Case Study Chicken Meat Supply Chain

Authors: P. Vanichkobchinda

Abstract:

Among logistics transportation techniques, Open Vehicle Routing (OVR) is an approach to transportation cost reduction, especially for long-distance pickup and delivery nodes. The outstanding characteristic of OVR is that the route's starting node and ending node need not be the same, as they are in typical vehicle routing problems. This advantage enables the routing to flow continuously, since the vehicle does not always return to its home base. This research aims to develop a heuristic for the open vehicle routing problem with pickup and delivery under time window and loading capacity constraints to minimize the total distance. The proposed heuristic is based on the insertion method, a simple method suitable for rapid calculation that allows new transportation requirements to be inserted along the original paths. Cost comparisons between the proposed heuristic and the method the companies currently use, the nearest neighbor method, show that the insertion heuristic gave superior solutions in all types of test problems, with cost savings of 34.3 percent on average. In conclusion, the proposed heuristic can effectively and efficiently solve the open vehicle routing problem.
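The insertion step at the heart of such a heuristic can be sketched as follows. This is a minimal illustration assuming Euclidean distances and a single capacity constraint (the paper's time windows and pickup/delivery pairing are omitted); it is not the authors' implementation:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route, coords):
    # open route: no return leg to a depot
    return sum(dist(coords[u], coords[v]) for u, v in zip(route, route[1:]))

def cheapest_insertion(requests, coords, capacity):
    """Greedy insertion heuristic for an open-route problem.
    requests: list of (node, demand); coords: node -> (x, y)."""
    routes = []  # list of (route, load)
    for node, demand in requests:
        best = None  # (extra_distance, route_index, position)
        for ri, (route, load) in enumerate(routes):
            if load + demand > capacity:
                continue
            for pos in range(len(route) + 1):
                cand = route[:pos] + [node] + route[pos:]
                extra = route_length(cand, coords) - route_length(route, coords)
                if best is None or extra < best[0]:
                    best = (extra, ri, pos)
        if best is None:
            routes.append(([node], demand))  # open a new vehicle route
        else:
            _, ri, pos = best
            route, load = routes[ri]
            routes[ri] = (route[:pos] + [node] + route[pos:], load + demand)
    return [r for r, _ in routes]
```

Each new requirement is placed where it adds the least distance, which is what makes the method fast enough for the rapid recalculation the abstract describes.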

Keywords: business competitiveness, cost reduction, SMEs, logistics transportation, VRP

Procedia PDF Downloads 669
481 User-Perceived Quality Factors for Certification Model of Web-Based System

Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh

Abstract:

One of the most essential issues for software products is maintaining their relevance to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspects of software products to address this problem. Previous software quality assessment models and metrics have been introduced, each with its strengths and limitations. To enhance assurance and confidence in software products, certification models have been introduced and developed. From our previous experiences in certification exercises and case studies conducted in collaboration with several agencies in Malaysia, the requirement for a user-based software certification approach was identified. The emergence of social network applications, new development approaches such as agile methods, and other varieties of software in the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services the software provides. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through a brainstorming approach that included researchers, users and experts in this area. This new paradigm in software quality assessment is the main focus of our research. This paper discusses the classification of users in web-based software system assessment and the associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the identified constraints and improving the application of software certification models in the future.

Keywords: software certification model, user centric approach, software quality factors, metrics and measurements, web-based system

Procedia PDF Downloads 382
480 Bluetooth Communication Protocol Study for Multi-Sensor Applications

Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li

Abstract:

Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE's energy efficiency, smartphone interoperability, and Over-the-Air (OTA) capabilities are essential features for ultra-low-power devices, which are usually designed under size and cost constraints. Most current research on the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results presented as mathematical models and computer simulations. Such modeling and simulation are important for comprehending the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin-cell-battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, followed by a power analysis with each sensor individually turned on and transmitting data, and concluding with a power consumption evaluation with both sensors on and broadcasting their data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, in which the measured energy levels are matched to BLE behavior and sensor activity.
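The duty-cycled nature of BLE advertising mentioned above lends itself to a simple average-current model. The sketch below uses purely illustrative numbers (a hypothetical 10 ms advertising event every second on a CR2032-class cell), not the measured values from this work:

```python
def battery_life_hours(adv_interval_ms, adv_current_ma, adv_ms,
                       sleep_current_ma, battery_mah):
    """Duty-cycle average-current model for a BLE advertiser.
    All parameter values passed in are illustrative assumptions."""
    duty = adv_ms / adv_interval_ms                 # fraction of time advertising
    avg_ma = duty * adv_current_ma + (1 - duty) * sleep_current_ma
    return battery_mah / avg_ma                     # ideal-battery estimate

# Example: 10 ms advertising event at 5 mA every 1000 ms, 5 uA sleep,
# 220 mAh coin cell (hypothetical figures)
life = battery_life_hours(1000, 5.0, 10, 0.005, 220)
```

Real hardware measurement, as the paper argues, is needed precisely because peaks, sensor activity and battery chemistry make such idealized models optimistic.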

Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node

Procedia PDF Downloads 72
479 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. It is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented epsilon-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is conducted to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
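The epsilon-constraint idea behind the best-performing method can be illustrated on a toy bi-objective problem. The sketch enumerates a finite candidate set rather than solving the actual EPQ model, and the plain (non-augmented) constraint handling here is a simplification:

```python
def epsilon_constraint(solutions, eps_grid):
    """solutions: list of (f1, f2) objective pairs, both minimized.
    For each epsilon, minimize f1 subject to f2 <= epsilon, then
    keep the non-dominated results as a Pareto front approximation."""
    front = set()
    for eps in eps_grid:
        feasible = [s for s in solutions if s[1] <= eps]
        if feasible:
            front.add(min(feasible))  # tuple min: lowest f1, tie-break on f2
    # filter out dominated points
    pareto = [p for p in front
              if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                         for q in front)]
    return sorted(pareto)
```

Sweeping epsilon over a grid of bounds on the second objective traces out the trade-off curve between the two objectives that the MODM comparison in the paper evaluates.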

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 106
478 Applying Lean Six Sigma in an Emergency Department of a Private Hospital

Authors: Sarah Al-Lumai, Fatima Al-Attar, Nour Jamal, Badria Al-Dabbous, Manal Abdulla

Abstract:

Today, many commonly used industrial engineering tools and techniques are applied in hospitals around the world with the goal of producing a more efficient and effective healthcare system. Lean Six Sigma, a common quality improvement methodology, has been successful in manufacturing industries and, more recently, in healthcare. The objective of our project is to use the Lean Six Sigma methodology to reduce waiting time in the Emergency Department (ED) of a local private hospital. A comprehensive literature review was conducted to evaluate the success of Lean Six Sigma in the ED. According to a study conducted at Ibn Sina Hospital in Morocco, waiting time is the problem patients complain about most. To ensure patient satisfaction, many hospitals, such as North Shore University Hospital, have been able to reduce waiting time by up to 37% using Lean Six Sigma. Other hospitals, such as the Johns Hopkins medical center, have used Lean Six Sigma successfully to enhance overall patient flow and ultimately decrease waiting time. Furthermore, capacity constraints, such as staff shortages and a lack of beds, were found to be among the main reasons behind long waiting times. With the use of Lean Six Sigma and bed management, hospitals like Memorial Hermann Southwest Hospital were able to reduce patient delays. To implement Lean Six Sigma successfully in our project, two common methodologies were considered, DMAIC and DMADV. After assessing both, DMAIC was found to be the more suitable approach for our project because it is concerned with improving an already existing process. Despite its many successes, Lean Six Sigma has limitations, especially in healthcare, but these can be minimized if the methodology is properly approached.

Keywords: lean six sigma, DMAIC, hospital, methodology

Procedia PDF Downloads 475
477 Application of Simulation of Discrete Events in Resource Management of Massive Concreting

Authors: Mohammad Amin Hamedirad, Seyed Javad Vaziri Kang Olyaei

Abstract:

Project planning and control are among the most critical issues in the management of construction projects. Traditional methods of project planning and control, such as the critical path method or the Gantt chart, are not well suited to planning projects with discrete and repetitive activities, and one of the problems facing project managers is planning the implementation process and optimally allocating its resources. Massive concreting is such a project with discrete and repetitive activities. This study uses discrete-event simulation to manage resources, which includes finding the optimal number of resources considering various limitations, such as those of machinery, equipment and human resources, and even technical, time and implementation limitations, through analysis of the resource consumption rate, the project completion time and the critical points of the implementation process. For this purpose, discrete-event simulation has been used to model the different stages of implementation. After reviewing the various scenarios, the optimal allocation for each resource is determined so as to reach the maximum utilization rate and also to reduce the project completion time or its cost under the existing constraints. The results showed that with the optimal allocation of resources, the project completion time could be reduced by 90%, and the resulting costs by up to 49%. Thus, allocating the optimal number of project resources using this method will reduce both its time and its cost.
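The event-driven core of such a simulation can be sketched with a priority queue of resource-release events. This minimal model (identical concreting sections, one resource type, fixed durations) illustrates only the mechanism, not the study's full multi-resource model:

```python
import heapq

def simulate(num_sections, num_crews, duration):
    """Minimal discrete-event simulation: sections queue for a limited
    pool of crews; the only events are crew releases."""
    events = []   # min-heap of (finish_time, crew_id)
    clock = 0
    started = 0
    # start as many sections as there are crews
    for crew in range(min(num_crews, num_sections)):
        heapq.heappush(events, (duration, crew))
        started += 1
    while events:
        clock, crew = heapq.heappop(events)      # next crew becomes free
        if started < num_sections:               # assign it the next section
            heapq.heappush(events, (clock + duration, crew))
            started += 1
    return clock                                 # project completion time
```

Running such a model over a range of crew counts yields the completion-time-versus-resources curves from which an optimal allocation can be read off.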

Keywords: simulation, massive concreting, discrete event simulation, resource management

Procedia PDF Downloads 124
476 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power outputs for all units of the power system network such that the total fuel cost is minimized while the operational requirement limits remain satisfied across the entire dispatch horizon. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method, but it still suffers from the problems common to gradient methods, which may become trapped at local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator test cases. The results obtained are compared with existing results based on QP, GAs and PSO, and show that differential evolution is superior in obtaining a combination of power loads that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to be capable of handling the nonlinearity of the ELD problem. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission and maximize the reliability of the power provided to the customers.
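A minimal DE/rand/1/bin loop for ELD with a power-balance penalty might look like the following. The three-unit quadratic cost coefficients are standard textbook-style test data used only for illustration and are not necessarily the paper's test cases:

```python
import random

# Illustrative 3-unit data: (a, b, c, P_min, P_max), cost_i = a + b*P + c*P^2
UNITS = [(561.0, 7.92, 0.001562, 150.0, 600.0),
         (310.0, 7.85, 0.00194, 100.0, 400.0),
         (78.0, 7.97, 0.00482, 50.0, 200.0)]
DEMAND = 850.0  # MW

def cost(p):
    fuel = sum(a + b * x + c * x * x for (a, b, c, _, _), x in zip(UNITS, p))
    return fuel + 1e6 * abs(sum(p) - DEMAND)  # penalty for power-balance violation

def de(pop_size=40, gens=300, F=0.6, CR=0.9, seed=1):
    """DE/rand/1/bin with bound clipping and a penalty-based constraint."""
    rng = random.Random(seed)
    lo = [u[3] for u in UNITS]
    hi = [u[4] for u in UNITS]
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(max(a[k] + F * (b[k] - c[k]), lo[k]), hi[k])
                     if rng.random() < CR else x
                     for k, x in enumerate(pop[i])]
            if cost(trial) < cost(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=cost)
```

The large penalty weight drives the population onto the power-balance constraint, after which selection minimizes the fuel cost alone.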

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 264
475 Topology Optimization of Heat Exchanger Manifolds for Aircraft

Authors: Hanjong Kim, Changwan Han, Seonghun Park

Abstract:

Heat exchanger manifolds in aircraft play an important role in evenly distributing the fluid entering through the inlet to the heat transfer unit. To achieve this, the manifold should be designed to be lightweight while withstanding high internal pressure. Therefore, this study aims at minimizing the weight of the heat exchanger manifold through topology optimization. For the topology optimization, the initial design space was created from the inner surface extracted from the currently used manifold model, with an outer surface measuring 243.42 mm × 74.09 mm × 65 mm. This design-space solid model was transformed into a finite element model with a maximum tetrahedron mesh size of 2 mm using ANSYS Workbench. Topology optimization was then performed by SIMULIA TOSCA under the boundary conditions of an internal pressure of 5.5 MPa and fixed supports at the rectangular inlet boundaries. The topology optimization produced a minimized final manifold volume of 7.3% of the initial volume, given the volume constraint (6% of the initial volume) and the objective function (maximizing manifold stiffness). The weight of the optimized model was 6.7% lower than that of the currently used manifold, and after smoothing the topology-optimized model, this difference is expected to be larger. The current optimized model has uneven thickness and a skeleton-shaped outer surface to reduce stress concentration. We are currently simplifying the optimized model shape with spline interpolations, reflecting the design characteristics of the optimized model's thickness and skeletal structure. This simplified model will be validated by calculating both stress distributions and weight reduction, and the validated model will then be manufactured using 3D printing processes.

Keywords: topology optimization, manifold, heat exchanger, 3D printing

Procedia PDF Downloads 225
474 The Role of Teacher Candidates' Beliefs in Their Development of Inclusive Teaching Practices

Authors: Charlotte Brenner, Fisayo Latilo, McKenna Causey

Abstract:

This study explores the transformation of teacher candidates' beliefs regarding inclusion and inclusive teaching practices during their instructional and practicum experiences in the Canadian context. With the increasing diversity of schools, the study investigates how teacher candidates' beliefs impact their implementation of inclusive teaching practices, which are essential for meeting diverse student needs. The research examines the influence of teacher education programs, transformative learning experiences, and inclusive practicum placements on teacher candidates' beliefs about inclusion. Using a multiple case study approach, the study assesses teacher candidates' initial beliefs, documents changes in these beliefs after coursework on inclusion, and explores the supports and constraints affecting belief development in both university and practicum settings. Preliminary findings suggest that teacher candidates generally hold positive beliefs about inclusion at the outset of their teacher education programs. However, coursework and practicum experiences significantly shape their understanding of diversity, strategies for inclusion, and awareness of broader social issues related to inclusive classrooms. The research underscores the critical role of teacher education programs in shaping teacher candidates' beliefs about inclusion and highlights the value of transformative learning experiences and inclusive practicum placements in enhancing their understanding of equity and inclusion. Continued research is necessary to identify specific elements within courses and practicum experiences that promote positive beliefs about inclusive teaching practices, ultimately contributing to the creation of more equitable classrooms and improved student outcomes.

Keywords: inclusion, beliefs, teacher candidates, inclusive teaching practices

Procedia PDF Downloads 55
473 A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures

Authors: Filippo Ranalli, Forest Flager, Martin Fischer

Abstract:

This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize the total installed cost, including material, fabrication and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The method uses a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that results in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To assess cost accurately, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied at each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, yielding average cost savings of up to 30% with comparable computational efficiency.

Keywords: cost-based structural optimization, cost-based topology and sizing, optimization, steel frame ground structure optimization, multidisciplinary optimization of steel structures

Procedia PDF Downloads 321
472 Adding a Few Language-Level Constructs to Improve OOP Verifiability of Semantic Correctness

Authors: Lian Yang

Abstract:

Object-oriented programming (OOP) is the dominant programming paradigm in today's software industry, and it has enabled average software developers to build millions of commercial-strength software applications during the Internet revolution of the past three decades. On the other hand, the lack of a strict mathematical model and of domain-constraint features at the language level has long perplexed the computer science academia and the OOP engineering community. This situation has resulted in inconsistent system quality and hard-to-understand designs in some OOP projects. The difficulty of fixing the current situation is also well known. Although the power of OOP lies in its unbridled flexibility and enormously rich data modeling capability, we argue that the ambiguity and the implicit facade surrounding the conceptual model of a class and an object should be eliminated as much as possible. We list five major usages of classes and propose to separate them with new language constructs. Using the well-established theories of sets and finite state machines (FSMs), we propose to apply certain simple, generic, and yet effective constraints at the OOP language level in an attempt to address the above-mentioned issues. The goal is to make OOP more theoretically sound and to help programmers uncover warning signs of irregularities and domain-specific issues early in the development stage and catch semantic mistakes at runtime, improving the correctness verifiability of software programs. That said, the aim of this paper is more practical than theoretical.
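A check-constraint of the kind proposed here can be approximated today at the library level. The sketch below emulates a CC with a Python descriptor so that a predicate is verified on every assignment; a true language-level construct, as the paper argues for, would enforce this without such boilerplate:

```python
class Checked:
    """Emulated 'check-constraint' (CC): the predicate is verified on
    every assignment, catching semantic mistakes at runtime."""
    def __init__(self, predicate, message):
        self.predicate, self.message = predicate, message

    def __set_name__(self, owner, name):
        self.name = "_" + name          # backing attribute on the instance

    def __set__(self, obj, value):
        if not self.predicate(value):
            raise ValueError(self.message)
        setattr(obj, self.name, value)

    def __get__(self, obj, owner=None):
        return getattr(obj, self.name)

class Account:
    # domain constraint stated once, enforced on every write
    balance = Checked(lambda v: v >= 0, "balance must be non-negative")

    def __init__(self, balance):
        self.balance = balance
```

Any assignment violating the constraint, such as `Account(-1)`, raises immediately at the point of the mistake rather than corrupting state silently.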

Keywords: new language constructs, set theory, FSM theory, user defined value type, function groups, membership qualification attribute (MQA), check-constraint (CC)

Procedia PDF Downloads 222
471 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other, varying areas for storage, machines or working areas are temporarily required. This is an important factor when arranging the products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all the product's required areas. With this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects under dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for possible adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas while considering logical and practical constraints. Finally, to quantify the potential of the developed method, test series results are presented, showing the possible increase in area utilization.
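The core feasibility test behind such a model is that two products' time-dependent footprints never overlap in any common time step. A minimal sketch with one axis-aligned rectangle per product per time step (a simplification of the model's general area sets):

```python
def footprints_overlap(a, b):
    """a, b: lists of (x, y, w, h) area requirements per time step for two
    products at fixed positions; they conflict if their required areas
    overlap at any common time step."""
    for (ax, ay, aw, ah), (bx, by, bw, bh) in zip(a, b):
        # standard axis-aligned rectangle intersection test
        if ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah:
            return True
    return False
```

An optimizer can exploit exactly this time dimension: two products may legally share floor space as long as their peak area requirements occur in different time steps, which is what fixed per-product reservations give away.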

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 211
470 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, together with inefficient consumption trends, has resulted in several economic and environmental threats: wasting billions of dollars, draining limited resources, and aggravating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that the proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic as well as customized pricing heuristics to minimize the peak demand and match demand with supply. We also propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide a theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
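The first problem, peak minimization by job scheduling, can be illustrated with a simple greedy baseline: place the most constrained jobs first, each into the least-loaded feasible slot. This is an illustrative heuristic, not the paper's proposed algorithms:

```python
def schedule_peak(jobs, horizon):
    """jobs: list of (power, release, deadline); each job occupies one
    time slot within [release, deadline). Greedily flatten the load curve."""
    load = [0.0] * horizon
    # most constrained (narrowest window), then largest power, first
    for power, release, deadline in sorted(
            jobs, key=lambda j: (j[2] - j[1], -j[0])):
        slot = min(range(release, deadline), key=lambda t: load[t])
        load[slot] += power
    return load
```

The resulting `max(load)` is the peak demand; the paper's heuristics aim to approximate the NP-hard optimum of exactly this quantity more tightly than such greedy baselines.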

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 72
469 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration (RTM) has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic workflow involves constructing a velocity model through interpretive model building, seismic tomography, or full waveform inversion, followed by the reverse-time propagation of the acquired seismic data together with the forward propagation of the original wavelet used in the acquisition. The methodology has matured from 2D imaging in simple media to handling full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With this velocity structure, USCT data can be migrated with the "bent-ray" method. Its seismic counterpart is Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and with irregular acquisition geometries. Reverse time migration, on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays caused by complex velocity structure, and has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to the real-time imaging normally required in day-to-day medical operations. However, with the improvement of computing technology, this computational bottleneck may cease to be a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry, but they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility makes RTM convenient to implement for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging that produces an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, using PSPI wavefield extrapolation and a piecewise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much lower noise level than the individual partial images.
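The phase-shift building block of PSPI can be sketched for a single temporal-frequency slice as follows. The wavenumber-domain input, the use of only two reference velocities, and the linear interpolation are simplifying assumptions for illustration:

```python
import cmath
import math

def phase_shift_extrapolate(spectrum, kx_values, omega, velocity, dz):
    """Downward-continue one temporal-frequency slice of a wavefield by dz
    with the constant-velocity phase-shift operator: the building block
    that PSPI interpolates between for laterally varying velocity."""
    out = []
    for W, kx in zip(spectrum, kx_values):
        kz2 = (omega / velocity) ** 2 - kx ** 2
        if kz2 >= 0:                                  # propagating wave
            out.append(W * cmath.exp(1j * math.sqrt(kz2) * dz))
        else:                                         # evanescent: attenuate
            out.append(W * math.exp(-math.sqrt(-kz2) * dz))
    return out

def pspi_slice(spectrum, kx_values, omega, ref_velocities, true_velocity, dz):
    """PSPI idea: extrapolate with reference velocities bracketing the true
    velocity, then interpolate the results (linearly, in this sketch)."""
    fields = [phase_shift_extrapolate(spectrum, kx_values, omega, v, dz)
              for v in ref_velocities]
    v_lo, v_hi = ref_velocities[0], ref_velocities[-1]
    t = (true_velocity - v_lo) / (v_hi - v_lo)
    return [(1 - t) * a + t * b for a, b in zip(fields[0], fields[-1])]
```

For a piecewise constant velocity model such as the one proposed for the breast, few reference velocities are needed, which keeps the PSPI cost modest.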

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 53
468 Heritage Impact Assessment Policy within Western Balkans, Albania

Authors: Anisa Duraj

Abstract:

As is generally acknowledged, cultural heritage is the weakest component in EIA studies. The role of heritage impact assessment (HIA) in development projects is often not accounted for, and where it is, HIA is treated as a reactive response rather than a solutions provider. Because of continuous development projects, heritage is in most cases disregarded and often put under threat. Cultural protection and development challenges call for prudent legal regulation and appropriate policy implementation. These challenges become even more acute in underdeveloped countries or endangered areas, which are generally characterized by numerous legal constraints. Therefore, the need for strategic proposals for HIA is of high importance. To establish HIA as a proactive operation in the impact assessment process and ensure that cultural heritage is covered throughout the EIA framework, an appropriate system for evaluating impacts should be provided. To obtain the required results, HIA must be part of a regional policy that addresses and guides development projects toward a proper evaluation of their impacts on heritage. To get a clearer picture of the existing gaps, but also of new possibilities for HIA, this paper focuses on the Western Balkans region and the changes it is undergoing. Given the continuous development pressure in the region and the aspiration of the Western Balkan countries to join the European Union (EU) as member states, attention should be paid to new development policies under the EU directives for conducting EIAs, and solid support is required for the restructuring of existing policies as well as for the implementation of the UN Agenda for the SDGs. In the framework of these emerging needs, if HIA is taken into account, the outcome would be an inclusive regional program that would help overcome the marginality of spaces and people.

Keywords: cultural heritage, impact assessment, SDGs, urban development, western Balkans, regional policy, HIA, EIA

Procedia PDF Downloads 76
467 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies aim to be an asset management revolution relative to classical cap-weighted indices. These strategies allow better control of portfolio risk factors and an optimized asset allocation by taking into account specific risks or the wish to generate alpha by outperforming a benchmark index ('Beta'). Among the many strategies in use, this paper focuses on four: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been demonstrated under conditions such as momentum or other market phenomena, suggesting a reconsideration of cap-weighting. To further increase return efficiency, it is proposed here to compare their strengths and weaknesses within time intervals corresponding to specific, identifiable market phases, in order to define adapted strategies for pre-specified situations. Results are presented as performance curves for different combinations compared to a benchmark. If a combination outperforms the applicable benchmark under well-defined actual market conditions, it is preferred. It is shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available market data, provide very interesting optimal results, with higher return performance and lower risk. Such combinations have not yet been fully exploited, which justifies the present approach of identifying the relevant elements that characterize them.
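As an illustration of the first of the four strategies, the minimum variance weights admit a closed-form expression, w = C⁻¹1 / (1ᵀC⁻¹1), which can be sketched as below. This is a generic textbook sketch, not the authors' implementation; the covariance matrix is a made-up example.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form (unconstrained) minimum variance weights:
    w = C^-1 1 / (1' C^-1 1), so the weights sum to one."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def equal_weights(n):
    """Equal-Weighted Portfolio: 1/n in each asset."""
    return np.full(n, 1.0 / n)

# Hypothetical covariance matrix for three uncorrelated assets
cov = np.array([[0.04, 0.00, 0.00],
                [0.00, 0.09, 0.00],
                [0.00, 0.00, 0.01]])
w = min_variance_weights(cov)
print(w)  # for a diagonal covariance, weights are proportional to 1/variance
```

Note that this unconstrained form allows short positions in general; practical Smart Beta products typically add long-only and concentration constraints, which require a numerical optimizer instead.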

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 319
466 Critical Success Factors Quality Requirement Change Management

Authors: Jamshed Ahmad, Abdul Wahid Khan, Javed Ali Khan

Abstract:

Managing changes to software quality requirements is a difficult task in software engineering. Rejecting incoming changes results in user dissatisfaction, while accommodating too many requirement changes may delay product delivery. Poor requirements management is widely considered a primary cause of software failure, and the task becomes even more challenging in global software outsourcing. Identifying success factors in quality requirement change management is needed today because of frequent change requests from end-users. In this research study, success factors are identified and scrutinized with the help of a systematic literature review (SLR). In total, 16 success factors were identified that significantly impact software quality requirement change management. The findings show that Proper Requirement Change Management, Rapid Delivery, Quality Software Product, Access to Market, Project Management, Skills and Methodologies, Low Cost/Effort Estimation, Clear Plan and Road Map, Agile Processes, Low Labor Cost, User Satisfaction, Communication/Close Coordination, Proper Scheduling and Time Constraints, Frequent Technological Changes, Robust Model, and Geographical Distribution/Cultural Differences are the key factors that influence software quality requirement change. The recognized success factors were validated with the help of various research methods, i.e., case studies, interviews, surveys, and experiments, and were then analyzed by continent, database, company size, and time period. Based on these findings, requirement changes can be implemented in a better way.

Keywords: global software development, requirement engineering, systematic literature review, success factors

Procedia PDF Downloads 183
465 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can yield fast and cost-efficient drilling operations; however, high ROPs may induce unintended events that lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. The top-rated wells, ranked by ROP, are then identified; those wells are filtered based on NPT incidents, and a cross-plot of the controllable dynamic drilling parameters per ROP value is generated. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes, before drilling commences, by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. These adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat-map as a function of ROP; a more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built using data points from the 20 top-performing historical wells. The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. These factors position the system to work for any newly drilled well in a developing field.
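The IDW step of the first phase can be sketched as follows. This is a minimal illustration, not the authors' code; the offset-well coordinates and parameter values are hypothetical placeholders.

```python
import math

def idw_estimate(offset_wells, target_xy, power=2.0):
    """Estimate drilling parameters at a target location as an
    inverse-distance-weighted mean of offset-well values.

    offset_wells: list of ((x, y), {"WOB": ..., "RPM": ..., "GPM": ...})
    """
    weights, params = [], []
    for (x, y), values in offset_wells:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if d == 0:                      # target coincides with an offset well
            return dict(values)
        weights.append(1.0 / d ** power)
        params.append(values)
    total = sum(weights)
    return {k: sum(w * p[k] for w, p in zip(weights, params)) / total
            for k in params[0]}

# Hypothetical offset wells: (x, y) location and best-practice parameters
wells = [((0.0, 0.0), {"WOB": 20.0, "RPM": 120.0, "GPM": 600.0}),
         ((4.0, 0.0), {"WOB": 30.0, "RPM": 140.0, "GPM": 650.0})]
est = idw_estimate(wells, (1.0, 0.0))
print(est)  # estimates weighted toward the nearer well
```

With power=2, a well three times farther away contributes one ninth of the weight, which matches the paper's idea of conditioning the mean on physical distance.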

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 106
464 Failure of Agriculture Soil following the Passage of Tractors

Authors: Anis Eloud, Sayed Chehaibi

Abstract:

Compaction of agricultural soils as a result of the passage of heavy machinery over fields is a problem that concerns many agronomists and farmers, since it results in a yield loss for most crops. To remedy this, and to address the broader challenge of food security, the process of soil degradation must be studied and understood. The present review is devoted to understanding the effect of repeated passes over agricultural land. The experiments were performed on a plot in the area of ESIER, characterized by a clay texture, in order to quantify the soil compaction caused by the tractor wheels during repeated passes. The test tractor was a CASE with a power of 110 hp and a total mass of 5470 kg, of which 3500 kg was carried by the rear axle and 1970 kg by the front axle. The state of soil compaction was characterized by measuring its penetration resistance with a direct manual-reading penetrometer, together with the density and permeability of the soil. Soil moisture was measured jointly. Measurements were made in the initial state before the tractor passed and after each pass, varying from 1 to 7, along the wheel track; the rear wheels were inflated to 1.5 bar and water-ballasted to valve level, and the front wheels were inflated to 4 bar. The passes were spaced on average one week apart. The results show that wheel traffic on a tilled agricultural soil leads to compaction, and that compaction increases with the number of passes, especially in the upper 15 cm of the soil. The first pass has the greatest effect. However, the effect of subsequent passes does not follow a definite law, owing to the complex behavior of granular media and to the history of tillage and loading the soil has undergone since its formation.

Keywords: wheel traffic, tractor, soil compaction, wheel

Procedia PDF Downloads 461
463 Landscape Planning and Development of Integrated Farming Based on Low External Input Sustainable Agriculture (LEISA) in Pangulah Village, Karawang County, West Java, Indonesia

Authors: Eduwin Eko Franjaya, Yesi Hendriani Supartoyo

Abstract:

Integrated farming based on the LEISA concept, one of the systems or techniques of sustainable agriculture, has provided opportunities to increase farmers' income. The system also has a positive impact on the environment. However, integrated farming has so far been developed only on a small/site scale. Development on a larger scale is warranted, considering the number of potential resources in the village that can be integrated with each other. The aim of this research is to develop the small-scale integrated farming landscape studied in previous work into a village-scale one. The method used follows the rules of scientific planning in landscape architecture. The initial phase begins with an inventory of the existing condition of the village, conducted by survey. The second stage is an analysis of the potentials and constraints of the village based on the survey results. The next stage is concept-making, consisting of the basic concept, the design concept, and the development concept. The basic concept is integrated farming based on LEISA. The design concept is based on the commodities developed in the village. The development concept consists of the space concept, the circulation concept, the vegetation and commodities concept, and the production system concept. The last stage is the planning process, which produces a LEISA-based Site Plan at the village scale; the Site Plan is the end product of this research. The results are expected to increase the income and welfare of the farmers in the village, and the site can be developed into a tourism area for integrated farming.

Keywords: integrated farming, LEISA, site plan, sustainable agriculture

Procedia PDF Downloads 426
462 Back to Nature: Addressing the German Nudist Movement’s Colonial Past and Its Repercussions

Authors: Saskia Köbschall

Abstract:

The idea of ‘being close to nature’ and the ways of achieving this proximity are socially and historically constructed, as are notions of nakedness and nudity. During the colonial period, nudity and clothedness functioned as instruments of racial domination. Nakedness became central to the colonialists’ thinking, to their binary of the ‘civilized’ and those ‘close to nature’, turning the perceived degree of unclothedness into a measure of ‘civilization’. While being ‘one with nature’ continued to be a criterion of backwardness in the colonies, it emerged as a futuristic and avant-garde endeavor in the metropole: in Germany, at the height of its colonial expansion, the Life Reform Movement (Lebensreformbewegung) called for the liberation of the white body from the ‘constraints of civilization’, for its ‘return to nature’ via practices such as nudism. Despite this simultaneity, scholarship on the life reform movement, and on the nudist movement in particular, does not address the movement's colonial past or its repercussions in the present. Taking the biography of the prominent life reformist Hans Paasche (1881 - 1920) as a starting point, this paper explores the work of imperial legacies in the history and present of the German nudist movement. Paasche started his career as a German colonial officer, participating in the brutal suppression of the Maji-Maji uprising (1905/06), which claimed the lives of nearly 200,000 people. Once a passionate game hunter, he later became a known nature conservationist; once a self-proclaimed explorer of Africa, he became one of the most prominent advocates of nudism and vegetarianism. The paper joins conceptual and historical research in order to address the German nudist movement’s colonial past and understand its repercussions in the present.

Keywords: Germany, life reform, colonialism, archives, nudity, nature

Procedia PDF Downloads 65
461 Socio-Economic Child's Wellbeing Impasse in South Africa: Towards a Theory-Based Solution Model

Authors: Paulin Mbecke

Abstract:

Research Issue: Under economic constraints, the socio-economic conditions of households worsen, relegating children's wellbeing to the bottom of many governments' and households' priority lists. In such situations, many governments fail to rebalance priorities in providing services such as education, housing and social security, which are prerequisites for the wellbeing of children. Consequently, many households struggle to meet basic needs, especially those of children. Although economic conditions play a crucial role in creating prosperity or poverty in households, and therefore wellbeing or misery for children, they are not the sole cause. Research Insights: The review of the South African Index of Multiple Deprivation and the South African Child Gauge establishes the extent to which economic conditions impact the wellbeing or misery of children. The analysis of social, cultural, environmental and structural theories demonstrates that non-economic factors contribute equally to the wellbeing or misery of children, yet they are disregarded. In addition, the assessment of a child abuse database shows only a weak correlation between economic factors (prosperity or poverty) and a child's wellbeing or misery. Theoretical Implications: Through critical social research theory and modelling, the paper proposes a Theory-Based Model that combines different factors to facilitate the understanding of children's wellbeing or misery. Policy Implications: The proposed model assists in broad policy- and decision-making and review processes in promoting children's wellbeing and in preventing, intervening in and managing children's misery with regard to education, housing, and social security.

Keywords: children, child’s misery, child’s wellbeing, household’s despair, household’s prosperity

Procedia PDF Downloads 265
460 Resource Allocation and Task Scheduling with Skill Level and Time Bound Constraints

Authors: Salam Saudagar, Ankit Kamboj, Niraj Mohan, Satgounda Patil, Nilesh Powar

Abstract:

Task assignment and scheduling is a challenging operations research problem when the number of resources is limited and the number of tasks is comparatively high. The Cost Management team at Cummins needs to assign tasks based on deadlines and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint on the resources: tasks must be assigned according to individual skill levels, which may vary between tasks. Another constraint on scheduling is that tasks should be evenly distributed in terms of working hours, which adds further complexity to the problem. The proposed greedy approach to the assignment and scheduling problem first orders tasks by management priority and then by closest deadline. This is followed by an iterative selection, for each task, of an available resource with the least total allocated working hours, i.e., finding the local optimal choice for each task with the goal of approaching the global optimum. The greedy task allocation is compared with a variant of the Hungarian algorithm, and it is observed that the proposed approach gives an even allocation of working hours among the resources. A comparative study of the proposed approach against manual task allocation is also carried out, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app was created for the greedy assignment and scheduling approach, and tasks with a horizon beyond 2 months that initially waited in a queue without a delivery date are now analyzed effectively by the business, with expected timelines for completion.
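The greedy rule described above can be sketched as follows. This is a simplified illustration with hypothetical task and resource fields, not the team's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int        # lower value = higher management priority
    deadline: int        # e.g., days from now
    hours: float
    skill: int           # minimum skill level required

@dataclass
class Resource:
    name: str
    skill: int
    allocated: float = 0.0

def greedy_assign(tasks, resources):
    """Order tasks by priority, then deadline; give each task to the
    qualified resource with the least allocated hours (local optimum)."""
    plan = {}
    for t in sorted(tasks, key=lambda t: (t.priority, t.deadline)):
        eligible = [r for r in resources if r.skill >= t.skill]
        if not eligible:
            continue                      # no qualified resource available
        r = min(eligible, key=lambda r: r.allocated)
        r.allocated += t.hours
        plan[t.name] = r.name
    return plan

tasks = [Task("A", 1, 5, 8, 2), Task("B", 1, 3, 4, 1), Task("C", 2, 2, 6, 1)]
people = [Resource("p1", 2), Resource("p2", 1)]
print(greedy_assign(tasks, people))
```

Picking the least-loaded eligible resource at each step is what drives the even distribution of working hours that the abstract reports.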

Keywords: assignment, deadline, greedy approach, Hungarian algorithm, operations research, scheduling

Procedia PDF Downloads 126
459 Academic Education and Internship towards Architecture Professional Practice

Authors: Sawsan Saridar Masri, Hisham Arnaouty

Abstract:

Architecture both defines and is defined by social, cultural, political and financial constraints: this is where the discipline and the profession of architecture meet. This mutual influence evolves wherever interventions in the built environment are conceived, and it can be strengthened or weakened by the many ways in which the practice of architecture can be undertaken. The more familiar we are with the concerns and factors that control what can be built, the greater the opportunities to propose and make appropriate architecture. Clearly, the criteria in any qualification policy should permit flexibility of approach and will, for reasons including cultural choice, political issues, and so on, vary significantly from country to country. However, the weighting of the various criteria has to ensure adequate standards both in the educational system and in professional training. This paper develops, deepens and questions the regulatory entry routes to the professional practice of architecture in the Arab world. It is also intended to provide an informed basis for strategies for conventional and unconventional models of practice, in preparation for the next stages of an architect's work experience and professional experience. With the objective of promoting adequate built-environment practice in architecture, a comprehensive analysis of various pathways of access to the profession is carried out on selected case studies, encompassing examples from across the world. The review of these case studies allows the creation of a comprehensive picture of the conditions for qualification of practitioners of the built environment in the Middle Eastern countries and the Arab world. The investigation considers the following aspects: professional title and domain of practice, accreditation of courses, internship and professional training, professional examination, and continuing professional development.

Keywords: architecture, internship, mobility, professional practice

Procedia PDF Downloads 528
458 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle between the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered by X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from coronal X-rays of scoliotic subjects. The data set consisted of 609 coronal chest X-rays of scoliotic subjects, divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. To normalize the input data, each image was resized using bilinear interpolation to 500 × 187 pixels, and the pixel intensities were scaled to lie between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python 3.7.3 and TensorFlow 1.13.1.
The activation function (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The best-performing network used ReLU neurons, three hidden layers, and 100 neurons per layer. Its average mean squared error was 222.28 ± 30 degrees², and its average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. These results show that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
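A dense regression network of the kind described, three ReLU hidden layers of 100 neurons taking a flattened 500 × 187 image and emitting one angle, can be sketched as a plain NumPy forward pass. This is an illustrative toy, not the study's TensorFlow code; the random weights and the target angle are made-up placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers, activation=relu):
    """Forward pass of a fully connected regression network:
    hidden layers use `activation`; the output layer is linear."""
    for W, b in layers[:-1]:
        x = activation(x @ W + b)
    W, b = layers[-1]
    return x @ W + b          # single linear output: the predicted angle

rng = np.random.default_rng(0)
n_in, n_hidden = 500 * 187, 100   # flattened 500x187 image, 100 neurons/layer
layers = [(rng.normal(0, 0.01, (n_in, n_hidden)), np.zeros(n_hidden))]
layers += [(rng.normal(0, 0.1, (n_hidden, n_hidden)), np.zeros(n_hidden))
           for _ in range(2)]     # three hidden layers in total
layers += [(rng.normal(0, 0.1, (n_hidden, 1)), np.zeros(1))]

x = rng.random((1, n_in))         # one normalized image, intensities in [0, 1]
angle = forward(x, layers)
mse = (angle.item() - 25.0) ** 2  # squared error vs. a labeled Cobb angle
print(angle.shape)
```

Training would then adjust the weights by stochastic gradient descent on this MSE, which in the study was delegated to TensorFlow.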

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 110
457 Positive Energy Districts in the Swedish Energy System

Authors: Vartan Ahrens Kayayan, Mattias Gustafsson, Erik Dotzauer

Abstract:

The European Union is introducing the positive energy district concept, with the goal of reducing overall carbon dioxide emissions. Other studies have already mapped the make-up of such districts and reviewed their definitions and positioning. The Swedish energy system is unique in Europe due to its low-carbon electricity and heat sources and its high uptake of district heating. The goal of this paper is to start a discussion about how the concept of positive energy districts can best be applied in the Swedish context and meet its mitigation goals. To explore how these differences affect the formation of positive energy districts, two cases were analyzed for their methods and for how they integrate into the Swedish energy system: a district in Uppsala with a focus on energy, and another in Helsingborg with a focus on climate. The Uppsala case uses primary energy calculations, which can be criticized but which draw a virtual boundary that allows the surrounding system to be considered. The Helsingborg district has a complex methodology for accounting for the life-cycle emissions of the neighborhood. It succeeds in considering the energy balance on a monthly basis, but it can be problematized as creating sub-optimized systems by setting tight geographical constraints. The discussion on shaping the definitions and methodologies of positive energy districts is ongoing in Europe and in Sweden. We identify three pitfalls that must be avoided if positive energy districts are to meet their mitigation goals in the Swedish context: the goal of pushing out fossil fuels is not relevant in the current energy system; the mismatch between summer electricity production and winter energy demand should be addressed; and further implementations should consider collaboration with the established district heating grid.

Keywords: positive energy districts, energy system, renewable energy, European Union

Procedia PDF Downloads 63
456 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem

Authors: Bidzina Matsaberidze

Abstract:

It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when an expert and his knowledge are involved in estimating the MAGDM parameters. We consider an emergency decision-making model in which expert assessments of humanitarian aid distribution centers (HADCs) are represented as q-rung orthopair fuzzy numbers, and the data structure is described in terms of bodies of evidence. Based on focal probability construction and the experts' evaluations, an objective function, a ranking index for distribution center selection, is constructed. Our approach to solving the resulting bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs among the service centers; some constraints are also taken into account while generating this matrix. In the second phase, using this matrix and our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
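The second-phase selection of Pareto-optimal partitionings can be illustrated with a generic non-dominance filter over bicriteria objective values. This is a hypothetical sketch: the candidate list and the two objectives are made up, and this is not the authors' exact algorithm.

```python
def pareto_front(candidates):
    """Keep candidates not dominated in a bicriteria minimization:
    o dominates c if o is <= c in both objectives and differs in value."""
    front = []
    for c in candidates:
        dominated = any(o[0] <= c[0] and o[1] <= c[1] and o != c
                        for o in candidates)
        if not dominated:
            front.append(c)
    return front

# Hypothetical (cost, unmet-demand) values of candidate partitionings
candidates = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
print(pareto_front(candidates))
```

Here (4, 6) and (6, 6) are discarded because (3, 5) and (4, 4), respectively, are at least as good in both objectives, leaving the trade-off set for the decision-maker.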

Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions

Procedia PDF Downloads 72
455 Tectono-Thermal Evolution of Ningwu-Jingle Basin in North China Craton: Constraints from Apatite (U–Th-Sm)/He and Fission Track Thermochronology

Authors: Zhibin Lei, Minghui Yang

Abstract:

The Ningwu-Jingle Basin is a structural syncline that has undergone a complex tectono-thermal history since the Cretaceous. It stretches along the strike of the northern Lvliang Mountains, the most important mountains in the central and western North China Craton. Mesozoic units make up the core of the Ningwu-Jingle Basin, with pre-Mesozoic units making up its flanks. The available low-temperature thermochronology implies that the Ningwu-Jingle Basin has experienced two stages of uplift: 94±7 Ma to 111±8 Ma (Albian to Cenomanian) and 62±4 Ma to 75±5 Ma (Danian to Maastrichtian). In order to constrain its tectono-thermal history in the Cenozoic, both apatite (U-Th-Sm)/He and fission track dating analyses were applied to 3 Middle Jurassic and 3 Upper Triassic sandstone samples. The central fission track ages range from 74.4±8.8 Ma to 66.0±8.0 Ma (Campanian to Maastrichtian), which matches well with previous data. The central He ages range from 49.1±3.0 Ma to 20.1±1.2 Ma (Ypresian to Burdigalian). Inverse thermal modeling was carried out based on both the apatite fission track and (U-Th-Sm)/He data. The thermal history obtained reveals that all 6 sandstone samples crossed the high-temperature limit of the fission track partial annealing zone by the uppermost Cretaceous, and that of the He partial retention zone by the uppermost Eocene to early Oligocene. The result indicates that the central and western North China Craton was not stable in the Cenozoic.

Keywords: apatite fission track thermochronology, apatite (u–th)/he thermochronology, Ningwu-Jingle basin, North China craton, tectono-thermal history

Procedia PDF Downloads 235
454 Assessing the Feasibility of Commercial Meat Rabbit Production in the Kumasi Metropolis of Ghana

Authors: Nana Segu Acquaah-Harrison, James Osei Mensah, Richard Aidoo, David Amponsah, Amy Buah, Gilbert Aboagye

Abstract:

The study aimed to assess the feasibility of commercial meat rabbit production in the Kumasi Metropolis of Ghana. Structured and unstructured questionnaires were used to obtain information from two hundred meat consumers and 15 meat rabbit farmers. Data were analyzed using the Net Present Value (NPV), Internal Rate of Return (IRR) and Benefit-Cost Ratio (BCR)/Profitability Index (PI) techniques, percentages, and chi-square contingency tests. The study found that the current demand for rabbit meat is low (36%). The desirable nutritional attributes of rabbit meat and other socio-economic characteristics of meat consumers make the potential demand for rabbit meat high (69%). It was estimated that GH¢5,292 (approximately $2,672) is needed as start-up capital for a 40-doe meat rabbit farm in the Kumasi Metropolis. The cost of breeding animals, housing and equipment formed 12.47%, 53.97% and 24.87%, respectively, of the initial estimated capital. A Net Present Value of GH¢5,910.75 (approximately $2,984) was obtained at the end of the fifth year, with an internal rate of return of 70% and a profitability index of 1.12. The major constraints identified in meat rabbit production were the low price of rabbit meat, shortage of fodder, pests and diseases, high cost of capital, high cost of operating materials, and veterinary care. Based on the analysis, it was concluded that meat rabbit production is feasible in the Kumasi Metropolis of Ghana. The study recommends mass advertisement, farmer associations, and the adoption of new technologies in the production process to enhance productivity.
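The appraisal measures used in the study can be sketched as follows. The cash flows below are hypothetical placeholders for illustration; the study's actual yearly figures are not reproduced here.

```python
def npv(rate, cash_flows):
    """Net Present Value: cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=10.0, tol=1e-9):
    """Internal Rate of Return by bisection: the rate where NPV = 0
    (assumes a single sign change, outlay first then inflows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def profitability_index(rate, cash_flows):
    """PI = present value of future inflows / initial outlay."""
    pv_inflows = npv(rate, [0.0] + list(cash_flows[1:]))
    return pv_inflows / -cash_flows[0]

# Hypothetical five-year project: initial outlay, then yearly net inflows
flows = [-5292.0, 1500.0, 2000.0, 2500.0, 3000.0, 3500.0]
print(round(npv(0.2, flows), 2), round(irr(flows), 4))
```

A project is judged feasible when NPV > 0 at the chosen discount rate, IRR exceeds the cost of capital, and PI exceeds 1, which is the logic behind the figures reported in the abstract.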

Keywords: feasibility, commercial meat rabbit, production, Kumasi, Ghana

Procedia PDF Downloads 105