Search results for: deterministic scheduling
143 GIS Model for Sanitary Landfill Site Selection Based on Geotechnical Parameters
Authors: Hecson Christian, Joel Macwan
Abstract:
Landfill site selection in an urban area is a critical issue in the planning process. With the growth of urbanization, it has a mammoth impact on the economy, ecology, and environmental health of the region. Outsized amounts of waste are produced, and the problem worsens every day. Hence, selecting an ideal site for a sanitary landfill is a challenge for urban planners and solid waste managers. A disposal site is a function of many parameters. Among them, geotechnical parameters are vital, as they relate to the surrounding open land. Moreover, accessible, safe, and acceptable land is scarce. Therefore, in this paper, geotechnical parameters are used to develop a GIS model to identify an ideal location for landfill purposes. The metropolitan city of Surat is highly populated and one of the fastest-growing urban areas in India. The research objectives are to conduct field experiments to collect data and to transfer the findings to a GIS platform to develop a model that identifies ideal locations. Planners' preferences were obtained, and the analytic hierarchy process (AHP) was used to find the weight of each parameter. GIS and multi-criteria decision analysis (MCDA) techniques are integrated to improve decision-making, providing an environment for the transformation and combination of geographical data and planners' preferences. GIS performs deterministic overlay and buffer operations; MCDA methods evaluate alternatives based on the decision-makers' subjective values and priorities. The results identify several alternative locations. An economic analysis of the selected site from an operational point of view is not included in this research.
Keywords: GIS, AHP, MCDA, geotechnical
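As a rough illustration of the AHP weighting step described above, the following sketch derives criterion weights from the principal eigenvector of a pairwise comparison matrix. The 3x3 judgment matrix and the criterion examples are hypothetical placeholders, not the planners' actual survey responses.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix for three geotechnical
# criteria (e.g., permeability, depth to groundwater, bearing capacity).
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# AHP weights = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency ratio (random index RI = 0.58 for n = 3, Saaty's table);
# CR < 0.1 is the usual acceptance threshold.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
```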
Procedia PDF Downloads 145
142 [Keynote Talk]: The Challenges and Solutions for Developing Mobile Apps in a Small University
Authors: Greg Turner, Bin Lu, Cheer-Sun Yang
Abstract:
As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, a mobile app for the PA Common Trees, Pests, and Pathogens used in the field as a reference tool allows middle school students to learn about trees and associated pests/pathogens without carrying a textbook. In the past, some studies have examined the Mobile Application Development Life Cycle (MADLC), including traditional models such as the waterfall model or the more recent agile methods; others have studied issues related to the software development process. Very little research addresses the development of three heterogeneous mobile systems simultaneously in a small university where the availability of developers is an issue. In this paper, we propose a hybrid of the waterfall model and the agile model, known in practice as the Relay Race Methodology (RRM), to reflect the concept of racing and relaying in scheduling. Based on the development project, we observe that the modeling of the transition between any two phases manifests naturally. Thus, we claim that the RRM model can provide a de facto, rather than a de jure, basis for the core concept in the MADLC. In this paper, the background of the project is introduced first; then, the challenges are pointed out, followed by our solutions. Finally, the lessons learned and future work are presented.
Keywords: agile methods, mobile apps, software process model, waterfall model
Procedia PDF Downloads 409
141 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials
Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié
Abstract:
Mould growth in living environments has been widely reported for decades throughout the world. A higher level of moisture in housing can lead to building degradation and chemical emissions from construction materials, as well as enhancing mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, seen as an effective lever to reduce the environmental impact of the building sector. Bio-based materials are also hygroscopic: when in contact with the moist air of the surrounding environment, their porous structures capture water molecules more readily, thus providing a more suitable substrate for mould growth. Many studies have been conducted to develop reliable models that predict mould appearance, growth, and decay over many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth in experimental settings as well as in occupied buildings. The difficulty of accounting for the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified specifically for the prediction of mould appearance and growth. Several studies published in the literature, from the agri-food and automotive sectors, were selected and analysed, as the methodologies they deploy appeared promising.
Keywords: bio-based materials, mould growth, numerical prediction, reliability approach
Procedia PDF Downloads 46
140 Reduction of Plutonium Production in Heavy Water Research Reactor: A Feasibility Study through Neutronic Analysis Using MCNPX2.6 and CINDER90 Codes
Authors: H. Shamoradifar, B. Teimuri, P. Parvaresh, S. Mohammadi
Abstract:
One of the main characteristics of heavy water moderated reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a heavy water research reactor. Among the many ways to reduce plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural uranium to mixed thorium-uranium fuel. The main fissile nucleus in thorium-uranium fuels is U-233, which is produced after neutron absorption by Th-232, so thorium-uranium fuels have some known advantages over uranium fuels. Accordingly, four thorium-uranium fuels with different composition ratios were chosen for our simulations: a) 10% UO2-90% ThO2 (enrichment = 20%); b) 15% UO2-85% ThO2 (enrichment = 10%); c) 30% UO2-70% ThO2 (enrichment = 5%); d) 35% UO2-65% ThO2 (enrichment = 3.7%). Natural uranium oxide (UO2) is considered the reference fuel; in other words, all calculated data are compared with the corresponding data for uranium fuel. Neutronic parameters were calculated and used as comparison parameters. All calculations were performed by a Monte Carlo (MCNPX2.6) steady-state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational data showed that thorium-uranium fuels with the four fissile composition ratios can satisfy the safety and operating requirements of a heavy water research reactor. Furthermore, thorium-uranium fuels have very good proliferation resistance and consume less fissile material than uranium fuels over the same reactor operation time. Using mixed thorium-uranium fuels reduced the long-lived, α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.
Keywords: heavy water reactor, burn-up, minor actinides, neutronic calculation
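The U-233 breeding path mentioned above can be sketched as a simplified production chain under constant flux. The flux level is an assumption, and the short-lived Th-233 step, Pa-233 neutron absorption, and U-233 consumption are all neglected, so this is only a qualitative illustration, not a substitute for the MCNPX2.6/CINDER90 calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

PHI = 1e14                 # thermal neutron flux, n/cm^2/s (assumed)
SIG_TH232 = 7.34e-24       # Th-232 thermal capture cross-section, cm^2
LAM_PA233 = np.log(2) / (26.975 * 86400)  # Pa-233 decay constant, 1/s

def chain(t, y):
    # y = [Th-232, Pa-233, U-233] inventories (normalized);
    # Th-233 (22 min half-life) is folded directly into Pa-233.
    n_th, n_pa, n_u = y
    capture = SIG_TH232 * PHI * n_th
    return [-capture, capture - LAM_PA233 * n_pa, LAM_PA233 * n_pa]

sol = solve_ivp(chain, (0.0, 365 * 86400), [1.0, 0.0, 0.0])
print("U-233 fraction after one year:", round(sol.y[2, -1], 4))
```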
Procedia PDF Downloads 245
139 Probabilistic Analysis of Bearing Capacity of Isolated Footing Using Monte Carlo Simulation
Authors: Sameer Jung Karki, Gokhan Saygili
Abstract:
The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, in which the non-uniformity and inhomogeneity of soil and site properties are not accounted for; hence, probability calculus and statistical analysis are not directly applied in conventional foundation engineering practice. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters, but this factor is estimated from subjective judgement rather than objective facts and is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on clayey soil is carried out using the Monte Carlo simulation method, and the simulated model is compared with the traditional discrete model. The bearing capacity obtained from the simulated model was found to be higher than that from the discrete model, which was verified by sensitivity analysis: as the number of simulations increased, there was a significant percentage increase in bearing capacity relative to the discrete value. The bearing capacity values obtained by simulation were found to follow a normal distribution. With the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field than with the simulation-derived factor of safety of 1.5 (0.15866). This means the traditional factor of safety yields a bearing capacity that is less likely to be available in the field, which underscores its subjective nature; a probabilistic method is therefore suggested to address the variability of the input parameters in bearing capacity equations.
Keywords: bearing capacity, factor of safety, isolated footing, Monte Carlo simulation
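A minimal sketch of the Monte Carlo procedure, assuming a Terzaghi-type phi = 0 expression for a footing on clay (q_ult = Nc*c + gamma*Df) and illustrative input statistics rather than the study's site data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                # Monte Carlo realizations

# Input statistics below are illustrative assumptions.
NC = 5.7                                   # bearing capacity factor, phi = 0
c = rng.normal(50.0, 10.0, N)              # undrained cohesion, kPa
gamma = rng.normal(18.0, 1.0, N)           # unit weight, kN/m^3
DF = 1.5                                   # footing depth, m (deterministic)

q_ult = NC * c + gamma * DF                # one capacity per realization
print(f"mean = {q_ult.mean():.0f} kPa, std = {q_ult.std():.0f} kPa")

# Probability that the field capacity falls below the allowable value
# implied by a given factor of safety on the mean capacity.
for fs in (3.0, 1.5):
    q_allow = q_ult.mean() / fs
    p = np.mean(q_ult <= q_allow)
    print(f"FS = {fs}: q_allow = {q_allow:.0f} kPa, "
          f"P(q_ult <= q_allow) = {p:.5f}")
```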
Procedia PDF Downloads 187
138 Rethinking Riba in an Agency Theoretic Framework: Islamic Banking and Finance beyond Sophistry
Authors: Muhammad Arsalan
Abstract:
The efficiency of a financial intermediation system is assessed by its ability to achieve allocative efficiency, asset transformation, and subsequent economic development. Islamic Banking and Finance (IBF) was conceived to serve as an alternative financial intermediation system adherent to the injunctions of Islam. A critical appraisal of the state of contemporary IBF reveals that it neither fulfills the aspirations of Islamic rhetoric nor is efficient in terms of asset transformation and economic development. This paper is an intuitive pursuit to explore the economic rationale of the established principles of IBF and the reasons for the persistent divergence that leaves IBF accused of ruses and sophistry. Disentangling the varying viewpoints, the underdevelopment of IBF is attributed to the misinterpretation of Riba, which has been explicated through a narrow fiqhi and legally deterministic approach. The paper presents a critical account of how incorrect conceptualization of the key injunction on Riba steered the flawed institutionalization of an Islamic financial intermediation system. It also emphasizes the misinterpretation of the ontological and epistemological sources of Islamic law (primarily on Riba) that explains the perennial economic underdevelopment of the Muslim world. Deeming 'a collaborative and dynamic Ijtihad' the elixir, this paper insists on the exigency of redefining Riba, i.e., adopting a definition that incorporates modern modes of economic cooperation and the contemporary financial intermediation ecosystem. Finally, Riba is articulated in an agency-theoretic framework to eschew the expropriation of wealth and assure the protection of property rights, aimed at realizing the twin goals of a) Shari'ah adherence in true spirit and b) financial and economic development of the Muslim world.
Keywords: agency theory, financial intermediation, Islamic banking and finance, ijtihad, economic development, Riba, information asymmetry
Procedia PDF Downloads 139
137 Application of BIM Model Data to Estimate ROI for Robots and Automation in Construction Projects
Authors: Brian Romansky
Abstract:
There are many practical, commercially available robots and semi-autonomous systems currently available for use in a wide variety of construction tasks. Adoption of these technologies has the potential to reduce the time and cost of delivering a project, reduce variability and risk in delivery time, increase quality, and improve safety on the job site. These benefits come at a cost for equipment rental or contract fees, access to specialists to configure the system, and the time needed for set-up and support of the machines while in use. Calculation of the net ROI (return on investment) requires detailed information about the geometry of the site, the volume of work to be done, and the overall project schedule, as well as data on the capabilities and past performance of available robotic systems. Assembling the required data and comparing the ROI for several options is complex and tedious, so many project managers will only consider the use of a robot in targeted applications where the benefits are obvious, resulting in low levels of adoption of automation in the construction industry. This work demonstrates how data already resident in many BIM (Building Information Model) projects can be used to automate ROI estimation for a sample set of commercially available construction robots. Calculations account for set-up and operating time, along with scheduling the support tasks required while the automated technology is in use. Configuration parameters allow for prioritization of time, cost, or safety as the primary benefit of the technology. A path toward integration and use of automatic ROI calculation with a database of available robots in a BIM platform is described.
Keywords: automation, BIM, robot, ROI
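A minimal sketch of the ROI comparison described above, with hypothetical robot rates, productivities, and a BIM-derived work quantity (none of these figures come from the paper):

```python
from dataclasses import dataclass

@dataclass
class RobotOption:
    name: str
    setup_hours: float     # one-time configuration and set-up time
    rate_per_hour: float   # rental plus specialist support cost
    productivity: float    # units of work per hour (e.g., m^2 of layout)

def net_roi(opt: RobotOption, work_units: float,
            manual_rate: float, manual_productivity: float) -> float:
    """(manual cost - robot cost) / robot cost for one task."""
    robot_hours = opt.setup_hours + work_units / opt.productivity
    robot_cost = robot_hours * opt.rate_per_hour
    manual_cost = (work_units / manual_productivity) * manual_rate
    return (manual_cost - robot_cost) / robot_cost

# Hypothetical layout-printing robot; 12,000 m^2 of floor layout is the
# kind of quantity a BIM take-off would supply.
bot = RobotOption("layout-printer", setup_hours=4, rate_per_hour=95,
                  productivity=250)
print(f"ROI: {net_roi(bot, 12_000, manual_rate=60, manual_productivity=40):.2f}")
```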
Procedia PDF Downloads 87
136 Application of an Analytical Model to Obtain Daily Flow Duration Curves for Different Hydrological Regimes in Switzerland
Authors: Ana Clara Santos, Maria Manuela Portela, Bettina Schaefli
Abstract:
This work assesses the performance of an analytical model framework for generating daily flow duration curves (FDCs) based on the climatic characteristics of catchments and on their streamflow recession coefficients. In the analytical model framework, precipitation is treated as a stochastic process, modeled as a marked Poisson process, and recession is treated as deterministic, with parameters that can be computed using different models. The framework was tested on three case studies with different hydrological regimes located in Switzerland: pluvial, snow-dominated, and glacial. For that purpose, five time intervals were analyzed (the four meteorological seasons and the civil year), and two developments of the model were tested: one considering a linear recession model and the other adopting a nonlinear recession model. These developments were combined with recession coefficients obtained from two different approaches: forward and inverse estimation. The performance of the analytical framework with forward parameter estimation is poor in comparison with inverse estimation for both the linear and nonlinear models. For the pluvial catchment, inverse estimation shows exceptionally good results, especially for the nonlinear model, clearly suggesting that the model is able to describe FDCs. For the snow-dominated and glacial catchments, the seasonal results are better than the annual ones, suggesting that the model can describe streamflows under those conditions and that future efforts should focus on improving and combining seasonal curves instead of considering single annual ones.
Keywords: analytical streamflow distribution, stochastic process, linear and nonlinear recession, hydrological modelling, daily discharges
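A toy version of the framework's structure, assuming a marked Poisson precipitation process feeding a linear reservoir (q = k*S), from which a daily FDC is read off; the rates and the recession coefficient are illustrative, not the calibrated Swiss values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 3650
lam = 0.3          # storm arrival probability per day (Poisson arrivals)
mean_depth = 8.0   # mean event depth, mm (exponential marks)
k = 0.05           # linear recession coefficient, 1/day

# Linear reservoir q = k*S: flow decays by exp(-k) per day, and each
# storm of depth R adds k*R to the flow.
q = np.empty(n_days)
q[0] = 1.0
for t in range(1, n_days):
    recharge = rng.exponential(mean_depth) if rng.random() < lam else 0.0
    q[t] = q[t - 1] * np.exp(-k) + k * recharge

# FDC: flows sorted in descending order against exceedance frequency.
q_sorted = np.sort(q)[::-1]
exceedance = np.arange(1, n_days + 1) / (n_days + 1)
print("flow exceeded 95% of the time:", round(q_sorted[int(0.95 * n_days)], 3))
```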
Procedia PDF Downloads 162
135 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed by data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey, we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey is intended to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in the recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, grouped into four main categories: application development, task scheduling, in-memory computing, and storage platforms. Finally, the authors include a discussion of future research lines and synergies among the former techniques.
Keywords: data locality, data-centric computing, large scale infrastructures, cloud computing
Procedia PDF Downloads 259
134 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox
Authors: Sally Heyeon Hwang
Abstract:
Within decision theory, there are normative principles that dictate how one should act, in addition to empirical theories of actual behavior. As a normative guide to one's actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will of course differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given precise form in Richard Jeffrey's assumption that free will is necessary for Newcomb's paradox to count as a decision problem. This paper argues, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will are similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb's problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent, such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis' backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem. This paper thus argues against the view that a stable background assumption of free will or determinism in decision theory is necessary, arguing instead for a compatibilist decision theory yielding a novel treatment of Newcomb's problem.
Keywords: decision theory, compatibilism, free will, Newcomb's problem
Procedia PDF Downloads 321
133 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to the nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may involve tolerance levels determined by traditional deterministic models, which produce acceptable results for burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation, together with physics models, to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point-kinetics neutron equations exhibit the characteristics of nonlinear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of the computational simulations and experimental results showed acceptable agreement. The experiments used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the fuel behavior during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxide layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
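A sketch of the multivariate logistic regression step on synthetic stand-in data; the five predictors follow the abstract, but the values and the failure rule are fabricated for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
# Synthetic records with the five predictors named in the abstract.
X = np.column_stack([
    rng.uniform(0, 75, n),               # burnup, GWd/MTU
    rng.uniform(50, 250, n),             # applied peak power, cal/g
    rng.uniform(10, 80, n),              # pulse width, ms
    rng.uniform(0, 120, n),              # oxide layer thickness, um
    rng.integers(0, 2, n).astype(float)  # cladding type (0/1)
])
# Fabricated failure rule, used only to generate labels for the demo.
logit = 0.04 * X[:, 0] + 0.03 * X[:, 1] - 0.02 * X[:, 2] + 0.02 * X[:, 3] - 10
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_rod = [[70.0, 180.0, 30.0, 90.0, 1.0]]  # hypothetical test rod
print("P(failure):", round(model.predict_proba(new_rod)[0, 1], 3))
```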
Procedia PDF Downloads 291
132 BIM-Based Tool for Sustainability Assessment and Certification Documents Provision
Authors: Taki Eddine Seghier, Mohd Hamdan Ahmad, Yaik-Wah Lim, Samuel Opeyemi Williams
Abstract:
The assessment of building sustainability against a specific green benchmark, and the preparation of the documents required to receive a green building certification, are both major challenges for the green building design team. However, this labor-intensive and time-consuming process can take advantage of available Building Information Modeling (BIM) features such as material take-off and scheduling. Furthermore, the workflow can be automated to track potentially achievable credit points and provide rating feedback for several design options by using integrated visual programming (VP) to handle the parameters stored within the BIM model. Hence, this study proposes a BIM-based tool that uses the Green Building Index (GBI) rating system requirements as its input case to evaluate building sustainability at the design stage of the building project life cycle. The tool comprises two key models: first, a model for data extraction, calculation, and classification of achievable credit points in a green template; second, a model for generating the documents required for green building certification. The tool was validated on a BIM model of a residential building, and it serves as proof of concept that building sustainability assessment for GBI certification can be automatically evaluated and documented through BIM.
Keywords: green building rating system, GBRS, building information modeling, BIM, visual programming, VP, sustainability assessment
Procedia PDF Downloads 326
131 Soil Degradation Mapping Using Geographic Information System, Remote Sensing and Laboratory Analysis in the Oum Er Rbia High Basin, Middle Atlas, Morocco
Authors: Aafaf El Jazouli, Ahmed Barakat, Rida Khellouk
Abstract:
Mapping of soil degradation draws on field observations, laboratory measurements, and remote sensing data, integrated through quantitative methods to map the spatial characteristics of soil properties at different spatial and temporal scales and to provide up-to-date information on field conditions. Since soil salinity, texture, and organic matter play a vital role in assessing topsoil characteristics and soil quality, remote sensing can be considered an effective method for studying these properties. The main objective of this research is to assess soil degradation by combining remote sensing data and laboratory analysis. To achieve this goal, soil samples were taken at 50 locations in the upper basin of the Oum Er Rbia in the Middle Atlas, Morocco. These samples were dried, sieved to 2 mm, and analyzed in the laboratory. Landsat 8 OLI imagery was analyzed using physical or empirical methods to derive soil properties; in addition, remote sensing can serve as a supporting data source. Deterministic interpolation methods (spline and inverse distance weighting) and probabilistic ones (ordinary kriging and universal kriging) were used to produce maps of each grain size class and soil property using GIS software. As a result, a correlation was found between soil texture and soil organic matter content. The approach developed in this ongoing research will improve the prospects of using remote sensing data for mapping soil degradation in arid and semi-arid environments.
Keywords: soil degradation, GIS, interpolation methods (spline, IDW, kriging), Landsat 8 OLI, Oum Er Rbia high basin
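For reference, a minimal implementation of inverse distance weighting, one of the deterministic interpolators used; the sample coordinates and organic matter values are synthetic placeholders for the 50 field samples:

```python
import numpy as np

rng = np.random.default_rng(7)
xy = rng.uniform(0, 10_000, (50, 2))   # sample locations, m (synthetic)
om = rng.uniform(0.5, 4.0, 50)         # organic matter, % (synthetic)

def idw(query, xy, values, power=2.0):
    """IDW estimate at one query point: distance-weighted mean."""
    d = np.linalg.norm(xy - query, axis=1)
    if (d == 0).any():                 # query coincides with a sample
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

print("OM at grid point (5000, 5000):",
      round(idw(np.array([5000.0, 5000.0]), xy, om), 3))
```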
Procedia PDF Downloads 165
130 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing
Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng
Abstract:
Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) is the problem of deciding how to dedicate nodes to specific applications so as to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming to minimize the cost of communication hops, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two allocation strategies are considered: the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer programming model is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed; NSA is an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated in a computational study against SA and SCIP. The results of the numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware
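To make the allocation step concrete, here is plain simulated annealing on a toy placement instance; the hop matrix, ring traffic pattern, and cooling schedule are toy assumptions, and the paper's NSA would learn the repair (swap) operator rather than apply it blindly:

```python
import math
import random

random.seed(0)
# Place 8 communicating tasks on 8 nodes of a small tree so that
# heavily communicating pairs sit few hops apart.
N = 8
hops = [[0 if i == j else 1 if i // 2 == j // 2 else 2 if i // 4 == j // 4 else 4
         for j in range(N)] for i in range(N)]
traffic = [(i, (i + 1) % N) for i in range(N)]     # ring communication

def cost(place):
    return sum(hops[place[a]][place[b]] for a, b in traffic)

place = list(range(N))
cur = cost(place)
T = 10.0
while T > 0.01:
    i, j = random.sample(range(N), 2)
    place[i], place[j] = place[j], place[i]        # repair move: swap
    new = cost(place)
    if new <= cur or random.random() < math.exp((cur - new) / T):
        cur = new                                  # accept move
    else:
        place[i], place[j] = place[j], place[i]    # revert
    T *= 0.995                                     # geometric cooling
print("final communication-hop cost:", cur)
```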
Procedia PDF Downloads 116
129 An Examination of the Role of Perceived Leadership Styles on Job Satisfaction among Selected Bank Employees
Authors: Solomon Ojo
Abstract:
The study set out to investigate the role of perceived leadership style in the job satisfaction of selected bank employees. The study was a cross-sectional survey. A total of 585 bank workers took part; 283 (48.4%) were male, while 302 (51.6%) were female, with a mean age of 31.8 years (SD = 7.8). Questionnaires were used for data collection, and data were analyzed using both descriptive and inferential statistics. The t-test for independent measures was used to test all hypotheses, using the Statistical Package for the Social Sciences, version 21.0. The results revealed that bank employees who perceived their leaders as high on the consideration style of leadership reported more job satisfaction than those who perceived their leaders as low on consideration [t(583) = 16.43, p < .001], and employees who perceived their leaders as high on the initiating-structure style reported more job satisfaction than those who perceived their leaders as low on initiating structure [t(583) = 12.06, p < .01]. The results further showed the influence of perceived leadership styles on all measures of job satisfaction. First, bank employees who perceived their leaders as high on consideration reported more satisfaction with hours worked each day than those who perceived their leaders as low on consideration [t(583) = 9.23, p < .01]. Second, employees who perceived their leaders as high on consideration reported more satisfaction with flexibility in scheduling [t(583) = 8.80, p < .01]. Third, employees who perceived their leaders as high on consideration reported more satisfaction with the location of work [t(583) = 14.17, p < .01], among other results. The findings were extensively discussed in relation to the relevant body of literature.
Keywords: leadership styles, job satisfaction, bank employees, perceived
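The independent-measures t-test reported above can be reproduced in a few lines; the two synthetic score samples below are illustrative, not the survey data (the group sizes are chosen so that df = 583, as in the abstract):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic job-satisfaction scores for employees perceiving high vs.
# low consideration leadership (score scale and means are assumed).
high_consideration = rng.normal(78, 10, 290)
low_consideration = rng.normal(65, 10, 295)

t, p = stats.ttest_ind(high_consideration, low_consideration)
df = len(high_consideration) + len(low_consideration) - 2
print(f"t({df}) = {t:.2f}, p = {p:.3g}")
```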
Procedia PDF Downloads 219
128 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much SC activity is addressing risk, and the CSC is no different: the delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead time variability from suppliers, transportation time variability, and disruptions at the job site. Current approaches to managing and controlling the CSC involve focusing on one project at a time, with a push-based material ordering system based on the initial construction schedule, and then holding a tremendous amount of inventory. A two-stage methodology is proposed that couples feed-forward control, in the form of advance order placement with a supplier, with local feedback control, in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects; the idea is to add this feature to a highly constrained environment and explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models with the objective of minimizing the total cost of the CSC. To illustrate how this methodology can be used in practice, and the types of information that can be gleaned, it is tested on a number of cases based on the real example of multiple construction projects in Kuwait.
Keywords: construction supply chain, inventory control, supply chain, transshipment
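A tiny deterministic instance of the transshipment idea, with one material, two projects, and an external storage site, solved as a linear program; the arc costs and demands are illustrative assumptions, not the Kuwait case data:

```python
import numpy as np
from scipy.optimize import linprog

# Arcs: supplier->P1, supplier->P2, supplier->S, S->P1, S->P2,
#       P1->P2, P2->P1 (S is the external storage site).
cost = np.array([4.0, 4.0, 2.5, 1.0, 1.0, 1.5, 1.5])

demand_p1, demand_p2 = 120.0, 80.0
A_eq = np.array([
    [1, 0, 0, 1, 0, -1, 1],    # net inflow to P1 = its demand
    [0, 1, 0, 0, 1, 1, -1],    # net inflow to P2 = its demand
    [0, 0, 1, -1, -1, 0, 0],   # storage balance: inflow = outflow
])
b_eq = np.array([demand_p1, demand_p2, 0.0])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7)
print("total cost:", res.fun)
print("arc flows:", np.round(res.x, 1))
```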
Procedia PDF Downloads 122
127 Enhancing the Resilience of Combat System-Of-Systems Under Certainty and Uncertainty: Two-Phase Resilience Optimization Model and Deep Reinforcement Learning-Based Recovery Optimization Method
Authors: Xueming Xu, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
A combat system-of-systems (CSoS) comprises various types of functional combat entities that interact to meet corresponding task requirements, now and in the future. Enhancing the resilience of a CSoS holds significant military value in optimizing the operational planning process, improving military survivability, and ensuring the successful completion of operational tasks. Accordingly, this research proposes an integrated framework called CSoS resilience enhancement (CSoSRE) to enhance the resilience of a CSoS from a recovery perspective. Specifically, it presents a two-phase resilience optimization model that defines a resilience optimization objective for the CSoS. This model considers not only the task baseline, recovery cost, and recovery time limit, but also the characteristics of emergency recovery and comprehensive recovery; moreover, the research extends it from the deterministic case to the stochastic case to describe uncertainty in the recovery process. On this basis, a resilience-oriented recovery optimization method based on deep reinforcement learning (RRODRL) is proposed to determine the set of entities requiring restoration and their recovery sequence, thereby enhancing the resilience of the CSoS. This method improves the deep Q-learning algorithm by designing a discount factor that adapts to changes in the CSoS state at different phases, simultaneously considering the structural and functional characteristics of the network within the CSoS. Finally, extensive experiments are conducted to test the feasibility, effectiveness, and superiority of the proposed framework. The results offer useful insights for guiding operational recovery activities and designing a more resilient CSoS.
Keywords: combat system-of-systems, resilience optimization model, recovery optimization method, deep reinforcement learning, certainty and uncertainty
Procedia PDF Downloads 16
126 Patient Service Improvement in Public Emergency Department Using Discrete Event Simulation
Authors: Dana Mohammed, Fatemah Abdullah, Hawraa Ali, Najat Al-Shaer, Rawan Al-Awadhi, Magdy Helal
Abstract:
We study patient service performance at the emergency department of a major Kuwaiti public hospital using discrete event simulation and lean concepts. In addition to the problems common in such health care systems (overcrowding, facilities planning and usage, scheduling and staffing, capacity planning), the emergency department suffered from several cultural and patient-behavioural issues. These contributed significantly to the system's problems and constituted major obstacles to keeping performance under control, leading to overly long waiting times and the potential for delays in helping critical cases. We utilized visual management tools to mitigate the impact of patients' behaviours and attitudes and to improve logistics inside the system. In addition, a proposal is made to automate data collection and communication within the department using an RFID-based barcoding system. Discrete event simulation models were developed as decision support systems to study the operational problems and assess the achieved improvements. The simulation analysis resulted in cutting patient delays to about 35% of their current values by reallocating and rescheduling the medical staff. Combined with the application of visual management concepts, this provided the basis for improving patient service without any major investments.
Keywords: simulation, visual management, health care system, patient
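A minimal discrete event simulation of an emergency department queue in the spirit of the study, using SimPy; the arrival rate, treatment time, and staffing level are illustrative assumptions, not the hospital's data:

```python
import random
import simpy

random.seed(5)
WAITS = []

def patient(env, doctors, mean_treatment=20.0):
    arrival = env.now
    with doctors.request() as req:
        yield req                       # wait for an available physician
        WAITS.append(env.now - arrival)
        yield env.timeout(random.expovariate(1.0 / mean_treatment))

def arrivals(env, doctors, mean_interarrival=7.0):
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_interarrival))
        env.process(patient(env, doctors))

env = simpy.Environment()
doctors = simpy.Resource(env, capacity=3)   # 3 physicians on shift
env.process(arrivals(env, doctors))
env.run(until=8 * 60)                       # one 8-hour shift, in minutes
print(f"patients seen: {len(WAITS)}, "
      f"mean wait: {sum(WAITS) / len(WAITS):.1f} min")
```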
Procedia PDF Downloads 475
125 Development of a Model for Predicting Radiological Risks in Interventional Cardiology
Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an interventional radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose is important for improving the post-operative care given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for chronic total occlusion (CTO) procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the interventional cardiology (IC) department using a Siemens Artis Zee image intensifier, which provides the air kerma of each IC exam. The peak skin dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index (J-CTO score), specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators made it possible to quantify their influence on the dose, and a predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 because the radiochromic films were placed outside the exposure field; 96 2D dose maps were finally used. The influencing factors with the highest correlation to the PSD are the patient's diameter and the J-CTO score, and the predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy, while the mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using the developed method, a first estimate of the patient's skin dose is available before the start of the procedure, which helps the cardiologist in carrying out the intervention. This estimate is more accurate than that provided by the air kerma.
Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose
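A sketch of the regression step, fitting PSD by ordinary least squares on the two predictors the abstract identifies; the 96 records below are synthetic, not the clinical data:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 96
diameter = rng.normal(28, 4, n)              # patient diameter, cm
jcto = rng.integers(0, 6, n).astype(float)   # J-CTO complexity score
# Fabricated relationship, used only to generate demo data.
psd = 0.12 * diameter + 0.45 * jcto + rng.normal(0, 0.5, n)  # Gy

X = np.column_stack([np.ones(n), diameter, jcto])
beta, *_ = np.linalg.lstsq(X, psd, rcond=None)
print("intercept, diameter, J-CTO coefficients:", np.round(beta, 3))

# Pre-procedure estimate for a hypothetical patient:
print("predicted PSD (Gy):", round(float(beta @ [1.0, 30.0, 3.0]), 2))
```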
Procedia PDF Downloads 136
124 Significance of Water Saving through Subsurface Drip Irrigation for Date Palm Trees
Authors: Ahmed I. Al-Amoud
Abstract:
A laboratory and a field study were conducted on subsurface drip irrigation systems. In the first, laboratory study, eight locally available subsurface drip irrigation lines were selected, and a number of experiments were carried out to evaluate their hydraulic characteristics, to ensure their suitability for drip irrigation design requirements and high performance, and to select the best line for the field experiments. The second study involved field trials on mature date palm trees to study the effect of the subsurface drip irrigation system on the yield and water consumption of date palms, and to compare it with the traditional surface drip irrigation system. Experiments were conducted in the Alwatania Agricultural Project on 50 mature palm trees (17 years old) of the Helwa type, with 10-meter spacing between rows and between trees. A high-efficiency subsurface line (Techline) was used, based on the results of the first study. Irrigation scheduling was managed with a soil moisture sensing device to ensure adequate soil water levels. The experimental layouts were installed during the 2001 season, and measurements continued until the end of the 2008 season. The results indicated an increase in yield and considerable water savings compared to the conventional drip irrigation method, along with large gains in water use efficiency under the subsurface system. The subsurface system proved to be durable and highly efficient for irrigating date palm trees.
Keywords: drip irrigation, subsurface drip irrigation, date palm trees, date palm water use, date palm yield
Procedia PDF Downloads 431
123 Developing Academic English through Interaction
Authors: John Bankier
Abstract:
Development of academic English occurs not only in communities of practice but also within wider social networks, referred to by Zappa-Hollman and Duff as individual networks of practice. Such networks may exist whether students are developing academic English in English-dominant contexts or in contexts in which English is not a majority language. As yet, little research has examined how newcomers to universities interact with a variety of social ties in such networks to receive the academic and emotional support they need as they develop the academic English necessary to succeed in local and global academia. The one-year ethnographic study described in this presentation followed five Japanese university students enrolled in an academic English program in their home country. We graphically represent participants' individual networks of practice related to academic English and show the role that interaction within these networks plays in socialization. Specific examples of academic practices are linked to specific instances of social interaction. Interaction supportive of the development of academic practices often occurred during unplanned encounters outside the classroom and among small groups of close friends who were connected to each other in more than one way, such as by taking multiple classes together. These interactions occurred in study spaces, in hallways between class periods, at lunchtimes, and online. However, constraints such as differing accommodation arrangements, class scheduling, and the hierarchical levelling of English classes by test scores discouraged some participants both from forming strong ties related to English and from interacting with existing ties. The presentation will briefly describe ways in which teachers in all contexts can maximise interaction outside the classroom.
Keywords: academic, English, practice, network
Procedia PDF Downloads 258
122 Spatial Variation of WRF Model Rainfall Prediction over Uganda
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Triphonia Ngailo
Abstract:
Rainfall is a major climatic parameter affecting many sectors, such as health, agriculture, and water resources. Its quantitative prediction remains a challenge for weather forecasters, although numerical weather prediction models are increasingly being used for rainfall prediction. The performance of six convective parameterization schemes of the Weather Research and Forecasting (WRF) model, namely the Kain-Fritsch, Betts-Miller-Janjic, Grell-Devenyi, Grell-3D, Grell-Freitas, and new Tiedtke schemes, is investigated for quantitative rainfall prediction over Uganda using the root mean square error for the March-May (MAM) 2013 season. The MAM 2013 seasonal rainfall amount ranged from 200 mm to 900 mm over Uganda, with the northern region receiving comparatively less rainfall (200–500 mm); western Uganda (270–550 mm); eastern Uganda (400–900 mm); and the Lake Victoria basin (400–650 mm). A spatial variation in the rainfall amounts simulated by the different convective parameterization schemes was noted, with the Kain-Fritsch scheme overestimating the rainfall amount over northern Uganda (300–750 mm) while presenting comparable rainfall amounts over eastern Uganda (400–900 mm). The Betts-Miller-Janjic, Grell-Devenyi, and Grell-3D schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (300–600 mm). The Grell-Freitas scheme captured the rainfall amount over the northern region (250–450 mm) but underestimated rainfall over the Lake Victoria basin (150–300 mm), while the new Tiedtke scheme generally underestimated rainfall amounts over many areas of Uganda. For deterministic rainfall prediction, the Grell-Freitas scheme is recommended over northern Uganda, while the Kain-Fritsch scheme is recommended over the eastern region.
Keywords: convective parameterization schemes, March-May 2013 rainfall season, spatial variation of parameterization schemes over Uganda, WRF model
Procedia PDF Downloads 310
121 The Relevance of Family Involvement in the Journey of Dementia Patients
Authors: Akankunda Veronicah Karuhanga
Abstract:
Dementia is an age-related mental disorder that causes victims to lose normal functionality and therefore demands delicate attention. It has been technically defined as a clinical syndrome presenting difficulties in speech and other cognitive functions that change a person's behavior and can also cause impairments in activities of daily living, in addition to a range of neurological disorders that bring memory loss and cognitive impairment. Family members are the primary healthcare givers, and therefore the way they handle the situation in its early stages determines future deterioration syndromes such as total memory loss. Unfortunately, most family members are ignorant of this condition, and in most cases, patients are brought to our facilities once their condition has already been mismanaged by family members, at which point little can be done. For example, incontinence can be managed at early stages through potty training or toilet scheduling before resorting to 24/7 diapers, which are also not good. Professional elderly care should be understood and practiced as an extension of the home, not a dumping place for people considered 'abnormal' on account of ignorance. Immediate relatives should therefore be sensitized concerning the normalcy of dementia in the context of old age so that they can be understanding and supportive of dementia patients rather than discriminating against them as present-day lepers. There is a need to train home-based caregivers in handling dementia in its early stages; unless this is done, many of our elderly homes will be filled with patients who could have been treated and supported from their homes. Skilling home-based caregivers is a vital intervention, because until elderly care is appreciated as a human moral obligation, many transactional rehabilitation centers will crop up, and this will be one of the worst moral decadences of our times.
Keywords: dementia, family, Alzheimer's, relevance
Procedia PDF Downloads 96
120 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation
Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez
Abstract:
With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates, and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; carrier aggregation (CA) is one of the newest innovations, and it seems capable of fulfilling the demands on future spectrum. CA is also one of the most important features of Long Term Evolution-Advanced (LTE-Advanced): to meet the upcoming International Mobile Telecommunications-Advanced (IMT-Advanced) requirements (1 Gb/s peak data rate), the CA scheme presented by 3GPP sustains high data rates by aggregating frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signalling techniques, and the challenges of CA in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. A performance evaluation in macro-cellular scenarios through a simulation approach is also presented, showing the benefits of applying CA and low-complexity multi-band schedulers for service quality and system capacity enhancement. It is concluded that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for cell radii longer than 1800 m (at a PLR threshold of 2%).
Keywords: component carrier, carrier aggregation, LTE-advanced, scheduling
Procedia PDF Downloads 199
119 Integrated Genetic-A* Graph Search Algorithm Decision Model for Evaluating Cost and Quality of School Renovation Strategies
Authors: Yu-Ching Cheng, Yi-Kai Juan, Daniel Castro
Abstract:
The energy consumption of buildings has been an increasing concern for researchers and practitioners in the last decade. Sustainable building renovation can reduce energy consumption and carbon dioxide emissions; it can also extend the useful life of existing buildings and facilitate environmental sustainability while providing social and economic benefits to society. School buildings differ from other designed spaces in that they are more crowded and host the largest portion of daily activities and occupants. Strategies that reduce energy use while also improving the students' learning environment have therefore become a significant subject in sustainable school building development. A decision model is developed in this study to solve complicated, large-scale, combinatorial, discrete, and deterministic problems such as school renovation projects. The task of this model is to automatically search for the most cost-effective (lower-cost and higher-quality) renovation strategies. The search for optimal school building renovation solutions is by nature a large-scale zero-one programming problem. A* is suitable for solving deterministic problems due to its stable and effective search process, while genetic algorithms (GA) provide opportunities to acquire globally optimal solutions in a short time via an indeterminate, probability-based search process. These two algorithms are combined in this study to consider the trade-offs between renovation cost and improved quality; the resulting decision model is able to evaluate current school environmental conditions and suggest an optimal scheme of sustainable school building renovation strategies. By adopting this decision model, school managers can overcome existing limitations and transform school buildings into spaces more beneficial to students and friendlier to the environment.
Keywords: decision model, school buildings, sustainable renovation, genetic algorithm, A* search algorithm
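A toy genetic algorithm for the zero-one selection problem described above: each bit switches one renovation measure on or off, and the fitness rewards quality while penalizing budget overruns. The costs, quality scores, budget, and GA settings are illustrative assumptions (the paper's model additionally couples GA with A* search):

```python
import random

random.seed(11)
COST = [12, 30, 22, 8, 15, 27, 9, 18]     # measure costs (assumed)
QUAL = [20, 55, 40, 10, 25, 45, 12, 30]   # quality scores (assumed)
BUDGET = 70

def fitness(bits):
    c = sum(c for c, b in zip(COST, bits) if b)
    q = sum(q for q, b in zip(QUAL, bits) if b)
    return q - 10 * max(0, c - BUDGET)    # heavy penalty when over budget

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in COST] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                    # truncation selection
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for child in children:                # occasional bit-flip mutation
        if random.random() < 0.3:
            child[random.randrange(len(child))] ^= 1
    pop = parents + children

best = max(pop, key=fitness)
print("selected measures:", best, "fitness:", fitness(best))
```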
Procedia PDF Downloads 118
118 Reinforcement Learning for Robust Missile Autopilot Design: TRPO Enhanced by Schedule Experience Replay
Authors: Bernardo Cortez, Florian Peter, Thomas Lausenhammer, Paulo Oliveira
Abstract:
Designing missile autopilot controllers has been a complex task, given the extensive flight envelope and the nonlinear flight dynamics. A solution that excels both in nominal performance and in robustness to uncertainties is still to be found. While control theory often leads to parameter-scheduling procedures, reinforcement learning has presented interesting results in ever more complex tasks, from video games to robotic tasks with continuous action domains. However, it still lacks clear insights on how to find adequate reward functions and exploration strategies. To the best of our knowledge, this work is a pioneer in proposing reinforcement learning as a framework for flight control. Its aim is to train a model-free agent that can control the longitudinal nonlinear flight dynamics of a missile, achieving the target performance and robustness to uncertainties. To that end, under TRPO's methodology, the collected experience is augmented according to HER, stored in a replay buffer, and sampled according to its significance. Not only does this work extend the concept of prioritized experience replay into BPER, but it also reformulates HER, activating both only when the training progress converges to suboptimal policies, in what is proposed as the SER methodology. The results show that it is possible both to achieve the target performance and to improve the agent's robustness to uncertainties (with little damage to nominal performance) by further training it in non-nominal environments, thereby validating the proposed approach and encouraging future research in this field.
Keywords: reinforcement learning, flight control, HER, missile autopilot, TRPO
Procedia PDF Downloads 264
117 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from solution inefficiency; as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered via the time-based weapon-target assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently on large instances, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm, but require very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, so that more practical and effective methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
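A sketch of the greedy idea for a static instance: each weapon is assigned, one at a time, to the target with the largest marginal reduction in expected surviving value. Target values and kill probabilities are illustrative assumptions:

```python
values = [50.0, 80.0, 30.0]           # target values (assumed)
# pk[w][t]: probability that weapon w destroys target t (assumed)
pk = [
    [0.6, 0.5, 0.3],
    [0.4, 0.7, 0.2],
    [0.5, 0.4, 0.6],
    [0.3, 0.6, 0.5],
]

survival = [1.0] * len(values)        # running survival probabilities
assignment = []
for row in pk:
    # marginal gain of adding this weapon to each target
    gains = [values[t] * survival[t] * row[t] for t in range(len(values))]
    t_best = max(range(len(values)), key=gains.__getitem__)
    assignment.append(t_best)
    survival[t_best] *= 1.0 - row[t_best]

expected_surviving = sum(v * s for v, s in zip(values, survival))
print("assignment (weapon -> target):", assignment)
print("expected surviving target value:", round(expected_surviving, 2))
```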
Procedia PDF Downloads 336
116 A Design of Elliptic Curve Cryptography Processor Based on SM2 over GF(p)
Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang
Abstract:
Data encryption is the foundation of today's communication, and improving the speed of encryption and decryption is a problem that scholars continually work on. In this paper, we propose an elliptic curve cryptographic processor architecture based on the SM2 prime field. In the hardware implementation, we optimized the algorithms at different stages of the design. For the finite-field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipelined structure. Based on the SM2-recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used to realize the conversion between the affine coordinate system and the Jacobian projective coordinate system. In the parallel scheduling of point operations on the elliptic curve, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system; combined with the scalar multiplication algorithm, we add mutual pre-operations to point addition and point doubling to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms, where each 256-bit scalar multiplication took 0.275 ms. The performance for scalar multiplication is 32 times that of a CPU (dual-core ARM Cortex-A9).
Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication
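The Karatsuba-Ofman idea the abstract optimizes in hardware, a 2w-bit product from three half-width multiplications instead of four, can be shown as a pure-software sketch rather than the paper's pipelined design:

```python
import secrets

def karatsuba(x: int, y: int, bits: int = 256) -> int:
    """Recursive Karatsuba-Ofman product; exact for non-negative ints."""
    if bits <= 32:                         # base case: native multiply
        return x * y
    half = bits // 2
    mask = (1 << half) - 1
    x1, x0 = x >> half, x & mask           # split operands into halves
    y1, y0 = y >> half, y & mask
    z2 = karatsuba(x1, y1, half)           # high-half product
    z0 = karatsuba(x0, y0, half)           # low-half product
    z1 = karatsuba(x1 + x0, y1 + y0, half + 1) - z2 - z0  # middle term
    return (z2 << (2 * half)) + (z1 << half) + z0

a, b = secrets.randbits(256), secrets.randbits(256)
assert karatsuba(a, b) == a * b            # agrees with schoolbook product
print("product ok:", hex(karatsuba(a, b))[:18] + "...")
```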
Procedia PDF Downloads 98
115 Spatiotemporal Variability in Rainfall Trends over Sinai Peninsula Using Nonparametric Methods and Discrete Wavelet Transforms
Authors: Mosaad Khadr
Abstract:
Knowledge of the temporal and spatial variability of rainfall trends is of great concern for efficient water resource planning and management. In this study, annual, seasonal, and monthly rainfall trends over the Sinai Peninsula were analyzed using absolute homogeneity tests, the nonparametric Mann-Kendall (MK) test, and Sen's slope estimator. The homogeneity of the rainfall time series was examined using four absolute homogeneity tests, namely the Pettitt test, the standard normal homogeneity test, the Buishand range test, and the von Neumann ratio test. Further, the sequential change in the trend of annual and seasonal rainfall is examined using the sequential MK (SQMK) method; then, trend analysis based on the discrete wavelet transform (DWT) in conjunction with the SQMK method is performed. The spatial patterns of the detected rainfall trends were investigated using geostatistical and deterministic spatial interpolation techniques. Applying the Mann-Kendall test to the data series (at the 5% significance level) highlighted that rainfall was generally decreasing in January, February, March, November, December, the wet season, and the annual total. A significant decreasing trend in winter and annual rainfall was inferred based on the Mann-Kendall rank statistics and linear trends. Further, the DWT analysis reveals that, in general, intra- and inter-annual events (up to 4 years) are more influential in shaping the observed trends, and the nature of the trend captured by both methods is similar in all cases. The spatial trend analysis also showed significant rainfall decreases at the investigated stations. Overall, significant downward trends in winter and annual rainfall over the Sinai Peninsula were observed during the study period.
Keywords: trend analysis, rainfall, Mann-Kendall test, discrete wavelet transform, Sinai Peninsula
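A minimal sketch of the Mann-Kendall test (normal approximation, no tie correction) and Sen's slope, applied to a synthetic declining rainfall series rather than the Sinai station records:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the MK S statistic, Z score, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return float(np.median(slopes))

rng = np.random.default_rng(21)
rain = 90 - 0.6 * np.arange(40) + rng.normal(0, 8, 40)  # 40-year series
s, z, p = mann_kendall(rain)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.4f}, "
      f"Sen slope={sens_slope(rain):.2f} mm/yr")
```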
Procedia PDF Downloads 170
114 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on production capacity and product quality, and it therefore deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals of unequal length, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working times. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varying intensity function in each maintenance interval. When a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, or a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristics in response to the altered shock process. The sequential preventive maintenance-last (PML) policy is defined such that the system is maintained, before any CM occurs, at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last; at the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working times and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
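A numeric sketch of the cost-rate minimization in its simplest relative: periodic PM with minimal repairs at shocks arriving as an NHPP with power-law intensity. The parameters are illustrative, and the paper's full model adds random working times, imperfect PM, and replacement at the N-th PM:

```python
from scipy.optimize import minimize_scalar

# NHPP with power-law cumulative intensity H(T) = (T/eta)^beta; the
# mean cost rate of a PM cycle of length T with minimal repairs is
#   C(T) = (c_pm + c_mr * H(T)) / T.
beta, eta = 2.5, 100.0    # shape > 1: shocks intensify with age
c_pm, c_mr = 50.0, 12.0   # PM cost and minimal-repair cost (assumed)

def cost_rate(T):
    return (c_pm + c_mr * (T / eta) ** beta) / T

res = minimize_scalar(cost_rate, bounds=(1.0, 500.0), method="bounded")
print(f"optimal PM interval T* = {res.x:.1f}, cost rate = {res.fun:.4f}")
```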
Procedia PDF Downloads 274