Search results for: Andrew Aaron James
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 107

17 Sustainable Development of Medium Strength Concrete Using Polypropylene as Aggregate Replacement

Authors: Reza Keihani, Ali Bahadori-Jahromi, Timothy James Clacy

Abstract:

Plastic as an environmental burden is a well-rehearsed topic in research, owing to its global demand and its destructive impact on the environment, which has become a significant concern to governments. Typically, plastic in the construction industry is used in low-density, non-structural applications because of its diverse benefits, including a high strength-to-weight ratio, manipulability and durability. Given the level of plastic consumption in the construction industry, the sector has an ongoing responsibility to continually develop alternative applications for recycled plastic waste, such as plastic-based replacements made from polyethylene, polystyrene, polyvinyl and polypropylene in the concrete mix design. In this study, the effect of partially replacing fine aggregate with polypropylene on the concrete's compressive strength was investigated experimentally, using six concrete mix batches with polypropylene replacements ranging from 0.5% to 3.0%. The results showed the typical decline in compressive strength with the addition of plastic aggregate, although this reduction was generally mitigated as the level of plastic in the mix increased. Furthermore, two of the six plastic-containing mixes, those with 1.50% and 2.50% plastic aggregate, exceeded the 28-day compressive strength requirement of the ST5 standardised prescribed concrete mix, demonstrating the potential for recycled polypropylene in structural applications as a partial (by mass) fine aggregate replacement in the concrete mix.
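
As a quick illustration of the replacement arithmetic, the sketch below converts a by-mass replacement percentage into plastic and sand quantities for a nominal mix. The fine-aggregate content is a hypothetical placeholder, not the study's mix design; only the 0.5–3.0% levels come from the abstract.

```python
# Illustrative sketch: partial fine-aggregate replacement by mass.
# The mix quantity below is a hypothetical placeholder, not the
# paper's actual mix design.
fine_aggregate_kg_per_m3 = 700.0                      # assumed fine aggregate content
replacement_levels = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]  # % by mass, as in the study

for pct in replacement_levels:
    plastic_kg = fine_aggregate_kg_per_m3 * pct / 100.0
    sand_kg = fine_aggregate_kg_per_m3 - plastic_kg
    print(f"{pct:4.1f}% replacement: {plastic_kg:5.1f} kg PP, {sand_kg:6.1f} kg sand per m3")
```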

Keywords: Compressive strength, concrete, polypropylene, sustainability.

16 Performance Analysis of Digital Signal Processors Using SMV Benchmark

Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang

Abstract:

Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs of diverse applications, a wide variety of DSP processors based on architectures ranging from the traditional to VLIW have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. When selecting a processor that meets the design criteria for an application, performance is usually the major concern for digital signal processing (DSP) application developers. Performance data are also essential for the designers of DSP processors to improve their designs. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seems to have included recent new DSP applications. In this paper, we use a benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from recent third-generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. The weighted arithmetic mean of clock cycles and the arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, processor architecture features, and compiler optimization. The extensive experimental data gathered, analyzed, and presented in this paper should help DSP processor and compiler designers meet their specific design goals.
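
For readers unfamiliar with the two summary metrics named above, the following minimal sketch computes a weighted arithmetic mean of kernel clock cycles and a plain arithmetic mean of code size. Kernel names, weights, and figures are hypothetical, not measurements from the paper.

```python
# Sketch of the two summary metrics: weighted mean of clock cycles,
# plain mean of code size. All numbers are invented for illustration.
kernels = {
    # name: (clock_cycles, code_size_bytes, weight)
    "lsp_quantizer": (12_400, 1_830, 0.35),
    "pitch_search":  (48_900, 2_410, 0.50),
    "post_filter":   ( 9_700, 1_120, 0.15),
}

total_w = sum(w for _, _, w in kernels.values())
weighted_cycles = sum(c * w for c, _, w in kernels.values()) / total_w
mean_code_size = sum(s for _, s, _ in kernels.values()) / len(kernels)

print(f"weighted mean cycles: {weighted_cycles:,.0f}")
print(f"mean code size:       {mean_code_size:,.0f} bytes")
```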

Keywords: Digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1565
15 Simulation of Hydrogenated Boron Nitride Nanotubes' Mechanical Properties for Radiation Shielding Applications

Authors: Joseph E. Estevez, Mahdi Ghazizadeh, James G. Ryan, Ajit D. Kelkar

Abstract:

Radiation shielding is an obstacle in long-duration space exploration. Boron Nitride Nanotubes (BNNTs) have attracted attention as an additive to radiation shielding material due to B10's large neutron capture cross section, which is effective for low-energy neutrons in the range of 10⁻⁵ to 10⁴ eV, while hydrogen is effective at slowing down high-energy neutrons. Hydrogenated BNNTs are therefore potentially an ideal nanofiller for radiation shielding composites. We use Molecular Dynamics (MD) simulation via Accelrys Materials Studio 6.0 to model the Young's modulus of hydrogenated BNNTs. An extrapolation technique was employed to determine the Young's modulus from the deformation of the nanostructure, using a linear regression to extrapolate the data to the theoretical density of 2.62 g/cm³. The simulation data show that hydrogenation reduces the Young's modulus by 11% for (6,6) BNNTs and by 8.5% for (8,8) BNNTs compared with non-hydrogenated BNNTs. Hydrogenated BNNTs remain a viable option as a nanofiller for radiation shielding nanocomposite materials for long-range, long-duration space exploration.
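
The extrapolation step lends itself to a short worked example. This sketch fits a line to Young's modulus versus density and evaluates it at the theoretical density; the sample points are invented, and only the 2.62 g/cm³ target is taken from the abstract.

```python
import numpy as np

# Linear-regression extrapolation to the theoretical density.
# Sample points below are hypothetical illustrations.
density = np.array([1.8, 2.0, 2.2, 2.4])      # g/cm^3 (hypothetical)
youngs  = np.array([620., 700., 780., 860.])  # GPa    (hypothetical)

slope, intercept = np.polyfit(density, youngs, deg=1)
E_theoretical = slope * 2.62 + intercept
print(f"extrapolated Young's modulus at 2.62 g/cm^3: {E_theoretical:.0f} GPa")
```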

Keywords: Boron Nitride Nanotube, Radiation Shielding, Young's Modulus, Atomistic Modeling.

14 Effect of Environmental Factors on Photoreactivation of Microorganisms under Indoor Conditions

Authors: Shirin Shafaei, James R. Bolton, Mohamed Gamal El Din

Abstract:

Ultraviolet (UV) disinfection damages the DNA or RNA of microorganisms, but many microorganisms can repair this damage after exposure to near-UV or visible wavelengths (310–480 nm) by a mechanism called photoreactivation. Photoreactivation is gaining attention because it can reduce the efficiency of UV disinfection of wastewater several hours after treatment. Because most photoreactivation research has focused on single species, comparatively little is known about how complex natural communities of microorganisms respond to UV treatment. In this research, photoreactivation experiments were carried out on the influent of the UV disinfection unit at a municipal wastewater treatment plant (WWTP) in Edmonton, Alberta, after exposure to a medium-pressure (MP) UV lamp system, to evaluate the effect of environmental factors on the photoreactivation of microorganisms in actual municipal wastewater. The effects of reactivation fluence, temperature, and river water on the photoreactivation of total coliforms were examined under indoor conditions. The results showed that higher effective reactivation fluence values (up to 20 J/cm²) and higher temperatures (up to 25 °C) increased the photoreactivation of total coliforms. However, increasing the percentage of river water in mixtures of effluent and river water decreased the photoreactivation of the mixtures. The results of this research can help the municipal wastewater treatment industry examine the environmental effects of discharging effluents into receiving waters.
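
One common way to quantify photoreactivation in the UV disinfection literature (not necessarily the authors' exact formulation) is the percent repair metric P = (Np − N)/(N0 − N) × 100, with reactivation fluence computed as irradiance times exposure time. The sketch below applies both with illustrative numbers.

```python
# Percent photoreactivation: N0 = count before UV, N = count just
# after UV, Np = count after the reactivation exposure. This is a
# commonly used metric, assumed here, not quoted from the paper.
def percent_photoreactivation(n0, n_uv, n_photo):
    return (n_photo - n_uv) / (n0 - n_uv) * 100.0

# Reactivation fluence = irradiance x exposure time.
def reactivation_fluence_J_per_cm2(irradiance_mW_per_cm2, minutes):
    return irradiance_mW_per_cm2 / 1000.0 * minutes * 60.0

print(f"{percent_photoreactivation(1e5, 1e1, 5e3):.1f}% photoreactivation")
print(f"{reactivation_fluence_J_per_cm2(1.0, 60):.1f} J/cm^2 after 1 h at 1 mW/cm^2")
```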

Keywords: Photoreactivation, reactivation fluence, river water, temperature, ultraviolet disinfection, wastewater effluent.

13 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System

Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock

Abstract:

The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system, based on Transcription-Mediated Amplification and real-time detection technologies. The assay is intended for monitoring HIV-1 viral load in plasma specimens and for detecting HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens, selected at random from routine testing at St Thomas' Hospital, London, were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of −0.097. The Aptima assay detected HIV-1 in more samples than the COBAS assay. This was not due to a lack of specificity of the Aptima assay, which showed 99.83% specificity when testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, a side-by-side comparison of low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes was tested in both assays. The Aptima assay was more sensitive than the COBAS assay. The good sensitivity, specificity and agreement with other commercial assays make the Aptima HIV-1 Quant Dx Assay appropriate for both viral load monitoring and the detection of HIV-1 infections.
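
The method comparison reduces to an ordinary linear regression of paired log10 viral loads, as sketched below with invented pairs; the abstract's reported fit was a slope of 1.04 and an intercept of −0.097.

```python
import numpy as np

# Method comparison by ordinary linear regression of paired log10
# viral loads (Aptima vs. COBAS). Paired values are hypothetical.
cobas_log10  = np.array([2.1, 3.0, 3.8, 4.6, 5.5])   # hypothetical
aptima_log10 = np.array([2.1, 3.1, 3.9, 4.7, 5.6])   # hypothetical

slope, intercept = np.polyfit(cobas_log10, aptima_log10, deg=1)
print(f"slope = {slope:.2f}, intercept = {intercept:.3f}")
```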

Keywords: HIV viral load, Aptima, Roche, Panther system.

12 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate increased dramatically, and young adults are among the most vulnerable to the mental health effects of the pandemic. Lower-income families are diagnosed with depression at a higher rate than the general population, yet have less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy through an interdisciplinary approach that combines clinical practices defined by the American Psychiatric Association (APA) with a multimodal AI framework. The proposed approach detects the nine depression symptoms using Natural Language Processing sentiment analysis and a symptom-based lexicon designed specifically for young adults. Experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The results were further aggregated with facial emotion cues analyzed by a Convolutional Neural Network applied to the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. The approach can reach the 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection, raising awareness among adolescents and young adults, and revealing complementary cues to assist clinical depression diagnosis.
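
A minimal late-fusion sketch in the spirit of the pipeline described above follows: a lexicon-based count of APA symptom categories from text is averaged with a facial-emotion score. The lexicon entries, weights, and helper names are hypothetical; the real system uses a purpose-built lexicon for young adults and a CNN for the video channel.

```python
# Hypothetical symptom lexicon; the study's own lexicon is not public here.
SYMPTOM_LEXICON = {
    "anhedonia": {"no fun", "nothing is fun", "lost interest"},
    "sleep":     {"can't sleep", "insomnia", "sleeping all day"},
    "fatigue":   {"exhausted", "no energy", "so tired"},
    # ... remaining six of the nine APA symptom categories omitted for brevity
}

def text_symptom_score(text: str) -> float:
    """Fraction of the nine APA symptom categories detected in the text."""
    text = text.lower()
    hits = sum(any(p in text for p in phrases)
               for phrases in SYMPTOM_LEXICON.values())
    return hits / 9.0

def fused_score(text: str, facial_score: float, w_text: float = 0.6) -> float:
    """Late fusion of the text channel with a CNN-style facial score."""
    return w_text * text_symptom_score(text) + (1 - w_text) * facial_score

print(fused_score("so tired lately and nothing is fun anymore", facial_score=0.7))
```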

Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.

11 Quantifying the UK’s Future Thermal Electricity Generation Water Use: Regional Analysis

Authors: Daniel Murrant, Andrew Quinn, Lee Chapman

Abstract:

A growing population has led to increasing global water and energy demand. This demand, combined with the effects of climate change and an increasing need to maintain and protect the natural environment, represents a potentially severe threat to many national infrastructure systems. This has resulted in a considerable quantity of published material on the interdependencies that exist between the supply of water and the thermal generation of electricity, often known as the water-energy nexus. Focusing specifically on the UK, there is growing concern that the future availability of water may at times constrain thermal electricity generation and therefore hinder the UK in meeting its increasing demand for a secure and affordable supply of low-carbon electricity. To provide further information on the threat the water-energy nexus may pose to the UK's energy system, this paper models the regional water demand of UK thermal electricity generation in 2030 and 2050, using the strategically important Energy Systems Modelling Environment model developed by the Energy Technologies Institute. Unlike previous research, this paper was able to use abstraction and consumption factors specific to UK power stations. It finds that by 2050 the South East, Yorkshire and Humber, the West Midlands and the North West are the regions with the greatest freshwater demand and therefore the most likely to suffer from a lack of resource. However, by 2050 it is the East, South West and East Midlands regions that have the greatest total water (fresh, estuarine and seawater) demand and are the most likely to be constrained by environmental standards.
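
The regional accounting behind such estimates can be sketched as water demand = electricity generated × a technology-specific abstraction (or consumption) factor. All factors and generation figures below are hypothetical placeholders, not the UK-specific values used in the paper.

```python
# Back-of-envelope regional water-demand accounting. Factors and
# generation mixes are invented; the paper uses UK-specific values.
abstraction_m3_per_MWh = {
    "gas_ccgt_tower":    1.0,    # hypothetical
    "nuclear_tower":     2.7,    # hypothetical
    "coal_once_through": 90.0,   # hypothetical
}

region_generation_MWh = {
    "East":       {"nuclear_tower": 30e6, "gas_ccgt_tower": 10e6},
    "North West": {"coal_once_through": 5e6},
}

for region, mix in region_generation_MWh.items():
    demand = sum(MWh * abstraction_m3_per_MWh[tech] for tech, MWh in mix.items())
    print(f"{region:10s}: {demand/1e6:8.1f} Mm^3/yr abstracted")
```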

Keywords: Water-energy nexus, water resources, abstraction, climate change, power station cooling.

10 Modeling Parametric Vibration of Multistage Gear Systems as a Tool for Design Optimization

Authors: James Kuria, John Kihiu

Abstract:

This work presents a numerical model developed to simulate the dynamics and vibrations of a multistage tractor gearbox. The effects of time-varying mesh stiffness, time-varying frictional torque on the gear teeth, lateral and torsional flexibility of the shafts, and flexibility of the bearings were included in the model. The model was developed using the Lagrangian method and was applied to study the effect of three design variables on the vibration and stress levels in the gears. The first design variable, the module, had little effect on the vibration levels, but a higher module resulted in higher bending stress levels. The second design variable, the pressure angle, had little effect on the vibration levels but a strong effect on the stress levels on the pinion of a high-reduction-ratio gear pair. A pressure angle of 25° resulted in lower stress levels for a pinion with 14 teeth than a pressure angle of 20°. The third design variable, the contact ratio, had a very strong effect on both the vibration levels and the bending stress levels: increasing the contact ratio to 2.0 reduced both significantly. For the gear train design used in this study, a module of 2.5 and a contact ratio of 2.0 for the various meshes were found to yield the best combination of low vibration levels and low bending stresses. The model can therefore be used as a tool for obtaining optimum gear design parameters for a given multistage spur gear train.
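
A heavily simplified, single-mesh illustration of the parametric excitation at the heart of such models is sketched below: a one-degree-of-freedom equation m x'' + c x' + k(t) x = F with a rectangular-wave mesh stiffness alternating between one and two tooth pairs in contact. The paper's actual model is a multi-degree-of-freedom Lagrangian formulation; every parameter value here is hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-DOF gear mesh with time-varying (parametric) stiffness.
# All parameter values are hypothetical illustrations.
m, c, F = 0.05, 40.0, 500.0           # kg, N s/m, N
k_min, k_max = 2.0e8, 3.5e8           # N/m: single vs double tooth-pair contact
f_mesh, duty = 1.0e3, 0.4             # mesh frequency (Hz), double-contact fraction

def k(t):
    # Rectangular-wave mesh stiffness over one mesh cycle.
    return k_max if (t * f_mesh) % 1.0 < duty else k_min

def rhs(t, y):
    x, v = y
    return [v, (F - c * v - k(t) * x) / m]

sol = solve_ivp(rhs, (0.0, 0.02), [F / k_min, 0.0], max_step=1e-6)
print(f"peak dynamic deflection: {np.max(np.abs(sol.y[0])) * 1e6:.1f} um")
```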

Keywords: Bending stress levels, frictional torque, gear design parameters, mesh stiffness, multistage gear train, vibration levels.

9 Impact of Computer-Mediated Communication on Virtual Teams' Performance: An Empirical Study

Authors: Nadeem Ehsan, Ebtisam Mirza, Muhammad Ahmad

Abstract:

In a complex project environment, project teams face multi-dimensional communication problems that can ultimately lead to project breakdown. Team performance varies between a face-to-face (FTF) environment and groups working remotely in a computer-mediated communication (CMC) environment. A brief review of the Input-Process-Output model suggested by James E. Driskell, Paul H. Radtke and Eduardo Salas in "Virtual Teams: Effects of Technological Mediation on Team Performance" (2003) was carried out to develop the basis of this research. That model theoretically analyzes the effects of technological mediation on team processes such as cohesiveness, status and authority relations, counternormative behavior, and communication. The empirical study described in this paper tests the cohesiveness of diverse project teams in a multi-national organization. The study uses both quantitative and qualitative techniques for data gathering and analysis, including interviews and questionnaires for data collection and graphical representation for analyzing the collected data. Computer-mediated technology may affect team performance through differences in cohesiveness among teams, and this difference may be moderated by factors such as the type of communication environment, the type of task, and the temporal context of the team. Based on the reviewed model, sets of hypotheses were devised and tested. This research reports on a study that compared team cohesiveness between virtual teams using CMC and non-CMC communication mediums. The findings suggest that CMC can help virtual teams increase cohesiveness among their members, making CMC an effective medium for increasing productivity and team performance.

Keywords: Computer-mediated Communication, Virtual Teams, Team Performance, Team Cohesiveness.

8 Hypertensive Response to Maximal Exercise Test in Young and Middle-Aged Hypertensive Patients on Blood Pressure-Lowering Medication: Monotherapy vs. Combination Therapy

Authors: James Patrick A. Diaz, Raul E. Ramboyong

Abstract:

Background: The hypertensive response during a maximal exercise test provides important information on the level of blood pressure control and the evaluation of treatment. Method: A single-center retrospective descriptive study was conducted among 117 young (aged 20 to 40) and middle-aged (aged 40 to 65) hypertensive patients who underwent a treadmill stress test while on maintenance frontline medication, either monotherapy (angiotensin-converting enzyme inhibitor/angiotensin receptor blocker [ACEi/ARB], calcium channel blocker [CCB], or the diuretic hydrochlorothiazide [HCTZ]) or combination therapy (ARB+CCB or ARB+HCTZ), and who attained a maximal exercise on the treadmill stress test (TMST) with a hypertensive response (systolic blood pressure: male >210 mm Hg, female >190 mm Hg; diastolic blood pressure >100 mm Hg; or an increase of >10 mm Hg at any time during the test) on the Bruce or Modified Bruce protocol. Exaggerated blood pressure response during exercise (systolic [SBP] and diastolic [DBP]), peak exercise blood pressure (SBP and DBP), the recovery period (SBP and DBP), tests for ischemia, and the patients' antihypertensive medication(s) were investigated. Analysis of variance and the chi-square test were used for statistical analysis. Results: Hypertensive responses on maximal exercise testing were seen mostly among female (P < 0.001) and middle-aged (P < 0.001) patients. Exaggerated diastolic blood pressure responses were significantly lower in patients taking a CCB (P < 0.004). A longer recovery period, showing a delayed decline in SBP, was observed in patients taking ARB+HCTZ (P < 0.036). There were no significant differences in the level of exaggerated systolic blood pressure response or in peak exercise blood pressure (systolic or diastolic) between patients using monotherapy and those using combination antihypertensives. Conclusion: Calcium channel blockers were associated with a lower exaggerated diastolic BP response during maximal exercise testing in middle-aged hypertensive patients. Patients on combination therapy with ARB+HCTZ exhibited a longer recovery period of systolic blood pressure.
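
For illustration, the two tests named in the methods can be run in a few lines, as sketched below on synthetic data that do not reproduce the study's results.

```python
import numpy as np
from scipy import stats

# One-way ANOVA: peak-exercise DBP across three monotherapy groups.
# All values are invented for illustration.
dbp_acei = np.array([102, 98, 105, 110, 101])
dbp_ccb  = np.array([ 96, 94,  99, 100,  95])
dbp_hctz = np.array([104, 99, 108, 106, 103])
f_stat, p_anova = stats.f_oneway(dbp_acei, dbp_ccb, dbp_hctz)

# Chi-square: hypertensive response vs. sex (2x2 contingency table).
table = np.array([[34, 18],    # female: response / no response
                  [22, 43]])   # male:   response / no response
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"Chi-square: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.3f}")
```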

Keywords: Antihypertensive, exercise test, hypertension, hypertensive response.

7 Impact of Fischer-Tropsch Wax on Ethylene Vinyl Acetate/Waste Crumb Rubber Modified Bitumen: An Energy-Sustainability Nexus

Authors: Keith D. Nare, Mohau J. Phiri, James Carson, Chris D. Woolard, Shanganyane P. Hlangothi

Abstract:

In an energy-intensive world, minimizing energy consumption is paramount to cost saving and to reducing the carbon footprint. Improving mixing procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA)/waste crumb rubber modified bitumen offers a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assessed, with a maximum loading of 2.5%. The rationale for the FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase-change abilities of FT wax enable EVA co-crystallization with the support of the elastomeric backbone of the crumb rubber. FT wax loadings of less than 1% proved workable in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to the mixture design is implemented across the different loadings of FT wax and EVA, with consistent amounts of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle and rutting parameter) were used as performance indicators of the different optimized mixtures. The low-temperature chemistry of the optimized mixtures is analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and black space diagrams are developed and used to predict age-induced cracking of the different long-term-aged mixtures. The modified binder rheology reveals that the strain response is not linear and that there is substantial rearrangement of polymer chains as stress is increased, depending on the age state of the mixture and the FT wax and EVA loadings. Individual effects dominate over synergistic effects in the co-interaction of EVA and FT wax. All-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting an increase in ease of workability. The findings show that the interaction chemistry of bitumen, crumb rubber, EVA, and FT wax is first and second order in all cases, involving both individual contributions and co-interactions among the components of the mixture.
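
A response surface of the kind described is typically a second-order polynomial fitted by least squares. The sketch below fits such a surface to invented design points over FT wax and EVA loadings; it illustrates the method only, not the paper's data.

```python
import numpy as np

# Second-order response surface fit: rutting parameter vs. FT wax and
# EVA loadings (crumb rubber and bitumen held constant). Design points
# and responses are invented for illustration.
ft_wax = np.array([0.0, 0.5, 1.0, 0.0, 0.5, 1.0, 0.0, 0.5, 1.0])  # % loading
eva    = np.array([2.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 4.0, 4.0])  # % loading
rut    = np.array([4.1, 4.6, 4.4, 4.8, 5.5, 5.2, 4.9, 5.3, 5.0])  # kPa (invented)

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(ft_wax), ft_wax, eva,
                     ft_wax * eva, ft_wax**2, eva**2])
coeffs, *_ = np.linalg.lstsq(X, rut, rcond=None)
print("b0,b1,b2,b12,b11,b22 =", np.round(coeffs, 3))
```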

Keywords: Bitumen, crumb rubber, ethylene vinyl acetate, FT wax.

6 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand

Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo

Abstract:

An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project's likely construction costs (initial and final), and subsequent cost control activities should prevent the unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, the observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between the design-stage ECPs and FTS of construction projects, with a view to identifying the risk factors responsible for the observed variance. Data were sourced through interviews, and risk factors were identified using thematic analysis. Access was obtained to project files from the records of the study participants (consultant quantity surveyors), and document analysis was employed to complement the interview responses. The findings revealed discrepancies between ECPs and FTS in the region of −14% to +16%, and the study argues that the identified risk factors were responsible for this variability. The values obtained from the analysis would enable greater accuracy in quantity surveyors' forecasts of FTS. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and the successful delivery of construction projects. The findings provide quantitative confirmation of theoretical conclusions generated in the international literature, thereby adding to and consolidating existing knowledge.
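
The variance statistic at the heart of the study is simple to compute, as the sketch below shows with invented figures; only the reported −14% to +16% range comes from the abstract.

```python
# Percentage variance between the design-stage elemental cost plan
# (ECP) and the final tender sum (FTS). Project names and figures
# are hypothetical.
projects = {              # project: (ECP, FTS) in NZ$ (invented)
    "Office fit-out": (4_200_000, 4_790_000),
    "School block":   (7_500_000, 6_610_000),
}

for name, (ecp, fts) in projects.items():
    variance = (fts - ecp) / ecp * 100.0
    print(f"{name:15s}: {variance:+.1f}% (FTS vs ECP)")
```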

Keywords: Accuracy, design-stage, elemental cost plan, final tender sum, New Zealand.

5 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated approximately, or with allowances for some variability, rather than exactly. In numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and can yield markedly different results. Ultimately, sound engineering judgment and logic are often required to decipher the true meaning and significance (if any) of some PF results.
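
As a concrete illustration of one of the methods mentioned, the sketch below estimates PF by Monte-Carlo simulation: sample shear-strength parameters, compute a factor of safety per realisation, and take PF = P(FS < 1). The distributions and the FS expression are simplified placeholders, not a site model.

```python
import numpy as np

# Monte-Carlo PF for a simplified planar failure mechanism.
# Distributions and fixed loads below are hypothetical.
rng = np.random.default_rng(42)
n = 100_000

cohesion = rng.normal(45.0, 10.0, n)                 # kPa
phi = np.radians(rng.normal(32.0, 3.0, n))           # friction angle
driving, normal, area = 950.0, 1200.0, 18.0          # kN, kN, m^2 (treated deterministically)

resisting = cohesion * area + normal * np.tan(phi)   # kN
fs = resisting / driving
pf = np.mean(fs < 1.0)
print(f"PF = {pf:.3%}  (mean FS = {fs.mean():.2f})")
```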

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

4 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO₂ emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid coincident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storing heat within the fabric of dwellings for use some hours, or days, later provides a route to complete decoupling of demand from supply and facilitates a greatly increased use of renewable energy generation in a local or national electricity network. Integrating thermal energy storage into the building fabric for retrieval at a later time requires careful evaluation of many competing thermal, physical, and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, and space allocation. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that affect performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, together with an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m², compared with 22 W/m² for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity relative to medium- and lightweight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully. Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, providing a fast response that could, aggregated across multiple dwellings, deliver a significant and valuable reduction in demand for grid-generated electricity in expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
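
Two of the quantities discussed above can be checked at order-of-magnitude level with textbook formulas: steady heat-loss flux q = (k/L)·ΔT and a diffusive charging time scale t ≈ L²/α (full charging takes a few multiples of this). The property values below are typical handbook figures, not the paper's measured data.

```python
# Order-of-magnitude checks. Conductivities, diffusivity, and the
# temperature difference are typical handbook values (assumptions).
def heat_loss_flux(k, thickness_m, dT):
    return k / thickness_m * dT          # W/m^2

def charge_time_hours(thickness_m, alpha=7e-7):   # alpha ~ dense concrete, m^2/s
    return thickness_m**2 / alpha / 3600.0        # diffusive time scale only

print(f"VIP (k=0.004 W/m K): {heat_loss_flux(0.004, 0.025, 20):.1f} W/m^2")
print(f"EPS (k=0.035 W/m K): {heat_loss_flux(0.035, 0.025, 20):.1f} W/m^2")
for L in (0.025, 0.050, 0.100):
    print(f"{L*1000:.0f} mm concrete layer: ~{charge_time_hours(L):.1f} h time scale")
```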

Keywords: Fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration.

3 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases

Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner

Abstract:

Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions so as to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence and arguments to demonstrate that packages meet the regulations; once approved by the competent authority (in the UK, the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is described, by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and the pressure drop measured; a small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material, and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. This approach, which adheres to the basic safety principles of BS EN ISO 12807, is very pessimistic, and it is current practice in the demonstration of transport safety accepted by the UK regulator. It is UK government policy that HAW will be managed through geological disposal. It is proposed that intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty in the releasable fraction of material within the package ullage space, because the waste may take many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For these reasons, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well-understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would further reduce the release to the environment (pessimistically, in keeping with the essence of nuclear transport safety cases) or because they are not credible in the conditions of transport considered.
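
Retaining only the two processes named above, the dynamic aerosol balance for a monodisperse number concentration N reduces to dN/dt = −K N² − λ N, with λ = v_s/H for settling. The sketch below integrates this with illustrative coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced aerosol balance: Brownian coagulation (K*N^2) plus
# gravitational settling (lam*N). K, v_s, and H are illustrative
# assumptions, not values from the safety case.
K = 5e-16          # coagulation coefficient, m^3/s
v_s = 1e-4         # Stokes settling velocity, m/s
H = 0.5            # characteristic fall height inside the package, m
lam = v_s / H      # first-order settling rate, 1/s

def rhs(t, y):
    N = y[0]
    return [-K * N**2 - lam * N]

sol = solve_ivp(rhs, (0.0, 24 * 3600.0), [1e12], rtol=1e-8)  # N0 = 1e12 per m^3
print(f"airborne fraction after 24 h: {sol.y[0, -1] / 1e12:.2e}")
```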

Keywords: Aerosol processes, Brownian coagulation, gravitational settling, transport regulations.

2 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect's intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect's behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans' "Performance Measurement System" (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of 'soft interventions', inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect's movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect's current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect's intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where, at each step, a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at a random location, takes a noisy shortest path to their intended target. In all scenarios, the suspect's intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach, motivating a future machine learning approach based on reinforcement learning that would relax some of the current limiting assumptions.
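
The posterior update at the core of the tool can be sketched in a few lines of Bayes' rule: each detection re-weights the target hypotheses by how consistent the observed move is with a route to that target. The graph, likelihood values, and consistency model below are invented for illustration.

```python
import numpy as np

# Toy Bayesian target inference. Targets, likelihoods, and the
# 0.8/0.2 route-consistency model are hypothetical.
targets = ["stadium", "airport", "port"]
prior = np.full(len(targets), 1.0 / len(targets))

def update(prior, likelihood):
    """One Bayes step: posterior proportional to prior x likelihood."""
    post = prior * likelihood
    return post / post.sum()

# likelihood[i] = P(observed transition | suspect heading to target i)
observations = [
    np.array([0.8, 0.2, 0.2]),   # move consistent with the stadium route
    np.array([0.8, 0.2, 0.2]),
    np.array([0.2, 0.8, 0.2]),   # one detour toward the airport
]

p = prior
for lik in observations:
    p = update(p, lik)
print(dict(zip(targets, np.round(p, 3))))
```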

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

1 A Study on the Relation among Primary Care Professionals Serving the Disadvantaged Community, Socioeconomic Status, and Adverse Health Outcome

Authors: Chau-Kuang Chen, Juanita Buford, Colette Davis, Raisha Allen, John Hughes, Jr., James Tyus, Dexter Samuels

Abstract:

During the post-Civil War era, the city of Nashville, Tennessee, had the highest mortality rate in the United States. The elevated death and disease rates among former slaves were attributable to a lack of quality healthcare. To address the paucity of healthcare services, Meharry Medical College, an institution with the mission of educating minority professionals and serving the underserved population, was established in 1876. Purpose: The social ecological framework and partial least squares (PLS) path modeling were used to quantify the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities, so that the study results could demonstrate the accomplishment of the College's mission of training primary care professionals to serve in underserved areas. Methods: Various statistical methods were used to analyze alumni data from 1975–2013. K-means cluster analysis was utilized to assign individual medical and dental graduates to cluster groups of practice communities (disadvantaged or non-disadvantaged). Discriminant analysis was implemented to verify the classification accuracy of the cluster analysis. The independent t-test was performed to detect significant mean differences in the respective clustering and criterion variables. The chi-square test was used to test whether the proportions of primary care and non-primary care specialists are consistent with those of medical and dental graduates practicing in the designated community clusters. Finally, the PLS path model was constructed to explore the construct validity of the analytic model by providing the magnitude of the effects of socioeconomic status and adverse health outcomes on primary care professionals serving the disadvantaged community. Results: Approximately 83% (3,192/3,864) of Meharry Medical College's medical and dental graduates from 1975 to 2013 were practicing in disadvantaged communities. The independent t-test confirmed the content validity of the cluster analysis model. The PLS path modeling demonstrated that alumni served as primary care professionals in communities with significantly lower socioeconomic status and higher adverse health outcomes (p < .001), and exhibited a meaningful interrelation between the communities in which primary care professionals practice and their surrounding environments (socioeconomic status and adverse health outcomes), yielding model reliability, validity, and applicability. Conclusion: This study applied social ecological theory and analytic modeling approaches to assess the attainment of Meharry Medical College's mission of training primary care professionals to serve in underserved areas, particularly communities with low socioeconomic status and high rates of adverse health outcomes. In summary, the majority of medical and dental graduates from Meharry Medical College provided primary care services to disadvantaged communities with low socioeconomic status and high adverse health outcomes, demonstrating that the College has fulfilled its mission. The high reliability, validity, and applicability of this model imply that it could be replicated for comparable universities and colleges elsewhere.
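
The clustering step can be sketched as follows: k-means with k = 2 splits practice communities into disadvantaged and non-disadvantaged groups on socioeconomic and health indicators. The communities, indicator values, and feature choices below are invented; in practice the indicators would be standardized first.

```python
import numpy as np
from sklearn.cluster import KMeans

# K-means (k=2) on hypothetical community indicators. Real analyses
# would standardize features and use the study's actual variables.
X = np.array([
    # median_income_k$, pct_poverty, adverse_outcome_rate
    [28.0, 31.0, 12.5],
    [31.0, 27.0, 11.0],
    [74.0,  8.0,  4.2],
    [69.0, 10.0,  5.1],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster labels:", km.labels_)
print("cluster centers:\n", np.round(km.cluster_centers_, 1))
```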

Keywords: Disadvantaged Community, K-means Cluster Analysis, PLS Path Modeling, Primary care.
