Search results for: Thomas Richard
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 721

391 Security in Cyberspace: A Comprehensive Review of COVID-19 Continued Effects on Security Threats and Solutions in 2021 and the Trajectory of Cybersecurity Going into 2022

Authors: Mojtaba Fayaz, Richard Hallal

Abstract:

This study examines the various types of dangers to which our virtual environment is vulnerable, including how it can be attacked and how to avoid attacks and secure our data. The terrain of cyberspace is never completely safe, and COVID-19 has added to the confusion, necessitating regular checks and evaluations. By operating from home, cybercriminals have been able to act with greater skill and undertake more conspicuous and sophisticated attacks while maintaining a higher level of finesse. Different types of cyberattacks, such as operation-based, authentication-based, and software-based attacks, are constantly evolving, but research suggests that software-based threats, such as ransomware, are becoming more popular, with attacks expected to increase by 93 percent by 2020. The effectiveness of cyber frameworks has shifted dramatically as the pandemic has forced work and private life to become intertwined, destabilising security overall. The formats in which cybercrimes are carried out, as well as the types of cybercrimes that exist, such as phishing, identity theft, malware, and DDoS attacks, have created a new front of cyber protection for security analysis and personal safety. The overall strategy for 2022 will be the introduction of frameworks that address many of the issues associated with offsite working, as well as education that provides better information about commercialised software that does not provide the highest level of security for home users, allowing businesses to plan better security around their systems.

Keywords: cyber security, authentication, software, hardware, malware, COVID-19, threat actors, awareness, home users, confidentiality, integrity, availability, attacks

Procedia PDF Downloads 110
390 Assessing the Risk of Condensation and Moisture Accumulation in Solid Walls: Comparing Different Internal Wall Insulation Options

Authors: David Glew, Felix Thomas, Matthew Brooke-Peat

Abstract:

Improving the thermal performance of homes is seen as an essential step in achieving climate change, fuel security, and fuel poverty targets. One of the most effective thermal retrofits is to insulate solid walls. However, it has been observed that applying insulation to the internal face of solid walls reduces the surface temperature of the inner wall leaf, which may introduce condensation risk and may interrupt seasonal moisture accumulation and dissipation. This research quantifies the extent to which the risk of condensation and moisture accumulation in the wall increases (which can increase the risk of timber rot) following the installation of six different types of internal wall insulation. In so doing, it compares how risk is affected by the thermal resistance, thickness, and breathability of the insulation. Thermal bridging, surface temperatures, condensation risk, and moisture accumulation are evaluated using hygrothermal simulation software before and after the thermal upgrades. The research finds that installing internal wall insulation will always introduce some risk of condensation and moisture. However, it identifies that risks were present prior to insulation and that breathable materials and insulation with lower thermal resistance carry lower risks than alternative insulation options. The implication may be that building standards that encourage the enhanced thermal performance of solid walls are introducing moisture risks into homes.
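
To give a sense of the kind of screening check a hygrothermal assessment performs, the following minimal Python sketch (not the study's simulation software; the layer resistances, temperatures, and humidity are assumed illustrative values) estimates the temperature on the cold side of internally applied insulation from the layer thermal resistances and compares it with the dew point of the indoor air.

```python
import math

def dew_point_c(temp_c, rh):
    """Dew point (Celsius) via the Magnus approximation."""
    a, b = 17.62, 243.12
    gamma = math.log(rh) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def interface_temperatures(t_in, t_out, r_layers, r_si=0.13, r_se=0.04):
    """Temperature at the warm face of each layer, listed inside to outside.

    r_layers: thermal resistances (m2K/W) of the layers, inside to outside.
    """
    r_total = r_si + sum(r_layers) + r_se
    temps, r_cum = [], r_si
    for r in r_layers:
        temps.append(t_in - (t_in - t_out) * r_cum / r_total)
        r_cum += r
    return temps

# Assumed example: internal insulation (R = 1.5) fixed to a solid brick wall (R = 0.3)
t_in, t_out, rh_in = 20.0, 0.0, 0.65
temps = interface_temperatures(t_in, t_out, [1.5, 0.3])
t_dew = dew_point_c(t_in, rh_in)
behind_insulation = temps[1]  # cold side of the insulation
print(f"Interface temp: {behind_insulation:.1f} C, dew point: {t_dew:.1f} C")
print("Condensation risk" if behind_insulation < t_dew else "No condensation risk at this interface")
```

Under these assumed values the interface behind the insulation falls well below the indoor-air dew point, which is the mechanism by which internal insulation can introduce condensation risk into a previously uninsulated solid wall.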

Keywords: condensation risk, hygrothermal simulation, internal wall insulation, thermal bridging

Procedia PDF Downloads 156
389 Experimental Modeling and Simulation of Zero-Surface Temperature of Controlled Water Jet Impingement Cooling System for Hot-Rolled Steel Plates

Authors: Thomas Okechukwu Onah, Onyekachi Marcel Egwuagu

Abstract:

Zero-surface temperature, which controlled the cooling profile, was modeled and used to investigate the effect of process parameters on hot-rolled steel plates. The parameters included impingement gaps of 40 mm to 70 mm; pipe diameters of 20 mm to 45 mm feeding a jet nozzle with 30 holes of 8 mm diameter each; and flow rates between 2.896×10⁻⁶ m³/s and 3.13×10⁻⁵ m³/s. The developed simulation model of the zero-surface temperature, upon validation, showed 99% prediction accuracy, with dimensional homogeneity established. The evaluated zero-surface temperature of the controlled water-jet-impingement steel plates showed a high cooling rate of 36.31 °C/s at an optimal cooling nozzle diameter of 20 mm, an impingement gap of 70 mm, and a flow rate of 1.77×10⁻⁵ m³/s, giving a Reynolds number of 2758.586 in the turbulent regime. It was also deduced that as the nozzle diameter increased, the impingement gap reduced. This achieved a faster rate of cooling to an optimum temperature of 300 °C irrespective of the starting surface cooling temperature. The results additionally showed that, with a tested-plate initial temperature of 550 °C, a controlled cooling temperature of about 160 °C produced film and nucleate boiling heat extraction that was particularly beneficial at the end of controlled cooling and influenced the microstructural properties of the test plates.
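
As a rough illustration of how a Reynolds number of this magnitude arises from the quoted operating point, the back-of-envelope sketch below is not the authors' model; it assumes the optimal flow rate passes through a single 8 mm hole and takes the kinematic viscosity of water to be about 1.0×10⁻⁶ m²/s.

```python
import math

def jet_reynolds(flow_rate_m3s, hole_diameter_m, kinematic_viscosity=1.0e-6):
    """Reynolds number of a round water jet: Re = v * D / nu, with v = Q / A."""
    area = math.pi * (hole_diameter_m / 2) ** 2
    velocity = flow_rate_m3s / area
    return velocity * hole_diameter_m / kinematic_viscosity, velocity

# Assumed illustration: the optimal flow rate from the abstract through one 8 mm hole
re, v = jet_reynolds(1.77e-5, 0.008)
print(f"jet velocity = {v:.3f} m/s, Re = {re:.0f}")
```

With these assumptions the estimate lands in the same range as the reported value of 2758.586; the exact figure depends on the water temperature and on how the flow is divided among the nozzle holes.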

Keywords: temperature, mechanistic-model, plates, impingements, dimensionless-numbers

Procedia PDF Downloads 30
388 A Constitutional Theory of the American Presidency

Authors: Elvin Lim

Abstract:

This article integrates the debate about presidential powers with the debate about federalism, arguing that there are two ways of exercising presidential powers: one working in tandem with expanding federal powers, and the other working against it. Alexander Hamilton and Thomas Jefferson—the former a Federalist and the latter echoing the views of many Anti-Federalists—disagreed not only on the constitutional basis of prerogative, but also on the ends for which it should be deployed. This tension has always existed in American politics, and is reproduced today. Modern Democrats and Republicans both want a strong executive, but Democrats want a strong executive in order to pass legislation that expands the reach of the federal government; naturally, they must rely on an equally empowered Congress to do so. Republicans generally do not want an intrusive federal government, which is why their defense of a strong presidency does not come alongside a call for a strong Congress. This distinction cannot be explained without recourse to foundational yet opposing views about the appropriate role of federal power. When we bring federalism back in, we see that there are indeed two presidencies: one neo-Federalist, in favor of moderate presidential prerogative alongside a robust Congress directed collectively to a national state-building agenda and expanding federal prerogative; the other neo-Anti-Federalist, in favor of expansive presidential prerogative and an ideologically sympathetic Congress equally suspicious of federal power, seeking to retard or roll back national state-building in favor of states' rights.

Keywords: US presidency, federalism, prerogative, anti-federalism

Procedia PDF Downloads 110
387 An Analysis of a Relational Frame Skills Training Intervention to Increase General Intelligence in Early Childhood

Authors: Ian M. Grey, Bryan Roche, Anna Dillon, Justin Thomas, Sarah Cassidy, Dylan Colbert, Ian Stewart

Abstract:

This paper presents findings from a study conducted in two schools in Abu Dhabi. The hypothesis is that teaching young children to derive various relations between stimuli leads to increases in full-scale IQ scores of typically developing children. In the experimental group, sixteen 6-7-year-old children were exposed over six weeks to an intensive training intervention designed specifically for their age group. This training intervention, presented on a tablet, aimed to improve their understanding of the relations Same, Opposite, Different, contextual control over the concept of Sameness and Difference, and purely arbitrary derived relational responding for Sameness and Difference. In the control group, sixteen 6-7-year-old children interacted with KIBO robotics over six weeks. KIBO purports to improve cognitive skills through engagement with STEAM activities. Increases in full-scale IQ were recorded for most children in the experimental group, while no increases in full-scale IQ were recorded for the control group. These findings support the hypothesis that relational skills underlie many aspects of general cognitive ability.

Keywords: early childhood, derived relational responding, intelligence, relational frame theory, relational skills

Procedia PDF Downloads 179
386 An Ensemble Learning Method for Applying Particle Swarm Optimization Algorithms to Systems Engineering Problems

Authors: Ken Hampshire, Thomas Mazzuchi, Shahram Sarkani

Abstract:

As a subset of metaheuristics, nature-inspired optimization algorithms such as particle swarm optimization (PSO) have shown promise both in solving intractable problems and in their extensibility to novel problem formulations due to their general approach requiring few assumptions. Unfortunately, single instantiations of algorithms require detailed tuning of parameters and cannot be proven to be best suited to a particular illustrative problem on account of the “no free lunch” (NFL) theorem. Using these algorithms in real-world problems requires exquisite knowledge of the many techniques and is not conducive to reconciling the various approaches to given classes of problems. This research aims to present a unified view of PSO-based approaches from the perspective of relevant systems engineering problems, with the express purpose of then eliciting the best solution for any problem formulation in an ensemble learning bucket of models approach. The central hypothesis of the research is that extending the PSO algorithms found in the literature to real-world optimization problems requires a general ensemble-based method for all problem formulations but a specific implementation and solution for any instance. The main results are a problem-based literature survey and a general method to find more globally optimal solutions for any systems engineering optimization problem.
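
For readers unfamiliar with the base algorithm being ensembled, the sketch below is a generic, minimal PSO for minimization (a textbook illustration, not the authors' ensemble method; the objective, bounds, and coefficients are placeholders).

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer (minimization)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(zip(pbest_val, pbest))[1][:]
    gbest_val = min(pbest_val)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest, gbest_val

# Placeholder objective: the sphere function
best_x, best_f = pso(lambda x: sum(xi * xi for xi in x), dim=3)
print(best_x, best_f)
```

In a bucket-of-models ensemble along the lines described, several differently configured optimizers of this kind would be run on the same problem instance and the best-performing solution retained.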

Keywords: particle swarm optimization, nature-inspired optimization, metaheuristics, systems engineering, ensemble learning

Procedia PDF Downloads 90
385 Analysis of Differences between Public and Experts’ Views Regarding Sustainable Development of Developing Cities: A Case Study in the Iraqi Capital Baghdad

Authors: Marwah Mohsin, Thomas Beach, Alan Kwan, Mahdi Ismail

Abstract:

This paper describes the differences in views on sustainable development between the general public and experts in a developing country, Iraq. It answers the question: how do the views of the public differ from the generally accepted views of experts in the context of sustainable urban development in Iraq? To answer this question, the views of both the public and the experts were analysed, drawing on a public survey and a Delphi questionnaire. The responses were analysed using statistical methods in order to identify significant differences, enabling investigation of how public perceptions differ from experts’ views on urban sustainable development factors. This is important because differing viewpoints between policy-makers and the public will affect public acceptance of any future sustainable development work that is undertaken. The statistical analysis shows that the views of the public and the experts differ on most of the variables; only six variables show no differences. Those variables are ‘The importance of establishing sustainable cities in Iraq’, ‘Mitigate traffic congestion’, ‘Waste recycling and separating’, ‘Use wastewater recycling’, ‘Parks and green spaces’, and ‘Promote investment’.
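
The comparison rests on standard group-difference statistics. As a hedged illustration only (the study's own analysis uses principal component analysis among other methods; the ratings below are invented), a non-parametric test of whether public and expert ratings of a single factor differ could look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical Likert-scale (1-5) ratings of one factor, e.g. "parks and green spaces"
rng = np.random.default_rng(0)
public_ratings = rng.integers(1, 6, size=120)   # assumed public survey responses
expert_ratings = rng.integers(2, 6, size=25)    # assumed Delphi-round responses

# Non-parametric test of whether the two groups rate the factor differently
u_stat, p_value = stats.mannwhitneyu(public_ratings, expert_ratings, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}",
      "-> significant difference" if p_value < 0.05 else "-> no significant difference")
```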

Keywords: urban sustainability, experts views, public views, principal component analysis, PCA

Procedia PDF Downloads 124
384 A Reminder of a Rare Anatomical Variant of the Spinal Accessory Nerve Encountered During Routine Neck Dissection: A Case Report and Updated Review of the Literature

Authors: Sophie Mills, Constantinos Aristotelous, Leila L. Touil, Richard C. W. James

Abstract:

Objectives: Historical studies of the anatomy of the spinal accessory nerve (SAN) have reported conflicting results regarding its relationship with the internal jugular vein (IJV). A literature review was undertaken to establish the prevalence of anatomical variations of the SAN encountered during routine neck dissection surgery in order to increase awareness and reduce morbidity associated with iatrogenic SAN injury. Materials and Methods: The largest systematic review to date was performed using PRISMA-ScR guidelines, which yielded nine articles following the application of inclusion and exclusion criteria. A case report is also included, which demonstrates the rare anatomical relationship of the SAN traversing a fenestrated IJV, seen for the first time in the senior author’s career. Results: The mean number of dissections per study was 119; 55.6% (n=5) of the studies were performed on cadaveric subjects, and 44.4% (n=4) were surgical dissections. Incidences of the SAN lying lateral to the IJV and medial to the IJV ranged from 38.9%-95.7% and 2.8%-57.4%, respectively. Over half of the studies reported incidences of the SAN traversing the IJV in 0.9%-2.8% of dissections. One study reported an isolated variant of the SAN dividing around the IJV with a prevalence of 0.5%. Conclusion: At the level of the posterior belly of the digastric muscle, the surgeon can anticipate identifying the SAN lateral to the IJV in approximately three-quarters of cases, whilst around one-quarter are estimated to be medial. A mean of 1.6% of SANs traverse a fenestration of the vein. It is essential for surgeons to be aware of these anatomical variations and their prevalence to prevent injury to vital structures during surgery.

Keywords: anatomical variant, internal jugular vein, neck dissection, spinal accessory nerve

Procedia PDF Downloads 135
383 A Linearly Scalable Family of Swapped Networks

Authors: Richard Draper

Abstract:

A supercomputer can be constructed from identical building blocks, which are small parallel processors connected by a network referred to as the local network. The routers have unused ports which are used to interconnect the building blocks. These connections are referred to as the global network. The address space has a global and a local component (g, l). The conventional way to connect the building blocks is to connect (g, l) to (g’, l). If there are K blocks, this requires K global ports in each router. If a block is of size M, the result is a machine with KM routers having diameter two. To increase the size of the machine to 2K blocks, each router connects to only half of the other blocks. The result is a larger machine but also one with greater diameter. This is a crude description of how the network of the CRAY XC® is designed. In this paper, a family of interconnection networks using routers with K global and M local ports is defined. Coordinates are (c, d, p) and the global connections are (c, d, p) ↔ (c’, p, d), which swaps p and d. The network is denoted D3(K, M) and is called a Swapped Dragonfly. D3(K, M) has KM² routers and has diameter three, regardless of the size of K. To produce a network of size KM² conventionally, diameter would be an increasing function of K. The family of Swapped Dragonflies has other desirable properties: 1) D3(K, M) scales linearly in K and quadratically in M. 2) If L < K, D3(K, M) contains many copies of D3(L, M). 3) If L < M, D3(K, M) contains many copies of D3(K, L). 4) D3(K, M) can perform an all-to-all exchange in KM² + KM time, which is only slightly more than the time to do a one-to-all. This paper makes several contributions. It is the first time that a swap has been used to define a linearly scalable family of networks. Structural properties of this new family of networks are thoroughly examined. A synchronizing packet header is introduced; it specifies the path to be followed and makes it possible to define highly parallel communication algorithms on the network. Among these is an all-to-all exchange in time KM² + KM. To demonstrate the effectiveness of the swap, the properties of the CRAY XC® network and D3(K, 16) are compared.
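
A small sketch can make the swap construction concrete. The code below builds a D3(K, M)-style network for small K and M and checks its size and diameter by breadth-first search; the local full-mesh assumption and the inclusion of c' = c among the global swap links are modelling assumptions made here for illustration, not details taken from the paper.

```python
from itertools import product
from collections import deque

def build_swapped_dragonfly(K, M):
    """Adjacency of a D3(K, M)-style swapped network (illustrative assumptions).

    Routers are labelled (c, d, p) with c < K and d, p < M; the M routers
    sharing (c, d) form a fully connected local group; the global links are
    the swap (c, d, p) <-> (c', p, d) for every block c'.
    """
    nodes = list(product(range(K), range(M), range(M)))
    adj = {n: set() for n in nodes}
    for c, d, p in nodes:
        for q in range(M):                 # local full mesh within group (c, d)
            if q != p:
                adj[(c, d, p)].add((c, d, q))
        for c2 in range(K):                # global swap links
            if (c2, p, d) != (c, d, p):
                adj[(c, d, p)].add((c2, p, d))
                adj[(c2, p, d)].add((c, d, p))
    return adj

def diameter(adj):
    best = 0
    for src in adj:                        # BFS from every router
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

adj = build_swapped_dragonfly(K=4, M=4)
print(len(adj), "routers, diameter", diameter(adj))   # expect K*M*M routers, diameter <= 3
```

Running it for K = 4, M = 4 gives the expected KM² = 64 routers with diameter no greater than three, which is the property the abstract emphasises.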

Keywords: all-to-all exchange, CRAY XC®, Dragonfly, interconnection network, packet switching, swapped network, topology

Procedia PDF Downloads 114
382 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People

Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman

Abstract:

The instantaneous and spatial localization of visually impaired people in dynamically changing environments with unexpected hazards and obstacles is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the received signal strength indicator (RSSI) to measure received signal strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-related methodologies. Therefore, the current paper proposes an improved trilateration-based method for the localization of Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used along with purpose-developed software. Experiments were then conducted to obtain surface plots that showed signal interference and other environmental effects. Finally, the results show the surface plots for all Bluetooth modules used, with strong and weak points depicted using red, yellow, and blue colour codes. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also highlighted how signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
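
As a hedged sketch of the underlying idea (not the authors' implementation; the path-loss parameters and beacon positions are assumed), an RSSI reading is first converted to a distance estimate with a log-distance path-loss model, and three distances are then combined by least-squares trilateration.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Log-distance path-loss model: distance in metres from an RSSI reading."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def trilaterate(beacons, distances):
    """Least-squares 2D position from >= 3 beacon positions and range estimates."""
    (x1, y1), d1 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y)

# Assumed beacon layout (metres) and example RSSI readings (dBm)
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rssi_readings = [-65.0, -72.0, -70.0]
dists = [rssi_to_distance(r) for r in rssi_readings]
print("estimated position:", trilaterate(beacons, dists))
```

Improving the RSSI-to-distance step (for example, calibrating the path-loss exponent per environment, as the environmental effects in the surface plots suggest) is where most of the practical gain lies.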

Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired

Procedia PDF Downloads 127
381 Tunable Control of Therapeutics Release from the Nanochannel Delivery System (nDS)

Authors: Thomas Geninatti, Bruno Giacomo, Alessandro Grattoni

Abstract:

Nanofluidic devices have been investigated for over a decade as promising platforms for the controlled release of therapeutics. The nanochannel drug delivery system (nDS), a membrane fabricated with high-precision silicon techniques and capable of zero-order drug release by exploiting diffusion transport at the nanoscale arising from interactions between molecules and the nanochannel surfaces, has shown flexible sustained release in vitro and in vivo over periods ranging from weeks to months. To improve this implantable bio-nanotechnology and create a system that possesses the key features needed to achieve suitable release of therapeutics, the next generation of the nDS has been created. Platinum electrodes are integrated by e-beam deposition onto both surfaces of the membrane, allowing low-voltage (<2 V) and active temporal control of drug release through modulation of electrostatic potentials at the inlet and outlet of the membrane’s fluidic channels. Hence, tunable administration of drugs is ensured by the nanochannel drug delivery system. The membrane will be incorporated into a PEEK implantable capsule, which will include a drug reservoir, control hardware, and an RF system to allow suitable therapeutic regimens in real time. Therefore, this new nanotechnology offers tremendous potential solutions to manage chronic diseases such as cancer, heart disease, circadian dysfunction, pain, and stress.

Keywords: nanochannel membrane, drug delivery, tunable release, personalized administration, nanoscale transport, bioMEMS

Procedia PDF Downloads 305
380 The Prodomain-Bound Form of Bone Morphogenetic Protein 10 is Biologically Active on Endothelial Cells

Authors: Austin Jiang, Richard M. Salmon, Nicholas W. Morrell, Wei Li

Abstract:

BMP10 is highly expressed in the developing heart and plays essential roles in cardiogenesis. BMP10 deletion in mice results in embryonic lethality due to impaired cardiac development. In adults, BMP10 expression is restricted to the right atrium, though ventricular hypertrophy is accompanied by increased BMP10 expression in a rat hypertension model. However, reports of BMP10 activity in the circulation are inconclusive. In particular it is not known whether in vivo secreted BMP10 is active or whether additional factors are required to achieve its bioactivity. It has been shown that high-affinity binding of the BMP10 prodomain to the mature ligand inhibits BMP10 signaling activity in C2C12 cells, and it was proposed that prodomain-bound BMP10 (pBMP10) complex is latent. In this study, we demonstrated that the BMP10 prodomain did not inhibit BMP10 signaling activity in multiple endothelial cells, and that recombinant human pBMP10 complex, expressed in mammalian cells and purified under native conditions, was fully active. In addition, both BMP10 in human plasma and BMP10 secreted from the mouse right atrium were fully active. Finally, we confirmed that active BMP10 secreted from mouse right atrium was in the prodomain-bound form. Our data suggest that circulating BMP10 in adults is fully active and that the reported vascular quiescence function of BMP10 in vivo is due to the direct activity of pBMP10 and does not require an additional activation step. Moreover, being an active ligand, recombinant pBMP10 may have therapeutic potential as an endothelial-selective BMP ligand, in conditions characterized by loss of BMP9/10 signaling.

Keywords: bone morphogenetic protein 10 (BMP10), endothelial cell, signal transduction, transforming growth factor beta (TGF-β)

Procedia PDF Downloads 271
379 The Methanotrophic Activity in a Landfill Bio-Cover through a Subzero Winter

Authors: Parvin Berenjkar, Qiuyan Yuan, Richard Sparling, Stan Lozecznik

Abstract:

Landfills contribute substantially to anthropogenic global warming through CH₄ emissions. Landfills are usually capped by a conventional soil cover to control the migration of gases. Methane is consumed by CH₄-oxidizing microorganisms, known as methanotrophs, that naturally exist in the landfill soil cover. The growth of methanotrophs can be optimized in a bio-cover that typically consists of a gas distribution layer (GDL) to homogenize landfill gas fluxes and an overlying oxidation layer composed of suitable materials that support methanotrophic populations. Materials such as mature yard waste composts can provide an inexpensive and favourable porous support for the growth and activity of methanotrophs. In areas with seasonally cold climates, it is valuable to know whether methanotrophs in a bio-cover can survive the winter until the next spring, and at what depth they remain active in the bio-cover to mitigate CH₄. In this study, a pilot bio-cover was constructed in a closed landfill cell in Winnipeg, Canada, which has a very cold climate. The bio-cover has a surface area of 2.5 m x 3.5 m and a depth of 1.5 m, filled with 50 cm of gravel as a GDL and 70 cm of biosolids compost amended with yard and leaf waste compost. The in situ potential of methanotrophs for CH₄ oxidation was investigated from December 2016 to April 2017 and from November 2017 to April 2018, when the transition to surface frost and thawing happens in the bio-cover. Compost samples taken from different depths of the bio-cover were incubated in the laboratory under standardized conditions: an optimal air:methane atmosphere at 22ºC, but at in situ moisture content. Results showed that the methanotrophs were alive and oxidizing methane without a lag, indicating that there was potential for methanotrophic activity at some depths of the bio-cover.

Keywords: bio-cover, global warming, landfill, methanotrophic activity

Procedia PDF Downloads 117
378 Uncertainty in Near-Term Global Surface Warming Linked to Pacific Trade Wind Variability

Authors: M. Hadi Bordbar, Matthew England, Alex Sen Gupta, Agus Santoso, Andrea Taschetto, Thomas Martin, Wonsun Park, Mojib Latif

Abstract:

Climate models generally simulate long-term reductions in the Pacific Walker Circulation with increasing atmospheric greenhouse gases. However, over two recent decades (1992-2011) there was a strong intensification of the Pacific Trade Winds that is linked with a slowdown in global surface warming. Using large ensembles of multiple climate models forced by increasing atmospheric greenhouse gas concentrations and starting from different ocean and/or atmospheric initial conditions, we reveal very diverse 20-year trends in the tropical Pacific climate associated with considerable uncertainty in the globally averaged surface air temperature (SAT) in each model ensemble. This result suggests low confidence in our ability to accurately predict SAT trends over a 20-year timescale from external forcing alone. We show, however, that the uncertainty can be reduced when the initial oceanic state is adequately known and well represented in the model. Our analyses suggest that internal variability in the Pacific trade winds can mask the anthropogenic signal over a 20-year time frame and drive transitions between periods of accelerated global warming and temporary slowdown periods.

Keywords: trade winds, walker circulation, hiatus in the global surface warming, internal climate variability

Procedia PDF Downloads 259
377 A Comparison of Clinical and Pathological TNM Staging in a COVID-19 Era

Authors: Sophie Mills, Leila L. Touil, Richard Sisson

Abstract:

Introduction: The TNM classification is the global standard for the staging of head and neck cancers. Accurate clinical-radiological staging of tumours (cTNM) is essential to predict prognosis, facilitate surgical planning and determine the need for other therapeutic modalities. This study aims to determine the accuracy of pre-operative cTNM staging using pathological TNM (pTNM) and consider possible causes of TNM stage migration, noting any variation throughout the COVID-19 pandemic. Materials and Methods: A retrospective cohort study examined records of patients with surgical management of head and neck cancer at a tertiary head and neck centre from November 2019 to November 2020. Data was extracted from Somerset Cancer Registry and histopathology reports. cTNM and pTNM were compared before and during the first wave of COVID-19, as well as with other potential prognostic factors such as tumour site and tumour stage. Results: 119 cases were identified, of which 52.1% (n=62) were male, and 47.9% (n=57) were female with a mean age of 67 years. Clinical and pathological staging differed in 54.6% (n=65) of cases. Of the patients with stage migration, 40.4% (n=23) were up-staged and 59.6% (n=34) were down-staged compared with pTNM. There was no significant difference in the accuracy of cTNM staging compared with age, sex, or tumour site. There was a statistically highly significant (p < 0.001) correlation between cTNM accuracy and tumour stage, with the accuracy of cTNM staging decreasing with the advancement of pTNM staging. No statistically significant variation was noted between patients staged prior to and during COVID-19. Conclusions: Discrepancies in staging can impact management and outcomes for patients. This study found that the higher the pTNM, the more likely stage migration will occur. These findings are concordant with the oncology literature, which highlights the need to improve the accuracy of cTNM staging for more advanced tumours.

Keywords: COVID-19, head and neck cancer, stage migration, TNM staging

Procedia PDF Downloads 101
376 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox

Authors: Sally Heyeon Hwang

Abstract:

Within decision theory, there are normative principles that dictate how one should act in addition to empirical theories of actual behavior. As a normative guide to one’s actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey’s assumption that free will is necessary for Newcomb’s paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will is similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb’s problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis’ backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem. This paper will thus argue against the view that a stable background assumption of free will or determinism in decision theory is necessary, arguing instead for a compatibilist decision theory yielding a novel treatment of Newcomb’s problem.
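
For readers who want the decision-theoretic machinery the abstract refers to in concrete form, here is a minimal sketch using the standard textbook payoffs for Newcomb's problem (the payoffs and predictor reliability are the usual illustrative values, not figures from the paper):

```python
# Newcomb's problem, standard payoffs: the opaque box holds $1,000,000 if the
# predictor foresaw one-boxing, $0 otherwise; the clear box always holds $1,000.
# The numbers below are the usual textbook illustration, not the paper's own.
predictor_accuracy = 0.9  # assumed reliability of the predictor

def evidential_eu(one_box: bool) -> float:
    """Evidential decision theory: condition the prediction on the chosen act."""
    p_predicted_one_box = predictor_accuracy if one_box else 1 - predictor_accuracy
    opaque = 1_000_000 * p_predicted_one_box
    clear = 0 if one_box else 1_000
    return opaque + clear

def causal_eu(one_box: bool, p_million_already_there: float) -> float:
    """Causal decision theory: the box contents are already fixed."""
    opaque = 1_000_000 * p_million_already_there
    clear = 0 if one_box else 1_000
    return opaque + clear

print("EDT:", evidential_eu(True), "vs", evidential_eu(False))    # favours one-boxing
print("CDT:", causal_eu(True, 0.5), "vs", causal_eu(False, 0.5))  # favours two-boxing
```

The sketch reproduces the familiar tension: conditioning on the act favours one-boxing, while holding the already-fixed box contents constant favours two-boxing, which is the gap that motivates the paper's discussion.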

Keywords: decision theory, compatibilism, free will, Newcomb’s problem

Procedia PDF Downloads 314
375 Interpersonal Variation of Salivary Microbiota Using Denaturing Gradient Gel Electrophoresis

Authors: Manjula Weerasekera, Chris Sissons, Lisa Wong, Sally Anderson, Ann Holmes, Richard Cannon

Abstract:

The aim of this study was to characterize the bacterial populations and yeasts in saliva by polymerase chain reaction followed by denaturing gradient gel electrophoresis (PCR-DGGE) and to measure yeast levels by culture. PCR-DGGE was performed to identify oral bacteria and yeasts in 24 saliva samples. DNA was extracted and used to generate amplicons of the V2–V3 hypervariable region of the bacterial 16S rDNA gene using PCR. Universal primers targeting the fungal large-subunit rDNA gene (25S-28S) were further used to amplify yeasts present in human saliva. The resulting PCR products were subjected to denaturing gradient gel electrophoresis using a universal mutation detection system. DGGE bands were extracted and sequenced using the Sanger method. A potential relationship was evaluated between groups of bacteria, identified by cluster analysis of DGGE fingerprints, and the yeast levels and diversity. Significant interpersonal variation of the salivary microbiome was observed. Cluster and principal component analysis of the bacterial DGGE patterns yielded three significant major clusters, plus outliers. Seventeen of the 24 (71%) saliva samples were yeast positive, with counts up to 10³ cfu/mL. C. albicans predominated, and six other species of yeast were detected. The presence, amount, and species of yeast showed no clear relationship to the bacterial clusters. The microbial community in saliva showed significant variation between individuals. The lack of association between yeasts and the bacterial fingerprints in saliva suggests significant person-specific ecological independence in highly complex oral biofilm systems under normal oral conditions.

Keywords: bacteria, denaturing gradient gel electrophoresis, oral biofilm, yeasts

Procedia PDF Downloads 218
374 Flashsonar or Echolocation Education: Expanding the Function of Hearing and Changing the Meaning of Blindness

Authors: Thomas Tajo, Daniel Kish

Abstract:

Sight is primarily associated with the function of gathering and processing near and extended spatial information which is largely used to support self-determined interaction with the environment through self-directed movement and navigation. By contrast, hearing is primarily associated with the function of gathering and processing sequential information which may typically be used to support self-determined communication through the self-directed use of music and language. Blindness or the lack of vision is traditionally characterized by a lack of capacity to access spatial information which, in turn, is presumed to result in a lack of capacity for self-determined interaction with the environment due to limitations in self-directed movement and navigation. However, through a specific protocol of FlashSonar education developed by World Access for the Blind, the function of hearing can be expanded in blind people to carry out some of the functions normally associated with sight, that is to access and process near and extended spatial information to construct three-dimensional acoustic images of the environment. This perceptual education protocol results in a significant restoration in blind people of self-determined environmental interaction, movement, and navigational capacities normally attributed to vision - a new way to see. Thus, by expanding the function of hearing to process spatial information to restore self-determined movement, we are not only changing the meaning of blindness, and what it means to be blind, but we are also recasting the meaning of vision and what it is to see.

Keywords: echolocation, changing, sensory, function

Procedia PDF Downloads 151
373 Recovery from Detrimental pH Troughs in a Moorland River Using Monitored Calcium Carbonate Introductions

Authors: Lauren Dawson, Sean Comber, Richard Sandford, Alan Tappin, Bruce Stockley

Abstract:

Under the European Water Framework Directive (2000/60/EC), the West Dart River is underperforming for salmon (Salmo salar) survival rates due to acidified pH troughs. These troughs have been identified as being caused by historic acid rain pollution, which is held in situ by the peat bog present at the site and released during flushing events. Natural recovery by the year 2020 has been deemed unlikely using steady-state water chemistry models, and therefore a program of monitored calcium carbonate (CaCO₃) introductions is being conducted to eliminate these troughs, which can drop to pH 2.93 (the salmon survival threshold is pH 5.5). The river should be naturally acidic (pH 5.5-6) due to the granite geology of Dartmoor, and therefore the CaCO₃ introductions follow a new methodology (encasing the CaCO₃ in permeable sacks) to ensure removal should the water pH rise above neutral levels. The water chemistry and ecology are undergoing comprehensive monitoring, including pH and turbidity levels, dissolved organic carbon, and aluminum concentration and speciation, while the aquatic biota is being used to assess potential water chemistry changes. While this project is ongoing, results from the preliminary field trial show only a temporary, localized increase in pH following CaCO₃ introductions into the water column. Changes to the water chemistry have only been identified in the West Dart after methodology adjustments to account for flow rates and spate dissolution, and no long-term changes have so far been found in the ecology of the river. However, this is not necessarily negative, as the aim of the study is to protect the current ecological communities and the natural pH of the river while remediating only the detrimental pH troughs.

Keywords: anthropogenic acidification recovery, calcium carbonate introductions, ecology monitoring, water chemistry monitoring

Procedia PDF Downloads 141
372 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models

Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru

Abstract:

Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize, mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, compared to the global average of 2.5–3.9 t/ha, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva numbers were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a logarithmic link. Root mean square error (RMSE) and the ratio of prediction to deviation (RPD) were employed to assess model performance using a leave-one-out cross-validation approach. Results showed that NB models outperformed Po ones at all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly when using the January and the February image data. We conclude that reflectance data from RapidEye imagery can be used to estimate stem borer larvae density. The developed models could improve decision-making regarding the control of maize stem borers using various integrated pest management (IPM) protocols.
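
As a hedged sketch of how such count models can be fitted and evaluated (synthetic data and generic statsmodels calls, not the authors' code, variables, or bands):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: spectral predictors (e.g. reflectances or indices) and
# stem borer larva counts per plot; values are synthetic, for illustration only.
rng = np.random.default_rng(1)
X = rng.uniform(0.05, 0.45, size=(60, 3))          # 60 plots, 3 reflectance features
y = rng.poisson(np.exp(1.0 + 4.0 * X[:, 0]))       # synthetic larva counts

X_design = sm.add_constant(X)
preds = np.empty(len(y), dtype=float)
for i in range(len(y)):                             # leave-one-out cross-validation
    train = np.arange(len(y)) != i
    model = sm.GLM(y[train], X_design[train],
                   family=sm.families.NegativeBinomial()).fit()
    preds[i] = model.predict(X_design[i:i + 1])[0]

rmse = np.sqrt(np.mean((y - preds) ** 2))
rpd = np.std(y, ddof=1) / rmse                      # std dev of observations / RMSE
print(f"LOO RMSE = {rmse:.2f}, RPD = {rpd:.2f}")
```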

Keywords: maize, stem borers, density, RapidEye, GLM

Procedia PDF Downloads 489
371 Deficiencies in Vitamin A and Iron Supply Potential of Selected Indigenous Complementary Foods of Infants in Uganda

Authors: Richard Kajjura, Joyce Kikafunda, Roger Whitehead

Abstract:

Introduction: Indigenous complementary recipes for children (6-23 months) are bulky and inextricably linked. The potential contribution of indigenous complementary foods to infants’ vitamin A and iron needs is not well investigated in Uganda. Little is known about whether children in Uganda have an adequate supply of vitamin A and iron. In this study, vitamin A and iron contents were assessed in the complementary foods fed to infants aged 6-11 months in a peri-urban setting in Kampala District in Central Uganda. Objective: To assess the vitamin A and iron contents of indigenous complementary foods of children, as fed, and associated demographic factors. Method: In a cross-sectional study design, one hundred and three (153) households with children aged 6-11 months were randomly selected to participate in the assessment. Complementary food samples were collected from the children’s mothers/caretakers at the time of feeding the child. The mothers’ socio-demographic characteristics of age, education, marital status, occupation, and sex were collected with a semi-qualitative questionnaire. The vitamin A and iron contents of the complementary foods were analyzed using a UV/VIS spectrophotometer for vitamin A and an atomic absorption spectrophotometer for iron. The data were analyzed using the GenStat software program. Results: The mean vitamin A content was 97.0 ± 72.5 µg, while that of iron was 1.5 ± 0.4 mg per 100 g of food sample as fed. The contribution of indigenous complementary foods was found to be 32% of the recommended dietary allowance for vitamin A and 15% for iron. The age of children was found to be significantly associated with vitamin A and iron supply potential. Conclusion: The contribution of indigenous complementary foods to infants’ vitamin A and iron needs was low. Complementary foods in Uganda are likely to be deficient in vitamin A and iron. Nutrient-dense dietary supplementation should be introduced to enable Ugandan children to attain their full growth potential.

Keywords: indigenous complementary food, infant, iron, vitamin A

Procedia PDF Downloads 472
370 Comparative Life Cycle Assessment of an Extensive Green Roof with a Traditional Gravel-Asphalted Roof: An Application for the Lebanese Context

Authors: Makram El Bachawati, Rima Manneh, Thomas Dandres, Carla Nassab, Henri El Zakhem, Rafik Belarbi

Abstract:

A vegetative roof, also called a garden roof, is a "roofing system that endorses the growth of plants on a rooftop". Garden roofs serve several purposes for a building, such as embellishing the roofing system, enhancing water management, and reducing energy consumption and heat island effects. Lebanon is a Middle Eastern country that lacks a sustainable energy system. It imports 98% of its non-renewable energy from neighboring countries and suffers flooding during heavy rains. The objective of this paper is to determine whether the implementation of vegetative roofs is effectively better than traditional roofs for the Lebanese context. A Life Cycle Assessment (LCA) is performed in order to compare an existing extensive green roof to a traditional gravel-asphalted roof. The life cycle inventory (LCI) was established and modeled using the SimaPro 8.0 software, while the environmental impacts were classified using the IMPACT 2002+ methodology. Results indicated that, for the existing extensive green roof, the waterproofing membrane and the growing medium were the highest contributors to the potential environmental impacts. When comparing the vegetative to the traditional roof, results showed that, for all impact categories, the extensive green roof had the lower environmental impacts.

Keywords: life cycle assessment, green roofs, vegetative roof, environmental impact

Procedia PDF Downloads 455
369 Smart Oxygen Deprivation Mask: An Improved Design with Biometric Feedback

Authors: Kevin V. Bui, Richard A. Claytor, Elizabeth M. Priolo, Weihui Li

Abstract:

Oxygen deprivation masks operate through the use of restricting valves to reduce respiratory flow, where flow is inversely proportional to the resistance applied. This produces the same effect as higher altitudes, where lower pressure leads to reduced respiratory flow. Both the increased resistance of restricting valves and the reduced pressure of higher altitudes make breathing more difficult and force the breathing muscles (diaphragm and intercostal muscles) to work harder. The process exercises these muscles, improves their strength, and results in better overall breathing efficiency. Currently, these oxygen deprivation masks are purely mechanical devices without any electronic sensor to monitor the breathing condition, and thus they are able neither to provide feedback on breathing effort nor to evaluate lung function. That is part of the reason these masks are mainly used by high-level athletes to mimic training at higher altitudes and are not suitable for patients or general consumers. This design aims to improve the current oxygen deprivation mask to serve a larger scope of patients and customers while providing the quantitative biometric data that the current design lacks. This will be accomplished by integrating sensors into the mask’s breathing valves, along with data acquisition and Bluetooth modules for signal processing and transmission. Early stages of the sensor mask will measure breathing rate as a function of changing air pressure in the mask, with later iterations providing feedback on flow rate. Data regarding breathing rate will be useful in determining whether training or therapy is improving breathing function and in quantifying this improvement.
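
As a minimal sketch of the breathing-rate-from-pressure idea described above (a synthetic signal and an assumed sampling rate, not the device's firmware):

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic mask-pressure signal: ~15 breaths/min plus noise, sampled at 50 Hz
fs = 50.0
t = np.arange(0, 60, 1 / fs)
pressure = (np.sin(2 * np.pi * (15 / 60) * t)
            + 0.1 * np.random.default_rng(0).standard_normal(t.size))

# One pressure peak per breath; enforce a minimum spacing of 1.5 s between peaks
peaks, _ = find_peaks(pressure, distance=int(1.5 * fs), prominence=0.5)
breaths_per_min = len(peaks) * 60.0 / (t[-1] - t[0])
print(f"estimated breathing rate: {breaths_per_min:.1f} breaths/min")
```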

Keywords: oxygen deprivation mask, lung function, spirometer, Bluetooth

Procedia PDF Downloads 216
368 Joubert Syndrome in Children as Multicentric Screening in Ten Different Places in World

Authors: Bajraktarevic Adnan, Djukic Branka, Sporisevic Lutvo, Krdzalic Zecevic Belma, Uzicanin Sajra, Hadzimuratovic Admir, Hadzimuratovic Hadzipasic Emina, Abduzaimovic Alisa, Kustric Amer, Suljevic Ismet, Serafi Ismail, Tahmiscija Indira, Khatib Hakam, Semic Jusufagic Aida, Haas Helmut, Vladicic Aleksandra, Aplenc Richard, Kadic Deovic Aida

Abstract:

Introduction: Joubert syndrome has an autosomal recessive pattern of inheritance. It involves a brain malformation caused by underdevelopment of the cerebellar vermis. Associated ocular and renal conditions are well described. Aims: Research helps us better understand this disease, Joubert syndrome, and can lead to advances in diagnosis and treatment. Methods: Several conditions in which the molar tooth sign and the characteristics of Joubert syndrome occur were documented in ten different places in the world. Carrier testing and diagnosis are available if one of the causative gene mutations has been identified in an affected family member. Results: The authors describe eleven cases of Joubert syndrome seen over twenty years. It is a clinically and genetically heterogeneous group of disorders characterized by hypoplasia of the cerebellar vermis with the characteristic neuroradiologic molar tooth sign and accompanying neurologic symptoms, including dysregulation of the breathing pattern and developmental delay. The diagnosis was confirmed in twin sisters with Joubert syndrome with renal anomalies. Ocular symptoms were present in seven of the eleven cases (63.64%). The eleven cases included both sexes: five boys (45.45%) and six girls (54.55%). Conclusions: Joubert syndrome is inherited as an autosomal recessive genetic disorder with several characteristic features.

Keywords: Joubert syndrome, cerebellooculorenal syndrome, autosomal recessive genetic disorder (ARGD), children

Procedia PDF Downloads 276
367 Understanding and Explaining Urban Resilience and Vulnerability: A Framework for Analyzing the Complex Adaptive Nature of Cities

Authors: Richard Wolfel, Amy Richmond

Abstract:

Urban resilience and vulnerability are critical concepts in the modern city due to the increased sociocultural, political, economic, demographic, and environmental stressors that influence current urban dynamics. They are also challenging for urban scholars to explain, for several reasons. First, cities are dominated by people, who are challenging to model from both an explanatory and a predictive perspective. Second, urban regions are highly recursive in nature: they not only influence human action, but their structures are also constantly changing due to human actions. As a result, explanatory frameworks must continuously evolve as humans influence, and are influenced by, the urban environment in which they operate. Finally, modern cities have populations, sociocultural characteristics, economic flows, and environmental impacts of an order of magnitude well beyond the cities of the past. As a result, frameworks that seek to explain the various functions of a city that influence urban resilience and vulnerability must address the complex adaptive nature of cities and the interaction of many distinct factors that influence resilience and vulnerability in the city. This project develops a taxonomy and framework for organizing and explaining urban vulnerability. The framework is built on a well-established political development model that includes six critical classes of urban dynamics: political presence, political legitimacy, political participation, identity, production, and allocation. In addition, the framework explores how environmental security and technology influence, and are influenced by, the six elements of political development. The framework aims to identify key tipping points in society that act as influential agents of urban vulnerability in a region. This will help analysts and scholars predict and explain the influence of both physical and human geographical stressors in a dense urban area.

Keywords: urban resilience, vulnerability, sociocultural stressors, political stressors

Procedia PDF Downloads 108
366 Epicardial Fat Necrosis in a Young Female: A Case Report

Authors: Tayyibah Shah Alam, Joe Thomas, Nayantara Shenoy

Abstract:

We present a case in which the answer is straightforward, but the path taken to reach the diagnosis is where it becomes interesting. A 31-year-old woman presented to the rheumatology outpatient department with left-sided chest pain associated with left elbow joint pain, intensifying over the preceding 2 days. She had a prolonged history of low-intensity chest pain since 2016. The pain was intermittent in nature, aggravated by exertion, lifting heavy weights, and lying down, and relieved by sitting. Her physical examination and laboratory tests were within normal limits. An electrocardiogram (ECG) showed normal sinus rhythm, and a chest X-ray showed no significant abnormality. The primary suspicion was recurrent costochondritis. Cardiac inflammatory blood markers and echocardiography were normal, ruling out acute coronary syndrome (ACS). Contrast CT of the chest and MRI of the thorax showed a small, ill-defined STIR hyperintensity with thin peripheral enhancement in the left anterior mediastinum, posterior to the 5th costal cartilage and anterior to the pericardium, suggestive of focal fat changes (focal panniculitis), confirming the diagnosis of epicardial fat necrosis. She was started on colchicine and nonsteroidal anti-inflammatory drugs for 2-3 weeks, following which a repeat CT showed resolution of the lesion and improvement in her symptoms. Epicardial fat necrosis is often under-recognized or misdiagnosed; here, CT imaging was used to establish the diagnosis. Making the correct diagnosis prospectively avoids unnecessary testing in favor of conservative management.

Keywords: EFN, panniculitis, unknown etiology, recurrent chest pain

Procedia PDF Downloads 94
365 Technical, Environmental and Financial Assessment for Optimal Sizing of Run-of-River Small Hydropower Project: Case Study in Colombia

Authors: David Calderon Villegas, Thomas Kaltizky

Abstract:

Run-of-river (RoR) hydropower projects represent a viable, clean, and cost-effective alternative to dam-based plants and provide decentralized power production. However, the cost-effectiveness of RoR schemes depends on the proper selection of site and design flow, which is a challenging task because it requires multivariate analysis. In this respect, this study presents the development of an investment decision support tool for assessing the optimal size of an RoR scheme considering technical, environmental, and cost constraints. The net present value (NPV) from a project perspective is used as the objective function for supporting the investment decision. The tool has been tested by applying it to an actual RoR project recently proposed in Colombia. The results show that the optimum point in financial terms does not match the flow that maximizes energy generation from the river's available flow. For the case study, the flow that maximizes energy corresponds to a value of 5.1 m³/s, whereas a flow of 2.1 m³/s maximizes the investor's NPV. Finally, a sensitivity analysis is performed to determine the NPV as a function of changes in the debt rate, electricity prices, and CapEx. Even in the worst-case scenario, the optimal size represents a positive business case, with an NPV of USD 2.2 million and an IRR 1.5 times higher than the discount rate.
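
A toy version of the kind of sizing sweep such a decision-support tool performs is sketched below; every parameter (head, efficiency, price, CapEx, discount rate) and the synthetic flow series are assumptions for illustration, not values from the Colombian case study.

```python
import numpy as np

def ror_npv(design_flow, river_flows, head_m=120.0, efficiency=0.85,
            price_usd_per_mwh=60.0, capex_usd_per_kw=2500.0,
            opex_fraction=0.02, lifetime_years=30, discount_rate=0.10):
    """NPV (USD) of a run-of-river plant sized for a given design flow.

    river_flows: representative series of river flows (m3/s); all parameters
    are assumed placeholder values, not figures from the study.
    """
    rho, g = 1000.0, 9.81
    capacity_kw = efficiency * rho * g * design_flow * head_m / 1000.0
    usable = np.minimum(river_flows, design_flow)       # flow above design flow is spilled
    mean_power_kw = efficiency * rho * g * usable.mean() * head_m / 1000.0
    annual_energy_mwh = mean_power_kw * 8760 / 1000.0
    capex = capacity_kw * capex_usd_per_kw
    annual_net_revenue = annual_energy_mwh * price_usd_per_mwh - opex_fraction * capex
    discounted = sum(annual_net_revenue / (1 + discount_rate) ** t
                     for t in range(1, lifetime_years + 1))
    return discounted - capex

# Synthetic flow behaviour and a sweep over candidate design flows
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=0.5, sigma=0.7, size=8760)   # hypothetical hourly flows (m3/s)
candidates = np.arange(0.5, 6.01, 0.1)
best = max(candidates, key=lambda q: ror_npv(q, flows))
print(f"NPV-optimal design flow (for these assumptions): {best:.1f} m3/s")
```

The point the sketch illustrates is the abstract's main finding: because CapEx grows with installed capacity while high flows occur only part of the year, the NPV-optimal design flow is generally smaller than the flow that maximizes energy generation.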

Keywords: small hydropower, renewable energy, RoR schemes, optimal sizing, objective function

Procedia PDF Downloads 126
364 Optimization of Photocatalytic Degradation of Para-Nitrophenol in Visible Light by Nitrogen and Phosphorus Co-Doped Zinc Oxide Using Factorial Design of Experimental

Authors: Friday Godwin Okibe, Elaoyi David Paul, Oladayo Thomas Ojekunle

Abstract:

In this study, nitrogen and phosphorus co-doped zinc oxide (NPZ) was prepared through a solvent-free reaction. The NPZ was characterized by Scanning Electron Microscopy (SEM) and Fourier Transform Infrared (FTIR) spectroscopy. The photocatalytic activity of the catalyst was investigated by monitoring the degradation of para-nitrophenol (PNP) under visible light irradiation, and the process was optimized using a factorial design of experiments. The factors investigated were the initial concentration of para-nitrophenol, catalyst loading, pH, and irradiation time. The characterization results revealed successful doping of ZnO by nitrogen and phosphorus and an improvement in the surface morphology of the catalyst. The photocatalyst's activity under visible light improved by 73.8%. The statistical analysis of the optimization results showed that the model terms were significant at the 95% confidence level. Interaction plots revealed that irradiation time was the most significant factor affecting the degradation process. Cube plots of the variable interactions showed that an optimum degradation efficiency of 66.9% was achieved at 10 mg/L initial PNP concentration, 0.5 g catalyst loading, pH 7, and 150 minutes of irradiation.
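
To illustrate what a factorial analysis of this kind involves (a generic two-level design with made-up responses, not the study's data or its exact design):

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical 2-level full factorial for the four factors in the abstract
# (coded -1/+1 levels; the degradation efficiencies are invented responses).
levels = [-1, 1]
runs = list(itertools.product(levels, repeat=4))
df = pd.DataFrame(runs, columns=["conc", "catalyst", "pH", "time"])
rng = np.random.default_rng(2)
df["efficiency"] = (50 + 8 * df["time"] + 4 * df["catalyst"]
                    - 3 * df["conc"] + 2 * df["time"] * df["catalyst"]
                    + rng.normal(0, 1.5, len(df)))

# Fit main effects and two-way interactions, as a factorial analysis would
model = smf.ols("efficiency ~ (conc + catalyst + pH + time) ** 2", data=df).fit()
print(model.params.sort_values(key=abs, ascending=False).head(6))
```

In such a fit, the largest coefficients identify the dominant factors and interactions; in the study itself, irradiation time played that role.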

Keywords: nitrogen and phosphorus co-doped ZnO, p-nitrophenol, photocatalytic degradation, optimization, factorial design of experiments

Procedia PDF Downloads 516
363 Infrared Thermography as an Informative Tool in Energy Audit and Software Modelling of Historic Buildings: A Case Study of the Sheffield Cathedral

Authors: Ademuyiwa Agbonyin, Stamatis Zoras, Mohammad Zandi

Abstract:

This paper investigates the extent to which building energy modelling can be informed by preliminary information provided by infrared thermography, using a thermal imaging camera in a walkthrough audit. The case-study building is Sheffield Cathedral, built in the early 1400s. Based on the qualitative report generated from the thermal images taken at the site, the regions showing significant heat loss are input into a computer model of the cathedral within the Integrated Environmental Solutions (IES) Virtual Environment software, which performs an energy simulation to determine quantitative heat losses through the building envelope. Building data such as material thermal properties and building plans are provided by the architects, Thomas Ford and Partners Ltd. The results of the modelling revealed the portions of the building with the highest heat loss, and these aligned with those suggested by the thermal camera. Retrofit options for the building are also considered; however, they may not see implementation due to a desire to conserve the architectural heritage of the building. Results show that thermal imaging in a walk-through audit serves as a useful guide for the energy modelling process. Hand calculations were also performed to serve as a 'control' estimate of losses, providing a second set of data points for comparison.
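
The 'hand calculations' mentioned are typically simple fabric-loss estimates of the form Q = U × A × ΔT. A minimal sketch, with entirely assumed U-values, areas, and design temperatures (not the cathedral's surveyed figures), might look like this:

```python
# Hand-calculation style fabric heat loss, Q = U * A * dT, for a few assumed elements.
# U-values and areas are placeholders, not values taken from the building survey.
elements = {
    # name: (U-value W/m2K, area m2)
    "solid stone wall": (2.1, 850.0),
    "single-glazed window": (4.8, 120.0),
    "roof": (2.3, 600.0),
}
delta_t = 20.0 - 5.0  # indoor minus outdoor design temperature (degC), assumed

total_w = 0.0
for name, (u, area) in elements.items():
    q = u * area * delta_t
    total_w += q
    print(f"{name:>22}: {q / 1000:.1f} kW")
print(f"{'total fabric loss':>22}: {total_w / 1000:.1f} kW")
```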

Keywords: historic buildings, energy retrofit, thermal comfort, software modelling, energy modelling

Procedia PDF Downloads 164
362 Evaluation of Pesticide Residues in Honey from Cocoa and Forest Ecosystems in Ghana

Authors: Richard G. Boakye, Dara A Stanley, Mathavan Vickneswaran, Blanaid White

Abstract:

The cultivation of cocoa (Theobroma cacao), an important cash crop that contributes immensely to the economic growth of several West African countries, depends almost entirely on pesticide application owing to the plant's vulnerability to pest and disease attacks. However, the extent to which pesticides applied in cocoa cultivation affect bees and bee products has rarely received attention in research. This study examined the effects of pesticides applied in cocoa cultivation on honey in Ghana by evaluating honey samples from cocoa and forest ecosystems. An analysis of five honey samples from each land use type confirmed pesticide contaminants at measured concentrations of acetamiprid (0.051 mg/kg), imidacloprid (0.004-0.02 mg/kg), thiamethoxam (0.013-0.017 mg/kg), indoxacarb (0.004-0.045 mg/kg), and sulfoxaflor (0.004-0.026 mg/kg). None of the observed pesticide concentrations exceeded EU maximum residue levels, indicating no compromise of honey quality for human consumption. However, from the results, it can be inferred that toxic effects on bees cannot be ruled out, because the observed concentrations largely exceeded the 0.001 mg/kg threshold at which sublethal effects on bees have previously been reported. One of the most remarkable results to emerge from this study is the detection of imidacloprid in all honey samples analyzed, with sulfoxaflor and thiamethoxam also detected in 93% and 73% of the honey samples, respectively. This suggests the probable prevalence of pesticide use in the landscape. However, the conclusions reached in this study should be interpreted within the scope of pesticide applications in Bia West District and not necessarily extended to other cocoa-producing districts in Ghana. Future studies should therefore include multiple cocoa-growing districts and other non-cocoa farming landscapes. Such an approach can give a broader outlook on pesticide residues in honey produced in Ghana.

Keywords: honey, cocoa, pesticides, bees, land use, landscape, residues, Ghana

Procedia PDF Downloads 72