Search results for: Richard Um
170 Quantifying Meaning in Biological Systems
Authors: Richard L. Summers
Abstract:
The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady-state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function then conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward a system's steady state if the driving force is dissipating. By contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the vector for the trajectory then serve as a quantifiable signature of the meaning associated with the living system’s stability information, homeostasis and survival potential.
Keywords: meaning, information, Lyapunov, living systems
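For orientation, the two formal ingredients named in this abstract can be written compactly; the notation below (distributions P and Q, Lyapunov candidate V, state x) is illustrative and not taken from the paper.

```latex
% Kullback-Leibler divergence between an observed state distribution P
% and a reference (steady-state) distribution Q -- illustrative notation
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{i} P(i)\,\ln\frac{P(i)}{Q(i)} \;\ge\; 0

% Direct method of Lyapunov: a candidate function V(x) certifies stability
% of the steady state x^{*} when
V(x^{*}) = 0, \qquad V(x) > 0 \ \text{for } x \neq x^{*}, \qquad
\frac{\mathrm{d}V}{\mathrm{d}t} \le 0 \ \text{along trajectories}
% A negative trajectory of dV/dt (dissipating drive) indicates convergence to
% the steady state; a positive trajectory indicates instability, as the abstract notes.
```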
Procedia PDF Downloads 131

169 Hybrid Adaptive Modeling to Enhance Robustness of Real-Time Optimization
Authors: Hussain Syed Asad, Richard Kwok Kit Yuen, Gongsheng Huang
Abstract:
Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization will be impaired and hence the expected energy saving will be reduced. In this paper, the model mismatches of a chiller plant in real-time optimization are considered. In the real-time optimization of the chiller plant, a simplified semi-physical or grey-box model of the chiller is always used, which should be identified using available operation data. To overcome the model mismatches associated with the chiller model, a hybrid Genetic Algorithms (HGAs) method is used for online real-time training of the chiller model. HGAs combine the Genetic Algorithms (GAs) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional trial-and-error process of GAs. The identification of model parameters is formulated as an optimization problem, and the objective function is the least-square error between the output from the model and the actual output from the chiller plant. A case study is used to illustrate the implementation of the proposed method. It has been shown that the proposed approach is able to provide reliability in decision making, enhance the robustness of the real-time optimization strategy and improve energy performance.
Keywords: energy performance, hybrid adaptive modeling, hybrid genetic algorithms, real-time optimization, heating, ventilation, and air-conditioning
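As a concrete illustration of the identification step described here, the sketch below pairs a small genetic algorithm (global search) with a Nelder-Mead refinement (local search) to minimize the least-square error between a model and plant data. It is not the authors' code: the two-parameter "chiller model", the GA settings and the synthetic operation data are all assumptions made for the example.

```python
# Illustrative hybrid global/local parameter identification for a grey-box model:
# a small genetic algorithm finds a promising region, then a simplex search refines it.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def chiller_model(load, params):
    a, b = params                      # hypothetical grey-box coefficients
    return a * load + b * load**2      # placeholder for the semi-physical model

def sse(params, load, measured_power):
    """Least-square error between model output and plant measurements."""
    return float(np.sum((chiller_model(load, params) - measured_power) ** 2))

def hybrid_ga(load, measured, pop=30, gens=40, bounds=(-2.0, 2.0)):
    lo, hi = bounds
    population = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        fitness = np.array([sse(ind, load, measured) for ind in population])
        parents = population[np.argsort(fitness)[: pop // 2]]        # selection
        idx1 = rng.integers(0, len(parents), pop - len(parents))
        idx2 = rng.integers(0, len(parents), pop - len(parents))
        children = 0.5 * (parents[idx1] + parents[idx2])             # arithmetic crossover
        children += rng.normal(0.0, 0.05, children.shape)            # mutation
        population = np.vstack([parents, children])
    best = population[np.argmin([sse(ind, load, measured) for ind in population])]
    # Local refinement plays the "traditional optimization method" role in the HGA
    return minimize(sse, best, args=(load, measured), method="Nelder-Mead").x

# Synthetic "operation data" just to make the sketch runnable
load = np.linspace(0.2, 1.0, 50)
measured = chiller_model(load, (0.8, 0.3)) + rng.normal(0, 0.01, load.size)
print(hybrid_ga(load, measured))   # should recover roughly (0.8, 0.3)
```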
Procedia PDF Downloads 417

168 A Low Order Thermal Envelope Model for Heat Transfer Characteristics of Low-Rise Residential Buildings
Authors: Nadish Anand, Richard D. Gould
Abstract:
A simplified model is introduced for determining the thermal characteristics of a Low-Rise Residential (LRR) building and for predicting the energy usage of its Heating, Ventilation and Air Conditioning (HVAC) system according to changes in weather conditions, which are reflected in the ambient temperature (outside air temperature). The LRR building is treated as a simple lump for solving the heat transfer problem, and the model is derived from the lumped capacitance model of transient conduction heat transfer from bodies. Most contemporary HVAC systems have a thermostat control with an offset temperature and user-defined set-point temperatures, which define when the HVAC system switches on and off. The aim is therefore to predict the body temperature (i.e., the inside air temperature) as accurately as possible, since it determines the switching on and off of the HVAC system. To validate the mathematical model derived from the lumped capacitance approach, we used the EnergyPlus simulation engine, which simulates buildings with considerable accuracy. Using the low-order model, we predicted the inside air temperature of a single house placed in three different climate zones (Detroit, Raleigh and Austin) and in different orientations for the summer and winter seasons. For the same day as that used for model parameter calculation, the prediction error was below 10% in winter for almost all orientations and climate zones, whereas in summer the error remained below 10% for all orientations only in the higher-latitude climate zones (Raleigh and Detroit). Possible factors responsible for the large variations are also noted in the work, paving the way for future research.
Keywords: building energy, energy consumption, energy+, HVAC, low order model, lumped capacitance
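The following is a minimal sketch, under assumed parameter values, of the kind of lumped-capacitance model with thermostat dead-band control that the abstract describes; the thermal resistance, capacitance, HVAC capacity and set points are illustrative, not the paper's values.

```python
# Lumped-capacitance indoor-temperature model with a simple thermostat dead band.
# All parameter values (R, C, HVAC capacity, set points) are illustrative assumptions.
import numpy as np

R = 0.005      # effective thermal resistance between inside and outside [K/W]
C = 2.0e7      # effective thermal capacitance of the building lump [J/K]
Q_HVAC = 8000  # heating power delivered when the HVAC system is on [W]
SETPOINT, OFFSET = 21.0, 0.5   # thermostat set point and offset (dead band) [degC]
dt = 60.0                      # time step [s]

def simulate(T_out, T_in0=20.0):
    """Euler integration of C*dT/dt = (T_out - T_in)/R + Q_hvac."""
    T_in, hvac_on, history = T_in0, False, []
    for T_o in T_out:
        # Thermostat: switch on below (set point - offset), off above (set point + offset)
        if T_in < SETPOINT - OFFSET:
            hvac_on = True
        elif T_in > SETPOINT + OFFSET:
            hvac_on = False
        q = Q_HVAC if hvac_on else 0.0
        T_in += dt * ((T_o - T_in) / R + q) / C
        history.append(T_in)
    return np.array(history)

# One winter day of outside air temperature, one value per minute (synthetic)
hours = np.arange(24 * 60) / 60.0
T_outside = -2.0 + 4.0 * np.sin(2 * np.pi * (hours - 9) / 24)
print(simulate(T_outside)[-5:])   # inside air temperature over the last 5 minutes
```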
Procedia PDF Downloads 266

167 Assessing the Risk of Pressure Injury during Percutaneous Nephrolithotomy Using Pressure Mapping
Authors: Jake Tempo, Taylor Smithurst, Jen Leah, Skye Waddingham, Amanda Catlin, Richard Cetti
Abstract:
Introduction: Percutaneous nephrolithotomy (PCNL) is the gold-standard procedure for removing large or complex renal stones. Many operating positions can be used, and the debate over the ideal position continues. PCNL can be a long procedure during which patients can sustain pressure injuries. These injuries are often underreported in the literature. Interface pressure mapping records the pressure loading between a surface and the patient. High pressures with prolonged loading result in ischaemia, muscle deformation, and reperfusion, which can cause skin breakdown and muscular injury. We compared the peak pressure indexes (PPIs) of common PCNL positions to identify positions which may be at high risk of pressure injuries. We hope the data can be used to lower the PPI of high-risk positions, either by adapting the positions themselves or by using adjuncts. Materials and Methods: We placed a 23-year-old male subject in fourteen different PCNL positions while performing interface pressure mapping. The subject was 179 cm tall, weighed 63.3 kg and had a BMI of 19.8 kg/m². Results: Supine positions had a higher mean PPI (119 mmHg (41-137)) compared to prone positions (64 mmHg (32-89)) (p=0.046, two-tailed t-test). The supine flexed position with a bolster under the flank produced the highest PPI (194 mmHg), and this was at the sacrum. Peak pressure indexes >100 mmHg were recorded in eight PCNL positions. Conclusion: Supine PCNL positions produce higher PPIs than prone PCNL positions. Our study shows where ‘at risk’ bony prominences are for each PCNL position. Surgeons must ensure these areas are protected during prolonged operations.
Keywords: PCNL, pressure ulcer, interface pressure mapping, surgery
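For readers who want to reproduce the style of comparison reported here, the sketch below runs a two-tailed independent-samples t-test on peak pressure index values; the readings are invented for illustration and are not the study's measurements.

```python
# Illustrative only: a two-tailed independent-samples t-test of peak pressure index
# (PPI) values for supine vs prone positions. The numbers below are made up.
from scipy import stats

supine_ppi = [137, 41, 118, 194, 105, 120, 119]   # hypothetical PPI readings, mmHg
prone_ppi = [32, 89, 55, 70, 64, 60, 78]          # hypothetical PPI readings, mmHg

t_stat, p_value = stats.ttest_ind(supine_ppi, prone_ppi)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```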
Procedia PDF Downloads 83

166 Effect of Atrial Flutter on Alcoholic Cardiomyopathy
Authors: Ibrahim Ahmed, Richard Amoateng, Akhil Jain, Mohamed Ahmed
Abstract:
Alcoholic cardiomyopathy (ACM) is a type of acquired cardiomyopathy caused by chronic alcohol consumption. Frequently, ACM is associated with arrhythmias such as atrial flutter. Our aim was to characterize the patient demographics and investigate the effect of atrial flutter (AF) on ACM. This was a retrospective cohort study using the Nationwide Inpatient Sample database to identify admissions in adults with principal and secondary diagnoses of alcoholic cardiomyopathy and atrial flutter from 2019. Multivariate linear and logistic regression models were adjusted for age, gender, race, household income, insurance status, Elixhauser comorbidity score, hospital location, bed size, and teaching status. The primary outcome was all-cause mortality, and secondary outcomes were the length of stay (LOS) and total charge in USD. There was a total of 21,855 admissions with alcoholic cardiomyopathy, of which 1,635 had atrial flutter (AF-ACM). Compared to the Non-AF-ACM cohort, the AF-ACM cohort had fewer females (4.89% vs 14.54%, p<0.001), was older (58.66 vs 56.13 years, p<0.001), had fewer Native Americans (0.61% vs 2.67%, p<0.01), had fewer admissions to small (19.27% vs 22.45%, p<0.01) and medium-sized hospitals (23.24% vs 28.98%, p<0.01) but more to large hospitals (57.49% vs 48.57%, p<0.01), had more Medicare-insured (40.37% vs 34.08%, p<0.05) and fewer Medicaid-insured patients (23.55% vs 33.70%, p<0.001), had less hypertension (10.7% vs 15.01%, p<0.05), and had more obesity (24.77% vs 16.35%, p<0.001). Compared to the Non-AF-ACM cohort, there was no difference in the AF-ACM cohort mortality rate (6.13% vs 4.20%, p=0.0998), unadjusted mortality OR 1.49 (95% CI 0.92-2.40, p=0.102), or adjusted mortality OR 1.36 (95% CI 0.83-2.24, p=0.221), but there was a difference in LOS of 1.23 days (95% CI 0.34-2.13, p<0.01) and in total charge of $28,860.30 (95% CI 11,883.96-45,836.60, p<0.01). In patients admitted with ACM, the presence of AF was not associated with a higher all-cause mortality rate or odds of all-cause mortality; however, it was associated with a 1.23-day increase in LOS and a $28,860.30 increase in total hospitalization charge. Native American race, older age and obesity were risk factors for the presence of AF in ACM.
Keywords: alcoholic cardiomyopathy, atrial flutter, cardiomyopathy, arrhythmia
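The adjusted odds ratios quoted here come from multivariable logistic regression; the sketch below shows the general shape of such an analysis on a synthetic data frame. The variable names, coefficients and data are invented for illustration and are not the Nationwide Inpatient Sample.

```python
# Sketch of an adjusted odds ratio for mortality estimated by logistic regression,
# in the spirit of the analysis described above. The data frame is synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "atrial_flutter": rng.integers(0, 2, n),   # exposure of interest
    "age": rng.normal(57, 10, n),              # adjustment covariates
    "female": rng.integers(0, 2, n),
    "elixhauser": rng.integers(0, 10, n),
})
logit = -3 + 0.3 * df["atrial_flutter"] + 0.02 * (df["age"] - 57)
df["died"] = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic outcome

X = sm.add_constant(df[["atrial_flutter", "age", "female", "elixhauser"]])
fit = sm.Logit(df["died"].astype(int), X).fit(disp=False)

odds_ratios = np.exp(fit.params)             # adjusted odds ratios
ci = np.exp(fit.conf_int())                  # 95% confidence intervals
print(odds_ratios["atrial_flutter"], ci.loc["atrial_flutter"].values)
```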
Procedia PDF Downloads 112

165 Community Engagement Policy for Decreasing Childhood Lead Poisoning in Philadelphia
Authors: Hasibe Caballero-Gomez, Richard Pepino
Abstract:
Childhood lead poisoning is an issue that continues to plague major U.S. cities. Lead poisoning has been linked to decreases in academic achievement and IQ at levels as low as 5 µg/dL. Despite efforts from the Philadelphia Health Department to curtail systemic childhood lead poisoning, children continue to be identified with elevated blood lead levels (EBLLs) above the CDC reference level for diagnosis. This problem disproportionately affects low-income Black communities. At the moment, remediation is costly, and with the current policies in place, comprehensive remediation seems unrealistic. This research investigates community engagement policy and the ways pre-existing resources in target communities can be adjusted to decrease childhood lead poisoning. The study was done with two methods: content analysis and case studies. The content analysis includes 12 interviews from stakeholders and five published policy recommendations. The case studies focus on Baltimore, Chicago, Rochester, and St. Louis, four cities with significant childhood lead poisoning. Target communities were identified by mapping five factors that indicate a higher risk for lead poisoning. Seven priority zip codes were identified for the model developed in this study. For these urban centers, 28 policy solutions and suggestions were identified, with three being identified at least four times in the content analysis and case studies. These three solutions create an interdependent model that offers increased community awareness and engagement with the issue that could potentially improve health and social outcomes for at-risk children.
Keywords: at-risk populations, community engagement, environmental justice, policy translation
Procedia PDF Downloads 120

164 Design Study on a Contactless Material Feeding Device for Electro Conductive Workpieces
Authors: Oliver Commichau, Richard Krimm, Bernd-Arno Behrens
Abstract:
A growing demand for higher production rates of modern presses leads to higher stroke rates. Commonly used material feeding devices for presses, such as grippers and roll-feeding systems, can only achieve high stroke rates together with high gripper forces, which are needed to avoid stick-slip. These forces are limited by the sensitivity of the surfaces of the workpieces. Stick-slip leads to scratches on the surface and false positioning of the workpiece. In this paper, a new contactless feeding device is presented, which develops a higher feeding force without damaging the surface of the workpiece through gripping forces. It is based on the principle of the linear induction motor. A primary part creates a magnetic field and induces eddy currents in the electrically conductive material. A Lorentz force acts on the workpiece in the feeding direction as a mutual reaction between the eddy currents and the magnetic induction. In this study, the FEA model of this approach is shown. The calculation of this model was used to identify the influence of various design parameters on the performance of the feeder, thus showing the promising capabilities and limits of this technology. In order to validate the study, a prototype of the feeding device has been built. An experimental setup was used to measure the pulling forces and placement accuracy of the experimental feeder in order to give an outlook on a potential industrial application of this approach.
Keywords: conductive material, contactless feeding, linear induction, Lorentz-Force
Procedia PDF Downloads 179

163 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations
Authors: Ram Mohan, Richard Haney, Ajit Kelkar
Abstract:
Current developments in computing have shown the advantage of using one or more Graphic Processing Units (GPU) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that contribute to the limitations of GPU(s) for High Performance Computing (HPC) can be categorized as hardware and software oriented in nature. Understanding how these factors affect performance is essential to develop efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation will focus on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as the computational core, the characteristics and results of which reflect many other HPC applications via the sparse matrix system used for the solution of linear systems of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance
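Since the computational core reduces to a sparse linear system, the sketch below assembles a small matrix in compressed sparse row (CSR) format and solves A x = b with conjugate gradients; the 1-D stencil stand-in for the finite element matrix is an assumption for illustration, not the authors' solver.

```python
# Assemble a small CSR matrix and solve A x = b with conjugate gradients --
# the kind of sparse kernel typically offloaded to one or more GPUs in such codes.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# 1-D Laplacian-like stencil as a stand-in for an unstructured FE matrix
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                    # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```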
Procedia PDF Downloads 362

162 Gender Responsiveness of Water, Sanitation Policies and Legal Frameworks at Makerere University
Authors: Harriet Kebirungi, Majaliwa Jackson-Gilbert Mwanjalolo, S. Livingstone Luboobi, Richard Joseph Kimwaga, Consolata Kabonesa
Abstract:
This paper assessed the gender responsiveness of water and sanitation policies and legal frameworks at Makerere University, Uganda. The objectives of the study were to i) examine the gender responsiveness of water and sanitation related policies and frameworks implemented at Makerere University; and ii) assess the challenges faced by the University in customizing national water and sanitation policies and legal frameworks into University policies. A cross-sectional gender-focused study design was adopted. A checklist was developed to analyze national water and sanitation policies and legal frameworks and University-based policies. In addition, primary data was obtained from key informants at the Ministry of Water and Environment and Makerere University. A gender-responsive five-step analytical framework was used to analyze the collected data. Key findings indicated that the policies did not adequately address issues of gender, water and sanitation, and that the policies were consistently gender neutral. The national policy formulation process was found to be gender blind and not backed by a situation analysis of different stakeholders, including higher education institutions such as universities. At Makerere University, due to the lack of a customized and gender-responsive water and sanitation policy and implementation framework, there were gender differences and deficiencies in access to and utilization of water and sanitation facilities. The University should take advantage of existing in-house expertise to customize existing national water policies and the gender, water and sanitation sub-sector strategy. This will help the University to design gender-responsive, culturally acceptable and environmentally friendly water and sanitation systems that provide adequate water and sanitation facilities and address the needs and interests of male and female students.
Keywords: gender, Makerere University, policies, water, sanitation
Procedia PDF Downloads 403

161 Contribution of Different Farming Systems to Soil and Ecological Health in Trans Nzoia County, Kenya
Authors: Janeth Chepkemoi, Richard Onwonga, Noel Templer, Elkana Kipkoech, Angela Gitau
Abstract:
Conventional agriculture is one of the leading causes of land degradation, threatening the sustainability of food production. Organic farming promotes practices that have the potential of feeding the world while also promoting ecological health. A study was therefore carried out with the aim of conceptualizing how such farming systems are contributing to ecological health in Trans Nzoia County. A total of 71 farmers were interviewed, and data were collected on parameters such as land preparation, agroforestry, soil fertility management, soil and water conservation, and pests and diseases. A soil sample was also collected from each farm for laboratory analysis. Data collected were analyzed using Microsoft Excel and SPSS version 21. Results showed that 66% of the respondents practiced organic farming, whereas 34% practiced conventional farming. Intercropping and crop rotations were the most common cropping systems, and the most preferred land preparation tools among both organic and conventional farmers were tractors and hand hoes. Organic farms fared better in agroforestry, organic soil amendments, land and water conservation, and soil chemical properties. Pests and diseases, however, affected organic farms more than conventional ones. The average N (%), K (cmol/kg) and P (ppm) of organic soils were 0.26, 0.7 and 26.18 respectively, while those of conventional soils were 0.21, 0.66 and 22.85. The soil organic carbon content of organic farms averaged a higher percentage of 2.07%, compared to 1.91% for the conventional farms. In conclusion, most farmers in Trans Nzoia County had transitioned into ecologically friendly farming practices that improved the quality and health of the soil and therefore promoted its sustainability.
Keywords: organic farming, conventional farming, ecological health, soil health
Procedia PDF Downloads 124

160 Study of the Effect of the Contra-Rotating Component on the Performance of the Centrifugal Compressor
Authors: Van Thang Nguyen, Amelie Danlos, Richard Paridaens, Farid Bakir
Abstract:
This article presents a study of the effect of a contra-rotating component on the efficiency of centrifugal compressors. A contra-rotating centrifugal compressor (CRCC) is constructed using two independent rotors, rotating in opposite directions and replacing the single rotor of a conventional centrifugal compressor (REF). To respect the geometrical parameters of the REF, the two rotors of the CRCC are designed based on a single rotor geometry, using the hub and shroud length ratio parameter of the meridional contour. Firstly, the first rotor is designed by choosing a value of the length ratio. Then, the second rotor is calculated to be adapted to the fluid flow of the first rotor according to aerodynamic principles. In this study, four values of the length ratio (0.3, 0.4, 0.5, and 0.6) are used to create four configurations, CF1, CF2, CF3, and CF4, respectively. For comparison purposes, the circumferential velocities at the outlet of the REF and the CRCC are preserved, which means that the single rotor of the REF and the second rotor of the CRCC rotate at the same speed of 16,000 rpm. The speed of the first rotor in this case is chosen to be equal to the speed of the second rotor. CFD simulations are conducted to compare the performance of the CRCC and the REF with the same boundary conditions. The results show that the configuration with a higher length ratio gives a higher pressure rise; however, its efficiency is lower. An investigation over the entire operating range shows that CF1 is the best configuration in this case. In addition, the CRCC can improve the pressure rise as well as the efficiency by changing the speed of each rotor independently. The results of changing the first rotor speed show that, with a 130% speed increase, the pressure ratio rises by 8.7% while the efficiency remains stable at the flow rate of the design operating point.
Keywords: centrifugal compressor, contra-rotating, interaction rotor, vacuum
Procedia PDF Downloads 134

159 In vitro Antioxidant Properties and Phytochemistry of Some Philippine Creeping Medicinal Plants
Authors: Richard I. Licayan, Aisle Janne B. Dagpin, Romeo M. Del Rosario, Nenita D. Palmes
Abstract:
Hiptage benghalensis, Antigonon leptopus, Macroptillium atropurpureum, and Dioscorea bulbifera L. are herbal weeds that have been used by traditional healers in rural communities in the Philippines as medicine. In this study, the basic pharmacological components of the crude secondary metabolites extracted from the four herbal weeds and their in vitro antioxidant properties were investigated to provide baseline data for the possible development of these metabolites into pharmaceutical products. Qualitative screening of the secondary metabolites showed that alkaloids, tannins, saponins, steroids, and flavonoids were present in their leaf extracts. All of the plant extracts showed varied antioxidant activity. The greatest DPPH radical scavenging activity was observed in H. benghalensis (84.64%), followed by A. leptopus (68.21%), M. atropurpureum (26.62%), and D. bulbifera L. (19.04%). The FRAP assay revealed that H. benghalensis had the highest antioxidant activity (8.32 mg/g), while the ABTS assay showed that M. atropurpureum had the strongest scavenging ability of free radicals (0.0842 mg Trolox/g). The total flavonoid content (TFC) analysis showed that D. bulbifera L. had the highest TFC (420.35 mg quercetin per gram of dried material). The total phenolic content (TPC) of the four herbal weeds showed large variations, between 26.56±0.160 and 55.91±0.087 mg GAE/g dried material. The plant leaf extracts arranged in increasing values of TPC are H. benghalensis (26.56) < A. leptopus (37.29) < D. bulbifera L. (46.81) < M. atropurpureum (55.91). The obtained results may support their use in herbal medicine and as baseline data for the development of new drugs and standardized phytomedicines.
Keywords: antioxidant properties, total flavonoids, total phenolics, creeping herbal weeds
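For context, DPPH results of the kind quoted here are conventionally reported as percent inhibition of a control absorbance reading; the sketch below shows that calculation with hypothetical absorbance values chosen only to land near the figure reported for H. benghalensis.

```python
# Illustrative calculation (not the study's raw data) of DPPH radical scavenging
# activity, conventionally reported as percent inhibition of a control absorbance
# reading taken at 517 nm.
def dpph_scavenging(abs_control: float, abs_sample: float) -> float:
    """Percent inhibition = (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbance values chosen so the result is close to the ~84.6%
# reported for H. benghalensis.
print(round(dpph_scavenging(abs_control=0.820, abs_sample=0.126), 2))
```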
Procedia PDF Downloads 731

158 Assessing the Feasibility of Commercial Meat Rabbit Production in the Kumasi Metropolis of Ghana
Authors: Nana Segu Acquaah-Harrison, James Osei Mensah, Richard Aidoo, David Amponsah, Amy Buah, Gilbert Aboagye
Abstract:
The study aimed at assessing the feasibility of commercial meat rabbit production in the Kumasi Metropolis of Ghana. Structured and unstructured questionnaires were utilized in obtaining information from two hundred meat consumers and 15 meat rabbit farmers. Data were analyzed using the Net Present Value (NPV), Internal Rate of Return (IRR) and Benefit-Cost Ratio (BCR)/Profitability Index (PI) techniques, percentages and the chi-square contingency test. The study found that the current demand for rabbit meat is low (36%). The desirable nutritional attributes of rabbit meat and other socio-economic factors of meat consumers make the potential demand for rabbit meat high (69%). It was estimated that GH¢5,292 (approximately $2,672) was needed as start-up capital for a 40-doe unit meat rabbit farm in the Kumasi Metropolis. The cost of breeding animals, housing and equipment formed 12.47%, 53.97% and 24.87% respectively of the initial estimated capital. A Net Present Value of GH¢5,910.75 (approximately $2,984) was obtained at the end of the fifth year, with an internal rate of return and profitability index of 70% and 1.12 respectively. The major constraints identified in meat rabbit production were the low price of rabbit meat, shortage of fodder, pests and diseases, high cost of capital, high cost of operating materials and veterinary care. Based on the analysis, it was concluded that meat rabbit production is feasible in the Kumasi Metropolis of Ghana. The study recommends that embarking on mass advertisement, forming farmer associations and adopting new technologies in the production process will help to enhance productivity.
Keywords: feasibility, commercial meat rabbit, production, Kumasi, Ghana
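The feasibility metrics used here (NPV, IRR and the profitability index) can be illustrated with a short calculation; the cash flows and discount rate below are invented for the example, with only the GH¢5,292 start-up figure taken from the abstract.

```python
# A small sketch of the feasibility metrics named above (NPV, IRR and the
# profitability index). The yearly inflows and discount rate are invented.
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value of cash flows, where cashflows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows: list[float], lo: float = 0.0, hi: float = 5.0) -> float:
    """Internal rate of return found by bisection on the NPV sign change."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

initial_outlay = -5292.0                             # GH¢ start-up capital (from abstract)
inflows = [1500.0, 2500.0, 3500.0, 4500.0, 5500.0]   # hypothetical yearly net inflows
flows = [initial_outlay] + inflows

discount_rate = 0.25                                 # assumed discount rate
print("NPV:", round(npv(discount_rate, flows), 2))
print("IRR:", round(irr(flows), 3))
print("PI :", round(npv(discount_rate, [0.0] + inflows) / -initial_outlay, 2))
```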
Procedia PDF Downloads 132

157 Role of ABC Transporters in Non-Target Site Herbicide Resistance in Black Grass (Alopecurus myosuroides)
Authors: Alina Goldberg Cavalleri, Sara Franco Ortega, Nawaporn Onkokesung, Richard Dale, Melissa Brazier-Hicks, Robert Edwards
Abstract:
Non-target site based resistance (NTSR) to herbicides in weeds is a polygenic trait associated with the upregulation of proteins involved in xenobiotic detoxification and translocation, which we have termed the xenome. Among the xenome proteins, ABC transporters play a key role in enhancing herbicide metabolism by effluxing conjugated xenobiotics from the cytoplasm into the vacuole. The importance of ABC transporters is emphasized by the fact that they often contribute to multidrug resistance in human cells and antibiotic resistance in bacteria. They also play a key role in insecticide resistance in major vectors of human diseases and crop pests. By surveying available databases, transcripts encoding ABCs have been identified as being enhanced in populations exhibiting NTSR in several weed species. Based on transcriptomic data in black grass (Alopecurus myosuroides, Am), we have identified three proteins from the ABC-C subfamily that are upregulated in NTSR populations. ABC-C transporters are poorly characterized proteins in plants, but in Arabidopsis they localize to the vacuolar membrane and have functional roles in transporting glutathionylated (GSH)-xenobiotic conjugates. We found that the up-regulation of AmABCs strongly correlates with the up-regulation of a glutathione transferase termed AmGSTU2, which can conjugate GSH to herbicides. The expression of the ABC transcripts was profiled in populations of black grass showing different degrees of resistance to herbicides. This, together with a phylogenetic analysis, revealed that AmABCs cluster in different groups, which might indicate different substrates and roles in the herbicide resistance phenotype in the different populations.
Keywords: black grass, herbicide, resistance, transporters
Procedia PDF Downloads 154

156 Study of Methods to Reduce Carbon Emissions in Structural Engineering
Authors: Richard Krijnen, Alan Wang
Abstract:
As the world is aiming to reach net zero around 2050, structural engineers must begin finding solutions to contribute to this global initiative. Approximately 40% of global energy-related emissions are due to buildings and construction, and a building’s structure accounts for 50% of its embodied carbon, which indicates that structural engineers are key contributors to finding solutions to reach carbon neutrality. However, this task presents a multifaceted challenge as structural engineers must navigate technical, safety and economic considerations while striving to reduce emissions. This study reviews several options and considerations to reduce carbon emissions that structural engineers can use in their future designs without compromising the structural integrity of their proposed design. Low-carbon structures should adhere to several guiding principles. Firstly, prioritize the selection of materials with low carbon footprints, such as recyclable or alternative materials. Optimization of design and engineering methods is crucial to minimize material usage. Encouraging the use of recyclable and renewable materials reduces dependency on natural resources. Energy efficiency is another key consideration involving the design of structures to minimize energy consumption across various systems. Choosing local materials and minimizing transportation distances help in reducing carbon emissions during transport. Innovation, such as pre-fabrication and modular design or low-carbon concrete, can further cut down carbon emissions during manufacturing and construction. Collaboration among stakeholders and sharing experiences and resources are essential for advancing the development and application of low-carbon structures. This paper identifies currently available tools and solutions to reduce embodied carbon in structures, which can be used as part of daily structural engineering practice.
Keywords: efficient structural design, embodied carbon, low-carbon material, sustainable structural design
Procedia PDF Downloads 41

155 Next Generation UK Storm Surge Model for the Insurance Market: The London Case
Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky
Abstract:
Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance impacts not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk for coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 Cyclone Xaver storm, both of which led to significant loss of life and property. The current framework will leverage a technology based on a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extratropical cyclone (ETC) events supports the evaluation of the financial losses for the whole area, also accounting for correlations between different locations in different scenarios. Finally, the solution shows a detailed analysis for the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: In the first scenario, the storm surge intensity is not high enough to fail London’s flood defences, but in the second scenario, London’s flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.
Keywords: storm surge, stochastic model, levee failure, Thames River
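To make concrete how a stochastic event set of this kind is typically turned into financial loss metrics, the sketch below computes an average annual loss and simple return-period losses from a Monte Carlo year loss table; the event rates and loss values are invented for illustration and are not output of the model described here.

```python
# Turning a stochastic event set into loss metrics: average annual loss and a
# simple exceedance (return-period) view. Rates and losses below are invented.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ETC event set: annual occurrence rate and loss (GBP) per event
rates = np.array([0.10, 0.05, 0.02, 0.005, 0.001])
losses = np.array([5e7, 2e8, 8e8, 3e9, 1e10])

aal = float(np.sum(rates * losses))          # average annual loss
print(f"AAL: £{aal:,.0f}")

# Monte Carlo year loss table: Poisson event counts per year, losses summed
years = 100_000
annual_loss = (rng.poisson(rates, size=(years, rates.size)) * losses).sum(axis=1)
for rp in (100, 200, 500):                   # return-period losses
    print(rp, f"£{np.quantile(annual_loss, 1 - 1 / rp):,.0f}")
```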
Procedia PDF Downloads 232

154 Molecular Identification and Evolutionary Status of Lucilia bufonivora: An Obligate Parasite of Amphibians in Europe
Authors: Gerardo Arias, Richard Wall, Jamie Stevens
Abstract:
Lucilia bufonivora Moniez is an obligate parasite of toads and frogs widely distributed in Europe. Its sister taxon Lucilia silvarum Meigen behaves mainly as a carrion breeder in Europe; however, it has been reported as a facultative parasite of amphibians. These two closely related species are morphologically almost identical, which has led to misidentification, and in fact, it has been suggested that the amphibian myiasis cases by L. silvarum reported in Europe should be attributed to L. bufonivora. Both species remain poorly studied and their taxonomic relationships are still unclear. The identification of the larval specimens involved in amphibian myiasis with molecular tools and phylogenetic analysis of these two closely related species may resolve this problem. In this work, seventeen unidentified larval specimens extracted from toad myiasis cases from the UK, the Netherlands and Switzerland were obtained, and their COX1 (mtDNA) and EF1-α (nuclear DNA) gene regions were amplified and then sequenced. The 17 larval samples were identified with both molecular markers as L. bufonivora. Phylogenetic analysis was carried out with 10 other blowfly species, including L. silvarum samples from the UK and USA. Bayesian Inference trees of COX1 and a combined-gene dataset suggested that L. silvarum and L. bufonivora are separate sister species. However, the nuclear gene EF1-α does not appear to resolve their relationships, suggesting that the rates of evolution of the mtDNA are much faster than those of the nuclear DNA. This work provides the molecular evidence for successful identification of L. bufonivora and a molecular analysis of the populations of this obligate parasite from different locations across Europe. The relationships with L. silvarum are discussed.
Keywords: calliphoridae, molecular evolution, myiasis, obligate parasitism
Procedia PDF Downloads 242

153 Sustainable Renovation of Cultural Buildings Case Study: Red Bay National Historic Site, Canada
Authors: Richard Briginshaw, Hana Alaojeli, Javaria Ahmad, Hamza Gaffar, Nourtan Murad
Abstract:
Sustainable renovations to cultural buildings and sites require a high level of competency in the sometimes conflicting areas of social/historical demands, environmental concerns, and the programmatic and technical requirements of the project. A detailed analysis of the existing site, building and client program is critical to reveal both challenges and opportunities. This forms the starting point for the design process: empirical explorations that search for a balanced and inspired architectural solution to the project. The Red Bay National Historic Site on the Labrador Coast of eastern Canada is a challenging project through which to explore and resolve these ideas. Originally a 16th-century whaling station occupied by Basque sailors from France and Spain, the site now offers visitors this history at the interpretive center, along with the unique geography, climate, local culture and vernacular architecture of the area. The project, developed with our client Parks Canada, called for significant alterations and expansion to the existing facility due to an increase in the number of annual visitors. Sustainable aspects of the design are focused on sensitive site development, passive energy strategies such as building orientation and building envelope efficiency, active renewable energy systems, carefully considered material selections, water efficiency, and interiors that respond to human comfort and a unique visitor experience.
Keywords: sustainability, renovations and expansion, cultural project, architectural design, green building
Procedia PDF Downloads 168

152 Association among Trait Mindfulness, Leukocyte Telomere Length, and Psychological Symptoms in Singaporean Han Chinese
Authors: Shian-Ling Keng, Onn Siong Yim, Poh San Lai, Soo Chong Chew, Anne Chong, Richard Ebstein
Abstract:
Research has demonstrated a positive association between mindfulness meditation and physical health. Little work, however, has examined the association between trait mindfulness and leukocyte telomere length (LTL), an emerging marker of cellular aging. The present study aimed to examine whether facets of trait mindfulness are correlated with longer LTL in a Singaporean Han Chinese sample and whether these facets may mediate the association between psychological symptoms and LTL. 158 adults (mean age = 27.24 years) completed measures assessing trait mindfulness and psychological symptoms (i.e., depression and stress) and provided blood samples for analyses of LTL using qPCR. Multiple regression analyses were conducted to assess the association between facets of trait mindfulness and LTL. Bootstrapping-based mediational analyses were run to examine the role of trait mindfulness as a mediator of the association between psychological symptoms and LTL. Of the five facets of trait mindfulness (describe, act with awareness, observe, nonreactivity, and nonjudging), nonreactivity was significantly associated with LTL, after controlling for the effects of age, gender, and education, β = .21, p = .006. Further, there was a trend for overall trait mindfulness, β = .15, p = .06, and nonjudging, β = .13, p = .095, to each predict longer LTL. Nonreactivity significantly mediated the association between depression and LTL, BCa 95% CI [-.004, -.0004], p=.03, as well as the association between stress and LTL, BCa 95% CI [-.004, -.0004], p=.04. The results provide preliminary evidence for a positive association between selected facets of trait mindfulness and slower cellular aging, indexed by LTL. The findings suggest that individuals who are high on equanimity may experience slower aging at the cellular level, presumably through engaging in more effective coping mechanisms and modulation of stress. The findings also highlight the role of nonreactivity as a potential mechanism that underlies the association between LTL and psychological symptoms.
Keywords: depression, mindfulness, stress, telomere length
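The mediation result reported here rests on a bootstrapped indirect effect; the sketch below shows that general procedure on synthetic data, using a plain percentile interval in place of the bias-corrected (BCa) interval used in the study. Variable names and effect sizes are assumptions for illustration only.

```python
# Percentile bootstrap test of an indirect (mediation) effect a*b, shown on
# synthetic data. Not the study's analysis; only the general procedure.
import numpy as np

rng = np.random.default_rng(42)
n = 158
stress = rng.normal(0, 1, n)
nonreactivity = -0.4 * stress + rng.normal(0, 1, n)             # mediator
ltl = 0.3 * nonreactivity + 0.0 * stress + rng.normal(0, 1, n)  # outcome

def slope(y, x, covar=None):
    """OLS slope of y on x (optionally adjusting for a covariate)."""
    cols = [np.ones_like(x), x] if covar is None else [np.ones_like(x), x, covar]
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

def indirect_effect(x, m, y):
    a = slope(m, x)             # path a: predictor -> mediator
    b = slope(y, m, covar=x)    # path b: mediator -> outcome, adjusting for x
    return a * b

boot = np.array([
    indirect_effect(stress[i], nonreactivity[i], ltl[i])
    for i in (rng.integers(0, n, n) for _ in range(5000))   # resample with replacement
])
ci = np.percentile(boot, [2.5, 97.5])
print("indirect effect CI:", ci)    # a CI excluding 0 indicates mediation
```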
Procedia PDF Downloads 341

151 Enhancing Inservice Education Training Effectiveness Using a Mobile Based E-Learning Model
Authors: Richard Patrick Kabuye
Abstract:
This study focuses on enhancing in-service training programs by transforming the existing traditional approach of formal lectures/contact hours. This is to be supported with a more versatile, robust, and remotely accessible means of mobile-based e-learning, as a support tool for the traditional approach. A combination of various factors in education and the incorporation of an eLearning strategy prove to be key factors in effective in-service education. These factors need to be taken into account so as to maintain a credible co-existence of the programs with the prevailing social, economic and political environments. Effective in-service education focuses on the immediate transformation of knowledge into practice over a sustained period, active participation of attendees, pre-training planning, in-training assessment, and post-training feedback analysis, which gives trainers insight into how applicable the knowledge delivered is. All of the above require a more robust approach to attain success in implementation. Incorporating mobile technology in eLearning will enable the above to be brought together in a more coherent manner, as it is evident that participants have to take time off their duties to attend these training programs. Making the programs mobile will save a lot of time, since participants would be in a position to follow certain modules while away from lecture rooms, receive continuous program updates after completing the program, send feedback to instructors on knowledge gaps, and contribute to a conclusive evaluation of the entire program on a learn-as-you-work platform. This study will follow both qualitative and quantitative approaches in data collection, complemented by a mobile eLearning application built on Android.
Keywords: in-service, training, mobile, e-learning, model
Procedia PDF Downloads 218

150 Holographic Visualisation of 3D Point Clouds in Real-time Measurements: A Proof of Concept Study
Authors: Henrique Fernandes, Sofia Catalucci, Richard Leach, Kapil Sugand
Abstract:
Background: Holograms are 3D images formed by the interference of light beams from a laser or other coherent light source. Pepper’s ghost is a hologram-like illusion popularised in the 19th century. Combining holographic visualisation with metrology measurement techniques, by displaying measurements taken in real time in holographic form, can assist in research and education. New structural designs such as the Plexiglass Stand and the Hologram Box can optimise the holographic experience. Method: The equipment used included: (i) Zeiss’s ATOS Core 300 optical coordinate measuring instrument, which scanned real-world objects; (ii) Cloud Compare, open-source software used for point cloud processing; and (iii) the Hologram Box, designed and manufactured during this research to provide the blackout environment needed to display 3D point clouds from real-time measurements in holographic format, in addition to adding portability to the holograms. The equipment was tailored to realise the goal of displaying measurements in an innovative way and to improve on conventional methods. Three test scans were completed before the holographic conversion. Results: The outcome was a precise recreation of the original object in holographic form, presented as dense point clouds with surface density features in a colour map. Conclusion: This work establishes a way to visualise data in a point cloud system. To our understanding, this has not been attempted before. This achievement provides an advancement in holographic visualisation. The Hologram Box could be used as a feedback tool for measurement quality control and verification in future smart factories.
Keywords: holography, 3D scans, hologram box, metrology, point cloud
Procedia PDF Downloads 89

149 Security in Cyberspace: A Comprehensive Review of COVID-19 Continued Effects on Security Threats and Solutions in 2021 and the Trajectory of Cybersecurity Going into 2022
Authors: Mojtaba Fayaz, Richard Hallal
Abstract:
This study examines the various types of dangers that our virtual environment is vulnerable to, including how it can be attacked and how to avoid such attacks and secure our data. The terrain of cyberspace is never completely safe, and Covid-19 has added to the confusion, necessitating periodic checks and evaluations. Cybercriminals have been able to act with greater skill and undertake more conspicuous and sophisticated attacks, while keeping a higher level of finesse, by operating from home. Different types of cyberattacks, such as operation-based attacks, authentication-based attacks, and software-based attacks, are constantly evolving, but research suggests that software-based threats, such as ransomware, are becoming more popular, with attacks expected to increase by 93 percent by 2020. The effectiveness of cyber frameworks has shifted dramatically as the pandemic has forced work and private life to become intertwined, destabilising security overall. The high-rise formats in which cybercrimes are carried out, as well as the types of cybercrimes that exist, such as phishing, identity theft, malware, and DDoS attacks, have created a new front of cyber protection for security analysis and personal safety. The overall strategy for 2022 will be the introduction of frameworks that address many of the issues associated with offsite working, as well as education that provides better information about commercialised software that does not provide the highest level of security for home users, allowing businesses to plan better security around their systems.
Keywords: cyber security, authentication, software, hardware, malware, COVID-19, threat actors, awareness, home users, confidentiality, integrity, availability, attacks
Procedia PDF Downloads 116

148 Excited State Structural Dynamics of Retinal Isomerization Revealed by a Femtosecond X-Ray Laser
Authors: Przemyslaw Nogly, Tobias Weinert, Daniel James, Sergio Carbajo, Dmitry Ozerov, Antonia Furrer, Dardan Gashi, Veniamin Borin, Petr Skopintsev, Kathrin Jaeger, Karol Nass, Petra Bath, Robert Bosman, Jason Koglin, Matthew Seaberg, Thomas Lane, Demet Kekilli, Steffen Brünle, Tomoyuki Tanaka, Wenting Wu, Christopher Milne, Thomas A. White, Anton Barty, Uwe Weierstall, Valerie Panneels, Eriko Nango, So Iwata, Mark Hunter, Igor Schapiro, Gebhard Schertler, Richard Neutze, Jörg Standfuss
Abstract:
Ultrafast isomerization of retinal is the primary step in a range of photoresponsive biological functions including vision in humans and ion-transport across bacterial membranes. We studied the sub-picosecond structural dynamics of retinal isomerization in the light-driven proton pump bacteriorhodopsin using an X-ray laser. Twenty snapshots with near-atomic spatial and temporal resolution in the femtosecond regime show how the excited all-trans retinal samples conformational states within the protein binding pocket prior to passing through a highly-twisted geometry and emerging in the 13-cis conformation. The aspartic acid residues and functional water molecules in proximity of the retinal Schiff base respond collectively to formation and decay of the initial excited state and retinal isomerization. These observations reveal how the protein scaffold guides this remarkably efficient photochemical reaction.
Keywords: bacteriorhodopsin, free-electron laser, retinal isomerization mechanism, time-resolved crystallography
Procedia PDF Downloads 248

147 Optimizing Glycemic Control with AI-Guided Dietary Supplements: A Randomized Trial in Type 2 Diabetes
Authors: Evgeny Pokushalov, Claire Garcia, Andrey Ponomarenko, John Smith, Michael Johnson, Inessa Pak, Evgenya Shrainer, Dmitry Kudlay, Leila Kasimova, Richard Miller
Abstract:
This study evaluated the efficacy of an AI-guided dietary supplement regimen compared to a standard physician-guided regimen in managing Type 2 diabetes (T2D). A total of 160 patients were randomly assigned to either the AI-guided group (n=80) or the physician-guided group (n=80) and followed over 90 days. The AI-guided group received 5.3 ± 1.2 supplements per patient, while the physician-guided group received 2.7 ± 0.6 supplements per patient. The AI system personalized supplement types and dosages based on individual genetic and metabolic profiles. The AI-guided group showed a significant reduction in HbA1c levels from 7.5 ± 0.8% to 7.1 ± 0.7%, compared to a reduction from 7.6 ± 0.9% to 7.4 ± 0.8% in the physician-guided group (mean difference: -0.3%, 95% CI: -0.5% to -0.1%; p < 0.01). Secondary outcomes, including fasting plasma glucose, HOMA-IR, and insulin levels, also improved more in the AI-guided group. Subgroup analyses revealed that the AI-guided regimen was particularly effective in patients with specific genetic polymorphisms and elevated metabolic markers. Safety profiles were comparable between both groups, with no serious adverse events reported. In conclusion, the AI-guided dietary supplement regimen significantly improved glycemic control and metabolic health in T2D patients compared to the standard physician-guided approach, demonstrating the potential of personalized AI-driven interventions in diabetes management.
Keywords: Type 2 diabetes, AI-guided supplementation, personalized medicine, glycemic control, metabolic health, genetic polymorphisms, dietary supplements, HbA1c, fasting plasma glucose, HOMA-IR, personalized nutrition
Procedia PDF Downloads 7

146 A Reminder of a Rare Anatomical Variant of the Spinal Accessory Nerve Encountered During Routine Neck Dissection: A Case Report and Updated Review of the Literature
Authors: Sophie Mills, Constantinos Aristotelous, Leila L. Touil, Richard C. W. James
Abstract:
Objectives: Historical studies of the anatomy of the spinal accessory nerve (SAN) have reported conflicting results regarding its relationship with the internal jugular vein (IJV). A literature review was undertaken to establish the prevalence of anatomical variations of the SAN encountered during routine neck dissection surgery in order to increase awareness and reduce the morbidity associated with iatrogenic SAN injury. Materials and Methods: The largest systematic review to date was performed using PRISMA-ScR guidelines, which yielded nine articles following the application of inclusion and exclusion criteria. A case report is also included, which demonstrates the rare anatomical relationship of the SAN traversing a fenestrated IJV, seen for the first time in the senior author’s career. Results: The mean number of dissections per study was 119; 55.6% (n=5) of the studies were performed on cadaver subjects, and 44.4% (n=4) were surgical dissections. Incidences of the SAN lateral to the IJV and medial to the IJV ranged from 38.9% to 95.7% and from 2.8% to 57.4%, respectively. Over half of the studies reported incidences of the SAN traversing the IJV in 0.9%-2.8% of dissections. One study reported an isolated variant of the SAN dividing around the IJV with a prevalence of 0.5%. Conclusion: At the level of the posterior belly of the digastric muscle, the surgeon can anticipate the identification of the SAN lateral to the IJV in approximately three-quarters of cases, whilst around one-quarter are estimated to be medial. A mean of 1.6% of SANs traverse a fenestration of the vein. It is essential for surgeons to be aware of these anatomical variations and their prevalence to prevent injury to vital structures during surgery.
Keywords: anatomical variant, internal jugular vein, neck dissection, spinal accessory nerve
Procedia PDF Downloads 145

145 A Linearly Scalable Family of Swapped Networks
Authors: Richard Draper
Abstract:
A supercomputer can be constructed from identical building blocks which are small parallel processors connected by a network referred to as the local network. The routers have unused ports which are used to interconnect the building blocks. These connections are referred to as the global network. The address space has a global and a local component (g, l). The conventional way to connect the building blocks is to connect (g, l) to (g’, l). If there are K blocks, this requires K global ports in each router. If a block is of size M, the result is a machine with KM routers having diameter two. To increase the size of the machine to 2K blocks, each router connects to only half of the other blocks. The result is a larger machine but also one with greater diameter. This is a crude description of how the network of the CRAY XC® is designed. In this paper, a family of interconnection networks using routers with K global and M local ports is defined. Coordinates are (c, d, p) and the global connections are (c, d, p)↔(c’, p, d), which swaps p and d. The network is denoted D3(K,M) and is called a Swapped Dragonfly. D3(K,M) has KM² routers and has diameter three, regardless of the size of K. To produce a network of size KM² conventionally, diameter would be an increasing function of K. The family of Swapped Dragonflies has other desirable properties: 1) D3(K,M) scales linearly in K and quadratically in M. 2) If L < K, D3(K,M) contains many copies of D3(L,M). 3) If L < M, D3(K,M) contains many copies of D3(K,L). 4) D3(K,M) can perform an all-to-all exchange in KM²+KM time, which is only slightly more than the time to do a one-to-all. This paper makes several contributions. It is the first time that a swap has been used to define a linearly scalable family of networks. Structural properties of this new family of networks are thoroughly examined. A synchronizing packet header is introduced. It specifies the path to be followed, and it makes it possible to define highly parallel communication algorithms on the network. Among these is an all-to-all exchange in time KM²+KM. To demonstrate the effectiveness of the swap, the properties of the network of the CRAY XC® and of D3(K,16) are compared.
Keywords: all-to-all exchange, CRAY XC®, Dragonfly, interconnection network, packet switching, swapped network, topology
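To make the connection rule concrete, the sketch below builds a small instance of such a network and measures its diameter by breadth-first search. The global rule (c, d, p)↔(c', p, d) is taken from the abstract; treating each block (c, d) as a locally complete graph on its M routers, and allowing the swap link within a router's own group, are assumptions made for this illustration rather than details taken from the paper.

```python
# Empirical check, under stated assumptions, of the diameter-three behaviour.
from collections import deque
from itertools import product

def build_swapped_dragonfly(K: int, M: int):
    nodes = list(product(range(K), range(M), range(M)))   # (c, d, p): K*M^2 routers
    adj = {v: set() for v in nodes}
    for c, d, p in nodes:
        for q in range(M):                 # local ports: complete graph within block (c, d)
            if q != p:
                adj[(c, d, p)].add((c, d, q))
        for c2 in range(K):                # global "swap" ports: (c,d,p) <-> (c2,p,d)
            if (c2, p, d) != (c, d, p):
                adj[(c, d, p)].add((c2, p, d))
    return adj

def diameter(adj) -> int:
    worst = 0
    for src in adj:                        # BFS from every router
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

print(diameter(build_swapped_dragonfly(K=4, M=4)))   # prints 3 under these assumptions
```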
Procedia PDF Downloads 121

144 The Prodomain-Bound Form of Bone Morphogenetic Protein 10 is Biologically Active on Endothelial Cells
Authors: Austin Jiang, Richard M. Salmon, Nicholas W. Morrell, Wei Li
Abstract:
BMP10 is highly expressed in the developing heart and plays essential roles in cardiogenesis. BMP10 deletion in mice results in embryonic lethality due to impaired cardiac development. In adults, BMP10 expression is restricted to the right atrium, though ventricular hypertrophy is accompanied by increased BMP10 expression in a rat hypertension model. However, reports of BMP10 activity in the circulation are inconclusive. In particular it is not known whether in vivo secreted BMP10 is active or whether additional factors are required to achieve its bioactivity. It has been shown that high-affinity binding of the BMP10 prodomain to the mature ligand inhibits BMP10 signaling activity in C2C12 cells, and it was proposed that prodomain-bound BMP10 (pBMP10) complex is latent. In this study, we demonstrated that the BMP10 prodomain did not inhibit BMP10 signaling activity in multiple endothelial cells, and that recombinant human pBMP10 complex, expressed in mammalian cells and purified under native conditions, was fully active. In addition, both BMP10 in human plasma and BMP10 secreted from the mouse right atrium were fully active. Finally, we confirmed that active BMP10 secreted from mouse right atrium was in the prodomain-bound form. Our data suggest that circulating BMP10 in adults is fully active and that the reported vascular quiescence function of BMP10 in vivo is due to the direct activity of pBMP10 and does not require an additional activation step. Moreover, being an active ligand, recombinant pBMP10 may have therapeutic potential as an endothelial-selective BMP ligand, in conditions characterized by loss of BMP9/10 signaling.
Keywords: bone morphogenetic protein 10 (BMP10), endothelial cell, signal transduction, transforming growth factor beta (TGF-β)
Procedia PDF Downloads 273

143 The Methanotrophic Activity in a Landfill Bio-Cover through a Subzero Winter
Authors: Parvin Berenjkar, Qiuyan Yuan, Richard Sparling, Stan Lozecznik
Abstract:
Landfills contribute substantially to anthropogenic global warming through CH₄ emissions. Landfills are usually capped by a conventional soil cover to control the migration of gases. Methane is consumed by CH₄-oxidizing microorganisms known as methanotrophs that naturally exist in the landfill soil cover. The growth of methanotrophs can be optimized in a bio-cover that typically consists of a gas distribution layer (GDL) to homogenize landfill gas fluxes and an overlying oxidation layer composed of suitable materials that support methanotrophic populations. Materials such as mature yard waste composts can provide an inexpensive and favourable porous support for the growth and activity of methanotrophs. In areas with seasonally cold climates, it is valuable to know whether methanotrophs in a bio-cover can survive the winter until the next spring, and how deep in the bio-cover they remain active to mitigate CH₄. In this study, a pilot bio-cover was constructed in a closed landfill cell in Winnipeg, Canada, which has a very cold climate. The bio-cover has a surface area of 2.5 m x 3.5 m and a depth of 1.5 m, filled with 50 cm of gravel as a GDL and 70 cm of biosolids compost amended with yard and leaf waste compost. The in situ potential of methanotrophs for CH₄ oxidation was investigated from December 2016 to April 2017 and from November 2017 to April 2018, when the transition to surface frost and thawing happens in the bio-cover. Compost samples taken from different depths of the bio-cover were incubated in the laboratory under standardized conditions: an optimal air:methane atmosphere at 22°C, but at in situ moisture content. Results showed that the methanotrophs were alive and oxidizing methane without a lag, indicating that there was the potential for methanotrophic activity at some depths of the bio-cover.
Keywords: bio-cover, global warming, landfill, methanotrophic activity
Procedia PDF Downloads 121

142 A Comparison of Clinical and Pathological TNM Staging in a COVID-19 Era
Authors: Sophie Mills, Leila L. Touil, Richard Sisson
Abstract:
Introduction: The TNM classification is the global standard for the staging of head and neck cancers. Accurate clinical-radiological staging of tumours (cTNM) is essential to predict prognosis, facilitate surgical planning and determine the need for other therapeutic modalities. This study aims to determine the accuracy of pre-operative cTNM staging using pathological TNM (pTNM) and to consider possible causes of TNM stage migration, noting any variation throughout the COVID-19 pandemic. Materials and Methods: A retrospective cohort study examined records of patients with surgical management of head and neck cancer at a tertiary head and neck centre from November 2019 to November 2020. Data were extracted from the Somerset Cancer Registry and histopathology reports. cTNM and pTNM were compared before and during the first wave of COVID-19, as well as with other potential prognostic factors such as tumour site and tumour stage. Results: 119 cases were identified, of which 52.1% (n=62) were male and 47.9% (n=57) were female, with a mean age of 67 years. Clinical and pathological staging differed in 54.6% (n=65) of cases. Of the patients with stage migration, 40.4% (n=23) were up-staged and 59.6% (n=34) were down-staged compared with pTNM. There was no significant difference in the accuracy of cTNM staging compared with age, sex, or tumour site. There was a statistically highly significant (p < 0.001) correlation between cTNM accuracy and tumour stage, with the accuracy of cTNM staging decreasing with the advancement of pTNM staging. No statistically significant variation was noted between patients staged prior to and during COVID-19. Conclusions: Discrepancies in staging can impact management and outcomes for patients. This study found that the higher the pTNM, the more likely stage migration is to occur. These findings are concordant with the oncology literature, which highlights the need to improve the accuracy of cTNM staging for more advanced tumours.
Keywords: COVID-19, head and neck cancer, stage migration, TNM staging
Procedia PDF Downloads 109

141 Free Will and Compatibilism in Decision Theory: A Solution to Newcomb’s Paradox
Authors: Sally Heyeon Hwang
Abstract:
Within decision theory, there are normative principles that dictate how one should act in addition to empirical theories of actual behavior. As a normative guide to one’s actual behavior, evidential or causal decision-theoretic equations allow one to identify outcomes with maximal utility values. The choice that each person makes, however, will, of course, differ according to varying assignments of weight and probability values. Regarding these different choices, it remains a subject of considerable philosophical controversy whether individual subjects have the capacity to exercise free will with respect to the assignment of probabilities, or whether instead the assignment is in some way constrained. A version of this question is given a precise form in Richard Jeffrey’s assumption that free will is necessary for Newcomb’s paradox to count as a decision problem. This paper will argue, against Jeffrey, that decision theory does not require the assumption of libertarian freedom. One of the hallmarks of decision-making is its application across a wide variety of contexts; the implications of a background assumption of free will are similarly varied. One constant across the contexts of decision is that there are always at least two levels of choice for a given agent, depending on the degree of prior constraint. Within the context of Newcomb’s problem, when the predictor is attempting to guess the choice the agent will make, he or she is analyzing the determined aspects of the agent such as past characteristics, experiences, and knowledge. On the other hand, as David Lewis’ backtracking argument concerning the relationship between past and present events brings to light, there are similarly varied ways in which the past can actually be dependent on the present. One implication of this argument is that even in deterministic settings, an agent can have more free will than it may seem. This paper will thus argue against the view that a stable background assumption of free will or determinism in decision theory is necessary, arguing instead for a compatibilist decision theory yielding a novel treatment of Newcomb’s problem.
Keywords: decision theory, compatibilism, free will, Newcomb’s problem
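To make the evidential/causal contrast behind this discussion concrete, the sketch below compares expected utilities for one-boxing and two-boxing in Newcomb's problem under the standard textbook payoffs and predictor accuracy; these figures are conventional assumptions, not values from the paper.

```python
# Illustrative expected-utility comparison for Newcomb's problem.
ACCURACY = 0.99            # probability the predictor guesses the agent's choice
OPAQUE, TRANSPARENT = 1_000_000, 1_000   # box contents in dollars

def evidential_eu(one_box: bool) -> float:
    """Condition on the prediction matching the choice (evidential reasoning)."""
    if one_box:
        return ACCURACY * OPAQUE
    return ACCURACY * TRANSPARENT + (1 - ACCURACY) * (OPAQUE + TRANSPARENT)

def causal_eu(one_box: bool, p_money_already_there: float) -> float:
    """Hold the already-fixed contents constant (causal reasoning)."""
    expected_opaque = p_money_already_there * OPAQUE
    return expected_opaque if one_box else expected_opaque + TRANSPARENT

print("evidential:", evidential_eu(True), "vs", evidential_eu(False))
print("causal    :", causal_eu(True, 0.5), "vs", causal_eu(False, 0.5))
# Evidential reasoning favours one-boxing; causal reasoning favours two-boxing
# for any fixed probability that the money is already in the opaque box.
```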
Procedia PDF Downloads 321