Search results for: Robert Stone
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 596

56 Variability of Physico-Chemical and Carbonate Chemistry of Seawater in Selected Portions of the Central Atlantic Coastline of Ghana

Authors: Robert Kwame Kpaliba, Dennis Kpakpor Adotey, Yaw Serfor-Armah

Abstract:

The increase in oceanic absorption of carbon dioxide from the atmosphere due to climate change has led to appreciable change in the chemistry of the oceans. The change in oceanic pH, referred to as ocean acidification, poses multiple threats and stresses on marine species, biodiversity, goods and services, and livelihoods. Marine ecosystems are continuously threatened by a plethora of natural and anthropogenic stressors, including carbon dioxide (CO₂) emissions, causing changes that have not been experienced for approximately 60 years. Little has been done in Africa as a whole, and Ghana in particular, to improve the understanding of the variations of the carbonate chemistry of seawater and the biophysical impacts of ocean acidification on the security of seafood, nutrition, and climate and environmental change. There is, therefore, the need for regular monitoring of the carbonate chemistry of seawater along Ghana's coastline to generate reliable data to aid marine policy formulation. Samples of seawater were collected three times every month for a one-year period from five study sites for analysis of the various parameters. The measured physico-chemical and carbonate chemistry parameters were analyzed using simple statistics; correlation tests and ANOVA were run on both the physico-chemical and the carbonate chemistry parameters. The carbonate chemistry parameters, except total alkalinity and pH, were computed using the software package CO₂cal v4.0.9. The study assessed the variability of seawater carbonate chemistry in selected portions of the Central Atlantic Coastline of Ghana (Tsokomey/Bortianor, Kokrobitey, Gomoa Nyanyanor, Gomoa Fetteh, and Senya Breku landing beaches) over a 1-year period (June 2016–May 2017). Among the physico-chemical parameters, variation was insignificant for nitrate (NO₃⁻) (1.62–2.3 mg/L), ammonia (NH₃) (1.52–2.05 mg/L), and salinity (34.50–34.74 ppt).
Carbonate chemistry parameters for all five study sites showed significant variation: partial pressure of carbon dioxide (pCO₂) (414.08–715.5 µmol/kg), carbonate ion (CO₃²⁻) (115–157.92 µmol/kg), pH (7.9–8.12), total alkalinity (TA) (1711.8–1986 µmol/kg), total carbon dioxide (TCO₂) (1512.1–1792 µmol/kg), dissolved carbon dioxide (CO₂aq) (10.97–18.92 µmol/kg), Revelle Factor (RF) (9.62–11.84), aragonite saturation (ΩAr) (0.75–1.48), and calcite saturation (ΩCa) (1.08–2.14). The study revealed that the partial pressure of carbon dioxide and temperature were not significantly correlated (r² = 0.31, p = 0.0717). There was an appreciable effect of pH on dissolved carbon dioxide (r² = 0.921, p = 0.0000). The correlation between total alkalinity and dissolved carbon dioxide was appreciable (r² = 0.731, p = 0.0008). There was a significant correlation between total carbon dioxide and dissolved carbon dioxide (r² = 0.852, p = 0.0000). The Revelle factor correlated strongly with dissolved carbon dioxide (r² = 0.982, p = 0.0000), and the partial pressure of carbon dioxide correlated strongly with atmospheric carbon dioxide (r² = 0.9999, p = 0.0000).
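The pairwise correlation analysis reported above (r² and p-values between carbonate parameters) can be sketched as follows; the monthly values below are hypothetical illustrations of the expected inverse pH-CO₂aq relationship, not the study's field data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired monthly means (12 months) for one study site.
ph = np.array([8.10, 8.08, 8.05, 8.02, 7.99, 7.96,
               7.94, 7.92, 7.95, 8.00, 8.04, 8.09])
co2_aq = np.array([11.2, 11.9, 12.8, 13.9, 15.1, 16.4,
                   17.3, 18.3, 16.9, 14.6, 13.0, 11.5])  # µmol/kg

# Pearson correlation: r is negative when CO2(aq) rises as pH falls.
r, p_value = pearsonr(ph, co2_aq)
r_squared = r ** 2
print(f"r^2 = {r_squared:.3f}, p = {p_value:.4f}")
```

With real monitoring data, the same call would be repeated for each parameter pair (e.g. TA vs CO₂aq, RF vs CO₂aq) to reproduce the table of r² and p-values quoted in the abstract.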

Keywords: carbonate chemistry, seawater, central atlantic coastline, Ghana, ocean acidification

Procedia PDF Downloads 529
55 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used: 108 were authentic honey, and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture range of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running under MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen.
Different estimators of the predictive capacity of the model were compared, obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model that combines the potential function method and PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved for the training samples. By use of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.
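The SNV pretreatment mentioned above is a simple per-spectrum standardization; a minimal sketch (with toy spectra, not the study's NIR data) shows how it removes multiplicative scatter and baseline offsets:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum (row)
    by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy "spectra": the second is the first with doubled intensity,
# mimicking a multiplicative scatter effect between samples.
raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [2.0, 4.0, 6.0, 8.0]])
corrected = snv(raw)
# After SNV both rows have zero mean and unit standard deviation,
# and the two scatter-shifted spectra become identical.
```

In the study's pipeline, the SNV-corrected spectra would then feed the variable-selection and PLS-DA steps described in the abstract.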

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 101
54 Digital Advance Care Planning and Directives: Early Observations of Adoption Statistics and Responses from an All-Digital Consumer-Driven Approach

Authors: Robert L. Fine, Zhiyong Yang, Christy Spivey, Bonnie Boardman, Maureen Courtney

Abstract:

Importance: Barriers to traditional advance care planning (ACP) and advance directive (AD) creation have limited the promise of ACP/AD for individuals and families, the healthcare team, and society. Reengineering ACP by using a web-based, consumer-driven process has recently been suggested. We report early experience with such a process. Objective: Begin to analyze the potential of the creation and use of ACP/ADs as generated by a consumer-friendly, digital process by 1) assessing the likelihood that consumers would create ACP/ADs without structured intervention by medical or legal professionals, and 2) analyzing the responses to determine if the plans can help doctors better understand a person’s goals, preferences, and priorities for their medical treatments and the naming of healthcare agents. Design: The authors chose 900 users of MyDirectives.com, a digital ACP/AD tool, solely based on their state of residence in order to achieve proportional representation of all 50 states by population size and then reviewed their responses, summarizing these through descriptive statistics including treatment preferences, demographics, and revision of preferences. Setting: General United States population. Participants: The 900 participants had an average age of 50.8 years (SD = 16.6); 84.3% of the men and 91% of the women were in self-reported good health when signing their ADs. Main measures: Preferences regarding the use of life-sustaining treatments, where to spend final days, consulting a supportive and palliative care team, attempted cardiopulmonary resuscitation (CPR), autopsy, and organ and tissue donation. Results: Nearly 85% of respondents prefer cessation of life-sustaining treatments during their final days whenever those may be, 76% prefer to spend their final days at home or in a hospice facility, and 94% wanted their future doctors to consult a supportive and palliative care team. 70% would accept attempted CPR in certain limited circumstances. 
Most respondents would want an autopsy under certain conditions, and 62% would like to donate their organs. Conclusions and relevance: Analysis of early experience with an all-digital, web-based ACP/AD platform demonstrates that individuals from a wide range of ages and conditions can engage in an interrogatory process about values, goals, preferences, and priorities for their medical treatments, develop advance directives, and easily make changes to the ADs they create. Online creation, storage, and retrieval of advance directives have the potential to remove barriers to ACP/AD and, thus, to further improve patient-centered end-of-life care.

Keywords: Advance Care Plan, Advance Decisions, Advance Directives, Consumer, Digital, End of Life Care, Goals, Living Wills, Preferences, Universal Advance Directive, Statements

Procedia PDF Downloads 301
53 A Protocol Study of Accessibility: Physician’s Perspective Regarding Disability and Continuum of Care

Authors: Sidra Jawed

Abstract:

The accessibility construct and the body-privilege discourse have been a major problem in dealing with health inequities and inaccessibility. The inherent problem in this arbitrary view of disability is the assumption that disability can never be a productive way of living. For the past thirty years, disability activists have been working to differentiate 'impairment' from 'disability' and probing for a better understanding of the limitations imposed by society; this notion is now known as the Social Model of Disability. The disability community, as a vulnerable population, remains marginalized and is seen relentlessly fighting to highlight the importance of social factors. Accessibility does not only involve physical architectural barriers and the famous blue symbol of access to healthcare, but also invisible, intangible barriers such as attitudes and behaviours. Conventionally, the idea of 'disability' has been laden with prejudiced perceptions amalgamated with biased attitudes. Equity in the contemporary setup necessitates the restructuring of organizational structures. Though apparently simple, the complex interplay of disability and the contemporary healthcare setup often ends up negotiating vital components of basic healthcare needs. The role of society is indispensable when it comes to people with disability (PWD); everything from access to healthcare to timely interventions is strongly related to the setup in place and the attitudes of healthcare providers. It is vital to understand the association between assumptions and the quality of healthcare PWD receive in our global healthcare setup. Most of the time, the crucial physician-patient relationship with PWD is governed by the negative assumptions of the physicians. This multifaceted, troubled patient-physician relationship has been neglected in the past. To compound it, insufficient work has been done to explore physicians' perspectives on disability and the access to healthcare that PWD currently have.
This research project is directed towards physicians' perspectives on the intersection of health and access to healthcare for PWD. The principal aim of the study is to explore the perception of disability among family medicine physicians, highlighting the underpinnings of the medical perspective in healthcare institutions. In the quest to remove barriers, the first step must be to identify them and formulate a plan for future policies, involving all the stakeholders. Semi-structured interviews will explore themes such as accessibility, medical training, the constructs of the social and medical models of disability, time limitations, and financial constraints. The main research interest is to identify the obstacles to inclusion and the marginalization that extends from basic living necessities to wide health inequity in present society. Physicians' point of view is largely missing from the research landscape and the current forum of knowledge. This research will provide policy makers with a starting point and comprehensive background knowledge that can be a stepping stone for future research and further the knowledge translation process to strengthen healthcare. Additionally, it would facilitate the much-needed knowledge translation between the medical and disability communities.

Keywords: disability, physicians, social model, accessibility

Procedia PDF Downloads 193
52 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism

Authors: Lubos Rojka

Abstract:

The common perception of morally responsible behavior is that it presupposes freedom of choice and that free decisions and actions are determined not by natural events but by a person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person's morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories of moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, which contradicts the claims of classical incompatibilism. Compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be inserted into the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane, and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The ability of theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among which are an effective free will and first- and second-order desires.
Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory, and emergentism. It is incompatible with physical causal determinism, because such determinism only allows non-systematic processes that may be hard to predict, not complex (strongly) emergent systems. An agent's effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the best direction for the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense, because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.

Keywords: consciousness, free will, determinism, emergence, moral responsibility

Procedia PDF Downloads 141
51 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. It also highlights common strengths and weaknesses of the presented technique and presents a broad range of well-tested starting points for feature-set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It then describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta).
The user also feeds data-expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development of FreqAI.
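The parameter-space outlier removal described above can be illustrated with a minimal sketch: standardize incoming prediction points against the statistics of the recent training window and reject points that lie too far from it. This is a simplified stand-in, not FreqAI's actual implementation, and all names here are illustrative:

```python
import numpy as np

def flag_outliers(train_features, incoming, k=3.0):
    """Flag incoming points whose normalized distance from the training
    window's feature distribution exceeds k standard deviations."""
    mu = train_features.mean(axis=0)
    sigma = train_features.std(axis=0) + 1e-12   # avoid division by zero
    z = (incoming - mu) / sigma
    # Per-point distance, normalized by sqrt(n_features) so the threshold
    # k is roughly comparable across feature-set sizes.
    dist = np.linalg.norm(z, axis=1) / np.sqrt(train_features.shape[1])
    return dist > k

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))          # recent training window
normal_pt = rng.normal(size=(1, 8))        # typical prediction point
outlier_pt = np.full((1, 8), 10.0)         # far outside the training space
mask = flag_outliers(train, np.vstack([normal_pt, outlier_pt]))
# mask marks the second point as an outlier to be dropped before prediction.
```

FreqAI's real pipeline uses more elaborate techniques for this step; the point of the sketch is only the principle of gating predictions by the training data's parameter space.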

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 66
50 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction

Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl

Abstract:

Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, the development of new in situ characterization techniques with the potential to be used under real catalytic reaction conditions is highly desirable. In situ Prompt Gamma Activation Analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and to correlate it with reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria serves as our example. Furthermore, in situ Transmission Electron Microscopy (TEM) is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) will pave the way to a greater and deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ PGAA experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O₂), p(HCl), p(Cl₂), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed, homemade transmission cell. For real in situ TEM, we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, namely in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH).
The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O₂), p(HCl), p(Cl₂), and temperature. These experiments have shown that the OH density correlates positively with reactivity, whereas Cl correlates negatively. The p(HCl) experiments give rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O₂) and T). Cl₂ strongly inhibits the reaction, but no measurable increase of the Cl uptake was found. After considering all of the previous observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalysed reaction is proposed: the chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step. Further investigations using in situ TEM are planned and will be conducted in the near future. Such experiments allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.

Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR

Procedia PDF Downloads 267
49 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology

Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert

Abstract:

Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances, as required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionality of this product positively extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet demands such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and make it usable for calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object as well as the neighboring buildings and possible remote shading. The tool uses a ray tracing method to determine, per window, possible glare from solar reflections off neighboring buildings as well as near and far shadows on the object. Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account.
This optimized per-window daylight assessment makes it possible to estimate the potential power generation of the PV integrated in the venetian blind, as well as daylight and solar entry. As a next step, these calculation results and all parameters necessary for the thermal simulation can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and a coupled building simulation of the shading and daylighting system, the artificial lighting system, and maximum power generation within a control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested in an office living lab at the HELLA company.
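The per-window far-shadow test described above can be reduced to a simple geometric check: a window is shaded when the sun stands behind an obstruction and the obstruction's angular height exceeds the solar elevation. This is a rough sketch under simplifying assumptions (single rectangular obstruction, no reflections); the function name and parameters are illustrative, not the HELLA DECART API:

```python
import math

def window_shaded(sun_elev_deg, sun_azi_deg, obst_azi_deg,
                  obst_dist_m, obst_height_m, window_height_m,
                  azi_tol_deg=45.0):
    """Far-shading test for one window against one obstructing building.
    The window is shaded when the sun's azimuth is within azi_tol_deg of
    the obstruction's direction and the obstruction subtends a larger
    angle (seen from the window) than the solar elevation."""
    # Smallest signed angular difference between the two azimuths.
    azi_diff = abs((sun_azi_deg - obst_azi_deg + 180.0) % 360.0 - 180.0)
    if azi_diff > azi_tol_deg:
        return False  # sun is not behind the obstruction
    angular_height = math.degrees(
        math.atan2(obst_height_m - window_height_m, obst_dist_m))
    return sun_elev_deg < angular_height

# 20 m building, 15 m away, window at 3 m: shaded at low sun angles,
# lit once the sun climbs above the obstruction's angular height (~49°).
```

A full tool like the one in the abstract would evaluate this test with ray tracing against the detailed IFC geometry for every window and every hour of a weather year; the sketch only shows the underlying geometric criterion.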

Keywords: BIPV, building simulation, optimized control strategy, planning tool

Procedia PDF Downloads 83
48 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation has been the subject of considerable experimental and numerical studies. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during both saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary-condition modeling (surface ponding or evaporation) in order to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied.
To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the online Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the edge of the urban water stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
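The van Genuchten-Mualem parametrization at the heart of the calibration above can be sketched as a retention curve giving volumetric water content as a function of pressure head; the parameter values below are illustrative (loamy-sand-like), not the Ostwaldergraben fit:

```python
import math

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Van Genuchten retention curve: volumetric water content theta as a
    function of pressure head h (h < 0 in the unsaturated zone).
    alpha [1/m] and n [-] are the shape parameters whose predominant
    sensitivity is noted in the abstract; m = 1 - 1/n (Mualem condition)."""
    if h >= 0:
        return theta_s  # saturated: water content equals porosity term
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative parameters: residual 0.05, saturated 0.40, alpha=3.5/m, n=2.
theta = van_genuchten_theta(-1.0, 0.05, 0.40, 3.5, 2.0)
```

In the inverse modeling described above, these parameters would be adjusted to minimize the least-squares mismatch between computed and measured piezometric heads, with the gradient of that objective supplied by the AD-differentiated code.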

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 132
47 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is a Port wine production region. In the NE of Portugal, the strong incision of the Douro valley has developed very steep slopes, organized in agricultural terraces, which have undergone an intense and deep transformation in order to implement the mechanization of the work. The old terrace system, based on vertical stone-wall support structures, has been replaced by terraces with earth embankments, which have experienced considerable instability. This terrace instability has important economic and financial consequences for agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to instability. The priority of this evaluation is the use of physically based mathematical models together with a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (ф), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α), and contributing areas computed by the Multiple Flow Direction method (MFD). A terraced area can be analysed by this model only with very detailed information representative of the terrain morphology, on which both the slope angle and the contributing areas depend. We achieved that using digital elevation models (DEM) of high resolution (40 cm pixels), resulting from a set of photographs taken in a flight at 100 m height with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element in defining the spatial variation of soil saturation. That internal flow is based on the DEM, supported by the statement that the interflow, although not coincident with the superficial flow, has important similarity with it.
Electrical resistivity monitoring values, related to the MFD contributing areas built from a DEM of 1 m resolution, revealed a consistent correlation. That analysis, performed on the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was the basis for modelling the real internal flow; thus, we assumed that the 1 m resolution contributing area modelled by MFD is representative of the internal flow of the area. In order to solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, with several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken in a flight at 5 km height. Using this map combination, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a True Positive Rate (TPR) of 0.97, a False Positive Rate (FPR) of 0.47, Accuracy (ACC) of 0.53, Precision (PVC) of 0.0004, and a TPR/FPR ratio of 2.06.
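The contingency-matrix validation scores quoted above are standard confusion-matrix metrics; a minimal sketch (with hypothetical cell counts, not the paper's validation data) shows how they are derived:

```python
def contingency_metrics(tp, fp, fn, tn):
    """Validation scores for a binary instability map from the contingency
    (confusion) matrix of predicted vs. observed unstable terrace cells."""
    tpr = tp / (tp + fn)                  # True Positive Rate (sensitivity)
    fpr = fp / (fp + tn)                  # False Positive Rate
    acc = (tp + tn) / (tp + fp + fn + tn)  # Accuracy
    ppv = tp / (tp + fp)                  # Precision (positive pred. value)
    return tpr, fpr, acc, ppv

# Hypothetical cell counts chosen to echo TPR = 0.97 and FPR = 0.47:
tpr, fpr, acc, ppv = contingency_metrics(tp=97, fp=470, fn=3, tn=530)
```

As the abstract's numbers illustrate, a very high TPR can coexist with a high FPR and very low precision when unstable cells are rare, which is why the TPR/FPR ratio is reported alongside the individual scores.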

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 154
46 The Expression of the Social Experience in Film Narration: Cinematic ‘Free Indirect Discourse’ in the Dancing Hawk (1977) by Grzegorz Krolikiewicz

Authors: Robert Birkholc

Abstract:

One of the basic issues related to the creation of characters in media such as literature and film is the representation of the characters' thoughts, emotions, and perceptions. This paper is devoted to the social perspective (or focalization) expressed in film narration. The aim of the paper is to show how the social point of view of the hero, conditioned by his origin and the environment from which he comes, can be created by using non-verbal, purely audiovisual means of expression. The issue will be considered through the example of the little-known Polish film The Dancing Hawk (1977) by Grzegorz Królikiewicz, based on the novel by Julian Kawalec. The thesis of the paper is that the Polish director uses a narrative figure that is somewhat analogous to the literary form of free indirect discourse. In literature, free indirect discourse is formally 'spoken' by the external narrator, but the narration is clearly filtered through the language and thoughts of the character. According to some scholars (such as Roy Pascal), the narrator in this form of speech does not cite the character's words, but uses his way of thinking and imitates his perspective, sometimes with deep irony. Free indirect discourse is frequently used in Julian Kawalec's novel. Through linguistic stylization, the author tries to convey the socially determined perspective of a peasant who migrates to the big city after the Second World War. Grzegorz Królikiewicz expresses the same social experience through purely cinematic form in his adaptation of the book. Both Kawalec and Królikiewicz show the consequences of so-called 'social advancement' in Poland after 1945, when the communist party took over political power. Through the fate of the main character, Michał Toporny, the director presents the experience of peasants who left their villages and had to adapt to a new, urban space. However, the paper is not focused on the historical topic itself, but on the audiovisual form of the movie.
Although Królikiewicz does not frequently use POV shots, the narration of The Dancing Hawk is filtered through the sensations of the main character, who feels uprooted and alienated in the new social space. The director captures the hero's feelings through very complex audiovisual procedures: high or low points of view (representing 'social position'), a grotesque soundtrack, expressionist scenery, and associative editing. In this way, he manages to create the world from the perspective of a socially maladjusted and internally split subject. The Dancing Hawk is a successful attempt to adapt the subjective narration of the book to the 'language' of the cinema. Mieke Bal's notion of focalization helps to describe 'free indirect discourse' as a transmedial figure of representing characters' perceptions. However, the polysemiotic medium of film also significantly transforms this figure of representation. The paper shows both the similarities and the differences between literary and cinematic 'free indirect discourse.'

Keywords: film and literature, free indirect discourse, social experience, subjective narration

Procedia PDF Downloads 107
45 Raman Spectroscopy of Fossil-like Feature in Sooke #1 from Vancouver Island

Authors: J. A. Sawicki, C. Ebrahimi

Abstract:

The first geochemical, petrological, X-ray diffraction, Raman, Mössbauer, and oxygen isotopic analyses of the very intriguing 13-kg Sooke #1 stone, covered over 70% of its surface with black fusion crust, found in and recovered from Sooke Basin, near Juan de Fuca Strait, in British Columbia, were reported as poster #2775 at LPSC52 in March. Our further analyses, reported in poster #6305 at 84AMMS in August, and comparisons with the Mössbauer spectra of Martian meteorite MIL03346 and of Martian rocks in Gusev Crater reported by Morris et al., suggest that the Sooke #1 find could be a stony achondrite of Martian polymict breccia type ejected from early watery Mars. Here, the Raman spectra of a carbon-rich ~1-mm² fossil-like white area identified on a polished cut surface of this rock have been examined in more detail. The low-intensity 532 nm and 633 nm beams of the Renishaw inVia microscope were used to avoid any destructive effects. The beam was focused through the microscope objective to a 2 μm spot on the sample, and the backscattered light collected through this objective was recorded with a CCD detector. Raman spectra of dark areas outside the fossil showed bands of clinopyroxene at 320, 660, and 1020 cm⁻¹ and small peaks of forsteritic olivine at 820-840 cm⁻¹, in agreement with the results of X-ray diffraction and Mössbauer analyses. Raman spectra of the white area showed the broad band D at ~1310 cm⁻¹, consisting of the main A1g mode at 1305 cm⁻¹, the E2g mode at 1245 cm⁻¹, and the E1g mode at 1355 cm⁻¹, due to stretching of diamond-like sp³ bonds in the diamond polytype lonsdaleite, as in the study by Ovsyuk et al. The band near 1600 cm⁻¹ mostly consists of the D2 band at 1620 cm⁻¹ and not of the narrower G band at 1583 cm⁻¹ due to E2g stretching in planar sp² bonds, which are the fundamental building blocks of the carbon allotropes graphite and graphene. In addition, broad second-order Raman bands were observed with the 532 nm beam at 2150, ~2340, ~2500, 2650, 2800, 2970, 3140, and ~3300 cm⁻¹ shifts.
Second-order bands in diamond and other carbon structures are ascribed to combinations of bands observed in the first-order region: here 2650 cm⁻¹ as 2D, 2970 cm⁻¹ as D+G, and 3140 cm⁻¹ as 2G. Nanodiamonds are abundant in the Universe, found in meteorites, interplanetary dust particles, comets, and carbon-rich stars. The diamonds in meteorites are presently being intensely investigated using Raman spectroscopy. Such particles can be formed by a CVD process and during major impact shocks at ~1000-2300 K and ~30-40 GPa. It cannot be excluded that the fossil discovered in Sooke #1 could be a remnant of an alien carbon organism that transformed under shock impact to nanodiamonds. We trust that, for the benefit of research in the astro-bio-geology of meteorites, asteroids, Martian rocks, and soil, this find deserves further, more thorough investigation. If possible, the SHERLOC Raman spectrometer operating on the Perseverance rover should also search for such objects in Martian rocks.

Keywords: achondrite, nanodiamonds, lonsdaleite, Raman spectra

Procedia PDF Downloads 125
44 Computational Approaches to Study Lineage Plasticity in Human Pancreatic Ductal Adenocarcinoma

Authors: Almudena Espin Perez, Tyler Risom, Carl Pelz, Isabel English, Robert M. Angelo, Rosalie Sears, Andrew J. Gentles

Abstract:

Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignancies. The role of the tumor microenvironment (TME) is gaining significant attention in cancer research. Despite ongoing efforts, the nature of the interactions between tumors, immune cells, and stromal cells remains poorly understood. Studying the cell-intrinsic properties that govern cell lineage plasticity in PDAC and the extrinsic influences of immune populations requires technically challenging approaches due to the inherently heterogeneous nature of PDAC. Understanding the cell lineage plasticity of PDAC will improve the development of novel strategies that could be translated to the clinic. Members of the team have demonstrated that the acquisition of ductal-to-neuroendocrine lineage plasticity in PDAC confers therapeutic resistance and is a biomarker of poor outcomes in patients. Our approach combines computational methods for deconvolving bulk transcriptomic cancer data using CIBERSORTx and high-throughput single-cell imaging using Multiplexed Ion Beam Imaging (MIBI) to study lineage plasticity in PDAC and its relationship to the infiltrating immune system. The CIBERSORTx algorithm uses signature matrices from immune cells and stroma, derived from sorted and single-cell data, in order to 1) infer the fractions of different immune cell types and stromal cells in bulk gene expression data and 2) impute a representative transcriptome profile for each cell type. We studied a unique set of 300 genomically well-characterized primary PDAC samples with rich clinical annotation. We deconvolved the PDAC transcriptome profiles using CIBERSORTx, leveraging publicly available single-cell RNA-seq data from normal pancreatic tissue and PDAC to estimate cell type proportions in PDAC, and digitally reconstructed cell-specific transcriptional profiles from our study dataset. We built signature matrices and optimized them through simulations and comparison to ground-truth data.
We identified cell-type-specific transcriptional programs that contribute to cancer cell lineage plasticity, especially in the ductal compartment. We also studied cell differentiation hierarchies using CytoTRACE and predicted cell lineage trajectories for acinar and ductal cells that we believe pinpoint relevant information on PDAC progression. Our collaborators (Angelo lab, Stanford University) have led the development of the Multiplexed Ion Beam Imaging (MIBI) platform for spatial proteomics. In the near future, we will use MIBI data from a tissue microarray of 40 PDAC samples to understand the spatial relationship between cancer cell lineage plasticity and stromal cells, with a focus on infiltrating immune cells, using the relevant markers of PDAC plasticity identified from the RNA-seq analysis.
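The fraction-inference step performed by CIBERSORTx can be illustrated with a plain least-squares sketch: given a signature matrix of cell-type expression profiles, solve bulk ≈ S·f for non-negative fractions summing to one. The matrix, cell-type labels, and mixture below are made-up illustrations, and CIBERSORTx's actual engine is ν-SVR, not least squares:

```python
import numpy as np

# Illustrative signature matrix S: rows = genes, columns = cell types
# (say, ductal, acinar, immune); expression values are invented.
S = np.array([[10.0, 1.0, 0.5],
              [ 2.0, 8.0, 1.0],
              [ 0.5, 1.0, 9.0],
              [ 4.0, 3.0, 2.0]])

true_f = np.array([0.5, 0.3, 0.2])   # ground-truth mixture fractions
bulk = S @ true_f                    # synthetic bulk expression profile

# Least-squares estimate of the fractions; clip any tiny negatives and
# renormalize so the estimated fractions sum to 1.
f_hat, *_ = np.linalg.lstsq(S, bulk, rcond=None)
f_hat = np.clip(f_hat, 0.0, None)
f_hat /= f_hat.sum()
```

On noiseless synthetic data the true fractions are recovered exactly; the simulations-versus-ground-truth comparison mentioned above plays the analogous role of checking recovery on realistic mixtures.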

Keywords: deconvolution, imaging, microenvironment, PDAC

Procedia PDF Downloads 98
43 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of park and ride (P&R) systems is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated generally and descriptively. Research outsourced to specialists is expensive and time-consuming, and tends to focus on the examination of only a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results; the facilities built are then not used as expected. Location methods are also widely discussed as a research topic in the scientific literature. The mathematical models built often do not treat the problem comprehensively, e.g. assuming that the city is linear and developed along one important transport corridor. This paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even a less experienced person, e.g. an urban planner or official, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper will show selected examples of P&R facility locations in cities planning to introduce P&R. The analysis of existing facilities will also be shown and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model which was built and described in more detail in an earlier paper by the authors.
The results of the analyses are compared with documents on P&R facility locations commissioned by the city and with opinions of existing facilities' users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good and does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to propose alternative locations for P&R facilities. The performed studies confirm the method. It can be applied in the urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by employed experts. The advantage of this method is its ease of use, which simplifies professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis of a team of people can be replaced by the model's calculations. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
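The authors' fuzzy model is described in their earlier paper; below is only a minimal zero-order Sugeno-style sketch of how expert rules could score a candidate P&R location. The two inputs (distance to a transit stop, congestion level) and the rule base are hypothetical illustrations, not the authors' system:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pr_location_score(dist_to_transit_m, congestion):
    """Score a candidate P&R site in [0, 1] from two hypothetical inputs:
    distance to a transit stop (m) and a congestion level (0..10)."""
    near = tri(dist_to_transit_m, -1, 0, 600)      # 'transit is near'
    far = tri(dist_to_transit_m, 300, 1500, 5000)  # 'transit is far'
    high = tri(congestion, 4, 10, 16)              # 'congestion is high'
    low = tri(congestion, -6, 0, 6)                # 'congestion is low'
    # Hypothetical rule base; rule weights are crisp outputs:
    # R1: near AND high -> very good site (1.0)
    # R2: near AND low  -> moderate site  (0.5)
    # R3: far           -> poor site      (0.1)
    w1, w2, w3 = min(near, high), min(near, low), far
    total = w1 + w2 + w3
    return (1.0 * w1 + 0.5 * w2 + 0.1 * w3) / total if total else 0.0
```

A site 200 m from transit in a congested district scores far higher than one 3 km away, mirroring how the expert rules rank alternative locations in seconds.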

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 309
42 Home Environment and Self-Efficacy Beliefs among Native American, African American and Latino Adolescents

Authors: Robert H. Bradley

Abstract:

Many minority adolescents in the United States live in adverse circumstances that pose long-term threats to their well-being. A strong sense of personal control and self-efficacy can help youth mitigate some of those risks and may help protect them from influences connected with deviant peer groups. Accordingly, it is important to identify conditions that help foster feelings of efficacy in areas that seem critical for the accomplishment of developmental tasks during adolescence. The purpose of this study is to examine two aspects of the home environment (modeling and encouragement of maturity; family companionship and investment) and their relation to three components of self-efficacy (self-efficacy in enlisting social resources, self-efficacy for engaging in independent learning, and self-efficacy for self-regulatory behavior) in three groups of minority adolescents (Native American, African American, Latino). The sample for this study included 54 Native American, 131 African American, and 159 Latino families, each with a child between 16 and 20 years old. The families were recruited from four states: Arizona, Arkansas, California, and Oklahoma. Each family was administered the Late Adolescence version of the Home Observation for Measurement of the Environment (HOME) Inventory, and each adolescent completed a 30-item measure of perceived self-efficacy. Three areas of self-efficacy beliefs were examined: enlisting social resources, independent learning, and self-regulation. Each of the three areas of self-efficacy was regressed on the two aspects of the home environment plus overall household risk. For Native Americans, modeling and encouragement was significant for self-efficacy pertaining to enlisting social resources and independent learning. For African Americans, companionship and investment was significant in all three models.
For Latinos, modeling and encouragement was significant for self-efficacy pertaining to enlisting social resources, and companionship and investment was significant for the other two areas of self-efficacy. The findings show that even as minority adolescents become more individuated from their parents, the quality of experiences at home continues to be associated with their feelings of self-efficacy in areas important for adaptive functioning in adult life. Specifically, individuals can develop a sense that they are efficacious in performing key tasks relevant to work, social relationships, and management of their own behavior if they are guided in how to deal with key challenges and have been exposed to, and supported by, others who are competent in dealing with such challenges. The findings presented in this study seem useful given that there is so little current research on home environmental factors connected to self-efficacy beliefs among adolescents in the three groups examined. It would seem worthwhile for personnel from health, human service, and juvenile justice agencies to give attention to supporting parents in communicating with adolescents, offering expectations to adolescents in mutually supportive ways, and engaging with adolescents in productive activities. In comparison to programs for parents of young children, there are few specifically designed for parents of children in middle childhood and adolescence.
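The regression setup described above (each self-efficacy area regressed on the two home-environment composites plus household risk) can be sketched with ordinary least squares. The data below are synthetic and noiseless, purely to illustrate the model form, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
modeling = rng.uniform(0, 10, n)       # modeling & encouragement composite
companionship = rng.uniform(0, 10, n)  # companionship & investment composite
risk = rng.uniform(0, 5, n)            # overall household risk

# Synthetic outcome with known (invented) coefficients, for illustration only:
self_efficacy = 2.0 + 0.5 * modeling + 0.8 * companionship - 0.3 * risk

# Design matrix with intercept column; least-squares fit of the coefficients.
X = np.column_stack([np.ones(n), modeling, companionship, risk])
beta, *_ = np.linalg.lstsq(X, self_efficacy, rcond=None)
```

With noiseless data the fitted `beta` recovers the invented coefficients exactly; in the study, the sign and significance of each fitted coefficient are what carry the substantive conclusions.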

Keywords: family companionship, home environment, household income, modeling, self-efficacy

Procedia PDF Downloads 219
41 The Use of Intraarticular Aqueous Sarapin for Treatment of Chronic Knee Pain in Elderly Patients in a Primary Care Setting

Authors: Robert E. Kenney, Richard B. Aguilar, Efrain Antunez, Gregory Schor-Haskin, Rafael Rey, Catie Falcon, Luis Arce

Abstract:

This study sought to explore the effect of Sarapin injections on chronic knee pain (CKP). Many adults suffer from CKP, which is most often attributed to osteoarthritis. Current treatment regimens for CKP involve the use of NSAID medications, steroid/analgesic injections, platelet-rich plasma injections, or orthopedic surgical interventions. Sarapin is a commercially available homeopathic aqueous extract of the pitcher plant. Studies on the use of Sarapin for cervical, thoracic, and lumbosacral facet joint nerve blocks have been performed with mixed results. There is little available evidence on the use of Sarapin in CKP. This study examines the effect of a series of three weekly injections of aqueous Sarapin in 95 elderly patients with CKP in a primary care setting. Cano Health, a primary care group, identified 95 consecutive patients with CKP from its multimodal physiotherapy program for chronic pain. Patients underwent evaluation by a clinician and diagnostic X-rays of the knees, and a treatment plan with three weekly Sarapin injections was discussed. A pain and functional limitation survey (a modified Lower Extremity Functional Scale (mLEFS)) was administered prior to initiating treatment (Entry Survey (ES)). Each patient received an intraarticular injection of 2 cc of aqueous Sarapin with 1 cc of 1% lidocaine during weeks 1, 2, and 3. The mLEFS was administered again at week 4, one week after the third Sarapin injection (Exit Survey (ExS)). Demographics: mean age 62 +/- 9.8; 73% female; 89% Hispanic/Latino; mean time between ES and ExS 27.5 +/- 8.2 days. Survey: the mLEFS was based on a published Lower Extremity Functional Scale; each patient rated their pain or functional limitation from 0 (no difficulty) to 5 (severe difficulty) for 10 questions. Answers were summed and compared; the maximum possible score (severe difficulty throughout) is 50 points. Results: mean pain/functional scores were 30.3 +/- 12.1 at ES and 19.5 +/- 12.5 at ExS.
This represents a relative improvement of 35.7% (P<0.00001). A total of 81% (77/95) of the patients showed improvement in symptoms at week four as assessed by the mLEFS. There were 11 patients who reported an increase in their survey scores while 7 patients reported no change. When evaluating the cohort that reported improvement, the ES was 30.9 +/-11.4 and ExS was 16.3 +/-9.8 yielding a 47.2% relative improvement (P<0.00001). Injections were well tolerated, and no adverse events were reported. Conclusions: In this cohort of 95 elderly patients with CKP, treatment with 3 weekly injections of Sarapin significantly improved pain and function as assessed by a mLEFS survey. The majority (81%) of patients responded positively to therapy, 12% had worsening symptoms and 7% reported no change. The use of intraarticular injections of Sarapin for CKP was shown to be an effective modality of treatment. Sarapin’s low cost, tolerability, and ease of use make it an attractive alternative to NSAIDS, steroids, PRP or surgical intervention for this common debilitating condition.

Keywords: Sarapin, intraarticular, chronic knee pain, osteoarthritis

Procedia PDF Downloads 61
40 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging

Authors: Abdou Elhendy

Abstract:

Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvements in procedural techniques and new devices have resulted in an increased number of PCIs in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment it provides. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. The information obtained by these techniques provides important clues regarding the presence and severity of compromised myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve image quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events.
According to current guidelines, stress echocardiography and radionuclide imaging are considered appropriately indicated in patients after PCI who have cardiac symptoms and in those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent restenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are cornerstones of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. Determining which modality to apply requires careful identification of the merits and limitations of each technique as well as the unique characteristics of each individual patient.
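The practical meaning of the pooled 79% sensitivity and 81% specificity depends on the pre-test probability of restenosis; the standard Bayes calculation can be sketched as below (the 30% pre-test probability is an arbitrary illustration, not a figure from the text):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value of a test via Bayes' rule:
    true positives divided by all positives."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    return tp / (tp + fp)

# Pooled CT angiography figures from the text, with an illustrative 30%
# pre-test probability of in-stent restenosis:
p = ppv(0.79, 0.81, 0.30)
```

At a 30% pre-test probability the PPV is only about 0.64, which is one quantitative reason these modalities work best as gatekeepers rather than definitive tests.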

Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis

Procedia PDF Downloads 137
39 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%; 80% to 90% of cases are caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), which causes dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the scoring systems proposed by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews.
Patient demographics, the type and distribution of TN, the response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. The scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square tests; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score greater than 3 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 1.81 (95%CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95%CI 2.28-3.31, p=0.048). A Hardaway composite score greater than 2 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 3.41 (95%CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95%CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI showed a greater correlation with pain-free outcomes 1 year post-MVD than one based on the radiologist's.
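The odds ratios reported above come from exponentiating logistic-regression coefficients; a short sketch with a hypothetical coefficient and standard error (not the study's fitted values) shows the standard conversion, including a Wald confidence interval:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient `beta` and its standard error `se`."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for 'composite score above threshold':
or_, lo, hi = odds_ratio_ci(beta=0.6, se=0.2)
```

An OR above 1 with a CI excluding 1, as in the results above, indicates that exceeding the composite-score threshold raises the odds of a pain-free outcome.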

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 53
38 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during its formative periods. Despite advancing and innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be curbed by the application of screening tests alone. The mortality associated with invasive cervical cancer can be greatly reduced through early-stage detection. This study selected an array of top feature-selection techniques, aimed at developing a model that can validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was assessed with the R and Python programming languages to produce the model, utilizing classification algorithms for the detection and diagnosis of cervical cancer. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in a ratio of 80:20, the numerical features were standardized, and hyperparameter tuning was carried out to train and test the machine learning models.
Fitting features for the detection and diagnosis of cervical cancer were selected from the characteristics in the dataset, using the contributions of the various selection methods, for the classification of cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, mean age at pregnancy 23.3±5.5 years, and mean age at first sexual experience 19.4±3.2 years, while the mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), and most had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001), marital status (OR=0.375, p=0.011), number of pregnancy live-births (OR=1.317, p=0.007), and use of birth control pills (OR=0.291, p=0.015) were found to be significantly associated with HIV-associated cervical cancer. On the top ten features (variables) considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, precision of 84.6%, recall of 84.6%, and F1-score of 74.0%, while LR had an accuracy of 74.0%, precision of 70.0%, recall of 70.0%, and F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer. The age of patients was the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
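The 80:20 split with standardization described above can be sketched in a dependency-light form. The tiny synthetic two-feature dataset and the hand-rolled k-NN below are illustrative stand-ins for the study's clinical data and its LR/DT/RF/KNN models:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic two-feature data: class 1 clusters high, class 0 low (invented).
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# 80:20 train/test split
idx = rng.permutation(len(y))
train, test = idx[:80], idx[80:]

# Standardize using training-set statistics only, to avoid leakage.
mu, sd = X[train].mean(axis=0), X[train].std(axis=0)
Xs = (X - mu) / sd

def knn_predict(x, k=5):
    """Plain k-nearest-neighbour majority vote over the training set."""
    d = np.linalg.norm(Xs[train] - x, axis=1)
    nearest = y[train][np.argsort(d)[:k]]
    return int(nearest.sum() > k / 2)

pred = np.array([knn_predict(Xs[i]) for i in test])
accuracy = (pred == y[test]).mean()
```

Computing mean and standard deviation on the training split only, then applying them to the test split, is the leakage-free version of the standardization step described in the abstract.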

Keywords: HIV-associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 58
37 Technology Optimization of Compressed Natural Gas Home Fast Refueling Units

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Robert Strods, Adam Szurlej

Abstract:

Despite all global economic shifts and the fact that natural gas is recognized worldwide as the main and leading alternative to oil products in the transportation sector, there is a huge barrier to switching the passenger vehicle segment to natural gas: the lack of refueling infrastructure for Natural Gas Vehicles. While investments in public gas stations require an established NGV market in order to be cost-effective, the market is not there due to the lack of refueling stations. The key to solving that problem and providing a barrier-breaking refueling infrastructure solution for Natural Gas Vehicles (NGVs) is Home Fast Refueling Units. Such a unit operates using natural gas (methane), provided through gas pipelines at the client's home, and an electricity connection point. It enables environmentally friendly home refueling of an NGV in just minutes. The underlying technology is a patented one-stage hydraulic compressor (instead of the multistage mechanical compressor technology available on the market now), which makes it possible to compress low-pressure gas from the residential gas grid to 200 bar for its further use as a fuel for NGVs in the most economically efficient and customer-convenient way. Description of the working algorithm: Two high-pressure cylinders with upper necks connected to a low-pressure gas source are placed vertically. Initially one of them is filled with liquid and the other with low-pressure gas. During the working process, liquid is transferred by means of a hydraulic pump from one cylinder to the other and back. The working liquid plays the role of a piston inside the cylinders. Movement of the working liquid inside the cylinders provides simultaneous suction of a portion of low-pressure gas into one cylinder (where the liquid moves down) and forcing out of gas at higher pressure from the other cylinder (where the liquid moves up) into the fuel tank of the vehicle / storage tank.
Each cycle оf fоrcіng the gas оut оf the cylіnder rіses up the pressure оf gas іn the fuel tank оf a vehіcle wіth 2 cylіnders. The prоcess іs repeated untіl the pressure оf gas іn the fuel tank reaches 200 bar. Mоbіlіty has becоme a necessіty іn peоple’s everyday lіfe, whіch led tо оіl dependence. CNG Hоme Fast Refuelіng Unіts can become a part fоr exіstіng natural gas pіpelіne іnfrastructure and becоme the largest vehіcle refuelіng іnfrastructure. Hоme Fast Refuelіng Unіts оwners wіll enjоy day-tо-day tіme savіngs and cоnvenіence - Hоme Car refuelіng іn mіnutes, mоnth-tо-mоnth fuel cоst ecоnоmy, year-tо-year іncentіves and tax deductіbles оn NG refuelіng systems as per cоuntry, reduce CО2 lоcal emіssіоns, savіng cоsts and mоney.
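The stroke-by-stroke filling process described above lends itself to a simple back-of-the-envelope model. The sketch below is an illustrative toy calculation only: the cylinder volume, tank volume, and grid pressure are assumed values, not the unit's actual specification, and isothermal ideal-gas behaviour is assumed.

```python
# Toy model of the two-cylinder liquid-piston compressor cycle.
# All numbers are illustrative assumptions, not the authors' design data.
P_GRID_BAR = 1.05      # residential grid gas, ~50 mbar gauge -> ~1.05 bar absolute
V_CYLINDER_L = 2.0     # swept volume of one cylinder (assumed)
V_TANK_L = 60.0        # vehicle fuel tank volume (assumed)
P_TARGET_BAR = 200.0   # CNG fill pressure from the abstract

def cycles_to_fill(p_tank: float = 1.0) -> int:
    """Count strokes needed to push the tank to 200 bar.

    Each stroke displaces one cylinder of grid gas into the tank; under
    the isothermal ideal-gas assumption, the tank pressure rises by
    p_grid * V_cyl / V_tank per stroke.
    """
    cycles = 0
    while p_tank < P_TARGET_BAR:
        p_tank += P_GRID_BAR * V_CYLINDER_L / V_TANK_L
        cycles += 1
    return cycles
```

With these assumed volumes the fill takes several thousand strokes, which is why the hydraulic pump's cycle rate, not the gas chemistry, governs the refueling time.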

Keywords: CNG (compressed natural gas), CNG stations, NGVs (natural gas vehicles), natural gas

Procedia PDF Downloads 183
36 An Interoperability Concept for Detect and Avoid and Collision Avoidance Systems: Results from a Human-In-The-Loop Simulation

Authors: Robert Rorie, Lisa Fern

Abstract:

The integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS) poses a variety of technical challenges to UAS developers and aviation regulators. In response to growing demand for access to civil airspace in the United States, the Federal Aviation Administration (FAA) has produced a roadmap identifying key areas requiring further research and development. One such technical challenge is the development of a ‘detect and avoid’ system (DAA; previously referred to as ‘sense and avoid’) to replace the ‘see and avoid’ requirement in manned aviation. The purpose of the DAA system is to support the pilot, situated at a ground control station (GCS) rather than in the cockpit of the aircraft, in maintaining ‘well clear’ of nearby aircraft through the use of GCS displays and alerts. In addition to its primary function of aiding the pilot in maintaining well clear, the DAA system must also safely interoperate with existing NAS systems and operations, such as the airspace management procedures of air traffic controllers (ATC) and collision avoidance (CA) systems currently in use by manned aircraft, namely the Traffic Alert and Collision Avoidance System (TCAS) II. It is anticipated that many UAS architectures will integrate both a DAA system and a TCAS II. It is therefore necessary to explicitly study the integration of DAA and TCAS II alerting structures and maneuver guidance formats to ensure that pilots understand the appropriate type and urgency of their response to the various alerts. This paper presents a concept of interoperability for the two systems. The concept was developed with the goal of avoiding any negative impact on the performance level of TCAS II (understanding that TCAS II must largely be left as-is) while retaining a DAA system that still effectively enables pilots to maintain well clear and, as a result, successfully reduces the frequency of collision hazards.
The interoperability concept described in the paper focuses primarily on facilitating the transition from a late-stage DAA encounter (where a loss of well clear is imminent) to a TCAS II corrective Resolution Advisory (RA), which requires pilot compliance with the directive RA guidance (e.g., climb, descend) within five seconds of its issuance. The interoperability concept was presented to 10 participants (6 active UAS pilots and 4 active commercial pilots) in a medium-fidelity, human-in-the-loop simulation designed to stress different aspects of the DAA and TCAS II systems. Pilot response times, compliance rates and subjective assessments were recorded. Results indicated that pilots exhibited comprehension of, and appropriate prioritization within, the DAA-TCAS II combined alert structure. Pilots demonstrated a high rate of compliance with TCAS II RAs and were also seen to respond to corrective RAs within the five second requirement established for manned aircraft. The DAA system presented under test was also shown to be effective in supporting pilots’ ability to maintain well clear in the overwhelming majority of cases in which pilots had sufficient time to respond. The paper ends with a discussion of next steps for research on integrating UAS into civil airspace.
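The core of the combined alert structure is that a TCAS II corrective RA must always dominate the pilot's attention over any DAA alert. The sketch below captures only that prioritisation idea; the DAA alert names and their ordering are simplified assumptions based on the abstract's description, not the actual concept of operations or any TCAS II logic.

```python
# Simplified sketch of DAA-TCAS II alert prioritisation.
# Alert names and ordering are illustrative assumptions, not the real spec.
from enum import IntEnum

class Alert(IntEnum):
    # Higher value = higher urgency for the pilot's response.
    DAA_PREVENTIVE = 1   # maintain well clear; no manoeuvre yet required
    DAA_CORRECTIVE = 2   # manoeuvre needed to avoid losing well clear
    DAA_WARNING = 3      # loss of well clear imminent (late-stage encounter)
    TCAS_RA = 4          # corrective Resolution Advisory: comply within 5 s

def active_guidance(alerts: list) -> Alert:
    """Return the single alert the pilot should act on: the most urgent one."""
    return max(alerts)

# A late-stage DAA encounter escalating to an RA: the RA takes priority.
assert active_guidance([Alert.DAA_WARNING, Alert.TCAS_RA]) is Alert.TCAS_RA
```

A strict ordering like this is what lets pilots respond without deliberating over conflicting guidance, which is consistent with the compliance rates and response times reported above.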

Keywords: detect and avoid, interoperability, traffic alert and collision avoidance system (TCAS II), unmanned aircraft systems

Procedia PDF Downloads 245
35 The Late Bronze Age Archeometallurgy of Copper in Mountainous Colchis (Lechkhumi), Georgia

Authors: Nino Sulava, Brian Gilmour, Nana Rezesidze, Tamar Beridze, Rusudan Chagelishvili

Abstract:

Studies of ancient metallurgy are a subject of worldwide current interest. Georgia, with its famous early metalworking traditions, is one of the central parts of the Caucasus region. The aim of the present study is to introduce the results of archaeometallurgical investigations being undertaken in the mountain region of Colchis, Lechkhumi (the Tsageri Municipality of western Georgia), and to establish their place in the existing archaeological context. Lechkhumi (one of the historic provinces of Georgia, known from Georgian, Greek, Byzantine and Armenian written sources as Lechkhumi/Skvimnia/Takveri) is part of the Colchian mountain area. It is one of the important but little-known centres of prehistoric metallurgy in the Caucasian region and of Colchian Bronze Age culture. Reconnaissance archaeological expeditions (2011-2015) revealed significant prehistoric metallurgical sites in Lechkhumi. Sites located in the vicinity of Dogurashi Village (Tsageri Municipality) became the target area for archaeological excavations. During archaeological excavations conducted in 2016-2018, two archaeometallurgical sites, Dogurashi I and Dogurashi II, were investigated. As a result of an interdisciplinary (archaeological, geological and geophysical) survey, it has been established that copper was being smelted at both prehistoric Dogurashi mountain sites and that the ore sources are likely to be of local origin. Radiocarbon dating results confirm they were operating between about the 13th and 9th centuries BC. More recently, another similar site (Dogurashi III) has been identified in this area and is about to undergo detailed investigation. Other prehistoric metallurgical sites in the Lechkhumi region are being located and investigated, along with chance archaeological finds (often in hoards): copper ingots, metallurgical production debris, slag, fragments of crucibles, tuyeres (air delivery pipes), furnace wall fragments and other related waste debris.
Other chance finds being investigated are the many copper, bronze and (some) iron artefacts that have been found over many years. These include copper ingots and copper, bronze and iron artefacts such as tools, jewelry, and decorative items. They show the important but little-known or understood role of Lechkhumi in the Late Bronze Age culture of Colchis. It would seem that mining and metallurgical manufacture formed part of the local yearly agricultural lifecycle. Colchian ceramics have been found, as well as evidence for artefact production: small stone mould fragments and encrusted material from the casting of a fylfot (swastika) form of Colchian bronze buckle, found in the vicinities of the early settlements of Tskheta and Dekhviri. Excavation and investigation of previously unknown archaeometallurgical sites in Lechkhumi will contribute significantly to the knowledge and understanding of prehistoric Colchian metallurgy in western Georgia (Adjara, Guria, Samegrelo, and Svaneti) and will reveal the importance of this region in the study of ancient metallurgy in Georgia and the Caucasus. Acknowledgment: This work has been supported by the Shota Rustaveli National Science Foundation (grant FR # 217128).

Keywords: archaeometallurgy, Colchis, copper, Lechkhumi

Procedia PDF Downloads 117
34 Middle School as a Developmental Context for Emergent Citizenship

Authors: Casta Guillaume, Robert Jagers, Deborah Rivas-Drake

Abstract:

Civically engaged youth are critical to maintaining and/or improving the functioning of local, national and global communities and their institutions. The present study investigated how school climate and academic beliefs (academic self-efficacy and school belonging) may inform emergent civic behaviors (emergent citizenship) among self-identified middle school youth of color (African American, Multiracial or Mixed, Latino, Asian American or Pacific Islander, Native American, and other). Study aims: 1) to understand whether and how school climate is associated with civic engagement behaviors, directly and indirectly, by fostering a positive sense of connection to the school and/or engendering feelings of self-efficacy in the academic domain; accordingly, we examined 2) the association of youths’ sense of school connection and academic self-efficacy with their personally responsible and participatory civic behaviors in school and community contexts, both concurrently and longitudinally. Data from two subsamples of a larger study of social/emotional development among middle school students were used for longitudinal and cross-sectional analyses. The cross-sectional sample included 324 6th-8th grade students, of whom 43% identified as African American, 20% as Multiracial or Mixed, 18% as Latino, 12% as Asian American or Pacific Islander, 6% as Other, and 1% as Native American. The age of the sample ranged from 11 to 15 (M = 12.33, SD = .97). For the longitudinal test of our mediation model, we drew on data from the 6th and 7th grade cohorts only (n = 232); the ethnic and racial diversity of this longitudinal subsample was virtually identical to that of the cross-sectional sample. For both the cross-sectional and longitudinal analyses, full information maximum likelihood was used to handle missing data.
Fit indices were inspected to determine if they met the recommended thresholds of RMSEA below .05 and CFI and TLI values of at least .90. To determine if particular mediation pathways were significant, the bias-corrected bootstrap confidence intervals for each indirect pathway were inspected. Fit indices for the latent variable mediation model using the cross-sectional data suggest that the hypothesized model fit the observed data well (CFI = .93; TLI =. 92; RMSEA = .05, 90% CI = [.04, .06]). In the model, students’ perceptions of school climate were significantly and positively associated with greater feelings of school connectedness, which were in turn significantly and positively associated with civic engagement. In addition, school climate was significantly and positively associated with greater academic self-efficacy, but academic self-efficacy was not significantly associated with civic engagement. Tests of mediation indicated there was one significant indirect pathway between school climate and civic engagement behavior. There was an indirect association between school climate and civic engagement via its association with sense of school connectedness, indirect association estimate = .17 [95% CI: .08, .32]. The aforementioned indirect association via school connectedness accounted for 50% (.17/.34) of the total effect. Partial support was found for the prediction that students’ perceptions of a positive school climate are linked to civic engagement in part through their role in students’ sense of connection to school.
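The bootstrap test for indirect pathways described above can be illustrated in miniature. The sketch below uses synthetic data, simple (not latent-variable) regressions, and a percentile rather than bias-corrected interval, so it conveys only the logic of the analysis, not the study's actual model or estimates.

```python
# Miniature illustration of a bootstrap test for an indirect (mediated)
# effect: resample, re-estimate the a (climate -> connectedness) and
# b (connectedness -> engagement) paths, and inspect the CI of a*b.
# Synthetic data; path coefficients here are NOT the study's estimates.
import random
random.seed(0)

n = 300
climate = [random.gauss(0, 1) for _ in range(n)]
connect = [0.6 * x + random.gauss(0, 1) for x in climate]   # a path
engage = [0.5 * m + random.gauss(0, 1) for m in connect]    # b path

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def boot_indirect(reps=1000):
    """Percentile 95% CI for the indirect effect a*b."""
    stats = []
    for _ in range(reps):
        idx = [random.randrange(n) for _ in range(n)]
        xs = [climate[i] for i in idx]
        ms = [connect[i] for i in idx]
        ys = [engage[i] for i in idx]
        stats.append(slope(xs, ms) * slope(ms, ys))
    stats.sort()
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

lo, hi = boot_indirect()
# The indirect pathway is "significant" when the 95% CI excludes zero,
# which is the criterion applied in the analysis above.
assert lo > 0
```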

Keywords: civic engagement, early adolescence, school climate, school belonging, developmental niche

Procedia PDF Downloads 345
33 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms

Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan

Abstract:

Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g., attention and inhibitory control) and is considered a hallmark of clinical disorders such as attention-deficit/hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as with poorer task performance accuracy. Furthermore, the strength of functional connectivity across various brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as being a useful indicator of neurological impairment, the brain substrates associated with RT variability remain relatively poorly defined, particularly in healthy samples. Method: First, we used the intra-individual coefficient of variation (ICV) as an index of RT variability, computed from “Go” responses on the Stop Signal Task. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n = 1719). Of these, a subset had elevated symptoms of ADHD (n = 80) and was compared to a matched non-symptomatic control group (n = 80). Brain activity during successful and unsuccessful inhibitions, as well as gray matter volume, was related to ICV. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we examined functional connectivity across various brain networks and quantified both positive and negative correlations during “Go” responses on the Stop Signal Task.
Results: The brain data revealed that higher ICV was associated with increased gray matter volume and functional activation in the precentral gyrus in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, our results indicated that activation in the precentral gyrus (Brodmann Area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability. Furthermore, structural and functional brain activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with symptoms of ADHD but without clinical diagnosis and typically developing controls. Our findings shed light on specific functional and structural brain regions that are implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
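The ICV used throughout this abstract is conventionally defined as the standard deviation of a participant's "Go" reaction times divided by their mean RT, so that more erratic responding yields a higher score. A minimal sketch with invented RTs (in seconds):

```python
# Intra-individual coefficient of variation (ICV) of reaction times:
# ICV = SD(RT) / mean(RT). Higher ICV = more variable responding.
from statistics import mean, stdev

def icv(reaction_times):
    return stdev(reaction_times) / mean(reaction_times)

# Invented example RTs: a steady responder vs. an erratic one.
steady = [0.41, 0.43, 0.40, 0.42, 0.44]
erratic = [0.30, 0.65, 0.38, 0.80, 0.35]
assert icv(erratic) > icv(steady)
```

Dividing by the mean is what makes the measure comparable across participants who differ in overall speed, which matters when contrasting ADHD-symptomatic and control groups.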

Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity

Procedia PDF Downloads 229
32 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century

Authors: Richard Levy, Peter Dawson

Abstract:

Ft. Conger, located in the Canadian Arctic, was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, its wood-framed structure provided a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, the expedition at Ft. Conger was one of 14 conducted during the First International Polar Year (FIPY). Our research project, “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic”, focused on the creation of a virtual museum website dedicated to one of the most important polar heritage sites in the Canadian Arctic. This website was developed under a grant from the Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present: http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by the British explorer George Nares in 1875-76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84), with research to be conducted under the FIPY (1882-83). Still later, Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905 and 1908. A central focus of this research is the virtual reconstruction of Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and a Minolta Vivid 910 laser scanner were used to scan the terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interior of these buildings with all of their artifacts.
Links to text, historic documents, animations, panorama images, computer games and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin exploring the site at the moment George Nares, in his ship HMS Discovery, appeared in the harbor in 1875. With the arrival of Lt. Greely’s expedition in 1881, we can track the progress made in establishing a scientific outpost. Still later, in 1901, with Peary’s presence, the site is transformed again, with huts built from materials salvaged from Greely’s main building. Finally, in 2010, we can visit the site in its present state of deterioration and learn about the laser scanning technology that was used to document it. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where first-hand visitor experiences are not possible because of their remote location.

Keywords: 3D imaging, multimedia, virtual reality, arctic

Procedia PDF Downloads 394
31 Investigation of Yard Seam Workings for the Proposed Newcastle Light Rail Project

Authors: David L. Knott, Robert Kingsland, Alistair Hitchon

Abstract:

The proposed Newcastle Light Rail is a key part of the revitalisation of Newcastle, NSW, and will provide a frequent and reliable travel option throughout the city centre, running from Newcastle Interchange at Wickham to Pacific Park in Newcastle East, a total length of 2.7 kilometres. Approximately one-third of the route, along Hunter and Scott Streets, is subject to potential shallow underground mine workings. The extent of mining and the seams mined are unclear. Convicts mined the Yard Seam and the overlying Dudley (Dirty) Seam in Newcastle sometime between 1800 and 1830. The Australian Agricultural Company mined the Yard Seam from about 1831 to the 1860s in the alignment area. The Yard Seam was about 3 feet (0.9 m) thick and was therefore known as the Yard Seam. Mine maps do not exist for the workings in the area of interest, and it was unclear whether both seams or just one had been mined. Information from 1830s geological mapping and other data showing shaft locations was used along Scott Street, and information from the 1908 Royal Commission was used along Hunter Street, to develop an investigation program. In addition, mining had been encountered at several sites to the south of the alignment at depths of about 7 m to 25 m. Based on the anticipated depths of mining, it was considered prudent to assess the potential for sinkhole development on the proposed alignment and realigned underground utilities, and to obtain approval for the work from Subsidence Advisory NSW (SA NSW). The assessment consisted of a desktop study, followed by a subsurface investigation. Four boreholes were drilled along Scott Street and three along Hunter Street, using HQ coring techniques in the rock. The placement of boreholes was complicated by the presence of utilities in the roadway and by traffic constraints. All the boreholes encountered the Yard Seam, with conditions varying from unmined coal to an open void, indicating the presence of mining.
The geotechnical information obtained from the boreholes was expanded by using various downhole techniques, including a borehole camera, borehole sonar, and downhole geophysical logging. The camera provided views of the rock and helped to explain zones of no recovery; in addition, timber props within the void were observed. Borehole sonar was performed in the void and provided an indication of room size as well as the presence of timber props within the room. Downhole geophysical logging was performed in the boreholes to measure density, natural gamma, and borehole deviation. The data helped confirm that all the mining was in the Yard Seam and that the overlying Dudley Seam had been eroded in the past over much of the alignment. In summary, the assessment allowed the potential for sinkhole subsidence to be assessed and a mitigation approach to be developed, allowing conditional approval by SA NSW. It also confirmed the presence of mining in the Yard Seam, the depth to the seam and the mining conditions, and indicated that subsidence did not appear to have occurred in the past.

Keywords: downhole investigation techniques, drilling, mine subsidence, yard seam

Procedia PDF Downloads 290
30 Respiratory Health and Air Movement Within Equine Indoor Arenas

Authors: Staci McGill, Morgan Hayes, Robert Coleman, Kimberly Tumlin

Abstract:

The interaction and relationships between horses and humans have been shown to be positive for physical, mental, and emotional wellbeing; however, the equine spaces where these interactions occur do include some environmental risks. There are 1.7 million jobs associated with the equine industry in the United States, in addition to recreational riders, owners, and volunteers who interact with horses for substantial amounts of time daily inside built structures. One specialized facility, an “indoor arena”, is a semi-indoor structure used for exercising horses and exhibiting skills during competitive events. Typically, indoor arenas have sand or a sand mixture as the footing or surface over which the horse travels, and increasingly, silica sand is being recommended due to its durable nature. A previous semi-qualitative survey identified that the majority of individuals using indoor arenas have environmental concerns about dust. 27% (90/333) of respondents reported respiratory issues or allergy-like symptoms while riding, and 21.6% (71/329) reported these issues while standing on the ground observing or teaching. Frequent headaches and/or lightheadedness were reported by 9.9% (33/333) of respondents while riding and by 4.3% (14/329) while on the ground. Horse respiratory health is also negatively impacted, with 58% (194/333) of respondents indicating that horses cough during or after time in the indoor arena. Instructors who spent time in indoor arenas self-reported more respiratory issues than individuals who identified as smokers, highlighting the health relevance of understanding these unique structures. To further elucidate environmental concerns and self-reported health issues, 35 facility assessments were conducted in a cross-sectional sampling design in the states of Kentucky and Ohio (USA). Data, including air speeds, were collected in a grid fashion at 15 points within each indoor arena and then mapped spatially using kriging in ArcGIS.
From the spatial maps, standard variances were obtained and differences were analyzed using multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA). There were no differences in the variance of air speeds in the spaces by facility orientation, presence and type of roof ventilation, climate control systems, amount of openings, or use of fans. Variability of the air speeds in the indoor arenas was 0.25 or less. Further analysis yielded that average air speeds within the indoor arenas were lower than 100 ft/min (0.51 m/s), which is considered still air in other animal facilities. The lack of air movement means that dust clearance is reliant on particle size and weight rather than ventilation. While further work on respirable dust is necessary, this characterization of the semi-indoor environment where animals and humans interact indicates insufficient airflow to eliminate or reduce respiratory hazards. Finally, engineering solutions to address air movement deficiencies within indoor arenas or to mitigate particulate matter are critical to ensuring exposures do not lead to adverse health outcomes for equine professionals, volunteers, participants, and horses within these spaces.
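The still-air comparison above reduces to averaging the gridded readings and converting units. The sketch below uses invented grid readings (the study's measurements are not reproduced here) to show the check against the 100 ft/min (0.51 m/s) threshold.

```python
# Check whether the averaged gridded air speeds fall below the "still air"
# threshold used for other animal facilities (~100 ft/min = 0.51 m/s).
# The 15 readings below are invented illustration values.
FT_MIN_PER_M_S = 196.85  # 1 m/s = 3.28084 ft * 60 s/min ~ 196.85 ft/min

def is_still_air(speeds_m_s, threshold_ft_min=100.0):
    avg_ft_min = (sum(speeds_m_s) / len(speeds_m_s)) * FT_MIN_PER_M_S
    return avg_ft_min < threshold_ft_min

# 15 grid points, all well below 0.51 m/s, as the study found on average:
grid = [0.10, 0.22, 0.15, 0.30, 0.08, 0.18, 0.25, 0.12,
        0.20, 0.16, 0.28, 0.09, 0.14, 0.21, 0.11]
assert is_still_air(grid)
```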

Keywords: equine, indoor arena, ventilation, particulate matter, respiratory health

Procedia PDF Downloads 83
29 Audience Members' Perspective-Taking Predicts Accurate Identification of Musically Expressed Emotion in a Live Improvised Jazz Performance

Authors: Omer Leshem, Michael F. Schober

Abstract:

This paper introduces a new method for assessing how audience members and performers feel and think during live concerts, and how audience members' recognized and felt emotions are related. Two hypotheses were tested in a live concert setting: (1) that audience members’ cognitive perspective-taking ability predicts their accuracy in identifying an emotion that a jazz improviser intended to express during a performance, and (2) that audience members' affective empathy predicts their likelihood of feeling the same emotions as the performer. The aim was to stage a concert with audience members who regularly attend live jazz performances, and to measure their cognitive and affective reactions during the performance as non-intrusively as possible. Pianist and Grammy nominee Andy Milne agreed, without knowing the details of the method or hypotheses, to perform a full-length solo improvised concert that would include an ‘unusual’ piece. Jazz fans were recruited through typical advertising for New York City jazz performances. The event was held at the New School’s Glass Box Theater, the home of the leading NYC jazz venue ‘The Stone.’ Audience members were charged typical NYC jazz club admission prices; advertisements informed them that anyone who chose to participate in the study would be reimbursed their ticket price after the concert. The concert, held in April 2018, had 30 attendees, 23 of whom participated in the study. Twenty-two minutes into the concert, the performer was handed a paper note with the instruction: ‘Perform a 3-5-minute improvised piece with the intention of conveying sadness.’ (Sadness was chosen based on previous music cognition lab studies, in which solo listeners were less likely to accurately select sadness as the musically expressed emotion from a list of basic emotions, and more likely to misinterpret it as tenderness.) Then, audience members and the performer were invited to respond to a questionnaire from a first envelope under their seats.
Participants used their own words to describe the emotion the performer had intended to express, and then selected the intended emotion from a list. They also reported the emotions they had felt while listening, using Izard’s differential emotions scale. The concert then continued as usual. At the end, participants answered demographic questions and Davis’ Interpersonal Reactivity Index (IRI), a 28-item scale designed to assess both cognitive and affective empathy. Hypothesis 1 was supported: audience members with greater cognitive empathy were more likely to accurately identify sadness as the expressed emotion. Moreover, audience members who accurately selected ‘sadness’ reported feeling marginally sadder than people who did not. Hypothesis 2 was not supported: audience members with greater affective empathy were not more likely to feel the same emotions as the performer. If anything, members with lower cognitive perspective-taking ability had marginally greater emotional overlap with the performer, which makes sense given that these participants were less likely to identify the music as sad, which corresponded with the performer’s actual feelings. The results replicate findings from solo lab studies in a concert setting and demonstrate the viability of exploring empathy and collective cognition in improvised live performance.

Keywords: audience, cognition, collective cognition, emotion, empathy, expressed emotion, felt emotion, improvisation, live performance, recognized emotion

Procedia PDF Downloads 112
28 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated approach to the design of timber structures, considering only standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a larger number of parametric fire curves. The zero strength layer and charring rates are determined on the basis of numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to occur at a fixed temperature of around 300°C. Based on the performed study and observations, improved values of charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
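For context, the reduced cross-section method under standard fire exposure, with the fixed 7 mm zero strength layer that the study questions for parametric fires, can be sketched as follows. The charring rate and section dimensions are example values for a solid softwood beam charred on three sides, not the paper's results.

```python
# Sketch of the Eurocode 5 (EN 1995-1-2) reduced cross-section method for
# STANDARD fire exposure. Section size and charring rate are example values.
BETA_N = 0.8   # notional charring rate, mm/min (solid softwood, assumed)
D0 = 7.0       # zero strength layer, mm (the fixed value questioned above)

def effective_section(b, h, t_min):
    """Return (b_ef, h_ef) in mm for a beam exposed on 3 sides.

    d_ef = char depth + k0 * d0, where k0 ramps linearly from 0 to 1
    over the first 20 minutes of exposure.
    """
    k0 = min(t_min / 20.0, 1.0)
    d_ef = BETA_N * t_min + k0 * D0   # depth of material removed per side
    b_ef = b - 2 * d_ef               # both vertical faces exposed
    h_ef = h - d_ef                   # bottom face exposed
    return b_ef, h_ef

# After 30 min: d_ef = 0.8 * 30 + 7 = 31 mm, so 140 x 300 -> 78 x 269 mm.
assert effective_section(140.0, 300.0, 30.0) == (78.0, 269.0)
```

The study's point is precisely that under parametric fire curves neither BETA_N nor the 7 mm D0 can be taken as fixed; this sketch shows only the standard-exposure baseline those quantities would replace.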

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

27 Gas Systems of the Amadeus Basin, Australia

Authors: Chris J. Boreham, Dianne S. Edwards, Amber Jarrett, Justin Davies, Robert Poreda, Alex Sessions, John Eiler

Abstract:

The origins of natural gases in the Amadeus Basin have been assessed using molecular and stable isotope (C, H, N, He) systematics. A dominant end-member thermogenic, oil-associated gas is considered for the Ordovician Pacoota−Stairway sandstones of the Mereenie gas and oil field. In addition, an abiogenic end-member is identified in the latest Proterozoic lower Arumbera Sandstone of the Dingo gasfield, most likely associated with radiolysis of methane and polymerisation to wet gases. The latter source assignment is based on a similar geochemical fingerprint derived from laboratory gamma irradiation experiments on methane. A mixed gas source is considered for the Palm Valley gasfield in the Ordovician Pacoota Sandstone. Gas wetness (%∑C₂−C₅/∑C₁−C₅) decreases in the order Mereenie (19.1%) > Palm Valley (9.4%) > Dingo (4.1%). Non-produced gases at Magee-1 (23.5%; Late Proterozoic Heavitree Quartzite) and Mount Kitty-1 (18.9%; Paleo-Mesoproterozoic fractured granitoid basement) are very wet. Methane thermometry based on the clumped isotopologue of methane (¹³CH₃D) is consistent with an abiogenic origin for the Dingo gas field, with a methane formation temperature of 254 °C. However, the low methane formation temperature of 57 °C for the Mereenie gas suggests either a mixed thermogenic-biogenic methane source or a lack of thermodynamic equilibrium between the methane isotopomers. The shallow reservoir depth and present-day formation temperature below 80 °C would support microbial methanogenesis, but there is no accompanying alteration of the C- and H-isotopes of the wet gases and CO₂ that is typically associated with biodegradation. The Amadeus Basin gases show low to extremely high inorganic gas contents. Carbon dioxide is low in abundance (< 1% CO₂) and becomes increasingly depleted in ¹³C from Palm Valley (av. δ¹³C 0‰) to the Mereenie (av. δ¹³C -6.6‰) and Dingo (av. δ¹³C -14.3‰) gas fields.
Although the wide range in carbon isotopes for CO₂ is consistent with multiple origins from inorganic to organic inputs, the most likely process is fluid-rock alteration, with enrichment in ¹²C in the residual gaseous CO₂ accompanying progressive carbonate precipitation within the reservoir. Nitrogen ranges from low−moderate abundance (1.7−9.9% N₂; Palm Valley av. 1.8%; Mereenie av. 9.1%; Dingo av. 9.4%) to extremely high abundance in Magee-1 (43.6%) and Mount Kitty-1 (61.0%). The nitrogen isotopes of the production gases (δ¹⁵N = -3.0‰ for Mereenie, -3.0‰ for Palm Valley, and -7.1‰ for Dingo) suggest mixed inorganic and thermogenic nitrogen sources for all. Helium (He) abundance varies over a wide range, from a low of 0.17% to one of the world's highest at 9% (Mereenie av. 0.23%; Palm Valley av. 0.48%; Dingo av. 0.18%; Magee-1 6.2%; Mount Kitty-1 9.0%). Complementary helium isotopes (R/Ra = (³He/⁴He)sample / (³He/⁴He)air) range from 0.013 to 0.031, indicating a dominant crustal origin for helium with a sustained input of radiogenic ⁴He from the radioactive decay of U- and Th-bearing minerals, effectively diluting any original mantle helium input. The high helium content in the non-produced gases compared to the shallower producing wells most likely reflects their stratigraphic position relative to the Tonian Bitter Springs Group, with the former below and the latter above an effective carbonate-salt seal.
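The two compositional indices used above can be sketched numerically. The function below computes gas wetness exactly as defined in the abstract (%∑C₂−C₅/∑C₁−C₅) and normalises a sample ³He/⁴He ratio to the atmospheric value to give R/Ra. The compositions passed in and the atmospheric ratio Ra ≈ 1.384 × 10⁻⁶ are illustrative assumptions, not measured Amadeus Basin data.

```python
# Sketch of the two ratios used in the abstract; all input values
# below are illustrative, not measured Amadeus Basin data.

R_A_AIR = 1.384e-6  # assumed atmospheric 3He/4He ratio (Ra)

def gas_wetness(c1, c2, c3, c4, c5):
    """Gas wetness in %, defined as 100 * sum(C2..C5) / sum(C1..C5);
    inputs are methane through pentane as mol% (or mole fractions)."""
    wet = c2 + c3 + c4 + c5
    return 100.0 * wet / (c1 + wet)

def r_over_ra(he3_he4_sample):
    """Sample 3He/4He normalised to the atmospheric ratio: R/Ra."""
    return he3_he4_sample / R_A_AIR

# A hypothetical dry gas with crustal-range helium:
print(gas_wetness(96.0, 2.5, 1.0, 0.3, 0.2))  # wetness in %
print(r_over_ra(3.0e-8))                      # dimensionless R/Ra
```

With these illustrative numbers the wetness is 4%, comparable to the Dingo value quoted above, and the R/Ra falls in the 0.013−0.031 crustal range reported for the basin.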

Keywords: Amadeus Basin gas, thermogenic, abiogenic, C, H, N, He isotopes
