Search results for: Malani igneous suite
36 Exploring an Exome Target Capture Method for Cross-Species Population Genetic Studies
Authors: Benjamin A. Ha, Marco Morselli, Xinhui Paige Zhang, Elizabeth A. C. Heath-Heckman, Jonathan B. Puritz, David K. Jacobs
Abstract:
Next-generation sequencing has enhanced the ability to acquire massive amounts of sequence data to address classic population genetic questions for non-model organisms. Targeted approaches allow for cost-effective or more precise analyses of relevant sequences, although many such techniques require a known genome, and purchasing probes from a company can be costly. This is challenging for non-model organisms with no published genome and can be expensive for large population genetic studies. Expressed exome capture sequencing (EecSeq) synthesizes probes in the lab from expressed mRNA, which is used to capture and sequence the coding regions of genomic DNA from a pooled suite of samples. A normalization step produces probes that recover transcripts from a wide range of expression levels. This approach offers low-cost recovery of a broad range of genes in the genome. This research project expands on EecSeq to investigate whether mRNA from one taxon may be used to capture relevant sequences from a series of increasingly less closely related taxa. For this purpose, we propose to use the endangered Northern Tidewater goby, Eucyclogobius newberryi, a non-model organism that inhabits California coastal lagoons. mRNA will be extracted from E. newberryi to create probes and capture exomes from eight other taxa, including the more at-risk Southern Tidewater goby, E. kristinae, and more divergent species. Captured exomes will be sequenced, analyzed bioinformatically and phylogenetically, then compared to previously generated phylogenies across this group of gobies. This will provide an assessment of the utility of the technique in cross-species studies and for analyzing low genetic variation within species, as is the case for E. kristinae. This method has potential applications in providing economical ways to expand population genetic and evolutionary biology studies for non-model organisms.
Keywords: coastal lagoons, endangered species, non-model organism, target capture method
Procedia PDF Downloads 190
35 Multi-Indicator Evaluation of Agricultural Drought Trends in Ethiopia: Implications for Dry Land Agriculture and Food Security
Authors: Dawd Ahmed, Venkatesh Uddameri
Abstract:
Agriculture in Ethiopia is the main economic sector influenced by agricultural drought. A simultaneous assessment of drought trends using multiple drought indicators is useful for drought planning and management. Intra-season and seasonal drought trends in Ethiopia were studied using a suite of drought indicators. The Standardized Precipitation Index (SPI), Standardized Precipitation Evapotranspiration Index (SPEI), Palmer Drought Severity Index (PDSI), and Z-index for the long-rainy, dry, and short-rainy seasons are used to identify drought-causing mechanisms. The statistical software package R (version 3.5.2) was used for data extraction and data analyses. Trend analysis indicated shifts of late-season long-rainy season precipitation toward dry conditions in the southwest and south-central portions of Ethiopia. Droughts during the dry season (October–January) were largely temperature controlled. Short-term temperature-controlled hydrologic processes exacerbated rainfall deficits during the short rainy season (February–May) and highlight the importance of temperature- and hydrology-induced soil dryness on the production of short-season crops such as tef. Droughts during the long-rainy season (June–September) were largely driven by precipitation declines arising from the narrowing of the intertropical convergence zone (ITCZ). Increased dryness during the long-rainy season had severe consequences on the production of corn and sorghum. PDSI was an aggressive indicator of seasonal droughts, suggesting low natural resilience to combat the effects of slow-acting, moisture-depleting hydrologic processes. The lack of irrigation systems in the nation limits the ability to combat droughts and improve agricultural resilience. There is an urgent need to monitor soil moisture (a key agro-hydrologic variable) to better quantify the impacts of meteorological droughts on agricultural systems in Ethiopia.
Keywords: autocorrelation, climate change, droughts, Ethiopia, food security, Palmer Z-index, PDSI, SPEI, SPI, trend analysis
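Of the indices listed, the SPI is the simplest to reproduce: precipitation totals are fitted to a gamma distribution and mapped onto standard-normal quantiles, so droughts in different seasons and places become comparable. A minimal Python sketch under that standard formulation (the paper's own R implementation is not reproduced, and a full SPI would fit each calendar month separately):

```python
import numpy as np
from scipy import stats

def spi(precip, window=3):
    """Standardized Precipitation Index: fit a gamma distribution to rolling
    precipitation totals and map them onto standard-normal quantiles."""
    totals = np.convolve(precip, np.ones(window), mode="valid")  # rolling sums
    q = np.mean(totals == 0)                     # probability of a zero total
    shape, loc, scale = stats.gamma.fit(totals[totals > 0], floc=0)
    cdf = q + (1 - q) * stats.gamma.cdf(totals, shape, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))  # negative = drier

# Hypothetical 20 years of monthly rainfall (mm), skewed like real records:
rng = np.random.default_rng(0)
monthly = rng.gamma(shape=2.0, scale=30.0, size=240)
print(spi(monthly)[:6].round(2))
```

Values below about -1 would flag moderate drought; handling zero-precipitation totals as a point mass before the gamma fit is a standard refinement for arid records.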
Procedia PDF Downloads 143
34 Implication of Soil and Seismic Ground Motion Variability on Dynamic Pile Group Impedance for Bridges
Authors: Muhammad Tariq Chaudhary
Abstract:
Bridges constitute a vital link in a transportation system, and their functionality after an earthquake is critical in reducing disruption to the social and economic activities of society. Bridges supported on pile foundations are commonly used in many earthquake-prone regions. In order to properly design or investigate the performance of such structures, it is imperative that the effect of soil-foundation-structure interaction be taken into account. This study focused on the influence of soil and seismic ground motion variability on the dynamic impedance of pile-group foundations typically used for medium-span (about 30 m) urban viaduct bridges. Soil profiles corresponding to various AASHTO soil classes were selected from actual data for such bridges and/or from the literature. The selected soil profiles were subjected to 1-D wave propagation analysis to determine effective values of soil shear modulus and damping ratio for a suite of properly selected actual seismic ground motions varying in PGA from 0.01g to 0.64g and having variable velocity and frequency content. The effective values of the soil parameters were then employed to determine the dynamic impedance of pile groups in the horizontal, vertical, and rocking modes in the various soil profiles. Pile diameter was kept constant across soil profiles, while pile length and number of piles were changed based on AASHTO design requirements for the various soil profiles and earthquake ground motions. Conclusions were drawn regarding the variability in effective soil shear modulus, soil damping, shear wave velocity, and pile group impedance for various soil profiles and ground motions, and their implications for the design and evaluation of pile-supported bridges. It was found that even though the effective soil parameters underwent drastic variation with increasing PGA, the pile group impedance was not affected much in properly designed pile foundations, owing to the corresponding increase in pile length, in the number of piles, or both when subjected to increasing PGA or founded in weaker soil profiles.
Keywords: bridge, pile foundation, dynamic foundation impedance, soil profile, shear wave velocity, seismic ground motion, seismic wave propagation
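The effective shear modulus central to this analysis follows directly from the shear wave velocity of each layer, G_max = rho * Vs^2. A small sketch of that relation with entirely hypothetical layer values (the study's actual profiles and equivalent-linear iteration are not shown):

```python
import numpy as np

# Illustrative layered profile (hypothetical values, not the study's data):
thickness = np.array([5.0, 10.0, 15.0])   # layer thicknesses, m
vs = np.array([180.0, 280.0, 420.0])      # shear wave velocities, m/s
rho = np.array([1800.0, 1900.0, 2000.0])  # mass densities, kg/m^3

# Travel-time-averaged shear wave velocity of the 30 m profile:
vs_avg = thickness.sum() / np.sum(thickness / vs)
# Low-strain shear modulus of each layer, G_max = rho * Vs^2 (Pa):
g_max = rho * vs ** 2
print(f"average Vs = {vs_avg:.0f} m/s, G_max (MPa) = {(g_max / 1e6).round(0)}")
```

In an equivalent-linear analysis of the kind described, these low-strain moduli would be reduced iteratively to strain-compatible effective values as the PGA of the input motion increases.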
Procedia PDF Downloads 325
33 Q Slope Rock Mass Classification and Slope Stability Assessment Methodology Application in Steep Interbedded Sedimentary Rock Slopes for a Motorway Constructed North of Auckland, New Zealand
Authors: Azariah Sosa, Carlos Renedo Sanchez
Abstract:
The development of a new motorway north of Auckland (New Zealand) includes steep rock cuts, from 63 up to 85 degrees, in an interbedded sandstone and siltstone rock mass of the geological unit Waitemata Group (Pakiri Formation), which shows sub-horizontal bedding planes, various sub-vertical joint sets, and a diverse weathering profile. In this kind of rock mass, which can be classified as a weak rock, the definition of the maximum stable geometry is not governed only by the discontinuities and defects evident in the rock; it is also important to consider the global stability of the rock slope, including in the analysis the rock mass characterisation, the influence of groundwater, the geological evolution, and the weathering processes. Depending on the weakness of the rock and the processes it has undergone, global stability could, in fact, be a more restrictive element than the potential instability of individual blocks along discontinuities. This paper discusses the elements that govern the stability of rock slopes constructed in a rock formation with a favourable bedding and distribution of discontinuities (horizontal and vertical) but with weak behaviour in terms of global rock mass characterisation. In this context, classifications such as Q-slope and the slope stability assessment methodology (SSAM) have been demonstrated to be important tools which complement the assessment of global stability, together with the analytical tools related to wedge-type failures and limit equilibrium methods. The paper focuses on the applicability of these two new empirical classifications for evaluating slope stability in 18 already excavated rock slopes in the Pakiri Formation, through comparison between the predicted and observed stability issues and by reviewing the outcome of analytical methods (the Rocscience slope stability software suite) against the expected stability determined from these rock classifications. This exercise will help validate the findings and correlations arising from the two empirical methods, in order to adjust the methods to the nature of this specific kind of rock mass and provide a better understanding of the long-term stability of the slopes studied.
Keywords: Pakiri Formation, Q-slope, rock slope stability, SSAM, weak rock
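For orientation, the Q-slope check reduces to one rating product and one angle formula, beta = 20*log10(Q-slope) + 65 degrees, following Barton and Bar's published formulation. A sketch with hypothetical ratings for an interbedded sandstone/siltstone cut (not values from this paper):

```python
import math

def q_slope(rqd, jn, jr, ja, o_factor, jwice, srf_slope):
    """Q-slope rating (Barton & Bar): (RQD/Jn) * (Jr/Ja)*O * (Jwice/SRFslope);
    o_factor discounts Jr/Ja for unfavourable discontinuity orientation."""
    return (rqd / jn) * (jr / ja) * o_factor * (jwice / srf_slope)

def max_stable_angle(q):
    """Steepest long-term stable slope angle, degrees: 20*log10(Q) + 65."""
    return 20.0 * math.log10(q) + 65.0

# Hypothetical ratings for an interbedded sandstone/siltstone cut:
q = q_slope(rqd=60, jn=9, jr=1.5, ja=2.0, o_factor=1.0, jwice=0.7, srf_slope=2.5)
print(f"Q-slope = {q:.2f}, predicted stable angle ~ {max_stable_angle(q):.0f} deg")
```

For these hypothetical ratings the predicted limit is roughly 68 degrees, which illustrates why cuts as steep as 85 degrees would demand the complementary analytical verification the paper describes.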
Procedia PDF Downloads 208
32 Beyond Inclusion: The Need for Health Equity for Women with Disabilities
Authors: Jaishree Ellis
Abstract:
The United States Centers for Disease Control reports that many women with disabilities do not receive regular health screenings, including Pap smears and mammograms. This article identifies the barriers to care, the gaps in existing healthcare implementation, and viable methodologies for the provision of comprehensive and robust gynecologic care for women with disabilities. According to the World Health Organization, 15% of the world's population, or approximately 1 billion people, have disabilities, most of whom are identified as women. Women with disabilities are described as being multi-disabled, as in some places they suffer exclusion because of their disabilities as well as their gender. The paucity of information regarding how to create a healthcare system that is inclusive of every woman, regardless of her type of disability (physical, mental, intellectual, or medical), has made it challenging to establish an environment that makes it possible for individuals to access care in an equitable, respectful, and comprehensive way. A review of the current literature, institutional websites within the United States, and American resource guides was conducted to determine where comprehensive models of care for women with disabilities exist, as well as the modalities being employed to meet their healthcare needs. The many barriers to care that women with disabilities face were also extracted from various sources within the literature to provide an exhaustive list that can be tackled one by one. Of the 637 hospital systems in the United States, only 7 provide website documentation of healthcare services that address the unique needs of women with disabilities. The presumption is that if institutions have not marketed such interventions to the community, then it is likely that they do not have a robust suite of services with which to make gynecologic care available to patients with disabilities. Through this review, 7 main barriers to comprehensive gynecologic care were identified, with more than 20 sub-categories existing within those. As with many other areas of community life, inclusion remains lacking in the delivery of healthcare for women with disabilities. There are at least 7 barriers that must be overcome in order to provide equity in the medical office, the exam room, the hospital, and the operating room. While few institutions have prioritized this, those few have provided blueprints that can easily be adopted by others. As the general population lives longer and ages, the incidence of disabilities increases, as do the healthcare disparities surrounding them. This is further compounded by a lack of formal education for medical providers in the United States.
Keywords: health equity, inclusion, healthcare disparities, education
Procedia PDF Downloads 55
31 Attention Deficit Hyperactivity Disorder and Criminality: A Psychological Profile of Convicts Serving Prison Sentences
Authors: Agnieszka Nowogrodzka
Abstract:
Objectives: ADHD is a neurodevelopmental disorder whose symptoms are most prominent throughout childhood. In the longer term, these symptoms, as well as the behaviour of the child, the experiences arising from the response of the community to the child's symptoms, and the functioning of the community itself, all contribute to the onset of secondary symptoms and subsequent outcomes of the disorder, such as crime or mental disorders. The purpose of this study is to estimate the prevalence of ADHD among Polish convicts serving a prison sentence. To that end, the study focuses on the relationship between the severity of ADHD and early childhood trauma, family relations, maladaptive cognitive schemas, and mental disorders. It is an attempt to assess the interdependence between ADHD, childhood experiences, and secondary outcomes. Methods: The study enrolled two groups of first-time convicts and repeat offenders aged between 21 and 65; each of the study groups comprised 120 participants, so 240 participants in total took part in the study. Participants were recruited in semi-open penal institutions in Poland (Poznań Custody Suite, Wronki Penal Institution, Iława Penal Institution). The control group comprised 110 men without criminal records aged 21 to 65. The DIVA 5.0 questionnaire was employed to identify the severity of ADHD symptoms. Other questionnaires employed in the course of the study included the Childhood Trauma Questionnaire (CTQ), the Family Adaptability and Cohesion Scale IV (FACES-IV), the Young Schema Questionnaire (YSQ), and the General Health Questionnaire (GHQ-30). Results: The findings of the study are currently being compiled and will be shared during the conference. The findings of a pilot study involving two cohorts of convicts (each numbering 20 men) and a control group (20 men with no criminal records) indicate a significant correlation between ADHD and the experience of early childhood trauma. The severity of ADHD also shows a correlation with the assessment of family functioning, with the subjects assessing the relationships in their families more negatively than the control group. Furthermore, the severity of ADHD is correlated with maladaptive emotional schemas manifesting in the participants. The findings also show a correlation between selected dimensions and the severity of offenses.
Keywords: ADHD, social impairments, mental disorders, early childhood traumas, criminality
Procedia PDF Downloads 93
30 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System
Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew
Abstract:
This study aims to assess the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by Lock Block Ltd. of Vancouver, Canada, as an extension of their currently used retaining wall system. The characterization of the seismic behavior of these structures is performed by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests, as well as a program of shake table testing, were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake table tests applying the ground motions in just one direction (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determined the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulse found to be the dominant factor. In all cases, the arches exhibited the typical four-hinge failure mechanism, which was also shown in the analytical model. Experimental testing was also performed with the arches reinforced using a steel band placed over the structures and anchored at both ends of the arch. The models were tested with different pretension levels. The bands were instrumented with strain gauges to measure the force produced by the shaking. These forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using the 3DEC software. The blocks were modeled as rigid blocks, with all the properties assigned to the joints, including the contribution of the interlocking shear key between blocks. The model is calibrated to the experimental static tests and validated against the results obtained from the dynamic tests. The model can then be used to scale up the results to the full-scale structure and to extend them to different configurations and boundary conditions.
Keywords: arch, discrete element model, seismic assessment, shake-table testing
Procedia PDF Downloads 207
29 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms, all utilizing common data acquisition, data processing, and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system providing immediate access to data and meta-data for remote processing, analysis, and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
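The dipole modeling used for target classification rests on the standard point-dipole field equation. A minimal forward-calculation sketch with a hypothetical target and sensor geometry (the paper's inversion and array processing are not reproduced):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Flux density (tesla) at displacement r (m) from a point dipole of
    moment m (A*m^2): B = mu0/(4*pi) * (3*r_hat*(m.r_hat) - m) / |r|^3."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi) * (3 * r_hat * np.dot(m, r_hat) - m) / d ** 3

# Hypothetical UXO-like target with a 1 A*m^2 moment, sensor 2 m away:
b = dipole_field(m=[0.0, 0.0, 1.0], r=[1.0, 0.0, np.sqrt(3.0)])
print(f"total-field anomaly ~ {np.linalg.norm(b) * 1e9:.1f} nT")
```

Classification then amounts to fitting the moment vector and position that best reproduce the anomalies measured across the array, a tens-of-nanotesla signal in this toy geometry.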
Procedia PDF Downloads 465
28 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
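The path-context representation behind Code2Vec pairs two leaf tokens with the AST path connecting them through their lowest common ancestor. A toy sketch using Python's own ast module (the study parses Java and C++, so this illustrates the representation rather than the authors' pipeline):

```python
import ast
import itertools

def leaves_with_paths(tree):
    """Yield (token, path-of-node-types-from-root) for Name/Constant leaves."""
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, (ast.Name, ast.Constant)):
            token = node.id if isinstance(node, ast.Name) else repr(node.value)
            yield token, path
            return
        for child in ast.iter_child_nodes(node):
            yield from walk(child, path)
    yield from walk(tree, [])

def path_contexts(source):
    """Pair every two leaves through their lowest common ancestor (LCA);
    the shared prefix of node-type names approximates the true LCA."""
    leaves = list(leaves_with_paths(ast.parse(source)))
    for (t1, p1), (t2, p2) in itertools.combinations(leaves, 2):
        pairs = zip(p1, p2)
        common = sum(1 for _ in itertools.takewhile(lambda ab: ab[0] == ab[1], pairs))
        common = min(common, len(p1) - 1, len(p2) - 1)  # keep the LCA internal
        path = p1[common:][::-1] + [p1[common - 1]] + p2[common:]
        yield t1, "^".join(path), t2

print(list(path_contexts("def f(x):\n    return x + 1")))
# -> [('x', 'Name^BinOp^Constant', '1')]
```

Code2Vec then embeds each (token, path, token) triple and aggregates the triples with attention into a single code vector used by the downstream classifier.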
Procedia PDF Downloads 109
27 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation
Authors: Mohammad Abu-Shaira, Weishi Shi
Abstract:
Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, assessing their performance across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after drifts have significantly degraded model performance. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models addressing concept drift.
Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression
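To make the detection and recalibration loop concrete, the following is a generic window-based sketch operating on a stream of prediction errors; it illustrates the pattern only and is not the paper's LSTM-SCCM:

```python
from collections import deque
import random
import statistics

class DriftMonitor:
    """Window-based drift detector sketch: compares recent prediction errors
    against a frozen reference window and recalibrates when drift is flagged."""
    def __init__(self, window=50, threshold=3.0):
        self.reference = deque(maxlen=window)    # baseline error window
        self.recent = deque(maxlen=window // 5)  # most recent errors
        self.threshold = threshold               # drift magnitude cutoff

    def update(self, error):
        if len(self.reference) < self.reference.maxlen:
            self.reference.append(error)         # still building the baseline
            return False
        self.recent.append(error)
        if len(self.recent) < self.recent.maxlen:
            return False
        mu = statistics.mean(self.reference)
        sigma = statistics.stdev(self.reference) or 1e-9
        magnitude = (statistics.mean(self.recent) - mu) / sigma  # drift size
        if magnitude > self.threshold:
            # Recalibrate: adopt post-drift errors as the new baseline.
            self.reference = deque(self.recent, maxlen=self.reference.maxlen)
            self.recent.clear()
            return True
        return False

random.seed(0)
monitor = DriftMonitor()
for t in range(300):
    err = random.random() + (1.0 if t >= 200 else 0.0)  # abrupt drift at t=200
    if monitor.update(err):
        print(f"drift flagged at step {t}")
```

Here the magnitude value plays the role of drift quantification: a framework like the one described could route small excursions to hyperparameter tuning and large ones to full model recalibration.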
Procedia PDF Downloads 17
26 Comparative Study on Fire Safety Evaluation Methods for External Cladding Systems: ISO 13785-2 and BS 8414
Authors: Kyungsuk Cho, H. Y. Kim, S. U. Chae, J. H. Choi
Abstract:
Technological development has led to the construction of super-tall buildings, and insulators are increasingly used as exterior finishing materials to save energy. However, insulators are usually combustible and vulnerable to fire. The fires at the Wooshin Golden Suite Building in Busan, Korea in 2010 and at the CCTV Building in Beijing, China are major examples of fire spread accelerated by combustible insulators. The exterior finishing materials of a high-rise building are not made of insulators alone; they are integrated into the building's external cladding system. There is a limit to evaluating the fire safety of a cladding system with a single small-unit material test such as the cone calorimeter. Therefore, countries provide codes to evaluate the fire safety of exterior finishing materials using full-scale tests. This study compares two such full-scale fire safety evaluation methods for external cladding systems, ISO 13785-2 and BS 8414.
Keywords: external cladding systems, fire safety evaluation, ISO 13785-2, BS 8414
Procedia PDF Downloads 242
25 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs, and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures consisting of four modular segments: 1) stochastic event sets that represent the statistics of past events; 2) hazard attenuation functions that model the local intensity; 3) vulnerability functions that relate the repair need of local buildings exposed to the hazard; and 4) a financial module addressing policy conditions, which estimates the resulting losses. The events module comprises events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a superstratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel, SRC (steel-reinforced concrete), and high-rise buildings.
Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
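The four segments chain naturally into an event-loss simulation. A toy sketch of that chain, in which every distribution, attenuation curve, damage function, and policy term is a hypothetical placeholder and nothing from the KCC models:

```python
import random
random.seed(42)

def sample_event():
    """Stochastic event set: draw a magnitude and distance like a catalog."""
    return {"mag": random.uniform(6.0, 8.0), "dist_km": random.uniform(5, 100)}

def hazard(event):
    """Attenuation: local intensity (PGA, g) growing with magnitude and
    decaying with distance."""
    return 0.1 * 10 ** (0.3 * (event["mag"] - 6.0)) / (1 + event["dist_km"] / 10)

def vulnerability(pga):
    """Damage function: repair need as a fraction of replacement value."""
    return min(1.0, max(0.0, 1.2 * pga - 0.05))

def financial(ground_up, deductible=50_000, limit=2_000_000):
    """Policy conditions: deductible and limit applied to the ground-up loss."""
    return min(max(ground_up - deductible, 0.0), limit)

replacement_value = 5_000_000
losses = sorted((financial(vulnerability(hazard(sample_event())) * replacement_value)
                 for _ in range(10_000)), reverse=True)
print(f"1-in-100-year loss ~ {losses[99]:,.0f}")
```

Sorting the simulated losses and reading off the 100th largest of 10,000 (assuming one event per simulated year) gives the 0.01 annual-probability loss of the kind referred to above.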
Procedia PDF Downloads 271
24 Collaborative Governance in Dutch Flood Risk Management: An Historical Analysis
Authors: Emma Avoyan
Abstract:
The safety standards for flood protection in the Netherlands have been revised recently. It is expected that all major flood-protection structures will have to be reinforced to meet the new standards. The Dutch Flood Protection Programme aims to accomplish this task through innovative integrated projects such as the construction of multi-functional flood defenses. In these projects, flood safety purposes will be combined with spatial planning, nature development, emergency management, or other sectoral objectives. Therefore, the implementation of dike reinforcement projects requires early involvement and collaboration between public and private sectors and different governmental actors and agencies. The development and implementation of such integrated projects has long been an issue in Dutch flood risk management. This article therefore analyses how cross-sector collaboration within flood risk governance in the Netherlands has evolved over time, and how this development can be explained. The integrative framework for collaborative governance is applied as an analytical tool to map the external factors framing possibilities as well as constraints for cross-sector collaboration in the Dutch flood risk domain. Supported by an extensive document and literature analysis, the paper offers insights into how the system context and different drivers, changing over time, either promoted or hindered cross-sector collaboration between the flood protection sector, urban development, nature conservation, or any other sector involved in flood risk governance. The system context refers to the multi-layered and interrelated suite of conditions that influence the formation and performance of complex governance systems, such as collaborative governance regimes, whereas the drivers initiate and enable the overall process of collaboration. In addition, by applying a method of process tracing, we identify a causal and chronological chain of events shaping cross-sectoral interaction in Dutch flood risk management. Our results indicate that in order to evaluate the performance of complex governance systems, it is important to first study the system context that shapes them. A clear understanding of the system conditions and drivers for collaboration gives insight into the possibilities of and constraints on the effective performance of complex governance systems. The performance of the governance system is affected by the system conditions, while at the same time the governance system can also change the system conditions. Our results show that the sequence of changes within the system conditions and drivers over time affects how cross-sector interaction in the Dutch flood risk governance system happens now. Moreover, we have traced the potential of this governance system to shape and change its system context.
Keywords: collaborative governance, cross-sector interaction, flood risk management, the Netherlands
Procedia PDF Downloads 132
23 Restoring Ecosystem Balance in Arid Regions: A Case Study of a Royal Nature Reserve in the Kingdom of Saudi Arabia
Authors: Talal Alharigi, Kawther Alshlash, Mariska Weijerman
Abstract:
The government of Saudi Arabia has developed an ambitious "Vision 2030", which includes a Green Initiative (i.e., the planting of 10 billion trees) and the establishment of seven Royal Reserves as protected areas that comprise 13% of the total land area. The main objective of the reserves is to restore ecosystem balance and reconnect people with nature. Two royal reserves, the Imam Abdulaziz bin Mohammed Royal Reserve and the King Khalid Royal Reserve, are managed by the Imam Abdulaziz bin Mohammed Royal Reserve Development Authority. The authority has developed a management plan to enhance the habitat through seed dispersal and the planting of 10 million trees, and to restock wildlife that was once abundant in these arid ecosystems (e.g., oryx, Nubian ibex, gazelles, red-necked ostrich). Expectations are that with the restoration of the native vegetation, soil condition and natural hydrologic processes will improve and lead to further enhancement of vegetation and, over time, an increase in the biodiversity of flora and fauna. To evaluate the management strategies in reaching these expectations, a comprehensive monitoring and evaluation program was developed. The main objectives of this program are to (1) monitor the status and trends of indicator species, (2) improve desert ecosystem understanding, (3) assess the effects of human activities, and (4) provide science-based management recommendations. Using a stratified random survey design, a diverse suite of survey methods will be implemented, including belt and quadrat transects, camera traps, GPS tracking devices, and drones. Data will be gathered on biotic parameters (plant and animal diversity, density, and distribution) and abiotic parameters (humidity, temperature, precipitation, wind, air and soil quality, vibrations, and noise levels) to meet the goals of the monitoring program. This case study intends to provide a detailed overview of the management plan and monitoring program of the two royal reserves and outlines the types of data gathered, which can be made available for future research projects.
Keywords: camera traps, desert ecosystem, enhancement, GPS tracking, management evaluation, monitoring, planting, restocking, restoration
Procedia PDF Downloads 118
22 Observational Versus Angioembolisation in Blunt Splenic Trauma: A Systematic Review
Authors: E. Gopi, E. Devaindran
Abstract:
Objective: Non-operative management of blunt splenic trauma has started to overtake traditional splenectomy in recent years across the grades of splenic injury. The two main non-operative methods are observation and angioembolisation. However, the post-management convalescence in these groups is still being investigated. The study attempts to quantify the clinical indicators in the two groups, in particular complications, mortality, conversion to operative management, and duration of inpatient stay. Methodology: A systematic search was done via PUBMED, MEDLINE, and EMBASE. A total of 639 articles were identified, of which 68 remained after removal of duplicates and application of the full-text, inclusion, and exclusion criteria. The main exclusions were non-English articles without English translation, pure observational or pure angioembolisation articles from which no comparison data could be extracted, and articles looking at purely haemodynamically unstable patients. Results: 24 non-randomized controlled trials, 5 clinical controlled trials, and 39 retrospective studies were analyzed, covering a total of 23,700 patients with blunt splenic trauma. Discrepancies in data were noted between the observational management and angioembolisation groups, in particular as data were compared among the classes of splenic rupture, the protocol of management in different centers, the availability of an angiogram suite, and the study design. Further variability was also noted in the angioembolisation arm, as the preference for treatment differs between distal and proximal splenic artery involvement. Overall, the cumulative mortality in the observational and angioembolisation groups was similar, 2.78% and 5.97% respectively. The cause of death, however, was not directly attributed to the management itself but rather to patient comorbidities, other associated injuries, and conversions to splenectomy leading to post-splenectomy complications. The cumulative morbidity in each group was approximately the same, 12% in the observational versus 15% in the angioembolisation group. However, the type of complications varied, with the observational group having higher rates of prolonged inpatient stay and intra-abdominal hematoma infection and the angioembolisation group developing more splenic infarcts and bleeds. There was significant disparity in the reporting of actual data on duration of inpatient stay and complications, preventing a statistically significant quantitative analysis; 15 articles, however, are currently being considered. Conclusions: Observational management appears to be more effective in managing lower-grade splenic trauma (grades 1 and 2), whereas angioembolisation appears to play a bigger role in intermediate grades (grades 3-4) in ensuring preservation of splenic function. Care has to be taken in the angioembolisation group, however, in view of distal splenic infarcts compromising splenic function. The cumulated data of 15 articles are now being considered for a meta-analysis.
Keywords: blunt splenic trauma, conservative, non-operative, angioembolisation
Procedia PDF Downloads 267
21 Strategic Metals and Rare Earth Elements Exploration of Lithium Cesium Tantalum Type Pegmatites: A Case Study from Northwest Himalayas
Authors: Auzair Mehmood, Mohammad Arif
Abstract:
The LCT (Li-, Cs-, and Ta-rich) type pegmatites, genetically related to peraluminous S-type granites, are mined for strategic metals (SMs) and rare earth elements (REEs) around the world. This study investigates the SM and REE potential of pegmatites that are spatially associated with an S-type granitic suite of the Himalayan sequence, specifically the Mansehra Granitic Complex (MGC), northwest Pakistan. Geochemical signatures of the pegmatites and some of their mineral extracts were analyzed using inductively coupled plasma mass spectrometry (ICP-MS) to explore and generate potential prospects (if any) for SMs and REEs. In general, the REE patterns of the studied whole-rock pegmatite samples show a tetrad effect and possess low total REE abundances, strong positive europium (Eu) anomalies, weak negative cesium (Cs) anomalies, and relative enrichment in heavy REE. Similar features are observed in the REE patterns of the feldspar extracts. However, the REE patterns of the muscovite extracts reflect preferential enrichment and possess negative Eu anomalies. The trace element evaluation further suggests that the MGC pegmatites have undergone low levels of fractionation. Various trace element concentrations (and their ratios), including Ta versus Cs, K/Rb (potassium/rubidium) versus Rb, and Th/U (thorium/uranium) versus K/Cs, were used to analyze the economically viable mineral potential of the studied rocks. On most of the plots, concentrations fall below the dividing line, indicating either barren or low-level mineralization potential of the studied rocks for both SMs and REEs. The results demonstrate the paucity of the MGC pegmatites with respect to Ta-Nb (tantalum-niobium) mineralization, which is in sharp contrast to many Pan-African S-type granites around the world. The MGC pegmatites are classified as muscovite pegmatites based on their K/Rb versus Cs relationship. This classification is consistent with the occurrence of rare accessory minerals like garnet, biotite, tourmaline, and beryl. Furthermore, the classification corroborates an earlier sorting of the MGC pegmatites into muscovite-bearing, biotite-bearing, and subordinate muscovite-biotite types. These types of pegmatites lack any significant SM and REE mineralization potential. Field relations, such as a close spatial association with the parent granitic rocks and the absence of an internal zonation structure, also reflect the barren character and hence the lack of any potential prospects of the MGC pegmatites.
Keywords: exploration, fractionation, Himalayas, pegmatites, rare earth elements
Procedia PDF Downloads 205
20 Applications of Multi-Path Futures Analyses for Homeland Security Assessments
Authors: John Hardy
Abstract:
A range of future-oriented intelligence techniques is commonly used by states to assess their national security and develop strategies to detect and manage threats, to develop and sustain capabilities, and to recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations which inform their planning, there have been relatively few studies of the methods available or their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks for the purpose of analyzing homeland security issues. Within this study, MPFA are categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby generate insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found that there are five key applications of MPFA which add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second application is identifying homeland security vulnerabilities and relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third application is selecting appropriate resources and methods of action to influence individual drivers. The fourth application is prioritizing and optimizing path selection preferences and decisions. The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective on a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts' ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.
Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning
Procedia PDF Downloads 139
19 Broad Survey of Fine Root Traits to Investigate the Root Economic Spectrum Hypothesis and Plant-Fire Dynamics Worldwide
Authors: Jacob Lewis Watts, Adam F. A. Pellegrini
Abstract:
Prairies, grasslands, and forests cover an expansive portion of the world's surface and contribute significantly to Earth's carbon cycle. The largest driver of carbon dynamics in some of these ecosystems is fire. As the global climate changes, most fire-dominated ecosystems will experience increased fire frequency and intensity, leading to increased carbon flux into the atmosphere and soil nutrient depletion. The plant communities associated with different fire regimes are important for the reassimilation of carbon lost during fire and for soil recovery. More frequent fires promote conservative plant functional traits aboveground; however, belowground fine-root traits are poorly explored and arguably more important drivers of ecosystem function, as the primary interface between soil and plant. The root economic spectrum (RES) hypothesis describes single-dimensional covariation between important fine-root traits along a range of plant strategies from acquisitive to conservative, parallel to the well-established leaf economic spectrum (LES). However, because of the paucity of root trait data, the complex nature of the rhizosphere, and the phylogenetic conservatism of root traits, it is unknown whether the RES hypothesis accurately describes plant nutrient and water acquisition strategies. This project utilizes plants grown in common garden conditions in the Cambridge University Botanic Garden and a meta-analysis of long-term fire manipulation experiments to examine the belowground physiological traits of fire-adapted and non-fire-adapted herbaceous species, in order to 1) test the RES hypothesis and 2) describe the effect of fire regimes on fine-root functional traits, which in turn affect carbon and nutrient cycling. A suite of morphological, chemical, and biological root traits (e.g., root diameter, specific root length, percent N, percent mycorrhizal colonization) of 50 herbaceous species were measured and tested for phylogenetic conservatism and RES dimensionality. Fire-adapted and non-fire-adapted plant traits were compared using phylogenetic PCA techniques. Preliminary evidence suggests that phylogenetic conservatism may weaken the single-dimensionality of the RES, suggesting that there may not be a single way that plants optimize nutrient and water acquisition and storage in the complex rhizosphere; additionally, fire-adapted species are expected to be more conservative than non-fire-adapted species, which may be indicative of slower carbon cycling with increasing fire frequency and intensity.
Keywords: climate change, fire regimes, root economic spectrum, fine roots
Procedia PDF Downloads 124
18 Facies Sedimentology and Astronomic Calibration of the Reinech Member (Lutetian)
Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich
Abstract:
The Upper Lutetian alternating marl-limestone succession of the Reineche Member was deposited on a warm shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30-meter-thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken on its pronounced cyclical sedimentary sequences, in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic data are preserved in several proxies obtainable through high-resolution sampling and laboratory measurement and analysis, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time series analysis of the proxies permits establishing the cyclicity orders present in the studied intervals, which can be linked to the orbital cycles. MS records provide high-resolution proxies for relative sea level change in the Late Lutetian strata. The spectral analysis of MS fluctuations confirmed the orbital forcing by the presence of the complete suite of orbital frequencies: the 23 ka precession, the 41 ka obliquity, and notably the two modes of eccentricity at 100 and 405 ka. Based on the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spanned 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to contribute to identifying and correlating units. They are used to constrain the highest-frequency cyclicity, modulated by a long-term-wavelength cycling apparently controlled by clay content. Interpreted as the result of variations in carbonate productivity, it has been suggested that the marl-limestone couplets represent the sedimentary response to the orbital forcing. The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits the astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate contents and fossil occurrences provides strong evidence for combined detrital input and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and 'fingerprinted' in the basic marl-limestone couplets, modulated by orbital eccentricity.
Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian
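The spectral step amounts to a periodogram of the MS series once depth is converted to time. A minimal sketch on a synthetic series, assuming a uniform 2 ka sample spacing (the study's actual record, sedimentation rates, and wavelet method are not reproduced):

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic MS series carrying 405 ka and 100 ka eccentricity, 41 ka
# obliquity, and 23 ka precession signals; the 2 ka spacing is hypothetical.
dt = 2.0                                   # sample spacing, ka
t = np.arange(0, 800, dt)                  # a 0.8 Myr record, as estimated above
ms = sum(a * np.sin(2 * np.pi * t / p)
         for a, p in [(1.0, 405), (0.8, 100), (0.5, 41), (0.4, 23)])
ms = ms + 0.3 * np.random.default_rng(0).standard_normal(t.size)  # noise

freqs, power = periodogram(ms, fs=1.0 / dt)          # frequency in cycles/ka
top = sorted(zip(freqs[1:], power[1:]), key=lambda fp: -fp[1])[:4]
for f, p in sorted(top):
    print(f"period ~ {1.0 / f:.0f} ka")              # Milankovitch bands
```

With only two 405 ka cycles in a 0.8 Myr record, the longest peak is poorly resolved, which is one reason studies of this kind complement the periodogram with wavelet analysis.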
Procedia PDF Downloads 294
17 STML: Service Type-Checking Markup Language for Services of Web Components
Authors: Saqib Rasool, Adnan N. Mian
Abstract:
Web components are introduced as the latest standard of HTML5 for writing modular web interfaces, ensuring maintainability through the isolated scope of web components. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS, and JavaScript code as a standalone package which must be imported for integrating the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type-checking, one of the popular solutions for improving the quality of code in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML) for adding type-checking support to HTML for JSON-based REST services. STML can be used for defining the expected data types of responses from JSON-based REST services, which will be used for populating the content within the HTML elements of a web component. Although JSON has five data types, viz. string, number, boolean, object, and array, STML supports only string, number, and boolean. This is because both object and array are treated as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number, or st-boolean for string, number, and boolean respectively. These STML annotations are added by the developer writing a web component, and they enable other developers to use automated type-checking for ensuring the proper integration of their REST services with the same web component. Two utilities have been written for developers who are using STML-based web components. One of these utilities is used for automated type-checking during the development phase. It uses the browser console for showing an error description if an integrated web service is not returning a response with the expected data type. The other utility is a Gulp-based command line utility for removing the STML attributes before going into production. This ensures the delivery of STML-free web pages in the production environment. Both of these utilities have been tested for type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML only. Currently, STML is designed for automated type-checking of integrated REST services, but it can be extended to introduce a complete service testing suite based on HTML only, which would transform STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
Keywords: REST, STML, type checking, web component
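To illustrate the annotation scheme, the sketch below re-creates the development-phase check in Python rather than in the browser: it reads the st-* attributes out of a markup snippet and validates a JSON response against them. The attribute names follow the abstract; the markup, field names, and checking logic are invented for illustration:

```python
import json
import re

markup = """
<user-card>
  <span st-string="name"></span>
  <span st-number="age"></span>
  <span st-boolean="active"></span>
</user-card>
"""

EXPECTED = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

def type_check(markup, response):
    """Report fields whose JSON type disagrees with their st-* annotation."""
    errors = []
    for attr, field in re.findall(r'(st-\w+)="(\w+)"', markup):
        value = response.get(field)
        # Note: in Python bool is a subclass of int, so a strict checker
        # would special-case booleans before the st-number test.
        if not isinstance(value, EXPECTED[attr]):
            errors.append(f"{field}: expected {attr}, got {type(value).__name__}")
    return errors

response = json.loads('{"name": "Ada", "age": "thirty", "active": true}')
print(type_check(markup, response))  # ['age: expected st-number, got str']
```

In the actual STML tooling this check runs in the browser against the live REST response, reporting mismatches on the console during development.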
Procedia PDF Downloads 255
16 Exploring Neural Responses to Urban Spaces in Older People Using Mobile EEG
Authors: Chris Neale, Jenny Roe, Peter Aspinall, Sara Tilley, Steve Cinderby, Panos Mavros, Richard Coyne, Neil Thin, Catharine Ward Thompson
Abstract:
This research directly assesses older people's neural activation in response to walking through a changing urban environment, as measured by electroencephalography (EEG). As the global urban population is predicted to grow, there is a need to understand the role that the urban environment may play in the health of its older inhabitants. There is a large body of evidence suggesting green space has a beneficial restorative effect, but this effect remains largely understudied both in older people and through neuroimaging assessment. For this study, participants aged 65 years and over were required to walk between a busy urban built environment and a green urban environment, in a counterbalanced design, wearing an Emotiv EEG headset to record real-time neural responses to place. Here we report on the outputs for these responses derived both from the proprietary Affectiv Suite software, which creates emotional parameters with a real-time value assigned to them, and from the raw EEG output, focusing on alpha and beta changes, associated with changes in relaxation and attention respectively. Each walk lasted around fifteen minutes and was undertaken at the natural walking pace of the participant. The two walking environments were compared using a form of high-dimensional correlated component regression (CCR) on difference data between the urban busy and urban green spaces. For the Emotiv parameters, results showed that levels of 'engagement' increased in the urban green space (with a corresponding decrease in the urban busy built space), whereas levels of 'excitement' increased in the urban busy environment (with a corresponding decrease in the urban green space). In the raw data, low beta (13-19 Hz) increased in the urban busy space, with a corresponding decrease shown in the green space, similar to the pattern shown by the 'excitement' result. Alpha activity (9-13 Hz) shows a correlation with low beta, but not with the dependent change in the regression model, suggesting that alpha is acting as a suppressor variable. These results suggest that there are neural signatures associated with the experience of urban spaces which may reflect the age of the cohort or the spatiality of the settings themselves. They appear both in the outputs of the proprietary software and in the raw EEG output. Busy built urban spaces appear to induce neural activity associated with vigilance and low-level stress, while this effect is ameliorated in the urban green space, potentially suggesting a beneficial effect on attentional capacity in urban green space for this participant group. The interaction between low beta and alpha requires further investigation, in particular the role of alpha in this relationship.
Keywords: ageing, EEG, green space, urban space
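The alpha and low-beta measures above are band powers of the EEG spectrum. A minimal sketch of their computation on a synthetic single channel, assuming the 128 Hz sampling rate typical of consumer headsets such as the Emotiv (the study's preprocessing and CCR step are not shown):

```python
import numpy as np
from scipy.signal import welch

# Synthetic one-channel EEG; amplitudes and noise level are arbitrary.
fs = 128.0
t = np.arange(0, 60, 1 / fs)                       # one minute of signal
rng = np.random.default_rng(1)
eeg = (10 * np.sin(2 * np.pi * 10 * t)             # alpha component (10 Hz)
       + 4 * np.sin(2 * np.pi * 16 * t)            # low-beta component (16 Hz)
       + 5 * rng.standard_normal(t.size))          # broadband noise

def band_power(signal, fs, lo, hi):
    """Integrate the Welch power spectral density between lo and hi Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

print(f"alpha (9-13 Hz):     {band_power(eeg, fs, 9, 13):.1f}")
print(f"low beta (13-19 Hz): {band_power(eeg, fs, 13, 19):.1f}")
```

Differencing such band powers between the two walking environments, per participant, yields the kind of data fed into the CCR comparison described above.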
Procedia PDF Downloads 226
15 The Internet of Things in Luxury Hotels: Generating Customized Multisensory Guest Experiences
Authors: Jean-Eric Pelet, Erhard Lick, Basma Taieb
Abstract:
Purpose: This research bridges the gap between sensory marketing and the use of the Internet of Things (IoT) in luxury hotels. We investigated how stimulating guests' senses through IoT devices influenced their emotions, affective experiences, eudaimonism (well-being), and, ultimately, guest behavior. We also examined potential moderating effects of gender. Design/methodology/approach: We adopted a mixed method approach, combining qualitative research (semi-structured interviews) to explore hotel managers' perspectives on the potential use of IoT in luxury hotels with quantitative research (a survey of hotel guests; n=357). Findings: The results showed that while the senses of smell, hearing, and sight had an impact on guests' emotions, the senses of touch, hearing, and sight impacted guests' affective experiences. The senses of smell and taste influenced guests' eudaimonism. The sense of smell had a greater effect on eudaimonism and behavioral intentions among women compared to men. Originality: IoT can be applied in creating customized multi-sensory hotel experiences. For example, hotels may offer unique and diverse ambiences in their rooms and suites to improve guest experiences. Research limitations/implications: This study concentrated on luxury hotels located in Europe. Further research may explore the generalizability of the findings (e.g., in other cultures, or a comparison between high-end and low-end hotels). Practical implications: Context awareness and hyper-personalization, through intensive and continuous data collection (hyper-connectivity) and real-time processing, are key trends in the service industry. Therefore, big data plays a crucial role in the collection of information, since it allows hoteliers to retrieve, analyze, and visualize data to provide personalized services in real time. Together with their guests, hotels may co-create customized sensory experiences. For instance, if the hotel knows the guest's music preferences from social media, as well as their age and gender, and considers the temperature and size (standard, suite, etc.) of the guest room, this may determine the playlist of the concierge tablet made available in the guest room. Furthermore, one may record the guest's voice to use it for voice command purposes once the guest arrives at the hotel. Based on our finding that the sense of smell has a greater impact on eudaimonism and behavioral intentions among women than men, hotels may deploy subtler scents with lower intensities, or even different scents, for female guests in comparison to male guests.
Keywords: affective experience, emotional value, eudaimonism, hospitality industry, Internet of Things, sensory marketing
Procedia PDF Downloads 5814 Progressing Institutional Quality Assurance and Accreditation of Higher Education Programmes
Authors: Dominique Parrish
Abstract:
Globally, higher education institutions are responsible for the quality assurance and accreditation of their educational programmes (Courses). The primary purpose of these activities is to ensure that the educational standards of the governing higher education authority are met and that the quality of the education provided to students is assured. Despite policies and frameworks being established in many countries to improve the rigour and accountability of quality assurance and accreditation processes, there are reportedly still mistakes, gaps and deficiencies in these processes. An analysis of Australian universities’ quality assurance and accreditation processes noted that significant improvements were needed in managing these processes and in ensuring that review recommendations were implemented. It has also been suggested that the following principles are critical for higher education quality assurance and accreditation to be effective and sustainable: academic standards and performance outcomes must be defined, attainable and monitored; those involved in providing the higher education must assume responsibility for the associated quality assurance and accreditation; potential academic risks must be identified and management solutions developed; and the expectations of the public, governments and students should be considered and incorporated into Course enhancements. This phenomenological study, conducted in a Faculty of Science, Medicine and Health at an Australian university, sought to systematically and iteratively develop an effective quality assurance and accreditation process that integrated these evidence-based principles of success and promoted meaningful and sustainable change. Qualitative evaluative feedback was gathered over a period of eleven months (January–November 2014) from faculty staff engaged in the quality assurance and accreditation of forty-eight undergraduate and postgraduate Courses. Reflexive analysis was used to analyse the data and inform ongoing modifications and developments to the assurance and accreditation process as well as the associated supporting resources. The study resulted in the development of a formal quality assurance and accreditation process together with a suite of targeted resources that were identified as critical for success. The research findings also provided insights into the institutional enablers that were antecedents to successful quality assurance and accreditation processes as well as to meaningful change in the educational practices of academics. While longitudinal data will be collected to further assess the value of the assurance and accreditation process for educational quality, early indicators are that there has been a change in the pedagogical perspectives and activities of academic staff and growing momentum to explore opportunities to further enhance and develop Courses. This presentation will explain the formal quality assurance and accreditation process and its component parts, which resulted from this study. The targeted resources that were developed will be described, the pertinent factors that contributed to the success of the process will be discussed, and early indicators of sustainable academic change as well as suggestions for future research will be outlined.Keywords: academic standards, quality assurance and accreditation, phenomenological study, process, resources
Procedia PDF Downloads 37813 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis
Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos
Abstract:
Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate the structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI in a 3.0T MRI scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo-planar imaging sequences for the analysis of volumetric, tractography and resting-state functional data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity: fractional anisotropy [FA], axial and radial diffusivity [AD; RD]) and to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, together with the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: In conclusion, our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A future opportunity to leverage multimodal neuroimaging data remains the integration of such data into the increasingly applied machine learning approaches to more accurately classify and predict patients’ disease course.Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis
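For readers unfamiliar with the tractography metrics above, the sketch below computes FA, AD and RD from the three eigenvalues of a fitted diffusion tensor using their standard definitions. It illustrates the scalar measures only, not the Brainance processing pipeline, and the example eigenvalues are merely typical white-matter-like values.

```python
import numpy as np

def dti_scalars(eigenvalues):
    """Return (FA, AD, RD) from the three diffusion-tensor eigenvalues."""
    l1, l2, l3 = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    md = (l1 + l2 + l3) / 3.0                       # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))   # fractional anisotropy
    return fa, l1, (l2 + l3) / 2.0                  # AD = l1, RD = mean(l2, l3)

# illustrative eigenvalues in mm^2/s for healthy-looking white matter
fa, ad, rd = dti_scalars([1.7e-3, 0.4e-3, 0.3e-3])
```

Demyelination typically raises radial diffusivity (and hence lowers FA), which is consistent with the decreased FA and increased diffusivity reported in the patient groups above.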
Procedia PDF Downloads 14112 Beyond Personal Evidence: Using Learning Analytics and Student Feedback to Improve Learning Experiences
Authors: Shawndra Bowers, Allie Brandriet, Betsy Gilbertson
Abstract:
This paper will highlight how Auburn Online’s instructional designers leveraged student and faculty data to update and improve online course design and instructional materials. When designing and revising online courses, it can be difficult for faculty to know which strategies are most likely to engage learners and improve educational outcomes in a specific discipline. It can also be difficult to identify which metrics are most useful for understanding and improving teaching, learning, and course design. At Auburn Online, the instructional designers use a suite of data on students’ performance, participation, satisfaction, and engagement, as well as faculty perceptions, to inform sound learning and design principles that guide growth-mindset consultations with faculty. The consultations allow the instructional designer, along with the faculty member, to co-create an actionable course improvement plan. Auburn Online gathers learning analytics from a variety of sources that any instructor or instructional design team may have access to at their own institution. Participation and performance data, such as page views, assignment submissions, and aggregate grade distributions, are collected from the learning management system. Engagement data are pulled from the video hosting platform, including unique viewers, views and downloads, minutes delivered, and the average duration for which each video is viewed. Student satisfaction is obtained through a short survey embedded at the end of each instructional module; this survey is included in each course every time it is taught. The survey data are then analyzed by an instructional designer for trends and pain points in order to identify areas, such as course content and instructional strategies, that can be modified to better support student learning. This analysis, along with the instructional designer’s recommendations, is presented to instructors in a comprehensive report during an hour-long consultation in which instructional designers collaborate with the faculty member on how and when to implement improvements. Auburn Online has developed a triage strategy of priority 1 and priority 2 level changes to be implemented in future course iterations. This data-informed decision-making process helps instructors focus on what will work best in their teaching environment while addressing the areas that need additional attention. As a student-centered process, it has created improved learning environments for students and has been well received by faculty. It has also proven effective in addressing the need for improvement while avoiding the sense that a faculty member’s teaching is being personally attacked. The process that Auburn Online uses is laid out, along with the three-tier maintenance and revision guide that will be used over a three-year implementation plan. This information can help others determine which components of the maintenance and revision plan they want to utilize, as well as guide them in creating a similar approach. The data will be used to analyze, revise, and improve courses by providing recommendations and models of good practice, and by determining and disseminating best practices that demonstrate an impact on student success.Keywords: data-driven, improvement, online courses, faculty development, analytics, course design
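As a concrete illustration of the kind of aggregation described above, the sketch below rolls hypothetical LMS and video-platform exports up to per-module indicators that could seed a consultation report. All column names, values and the satisfaction threshold are invented for illustration; they are not Auburn Online's actual schema or criteria.

```python
import pandas as pd

# Hypothetical export: one row per student per instructional module
records = pd.DataFrame({
    "module":        [1, 1, 2, 2],
    "page_views":    [42, 17, 35, 8],
    "submitted":     [True, True, True, False],
    "video_minutes": [58.0, 12.5, 44.0, 3.0],
    "satisfaction":  [4, 3, 5, 2],   # end-of-module survey, 1-5 scale
})

summary = records.groupby("module").agg(
    mean_views=("page_views", "mean"),
    submission_rate=("submitted", "mean"),
    mean_video_min=("video_minutes", "mean"),
    mean_satisfaction=("satisfaction", "mean"),
)

# Flag modules below an (assumed) satisfaction threshold as pain points
pain_points = summary[summary["mean_satisfaction"] < 3.5]
print(pain_points)
```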
Procedia PDF Downloads 6211 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors, since the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land-use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive; in order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV) and other summary statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event, and the impact of seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals such as lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter. The CVs of the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two different sample-size approaches, i.e., 17 samples per storm event versus 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
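The surrogate-selection step above boils down to running PCA on standardized event-mean concentrations and reading off which parameters load together on the leading components. The sketch below reproduces that workflow on synthetic data; the parameter list and the random concentrations are placeholders, not the Geumhak measurements.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Synthetic event-mean concentrations: one row per sampled storm event
params = ["TSS", "turbidity", "TP", "Pb", "Cr", "Cu", "COD"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.lognormal(size=(30, len(params))), columns=params)

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(X))  # standardize, then decompose

# Parameters whose loading vectors align on the biplot are candidates
# for surrogacy (e.g. TSS standing in for turbidity and heavy metals)
loadings = pd.DataFrame(pca.components_.T, index=params,
                        columns=["PC1", "PC2"])
print(loadings.round(2))
```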
Procedia PDF Downloads 24110 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability
Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris
Abstract:
Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell anode is susceptible to degradation by impurities in the hydrogen fuel, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; these include reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without the necessary reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements that allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods developed by NPL’s hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper. As part of the EMPIR Hydrogen co-normative project ‘Metrology for sustainable hydrogen energy applications’, NPL is developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a ‘total sulphur’ measurement). This is achieved by producing a suite of stable, gravimetrically-prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene were selected for this study), used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required because compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive components in hydrogen, and will present method development highlighting advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector
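The arithmetic behind dynamic dilution is straightforward: the output amount fraction is the parent-standard fraction scaled by the ratio of the standard flow to the total flow. The sketch below is a minimal single-stage model assuming ideal mixing; the flow rates and the 100 nmol/mol parent standard are illustrative values, not NPL's actual facility settings.

```python
def diluted_fraction(x_parent, q_standard, q_diluent):
    """Amount fraction after one dynamic dilution stage, assuming
    ideal mixing of two flow streams (same units for both flows)."""
    return x_parent * q_standard / (q_standard + q_diluent)

# e.g. a 100 nmol/mol sulphur-in-hydrogen parent standard metered at
# 10 mL/min into 990 mL/min of purified hydrogen -> 1 nmol/mol
stage1 = diluted_fraction(100e-9, 10.0, 990.0)   # 1e-9

# cascading a second identical stage reaches the pmol/mol regime
stage2 = diluted_fraction(stage1, 10.0, 990.0)   # 1e-11, i.e. 10 pmol/mol
```

In a metrology setting, the dominant uncertainties would come from the flow measurements and from losses of the reactive sulphur species to wetted surfaces, which is one reason such facilities require careful validation.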
Procedia PDF Downloads 2269 Hydrocarbon Source Rocks of the Maragh Low
Authors: Elhadi Nasr, Ibrahim Ramadan
Abstract:
Biostratigraphical analyses of well sections from the Maragh Low in the Eastern Sirt Basin have allowed high-resolution correlations to be undertaken. Full integration of these data with available palaeoenvironmental, lithological, gravity, seismic, aeromagnetic, igneous, radiometric and wireline log information, and with a geochemical analysis of source rock quality and distribution, has led to a more detailed understanding of the geological and structural history of this area. Pre-Sirt Unconformity, two superimposed rifting cycles have been identified. The oldest is represented by the Amal Group of sediments and is of Late Carboniferous, Kasimovian / Gzelian, to Middle Triassic, Anisian, age. Unconformably overlying is a younger rift cycle, which is represented by the Sarir Group of sediments and is of Early Cretaceous, late Neocomian to Aptian, age. Overlying the Sirt Unconformity is the marine Late Cretaceous section. An assessment of pyrolysis results and a palynofacies analysis have allowed hydrocarbon source facies and quality to be determined. There are a number of hydrocarbon source rock horizons in the Maragh Low; these are sometimes vertically stacked, and they are of fair to excellent quality. The oldest identified source rock is the Triassic Shale; this unit is unconformably overlain by sandstones belonging to the Sarir Group and conformably overlies a Triassic Siltstone unit. Palynological dating of the Triassic Shale unit indicates a Middle Triassic, Anisian age. The Triassic Shale is interpreted to have been deposited in a lacustrine palaeoenvironment. This is particularly evidenced by the dark, fine-grained, organic-rich nature of the sediment and is supported by palynofacies analysis and by the recovery of fish fossils. Geochemical analysis of the Triassic Shale indicates total organic carbon varying between 1.37% and 3.53%. S2 pyrolysate yields vary between 2.15 mg/g and 6.61 mg/g, and hydrogen indices vary between 156.91 and 278.91. The source quality of the Triassic Shale varies from fair to very good / rich. Given its thermal maturity, it is now a very good source for light oil and gas; it was once a very good to rich oil source. The Early Barremian Shale was also deposited in a lacustrine palaeoenvironment. Recovered palynomorphs indicate an Early Cretaceous, late Neocomian to early Barremian age. The Early Barremian Shale is conformably underlain and overlain by sandstone units belonging to the Sarir Group of sediments, which are also of Early Cretaceous age. Geochemical analysis of the Early Barremian Shale indicates that it is a good oil source and was originally very good. Total organic carbon varies between 3.59% and 7%. S2 varies between 6.30 mg/g and 10.39 mg/g, and the hydrogen indices vary between 148.4 and 175.5. A Late Barremian Shale unit has also been identified in the central Maragh Low. Geochemical analyses indicate that total organic carbon varies between 1.05% and 2.38%, S2 pyrolysate between 1.6 and 5.34 mg/g, and the hydrogen index between 152.4 and 224.4. It is a good oil source rock which is now mature. In addition to the non-marine hydrocarbon source rocks pre-Sirt Unconformity, three formations in the overlying Late Cretaceous section also provide hydrocarbon-quality source rocks. Interbedded shales within the Rachmat Formation of Late Cretaceous, early Campanian age have total organic carbon ranging between 0.7% and 1.47%, S2 pyrolysate varying between 1.37 and 4.00 mg/g, and hydrogen indices varying between 195.7 and 272.1.
These values indicate that this unit would provide a fair gas to good oil source. Geochemical analyses of the overlying Tagrifet Limestone indicate that total organic carbon varies between 0.26% and 1.01%, S2 pyrolysate varies between 1.21 and 2.16 mg/g, and hydrogen indices vary between 195.7 and 465.4. For the overlying Sirt Shale Formation of Late Cretaceous, late Campanian age, total organic carbon varies between 1.04% and 1.51%, S2 pyrolysate varies between 4.65 mg/g and 6.99 mg/g, and the hydrogen indices vary between 151 and 462.9. The study has proven that both the Sirt Shale Formation and the Tagrifet Limestone are good to very good, rich sources for oil in the Maragh Low. High-resolution biostratigraphical interpretations have been integrated and calibrated with thermal maturity determinations (vitrinite reflectance [%Ro], spore colour index [SCI] and Tmax [ºC]) and with the determined present-day geothermal gradient of 25 ºC/km for the Maragh Low. Interpretation of the generated basin modelling profiles allows a detailed prediction of the timing of maturation of these source horizons and leads to a determination of the amounts of missing section at major unconformities. From the results, the top of the oil window (0.72% Ro) is picked as high as 10,700’, and the base of the oil window (1.35% Ro), assuming a linear trend and by projection, is picked as low as 18,000’ in the Maragh Low. For the Triassic Shale, the early phase of oil generation was in the Late Palaeocene / Early to Middle Eocene, and the main phase of oil generation was in the Middle to Late Eocene. The Early Barremian Shale reached the main phase of oil generation in the Early Oligocene, with late generation being reached in the Middle Miocene. For the Rakb Group section (Rachmat Formation, Tagrifet Limestone and Sirt Shale Formation), the early phase of oil generation started in the Late Eocene, with the main phase of generation occurring between the Early Oligocene and the Early Miocene. From studying maturity profiles and from regional considerations, it can be predicted that up to 500’ of sediment may have been deposited and eroded at the Sirt Unconformity in the central Maragh Low, while up to 2,000’ of sediment may have been deposited and then eroded to the south of the trough.Keywords: geochemical analysis, source rocks, wells, Eastern Sirt Basin
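The hydrogen indices quoted throughout this abstract follow from the standard Rock-Eval relation HI = 100 × S2 / TOC (mg hydrocarbons per g TOC). The sketch below is a minimal worked check using the Triassic Shale lower-bound values quoted above; it lands on the reported index of about 156.9.

```python
def hydrogen_index(s2_mg_per_g, toc_percent):
    """Rock-Eval hydrogen index: mg pyrolysable hydrocarbons per g TOC."""
    return 100.0 * s2_mg_per_g / toc_percent

# Triassic Shale lower-bound values from the text
hi = hydrogen_index(s2_mg_per_g=2.15, toc_percent=1.37)
print(round(hi, 2))  # 156.93, in line with the reported 156.91
```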
Procedia PDF Downloads 4098 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria
Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo
Abstract:
This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m, obtained from the Nigeria Meteorological Department, Oshodi, were assessed. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly averages ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. Also, the power density for each site was determined, ranging between 29.66 W/m2 for Bida and 864.96 W/m2 for Jos. The two parameters (k and c) of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes indicate that wind speeds at Jos, Minna, Ilorin, Makurdi and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity generation at and above a height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located at each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school and a community health center. An assessment was performed of the design that would optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison since the experimental locations have no connection to a distribution network. The HOMER® optimization software was utilized to determine the combination of system components that would yield the lowest life-cycle cost. Following the analysis for rural community utilization, a distributed generation (DG) analysis was carried out for each site, considering the possibility of generating wind power in the MW range in order to take advantage of Nigeria’s tariff regime for embedded generation. The DG design incorporated each community of 200 homes, whose load was catered for free of charge and offset against the excess electrical energy generated above the minimum requirement for sale to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life-cycle cost and levelised cost of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will enhance rapid upgrading of rural communities.Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria
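For a Weibull-distributed wind speed with scale c and shape k, the mean power density follows in closed form from the third moment of the distribution: P/A = ½ ρ c³ Γ(1 + 3/k). The sketch below implements that textbook relation; the air density and example parameters are assumptions, and since the study's site densities were derived from the measured time series, they need not match values computed this way.

```python
import math

def weibull_power_density(c, k, rho=1.225):
    """Mean wind power density (W/m^2) for Weibull scale c (m/s),
    shape k, and air density rho (kg/m^3, sea-level standard here)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)

# illustrative parameters of the same order as those reported above
print(round(weibull_power_density(c=6.0, k=2.3), 1))
```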
Procedia PDF Downloads 5147 Application of 3D Apparel CAD for Costume Reproduction
Authors: Zi Y. Kang, Tracy D. Cassidy, Tom Cassidy
Abstract:
3D apparel CAD is one of the remarkable products of advanced technology, enabling intuitive design, visualisation and evaluation of garments through stereoscopic drape simulation. Progressive improvements in 3D apparel CAD have led to the creation of more realistic clothing simulation, which is used not only in design development but also in presentation, promotion and communication for fashion as well as other industries such as film, games and social network services. As a result, 3D clothing technology is becoming more ubiquitous in human culture and everyday life. This study considers that such a phenomenon implies the technology has reached maturity and that it is time to inspect the status of the current technology and to explore its potential uses in ways that create cultural value. For this reason, this study aims to generate virtual costumes as culturally significant objects using 3D apparel CAD and to assess its capability and applicability, and the attitudes of audiences towards clothing simulation, through comparison with physical counterparts. Since access to costume collections is often limited due to conservation concerns, the technology may make a valuable contribution by democratizing culture and knowledge for museums and their audiences. This study is expected to provide foundational knowledge for the development of clothing technology and for expanding the boundary of its practical uses. To prevent any potential damage, two replicas of costumes from the 1860s and 1920s at the Museum of London were chosen as samples. Their structural, visual and physical characteristics were measured and collected using patterns, scanned images of fabrics, and objective fabric measurements with a scale, KES-F (Kawabata Evaluation System of Fabrics) and Titan instruments. The commercial software DC Suite 5.0 was utilised to create virtual costumes from the collected data, and the following outcomes were produced for evaluation: images of the virtual costumes and video clips showing static and dynamic simulation. Focus groups were arranged with fashion design students and members of the public for evaluation; participants were presented with the outcomes together with the physical samples, fabric swatches and photographs. The similarities, applications and acceptance of the virtual costumes were assessed through discussion and a questionnaire. The findings show that the technology is capable of producing realistic or plausible simulations, but the expression of some factors, such as fine details and the behaviour of lightweight materials, requires improvement. While the public group viewed the virtual costumes as interesting and futuristic replacements for the physical objects, the fashion student group noted more differences in detail and preferred the physical garments, highlighting the absence of tangibility. However, both groups underlined the advantages and potential of virtual costumes as effective and useful visual references for educational and exhibition purposes. Although 3D apparel CAD has sufficient capacity to assist the garment design process, it has limits in identical replication, and further study on the accurate reproduction of details and drape is needed for its technical improvement. Nevertheless, the virtual costumes in this study demonstrated the possibility of the technology contributing to cultural and knowledge-based value creation through its applicability, and as an engaging way to offer 3D visual information.Keywords: digital clothing technology, garment simulation, 3D Apparel CAD, virtual costume
Procedia PDF Downloads 223