Search results for: capacitance-resistance models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6743


113 An Integrated Approach to the Carbonate Reservoir Modeling: Case Study of the Eastern Siberia Field

Authors: Yana Snegireva

Abstract:

Carbonate reservoirs are known for their heterogeneity, resulting from various geological processes such as diagenesis and fracturing. These complexities may cause great challenges in understanding fluid flow behavior and predicting the production performance of naturally fractured reservoirs. The investigation of carbonate reservoirs is crucial, as many petroleum reservoirs are naturally fractured, which can be difficult due to the complexity of their fracture networks. This can lead to geological uncertainties, which are important for global petroleum reserves. The problem outlines the key challenges in carbonate reservoir modeling, including the accurate representation of fractures and their connectivity, as well as capturing the impact of fractures on fluid flow and production. Traditional reservoir modeling techniques often oversimplify fracture networks, leading to inaccurate predictions. Therefore, there is a need for a modern approach that can capture the complexities of carbonate reservoirs and provide reliable predictions for effective reservoir management and production optimization. The modern approach to carbonate reservoir modeling involves the utilization of the hybrid fracture modeling approach, including the discrete fracture network (DFN) method and implicit fracture network, which offer enhanced accuracy and reliability in characterizing complex fracture systems within these reservoirs. This study focuses on the application of the hybrid method in the Nepsko-Botuobinskaya anticline of the Eastern Siberia field, aiming to prove the appropriateness of this method in these geological conditions. The DFN method is adopted to model the fracture network within the carbonate reservoir. This method considers fractures as discrete entities, capturing their geometry, orientation, and connectivity. But the method has significant disadvantages since the number of fractures in the field can be very high. Due to limitations in the amount of main memory, it is very difficult to represent these fractures explicitly. By integrating data from image logs (formation micro imager), core data, and fracture density logs, a discrete fracture network (DFN) model can be constructed to represent fracture characteristics for hydraulically relevant fractures. The results obtained from the DFN modeling approaches provide valuable insights into the East Siberia field's carbonate reservoir behavior. The DFN model accurately captures the fracture system, allowing for a better understanding of fluid flow pathways, connectivity, and potential production zones. The analysis of simulation results enables the identification of zones of increased fracturing and optimization opportunities for reservoir development with the potential application of enhanced oil recovery techniques, which were considered in further simulations on the dual porosity and dual permeability models. This approach considers fractures as separate, interconnected flow paths within the reservoir matrix, allowing for the characterization of dual-porosity media. The case study of the East Siberia field demonstrates the effectiveness of the hybrid model method in accurately representing fracture systems and predicting reservoir behavior. The findings from this study contribute to improved reservoir management and production optimization in carbonate reservoirs with the use of enhanced and improved oil recovery methods.
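For readers unfamiliar with the DFN concept referred to above, the following is a minimal sketch of the kind of stochastic fracture-set generation a DFN relies on, with each fracture treated as a planar disc described by a centre, radius, strike and dip. The distributions and parameter values are illustrative assumptions only, not data from the East Siberia study.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_dfn(n_fractures, domain=(1000.0, 1000.0, 50.0),
                 length_min=5.0, length_exp=2.5,
                 mean_strike_deg=45.0, kappa=20.0):
    """Draw a simple stochastic discrete fracture network (DFN).

    Each fracture is a planar disc described by its centre, radius,
    strike and dip. Radii follow a truncated power law; strikes are
    scattered around a dominant set orientation (von Mises); dips are
    drawn near-vertical. All parameters are illustrative placeholders.
    """
    cx = rng.uniform(0.0, domain[0], n_fractures)
    cy = rng.uniform(0.0, domain[1], n_fractures)
    cz = rng.uniform(0.0, domain[2], n_fractures)

    # Truncated power-law radii: P(r) ~ r^(-length_exp) for r >= length_min
    u = rng.uniform(size=n_fractures)
    radius = length_min * (1.0 - u) ** (-1.0 / (length_exp - 1.0))

    strike = np.degrees(rng.vonmises(np.radians(mean_strike_deg),
                                     kappa, n_fractures)) % 360.0
    dip = np.clip(rng.normal(80.0, 5.0, n_fractures), 0.0, 90.0)

    return np.column_stack([cx, cy, cz, radius, strike, dip])

dfn = generate_dfn(5000)
print(dfn.shape)          # (5000, 6): one row per fracture
print(dfn[:3].round(1))   # centre (x, y, z), radius, strike, dip
```

In practice such a fracture set would then be conditioned on image-log and fracture-density data and upscaled to dual-porosity/dual-permeability properties, which is where the memory limitations mentioned in the abstract motivate the hybrid (implicit plus discrete) representation.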

Keywords: carbonate reservoir, discrete fracture network, fracture modeling, dual porosity, enhanced oil recovery, implicit fracture model, hybrid fracture model

Procedia PDF Downloads 75
112 Effect of Climate Change on Rainfall Induced Failures for Embankment Slopes in Timor-Leste

Authors: Kuo Chieh Chao, Thishani Amarathunga, Sangam Shrestha

Abstract:

Rainfall-induced slope failures are among the most damaging and disastrous natural hazards, occurring frequently around the world. This type of sliding mainly occurs in the zone above the groundwater level in silty/sandy soils. When rainwater begins to infiltrate the vadose zone of the soil, the negative pore-water pressure tends to decrease, reducing the shear strength of the soil material. Climate change has resulted in excessive and unpredictable rainfall around the world, resulting in landslides with dire consequences for human lives and infrastructure. Such problems could be overcome by examining in detail the causes of such slope failures and recommending effective repair plans for vulnerable locations that take future climatic change into account. The area selected for this study is located in the road rehabilitation section of the Maubara to Mota Ain road in Timor-Leste. Slope failures and cracks occurred in 2013 and, after repairs, reoccurred in 2017 following heavy rains. Both observed and predicted future climate data were analyzed to understand severe precipitation conditions in the past and future. Observed climate data were collected from the NOAA global climate data portal, and the CORDEX data portal was used to collect Regional Climate Model (RCM) projections. Both observed and RCM data were extracted to location-based data using ArcGIS software. The linear scaling method was used for the bias correction of future data, and the bias-corrected climate data were assigned to GeoStudio software. Wet-season precipitation (December to March) in 2007 to 2013 was higher than in the 2001-2006 period, exceeding the usual monthly average precipitation of 160 mm by nearly 40%. The results of seepage analyses carried out using the SEEP/W model with observed climate data clearly demonstrated that the pore water pressure within the fill slope increased significantly due to increased infiltration during the wet season of 2013. One Regional Climate Model (RCM) was analyzed in order to predict future climate variation under two Representative Concentration Pathways (RCPs). The projection for the 76 years from 2014 onward shows that precipitation increases considerably in the future under both the RCP 4.5 and RCP 8.5 emission scenarios. Critical pore water pressure conditions during 2014-2090 were used in order to recommend appropriate remediation methods. Results of slope stability analyses indicated that the factor of safety of the fill slopes was reduced from 1.226 to 0.793 from the dry season to the wet season in 2013. Results of future slope stability obtained using the SLOPE/W model for the RCP emission scenarios indicate that the use of tieback anchors and geogrids in slope protection could be effective in increasing the stability of slopes to an acceptable level during wet seasons. Moreover, measures such as monitoring slopes showing signs of, or susceptible to, movement and installing surface protection could be used to increase the stability of slopes.
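As an illustration of the linear scaling bias-correction step mentioned above, the following is a minimal sketch assuming daily precipitation series and a multiplicative monthly correction factor (observed mean divided by RCM historical mean). The series names, date ranges and values are synthetic placeholders, not the study's CORDEX or NOAA data.

```python
import numpy as np
import pandas as pd

def linear_scaling_precip(obs, rcm_hist, rcm_fut):
    """Multiplicative linear-scaling bias correction for precipitation.

    obs, rcm_hist, rcm_fut: pandas Series of daily precipitation indexed
    by date. A monthly correction factor (observed mean / RCM historical
    mean) is applied to the future RCM series, month by month.
    """
    factor = (obs.groupby(obs.index.month).mean()
              / rcm_hist.groupby(rcm_hist.index.month).mean())
    return rcm_fut * rcm_fut.index.month.map(factor).to_numpy()

# Toy example with synthetic data (values are illustrative only)
idx_h = pd.date_range("2001-01-01", "2013-12-31", freq="D")
idx_f = pd.date_range("2014-01-01", "2090-12-31", freq="D")
rng = np.random.default_rng(1)
obs      = pd.Series(rng.gamma(0.6, 9.0, len(idx_h)), index=idx_h)
rcm_hist = pd.Series(rng.gamma(0.6, 7.0, len(idx_h)), index=idx_h)
rcm_fut  = pd.Series(rng.gamma(0.6, 8.0, len(idx_f)), index=idx_f)

corrected = linear_scaling_precip(obs, rcm_hist, rcm_fut)
print(corrected.head())
```

The corrected daily series can then be supplied to the seepage model (here SEEP/W) as the climate boundary condition for the future period.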

Keywords: climate change, precipitation, SEEP/W, SLOPE/W, unsaturated soil

Procedia PDF Downloads 136
111 Rabies Free Pakistan - Eliminating Rabies Through One Health Approach

Authors: Anzal Abbas Jaffari, Wajiha Javed, Naseem Salahuddin

Abstract:

Rationale: Rabies, a vaccine-preventable disease, continues to be a critical public health issue, killing around 2,000-5,000 people annually in Pakistan. Along with the spread of the disease among animals, the dog population remains a victim of brutal culling practices by local authorities, which adversely affect the ecosystem (poison sinking into the soil, affecting vegetation and contaminating water) and the spread of the disease. The dog population has been rising exponentially, primarily because of the lack of a consolidated nationwide Animal Birth Control (ABC) program and of awareness among local communities in general and children in particular. This is reflected in Pakistan's low SARE score of 1.5, which places the country behind other developing countries such as Bangladesh (2.5) and the Philippines (3.5). According to one estimate, the province of Sindh alone is home to almost 2.5 million dogs. The clustering of dogs in peri-urban areas and inner-city localities leads to an increase in reported dog bite cases in these areas specifically. Objective: Rabies Free Pakistan (RFP), a joint venture of Getz Pharma Private Limited and Indus Hospital & Health Network (IHHN), was established in 2018 to eliminate rabies from Pakistan by 2030 using the One Health approach. Methodology: The RFP team is actively working on the advocacy and policy front with both the federal and provincial governments to ensure that all stakeholders currently involved in dog culling in Pakistan shift towards humane methods of vaccination and ABC. With the federal government, RFP aims to have rabies declared a notifiable disease, while it works closely with the provincial government of Sindh to initiate a province-wide rabies control program. The RFP program follows international standards and WHO-approved protocols in Pakistan. The RFP team has achieved various milestones in the fight against rabies after successfully scaling up project operations and has vaccinated more than 30,000 dogs and neutered around 7,000 dogs since 2018. Recommendations: Effective implementation of a rabies program (mass dog vaccination and ABC) requires a concentrated effort to address a variety of structural and policy challenges. This essentially demands a massive shift in individuals' attitudes towards rabies. The most significant challenges in implementing a standard policy at the structural level are the lack of institutional capacity, the shortage of vaccine, and the absence of inter-departmental coordination among major stakeholders: the federal government, the provincial ministries of health and livestock, and local bodies (including local councils). The lack of capacity among health care workers to treat dog bite cases emerges as a critical challenge at the clinical level. Conclusion: Pakistan can learn from the successful international models of Sri Lanka and Mexico, which, like RFP, adopted the One Health approach to eliminate rabies. The WHO-advised One Health approach provides policymakers with an interactive and cross-sectoral guide that involves all the essential elements of the ecosystem (including animals, humans, and other components).

Keywords: animal birth control, dog population, mass dog vaccination, one health, rabies elimination

Procedia PDF Downloads 180
110 The Impact of Riparian Alien Plant Removal on Aquatic Invertebrate Communities in the Upper Reaches of Luvuvhu River Catchment, Limpopo Province

Authors: Rifilwe Victor Modiba, Stefan Hendric Foord

Abstract:

Alien invasive plants (IAPs) have considerable negative impacts on freshwater habitats, and South Africa has implemented an innovative Working for Water (WfW) programme for the systematic removal of these plants aimed at, amongst other objectives, restoring biodiversity and ecosystem services in these threatened habitats. These restoration processes are expensive and have to be evidence-based. In this study, in-stream macroinvertebrate and adult Odonata assemblages were used as indicators of restoration success by quantifying the response of biodiversity metrics for these two groups to the removal of IAPs in a strategic water resource of South Africa that is extensively invaded. The study consisted of a replicated design that included 45 sampling units, viz. 15 invaded, 15 uninvaded and 15 cleared sites stratified across the upper reaches of six sub-catchments of the Luvuvhu River catchment, Limpopo Province. Cleared sites were only considered if they had received at least two WfW treatments in the previous three years. The benthic macroinvertebrate and adult Odonata assemblages in each of these sampling units were surveyed between November and March of 2013/2014 and 2014/2015, respectively. Generalized linear models (GLMs) with a log link function and Poisson error distribution were fitted across the three invasion classes (invaded, cleared, and uninvaded) for abundance and for metrics whose residuals were not normally distributed or had unequal variance. RDA was done for EPTO genera (Ephemeroptera, Plecoptera, Trichoptera and Odonata) and adult Odonata species abundance. GLMs were also fitted for the abundance of genera and Odonata species that were associated with the RDA environmental factors. Sixty-four benthic macroinvertebrate families, 57 EPTO genera, and 45 adult Odonata species were recorded across all 45 sampling units. There was no significant difference between the SASS5 total score, ASPT, and family richness of the three invasion classes. Although clearing had only a weak positive effect on adult Odonata species richness, it had a positive impact on DBI scores. These differences were mainly the result of significantly larger DBI scores in the cleared sites as compared to the invaded sites. Results suggest that water quality is positively impacted by repeated clearing, pointing to the importance of follow-up procedures after initial clearing. Adult Odonata diversity, as measured by richness, endemicity, threat and distribution, responded positively to all forms of clearing. Clearing had a significant impact on Odonata assemblage structure but did not affect EPTO structure. Variation partitioning showed that spatial and environmental variables explained 21.8% of the variation in EPTO assemblages and 16% of the variation in Odonata structure. The response of the diversity metrics to clearing increased in significance at finer taxonomic resolutions, particularly for adult Odonata, whose metrics significantly improved with clearing and whose structure responded to both invasion and clearing. The study recommends the use of the DBI for surveying river health when hydraulic biotopes are poor.
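A minimal sketch of the kind of Poisson GLM with log link described above, fitted across the three invasion classes. The data frame, counts and reference level are hypothetical placeholders for illustration, not the study's field data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per sampling unit, an invasion class
# (invaded / cleared / uninvaded) and an abundance count.
data = pd.DataFrame({
    "invasion_class": ["invaded"] * 15 + ["cleared"] * 15 + ["uninvaded"] * 15,
    "abundance": [12, 8, 15, 9, 11, 7, 14, 10, 13, 9, 8, 12, 10, 11, 9,
                  18, 21, 16, 25, 19, 22, 17, 20, 23, 18, 24, 19, 21, 20, 22,
                  26, 30, 24, 28, 31, 27, 25, 29, 33, 26, 28, 30, 27, 32, 29],
})

# Poisson GLM with a log link (the canonical link for Poisson errors),
# using the invaded class as the reference level
model = smf.glm("abundance ~ C(invasion_class, Treatment(reference='invaded'))",
                data=data, family=sm.families.Poisson()).fit()
print(model.summary())
```

The exponentiated coefficients can then be read as multiplicative changes in expected abundance for cleared and uninvaded sites relative to invaded sites.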

Keywords: DBI, evidence-based conservation, EPTO, macroinvertebrates

Procedia PDF Downloads 186
109 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence

Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy

Abstract:

The Reynolds-averaged Navier-Stokes (RANS) model is the most popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and in the research community. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis is unable to capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes such as flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptance among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases such as swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad interest for aerodynamic research and industrial applications. The PANS equations, being derived from base RANS, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance the capabilities of PANS for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The limitations of LEVMs have inspired the development of non-linear eddy viscosity models (NLEVM). To explore the potential improvement in PANS performance, in our study we evaluate the PANS behavior in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of separated non-homogeneous flow fields (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise the study of the influence of the PANS filter-width control parameter on the turbulent stresses, the homogeneous analysis performed over typical velocity fields, and the asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include the study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement in the predictive ability of computational fluid dynamics (CFD) tools for massively separated turbulent flows past bluff bodies.
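For reference, the resolution-control (filter-width) parameters referred to above can be summarized by the standard PANS relations; these are the textbook definitions rather than formulas specific to this work.

```latex
% Standard PANS resolution-control parameters: ratios of unresolved to
% total turbulent kinetic energy and dissipation, the unresolved eddy
% viscosity, and the modified destruction coefficient in the unresolved
% dissipation-rate equation.
\[
  f_k = \frac{k_u}{k}, \qquad
  f_\varepsilon = \frac{\varepsilon_u}{\varepsilon}, \qquad
  \nu_u = C_\mu \frac{k_u^2}{\varepsilon_u}, \qquad
  C_{\varepsilon 2}^{*} = C_{\varepsilon 1}
      + \frac{f_k}{f_\varepsilon}\left(C_{\varepsilon 2} - C_{\varepsilon 1}\right).
\]
% Setting f_k = 1 recovers the parent RANS model; decreasing f_k resolves
% progressively more of the turbulent scales (toward DNS).
```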

Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows

Procedia PDF Downloads 145
108 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
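A minimal sketch of the two mechanisms described above, fairness-aware weighting of client updates and Gaussian noise for differential privacy. The inverse-count weighting rule, the clipping scheme and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def aggregate_fair_dp(updates, sample_counts, noise_std=0.01, clip_norm=1.0):
    """Aggregate client model updates with fairness-aware weights and
    Gaussian noise for (approximate) differential privacy.

    updates: list of 1-D numpy arrays, one flattened update per device.
    sample_counts: number of local samples on each device.
    The weighting deliberately up-weights data-poor devices (inverse to
    sample count): one possible fairness-aware rule, not the only one.
    """
    counts = np.asarray(sample_counts, dtype=float)
    weights = (1.0 / counts) / np.sum(1.0 / counts)   # favour small clients

    rng = np.random.default_rng(0)
    agg = np.zeros_like(updates[0])
    for w, u in zip(weights, updates):
        norm = np.linalg.norm(u)
        u_clipped = u * min(1.0, clip_norm / (norm + 1e-12))  # clip before noising
        agg += w * u_clipped
    agg += rng.normal(0.0, noise_std * clip_norm, size=agg.shape)  # calibrated noise
    return agg

# Toy example: three devices with very different data volumes
rng = np.random.default_rng(1)
updates = [rng.normal(0, 0.1, 10) for _ in range(3)]
print(aggregate_fair_dp(updates, sample_counts=[5000, 200, 50]))
```

In a full system the noise scale would be calibrated to a target privacy budget and possibly adapted per device, as the abstract suggests, to trade privacy against utility under each device's energy and compute constraints.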

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 75
107 Environmental Life Cycle Assessment of Circular, Bio-Based and Industrialized Building Envelope Systems

Authors: N. Cihan Kayaçetin, Stijn Verdoodt, Alexis Versele

Abstract:

The construction industry accounts for one-third of all waste generated in the European Union (EU) countries. The Circular Economy Action Plan of the EU aims to tackle this issue and aspires to enhance the sustainability of the construction industry by adopting more circular principles and bio-based material use. The Interreg Circular Bio-Based Construction Industry (CBCI) project was conceived to research how this adoption can be facilitated. For this purpose, an approach is developed that integrates technical, legal and social aspects and provides business models for circular design and building with bio-based materials. In the scope of the project, the research outputs are to be displayed in a real-life setting by constructing a demo terraced single-family house, the living lab (LL) located in Ghent (Belgium). The realization of the LL is conducted in a step-wise approach that includes iterative processes for design, description, criteria definition and multi-criteria assessment of building components. The essence of the research lies in the exploratory approach to state-of-the-art building envelope and technical system options for achieving an optimum combination for circular and bio-based construction. For this purpose, nine preliminary designs (PDs) for the building envelope are generated, covering three basic construction methods: masonry, lightweight steel construction and wood framing construction, supplemented with bio-based construction methods such as cross-laminated timber (CLT) and massive wood framing. A comparative analysis of the PDs was conducted by utilizing several complementary tools to assess circularity. This paper focuses on the life cycle assessment (LCA) approach for evaluating the environmental impact of the LL Ghent. The adoption of an LCA methodology was considered critical for providing a comprehensive set of environmental indicators. The PDs were developed at the component level, in particular for the (i) inclined roof, (ii-iii) front and side façade, (iv) internal walls and (v-vi) floors. The assessment was conducted on two levels: component and building level. The options for each component were compared in a first iteration, and the PDs as assemblies of components were then further analyzed. The LCA was based on a functional unit of one square meter of each component, and CEN indicators were utilized for impact assessment over a reference study period of 60 years. A total of 54 building components composed of 31 distinct materials were evaluated in the study. The results indicate that wood framing construction supplemented with bio-based construction methods performs better environmentally than the masonry or steel construction options. An analysis of the correlation between the total weight of components and environmental impact was also conducted. It was seen that masonry structures display high environmental impact and weight, steel structures display low weight but relatively high environmental impact, and wooden framing construction displays low weight and low environmental impact. The study provided valuable outputs on two levels: (i) several improvement options at the component level through substitution of materials with critical weight and/or impact per unit, and (ii) feedback on environmental performance for the decision-making process during the design phase of a circular single-family house.
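A minimal sketch of how a component-level impact per functional unit (one square meter of component over a 60-year reference study period) can be aggregated from material quantities and characterisation factors. The layer names, masses, factors and service lives below are placeholders, not values from the LL Ghent assessment.

```python
# Component-level impact aggregation per functional unit (1 m2, 60 years).
# Material names, masses and characterisation factors are illustrative
# placeholders, not values from the study.
REFERENCE_PERIOD = 60  # years

# (mass per m2 [kg], GWP factor [kg CO2-eq/kg], service life [years])
facade_wood_frame = {
    "timber studs":         (12.0, -1.2, 60),
    "cellulose insulation":  (8.0,  0.3, 60),
    "gypsum fibre board":   (15.0,  0.25, 30),
    "wood cladding":        (10.0, -0.9, 30),
}

def component_gwp(layers, period=REFERENCE_PERIOD):
    """Sum production-stage GWP per m2, counting replacements needed to
    cover the reference period (a simplification of EN 15978 staging)."""
    total = 0.0
    for mass, factor, life in layers.values():
        replacements = -(-period // life)       # ceiling division
        total += mass * factor * replacements
    return total

print(f"GWP of facade build-up: {component_gwp(facade_wood_frame):.1f} kg CO2-eq/m2")
```

Summing such component results weighted by their areas gives the building-level comparison between the nine preliminary designs described in the abstract.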

Keywords: circular and bio-based materials, comparative analysis, life cycle assessment (LCA), living lab

Procedia PDF Downloads 183
106 An Investigation about the Health-Promoting Lifestyle of 1389 Emergency Nurses in China

Authors: Lei Ye, Min Liu, Yong-Li Gao, Jun Zhang

Abstract:

Purpose: The aims of the study are to investigate the status of the health-promoting lifestyle of emergency nurses and to compare the healthy lifestyles of emergency nurses in different levels of hospitals in Sichuan province, China. The investigation mainly covers the health-promoting lifestyle dimensions of spiritual growth, health responsibility, physical activity, nutrition, interpersonal relations, and stress management. The factors influencing the health-promoting lifestyle of emergency nurses in hospitals of Sichuan province were then analyzed in order to find relevant models that provide reference evidence for intervention. Study Design: A cross-sectional research method was adopted. Stratified cluster sampling, based on geographical location, was used to select the health facilities of 1,389 emergency nurses in 54 hospitals in Sichuan province, China. Method: The 52-item, six-factor Health-Promoting Lifestyle Profile II (HPLP-II) instrument was used to explore participants' self-reported health-promoting behaviors and measure the dimensions of health responsibility, physical activity, nutrition, interpersonal relations, spiritual growth, and stress management. Demographic characteristics, education, work duration, emergency nursing work duration and self-rated health status were documented. Analysis: Data were analyzed with SPSS software ver. 17.0. Frequency, percentage, and mean ± standard deviation were used to describe the general information, while nonparametric tests were used to compare the constituent ratios of general data between different hospitals. One-way ANOVA was used to compare health-promoting lifestyle scores across hospital levels. A multiple linear regression model was established. P values less than 0.05 were considered statistically significant in all analyses. Result: The survey showed that the total health-promoting lifestyle score of nurses in emergency departments in Sichuan Province was 120.49 ± 21.280. The dimensions ranked by score in descending order are: interpersonal relations, nutrition, health responsibility, physical activity, stress management, and spiritual growth. The total scores of the three-A hospitals were the highest (121.63 ± 0.724), followed by the senior class hospitals (119.7 ± 1.362) and the three-B hospitals (117.80 ± 1.255). The difference was statistically significant (P = 0.024). The general data of the nurses were used as independent variables, including age, gender, marital status, living conditions, nursing income, hospital level, length of service in nursing, length of service in emergency, professional title, education background, and the average number of night shifts. The total health-promoting lifestyle score was used as the dependent variable, and multiple linear regression analysis was adopted to establish the regression model. For the regression equation, F = 20.728, R² = 0.061, P < 0.05; age, gender, nursing income, turnover intention and stress-coping status affect the health-promoting lifestyle of nurses in the emergency department, and the result was statistically significant (P < 0.05). Conclusion: The results of the investigation will help, through further research, to develop health-promoting interventions for emergency nurses in all levels of hospital in Sichuan Province. Managers need to pay more attention to emergency nurses' exercise, stress management, and self-realization, and to conduct interventions in nurse training programs.

Keywords: emergency nurse, health-promoting lifestyle profile II, health behaviors, lifestyle

Procedia PDF Downloads 282
105 Finite Element Analysis of Mini-Plate Stabilization of Mandible Fracture

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented investigation is to recognize possible mechanical issues of the mini-plate connection used to treat mandible fractures and to check the impact of different factors on the stresses and displacements within the bone-stabilizer system. The mini-plate osteosynthesis technique is a common type of internal fixation using metal plates connected to the fractured bone parts by a set of screws. Two types of plate application methodology used by maxillofacial surgeons were investigated in the work. These patterns differ in the location and number of plates. The bone geometry was modeled on the basis of computed tomography scans of a hospitalized patient, taken just after mini-plate application. The solid volume geometry, consisting of cortical and cancellous bone, was created from the obtained point cloud. The temporomandibular joint and muscle system were simulated to imitate the real behavior of the masticatory system. The finite element mesh and analysis were performed with ANSYS software. To simulate realistic connection behavior, nonlinear contact conditions were used between the connecting elements and bones. The influence of initial compression of the connected bone parts, or of a gap between them, was analyzed. Nonlinear material properties of the bone tissues and an elastic-plastic model of the titanium alloy were used. Three loading cases were investigated, assuming a force of magnitude 100 N acting on the left molars, the right molars, or the incisors. The stress distribution within the connecting plate shows that compression of the bone parts in the connection results in high stress concentration in the plate and the screws; however, the maximum stress levels do not exceed the yield limit of the material (titanium). There are no significant differences between the negative offset (gap) and no-offset conditions. The location of the external force influences the magnitude of stresses around both the plate and the bone parts. The two-plate system generally gives lower von Mises stress under the same loading than the one-plate approach. The von Mises stress distribution within the cortical bone shows a reduction of the high stress field for the cases without compression (neutral initial contact). With initial prestressing, there is a visible and significant stress increase around the fixing holes of the bottom mini-plate due to the assembly stress. The local stress concentration may be the reason for bone destruction in those regions. The performed calculations prove that the bone-mini-plate system is able to properly stabilize the fractured mandible bone. There is a strong dependency between the mini-plate location and the stress distribution within the stabilizer structure and the surrounding bone tissue. The results (stresses within the bone tissues and within the devices, and relative displacements of the bone parts at the interface) corresponding to different models of the connection provide a basis for the mechanical optimization of mini-plate connections. The results of the performed numerical simulations were compared to clinical observations. They provide information helpful for a better understanding of load transfer in the mandible with the stabilizer and for improving stabilization techniques.

Keywords: finite element modeling, mandible fracture, mini-plate connection, osteosynthesis

Procedia PDF Downloads 246
104 An Interdisciplinary Maturity Model for Accompanying Sustainable Digital Transformation Processes in a Smart Residential Quarter

Authors: Wesley Preßler, Lucie Schmidt

Abstract:

Digital transformation is playing an increasingly important role in the development of smart residential quarters. In order to accompany and steer this process and ultimately make the success of the transformation efforts measurable, it is helpful to use an appropriate maturity model. However, conventional maturity models for digital transformation focus primarily on the evaluation of processes and neglect the information and power imbalances between the stakeholders, which affects the validity of the results. The Multi-Generation Smart Community (mGeSCo) research project is developing an interdisciplinary maturity model that integrates the dimensions of digital literacy, interpretive patterns, and technology acceptance to address this gap. As part of the mGeSCo project, the technological development of selected dimensions in the Smart Quarter Jena-Lobeda (Germany) is being investigated. A specific maturity model, based on Cohen's Smart Cities Wheel, evaluates the central dimensions Working, Living, Housing and Caring. To improve the reliability and relevance of the maturity assessment, the factors Digital Literacy, Interpretive Patterns and Technology Acceptance are integrated into the developed model. The digital literacy dimension examines stakeholders' skills in using digital technologies, which influence their perception and assessment of technological maturity. Digital literacy is measured by means of surveys, interviews, and participant observation, using the European Commission's Digital Literacy Framework (DigComp) as a basis. Interpretations of digital technologies provide information about how individuals perceive technologies and ascribe meaning to them. However, these are not mere assessments, prejudices, or stereotyped perceptions but collective patterns, rules, attributions of meaning and the cultural repertoire that leads to these opinions and attitudes. Understanding these interpretations helps in assessing the overarching readiness of stakeholders to digitally transform a/their neighborhood. This involves examining people's attitudes, beliefs, and values about technology adoption, as well as their perceptions of the benefits and risks associated with digital tools. These insights provide important data for a holistic view and inform the steps needed to prepare individuals in the neighborhood for a digital transformation. Technology acceptance is another crucial factor for successful digital transformation to examine the willingness of individuals to adopt and use new technologies. Surveys or questionnaires based on Davis' Technology Acceptance Model can be used to complement interpretive patterns to measure neighborhood acceptance of digital technologies. Integrating the dimensions of digital literacy, interpretive patterns and technology acceptance enables the development of a roadmap with clear prerequisites for initiating a digital transformation process in the neighborhood. During the process, maturity is measured at different points in time and compared with changes in the aforementioned dimensions to ensure sustainable transformation. Participation, co-creation, and co-production are essential concepts for a successful and inclusive digital transformation in the neighborhood context. This interdisciplinary maturity model helps to improve the assessment and monitoring of sustainable digital transformation processes in smart residential quarters. 
It enables a more comprehensive recording of the factors that influence the success of such processes and supports the development of targeted measures to promote digital transformation in the neighborhood context.

Keywords: digital transformation, interdisciplinary, maturity model, neighborhood

Procedia PDF Downloads 77
103 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge numbers of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example predictors associated with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, are intentionally included to gain predictive analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset including 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective), as in this study, may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of the observations, such as bootstrap resampling with an appropriate sample size.
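A minimal sketch of the co-occurrence graph idea described above: item and pair frequencies are counted across rows, and candidate clusters (here simple 2-item rules) are ranked by frequency and surprise. The items and transactions are toy placeholders, not the platform's implementation or the metabolic syndrome data.

```python
from collections import Counter
from itertools import combinations

# Toy transactions: each row is a "basket" of predictor-value items
# (names are illustrative, not the study's 31 predictors).
rows = [
    {"waist:high", "bp:high", "smoker:no", "age:50s"},
    {"waist:high", "bp:high", "glucose:high", "age:50s"},
    {"waist:high", "bp:high", "glucose:high", "smoker:yes"},
    {"waist:low", "bp:normal", "smoker:no", "age:30s"},
    {"waist:high", "glucose:high", "bp:high", "age:60s"},
]

item_freq = Counter(i for row in rows for i in row)
pair_freq = Counter(frozenset(p) for row in rows for p in combinations(sorted(row), 2))

n = len(rows)
def surprise(pair):
    """Observed co-occurrence count vs. count expected under independence."""
    a, b = tuple(pair)
    expected = item_freq[a] * item_freq[b] / n
    return pair_freq[pair] / expected

# Rank candidate 2-item clusters (rules) by frequency, then surprise
ranked = sorted(pair_freq, key=lambda p: (pair_freq[p], surprise(p)), reverse=True)
for p in ranked[:5]:
    print(sorted(p), "count =", pair_freq[p], "surprise = %.2f" % surprise(p))
```

The highest-ranked clusters play the role of generated hypotheses, which would then be tested statistically on a held-out sample, as the abstract describes.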

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 276
102 Mapping of Urban Micro-Climate in Lyon (France) by Integrating Complementary Predictors at Different Scales into Multiple Linear Regression Models

Authors: Lucille Alonso, Florent Renard

Abstract:

The characterization of urban heat islands (UHI) and their interactions with climate change and urban climates is a major research and public health issue, due to the increasing urbanization of the population. Addressing it requires better knowledge of the UHI and micro-climate in urban areas, combining measurements and modelling. This study contributes to this topic by evaluating microclimatic conditions in dense urban areas in the Lyon Metropolitan Area (France) using a combination of traditionally used data such as topography, together with LiDAR (Light Detection and Ranging) data, Landsat 8 and Sentinel satellite observations, and ground measurements by bicycle. These bicycle-based weather data collections are used to build the database of the variable to be modelled, the air temperature, over Lyon's hyper-center. This study aims to model the air temperature, measured during 6 mobile campaigns in Lyon in clear weather, using multiple linear regressions based on 33 explanatory variables. They are of various categories, such as meteorological parameters from remote sensing, topographic variables, vegetation indices, the presence of water, humidity, bare soil, buildings, radiation, urban morphology, or proximity to and density of various land uses (water surfaces, vegetation, bare soil, etc.). The acquisition sources are multiple and come from the Landsat 8 and Sentinel satellites, LiDAR points, and cartographic products downloaded from an open data platform in Greater Lyon. Regarding the presence of low, medium, and high vegetation, buildings and ground, several buffer distances around these factors were tested (5, 10, 20, 25, 50, 100, 200 and 500 m). The buffers with the best linear correlation with air temperature are 5 m around the measurement points for ground, 50 m for low and medium vegetation and for buildings, and 100 m for high vegetation. The explanatory model of the dependent variable is obtained by multiple linear regression of the remaining explanatory variables (retained after filtering on the Pearson correlation matrix with |r| < 0.7 and VIF < 5), integrating a stepwise selection algorithm. Moreover, holdout cross-validation (80% training, 20% testing) is performed because of its ability to detect over-fitting of multiple regression, even though multiple regression provides internal validation. Multiple linear regression explained, on average, 72% of the variance for the study days, with an average RMSE of only 0.20°C. Surface temperature is the most important variable in the model for estimating air temperature. Other recurrent variables include distance to subway stations, distance to water areas, NDVI, the digital elevation model, the sky view factor, average vegetation density, and building density. Changing urban morphology influences the city's thermal patterns. The thermal atmosphere in dense urban areas can only be analysed at the microscale in order to consider the local impact of trees, streets, and buildings. There is currently no network of fixed weather stations sufficiently deployed in central Lyon or in most major urban areas. Therefore, it is necessary to use mobile measurements, followed by modelling, to characterize the city's multiple thermal environments.
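A minimal sketch of the variable-filtering and holdout validation workflow described above (|r| < 0.7, VIF < 5, 80/20 split). The stepwise selection step is omitted for brevity, and the data are synthetic placeholders rather than the Lyon measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def filter_collinear(X, r_max=0.7, vif_max=5.0):
    """Drop predictors with |pairwise r| >= r_max, then with VIF >= vif_max."""
    corr = X.corr().abs()
    keep = []
    for col in X.columns:
        if all(corr.loc[col, k] < r_max for k in keep):
            keep.append(col)
    X = X[keep]
    while True:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns)
        if vifs.max() < vif_max or X.shape[1] <= 1:
            return X
        X = X.drop(columns=vifs.idxmax())

# Synthetic stand-in data (33 predictors, as in the study; values are random)
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 33)),
                 columns=[f"x{i}" for i in range(33)])
y = 15 + 0.8 * X["x0"] - 0.4 * X["x5"] + rng.normal(0, 0.2, 300)

X_sel = filter_collinear(X)
X_train, X_test, y_train, y_test = train_test_split(X_sel, y,
                                                    test_size=0.2, random_state=0)
model = sm.OLS(y_train, sm.add_constant(X_train)).fit()
pred = model.predict(sm.add_constant(X_test))
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"R2 = {model.rsquared:.2f}, holdout RMSE = {rmse:.2f} °C")
```

The holdout RMSE plays the same role as the 0.20°C figure reported above: an out-of-sample check that the regression is not over-fitting the mobile campaign data.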

Keywords: air temperature, LIDAR, multiple linear regression, surface temperature, urban heat island

Procedia PDF Downloads 137
101 Regenerating Habitats. A Housing Based on Modular Wooden Systems

Authors: Rui Pedro de Sousa Guimarães Ferreira, Carlos Alberto Maia Domínguez

Abstract:

Despite the ambitions to achieve climate neutrality by 2050, to fulfill the Paris Agreement's goals, the building and construction sector remains one of the most resource-intensive and greenhouse gas-emitting industries in the world, accounting for 40% of worldwide CO₂ emissions. Over the past few decades, globalization and population growth have led to an exponential rise in demand in the housing market and, by extension, in the building industry. Considering this housing crisis, it is obvious that we will not stop building in the near future. However, the transition, which has already started, is challenging and complex because it calls for the worldwide participation of numerous organizations in altering how building systems, which have been a part of our everyday existence for over a century, are used. Wood is one of the alternatives that is most frequently used nowadays (under responsible forestry conditions) because of its physical qualities and, most importantly, because it produces fewer carbon emissions during manufacturing than steel or concrete. Furthermore, as wood retains its capacity to store CO₂ after application and throughout the life of the building, working as a natural carbon sink, it helps to reduce greenhouse gas emissions. After a century-long focus on other materials, in the last few decades, technological advancements have made it possible to innovate systems centered around the use of wood. However, there are still some questions that require further exploration. It is necessary to standardize production and manufacturing processes based on prefabrication and modularization principles to achieve greater precision and optimization of the solutions, decreasing building time, prices, and waste from raw materials. In addition, this approach will make it possible to develop new architectural solutions to solve the rigidity and irreversibility of buildings, two of the most important issues facing housing today. Most current models are still created as inflexible, fixed, monofunctional structures that discourage any kind of regeneration, based on matrices that sustain the conventional family's traditional model and are founded on rigid, impenetrable compartmentalization. Adaptability and flexibility in housing are, and always have been, necessities and key components of architecture. People today need to constantly adapt to their surroundings and themselves because of the fast-paced, disposable, and quickly obsolescent nature of modern items. Migrations on a global scale, different kinds of co-housing, or even personal changes are some of the new questions that buildings have to answer. Designing with the reversibility of construction systems and materials in mind not only allows for the concept of "looping" in construction, with environmental advantages that enable the development of a circular economy in the sector, but also unleashes multiple social benefits. In this sense, it is imperative to develop prefabricated and modular construction systems able to address the formalization of a reversible proposition that adjusts to the scale of time and its multiple reformulations, many of which are unpredictable. We must allow buildings to change, grow, or shrink over their lifetime, respecting their nature and, finally, the nature of the people living in them. The ability to anticipate the unexpected, adapt to social factors, and take account of demographic shifts in society in order to stabilize communities is the foundation of real, innovative sustainability.

Keywords: modular, timber, flexibility, housing

Procedia PDF Downloads 79
100 Photobleaching Kinetics and Epithelial Distribution of Hexylaminolevulinate-Induced PpIX in Rat Bladder Cancer

Authors: Sami El Khatib, Agnès Leroux, Jean-Louis Merlin, François Guillemin, Marie-Ange D’Hallewin

Abstract:

Photodynamic therapy (PDT) is a treatment modality based on the cytotoxic effect occurring on the target tissues by interaction of a photosensitizer with light in the presence of oxygen. One of the major advances in PDT can be attributed to the use of topical aminolevulinic (ALA) to induce Protoporphyrin IX (PpIX) for the treatment of early stage cancers as well as diagnosis. ALA is a precursor of the heme synthesis pathway. Locally delivered to the target tissue ALA overcomes the negative feedback exerted by heme and promotes the transient formation of PpIX in situ to reach critical effective levels in cells and tissue. Whereas early steps of the heme pathway occur in the cytosol, PpIX synthesis is shown to be held in the mitochondrial membranes and PpIX fluorescence is expected to accumulate in close vicinity of the initial building site and to progressively diffuse to the neighboring cytoplasmic compartment or other lipophylic organelles. PpIX is known to be highly reactive and will be degraded when irradiated with light. PpIX photobleaching is believed to be governed by a singlet oxygen mediated mechanism in the presence of oxidized amino acids and proteins. PpIX photobleaching and subsequent spectral phototransformation were described widely in tumor cells incubated in vitro with ALA solution, or ex vivo in human and porcine mucosa superfused with hexylaminolevulinate (hALA). PpIX photobleaching was also studied in vivo, using animal models such as normal or tumor mice skin and orthotopic rat bladder model. Hexyl aminolevulinate a more potent lipophilic derivative of ALA was proposed as an adjunct to standard cystoscopy in the fluorescence diagnosis of bladder cancer and other malignancies. We have previously reported the effectiveness of hALA mediated PDT of rat bladder cancer. Although normal and tumor bladder epithelium exhibit similar fluorescence intensities after intravesical instillation of two hALA concentrations (8 and 16 mM), the therapeutic response at 8mM and 20J/cm2 was completely different from the one observed at 16mM irradiated with the same light dose. Where the tumor is destroyed, leaving the underlying submucosa and muscle intact after an 8 mM instillation, 16mM sensitization and subsequent illumination results in the complete destruction of the underlying bladder wall but leaves the tumor undamaged. The object of the current study is to try to unravel the underlying mechanism for this apparent contradiction. PpIX extraction showed identical amounts of photosensitizer in tumor bearing bladders at both concentrations. Photobleaching experiments revealed mono-exponential decay curves in both situations but with a two times faster decay constant in case of 16mM bladders. Fluorescence microscopy shows an identical fluorescence pattern for normal bladders at both concentrations and tumor bladders at 8mM with bright spots. Tumor bladders at 16 mM exhibit a more diffuse cytoplasmic fluorescence distribution. The different response to PDT with regard to the initial pro-drug concentration can thus be attributed to the different cellular localization.
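The mono-exponential photobleaching behaviour described above can be written generically as follows; this is a standard parametrisation, not the authors' fitted model or their measured constants.

```latex
% Mono-exponential photobleaching of PpIX fluorescence with delivered
% light dose D (J/cm^2):
\[
  F(D) = F_0 \, e^{-\beta D},
\]
% where F is the PpIX fluorescence signal, F_0 its initial value and
% \beta the photobleaching rate constant. In the terms of the abstract,
% the 16 mM instillation corresponds to a decay constant roughly twice
% that of the 8 mM case: \beta_{16\,\mathrm{mM}} \approx 2\,\beta_{8\,\mathrm{mM}}.
```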

Keywords: bladder cancer, hexyl-aminolevulinate, photobleaching, confocal fluorescence microscopy

Procedia PDF Downloads 407
99 Combination of Modelling and Environmental Life Cycle Assessment Approach for Demand Driven Biogas Production

Authors: Juan A. Arzate, Funda C. Ertem, M. Nicolas Cruz-Bournazou, Peter Neubauer, Stefan Junne

Abstract:

One of the biggest challenges the world faces today is global warming, caused by greenhouse gases (GHGs) from the combustion of fossil fuels for energy generation. In order to mitigate climate change, the European Union has committed to reducing GHG emissions to 80–95% below 1990 levels by the year 2050. Renewable technologies are vital to diminish energy-related GHG emissions. Since water and biomass are limited resources, the largest contributions to renewable energy (RE) systems will have to come from wind and solar power. Nevertheless, high proportions of fluctuating RE will present a number of challenges, especially regarding the need to balance the variable energy demand with the weather-dependent fluctuation of energy supply. Biogas plants would therefore play an important role in this context, since they are easily adaptable. Feedstock availability varies locally and seasonally; however, there is a lack of knowledge on how biogas plants can be operated in a stable manner on local feedstock. This problem may be prevented through suitable control strategies. Such strategies require the development of convenient mathematical models which fairly describe the main processes. Modelling allows us to predict the system behavior of biogas plants when different feedstocks are used at different loading rates. Life cycle assessment (LCA) is a technique for analyzing the environmental impacts of a product across its life cycle, from production to disposal, and is highly recommended as a decision-making tool. In order to achieve suitable strategies, the combination of flexible energy generation provided by biogas plants, a secure production process and the maximization of environmental benefits can be obtained by combining process modelling and LCA approaches. For this reason, this study focuses on a biogas plant which flexibly generates the required energy from the co-digestion of maize, grass and cattle manure while emitting the lowest amount of GHGs. To achieve this goal, the AMOCO model was combined with LCA. The program was structured in Matlab to simulate any biogas process based on the AMOCO model and combined with the equations necessary to obtain the climate change, acidification and eutrophication potentials of the whole production system based on the ReCiPe midpoint v.1.06 methodology. The developed simulation was optimized based on real data from operating biogas plants and existing literature. The results prove that the AMOCO model can successfully imitate the system behavior of biogas plants and the time required for the process to adapt in order to generate the demanded energy from the available feedstock. Combination with the LCA approach provided the opportunity to keep the resulting emissions from operation at the lowest possible level. This would allow for a prediction of the process when the feedstock utilization supports the establishment of closed material circles within a smart bio-production grid, under the constraint of minimal drawbacks for the environment and maximal sustainability.
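A minimal sketch of the two-reaction (acidogenesis/methanogenesis) structure usually associated with the AMOCO model, integrated here with SciPy rather than Matlab. The kinetic form follows the published AM2-type structure, but the parameter values, initial conditions and operating inputs below are illustrative placeholders rather than the calibrated values used in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-reaction AMOCO/AM2-type structure: acidogenic biomass X1 grows on
# substrate S1 (Monod), methanogenic biomass X2 grows on VFA S2 (Haldane).
# All numerical values are illustrative placeholders.
mu1_max, Ks1 = 1.2, 7.1               # acidogenesis, 1/d and g/L
mu2_max, Ks2, KI2 = 0.74, 9.3, 256.0  # methanogenesis, 1/d, mmol/L, mmol/L
k1, k2, k3, k6 = 42.1, 116.5, 268.0, 453.0  # yield coefficients
alpha = 0.5                           # fraction of biomass subject to washout

def amoco(t, y, D, S1_in, S2_in):
    X1, X2, S1, S2 = y
    mu1 = mu1_max * S1 / (Ks1 + S1)
    mu2 = mu2_max * S2 / (Ks2 + S2 + S2**2 / KI2)
    dX1 = (mu1 - alpha * D) * X1
    dX2 = (mu2 - alpha * D) * X2
    dS1 = D * (S1_in - S1) - k1 * mu1 * X1
    dS2 = D * (S2_in - S2) + k2 * mu1 * X1 - k3 * mu2 * X2
    return [dX1, dX2, dS1, dS2]

def methane_flow(y):
    """Methane production rate q_CH4 = k6 * mu2 * X2 per unit reactor volume."""
    X1, X2, S1, S2 = y
    return k6 * mu2_max * S2 / (Ks2 + S2 + S2**2 / KI2) * X2

sol = solve_ivp(amoco, (0, 60), [0.5, 0.8, 5.0, 15.0],
                args=(0.05, 10.0, 60.0), dense_output=True)
print("q_CH4 after 60 d:", round(methane_flow(sol.y[:, -1]), 2))
```

Running such a model for different feeding schedules is what allows the demand-driven operation described above, while the simulated substrate and gas flows feed the ReCiPe characterisation step of the coupled LCA.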

Keywords: AMOCO model, GHG emissions, life cycle assessment, modelling

Procedia PDF Downloads 188
98 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach

Authors: Rupesh S. Gundewar, Kanchan C. Khare

Abstract:

Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads and other paved areas, whereas natural contributors include groundwater and overland flows. Runoff alleviation is provided by manmade as well as natural storages. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in Western and European countries. Natural storages include catchment slope, infiltration, catchment length, channel rerouting, drainage density and depression storage. A literature survey on manmade and natural storages/inflows gives the percentage contribution of each individually. Sanders et al. report that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. report that catchment slope has an impact on rainfall runoff of 16% on bare standard soil and 24% on grassed soil. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature reports a 15% to 30% loss of rainfall runoff in various catchment study areas. Catchment length and channel rerouting also play a considerable role in the reduction of rainfall runoff. Ground infiltration inflow adds to the runoff where the groundwater table is very shallow and the soil saturates even in a lower-intensity storm. This inflow, together with surface inflow, contributes approximately 2% of the total runoff volume. Considering the various factors contributing to runoff, the literature survey indicates that an integrated modelling approach is needed. Traditional stormwater network models are able to predict to a fair/acceptable degree of accuracy provided no interactions with receiving waters (river, sea, canal, etc.), ground infiltration, treatment works, etc. are assumed. When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach. As a result, the correct flooding situation is very rarely addressed accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate, at the cost of requiring more accurate spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It enables identification of the source of flow in the system, an understanding of how it is conveyed, and an assessment of its impact on the receiving body. It also confirms important pain points, hydraulic controls and the sources of flooding which could not easily be understood with a discrete modelling approach. This also enables decision makers to identify solutions which can be spread throughout the catchment rather than being concentrated at a single point where the problem exists. Thus it can be concluded from the literature survey that the representation of urban details can be a key differentiator for the successful understanding of the flooding issue. The intent of this study is to accurately predict the runoff from impermeable areas of an urban area in India. A representative area for which data were available has been selected, and predictions have been made and corroborated with the actual measured data.

Keywords: runoff, urbanization, impermeable response, flooding

Procedia PDF Downloads 250
97 Effect of Velocity-Slip in Nanoscale Electroosmotic Flows: Molecular and Continuum Transport Perspectives

Authors: Alper T. Celebi, Ali Beskok

Abstract:

Electroosmotic (EO) slip flows in nanochannels are investigated using non-equilibrium molecular dynamics (MD) simulations, and the results are compared with the analytical solution of the Poisson-Boltzmann and Stokes (PB-S) equations with a slip contribution. The ultimate objective of this study is to show that the well-known continuum flow model can accurately predict the EO velocity profiles in nanochannels using the slip lengths and apparent viscosities obtained from force-driven flow simulations performed at various liquid-wall interaction strengths. EO flow of an aqueous NaCl solution in silicon nanochannels is simulated under realistic electrochemical conditions within the validity region of Poisson-Boltzmann theory. A physical surface charge density is determined for the nanochannels based on the dissociation of silanol functional groups on the channel surfaces at known salt concentration, temperature, and local pH. First, we present results of density profiles and ion distributions from equilibrium MD simulations, ensuring that the desired thermodynamic state and ionic conditions are satisfied. Next, force-driven nanochannel flow simulations are performed to predict the apparent viscosity of the ionic solution between charged surfaces and the slip lengths. Parabolic velocity profiles obtained from the force-driven flow simulations are fitted to a second-order polynomial equation, and the viscosity and slip lengths are quantified by comparing the coefficients of the fitted equation with the continuum flow model. The presence of a charged surface increases the viscosity of the ionic solution, while the velocity-slip at the wall decreases. Afterwards, EO flow simulations are carried out under a uniform electric field for different liquid-wall interaction strengths. Velocity profiles present finite slip near the walls, followed by a conventional viscous flow profile in the electrical double layer that reaches a bulk flow region in the center of the channel. The EO flow is enhanced with increased slip at the walls, which depends on the wall-liquid interaction strength and the surface charge. MD velocity profiles are compared with the predictions from analytical solutions of the slip-modified PB-S equation, where the slip length and apparent viscosity values are obtained from force-driven flow simulations in charged silicon nanochannels. Our MD results show good agreement with the analytical solutions at various slip conditions, verifying the validity of the PB-S equation in nanochannels as small as 3.5 nm. In addition, the continuum model normalizes the slip length with the Debye length instead of the channel height, which implies that the enhancement in EO flows is independent of the channel height. Further MD simulations performed at different channel heights also show that the flow enhancement due to slip is independent of the channel height. This is important because slip-enhanced EO flow is observable even in micro-channel experiments by using a hydrophobic channel with large slip and high-conductivity solutions with a small Debye length. The present study provides an advanced understanding of EO flows in nanochannels. Correct characterization of nanoscale EO slip flow is crucial for establishing the extent of validity of well-known continuum models, which is required for various applications spanning ion separation, drug delivery, and bio-fluidic analysis.
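
As an illustration of the fitting step described above, the sketch below fits a force-driven velocity profile to a second-order polynomial and compares the coefficients with the continuum Poiseuille-with-slip solution to recover an apparent viscosity and a Navier slip length. The profile data, units, and parameter values are synthetic placeholders, not output from the reported MD simulations.

```python
# Hedged sketch of the fitting step: a force-driven velocity profile u(y) from MD bins is
# fitted to a second-order polynomial and compared with the continuum Poiseuille-with-slip
# solution u(y) = (F/2mu)(h^2/4 - y^2) + b*(F*h/2mu). Units and numbers are placeholders.

import numpy as np

def fit_viscosity_and_slip(y, u, body_force_density, channel_height):
    """y: bin centres measured from the channel mid-plane; u: mean streaming velocity per bin."""
    a2, a1, a0 = np.polyfit(y, u, deg=2)               # u ≈ a2*y^2 + a1*y + a0
    mu = -body_force_density / (2.0 * a2)              # continuum relation: a2 = -F/(2*mu)
    u_wall = a0 + a2 * (channel_height / 2.0) ** 2     # fitted velocity at the wall plane
    slip_length = u_wall / (abs(a2) * channel_height)  # Navier condition: u_wall = b*|du/dy|_wall
    return mu, slip_length

# synthetic profile in reduced units: h = 3.5, F = 1, true mu = 1, true slip length b = 0.5
h, F, mu_true, b_true = 3.5, 1.0, 1.0, 0.5
y = np.linspace(-h / 2, h / 2, 25)
u = F / (2 * mu_true) * (h**2 / 4 - y**2) + b_true * F * h / (2 * mu_true)

print(fit_viscosity_and_slip(y, u, F, h))   # recovers approximately (1.0, 0.5)
```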

Keywords: electroosmotic flow, molecular dynamics, slip length, velocity-slip

Procedia PDF Downloads 158
96 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike most self-driving vehicles, which are developed to operate alongside other vehicles and reside only on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today’s transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering, and computer science are working together to attack the problem from different perspectives (hardware, software, and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a graphical user interface (GUI) for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths the car should follow, with designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm. D* Lite will be explored to efficiently recompute the path when there are any changes to the map. CATE shall avoid any static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE’s route directly coexists with pedestrians. To ensure the safety of the pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work under GPS-denied situations. CATE relies on its GPS to give its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that allows the fusion of data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily get degraded or blocked on campus by high-rise buildings or trees; the UKF helps here as well by generating a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
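
To make the path-planning step concrete, the sketch below runs A* over a toy spatial graph of the kind described above, with vertices as landmarks and edge weights as walk-path lengths. The node names, coordinates, and distances are invented for the example and are not taken from the CATE campus map.

```python
# Illustrative sketch (not the CATE codebase): A* search over a campus spatial graph where
# vertices are landmarks and weighted edges are walk-path segments. All data are made up.

import heapq
import math

graph = {                        # node -> list of (neighbour, edge length in metres)
    "library": [("fountain", 120), ("gym", 200)],
    "fountain": [("library", 120), ("gym", 90), ("dorms", 150)],
    "gym": [("library", 200), ("fountain", 90), ("dorms", 60)],
    "dorms": [("fountain", 150), ("gym", 60)],
}
coords = {"library": (0, 0), "fountain": (100, 60), "gym": (180, 40), "dorms": (210, 80)}

def heuristic(a, b):             # straight-line distance, admissible for path length
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, w in graph[node]:
            new_g = g + w
            if new_g < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + heuristic(neighbour, goal), new_g,
                                neighbour, path + [neighbour]))
    return None, float("inf")

print(a_star("library", "dorms"))   # e.g. (['library', 'gym', 'dorms'], 260)
```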

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 144
95 Pivoting to Fortify our Digital Self: Revealing the Need for Personal Cyber Insurance

Authors: Richard McGregor, Carmen Reaiche, Stephen Boyle

Abstract:

Cyber threats are a relatively recent phenomenon and offer cyber insurers a dynamic and intelligent peril. As individuals en masse become increasingly digitally dependent, Personal Cyber Insurance (PCI) offers an attractive option to mitigate cyber risk at a personal level. This abstract proposes a literature review that conceptualises a framework for situating Personal Cyber Insurance (PCI) within the context of cyberspace. The lack of empirical research within this domain demonstrates an immediate need to define the scope of PCI to allow cyber insurers to understand personal cyber risk threats and vectors, customer awareness, capabilities, and their associated needs. Additionally, this will allow cyber insurers to conceptualise appropriate frameworks allowing effective management and distribution of PCI products and services within a landscape often incongruent with the risk attributes commonly associated with traditional personal line insurance products. Cyberspace has provided a significant improvement to the quality of social connectivity and productivity during past decades and has allowed an enormous capability uplift in information sharing and communication between people and communities. Conversely, personal digital dependency furnishes ample opportunities for adverse cyber events such as data breaches and cyber-attacks, thus introducing a continuous and insidious threat of omnipresent cyber risk – particularly since the advent of the COVID-19 pandemic and the widespread adoption of ‘work-from-home’ practices. Recognition of escalating interdependencies, vulnerabilities, and inadequate personal cyber behaviours has prompted efforts by businesses and individuals alike to investigate strategies and tactics to mitigate cyber risk – of which cyber insurance is a viable, cost-effective option. It is argued that, ceteris paribus, the nature of cyberspace intrinsically provides characteristic peculiarities that pose significant and bespoke challenges to cyber insurers. These challenges include (inter alia) a paucity of historical claim/loss data for underwriting and pricing purposes, interdependencies of cyber architecture promoting high correlation of cyber risk, difficulties in evaluating cyber risk, the intangibility of risk assets (such as data and reputation), a lack of standardisation across the industry, high and undetermined tail risks, and moral hazard. This study proposes a thematic overview of the literature deemed necessary to conceptualise the challenges to issuing personal cyber coverage. There is an evident absence of empirical research appertaining to PCI and the design of operational business models for this business domain, especially qualitative initiatives that (1) attempt to define the scope of the peril, (2) secure an understanding of the needs of both cyber insurer and customer, and (3) identify elements pivotal to effective management and profitable distribution of PCI – leading to an argument proposed by the author that the traditional general insurance customer journey and business model are ill-suited to the lineaments of cyberspace. The findings of the review confirm significant gaps in contemporary research within the domain of personal cyber insurance.

Keywords: cyberspace, personal cyber risk, personal cyber insurance, customer journey, business model

Procedia PDF Downloads 103
94 Experimental Study of the Antibacterial Activity and Modeling of Non-isothermal Crystallization Kinetics of Sintered Seashell Reinforced Poly(Lactic Acid) And Poly(Butylene Succinate) Biocomposites Planned for 3D Printing

Authors: Mohammed S. Razali, Kamel Khimeche, Dahah Hichem, Ammar Boudjellal, Djamel E. Kaderi, Nourddine Ramdani

Abstract:

The use of additive manufacturing technologies has revolutionized various aspects of our daily lives. In particular, 3D printing has greatly advanced biomedical applications. While fused filament fabrication (FFF) technologies have made it easy to produce or prototype various medical devices, it is crucial to minimize the risk of contamination. New materials with antibacterial properties, such as those containing compounded silver nanoparticles, have emerged on the market. In a previous study, we prepared a sintered seashell filler (SSh) from bio-based seashells found along the Mediterranean coast using a suitable heat treatment process. We then prepared a series of polylactic acid (PLA) and polybutylene succinate (PBS) biocomposites filled with these SSh particles, using a melt-mixing technique with a twin-screw extruder, to use them as feedstock filaments for 3D printing. The study consisted of two parts: the first evaluated the antibacterial activity of the newly prepared PLA and PBS biocomposites reinforced with the sintered seashell, and the second provided an experimental and modeling analysis of the non-isothermal crystallization kinetics of these biocomposites. In the first part, the bactericidal activity of the biocomposites against three different bacteria, namely the Gram-negative E. coli and Pseudomonas aeruginosa and the Gram-positive Staphylococcus aureus, was examined. The PLA-based biocomposite containing 20 wt.% of SSh particles exhibited inhibition zones with radial diameters of 8 mm and 6 mm against E. coli and P. aeruginosa, respectively, while no antibacterial activity was observed against Staphylococcus aureus. In the second part, the focus was on investigating the effect of the sintered seashell filler particles on the non-isothermal crystallization kinetics of the PLA and PBS 3D-printing composite materials. The objective was to understand the impact of the filler particles on the crystallization mechanism of both PLA and PBS during the cooling of a melt-extruded filament in FFF, in order to manage the dimensional accuracy and mechanical properties of the final printed part. We conducted a non-isothermal melt crystallization kinetic study of a series of PLA-SSh and PBS-SSh composites using differential scanning calorimetry at various cooling rates. We analyzed the obtained kinetic data using different crystallization kinetic models, namely the modified Avrami, Ozawa, and Mo methods. In dynamic mode, which describes the relative crystallinity as a function of temperature, the half-crystallization time (t1/2) of neat PLA decreased from 17 min to 7.3 min for PLA with 5 wt.% SSh, and the t1/2 of virgin PBS was reduced from 3.5 min to 2.8 min for the composite containing 5 wt.% of SSh. We found that the SSh particles coated with stearic acid acted as nucleating agents and exhibited nucleation activity, as observed through polarized optical microscopy. Moreover, we evaluated the effective energy barrier of the non-isothermal crystallization process using the isoconversional methods of Flynn-Wall-Ozawa (F-W-O) and Kissinger-Akahira-Sunose (K-A-S). The study provides significant insights into the crystallization behavior of PLA and PBS biocomposites.
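
As a pointer to how the modified-Avrami analysis mentioned above proceeds for one cooling rate, the sketch below linearises the Avrami equation, fits the exponent n and rate constant Z, and reads off the half-crystallization time t1/2. The crystallinity data points are synthetic placeholders, not the DSC measurements from the study.

```python
# Minimal sketch of a modified-Avrami fit for one cooling rate: ln(-ln(1-X)) vs ln(t)
# gives the Avrami exponent n and rate constant Z; t1/2 follows from X(t1/2) = 0.5.
# The data points below are synthetic placeholders, not the study's DSC data.

import numpy as np

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])            # crystallization time, min
X = np.array([0.05, 0.25, 0.55, 0.80, 0.93, 0.98])       # relative crystallinity from the exotherm

y = np.log(-np.log(1.0 - X))
n, lnZ = np.polyfit(np.log(t), y, 1)                      # slope = n, intercept = ln Z
Z = np.exp(lnZ)

t_half = (np.log(2.0) / Z) ** (1.0 / n)                   # X(t1/2) = 0.5  =>  t1/2 = (ln2/Z)^(1/n)
print(f"n = {n:.2f}, Z = {Z:.3g}, t1/2 = {t_half:.2f} min")
```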

Keywords: Avrami model, bio-based reinforcement, DSC, Gram-negative bacteria, Gram-positive bacteria, isoconversional methods, non-isothermal crystallization kinetics, poly(butylene succinate), poly(lactic acid), antibacterial activity

Procedia PDF Downloads 81
93 Prevalence, Median Time, and Associated Factors with the Likelihood of Initial Antidepressant Change: A Cross-Sectional Study

Authors: Nervana Elbakary, Sami Ouanes, Sadaf Riaz, Oraib Abdallah, Islam Mahran, Noriya Al-Khuzaei, Yassin Eltorki

Abstract:

Major Depressive Disorder (MDD) requires therapeutic interventions during the initial month after diagnosis for better disease outcomes. International guidelines recommend a duration of 4–12 weeks for an initial antidepressant (IAD) trial at an optimized dose to obtain a response. If depressive symptoms persist after this duration, guidelines recommend switching, augmenting, or combining strategies as the next step. Most patients with MDD in the mental health setting have been labeled incorrectly as treatment-resistant when in fact they have not been subjected to an adequate trial of guideline-recommended therapy. Premature discontinuation of the IAD due to ineffectiveness can have unfavorable consequences. Avoiding irrational practices such as subtherapeutic IAD doses, premature switching between antidepressants, and unjustified polypharmacy can help the disease go into remission. We aimed to determine the prevalence and the patterns of strategies applied after an IAD was changed because of a suboptimal response as the primary outcome. Secondary outcomes included the median survival time on the IAD before any change and the predictors associated with IAD change. This was a retrospective cross-sectional study conducted in the Mental Health Services in Qatar. A dataset between January 1, 2018, and December 31, 2019, was extracted from the electronic health records. Inclusion and exclusion criteria were defined and applied. The sample size was calculated to be at least 379 patients. Descriptive statistics were reported as frequencies and percentages, in addition to means and standard deviations. The median time on the IAD before any change strategy was calculated using survival analysis. Associated predictors were examined using unadjusted and adjusted Cox regression models. A total of 487 patients met the inclusion criteria of the study. The average age of the participants was 39.1 ± 12.3 years. Patients experiencing a first MDD episode (255; 52%) constituted the major part of our sample compared to the relapse group (206; 42%). About 431 (88%) of the patients had their IAD changed to some strategy before the end of the study. Almost half of the sample (212 (49%); 95% CI [44–53%]) had their IAD changed within 30 days or less. Switching was consistently more common than combination or augmentation at any timepoint. The median time to IAD change was 43 days, with a 95% CI of [33.2–52.7]. Five independent variables (age, bothersome side effects, non-optimization of the dose before the change, comorbid anxiety, and first-onset episode) were significantly associated with the likelihood of IAD change in the unadjusted analysis. The factors statistically associated with a higher hazard of IAD change in the adjusted analysis were younger age, non-optimization of the IAD dose before the change, and comorbid anxiety. Because almost half of the patients in this study changed their IAD as early as within the first month, efforts to avoid treatment failure are needed to ensure that patient-treatment targets are met. The findings of this study can provide direct clinical guidance for health care professionals, since optimized, evidence-based use of antidepressant medication can improve the clinical outcomes of patients with MDD; they also identify high-risk factors, such as young age and comorbid anxiety, that can shorten the survival time on the IAD.
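
As an illustration of the survival analysis described above, the sketch below uses the lifelines library to obtain a Kaplan-Meier median time on the initial antidepressant and Cox proportional-hazards ratios for a few predictors. The data frame, column names, and values are invented for the example and do not come from the study's dataset.

```python
# Hedged illustration of the survival analysis (lifelines assumed available; all data and
# column names are invented placeholders, not the Qatar dataset).

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "days_on_iad":      [20, 43, 65, 30, 120, 15, 90, 55, 44, 75],  # time to change or censoring
    "changed":          [1, 1, 1, 1, 0, 1, 0, 1, 1, 1],             # 1 = IAD changed, 0 = censored
    "age":              [25, 31, 45, 52, 38, 29, 60, 41, 35, 48],
    "dose_optimized":   [0, 1, 1, 0, 1, 0, 1, 1, 0, 1],
    "comorbid_anxiety": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
})

# Kaplan-Meier estimate of the median time to IAD change
kmf = KaplanMeierFitter().fit(df["days_on_iad"], event_observed=df["changed"])
print("median time to IAD change:", kmf.median_survival_time_)

# Cox proportional-hazards model: exp(coef) is the hazard ratio per covariate
cph = CoxPHFitter().fit(df, duration_col="days_on_iad", event_col="changed")
print(cph.summary[["coef", "exp(coef)", "p"]])
```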

Keywords: initial antidepressant, dose optimization, major depressive disorder, comorbid anxiety, combination, augmentation, switching, premature discontinuation

Procedia PDF Downloads 151
92 Multifield Problems in 3D Structural Analysis of Advanced Composite Plates and Shells

Authors: Salvatore Brischetto, Domenico Cesare

Abstract:

Major improvements in future aircraft and spacecraft could depend on an increasing use of conventional and unconventional multilayered structures embedding composite materials, functionally graded materials, piezoelectric or piezomagnetic materials, and soft foam or honeycomb cores. Layers made of such materials can be combined in different ways to obtain structures that are able to fulfill several structural requirements. The next generation of aircraft and spacecraft will be manufactured as multilayered structures under the action of a combination of two or more physical fields. In multifield problems for multilayered structures, several physical fields (thermal, hygroscopic, electric, and magnetic ones) interact with each other with different levels of influence and importance. An exact 3D shell model is proposed here for these types of analyses. This model is based on a coupled system including the 3D equilibrium equations, the 3D Fourier heat conduction equation, the 3D Fick diffusion equation, and the electric and magnetic divergence equations. The set of partial differential equations of second order in z is written using a mixed curvilinear orthogonal reference system valid for spherical and cylindrical shell panels, cylinders, and plates. The partial differential equations are reduced to first order by doubling the number of variables. The solution in the thickness direction z is obtained by means of the exponential matrix method and the correct imposition of interlaminar continuity conditions in terms of displacements, transverse stresses, electric and magnetic potentials, temperature, moisture content, and transverse normal multifield fluxes. The investigated structures have simply supported sides in order to obtain a closed-form solution in the in-plane directions. Moreover, a layerwise approach is proposed which allows a correct 3D description of multilayered anisotropic structures subjected to field loads. Several results will be proposed in tabular and graphical form to evaluate displacements, stresses, and strains when mechanical loads, temperature gradients, moisture content gradients, electric potentials, and magnetic potentials are applied at the external surfaces of the structures in steady-state conditions. In the case of inclusion of piezoelectric and piezomagnetic layers in the multilayered structures, so-called smart structures are obtained. In this case, a free vibration analysis in open- and closed-circuit configurations and a static analysis for sensor and actuator applications will be proposed. The proposed results will be useful for better understanding the physical and structural behaviour of multilayered advanced composite structures in the case of multifield interactions. Moreover, these analytical results could be used as reference solutions for those scientists interested in the development of 3D and 2D numerical shell/plate models based, for example, on the finite element approach or on the differential quadrature methodology. The correct imposition of geometrical and load boundary conditions, interlaminar continuity conditions, and the description of the zigzag behaviour due to transverse anisotropy will also be discussed and verified.
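
As a schematic of the exponential matrix step described above, the sketch below propagates a state vector through the thickness of a three-layer laminate once the governing equations have been reduced to a first-order system dX/dz = A_k X in each layer k. The layer matrices, thicknesses, and state-vector size are random placeholders standing in for the coefficient matrices of the actual model.

```python
# Schematic sketch of the exponential matrix method: within each layer k the first-order
# system dX/dz = A_k X has the solution X(z) = expm(A_k z) X(0), so the state vector is
# propagated bottom-to-top as X(top) = expm(A_N h_N) ... expm(A_1 h_1) X(bottom), which
# carries the continuity of the chosen primary variables across interfaces. All matrices
# and dimensions here are random placeholders, not the model's actual coefficients.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n_vars = 6                                    # size of the state vector (illustrative)
layer_thicknesses = [0.4e-3, 0.6e-3, 0.4e-3]  # three layers, thickness in metres (assumed)
layer_matrices = [rng.normal(scale=50.0, size=(n_vars, n_vars)) for _ in layer_thicknesses]

transfer = np.eye(n_vars)
for A_k, h_k in zip(layer_matrices, layer_thicknesses):
    transfer = expm(A_k * h_k) @ transfer     # propagate state from layer bottom to layer top

x_bottom = rng.normal(size=n_vars)            # state at the bottom surface
x_top = transfer @ x_bottom                   # state at the top surface
print(x_top)
```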

Keywords: composite structures, 3D shell model, stress analysis, multifield loads, exponential matrix method, layerwise approach

Procedia PDF Downloads 67
91 The Science of Health Care Delivery: Improving Patient-Centered Care through an Innovative Education Model

Authors: Alison C. Essary, Victor Trastek

Abstract:

Introduction: The current state of the health care system in the U.S. is characterized by an unprecedented number of people living with multiple chronic conditions, an unsustainable rise in health care costs, inadequate access to care, and wide variation in health outcomes throughout the country. An estimated two-thirds of Americans are living with two or more chronic conditions, contributing to 75% of all health care spending. In 2013, the School for the Science of Health Care Delivery (SHCD) was charged with redesigning the health care system through education and research. Faculty in business, law, and public policy, and thought leaders in health care delivery, administration, public health, and health IT created undergraduate, graduate, and executive academic programs to address this pressing need. Faculty and students work across disciplines, and with community partners and employers, to improve care delivery and increase value for patients. Methods: Curricula apply content in health care administration and operations within the clinical context. Graduate modules are team-taught by faculty across academic units to model team-based practice. Seminars, team-based assignments, faculty mentoring, and applied projects are integral to student success. Cohort-driven models enhance networking and collaboration. This observational study evaluated two years of admissions data and one year of graduate data to assess program outcomes and inform the current graduate-level curricula. Descriptive statistics include means and percentages. Results: In fall 2013, the program received 51 applications. The mean GPA of the entering class of 37 students was 3.38. Ninety-seven percent of the fall 2013 cohort successfully completed the program (n=35). Sixty-six percent are currently employed in the health care industry (n=23). Of the remaining 12 graduates, two successfully matriculated to medical school, one works in the original field of study, four await results on the MCAT or DAT, and five were lost to follow-up. Attrition of one student was attributed to non-academic reasons. In fall 2014, the program expanded to include both on-ground and online cohorts. Applications were evenly distributed between on-ground (n=70) and online (n=68). Thirty-eight students enrolled in the on-ground program. The mean GPA was 3.95. Ninety-five percent of students successfully completed the program (n=36). Thirty-six students enrolled in the online program. The mean GPA was 3.85. Graduate outcomes are pending. Discussion: Challenges include demographic variability between online and on-ground students; yet both profiles are similar in that students intend to become change agents in the health care system. In the past two years, on-ground applications increased by 31%, persistence to graduation is >95%, the mean GPA is 3.67, graduates report admission to six U.S. medical schools, the Mayo Medical School integrates SHCD content within its curricula, and there is national interest in collaborating on industry and academic partnerships. This places SHCD at the forefront of developing innovative curricula in order to improve high-value, patient-centered care.

Keywords: delivery science, education, health care delivery, high-value care, innovation in education, patient-centered

Procedia PDF Downloads 282
90 Near-Peer Mentoring/Curriculum and Community Enterprise for Environmental Restoration Science

Authors: Lauren B. Birney

Abstract:

The BOP-CCERS (Billion Oyster Project–Curriculum and Community Enterprise for Restoration Science) Near-Peer Mentoring Program provides a long-term (five-year) support network to motivate and guide students toward restoration-science-based CTE pathways. Students are selected from middle schools with actively participating BOP-CCERS teachers. Teachers will nominate students from grades 6-8 to join cohorts of between 10 and 15 students each. Cohorts are composed primarily of students from the same school in order to facilitate mentors' travel logistics as well as to sustain connections with students and their families. Each cohort is matched with an exceptional undergraduate or graduate student, either a BOP research associate or a STEM mentor recruited from collaborating City University of New York (CUNY) partner programs. In rare cases, an exceptional high school junior or senior may be matched with a cohort in addition to a research associate or graduate student. In no case will a high school student or minor be placed individually with a cohort. Mentors meet with students at least once per month and provide at least one offsite field visit per month, either to a local STEM Hub or to a research lab. In keeping with its five-year trajectory, the near-peer mentoring program will seek to retain students in the same cohort with the same mentor for the full duration of middle school and for at least two additional years of high school. When a mentee reaches the final quarter of 8th grade, the mentor will develop a meeting plan for that individual mentee. The mentee and the mentor will be required to meet individually or in small groups once per month. Once per quarter, individual meetings will be replaced by full-cohort professional outings, in which the mentor organizes the entire cohort for a field visit or educational workshop with a museum or aquarium partner. In addition to the mentor-mentee relationship, each participating student will also be asked to conduct and present his or her own BOP field research. This research is ideally carried out with the support of the student's regular high school STEM subject teacher; however, in cases where the teacher or school does not permit independent study, the student will be asked to conduct the research on an extracurricular basis. Near-peer mentoring affects students’ social identities and helps them to connect to role models from similar groups, ultimately giving them a sense of belonging. Qualitative and quantitative analyses were performed throughout the study, and interviews and focus groups were conducted. Additionally, an external evaluator was utilized to ensure efficacy, efficiency, and effectiveness throughout the entire project. The BOP-CCERS Near-Peer Mentoring Program is a peer support network in which high school students with interest or experience in BOP (Billion Oyster Project) topics and activities (such as classroom oyster tanks, STEM Hubs, or digital platform research) provide mentorship and support for middle school or high school freshman mentees. Peer mentoring not only empowers the students being taught but also increases the content knowledge and engagement of the mentors. This support provides the necessary resources, structure, and tools to assist students in finding success.

Keywords: STEM education, environmental science, citizen science, near peer mentoring

Procedia PDF Downloads 91
89 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra that would otherwise superimpose within a single energy peak and, as such, could compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when radionuclides and their activity concentrations are being determined and high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one must first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and its geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if it is not properly taken into account. In this study, the optimisation of the models of two HPGe detectors through the implementation of the Geant4 toolkit developed at CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, the inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges of 59.4–1836.1 keV and 59.4–1212.9 keV, respectively.
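
To make the efficiency comparison concrete, the sketch below computes an experimental full-energy peak efficiency from a point-source measurement and its relative deviation from a simulated value, which is the kind of quantity the Geant4 model optimisation aims to reduce. All numerical values are placeholders, not data from the study.

```python
# Minimal sketch (values are placeholders, not the study's data): experimental FEP
# efficiency from a point-source measurement and its relative deviation from a simulated
# efficiency, the figure of merit typically minimised when tuning the detector model.

def fep_efficiency(net_peak_counts, live_time_s, activity_bq, emission_probability):
    """Experimental FEP efficiency = net peak counts / photons emitted at that energy."""
    return net_peak_counts / (activity_bq * live_time_s * emission_probability)

eff_exp = fep_efficiency(net_peak_counts=152_400, live_time_s=3600,
                         activity_bq=12_000, emission_probability=0.851)  # a 662 keV-like line
eff_sim = 0.00420                                     # efficiency from the Geant4 model (assumed)

deviation = (eff_sim - eff_exp) / eff_exp * 100.0
print(f"experimental = {eff_exp:.5f}, simulated = {eff_sim:.5f}, deviation = {deviation:+.1f}%")
```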

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 119
88 Solar-Electric Pump-out Boat Technology: Impacts on the Marine Environment, Public Health, and Climate Change

Authors: Joy Chiu, Colin Hemez, Emma Ryan, Jia Sun, Robert Dubrow, Michael Pascucilla

Abstract:

The popularity of recreational boating is on the rise in the United States, which raises numerous national-level challenges in the management of air and water pollution, aquatic habitat destruction, and waterway access. The need to control sewage discharge from recreational vessels underlies all of these challenges. The release of raw human waste into aquatic environments can lead to eutrophication and algal blooms; can increase human exposure to pathogenic viruses, bacteria, and parasites; can financially impact commercial shellfish harvests/fisheries and marine bathing areas; and can negatively affect access to recreational and/or commercial waterways to the detriment of local economies. Because of the damage that unregulated sewage discharge can do to environments, human health, and marine life, recreational vessels in the United States are required by law to pump out sewage from their holding tanks into sewage treatment systems in all designated 'no discharge areas'. Many pump-out boats, which transfer waste out of recreational vessels, are operated and maintained using funds allocated through the Federal Clean Vessel Act (CVA). The East Shore District Health Department of Branford, Connecticut, is protecting its estuary by pioneering the design and construction of the first-in-the-nation zero-emissions solar-electric pump-out boat of its size, which will replace one of its older traditional gasoline-powered models through a Connecticut Department of Energy and Environmental Protection CVA grant. This study, conducted in collaboration with the East Shore District Health Department, the Connecticut Department of Energy and Environmental Protection, the States Organization for Boating Access, and Connecticut’s CVA program coordinators, had two aims: (1) to perform a national assessment of pump-out boat programs, supplemented by a limited international assessment, to establish best pump-out boat practices (regardless of how the boat is powered); and (2) to estimate the cost, greenhouse gas emissions, and environmental and public health impacts of solar-electric versus traditional gasoline-powered pump-out boats. A national survey was conducted of all CVA-funded pump-out program managers and selected pump-out boat operators to gauge best practices; costs associated with gasoline-powered pump-out boat operation and management; and the regional, cultural, and policy-related issues that might arise from the adoption of solar-electric pump-out boat technology. We also conducted life-cycle analyses of gasoline-powered and solar-electric pump-out boats to compare their greenhouse gas emissions; production of air, soil, and water pollution; and impacts on human health. This work comprises the most comprehensive study of pump-out boating practices in the United States to date, in which information obtained at the local, state, national, and international levels is synthesized. It aims to enable CVA programs to make informed recommendations for sustainable pump-out boating practices and identifies the challenges and opportunities that remain for the wide adoption of solar-electric pump-out boat technology.
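
As a rough, purely illustrative sketch of the type of operating-emissions comparison that feeds into such a life-cycle analysis, the snippet below contrasts a gasoline-powered and a partly solar-charged electric pump-out boat over one season. Every figure (fuel use, electricity use, solar fraction, and emission factors) is an assumption for the example, not a result of the study.

```python
# Rough, illustrative operating-emissions comparison (all figures are assumptions for the
# sketch, not results of the study's life-cycle analysis).

GASOLINE_KG_CO2E_PER_L = 2.3      # typical combustion emission factor for gasoline
GRID_KG_CO2E_PER_KWH = 0.35       # assumed regional grid carbon intensity

def gasoline_emissions(litres_per_season):
    """Seasonal CO2e from burning gasoline in the pump-out boat engine."""
    return litres_per_season * GASOLINE_KG_CO2E_PER_L

def electric_emissions(kwh_per_season, solar_fraction):
    """Seasonal CO2e from the grid-charged share of an otherwise solar-charged battery."""
    return kwh_per_season * (1.0 - solar_fraction) * GRID_KG_CO2E_PER_KWH

print("gasoline boat:", gasoline_emissions(1200), "kg CO2e/season")
print("solar-electric boat:", electric_emissions(900, solar_fraction=0.8), "kg CO2e/season")
```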

Keywords: pump-out boat, marine water, solar-electric, zero emissions

Procedia PDF Downloads 128
87 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer’s Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer’s disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over the decades before reaching clinically diagnosable LOAD, and studies have utilised candidate gene studies, genome-wide association studies (GWAS), and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations between PGR for LOAD and MRI or cognitive measures have focused on specific aspects of hippocampal structure, in relatively small samples and with poor control for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are, to our knowledge, larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes. Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution, and prospective memory. The interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1, or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression, and social deprivation. Preliminary results suggest that the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with worse cognitive performance, including fluid intelligence (standardised beta = -0.08, P < 0.01) and reaction time (standardised beta = 2.04, P < 0.01). No genetic interactions were found between APOE e4 dose and the PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and they have fewer self-reported health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
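
The sketch below illustrates the kind of analysis plan described above: a weighted polygenic risk score is built from 21 per-SNP effect sizes, and an MRI outcome is regressed on the score with a PGR x APOE-e4 interaction and partial adjustment for covariates. All data, effect sizes, and variable names are simulated placeholders, not UK Biobank or Kunkle et al. values.

```python
# Hedged sketch of the analysis plan: weighted polygenic risk score from 21 per-SNP betas,
# then a regression of an MRI outcome on PGR x APOE-e4 with partial covariate adjustment.
# Everything here is simulated; no real genotype, GWAS, or imaging data are used.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
betas = rng.normal(0, 0.1, size=21)                      # per-SNP effect sizes (assumed)
dosages = rng.binomial(2, 0.3, size=(n, 21))             # risk-allele counts per participant

df = pd.DataFrame({
    "pgr": (dosages * betas).sum(axis=1),                # weighted polygenic risk score
    "apoe_e4": rng.binomial(2, 0.15, size=n),            # number of e4 alleles: 0, 1 or 2
    "age": rng.normal(63, 7, size=n),
    "sex": rng.binomial(1, 0.5, size=n),
    "hippocampal_body": rng.normal(0, 1, size=n),        # standardised MRI outcome
})

model = smf.ols("hippocampal_body ~ pgr * apoe_e4 + age + sex", data=df).fit()
print(model.summary().tables[1])                         # pgr, apoe_e4 and pgr:apoe_e4 terms
```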

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 113
86 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by David Wood's work on small wind turbine design. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before and improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for the engineering optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that would optimize the coefficient-of-power profile of the turbine. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions necessary to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations. Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than for a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted as classroom research activities to enhance Calculus sequence instruction at LaGCC.
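
To illustrate the statistical half of the project, the sketch below fits a two-parameter Weibull distribution, a common parametric model for wind-speed data, to a synthetic sample by maximum likelihood. The data and parameter values are placeholders, not the rooftop measurements from the study.

```python
# Illustrative sketch: maximum-likelihood fit of a two-parameter Weibull distribution to a
# synthetic wind-speed sample (not the LaGuardia rooftop data), with the location parameter
# fixed at zero as is standard for wind-speed modelling.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
wind_speeds = stats.weibull_min.rvs(c=2.0, scale=5.5, size=2000, random_state=rng)  # m/s

shape, loc, scale = stats.weibull_min.fit(wind_speeds, floc=0)   # MLE with loc fixed at 0
loglik = np.sum(stats.weibull_min.logpdf(wind_speeds, shape, loc, scale))

print(f"shape k = {shape:.2f}, scale A = {scale:.2f} m/s, log-likelihood = {loglik:.1f}")
print(f"implied mean wind speed = {stats.weibull_min.mean(shape, loc, scale):.2f} m/s")
```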

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 231
85 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary to collect reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). A questionnaire was also used to collect socio-demographic information (age, gender, education) on the subjects, as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity in order to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause (length, number, distribution, location, etc.) and revision (number, type, operation, embeddedness, location, etc.) characteristics. As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analyses already show some promising results concerning pause times before, within, and after words. For all variables, mixed-effects models were used that included participant as a random effect and MMSE scores, GDS scores, and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
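
As an illustration of the mixed-effects analysis described above, the sketch below models pause time with group, word category, and their interaction as fixed effects and participant as a random intercept, using statsmodels. The data frame is a simulated placeholder with the same structure as the Inputlog-derived variables, not the study data.

```python
# Minimal sketch of the mixed-effects analysis: pause time ~ group * word_category with a
# random intercept per participant. The data are simulated with the same structure as the
# Inputlog-derived variables; they are not the study's measurements.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
participants = np.repeat(np.arange(26), 40)                       # 13 patients + 13 controls
group = np.where(participants < 13, "impaired", "healthy")
word_category = rng.choice(["noun", "determiner", "verb"], size=participants.size)

pause_ms = (400 + 150 * (group == "impaired")                     # longer pauses for patients
            + rng.normal(0, 80, size=participants.size)           # residual noise
            + np.repeat(rng.normal(0, 50, 26), 40))               # participant random intercept

df = pd.DataFrame({"participant": participants, "group": group,
                   "word_category": word_category, "pause_ms": pause_ms})

model = smf.mixedlm("pause_ms ~ group * word_category", df, groups=df["participant"]).fit()
print(model.summary())
```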

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 366
84 Climate Safe House: A Community Housing Project Tackling Catastrophic Sea Level Rise in Coastal Communities

Authors: Chris Fersterer, Col Fay, Tobias Danielmeier, Kat Achterberg, Scott Willis

Abstract:

New Zealand, an island nation, has an extensive coastline peppered with small communities of iconic buildings known as baches. Post-WWII, these modest buildings were constructed by their owners as retreats; they were generally small and low cost, often used recycled materials, and often fell below currently acceptable building standards. In the latter part of the 20th century, real estate prices in many of these communities remained low, and these areas became permanent residences for people attracted to this affordable lifestyle choice. The Blueskin Resilient Communities Trust (BRCT) is an organisation that recognises the vulnerability of communities in low-lying settlements, which are now prone to an increased flood threat brought about by climate change and sea-level rise. Some of the inhabitants of Blueskin Bay, Otago, NZ have already found their properties to be uninsurable because of the increased frequency of flood events, and property values have slumped accordingly. Territorial authorities also acknowledge this increased risk and have created additional compliance measures for new buildings that are less than 2 m above tidal peaks. Community resilience becomes an additional concern where inhabitants are attracted to a lifestyle associated with a specific location and its people, when this lifestyle cannot be met in a suburban or city context. Traditional models of social housing fail to provide the sense of community connectedness and identity enjoyed by the current residents of Blueskin Bay. BRCT has partnered with the Otago Polytechnic Design School to design a new form of community housing that can react to this environmental change. It is a longitudinal project incorporating participatory approaches as a means of getting people ‘on board’, to understand complex systems and co-develop solutions. In the first period, the partners are seeking industry support and funding to develop a transportable and fully self-contained housing model that exploits current technologies. BRCT also hopes that the building will become an educational tool to highlight the climate change issues facing us today. This paper uses the Climate Safe House (CSH) as a case study for education in architectural sustainability through experiential learning offered as part of the Otago Polytechnic Bachelor of Design. Students engage with the project through research methodologies, including site surveys, resident interviews, data sourced from government agencies, and physical modelling. The process involves collaboration across design disciplines, including product and interior design, as well as connections with industry, both within the education institution and with stakeholder industries introduced through BRCT. This project offers a rich learning environment where students become engaged through project-based learning within a community of practice spanning architecture, construction, energy, and other related fields. The design outcomes are expressed in a series of public exhibitions and forums where community input is sought in a truly participatory process.

Keywords: community resilience, problem based learning, project based learning, case study

Procedia PDF Downloads 288