Search results for: magnetic modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3100


160 Winter Wheat Yield Forecasting Using Sentinel-2 Imagery at the Early Stages

Authors: Chunhua Liao, Jinfei Wang, Bo Shan, Yang Song, Yongjun He, Taifeng Dong

Abstract:

Winter wheat is one of the main crops in Canada, and forecasting within-field yield variability at the early growth stages is essential for precision farming. However, crop yield modelling based on high-spatial-resolution satellite data is generally hampered by the lack of continuous satellite observations, which reduces the generalization ability of the models and makes early-stage yield forecasting difficult. In this study, the correlations between Sentinel-2 data (vegetation indices and reflectance) and yield data collected by combine harvester were investigated, and a generalized multivariate linear regression (MLR) model was built and tested with data acquired in different years. The four-band reflectance (blue, green, red, near-infrared) performed better in wheat yield prediction than the vegetation indices derived from it (NDVI, EVI, WDRVI and OSAVI), and prediction using multiple vegetation indices was more accurate than using a single index. The optimum stage for yield forecasting varied between fields when vegetation indices were used, but was consistent when multispectral reflectance was used: the end of the flowering stage, just before grain filling. The best MLR model was therefore built to predict wheat yield before harvest using Sentinel-2 data acquired at the end of flowering; its average testing RMSE was 604.48 kg/ha. To improve prediction at earlier stages, three simple unsupervised domain adaptation (DA) methods were adopted to transform reflectance data acquired at the early stages to the optimum phenological stage. Near the booting stage, applying the mean matching (MM) domain adaptation approach to transform the data to the target domain (end of flowering) reduced the average testing RMSE of the best MLR to 799.18 kg/ha, compared with 1140.64 kg/ha for models developed directly on the original booting-stage data ("MLR at the early stage"). MM performed better than the other DA methods, and "DA then MLR at the optimum stage" outperformed "MLR directly at the early stages" for winter wheat yield forecasting at the early stages. These results indicate that simple domain adaptation methods have great potential for near real-time crop yield forecasting at the early stages using remote sensing data.
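The mean matching step can be sketched numerically: each reflectance band observed at the early stage is shifted so that its mean equals the target-domain (end-of-flowering) mean, and the MLR trained at the optimum stage is then applied unchanged. The data and band coefficients below are synthetic, invented for illustration; only the mean-matching transform itself follows the approach named above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical 4-band reflectance (blue, green, red, NIR) per pixel.
X_flowering = rng.uniform(0.02, 0.45, size=(200, 4))          # target domain
yield_kg_ha = X_flowering @ [900, -400, -1500, 2600] + 4000   # synthetic yields

# MLR trained at the optimum stage (end of flowering).
mlr = LinearRegression().fit(X_flowering, yield_kg_ha)

# Reflectance observed earlier, at booting: a systematically offset domain.
X_booting = X_flowering * 0.8 + 0.05

# Mean matching: shift each band so its mean equals the target-domain mean.
X_adapted = X_booting - X_booting.mean(axis=0) + X_flowering.mean(axis=0)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(rmse(mlr.predict(X_adapted), yield_kg_ha)
      <= rmse(mlr.predict(X_booting), yield_kg_ha))
```

Because mean matching removes the systematic offset between domains while the regression stays fixed, the adapted RMSE cannot exceed the unadapted one in this noise-free linear setting.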

Keywords: wheat yield prediction, domain adaptation, Sentinel-2, within-field scale

Procedia PDF Downloads 38
159 Development of a Framework for Assessing Public Health Risk Due to Pluvial Flooding: A Case Study of Sukhumvit, Bangkok

Authors: Pratima Pokharel

Abstract:

When sewers overflow due to rainfall in urban areas, individuals exposed to the contaminated floodwater face public health risks. Nevertheless, the extent to which the resulting infections threaten public health remains unclear. This study analyzed reported diarrheal cases by month and age in Bangkok, Thailand. Cases were reported more often in the wet season than in the dry season, and in Bangkok the probability of infection with diarrheal diseases in the wet season is higher for the age group 15 to 44. The probability of infection is highest for children under 5 years, but that group is not influenced by wet weather. The study then examined the vulnerability factors that contribute to health risks from urban flooding and selected two variables for vulnerability analysis: economic status and age. Assuming that people's economic status depends on the type of house they live in, the study mapped the spatial distribution of economic status in the vulnerability maps; the result shows that, with respect to housing type, people living in Sukhumvit have low vulnerability to health risks. In addition, the probability of diarrheal infection by age was analyzed. A field survey carried out to validate the vulnerability assessment showed that health vulnerability depends on economic status, income level, and education, and that people with low income and poor living conditions are more vulnerable to health risks. Further, the study carried out 1D hydrodynamic advection-dispersion modelling with 2-year rainfall events to simulate the dispersion of fecal coliform concentration in the drainage network, as well as 1D/2D hydrodynamic modelling to simulate the overland flow. The 1D results show higher concentrations during dry-weather flows and a large dilution at the commencement of a rainfall event, when the runoff generated after rainfall lowers the concentration. The model produced flood depth, flood duration, and fecal coliform concentration maps, which were transferred to ArcGIS to produce hazard and risk maps. The study also ran 5-year and 10-year rainfall simulations to show the variation in health hazards and risks. Although hazard coverage was greatest for the 10-year rainfall event among the three events, the risk was the same for the 5-year and 10-year events.
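The 1D advection-dispersion transport that underlies the drainage-network simulation can be sketched with an explicit finite-difference scheme for dC/dt = -u dC/dx + D d²C/dx². The reach geometry, velocity, dispersion coefficient, and boundary concentration below are illustrative stand-ins, not values from the Sukhumvit model.

```python
import numpy as np

# Explicit upwind/central scheme for 1D advection-dispersion of a tracer
# (e.g. fecal coliform) along a hypothetical sewer reach.
nx, dx, dt = 200, 5.0, 1.0       # grid cells, cell length (m), time step (s)
u, D = 0.5, 2.0                  # flow velocity (m/s), dispersion (m2/s)
C = np.zeros(nx)
C[0] = 1e6                       # upstream boundary concentration (CFU/100 mL)

for _ in range(500):
    adv = -u * (C[1:-1] - C[:-2]) / dx                 # upwind advection
    dis = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2   # central dispersion
    C[1:-1] += dt * (adv + dis)
    C[0] = 1e6                                         # hold boundary fixed

# The front has travelled roughly u*t = 250 m; cell 50 sits at the front.
print(0.0 < C[50] < 1e6)
```

With these parameters the scheme satisfies the positivity condition (u dt/dx + 2D dt/dx² < 1), so concentrations stay bounded between zero and the boundary value.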

Keywords: urban flooding, risk, hazard, vulnerability, health risk, framework

Procedia PDF Downloads 41
158 A Meta-Analysis of School-Based Suicide Prevention for Adolescents and Meta-Regressions of Contextual and Intervention Factors

Authors: E. H. Walsh, J. McMahon, M. P. Herring

Abstract:

Post-primary school-based suicide prevention (PSSP) is a valuable avenue for reducing suicidal behaviours in adolescents. The aims of this meta-analysis and meta-regression were 1) to quantify the effect of PSSP interventions on adolescent suicide ideation (SI) and suicide attempts (SA), and 2) to explore how intervention effects may vary with important contextual and intervention factors. The study provides further support for the benefits of PSSP by demonstrating lower suicide outcomes in over 30,000 adolescents following PSSP and mental health interventions, and tentatively suggests that intervention effectiveness may vary with intervention factors. The protocol is registered on PROSPERO (ID = CRD42020168883). Population, intervention, comparison, outcomes, and study design (PICOS) criteria defined eligible studies as cluster randomised studies (n = 12) containing PSSP and measuring suicide outcomes. The EBSCOhost aggregate databases, Web of Science, and the Cochrane Central Register of Controlled Trials were searched. Cochrane bias tools for cluster randomised studies rated half of the studies as low risk of bias. Egger's regression test adapted for multi-level modelling indicated that publication bias was not an issue (all ps > .05). Crude and corresponding adjusted pooled log odds ratios (OR) were computed using the metafor package in R, yielding 12 SA and 19 SI effects. Multi-level random-effects models accounting for dependencies of effects from the same study revealed that, in crude models, interventions were significantly associated with 13% (OR = 0.87, 95% confidence interval (CI) [0.78, 0.96], Q18 = 15.41, p = 0.63) and 34% (OR = 0.66, 95% CI [0.47, 0.91], Q10 = 16.31, p = 0.13) lower odds of SI and SA, respectively, compared to controls. Adjusted models showed similar odds reductions of 15% (OR = 0.85, 95% CI [0.75, 0.95], Q18 = 10.04, p = 0.93) and 28% (OR = 0.72, 95% CI [0.59, 0.87], Q10 = 10.46, p = 0.49) for SI and SA, respectively. Within-cluster heterogeneity for SA ranged from none to low across crude and adjusted models (0-9%); no heterogeneity was identified for SI (0%). Pre-specified univariate moderator analyses were not significant for SA (all ps > 0.05), although variations in average pooled SA odds reductions across categories of various intervention characteristics were observed, preliminarily suggesting that the effectiveness of interventions may vary across intervention factors. These findings have practical implications for researchers, clinicians, educators, and decision-makers. Further investigation of important logical, theoretical, and empirical moderators of PSSP intervention effectiveness is recommended to establish how and when PSSP interventions best reduce adolescent suicidal behaviour.
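The way a pooled log odds ratio translates into an "X% lower odds" statement can be illustrated with simple inverse-variance pooling. The study itself fitted multi-level random-effects models in R's metafor package; the ORs and standard errors below are invented for illustration only.

```python
import math

# Simplified fixed-effect, inverse-variance pooling of study log odds ratios.
# Illustrative study-level ORs and standard errors (not the study data).
log_or = [math.log(o) for o in (0.80, 0.90, 0.85, 0.95)]
se = [0.10, 0.12, 0.08, 0.15]

w = [1 / s**2 for s in se]                       # inverse-variance weights
pooled = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
pooled_or = math.exp(pooled)
pct_lower_odds = (1 - pooled_or) * 100           # e.g. OR = 0.87 -> 13% lower odds

print(round(pooled_or, 2), round(pct_lower_odds, 1))
```

A multi-level random-effects model adds between-study and within-study variance components to these weights, but the back-transformation from pooled log OR to a percentage odds reduction is the same.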

Keywords: adolescents, contextual factors, post-primary school-based suicide prevention, suicide ideation, suicide attempts

Procedia PDF Downloads 81
157 Measuring the Impact of Implementing an Effective Practice Skills Training Model in Youth Detention

Authors: Phillipa Evans, Christopher Trotter

Abstract:

Aims: This study aims to examine the effectiveness of a practice skills framework implemented in three youth detention centres run by Juvenile Justice in New South Wales (NSW), Australia. The study is supported by a grant from the Australian Research Council and NSW Juvenile Justice. Recent years have seen a number of incidents in youth detention centres in Australia and elsewhere. These have led to inquiries and reviews, some suggesting that detention centres often fail to meet even basic human rights standards and do little to provide opportunities for the rehabilitation of residents. While an increasing body of research suggests that community-based supervision can reduce recidivism if supervisors use appropriate skills, less work has considered worker skills in youth detention settings. The research that has been done, however, suggests that teaching interpersonal skills to youth officers may enhance the rehabilitation culture of centres. Positive outcomes were seen in a UK detention centre, for example, from teaching staff to deliver five-minute problem-solving interventions. The aim of this project is to examine the effectiveness of training and coaching youth detention staff in three NSW detention centres in interpersonal practice skills, with effectiveness defined in terms of reductions in the frequency of critical incidents and improvements in the well-being of staff and young people. The research is important because the results may lead to more humane and rehabilitative experiences for young people. Method: The study involves training staff in core effective practice skills and supporting staff in the use of those skills through supervision and debriefing. The core skills include role clarification, pro-social modelling, brief problem solving, and relationship skills. The training also addresses some of the background to criminal behaviour, including trauma. Data on critical incidents and well-being before and after the program implementation are being collected through interviews with staff and young people, well-being scales, and departmental records of critical incidents such as assaults, use of lock-ups and confinement, and school attendance. In addition to the before-and-after comparison, a matched control group that is not offered the intervention is being used. The study includes more than 400 young people and 100 youth officers across six centres, including the control sites. Data collection also includes analysing video recordings of centre activities for changes in staff skill use. Results: The project is currently underway with ongoing training and supervision. Early results will be available for the conference.

Keywords: custody, practice skills, training, youth workers

Procedia PDF Downloads 79
156 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes

Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter

Abstract:

Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These devices have already proven useful for photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, capturing and recording sites during and after incidents. UAVs are becoming an established tool because they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of the captured images; therefore, close-proximity flights are required for more detailed models, and as the technology advances the deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effect of UAV operation within active crime scenes on the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes beyond legislation. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine, for the dataset tested, the flight height of least effect and the commercial propeller type that generates the smallest disturbance. A range of commercially available 4-inch propellers was chosen as a starting point because they are widely available and their small size makes them well suited to operation within confined spaces. For the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains supply and controlled via a microcontroller to mimic a complete throttle cycle and ensure repeatability. Removing the variables of battery packs and complex UAV structures allowed a more robust setup, so the only changing factors were the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to recognize some of the issues UAVs cause within active crime scenes.
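A minimal sketch of the dispersal metric, assuming (as the text implies) that particle positions before and after a throttle cycle are recovered from video and summarized as a radial displacement from the point beneath the propeller hub. The particle coordinates here are simulated, not measured, and the metric itself is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated particle (x, y) positions in metres, centred under the hub.
before = rng.normal(0.0, 0.02, size=(100, 2))

# Downwash pushes each particle 5 cm radially outward (idealized).
unit = before / np.linalg.norm(before, axis=1, keepdims=True)
after = before + 0.05 * unit

# Dispersal metric: mean change in radial distance from the hub axis.
r_before = np.linalg.norm(before, axis=1)
r_after = np.linalg.norm(after, axis=1)
mean_dispersal = float(np.mean(r_after - r_before))

print(round(mean_dispersal, 3))
```

Repeating this per propeller and per operating height would yield the "height of least effect" comparison described above.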

Keywords: dispersal, evidence, propeller, UAV

Procedia PDF Downloads 140
155 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by probabilistic seismic hazard assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. Integrating faults in PSHA can at least partially address the long-term deformation, but careful treatment of fault sources is required, particularly in low-strain-rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation, and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data must be satisfied, which is especially challenging in low-strain-rate regions where such data are scarce. Using faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled with a truncated model: earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined from the average slip rate of the fault. As highlighted by several studies, however, seismic events with magnitudes above the selected threshold may occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published; its methodology calculates the earthquake rates in a fault system by converting the slip-rate budget of each fault into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard, and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
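The slip-rate budget idea at the core of such fault-based rate models can be sketched as a moment-rate balance: the fault's annual seismic moment (shear modulus × fault area × slip rate) is spent on ruptures of a given magnitude via the standard relation M0 = 10^(1.5 Mw + 9.1) N·m. The fault dimensions and slip rate below are illustrative of a slow-deforming region, not SHERIFS inputs or outputs.

```python
# Moment-rate budget of a single fault (illustrative values).
mu = 3.0e10            # shear modulus, Pa
area = 30e3 * 12e3     # fault plane: 30 km x 12 km, in m^2
slip_rate = 0.1e-3     # 0.1 mm/yr expressed in m/yr (slow deformation)

moment_rate = mu * area * slip_rate        # accumulated moment, N*m per year

# Spend the whole budget on characteristic Mw 6.5 ruptures.
mw = 6.5
m0 = 10 ** (1.5 * mw + 9.1)                # scalar moment of one Mw 6.5 event, N*m
annual_rate = moment_rate / m0             # events per year
recurrence = 1 / annual_rate

print(round(recurrence))                   # return period in years
```

SHERIFS generalizes this balance by distributing each fault section's budget across many single-fault and fault-to-fault rupture geometries instead of a single characteristic magnitude.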

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 31
154 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry

Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker

Abstract:

Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows easy process control: the temperature may change unpredictably due to friction, so the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals, and small organic molecules, have been produced mechanochemically using TSE. These advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting product quality. To address these drawbacks, this study uses UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about product quality, combined with real-time process information in an advanced process control system (PharmaMV, Perceptive Engineering) allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behaviour of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE, and nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, relating the percent conversion to product to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV real-time system to carry out automated optimization experiments, maximizing the percentage conversion over a set of process parameters in a design-of-experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS (pseudo-random binary sequence) process response tests around the optimum was conducted. The resulting dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality while ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. In summary, a combination of online spectral monitoring and advanced process control was used to develop a robust system for the optimization and control of two TSE-based mechanosynthetic processes.

Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control

Procedia PDF Downloads 144
153 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise

Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou

Abstract:

Concern about the negative impacts of anthropogenic noise on ocean ecosystems has increased over recent decades, and with it the willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires knowledge not only of the noise from individual ships but also of how that noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae, and invertebrates depend largely on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend territory. In the marine environment, sight is only useful up to a few tens of metres, whereas sound can propagate over hundreds or even thousands of kilometres. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities and to achieve and maintain good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision-support tool for anticipating and quantifying the effectiveness of management measures in reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (marine protected area, Natura 2000 site, harbour) or global (particularly sensitive sea area) scales; seasonal (regulation over a period of time) or permanent; and partial (focused on some maritime activities) or complete (all maritime activities). Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitations are among the measures supported by the tool. Ocean-Planner helps decide on the most effective measure to apply to maintain or restore the biodiversity and functioning of coastal seabed ecosystems, keep sensitive areas in a good state of conservation, and maintain or restore populations of marine species.
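A toy version of the kind of trade-off such a tool evaluates: received level at a fixed range under a speed limit, combining an assumed power-law dependence of source level on speed with simple spherical spreading loss (20 log10 r). The source level, speed exponent, and reference speed are invented for illustration and are not Quonops parameters.

```python
import math

def received_level(speed_kn, r_m, sl_ref=170.0, speed_exp=60.0, ref_kn=12.0):
    """Received level (dB) at range r_m for a ship at speed_kn knots.

    Assumed model: source level scales with log of speed relative to a
    reference speed; propagation loss is spherical spreading only.
    """
    source_level = sl_ref + speed_exp * math.log10(speed_kn / ref_kn)
    transmission_loss = 20.0 * math.log10(r_m)      # spherical spreading
    return source_level - transmission_loss

fast = received_level(16.0, 1000.0)
slow = received_level(10.0, 1000.0)
print(round(fast - slow, 1))   # dB reduction from slowing 16 -> 10 knots
```

Real assessments replace the spreading term with full propagation modelling over bathymetry and sound-speed profiles, which is what the operational tool provides.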

Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction

Procedia PDF Downloads 89
152 Knowledge Management in the Tourism Industry in Project Management Paradigm

Authors: Olga A. Burukina

Abstract:

Tourism is a complex socio-economic phenomenon, partly regulated by national tourism industries. The sustainable development of tourism in a region, country, or tourist destination depends on a number of political, economic, social, cultural, legal, and technological factors, the understanding and correct interpretation of which is invariably anthropocentric. For a tour operating company to function successfully, its sustainable development must be ensured. Sustainable tourism is defined as tourism that fully considers its current and future economic, social, and environmental impacts, taking into account the needs of the industry, the environment, and the host communities. For the business enterprise, sustainable development means adopting business strategies and activities that meet the needs of the enterprise and its stakeholders today while protecting, sustaining, and enhancing the human and natural resources that will be needed in the future. In addition to a systemic approach to the analysis of tourist destinations, each tourism project can and should be considered as a system characterized by a very high degree of variability, since each particular case of its implementation differs from the previous and subsequent ones, sometimes in a cardinal way. At the same time, this variability is predominantly anthropogenic in nature (force majeure situations are considered separately). Knowledge management is the process of creating, sharing, using, and managing the knowledge and information of an organization; it is a multidisciplinary approach to achieving organisational objectives by making the best use of knowledge, and it is seen as a key systems component that allows obtaining, storing, transferring, and maintaining information and knowledge over the long term. The study aims, firstly, to identify (1) the dynamic changes in the Italian travel industry in the 5 years before the COVID-19 pandemic, which can be treated as force majeure circumstances, (2) the impact of the pandemic on the industry, and (3) the efforts required to restore it; and secondly, to determine how project management tools can help improve knowledge management in tour operating companies so that they maintain their sustainability, diminish potential risks, and restore their pre-pandemic performance level as soon as possible. The pilot research is based on a systems approach and employed a pilot survey, semi-structured interviews, prior research analysis (literature review), comparative analysis, cross-case analysis, and modelling. The results are encouraging: project management tools can improve knowledge management in tour operating companies and secure the more sustainable development of the Italian tourism industry based on proper knowledge and risk management.

Keywords: knowledge management, project management, sustainable development, tourism industry

Procedia PDF Downloads 134
151 Characterization of Fine Particles Emitted by the Inland and Maritime Shipping

Authors: Malika Souada, Juanita Rausch, Benjamin Guinot, Christine Bugajny

Abstract:

The growth of global commerce and tourism makes the shipping sector an important contributor to atmospheric pollution. Both airborne particles and gaseous pollutants have negative impacts on health and climate, especially in port cities, where the exposed population is close to shipping emissions in addition to the multiple other pollution sources linked to the surrounding urban activity. The objective of this study is to determine the concentrations (immission) of fine particles, specifically PM2.5, PM1, PM0.3, BC, and sulphates, in a context where maritime passenger traffic plays an important role (the port area of central Bordeaux). The methodology is based on high-temporal-resolution measurements of pollutants, correlated with meteorological and ship movement data. Particles and gaseous pollutants from seven maritime passenger ships were sampled and analysed during the docking, manoeuvring, and berthing phases. The particle mass measurements were supplemented by number concentration measurements of ultrafine particles (<300 nm diameter). The measurement points were chosen by taking into account the local meteorological conditions and by pre-modelling the dispersion of the smoke plumes. The measurement campaign carried out during the summer of 2021 in the port of Bordeaux showed that particle concentrations attributable to ships were sporadic and short-lived: peaks in ultrafine particle number concentration (P#/m3) and BC (ng/m3) were measured during the docking phases, but concentrations returned to background levels within minutes. The docking phases therefore do not appear to significantly affect the air quality of central Bordeaux in terms of mass concentration, and no clear differences in PM2.5 concentrations were observed between periods with and without ships at berth. The urban background pollution appears to be dominated mainly by exhaust and non-exhaust road traffic emissions. However, the high-temporal-resolution measurements suggest a probable emission, related to ship activities, of gaseous precursors responsible for the formation of secondary aerosols. This was evidenced by high values of the PM1/BC and PN/BC ratios, tracers of non-primary particle formation, during periods of ship berthing versus periods without ships at berth. These findings provide robust support for port-area air quality assessment and source apportionment.
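The ratio-based tracer argument can be made concrete in a few lines: if PM1/BC is systematically higher when ships are at berth, the extra PM1 is unlikely to be primary combustion particulate. The concentrations below are invented for illustration, not the Bordeaux campaign data.

```python
import numpy as np

# Hypothetical hourly means (ug/m3) during ship-berthing periods...
pm1_berth = np.array([6.0, 7.5, 6.8, 7.2])
bc_berth = np.array([0.40, 0.45, 0.42, 0.44])

# ...and during reference periods with no ships at berth.
pm1_ref = np.array([5.0, 5.5, 5.2])
bc_ref = np.array([0.50, 0.55, 0.52])

# PM1/BC ratio: BC is a primary combustion tracer, so a ratio excess
# points to particle mass not explained by primary emissions.
ratio_berth = float(np.mean(pm1_berth / bc_berth))
ratio_ref = float(np.mean(pm1_ref / bc_ref))

print(ratio_berth > ratio_ref)   # elevated ratio suggests secondary formation
```

The same comparison applied to PN/BC (particle number over BC) supports the secondary-aerosol interpretation described above.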

Keywords: characterization, fine particulate matter, harbour air quality, shipping impacts

Procedia PDF Downloads 80
150 Geochemical Evolution of Microgranular Enclaves Hosted in Cambro-Ordovician Kyrdem Granitoids, Meghalaya Plateau, Northeast India

Authors: K. Mohon Singh

Abstract:

Cambro-Ordovician (512.5 ± 8.7 Ma) felsic magmatism in the Kyrdem region of Meghalaya plateau, herewith referred to as Kyrdem granitoids (KG), intrudes the low-grade Shillong Group of metasediments and Precambrian Basement Gneissic complex forming an oval-shaped plutonic body with longer axis almost trending N-S. Thermal aureole is poorly developed or covered under the alluvium. KG exhibit very coarse grained porphyritic texture with abundant K-feldspar megacrysts (up to 9cm long) and subordinate amount of amphibole, biotite, plagioclase, and quartz. The size of K-feldspar megacrysts increases from margin (Dwarksuid) to the interior (Kyrdem) of the KG pluton. Late felsic pulses as fine grained granite, leucocratic (aplite), and pegmatite veins intrude the KG at several places. Grey and pink varieties of KG can be recognized, but pink colour of KG is the result of post-magmatic fluids, which have not affected the magnetic properties of KG. Modal composition of KG corresponds to quartz monzonite, monzogranite, and granodiorite. KG has been geochemically characterized as metaluminous (I-type) to peraluminous (S-type) granitoids. The KG is characterized by development of variable attitude of primary foliations mostly marked along the margin of the pluton and is located at the proximity of Tyrsad-Barapani lineament. The KG contains country rock xenoliths (amphibolite, gneiss, schist, etc.) which are mostly confined to the margin of the pluton, and microgranular enclaves (ME) are hosted in the porphyritic variety of KG. Microgranular Enclaves (ME) in Kyrdem Granitoids are fine- to medium grained, mesocratic to melanocratic, phenocryst bearing or phenocryst-free, rounded to ellipsoidal showing typical magmatic textures. Mafic-felsic phenocrysts in ME are partially corroded and dissolved because of their involvement in magma-mixing event, and thus represent xenocrysts. 
Sharp to diffuse contacts of the ME with the host Kyrdem granitoids, their fine-grained nature, and the presence of acicular apatite in the ME suggest commingling and undercooling of coeval, semi-solidified ME magma within a partly crystalline felsic host magma. Geochemical features indicate that both the ME (molar A/CNK = 0.76-1.42) and the KG (molar A/CNK = 0.41-1.75) are similar to hybrid types formed by mixing of mantle-derived mafic and crust-derived felsic magmas. Major- and trace-element (including rare earth element) variations of the ME suggest the involvement of combined processes, such as magma mixing, mingling, and crystallization differentiation, in the evolution of the ME, whereas the KG variations appear primarily controlled by fractionation of plagioclase, hornblende, biotite, and accessory phases. Most ME have partially to nearly re-equilibrated chemically with the felsic host KG during the magma mixing and mingling processes.
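The molar A/CNK index quoted above (the aluminium saturation index used to separate metaluminous from peraluminous compositions) can be computed directly from whole-rock oxide analyses. A minimal sketch, using purely hypothetical oxide weight percentages for illustration (the molar masses are standard):

```python
# Standard molar masses (g/mol) of the oxides entering the A/CNK index.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def a_cnk(wt_pct):
    """Molar A/CNK = Al2O3 / (CaO + Na2O + K2O), each converted to moles."""
    mol = {ox: wt_pct[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    return mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])

# Hypothetical whole-rock analysis, not a Kyrdem sample.
sample = {"Al2O3": 14.5, "CaO": 2.1, "Na2O": 3.2, "K2O": 4.6}
print(round(a_cnk(sample), 2))  # > 1 peraluminous, < 1 metaluminous
```

A value just above 1 would classify the hypothetical sample as weakly peraluminous, matching the upper part of the ME range reported above.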

Keywords: geochemistry, Kyrdem Granitoids, microgranular enclaves, Northeast India

Procedia PDF Downloads 91
149 Optimization of Operational Water Quality Parameters in a Drinking Water Distribution System Using Response Surface Methodology

Authors: Sina Moradi, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Patrick Hayde, Rose Amal

Abstract:

Chloramine is commonly used as a disinfectant in drinking water distribution systems (DWDSs), particularly in Australia and the USA. Maintaining a chloramine residual throughout the DWDS is important in ensuring that microbiologically safe water is supplied at the customer’s tap. In order to simulate how chloramine behaves as it moves through the distribution system, a water quality network model (WQNM) can be applied. In this work, the WQNM was based on mono-chloramine decomposition reactions, which enabled prediction of the mono-chloramine residual at different locations throughout a DWDS in Australia, using the Bentley commercial hydraulic package (WaterGEMS). The accuracy of WQNM predictions is influenced by a number of water quality parameters. Optimizing these parameters so that predictions match data measured in a real DWDS as closely as possible would reduce costs as well as the consumption of valuable resources such as energy and materials. In this work, the optimum operating conditions of the water quality parameters (i.e. temperature, pH, and initial mono-chloramine concentration) that maximize the accuracy of mono-chloramine residual predictions for two water supply scenarios across an entire network were determined using response surface methodology (RSM). To obtain feasible and economical water quality parameters for the highest model predictability, Design Expert 8.0 software (Stat-Ease, Inc.) was applied to conduct the optimization of the three independent water quality parameters. High and low levels of the water quality parameters were necessarily imposed as explicit constraints in order to avoid extrapolation. The independent variables were pH, temperature, and initial mono-chloramine concentration. The lower and upper limits of each variable for the two water supply scenarios were defined, and the experimental levels for each variable were selected based on the actual conditions in the studied DWDS.
It was found that at a pH of 7.75, a temperature of 34.16 °C, and an initial mono-chloramine concentration of 3.89 mg/L during peak water supply patterns, the root mean square error (RMSE) of the WQNM for the whole network was minimized to 0.189, while the optimum conditions for averaged water supply occurred at a pH of 7.71, a temperature of 18.12 °C, and an initial mono-chloramine concentration of 4.60 mg/L. The proposed methodology for predicting the mono-chloramine residual has great potential for water treatment plant operators in accurately estimating the mono-chloramine residual throughout a water distribution network. Additional studies from other water distribution systems are warranted to confirm the applicability of the proposed methodology for other water samples.
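The RSM step can be sketched as fitting a quadratic response surface to the RMSE obtained from a set of WQNM runs over the three factors, then searching for its minimum strictly inside the design limits (the explicit constraints mentioned above). The design points and RMSE values below are hypothetical placeholders, not the study's data:

```python
import numpy as np

# Hypothetical design points (pH, temperature in °C, initial NH2Cl in mg/L)
# and the WQNM prediction RMSE observed at each point.
X = np.array([[7.0, 15.0, 3.0], [7.5, 20.0, 4.0], [8.0, 25.0, 5.0],
              [7.0, 25.0, 5.0], [8.0, 15.0, 3.0], [7.5, 30.0, 4.5],
              [7.2, 18.0, 3.5], [7.8, 28.0, 4.8], [7.5, 22.0, 4.2],
              [7.3, 26.0, 4.4]])
y = np.array([0.35, 0.22, 0.30, 0.28, 0.33, 0.21, 0.27, 0.24, 0.20, 0.23])

def quad_features(x):
    """Full quadratic model: intercept, linear, two-way interaction, squared terms."""
    p, t, c = x.T
    return np.column_stack([np.ones_like(p), p, t, c,
                            p * t, p * c, t * c, p**2, t**2, c**2])

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Grid search confined to the design limits, so no extrapolation occurs.
grid = np.array([[p, t, c] for p in np.linspace(7.0, 8.0, 21)
                           for t in np.linspace(15.0, 30.0, 31)
                           for c in np.linspace(3.0, 5.0, 21)])
pred = quad_features(grid) @ beta
best = grid[np.argmin(pred)]
print(best)  # factor settings minimizing the fitted RMSE surface
```

Design Expert performs an equivalent fit-and-optimize sequence internally; this sketch only illustrates the mechanics.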

Keywords: chloramine decay, modelling, response surface methodology, water quality parameters

Procedia PDF Downloads 196
148 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibration amplitudes and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics for investigating their effects on the level of vibration detection and reduction and on the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, evaluating a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton’s principle.
The current work takes the simplified approach of modelling a structure with sensors at all candidate locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum output for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but are obtained with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
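The effectiveness metric described above (each sensor's output voltage divided by the per-mode maximum, averaged over the modes of interest) reduces to a few array operations. A minimal sketch with a hypothetical voltage matrix, not the paper's finite element output:

```python
import numpy as np

# Hypothetical sensor output voltage magnitudes: rows = vibration modes,
# columns = candidate sensor locations (one sensor per node in the paper).
V = np.abs(np.array([[0.1, 0.9, 0.4, 0.2],
                     [0.8, 0.3, 0.7, 0.1],
                     [0.2, 0.6, 0.9, 0.5]]))

# Per-mode effectiveness: each sensor's output divided by that mode's maximum.
eff = V / V.max(axis=1, keepdims=True)

# Average percentage sensor effectiveness across the modes of interest.
avg_eff = 100.0 * eff.mean(axis=0)
best_locations = np.argsort(avg_eff)[::-1]  # most effective locations first
print(avg_eff.round(1), best_locations)
```

The top entries of `best_locations` would be the candidate sites for the collocated sensor/actuator pairs; the heavy part of the real method is producing `V` from the finite element model, not this ranking step.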

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 204
147 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) N-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory

Authors: Belcher Fulele, W. A. A. Ddamba

Abstract:

The study of solvent systems contributes to the understanding of the intermolecular interactions that occur in binary mixtures. These include, among others, strong dipole-dipole interactions and weak van der Waals interactions, which are of significant relevance in pharmaceuticals, solvent extraction, reactor design, and solvent handling and storage processes. Binary mixtures of solvents can thus be used as a model to interpret the thermodynamic behaviour of a real solution mixture. Densities of pure DFM, of the n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), as well as of their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range, have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes, and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of possible intermolecular interactions and structural effects occurring in the binary mixtures. The variation of the excess molar volume with DFM composition for the [DFM + (C5-C7) n-alkane] binary mixtures exhibits a sigmoidal behaviour, while for the [DFM + n-octane] binary system a positive deviation of the excess molar volume function was observed over the entire composition range. For each of the [DFM + (C5-C8) n-alkane] binary mixtures, the excess molar volume decreased with increasing temperature. The excess molar volume for each of the [DFM + (NMF or NEF or DMF or DMA)] binary systems was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in the excess molar volume values follow the order: DMA > DMF > NEF > NMF.
An increase in temperature has a greater effect on component self-association than on complex formation between the molecules of the components in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, which shifts the complex-formation equilibrium towards the complex, giving a drop in the excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional and characteristic pressure terms are the most important contributions in describing the sign of the experimental excess molar volume. These mixture systems contribute to the understanding of the interactions of polar solvents (amides, as models of proteins) with non-polar solvents (alkanes) in biological systems.
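The excess molar volume discussed throughout this abstract is derived from the measured densities as the difference between the real and ideal mixture molar volumes. A sketch of the calculation, with hypothetical densities and compositions chosen only to illustrate the arithmetic (DFM's molar mass of about 148.2 g/mol for C9H8O2 is the one fixed input):

```python
# V_E = (x1*M1 + x2*M2)/rho_mix - x1*M1/rho1 - x2*M2/rho2
# with x = mole fraction, M = molar mass (g/mol), rho = density (g/cm3).
def excess_molar_volume(x1, M1, M2, rho1, rho2, rho_mix):
    """Excess molar volume of a binary mixture in cm3/mol."""
    x2 = 1.0 - x1
    v_ideal = x1 * M1 / rho1 + x2 * M2 / rho2  # mole-fraction-weighted pure volumes
    v_real = (x1 * M1 + x2 * M2) / rho_mix     # molar volume of the actual mixture
    return v_real - v_ideal

# Hypothetical equimolar DFM (M = 148.2) + n-hexane (M = 86.18) data point;
# the densities here are illustrative, not the measured values.
print(excess_molar_volume(0.5, 148.2, 86.18, 1.079, 0.655, 0.892))
```

A negative result means the mixture is denser (more compact) than the ideal combination of the pure liquids, the situation reported above for the DFM + amide systems.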

Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model

Procedia PDF Downloads 327
146 Solution Thermodynamics, Photophysical and Computational Studies of TACH2OX, a C-3 Symmetric 8-Hydroxyquinoline: Abiotic Siderophore Analogue of Enterobactin

Authors: B. K. Kanungo, Monika Thakur, Minati Baral

Abstract:

8-Hydroxyquinoline (8HQ) is experiencing a renaissance due to its utility as a building block in metallosupramolecular chemistry and the versatile use of its derivatives in various fields of analytical chemistry, materials science, and pharmaceutics. It forms stable complexes with a variety of metal ions. Assembling more than one such unit into a polydentate chelator enhances its coordinating ability and the related properties through the chelate effect, resulting in high stability constants. Keeping the above in view, a nonadentate chelator, N-[3,5-bis(8-hydroxyquinoline-2-amido)cyclohexyl]-8-hydroxyquinoline-2-carboxamide (TACH2OX), containing a central cis,cis-1,3,5-triaminocyclohexane appended to three 8-hydroxyquinoline units at the 2-position through amide linkages, was developed, and its solution thermodynamics, photophysical and Density Functional Theory (DFT) studies were undertaken. The synthesis of TACH2OX was carried out by condensation of cis,cis-1,3,5-triaminocyclohexane (TACH) with 8‐hydroxyquinoline‐2‐carboxylic acid. The brown solid has been fully characterized through melting point, infrared, nuclear magnetic resonance, electrospray ionization mass and electronic spectroscopy. In solution, TACH2OX forms protonated complexes below pH 3.4, which consecutively deprotonate to generate a trinegative ion as the pH rises. Nine protonation constants were obtained for the ligand, ranging between 2.26 and 7.28. The interaction of the chelator with two trivalent metal ions, Fe3+ and Al3+, was studied in aqueous solution at 298 K. The metal-ligand (ML) formation constants obtained by the potentiometric and spectrophotometric methods agree with each other. Protonated and hydrolyzed species were also detected in the system.
The in-silico studies of the ligand and of the complexes, including their protonated and deprotonated species, assessed by the density functional theory technique, gave an accurate correlation with each observed property, such as the protonation constants, stability constants, and infrared, NMR, electronic absorption and emission spectral bands. The nature of the electronic and emission spectral bands, in terms of number and type, was ascertained from time-dependent density functional theory studies and the natural transition orbitals (NTO). Global reactivity index parameters were used to compare the reactivity of the ligand and the complex molecules. The natural bonding orbital (NBO) analysis successfully describes the structure and bonding of the metal-ligand complexes, specifying the percentage contribution of atomic orbitals in the creation of the molecular orbitals. The high values obtained for the metal-ligand formation constants indicate that the newly synthesized chelator is a very powerful synthetic chelator. The minimum-energy molecular modelling structure suggests that the ligand, TACH2OX, firmly coordinates to the metal ion in a tripodal fashion as a hexa-coordinated chelate displaying distorted octahedral geometry, binding through the three sets of N, O donor atoms present in each pendant arm of the central tris-cyclohexaneamine tripod.

Keywords: complexes, DFT, formation constant, TACH2OX

Procedia PDF Downloads 117
145 Managed Aquifer Recharge (MAR) for the Management of Stormwater on the Cape Flats, Cape Town

Authors: Benjamin Mauck, Kevin Winter

Abstract:

The city of Cape Town, South Africa, has shown consistent economic and population growth in the last few decades, and that growth is expected to continue into the future. These projected economic and population growth rates are set to place additional pressure on the city’s already strained water supply system. Thus, given Cape Town’s water scarcity, increasing water demands and stressed water supply system, coupled with global awareness of the issues of sustainable development, environmental protection and climate change, alternative water management strategies are required to ensure that water is sustainably managed. Water Sensitive Urban Design (WSUD) is an approach to sustainable urban water management that attempts to assign a resource value to all forms of water in the urban context, viz. stormwater, wastewater, potable water and groundwater. WSUD employs a wide range of strategies to improve the sustainable management of urban water, such as water reuse, the development of alternative supply sources, sustainable stormwater management, and enhancing the aesthetic and recreational value of urban water. Managed Aquifer Recharge (MAR) is one WSUD strategy that has proven successful in a number of places around the world. MAR is the process whereby an aquifer is intentionally or artificially recharged, which provides a valuable means of water storage while enhancing the aquifer's supply potential. This paper investigates the feasibility of implementing MAR in the sandy, unconfined Cape Flats Aquifer (CFA) in Cape Town. The main objective of the study is to assess whether MAR is a viable strategy for stormwater management on the Cape Flats, aiding the prevention or mitigation of the seasonal flooding that occurs there, while also improving the supply potential of the aquifer.
This involves infiltrating stormwater into the CFA during the wet winter months and, in turn, abstracting from the CFA during the dry summer months for fit-for-purpose uses, in order to optimise the recharge and storage capacity of the CFA. The fully integrated MIKE SHE model is used in this study to simulate both surface water and groundwater hydrology. This modelling approach enables the testing of the various potential recharge and abstraction scenarios required for the implementation of MAR on the Cape Flats. Further MIKE SHE scenario analysis under projected future climate scenarios provides insight into the performance of MAR as a stormwater management strategy under climate change conditions. Scenario analysis using an integrated model such as MIKE SHE is a valuable tool for evaluating the feasibility of MAR as a stormwater management strategy and its potential to contribute towards improving Cape Town’s water security into the future.

Keywords: managed aquifer recharge, stormwater management, Cape Flats Aquifer, MIKE SHE

Procedia PDF Downloads 225
144 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level

Authors: Matthias Stange

Abstract:

Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in the analysis of quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size and covering different application regions as well as different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions of North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive data analysis of six independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on six dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated.
The study revealed that most of the benefits of using BIM perceived in the numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests the conclusion that the BIM planning method, in its current application, has not (yet) delivered a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. In essence, the author suggests that the further implementation of BIM should primarily focus on an application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.
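The core of the quantitative step above is testing whether the BIM maturity level is associated with the dependent project variables. A minimal sketch of one such check, using a simple correlation on invented project records (the real study used 105 projects and more elaborate statistics):

```python
import numpy as np

# Hypothetical project records: BIM maturity level (0-3) and the deviation
# in construction time (%) for ten fictitious projects.
maturity = np.array([0, 0, 1, 1, 1, 2, 2, 2, 3, 3])
time_dev = np.array([10.0, 9.0, 11.0, 8.0, 10.0, 9.0, 11.0, 10.0, 9.0, 10.5])

# Pearson correlation between maturity and schedule deviation; a coefficient
# near zero is the pattern consistent with the study's finding that BIM use
# showed no improving influence on the dependent variables.
r = np.corrcoef(maturity, time_dev)[0, 1]
print(round(r, 2))
```

With real data, one would additionally test significance and control for region, project type and size, as described in the abstract.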

Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry

Procedia PDF Downloads 52
143 Stability Analysis of Hossack Suspension Systems in High Performance Motorcycles

Authors: Ciro Moreno-Ramirez, Maria Tomas-Rodriguez, Simos A. Evangelou

Abstract:

A motorcycle's front end links the front wheel to the chassis and has two main functions: front wheel suspension and vehicle steering. To date, several suspension systems have been developed in order to achieve the best possible front end behaviour, the telescopic fork being the most common one and already the subject of several years of study in terms of its kinematics, dynamics, stability and control. A motorcycle telescopic fork suspension consists of a pair of outer tubes, which contain the suspension components (coil springs and dampers) internally, and two inner tubes, which slide into the outer ones allowing the suspension travel. The outer tubes are attached to the frame through two triple trees, which connect the front end to the main frame through the steering bearings and allow the front wheel to turn about the steering axis. This system keeps the front wheel's displacement in a straight line parallel to the steering axis. However, there exist alternative suspension designs that allow different trajectories of the front wheel over the suspension travel. In this contribution, the authors investigate an alternative front suspension system (the Hossack suspension) and its influence on the motorcycle's nonlinear dynamics, in order to identify and reduce the stability risks that a new suspension system may introduce. Based on an existing high-fidelity motorcycle mathematical model, the front end geometry is modified to accommodate a Hossack suspension system. It is characterized by a double wishbone structure, directly attached to the chassis, that varies the front end geometry during certain manoeuvres and, consequently, the machine's behaviour and response. Here, the kinematics of this system and its impact on the motorcycle's performance and stability are analyzed and compared to the well-known telescopic fork suspension system.
The framework of this research is mathematical modelling and numerical simulation. Full stability analyses are performed in order to understand how the motorcycle dynamics may be affected by the newly introduced front end design. This study is carried out by a combination of nonlinear dynamical simulation and root-loci methods. A modal analysis is performed in order to gain a deeper understanding of the different modes of oscillation and how the Hossack suspension system affects them. The results show that different kinematic designs of a double wishbone suspension system do not modify the motorcycle's general stability. The normal-mode properties remain unaffected by the new geometrical configurations. However, these normal modes differ from one suspension system to the other. It is seen that the normal-mode behaviour depends on various important dynamic parameters, such as the front frame flexibility, the steering damping coefficient and the centre of mass location.
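The root-loci stability check used above amounts to sweeping a parameter (such as the steering damping coefficient) and following the eigenvalues of the linearized model: a mode is stable while every eigenvalue keeps a negative real part. A toy one-degree-of-freedom sketch of this procedure, not the paper's high-fidelity motorcycle model:

```python
import numpy as np

# Toy linearized oscillator x'' + (c/m) x' + (k/m) x = 0, written in state
# form x' = A x. The motorcycle model has many more states, but the stability
# criterion on the eigenvalues of A is the same.
def mode_eigs(m, c, k):
    A = np.array([[0.0, 1.0],
                  [-k / m, -c / m]])
    return np.linalg.eigvals(A)

# Root-loci style sweep over a hypothetical damping coefficient c: the mode
# stays stable (max real part < 0) for all positive damping values tried.
for c in (0.5, 2.0, 8.0):
    eigs = mode_eigs(m=1.0, c=c, k=25.0)
    print(c, eigs, eigs.real.max() < 0)
```

In the full model, each oscillation mode (weave, wobble, etc.) contributes a pair of such eigenvalues, and the sweep traces how they migrate as the front end parameters change.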

Keywords: nonlinear mechanical systems, motorcycle dynamics, suspension systems, stability

Procedia PDF Downloads 200
142 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls

Authors: Bruno Ramaël, Gwenaël Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac

Abstract:

No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling the imaging measurements with numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using SolidWorks and Ansys Workbench (ANSYS Inc.) through an optimization procedure. The vessel of interest in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network on a cohort of 30 healthy volunteers: the time-evolution of the blood vessel contours, and thus of the cross-sectional area, was measured by 3D angiography sequences of phase-contrast MRI, and the blood flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open-source imaging software ITK-SNAP. The resulting geometry was then transformed with SolidWorks into volumes compatible with the Ansys software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement in the case of the fluid domain to improve the accuracy of the fluid flow calculations.
Ansys Structural was used for the numerical simulation of the vessel deformation and Ansys CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified for the common carotid using the Ansys design model. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning in the case of such vascular pathologies.

Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations

Procedia PDF Downloads 292
141 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat

Authors: M. Venegas, M. De Vega, N. García-Hernando

Abstract:

Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. The combination of this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear inconveniences for the generalization of absorption technology, limiting its benefits in contributing to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as those provided by solar panels. In the present work, a promising new technology is under study, consisting of the use of membrane contactors in adiabatic microchannel mass exchangers. The configuration proposed here consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels, and a plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration was developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the set of equations obtained. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™.
With the aim of minimizing the absorber volume, to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be shorter than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and of the chiller efficiency (COP) for different ambient temperatures and for desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.

Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy

Procedia PDF Downloads 257
140 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models, and is willing to develop computational methodologies such as molecular dynamics simulations to gain insight into the complex interactions in such media, especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25 °C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, given that the constitutive chemical groups of mannitol and sorbitol are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution.
Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
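As a sketch of the kind of analysis described above (not the authors' code), the intermittent hydrogen-bond autocorrelation function C(t) = ⟨h(0)h(t)⟩/⟨h(0)h(0)⟩ can be computed from a boolean bond-occupancy trajectory; the bond names and toy trajectory below are illustrative assumptions, and a longer-lived bond decays more slowly:

```python
import numpy as np

def hbond_autocorrelation(h, max_lag):
    """Intermittent H-bond autocorrelation C(t) = <h(0)h(t)> / <h(0)h(0)>.

    h: (n_frames, n_bonds) boolean array, True when a given donor-acceptor
    pair satisfies the geometric H-bond criterion in that frame.
    """
    h = np.asarray(h, dtype=float)
    n_frames = h.shape[0]
    c0 = np.mean(h * h)  # normalisation <h(0)h(0)>
    c = np.empty(max_lag + 1)
    for t in range(max_lag + 1):
        # average the product of occupancies separated by lag t
        c[t] = np.mean(h[: n_frames - t] * h[t:]) / c0
    return c

# toy trajectory: one long-lived bond and one flickering bond
rng = np.random.default_rng(0)
stable = np.ones(1000, dtype=bool)
flicker = rng.random(1000) < 0.3
traj = np.stack([stable, flicker], axis=1)
c = hbond_autocorrelation(traj, 50)
```

In practice, h would come from applying geometric H-bond criteria to each MD frame; differing decay rates of C(t) for sorbitol versus mannitol would then quantify the disparate hydrogen-bond lifetimes reported above.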

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 12
139 Techno-Economic Analysis of 1,3-Butadiene and ε-Caprolactam Production from C6 Sugars

Authors: Iris Vural Gursel, Jonathan Moncada, Ernst Worrell, Andrea Ramirez

Abstract:

In order to achieve the transition from a fossil to a bio-based economy, biomass needs to replace fossil resources in meeting the world’s energy and chemical needs. This calls for the development of biorefinery systems allowing cost-efficient conversion of biomass to chemicals. In biorefinery systems, feedstock is converted to key intermediates called platforms, which are converted to a wide range of marketable products. The C6 sugars platform stands out due to its unique versatility as a precursor for multiple valuable products. Among the different potential routes from C6 sugars to bio-based chemicals, 1,3-butadiene and ε-caprolactam appear to be of great interest. Butadiene is an important chemical for the production of synthetic rubbers, while caprolactam is used in the production of nylon-6. In this study, the ex-ante techno-economic performance of 1,3-butadiene and ε-caprolactam routes from C6 sugars was assessed. The aim is to provide insight, from an early stage of development, into the potential of these new technologies and into their bottlenecks and key cost-drivers. Two cases for each product line were analyzed to take into consideration the effect of possible changes on the overall performance of both butadiene and caprolactam production. Conceptual process designs for the processes were developed using Aspen Plus based on currently available data from laboratory experiments. Then, operating and capital costs were estimated, and an economic assessment was carried out using Net Present Value (NPV) as the indicator. Finally, sensitivity analyses on processing capacity and prices were performed to take possible variations into account. Results indicate that both processes perform similarly from an energy intensity point of view, ranging from 34 to 50 MJ per kg of main product. However, in terms of processing yield (kg of product per kg of C6 sugar), caprolactam shows a higher yield by a factor of 1.6-3.6 compared to butadiene.
For butadiene production, with the economic parameters used in this study, a negative NPV (-642 and -647 M€) was attained for both cases studied, indicating economic infeasibility. For caprolactam production, one of the cases also showed economic infeasibility (-229 M€), but the case with the higher caprolactam yield resulted in a positive NPV (67 M€). Sensitivity analysis indicated that the economic performance of caprolactam production can be improved by increasing capacity (higher C6 sugars intake), reflecting the benefits of economies of scale. Furthermore, humins valorization for heat and power production was considered and found to have a positive effect. Butadiene production was found to be sensitive to the price of the C6 sugars feedstock and of the butadiene product. However, even at 100% variation of these two parameters, butadiene production remained economically infeasible. Overall, the caprolactam production line shows higher economic potential than that of butadiene. The results are useful in guiding experimental research and providing direction for the further development of bio-based chemicals.
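The NPV screening logic described above can be illustrated with a minimal sketch (all figures are hypothetical, not the study's cost data): discount each year's cash flow and sum, then vary the annual margin to mimic the ±100% price sensitivity:

```python
def npv(cash_flows, rate):
    """Net present value; cash_flows[0] is the year-0 investment (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# stylised example (hypothetical numbers, not the paper's data):
capex = -500.0          # M EUR capital cost, paid in year 0
annual_margin = 40.0    # M EUR/yr revenue minus operating cost
years, rate = 20, 0.10  # plant lifetime and discount rate

base = npv([capex] + [annual_margin] * years, rate)

# +/-100% sensitivity on the annual margin (price/feedstock swing)
low = npv([capex] + [0.0] * years, rate)
high = npv([capex] + [2 * annual_margin] * years, rate)
```

With these illustrative numbers the base case is negative, mirroring the paper's finding that a large price swing may still leave a route infeasible when the margin is too thin relative to capital cost.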

Keywords: bio-based chemicals, biorefinery, C6 sugars, economic analysis, process modelling

Procedia PDF Downloads 128
138 Stability of Porous SiC Based Materials under Relevant Conditions of Radiation and Temperature

Authors: Marta Malo, Carlota Soto, Carmen García-Rosales, Teresa Hernández

Abstract:

SiC based composites are candidates for use as structural and functional materials in future fusion reactors; the main role intended for them is in the blanket modules. In the blanket, the neutrons produced in the fusion reaction slow down, and their energy is transformed into heat in order to finally generate electrical power. In the blanket design named Dual Coolant Lead Lithium (DCLL), a PbLi alloy for power conversion and tritium breeding circulates inside hollow channels called Flow Channel Inserts (FCIs). These FCIs must protect the steel structures against the highly corrosive liquid PbLi and the high temperatures, but also provide electrical insulation in order to minimize magnetohydrodynamic interactions of the flowing liquid metal with the high magnetic field present in a magnetically confined fusion environment. Due to its nominally high temperature and radiation stability as well as its corrosion resistance, SiC is the main choice for the flow channel inserts. Its significantly lower manufacturing cost makes porous SiC (a dense coating is required in order to ensure protection against corrosion and to act as a tritium barrier) a firm alternative to SiC/SiC composites for this purpose. This application requires the materials to be exposed to high radiation levels and extreme temperatures, conditions for which previous studies have shown noticeable changes in both the microstructure and the electrical properties of different types of silicon carbide. Both the initial properties and the radiation/temperature induced damage strongly depend on the crystal structure, polytype, and impurities/additives, which are determined by the fabrication process, so the development of a suitable material requires full control of these variables.
For this work, several SiC samples with different percentages of porosity and sintering additives were manufactured by the so-called sacrificial template method at the Ceit-IK4 Technology Center (San Sebastián, Spain), and characterized at Ciemat (Madrid, Spain). Electrical conductivity was measured as a function of temperature before and after irradiation with 1.8 MeV electrons in the Ciemat HVEC Van de Graaff accelerator up to 140 MGy (~2·10⁻⁵ dpa). Radiation-induced conductivity (RIC) was also examined during irradiation at 550 °C for different dose rates (from 0.5 to 5 kGy/s). Although no significant RIC was found for any of the samples, the electrical conductivity of some compositions was observed to increase linearly with irradiation dose. However, first results indicate enhanced radiation resistance for coated samples. Preliminary thermogravimetric tests of selected samples, together with subsequent XRD analysis, allowed the radiation-induced modification of the electrical conductivity to be interpreted in terms of changes in the SiC crystalline structure. Further analysis is needed in order to confirm this.

Keywords: DCLL blanket, electrical conductivity, flow channel insert, porous SiC, radiation damage, thermal stability

Procedia PDF Downloads 174
137 Techno Economic Analysis for Solar PV and Hydro Power for Kafue Gorge Power Station

Authors: Elvis Nyirenda

Abstract:

This research study was done to evaluate and propose an optimum measure to enhance the uptake of clean energy technologies such as solar photovoltaics. The study also aims at diversifying the country’s energy mix away from its overdependence on hydro power, which is susceptible to droughts and climate change challenges. In the years 2015-2016 and 2018-2019, the country received below-average rainfall due to climate change and a shift in the weather pattern; this resulted in prolonged power outages and load shedding of more than 10 hours per day. ZESCO Limited, the state-owned utility company that owns the infrastructure for the generation, transmission, and distribution of electricity, is seeking alternative sources of energy in order to reduce the over-dependence on hydropower stations. One of the alternative sources of energy is solar energy from the sun. However, solar power is intermittent in nature, and to smoothen the load curve, investment in robust energy storage facilities is of great importance to enhance the security and reliability of electricity supply in the country. The methodology of the study looked at the historical performance of the Kafue Gorge Upper power station and utilised the hourly generation figures as input data for generation modelling in the Homer software. The average yearly demand was derived from the available data on the system SCADA. The two dams were modelled as a natural battery, with the absolute state of charging and discharging determined by the available water resource and the peak electricity demand. The Homer Energy System software is used to simulate the scheme, incorporating a pumped storage facility and solar photovoltaic systems. The pumped hydro scheme works like a natural battery for the conservation of water, with the only losses being evaporation and water leakages from the dams and the turbines.
To address the problem of intermittency of the solar resource and the non-availability of water for hydropower generation, the study concluded that utilising the existing hydropower stations, Kafue Gorge Upper and Kafue Gorge Lower, in conjunction with solar energy will reduce power deficits and increase the security of supply for the country. An optimum capacity of 350 MW of solar PV can be integrated while operating the Kafue Gorge power station in both generating and pumping mode to enable efficient utilisation of water at the Kafue Gorge Upper and Kafue Gorge Lower dams.
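The conjunctive operation described above can be caricatured with a toy hourly dispatch loop (illustrative numbers only, not Homer's algorithm or the study's data): solar serves the load first, surpluses pump water uphill with a round-trip loss, and deficits draw on the stored water:

```python
def dispatch(load, solar, reservoir0, reservoir_max, eta_pump=0.75):
    """Toy hourly dispatch: solar first, surplus pumped uphill, deficit from hydro.

    Energies in MWh; the reservoir acts as the 'natural battery'.
    Returns (total unserved energy, final reservoir level).
    """
    level = reservoir0
    unserved = 0.0
    for l, s in zip(load, solar):
        if s >= l:
            # surplus hour: pump water uphill, losing (1 - eta_pump) en route
            level = min(reservoir_max, level + (s - l) * eta_pump)
        else:
            # deficit hour: generate from stored water until the reservoir empties
            need = l - s
            hydro = min(need, level)
            level -= hydro
            unserved += need - hydro
    return unserved, level

load = [300] * 24                         # flat 300 MW demand (hypothetical)
solar = [0] * 6 + [350] * 12 + [0] * 6    # daytime-only solar profile
unserved, final = dispatch(load, solar, reservoir0=2000, reservoir_max=5000)
```

Even this crude loop reproduces the qualitative result: without enough stored water (or enough daytime surplus to pump), night-time deficits go unserved, which is why sizing the PV capacity against the reservoir matters.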

Keywords: hydropower, solar power systems, energy storage, photovoltaics, solar irradiation, pumped hydro storage system, supervisory control and data acquisition, Homer energy

Procedia PDF Downloads 87
136 Determining the Distance Consumers Are Willing to Travel to a Store: A Structural Equation Model Approach

Authors: Fuseina Mahama, Lieselot Vanhaverbeke

Abstract:

This research investigates the impact of patronage determinants on the distance consumers are willing to travel to patronize a tire shop. Although store patronage has been acknowledged as an important domain and has received substantial research interest, most of the studies conducted so far focus on grocery retail, leaving other categories of goods widely unexplored. In this study, we focus on car tires and provide a new perspective on the specific factors that influence tire shop patronage. An online survey of consumers’ tire purchasing behaviour was conducted among private car owners in Belgium. A sample of 864 respondents was used in the study, almost four out of five of them male. 84% of the respondents had purchased a car tire in the last 24 months and on average travelled 22.4 km to patronise a tire shop. We tested the direct and mediated effects of store choice determinants on the distance consumers are willing to travel. All hypotheses were tested using Structural Equation Modelling (SEM). Our findings show that the distance consumers were willing to travel to a tire shop decreased with age. Similarly, for consumers who deemed proximity an important determinant of a tire shop, our findings confirmed a negative effect on willingness to travel. On the other hand, the determinants price, personal contact and professionalism all had a positive effect on distance. This means that consumers actively sought out tire shops with these characteristics and were willing to travel longer distances in order to visit them. The indirect effects of the determinants flexible opening hours, family recommendation, dealer reputation, receiving auto service at home and availability of the preferred brand on distance are mediated by dealer trust. Gender had a minimal effect on distance, with females exhibiting a stronger relation in terms of dealer trust compared to males.
Overall, we found that market-relevant factors were better predictors of distance; proximity, dealer trust and professionalism have the most profound effects on the distance consumers are willing to travel. This is related to the fact that the nature of shopping goods (among which are car tires) typically leads consumers to be more engaged in the shopping process; therefore, factors that have to do with the store (e.g. location) and the shopping process play a key role in the store choice decision. These findings are specific to shopping goods and cannot be generalized to other categories of goods. For marketers and retailers, these findings can have direct implications for their location strategies. The factors found to be relevant to tire shop patronage will be used in our next study to calibrate a location model to be utilised to identify the optimum locations for siting new tire shop outlets and service centres.
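The mediation logic above (a determinant acting on distance through dealer trust) can be sketched with two least-squares regressions on simulated data; this is a simplified stand-in for the full SEM, and all variables, coefficients, and the data itself are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 864  # same sample size as the survey; the data itself is simulated
reputation = rng.normal(size=n)                            # hypothetical determinant
trust = 0.6 * reputation + rng.normal(scale=0.5, size=n)   # mediator: dealer trust
distance = 0.5 * trust + rng.normal(scale=1.0, size=n)     # outcome: distance

def ols_slope(x, y):
    """Slope of y on x from an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(reputation, trust)   # path: determinant -> dealer trust
b = ols_slope(trust, distance)     # path: dealer trust -> distance
indirect = a * b                   # the mediated (indirect) effect
```

A full SEM estimates both paths simultaneously and tests the indirect effect's significance, but the product-of-paths idea is the same: a positive a·b means the determinant increases willingness to travel via increased dealer trust.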

Keywords: dealer trust, distance to store, tire store patronage, willingness to travel

Procedia PDF Downloads 214
135 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on the local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network within the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations found previously and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to validate possible variation of the previous trends, and on a per-year basis to verify that the identified trends had remained throughout the study time. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influential factors on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity.
Concentrations of SO2 increase with the presence of PM10 and decrease with the wind speed and wind direction. The results also showed a decreasing trend of pollutant concentrations over the last five years. Also, in rainy periods (March-June and September-December) some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and also between Parque Simón Bolívar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding its distribution. The discovery of patterns in the data allows these clusters to be used as an input to an artificial neural network prediction model.
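The PCA step can be sketched in a few lines on simulated station records (the variables and coefficients below are hypothetical, not the network's data): after centring, the leading eigenvector of the covariance matrix carries opposite-sign loadings for wind speed and PM10, which is exactly the kind of preliminary relation the study reads off the components:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
wind = rng.normal(size=n)                 # wind speed (standardised, simulated)
temp = rng.normal(size=n)                 # temperature (simulated)
pm10 = -0.8 * wind + 0.3 * temp + rng.normal(scale=0.3, size=n)  # dilution by wind
X = np.column_stack([wind, temp, pm10])

# PCA by hand: centre the data, then eigendecompose the covariance matrix
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]         # sort components by explained variance
pc1 = eigvecs[:, order[0]]                # loadings of the first component
```

K-means would then be run on the component scores (Xc @ eigvecs[:, order]) to group stations or hours with similar behaviour, as the study does to corroborate the PCA relations.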

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 235
134 Climate Change and Perceived Socialization: The Role of Parents’ Climate Change Coping Style and Household Communication

Authors: Estefanya Vazquez-Casaubon, Veroline Cauberghe, Dieneke Van de Sompel, Hayley Pearce

Abstract:

Working together to reduce the anthropogenic impact should be a collective action, including effort within the household. In this matter, children are considered to have an important role in influencing the household to reduce its environmental impact through reversed socialization, whereby children motivate their parents and increase their concern for environmental protection. Previous studies reveal that communication between parents and children is key for effective reversed socialization. However, multiple barriers have been identified in the literature, such as the acceptance of the children's influence and the properties of the communication, among other factors. Based on this evidence, the present study aims to assess barriers and facilitators of communication at the household level that have an impact on reversed socialization. More precisely, the study examines how parents’ climate change coping strategy (problem-focused, meaning-focused, disregarding) influences the valence and the type of communication related to climate change, and eventually the extent to which they report their beliefs and behaviours to be influenced by the pro-environmental perspectives of their children, i.e. reversed socialization. Via an online survey, 723 Belgian parents self-reported on communication about environmental protection and risk within their household (such as the frequency of exchanges about topics related to climate change sourced from school, the household rules, imparting knowledge to the children, outer factors like media or peer pressure, and the emotional valence of the communication), their perceived socialization, and personal factors (coping mechanisms towards climate change). The results, using structural equation modelling, revealed that parents applying a problem-solving coping strategy related to climate change appear to communicate more often in a positive than in a negative manner.
Parents with a disregarding coping style towards climate change appear to communicate less often in a positive way within the household. Parents who cope via meaning-making of climate change communicate less often in either a positive or a negative way. Moreover, the perceived valence of the communication (positive or negative) influenced the frequency and type of household communication; positive emotions increased the frequency of communication overall. However, the direct effect of none of the coping mechanisms on reversed socialization was significant. A high frequency of communication about the media, the environmental views of household members and other external topics had a positive impact on perceived socialization, followed by school-related discussions, while parental instructing had a negative impact on perceived socialization. Moreover, the frequency of communication was strongly affected by the perceived valence of the communication (positive or negative). The results are in line with previous evidence that a higher frequency of communication facilitates reversed socialization. Hence, the results highlight how the coping mechanisms of the parents can be a facilitator when they cope via problem-solving, while parents who disregard climate change might avert frequent communication about it in the household.

Keywords: communication, parents’ coping mechanisms, environmental protection, household, perceived socialization

Procedia PDF Downloads 59
133 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators and the measures of demand for the medical facility. The hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was the open database of the statistical service Eurostat. The choice of this source is due to the fact that the database contains complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study, with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and the interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries, and to construct separate regression models for them.
Therefore, the original time series were used as the objects of clustering. The k-medoids algorithm was used, in which the sampled objects themselves serve as the centers of the clusters obtained, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters the MAPE error was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values of the developed models have a relatively low level of error and can be used to make decisions on the resource provision of the hospital with medical personnel. The research displays the strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential to significantly improve health services. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
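A minimal k-medoids sketch (not the authors' implementation) over a precomputed distance matrix shows why the method suits time series: the cluster centers are actual data objects, so no centroid of a set of series ever needs to be defined. The farthest-first initialisation used here is an assumption chosen for determinism:

```python
import numpy as np

def k_medoids(D, k, n_iter=50):
    """Minimal k-medoids on a precomputed distance matrix D (n x n)."""
    n = D.shape[0]
    # farthest-first initialisation: each new medoid is the object farthest
    # from the current medoid set
    medoids = [0]
    for _ in range(1, k):
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # new medoid = member minimising total distance within its cluster
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=0))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# two well-separated groups of toy 1-D 'series'
x = np.concatenate([np.zeros(10), np.ones(10) * 5])
D = np.abs(x[:, None] - x[None, :])
medoids, labels = k_medoids(D, 2)
```

For real time series, D would hold a series-appropriate distance (e.g. Euclidean on aligned panels, or DTW), and the silhouette coefficient would be computed for several k to pick the cluster count, as the study describes.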

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 114
132 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Issues of obtaining unbiased data on the status of pipeline systems for oil and oil product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Essential attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored, and changes in permafrost soil and hydrological conditions are accounted for. One of the main reasons for emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences.
The specifics of such systems include: multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting possibilities of using the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography data, and data from in-line inspection and instrument measurements. The resulting 3D model forms the basis of an information system providing the means to store and process data of geotechnical observations with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences should they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 165
131 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental changes, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and thereby providing smaller errors and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when either method was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement with a single algorithm.
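The two-layer stacking architecture described above can be sketched with numpy alone; here random-ReLU-feature regressors stand in for the trained FFNNs and an ordinary least-squares combiner stands in for the XGBoost meta-learner, so this is a structural illustration on simulated flux data, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 400, 5
X = rng.normal(size=(n, d))                       # meteorological drivers (simulated)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)  # CO2 flux proxy

def fit_base_learner(X, y, width, seed):
    """Random ReLU features + least-squares readout: a cheap stand-in
    for one trained feedforward neural network."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(X.shape[1], width))
    beta, *_ = np.linalg.lstsq(np.maximum(X @ W, 0.0), y, rcond=None)
    return lambda Xn: np.maximum(Xn @ W, 0.0) @ beta

train, test = slice(0, 300), slice(300, None)
# layer 1: five base learners with different widths (the 'different structures')
nets = [fit_base_learner(X[train], y[train], w, s)
        for s, w in enumerate([8, 16, 32, 64, 96])]
Z_train = np.column_stack([f(X[train]) for f in nets])
Z_test = np.column_stack([f(X[test]) for f in nets])

# layer 2: a combiner fitted on the base predictions (XGBoost's role in the paper)
w_meta, *_ = np.linalg.lstsq(Z_train, y[train], rcond=None)
rmse = np.sqrt(np.mean((Z_test @ w_meta - y[test]) ** 2))
```

In the real pipeline the meta-learner is a gradient-boosted tree model and the combiner would be fitted on held-out first-layer predictions to avoid leakage; the point here is only the shape of the two-layer data flow.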

Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 110