Search results for: SWAT modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3982

652 Investigation of Projected Organic Waste Impact on a Tropical Wetland in Singapore

Authors: Swee Yang Low, Dong Eon Kim, Canh Tien Trinh Nguyen, Yixiong Cai, Shie-Yui Liong

Abstract:

Nee Soon swamp forest is one of the last vestiges of tropical wetland in Singapore. Understanding the hydrological regime of the swamp forest and its implications for water quality is critical to guide stakeholders in implementing effective measures to preserve the wetland against anthropogenic impacts. In particular, although current field measurement data do not indicate a concern with organic pollution, reviewing the ways in which the wetland responds to elevated organic waste influx (and the corresponding impact on dissolved oxygen, DO) can help identify potential hotspots and the impact on the outflow from the catchment, which drains into downstream controlled watercourses. An integrated water quality model is therefore developed in this study to investigate spatial and temporal concentrations of DO and organic pollution (as quantified by biochemical oxygen demand, BOD) within the catchment’s river network under hypothetical, projected scenarios of spiked upstream inflow. The model was developed using MIKE HYDRO for modelling the study domain and the MIKE ECO Lab numerical laboratory for characterising water quality processes. Model parameters were calibrated against time series of observed discharges at three measurement stations along the river network. Over a simulation period of April 2014 to December 2015, the calibrated model predicted that a continuous spiked inflow of 400 mg/l BOD would elevate downstream concentrations at the catchment outlet to an average of 12 mg/l, from an assumed nominal baseline BOD of 1 mg/l. DO levels decreased from an initial 5 mg/l to 0.4 mg/l. Though a scenario of spiked organic influx at the swamp forest’s undeveloped upstream sub-catchments is currently unlikely, the outcomes will nevertheless benefit future planning studies in understanding how the water quality of the catchment would be affected should urban redevelopment be considered around the swamp forest.
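The abstract does not state which kinetic formulation MIKE ECO Lab applies; as a minimal, hypothetical sketch of how a spiked BOD load depresses DO, the classical Streeter-Phelps deficit equation can be evaluated with assumed rate constants (all numeric values below are illustrative, not from the study):

```python
import math

def streeter_phelps(L0, D0, kd, ka, t):
    """Classical Streeter-Phelps DO deficit at time t (days).

    L0: initial BOD (mg/l), D0: initial DO deficit (mg/l),
    kd: deoxygenation rate (1/day), ka: reaeration rate (1/day).
    """
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
           + D0 * math.exp(-ka * t)

# Hypothetical values loosely echoing the abstract's scenario:
# spiked BOD of 400 mg/l, saturation DO assumed ~8 mg/l, initial DO 5 mg/l.
L0, D0 = 400.0, 8.0 - 5.0
kd, ka = 0.23, 0.40          # assumed rate constants (1/day), not from the study
deficit = [streeter_phelps(L0, D0, kd, ka, t) for t in range(0, 10)]
do_levels = [max(8.0 - d, 0.0) for d in deficit]   # DO cannot go negative
print(do_levels[0])  # initial DO: 5.0 mg/l
```

With so large a load, the sketch drives DO to zero within a day, qualitatively matching the severe depletion the calibrated model predicts.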

Keywords: hydrology, modeling, water quality, wetland

Procedia PDF Downloads 139
651 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products

Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic

Abstract:

The present study deals with chemometric regression analysis of quality parameters and the radical scavenging ability of kombucha fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN) and wild thyme tea (WT) kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), free radical scavenging ability (RSA(DPPH), % and RSA(•OH), %) and pH values measured every hour from the start to the end of fermentation. The aim of the regression analysis was to establish chemometric models that can predict the radical scavenging ability (RSA(DPPH), % and RSA(•OH), %) of the samples by correlating it with the MF, TUFA, MUFA, and PUFA content and the pH value at the beginning, in the middle, and at the end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out applying univariate linear (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by the min-max normalization method. The obtained models were characterized by very limited prediction power (poor cross-validation parameters) and weak statistical characteristics. Based on the conducted analysis, it can be concluded that the resulting radical scavenging ability cannot be precisely predicted on the basis of MF, TUFA, MUFA, and PUFA content and pH values alone; other quality parameters should be considered and included in further modeling. This study is based upon work from the project: Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina, 142-451-2400/2019-03, supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
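As a hedged illustration of the workflow described (min-max normalization followed by MLR), the sketch below uses NumPy with invented sample values; the actual dataset, coefficients, and software used in the study are not given in the abstract:

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column of X to [0, 1] (the min-max method the study cites)."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

# Hypothetical predictor matrix: columns = MF, TUFA, MUFA, PUFA, pH
X = np.array([[3.2, 30.1, 24.0, 6.1, 4.5],
              [3.5, 31.4, 25.2, 6.2, 4.6],
              [3.1, 29.8, 23.5, 5.9, 4.4],
              [3.6, 32.0, 25.8, 6.4, 4.7]])
y = np.array([41.2, 44.5, 39.8, 45.9])       # hypothetical RSA(DPPH) values (%)

Xn = min_max_normalize(X)
A = np.column_stack([np.ones(len(Xn)), Xn])  # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None) # MLR fit via least squares
y_hat = A @ coef
print(np.round(coef, 3))
```

In practice the study judged such fits by cross-validation, which is where the reported weak prediction power was observed.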

Keywords: chemometrics, regression analysis, kombucha, quality control

Procedia PDF Downloads 141
650 Spatial Analysis of the Impact of City Development on Degradation of Green Space in the Eastern Urban Fringe of Yogyakarta, 2005-2010

Authors: Pebri Nurhayati, Rozanah Ahlam Fadiyah

Abstract:

Urban development often expands into rural areas, driving land-use changes that degrade green open space on the urban fringe. In the long run, this degradation can harm ecological conditions, psychological well-being, and public health. This research therefore aims to (1) determine the relationship between urban development parameters and the rate of green-space degradation, and (2) develop a spatial model of the impact of urban development on the degradation of green open space using remote sensing techniques and Geographic Information Systems in an integrated manner. This is descriptive research with observation and secondary data as the data collection techniques. In the data analysis, ASTER imagery from 2005-2010 with NDVI was used to interpret the direction of urban development and the degradation of green open space. The interpretation generates two maps: a built-up land development map and a green open space degradation map. Secondary data on the rate of population growth, the level of accessibility, and the main activities of the city were processed into a population growth rate map, an accessibility level map, and a map of the city's main activities. Each map was used as a parameter of green-space degradation and analyzed by non-parametric statistics using crosstabs, yielding the value of C (contingency coefficient). C values were then compared with C_maximum to determine the strength of the relationship. This research produces a spatial model map of the impact of city development on the degradation of green space in the eastern urban fringe of Yogyakarta for 2005-2010. In addition, it generates statistical test results for each parameter of green open space degradation in the same area and period.
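The crosstab analysis described compares the contingency coefficient C against its maximum value; a minimal sketch of that computation, using a hypothetical cross-tabulation (the study's actual tables are not given), could look like:

```python
import numpy as np

def contingency_coefficient(table):
    """Pearson's contingency coefficient C and its maximum for the table."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / n                     # expected counts under independence
    chi2 = ((table - expected) ** 2 / expected).sum()
    C = np.sqrt(chi2 / (chi2 + n))
    k = min(table.shape)
    C_max = np.sqrt((k - 1) / k)                 # upper bound for a k x k table
    return C, C_max

# Hypothetical 3x3 crosstab: development-rate class vs. degradation class
table = [[30, 10,  5],
         [ 8, 25, 12],
         [ 4,  9, 27]]
C, C_max = contingency_coefficient(table)
print(round(C / C_max, 3))   # ratio closer to 1 => stronger association
```

Comparing C with C_max, as the abstract describes, normalizes the coefficient so that association strengths are comparable across parameters.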

Keywords: spatial analysis, urban development, degradation of green space, urban fringe

Procedia PDF Downloads 312
649 Integrated Risk Assessment of Storm Surge and Climate Change for the Coastal Infrastructure

Authors: Sergey V. Vinogradov

Abstract:

Coastal communities are presently facing increased vulnerabilities due to rising sea levels and shifts in global climate patterns, a trend expected to escalate in the long run. To address the needs of government entities, the public sector, and private enterprises, there is an urgent need to thoroughly investigate, assess, and manage the present and projected risks associated with coastal flooding, including storm surges, sea level rise, and nuisance flooding. In response to these challenges, a practical approach to evaluating storm surge inundation risks has been developed. This methodology offers an integrated assessment of potential flood risk in targeted coastal areas. The physical modeling framework involves simulating synthetic storms and utilizing hydrodynamic models that align with projected future climate and ocean conditions. Both publicly available and site-specific data form the basis for a risk assessment methodology designed to translate inundation model outputs into statistically significant projections of expected financial and operational consequences. This integrated approach produces measurable indicators of impacts stemming from floods, encompassing economic and other dimensions. By establishing connections between the frequency of modeled flood events and their consequences across a spectrum of potential future climate conditions, our methodology generates probabilistic risk assessments. These assessments not only account for future uncertainty but also yield comparable metrics, such as expected annual losses for each inundation event. These metrics furnish stakeholders with a dependable dataset to guide strategic planning and inform investments in mitigation. Importantly, the model's adaptability ensures its relevance across diverse coastal environments, even in instances where site-specific data for analysis may be limited.
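The expected-annual-loss metric mentioned above can be illustrated with a toy calculation; the event frequencies and losses below are invented, not outputs of the described models:

```python
# Minimal sketch: turning modeled flood-event frequencies and per-event losses
# into an expected annual loss (EAL) figure, the kind of comparable metric the
# abstract describes. All numbers are hypothetical.

events = [
    # (annual exceedance frequency, loss in million USD)
    (1 / 10,  2.0),    # 10-year surge event
    (1 / 50,  15.0),   # 50-year surge event
    (1 / 100, 40.0),   # 100-year surge event
]

eal = sum(freq * loss for freq, loss in events)
print(round(eal, 2))  # 0.2 + 0.3 + 0.4 = 0.9 million USD/year
```

Repeating this sum for each projected climate scenario yields the probabilistic, future-conditioned risk comparisons the methodology targets.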

Keywords: climate, coastal, surge, risk

Procedia PDF Downloads 55
648 Consumer’s Behavioral Responses to Corporate Social Responsibility Marketing: Mediating Impact of Customer Trust, Emotions, Brand Image, and Brand Attitude

Authors: Yasir Ali Soomro

Abstract:

Companies that demonstrate corporate social responsibility (CSR) are more likely to withstand downturns or crises because of the trust built with stakeholders. Many firms are utilizing CSR marketing to improve interactions with their various stakeholders, mainly consumers. Most previous research on CSR has focused on the impact of CSR on customer responses and behaviors toward a company. As online food ordering and grocery shopping have become inevitable, this study will investigate structural relationships among consumer positive emotions (CPE) and negative emotions (CNE), corporate reputation (CR), customer trust (CT), brand image (BI), and brand attitude (BA) on behavioral outcomes such as online purchase intention (OPI) and word of mouth (WOM) in retail grocery and food restaurant settings. The Hierarchy of Effects Model will be used as the theoretical and conceptual framework. The model describes three stages of consumer behavior: (i) cognitive, (ii) affective, and (iii) conative. The study will apply a quantitative method to test the hypotheses; a self-developed questionnaire with non-probability sampling will be utilized to collect data from 500 consumers belonging to generations X, Y, and Z residing in KSA. The study will contribute by providing empirical evidence to support the link between CSR and customer affective and conative experiences in Saudi Arabia. The theoretical contribution of this study will be an empirically tested comprehensive model in which CPE, CNE, CR, CT, BI, and BA act as mediating variables between perceived CSR and online purchase intention (OPI) and word of mouth (WOM). Further, the study will add to the understanding of how emotional and psychological processes mediate CSR effects, especially in the Middle Eastern context. The proposed study will also explain the direct and indirect effects of perceived CSR marketing initiatives on customer behavioral responses.

Keywords: corporate social responsibility, corporate reputation, consumer emotions, loyalty, online purchase intention, word-of-mouth, structural equation modeling

Procedia PDF Downloads 90
647 Long-Term Modal Changes in International Traffic - Modelling Exercise

Authors: Tomasz Komornicki

Abstract:

The primary aim of the presentation is to model border traffic and, at the same time, to explain on which economic variables the intensity of border traffic depended in the long term. For this purpose, long time series of traffic data on the Polish borders were used. Models were estimated for three variants of explanatory variables: a) total arrivals and departures (total movement of Poles and foreigners), b) arrivals and departures of Poles, and c) arrivals and departures of foreigners. Each of the defined explanatory variables appeared in the models as the natural logarithm of the number of persons. Data from 1994-2017 were used for modeling (for internal Schengen borders, the years 1994-2007). Information on the number of people arriving in and leaving Poland was collected for a total of 303 border crossings. On the basis of the analyses carried out, it was found that the main factors determining border traffic are differences in the level of economic development (GDP), the condition of the economy (level of unemployment), and the degree of border permeability. Differences in the prices of goods (fuels, tobacco, and alcohol products) and services (mainly basic ones, e.g., hairdressing) are also statistically significant for border traffic. Such a relationship exists mainly on the eastern border (border traffic determined largely by differences in the prices of goods) and on the border with Germany (in the first analysed period, border traffic was determined mainly by the prices of goods; later, after Poland's accession to the EU and the Schengen area, also by the prices of services). The models also confirmed differences in the set of factors shaping the volume and structure of border traffic on the Polish borders resulting from general geopolitical conditions, with the year 2007 being an important caesura, after which the classical population mobility factors became visible.
The results obtained were additionally related to changes in traffic that occurred as a result of the COVID-19 pandemic and the Russian aggression against Ukraine.
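A minimal sketch of the kind of log-linear estimation described (logarithm of the number of persons regressed on economic gaps) might look as follows; the yearly figures are invented for illustration and the study's actual variable set is richer:

```python
import numpy as np

# Hypothetical yearly data for one border crossing: GDP-per-capita ratio and
# goods price ratio between the two countries, and total persons crossing.
gdp_gap   = np.array([1.8, 1.7, 1.5, 1.4, 1.2, 1.1])
price_gap = np.array([1.6, 1.5, 1.5, 1.3, 1.2, 1.1])
persons   = np.array([4.1e6, 3.9e6, 3.4e6, 3.1e6, 2.6e6, 2.4e6])

# Dependent variable is the natural logarithm of the number of persons,
# as in the abstract; fit by ordinary least squares.
X = np.column_stack([np.ones(len(persons)), gdp_gap, price_gap])
beta, *_ = np.linalg.lstsq(X, np.log(persons), rcond=None)
fitted = X @ beta
print(np.round(beta, 3))
```

Positive coefficients on the gap variables would express the abstract's finding that larger economic and price differentials drive more traffic.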

Keywords: border, modal structure, transport, Ukraine

Procedia PDF Downloads 114
646 Streamlining Cybersecurity Risk Assessment for Industrial Control and Automation Systems: Leveraging the National Institute of Standards and Technology’s Risk Management Framework (RMF) Using Model-Based System Engineering (MBSE)

Authors: Gampel Alexander, Mazzuchi Thomas, Sarkani Shahram

Abstract:

The cybersecurity landscape is constantly evolving, and organizations must adapt to the changing threat environment to protect their assets. The implementation of the NIST Risk Management Framework (RMF) has become critical in ensuring the security and safety of industrial control and automation systems. However, cybersecurity professionals are facing challenges in implementing RMF, leading to systems operating without authorization and being non-compliant with regulations. The current approach to RMF implementation based on business practices is limited and insufficient, leaving organizations vulnerable to cyberattacks resulting in the loss of personal consumer data and critical infrastructure details. To address these challenges, this research proposes a Model-Based Systems Engineering (MBSE) approach to implementing cybersecurity controls and assessing risk through the RMF process. The study emphasizes the need to shift to a modeling approach, which can streamline the RMF process and eliminate bloated structures that make it difficult to receive an Authorization-To-Operate (ATO). The study focuses on the practical application of MBSE in industrial control and automation systems to improve the security and safety of operations. It is concluded that MBSE can be used to solve the implementation challenges of the NIST RMF process and improve the security of industrial control and automation systems. The research suggests that MBSE provides a more effective and efficient method for implementing cybersecurity controls and assessing risk through the RMF process. The future work for this research involves exploring the broader applicability of MBSE in different industries and domains. The study suggests that the MBSE approach can be applied to other domains beyond industrial control and automation systems.

Keywords: authorization-to-operate (ATO), industrial control systems (ICS), model-based systems engineering (MBSE), risk management framework (RMF)

Procedia PDF Downloads 92
645 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for both cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an impressive accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
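The reported precision "within a threshold of +/- 10 runs" suggests a tolerance-based accuracy metric; a minimal sketch of such a metric, with hypothetical scores rather than the study's data, could be:

```python
def within_threshold_accuracy(y_true, y_pred, threshold=10):
    """Share of predictions landing within +/- threshold runs of the true
    score, the tolerance-based accuracy the abstract reports for its models."""
    hits = sum(abs(t - p) <= threshold for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Hypothetical innings scores and model predictions (illustrative only)
actual    = [165, 182, 149, 201, 176]
predicted = [158, 190, 160, 195, 180]
print(within_threshold_accuracy(actual, predicted))   # 4 of 5 within 10 runs
```

Evaluating every candidate model (SVM, XGBoost, regressions, KNN, Random Forest) with the same tolerance metric is what makes the reported comparison meaningful.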

Keywords: indian premier league (IPL), cricket, score prediction, machine learning, support vector machines (SVM), xgboost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 50
644 Study on Capability of the Octocopter Configurations in Finite Element Analysis Simulation Environment

Authors: Jeet Shende, Leonid Shpanin, Misko Abramiuk, Mattew Goodwin, Nicholas Pickett

Abstract:

Energy harvesting on board the Unmanned Aerial Vehicle (UAV) is one of the most rapidly growing emerging technologies and consists of the collection of small amounts of energy, for different applications, from unconventional sources that are incidental to the operation of the parent system or device. Different energy harvesting techniques have already been investigated in multirotor drones, where the energy collected comes from the system's surrounding ambient environment and typically involves the conversion of solar, kinetic, or thermal energy into electrical energy. Energy harvesting from propeller vibration using piezoelectric components inside the propeller has also been proven feasible. However, the impact of this technology on UAV flight performance has not been investigated. In this contribution, the impact on multirotor drone operation has been investigated at different flight control configurations which support the efficient performance of propeller vibration energy harvesting. The industrially made MANTIS X8-PRO octocopter frame kit was used to explore octocopter operation, which was modelled using the SolidWorks 3D CAD package for simulation studies. The octocopter flight control strategy is developed through integration of the SolidWorks 3D CAD software and the MATLAB/Simulink simulation environment for evaluation of octocopter behaviour under different simulated flight modes and octocopter geometries. Analysis of the two modelled octocopter geometries and their flight performance is presented via graphical representation of simulated parameters. The possibility of not using landing gear in the octocopter geometry is demonstrated. The conducted study evaluates the octocopter's flight control technique and its impact on the energy harvesting mechanism developed on board the octocopter.
Finite Element Analysis (FEA) simulation results of the modelled octocopter in operation are presented exploring the performance of the octocopter flight control and structural configurations. Applications of both octocopter structures and their flight control strategy are discussed.

Keywords: energy harvesting, flight control modelling, object modeling, unmanned aerial vehicle

Procedia PDF Downloads 74
643 Numerical Modelling of Shear Zone and Its Implications on Slope Instability at Letšeng Diamond Open Pit Mine, Lesotho

Authors: M. Ntšolo, D. Kalumba, N. Lefu, G. Letlatsa

Abstract:

Rock mass damage due to shear tectonic activity has been investigated largely in geoscience, where fluid transport is of major interest. However, little has been studied on the effect of shear zones on rock mass behavior and their impact on the stability of rock slopes. At Letšeng Diamonds open pit mine in Lesotho, a shear zone composed of sheared kimberlite material, calcite, and altered basalt forms part of the haul ramp into the main pit, cut 3. The alarming rate at which the shear zone is deteriorating has triggered concerns about both local and global stability of the pit walls. This study presents the numerical modelling of the open pit slope affected by the shear zone at Letšeng Diamond Mine (LDM). Analysis of the slope involved development of a slope model using the two-dimensional finite element code RS2. Interfaces between the shear zone and host rock were represented by special joint elements incorporated in the finite element code. The analysis of structural geological mapping data provided a good platform to understand the joint network. Major joints, including the shear zone, were incorporated into the model for simulation. This approach proved successful by demonstrating that continuum modelling can be used to evaluate the evolution of stresses, strain, plastic yielding, and failure mechanisms consistent with field observations. Structural control due to the geological shear zone proved to be important in its location, size, and orientation. Furthermore, the model analyzed slope deformation and the possibility of sliding along shear zone interfaces. This type of approach can predict shear zone deformation and failure mechanisms; hence, mitigation strategies can be deployed for the safety of human lives and property within mine pits.

Keywords: numerical modeling, open pit mine, shear zone, slope stability

Procedia PDF Downloads 297
642 Suitable Models and Methods for the Steady-State Analysis of Multi-Energy Networks

Authors: Juan José Mesas, Luis Sainz

Abstract:

The motivation for the development of this paper lies in the need for energy networks to reduce losses, improve performance, optimize their operation, and benefit from the interconnection capacity with networks enabled for other energy carriers. These interconnections generate interdependencies between energy networks, which requires suitable models and methods for their analysis. Traditionally, the modeling and study of energy networks have been carried out independently for each energy carrier. Thus, there are well-established models and methods for the steady-state analysis of electrical networks, gas networks, and thermal networks separately. The intention is to extend and combine them adequately so as to address, in an integrated way, the steady-state analysis of networks with multiple energy carriers. Firstly, the added value of multi-energy networks, their operation, and the basic principles that characterize them are explained. In addition, two current aspects of great relevance are discussed: storage technologies and the coupling elements used to interconnect one energy network with another. Secondly, the characteristic equations of the different energy networks necessary to carry out the steady-state analysis are detailed. The electrical network, the natural gas network, and the thermal network of heat and cold are considered in this paper. After the presentation of the equations, a particular case of the steady-state analysis of a specific multi-energy network is studied. This network is represented graphically, the interconnections between the different energy carriers are described, their technical data are presented, and the equations that have previously been presented theoretically are formulated and developed. Finally, the two iterative numerical resolution methods considered in this paper are presented, as well as the resolution procedure and the results obtained.
The pros and cons of the application of both methods are explained. It is verified that the results obtained for the electrical network (voltages in modulus and angle), the natural gas network (pressures), and the thermal network (mass flows and temperatures) are correct since they comply with the distribution, operation, consumption and technical characteristics of the multi-energy network under study.
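The abstract does not name the two iterative methods it compares; as one plausible sketch, a Newton-Raphson iteration on a toy two-carrier system (with an invented coupling term and constants) illustrates how the coupled residual equations of such networks are solved simultaneously:

```python
import numpy as np

def residuals(x):
    """Toy coupled steady-state equations: a gas pressure balance and an
    electrical balance linked by a hypothetical coupling term."""
    p, v = x   # node pressure (gas) and node voltage (electric), per unit
    f1 = 1.0 - p**2 - 0.3 * v       # gas balance with electric coupling load
    f2 = 1.05 - v - 0.2 * p**2      # electric balance with gas coupling term
    return np.array([f1, f2])

def newton(x0, tol=1e-10, max_iter=50):
    """Newton-Raphson with a forward-difference Jacobian."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        f = residuals(x)
        if np.max(np.abs(f)) < tol:
            break
        J = np.empty((2, 2))
        h = 1e-7
        for j in range(2):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residuals(xp) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

sol = newton([1.0, 1.0])   # flat start, as is common in network analysis
print(np.round(sol, 4))
```

Real multi-energy solvers work the same way at much larger scale: stack the residuals of all carriers and coupling elements into one vector and drive it to zero.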

Keywords: coupling elements, energy carriers, multi-energy networks, steady-state analysis

Procedia PDF Downloads 77
641 Reliability and Maintainability Optimization for Aircraft’s Repairable Components Based on Cost Modeling Approach

Authors: Adel A. Ghobbar

Abstract:

The airline industry continuously faces the challenge of how to safely increase the service life of aircraft with limited maintenance budgets. Operators are looking for the most qualified maintenance providers of aircraft components, offering the finest customer service. The component owner and maintenance provider offers an Abacus agreement (Aircraft Component Leasing) to increase the efficiency and productivity of the customer service. To improve customer service, the current focus on No Fault Found (NFF) units must shift to a focus on Early Failure (EF) units. Since the effect of EF units has a significant impact on customer satisfaction, the reliability of EF units needs to be increased at minimal cost, which leads to the goal of this paper. By identifying the reliability of early failure (EF) units with regard to No Fault Found (NFF) units, and in particular by performing a root cause analysis with an integrated cost analysis of EF units using a failure mode analysis tool and a cost model, a set of EF maintenance improvements is derived. The data used for the investigation of the EF units were obtained from the Pentagon system, an Enterprise Resource Planning (ERP) system used by Fokker Services. The Pentagon system monitors components that need to be repaired from Fokker aircraft owners, the Abacus exchange pool, and commercial customers. The data were selected on several criteria: time span, failure rate, and cost driver. Once the selected data had been acquired, the failure mode and root cause analysis of EF units was initiated. The failure analysis approach tool was implemented, resulting in the proposed failure solutions for EF. This leads to specific EF maintenance improvements, which can be set up to decrease the EF units and, as a result, increase reliability. The investigated EFs, over a time period of ten years, showed a significant reliability impact of 32% on the total of 23339 unscheduled failures, since the EFs comprise almost one-third of the entire population.

Keywords: supportability, no fault found, FMEA, early failure, availability, operational reliability, predictive model

Procedia PDF Downloads 127
640 The Effect of Artificial Intelligence on Digital Factory

Authors: Sherif Fayez Lewis Ghaly

Abstract:

Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and an essential tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements; the tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a method for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in restructuring projects. The focus is on rebuild processes. The purpose is to preserve the planning basis (the digital factory model) for conversions within a factory. This calls for the application of a methodology that reduces the deficits of existing techniques. The goal is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented, with the focus on developing a simple and cost-effective way to track the numerous changes that occur in a factory building in the course of operation.
The method is preceded by a hardware and software assessment to identify the most cost-effective and fastest variant.

Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring

Procedia PDF Downloads 23
639 Application of Sentinel-2 Data to Evaluate the Role of Mangrove Conservation and Restoration on Aboveground Biomass

Authors: Raheleh Farzanmanesh, Christopher J. Weston

Abstract:

Mangroves are forest ecosystems located in the inter-tidal regions of tropical and subtropical coastlines that provide many valuable economic and ecological benefits for millions of people, such as preventing coastal erosion, providing breeding and feeding grounds, improving water quality, and supporting the well-being of local communities. In addition, mangroves capture and store high amounts of carbon in biomass and soils, which plays an important role in combating climate change. The decline in mangrove area has prompted government and private sector interest in mangrove conservation and restoration projects to achieve multiple Sustainable Development Goals, from reducing poverty to improving life on land. Mangrove aboveground biomass plays an essential role in the global carbon cycle and in climate change mitigation and adaptation by reducing CO2 emissions. However, little information is available about the effectiveness of sustainable mangrove management on mangrove area change and aboveground biomass (AGB). Here, we propose a method for mapping, modeling, and assessing mangrove area and AGB in two Global Environment Facility (GEF) blue forests projects based on Sentinel-2 Level 1C imagery over their conservation lifetime. An SVR regression model was used to estimate AGB in the Tahiry Honko project in Madagascar and the Abu Dhabi Blue Carbon Demonstration Project (Abu Dhabi, United Arab Emirates). The results showed that mangrove forest area and AGB declined in the Tahiry Honko project, while in the Abu Dhabi project they increased after the conservation initiative was established. The results provide important information on the impact of mangrove conservation activities and contribute to the development of remote sensing applications for mapping and assessing mangrove forests in blue carbon initiatives.

Keywords: blue carbon, mangrove forest, REDD+, aboveground biomass, Sentinel-2

Procedia PDF Downloads 71
638 System Identification of Building Structures with Continuous Modeling

Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab

Abstract:

This paper introduces a wave-based approach for system identification of high-rise building structures with a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is based on wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., wave responses at one structural location to an impulsive motion at another reference location in the time and frequency domains, respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found with the inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution of GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF from recordings and the model helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the nine-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which GFRF is derived as a function of such building parameters as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influence of building parameters on the wave features of GIRF and GFRF, it also shows some system-identification results, which are consistent with other vibration- and wave-based results. Finally, this paper discusses the effectiveness of the proposed model in system identification.
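The GFRF/GIRF computation described above (spectral ratio, then inverse FFT) can be sketched in a few lines. The "recordings" here are synthetic, with the roof response modeled as a delayed, attenuated copy of the base motion; the peak of the resulting GIRF then recovers the wave travel time.

```python
import numpy as np

fs = 100.0                        # sampling rate (Hz); values are illustrative
t = np.arange(0, 20, 1 / fs)
# Synthetic "recordings": base motion and a delayed, scaled roof response.
base = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
roof = 0.8 * np.roll(base, 25)    # crude stand-in for wave travel up the building

# GFRF: Fourier spectral ratio of the two recordings (roof relative to base),
# with a small regularisation term to avoid division by near-zero spectra.
eps = 1e-8
Base = np.fft.rfft(base)
Roof = np.fft.rfft(roof)
gfrf = Roof * np.conj(Base) / (np.abs(Base) ** 2 + eps)

# GIRF: inverse Fourier transform of GFRF; its peak lag estimates travel time.
girf = np.fft.irfft(gfrf, n=t.size)
lag = int(np.argmax(np.abs(girf)))
print(lag / fs)                   # ≈ 0.25 s for the 25-sample shift above
```

With real data the travel time (and hence shear wave velocity) between the two sensor locations would be read off the GIRF peak in the same way.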

Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction

Procedia PDF Downloads 232
637 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by a high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects that are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism of DNA polyplex formation and provide its time scales in dependence on PEI chain size and protonation pattern.
The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The gained insights are expected to be of significant help in designing effective gene-delivery applications.
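The Boltzmann inversion step mentioned above derives a CG potential from an AA probability distribution via V(r) = -kBT ln P(r). The sketch below applies it to synthetic bond-length samples (a Gaussian stand-in for an AA histogram), recovering an effectively harmonic bond potential.

```python
import numpy as np

kB_T = 2.494  # kB*T in kJ/mol at ~300 K

# Hypothetical bead-bead distance samples from an AA trajectory; here drawn
# from a Gaussian bond with mean 0.35 nm and sd 0.02 nm.
rng = np.random.default_rng(2)
r = rng.normal(0.35, 0.02, size=100_000)

# Histogram -> probability distribution P(r).
hist, edges = np.histogram(r, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0

# Boltzmann inversion: V(r) = -kB*T * ln P(r), shifted so min(V) = 0.
V = -kB_T * np.log(hist[mask])
V -= V.min()

# For a Gaussian P(r) this yields a harmonic potential whose minimum sits
# at the mean bond length.
r_min = float(centers[mask][int(np.argmin(V))])
print(round(r_min, 2))
```

In an actual MARTINI parameterization this inversion would be done for bond, angle, and dihedral distributions, followed by iterative refinement against the AA reference.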

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 116
636 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
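The core combining step, averaging the outputs of the three named algorithm families, can be sketched with scikit-learn. The data here are synthetic (not the RP4/EPA datasets), and the binary label stands in for a binned air-quality class.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the study's inputs (timing variables, weather
# forecasts, past pollutant levels) and an air-quality class label.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The three algorithm families named in the abstract.
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          MLPClassifier(max_iter=2000, random_state=0)]
for m in models:
    m.fit(X_tr, y_tr)

# Combine by averaging predicted class probabilities across the models.
avg_proba = np.mean([m.predict_proba(X_te) for m in models], axis=0)
ensemble_pred = avg_proba.argmax(axis=1)
acc = float((ensemble_pred == y_te).mean())
print(round(acc, 2))
```

The mean-decrease-in-accuracy variable ranking mentioned in the abstract would additionally come from the random forest member (e.g. via permutation importance).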

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 124
635 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Spatial diversity is one efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to signal decoding at the relay nodes. Although the implementation at the relay nodes could be made simpler to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if the operations of the relay nodes could be implemented at the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
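For context, the virtual-MIMO diversity that cooperative signaling emulates is the classical 2x1 Alamouti space-time code; the sketch below shows that code's encode/combine steps (noise omitted), not the paper's specific source-side combining scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two QPSK symbols to send in one Alamouti pair (the virtual 2x1 MIMO case).
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)

# Flat-fading channels, e.g. from the source and from one relay to the receiver.
h1 = rng.normal() + 1j * rng.normal()
h2 = rng.normal() + 1j * rng.normal()

# Space-time encoding: slot 1 carries (s1, s2), slot 2 carries (-s2*, s1*).
r1 = h1 * s1 + h2 * s2                      # received in slot 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)   # received in slot 2

# Linear combining at the receiver recovers each symbol with diversity gain
# (the effective channel gain is |h1|^2 + |h2|^2).
g = np.abs(h1) ** 2 + np.abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # → True
```

The paper's contribution is to move the relay-side processing implied by such a code into the source node's pre-combining; the decoding algebra above is what any variant must ultimately reproduce.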

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 279
634 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and Validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on Verification and Validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The process of Verification and Validation helps in qualifying the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparison of simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or integrated mode. A Full Scope Replica Operator Training Simulator for PFBR (Prototype Fast Breeder Reactor) named KALBR-SIM (Kalpakkam Breeder Reactor Simulator) has been developed at IGCAR, Kalpakkam, India, wherein the main participants are engineers/experts belonging to the modeling team, the process design team, and the instrumentation and control design team. This paper discusses the Verification and Validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, the reference documents and standards used, etc. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on experts’ comments, final qualification of the simulator for the intended purpose, and the difficulties faced while coordinating various activities.
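The validation comparison described above, simulated process parameters against plant parameters, reduces to checking deviations against an acceptance band. The sketch below uses made-up values and a 2% band purely as an example criterion; the actual acceptance limits of KALBR-SIM are not stated in the abstract.

```python
import numpy as np

# Hypothetical steady-state process parameters at one operating point:
# plant-recorded values vs the simulator's values (units are illustrative,
# e.g. power, flow, temperature, pressure).
plant = np.array([820.0, 15.2, 547.0, 3.05])
model = np.array([812.0, 15.0, 551.0, 3.10])

# Validation check: relative deviation of each simulated parameter against
# an assumed 2% acceptance band.
rel_dev = np.abs(model - plant) / np.abs(plant)
passed = bool(np.all(rel_dev < 0.02))
print(passed, np.round(rel_dev * 100, 2))
```

Transient validation follows the same pattern but compares whole time series (e.g. via point-wise deviations or RMSE) rather than single steady-state values.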

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 264
633 Lateral Torsional Buckling: Tests on Glued Laminated Timber Beams

Authors: Vera Wilden, Benno Hoffmeister, Markus Feldmann

Abstract:

Glued laminated timber (glulam) is a preferred choice for long-span girders, e.g., for gyms or storage halls. While the material provides sufficient strength to resist the bending moments, large spans lead to increased slenderness of such members and to a higher susceptibility to stability issues, in particular to lateral torsional buckling (LTB). Rules for the determination of the ultimate LTB resistance are provided by Eurocode 5. The verification of the resistance may be performed using the so-called equivalent member method or by means of second-order theory calculations (direct method), considering equivalent imperfections. Both methods have significant limitations concerning their applicability: the equivalent member method is limited to rather simple cases, while the direct method lacks detailed provisions regarding imperfections and requirements for numerical modeling. In this paper, the results of a test series on slender glulam beams in three- and four-point bending are presented. The tests were performed in an innovative, newly developed testing rig, allowing for a very precise definition of loading and boundary conditions. The load was introduced by a hydraulic jack which follows the lateral deformation of the beam by means of a servo-controller coupled with the tested member, keeping the load direction vertical. The deformation-controlled tests allowed for the identification of the ultimate limit state (governed by elastic stability) and the corresponding deformations. Prior to the tests, the structural and geometrical imperfections were determined and used later in the numerical models. After the stability tests, the nearly undamaged members were tested again in pure bending until reaching the ultimate moment resistance of the cross-section. These results, accompanied by numerical studies, were compared to resistance values obtained using both methods according to Eurocode 5.
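For readers unfamiliar with the equivalent member method, a sketch of its core calculation per Eurocode 5 follows: the elastic critical LTB moment of a rectangular section, the relative slenderness for bending, and the buckling reduction factor k_crit. The section, effective length, and material values (roughly GL24h-like) are illustrative, not the tested beams.

```python
import math

# Illustrative rectangular glulam beam.
b, h = 0.12, 0.60      # width, depth (m)
l_ef = 18.0            # effective length (m)
E_05 = 9.6e9           # 5th-percentile modulus of elasticity (Pa)
G_05 = 540e6           # 5th-percentile shear modulus (Pa)
f_mk = 24e6            # characteristic bending strength (Pa)

I_z = h * b**3 / 12                          # weak-axis second moment of area
I_tor = h * b**3 * (1 / 3 - 0.63 * b / h)    # torsion constant (b < h approximation)
W_y = b * h**2 / 6                           # strong-axis section modulus

# Elastic critical moment and critical bending stress.
M_crit = math.pi * math.sqrt(E_05 * I_z * G_05 * I_tor) / l_ef
sigma_crit = M_crit / W_y

# Relative slenderness for bending and the EC5 reduction factor k_crit.
lam = math.sqrt(f_mk / sigma_crit)
if lam <= 0.75:
    k_crit = 1.0
elif lam <= 1.4:
    k_crit = 1.56 - 0.75 * lam
else:
    k_crit = 1.0 / lam**2
print(round(lam, 2), round(k_crit, 2))
```

For this slender example lam exceeds 1.4, so the design bending resistance is reduced to k_crit * f_mk, which is the value the paper's test results are benchmarked against.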

Keywords: experimental tests, glued laminated timber, lateral torsional buckling, numerical simulation

Procedia PDF Downloads 235
632 Development of Interaction Diagram for Eccentrically Loaded Reinforced Concrete Sandwich Walls with Different Design Parameters

Authors: May Haggag, Ezzat Fahmy, Mohamed Abdel-Mooty, Sherif Safar

Abstract:

Sandwich sections have a very complex nature due to the variability of behavior of the different materials within the section. The cracking, crushing, and yielding capacities of the constituent materials add to this complexity, as does slippage between the different layers. Conventional methods implemented in current industrial guidelines do not account for the above complexities. Thus, a thorough study is needed to understand the true behavior of sandwich panels and thereby increase the ability to use them effectively and efficiently. The purpose of this paper is to conduct a numerical investigation using ANSYS software of the structural behavior of a sandwich wall section under eccentric loading. The sandwich walls studied herein are composed of two RC faces, a foam core, and linking shear connectors. The faces are modeled using solid elements, and the reinforcement together with the connectors is modeled using link elements. The analysis conducted herein is a nonlinear static analysis incorporating material nonlinearity, cracking and crushing of concrete, and yielding of steel. The model is validated by comparing it to test results in the literature. After validation, the model is used to establish an extensive parametric analysis to investigate the effect of three key parameters on the axial force-bending moment interaction diagram of the walls. These parameters are the concrete compressive strength, face thickness, and number of shear connectors. Furthermore, the results of the parametric study are used to derive a coefficient α that links the interaction diagram of a solid wall to that of a sandwich wall. The coefficient is obtained from the parametric study data by regression analysis. The predicted α was used to construct the interaction diagram of the investigated wall, and the results were compared with ANSYS results and showed good agreement.
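The final regression step, fitting a coefficient α to the parametric-study results, can be sketched as an ordinary least-squares fit. The table below is entirely hypothetical (invented parameter combinations and α values), shown only to illustrate the regression workflow.

```python
import numpy as np

# Hypothetical parametric-study rows: concrete strength f'c (MPa), face
# thickness t (mm), number of shear connectors n, and the capacity ratio
# alpha of the sandwich wall relative to an equivalent solid wall.
data = np.array([
    [30, 50, 4, 0.62], [30, 50, 8, 0.70], [30, 75, 4, 0.68],
    [40, 50, 4, 0.64], [40, 75, 8, 0.78], [50, 75, 8, 0.80],
    [50, 50, 8, 0.73], [40, 75, 4, 0.71],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1], data[:, 2]])
alpha = data[:, 3]

# Ordinary least squares: alpha ≈ c0 + c1*f'c + c2*t + c3*n.
coef, *_ = np.linalg.lstsq(X, alpha, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((alpha - pred) ** 2) / np.sum((alpha - alpha.mean()) ** 2)
print(round(r2, 2))
```

With the fitted coefficients, α for a new wall configuration is evaluated directly and applied as a scaling factor to the solid-wall interaction diagram.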

Keywords: sandwich walls, interaction diagrams, numerical modeling, eccentricity, reinforced concrete

Procedia PDF Downloads 401
631 Geomorphometric Analysis of the Hydrologic and Topographic Parameters of the Katsina-Ala Drainage Basin, Benue State, Nigeria

Authors: Oyatayo Kehinde Taofik, Ndabula Christopher

Abstract:

Drainage basins are a central theme in the green economy. The rising challenges in flooding, erosion or sediment transport and sedimentation threaten the green economy. This has led to increasing emphasis on quantitative analysis of drainage basin parameters for better understanding, estimation and prediction of fluvial responses and the associated hazards or disasters. This can be achieved through direct measurement, characterization, parameterization, or modeling. This study applied the Remote Sensing and Geographic Information System approach of parameterization and characterization to the morphometric variables of the Katsina-Ala basin using a 30 m resolution Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM). This was complemented with topographic and hydrological maps of Katsina-Ala at a scale of 1:50,000. Linear, areal and relief parameters were characterized. The result of the study shows that the Ala and Udene sub-watersheds are 4th and 5th order basins, respectively. The stream network shows a dendritic pattern, indicating homogeneity in texture and a lack of structural control in the study area. The Ala and Udene sub-watersheds have the following values for elongation ratio, circularity ratio, form factor and relief ratio: 0.48 / 0.39 / 0.35 / 9.97 and 0.40 / 0.35 / 0.32 / 6.0, respectively. They also have values for drainage texture and ruggedness index of 0.86 / 0.011 and 1.57 / 0.016. The study concludes that the two sub-watersheds are elongated, suggesting that they are susceptible to erosion and thus to higher sediment loads in the river channels, which will predispose the watersheds to higher flood peaks. The study also concludes that the sub-watersheds have a very coarse drainage texture, with good permeability of subsurface materials and infiltration capacity, which significantly recharges the groundwater.
The study recommends that the Local and State Governments reduce the size of paved surfaces in these sub-watersheds by implementing a robust agroforestry program at the grassroots level.
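The shape indices reported above follow standard morphometric definitions (Schumm's elongation ratio, Miller's circularity ratio, Horton's form factor, and the relief ratio). The sketch below computes them for an illustrative basin; A, P, Lb and H are made-up values, not the study's measurements.

```python
import math

# Illustrative basin geometry (not the Ala or Udene sub-watersheds).
A = 520.0      # basin area (km^2)
P = 135.0      # basin perimeter (km)
Lb = 47.0      # basin length (km)
H = 0.45       # basin relief (km)

Re = (2.0 / Lb) * math.sqrt(A / math.pi)   # elongation ratio (Schumm)
Rc = 4.0 * math.pi * A / P**2              # circularity ratio (Miller)
Ff = A / Lb**2                             # form factor (Horton)
Rh = H / Lb                                # relief ratio

# Low Re (well below ~0.9) marks an elongated basin, the same reading the
# study applies to its reported values of 0.48 and 0.40.
print(round(Re, 2), round(Rc, 2), round(Ff, 2), round(Rh, 3))
```

In a GIS workflow these quantities are computed per sub-watershed from the DEM-delineated basin polygons and stream network.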

Keywords: erosion, flood, mitigation, morphometry, watershed

Procedia PDF Downloads 86
630 University-home Partnerships for Enhancing Students’ Career Adapting Responses: A Moderated-mediation Model

Authors: Yin Ma, Xun Wang, Kelsey Austin

Abstract:

Purpose – Building upon career construction theory and conservation of resources theory, we developed a moderated mediation model to examine how perceived university support impacts students’ career adapting responses, namely crystallization, exploration, decision and preparation, via the mediator career adaptability and the moderator perceived parental support. Design/methodology/approach – A multi-stage sampling strategy was employed and survey data were collected. Structural equation modeling was used to perform the analysis. Findings – Perceived university support directly promotes students’ career adaptability and three career adapting responses, namely exploration, decision and preparation. It also impacts all four career adapting responses via the mediation effect of career adaptability. Its impact on students’ career adaptability increases greatly when students receive parental career-related support. Research limitations/implications – The cross-sectional design limits causal inference. As the study was conducted in China, our findings should be interpreted cautiously in other countries due to cultural differences. Practical implications – University support is vital to students’ career adaptability, and support from parents can enhance this process. University-home collaboration is necessary to promote students’ career adapting responses. For students, seeking and utilizing as many supporting resources as possible is vital for their human resource development. On an organizational level, universities could benefit from our findings by introducing practices that ask students to rate career-related courses and encourage them to talk with their parents regularly. Originality/value – Using a recently developed scale, the current work contributes to the literature by investigating the impact of multiple contextual factors on students’ career adapting responses.
It also provides empirical support for the role of human intervention in fostering career adapting responses.
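The moderated mediation structure described above can be illustrated with a small regression-based sketch on simulated scores: the moderator W shifts the X-to-M (a) path, and the conditional indirect effect is the a-path times the b-path. All variable names and effect sizes are invented for illustration; the study itself used structural equation modeling on survey data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
# Simulated standardised scores: X = perceived university support,
# W = perceived parental support, M = career adaptability,
# Y = one career adapting response.
X = rng.normal(size=n)
W = rng.normal(size=n)
M = 0.4 * X + 0.3 * X * W + rng.normal(scale=0.8, size=n)   # moderated a-path
Y = 0.5 * M + 0.2 * X + rng.normal(scale=0.8, size=n)        # b-path + direct

def ols(y, cols):
    # Ordinary least squares with an intercept column.
    Z = np.column_stack([np.ones(n)] + cols)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a0, a1, a2, a3 = ols(M, [X, W, X * W])   # M ~ X + W + X:W
b = ols(Y, [M, X])[1]                    # Y ~ M + X (b is the M coefficient)

# Conditional indirect effect of X on Y via M at +/- 1 SD of the moderator;
# their difference estimates 2 * a3 * b (true value 0.3 here).
ind_hi = (a1 + a3) * b
ind_lo = (a1 - a3) * b
print(round(ind_hi - ind_lo, 2))
```

A positive moderation term (a3 * b > 0) corresponds to the paper's finding that parental support amplifies the university-support pathway.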

Keywords: career adaptability, university and parental support, China studies, sociology of education

Procedia PDF Downloads 63
629 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2

Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle

Abstract:

With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that utilizes the WRF (Weather Research and Forecasting) model with a Wind Farm Parametrization scheme to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and both wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to focusing on atmospheric interactions, this study delves into the wake effects among the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow for a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This modeling approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.
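For orientation, WRF's wind farm parameterization (the Fitch scheme) is typically switched on through the `&physics` section of `namelist.input` together with a turbine specification file. The fragment below is a generic sketch; the turbine coordinates and type are placeholders, not the Parc Tramuntana layout.

```
&physics
 windfarm_opt = 1,   ! enable the wind farm (Fitch) parameterization
 windfarm_ij  = 0,   ! turbine locations given as lat/lon, not grid (i, j)
/

! windturbines.txt: one line per turbine -> latitude  longitude  turbine_type
42.35  3.25  1
42.36  3.26  1
```

Each turbine type then references a table of thrust and power coefficients, which is how the scheme converts the local wind field into momentum sink and added turbulence within the farm's grid cells.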

Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis

Procedia PDF Downloads 71
628 Modeling of Cf-252 and PuBe Neutron Sources by Monte Carlo Method in Order to Develop Innovative BNCT Therapy

Authors: Marta Błażkiewicz, Adam Konefał

Abstract:

Currently, boron neutron capture therapy (BNCT) is carried out mainly with the use of a neutron beam generated in research nuclear reactors. This fact limits the possibility of realizing BNCT in centers distant from the above-mentioned reactors. Moreover, the number of active nuclear reactors in operation in the world is decreasing due to the limited lifetime of their operation and the lack of new installations. Therefore, the possibilities of carrying out boron-neutron therapy based on a neutron beam from an experimental reactor are shrinking. The use of nuclear power reactors for BNCT purposes is impossible because their infrastructure is not intended for radiotherapy. A serious challenge is therefore to find ways to perform boron-neutron therapy based on neutrons generated outside a research nuclear reactor. This work meets this challenge. Its goal is to develop a BNCT technique based on commonly available neutron sources such as Cf-252 and PuBe, which will enable the above-mentioned therapy in medical centers unrelated to nuclear research reactors. Advances in the field of neutron source fabrication make it possible to achieve strong neutron fluxes. The current stage of research focuses on the development of virtual models of the above-mentioned sources using the Monte Carlo simulation method. In this study, the GEANT4 toolkit was used, including its High Precision model for simulating neutron-matter interactions. The models of the neutron sources were verified experimentally using the activation detector method with indium foils and the cadmium difference method, which allows the indium activation contributions from thermal and resonance neutrons to be separated. Due to the large number of factors affecting the result of the verification experiment, a 10% discrepancy between the simulation and experiment results was accepted.
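The cadmium difference step mentioned above separates thermal from epithermal activation: the Cd cover absorbs neutrons below the cadmium cut-off (~0.5 eV), so the thermal component is the bare-foil activity minus the (corrected) covered-foil activity. The activities and correction factor below are illustrative values, not the study's measurements.

```python
# Illustrative saturation activities of the indium foils (arbitrary units).
A_bare = 1520.0   # bare In foil: thermal + epithermal activation
A_cd = 430.0      # Cd-covered In foil: epithermal/resonance activation only
F_cd = 1.05       # cadmium correction factor (assumed value)

A_epi = F_cd * A_cd            # epithermal/resonance component
A_thermal = A_bare - A_epi     # thermal component
cadmium_ratio = A_bare / A_cd  # diagnostic of neutron spectrum hardness

print(round(A_thermal, 1), round(cadmium_ratio, 2))  # → 1068.5 3.53
```

In the verification described in the abstract, these experimentally separated components are what the GEANT4-simulated foil activations are compared against, within the accepted 10% band.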

Keywords: BNCT, virtual models, neutron sources, Monte Carlo, GEANT4, neutron activation detectors, gamma spectroscopy

Procedia PDF Downloads 182
627 Development of Power System Stability by Reactive Power Planning in Wind Power Plants with Doubly Fed Induction Generators

Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Oriol Gomis Bellmunt, Vinicius Albernaz Lacerda Freitas

Abstract:

The use of distributed and renewable sources in power systems has grown significantly in recent years. One of the most popular sources is wind farms, which have grown massively. However, when wind farms are connected to the grid, they can cause problems such as reduced voltage stability, frequency fluctuations and reduced dynamic stability. Variable-speed (asynchronous) generators, especially Doubly Fed Induction Generators (DFIGs), are used because of the uncontrollability of wind speed. The most important disadvantage of DFIGs is their sensitivity to voltage drops: in the case of faults, a large volume of reactive power is demanded. Therefore, FACTS devices such as the SVC and STATCOM are suitable for improving system output performance; they increase the capacity of lines and help the network ride through fault conditions. In this paper, in addition to modeling the reactive power control system in a DFIG with its converter, FACTS devices have been used in a DFIG wind turbine to improve the stability of a power system containing two synchronous sources. Optimal control systems have been designed for the employed FACTS devices to minimize the fluctuations caused by system disturbances. For this purpose, a method for selecting the nine parameters of the MPSH phase lead-lag compensators of the reactive power compensators is proposed. The design algorithm is formulated as an optimization problem searching for the optimal parameters of the controller. Simulation results show that the proposed controller improves the stability of the network and that the fluctuations are damped at the desired speed.
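The abstract's parameter search uses a genetic algorithm (per the keywords); the sketch below shows the selection/crossover/mutation loop on a surrogate objective. The "target" parameter vector and the quadratic fitness are stand-ins for the actual damping-performance objective, which in the paper would come from a power system simulation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Surrogate objective standing in for the damping performance of a FACTS
# controller: a quadratic bowl whose minimum is the "optimal" parameter set
# (three hypothetical gain/lead/lag values; the paper tunes nine).
target = np.array([0.6, 0.2, 1.4])

def fitness(p):
    return -np.sum((p - target) ** 2)   # higher is better

pop = rng.uniform(0.0, 2.0, size=(40, 3))        # initial population
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]        # selection of the fittest
    parents = elite[rng.integers(0, 10, size=(40, 2))]
    cross = (parents[:, 0] + parents[:, 1]) / 2  # arithmetic crossover
    pop = cross + rng.normal(scale=0.05, size=cross.shape)   # mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print(np.round(best, 1))
```

In the paper's setting, evaluating `fitness` would mean running a time-domain simulation of the disturbed network and scoring how quickly oscillations decay.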

Keywords: renewable energy sources, optimization, wind power plant, stability, reactive power compensator, doubly fed induction generator, optimal control, genetic algorithm

Procedia PDF Downloads 93
626 Comparison of Inexpensive Cell Disruption Techniques for an Oleaginous Yeast

Authors: Scott Nielsen, Luca Longanesi, Chris Chuck

Abstract:

Palm oil, obtained from the flesh and kernel of the fruit of the oil palm, is the most productive and inexpensive oil crop. Global demand for palm oil is approximately 75 million metric tonnes, a 29% increase in global production since 2016. This expansion of oil palm cultivation has resulted in mass deforestation, vast biodiversity destruction, and increasing net greenhouse gas emissions. One possible alternative is to produce a saturated oil, similar to palm oil, from microbes such as oleaginous yeasts, which can be cultured on sugars derived from second-generation sources and do not compete with tropical forests for land. One highly promising oleaginous yeast for this application is Metschnikowia pulcherrima. However, recent techno-economic modeling has shown that cell lysis and standard lipid extraction are major contributors to the cost of the oil. Typical cell disruption techniques for extracting either single-cell oils or proteins have been based on bead-beating, homogenization, and acid lysis; however, these can have a detrimental effect on lipid quality and are energy-intensive. In this study, a vortex separator, which produces high shear with minimal energy input, was investigated as a potential low-energy method of lysing cells and compared to four more traditional methods (thermal lysis, acid lysis, alkaline lysis, and osmotic lysis). For each method, yeast loadings of 1 g/L, 10 g/L, and 100 g/L were examined. The quality of cell disruption was measured by optical cell density, cell counting, and comparison of particle size distribution profiles over a 2-hour period. This study demonstrates that the vortex separator is highly effective at lysing the cells and could potentially serve as a simple apparatus for lipid recovery in an oleaginous yeast process; further development of this technology could reduce the overall cost of microbial lipids.

Keywords: palm oil substitute, Metschnikowia pulcherrima, cell disruption, cell lysis

Procedia PDF Downloads 202
625 Monitoring Prospective Sites for Water Harvesting Structures Using Remote Sensing and Geographic Information Systems-Based Modeling in Egypt

Authors: Shereif. H. Mahmoud

Abstract:

Egypt has limited water resources and is projected to be under water stress by the year 2030; it should therefore consider natural and non-conventional water resources to overcome this problem, and rainwater harvesting (RWH) is one solution. This paper presents a geographic information system (GIS)-based decision support system (DSS) that uses remote sensing data, field surveys, and GIS to identify potential RWH areas. The inputs to the DSS are maps of rainfall surplus, slope, potential runoff coefficient (PRC), land cover/use, and soil texture; the output is a map of potential RWH sites. Identification of suitable RWH sites was implemented in the ArcGIS environment using the ModelBuilder of ArcGIS 10.1. Based on an Analytical Hierarchy Process (AHP) analysis of the five layers, the spatial extent of RWH suitability was identified using Multi-Criteria Evaluation (MCE). The model generated a suitability map with four classes: excellent, good, moderate, and unsuitable. The spatial distribution of the suitability map showed that the areas with excellent suitability for RWH are concentrated in the northern part of Egypt: 3.24% of the total area has excellent or good suitability, while 45.04% is moderately suitable and 51.48% is unsuitable. The majority of the areas with excellent suitability have slopes between 2% and 8% and are intensively cultivated; the major soil type in these areas is loam, and rainfall ranges from 100 to 200 mm. The technique was validated by comparing the locations of existing RWH structures with the generated suitability map using the proximity analysis tool of ArcGIS 10.1; the results show that most of the existing RWH structures are categorized as successful.
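As a rough illustration of the AHP/MCE step, the sketch below derives criterion weights from a pairwise comparison matrix via the principal eigenvector and applies them as a weighted overlay. The matrix, weights, and raster layers are hypothetical, not values from the study; the matrix is built here from assumed weights, so it is perfectly consistent (CR = 0), whereas real expert judgments would be slightly inconsistent (CR < 0.10 is the usual acceptance threshold).

```python
import numpy as np

# Hypothetical pairwise comparisons for the five layers (rainfall surplus,
# slope, PRC, land cover/use, soil texture), constructed from assumed
# weights (0.4, 0.2, 0.2, 0.1, 0.1) so that A[i, j] = w[i] / w[j].
A = np.array([
    [1.0,  2.0, 2.0, 4.0, 4.0],
    [0.5,  1.0, 1.0, 2.0, 2.0],
    [0.5,  1.0, 1.0, 2.0, 2.0],
    [0.25, 0.5, 0.5, 1.0, 1.0],
    [0.25, 0.5, 0.5, 1.0, 1.0],
])

# Criterion weights = principal eigenvector, normalised to sum to 1
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()

# Consistency ratio (random index RI = 1.12 for n = 5)
n = A.shape[0]
CI = (vals.real[k] - n) / (n - 1)
CR = CI / 1.12

# Weighted linear combination (MCE overlay) over normalised layers,
# here toy 2x2 rasters standing in for the real GIS layers
layers = np.random.default_rng(0).random((5, 2, 2))
suitability = np.tensordot(w, layers, axes=1)   # per-cell suitability score
```

The per-cell scores would then be binned into the four suitability classes (excellent, good, moderate, unsuitable).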

Keywords: rainwater harvesting (RWH), geographic information system (GIS), analytical hierarchy process (AHP), multi-criteria evaluation (MCE), decision support system (DSS)

Procedia PDF Downloads 358
624 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes long-run funding strategies uncertain if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic change could alter traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic over a period of 50 years and deals with measurement error in the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years, and accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors, and linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education trips. Vehicle traffic is expected to double within 50 years, with ESALs doubling every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.
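The ESAL projection step can be sketched as follows: the annual growth rate is back-calculated from the stated doubling of loads every 15 years, and ESALs are accumulated from projected AADT, truck share, and truck factor. The base AADT, truck share, and truck factor below are illustrative placeholders, not values from the study.

```python
# Growth rate implied by loads doubling every 15 years (about 4.7%/year)
growth = 2 ** (1 / 15) - 1

def accumulated_esals(aadt0, truck_share, truck_factor, years):
    """Accumulate ESALs over a horizon from a base AADT.

    aadt0: base annual average daily traffic; truck_share: fraction of
    trucks in the stream; truck_factor: average ESALs per truck pass.
    All three are hypothetical inputs for illustration.
    """
    total = 0.0
    for y in range(years):
        aadt = aadt0 * (1 + growth) ** y       # projected AADT in year y
        total += aadt * truck_share * truck_factor * 365
    return total

# Illustrative run: 10,000 veh/day, 8% trucks, 1.2 ESALs/truck, 50 years
e50 = accumulated_esals(10_000, 0.08, 1.2, 50)
```

In the study itself the AADT values come from travel demand models run every 5 years rather than a fixed growth rate.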

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 460
623 Parking Service Effectiveness at Commercial Malls

Authors: Ahmad AlAbdullah, Ali AlQallaf, Mahdi Hussain, Mohammed AlAttar, Salman Ashknani, Magdy Helal

Abstract:

We study the effectiveness of the parking service provided at Kuwaiti commercial malls and explore potential problems and feasible improvements. Commercial malls are important to Kuwaitis as entertainment and shopping centers due to the lack of other alternatives, and the difficulty and relatively long time wasted finding a parking spot at a mall are real annoyances. We applied queuing analysis to one of the major malls, which offers paid parking (1,040 spots) in addition to free parking. Patrons of the mall commonly complained of traffic jams and delays when entering the paid parking (the average delay to park exceeds 15 minutes for about 62% of patrons, while the average time spent in the mall is about 2.6 hours). However, the analysis showed acceptable service levels at the check-in gates of the parking garage. A detailed review of vehicle movement at the gateways indicated that arriving and departing cars had to share parts of the gateway to the garage, which caused the traffic jams and delays. A simple comparison indicated that the largest commercial mall in Kuwait does not suffer from such parking issues, while other smaller yet important malls do, including the one we studied; well-designed inlets and outlets at that gigantic mall permit smooth parking despite the parking being entirely free and the mall being the first choice of most people for entertainment and shopping. A simulation model is being developed for further analysis and verification, since simulation can overcome the mathematical difficulty of non-Poisson queuing models. The simulation model is used to explore potential changes to the parking garage entrance layout, and with the inclusion of driver behavior inside the parking garage, effectiveness indicators can be derived to address the economic feasibility of extending the parking capacity and increasing service levels. The outcomes of the study are planned to be generalized, as appropriate, to other commercial malls in Kuwait.
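The queuing analysis of the check-in gates can be illustrated with the standard M/M/c (Erlang-C) formulas. The gate count and arrival/service rates below are illustrative assumptions, not the study's measured values; the observed arrivals may well be non-Poisson, which is exactly why the authors turn to simulation.

```python
from math import factorial

def mmc_wait(lam, mu, c):
    """Erlang-C: probability of waiting and mean queue wait in an M/M/c queue.

    lam: arrival rate (cars/min); mu: service rate per gate (cars/min);
    c: number of check-in gates.  All values used here are illustrative.
    """
    rho = lam / (c * mu)                 # server utilisation
    assert rho < 1, "offered load exceeds gate capacity"
    a = lam / mu                         # offered load in Erlangs
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    p_wait = a**c / (factorial(c) * (1 - rho)) * p0   # Erlang-C formula
    wq = p_wait / (c * mu - lam)                      # mean wait in queue
    return p_wait, wq

# e.g. 3 gates, 5 cars/min arriving, each gate clearing 2 cars/min
p_wait, wq = mmc_wait(lam=5, mu=2, c=3)
```

With c = 1 the formulas reduce to the familiar M/M/1 results, a convenient sanity check.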

Keywords: commercial malls, parking service, queuing analysis, simulation modeling

Procedia PDF Downloads 339