Search results for: optimized model
17640 Investigations on the Influence of Optimized Charge Air Cooling for a Diesel Passenger Car
Authors: Christian Doppler, Gernot Hirschl, Gerhard Zsiga
Abstract:
Starting from 2020, an EU-wide CO2 limit of 95 g/km is scheduled for the average of an OEM's passenger car fleet. Considering that, further optimization measures on the diesel cycle will be necessary in order to reduce fuel consumption and emissions while keeping performance values at least adequate. The present article deals with charge air cooling (CAC) on the basis of a diesel passenger car model in a 0D/1D working process calculation environment. The considered engine is a 2.4-litre EURO VI diesel engine with variable geometry turbocharger (VGT) and low-pressure exhaust gas recirculation (LP EGR). The object of study was the impact of charge air cooling on the engine working process at constant boundary conditions, which was conducted with an available and validated engine model in AVL BOOST. Part load was realized with constant power and NOx emissions, whereas full load was accomplished with a lambda control in order to obtain maximum engine performance. The results were used to implement a simulation model in Matlab/Simulink, which was further integrated into a full vehicle simulation environment via coupling with ICOS (Independent Co-Simulation Platform). Next, the dynamic engine behavior was validated and modified with load steps taken from the engine test bed. Due to the modular setup of the co-simulation, different CAC models with their different influences on the working process could be simulated quickly. In doing so, a new cooler variant does not need to be reproduced and implemented in the primary simulation model environment, but can be implemented quickly and easily as an independent component in the overall simulation. By combining the engine model, a longitudinal dynamics vehicle model and different CAC models (air/air and water/air variants) in both steady-state and transient operation, statements are gained regarding fuel consumption, NOx emissions and power behavior.
The fact that a complex engine model is no longer needed is very advantageous for the overall simulation effort. Besides the simulations with the mentioned demonstrator engine, several experimental investigations were also conducted on the engine test bench. Here, in particular, a standard CAC was compared with an intake-manifold-integrated CAC. Simulations as well as experiments showed benefits for the water/air CAC variant (on the test bed, especially the intake-manifold-integrated variant). The benefits are a reduced pressure loss and a gain in air efficiency and CAC efficiency, all of which lead to reduced emissions and fuel consumption in stationary and transient operation.Keywords: air/water-charge air cooler, co-simulation, diesel working process, EURO VI fuel consumption
Procedia PDF Downloads 269
17639 UBCSAND Model Calibration for Generic Liquefaction Triggering Curves
Authors: Jui-Ching Chou
Abstract:
Numerical simulation is a popular method used to evaluate the effects of soil liquefaction on a structure or the effectiveness of a mitigation plan. Many constitutive models (the UBCSAND model, PM4 model, SANISAND model, etc.) have been presented to model the liquefaction phenomenon. In general, the inputs of a constitutive model need to be calibrated against the soil cyclic resistance before being applied to the numerical simulation model. Then, simulation results can be compared with results from simplified liquefaction potential assessment methods. In this article, the inputs of the UBCSAND model, a simple elastic-plastic stress-strain model, are calibrated against several popular generic liquefaction triggering curves of simplified liquefaction potential assessment methods using the FLAC program. The calibrated inputs enable engineers to perform a preliminary evaluation of an existing structure or a new design project.Keywords: calibration, liquefaction, numerical simulation, UBCSAND Model
Procedia PDF Downloads 173
17638 Development of a Small-Group Teaching Method for Enhancing the Learning of Basic Acupuncture Manipulation Optimized with the Theory of Motor Learning
Authors: Wen-Chao Tang, Tang-Yi Liu, Ming Gao, Gang Xu, Hua-Yuan Yang
Abstract:
This study developed a method for teaching acupuncture manipulation in small groups, optimized with the theory of motor learning. Sixty acupuncture students and their teacher participated in the research. Motion videos of their manipulations using the lifting-thrusting method were recorded. These videos were analyzed with Simi Motion software to acquire the movement parameters of the thumb tip. The velocity curve of the thumb tip along the Y axis was used to generate small teaching groups, clustered by a self-organizing map (SOM) and K-means; ten groups were generated. Targeted instruction, based on the comparative results of each group as well as the videos of the teacher and students, was provided to the members of each group. Following the theory and research of motor learning, factors and techniques such as video instruction, observational learning, external focus and summary feedback were integrated into this teaching method. These efforts are intended to improve the effectiveness of current acupuncture teaching methods within limited classroom teaching time and extracurricular training.Keywords: acupuncture, group teaching, video instruction, observational learning, external focus, summary feedback
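The grouping step described in this abstract — clustering students by features of the thumb-tip velocity curve — can be sketched with a minimal K-means routine. This is an illustrative sketch only: the SOM stage is omitted, and the feature vectors, cluster count, and initial centers below are invented, not the study's actual data or parameters.

```python
import random

def kmeans(points, k, iters=50, init=None, seed=0):
    """Minimal k-means over feature vectors (e.g., velocity-curve features).

    Returns (labels, centers). `init` optionally fixes the starting centers;
    otherwise k data points are sampled at random.
    """
    rng = random.Random(seed)
    centers = list(init) if init else rng.sample(points, k)
    for _ in range(iters):
        # assign every point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # recompute each center as its cluster mean (keep old center if empty)
        new_centers = []
        for old, cl in zip(centers, clusters):
            if cl:
                new_centers.append(tuple(sum(x) / len(cl) for x in zip(*cl)))
            else:
                new_centers.append(old)
        if new_centers == centers:  # converged
            break
        centers = new_centers
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
        labels.append(d.index(min(d)))
    return labels, centers
```

In a pipeline like the one described, each point would be a feature vector summarizing one student's Y-axis velocity curve, and the resulting labels would define the small teaching groups.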
Procedia PDF Downloads 179
17637 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model 1: Description
Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu
Abstract:
Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, the model assumes constant values for vegetation and hydraulic parameters throughout the duration of a hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model capable of simulating all crops using unique parameter values for each crop. Simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance through the hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will be able to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and will further enhance the hydrologic model's capability for accurate hydrologic studies.Keywords: runoff, roughness coefficient, PAR, WRM model
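As a sketch of the EPIC-style growth logic such a subroutine builds on: in EPIC, crop phenological development is driven by accumulated heat units relative to the crop's potential heat units (PHU). The function below is a simplified illustration under stated assumptions — the temperatures, base temperature, and PHU are made-up values, and the real EPIC model adds water, nutrient, and temperature stress constraints on top of this index.

```python
def heat_unit_index(tmax, tmin, t_base, phu):
    """Accumulate daily heat units and return the heat unit index (HUI)
    series: the 0..1 fraction of the crop's potential heat units (PHU)
    reached each day, which drives development in EPIC-style growth models.
    """
    hu_sum = 0.0
    hui = []
    for hi, lo in zip(tmax, tmin):
        hu = max(0.0, (hi + lo) / 2.0 - t_base)  # daily heat units
        hu_sum += hu
        hui.append(min(1.0, hu_sum / phu))       # capped at maturity (1.0)
    return hui
```

A day whose mean temperature falls below the base temperature contributes nothing, and the index saturates at 1.0 when the crop reaches maturity.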
Procedia PDF Downloads 378
17636 Stock Market Prediction by Regression Model with Social Moods
Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome
Abstract:
This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model. The regression model predicts the Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.Keywords: stock market prediction, social moods, regression model, DJIA
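A regression with autocorrelated errors is typically diagnosed before being fit with a specialized estimator. As a hedged, simplified sketch (scalar input and invented numbers — not the paper's mood features or the DJIA series), the fragment below fits an ordinary least-squares line and computes the Durbin-Watson statistic, whose departure from 2 signals first-order autocorrelation in the residuals.

```python
def simple_ols(x, y):
    """Least-squares fit y = intercept + slope*x; returns residuals too."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [b - (intercept + slope * a) for a, b in zip(x, y)]
    return intercept, slope, resid

def durbin_watson(resid):
    """Durbin-Watson statistic: ~2 means no first-order autocorrelation,
    values toward 0 indicate positive and toward 4 negative autocorrelation."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e * e for e in resid)
    return num / den
```

When the statistic is far from 2, a model with an explicit error process (such as the autocorrelated-error regression the abstract describes) is preferred over plain OLS.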
Procedia PDF Downloads 548
17635 Structural Equation Modeling Semiparametric Truncated Spline Using Simulation Data
Authors: Adji Achmad Rinaldo Fernandes
Abstract:
SEM analysis is a complex multivariate analysis because it involves a number of exogenous and endogenous variables that are interconnected to form a model. The measurement model is divided into two types: the reflective model and the formative model. Before carrying out further tests on SEM, assumptions must be met, namely the linearity assumption, to determine the form of the relationship. There are three modeling approaches to path analysis: parametric, nonparametric, and semiparametric. The aim of this research is to develop semiparametric SEM and obtain the best model. The data used in the research are secondary data serving as the basis for generating simulation data. Simulation data were generated with sample sizes of 100, 300, and 500. In the semiparametric SEM analysis, the forms of relationship studied were linear and quadratic, with one and two knot points and various levels of error variance (EV = 0.5, 1, 5). Three levels of closeness of relationship were used in the analysis of the measurement model: low (0.1-0.3), medium (0.4-0.6), and high (0.7-0.9). The best model takes a linear form for the relationship between X1 and Y1. In the measurement model, a characteristic of the reflective model is obtained, namely that the higher the closeness of the relationship, the better the model. The originality of this research is the development of semiparametric SEM, which has not been widely studied.Keywords: semiparametric SEM, measurement model, structural model, reflective model, formative model
Procedia PDF Downloads 40
17634 Analysing the Interactive Effects of Factors Influencing Sand Production on Drawdown Time in High Viscosity Reservoirs
Authors: Gerald Gwamba, Bo Zhou, Yajun Song, Dong Changyin
Abstract:
The challenges that sand production presents to the oil and gas industry, particularly while working in poorly consolidated reservoirs, cannot be overstated. From restricting production to blocking production tubing, sand production increases the costs of production, as it raises the cost of servicing production equipment over time. Production in reservoirs with high viscosity, flow rate, cementation, clay content and fine sand content is even more complex and challenging. As opposed to one-factor-at-a-time testing, investigating the interactive effects arising from a combination of several factors offers increased reliability of results as well as better representation of actual field conditions. It is thus paramount to investigate the conditions leading to the onset of sanding during production to ensure the future sustainability of hydrocarbon production operations under viscous conditions. We adopt Design of Experiments (DOE) to analyse, using Taguchi factorial designs, the most significant interactive effects on sanding. We propose an optimized regression model to predict the drawdown time at sand production. The results obtained underscore that reservoirs characterized by varying (high and low) levels of viscosity, flow rate, cementation, clay and fine sand content have a resulting impact on sand production. The only significant interactive effect recorded arises from the interaction BD (fine sand content and flow rate), while the main effects included fluid viscosity and cementation, with percentage significances of 31.3%, 37.76%, and 30.94%, respectively. The drawdown time model presented could be useful for predicting the time to reach the maximum drawdown pressure under viscous conditions at the onset of sand production.Keywords: factorial designs, DOE optimization, sand production prediction, drawdown time, regression model
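The factorial logic behind such a design can be illustrated with a minimal two-level full factorial in coded units (-1/+1). The response function in the usage below is invented for illustration — it is not the sand-production data — but it shows how main and interaction effects (like the BD interaction the abstract reports) are read off the design.

```python
from itertools import product

def full_factorial(levels_per_factor):
    """All runs of a full factorial design; levels are coded, e.g. -1/+1."""
    return list(product(*levels_per_factor))

def main_effect(runs, responses, i):
    """Average response at level +1 minus average at level -1 for factor i."""
    hi = [r for run, r in zip(runs, responses) if run[i] == 1]
    lo = [r for run, r in zip(runs, responses) if run[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

def interaction_effect(runs, responses, i, j):
    """Two-factor interaction: contrast on the product of coded levels."""
    hi = [r for run, r in zip(runs, responses) if run[i] * run[j] == 1]
    lo = [r for run, r in zip(runs, responses) if run[i] * run[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

For a response generated as y = 3 + 2A + 0.5AB, the estimated A main effect is 4 (twice the coefficient, since the levels span two coded units), the B main effect is 0, and the AB interaction is 1.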
Procedia PDF Downloads 152
17633 Model Averaging in a Multiplicative Heteroscedastic Model
Authors: Alan Wan
Abstract:
In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk
Procedia PDF Downloads 384
17632 Reliability Prediction of Tires Using Linear Mixed-Effects Model
Authors: Myung Hwan Na, Ho- Chun Song, EunHee Hong
Abstract:
We widely use the normal linear mixed-effects model to analyze data from repeated measurements. When heteroscedasticity and non-normality of the population distribution are detected at the same time, the normal linear mixed-effects model can give improper analysis results. To achieve more robust estimation, we use a heavy-tailed linear mixed-effects model, which gives more exact and reliable conclusions than the standard normal linear mixed-effects model.Keywords: reliability, tires, field data, linear mixed-effects model
Procedia PDF Downloads 563
17631 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structure durability, vehicle damage, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress used to be based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification based on a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, its performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.Keywords: distress pavement, hyperparameters, automatic classification, deep learning
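The exhaustive search strategy described — checking all feasible hyperparameter combinations and keeping the best-performing one — can be sketched generically. Training a real DCNN is out of scope here, so the scoring function in the usage is a stand-in, and the grid values (`learning_rate`, `batch_size`) are illustrative assumptions, not the paper's actual search space.

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Try every hyperparameter combination and keep the best one.

    `param_grid` maps parameter names to candidate value lists;
    `evaluate` maps a config dict to a validation score (higher is better).
    """
    names = sorted(param_grid)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        cfg = dict(zip(names, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In practice `evaluate` would train the network with the given configuration and return its validation accuracy; the exhaustive loop is only tractable for modest grids, which is why the abstract restricts the tuned hyperparameters to a short list.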
Procedia PDF Downloads 93
17630 Towards a Measurement-Based E-Government Portals Maturity Model
Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri
Abstract:
The emerging concept of e-government transforms the way in which citizens deal with their governments. Citizens can execute the intended services online anytime and anywhere. This results in great benefits for both governments (a reduced number of officers) and citizens (more flexibility and time savings). Therefore, building a maturity model to assess e-government portals becomes desirable to help in the improvement process of such portals. This paper aims at proposing an e-government maturity model based on measuring the presence of best practices. The main benefit of such a maturity model is to provide a way to rank an e-government portal based on the best practices used, and also to give a set of recommendations for reaching a higher stage in the maturity model.Keywords: best practices, e-government portal, maturity model, quality model
Procedia PDF Downloads 338
17629 Trajectory Generation Procedure for Unmanned Aerial Vehicles
Authors: Amor Jnifene, Cedric Cocaud
Abstract:
One of the most constraining problems facing the development of autonomous vehicles is the limitation of current technologies. Guidance and navigation controllers need to be faster and more robust. Communication data links need to be more reliable and secure. For an Unmanned Aerial Vehicle (UAV) to be useful and fully autonomous, one important feature that needs to be an integral part of the navigation system is autonomous trajectory planning. The work discussed in this paper presents a method for on-line trajectory planning for UAVs. This method takes into account various constraints of different types, including specific vectors of approach close to target points, multiple objectives, and other constraints related to speed, altitude, and obstacle avoidance. The trajectory produced by the proposed method ensures a smooth transition between different segments, satisfies the minimum curvature imposed by the dynamics of the UAV, and finds the optimum velocity based on available atmospheric conditions. Given a set of objective points and waypoints, a skeleton of the trajectory is first constructed by linking all waypoints with straight segments in the order in which they are encountered along the path. Secondly, vectors of approach (VoA) are assigned to objective waypoints and their preceding transitional waypoint, if any. Thirdly, the straight segments are replaced by 3D curvilinear trajectories taking into account the aircraft dynamics. In summary, this work presents a method for on-line 3D trajectory generation (TG) for Unmanned Aerial Vehicles (UAVs). The method takes as inputs a series of waypoints and an optional vector of approach for each of the waypoints. Using a dynamic model based on the performance equations of fixed-wing aircraft, the TG computes a set of 3D parametric curves establishing a course between every pair of waypoints and assembles these sets of curves to construct a complete trajectory.
The algorithm ensures geometric continuity at each connection point between two sets of curves. The geometry of the trajectory is optimized according to the dynamic characteristics of the aircraft such that the result translates into a series of dynamically feasible maneuvers.Keywords: trajectory planning, unmanned autonomous air vehicle, vector of approach, waypoints
Procedia PDF Downloads 409
17628 Application of Biomimetic Approach in Optimizing Buildings Heat Regulating System Using Parametric Design Tools to Achieve Thermal Comfort in Indoor Spaces in Hot Arid Regions
Authors: Aya M. H. Eissa, Ayman H. A. Mahmoud
Abstract:
When it comes to energy-efficient thermal regulation systems, natural systems offer not only an inspirational source of innovative strategies but also sustainable and even regenerative ones. Using biomimetic design, an energy-efficient thermal regulation system can be developed. Although conventional design process methods have achieved fairly efficient systems, they still have limitations which can be overcome by using parametric design software. Accordingly, the main objective of this study is to apply and assess the efficiency of heat regulation strategies inspired by termite mounds in residential buildings' thermal regulation systems. Parametric design software is used to pave the way for further and more complex biomimetic design studies and implementations. A hot arid region is selected due to the deficiency of research in this climatic region. First, in the analysis phase, the affecting stimuli and the parameters to be optimized are set, mimicking the natural system. Then, based on climatic data and using the parametric design software Grasshopper, the building form and the opening heights and areas are altered until settling on an optimized solution. Finally, the efficiency of the optimized system, in comparison with a conventional system, is assessed, first, by indoor airflow and indoor temperature obtained from Ansys Fluent (CFD) simulations and, second, by the total solar radiation falling on the building envelope, calculated using Ladybug, a Grasshopper plugin. The results show an increase in the average indoor airflow speed from 0.5 m/s to 1.5 m/s. A slight decrease in temperature was also noticed. Finally, the total radiation was decreased by 4%. In conclusion, despite the fact that applying a single bio-inspired heat regulation strategy might not be enough to achieve an optimum system, the resulting system is more energy efficient than conventional ones, as it helps achieve indoor comfort through passive techniques.
This also demonstrates the potential of parametric design software in biomimetic design.Keywords: biomimicry, heat regulation systems, hot arid regions, parametric design, thermal comfort
Procedia PDF Downloads 294
17627 CFD Simulation of a Large Scale Unconfined Hydrogen Deflagration
Authors: I. C. Tolias, A. G. Venetsanos, N. Markatos
Abstract:
In the present work, CFD simulations of a large-scale open deflagration experiment are performed. A stoichiometric hydrogen-air mixture occupies a 20 m hemisphere. Two combustion models, the Eddy Dissipation Model and a multi-physics combustion model based on Yakhot's equation for the turbulent flame speed, are compared and evaluated against the experiment. The values of the models' critical parameters are investigated. The effect of the turbulence model is also examined; the k-ε model and the LES approach were tested.Keywords: CFD, deflagration, hydrogen, combustion model
Procedia PDF Downloads 502
17626 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output
Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin
Abstract:
With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with this congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD is used to enable beamforming, a major part of massive MIMO, and to transmit and receive pilot sequences efficiently. All these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE, also proposed in many research works. We have optimized these methods using a genetic algorithm to minimize the mean squared error and find the best channel matrix among existing algorithms with less computational complexity. Our simulation results show that the GA works well with existing algorithms in a Rayleigh slow-fading channel with additive white Gaussian noise. We found that the GA-optimized LS is better than existing algorithms, as the GA provides an optimal result within a few iterations in terms of MSE with respect to SNR and computational complexity.Keywords: channel estimation, LMMSE, LS, MIMO, MMSE
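As a toy illustration of GA-refined estimation — not the paper's massive-MIMO setup, but a two-tap channel with invented pilot data, and GA operators chosen for brevity — a small real-coded genetic algorithm can minimize the mean squared error of a candidate channel vector:

```python
import random

def ga_minimize_mse(fitness, dim, pop_size=30, gens=100, seed=1):
    """Tiny real-coded GA: tournament selection, blend crossover, and
    Gaussian mutation. `fitness` is the MSE to minimize for a candidate."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=fitness)      # tournament select
            b = min(rng.sample(pop, 3), key=fitness)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend
            if rng.random() < 0.3:                        # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```

In the setting the abstract describes, the candidate vector would be seeded from the LS or MMSE estimate and the fitness would measure MSE against the received pilot sequence.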
Procedia PDF Downloads 191
17625 Comparison of Crossover Types to Obtain Optimal Queries Using Adaptive Genetic Algorithm
Authors: Wafa’ Alma'Aitah, Khaled Almakadmeh
Abstract:
This study presents an information retrieval system that uses a genetic algorithm to increase information retrieval efficiency. Using the vector space model, information retrieval is based on the similarity measurement between the query and the documents. Documents with high similarity to the query are judged more relevant and should be retrieved first. Using genetic algorithms, each query is represented by a chromosome; these chromosomes are fed into the genetic operator process of selection, crossover, and mutation until an optimized query chromosome is obtained for document retrieval. Results show that information retrieval with adaptive crossover probability, single-point crossover, and roulette-wheel selection gives the highest recall. The proposed approach is verified using 242 proceedings abstracts collected from the Saudi Arabian national conference.Keywords: genetic algorithm, information retrieval, optimal queries, crossover
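The similarity measurement underlying the retrieval step can be sketched directly. The vectors below are toy term-weight vectors, not the conference-abstract collection used in the study; in a full GA pipeline, the ranking quality would feed the fitness of each query chromosome.

```python
import math

def cosine(u, v):
    """Cosine similarity between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_documents(query_vec, doc_vecs):
    """Document indices sorted by descending similarity to the query."""
    scored = [(cosine(query_vec, d), i) for i, d in enumerate(doc_vecs)]
    return [i for s, i in sorted(scored, key=lambda t: (-t[0], t[1]))]
```

Documents whose vectors point in nearly the same direction as the query vector rank first, which is exactly the "high similarity, retrieved first" criterion the abstract states.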
Procedia PDF Downloads 292
17624 A Framework for Consumer Selection on Travel Destinations
Authors: J. Rhodes, V. Cheng, P. Lok
Abstract:
The aim of this study is to develop a parsimonious model that explains the effect of different stimuli on a tourist's intention to visit a new destination. The model includes destination trust and interest as mediating variables. The model was tested using two different types of stimuli; both studies empirically supported the proposed model. Furthermore, the first study revealed that advertising has a stronger effect than positive online reviews. The second study found that the peripheral route of the elaboration likelihood model has stronger influence than the central route in this context.Keywords: advertising, electronic word-of-mouth, elaboration likelihood model, intention to visit, trust
Procedia PDF Downloads 458
17623 A Combined AHP-GP Model for Selecting Knowledge Management Tool
Authors: Ahmad Sarfaraz, Raiyad Herwies
Abstract:
In this paper, a multi-criteria decision-making analysis is used to help an organization select the KM tool that best fits and serves its needs. The AHP model is used, based on a previous study, to highlight and identify the main criteria and sub-criteria that are incorporated in the selection process. Different KM tool alternatives are compared against the criteria and weighted accurately to be incorporated in the GP model. The main goal is to combine the GP model with the AHP model to ensure that the selection of the KM tool respects the resource constraints. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making
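The AHP weighting step can be illustrated with the row geometric-mean approximation of the priority vector. The pairwise judgments in the usage below are invented, and a full AHP implementation would also compute a consistency ratio before accepting the weights.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector from a pairwise comparison matrix
    using the row geometric-mean method, normalized to sum to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]
```

For a perfectly consistent matrix built from underlying weights (here 0.6, 0.3, 0.1, with entry [i][j] = w_i / w_j), the method recovers those weights exactly; the resulting priorities are what a combined AHP-GP formulation would carry into the goal-programming stage.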
Procedia PDF Downloads 385
17622 Long Short-Time Memory Neural Networks for Human Driving Behavior Modelling
Authors: Lu Zhao, Nadir Farhi, Yeltsin Valero, Zoi Christoforou, Nadia Haddadou
Abstract:
In this paper, a long short-term memory (LSTM) neural network model is proposed to replicate car-following and lane-changing behaviors simultaneously in road networks. By combining two kinds of LSTM layers and three input designs of the neural network, six variants of the LSTM model were created. These models were trained and tested on the NGSIM 101 dataset, and the results were evaluated in terms of longitudinal speed and lateral position, respectively. We then compared the LSTM model with a classical car-following model (the intelligent driver model (IDM)) with respect to speed decisions. In addition, the LSTM model was compared with a model using classical neural networks. The LSTM model demonstrates higher accuracy than the physical IDM model in terms of car-following behavior and displays better performance with regard to both car-following and lane-changing behavior than the classical neural network model.Keywords: traffic modeling, neural networks, LSTM, car-following, lane-change
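For readers unfamiliar with the LSTM mechanics behind such models, a single scalar LSTM cell step can be written out explicitly. This is a didactic sketch, not the paper's architecture: real implementations use weight matrices over vector states with learned parameters, and the weight names below are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h_prev, c_prev, W):
    """One LSTM step for a scalar input and scalar hidden/cell state.
    W holds the 12 scalar weights (input, hidden, bias) of the four gates."""
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])    # forget gate
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])    # input gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate cell
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])    # output gate
    c = f * c_prev + i * g       # new cell state mixes memory and candidate
    h = o * math.tanh(c)         # new hidden state (the model's output)
    return h, c
```

It is this gated cell state, carried across time steps, that lets the model retain the recent speed and lateral-position history needed to reproduce car-following and lane-changing decisions.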
Procedia PDF Downloads 261
17621 AgriFood Model in Ankara Regional Innovation Strategy
Authors: Coskun Serefoglu
Abstract:
The study aims to analyse how a traditional sector such as agri-food can be mobilized through regional innovation strategies. A principal component analysis, as well as qualitative methods such as in-depth interviews, focus groups and surveys, was employed to identify the priority sectors. An agri-food model was developed which includes both a linear model and an interactive model. The model consists of two main components, one of which is technological integration and the other agricultural extension, the latter based on the U.S. land-grant university approach, which is not a common practice in Turkey.Keywords: regional innovation strategy, interactive model, agri-food sector, local development, planning, regional development
Procedia PDF Downloads 149
17620 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances, as required for Positive Energy Districts (PED), the use of roofs alone is not sufficient in dense urban areas. However, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet demands such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, while also making use of passive solar gains in wintertime. Today, three-dimensional geometric information on outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for the simulations from BIM models and to make it usable for calculations and coupled simulations. The investigated object is uploaded to this web application as an IFC file and includes the object itself as well as the neighboring buildings and possible remote shading. The tool uses a ray-tracing method to determine possible glare from solar reflections off neighboring buildings as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account.
This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated into the venetian blind, as well as the daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, linking the resulting shading and daylighting system with the artificial lighting system and maximum power generation in one control system. In the research project Powershade, an AI-based control concept for PV-integrated façade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
Keywords: BIPV, building simulation, optimized control strategy, planning tool
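The per-window shadow test at the core of such a ray tracing step can be illustrated with a ray-box intersection: a ray is cast from the window toward the sun, and a hit on a neighboring building's bounding volume marks the window as shaded at that time. This is only a simplified stand-in for the tool's actual method, with made-up geometry:

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a ray from `origin` along `direction` intersect the
    axis-aligned box [box_min, box_max]? Used here as a stand-in shadow test."""
    inv = 1.0 / direction  # assumes no zero components; real code guards this
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))  # latest entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))   # earliest exit across the three slabs
    return t_far >= max(t_near, 0.0)

window = np.array([0.0, 0.0, 3.0])            # a window centre on the facade
sun_dir = np.array([0.5, 0.3, 0.8])           # direction toward the sun
neighbour = (np.array([5.0, -2.0, 0.0]),      # bounding box of a nearby building
             np.array([8.0, 10.0, 20.0]))
shaded = ray_hits_aabb(window, sun_dir / np.linalg.norm(sun_dir), *neighbour)
```

Repeating such a test per window over the sun positions of a weather-data year yields the annual sunlight estimate described above.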
Procedia PDF Downloads 110
17619 Stability Analysis of SEIR Epidemic Model with Treatment Function
Authors: Sasiporn Rattanasupha, Settapat Chinviriyasit
Abstract:
The treatment function is a continuous and differentiable function that can describe the effect of delayed treatment when the number of infected individuals increases and medical resources are limited. In this paper, an SEIR epidemic model with a treatment function is studied to investigate the dynamics of the model under the effect of treatment. It is assumed that the treatment rate is proportional to the number of infective patients. The stability of the model is analyzed, and the model is simulated to illustrate the analytical results and to investigate the effects of treatment on the spread of infection.
Keywords: basic reproduction number, local stability, SEIR epidemic model, treatment function
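A minimal sketch can illustrate the setup of such a model. The abstract does not give the authors' exact equations, so the treatment term below uses a common saturated form T(I) = rI/(1 + bI), which is proportional to I when I is small and limited when I is large, and all parameter values are illustrative:

```python
from scipy.integrate import solve_ivp

def seir_with_treatment(t, y, beta, sigma, gamma, mu, r, b):
    """SEIR dynamics with a saturated treatment function T(I) = r*I/(1 + b*I)."""
    S, E, I, R = y
    N = S + E + I + R
    treatment = r * I / (1.0 + b * I)  # continuous, differentiable, saturates as I grows
    dS = mu * N - beta * S * I / N - mu * S
    dE = beta * S * I / N - (sigma + mu) * E
    dI = sigma * E - (gamma + mu) * I - treatment
    dR = gamma * I + treatment - mu * R
    return [dS, dE, dI, dR]

# illustrative parameters (beta, sigma, gamma, mu, r, b) -- not from the paper
params = (0.5, 0.2, 0.1, 0.01, 0.05, 0.02)
sol = solve_ivp(seir_with_treatment, (0, 200), [990.0, 5.0, 5.0, 0.0], args=params)
```

With balanced births and deaths the total population is conserved, which gives a quick sanity check on the simulation.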
Procedia PDF Downloads 521
17618 Biotransformation Process for the Enhanced Production of the Pharmaceutical Agents Sakuranetin and Genkwanin: Poised to be Potent Therapeutic Drugs
Authors: Niranjan Koirala, Sumangala Darsandhari, Hye Jin Jung, Jae Kyung Sohng
Abstract:
Sakuranetin, an antifungal agent, and genkwanin, an anti-inflammatory agent, are flavonoids with several potential pharmaceutical applications. To produce such valuable flavonoids in large quantities, an Escherichia coli cell factory was created. E. coli harboring the O-methyltransferase SaOMT2, derived from Streptomyces avermitilis, was employed for the regiospecific methylation of naringenin and apigenin. To increase production via biotransformation, the metK gene was overexpressed and the conditions were optimized. The maximum yields of sakuranetin and genkwanin under optimized conditions were 197 µM and 170 µM, respectively, when 200 µM of naringenin and apigenin were supplemented in separate cultures. Furthermore, sakuranetin was purified at large scale and used as a substrate for in vitro glycosylation by YjiC to produce glucose and galactose derivatives of sakuranetin with improved solubility. We also found that, unlike naringenin, sakuranetin effectively inhibits α-melanocyte stimulating hormone (α-MSH)-stimulated melanogenesis in B16F10 melanoma cells. In addition, genkwanin inhibited angiogenesis more potently than apigenin. Based on our findings, we speculate that these compounds warrant further investigation in vivo as potential new therapeutic anti-carcinogenic, anti-melanogenic and anti-angiogenic agents.
Keywords: anti-carcinogenic, anti-melanogenic, glycosylation, methylation
Procedia PDF Downloads 609
17617 UF as Pretreatment of RO for Tertiary Treatment of Biologically Treated Distillery Spentwash
Authors: Pinki Sharma, Himanshu Joshi
Abstract:
Distillery spentwash contains high chemical oxygen demand (COD), biological oxygen demand (BOD), color, total dissolved solids (TDS) and other contaminants even after biological treatment. The effluent cannot be discharged as such into surface water bodies or onto land without further treatment. Reverse osmosis (RO) treatment plants have been installed in many distilleries at the tertiary level, but at most sites these plants do not work properly due to the high concentration of organic matter and other contaminants in the biologically treated spentwash. To make membrane treatment a proven and reliable technology, proper pre-treatment is mandatory. In the present study, ultra-filtration (UF) was used as pre-treatment of RO at the tertiary stage. The operating parameters, namely initial pH (pHo: 2–10), trans-membrane pressure (TMP: 4-20 bar) and temperature (T: 15-43°C), were used for the experiments with the UF system. Experiments were optimized over these operating parameters in terms of COD, color, TDS and TOC removal using response surface methodology (RSM) with a central composite design. The results showed removals of COD, color and TDS of 62%, 93.5% and 75.5%, respectively, with UF at the optimized conditions, while the permeate flux increased from 17.5 l/m2/h (RO alone) to 38 l/m2/h (UF-RO). The performance of the RO system was greatly improved, both in terms of pollutant removal and water recovery.
Keywords: bio-digested distillery spentwash, reverse osmosis, response surface methodology, ultra-filtration
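The RSM step fits a second-order polynomial in the coded factors (here pHo, TMP, T) to each response over a central composite design. A sketch of that fit, using a face-centered design and a synthetic response in place of the measured removals:

```python
import numpy as np

def rsm_design_matrix(X):
    """Second-order model terms for 3 coded factors:
    intercept, linear, two-factor interactions, quadratic."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)),
        x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
    ])

# face-centered central composite design in coded units: factorial, axial, center
factorial = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)])
axial = np.array([[a, 0, 0] for a in (-1, 1)] +
                 [[0, a, 0] for a in (-1, 1)] +
                 [[0, 0, a] for a in (-1, 1)])
X = np.vstack([factorial, axial, np.zeros((3, 3))])

# synthetic response for illustration (real responses would be measured removals, %)
true_coef = np.array([75, -8, 3, 2, 1, 0, 0, -5, -1, -0.5])
y = rsm_design_matrix(X) @ true_coef

coef, *_ = np.linalg.lstsq(rsm_design_matrix(X), y, rcond=None)
```

The fitted coefficients then locate the optimum by inspection of the response surface or by standard stationary-point analysis.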
Procedia PDF Downloads 347
17616 Design of Lead-Lag Based Internal Model Controller for Binary Distillation Column
Authors: Rakesh Kumar Mishra, Tarun Kumar Dan
Abstract:
A lead-lag based internal model controller is proposed based on the Internal Model Control (IMC) strategy. In this paper, we design the lead-lag based internal model controller for a binary distillation column treated as a SISO process (considering only the bottoms product). The transfer function is taken from the Wood and Berry model. We evaluate composition control and disturbance rejection using the lead-lag based IMC and compare them with the response of a simple internal model controller.
Keywords: SISO, lead-lag, internal model control, Wood and Berry, distillation column
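For a first-order-plus-dead-time (FOPDT) loop such as the Wood and Berry bottoms-composition transfer function G22(s) = -19.4e^(-3s)/(14.4s + 1), the IMC controller q(s) = (τs + 1)/(K(λs + 1)) is itself a lead-lag element, and with a first-order approximation of the dead time it reduces to standard PI settings. A sketch of that textbook tuning, not necessarily the authors' exact design (λ is a user-chosen filter constant):

```python
def imc_pi_tuning(K, tau, theta, lam):
    """IMC-based PI settings for a FOPDT process K*exp(-theta*s)/(tau*s + 1),
    using a first-order Taylor approximation of the dead time:
    Kc = tau / (K * (lam + theta)), tau_I = tau."""
    Kc = tau / (K * (lam + theta))
    tau_I = tau
    return Kc, tau_I

# Wood-Berry bottoms loop: G22(s) = -19.4 e^{-3s} / (14.4 s + 1), lam chosen = 3
Kc, tau_I = imc_pi_tuning(K=-19.4, tau=14.4, theta=3.0, lam=3.0)
```

Smaller λ gives tighter setpoint tracking at the cost of robustness, which is the main tuning trade-off in an IMC design.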
Procedia PDF Downloads 646
17615 Preliminary Dosimetric Evaluation of a New Therapeutic 177Lu Complex for Humans Based on Biodistribution Data in Rats
Authors: H. Yousefnia, S. Zolghadri, A. Golabi Dezfuli
Abstract:
[Tris(1,10-phenanthroline)lanthanum(III)] trithiocyanate is a new compound that has been shown to stop DNA synthesis in CCRF-CEM and Ehrlich ascites cells, leading to cell cycle arrest in G0/G1. Another important property of the phenanthroline nucleus is its ability to act as a triplet-state photosensitizer, especially in complexes with lanthanides. Nowadays, the radiation dose assessment resource (RADAR) method is the most common method for absorbed dose calculation. 177Lu was produced by irradiation of a natural Lu2O3 target at a thermal neutron flux of approximately 4 × 10¹³ n/cm²·s. 177Lu-PL3 was prepared under the optimized conditions, and the radiochemical yield was checked by the ITLC method. The biodistribution of the complex was investigated by intravenous injection into wild-type rats via their tail veins. In this study, the absorbed dose of 177Lu-PL3 to human organs was estimated by the RADAR method. 177Lu was prepared with a specific activity of 2.6-3 GBq/mg and a radionuclide purity of 99.98%. The 177Lu-PL3 complex can be prepared with high radiochemical yield (> 99%) under optimized conditions. The results show that the liver and spleen receive the highest absorbed doses, 1.051 and 0.441 mSv/MBq, respectively. The absorbed dose values for these two dose-limiting tissues support further biological studies, especially in tumor-bearing animals.
Keywords: internal dosimetry, Lutetium-177, RADAR, animals
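In the RADAR scheme, the absorbed dose to a target organ is the sum, over source organs, of the time-integrated activity per unit administered activity (the residence time) multiplied by a tabulated dose factor. A minimal sketch with purely hypothetical numbers, not the paper's biodistribution data or actual RADAR dose factors:

```python
def absorbed_dose(residence_times_h, dose_factors):
    """RADAR-style dose to one target organ.
    residence_times_h: {source organ: MBq·h per MBq administered}
    dose_factors: {source organ: mSv per MBq·h delivered to the target organ}"""
    return sum(residence_times_h[organ] * dose_factors[organ]
               for organ in residence_times_h)

# illustrative residence times and liver dose factors (hypothetical values)
tau = {"liver": 12.0, "spleen": 3.5, "remainder": 8.0}
df_liver = {"liver": 0.08, "spleen": 0.002, "remainder": 0.001}
dose_to_liver = absorbed_dose(tau, df_liver)  # mSv per MBq administered
```

In practice the residence times come from fitting the animal biodistribution data, extrapolated to humans, and the dose factors from the RADAR tables for the radionuclide.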
Procedia PDF Downloads 372
17614 The Development of a Miniaturized Raman Instrument Optimized for the Detection of Biosignatures on Europa
Authors: Aria Vitkova, Hanna Sykulska-Lawrence
Abstract:
In recent years, Europa has been one of the major focus points in astrobiology due to its high potential of harbouring life in the vast ocean underneath its icy crust. However, the detection of life on Europa faces many challenges due to the harsh environmental conditions and mission constraints. Raman spectroscopy is a highly capable and versatile in-situ characterisation technique that does not require any sample preparation. It has only been used on Earth to date; however, recent advances in optical and laser technology have allowed it to be considered for extraterrestrial exploration as well. So far, most efforts have focused on the exploration of Mars, the most imminent planetary target. However, as an emerging technology with high miniaturization potential, Raman spectroscopy also represents a promising tool for the exploration of Europa. In this study, the capabilities of Raman technology for life detection on Europa are explored and assessed. Spectra of biosignatures identified as high-priority molecular targets for life detection on Europa were acquired at various excitation wavelengths and under conditions analogous to Europa. The effects of extremely low temperatures and low concentrations in water ice were explored and evaluated in terms of the effectiveness of various configurations of Raman instruments. Based on the findings, a design for a miniaturized Raman instrument optimized for the in-situ detection of life on Europa is proposed.
Keywords: astrobiology, biosignatures, Europa, life detection, Raman spectroscopy
Procedia PDF Downloads 212
17613 Removal of Phenol from Aqueous Solution Using Watermelon (Citrullus C. lanatus) Rind
Authors: Fidelis Chigondo
Abstract:
This study investigates the effectiveness of watermelon rind in removing phenol from aqueous solution. The effects of various parameters (pH, initial phenol concentration, biosorbent dosage and contact time) on phenol adsorption were investigated. A pH of 2, an initial phenol concentration of 40 ppm, a biosorbent dosage of 0.6 g and a contact time of 6 h were deduced to be the optimum conditions for the adsorption process. The maximum phenol removal under the optimized conditions was 85%. The sorption data fitted the Freundlich isotherm with a regression coefficient of 0.9824. The kinetics were best described by the intraparticle diffusion model and the Elovich equation, with regression coefficients of 1 and 0.8461, respectively, showing that the reaction is chemisorption on a heterogeneous surface and that intraparticle diffusion alone is the rate-determining step. The study revealed that watermelon rind has the potential to remove phenol from industrial wastewaters.
Keywords: biosorption, phenol, biosorbent, watermelon rind
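The Freundlich fit mentioned above, qe = KF·Ce^(1/n), is conventionally obtained by linear regression on the log-transformed isotherm: log qe = log KF + (1/n) log Ce. A sketch with synthetic equilibrium data (the paper's own measurements are not reproduced here):

```python
import numpy as np

def fit_freundlich(Ce, qe):
    """Fit qe = KF * Ce**(1/n) by linear regression on
    log10(qe) = log10(KF) + (1/n) * log10(Ce)."""
    slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
    KF = 10 ** intercept
    n = 1.0 / slope
    return KF, n

# illustrative equilibrium data (Ce in mg/L, qe in mg/g), not the paper's measurements
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = 1.8 * Ce ** (1 / 2.5)

KF, n = fit_freundlich(Ce, qe)
```

The R² of this regression is the "regression coefficient" reported for the isotherm; n > 1 indicates favorable adsorption.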
Procedia PDF Downloads 247
17612 Plotting of an Ideal Logic versus Resource Outflow Graph through Response Analysis on a Strategic Management Case Study Based Questionnaire
Authors: Vinay A. Sharma, Shiva Prasad H. C.
Abstract:
The initial stages of any project are often observed to be in a mixed set of conditions. Setting up the project is a tough task, but taking the initial decisions is not especially complex, as some of the critical factors are yet to be introduced into the scenario. These simple initial decisions potentially shape the timeline and the subsequent events that might later be plotted on it. Proceeding towards a solution for the problem is the primary objective in the initial stages. Optimization of the solutions can come later, and hence the resources deployed towards attaining the solution are higher than they would be in the optimized versions. A ‘logic’ that counters the problem is essentially the core of the desired solution. Thus, if the problem is solved, the deployment of resources has led to the required logic being attained. As the project proceeds, the individuals working on it face fresh challenges as a team and become better accustomed to their surroundings. The developed, optimized solutions are then considered for implementation, as the individuals are now experienced, know better the consequences and causes of possible failure, and thus integrate adequate tolerances wherever required. Furthermore, as the team grows in strength, accumulates knowledge, and begins transferring it efficiently, the individuals in charge of the project, along with the managers, focus more on the optimized solutions than on the traditional ones to minimize the required resources. Hence, as time progresses, the authorities prioritize attaining the required logic with a lower amount of dedicated resources. For an empirical analysis of the stated theory, leaders and key figures in organizations were surveyed for their ideas on the appropriate logic required to tackle a problem. Key pointers spotted in successfully implemented solutions were noted from the analysis of the responses, and a metric for measuring logic was developed. 
A graph is plotted with the quantifiable logic on the Y-axis and the resources dedicated to the solutions of various problems on the X-axis. The dedicated resources are plotted over time, and hence the X-axis is also a measure of time. In the initial stages of the project, the graph is rather linear: the required logic is attained, but the consumed resources are also high. With time, the authorities begin focusing on optimized solutions, since the logic attained through them is higher while the resources deployed are comparatively lower. Hence, the difference between consecutively plotted ‘resources’ reduces, and as a result the slope of the graph gradually increases. Overall, the graph takes a parabolic shape (beginning at the origin), as with each resource investment the difference ideally keeps decreasing and the logic attained through the solution keeps increasing. Even when the resource investment is higher, the managers and authorities ideally make sure that the investment is made towards a proportionally high logic for a larger problem; that is, ideally the slope of the graph increases with the plotting of each point.
Keywords: decision-making, leadership, logic, strategic management
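The shape described above can be sketched numerically: equal increments of logic obtained at shrinking resource cost yield a curve through the origin whose slope rises with each plotted point. The numbers below are purely illustrative; the paper's logic metric is survey-derived:

```python
import numpy as np

# Each successive solution attains one more unit of "logic" for a smaller
# resource outlay, modelling the shift toward optimized solutions over time.
logic = np.arange(1, 11, dtype=float)        # quantified logic, Y-axis
resource_steps = 10.0 / np.arange(1, 11)     # shrinking investment per solution
resources = np.cumsum(resource_steps)        # cumulative dedicated resources, X-axis

# slope between consecutive points; should rise as optimization takes hold
slopes = np.diff(logic) / np.diff(resources)
```

Plotting `logic` against `resources` reproduces the ideal curve: steep resource outflow early on, then an ever-increasing logic-per-resource slope.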
Procedia PDF Downloads 108
17611 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning
Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond
Abstract:
Human activity recognition (HAR) systems have shown good performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition. However, water-based activity recognition has largely focused on supporting the elite and competitive swimming population, which already exhibits excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to capture their individual motions in order to help them. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a fixed time window as input can cause performance issues in the machine learning model: the window’s size can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation and then separates the data based on changes in the sensor value. Our system uses a multi-phase segmentation method that extracts all peaks and valleys for each axis of an accelerometer placed on the swimmer’s lower back. This results in high recognition performance under leave-one-subject-out validation in our study with 20 beginner swimmers, with our optimized model achieving an F-score of 0.95 on our final dataset.
Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition
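The two-step segmentation described above can be sketched as a coarse time window followed by peak/valley extraction on one accelerometer axis (valleys found as peaks of the negated signal). The window length, prominence threshold, and synthetic signal below are illustrative, not the study's settings:

```python
import numpy as np
from scipy.signal import find_peaks

def peak_valley_segments(axis_signal, window=100, prominence=0.1):
    """Two-step segmentation: slice the signal into coarse time windows, then
    split each window at accelerometer peaks and valleys."""
    boundaries = []
    for start in range(0, len(axis_signal), window):
        chunk = axis_signal[start:start + window]
        peaks, _ = find_peaks(chunk, prominence=prominence)
        valleys, _ = find_peaks(-chunk, prominence=prominence)
        cuts = np.sort(np.concatenate([peaks, valleys]))
        boundaries.extend(int(start + i) for i in cuts)
    return boundaries

# synthetic stroke-like signal for illustration: ~1 Hz oscillation on one axis
t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * 1.0 * t)
cuts = peak_valley_segments(signal, window=250)
```

Each resulting segment spans one stroke phase, so features extracted per segment follow the motion rather than an arbitrary window boundary.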
Procedia PDF Downloads 123