Search results for: model reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20450

19760 Using Surface Entropy Reduction to Improve the Crystallization Properties of a Recombinant Antibody Fragment RNA Crystallization Chaperone

Authors: Christina Roman, Deepak Koirala, Joseph A. Piccirilli

Abstract:

Phage-displayed synthetic Fab libraries have been used to obtain Fabs that bind specific RNA targets with high affinity and specificity. These Fabs have been demonstrated to facilitate RNA crystallization. However, the antibody framework used in the construction of these phage display libraries contains numerous bulky, flexible, and charged residues, which promote solubility and hinder aggregation. These residues can interfere with crystallization due to the entropic cost associated with burying them within crystal contacts. To systematically reduce the surface entropy of the Fabs and improve their crystallization properties, a protein engineering strategy termed surface entropy reduction (SER) is being applied to the Fab framework. In this approach, high-entropy residues are mutated to smaller ones such as alanine or serine. Focusing initially on Fab BL3-6, which binds an RNA AAACA pentaloop with 20 nM affinity, the SERp server (http://services.mbi.ucla.edu/SER/) was used and analysis was performed on existing RNA-Fab BL3-6 co-crystal structures. From this analysis, twelve surface-entropy-reduced mutants were designed. These SER mutants were expressed and are now being evaluated for their crystallization and diffraction performance with various RNA targets. So far, one mutant has yielded 3.02 angstrom diffraction with the yjdF riboswitch RNA. Ultimately, the most productive mutations will be combined into a new Fab framework to be used in an optimized phage-displayed Fab library.

Keywords: antibody fragment, crystallography, RNA, surface entropy reduction

Procedia PDF Downloads 183
19759 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles is recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the Line-by-Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed in the case of the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the concerned environment can be fully captured by the ANN model. Better results can be achieved in this unexplored domain.
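
As a minimal illustration of the low-discrepancy sampling idea (not the authors' code), the sketch below compares a plain pseudo-random Monte Carlo estimator with a scrambled Sobol Quasi-Monte Carlo estimator on a toy one-dimensional integral; the integrand, sample size, and seeds are placeholder assumptions rather than the radiative transfer kernel used in the study.

```python
# Minimal sketch: plain Monte Carlo vs. Sobol quasi-Monte Carlo on a toy integral.
# The integrand is a stand-in for the spectral emission kernel, not the actual
# radiative-transfer model described above.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # Toy 1-D function on [0, 1]; exact integral is (1 - cos(1)).
    return np.sin(x)

rng = np.random.default_rng(0)
n = 2**12  # 4096 samples

# Standard (pseudo-random) Monte Carlo estimate.
u = rng.random(n)
mc_estimate = integrand(u).mean()

# Quasi-Monte Carlo estimate with a scrambled Sobol low-discrepancy sequence.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
v = sobol.random_base2(m=12).ravel()  # 2**12 points in [0, 1)
qmc_estimate = integrand(v).mean()

exact = 1.0 - np.cos(1.0)
print(f"exact     = {exact:.6f}")
print(f"plain MC  = {mc_estimate:.6f}  (abs error {abs(mc_estimate - exact):.2e})")
print(f"Sobol QMC = {qmc_estimate:.6f}  (abs error {abs(qmc_estimate - exact):.2e})")
```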

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 217
19758 BTG-BIBA: A Flexibility-Enhanced Biba Model Using BTG Strategies for Operating System

Authors: Gang Liu, Can Wang, Runnan Zhang, Quan Wang, Huimin Song, Shaomin Ji

Abstract:

The Biba model can protect information integrity but might deny various non-malicious access requests from subjects, thereby decreasing availability in the system. Therefore, a mechanism that allows exceptional access control is needed. Break the Glass (BTG) strategies offer an efficient means of extending the access rights of users in exceptional cases. These strategies help to prevent a system from stagnation. An approach is presented in this work for integrating Break the Glass strategies into the Biba model. This research proposes a model, BTG-Biba, which provides both the original Biba model used in normal situations and a mechanism used in emergency situations. The proposed model is context-aware, can implement fine-grained access control, and primarily solves cross-domain access problems. Finally, the flexibility and availability improvements achieved with the proposed model are illustrated.
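
As background to the mechanism described above (a minimal sketch of the general idea, not the BTG-Biba model defined in the paper), the following Python sketch applies strict Biba no-read-down/no-write-up checks and adds a break-the-glass override that grants the exceptional access while recording it for later audit; the integrity levels, entities, and audit mechanism are illustrative assumptions.

```python
# Minimal sketch of Biba integrity checks with a break-the-glass (BTG) override.
# Integrity levels, the audit log, and the emergency flag are illustrative
# assumptions, not the BTG-Biba model proposed in the paper.
from dataclasses import dataclass, field
from typing import List

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Subject:
    name: str
    level: int

@dataclass
class Obj:
    name: str
    level: int

@dataclass
class Monitor:
    audit_log: List[str] = field(default_factory=list)

    def check(self, subject: Subject, obj: Obj, op: str, btg: bool = False) -> bool:
        # Strict Biba: no read down (read only objects at or above the subject's
        # level), no write up (write only objects at or below the subject's level).
        if op == "read":
            allowed = obj.level >= subject.level
        elif op == "write":
            allowed = obj.level <= subject.level
        else:
            raise ValueError(f"unknown operation: {op}")

        if allowed:
            return True
        if btg:
            # Break the glass: grant the exceptional access but record it so the
            # action can be reviewed afterwards.
            self.audit_log.append(f"BTG: {subject.name} {op} {obj.name}")
            return True
        return False

monitor = Monitor()
operator = Subject("operator", LEVELS["low"])
config = Obj("system_config", LEVELS["high"])

print(monitor.check(operator, config, "write"))            # False under strict Biba
print(monitor.check(operator, config, "write", btg=True))  # True, but audited
print(monitor.audit_log)
```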

Keywords: Biba model, break the glass, context, cross-domain, fine-grained

Procedia PDF Downloads 535
19757 Proposing a Strategic Management Maturity Model for Continuous Innovation

Authors: Ferhat Demir

Abstract:

Although strategic management is highly critical for all types of organizations, only a few maturity models have been proposed in the business literature for strategic management activities. This paper updates previous studies and presents a new conceptual model for assessing the maturity of strategic management in any organization. The strategic management maturity model (S-3M) is composed of six maturity levels and seven dimensions. The biggest contribution of S-3M is to put innovation on the agenda of strategic management. The main objective of this study is to propose a model to align innovation with business strategies. This paper suggests that innovation (breakthrough new products/services and business models) is the only way of creating sustainable growth, and strategy studies cannot ignore this aspect. Maturity models should embrace innovation to respond to dynamic business environments and rapidly changing customer behaviours.

Keywords: strategic management, innovation, business model, maturity model

Procedia PDF Downloads 184
19756 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques

Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt

Abstract:

Purpose: This study attempts to examine the best forecasting methodologies for time series. The time series forecasting models VAR, ARCH, and ARMA are considered for the analysis. Methodology: The benchmarks, or parameters, such as adjusted R-squared, F-statistics, Durbin-Watson, and the direction of the roots have been critically and empirically analyzed. The empirical analysis consists of time series data of the Consumer Price Index and Closing Stock Price. Findings: The results show that the VAR model performed better in comparison to the other models. Both the reliability and the significance of the VAR model are highly appreciable. In contrast, the ARCH model showed very poor results for forecasting. The results of the ARMA model, however, appeared ambiguous: the AR roots indicated that the model is stationary, while the MA roots indicated that the model is invertible. Therefore, forecasting would remain doubtful if made on the basis of the ARMA model. It has been concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore provides a basis for selecting the best time series forecasting model.
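
To make the comparison concrete (a minimal sketch under stated assumptions, not the study's code or data), the snippet below fits a VAR model with statsmodels on two synthetic series standing in for the Consumer Price Index and the closing stock price, reports the Durbin-Watson statistic of the residuals, and produces a short forecast.

```python
# Minimal sketch of fitting a VAR model to two related series and checking
# residual autocorrelation with the Durbin-Watson statistic. The data below are
# a synthetic VAR(1) process, not the study's CPI and stock price data.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(42)
n = 200
series = np.zeros((n, 2))
for t in range(1, n):
    series[t, 0] = 0.5 * series[t - 1, 0] + 0.1 * series[t - 1, 1] + rng.normal(0, 1)
    series[t, 1] = 0.3 * series[t - 1, 0] + 0.4 * series[t - 1, 1] + rng.normal(0, 1)
data = pd.DataFrame(series, columns=["cpi", "price"])

model = VAR(data)
results = model.fit(maxlags=8, ic="aic")     # lag order selected by AIC
print(results.summary())

# Durbin-Watson statistic per equation (values near 2 suggest little
# residual autocorrelation).
print("Durbin-Watson:", durbin_watson(results.resid))

# Five-step-ahead forecast from the last observed lags.
print(results.forecast(data.values[-results.k_ar:], steps=5))
```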

Keywords: forecasting, time series, auto regression, ARCH, ARMA

Procedia PDF Downloads 337
19755 Bioclimatic Niches of Endangered Garcinia indica Species on the Western Ghats: Predicting Habitat Suitability under Current and Future Climate

Authors: Malay K. Pramanik

Abstract:

In recent years, climate change has become a major threat, and its effects have been widely documented in the geographic distribution of many plant species. However, the impacts of climate change on the distribution of ecologically vulnerable medicinal species remain largely unknown. The identification of suitable habitat for a species under climate change scenarios is a significant step towards the mitigation of biodiversity decline. The study, therefore, aims to predict the impact of current and future climatic scenarios on the distribution of the threatened Garcinia indica across the northern Western Ghats using Maximum Entropy (MaxEnt) modelling. The future projections were made for the years 2050 and 2070 under all Representative Concentration Pathway (RCP) scenarios (2.6, 4.5, 6.0, and 8.5), using 56 species occurrence records and 19 bioclimatic predictors from the BCC-CSM1.1 model of the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment. The bioclimatic variables were reduced to a smaller set after a multicollinearity test, and their contributions were assessed using a jackknife test. The AUC value of 0.956 ± 0.023 indicates that the model performs with excellent accuracy. The study identified that temperature seasonality (39.5 ± 3.1%), isothermality (19.2 ± 1.6%), and annual precipitation (12.7 ± 1.7%) would be the major influencing variables in the current and future distributions. The model predicted 10.5% (19318.7 sq. km) of the study area as moderately to very highly suitable, while 82.60% (151904 sq. km) of the study area was identified as unsuitable or of very low suitability. Our predictions of climate change impact on habitat suitability suggest that there will be a drastic reduction in suitability, by 5.29% and 5.69% under RCP 8.5 for 2050 and 2070, respectively. Finally, the results signify that the model might be an effective tool for biodiversity protection, ecosystem management, and species re-habitation planning under future climate change scenarios.
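
As an illustration of the multicollinearity screening step mentioned above (a minimal sketch, not the authors' workflow), the snippet below iteratively drops the bioclimatic predictor with the highest variance inflation factor (VIF) until all remaining VIFs fall below a threshold; the synthetic predictor table, column names, and threshold of 10 are assumptions.

```python
# Minimal sketch of multicollinearity screening for bioclimatic predictors:
# iteratively drop the variable with the highest variance inflation factor (VIF)
# until all remaining VIFs fall below a threshold. Data and threshold are
# illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def drop_collinear(bioclim: pd.DataFrame, threshold: float = 10.0) -> pd.DataFrame:
    """Return a copy of the predictor table with highly collinear columns removed."""
    data = bioclim.copy()
    while True:
        exog = add_constant(data)
        # VIF for each predictor (skip the constant at index 0).
        vifs = pd.Series(
            [variance_inflation_factor(exog.values, i) for i in range(1, exog.shape[1])],
            index=data.columns,
        )
        worst = vifs.idxmax()
        if vifs[worst] < threshold or data.shape[1] <= 2:
            return data
        data = data.drop(columns=[worst])

# Synthetic stand-in for bioclimatic raster values sampled at occurrence points.
rng = np.random.default_rng(1)
base = rng.normal(size=(56, 5))
predictors = pd.DataFrame(
    np.hstack([base, base[:, :2] + 0.01 * rng.normal(size=(56, 2))]),  # two near-duplicates
    columns=[f"bio{i + 1}" for i in range(7)],
)
kept = drop_collinear(predictors)
print("retained predictors:", list(kept.columns))
```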

Keywords: Garcinia indica, maximum entropy modelling, climate change, MaxEnt, Western Ghats, medicinal plants

Procedia PDF Downloads 151
19754 Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number

Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza

Abstract:

The recent widespread use of Unmanned Aerial Vehicles (UAVs) has fueled a renewed interest in the efficiency and performance of airfoils, particularly for applications at low and moderate Reynolds numbers, typical of this kind of vehicle. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on high Reynolds number applications, typical of commercial airliners and large aircraft. However, in order to increase the efficiency and boost the performance of these UAVs, it is necessary to explore new alternatives in terms of airfoil design and the application of drag reduction techniques. The objective of the present work is to carry out an analysis and comparison of performance between a standard NACA 0012 profile and one featuring a wall protuberance or surface perturbation. A computational model, based on the finite volume method, is employed to evaluate the effect of the presence of geometrical distortions on the wall. The performance evaluation is carried out in terms of variations of the drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e. the airfoil with a surface perturbation, is examined under conditions of incompressible and subsonic flow in transient state. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such controlled, smooth artificial roughness was to alter the turbulent boundary layer. It is shown in the present work that such a modification has a dramatic impact on the aerodynamic characteristics of the airfoil, and, if properly adjusted, in a positive way. The computational model was implemented using the unstructured, FVM-based open source C++ platform OpenFOAM. A number of numerical experiments were carried out at a Reynolds number of 5x10⁴, based on the length of the chord and the free-stream velocity, and at angles of attack of 6° and 12°. A Large Eddy Simulation (LES) approach was used, together with the dynamic Smagorinsky approach as the subgrid-scale (SGS) model, in order to account for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as in terms of alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in the whole performance can be appreciated, including an arguably large increase in the lift-to-drag ratio for all angles and a size reduction of the laminar separation bubble (LSB) for the 12° angle of attack.
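
As a post-processing illustration of how such lift and drag comparisons can be computed (a minimal sketch, not the authors' setup), the snippet below averages the force coefficients written by an OpenFOAM forceCoeffs function object and reports the mean lift-to-drag ratio; the file path and column order are assumptions that vary between OpenFOAM versions and must be checked against the header of the actual output file.

```python
# Minimal post-processing sketch: time-averaged lift-to-drag ratio from an
# OpenFOAM forceCoeffs function-object output file. The path and column indices
# below are assumptions; adjust them to match the header of the actual file.
import numpy as np

COEFF_FILE = "postProcessing/forceCoeffs/0/coefficient.dat"  # assumed location
TIME_COL, CD_COL, CL_COL = 0, 1, 4                            # assumed columns

data = np.loadtxt(COEFF_FILE, comments="#")
time, cd, cl = data[:, TIME_COL], data[:, CD_COL], data[:, CL_COL]

# Discard the initial transient before averaging (here: first half of the run).
mask = time >= 0.5 * time[-1]
cd_mean, cl_mean = cd[mask].mean(), cl[mask].mean()

print(f"mean Cd = {cd_mean:.4f}")
print(f"mean Cl = {cl_mean:.4f}")
print(f"mean lift-to-drag ratio Cl/Cd = {cl_mean / cd_mean:.2f}")
```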

Keywords: CFD, LES, Lift-to-drag ratio, LSB, NACA 0012 airfoil

Procedia PDF Downloads 382
19753 Soil-Cement Floor Produced with Alum Water Treatment Residues

Authors: Flavio Araujo, Paulo Scalize, Julio Lima, Natalia Vieira, Antonio Albuquerque, Isabela Santos

Abstract:

Out of concern for the environmental impacts caused by the disposal of residues generated in Water Treatment Plants (WTPs), alternative ways have been studied to use these residues as raw material for the manufacture of building materials, avoiding their discharge into water streams, disposal in sanitary landfills, or incineration. This paper aims to present the results of a research work which uses water treatment residues (WTR) to replace part of the soil content in the manufacture of soil-cement floors, at proportions of 0, 5, 10, and 15%. The sample tests showed a reduction in mechanical strength as the amount of waste increased. The water absorption was below the maximum of 6% required by the standard. The application of WTR contributes to the reduction of environmental damage in the water treatment industry.

Keywords: residue, soil-cement floor, sustainable, WTP

Procedia PDF Downloads 561
19752 Local Pricing Strategy Should Be the Entry Point of Equitable Benefit Sharing and Poverty Reduction in Community Based Forest Management: Some Evidences from Lowland Community Forestry in Nepal

Authors: Dhruba Khatri

Abstract:

Despite the short history of community-based forest management, the community forestry program of Nepal has, since its inception in the 1970s, produced substantial positive effects by organizing local people into local-level institutions called Community Forest User Groups (CFUGs) and managing local forest resources in line with poverty reduction. Moreover, each CFUG has collected a community fund from the sale of forest products as well as from non-forestry sources, and this fund has played a vital role in improving the livelihoods of user households living in and around the forests. The specific study sites were selected based on the criteria of i) community forests dominated by Sal, and ii) forests with 3-5 years of experience in community forest management. The price rates of forest products fixed by the CFUGs and the distribution records were collected from the respective community forests. Nonetheless, the relation between pricing strategy and community fund collection revealed that a small change in the price of forest products could greatly affect community fund collection and the implementation of forest management, community development, and income generation activities in line with poverty reduction at the local level.

Keywords: benefit sharing, community forest, equitable, Nepal

Procedia PDF Downloads 376
19751 Multiscale Simulation of Ink Seepage into Fibrous Structures through a Mesoscopic Variational Model

Authors: Athmane Bakhta, Sebastien Leclaire, David Vidal, Francois Bertrand, Mohamed Cheriet

Abstract:

This work presents a new three-dimensional variational model proposed for the simulation of ink seepage into paper sheets at the fiber level. The model, inspired by the Ising model, takes into account a finite volume of ink and describes the system state through gravity, cohesion, and adhesion force interactions. At the mesoscopic scale, the paper substrate is modeled using a discretized fiber structure generated using a numerical deposition procedure. A modified Monte Carlo method is introduced for the simulation of the ink dynamics. In addition, a multiphase lattice Boltzmann method is suggested to fine-tune the mesoscopic variational model parameters, and it is shown that the ink seepage behaviors predicted by the proposed model can resemble those predicted by a method relying on first principles.
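
To illustrate the kind of energy-based Monte Carlo update such a model relies on (a deliberately simplified two-dimensional sketch, not the paper's three-dimensional fiber-level model), the snippet below moves ink cells on a small lattice with a Metropolis acceptance rule driven by gravity, ink-ink cohesion, and ink-fiber adhesion; the lattice, energy weights, and temperature are illustrative assumptions.

```python
# Highly simplified 2-D sketch of a Metropolis Monte Carlo step for an
# energy-based (variational) seepage model: ink cells move to empty neighbouring
# sites with a probability governed by gravity, ink-ink cohesion, and ink-fiber
# adhesion. All weights are illustrative assumptions.
import numpy as np

EMPTY, FIBER, INK = 0, 1, 2
G, COHESION, ADHESION, BETA = 1.0, 0.5, 0.8, 2.0   # assumed energy weights

rng = np.random.default_rng(0)
grid = np.zeros((40, 40), dtype=int)
grid[rng.random(grid.shape) < 0.2] = FIBER          # random fibrous substrate
grid[0, 15:25] = INK                                # finite ink volume on top

def site_energy(grid, r, c):
    """Energy of an ink cell at (r, c): gravity plus neighbour interactions."""
    e = G * (grid.shape[0] - 1 - r)                 # lower rows have lower energy
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
            if grid[rr, cc] == INK:
                e -= COHESION                       # cohesion with neighbouring ink
            elif grid[rr, cc] == FIBER:
                e -= ADHESION                       # adhesion to fibers
    return e

def metropolis_step(grid):
    ink_sites = np.argwhere(grid == INK)
    r, c = ink_sites[rng.integers(len(ink_sites))]
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][rng.integers(4)]
    rr, cc = r + dr, c + dc
    if not (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]) or grid[rr, cc] != EMPTY:
        return
    e_old = site_energy(grid, r, c)
    grid[r, c], grid[rr, cc] = EMPTY, INK           # trial move
    e_new = site_energy(grid, rr, cc)
    if rng.random() >= np.exp(-BETA * max(e_new - e_old, 0.0)):
        grid[r, c], grid[rr, cc] = INK, EMPTY       # reject: undo the move

for _ in range(20000):
    metropolis_step(grid)
print("ink depth reached (row index):", np.argwhere(grid == INK)[:, 0].max())
```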

Keywords: fibrous media, lattice Boltzmann, modelling and simulation, Monte Carlo, variational model

Procedia PDF Downloads 139
19750 Prediction of Super-Response to Cardiac Resynchronisation Therapy

Authors: Vadim A. Kuznetsov, Anna M. Soldatova, Tatyana N. Enina, Elena A. Gorbatenko, Dmitrii V. Krinochkin

Abstract:

The aim of the study was to evaluate potential parameters related to super-response to CRT. Methods: 60 CRT patients (mean age 54.3 ± 9.8 years; 80% men) with congestive heart failure (CHF) of NYHA functional class II-IV and left ventricular ejection fraction < 35% were enrolled. At baseline, 1 month, 3 months, and every 6 months after implantation, clinical, electrocardiographic, and echocardiographic parameters and NT-proBNP level were evaluated. According to the best decrease of left ventricular end-systolic volume (LVESV) (mean follow-up period 33.7 ± 15.1 months), patients were classified as super-responders (SR) (n=28; reduction in LVESV ≥ 30%) and non-SR (n=32; reduction in LVESV < 30%). Results: At baseline, the groups differed in age (58.1 ± 5.8 years in SR vs 50.8 ± 11.4 years in non-SR; p=0.003), gender (female gender 32.1% vs 9.4%, respectively; p=0.028), and width of the QRS complex (157.6 ± 40.6 ms in SR vs 137.6 ± 33.9 ms in non-SR; p=0.044). The percentage of LBBB was equal between groups (75% in SR vs 59.4% in non-SR; p=0.274). All parameters of mechanical dyssynchrony were higher in SR, but only the difference in left ventricular pre-ejection period (LVPEP) was statistically significant (153.0 ± 35.9 ms vs. 129.3 ± 28.7 ms; p=0.032). The NT-proBNP level was lower in SR (1581 ± 1369 pg/ml vs 3024 ± 2431 pg/ml; p=0.006). The survival rates were 100% in SR and 90.6% in non-SR (log-rank test P=0.002). Multiple logistic regression analysis showed that LVPEP (HR 1.024; 95% CI 1.004–1.044; P=0.017), baseline NT-proBNP level (HR 0.628; 95% CI 0.414–0.953; P=0.029), and age at baseline (HR 1.094; 95% CI 1.009-1.168; P=0.30) were independent predictors of CRT super-response. ROC curve analysis demonstrated a sensitivity of 71.9% and a specificity of 82.1% (AUC=0.827; p < 0.001) for this model in predicting super-response to CRT. Conclusion: Super-response to CRT is associated with better survival in the long-term period. Presence of LBBB was not associated with super-response. LVPEP, NT-proBNP level, and age at baseline can be used as independent predictors of CRT super-response.
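
For readers unfamiliar with the modelling step (a minimal sketch on synthetic data, not the study cohort), the snippet below fits a multiple logistic regression on three baseline predictors named after those reported above and evaluates discrimination with a ROC curve and the Youden index; the data-generating assumptions only loosely mimic the reported directions of effect.

```python
# Minimal sketch of the prediction workflow: multiple logistic regression on
# baseline predictors (LVPEP, NT-proBNP, age) with ROC-based evaluation.
# The data below are synthetic placeholders, not the study cohort.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
n = 60
df = pd.DataFrame({
    "lvpep_ms": rng.normal(140, 33, n),
    "nt_probnp_pg_ml": rng.lognormal(7.5, 0.8, n),
    "age_years": rng.normal(54, 10, n),
})
# Synthetic outcome loosely mimicking the reported directions of effect.
logit = (0.03 * (df["lvpep_ms"] - 140)
         - 0.0004 * (df["nt_probnp_pg_ml"] - 2000)
         + 0.05 * (df["age_years"] - 54))
df["super_responder"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

predictors = ["lvpep_ms", "nt_probnp_pg_ml", "age_years"]
model = LogisticRegression(max_iter=1000)
model.fit(df[predictors], df["super_responder"])

probs = model.predict_proba(df[predictors])[:, 1]
auc = roc_auc_score(df["super_responder"], probs)
fpr, tpr, thresholds = roc_curve(df["super_responder"], probs)
print(f"in-sample AUC = {auc:.3f}")
# Youden index picks the threshold that best balances sensitivity and specificity.
best = np.argmax(tpr - fpr)
print(f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```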

Keywords: cardiac resynchronisation therapy, superresponse, congestive heart failure, left bundle branch block

Procedia PDF Downloads 389
19749 Flood Planning Based on Risk Optimization: A Case Study in Phan-Calo River Basin in Vinh Phuc Province, Vietnam

Authors: Nguyen Quang Kim, Nguyen Thu Hien, Nguyen Thien Dung

Abstract:

Flood disasters are increasing worldwide in both frequency and magnitude. Every year in Vietnam, floods cause great damage to people and property as well as environmental degradation. The flood risk management policy in Vietnam is currently being updated. The planning of flood mitigation strategies is reviewed to decide how to reach sustainable flood risk reduction. This paper discusses the basic approach in which the flood protection measures are chosen by minimizing the present value of the expected monetary expenses, comprising the total residual risk and the costs of flood control measures. This approach is proposed and demonstrated in a case study of flood risk management in Vinh Phuc province of Vietnam. The research also proposes a framework to find the optimal protection level and the optimal flood mitigation measures. It provides an explicit economic basis for flood risk management plans and for the interactive effects of flood damage reduction options. The results of the case study are demonstrated and discussed, providing a course of action that helps decision makers choose among flood risk reduction investment options.
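
To make the risk-optimization idea concrete (a minimal sketch under stated assumptions, not the Phan-Calo case-study data), the snippet below evaluates a few candidate protection levels by summing the investment cost and the present value of the residual expected annual damage, and picks the cheapest total; all costs, damages, the discount rate, and the planning horizon are illustrative.

```python
# Minimal sketch of risk optimization: for each candidate protection level,
# total cost = investment cost + present value of residual expected annual
# damage; pick the level with the lowest total. All numbers are illustrative.
import numpy as np

# Candidate protection levels expressed as design return periods (years),
# with assumed investment costs and residual expected annual damages (M USD).
return_periods = np.array([10, 25, 50, 100])
investment_cost = np.array([5.0, 12.0, 22.0, 40.0])
expected_annual_damage = np.array([8.0, 3.5, 1.6, 0.7])   # residual risk per year

discount_rate, horizon = 0.05, 50                          # assumed economic parameters
# Present-value factor of an annuity over the planning horizon.
pv_factor = (1 - (1 + discount_rate) ** -horizon) / discount_rate

residual_risk_pv = expected_annual_damage * pv_factor
total_cost = investment_cost + residual_risk_pv

best = np.argmin(total_cost)
for T, c, r, t in zip(return_periods, investment_cost, residual_risk_pv, total_cost):
    print(f"1-in-{T:>3} yr protection: investment {c:5.1f} + residual risk PV {r:6.1f}"
          f" = total {t:6.1f} M USD")
print(f"optimal protection level: 1-in-{return_periods[best]} year flood")
```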

Keywords: drainage plan, flood planning, flood risk, residual risk, risk optimization

Procedia PDF Downloads 228
19748 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When the piecewise polynomial regression model is fitted to data, its parameters are generally not known. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
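
As a simplified illustration of one ingredient of such a sampler (a sketch under strong simplifying assumptions, not the paper's algorithm), the snippet below runs a fixed-dimension Metropolis-Hastings move that relocates a single breakpoint of a piecewise linear fit, with segment coefficients refit by least squares; the full reversible jump sampler additionally uses birth/death moves that change the number of breakpoints, which are not shown here.

```python
# Simplified sketch: Metropolis-Hastings relocation of one breakpoint in a
# piecewise linear regression, with segment coefficients refit by least squares.
# Priors, the known noise level, and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
true_break = 0.6
y = np.where(x < true_break, 1.0 + 2.0 * x, 4.0 - 3.0 * (x - true_break)) \
    + rng.normal(0, 0.1, x.size)
sigma2 = 0.1 ** 2  # assumed known noise variance

def log_likelihood(breakpoint: float) -> float:
    """Gaussian log-likelihood after least-squares fits on both segments."""
    left, right = x < breakpoint, x >= breakpoint
    if left.sum() < 2 or right.sum() < 2:
        return -np.inf
    rss = 0.0
    for mask in (left, right):
        A = np.column_stack([np.ones(mask.sum()), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        rss += np.sum((y[mask] - A @ coef) ** 2)
    return -0.5 * rss / sigma2

current, current_ll = 0.5, log_likelihood(0.5)
samples = []
for _ in range(5000):
    proposal = current + rng.normal(0, 0.05)             # random-walk proposal
    proposal_ll = log_likelihood(proposal) if 0 < proposal < 1 else -np.inf
    if np.log(rng.random()) < proposal_ll - current_ll:  # uniform prior cancels
        current, current_ll = proposal, proposal_ll
    samples.append(current)

posterior = np.array(samples[1000:])                     # drop burn-in
print(f"posterior mean breakpoint ≈ {posterior.mean():.3f} (true {true_break})")
```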

Keywords: piecewise regression, bayesian, reversible jump MCMC, segmentation

Procedia PDF Downloads 364
19747 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule

Authors: Yuanxiaoyue Yang

Abstract:

Emissions trading, or 'cap-and-trade', has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to other unintended consequences brought by emissions trading. One important consequence is that cap-and-trade could introduce the risk of creating high-level emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study will contribute to the current environmental policy literature by linking trading activity with environmental injustice concerns and empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program for the first time. To investigate the potential environmental injustice concern in cap-and-trade, this paper uses a difference-in-differences (DID) with instrumental variable method to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from US EPA air market databases. While polluting facilities from CAIR are the treatment group under our DID identification, we use non-CAIR facilities from the Acid Rain Program (another NOx control program without a trading scheme) as the control group. To isolate the causal effects of trading behaviors on emissions reduction, we also use eligibility for CAIR participation as the instrumental variable. The DID results indicate that the CAIR program was able to reduce NOx emissions from affected facilities by about 10% more than facilities that did not participate in the CAIR program. Therefore, CAIR achieves excellent overall performance in emissions reduction. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still decreases a CAIR participating facility's emissions level significantly. This result implies that even buyers under the cap-and-trade program have achieved a great amount of emissions reduction. Therefore, we find little evidence of environmental injustice from the CAIR program.
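
To illustrate the basic difference-in-differences specification referenced above (a minimal sketch on synthetic data, not the EPA air-market data or the full instrumental-variable design), the snippet below regresses log emissions on treatment, post-period, and their interaction, with standard errors clustered by facility; the facility counts, years, and the assumed compliance year are placeholders.

```python
# Minimal sketch of a difference-in-differences regression: log emissions on a
# treatment dummy, a post-period dummy, and their interaction, clustering
# standard errors by facility. Synthetic placeholder data, not the EPA data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_facilities, years = 200, range(2000, 2011)
rows = []
for fac in range(n_facilities):
    treated = int(fac < n_facilities // 2)          # CAIR facilities
    base = rng.normal(7.0, 0.5)                     # facility-specific log NOx level
    for year in years:
        post = int(year >= 2009)                    # assumed compliance period start
        effect = -0.10 * treated * post             # built-in ~10% reduction
        rows.append({
            "facility": fac, "year": year, "treated": treated, "post": post,
            "log_nox": base + effect + rng.normal(0, 0.1),
        })
df = pd.DataFrame(rows)

model = smf.ols("log_nox ~ treated + post + treated:post", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["facility"]})
# The coefficient on treated:post is the DID estimate of the program effect.
print(result.summary().tables[1])
```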

Keywords: air pollution, cap-and-trade, emissions trading, environmental justice

Procedia PDF Downloads 139
19746 Comparison of Sourcing Process in Supply Chain Operation References Model and Business Information Systems

Authors: Batuhan Kocaoglu

Abstract:

Although they use powerful systems like ERP (Enterprise Resource Planning), companies still cannot easily benchmark their processes and measure their process performance based on predefined SCOR (Supply Chain Operation References) terms. The purpose of this research is to identify common and corresponding processes and to present a conceptual model for modeling and measuring the purchasing process of an organization. The main steps of the research study are: a literature review of the 'procure to pay' process in ERP systems; a literature review of the 'sourcing' process in the SCOR model; and the development of a conceptual model integrating the 'sourcing' process of the SCOR model with the 'procure to pay' process of the ERP model. In this study, we examined the similarities and differences between these two models. The proposed framework is based on assumptions drawn from (1) the body of literature and (2) the authors' experience working in the field of enterprise and logistics information systems. The modeling framework provides a structured and systematic way to model and decompose the necessary information from conceptual representation to process element specification. This conceptual model will help organizations build quality purchasing system measurement instruments and tools. The adaptation issues identified for ERP systems and the SCOR model will provide a more benchmarkable, worldwide-standard business process.

Keywords: SCOR, ERP, procure to pay, sourcing, reference model

Procedia PDF Downloads 356
19745 Refinement of Existing Benzthiazole Lead Targeting Lysine Aminotransferase in Dormant Stage of Mycobacterium tuberculosis

Authors: R. Reshma srilakshmi, S. Shalini, P. Yogeeswari, D. Sriram

Abstract:

Lysine aminotransferase is a crucial enzyme for dormancy in M. tuberculosis. It is involved in persistence and antibiotic resistance. In the present work, we attempted to develop benzthiazole derivatives as lysine aminotransferase inhibitors. In our attempts, we also unexpectedly arrived at an interesting compound, 21, (E)-4-(5-(2-(benzo[d]thiazol-2-yl)-2-cyanovinyl)thiophen-2-yl)benzoic acid, which, even though it has moderate activity against the persistent phase of the mycobacterium, has significant potency against the active phase. In the entire series, compound 22, (E)-4-(5-(2-(benzo[d]thiazol-2-yl)-2-cyanovinyl)thiophen-2-yl)isophthalic acid, emerged as a potent molecule with a LAT IC50 of 2.62 µM. It showed significant log reductions of 2.9- and 2.3-fold against nutrient-starved and biofilm-forming mycobacteria, respectively. It was found to be inactive in the MABA assay and in the M. marinum-induced zebrafish model. It is also devoid of cytotoxicity. Compound 22 was also found to possess a bactericidal effect that is independent of concentration and time. It was found to be effective in combination with rifampicin in a 3D granuloma model. The results are very encouraging, as the hit molecule shows activity against active as well as persistent forms of tuberculosis. The identified hit needs further pharmacokinetic and pharmacodynamic screening for development as a new drug candidate.

Keywords: benzothiazole, latent tuberculosis, LAT, nutrient starvation

Procedia PDF Downloads 324
19744 Effect of Different Model Drugs on the Properties of Model Membranes from Fishes

Authors: M. Kumpugdee-Vollrath, T. G. D. Phu, M. Helmis

Abstract:

A suitable model membrane for studying the pharmacological effect of pharmaceutical products is human stratum corneum, because this layer is the outermost layer of human skin and an important barrier to be passed. Other model membranes that have also been used are, for example, skins from pig, mouse, reptile, or fish. We are interested in fish skins in this project. The advantage of fish skins is that they can be obtained from the supermarket or fish shop. However, the fish skins should be freshly prepared and used directly without storage. In order to understand the effect of different model drugs (e.g., lidocaine HCl, resveratrol, paracetamol, ibuprofen, and acetyl salicylic acid) on the properties of model membranes from various types of fish (e.g., trout, salmon, cod, and plaice), permeation tests were performed and differential scanning calorimetry was applied.

Keywords: fish skin, model membrane, permeation, DSC, lidocaine HCl, resveratrol, paracetamol, ibuprofen, acetyl salicylic acid

Procedia PDF Downloads 463
19743 Study on Novel Reburning Process for NOx Reduction by Oscillating Injection of Reburn Fuel

Authors: Changyeop Lee, Sewon Kim, Jongho Lee

Abstract:

Reburning technology has been developed for adoption in various commercial combustion systems. Fuel-lean reburning is an advanced reburning method to reduce NOx economically without using burnout air; however, it is not easy to achieve high NOx reduction efficiency. In the fuel-lean reburning system, localized fuel-rich eddies are used to establish partial fuel-rich regions so that the NOx can react with hydrocarbon radicals within these restricted regions. In this paper, a new advanced reburning method, which supplies the reburn fuel with an oscillatory motion, is introduced to increase the NOx reduction rate effectively. To clarify whether forced oscillating injection of reburn fuel can effectively reduce NOx emissions, experimental tests were conducted in a vertical combustion furnace. Experiments were performed in flames stabilized by a gas burner, which was mounted at the bottom of the furnace. Natural gas is used as both the main and reburn fuel, and the total thermal input is about 40 kW. The forced oscillating injection of reburn fuel is realized by an electronic solenoid valve, so that fuel-rich and fuel-lean regions are established alternately. In the fuel-rich region, NOx is converted to N2 by the reburning reaction, while unburned hydrocarbons and CO are oxidized in the fuel-lean zone and in the downstream mixing zone, where a slightly fuel-lean region is formed by the mixing of the two regions. This paper reports data on flue gas emissions and the temperature distribution in the furnace for a wide range of experimental conditions. All experimental data have been measured at steady state. The NOx reduction rate increases up to 41% with the forced oscillating reburn motion. The CO emissions were shown to be kept at a very low level. This paper also makes clear that, in order to decrease the NOx concentration in the exhaust when an oscillating reburn fuel injection system is adopted, the control of factors such as frequency and duty ratio is very important.

Keywords: NOx, CO, reburning, pollutant

Procedia PDF Downloads 287
19742 Lyapunov Functions for Extended Ross Model

Authors: Rahele Mosleh

Abstract:

This paper gives a survey of results on the global stability of the extended Ross model for malaria by constructing some elegant Lyapunov functions for two epidemic cases: the disease-free and the endemic case. The model is a nonlinear seven-dimensional system of ordinary differential equations that simulates this phenomenon in a more realistic fashion. We discuss the existence of positive disease-free and endemic equilibrium points of the model. It is stated that the extended Ross model possesses invariant solutions for the human and mosquito populations in a specific domain of the system.
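
For context, the templates below are a generic illustration of the kinds of Lyapunov functions commonly constructed for compartmental epidemic models: a weighted linear combination of infected compartments at the disease-free equilibrium and a Goh-Volterra (logarithmic) form at the endemic equilibrium. These are textbook forms, not the specific functions constructed in the paper for the seven-dimensional extended Ross system.

```latex
% Generic Lyapunov-function templates for compartmental epidemic models
% (illustrative only; the paper constructs its own functions).
\begin{align}
  % Disease-free equilibrium: weighted sum of infected compartments x_i,
  % with weights c_i > 0 chosen so that dV/dt <= 0 when R_0 <= 1.
  V_{\mathrm{DFE}}(x) &= \sum_{i} c_i \, x_i, \\
  % Endemic equilibrium x^*: Goh--Volterra (logarithmic) form, which is
  % nonnegative and vanishes only at x = x^*.
  V_{\mathrm{EE}}(x) &= \sum_{i} c_i \left( x_i - x_i^* - x_i^* \ln \frac{x_i}{x_i^*} \right).
\end{align}
```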

Keywords: global stability, invariant solutions, Lyapunov function, stationary points

Procedia PDF Downloads 158
19741 Study of the Effect of Humic Acids on Soil Salinity Reduction

Authors: S. El Hasini, M. El Azzouzi, M. De Nobili, K. Azim, A. Zouahri

Abstract:

Soil salinization is one of the most severe environmental hazards that threaten sustainable agriculture in arid and semi-arid regions, including Morocco. In this regard, the application of organic matter to saline soil has proven effective. The present study aimed to examine the effect of humic acid, which represents, among others, an important component of organic matter that contributes to reducing soil salinity. Different composts taken from Agadir (Morocco), with different C/N ratios, were tested. After extraction and purification of the humic acid, the interaction with Na2CO3 was carried out. The reduction of salinity is calculated as a value expressed in mg Na2CO3 equivalent/g HA. The results showed that humic acid generally had a significant effect on salinity. In that respect, the hypothesis proposed is that the carboxylic groups of humic acid create bonds with the excess sodium in the soil to form a coherent complex, which descends through the soil during leaching. The comparison between composts was based on the C/N ratio; it showed that the compost with the lower C/N ratio had the most important effect on salinity reduction, whereas the compost with the higher C/N ratio was less effective. The study also intended to evaluate the quality of each compost by determining the humification index; we noticed that the compost with the lowest C/N ratio (20) was relatively less stable, with a greater predominance of humified substances, whereas the compost with a C/N ratio of 35 exhibited higher stability.

Keywords: compost, humic acid, organic matter, salinity

Procedia PDF Downloads 235
19740 Tracy: A Java Library to Render a 3D Graphical Human Model

Authors: Sina Saadati, Mohammadreza Razzazi

Abstract:

Since Java is an object-oriented language, it can be used to solve a wide range of problems. One considerable use of this language can be found in agent-based modeling and simulation. Despite the significant power of Java, there is no easy way to render a 3-dimensional human model. In this article, we develop a library which helps modelers present a 3D human model and control it with Java. The library runs two server programs. The first one is a web page server that any browser can connect to and that presents the HTML code. The second server connects to the browser and controls the movement of the model. Thus, the modeler will be able to develop a simulation and display a good-looking human model without any knowledge of graphical tools.

Keywords: agent-based modeling and simulation, human model, graphics, Java, distributed systems

Procedia PDF Downloads 103
19739 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetes

Authors: Fei Gao, Rodolfo C. Raga Jr.

Abstract:

This research proposal will ascertain the major risk factors for diabetes and design a predictive model for risk assessment. The project aims to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. The phase relation values of each attribute were used to analyze and choose the attributes that might influence survival probability, using the Diabetes Health Indicators Dataset from Kaggle as the research data. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and we assess performance using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
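
As a sketch of the evaluation protocol described above (illustrative, not the study's code), the snippet below runs 5-fold cross-validation of two example classifiers on a synthetic, imbalanced dataset and scores them on accuracy, precision, recall, F1, and ROC AUC; the two models merely stand in for the eight algorithms compared in the study, and the data are not the Kaggle dataset.

```python
# Minimal sketch: k-fold cross-validation of candidate classifiers scored on
# accuracy, precision, recall, F1, and ROC AUC. Synthetic data and two example
# models stand in for the actual dataset and the eight algorithms compared.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)  # imbalanced toy data

models = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=scoring)
    summary = ", ".join(f"{m}={scores['test_' + m].mean():.3f}" for m in scoring)
    print(f"{name}: {summary}")
```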

Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle

Procedia PDF Downloads 66
19738 Effects of the Coagulation Bath and Reduction Process on SO2 Adsorption Capacity of Graphene Oxide Fiber

Authors: Özge Alptoğa, Nuray Uçar, Nilgün Karatepe Yavuz, Ayşen Önen

Abstract:

Sulfur dioxide (SO2) is a very toxic air pollutant gas, and it causes the greenhouse effect, photochemical smog, and acid rain, which severely threaten human health. Thus, the capture of SO2 gas is very important for the environment. Graphene, a two-dimensional material, has excellent mechanical, chemical, and thermal properties and many application areas, such as energy storage devices, gas adsorption, sensing devices, and optical electronics. Further, graphene oxide (GO) is examined as a good adsorbent because of important features such as the functional groups (epoxy, carboxyl, and hydroxyl) on its surface and its layered structure. SO2 adsorption properties are usually investigated on carbon fibers. In this study, the potential adsorption capacity of GO fibers was researched. A GO dispersion was first obtained from graphite with Hummers' method, and then GO fibers were obtained via a wet spinning process. These fibers were converted into a disc shape, dried, and then subjected to a SO2 gas adsorption test. The SO2 gas adsorption capacity of the GO fiber discs was investigated with respect to the use of different coagulation baths and reduction by hydrazine hydrate. As coagulation baths, single and triple baths were used. In the single bath, only ethanol and CaCl2 (calcium chloride) salt were added. In the triple bath, each bath had a different concentration of water/ethanol and CaCl2 salt, and the disc obtained from the triple bath is referred to as the reference disc. The fibers produced with the single bath were flexible and rough, and the analyses show that they had a higher SO2 adsorption capacity than the triple-bath fibers (reference disc). However, the reduction process did not increase the adsorption capacity, because the SEM images showed that the layers and the uniform structure of the fiber form were damaged, and reduction decreased the functional groups to which SO2 attaches. Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FTIR), and X-Ray Diffraction (XRD) analyses were performed on the fibers and discs, and their effects on the results were interpreted. In future applications of the study, it is aimed to examine subjects such as pH and additives.

Keywords: coagulation bath, graphene oxide fiber, reduction, SO2 gas adsorption

Procedia PDF Downloads 356
19737 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties

Authors: G. Martino, F. Silva, E. Marchal

Abstract:

The control over delivered iron ore blend characteristics is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process. Thus, end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best blend possible. Technological improvements in several fields have allowed the gap between departments to be bridged and integrated decision-making processes to be boosted. Clusterization and classification algorithms applied to historical production data generate reasonable predictions of the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization combined with a planning optimization model between the mine and the ore beneficiation processes can reduce the risk of out-of-specification deliveries. The model capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways to reduce cost or increase profit by optimizing process indicators across the production chain and integrating the different planning stages with the sales decisions.
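
To show what a blend-proposal step of this kind can look like (a minimal sketch, not the authors' model), the snippet below solves a small linear program that selects tonnages from run-of-mine piles so that the delivered blend meets Fe and silica specifications at minimum cost; the pile grades, costs, availabilities, demand, and specification limits are illustrative assumptions.

```python
# Minimal blend-optimization sketch: choose tonnages from ROM piles so the
# delivered blend meets Fe and silica specifications at minimum cost.
# All numbers are illustrative assumptions, not real mine data.
import numpy as np
from scipy.optimize import linprog

# Per-pile data (could come from the clusterization/classification step).
fe_grade = np.array([64.0, 61.5, 58.0, 66.0])      # % Fe
silica = np.array([3.0, 4.5, 6.0, 2.0])            # % SiO2
cost = np.array([20.0, 15.0, 10.0, 28.0])          # $/t processing + handling
available = np.array([50e3, 80e3, 120e3, 30e3])    # t available per pile

demand = 150e3                                      # t to deliver
fe_min, silica_max = 62.0, 4.0                      # blend specification

# Inequality constraints A_ub @ x <= b_ub (blend grade limits on the total lot).
A_ub = np.vstack([-fe_grade, silica])
b_ub = np.array([-fe_min * demand, silica_max * demand])
# Equality constraint: delivered tonnage equals demand.
A_eq, b_eq = np.ones((1, 4)), np.array([demand])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=list(zip(np.zeros(4), available)), method="highs")

if res.success:
    x = res.x
    print("tonnes from each pile:", np.round(x))
    print(f"blend Fe = {fe_grade @ x / demand:.2f} %, "
          f"SiO2 = {silica @ x / demand:.2f} %, cost = {cost @ x:,.0f} $")
else:
    print("no feasible blend:", res.message)
```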

Keywords: clusterization and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization

Procedia PDF Downloads 120
19736 Reduction of Toxic Matter from Marginal Water Treatment Using Sludge Recycling from Combination of Stepped Cascade Weir with Limestone Trickling Filter

Authors: Dheyaa Wajid Abbood, Ali Mohammed Tawfeeq Baqer, Eitizaz Awad Jasim

Abstract:

The aim of this investigation is to confirm the activity of a sludge recycling process in a trickling filter filled with limestone as an alternative biological process, compared with conventional high-cost treatment processes, with regard to the reduction of toxic matter in marginal water. The combination system of a stepped cascade weir with a limestone trickling filter has been designed and constructed in the Environmental Hydraulic Laboratory, Al-Mustansiriya University, College of Engineering. A set of experiments was conducted during the period from August 2013 to July 2014. Seven days of continuous operation with different continuous flow rates (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1 m3/hr) were carried out after ten days of acclimatization experiments. Results indicate that the concentrations of toxic matter decreased with increasing operation time, sludge recirculation ratio, and flow rate. The toxic matter measured includes mineral oils, petroleum products, phenols, biocides, polychlorinated biphenyls (PCBs), and surfactants; the values measured in these experiments ranged between 0.074 nm and 0.156 nm. Results indicated that the overall reduction efficiencies after 4, 28, 52, 76, 100, 124, and 148 hours of operation were 55%, 48%, 42%, 50%, 59%, 61%, and 64%, respectively, when the combination of the stepped cascade weir with the limestone trickling filter was used.

Keywords: marginal water, toxic matter, stepped cascade weir, limestone trickling filter

Procedia PDF Downloads 390
19735 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
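
For concreteness, the sketch below shows the Hinton-style soft-target distillation loss that federated knowledge distillation methods of this kind build on (a minimal single-step sketch, not the paper's FKD framework or its FT/FitNet/CC/SP/RKD variants); the tiny teacher and student networks, temperature, and mixing weight are illustrative assumptions.

```python
# Minimal sketch of a soft-target knowledge-distillation loss and one student
# update step. Models, temperature, and mixing weight are illustrative
# assumptions; federated aggregation across clients is not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of a soft-target KL term and a hard-label cross-entropy term."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                       # gradient rescaling as in Hinton-style distillation
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy teacher (larger) and student (compressed) classifiers on 32-dim inputs.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 32)
labels = torch.randint(0, 10, (64,))
with torch.no_grad():
    teacher_logits = teacher(x)       # on a client, these would come from the teacher model

loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
optimizer.step()
print(f"distillation loss: {loss.item():.4f}")
```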

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 64
19734 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms

Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann

Abstract:

Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated, which could detect early warning signs for a possible failure in the connection elements. Previously, a model-based method called the Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise. When the wave height is low, a long time is needed to detect a fault and the accuracy is not always satisfactory. In this sense, it is necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Inspired by this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation by using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor.
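
As an illustration of the DANN-style adaptation mentioned above (a minimal sketch, not the Space@Sea networks or data), the snippet below implements the gradient-reversal layer at the heart of DANN: features feed a fault classifier directly and a domain classifier through a layer that negates gradients on the backward pass, pushing the feature extractor toward domain-invariant representations; layer sizes, data, and the reversal weight are illustrative assumptions.

```python
# Minimal sketch of DANN-style domain adaptation via a gradient-reversal layer.
# Network sizes and data are illustrative assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Pass gradients back negated (and scaled) to the feature extractor.
        return -ctx.lambd * grad_output, None

features = nn.Sequential(nn.Linear(24, 64), nn.ReLU())      # e.g. accelerometer features
fault_head = nn.Linear(64, 2)                                # healthy vs. degraded rope
domain_head = nn.Linear(64, 2)                               # source vs. target condition

x = torch.randn(32, 24)
fault_labels = torch.randint(0, 2, (32,))
domain_labels = torch.randint(0, 2, (32,))

z = features(x)
fault_loss = nn.functional.cross_entropy(fault_head(z), fault_labels)
domain_loss = nn.functional.cross_entropy(
    domain_head(GradReverse.apply(z, 1.0)), domain_labels)
total_loss = fault_loss + domain_loss
total_loss.backward()
print(f"fault loss {fault_loss.item():.3f}, domain loss {domain_loss.item():.3f}")
```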

Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI

Procedia PDF Downloads 172
19733 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed test section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise that strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and some reference transfer functions measured in the tunnel. This approach makes it possible to reduce the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and the boundary layer noise is presented. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.

Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation

Procedia PDF Downloads 326
19732 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid

Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong

Abstract:

Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia renewable energy sources into the grid and their volatility. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric Vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not enough time to communicate with each individual EV in emergency cases; thus, an aggregate model is necessary for a quick response to prevent large frequency deviations and the occurrence of a blackout. In this work, an aggregate of EVs is modelled as a big virtual battery in each area, considering various aspects of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs could help the generator lose its kinetic energy in a short time after a contingency. Earlier estimation of the possible contributions of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. Thus, the percentage contribution of EVs to EFR will be characterized in future work as the goal of this study.

Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function

Procedia PDF Downloads 96
19731 Exploring Challenges Faced by Small Business Owners on Poverty Reduction in Rural Eastern Cape, South Africa

Authors: Akinwale Olusola Mokayode, Emaanuel Adu, Seriki Idowu Ibrahim

Abstract:

Small businesses can serve as a tool for poverty reduction in South Africa, but they require adequate support and development for their continuous sustenance in spite of rigorous challenges, especially in the rural environment. This study explored the challenges faced by small business owners in the rural Eastern Cape Province of South Africa. The objective of the study is to identify the challenges faced by small business owners in the case study area and to examine the effects of those challenges on the poverty rate. A survey research design was adopted, with structured questionnaires distributed for data collection through a simple random sampling method. Descriptive and inferential statistics were used to analyse the data. Findings showed that small business owners face various challenges in their commercial operations. It was also made clear that these challenges affect the poverty rate as well as the crime rate. In conclusion, for small businesses to be an effective instrument to tackle poverty, certain measures must be taken into consideration. This therefore necessitates the recommendation that potential and current business owners seek advice from more experienced business people and seek information about the business assistance programmes provided by government and the private sector.

Keywords: Eastern Cape, poverty, poverty reduction, rural, small business, sustainable livelihood

Procedia PDF Downloads 482