Search results for: Linear Discriminant Analysis (LDA)
28509 Electron Beam Processing of Ethylene-Propylene-Terpolymer-Based Rubber Mixtures
Authors: M. D. Stelescu, E. Manaila, G. Craciun, D. Ighigeanu
Abstract:
The goal of the paper is to present the results regarding the influence of the irradiation dose and the amount of the multifunctional monomer trimethylol-propane trimethacrylate (TMPT) on ethylene-propylene-diene terpolymer rubber (EPDM) mixtures irradiated by electron beam. Blends, molded on an electrically heated laboratory roller mill and compressed in an electrically heated hydraulic press, were irradiated using the ALID 7 linear accelerator of 5.5 MeV in the dose range of 22.6 kGy to 56.5 kGy, in atmospheric conditions and at a room temperature of 25 °C. The share of cross-linking and degradation reactions was evaluated by means of sol-gel analysis, cross-linking density measurements, FTIR studies and Charlesby-Pinner parameter (p0/q0) calculations. The blends containing different concentrations of TMPT (3 phr and 9 phr) and irradiated with doses in the mentioned range showed an increase in gel content and cross-linking density. Modified and new bands appeared in the FTIR spectra as a result of both cross-linking and chain scission reactions.
Keywords: electron beam irradiation, EPDM rubber, crosslinking density, gel fraction
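For reference, the sol-gel calculation invoked above rests on the Charlesby-Pinner relation; a standard textbook form (with notation assumed here, since the abstract does not define it) is

$$ s + \sqrt{s} = \frac{p_0}{q_0} + \frac{1}{q_0\,u_1\,D} $$

where s is the sol fraction, D the absorbed dose, u_1 the number-average degree of polymerization of the starting polymer, and p_0/q_0 the ratio of chain-scission to cross-linking densities per unit dose.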
Procedia PDF Downloads 156
28508 Prioritization of Sub-Watersheds in Semi Arid Region: A Case Study of Shevgaon and Pathardi Tahsils in Maharashtra
Authors: Dadasaheb R. Jawre, Maya G. Unde
Abstract:
Prioritization of sub-watersheds plays an important role in watershed management. It indicates which watersheds require treatment for the green growth of the region and for conservation of the sub-watersheds. A number of factors, such as the topography of the region, climatic characteristics like rainfall and runoff, land-use/land-cover, and social factors, are related to the development of watersheds for agricultural and domestic purposes in the region. The present research focuses on how morphometric parameters, in association with GIS analysis, help in ranking the sub-watersheds for further development with the help of suggested watershed structures. Shevgaon and Pathardi tahsils are drought-prone tahsils of Ahmednagar district in Maharashtra and fall in the semi-arid region. Sub-watershed prioritization is necessary for proper planning and management of natural resources for sustainable development of the study area. Low rainfall and increasing population pressure on land as well as water resources lead to scarcity of water in the region. Hence, the researchers selected Shevgaon and Pathardi tahsils for sub-watershed prioritization. Seven sub-watersheds were selected for the present research paper. In the morphometric analysis, linear aspects, areal aspects and relief aspects were considered for the prioritization. The largest sub-watershed is Erdha, located at Karanji in Pathardi tahsil, with an area of 145.06 km2, and the smallest is Erandgaon, located in Shevgaon tahsil, with an area of 40.143 km2. For all seven sub-watersheds, seven morphometric parameters were considered for calculating the compound parameter values, which were grouped into three classes: high priority (below 4.0), moderate priority (4.0 to 5.0) and low priority (above 5.0). According to the compound values, Erandgaon, Chapadgaon and Tarak sub-watersheds come under the high priority group; Erdha and Domeshwar come under the moderate priority group; and Chandani and Kasichi come under the low priority group. Both tahsils fall in a drought-prone area; once the watershed structures are in place, overall development of the region will follow.
Keywords: sub-watersheds, GIS and remote sensing, morphometric analysis, compound parameter value, prioritization
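As a hedged illustration of the compound-parameter step described above, the sketch below ranks each sub-watershed on each morphometric parameter and averages the ranks; the parameter values are hypothetical, and the rank direction per parameter is an assumption (in practice it depends on whether a high value indicates erodibility).

```python
# Minimal sketch of morphometric prioritization: rank each sub-watershed on
# every parameter, then take the mean rank as the compound parameter value.
import pandas as pd

# Hypothetical morphometric parameters for the seven sub-watersheds.
data = pd.DataFrame({
    "drainage_density": [2.1, 1.8, 2.5, 1.6, 2.0, 1.9, 2.3],
    "bifurcation_ratio": [3.8, 4.2, 3.5, 4.6, 4.0, 3.9, 3.6],
    "relief_ratio": [0.012, 0.018, 0.010, 0.021, 0.015, 0.014, 0.011],
}, index=["Erdha", "Erandgaon", "Chapadgaon", "Tarak",
          "Domeshwar", "Chandani", "Kasichi"])

# Assumed convention: higher parameter values get rank 1 (highest priority).
ranks = data.rank(ascending=False)
compound = ranks.mean(axis=1)

# Group into the priority classes quoted in the abstract.
priority = pd.cut(compound, bins=[0, 4.0, 5.0, float("inf")],
                  labels=["high", "moderate", "low"])
print(pd.DataFrame({"compound_parameter": compound, "priority": priority}))
```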
Procedia PDF Downloads 155
28507 The Effect of Tax Avoidance on Firm Value: Evidence from Amman Stock Exchange
Authors: Mohammad Abu Nassar, Mahmoud Al Khalilah, Hussein Abu Nassar
Abstract:
The purpose of this study is to examine whether corporate tax avoidance practices can impact firm value in the Jordanian context. The study employs a quantitative approach using a sample of 124 industrial and services companies listed on the Amman Stock Exchange for the period from 2010 to 2019. Multiple linear regression analysis has been applied to test the study's hypothesis. The study employs the effective tax rate and the book-tax difference to measure tax avoidance, and Tobin's Q to measure firm value. The results revealed that tax avoidance practices, when measured using effective tax rates, do not significantly impact firm value. When the book-tax difference is used to measure tax avoidance, the results showed a negative impact on firm value. The results do not support the traditional view of tax avoidance as a transfer of wealth from the government to shareholders for industrial and services companies listed on the Amman Stock Exchange, indicating that Jordanian firms should not use tax avoidance strategies to enhance their value.
Keywords: tax avoidance, effective tax rate, book-tax difference, firm value, Amman stock exchange
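A minimal sketch of the regression described above, on synthetic data; the column names, the data-generating process, and the absence of control variables are assumptions, not the authors' specification.

```python
# Tobin's Q regressed on the two tax-avoidance proxies (synthetic panel).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 124 * 10                                    # 124 firms x 10 years
df = pd.DataFrame({
    "effective_tax_rate": rng.uniform(0.0, 0.35, n),
    "book_tax_difference": rng.normal(0.0, 0.05, n),
})
df["tobins_q"] = 1.2 - 0.8 * df.book_tax_difference + rng.normal(0, 0.3, n)

model = smf.ols("tobins_q ~ effective_tax_rate + book_tax_difference",
                data=df).fit()
print(model.summary())   # sign/significance of each proxy tests the hypothesis
```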
Procedia PDF Downloads 168
28506 Study of Seismic Damage Reinforced Concrete Frames in Variable Height with Logistic Statistic Function Distribution
Authors: P. Zarfam, M. Mansouri Baghbaderani
Abstract:
In seismic design, the proper reaction to the earthquake and the correct and accurate prediction of its subsequent effects on the structure are critical. Choosing a proper probability distribution, one that gives a more realistic probability of the structure's damage rate, is essential in damage discussions. With the development of performance-based design, the analytical method of modal pushover, as an inexpensive, efficacious, and quick means of estimating structures' seismic response, is broadly used in engineering contexts. In this research, three concrete frames of 3, 6, and 13 stories are analyzed by non-linear modal pushover under 30 different earthquake records in OpenSEES software; then the damage indexes of roof displacement and relative story displacement ratio are calculated against two parameters: peak ground acceleration and spectral acceleration. These indexes are used to establish damage relations with the log-normal distribution and the logistic distribution. Finally, these relations are compared, and the effect of height on the mentioned damage relations is studied as well.
Keywords: modal pushover analysis, concrete structure, seismic damage, log-normal distribution, logistic distribution
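A minimal sketch, on synthetic damage-index data, of comparing the two candidate distributions named above by maximum-likelihood fit; the real study fits damage relations from 30 records, which are not reproduced here.

```python
# Fit log-normal and logistic distributions to damage-index samples and
# compare them by log-likelihood (higher is better).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
damage_index = rng.lognormal(mean=-1.0, sigma=0.5, size=30)  # hypothetical data

ln_params = stats.lognorm.fit(damage_index, floc=0)   # log-normal MLE
lg_params = stats.logistic.fit(damage_index)          # logistic MLE

ll_ln = np.sum(stats.lognorm.logpdf(damage_index, *ln_params))
ll_lg = np.sum(stats.logistic.logpdf(damage_index, *lg_params))
print(f"log-normal logL = {ll_ln:.2f}, logistic logL = {ll_lg:.2f}")
```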
Procedia PDF Downloads 247
28505 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG which does not correspond to an allele of a potential contributor and is considered an artefact, presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak-height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification, and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used constructions of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
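A minimal sketch of the Chinese restaurant process mentioned above, which underlies the infinite mixture's prior over cluster assignments: each new observation joins an existing component with probability proportional to its size, or opens a new one with probability proportional to a concentration parameter alpha. This is a generic illustration, not the authors' modified construction.

```python
# Draw a partition of n_points observations from the CRP prior.
import numpy as np

def crp_assignments(n_points, alpha, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    assignments = [0]               # first customer sits at the first table
    counts = [1]                    # table (component) occupancy
    for _ in range(1, n_points):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):    # a new table = a new mixture component
            counts.append(1)
        else:
            counts[table] += 1
        assignments.append(int(table))
    return assignments

# Example: prior partition of 100 stutter-ratio observations, alpha = 1.0.
print(crp_assignments(100, alpha=1.0, rng=np.random.default_rng(1)))
```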
Procedia PDF Downloads 331
28504 Monitoring Surface Modification of Polylactide Nonwoven Fabric with Weak Polyelectrolytes
Authors: Sima Shakoorjavan, Dawid Stawski, Somaye Akbari
Abstract:
In this study, the polylactide (PLA) nonwoven surface was first modified with poly(amidoamine) (PAMAM) dendritic polymer to create amine active sites on the PLA surface through an aminolysis reaction. Further, layer-by-layer (LBL) deposition of four layers of two weak polyelectrolytes, PAMAM as polycation and polyacrylic acid (PAA) as polyanion, on the activated PLA was monitored by turbidity analysis of the waste polyelectrolytes after each deposition step. The FTIR-ATR analysis confirmed the successful introduction of amine groups into the PLA polymeric chains through the emerging peak around 1650 cm⁻¹, corresponding to N-H bending vibration, and a wide double peak at around 3670-3170 cm⁻¹, corresponding to N-H stretching vibration. The adsorption-desorption behavior of PAMAM and PAA deposition was monitored by the turbidity test. Turbidity results showed desorption and removal of the previously deposited layers (second and third) upon deposition of the next layers (third and fourth). The turbidity test also revealed the importance of proper rinsing after aminolysis of the PLA nonwoven fabric: for the sample with an insufficient rinsing process, higher desorption and removal of ungrafted PAMAM from the aminolyzed PLA surface into the PAA solution was detected upon deposition of the first PAA layer. This phenomenon can be attributed to electrostatic attraction between the polycation (PAMAM) and the polyanion (PAA). Moreover, successful layer deposition through LBL was confirmed by a staining test with Acid Red 1 using spectrophotometric analysis. According to the results, layered PLA with four layers and PAMAM as the top layer showed higher dye absorption (46.7%) than neat PLA (1.2%) and aminolyzed PLA (21.7%). In conclusion, complicated adsorption-desorption behavior of the dendritic polycation and linear polyanion system was observed. Although desorption and removal of previously adsorbed layers occurred upon deposition of the next layer, the polyelectrolyte remaining on the substrate is sufficient for adsorption of the next polyelectrolyte through electrostatic attraction between oppositely charged polyelectrolytes. Also, the increase in dye adsorption confirmed greater introduction of PAMAM onto the PLA surface through LBL.
Keywords: surface modification, layer-by-layer technique, weak polyelectrolytes, adsorption-desorption behavior
Procedia PDF Downloads 67
28503 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches
Authors: Vahid Nourani, Atefeh Ashrafi
Abstract:
Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern, due to the numerous parameters collected from the treatment process, whose number keeps increasing in light of the development of electronic sensors. Various studies have been conducted, using different clustering methods, to classify the most related and effective input variables. Yet the selection of dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of the Tabriz city wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target water-quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. Comparison of the two models showed up to a 15% increment in the Determination Coefficient (DC) for the optimal ANN structure. Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
Keywords: Artificial Neural Networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant
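A minimal sketch, on synthetic data, of the two input-selection routes compared above: PCA as a linear variance-based reduction (model A) versus mutual-information ranking (model B), each feeding a small ANN. The variable count, network size, and data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                 # 12 hypothetical plant variables
bod = X[:, 0] + 0.5 * X[:, 3] ** 2 + rng.normal(scale=0.1, size=500)

# Model A: project inputs onto the leading principal components.
X_a = PCA(n_components=4).fit_transform(X)

# Model B: keep the variables with the highest mutual information with BOD.
mi = mutual_info_regression(X, bod, random_state=0)
X_b = X[:, np.argsort(mi)[-4:]]

for name, X_in in [("PCA inputs", X_a), ("MI inputs", X_b)]:
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                       random_state=0).fit(X_in, bod)
    print(name, "R^2 =", round(ann.score(X_in, bod), 3))
```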
Procedia PDF Downloads 131
28502 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution
Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado
Abstract:
Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, which restricts the options for the treatment of bacterial infections. As a strategy, drugs with high antimicrobial activities are in evidence. One class of antimicrobials stands out, the cephalosporins, whose fourth generation includes cefepime (CEF), a semi-synthetic product with activity against various aerobic Gram-positive bacteria (e.g., oxacillin-resistant Staphylococcus aureus) and Gram-negative bacteria (e.g., Pseudomonas aeruginosa). There are few studies in the literature regarding the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant to optimize the analysis of this drug in industry and ensure the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has stood out relative to physicochemical methods, especially because they make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and Casoy broth was chosen as the culture medium. The test was performed under temperature control (35.0 °C ± 2.0 °C) and incubated for 4 hours in a shaker. Readings were taken at a wavelength of 530 nm using a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy and robustness, according to ICH guidelines. Results and discussion: Among the parameters evaluated for method validation, linearity showed statistically suitable results, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. Precision presented the following values: 1.86% (intraday), 0.84% (interday) and 0.71% (between analysts). The accuracy of the method was proven through the recovery test, where the mean value obtained was 99.92%. Robustness was verified by varying the volume of culture medium, the brand of culture medium, the incubation time in the shaker, and the wavelength. The potency of CEF present in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The proposed turbidimetric microbiological method for quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate and robust, in accordance with all requirements, and can be used in routine quality-control analysis in the pharmaceutical industry as an option for microbiological analysis.
Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation
Procedia PDF Downloads 363
28501 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak %error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak %error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services, to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
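A minimal PyTorch sketch of RNN-based time-series prediction in the spirit of HP-RNN; the window length, architecture, and synthetic monthly series are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.linspace(12_830, 62_679, 400)          # hypothetical monthly counts
series = (series - series.mean()) / series.std()      # normalize

window = 12                                           # 12 months of history
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

class HomelessRNN(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.rnn(x.unsqueeze(-1))            # (batch, window, hidden)
        return self.head(out[:, -1]).squeeze(-1)      # predict the next month

model = HomelessRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training MSE:", loss.item())
```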
Procedia PDF Downloads 122
28500 Oxidosqualene Cyclase: A Novel Inhibitor
Authors: Devadrita Dey Sarkar
Abstract:
Oxidosqualene cyclase (OSC) is a membrane-bound enzyme which helps in the formation of the steroid scaffold in higher organisms. In a highly selective cyclization reaction, oxidosqualene cyclase forms lanosterol, with seven chiral centres, starting from the linear substrate 2,3-oxidosqualene. In human cholesterol biosynthesis, OSC represents a target for the discovery of novel anticholesteraemic drugs that could complement the widely used statins. The enzyme oxidosqualene:lanosterol cyclase thus represents a novel target for the treatment of hypercholesterolemia. OSC catalyzes the cyclization of the linear 2,3-monoepoxysqualene to lanosterol, the initial four-ringed sterol intermediate in the cholesterol biosynthetic pathway. OSC also catalyzes the formation of 24(S),25-epoxycholesterol, a ligand activator of the liver X receptor. Inhibition of OSC reduces cholesterol biosynthesis and selectively enhances 24(S),25-epoxycholesterol synthesis. Through this dual mechanism, OSC inhibition decreases plasma levels of low-density lipoprotein (LDL) cholesterol and prevents cholesterol deposition within macrophages. The recent crystallization of OSC identifies the mechanism of action for this complex enzyme, setting the stage for the design of OSC inhibitors with improved pharmacological properties for cholesterol lowering and treatment of atherosclerosis. While studying and designing an inhibitor of oxidosqualene cyclase, I worked on PDB ID 1W6K, the most studied structure for this enzyme, and used several methods, techniques and software tools to identify and validate the top molecules that could act as inhibitors of oxidosqualene cyclase. Thus, by partial blockage of this enzyme, both an inhibition of lanosterol and subsequently cholesterol formation as well as a concomitant effect on HMG-CoA reductase can be achieved. Both effects complement each other and lead to an effective control of cholesterol biosynthesis. It is therefore concluded that 2,3-oxidosqualene cyclase plays a crucial role in the regulation of intracellular cholesterol homeostasis, and 2,3-oxidosqualene cyclase inhibitors offer an attractive approach for novel lipid-lowering agents.
Keywords: anticholesteraemic, crystallization, statins, homeostasis
Procedia PDF Downloads 352
28499 Analysis of Diabetes Patients Using Pearson, Cost Optimization, Control Chart Methods
Authors: Devatha Kalyan Kumar, R. Poovarasan
Abstract:
In this paper, we take certain important factors and health parameters of diabetes patients, especially among children with diabetes by birth (pediatric congenital), and use the above three methods to assess the importance of each attribute in the dataset, thereby determining the attribute most responsible for, and most correlated with, diabetes among young patients. We use cost optimization, control chart and Spearman methodologies for the real-time application of finding the data efficiency in this diabetes dataset. The Spearman methodology is a correlation methodology, used for example in the software development process to identify the complexity between the various modules of the software. Identifying the complexity is important because if the complexity is higher, then there is a higher chance of risk occurring in the software. With the use of control charts, the mean, variance and standard deviation of the data are calculated. With the use of the cost optimization model, we optimize the variables. Hence we choose the Spearman, control chart and cost optimization methods to assess the data efficiency in diabetes datasets.
Keywords: correlation, congenital diabetics, linear relationship, monotonic function, ranking samples, pediatric
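A minimal sketch, on hypothetical data, of two of the assessments described above: Spearman rank correlation between an attribute and the outcome, and control-chart statistics (mean, variance, 3-sigma limits).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
glucose = rng.normal(120, 25, size=200)                 # hypothetical attribute
outcome = (glucose + rng.normal(0, 20, 200) > 130).astype(float)

rho, p_value = stats.spearmanr(glucose, outcome)        # monotonic association
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")

mean, var, sd = glucose.mean(), glucose.var(ddof=1), glucose.std(ddof=1)
ucl, lcl = mean + 3 * sd, mean - 3 * sd                 # control-chart limits
print(f"mean = {mean:.1f}, variance = {var:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")
```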
Procedia PDF Downloads 258
28498 Effects of Matrix Properties on Surfactant Enhanced Oil Recovery in Fractured Reservoirs
Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsæter
Abstract:
The properties of rocks affect the efficiency of surfactants. One objective of this study is to analyze the effects of rock properties (permeability, porosity, initial water saturation) on surfactant spontaneous imbibition at the laboratory scale. The other objective is to evaluate existing upscaling methods and establish a modified upscaling method. A core is placed in a container that is full of surfactant solution, and it is assumed that there is no space between the bottom of the core and the container. The core is modelled as a cuboid matrix with a length of 3.5 cm, a width of 3.5 cm, and a height of 5 cm. The initial matrix, brine and oil properties are set to those of the Ekofisk Field. The simulation results show that the oil recovery rate has a strong positive linear relationship with matrix permeability: higher oil recovery is obtained from the matrix with higher permeability. One existing upscaling method is verified by this model. The study on matrix porosity shows that the relationship between oil recovery rate and matrix porosity is a negative power function, whereas the relationship between ultimate oil recovery and matrix porosity is a positive power function. The initial water saturation of the matrix has negative linear relationships with ultimate oil recovery and enhanced oil recovery. However, the relationship between oil recovery and initial water saturation becomes more complicated over imbibition time because of the transition of the dominating force from capillary to gravity. Modified upscaling methods are established. The work here could be used as a reference for surfactant application in fractured reservoirs, and the description of the relationships between matrix properties and the oil recovery rate and ultimate oil recovery helps to improve upscaling methods.
Keywords: initial water saturation, permeability, porosity, surfactant EOR
Procedia PDF Downloads 164
28497 Linearization of Y-Force Equation of Rigid Body Equation of Motion and Behavior of Fighter Aircraft under Imbalance Weight on Wings during Combat
Authors: Jawad Zakir, Syed Irtiza Ali Shah, Rana Shaharyar, Sidra Mahmood
Abstract:
The Y-force equation comprises the aerodynamic forces, drag and side force with sideslip angle β, and the weight component, along with the coupled roll (φ) and pitch (θ) angles. This research deals with the linearization of the Y-force equation using small disturbance theory, assuming equilibrium flight conditions for the different state variables of the aircraft. Applying the assumptions of small disturbance theory to the non-linear Y-force equation, we finally arrive at the linearized lateral rigid-body equation of motion, which shows that the lateral acceleration depends on the other aerodynamic and propulsive forces: the vertical tail force, the change in roll rate (Δp) from equilibrium, the change in yaw rate (Δr) from equilibrium, the change in lateral velocity due to side force, and the drag and side-force components due to sideslip, with the lateral equation transformed from a coupled rotating frame to a decoupled rotating frame. This paper describes the implementation of this linearized lateral equation for aircraft control systems. Another significant parameter on which the Y-force equation depends is 'c', which shows that any change brought in the weight of the aircraft's wing will cause a change Δφ and hence a lateral force, i.e., Y_c. This simplification also leads to lateral static and dynamic stability. The linearization of the equations is required because much of the mathematics of control system design for aircraft is based on linear equations. This technique is simple and eases the linearization of the rigid-body equations of motion without using any high-speed computers.
Keywords: Y-force linearization, small disturbance theory, side slip, aerodynamic force drag, lateral rigid body equation of motion
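For concreteness, a standard textbook form of the linearized side-force equation obtained this way is

$$ \Delta\dot{v} = Y_v\,\Delta v + Y_p\,\Delta p + (Y_r - u_0)\,\Delta r + (g\cos\theta_0)\,\Delta\phi + Y_{\delta_r}\,\Delta\delta_r $$

where Δv, Δp, Δr and Δφ are perturbations of lateral velocity, roll rate, yaw rate and roll angle, u_0 and θ_0 are the equilibrium forward speed and pitch angle, and the Y-terms are side-force derivatives; the notation is assumed here, as the abstract does not state it.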
Procedia PDF Downloads 499
28496 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation and the objective cost function. The prediction models can be linear time invariant or time varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for the non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding, in real time, the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented with respect to representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, which could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
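A minimal cvxpy sketch of the generic constrained MPC formulation summarized above, for a double-integrator translation model; the dynamics, horizon, weights, and bounds are illustrative assumptions, not a mission configuration.

```python
import cvxpy as cp
import numpy as np

dt, N = 1.0, 20                                  # sample time [s], horizon
A = np.array([[1, dt], [0, 1]])                  # position/velocity dynamics
B = np.array([[0.5 * dt**2], [dt]])
x0 = np.array([100.0, 0.0])                      # start 100 m from the target

x = cp.Variable((2, N + 1))                      # predicted state trajectory
u = cp.Variable((1, N))                          # thrust acceleration input

cost = cp.sum_squares(x) + 10 * cp.sum_squares(u)       # quadratic objective
constraints = [x[:, 0] == x0]
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],  # prediction model
                    cp.abs(u[:, k]) <= 0.5]                    # input bound
cp.Problem(cp.Minimize(cost), constraints).solve()

# Receding horizon: only the first input is applied, then the problem is re-solved.
print("first optimal input:", u.value[:, 0])
```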
Procedia PDF Downloads 111
28495 Modeling and Optimization of Algae Oil Extraction Using Response Surface Methodology
Authors: I. F. Ejim, F. L. Kamen
Abstract:
Aims: In this experiment, algae oil extraction with a combination of n-hexane and ethanol was investigated. The effects of extraction solvent concentration, extraction time and temperature on the yield and quality of oil were studied using Response Surface Methodology (RSM). Experimental Design: A Box-Behnken design was used to generate 17 experimental runs in a three-factor, three-level design, where oil yield, specific gravity, acid value and saponification value were evaluated as the responses. Result: A minimum oil yield of 17% and a maximum of 44% were realized. The optimum values for yield, specific gravity, acid value and saponification value from the overlay plot were 40.79%, 0.8788, 0.5056 mg KOH/g and 180.78 mg KOH/g, respectively, with a desirability of 0.801. The maximum-point prediction was a yield of 40.79% at a solvent concentration of 66.68% n-hexane, a temperature of 40.0°C and an extraction time of 4 hrs. Analysis of Variance (ANOVA) results showed that the linear and quadratic coefficients were all significant at p<0.05. The experiment was validated and the results obtained agreed with the predicted values. Conclusion: Algae oil extraction was successfully optimized using RSM, and its quality indicated it is suitable for many industrial uses.
Keywords: algae oil, response surface methodology, optimization, Box-Behnken, extraction
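A minimal sketch of fitting the second-order RSM polynomial implied by a Box-Behnken design, using statsmodels; the coded runs and responses below are synthetic stand-ins for the 17 experimental runs.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build the 3-factor Box-Behnken design: 12 edge midpoints + 5 center points.
edges = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product([-1.0, 1.0], repeat=2):
        pt = [0.0, 0.0, 0.0]
        pt[i], pt[j] = a, b
        edges.append(pt)
runs = pd.DataFrame(edges + [[0.0, 0.0, 0.0]] * 5, columns=["x1", "x2", "x3"])

rng = np.random.default_rng(0)
runs["yield_pct"] = (30 + 5 * runs.x1 + 2 * runs.x2 - 4 * runs.x1**2
                     + rng.normal(0, 1, len(runs)))   # hypothetical responses

# Full second-order model: linear, interaction, and quadratic terms.
model = smf.ols("yield_pct ~ x1 + x2 + x3 + x1:x2 + x1:x3 + x2:x3"
                " + I(x1**2) + I(x2**2) + I(x3**2)", data=runs).fit()
print(model.params)
```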
Procedia PDF Downloads 339
28494 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, different impairment compensation algorithms have caused an increase in transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN will be a promising scheme. In this paper, we propose and apply a DNN to compensate multiple impairments of the 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate multiple impairments of the OFDM decoded signal at the receiver. Furthermore, the models reduce the peak-to-average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multiple impairments in fiber transmission effectively.
Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
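A minimal PyTorch sketch of a receiver-side DNN equalizer for 16-QAM symbols, in the spirit of the compensation described above; the network size and the toy linear channel are assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
levels = torch.tensor([-3., -1., 1., 3.])
idx = torch.randint(0, 4, (4096, 2))
tx = levels[idx]                                       # (I, Q) of 16-QAM symbols
rx = tx @ torch.tensor([[1.0, 0.1], [-0.1, 1.0]]) \
     + 0.05 * torch.randn_like(tx)                     # toy impairment + noise

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 2))                  # maps rx symbol -> tx symbol
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(rx), tx)
    loss.backward()
    opt.step()
print("residual MSE after equalization:", loss.item())
```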
Procedia PDF Downloads 145
28493 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing
Authors: John Eric C. Bargas, Maria Cecilia M. Marcos
Abstract:
One of the most common ground improvement techniques is Deep Soil Mixing (DSM). As the technique has progressed, there is still a lack of development when it comes to depth control. This was the issue experienced during the installation of DSM in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters such as hydraulic pressure, drilling speed and rotational speed to determine the Standard Penetration Test (SPT) N-value of a specific soil. Hydraulic pressure and drilling speed, at a constant rotational speed of 30 rpm, have a positive correlation with SPT N-value for cohesive soil and sand. A linear trend was observed for cohesive soil: the correlation of SPT N-value with hydraulic pressure yielded R²=0.5377, while the correlation with drilling speed yielded R²=0.6355. The best-fitted model for sand is a polynomial trend: the correlation of SPT N-value with hydraulic pressure yielded R²=0.7088, while the correlation with drilling speed yielded R²=0.4354. The low correlation may be attributed to the behavior of sand when the auger penetrates: sand tends to follow the rotation of the auger rather than resisting it, which was observed for very loose to medium-dense sand. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R² with a positive correlation, with a linear trend observed for cohesive soil and a polynomial trend for sand. Cohesive soil yielded R²=0.7320, a strong relationship; sand also yielded a strong relationship, with a coefficient of determination R²=0.7203. It is feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of the soil. Also, the product of hydraulic pressure and drilling speed can be a substitute for specific energy when estimating the SPT N-value of a soil. However, additional considerations are necessary to account for other influencing factors, like groundwater and the physical and mechanical properties of the soil.
Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing
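A minimal sketch of the two trend fits reported above, on hypothetical (hydraulic pressure, N-value) pairs: a linear fit, as used for cohesive soil, and a second-order polynomial fit, as used for sand, each scored by R².

```python
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

pressure = np.array([40., 55., 70., 90., 110., 130.])  # hypothetical readings
n_value = np.array([4., 7., 10., 15., 22., 30.])

linear = np.polyval(np.polyfit(pressure, n_value, 1), pressure)
quad = np.polyval(np.polyfit(pressure, n_value, 2), pressure)
print("linear R^2 =", round(r_squared(n_value, linear), 4))
print("polynomial R^2 =", round(r_squared(n_value, quad), 4))
```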
Procedia PDF Downloads 57
28492 Direct Electrical Communication of Redox Enzyme Based on 3-Dimensional Cross-Linked Redox Enzyme/Nanomaterials
Authors: A. K. M. Kafi, S. N. Nina, Mashitah M. Yusoff
Abstract:
In this work, we describe a new 3-dimensional (3D) network of cross-linked Horseradish Peroxidase/Carbon Nanotube (HRP/CNT) on a thiol-modified Au surface, built up in order to establish effective electrical wiring of the enzyme units with the electrode. This was achieved by the electropolymerization of aniline-functionalized carbon nanotubes (CNTs) and 4-aminothiophenol-modified HRP on a 4-aminothiophenol monolayer-modified Au electrode. The synthesized 3D HRP/CNT networks were characterized by cyclic voltammetry and amperometry, resulting in the establishment of direct electron transfer between the redox active unit of HRP and the Au surface. Electrochemical measurements reveal that the immobilized HRP exhibits high biological activity and stability, and a quasi-reversible redox peak of the redox center of HRP was observed at about −0.355 and −0.275 V vs. Ag/AgCl. The electron transfer rate constant, KS, and the electron transfer coefficient were found to be 0.57 s−1 and 0.42, respectively. Based on the electrocatalytic process arising from the direct electrochemistry of HRP, a biosensor for detecting H2O2 was developed. The developed biosensor exhibits excellent electrocatalytic activity for the reduction of H2O2. The proposed biosensor, modified with the HRP/CNT 3D network, displays a broad linear range and a low detection limit for H2O2 determination: the linear range is from 1.0×10−7 to 1.2×10−4 M, with a detection limit of 2.2×10−8 M at 3σ. Moreover, this biosensor exhibits very high sensitivity, good reproducibility and long-term stability. In summary, ease of fabrication, low cost, fast response and high sensitivity are the main advantages of the new biosensor proposed in this study. These advantages support the real analytical applicability of the proposed biosensor.
Keywords: redox enzyme, nanomaterials, biosensors, electrical communication
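A minimal sketch of how a sensitivity, linear range and 3σ detection limit are typically derived from calibration data such as the above; the current readings and blank standard deviation are hypothetical, not the reported measurements.

```python
import numpy as np
from scipy import stats

conc = np.array([1e-7, 1e-6, 1e-5, 5e-5, 1.2e-4])      # H2O2 [M], linear range
current = np.array([0.02, 0.21, 2.05, 10.1, 24.3])     # response [uA], hypothetical

fit = stats.linregress(conc, current)                  # sensitivity = slope
sigma_blank = 0.006                                    # blank std. dev. [uA], assumed
lod = 3 * sigma_blank / fit.slope                      # 3-sigma detection limit
print(f"sensitivity = {fit.slope:.3g} uA/M, r = {fit.rvalue:.4f}, LOD = {lod:.2g} M")
```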
Procedia PDF Downloads 456
28491 Removal of Basic Dyes from Aqueous Solutions with a Treated Spent Bleaching Earth
Authors: M. Mana, M. S. Ouali, L. C. de Menorval
Abstract:
A spent bleaching earth from an edible oil refinery was treated by impregnation with a normal sodium hydroxide solution followed by mild thermal treatment (100°C). The obtained material (TSBE) was washed, dried and characterized by X-ray diffraction, FTIR, SEM, BET, and thermal analysis. The clay structure was apparently not affected by the treatment, and the impregnated organic matter was quantitatively removed. We investigated the comparative sorption of safranine and methylene blue on this material, the spent bleaching earth (SBE) and the virgin bleaching earth (VBE). The kinetic results fit the pseudo-second-order kinetic model and the Weber & Morris intra-particle diffusion model. The pH had no effect on the sorption efficiency. The sorption isotherms followed the Langmuir model for various sorbent concentrations, with good values of the determination coefficient. A linear relationship was found between the calculated maximum removal capacity and the solid/solution ratio. A comparison between the results obtained with this material and those of the literature highlighted the low cost and the good removal capacity of the treated spent bleaching earth.
Keywords: basic dyes, isotherms, sorption, spent bleaching earth
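For reference, the standard forms of the two models invoked above (notation assumed) are the pseudo-second-order kinetic law and the Langmuir isotherm:

$$ \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e}, \qquad q_e = \frac{q_{max} K_L C_e}{1 + K_L C_e} $$

with q_t the amount sorbed at time t, q_e the amount sorbed at equilibrium, k_2 the rate constant, C_e the equilibrium dye concentration, and K_L the Langmuir constant.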
Procedia PDF Downloads 250
28490 Direct Electrical Communication of Redox Enzyme Based on 3-Dimensional Crosslinked Redox Enzyme/Carbon Nanotube on a Thiol-Modified Au Surface
Authors: A. K. M. Kafi, S. N. Nina, Mashitah M. Yusoff
Abstract:
In this work, we describe a new 3-dimensional (3D) network of crosslinked Horseradish Peroxidase/Carbon Nanotube (HRP/CNT) on a thiol-modified Au surface, built up in order to establish effective electrical wiring of the enzyme units with the electrode. This was achieved by the electropolymerization of aniline-functionalized carbon nanotubes (CNTs) and 4-aminothiophenol-modified HRP on a 4-aminothiophenol monolayer-modified Au electrode. The synthesized 3D HRP/CNT networks were characterized by cyclic voltammetry and amperometry, resulting in the establishment of direct electron transfer between the redox active unit of HRP and the Au surface. Electrochemical measurements reveal that the immobilized HRP exhibits high biological activity and stability, and a quasi-reversible redox peak of the redox center of HRP was observed at about −0.355 and −0.275 V vs. Ag/AgCl. The electron transfer rate constant, KS, and the electron transfer coefficient were found to be 0.57 s−1 and 0.42, respectively. Based on the electrocatalytic process arising from the direct electrochemistry of HRP, a biosensor for detecting H2O2 was developed. The developed biosensor exhibits excellent electrocatalytic activity for the reduction of H2O2. The proposed biosensor, modified with the HRP/CNT 3D network, displays a broad linear range and a low detection limit for H2O2 determination: the linear range is from 1.0×10−7 to 1.2×10−4 M, with a detection limit of 2.2×10−8 M at 3σ. Moreover, this biosensor exhibits very high sensitivity, good reproducibility and long-term stability. In summary, ease of fabrication, low cost, fast response and high sensitivity are the main advantages of the new biosensor proposed in this study. These advantages support the real analytical applicability of the proposed biosensor.
Keywords: biosensor, nanomaterials, redox enzyme, thiol-modified Au surface
Procedia PDF Downloads 330
28489 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa
Authors: Davies Obaromi, Qin Yongsong, James Ndege
Abstract:
To interpolate scattered or regularly distributed data, there are both inexact and exact methods, some suited to interpolating data on a regular grid and others on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns by 3-D curve-fitting techniques, especially the biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are generally represented as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become the conventional method for its high precision, simplicity and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012–2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, which is consistent with some spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the superior advantage that it can be evaluated at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
Keywords: linear, biharmonic splines, tuberculosis, South Africa
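A minimal Python analogue (SciPy rather than the study's MATLAB) of smoothing scattered XYZ prevalence data with a thin-plate-spline RBF, which is closely related to the biharmonic spline; the coordinates and TB counts are hypothetical, and smoothing > 0 gives the least-squares fit rather than exact interpolation, as described above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(50, 2))                 # district coordinates
tb = 200 + 3 * xy[:, 0] - 1.5 * xy[:, 1] + rng.normal(0, 20, 50)  # TB counts

# smoothing > 0 suppresses noise via a least-squares spline fit.
spline = RBFInterpolator(xy, tb, kernel="thin_plate_spline", smoothing=50.0)

gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = spline(grid).reshape(gx.shape)               # smooth disease surface
print("surface range:", surface.min().round(1), "to", surface.max().round(1))
```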
Procedia PDF Downloads 240
28488 Neural Synchronization - The Brain's Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain's subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality's algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like 'time is relative,' but we do not really understand the meaning. In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds: three different observers experiencing time differently. To bridge the observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So the eyes dump their sensory data into the thalamus every 33 milliseconds, all day long. The thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation, basically a frozen moment in time (flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What's interesting is that time dilation is not the problem; it's the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 128
28487 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models
Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru
Abstract:
Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize, mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, compared to a global average of 2.5–3.9 t/ha, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of the total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva numbers were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a logarithmic link. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were employed to assess model performance using a leave-one-out cross-validation approach. Results showed that NB models outperformed Po ones in all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly whether the January or the February image data were used. We conclude that reflectance data from RapidEye imagery can be used to estimate stem borer larvae density. The developed models could improve decision-making regarding the control of maize stem borers using various integrated pest management (IPM) protocols.
Keywords: maize, stem borers, density, RapidEye, GLM
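A minimal statsmodels sketch of the model comparison described above: Poisson and negative binomial GLMs with a log link, scored by leave-one-out RMSE; the predictors and counts are synthetic stand-ins for the RapidEye bands and larval counts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(60, 3)))          # e.g., 3 RapidEye bands
counts = rng.poisson(np.exp(0.5 + X[:, 1]))            # stem borer larvae per plot

def loo_rmse(family):
    errs = []
    for i in range(len(counts)):
        keep = np.arange(len(counts)) != i
        fit = sm.GLM(counts[keep], X[keep], family=family).fit()
        errs.append(counts[i] - fit.predict(X[i:i + 1])[0])
    return np.sqrt(np.mean(np.square(errs)))

print("Poisson LOO-RMSE:", round(loo_rmse(sm.families.Poisson()), 3))
print("NegBin  LOO-RMSE:", round(loo_rmse(sm.families.NegativeBinomial()), 3))
```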
Procedia PDF Downloads 498
28486 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In a first stage, the operation of different energy systems is calculated in simulation models in terms of the resulting final energy demands; the results then serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers, but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
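A minimal PuLP sketch of the single-stage MILP idea described above: binary modernization decisions per building and measure, a budget cap for one planning step, and maximized emission savings. All names and numbers are illustrative assumptions; the authors' model additionally covers operation, multiple years, and sizing.

```python
import pulp

# Hypothetical investment costs [kEUR] per (building, measure).
cost = {("B1", "insulation"): 80, ("B1", "heat_pump"): 60, ("B1", "pv"): 30,
        ("B2", "insulation"): 90, ("B2", "heat_pump"): 70, ("B2", "pv"): 35,
        ("B3", "insulation"): 70, ("B3", "heat_pump"): 55, ("B3", "pv"): 25}
saving = {k: 0.4 * v for k, v in cost.items()}   # tCO2/a saved, assumed
budget = 150                                     # kEUR for this planning step

prob = pulp.LpProblem("modernization", pulp.LpMaximize)
x = pulp.LpVariable.dicts("do", list(cost), cat="Binary")
prob += pulp.lpSum(saving[k] * x[k] for k in cost)          # emissions saved
prob += pulp.lpSum(cost[k] * x[k] for k in cost) <= budget  # budget constraint
prob.solve()
print([k for k in cost if x[k].value() > 0.5])   # prioritized measures
```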
Procedia PDF Downloads 38
28485 The Complex Relationship Between IQ and Attention Deficit Hyperactivity Disorder Symptoms: Insights From Behaviors, Cognition, and Brain in 5,138 Children With Attention Deficit Hyperactivity Disorder
Authors: Ningning Liu, Gaoding Jia, Yinshan Wang, Haimei Li, Xinian Zuo, Yufeng Wang, Lu Liu, Qiujin Qian
Abstract:
Background: There has been speculation that a high IQ may not necessarily provide protection against attention deficit hyperactivity disorder (ADHD), and that there may be a U-shaped correlation between IQ and ADHD symptoms. However, this speculation has not yet been validated in the ADHD population. Method: We conducted a study with 5,138 children who had been professionally diagnosed with ADHD and had a wide range of IQ levels. General linear models were used to determine the optimal model between IQ and ADHD core symptoms, with sex and age as covariates. The ADHD symptoms considered were the total score (TO), inattention (IA) and hyperactivity/impulsivity (HI). The Wechsler Intelligence Scale was used to assess IQ [Full-Scale IQ (FSIQ), Verbal IQ (VIQ), and Performance IQ (PIQ)]. Furthermore, we examined the correlation between IQ and executive function [Behavior Rating Inventory of Executive Function (BRIEF)], as well as between IQ and brain surface area, to determine whether the associations between IQ and ADHD symptoms are reflected in executive functions and brain structure. Results: Consistent with previous research, the results indicated that FSIQ and VIQ both showed a linear negative correlation with the TO and IA scores of ADHD. However, PIQ showed an inverted U-shaped relationship with the TO and HI scores of ADHD, with 103 as the peak point. These findings were also partially reflected in the relationship between IQ and executive functions, as well as between IQ and brain surface area. Conclusion: To sum up, the relationship between IQ and ADHD symptoms is not straightforward. Our study confirms long-standing academic hypotheses and finds that PIQ exhibits an inverted U-shaped relationship with ADHD symptoms. This study enhances our understanding of the symptoms and behaviors of ADHD across varying IQ profiles and provides some evidence for targeted clinical intervention.
Keywords: ADHD, IQ, executive function, brain imaging
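A minimal sketch, on synthetic data, of testing an inverted-U relationship by adding a quadratic IQ term to a general linear model with sex and age as covariates, and locating the peak at -b1/(2·b2); the column names and data are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"piq": rng.normal(100, 15, n),
                   "sex": rng.integers(0, 2, n),
                   "age": rng.uniform(6, 16, n)})
df["hi"] = -0.002 * (df.piq - 103) ** 2 + rng.normal(0, 1, n)  # built-in peak at 103

fit = smf.ols("hi ~ piq + I(piq**2) + sex + age", data=df).fit()
b1, b2 = fit.params["piq"], fit.params["I(piq ** 2)"]
print("estimated peak PIQ:", round(-b1 / (2 * b2), 1))  # ~103 if inverted-U holds
```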
Procedia PDF Downloads 66
28484 Hypersonic Flow of CO2-N2 Mixture around a Spacecraft during the Atmospheric Reentry
Authors: Zineddine Bouyahiaoui, Rabah Haoui
Abstract:
The aim of this work is to analyze the flow around an axisymmetric blunt body, taking into account chemical and vibrational nonequilibrium. This work concerns the entry of a spacecraft into the atmosphere of the planet Mars. Since the equations involved are non-linear partial differential equations, the finite volume method is the only practical way to solve this problem. The choice of the mesh and of the CFL number is a condition for convergence to the stationary solution.
Keywords: blunt body, finite volume, hypersonic flow, viscous flow
Procedia PDF Downloads 23528483 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach
Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik
Abstract:
Background: Human balance control is often studied by means of the statokinesigram. In this study, the approach to human postural reaction analysis is based on combining the stabilometry output signal with the processing, analysis, and interpretation of retroreflective marker data signals. The study also shows an original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: The participants maintained quiet bipedal standing for 10 s on a stabilometry platform. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons over a 20 s interval. The vibration stimuli caused the human postural system to settle into a new pseudo-steady state. Vibration frequencies were 20, 60, and 80 Hz. The participants' body segments - head, shoulders, hips, knees, ankles, and little fingers - were marked by 12 retroreflective markers. Marker positions were scanned by the six-camera system BTS SMART DX. Registration of the postural reaction lasted 60 s, with a sampling frequency of 100 Hz. The measured data were processed with the Method of Developed Statokinesigram Trajectory. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories correlate most strongly with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients for marker trajectories were identified for all body segments. Head marker trajectories reached the maximal value of the scaling coefficient, and ankle marker trajectories the minimal value. Hip, knee, and ankle markers were approximately symmetrical in terms of the scaling coefficient. Notable differences in the scaling coefficient were detected in the head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by MDST. Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to vibration stimuli, then the marker data represent the particular segmental responses. It can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram.Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data
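The scaling-coefficient estimation can be illustrated with a short Python sketch: a least-squares regression through the origin between a DST and a DMT signal. The signals and the value of λ below are synthetic placeholders, not measured data.

# Sketch: estimating the scaling coefficient (lambda) between a developed
# statokinesigram trajectory (DST) and a marker developed trajectory (DMT).
# Developed trajectories grow monotonically, so they are simulated here as
# cumulative sums of absolute increments.
import numpy as np

fs = 100                          # sampling frequency of 100 Hz, as in the study
t = np.arange(0, 60, 1 / fs)      # 60 s registration
dst = np.cumsum(np.abs(np.random.default_rng(1).normal(0, 0.1, t.size)))
lam_true = 2.5                    # head markers would give a large lambda
dmt = lam_true * dst + np.random.default_rng(2).normal(0, 0.5, t.size)

# Regression through the origin: lambda = (dst . dmt) / (dst . dst)
lam_hat = dst @ dmt / (dst @ dst)
print("estimated scaling coefficient:", round(lam_hat, 3))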
Procedia PDF Downloads 35228482 URM Infill in-Plane and out-of-Plane Interaction in Damage Evaluation of RC Frames
Authors: F. Longo, G. Granello, G. Tecchio, F. Da Porto
Abstract:
Unreinforced masonry (URM) infill walls are widely used throughout the world, including in seismic-prone regions, as partitions in reinforced concrete building frames. Although they are not structural elements, they can dramatically affect both the strength and stiffness of RC structures by acting as diagonal struts, modifying the shear and displacement distribution along the building height, with uncertain consequences for structural safety. In recent decades, many refined models have been developed to describe the effect of infill walls on frame structural behaviour, but they are generally restricted to in-plane actions. Only very recently have new approaches been implemented to consider the in-plane/out-of-plane interaction of URM infill walls in progressive collapse simulations. In the present work, a particularly promising macro-model was adopted for the progressive collapse analysis of infilled RC frames. The model makes it possible to consider the bi-directional interaction in terms of displacement and strength capacity for URM infills, and to remove the infill contribution when the URM wall is deemed to fail during the analysis. The model was calibrated on experimental data for two different URM panel thicknesses, modelling the post-critical softening branch with particular care. A frame specimen set representing the most common Italian structures was built considering two main normative approaches: a traditional design philosophy, corresponding to structures erected between the 1950s and 1980s and designed essentially to support vertical loads, and a seismic design philosophy, corresponding to current criteria that take horizontal actions into account. Non-linear static analyses were carried out on the specimen set, and some preliminary evaluations were drawn in terms of the different performance exhibited by the RC frame when the simultaneous effect of out-of-plane damage is considered for the URM infill.Keywords: infill panel macro-models, in-plane/out-of-plane interaction, RC frames, URM infills
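The strut-removal logic of such a macro-model can be sketched as a trilinear force-displacement backbone with a post-critical softening branch, after which the infill contribution is set to zero. All stiffness and strength values below are hypothetical, not the calibrated parameters of the paper's model.

# Sketch: trilinear backbone of an equivalent diagonal strut with a post-peak
# softening branch; the infill contribution is removed (force = 0) once the
# residual strength is exhausted. Parameters are illustrative placeholders.
def strut_force(d, k0=50.0, d_crack=2.0, d_peak=8.0, f_peak=150.0, d_ult=25.0):
    """Axial force [kN] in the strut for axial displacement d [mm]."""
    f_crack = k0 * d_crack
    if d <= d_crack:                        # elastic branch
        return k0 * d
    if d <= d_peak:                         # hardening branch up to peak
        return f_crack + (f_peak - f_crack) * (d - d_crack) / (d_peak - d_crack)
    if d <= d_ult:                          # post-critical softening branch
        return f_peak * (1 - (d - d_peak) / (d_ult - d_peak))
    return 0.0                              # infill removed after failure

for d in (1, 5, 8, 15, 30):
    print(f"d = {d:4.1f} mm -> F = {strut_force(d):7.1f} kN")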
Procedia PDF Downloads 51828481 Modified Genome-Scale Metabolic Model of Escherichia coli by Adding Hyaluronic Acid Biosynthesis-Related Enzymes (GLMU2 and HYAD) from Pasteurella multocida
Authors: P. Pasomboon, P. Chumnanpuen, T. E-kobon
Abstract:
Hyaluronic acid (HA) is a linear heteropolysaccharide consisting of repeating units of D-glucuronic acid and N-acetyl-D-glucosamine. HA has various useful properties: it maintains skin elasticity and moisture, reduces inflammation, and lubricates the movement of various body parts without causing immunogenic allergy. HA can be found in several animal tissues as well as in the capsule of some bacteria, including Pasteurella multocida. This study aimed to modify a genome-scale metabolic model of Escherichia coli by adding two HA biosynthesis-related enzymes (GLMU2 and HYAD) from P. multocida and to predict HA productivity under different carbon sources and nitrogen supplements using computational simulation and flux analysis methods. The results revealed that threonine and aspartate supplementation raised HA production by 12.186%. Our analyses suggest that the modified genome-scale metabolic model is useful for improving HA production and narrows down the number of conditions to be tested experimentally.Keywords: Pasteurella multocida, Escherichia coli, hyaluronic acid, genome-scale metabolic model, bioinformatics
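In COBRApy terms, such a modification could look like the following sketch, in which a hyaluronan synthase reaction is added to an E. coli genome-scale model and HA flux is maximized by flux balance analysis. The SBML file name, BiGG-style metabolite identifiers, and stoichiometry are assumptions to be verified against the actual model; GLMU2 (which supplies UDP-N-acetyl-D-glucosamine) would be added analogously.

# Sketch (assumptions flagged in comments): add an HA synthase reaction to an
# E. coli model with COBRApy, then maximize HA flux by FBA.
import cobra
from cobra import Metabolite, Reaction

model = cobra.io.read_sbml_model("iML1515.xml")   # assumed local SBML file

ha_c = Metabolite("ha_c", name="hyaluronan repeat unit", compartment="c")

# HYAD: hyaluronan synthase condensing UDP-glucuronate and UDP-GlcNAc.
# Metabolite IDs below follow BiGG conventions but should be checked.
hyad = Reaction("HYAD")
hyad.add_metabolites({
    model.metabolites.get_by_id("udpglcur_c"): -1,  # UDP-D-glucuronate
    model.metabolites.get_by_id("uacgam_c"): -1,    # UDP-N-acetyl-D-glucosamine
    model.metabolites.get_by_id("udp_c"): 2,
    ha_c: 1,
})
model.add_reactions([hyad])

# Demand reaction so HA can leave the system, then optimize for HA flux.
model.add_boundary(ha_c, type="demand")
model.objective = "DM_ha_c"
solution = model.optimize()
print("predicted HA flux:", solution.objective_value)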
Procedia PDF Downloads 12528480 On the System of Split Equilibrium and Fixed Point Problems in Real Hilbert Spaces
Authors: Francis O. Nwawuru, Jeremiah N. Ezeora
Abstract:
In this paper, a new algorithm for solving the system of split equilibrium and fixed point problems in real Hilbert spaces is considered. The equilibrium bifunction involves a finite family of pseudo-monotone mappings, which is an improvement over monotone operators. Moreover, the sought solution is a common fixed point of a finite family of nonexpansive mappings. The regularization parameters do not depend on Lipschitz constants. Also, the computation of the stepsize, which plays a crucial role in the convergence analysis of the proposed method, does not require prior knowledge of the norm of the involved bounded linear map. Furthermore, to speed up the rate of convergence, an inertial term is introduced into the proposed method. Under standard assumptions on the operators and the control sequences, using a modified Halpern iteration method, we establish strong convergence, a desired result in applications. Finally, the proposed scheme is applied to solve some optimization problems. The results obtained improve on numerous results announced earlier in this direction.Keywords: equilibrium, Hilbert spaces, fixed point, nonexpansive mapping, extragradient method, regularized equilibrium
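The flavor of the iteration can be conveyed by a small numerical sketch: a Halpern scheme with an inertial extrapolation step for a toy nonexpansive map on R^2. The operator and parameter choices are illustrative only; the paper's algorithm additionally handles the split equilibrium part, a finite family of mappings, and a self-adaptive stepsize.

# Sketch: Halpern iteration with an inertial term,
#   w_n = x_n + theta*(x_n - x_{n-1}),  x_{n+1} = alpha_n*u + (1-alpha_n)*T(w_n),
# for a toy nonexpansive (in fact contractive) map T with fixed point 0.
import numpy as np

def T(x):
    # Averaged rotation-plus-shrink: Lipschitz constant < 1, hence nonexpansive.
    A = 0.5 * np.array([[0.0, -1.0], [1.0, 0.0]])
    return 0.5 * x + 0.5 * (A @ x)

u = np.array([1.0, 1.0])          # Halpern anchor point
x_prev = np.zeros(2)
x = np.array([5.0, -3.0])

for n in range(1, 200):
    alpha = 1.0 / (n + 1)         # alpha_n -> 0 with divergent sum (Halpern conditions)
    theta = 0.3                   # inertial extrapolation factor
    w = x + theta * (x - x_prev)  # inertial step to accelerate convergence
    x_prev, x = x, alpha * u + (1 - alpha) * T(w)

print("approximate fixed point:", np.round(x, 6))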
Procedia PDF Downloads 51