Search results for: feature combination
4027 Effect of Enzymatic Hydrolysis and Ultrasounds Pretreatments on Biogas Production from Corn Cob
Authors: N. Pérez-Rodríguez, D. García-Bernet, A. Torrado-Agrasar, J. M. Cruz, A. B. Moldes, J. M. Domínguez
Abstract:
World economy is based on non-renewable fossil fuels such as petroleum and natural gas, which entails their rapid depletion and environmental problems. In EU countries, the objective is that at least 20% of total energy supplies in 2020 should be derived from renewable resources. Biogas, a product of anaerobic degradation of organic substrates, represents an attractive green alternative for meeting partial energy needs. Nowadays, the trend towards a circular economy model involves the efficient use of residues through their transformation from waste into a new resource. In this sense, the characteristics of agricultural residues (which are abundant, renewable, and eco-friendly) favour their valorisation as substrates for biogas production. Corn cob is a by-product of maize processing, representing 18% of total maize mass. Its importance lies in the high production of this cereal (more than 1 × 10⁹ tons in 2014). Due to its lignocellulosic nature, corn cob contains three main polymers: cellulose, hemicellulose and lignin. The crystalline, highly ordered structures of cellulose and lignin hinder microbial attack and subsequent biogas production. For optimal lignocellulose utilization and to enhance gas production in anaerobic digestion, materials are usually submitted to different pretreatment technologies. In the present work, enzymatic hydrolysis, ultrasound and the combination of both technologies were assayed as pretreatments of corn cob for biogas production. Enzymatic hydrolysis pretreatment was started by adding 0.044 U of Ultraflo® L feruloyl esterase per gram of dry corn cob. Hydrolyses were carried out in 50 mM sodium-phosphate buffer (pH 6.0) with a solid:liquid proportion of 1:10 (w/v), at 150 rpm and 40 ºC, in darkness, for 3 hours. Ultrasound pretreatment was performed by subjecting corn cob, in 50 mM sodium-phosphate buffer (pH 6.0) with a solid:liquid proportion of 1:10 (w/v), to a power of 750 W for 1 minute.
In order to observe the effect of the combination of both pretreatments, some samples were first sonicated and then enzymatically hydrolysed. In terms of methane production, anaerobic digestion of corn cob pretreated by enzymatic hydrolysis was positive, achieving 290 L CH₄ kg MV⁻¹ (compared with 267 L CH₄ kg MV⁻¹ obtained with untreated corn cob). Although the use of ultrasound as the only pretreatment proved detrimental (gas production decreased to 244 L CH₄ kg MV⁻¹ after 44 days of anaerobic digestion), its combination with enzymatic hydrolysis was beneficial, reaching the highest value (300.9 L CH₄ kg MV⁻¹). Consequently, the combination of both pretreatments improved biogas production from corn cob.
Keywords: biogas, corn cob, enzymatic hydrolysis, ultrasound
Procedia PDF Downloads 266
4026 Optimal Solutions for Real-Time Scheduling of Reconfigurable Embedded Systems Based on Neural Networks with Minimization of Power Consumption
Authors: Ghofrane Rehaiem, Hamza Gharsellaoui, Samir Benahmed
Abstract:
In this study, Artificial Neural Networks (ANNs) were used to model the parameters that allow real-time scheduling of embedded systems under resource constraints. The objective of this work is to implement a neural-network-based approach for real-time scheduling of embedded systems in order to handle real-time constraints in execution scenarios. In our proposed approach, several techniques are combined for both the planning of tasks and the reduction of energy consumption. In fact, a combination of Dynamic Voltage Scaling (DVS) and time feedback can be used to scale the frequency dynamically while adjusting the operating voltage. Indeed, we present in this paper a hybrid contribution that handles the real-time scheduling of embedded systems with low power consumption, based on the combination of DVS and Neural Feedback Scheduling (NFS) with the energy Priority Earliest Deadline First (PEDF) algorithm. Experimental results illustrate the efficiency of our proposed approach.
Keywords: optimization, neural networks, real-time scheduling, low-power consumption
Procedia PDF Downloads 369
4025 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric
Authors: Geetika Barman, B. S. Daya Sagar
Abstract:
In this article, we propose a pixel-wise classification of hyperspectral images using mathematical morphology operator-based distance metrics called "dilation distance" and "erosion distance". The method involves measuring the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the "dilation distance" is the maximum distance a pixel can be moved without changing its classification, whereas the "erosion distance" is the maximum distance a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and shape for each class. This article demonstrates how easily the dilation and erosion distances can measure spatial distance compared to other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. The dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of the various classes and to separate each class precisely. This is illustrated using both toy data and real datasets. Furthermore, we investigated the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. To validate the approach, we compared it to other existing methods and demonstrated empirically that the mathematical-morphology-based distance metric classification provided competitive results and outperformed some of them.
Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology
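The abstract does not give an algorithm, but the core idea of a dilation distance can be sketched on a toy binary image: count how many dilations by a structuring element are needed before a set reaches a given pixel. The flat 3×3 structuring element and the grid below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dilate(mask):
    """One dilation of a boolean image with a flat 3x3 structuring element."""
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= padded[1 + di:1 + di + mask.shape[0],
                          1 + dj:1 + dj + mask.shape[1]]
    return out

def dilation_distance(mask, pixel, max_iter=1000):
    """Smallest number of dilations needed for the set `mask` to reach `pixel`."""
    current, n = mask.copy(), 0
    while not current[pixel]:
        current = dilate(current)
        n += 1
        if n > max_iter:
            raise ValueError("pixel not reachable")
    return n
```

With the 3×3 element, each dilation grows the set by one pixel in the Chebyshev sense, so the count behaves like a morphological distance from the set to the pixel.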
Procedia PDF Downloads 83
4024 Muhammad's Vision of Interaction with Supernatural Beings According to the Hadith in Comparison to Parallels of Other Cultures
Authors: Vladimir A. Rozov
Abstract:
Comparative studies of religion and ritual can contribute to a better understanding of the universalities of human culture. Belief in supernatural beings seems to be a common feature of religion. A significant part of the Islamic concepts that concern supernatural beings is based on the tradition of the Hadiths. They reflect, among other things, Muhammad's ideas about the proper way to interact with supernatural beings. These ideas to a large extent follow from the pre-Islamic religious experience of the Arabs and were reflected in a number of ritual actions. Some of those beliefs concern a particular function of clothing. For example, it is known that Muhammad was wrapped in clothes during the revelation of the Quran. The same was done by pre-Islamic soothsayers (kāhin) and by rival opponents of Muhammad during their trances. Muhammad also turned his clothes inside out during religious rituals (the prayer for rain). Besides these specific uses of clothing, which show the external similarity of Muhammad to the soothsayers and other people who claimed connection with supernatural forces, the pre-Islamic soothsayers had another characteristic feature: physical flaws. In this regard, it is worth noting Muhammad's so-called "Seal of the Prophecy" (ḫātam an-nubūwwa), a protrusion or outgrowth on his back. Another interesting feature of Muhammad's behavior was his attitude to eating onion and garlic. In particular, the Prophet did not eat them and forbade people who had tasted these vegetables to enter mosques until the smell ceased to be felt. The reason for this ban is the belief that the smell of these products prevents communication with otherworldly forces. The materials of the Hadith also suggest that Muhammad shared faith in the apotropaic properties of water. Both of these ideas have parallels in other cultures of the world.
Muhammad's actions that were supposed to provide interaction with supernatural beings are not accidental. They have parallels in the culture of pre-Islamic Arabia as well as in many past and present world cultures. The latter fact can be explained by the similarity of universal human beliefs in supernatural beings and in how they should be interacted with. Later, a number of similar ideas shared by the Prophet Muhammad were legitimized by the Islamic tradition and formed the basis of popular Islamic rituals. Thus, these parallels emphasize the commonality of human notions of supernatural beings and also demonstrate the significance of the pre-Islamic cultural context in analyzing the genesis of Islamic religious beliefs.
Keywords: hadith, Prophet Muhammad, ritual, supernatural beings
Procedia PDF Downloads 387
4023 A Levelized Cost Analysis for Solar Energy Powered Sea Water Desalination in the Arabian Gulf Region
Authors: Abdullah Kaya, Muammer Koc
Abstract:
A levelized cost analysis of solar-energy-powered seawater desalination in the Emirate of Abu Dhabi is conducted to show that clean and renewable desalination is economically viable. The Emirate relies heavily on seawater desalination for its freshwater needs due to the limited freshwater resources available. This reliance is expected to increase further due to the growing population and economic activity, the rapid decline of limited freshwater reserves, and the aggravating effects of climate change. Seawater desalination in Abu Dhabi is currently done through thermal desalination technologies such as multi-stage flash (MSF) and multi-effect distillation (MED), which are coupled with thermal power plants in a scheme known as co-generation. Our analysis indicates that these thermal desalination methods are inefficient in terms of energy consumption and harmful to the environment due to CO₂ emissions and other dangerous byproducts. Therefore, the utilization of clean and renewable desalination options has become a must for the Emirate in its transition to a sustainable future. The rapid decline in the cost of solar PV systems for energy production and of RO technology for desalination makes the combination of these two an ideal option for a future of sustainable desalination in the Emirate of Abu Dhabi. A levelized cost analysis for water produced by a solar PV + RO system indicates that Abu Dhabi is well positioned to utilize this technological combination for cheap and clean desalination in the coming years. It has been shown that the cap-ex cost of a solar-PV-powered RO system has the potential to go as low as 101 million US$ (1111 $/m³) in the best case, considering recent technological developments. The levelized cost of water (LCW) values fluctuate between 0.34 $/m³ for the baseline case and 0.27 $/m³ for the best case. Even the highly conservative case yields an LCW cheaper than that of all thermal desalination methods currently employed in the Emirate.
Exponential cost decreases in both the solar PV and RO sectors, along with increasing economies of scale globally, signal that cheap and clean desalination can be achieved by the combination of these technologies.
Keywords: solar PV, RO desalination, sustainable desalination, levelized cost analysis, Emirate of Abu Dhabi
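As a rough illustration of how a levelized cost of water is computed, the sketch below annualizes capital expenditure with a capital recovery factor and divides the total annual cost by annual water production. All inputs (discount rate, plant lifetime, capex, opex, output) are hypothetical placeholders, not the study's figures:

```python
def capital_recovery_factor(rate, years):
    """Annualizes capex: CRF = r(1+r)^n / ((1+r)^n - 1)."""
    growth = (1.0 + rate) ** years
    return rate * growth / (growth - 1.0)

def levelized_cost_of_water(capex, annual_opex, annual_m3, rate, years):
    """LCW in $/m3: (annualized capex + annual opex) / annual production."""
    return (capex * capital_recovery_factor(rate, years) + annual_opex) / annual_m3

# hypothetical plant: 100 M$ capex, 5 M$/yr opex, 50 M m3/yr, 5% over 20 years
lcw = levelized_cost_of_water(100e6, 5e6, 50e6, rate=0.05, years=20)
```

The same structure applies to any case in the analysis; only the inputs change between the baseline, best, and conservative scenarios.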
Procedia PDF Downloads 162
4022 Heat Transfer Coefficients of Layers of Greenhouse Thermal Screens
Authors: Vitaly Haslavsky, Helena Vitoshkin
Abstract:
The total energy-saving effect of different types of greenhouse thermal/shade screens was determined by measuring and calculating the overall heat transfer coefficients (U-values) for single and multiple layers of screens. The measurements were carried out using the hot box method, and the calculations were performed according to ISO Standard 15099. The goal was to examine different types of materials with a wide range of thermal radiation properties used for thermal screens, in combination with a dehumidification system, in order to improve greenhouse insulation. The experimental results were in good agreement with the calculated heat transfer coefficients. It was shown that a high amount of infrared (IR) radiation can be blocked by the greenhouse covering material in combination with moveable thermal screens. The aluminum foil screen could be replaced by transparent screens, depending on shading requirements. The results indicated that with a single layer, the U-value was reduced by approximately 70% compared to the covering material alone, while additional screen layers containing aluminum foil strips could reduce the U-value by approximately 90%. It was shown that three screen layers are sufficient for effective insulation.
Keywords: greenhouse insulation, heat loss, thermal screens, U-value
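A back-of-the-envelope way to see why extra screen layers lower the overall U-value is to treat the layers as thermal resistances in series (R = 1/U). This ignores the convective and radiative coupling in the air gaps that ISO 15099 accounts for, so it is only a sketch under simplifying assumptions; the layer U-values below are arbitrary:

```python
def combined_u_value(u_values):
    """Series combination of layer U-values: resistances R = 1/U add up."""
    total_resistance = sum(1.0 / u for u in u_values)
    return 1.0 / total_resistance

def reduction_percent(u_cover, u_with_screens):
    """Percentage reduction of U relative to the cover alone."""
    return 100.0 * (1.0 - u_with_screens / u_cover)

# hypothetical cover (U = 6 W/m2K) plus two screen layers of U = 6 each
u_total = combined_u_value([6.0, 6.0, 6.0])
```

Each added layer adds its resistance, so the combined U-value falls with diminishing returns, consistent with the observation that three screen layers are sufficient.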
Procedia PDF Downloads 111
4021 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The EU-funded Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis, and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis, and reaction tools that provide intelligence to field equipment. This allows the field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM is an intrusion detection mechanism that does not need any labeled data for training, nor any information about the kind of anomaly to expect during the detection process. This feature makes it ideal for processing SCADA environment data and automates SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time.
The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system through the exchange of IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
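A minimal OCSVM detector in the spirit described, trained only on normal traffic and flagging deviations, can be sketched with scikit-learn's `OneClassSVM`. The two per-window features (packet rate, mean packet size), the synthetic training data, and the parameter values are illustrative assumptions, not the CockpitCI module:

```python
import numpy as np
from sklearn.svm import OneClassSVM  # assumes scikit-learn is available

rng = np.random.default_rng(0)
# hypothetical per-window features: [packets per second, mean packet size]
normal_traffic = rng.normal(loc=[100.0, 500.0], scale=[5.0, 20.0], size=(500, 2))

# nu bounds the fraction of training points treated as outliers
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(normal_traffic)  # trained on normal traces only -- no labels needed

def is_attack(window):
    """Flag a traffic window as anomalous (-1 from the OCSVM)."""
    features = np.asarray(window, dtype=float).reshape(1, -1)
    return bool(model.predict(features)[0] == -1)
```

As in the abstract, training requires only normal traces; at run time each new traffic window is classified as normal or anomalous without prior knowledge of the attack type.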
Procedia PDF Downloads 552
4020 Optimal Economic Restructuring Aimed at an Optimal Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions
Authors: Alexander Vaninsky
Abstract:
The objective of this paper is to find a way of economic restructuring, that is, a change in the shares of sectoral gross outputs, resulting in the maximum possible increase in the gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for the GDP and factorial models for the energy consumption and CO2 emissions to determine the projection of the gradient of GDP, and the antigradients of the energy consumption and CO2 emissions, respectively, on a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of the steepest increase (decrease) of the objective function, and their projections retain this property for the functions' restriction to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximal possible positive correlation with each of the three. This convex combination provides the desired direction of structural change. The national economy of the United States is used as an example of application.
Keywords: economic restructuring, input-output analysis, Divisia index, factorial decomposition, E3 models
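The core linear-algebra step, projecting the GDP gradient (or an antigradient) onto the subspace spanned by the structure-related variables, can be sketched as follows. The basis matrix S is a hypothetical stand-in for the model's structure variables, not the paper's actual data:

```python
import numpy as np

def project_onto_subspace(g, S):
    """Orthogonal projection of a gradient g onto the column space of S,
    the subspace formed by the structure-related variables."""
    # least-squares solve avoids forming (S'S)^{-1} explicitly
    coeffs, *_ = np.linalg.lstsq(S, g, rcond=None)
    return S @ coeffs
```

In the paper's scheme, the projected GDP gradient and the projected antigradients of energy consumption and CO2 emissions would each be obtained this way before forming their convex combination.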
Procedia PDF Downloads 313
4019 DC Bus Voltage Ripple Control of Photovoltaic Inverter in Low Voltage Ride-Through Operation
Authors: Afshin Kadri
Abstract:
The use of Renewable Energy Resources (RES) as DG units is growing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some operational aspects of these grids. Most RES require power-electronic-based interfaces for connection to AC systems. These interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Two curves define these conditions: one gives the magnitude of the reactive component of current as a function of the voltage drop value, and the other the minimum time for which the inverter must remain connected to the grid. This feature is named low voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter, among them an increase in the amplitude of high-frequency components of the injected current and, for inverters connected directly to photovoltaic panels, operation away from the maximum power point. The important phenomenon in these conditions is the ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. The losses in the DC bus capacitors, which are electrolytic, raise their temperature and shorten their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and has the duty of maximum power point tracking, these ripples cause oscillations around the operating point and decrease the generated energy. The traditional method of eliminating these ripples is to use a bidirectional converter in the DC bus, which works as a buck-boost converter and transfers the ripples to its own DC bus. Although it eliminates the ripples in the DC bus, this method cannot solve the reliability problem because it still uses an electrolytic capacitor in its DC bus.
In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. Moreover, the proposed method works on the basis of constant power control. In this way, in addition to supporting the amplitude of the grid voltage, it stabilizes its frequency by injecting active power. The proposed method can also eliminate the DC bus ripples in deep voltage drops, which would otherwise increase the amplitude of the reference current beyond the nominal current of the inverter. The amplitude of the injected current for the faulty phases in these conditions is kept at the nominal value, and its phase, together with the phase and amplitude of the other phases, is adjusted so that, in the end, the ripples in the DC bus are eliminated, although the generated power decreases.
Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter
Procedia PDF Downloads 75
4018 Analysis of the Simulation Merger and Economic Benefit of Local Farmers' Associations in Taiwan
Authors: Lu Yung-Hsiang, Chang Kuming, Dai Yi-Fang, Liao Ching-Yi
Abstract:
Taiwan's future administrative division of land planning may lead farmers' associations and their service areas to face recombination or merger. Thus, merger combinations and the economic benefits of farmers' associations are worth discussing. In a merger, some associations may remain unconsolidated, while two or even more may be consolidated into one. Under what conditions the benefit of merging is greatest is one of the questions examined in this study. In addition, previous research has not used simulation methods and has examined only the credit department rather than the whole farmers' association. Therefore, this paper uses a simulation approach and examines both the merger of farmers' associations and the conditions under which the benefits are greatest. The dataset of this study includes 266 farmers' associations in Taiwan over the period 2012 to 2013. Empirical results show that the optimal simulated combination consists of 108 farmers' associations: the first-stage merger can reduce the number of associations by 60%. The post-merger cost-saving effects are not significantly different, while the cost efficiency of the farmers' associations is improved. Economies of scale and scope would decrease as a result of the merger. This paper hopes the findings will benefit future mergers of farmers' associations.
Keywords: simulation merger, farmer association, assurance region, data envelopment analysis
Procedia PDF Downloads 348
4017 On the Use of Reliability Factors to Reduce Conflict between Information Sources in Dempster-Shafer Theory
Authors: A. Alem, Y. Dahmani, A. Hadjali, A. Boualem
Abstract:
Managing the problem of conflict, whether by using Dempster-Shafer theory or by applying the fusion process, has pushed researchers in recent years to find ways to make the best decisions, especially in information systems, vision, robotics, and wireless sensor networks. In this paper, we take the conflict into account at the combination step and manage it in such a way that it does not influence the decision step, retaining only the conflict that comes from reliable sources. According to [1], conflict leads to erroneous decisions in cases where it is strong between sources of information; if the conflict exceeds the maximum of the belief mass functions, K > max₁…ₙ(mᵢ(A)), then the decision becomes impossible. We demonstrate in this paper that the multiplication of mass functions by reliability coefficients is a decreasing operation: it leads to the reduction of conflict and to a good decision. We define the reliability coefficients accurately and multiply them by the mass functions of each information source to resolve the conflict, allowing a decision whatever the degree of conflict. The evaluation of this technique is done through a use case: a comparison of the combination of sources with maximum conflict, with and without reliability coefficients.
Keywords: Dempster-Shafer theory, fusion process, conflict managing, reliability factors, decision
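The discounting operation described, multiplying mass functions by reliability coefficients and moving the remainder to total ignorance, together with Dempster's rule and its conflict K, can be sketched as below. The frame and mass values are toy examples; combining two strongly opposed sources shows how discounting reduces K:

```python
from itertools import product

def discount(mass, alpha, frame):
    """Shafer discounting: scale each mass by reliability alpha and
    move the remainder 1 - alpha to total ignorance (the whole frame)."""
    out = {A: alpha * m for A, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule of combination; returns (combined mass, conflict K)."""
    K, combined = 0.0, {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            K += a * b  # mass assigned to the empty set = conflict
    return {C: v / (1.0 - K) for C, v in combined.items()}, K

# two strongly conflicting sources over a toy frame {a, b}
frame = frozenset({"a", "b"})
m1 = {frozenset({"a"}): 0.9, frame: 0.1}
m2 = {frozenset({"b"}): 0.9, frame: 0.1}
_, k_raw = combine(m1, m2)                    # high conflict
_, k_disc = combine(discount(m1, 0.5, frame),
                    discount(m2, 0.5, frame))  # conflict reduced
```

Here K drops from 0.81 to about 0.20 after both sources are discounted with a reliability of 0.5, illustrating the claimed decreasing effect.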
Procedia PDF Downloads 425
4016 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph
Abstract:
In recent years, graph neural networks have been widely used in knowledge graph recommendation. The existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way of extracting information. In order to better propose useful entity information for the current recommendation task in the knowledge graph, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from the perspectives of both users and items. Specifically, we use the attention mechanism from the user's perspective to distill the neighborhood node information of the predicted item in the knowledge graph, to enhance the user's information on items, and to generate the feature representation of the predicted item. Since the user's history of clicked items can reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relations and the similarity between the item to be predicted and the entities, aggregates the neighborhood entity information of the user's history click items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: MovieLens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with the most advanced models, our proposed model can better capture the entity information in the knowledge graph, which proves the validity and accuracy of the model.
Keywords: graph attention network, knowledge graph, recommendation, information propagation
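The basic building block, scoring neighborhood entities against a user (or item) embedding and aggregating them with softmax attention weights, can be sketched as follows. This is a generic single-head attention aggregation, not the full MSGAT architecture; the dot-product scoring is an assumed choice:

```python
import numpy as np

def attention_aggregate(query, neighbors):
    """Aggregate neighbor entity embeddings (rows of `neighbors`) with
    softmax attention scores taken against a query embedding."""
    scores = neighbors @ query                  # one score per neighbor
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ neighbors                  # weighted sum of rows
```

In a multi-stream setting, several such aggregations (one per scoring criterion: entity preference, relation preference, item similarity) would run in parallel and their outputs be merged into the final user representation.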
Procedia PDF Downloads 115
4015 A Conceptual Analysis of Right of Taxpayers to Claim Refund in Nigeria
Authors: Hafsat Iyabo Sa'adu
Abstract:
A salient feature of Nigerian tax law is the right of the taxpayer to demand a refund where excess tax is paid. Section 23 of the Federal Inland Revenue Service (Establishment) Act, 2007 vests the Federal Inland Revenue Service with the power to make tax refunds as well as to set guidelines and requirements for the refund process from time to time. In addition, Section 61 of the Act empowers the Federal Inland Revenue Service to issue information circulars to acquaint stakeholders with the policy on the refund process. A circular was issued to that effect to correct the position that such excess could be paid to the claimant/taxpayer only after the annual audit of the Service. But it is striking that no such circular issuance features under the states' laws. Hence, there is an inconsistency in the tax-paying system in Nigeria. This study, therefore, sets out to examine the trending concept of tax refund in Nigeria. In order to achieve this objective, a doctrinal study was undertaken, in which both federal and state laws were consulted, including journals and textbooks. At the end of the research, it was revealed that the law should be specific as to the time frame within which to make the refund. It further revealed that it is essential to put up a legal framework for the tax system that recognizes excess payment as debt due from the state. This would provide a foundational framework for the relationship between taxpayers and the Federal Inland Revenue Service as well as promote effective tax administration in all the states of the federation. Several recommendations were made, especially relating to the legislative passage of 'Refund Circular Bills' at the state level pursuant to the Federal Inland Revenue Service (Establishment) Act, 2007.
Keywords: claim, Nigeria, refund, right
Procedia PDF Downloads 116
4014 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization
Authors: Wenqi Liu, Reginald Bailey
Abstract:
This study explores an advanced approach to enhancing B2B sales forecasting by integrating machine learning models with a rule-based decision framework. The methodology begins with the development of a machine learning classification model to predict conversion likelihood, aiming to improve accuracy over traditional methods like logistic regression. The classification model's effectiveness is measured using metrics such as accuracy, precision, recall, and F1 score, alongside a feature importance analysis to identify key predictors. Following this, a machine learning regression model is used to forecast sales value, with the objective of reducing mean absolute error (MAE) compared to linear regression techniques. The regression model's performance is assessed using MAE, root mean square error (RMSE), and R-squared metrics, emphasizing feature contribution to the prediction. To bridge the gap between predictive analytics and decision-making, a rule-based decision model is introduced that prioritizes customers based on predefined thresholds for conversion probability and predicted sales value. This approach significantly enhances customer prioritization and improves overall sales performance by increasing conversion rates and optimizing revenue generation. The findings suggest that this combined framework offers a practical, data-driven solution for sales teams, facilitating more strategic decision-making in B2B environments.
Keywords: sales forecasting, machine learning, rule-based decision model, customer prioritization, predictive analytics
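A minimal version of the rule-based layer, filtering leads by thresholds on predicted conversion probability and predicted sales value, then ranking by expected value, might look like the sketch below. The threshold values and field names are illustrative assumptions, not the study's configuration:

```python
def expected_value(lead):
    """Expected revenue = conversion probability x predicted sales value."""
    return lead["p_convert"] * lead["pred_value"]

def prioritize(leads, p_threshold=0.6, value_threshold=50_000.0):
    """Leads clearing both thresholds come first; each group is then
    ranked by expected value, highest first."""
    qualified = [l for l in leads
                 if l["p_convert"] >= p_threshold
                 and l["pred_value"] >= value_threshold]
    rest = [l for l in leads if l not in qualified]
    by_ev = lambda l: -expected_value(l)
    return sorted(qualified, key=by_ev) + sorted(rest, key=by_ev)
```

The upstream classification and regression models would supply `p_convert` and `pred_value`; the rule layer then turns those predictions into an actionable call order.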
Procedia PDF Downloads 14
4013 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling
Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari
Abstract:
A new definition of a latent variable associated with a dataset makes it possible to propose variants of PLS2 regression and multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS, respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher-order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k = 1, ..., K), namely, tk = XkXk'YY'q.
Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k = 1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis
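The first Rd-PLS component as defined above (q the leading eigenvector of YY'XX'YY', then u = YY'q and t = XX'YY'q) can be computed directly; a sketch with NumPy, assuming centered X and Y with rows corresponding to the same individuals:

```python
import numpy as np

def rd_pls_first_component(X, Y):
    """First Rd-PLS component: q is the unit-norm leading eigenvector of
    YY'XX'YY'; then u = YY'q and t = XX'YY'q."""
    Gx, Gy = X @ X.T, Y @ Y.T
    M = Gy @ Gx @ Gy                   # symmetric positive semi-definite
    _, eigvecs = np.linalg.eigh(M)     # eigenvalues in ascending order
    q = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    u = Gy @ q
    t = Gx @ u                         # t = XX'YY'q
    return t, u, q
```

Higher-order components would follow by deflating X and Y with respect to t, as described above; the multi-block variant only changes X to the horizontal merge of the blocks Xk.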
Procedia PDF Downloads 146
4012 Downscaling Grace Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation
Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang
Abstract:
The Gravity Recovery and Climate Experiment (GRACE) is a satellite mission with twin satellites for the precise determination of spatial and temporal variations in the Earth's gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used for estimating terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface water and groundwater storage (GWS) assessments. Yet, the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities. This is largely due to the low spatial and temporal resolutions of its models (~200,000 km² and one month, respectively). High-resolution GRACE data products would substantially enrich the information needed by local-scale decision-makers while offering data for regions that lack adequate in situ monitoring networks, including the northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended the spectral combination theory to simultaneously downscale GRACE spatiotemporally, from its coarse 3° spatial resolution to 0.25° and from its coarse monthly resolution to daily resolution. This method combines the monthly gravity field solutions of GRACE and daily hydrological model products, in the form of both low- and high-frequency signals, to produce high spatiotemporal resolution TWSA and GWSA products. The main contribution and originality of this study is to comprehensively and simultaneously consider GRACE and hydrological variables and their uncertainties to form the estimator in the spectral domain. It is therefore anticipated that the downscaled products reach an acceptable accuracy.
Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage
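The estimator is formed in the spectral domain from GRACE and hydrological signals weighted by their uncertainties. A generic minimum-variance flavour of such a combination, inverse-variance weighting of two independent estimates of the same coefficients, can be sketched as follows (a simplification for illustration, not the authors' estimator):

```python
import numpy as np

def spectral_combine(s_grace, var_grace, s_model, var_model):
    """Inverse-variance (minimum-variance) combination, per spectral
    coefficient, of a GRACE-derived and a hydrological-model-derived
    estimate of the same signal."""
    total = var_grace + var_model
    w_grace = var_model / total   # trust GRACE more where the model is noisy
    w_model = var_grace / total   # and vice versa
    return w_grace * s_grace + w_model * s_model
```

Applied coefficient by coefficient, the low-degree (low-frequency) part is dominated by GRACE, where its errors are small, while the high-frequency part falls back on the hydrological model, which is the qualitative behaviour the abstract describes.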
Procedia PDF Downloads 824011 Agronomic Response of Fluted Pumpkin (Telfairia occidentalis Hook. f.) to Planting Densities and Fertilizer Application
Authors: Falodun E. J., Ogbeifun S. O.
Abstract:
The objectives of this study were to investigate the yield, nutrient concentration, and nutrient uptake of fluted pumpkin (Telfairia occidentalis Hook. f.) in response to spacing and fertilizer application. Two fluted pumpkin plant populations (D1, 10,000 and D2, 20,000 plants ha⁻¹) were evaluated at three fertilizer levels (F₁, 20 t ha⁻¹ poultry manure; F₂, 300 kg ha⁻¹ NPK 15:15:15; and F₃, 10 t ha⁻¹ poultry manure + 150 kg ha⁻¹ NPK 15:15:15) using a factorial arrangement in a randomized complete block design (RCBD) with three replications. Leaf length, leaf breadth, and the number of leaves were significantly increased at the lower plant population of 10,000 plants ha⁻¹, while herbage yield increased at the higher plant population of 20,000 plants ha⁻¹ with 300 kg ha⁻¹ inorganic NPK 15:15:15 or a combination of 10 t ha⁻¹ poultry manure + 150 kg ha⁻¹ inorganic NPK 15:15:15. Potassium (K) concentration was significantly (p < 0.05) higher at 10,000 plants ha⁻¹, and iron (Fe) uptake was higher with the combined application of organic and inorganic fertilizer (F₃). To maximize the herbage yield of fluted pumpkin, farmers in this locality should adopt a plant population of 20,000 plants ha⁻¹ with 300 kg ha⁻¹ inorganic NPK 15:15:15 (D2F2) or a combination of 10 t ha⁻¹ poultry manure + 150 kg ha⁻¹ inorganic NPK 15:15:15 (D2F3). Keywords: fertilizers, fluted pumpkin, herbage yield, plant population
Procedia PDF Downloads 1884010 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques
Authors: Imed Feki, Faouzi Msahli
Abstract:
Selecting relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion was developed using two approaches. The first uses a fuzzy sensitivity criterion, exploiting the relationship in experimental data between the physical parameters and all the sensory quality features for each evaluator. An OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By again applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates thresholds automatically, effectively reducing the human subjectivity and arbitrariness of manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. We applied the percolation technique to a real example, a well-known finished textile product, stonewashed denim, whose sensory properties are usually considered among the most important quality criteria in jeans evaluation, and separated the relevant physical features from the irrelevant ones for each sensory descriptor.
The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the sets of selected relevant parameters. Instead of selecting an identical number of features with a predefined threshold, the method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable selections of relevant physical features, the percolation technique combines the fuzzy global relevancy and OWA global relevancy criteria so as to clearly distinguish the scores of the relevant physical features from those of irrelevant ones. Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique
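The OWA aggregation step described in this abstract can be sketched in a few lines. The evaluator scores and weights below are invented for illustration and are not the study's data; the defining property is that the weights apply to the sorted scores, not to particular evaluators.

```python
# Illustrative sketch of an OWA (Ordered Weighted Averaging) aggregation.

def owa(scores, weights):
    """Aggregate scores with OWA: weights are applied to the scores
    sorted in descending order, not to specific evaluators."""
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Three evaluators score the relevancy of one physical feature:
scores = [0.9, 0.4, 0.7]
# Weights biased toward the higher scores (an "orness" above 0.5):
weights = [0.5, 0.3, 0.2]
aggregated = owa(scores, weights)  # 0.9*0.5 + 0.7*0.3 + 0.4*0.2 = 0.74
```

Repeating such an aggregation over the per-evaluator ranking lists, as the abstract describes, yields a single global relevancy score per physical feature.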
Procedia PDF Downloads 6034009 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose
Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng
Abstract:
Taiwan is an island country deeply affected by the monsoon. Under climate change, extreme rainstorms brought by typhoons have become more and more frequent since 2000. When an extreme rainstorm arrives, it causes serious damage in Taiwan, especially in urban areas, which suffer from flooding; the government regards this as an urgent issue. In the past, urban land-use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased and most people now live in urban areas. Urban areas are therefore highly vulnerable, yet they cannot cope with the surface runoff and flooding. Building detention ponds through hydraulic engineering alone is not feasible in urban areas: land expropriation makes detention-pond construction there extremely expensive, and the government cannot afford it. A flooding management strategy for urban areas should therefore make use of an existing resource, public facilities land. By giving public facilities land a detention function, flood-detention performance can be achieved, and such multi-use of public facilities land also demonstrates the integration of land-use planning with water agency concerns. To this end, this research generalizes, through a literature review, the factors governing the multi-use of public facilities land as flood-detention space. The factors fall into two categories: environmental factors and conditions of the public facilities. The environmental factors are terrain elevation, inundation potential, and distance from the drainage system. The conditions of the public facilities comprise six factors, including area, building rate, and the maximum available ratio. Each factor is weighted according to its characteristics for a land-use suitability analysis.
The rules of combination are selected from logical combinations, after which the results are classified into three suitability levels. The three suitability levels are then input into a physiographic inundation model to simulate and evaluate flood detention for each level. This study responds to an urgent urban issue and establishes a model for the multi-use of public facilities land as flood-detention space through a systematic research process. The results indicate which combination of suitability levels is most efficacious. Moreover, the model not only stands on the side of urban planners but also incorporates the viewpoint of the water agency. These findings may serve as a basis for land-use indicators and as decision-making references for the government agencies concerned. Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model
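The weighted suitability scoring this abstract describes can be sketched as a simple weighted overlay for one parcel of public facilities land. The factor names follow the abstract, but the weights and the normalised factor values below are invented for illustration, not those derived in the study.

```python
# Toy sketch of a weighted land-use suitability score.
# All factor values are assumed pre-normalised to [0, 1], where higher
# means more suitable for flood detention (e.g. lower ground -> higher score).

WEIGHTS = {
    "terrain_elevation":    0.20,
    "inundation_potential": 0.25,
    "drainage_distance":    0.15,
    "area":                 0.15,
    "building_rate":        0.15,
    "available_ratio":      0.10,
}

def suitability(factors):
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

parcel = {"terrain_elevation": 0.8, "inundation_potential": 0.9,
          "drainage_distance": 0.6, "area": 0.7,
          "building_rate": 0.5, "available_ratio": 0.4}
score = suitability(parcel)  # a single 0-1 score, later cut into 3 levels
```

Thresholding such scores into three classes would give the three suitability levels fed to the inundation model.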
Procedia PDF Downloads 3564008 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, as people grow busier, the consumption of fast food is increasing; the diagnosis and treatment of this disease are therefore of particular importance. To determine the best treatment approach for a specific colon cancer patient, the oncologist must know the stage of the tumor. The most common method for determining tumor stage is the TNM staging system, in which M indicates the presence of metastasis, N the extent of spread to the lymph nodes, and T the size of the tumor. Clearly, determining all three parameters requires an imaging method, and the gold-standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays raises the patient’s absorbed dose and cancer risk, while access to PET/CT is limited by the high cost of the equipment. In this study, we therefore aimed to estimate tumor size and the extent of spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the pre-processing step, histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, each with more than 21 years of experience with colon cancer cases, segmented the images and extracted the tumor regions. The next steps are feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network is used to perform both of these tasks, i.e., feature extraction and classification. This network has 13 convolutional layers for feature extraction and three fully connected layers with a softmax activation function for classification.
To validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data), and the remainder for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features for staging colon cancer patients are among the study’s advantages. Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
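The per-fold metric computation the validation scheme relies on (accuracy, sensitivity, specificity) can be sketched directly. The labels below are made up for a binary illustration; the study itself uses three TNM classes and the VGG-16 predictions.

```python
# Minimal sketch of per-fold accuracy / sensitivity / specificity.

def fold_metrics(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # true positive rate
    specificity = tn / (tn + fp) if tn + fp else 0.0   # true negative rate
    return accuracy, sensitivity, specificity

y_true = [1, 1, 0, 0, 1, 0]   # invented ground-truth labels
y_pred = [1, 0, 0, 0, 1, 1]   # invented model predictions
acc, sens, spec = fold_metrics(y_true, y_pred)
# acc = 4/6, sens = 2/3, spec = 2/3; in the study, these would be
# averaged over the 10 repetitions before being reported.
```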
Procedia PDF Downloads 574007 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating the capacities of visual functions whose adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of its many applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed in a pilot study. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of a subject being amateur or trained. The linear discriminant classifier achieves accuracy, sensitivity, and specificity of 100% when the average over repetitions of the task-related signal is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter for improving volleyball players’ performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed. Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
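The short-time Fourier feature extraction the keywords refer to can be sketched on a synthetic signal. The window and hop sizes below are illustrative choices, and the 8 Hz sine stands in for an evoked-potential recording; none of this is the study's actual pipeline.

```python
import numpy as np

# Toy short-time Fourier power features from a 1-second synthetic "VEP".

def stft_power(signal, win=64, hop=32):
    """Hann-windowed frame power spectra, shape (n_frames, n_bins)."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

fs = 256                               # assumed sampling rate, Hz
t = np.arange(fs) / fs
vep = np.sin(2 * np.pi * 8 * t)        # synthetic 8 Hz component
power = stft_power(vep)                # time-frequency feature matrix
peak_bin = power.mean(axis=0).argmax() # dominant frequency bin
freq = peak_bin * fs / 64              # bin width is fs/win = 4 Hz
```

Feature matrices like `power` (or wavelet analogues) are what a linear discriminant classifier would then be trained on.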
Procedia PDF Downloads 1354006 Non-linear Model of Elasticity of Compressive Strength of Concrete
Authors: Charles Horace Ampong
Abstract:
Non-linear models are useful for modelling the elasticity (degree of responsiveness) of a dependent variable with respect to a set of independent variables, ceteris paribus. This constant-elasticity principle was applied to the dependent variable, compressive strength of concrete (in MPa), which was found to be non-linearly related to the independent variable, water-cement ratio (in kg/m³), for given ages of concrete in days (3, 7, 28) at different levels of the admixtures superplasticizer (kg/m³), blast furnace slag (kg/m³), and fly ash (kg/m³). The admixture levels were categorized as: S1 = some superplasticizer added and S0 = no superplasticizer added; B1 = some blast furnace slag added and B0 = no blast furnace slag added; F1 = some fly ash added and F0 = no fly ash added. One hundred and thirty-two (132) observations (samples) were used in the research. For superplasticizer, compressive strength of concrete was more elastic with respect to water-cement ratio at level S1 than at S0 for concrete ages of 3, 7, and 28 days. For blast furnace slag, compressive strength with respect to water-cement ratio was more elastic at B0 than at B1 for ages 3, 7, and 28 days. For fly ash, compressive strength with respect to water-cement ratio was more elastic at F0 than at F1 for ages 3, 7, and 28 days. The research also tested different combinations of the levels of superplasticizer, blast furnace slag, and fly ash. Compressive strength elasticity with respect to water-cement ratio was lowest (elasticity = -1.746) with the combination S0, B0, F0 for a concrete age of 3 days, followed by an elasticity of -1.611 with S0, B0, F0 at 7 days; the highest was an elasticity of -1.414 with S0, B0, F0 at 28 days.
Based on these outcomes, three (3) non-linear model equations were formulated for predicting the output elasticity of compressive strength of concrete (in %) or the value of compressive strength (in MPa) with respect to water-cement ratio, one for each of the three concrete ages under investigation (3, 7, and 28 days). The three models showed that higher elasticity translates into higher compressive strength, and they revealed a trend of increasing concrete strength from 3 to 28 days for a given water-cement ratio. Using the models, an increasing modulus of elasticity from 3 to 28 days was deduced. Keywords: concrete, compressive strength, elasticity, water-cement
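The constant-elasticity idea underlying these models can be illustrated with a log-log fit: for a power law S = a·W^b, the slope b of ln(S) against ln(W) is the elasticity. The data below are synthetic, generated with a known elasticity of -1.5 (comparable in sign and magnitude to the study's values, but not taken from it).

```python
import numpy as np

# Sketch: recovering a constant elasticity from a log-log regression.
w = np.array([0.35, 0.40, 0.45, 0.50, 0.55])   # water-cement ratio
strength = 80.0 * w ** -1.5                    # synthetic power law, MPa

slope, intercept = np.polyfit(np.log(w), np.log(strength), 1)
# slope ~ -1.5: a 1% rise in the w/c ratio cuts strength by about 1.5%,
# at any point on the curve -- the "constant elasticity" property.
```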
Procedia PDF Downloads 2914005 Grain Refinement of Al-7Si-0.4Mg Alloy by Combination of Al-Ti-B and Mg-Al2Ca Master Alloys and Their Effects on Tensile Property
Authors: Young-Ok Yoon, Su-Yeon Lee, Seong-Ho Ha, Gil-Yong Yeom, Bong-Hwan Kim, Hyun-Kyu Lim, Shae K. Kim
Abstract:
Al-7Si-0.4Mg alloy (designated A356) is widely used for structural components in the automotive and aerospace industries owing to an excellent combination of castability and mechanical properties. Grain refinement has a significant effect on the mechanical properties of castings, mainly because it changes the distribution of the secondary phases. As grain refiners, Al-Ti-B master alloys containing TiAl3 and TiB2 particles have been widely used in Al foundries. In the melting of Mg-containing alloys, Mg loss and the formation of Mg-based inclusions, caused by the strong affinity of Mg for oxygen, have been an issue. This can be significantly improved by using a Mg+Al2Ca master alloy as the alloying element instead of pure Mg. Moreover, eutectic Si modification and grain refinement are obtained simultaneously, because Al2Ca behaves like Ca, a typical Si modifier. The present study focuses on the combined effects of the Mg+Al2Ca and Al-Ti-B master alloys on the grain refinement of Al-7Si-0.4Mg alloy and on their proper ratio for the optimum effect. The aim of this study, therefore, is to investigate the change in the microstructure of Al-7Si-0.4Mg alloy with different ratios of Ti and Al2Ca (detected as Ca content) and their effects on the tensile property. The distribution and morphology of the secondary phases after grain refinement will be discussed. Keywords: Al-7Si-0.4Mg alloy, Al2Ca, Al-Ti-B alloy, grain refinement
Procedia PDF Downloads 4334004 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown significant improvements in prediction accuracy in a variety of applications. The ability to identify isotope type and activity from spectral information depends on the feature extraction methods, followed by classification. The features extracted from the spectrum profiles seek patterns and relationships that represent the actual spectral energy in a low-dimensional space. Increasing the separation between classes in feature space improves the achievable classification accuracy. Feature extraction by a neural network is nonlinear, involving a variety of transformations and mathematical optimization, whereas principal component analysis relies on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information was preprocessed by finding its frequency components relative to time and using them as the training dataset. The Fourier transform used to extract the frequency components was optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4. The readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. A single-prediction approach to discriminating gamma and neutron events using deep machine learning showed high accuracy. The findings demonstrate that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning the deep machine learning models by hyperparameter optimization of the neural networks enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction. Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
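The vote-combining ensemble step mentioned above can be sketched as hard majority voting: each model predicts a class per spectrum, and the most common label wins. The models and labels below are placeholders, not the paper's trained networks.

```python
from collections import Counter

# Minimal sketch of a hard-voting ensemble over per-model predictions.

def majority_vote(predictions):
    """predictions: one label list per model, aligned by sample index."""
    per_sample = zip(*predictions)           # regroup labels by sample
    return [Counter(labels).most_common(1)[0][0] for labels in per_sample]

model_a = ["gamma", "neutron", "gamma"]      # invented per-model outputs
model_b = ["gamma", "gamma", "gamma"]
model_c = ["neutron", "neutron", "gamma"]
ensemble = majority_vote([model_a, model_b, model_c])
# -> ['gamma', 'neutron', 'gamma']
```

Soft voting (averaging class probabilities) is a common alternative when the models expose calibrated scores.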
Procedia PDF Downloads 1504003 A Corpus-Assisted Discourse Analysis of Adjectival Collocation of the Word 'Education' in the American Context
Authors: Ngan Nguyen
Abstract:
The study analyses adjectives collocating with the word ‘education’ in the American component of the Corpus of Global Web-based English, using a combination of corpus-linguistic and discourse-analytical methods to examine not only language patterns but also the social and political ideologies surrounding the topic. Several conclusions are drawn: (1) a large number of adjectival collocates of the word ‘education’ were identified and classified into four categories, representing four different aspects of education: level, quality, forms, and types of education; (2) in combination with the first three categories, ‘education’ denotes the act and process of teaching and learning, while with the last category it denotes a particular kind of teaching or training; (3) higher education is the topic of greatest concern to the American public; (4) five significant ideologies emerge from the corpus: higher education is associated with financial affairs, higher education is an industry, government monetary policy shapes higher education, people demand greater access to higher education, and people value higher education. The study contributes to the fields of developing word meanings through corpus analysis and of discourse analysis. Keywords: adjectival collocation, American context, corpus linguistics, discourse analysis, education
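The collocate-counting idea can be sketched with a toy extractor that counts adjectives immediately preceding "education". The hand-made adjective list and the mini-corpus below are illustrative only; real work would use a POS-tagged corpus such as GloWbE and an association measure, not this heuristic.

```python
import re
from collections import Counter

# Toy sketch: count known adjectives occurring directly before "education".
ADJECTIVES = {"higher", "public", "good", "formal", "physical"}

def adjective_collocates(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    pairs = zip(tokens, tokens[1:])          # adjacent word bigrams
    return Counter(a for a, b in pairs if b == "education" and a in ADJECTIVES)

corpus = ("Higher education is costly. Public education and higher "
          "education both matter; formal education too.")
counts = adjective_collocates(corpus)
# Counter({'higher': 2, 'public': 1, 'formal': 1})
```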
Procedia PDF Downloads 3454002 Antimicrobial Activity of Different Essential Oils in Synergy with Amoxicillin against Clinical Isolates of Methicillin-Resistant Staphylococcus aureus
Authors: Naheed Niaz, Nimra Naeem, Bushra Uzair, Riffat Tahira
Abstract:
The antibacterial activity of essential oils from different traditional plants against clinical isolates of methicillin-resistant Staphylococcus aureus (MRSA) was evaluated using the disk diffusion method. All the tested essential oils, at different concentrations, inhibited the growth of S. aureus to varying degrees. Cinnamon and thyme essential oils were observed to be the most effective against the test pathogen: even at the lowest concentration of these essential oils, 25 µl/ml, clear zones of inhibition of 9 ± 0.085 mm and 8 ± 0.051 mm, respectively, were recorded, and at higher concentrations the growth of MRSA was totally suppressed. The study also analysed the synergistic effects of the essential oils in combination with amoxicillin. The results showed that oregano and pennyroyal mint essential oils, which were not very effective alone, turned out to be strong synergistic enhancers, with activity increasing as the concentration of the essential oil increased. It may be concluded from the present results that cinnamon and thyme essential oils could serve as potential antimicrobial sources for the treatment of infections caused by methicillin-resistant Staphylococcus aureus (MRSA). Keywords: Staphylococcus aureus, essential oils, antibiotics, combination therapy, minimum inhibitory concentration
Procedia PDF Downloads 4454001 Antioxidant Activity of Essential Oils and Ethanolic Extracts of Four Medicinal Plants Alone and in Combination
Authors: Fatiha Bedjou, Meriem Meddas, Tadjajikt Chekkal
Abstract:
The present work aims to evaluate the antioxidant activity of the ethanolic extracts and essential oils of aromatic plants from the Lamiaceae family, Thymus algeriensis and Salvia rosmarinus, the Anacardiaceae, Pistacia lentiscus, and the Myrtaceae, Eucalyptus polybractea. Polyphenols were measured using the Folin-Ciocalteu method; the results showed that the essential oils studied, as well as the ethanolic extracts, are relatively rich in polyphenols. Their antioxidant properties were tested by the synthetic DPPH radical-scavenging method. IC50 values were determined from the curves of the percentage inhibition of the DPPH radical by the essential oils and the ethanolic extracts. According to our results, there is a correlation between the polyphenol content of the different essential oils and ethanolic extracts and their ability to neutralize free radicals. Several combinations were carried out between the essential oils, and also between the ethanolic extracts, in order to determine the type of interaction between the combined substances; the results are represented as isobolograms. Additive and super-additive effects were observed for combinations of essential oils, while super-additive and sub-additive effects were observed for combinations of ethanolic extracts. Keywords: essential oils, ethanolic extracts, DPPH, combination
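Determining an IC50 from a dose-inhibition curve, as described above, typically amounts to interpolating between the two concentrations that bracket 50% inhibition. The concentrations and inhibition percentages below are invented for illustration, not the study's measurements.

```python
# Sketch: IC50 by linear interpolation on a DPPH dose-inhibition curve.

def ic50(concentrations, inhibition_pct):
    """Concentrations and % inhibition sorted in increasing order."""
    points = list(zip(concentrations, inhibition_pct))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= 50.0 <= i2:
            # interpolate within the bracketing segment
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

conc = [10, 25, 50, 100]           # ug/mL, hypothetical
inhib = [18.0, 36.0, 64.0, 88.0]   # % DPPH radical scavenged, hypothetical
half_max = ic50(conc, inhib)       # 37.5 ug/mL: 50% falls between 36 and 64
```

In practice a sigmoidal (four-parameter logistic) fit is often preferred over piecewise-linear interpolation.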
Procedia PDF Downloads 584000 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK Biobank Study Utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences irritable bowel syndrome (IBS), a functional disorder defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients’ daily lives and activities. Although IBS is widespread, understanding of its underlying pathophysiology remains incomplete. This study aims to help characterize the IBS phenotype by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coded for IBS from the UK Biobank cohort and randomly selected patients without an IBS code, for a total sample size of 18,000. We selected the comorbidity codes of these cases from the 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including decision trees, gradient boosting, support vector machines (SVM), AdaBoost, logistic regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. The gradient boosting, logistic regression, and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the IBS phenotype, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS from comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods, and even larger and more diverse datasets, may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of logistic regression and XGBoost in predicting an IBS diagnosis. Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
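The "top 10% of features by importance" selection step described in this abstract reduces, in essence, to ranking per-feature importance scores and keeping the highest decile. The feature names and importance values below are invented placeholders, not scores from the UK Biobank models.

```python
# Sketch of keeping the top fraction of features by importance score.

def top_fraction(importances, fraction=0.10):
    """importances: dict mapping feature name -> importance score."""
    k = max(1, round(len(importances) * fraction))
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:k]

scores = {f"icd_{i}": imp for i, imp in enumerate(
    [0.02, 0.31, 0.05, 0.18, 0.01, 0.40, 0.03, 0.09, 0.07, 0.22])}
selected = top_fraction(scores, 0.20)  # ['icd_5', 'icd_1']
```

In the study the scores would come from XGBoost's feature importances, and the selected subset would then be refit with each classifier.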
Procedia PDF Downloads 1163999 A TFETI Domain Decomposition Solver for von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening
Authors: Martin Cermak, Stanislav Sysala
Abstract:
In this paper we present an efficient parallel implementation for elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve the nonlinear problem in parallel on supercomputers, decreasing the solution time and enabling problems with millions of DOFs. We consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. The model is discretized by the implicit Euler method in time and by the finite element method in space. The resulting system of nonlinear equations has a strongly semismooth and strongly monotone operator, and the semismooth Newton method is applied to solve it. The corresponding linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol package developed in MATLAB. Keywords: isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution
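The outer Newton iteration this abstract describes can be illustrated on a toy problem: each step linearises the operator and solves a linear system (the step TFETI handles in parallel in the paper). A scalar strongly monotone equation stands in here for the finite element system; this is a didactic sketch, not the MatSol implementation.

```python
# Minimal sketch of the Newton-type outer loop for a monotone problem.

def newton(f, df, x0, tol=1e-10, max_it=50):
    x = x0
    for _ in range(max_it):
        r = f(x)                 # residual of the nonlinear equation
        if abs(r) < tol:
            break
        x -= r / df(x)           # the "linearised problem" of one iteration
    return x

# f(x) = x + x**3 is strongly monotone; solve f(x) = 2 (exact root: x = 1).
root = newton(lambda x: x + x**3 - 2, lambda x: 1 + 3 * x**2, x0=1.5)
```

In the elastoplastic setting, `r / df(x)` becomes a large sparse linear solve, which is exactly where the TFETI decomposition provides the parallelism.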
Procedia PDF Downloads 4193998 Development and Utilization of Keratin-Fibrin-Gelatin Composite Films as Potential Material for Skin Tissue Engineering Application
Authors: Sivakumar Singaravelu, Giriprasath Ramanathan, M. D. Raja, Uma Tirichurapalli Sivagnanam
Abstract:
The goal of the present study was to develop and evaluate a composite film for tissue engineering applications. Keratin was extracted from bovine horn and used to prepare blend films of horn keratin (HK), physiologically clotted fibrin (PCF), and gelatin (G) in different stoichiometric ratios (1:1:1, 1:1:2, and 1:1:3) by the solvent casting method. The composite films (HK-PCF-G) were characterized physicochemically using Fourier transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and scanning electron microscopy (SEM), and their mechanical properties were analyzed. The tensile test results showed that the ultimate strength and elongation were 10.72 MPa and 4.83 MPa, respectively, for the 1:1:3 ratio combination. The SEM images showed a slightly smoother surface for the 1:1:3 ratio combination compared to the other films. To impart antibacterial activity against infection, the composite films were loaded with mupirocin (MP); the films acted as a suitable carrier, protecting and releasing the drug in a controlled manner. The developed composite film would be a suitable alternative material for tissue engineering applications. Keywords: bovine horn, keratin, fibrin, gelatin, tensile strength
Procedia PDF Downloads 448