Search results for: physiological functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3322


2632 Isolation of Protease Producing Bacteria from Soil Sediments of Ayiramthengu Mangrove Ecosystem

Authors: Reshmi Vijayan

Abstract:

Alkaline protease is one of the most important enzymes in the biological world. Microbial production of alkaline protease is receiving increasing attention from researchers due to its unique properties and substantial activity. Microorganisms are the most common sources of commercial enzymes due to their physiological and biochemical properties. The study was conducted on Ayiramthengu mangrove sediments to isolate protease producing bacteria. All the isolates were screened for proteolytic activity on skim milk agar plates at 37 °C for 48 hours. Protease activity was indicated by the formation of a clear zone around the colonies on skim milk agar medium. The activity of the enzyme was measured against a tyrosine standard curve and found to be 0.186285 U/ml/min.
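The activity computation the abstract describes (absorbance read against a tyrosine standard curve, then converted to U/ml/min) can be sketched as follows; the standard-curve readings, assay time and enzyme volume below are illustrative assumptions, not data from this study.

```python
# Sketch: estimating protease activity (U/ml/min) from a tyrosine
# standard curve. All numbers are illustrative assumptions.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical standard curve: absorbance at 660 nm vs umol tyrosine.
std_tyrosine = [0.0, 0.1, 0.2, 0.3, 0.4]          # umol
std_abs      = [0.00, 0.11, 0.21, 0.32, 0.42]

slope, intercept = fit_line(std_tyrosine, std_abs)

def protease_activity(sample_abs, minutes=10.0, enzyme_ml=1.0):
    """One unit (U) releases 1 umol tyrosine per minute per ml enzyme."""
    umol = (sample_abs - intercept) / slope       # read off the curve
    return umol / (minutes * enzyme_ml)

print(round(protease_activity(0.20), 4))
```

The slope and intercept simply invert the calibration; the division by assay time and enzyme volume is what yields the U/ml/min figure reported in the abstract.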

Keywords: protease, protease assay, skim milk agar medium, mangrove ecosystem

Procedia PDF Downloads 74
2631 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure on purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities of the two vehicles are calculated for a respondent, revealing his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers’ choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions. 
For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
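The phase-3 step can be sketched as follows; the respondents' part-worth utilities, the sampled price distribution and the fixed ICEV price are invented for illustration, not taken from the survey.

```python
# Monte Carlo market-share sketch: each respondent picks the vehicle
# with the higher total utility. Utilities and prices are invented.
import random

random.seed(42)

# Hypothetical part-worths: a vehicle-type constant plus a price weight
# (price in units of 10k NTD, purely illustrative).
respondents = [
    {"ev_const": 2.0, "icev_const": 1.0, "price_w": -0.05},
    {"ev_const": 0.5, "icev_const": 1.5, "price_w": -0.03},
    {"ev_const": 1.2, "icev_const": 1.1, "price_w": -0.04},
]

ICEV_PRICE = 70.0  # assumed fixed competing ICEV price

def market_share(ev_price):
    """Fraction of respondents whose total utility favors the EV."""
    chose_ev = 0
    for r in respondents:
        u_ev = r["ev_const"] + r["price_w"] * ev_price
        u_icev = r["icev_const"] + r["price_w"] * ICEV_PRICE
        if u_ev > u_icev:
            chose_ev += 1
    return chose_ev / len(respondents)

# Sample 1000 future EV prices (stand-in for the learning-curve fit)
# and collect the resulting market-share distribution.
shares = [market_share(random.gauss(80.0, 15.0)) for _ in range(1000)]
print(sum(shares) / len(shares))  # mean of the simulated distribution
```

The list `shares` is the probability distribution of market share the abstract refers to; summarizing it (mean, percentiles) gives more information than a single point forecast.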

Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 50
2630 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia

Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plots between HDI and these factors do not follow a specific pattern or form. Therefore, the HDI data in Indonesia can be modeled with nonparametric regression. The estimate of the regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One nonparametric regression method is the truncated spline. Truncated spline regression is a nonparametric approach built on modified segmented polynomial functions. The estimator of a truncated spline regression model depends on the selection of optimal knot points, the focal points at which the behavior of the truncated spline function changes. The optimal knot points were determined by the minimum value of generalized cross validation (GCV). In this article, a truncated spline nonparametric regression model was applied to the Human Development Index data. The best truncated spline regression model for the HDI data in Indonesia was obtained with the optimal knot point combination 5-5-5-4. Life expectancy and the percentage of illiterate people were the factors with a significant effect on HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model fits the HDI data in Indonesia well enough.
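A one-predictor sketch of truncated spline fitting with knot selection by GCV may clarify the method; the data here are synthetic, whereas the study itself uses four predictors and the knot combination 5-5-5-4.

```python
# Degree-1 truncated spline with one knot, selected by minimizing GCV.
# Synthetic data with a true slope change at x = 5.
import random

def design(x, knot):
    # Truncated power basis: [1, x, (x - knot)_+]
    return [1.0, x, max(x - knot, 0.0)]

def lstsq(rows, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            for k in range(i, p):
                A[j][k] -= f * A[i][k]
            v[j] -= f * v[i]
    b = [0.0] * p
    for i in reversed(range(p)):             # back substitution
        b[i] = (v[i] - sum(A[i][k] * b[k] for k in range(i + 1, p))) / A[i][i]
    return b

def gcv(xs, ys, knot):
    """GCV = (RSS/n) / (1 - p/n)^2, with p = number of basis functions."""
    rows = [design(x, knot) for x in xs]
    b = lstsq(rows, ys)
    rss = sum((yi - sum(bi * ri for bi, ri in zip(b, r))) ** 2
              for r, yi in zip(rows, ys))
    n, p = len(ys), len(b)
    return (rss / n) / (1 - p / n) ** 2

rng = random.Random(0)
xs = sorted(rng.uniform(0, 10) for _ in range(80))
ys = [(x if x < 5 else 5 + 3 * (x - 5)) + rng.gauss(0, 0.3) for x in xs]

candidates = [2.0, 3.5, 5.0, 6.5, 8.0]
best = min(candidates, key=lambda k: gcv(xs, ys, k))
print("optimal knot:", best)
```

The knot minimizing GCV recovers the true breakpoint of the synthetic data; the study's search is the same idea carried out over combinations of knots across four predictors.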

Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline

Procedia PDF Downloads 312
2629 A Study of Algebraic Structure Involving Banach Space through Q-Analogue

Authors: Abdul Hakim Khan

Abstract:

The aim of the present paper is to study the Banach space and combinatorial algebraic structure of R. It further aims to study the algebraic structure of the set of all q-extensions of classical formulas and functions for 0 < q < 1.
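For reference, the standard q-extension of a natural number n, to which the classical quantity reduces as q tends to 1, is given by the textbook definitions below (this is standard notation, not quoted from the paper):

```latex
[n]_q = \frac{1 - q^{\,n}}{1 - q} = 1 + q + q^2 + \cdots + q^{\,n-1},
\qquad 0 < q < 1, \qquad \lim_{q \to 1^-} [n]_q = n,
\qquad [n]_q! = [1]_q\,[2]_q \cdots [n]_q .
```

Classical formulas and functions are q-extended by replacing integers with q-numbers of this form, and the paper studies the algebraic structure of the resulting set.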

Keywords: integral functions, q-extensions, q numbers of metric space, algebraic structure of r and banach space

Procedia PDF Downloads 561
2628 Integration of STEM Education in Quebec, Canada – Challenges and Opportunities

Authors: B. El Fadil, R. Najar

Abstract:

STEM education is promoted by many scholars and curricula around the world, but it is not yet well established in the province of Quebec in Canada. In addition, effective instructional STEM activities and design methods are required to ensure that students and teachers' needs are being met. One potential method is the Engineering Design Process (EDP), a methodology that emphasizes the importance of creativity and collaboration in problem-solving strategies. This article reports on a case study that focused on using the EDP to develop instructional materials by means of making a technological artifact to teach mathematical variables and functions at the secondary level. The five iterative stages of the EDP (design, make, test, infer, and iterate) were integrated into the development of the course materials. Data was collected from different sources: pre- and post-questionnaires, as well as a working document dealing with pupils' understanding based on designing, making, testing, and simulating. Twenty-four grade seven (13 years old) students in Northern Quebec participated in the study. The findings of this study indicate that STEM activities have a positive impact not only on students' engagement in classroom activities but also on learning new mathematical concepts. Furthermore, STEM-focused activities have a significant effect on problem-solving skills development in an interdisciplinary approach. Based on the study's results, we can conclude, inter alia, that teachers should integrate STEM activities into their teaching practices to increase learning outcomes and attach more importance to STEM-focused activities to develop students' reflective thinking and hands-on skills.

Keywords: engineering design process, motivation, stem, integration, variables, functions

Procedia PDF Downloads 72
2627 [Keynote Talk]: Applying p-Balanced Energy Technique to Solve Liouville-Type Problems in Calculus

Authors: Lina Wu, Ye Li, Jia Liu

Abstract:

We are interested in solving Liouville-type problems to explore constancy properties for maps or differential forms on Riemannian manifolds. Geometric structures on manifolds, the existence of constancy properties for maps or differential forms, and energy growth for maps or differential forms are intertwined. In this article, we concentrate on the discovery of solutions to Liouville-type problems where the manifolds are Euclidean spaces (i.e., flat Riemannian manifolds) and the maps become real-valued functions. Liouville-type results of vanishing properties for functions are obtained. The original contribution of our research is to extend the q-energy for a function from finite in Lq space to infinite in non-Lq space by applying the p-balanced technique with q = p = 2. Calculation techniques such as Hölder's inequality and tests for series have been used to evaluate limits and integrals for function energy. The calculation ideas and computational techniques for solving Liouville-type problems shown in this article, which are applied in Euclidean spaces, can be generalized as a successful algorithm that works for both maps and differential forms on Riemannian manifolds. This algorithm has a far-reaching impact on the research work of solving Liouville-type problems in general settings involving infinite energy. The p-balanced technique in this algorithm provides a clue to success on the road to q-energy extension from finite to infinite.
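In the usual notation (the abstract does not spell the functional out), the q-energy of a smooth function u on a Riemannian manifold M is

```latex
E_q(u) \;=\; \int_M |\nabla u|^{\,q}\, dv ,
```

so u has finite q-energy when E_q(u) < ∞; the extension described above relaxes this finiteness assumption through p-balanced growth conditions, with q = p = 2.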

Keywords: differential forms, Hölder inequality, Liouville-type problems, p-balanced growth, p-harmonic maps, q-energy growth, tests for series

Procedia PDF Downloads 214
2626 Molecular Characterization of Arginine Sensing Response in Unravelling Host-Pathogen Interactions in Leishmania

Authors: Evanka Madan, Madhu Puri, Dan Zilberstein, Rohini Muthuswami, Rentala Madhubala

Abstract:

The extensive interaction between the host and pathogen metabolic networks decidedly shapes the outcome of infection. Utilization of arginine by the host and pathogen is critical for determining the outcome of pathogenic infection. Infection with L. donovani, an intracellular parasite, leads to extensive competition for arginine between the host and the parasite. One of the major amino acid (AA) sensing signaling pathways in mammalian cells is the mammalian target of rapamycin complex I (mTORC1) pathway. mTORC1, as a nutrient sensor, controls numerous metabolic pathways. Arginine is critical for mTORC1 activation. SLC38A9 is the arginine sensor for mTORC1 and is activated during arginine sufficiency. L. donovani transports arginine via a high-affinity transporter (LdAAP3) that is rapidly up-regulated by the arginine deficiency response (ADR) in intracellular amastigotes. This study, to the authors' best knowledge, investigates the interaction between two arginine sensing systems that act in the same compartment, the lysosome. One is important for macrophage defense, and the other is essential for pathogen virulence. We hypothesize that the latter modulates lysosomal arginine to prevent the host defense response. The work presented here identifies an upstream regulatory role of LdAAP3 in regulating the expression of the SLC38A9-mTORC1 pathway, and consequently, its function in L. donovani infected THP-1 cells cultured in 0.1 mM and 1.5 mM arginine. It was found that at physiological levels of arginine (0.1 mM), infecting THP-1 with Leishmania leads to increased levels of SLC38A9 and mTORC1 via an increase in the expression of RagA. However, the reverse was observed with LdAAP3 mutants, reflecting the positive regulatory role of LdAAP3 on host SLC38A9. At the molecular level, upon infection, mTORC1 and RagA were found to be activated at the surface of phagolysosomes, where they form a complex with phagolysosome-localized SLC38A9. 
To reveal the relevance of SLC38A9 under physiological levels of arginine, endogenous SLC38A9 was depleted, and a substantial reduction in the expression of host mTORC1, its downstream active substrate p-P70S6K1, and parasite LdAAP3 was observed, thereby showing that silencing SLC38A9 suppresses the ADR. In brief, to the authors' best knowledge, these results reveal an upstream regulatory role of LdAAP3 in manipulating SLC38A9 arginine sensing in host macrophages. Our study indicates that intra-macrophage survival of L. donovani depends on the availability and transport of extracellular arginine. An understanding of the sensing pathways of both parasite and host will open a new perspective on the molecular mechanism of host-parasite interaction and, consequently, on treatments for leishmaniasis.

Keywords: arginine sensing, LdAAP3, L. donovani, mTORC1, SLC38A9, THP-1

Procedia PDF Downloads 101
2625 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and macOS. The R programming environment is a widely used open source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to these built-in functions alone. The program has many benefits over similar programs: it is free and, as an open source project, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that there is no need for technical details, and it can be applied to any component whose time to failure we need to know in order to plan appropriate maintenance, maximize usage and minimize costs. In this case, calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with higher quality fans to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure or until the end of the study (whichever came first) was recorded. The dataset consists of two variables: hours and status. 
Hours records the running time of each fan, and status records the event: 1 for failed, 0 for censored data. Censored data represent cases where the specific fan could not be tracked to the end of its life, so it could subsequently have failed or survived. Obtaining the result using R was easy and quick, and the program takes censored data into consideration and includes them in the results; this is not so easy in hand calculation. For the purpose of the paper, results from the R program have been compared to hand calculations in two different cases: censored data treated as failures, and censored data treated as successes. In all three cases, the results are significantly different. If the user decides to use R for further calculations, it will give more precise results when working with censored data than hand calculation.
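The censored-data bookkeeping that R's survival tooling automates can be illustrated with a hand-rolled Kaplan-Meier estimator; the hours below are invented for the sketch, not the 70-fan field data.

```python
# Kaplan-Meier survival estimate for right-censored data.
# status: 1 = failed, 0 = censored (still running at last observation).

def kaplan_meier(hours, status):
    """Return (time, S(t)) at each distinct failure time."""
    data = sorted(zip(hours, status))
    n_at_risk = len(data)
    s, out, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for h, st in data if h == t and st == 1)
        ties = sum(1 for h, st in data if h == t)
        if deaths:                       # censored times change only n_at_risk
            s *= 1 - deaths / n_at_risk
            out.append((t, s))
        n_at_risk -= ties
        i += ties
    return out

hours  = [450, 460, 1150, 1150, 1600, 1850, 2030, 2070, 3000]
status = [1,   1,   1,    0,    0,    1,    1,    1,    0]
for t, s in kaplan_meier(hours, status):
    print(t, round(s, 3))
```

Note how censored observations shrink the risk set without producing a step in S(t), exactly the correction that is awkward in hand calculation: treating those rows as either failures or successes, as in the paper's two hand-calculated cases, gives visibly different curves.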

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 383
2624 Hydrological Modelling of Geological Behaviours in Environmental Planning for Urban Areas

Authors: Sheetal Sharma

Abstract:

Runoff, decreasing water levels, and recharge in urban areas have become a complex issue nowadays, pointing to defective urban design and increasing population as causes. Very little has been discussed or analysed regarding water-sensitive urban master plans or local area plans. Land use planning deals with land transformation from natural areas into developed ones, which leads to changes in the natural environment. Knowledge of the relationship between existing patterns of land use-land cover and recharge, with respect to the prevailing soil below, is scarce compared to the speed of development. The parameters of incompatibility between urban functions and the functions of the natural environment are multiplying. Changes in land patterns due to built-up areas, pavements, roads and similar land cover seriously affect surface water flow. They also change the permeability and absorption characteristics of the soil. Urban planners need to know natural processes along with modern means and the best technologies available, as there is a huge gap between basic knowledge of natural processes and its application in balanced development planning with minimum impact on water recharge. The present paper analyzes the variations in land use-land cover and their impacts on surface flows and sub-surface recharge in the study area. The methodology adopted was to analyse the changes in land use and land cover using GIS and AutoCAD Civil 3D. The variations were then used in computer modeling with the Storm Water Management Model (SWMM) to find the runoff for various soil groups and the resulting recharge, observing water levels in POW data for the last 40 years of the study area. The results were analyzed again to find the best correlations for sustainable recharge in urban areas.
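The abstract does not state which runoff formulation was configured in SWMM; as one standard illustration of how soil group and land cover enter a runoff estimate, the SCS curve-number method can be sketched as follows (the curve numbers are assumed values, not those of the study area).

```python
# SCS curve-number sketch: direct runoff depth from a storm, for
# different land covers / hydrologic soil groups. CN values assumed.

def scs_runoff(p_mm, cn):
    """SCS-CN direct runoff depth (mm) for rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = 0.2 * s                    # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Same 50 mm storm over different covers (illustrative curve numbers).
for cover, cn in [("paved surface", 98),
                  ("lawn, soil group D", 80),
                  ("open soil, group B", 61)]:
    print(cover, round(scs_runoff(50.0, cn), 1))
```

The spread of runoff depths for the same storm shows the land-cover effect the paper quantifies: paving converts nearly all rainfall to runoff, while permeable covers leave most of it available for recharge.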

Keywords: geology, runoff, urban planning, land use-land cover

Procedia PDF Downloads 291
2623 Management of Dysphagia after Supra Glottic Laryngectomy

Authors: Premalatha B. S., Shenoy A. M.

Abstract:

Background: Rehabilitation of swallowing is as vital as speech in surgically treated head and neck cancer patients, to maintain nutritional support, enhance wound healing and improve quality of life. Aspiration following supraglottic laryngectomy is very common, and its rehabilitation is crucial, requiring involvement of the speech therapist in close contact with the head and neck surgeon. Objectives: To examine swallowing outcomes after intensive therapy in supraglottic laryngectomy. Materials: Thirty-nine supraglottic laryngectomees participated in the study. Of them, 36 subjects were male and 3 were female, in the age range of 32-68 years. Eighteen subjects had undergone standard supraglottic laryngectomy (Group 1) for supraglottic lesions, whereas 21 had undergone extended supraglottic laryngectomy (Group 2) for base-of-tongue and lateral pharyngeal wall lesions. Prior to surgery, a visit by the speech pathologist was mandatory to assess suitability for surgery and rehabilitation. Dysphagia rehabilitation started after decannulation of the tracheostoma, focusing on orientation to anatomy and physiological variation before and after surgery, tailor-made for each individual based on the type and extent of surgery. A supraglottic diet (soft solids with the supraglottic swallow method) was advocated to prevent aspiration. The success of the intervention was documented as the number of sessions taken to swallow different food consistencies, and as the percentage of subjects who achieved a satisfactory swallow within a given number of weeks in both groups. Results: Statistical data were computed in two ways in both groups: 1) the percentage (%) of subjects who swallowed satisfactorily within a time frame of less than 3 weeks to more than 6 weeks, and 2) the number of sessions taken to swallow each food consistency without aspiration. 
The study indicated that in Group 1 (standard supraglottic laryngectomy), 61% of subjects (n=11) were successfully rehabilitated, but their swallowing normalcy was delayed until an average of the 29th postoperative day (3-6 weeks). Thirty-three percent (33%, n=6) of the subjects could swallow satisfactorily without aspiration even before 3 weeks, and only 5% (n=1) needed more than 6 weeks to achieve normal swallowing ability. Among Group 2 subjects (extended supraglottic laryngectomy), only 47% (n=10) achieved a satisfactory swallow by 3-6 weeks, and 24% (n=5) achieved normal swallowing ability before 3 weeks. Around 4% (n=1) needed more than 6 weeks, and as many as 24% (n=5) continued to be supplemented with nasogastric feeding even 8-10 months postoperatively, as they exhibited severe aspiration. As far as food consistencies were concerned, Group 1 subjects were able to swallow all types without aspiration much earlier than Group 2 subjects: Group 1 needed only 8 swallowing therapy sessions for thickened soft solids and 15 sessions for liquids, whereas Group 2 required 14 sessions for soft solids and 17 sessions for liquids to achieve swallowing normalcy without aspiration. Conclusion: The study highlights the importance of dysphagia intervention by the speech pathologist in supraglottic laryngectomees.

Keywords: dysphagia management, supraglottic diet, supraglottic laryngectomy, supraglottic swallow

Procedia PDF Downloads 216
2622 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines

Authors: Xiaogang Li, Jieqiong Miao

Abstract:

As the prediction accuracy of the grey forecasting model is low, an improved grey prediction model is put forward. First, a trigonometric function transform is applied to the original data sequence to improve its smoothness; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine to give the grey support vector machine model (SGM-SVM). Before establishing the model, the data are preprocessed with trigonometric functions and the accumulated generating operation to enhance the smoothness of the data and weaken its randomness; a support vector machine (SVM) prediction model is then established for the preprocessed data, with model parameters selected by a genetic algorithm to obtain the global optimum. Finally, the forecast data are recovered through the "regressive generate" operation. To show that the SGM-SVM model is superior to other models, battery life data from CALCE were selected. The presented model was used to predict battery life, and the predicted result was compared with those of the grey model and support vector machines. For a more intuitive comparison of the three models, this paper presents the root mean square error of the three models. The results show that the grey support vector machine (SGM-SVM) predicts life best, with a root mean square error of only 3.18%.
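The preprocessing pipeline (trigonometric transform, accumulated generating operation, model fit, then the "regressive generate" inversion) can be sketched as follows. To keep the example self-contained, the SVR-with-genetic-algorithm stage is replaced by a plain least-squares trend, and the capacity values are invented rather than taken from the CALCE data.

```python
# SGM-style preprocessing sketch. The SVR + genetic-algorithm model of
# the paper is stood in for by a linear trend; all data are invented.
import math

capacity = [1.00, 0.97, 0.95, 0.92, 0.90, 0.87, 0.85]  # fading capacity

# 1) Trigonometric transform to improve smoothness (one common choice).
smoothed = [math.sin(c) for c in capacity]

# 2) AGO: cumulative sums weaken the randomness of the sequence.
ago, run = [], 0.0
for v in smoothed:
    run += v
    ago.append(run)

# 3) Model stage (SVR in the paper; a least-squares trend stands in).
n = len(ago)
xs = list(range(n))
mx, my = sum(xs) / n, sum(ago) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ago))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
pred_ago = [slope * x + intercept for x in xs] + [slope * n + intercept]

# 4) "Regressive generate": difference the AGO series, then invert sin.
pred_next = math.asin(pred_ago[-1] - pred_ago[-2])
print(round(pred_next, 3))  # forecast next capacity value
```

Steps 1-2 and step 4 are exact inverses of each other, so any model can be slotted into step 3; in the paper that slot holds an SVM whose parameters a genetic algorithm tunes.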

Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error

Procedia PDF Downloads 439
2621 Simulation-Based Control Module for Offshore Single Point Mooring System

Authors: Daehyun Baek, Seungmin Lee, Minju Kim, Jangik Park, Hyeong-Soon Moon

Abstract:

SPM (Single Point Mooring) is a mooring buoy facility installed off a coast near an oil and gas terminal that cannot berth FPSOs or large oil tankers of high draft due to geometrical limitations. Loading and unloading of crude oil and gas through a subsea pipeline can be carried out between the mooring buoy, ships and onshore facilities. SPM is a standalone offshore system which has to withstand a harsh marine environment, with conditions such as high wind and strong currents. Therefore, SPM is required to have high stability, reliability and durability. SPM is also an integrated system comprising power management, high pressure valve control, sophisticated hardware/software and a long distance communication system. In order to secure the required functions of the SPM system, a simulation model for the integrated system of SPM has been developed using the MATLAB Simulink and Stateflow tools. The developed model consists of the configuration of the hydraulic system for opening and closing the PLEM (Pipeline End Manifold) valves and the control system logic. To verify the functions of the model, an integrated simulation model for the overall SPM system was also developed by considering the handshaking variables between individual systems. In addition to the dynamic model, a self-diagnostic function to determine failure of the system was configured, which enables the SPM system itself to alert users once a failure signal arises. The SPM system is controlled and monitored by an HMI system capable of managing it remotely, which was achieved by building a communication environment between the SPM system and the HMI system.
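The self-diagnostic idea can be illustrated with a tiny state machine that compares a commanded valve state against its sensed feedback and latches a failure alert; the signal names and timeout here are invented for the sketch, since the actual logic lives in the Simulink/Stateflow model.

```python
# Sketch of a self-diagnostic monitor for a PLEM valve: a persistent
# mismatch between command and feedback latches a failure flag that
# would be reported to the HMI. Names and timeout are hypothetical.

class PlemValveMonitor:
    """Flags a failure when commanded and sensed valve states disagree
    for longer than a timeout (counted in simulation ticks)."""

    def __init__(self, timeout_ticks=3):
        self.timeout = timeout_ticks
        self.mismatch_ticks = 0
        self.failed = False

    def step(self, commanded_open, sensed_open):
        if commanded_open != sensed_open:
            self.mismatch_ticks += 1     # mismatch persists
        else:
            self.mismatch_ticks = 0      # agreement resets the counter
        if self.mismatch_ticks > self.timeout:
            self.failed = True           # latch and alert the operator
        return self.failed

monitor = PlemValveMonitor()
# Valve commanded open but feedback stays closed for 5 ticks -> failure.
states = [monitor.step(True, False) for _ in range(5)]
print(states)
```

The latch mirrors the handshaking-variable checks described above: a transient disagreement is tolerated, but a persistent one raises a failure signal for the HMI.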

Keywords: HMI system, mooring buoy, simulink simulation model, single point mooring, stateflow

Procedia PDF Downloads 402
2620 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impulse to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. 
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
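The component-combination idea can be sketched as follows: each of the four components receives a lognormal fragility curve, and a simple "any component damaged" rule stands in for the paper's combination rules. The curve parameters and the independence assumption are illustrative, not the ERMESS implementation.

```python
# Component fragilities combined into a building-level damage probability.
# Medians/dispersions and the combination rule are assumed for the sketch.
import math
import random

def lognormal_fragility(v, median, beta):
    """P(damage state exceeded | gust speed v), lognormal CDF."""
    return 0.5 * (1 + math.erf(math.log(v / median) / (beta * math.sqrt(2))))

components = {                      # (median gust m/s, dispersion)
    "roof covering":     (35.0, 0.40),
    "roof structure":    (55.0, 0.35),
    "envelope wall":     (60.0, 0.30),
    "envelope openings": (45.0, 0.40),
}

def building_damaged_prob(v, trials=20000, seed=1):
    """Monte Carlo estimate of P(at least one component damaged),
    assuming independent component capacities."""
    probs = [lognormal_fragility(v, m, b) for m, b in components.values()]
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if any(rng.random() < p for p in probs))
    return hits / trials

print(round(building_damaged_prob(40.0), 3))
```

Convolving such building-level exceedance probabilities with a wind hazard curve and a consequence model is what turns a set of component fragility curves into the vulnerability model the software aims to assemble.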

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 169
2619 Simple and Effective Method of Lubrication and Wear Protection

Authors: Buddha Ratna Shrestha, Jimmy Faivre, Xavier Banquy

Abstract:

By precisely controlling the molecular interactions between anti-wear macromolecules and bottle-brush lubricating molecules in the solution state, we obtained a fluid with excellent lubricating and wear protection capabilities. The reason for this synergistic behavior lies in the subtle interaction forces between the fluid components, which allow the confined macromolecules to sustain high loads under shear without rupture. Our results provide rational guides to design such fluids for virtually any type of surface. The lowest friction coefficient and the maximum pressure that the system can sustain are 5*10^-3 and 2.5 MPa, which is close to physiological pressure. Lubricating and protecting surfaces against wear using liquid lubricants is a great technological challenge. Until now, wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface, while lubrication was provided by a lubricating fluid. Hence, here we searched for a simple, effective and applicable solution to the above problem using the surface force apparatus (SFA). SFA is a powerful technique with sub-angstrom resolution in distance and 10 nN/m resolution in interaction force while performing friction experiments; it thus gives direct insight into the interaction force, the material and the friction at the interface, and the exact contact area is always known. 
Currently, the frictional data obtained while sliding flat mica surfaces are being compared, and a particular solution mixture has been found to surpass all other combinations. Further, we would like to confirm that the lubrication and anti-wear protection remain the same by performing the friction experiments on synthetic cartilage.

Keywords: bottle brush polymer, hyaluronic acid, lubrication, tribology

Procedia PDF Downloads 249
2618 Functional Aspects of Carbonic Anhydrase

Authors: Bashistha Kumar Kanth, Seung Pil Pack

Abstract:

Carbonic anhydrase (CA) is ubiquitously distributed among organisms and is fundamental to many eukaryotic biological processes such as photosynthesis, respiration, CO2 and ion transport, calcification and acid–base balance. CA also occurs across the spectrum of prokaryotic metabolism, in both the archaeal and bacterial domains, and many individual species contain more than one class. In this review, the various roles of CA in cellular mechanisms are presented in order to identify CA functions applicable to industrial use.

Keywords: carbonic anhydrase, mechanism, CO2 sequestration, respiration

Procedia PDF Downloads 470
2617 Strengthening Regulation and Supervision of Microfinance Sector for Development in Ethiopia

Authors: Megersa Dugasa Fite

Abstract:

This paper analyses regulatory and supervisory issues in the Ethiopian microfinance sector, which caters to those who have been excluded from the formal financial sector. Microfinance has received increased importance in development because of its goal of extending credit to the poor to raise their economic and social well-being and improve their quality of life. Microfinance is at present moving towards a credit-plus period, covering savings and insurance functions as well. It thus helps reduce financial exclusion and social segregation, alleviates poverty and, consequently, stimulates development. Ethiopian microfinance policy has generally been positive and developmental, but major regulatory and supervisory limitations are disappointing: the absolute prohibition on NGOs participating in microcredit functions, higher risks for depositors of microfinance institutions, the lack of credit information services with research and development, unmet demand, and the risk of market failure due to over-regulation. Therefore, to overcome the limited reach and serious problems typical of informal financial intermediation, and to address the failure of formal banks to provide basic financial services to a significant portion of the country's population, more needs to be done on microfinance. Certain key regulatory and supervisory revisions should hence be undertaken to strengthen the Ethiopian microfinance sector so that it can give the majority poor practical access to a range of high-quality financial services that help them work their way out of poverty and the incapacity it imposes.

Keywords: micro-finance, micro-finance regulation and supervision, micro-finance institutions, financial access, social segregation, poverty alleviation, development, Ethiopia

Procedia PDF Downloads 367
2616 Decision Support System Based On GIS and MCDM to Identify Land Suitability for Agriculture

Authors: Abdelkader Mendas

Abstract:

The integration of MultiCriteria Decision Making (MCDM) approaches in a Geographical Information System (GIS) provides a powerful spatial decision support system which offers the opportunity to efficiently produce land suitability maps for agriculture. Indeed, GIS is a powerful tool for analyzing spatial data and establishing a process for decision support. Because of their spatial aggregation functions, MCDM methods can facilitate decision making in situations where several solutions are available, various criteria have to be taken into account and decision-makers are in conflict. The parameters and the classification system used in this work are inspired by the FAO (Food and Agriculture Organization) approach to sustainable agriculture. A spatial decision support system has been developed for establishing the land suitability map for agriculture. It incorporates the multicriteria analysis method ELECTRE Tri (ELimination Et Choix Traduisant la REalité) within a GIS software environment. The main purpose of this research is to propose a conceptual and methodological framework for the combination of GIS and multicriteria methods in a single coherent system that takes into account the whole process, from the acquisition of spatially referenced data to decision-making. In this context, the algorithm of ELECTRE Tri is incorporated into a GIS environment and added to the other analysis functions of the GIS. This approach has been tested on an area in Algeria, and a land suitability map for durum wheat has been produced. The results indicate that the ELECTRE Tri method, integrated into a GIS, is well suited to the problem of land suitability for agriculture, and the coherence of the obtained maps confirms the system's effectiveness.
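As a rough illustration of the assignment logic only (not the authors' implementation, which also handles indifference/preference thresholds, discordance and vetoes), a concordance-only pessimistic ELECTRE Tri sketch can be written as follows; the land-suitability criteria, category profiles and weights below are hypothetical:

```python
def electre_tri_pessimistic(alt, profiles, weights, lam=0.7):
    """Simplified ELECTRE Tri (concordance only, no vetoes): walk the
    category profiles from best to worst and assign the alternative to
    the first category whose lower profile it outranks at cutoff lam."""
    total = sum(weights)
    for cat, profile in enumerate(profiles):   # profiles sorted best-first
        concordance = sum(w for a, p, w in zip(alt, profile, weights)
                          if a >= p) / total
        if concordance >= lam:
            return cat                         # 0 = best category
    return len(profiles)                       # fell through: worst category

# Hypothetical criteria: [soil depth (cm), pH score, slope score]
profiles = [[80, 7, 8], [50, 5, 5]]   # lower bounds: "highly"/"moderately" suitable
weights = [5, 3, 2]
print(electre_tri_pessimistic([85, 6, 9], profiles, weights))
```

Here a parcel scoring [85, 6, 9] meets the "highly suitable" profile on criteria carrying 7/10 of the weight, which reaches the 0.7 cutoff and yields category 0.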

Keywords: multicriteria decision analysis, decision support system, geographical information system, land suitability for agriculture

Procedia PDF Downloads 610
2615 The Physiological Effect of Cold Atmospheric Pressure Plasma on Cancer Cells, Cancer Stem Cells, and Adult Stem Cells

Authors: Jeongyeon Park, Yeo Jun Yoon, Jiyoung Seo, In Seok Moon, Hae Jun Lee, Kiwon Song

Abstract:

Cold Atmospheric Pressure Plasma (CAPP) is defined as a partially ionized gas with electrically charged particles at room temperature and atmospheric pressure. CAPP generates reactive oxygen species (ROS) and reactive nitrogen species (RNS), and has potential as a new apoptosis-promoting cancer therapy. With an annular-type dielectric barrier discharge (DBD) CAPP-generating device combined with a helium (He) gas feeding system, we showed that CAPP selectively induced apoptosis in various cancer cells while promoting proliferation of adipose tissue-derived stem cells (ASCs). The apoptotic effect of CAPP was highly selective toward p53-mutated cancer cells, and intracellular ROS was mainly responsible for apoptotic cell death in CAPP-treated cancer cells. CAPP induced apoptosis even in doxorubicin-resistant cancer cell lines, demonstrating its feasibility as a potent cancer therapy. With the same device and the same exposure conditions used for cancer cells, CAPP stimulated proliferation of ASCs, a kind of mesenchymal stem cell capable of self-renewal and of differentiating into adipocytes, chondrocytes, osteoblasts and neurons. CAPP-treated ASCs expressed stem cell markers and differentiated into adipocytes as untreated ASCs did. The increase in proliferation by CAPP in ASCs was offset by an NO scavenger but was not affected by ROS scavengers, suggesting that NO generated by CAPP is responsible for the activated proliferation of ASCs. Cancer stem cells are generally reported to be resistant to known cancer therapies. When we applied CAPP from the same device, under the same exposure conditions used for cancer cells, to liver cancer stem cells (CSCs) expressing the CD133 and epithelial cell adhesion molecule (EpCAM) cancer stem cell markers, apoptotic cell death was not observed. Apoptotic cell death of liver CSCs was, however, induced by CAPP generated from a device with an air-based flat-type DBD.
Exposure of liver CSCs to this CAPP decreased their viability to a great extent, suggesting plasma can be a promising anti-cancer treatment. To validate whether CAPP can serve as an anti-cancer treatment or as an adjuvant modality to eliminate remnant tumor tissue in surgery for vestibular schwannoma, we applied CAPP to the mouse schwannoma cell line SC4 Nf2 ‑/‑ and the human schwannoma cell line HEI-193; the CAPP treatment led to an anti-proliferative effect in both cell lines. We are currently studying the molecular mechanisms underlying the differential physiological effects of CAPP: the proliferation of ASCs and the apoptosis of various cancer cells and CSCs.

Keywords: cold atmospheric pressure plasma, apoptosis, proliferation, cancer cells, adult stem cells

Procedia PDF Downloads 261
2614 Homeostatic Analysis of the Integrated Insulin and Glucagon Signaling Network: Demonstration of Bistable Response in Catabolic and Anabolic States

Authors: Pramod Somvanshi, Manu Tomar, K. V. Venkatesh

Abstract:

Insulin and glucagon are responsible for the homeostasis of key plasma metabolites such as glucose, amino acids and fatty acids in the blood plasma, and the two hormones act antagonistically to each other during both the secretion and signaling stages. In the present work, we analyze the effect of macronutrients on the response of the integrated insulin and glucagon signaling pathways. The pathways are connected by DAG (a calcium signaling component that is part of the glucagon signaling module), which activates PKC and inhibits IRS (an insulin signaling component), constituting one crosstalk; AKT (an insulin signaling component) inhibits cAMP (a glucagon signaling component) through PDE3, forming the other crosstalk between the two pathways. The physiological level of anabolism and catabolism is captured through a metric quantified by the activity levels of AKT and PKA in their phosphorylated states, which represent the insulin and glucagon signaling endpoints, respectively. Under resting and starving conditions, the phosphorylation metric represents homeostasis, indicating a balance between anabolic and catabolic activities in the tissues. Steady-state analysis of the integrated network demonstrates a bistable response in the phosphorylation metric with respect to input plasma glucose levels: two steady states (one in the homeostatic zone, the other in the anabolic zone) are possible for a given glucose concentration, depending on whether it is reached along the ON or the OFF path. When glucose levels rise above normal, as in post-meal conditions, the bistability is observed in the anabolic space, denoting the dominance of glycogenesis in the liver. For glucose concentrations below physiological levels, as during exercise, the metabolic response lies in the catabolic space, denoting the prevalence of glycogenolysis in the liver.
The non-linear positive feedback of AKT on IRS in the insulin signaling module is the main cause of the bistable response. The span of bistability in the phosphorylation metric increases as plasma fatty acid and amino acid levels rise, and eventually the response turns monostable and catabolic, representing diabetic conditions. In the case of high-fat or high-protein diets, fatty acids and amino acids inhibit the insulin signaling pathway by increasing the serine phosphorylation of the IRS protein via the activation of PKC and S6K, respectively. A similar analysis was also performed with respect to input amino acid and fatty acid levels. This emergent property of bistability in the integrated network helps explain why obesity and diabetes become extremely difficult to treat once the blood glucose level rises beyond a certain value.
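The hysteresis described above can be illustrated with a minimal toy model: a single variable under Hill-type positive feedback (a generic stand-in for the AKT-IRS loop, not the authors' full signaling network), swept up and down in input to expose the ON and OFF branches. All parameter values are illustrative:

```python
import numpy as np

def steady_state(signal, x0, beta=2.0, n=4, K=1.0, k=1.0, dt=0.01, steps=5000):
    """Relax dx/dt = signal + beta*x**n/(K + x**n) - k*x to steady state
    by explicit Euler integration. The Hill term is a generic positive
    feedback; `signal` plays the role of the plasma glucose input."""
    x = x0
    for _ in range(steps):
        x += dt * (signal + beta * x**n / (K + x**n) - k * x)
    return x

inputs = np.linspace(0.0, 0.5, 11)
ups = [steady_state(s, x0=0.0) for s in inputs]    # ON path: start low
downs = [steady_state(s, x0=2.0) for s in inputs]  # OFF path: start high
# At low input the two paths settle on different branches (bistability);
# at high input they coincide (the response becomes monostable).
```

Sweeping the input both ways and comparing the two branch values is exactly the ON-path/OFF-path comparison the abstract describes.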

Keywords: bistability, diabetes, feedback and crosstalk, obesity

Procedia PDF Downloads 251
2613 Current Cosmetic Treatments in Pregnancy

Authors: Daniela F. Maluf, Fernanda Roters, Luma C. F. Silva

Abstract:

The goal of this work is to report the main dermatological alterations occurring during pregnancy and the current cosmetic protocols available and recommended for safe use. Throughout pregnancy, a woman's body undergoes many transformations, such as hormonal changes and weight gain. These alterations can result in undesirable skin conditions that end up affecting the future mother's life. The main complaints of pregnant women involve the onset of melasma, varicose veins, edema, and natural skin aging. Even though it is usually recommended to wait until after the birth before using cosmetics, there are alternatives for preventing and treating these alterations during pregnancy. In all these cases, updated information on the safety and efficacy of new actives and technologies in cosmetic products is needed. The purpose of this study was to conduct a literature review of the main skin alterations during pregnancy and the currently recommended treatments, according to the current legislation.

Keywords: pregnancy, cosmetic, treatment, physiological changes

Procedia PDF Downloads 341
2612 Spatiotemporal Variation Characteristics of Soil pH around the Balikesir City, Turkey

Authors: Çağan Alevkayali, Şermin Tağil

Abstract:

Determining the surface distribution of soil pH in urban areas is essential for sustainable development. Soil properties change as a result of agricultural, industrial and other urban activities. Soil pH has an important effect on soil productivity, which depends on a sensitive and complex relation between plant and soil. Furthermore, measuring the spatial variability of soil reaction is necessary to assess the effects of urbanization. The objective of this study was to explore the spatial variation of soil pH and the influence of human land use on soil pH around Balikesir City, using data for 2015 and Geographic Information Systems (GIS). Soil samples were taken at 40 locations selected by systematic random sampling, from pits at 0-20 cm depth, because anthropogenic pollutants accumulate in the upper soil layers. The study area was divided into a 750 x 750 m grid. GPS was used to determine the sampling locations, and the Inverse Distance Weighting (IDW) interpolation technique was used to analyze the spatial distribution of pH in the study area and to predict values at unsampled locations from the values at sampled locations. Natural soil acidity and alkalinity depend on the interaction between climate, vegetation and the geological properties of the soil; analyzing soil pH, however, is an indirect way to evaluate soil pollution caused by urbanization and industrialization. The results showed that soil pH around Balikesir City was generally neutral, with values between 6.5 and 7.0, although slight deviations were observed around open dump areas and small industrial sites. The results obtained from this study can indicate important soil problems, and the data can be used by ecologists, planners and managers to protect soil resources around Balikesir City.
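For readers unfamiliar with IDW, a minimal sketch of the interpolation step is shown below; the sample coordinates and pH values are hypothetical, and the power parameter of 2 is a common default rather than necessarily the value used in the study:

```python
import math

def idw(sample_points, query, power=2):
    """Inverse Distance Weighting: estimate a value at `query` from
    (x, y, value) samples; weights fall off as 1 / distance**power."""
    num, den = 0.0, 0.0
    for x, y, v in sample_points:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0:            # query coincides with a sample point
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Hypothetical soil-pH samples on a 750 m grid
samples = [(0, 0, 6.6), (750, 0, 6.9), (0, 750, 6.8), (750, 750, 7.0)]
print(idw(samples, (375, 375)))
```

At the grid center all four samples are equidistant, so the estimate reduces to their arithmetic mean; closer to a sample, that sample dominates the weighted sum.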

Keywords: Balikesir, IDW, GIS, spatial variability, soil pH, urbanization

Procedia PDF Downloads 304
2611 Health Risks Evaluation of Heavy Metals in Sea Food from Persian ‎Gulf

Authors: Mohsen Ehsanpour, Maryam Ehsanpour, ‎Majid Afkhami, Fatemeh Afkhami ‎

Abstract:

Heavy metals are increasingly being released into natural waters from geological and anthropogenic sources. The distribution of two heavy metals (Cd, Pb) was investigated in the muscle and liver of six different fish species collected seasonally in the Persian Gulf (autumn 2009 to summer 2010). The concentrations of all metals were lower in flesh than in liver, owing to the physiological roles of the liver. Target hazard quotients (THQs) for consumption of the contaminated fish were calculated to evaluate the effect of the pollution on health. Total metal THQ values (Pb and Cd) for adults were 0.05 and 0.04 in Bushehr and Bandar-Genaveh, respectively, and for children they were 0.08 and 0.05 in Bandar-Abbas and Bandar-Lengeh, respectively.
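As background, the target hazard quotient is commonly computed with the USEPA formula THQ = (EF × ED × FIR × C × 10⁻³) / (RfD × BW × AT). A minimal sketch follows; the default exposure parameters and the Pb concentration are illustrative, not the study's values:

```python
def thq(c_metal, fir=55.5, ef=365, ed=70, rfd=0.004, bw=70, at=365 * 70):
    """Target Hazard Quotient (USEPA form) for fish consumption.
    c_metal: metal concentration in fish (mg/kg wet weight)
    fir: fish intake rate (g/day); ef: exposure frequency (days/year)
    ed: exposure duration (years); rfd: oral reference dose (mg/kg/day)
    bw: body weight (kg); at: averaging time (days)
    The 1e-3 factor converts the intake rate from g to kg."""
    return (ef * ed * fir * c_metal * 1e-3) / (rfd * bw * at)

# Hypothetical Pb concentration of 0.1 mg/kg
print(thq(0.1))  # a THQ below 1 suggests no appreciable non-carcinogenic risk
```

With EF × ED equal to AT, the expression collapses to FIR × C × 10⁻³ / (RfD × BW), which makes the dependence on intake rate and body weight easy to see.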

Keywords: Persian Gulf, heavy metals, health risks, THQ index

Procedia PDF Downloads 681
2610 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. Among physiological signals, EEG has been demonstrated to yield the highest classification accuracy for emotion recognition. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they can collect, process and store data on their own, and can run complicated algorithms such as localization, detection and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano, was used in this implementation, and the cEEGrid, integrated with the open-source brain-computer interface platform OpenBCI, is used to collect the EEG signals. This paper proposes an EEG-based real-time emotion recognition system on edge AI. Machine learning-based classifiers perform graphical spectrogram categorization of the EEG signals and predict emotional states from the properties of the input data. In the signal processing stage, after each EEG signal is received in real time, it is translated from the time domain to the frequency domain with the Fast Fourier Transform (FFT) in order to observe the frequency bands in the signal. To capture the variance of each EEG frequency band, the power density, standard deviation and mean are calculated and employed as features.
The selected features are then used to predict emotion with the K-Nearest Neighbors (KNN) technique, a supervised learning method whose parameters are trained on arousal and valence datasets. Because classification, recognition and emotion prediction are all conducted online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. Deployed at the edge, EEG-based emotion identification can be employed in applications that rapidly expand its use in both research and industry.
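A highly simplified sketch of the FFT band-power plus KNN pipeline described above, using synthetic sinusoidal "EEG", a hand-rolled KNN, and assumed band limits; the actual system's features, sampling rate and classifier configuration may well differ:

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """FFT the signal and average the power inside each EEG band."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote in feature space."""
    dist = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(dist)[:k]]
    return np.bincount(votes).argmax()

# Synthetic example: class 0 = alpha-dominant, class 1 = beta-dominant
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
X, y = [], []
for label, f in [(0, 10), (1, 20)] * 5:
    X.append(band_powers(np.sin(2*np.pi*f*t) + 0.1*rng.standard_normal(FS)))
    y.append(label)
X, y = np.array(X), np.array(y)
print(knn_predict(X, y, band_powers(np.sin(2*np.pi*10*t))))
```

A clean 10 Hz query lands in the alpha band and is voted into class 0; a 20 Hz query lands in the beta band and into class 1. In the real system the feature vector would come from live cEEGrid frames rather than synthetic sines.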

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 85
2609 Fast Switching Mechanism for Multicasting Failure in OpenFlow Networks

Authors: Alaa Allakany, Koji Okamura

Abstract:

Multicast is an efficient and scalable technology for data distribution that optimizes network resources. However, in IP networks the responsibility for managing multicast groups is distributed among network routers, which causes limitations such as delays in processing group events, high bandwidth consumption and redundant tree calculation. Software-Defined Networking (SDN), represented by OpenFlow, has been presented as a solution to many of these problems: in SDN the control plane and data plane are separated by shifting control and management to a remote centralized controller, and the routers are used only as forwarders. In this paper we propose a fast switching mechanism for handling link failures in the multicast tree, based on the Tabu Search heuristic algorithm and on modified OpenFlow switch functions that switch quickly to the backup subtree rather than sending the failure event to the controller. In this work we implement a multicasting OpenFlow controller; this centralized controller is the core of our approach and is responsible for (1) constructing the multicast tree and (2) handling multicast group events and maintaining multicast state; finally, the OpenFlow switch functions are modified for fast switching to backup paths. Forwarders forward multicast packets based on multicast routing entries generated by the centralized controller. Tabu Search is used as the heuristic for constructing a near-optimum multicast tree and for keeping the tree near-optimum when members join or leave the multicast group (group events).
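For illustration, the tabu-search pattern referred to above can be sketched generically; this skeleton minimizes a toy cost landscape rather than multicast-tree cost, and the tenure and move set are arbitrary choices, not the paper's parameters:

```python
def tabu_search(initial, neighbors, cost, iters=200, tenure=7):
    """Generic tabu-search loop: always move to the best non-tabu
    neighbour (even if it is worse), and keep recently visited
    solutions on a tabu list so the search can escape local minima."""
    best = current = initial
    tabu = []
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break                       # nowhere non-tabu left to go
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                 # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy landscape: local minimum at index 2, global minimum at index 8
landscape = [5, 3, 1, 2, 4, 3, 2, 1, 0, 2]
cost = lambda i: landscape[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
print(tabu_search(0, neighbors, cost))
```

A pure hill-climber would stall at index 2; the tabu list forces the search through the worse intermediate states and onto the global minimum at index 8. In the paper's setting the "solution" would be a multicast tree and the moves would be edge swaps.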

Keywords: multicast tree, software define networks, tabu search, OpenFlow

Procedia PDF Downloads 240
2608 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variation of deaths due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and all ages; normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and the frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods; for cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated significant rhythmicity for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), with both measures increasing in winter and decreasing in summer. In dyslipidemic subjects, by contrast, triglycerides increased in summer and decreased in winter, whereas in normolipidemic subjects triglycerides showed no rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude and darker cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in the opposite direction: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and both high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. New analyses are therefore underway to better elucidate these mechanisms, as well as the variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
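For reference, a single-component cosinor fit (mesor, amplitude, acrophase) reduces to linear least squares on a cosine/sine basis. The sketch below uses synthetic winter-peaking data with illustrative values, not the study's series:

```python
import numpy as np

def cosinor_fit(t, y, period=365.0):
    """Single-component cosinor: fit y = M + A*cos(2*pi*t/period + phi)
    by linear least squares on the cosine/sine basis, where
    beta = A*cos(phi) and gamma = -A*sin(phi)."""
    w = 2 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    m, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(-gamma, beta)   # peak occurs at t = -phi*period/(2*pi)
    return m, amplitude, acrophase

# Synthetic weekly lipid series over 8 years: mesor 120, amplitude 15, peak at t = 0
t = np.arange(0, 8 * 365, 7.0)
rng = np.random.default_rng(1)
y = 120 + 15 * np.cos(2 * np.pi * t / 365) + rng.normal(0, 3, t.size)
M, A, phi = cosinor_fit(t, y)
```

Because the model is linear in (M, beta, gamma), the fit is a one-shot least-squares solve, and the amplitude and acrophase are recovered from the two harmonic coefficients.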

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 102
2607 LaeA/1-Velvet Interplay in Aspergillus and Trichoderma: Regulation of Secondary Metabolites and Cellulases

Authors: Razieh Karimi Aghcheh, Christian Kubicek, Joseph Strauss, Gerhard Braus

Abstract:

Filamentous fungi are of considerable economic and social significance for human health, nutrition and white biotechnology. These organisms are dominant producers of a range of primary metabolites such as citric acid, microbial lipids (biodiesel) and highly unsaturated fatty acids (HUFAs). In particular, they also produce important but structurally complex secondary metabolites with enormous therapeutic applications in the pharmaceutical industry, for example cephalosporin, penicillin, taxol, zeranol and the ergot alkaloids. Fungal secondary metabolites of major relevance to human health include not only antibiotics but also, e.g., lovastatin, a well-known antihypercholesterolemic agent produced by Aspergillus terreus, or aflatoxin, a carcinogen produced by A. flavus. Beyond their roles for human health and agriculture, some fungi are industrially and commercially important: species of the ascomycete genus Hypocrea (teleomorph of Trichoderma) have been demonstrated to be efficient producers of highly active cellulolytic enzymes. This trait makes them effective in disrupting and depolymerizing lignocellulosic materials and thus applicable tools in biotechnological areas as diverse as clothes-washing detergents, animal feed, and pulp and fuel production. Fungal LaeA/LAE1 (Loss of aflR Expression A) homologs and their gene products act at the interface between secondary metabolism, cellulase production and development. Lack of the corresponding genes results in significant physiological changes, including loss of secondary metabolite and lignocellulose-degrading enzyme production. At the molecular level, the encoded proteins are presumably methyltransferases or demethylases which act directly or indirectly on heterochromatin and interact with velvet-domain proteins. Velvet proteins bind to DNA and affect the expression of secondary metabolite (SM) genes and cellulases.
The dynamic interplay between LaeA/LAE1, velvet proteins and additional interaction partners is key to understanding the coordination of the metabolic and morphological functions of fungi, and is required for biotechnological control of the formation of desired bioactive products. Aspergilli and Trichoderma are biotechnologically important genera that differ markedly in their LaeA/LAE1-velvet protein machinery and its target proteins. We therefore performed a comparative study of the interaction partners of this machinery and of the dynamics of the various protein-protein interactions, using our robust proteomic and mass spectrometry techniques. This enhances our knowledge of how fungi coordinate secondary metabolism, cellulase production and development, and will thereby improve the construction of recombinant fungal strains for the production of industrial secondary metabolites or lignocellulose-hydrolyzing enzymes.

Keywords: cellulases, LaeA/1, proteomics, secondary metabolites

Procedia PDF Downloads 251
2606 Selection Criteria in the Spanish Secondary Education Content and Language Integrated Learning (CLIL) Programmes and Their Effect on Code-Switching in CLIL Methodology

Authors: Dembele Dembele, Philippe

Abstract:

Several Second Language Acquisition (SLA) studies have stressed the benefits of Content and Language Integrated Learning (CLIL) and shown how CLIL students outperform their non-CLIL counterparts in many L2 skills. However, numerous experimental CLIL programs seem to have mainly targeted above-average and highly motivated language learners. The need to understand the impact of students' language proficiency on code-switching in CLIL instruction motivated this study. Determining the implications of students' low language proficiency for CLIL methodology, as well as the frequency with which CLIL teachers use the main pedagogical functions of code-switching, therefore seemed crucial for Spanish CLIL instruction on a large scale. In the mixed-method approach adopted, ten face-to-face interviews were conducted in nine Valencian public secondary schools, while over 30 CLIL teachers also contributed their experience through two online survey questionnaires. The results showed the crucial role language proficiency plays in the Valencian CLIL/plurilingual selection criteria. Another finding was the presence of a substantial number of students with low language proficiency in CLIL groups, which in turn has important methodological consequences. Indeed, though the pedagogical use of the L1 was confirmed as an extended practice among CLIL teachers, more than half of the participants perceived that code-switching impaired the attainment of their CLIL lesson objectives. The dissertation therefore highlights the need for more extensive empirical research on how code-switching could prove beneficial in CLIL instruction involving students with low language proficiency while maintaining the maximum possible exposure to the target language.

Keywords: CLIL methodology, low language proficiency, code-switching, selection criteria, code-switching functions

Procedia PDF Downloads 52
2605 Cognitive Rehabilitation in Schizophrenia: A Review of the Indian Scenario

Authors: Garima Joshi, Pratap Sharan, V. Sreenivas, Nand Kumar, Kameshwar Prasad, Ashima N. Wadhawan

Abstract:

Schizophrenia is a debilitating disorder marked by cognitive impairment, which deleteriously impacts social and professional functioning along with the quality of life of patients and caregivers. Cognitive symptoms often appear in the prodromal state and worsen as the illness progresses, and they have proven to have good predictive value for the prognosis of the illness. It has been shown that intensive cognitive rehabilitation (CR) leads to improvements in healthy as well as cognitively impaired subjects. Since the majority of the population in India falls in the lower to middle socio-economic strata and has low education levels, using existing cognitive rehabilitation packages, most of which were developed in the West, is difficult. The use of technology is also restricted by high costs and by limited availability of, and familiarity with, computers and other devices, which poses an impediment to continued therapy. Cognitive rehabilitation in India uses a plethora of retraining methods for patients with schizophrenia, targeting attention, information processing, executive functions, learning and memory, and comprehension, along with social cognition. Psychologists often have to follow an integrative therapy approach involving social skills training, family therapy and psychoeducation in order to maintain the gains from cognitive rehabilitation in the long run. This paper reviews the methodologies and cognitive retraining programs used in India. It attempts to elucidate the evolution and development of the methodologies used, from traditional paper-and-pencil retraining to more sophisticated neuroscience-informed techniques, delivered as home-based or as supervised and guided cognitive rehabilitation programs for deficits in schizophrenia.

Keywords: schizophrenia, cognitive rehabilitation, neuropsychological interventions, integrated approached to rehabilitation

Procedia PDF Downloads 345
2604 Human Brain Organoids-on-a-Chip Systems to Model Neuroinflammation

Authors: Feng Guo

Abstract:

Human brain organoids, 3D brain tissue cultures derived from human pluripotent stem cells, hold promise for modeling neuroinflammation in a variety of neurological diseases. However, challenges remain in generating standardized human brain organoids that recapitulate key physiological features of the human brain. Here, this study presents a series of organoid-on-a-chip systems to generate better human brain organoids and model neuroinflammation. By employing 3D printing and microfluidic 3D cell culture technologies, these systems enable the reliable, scalable and reproducible generation of human brain organoids. Compared with conventional protocols, this method increased neural progenitor proliferation and reduced the heterogeneity of the organoids. As a proof-of-concept application, the study applied this method to model substance use disorders.

Keywords: human brain organoids, microfluidics, organ-on-a-chip, neuroinflammation

Procedia PDF Downloads 185
2603 Analysis of Adipose Tissue-Derived Mesenchymal Stem Cells under Atherosclerosis Microenvironment

Authors: Do Khanh Vy, Vuong Cat Khanh, Osamu Ohneda

Abstract:

During atherosclerosis (AS) progression, perivascular adipose tissue-derived mesenchymal stem cells (PVAT-MSCs) are exposed to a hypoxic environment caused by oxygen deprivation, which might influence the function of adipose tissue-derived mesenchymal stem cells (AT-MSCs). Additionally, it has been reported that the angiogenic ability of subcutaneous AT-MSCs (SAT-MSCs) is impaired in AS patients. However, the effects of AS on the characteristics and function of PVAT-MSCs have not yet been clarified. In the present study, we analyzed the effects of the AS microenvironment on the characteristics and function of AT-MSCs. We found no significant difference in cellular morphology or differentiation ability between SAT-MSCs and PVAT-MSCs from AS patients. However, AS-derived PVAT-MSCs proliferated less than AS-derived SAT-MSCs. Importantly, AS-derived PVAT-MSCs migrated faster than AS-derived SAT-MSCs. Of note, AS-derived PVAT-MSCs showed upregulation of SDF1, which is related to homing, and VEGF, which is related to angiogenesis, compared with AS-derived SAT-MSCs. Consistent with these results, AS-derived PVAT-MSCs showed a higher ability to recruit EPCs and ECs than AS-derived SAT-MSCs. In addition, EPCs and ECs cultured in AS-derived PVAT-MSC conditioned medium showed greater angiogenic function in tube formation than those cultured in AS-derived SAT-MSC conditioned medium. These results suggest that the stronger paracrine effects of AS-derived PVAT-MSCs support the angiogenic function of the target cells. Our data show that AT-MSCs derived from different tissue sources differ in characteristics and functions. Under the AS microenvironment, the characteristics and functions of PVAT-MSCs might reflect the progression of AS. Further study will be necessary to clarify the underlying mechanism.

Keywords: atherosclerosis, mesenchymal stem cells, perivascular adipose tissue, subcutaneous adipose tissue

Procedia PDF Downloads 137