Search results for: marking vector
268 The Internationalization of Capital Market Influencing Debt Sustainability's Impact on the Growth of the Nigerian Economy
Authors: Godwin Chigozie Okpara, Eugine Iheanacho
Abstract:
The paper set out to assess the sustainability of debt in the Nigerian economy. Precisely, it sought to determine the level of debt sustainability and its impact on the growth of the economy; whether internationalization of the capital market has positively influenced debt sustainability’s impact on economic growth; and to ascertain the direction of causality between external debt sustainability and the growth of GDP. In the light of these objectives, ratio analysis was employed for the determination of debt sustainability. Our findings revealed that the periods 1986 – 1994 and 1999 – 2004 were periods of severely unsustainable borrowing. The unit root test showed that the variables of the growth model were integrated of order one, I(1), and the cointegration test provided evidence of long-run stability. Considering the dawn of internationalization of the capital market, the researchers employed the structural break approach using the Chow Breakpoint test on the vector error correction model (VECM). The result of the VECM showed that debt sustainability, measured by the debt-to-GDP ratio, exerts a negative and significant impact on the growth of the economy, while debt burden, measured by the debt-export ratio and the debt service-export ratio, has a negative though insignificant effect on the growth of GDP. The Chow test result indicated that internationalization of the capital market has had no significant effect on the debt overhang's impact on the growth of the economy. The Granger causality test indicates a feedback effect from economic growth to the debt sustainability indicators. On the basis of these findings, the researchers made some necessary recommendations which, if followed, will go a long way toward ameliorating debt burdens and engendering economic growth.Keywords: debt sustainability, internationalization, capital market, cointegration, Chow test
Procedia PDF Downloads 437
267 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Turkey: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Turkey using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen’s maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests no effects of the CO2 emissions and energy use on the GDP in Turkey. There exists a short-run bidirectional relationship between electricity and natural gas consumption, and there is a negative unidirectional causality running from the GDP to electricity use. Overall, the results partly support arguments that there are relationships between energy use and economic output; however, the effects may differ according to the source of energy, as in the case of Turkey for the period 1980-2010. However, there is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and energy use, in either the short term or the long term.Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis
Procedia PDF Downloads 504
266 Heterologous Expression of Clostridium thermocellum Proteins and Assembly of Cellulosomes 'in vitro' for Biotechnology Applications
Authors: Jessica Pinheiro Silva, Brenda Rabello De Camargo, Daniel Gusmao De Morais, Eliane Ferreira Noronha
Abstract:
The utilization of lignocellulosic biomass as a source of polysaccharides for industrial applications requires an arsenal of enzymes with different modes of action able to hydrolyze its complex and recalcitrant structure. Clostridium thermocellum is a gram-positive, thermophilic bacterium producing lignocellulose-hydrolyzing enzymes in the form of a multi-enzyme complex, termed the cellulosome. This complex has several hydrolytic enzymes attached to a large and enzymatically inactive protein known as the Cellulosome-integrating protein (CipA), which serves as a scaffolding protein for the complex. This attachment occurs through specific interactions between cohesin modules of CipA and dockerin modules in the enzymes. The present work aims to construct cellulosomes in vitro with the structural protein CipA, a xylanase called Xyn10D and a cellulase called CelJ from C. thermocellum. A mini-scaffoldin containing two cohesin modules was constructed from modules derived from CipA. This was cloned and expressed in Escherichia coli. The other two genes were cloned under the control of the alcohol oxidase 1 promoter (AOX1) in the vector pPIC9 and integrated into the genome of the methylotrophic yeast Pichia pastoris GS115. Purification of each protein is being carried out. Further studies regarding the enzymatic activity of the cellulosome will be conducted. The cellulosome built in vitro, composed of mini-CipA, CelJ and Xyn10D, can be very interesting for application in industrial processes involving the degradation of plant biomass.Keywords: cellulosome, CipA, Clostridium thermocellum, cohesin, dockerin, yeast
Procedia PDF Downloads 233
265 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria
Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov
Abstract:
This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model
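The final prediction step (KNN and SVM classifiers reaching 92% accuracy) can be sketched with scikit-learn. Synthetic data stands in for the Benpoly admission records; the features below are illustrative assumptions, not the SCM-validated ones:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for validated admission features and admit/reject labels
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

accs = {}
for name, model in [("knn", KNeighborsClassifier(n_neighbors=5)),
                    ("svm", SVC(kernel="rbf"))]:
    model.fit(X_tr, y_tr)
    accs[name] = accuracy_score(y_te, model.predict(X_te))
print(accs)
```

Held-out test accuracy, as used here, is the figure the abstract reports for each model.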
Procedia PDF Downloads 62
264 Short-Term Forecast of Wind Turbine Production with Machine Learning Methods: Direct Approach and Indirect Approach
Authors: Mamadou Dione, Eric Matzner-lober, Philippe Alexandre
Abstract:
The Energy Transition Act defined by the French State has precise implications for Renewable Energies, in particular for their remuneration mechanism. Until now, a purchase obligation contract permitted the sale of wind-generated electricity at a fixed rate. Tomorrow, it will be necessary to sell this electricity on the market (at variable rates) before obtaining additional compensation intended to reduce the risk. This sale on the market requires announcing in advance (about 48 hours before) the production that will be delivered to the network, and hence predicting (in the short term) this production. The fundamental problem remains the variability of the wind, accentuated by the geographical situation. The objective of the project is to provide, every day, short-term forecasts (48-hour horizon) of wind production using weather data. The predictions of the GFS model and those of the ECMWF model are used as explanatory variables. The variable to be predicted is the production of a wind farm. We follow two approaches: a direct approach that predicts wind generation directly from weather data, and an indirect approach that estimates wind speed from weather data and converts it into wind power via power curves. We used machine learning techniques to predict this production. The models tested are random forests, CART + Bagging, CART + Boosting, and SVM (Support Vector Machine). The application is made on a wind farm of 22 MW (11 wind turbines) of the Compagnie du Vent (now Engie Green France). Our results are very conclusive compared to the literature.Keywords: forecast aggregation, machine learning, spatio-temporal dynamics modeling, wind power forecast
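The direct approach (weather features straight to farm output) can be sketched with a random forest regressor. The NWP features, the simplified cubic power curve, and the noise levels below are assumptions for illustration, not the wind farm's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 2000
# Hypothetical NWP features: wind speed from two models (m/s) and direction (deg)
speed_gfs = rng.uniform(0, 20, n)
speed_ecmwf = speed_gfs + rng.normal(0, 1.5, n)
direction = rng.uniform(0, 360, n)
X = np.column_stack([speed_gfs, speed_ecmwf, direction])

# Toy cubic power curve capped at the 22 MW rated power of the farm, plus noise
power = np.clip(0.01 * speed_gfs ** 3, 0, 22) + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.25, random_state=7)
rf = RandomForestRegressor(n_estimators=200, random_state=7).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, rf.predict(X_te))
print(mae)
```

The indirect approach would instead fit the regressor to wind speed and then apply the power curve as a separate, physical post-processing step.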
Procedia PDF Downloads 217
263 On Lie-Central Derivations and Almost Inner Lie-Derivations of Leibniz Algebras
Authors: Natalia Pacheco Rego
Abstract:
The Liezation functor is a map from the category of Leibniz algebras to the category of Lie algebras, which assigns to a Leibniz algebra the Lie algebra given by the quotient of the Leibniz algebra by the ideal spanned by its square elements. This functor is left adjoint to the inclusion functor that considers a Lie algebra as a Leibniz algebra. This environment fits in the framework of central extensions and commutators in semi-abelian categories with respect to a Birkhoff subcategory, where classical or absolute notions are relative to the abelianization functor. Classical properties of Leibniz algebras (properties relative to the abelianization functor) were adapted to the relative setting (with respect to the Liezation functor); in general, absolute properties have corresponding relative ones, but not all absolute properties immediately hold in the relative case, so new requirements are needed. Following this line of research, an analysis of central derivations of Leibniz algebras relative to the Liezation functor, called Lie-derivations, was conducted, and a characterization of Lie-stem Leibniz algebras by their Lie-central derivations was obtained. In this paper, we present an overview of these results, and we analyze some new properties concerning Lie-central derivations and almost inner Lie-derivations. Namely, a Leibniz algebra is a vector space equipped with a bilinear bracket operation satisfying the Leibniz identity. We define the Lie-bracket by [x, y]_Lie = [x, y] + [y, x], for all x, y. The Lie-center of a Leibniz algebra is the two-sided ideal of elements that annihilate all the elements of the Leibniz algebra through the Lie-bracket. A Lie-derivation is a linear map which acts as a derivation with respect to the Lie-bracket. Obviously, usual derivations are Lie-derivations, but the converse is not true in general. A Lie-derivation is called a Lie-central derivation if its image is contained in the Lie-center.
A Lie-derivation is called an almost inner Lie-derivation if the image of an element x is contained in the Lie-commutator of x and the Leibniz algebra. The main results we present refer to the conditions under which Lie-central derivations and almost inner Lie-derivations coincide.Keywords: almost inner Lie-derivation, Lie-center, Lie-central derivation, Lie-derivation
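In display form, the Lie-bracket and Lie-center used in the abstract above can be written as follows (for a Leibniz algebra L with bracket [.,.]):

```latex
[x, y]_{\mathrm{Lie}} = [x, y] + [y, x], \qquad \text{for all } x, y \in L,
```

```latex
Z_{\mathrm{Lie}}(L) = \{\, z \in L \mid [z, x]_{\mathrm{Lie}} = 0 \ \text{for all } x \in L \,\},
```

so a Lie-derivation d is Lie-central precisely when d(L) is contained in Z_Lie(L).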
Procedia PDF Downloads 135
262 DNA Prime/MVTT Boost Enhances Broadly Protective Immune Response against Mosaic HIV-1 Gag
Authors: Wan Liu, Haibo Wang, Cathy Huang, Zhiwu Tan, Zhiwei Chen
Abstract:
The tremendous diversity of HIV-1 has been a major challenge for effective AIDS vaccine development. The mosaic approach offers potential for vaccine design aiming at global protection. A mosaic antigen of HIV-1 Gag provides antigenic breadth for a vaccine-elicited immune response against a wider spectrum of viral strains. However, the enhancement of the immune response using vaccines depends on the strategy used. Heterologous prime/boost regimens have been shown to elicit high levels of immune responses. Here, we investigated whether priming with plasmid DNA by electroporation, followed by boosting with the live replication-competent modified vaccinia virus vector TianTan (MVTT), combined with the mosaic antigenic sequence, could elicit a greater and broader antigen-specific response against HIV-1 Gag in mice. Compared to DNA or MVTT alone, or to the MVTT/MVTT group, the DNA/MVTT group showed high frequencies of broadly reactive, Gag-specific, polyfunctional, long-lived, and cytotoxic CD8+ T cells and an increased anti-Gag antibody titer. Meanwhile, the vaccination upregulated PD-1+ and Tim-3+ CD8+ T cells, myeloid-derived suppressor cells and Treg cells to balance the stronger immune response induced. Importantly, the prime/boost vaccination helped control EcoHIV and mesothelioma AB1-gag challenges. The stronger protective Gag-specific immunity induced by the mosaic DNA/MVTT vaccine corroborates the promise of the mosaic approach and the potential of two acceptably safe vectors to enhance anti-HIV immunity and cancer prevention.Keywords: DNA/MVTT vaccine, EcoHIV, mosaic antigen, mesothelioma AB1-gag
Procedia PDF Downloads 242
261 Peril's Environment of Energetic Infrastructure Complex System, Modelling by the Crisis Situation Algorithms
Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.
Abstract:
Crisis situation investigation and modelling are introduced and carried out within the complex system of energetic critical infrastructure operating in perils environments. Every crisis situation and peril has its origin in the occurrence of an emergency/crisis event, and they need critical/crisis interface assessment. Here, the emergency event can be expected - then crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities for coping with it; or it may be unexpected - without a pre-prepared scenario. But both need operational coping by means of crisis management as well. The operation, forms, characteristics, behaviour and utilization of crisis management have various qualities, depending on the real perils of the critical infrastructure organization and on prevention and training processes. The aim is always better security and continuity of the organization, the successful attainment of which requires finding and investigating critical/crisis zones and functions in models of critical infrastructure organizations operating in the pertinent perils environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. Here, it is necessary to derive and create an identification algorithm for critical/crisis interfaces. The locations of critical/crisis interfaces are the flags of crisis situations in models of critical infrastructure organizations. The model of a crisis situation is then displayed for a real organization of the Czech energetic critical infrastructure in a real peril environment. These efficient measures are necessary for infrastructure protection. They will be derived for peril mitigation, crisis situation coping, and for the environmentally friendly survival, continuity, and advanced sustainable development possibilities of the organization.Keywords: algorithms, energetic infrastructure complex system, modelling, peril's environment
Procedia PDF Downloads 402
260 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors
Authors: Sudhir Kumar Singh, Debashish Chakravarty
Abstract:
Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases have been studied for analyzing the stability of slopes using the popular Finite Element Method, and the data thus obtained has been used as training data for the supervised machine learning models. The input data has been trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data has been used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool at our disposal which is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.Keywords: finite element method, geotechnical engineering, machine learning, slope stability
Procedia PDF Downloads 101
259 Public Debt Shocks and Public Goods Provisioning in Nigeria: Implication for National Development
Authors: Amenawo I. Offiong, Hodo B. Riman
Abstract:
The public debt profile of Nigeria has continuously been on the increase over the years. The drop in international crude oil prices has further worsened the revenue position of the country, thus necessitating further acquisition of public debt to bridge the gap in the revenue deficit. Yet, looking back at the increasing public sector spending, there are concerns that government spending does not translate into an increase in the public goods provided for the country. Using data from 1980 to 2014, the study therefore seeks to investigate the factors responsible for the poor provision of public goods in the face of an increasing public debt profile. In the unrestricted VAR model, governance and tax revenue were introduced as structural variables. The result suggested that governance and tax revenue were structural determinants of the effectiveness of public goods provisioning in Nigeria. The study therefore identified weak governance as the major reason for the non-provision of public goods in Nigeria. While tax revenue exerted a positive influence on the provision of public goods, weak/poor governance was observed to crowd out the benefits from increased tax revenue. The study therefore recommends a reappraisal of the governance system in Nigeria. Elected officers in governance should be more transparent and accountable to the electorates they represent. Furthermore, the study advocates an annual auditing of all government MDA accounts by external auditors to ensure (a) accountability of public debt utilization, (b) transparency in the implementation of program support funds, (c) integrity of the agencies responsible for program management, and (d) measurement of program effectiveness against the amount of funds expended.Keywords: impulse response function, public debt shocks, governance, public goods, tax revenue, vector auto-regression
Procedia PDF Downloads 272
258 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises
Authors: Jiří F. Urbánek, David Král
Abstract:
Small and Middle Enterprises (SMEs) have a specific mission, characteristics, and behavior in the global competitive business environment. They must respect policy, rules, requirements and standards in all their inherent and outer processes of supply-customer chains and networks. This paper aims to introduce computational assistance that enables the use of the prevailing MS Office environment (SmartArt...) for mathematical models, using the DYVELOP (Dynamic Vector Logistics of Processes) method. It provides the SME, in its global environment, with the capability and profit to achieve its commitment regarding the effectiveness of the quality management system in meeting customer requirements, and also the continual improvement of the overall performance and efficiency of the organization's and the SME's processes, as well as its societal security via continual planning improvement. The maps of the DYVELOP model - the Blazons - are able to express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modelling of cycling cases and their phases. The Blazons need a live PowerPoint presentation for better comprehension of this paper's mission - added value analysis. The crisis management of SMEs is obliged to use cycles for successful coping with crisis situations. Cycling these cases several times is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and its advanced possibilities of sustainable development.Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises
Procedia PDF Downloads 341
257 Preparation and Characterization of Chitosan Nanoparticles for Delivery of Oligonucleotides
Authors: Gyati Shilakari Asthana, Abhay Asthana, Dharm Veer Kohli, Suresh Prasad Vyas
Abstract:
Purpose: The therapeutic potential of oligonucleotides (ODNs) is primarily dependent upon their safe and efficient delivery to specific cells, overcoming degradation and maximizing cellular uptake in vivo. The present study focuses on designing low molecular weight chitosan nanoconstructs to meet the requirements of safe and effective delivery of ODNs. LMW chitosan is a biodegradable, water-soluble, biocompatible polymer and is useful as a non-viral vector for gene delivery due to its better stability in water. Methods: LMW chitosan-ODN nanoparticles (CHODN NPs) were formulated by a self-assembly method using various N/P ratios (mole ratios of amine groups of CH to phosphate moieties of ODNs: 0.5:1, 1:1, 3:1, 5:1, and 7:1). The developed CHODN NPs were evaluated with respect to gel retardation assay, particle size, zeta potential, cytotoxicity and transfection efficiency. Results: Complete complexation of CH/ODN was achieved at a charge ratio of 0.5:1 or above, and the CHODN NPs displayed resistance against DNase I. On increasing the N/P ratio of CH/ODN, the particle size of the NPs decreased whereas the zeta potential (ZP) value increased. No significant toxicity was observed at any CH concentration. The transfection efficiency increased on increasing the N/P ratio from 1:1 to 3:1, whereas it decreased with further increases in N/P ratio up to 7:1. Maximum transfection of CHODN NPs with both cell lines (RAW 264.7 cells and HeLa cells) was achieved at an N/P ratio of 3:1. The results suggest that the transfection efficiency of CHODN NPs is dependent on the N/P ratio. Conclusion: The present study thus indicates that LMW chitosan nanoparticulate carriers would be an acceptable choice to improve transfection efficiency for in vitro as well as in vivo delivery of oligonucleotides.Keywords: LMW-chitosan, chitosan nanoparticles, biocompatibility, cytotoxicity study, transfection efficiency, oligonucleotide
Procedia PDF Downloads 849
256 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge
Authors: Yulan Wu
Abstract:
The spread of fake news on social media has caused significant societal harm to the public and the nation, with its threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain. However, when these methods are applied to social platforms with news spanning multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data. However, these methods overlook the fact that a news article typically belongs to multiple domains, leading to the loss of domain knowledge information contained within the news text. To address this issue, we observe that news records in different domains often use different vocabularies to describe their content. In this paper, we propose a fake news detection framework that combines domain knowledge and expert knowledge. First, it utilizes an unsupervised domain discovery module to generate a low-dimensional vector for each news article, representing domain embeddings, which can retain the multi-domain knowledge of the news content. Then, a feature extraction module uses the domain embeddings discovered through unsupervised domain knowledge to guide multiple experts in extracting news knowledge for the total feature representation. Finally, a classifier is used to determine whether the news is fake or not. Experiments show that this approach can improve multi-domain fake news detection performance while reducing the cost of manually labeling domains.Keywords: fake news, deep learning, natural language processing, multiple domains
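The paper's unsupervised domain discovery module is not specified in the abstract; as one plausible sketch of the idea that vocabulary differences reveal domains, a low-dimensional domain embedding can be obtained by projecting TF-IDF vectors with truncated SVD. The corpus and all names below are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus: the last article mixes vocabulary from several domains
docs = [
    "election vote parliament policy budget",
    "vaccine hospital virus health outbreak",
    "stock market inflation economy budget",
    "virus election policy health economy",   # multi-domain article
]

# Unsupervised "domain discovery": TF-IDF followed by a low-rank projection,
# giving one low-dimensional domain vector per article
tfidf = TfidfVectorizer().fit_transform(docs)
domain_emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
print(domain_emb.shape)
```

In the framework described above, such a continuous embedding (rather than a single hard domain label) is what lets one article carry weight in several domains at once when routing to expert feature extractors.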
Procedia PDF Downloads 73
255 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms
Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager
Abstract:
This study aims to construct a predictive model capable of predicting the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained to not only predict but also elucidate the mechanical and thermal conduct of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, boasting scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties
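The genetic-algorithm inverse analysis can be sketched as follows. The forward model (a toy effective-modulus formula), the parameter ranges, and the GA settings are assumptions for illustration, not the paper's actual homogenization model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed forward model: effective modulus as a toy function of
# volume fraction vf (0.05-0.4) and contrast c (10-200)
def effective_modulus(vf, c):
    return 1.0 + vf * (c - 1.0) / (1.0 + 0.5 * (1.0 - vf) * (c - 1.0))

target = effective_modulus(0.3, 50.0)   # "measured" thermomechanical response

# Minimal genetic algorithm: recover (vf, c) matching the measured response
pop = np.column_stack([rng.uniform(0.05, 0.4, 60), rng.uniform(10, 200, 60)])
for _ in range(80):
    err = np.abs(effective_modulus(pop[:, 0], pop[:, 1]) - target)
    parents = pop[np.argsort(err)[:20]]                   # selection of the fittest
    kids = parents[rng.integers(0, 20, (40, 2)), [0, 1]]  # per-gene crossover
    kids += rng.normal(0, [0.01, 2.0], kids.shape)        # mutation
    kids[:, 0] = np.clip(kids[:, 0], 0.05, 0.4)
    kids[:, 1] = np.clip(kids[:, 1], 10, 200)
    pop = np.vstack([parents, kids])

best = pop[np.argmin(np.abs(effective_modulus(pop[:, 0], pop[:, 1]) - target))]
print(best)
```

Note that a single scalar response generally does not pin down both parameters uniquely; in practice, several responses (elastic and thermal) are matched simultaneously, which is the point of the coupled ML/GA database in the study.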
Procedia PDF Downloads 54
254 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. Calibration and validation sets were designed for the conception and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared region. Conventional criteria such as R2, the root mean square error of cross validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, min-max normalization and multiplicative scatter correction). Spectra of the pasta samples were treated with the different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method had a very good correlation with the values determined via traditional methods (R2 = 0.983), which clearly indicated that FT-NIR methods could be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775).
The MMN pre-processing method was found most suitable and the maximum coefficient of determination (R2) value of 0.9875 was obtained for the calibration model developed.Keywords: FT-NIR, pasta, moisture determination, food engineering
Procedia PDF Downloads 258
253 Efficient L-Xylulose Production Using Whole-Cell Biocatalyst With NAD+ Regeneration System Through Co-Expression of Xylitol Dehydrogenase and NADH Oxidase in Escherichia Coli
Authors: Mesfin Angaw Tesfay
Abstract:
L-Xylulose is a potentially valuable rare sugar used as a starting material for antiviral and anticancer drug development in the pharmaceutical industry. L-Xylulose exists at very low concentrations in nature and has to be synthesized from cheap starting materials such as xylitol through biotechnological approaches. In this study, cofactor engineering and deep eutectic solvents were applied to improve the efficiency of L-xylulose production from xylitol. A water-forming NAD+ regeneration enzyme (NADH oxidase) from Streptococcus mutans ATCC 25175 was introduced into E. coli together with the xylitol-4-dehydrogenase (XDH) of Pantoea ananatis, resulting in recombinant cells harboring the vector pETDuet-xdh-SmNox. Further, three deep eutectic solvents (DESs), choline chloride/glycerol (ChCl/G), choline chloride/urea (ChCl/U), and choline chloride/ethylene glycol (ChCl/EG), were employed to facilitate the conversion of xylitol to L-xylulose. The co-expression system exhibited optimal activity at a temperature of 37 ℃ and pH 8.5, and the addition of Mg2+ enhanced the catalytic activity 1.19-fold. Co-expression of NADH oxidase with the XDH enzyme increased the L-xylulose concentration and productivity from xylitol as well as the intracellular NAD+ concentration. Two of the DESs used (ChCl/U and ChCl/EG) showed positive effects on product yield, while ChCl/G had an inhibitory effect. The optimum concentration of ChCl/U was 2.5%, which increased the L-xylulose yield compared to the control without DES. In a 1 L fermenter, the final concentration and productivity of L-xylulose from 50 g/L of xylitol reached 48.45 g/L and 2.42 g/L.h, respectively, which is the highest reported. Overall, this study provides a suitable approach for large-scale production of L-xylulose from xylitol using the engineered E. coli cells.Keywords: xylitol-4-dehydrogenase, NADH oxidase, L-xylulose, xylitol, co-expression, DESs
Procedia PDF Downloads 23252 Reducing the Imbalance Penalty Through Artificial Intelligence Methods Geothermal Production Forecasting: A Case Study for Turkey
Authors: Hayriye Anıl, Görkem Kar
Abstract:
In addition to being rich in renewable energy resources, Turkey is one of the countries with strong potential in geothermal energy production, given its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand when the production forecasts given in the day-ahead market are inadequate. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than those on the basic dataset. It is concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, making the proposed forecasts both optimal and profitable. Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting
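The forecasting-then-penalty pipeline described above can be sketched as follows. This is an illustrative stand-in, not the authors' code: the synthetic hourly series, the lag-feature construction, the SVR hyperparameters, and the flat penalty rate are all assumptions.

```python
# Sketch of the approach: forecast generation with Support Vector Regression on
# lag features, then score the imbalance as the gap between forecast and actual.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                       # ~2 months of hourly data
load = 100 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

def make_lag_features(series, n_lags=24):
    """Feature engineering: each row holds the previous n_lags observations."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X, y = make_lag_features(load)
split = len(y) - 24 * 14                          # hold out the last two weeks
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:split], y[:split])
forecast = model.predict(X[split:])

penalty_rate = 1.0                                # hypothetical cost per MWh of imbalance
imbalance_penalty = penalty_rate * np.abs(forecast - y[split:]).sum()
```

A better model lowers the absolute forecast error and therefore the computed penalty, which is the comparison criterion the study uses.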
Procedia PDF Downloads 110251 Economic Growth: The Nexus of Oil Price Volatility and Renewable Energy Resources among Selected Developed and Developing Economies
Authors: Muhammad Siddique, Volodymyr Lugovskyy
Abstract:
This paper explores how nations might mitigate the unfavorable impacts of oil price volatility on economic growth by switching to renewable energy sources. The impacts of uncertain factor prices on economic activity are examined by looking at the Realized Volatility (RV) of oil prices rather than the more traditional method of looking at oil price shocks. The sample covers the United States, China, India, the United Kingdom, Germany, Malaysia, and Pakistan, rounding out the traditional literature's focus on oil-importing and oil-exporting economies. Granger Causality Tests (GCT), Impulse Response Functions (IRF), and Variance Decompositions (VD) demonstrate that, in a Vector Auto-Regressive (VAR) setting, the negative impacts of oil price volatility extend beyond what can be explained by oil price shocks alone for all of the nations in the sample. Nations differ in their vulnerability to changes in oil prices, reflecting differences in sectoral composition and the energy mix; the conventional method, which only takes into account whether a country is a net oil importer or exporter, is inadequate. The potential economic advantages of initiatives to decouple the macroeconomy from volatile commodity markets are shown through simulations of volatility shocks under alternative energy mixes (with greater proportions of renewables). It is determined that in developing countries like Pakistan, increasing the use of renewable energy sources might lessen an economy's sensitivity to changes in oil prices; nonetheless, country-specific study is required to identify particular policy actions. In sum, the research provides an innovative justification for mitigating economic growth's dependence on stable oil prices in the sample countries. Keywords: oil price volatility, renewable energy, economic growth, developed and developing economies
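The Realized Volatility measure the paper favours over discrete price shocks can be computed directly from a price series. A minimal sketch, assuming the common definition of monthly RV as the square root of the sum of squared daily log returns (the window length and synthetic series are illustrative):

```python
# Realized Volatility (RV) of an oil price series: monthly RV is the square
# root of the sum of squared daily log returns within each month-sized block.
import numpy as np

def realized_volatility(prices, window=21):
    """Monthly RV from a daily price series (window ~ trading days per month)."""
    returns = np.diff(np.log(prices))
    n_full = len(returns) // window
    blocks = returns[:n_full * window].reshape(n_full, window)
    return np.sqrt((blocks ** 2).sum(axis=1))

rng = np.random.default_rng(1)
# Synthetic oil price: a calm regime followed by a volatile regime
calm = 80 * np.exp(np.cumsum(rng.normal(0, 0.005, 252)))
wild = calm[-1] * np.exp(np.cumsum(rng.normal(0, 0.03, 252)))
rv = realized_volatility(np.concatenate([calm, wild]))
# RV in the volatile regime should clearly exceed RV in the calm regime
```

The resulting RV series is the volatility regressor that would enter the VAR alongside output growth.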
Procedia PDF Downloads 79250 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression
Authors: J. S. Saini, P. P. K. Sandhu
Abstract:
The multi-hop nature of Wireless Mesh Networks and the rapid growth of throughput demands result in multi-channel, multi-radio mesh structures, but co-channel interference reduces the total throughput, specifically in multi-hop networks. Quality of Service (QoS) refers to a broad collection of networking technologies and techniques that guarantee the ability of a network to deliver desired services with predictable results. QoS can be directed at a network interface, toward a specific server's or router's performance, or at specific applications. Because transmissions interfere with one another, QoS routing in multi-hop wireless networks is a formidable task; even in a multi-channel wireless network, two transmissions using the same channel may interfere with each other. This paper considers the Destination Sequenced Distance Vector (DSDV) routing protocol to locate the secure and optimised path. The proposed technique also utilizes Lempel–Ziv–Welch (LZW) based lossless data compression and intra-cluster data aggregation to enhance the communication between the source and the destination. Clustering aggregates multiple packets and locates a single route through the clusters, improving intra-cluster data aggregation. The LZW-based lossless data compression reduces the data packet size and hence the energy consumed, thus increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the projected technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques. Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control
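The LZW step is the standard dictionary-based lossless algorithm; a minimal sketch (not the authors' MATLAB implementation) shows why repetitive sensor payloads shrink:

```python
# Minimal LZW codec: the dictionary starts with all 256 single bytes and grows
# with each new phrase, so repeated substrings are emitted as single codes.
def lzw_compress(data: bytes) -> list[int]:
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)     # register the new phrase
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        # The code may reference the phrase being built right now (cScSc case)
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

payload = b"ABABABABABABABAB" * 8                # repetitive sensor payload
codes = lzw_compress(payload)
# Fewer codes than input bytes, and the round trip restores the payload exactly
```

Fewer symbols per packet means fewer radio transmissions, which is where the energy saving claimed above comes from.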
Procedia PDF Downloads 338249 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population
Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath
Abstract:
Gastric cancer is predominantly caused by demographic and diet factors as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram, were selected. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics: minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. Data analysis showed Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27 with respect to accuracy (in percent) and Brier score. The Naive Bayes algorithm outperforms the others, with a very low false positive rate and Brier score as well as good accuracy. The Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians to educate patients and the public; thereby, mortality from gastric cancer can be reduced or avoided with this knowledge mining work. Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics
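The evaluation pipeline can be sketched as follows. The Mizoram cohort data are not public, so synthetic stand-in features are used here; only the cohort sizes (160 controls, 80 cases) and feature count (11) come from the abstract, and the feature distributions are assumptions.

```python
# Train Gaussian Naive Bayes on synthetic stand-in data and report the two
# headline metrics of the study: accuracy and the Brier score.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss

rng = np.random.default_rng(42)
n_cases, n_controls, n_features = 80, 160, 11    # cohort sizes from the abstract
X = np.vstack([rng.normal(0.0, 1.0, (n_controls, n_features)),
               rng.normal(1.0, 1.0, (n_cases, n_features))])
y = np.array([0] * n_controls + [1] * n_cases)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = GaussianNB().fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
brier = brier_score_loss(y_te, clf.predict_proba(X_te)[:, 1])
# A lower Brier score means better-calibrated predicted probabilities
```

The Brier score complements accuracy because it penalizes confident wrong probabilities, which matters for a screening application.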
Procedia PDF Downloads 161248 Unravelling Cross-Commodity Price Transmission Dynamics between Coastal and Freshwater Fish Species in Bangladesh: A Structural VAR Analysis
Authors: Farhana Arefeen Mila, Luis Emilio Morales, Nam Hoang, Sujana Adapa, Garry Griffith
Abstract:
This study investigates the existence of price transmission asymmetries and imperfections among the coastal and freshwater fish species in Bangladesh. Using a Structural Vector Autoregression (SVAR) model, we explore how price changes in one fish species impact the prices of others in the retail market. Monthly data from the Department of Agricultural Marketing (DAM) covering the period from 2012 to 2023 was analyzed. Price series were detrended using the Hodrick-Prescott filter, and unit root tests confirmed stationarity after detrending. The findings indicate that there are significant interdependencies and asymmetries in price transmission, particularly the strong influence of Hilsha on the broader fish market. Hilsha’s price shocks generate immediate responses across other species, reflecting its cultural and economic importance. Silver Pomfret demonstrates some independence but is still affected by broader market fluctuations, particularly those involving Hilsha. Meanwhile, Rohu and Catla exhibit high interdependence, where price changes in one species closely impact the other, underscoring their substitutable nature in consumer preferences. These findings emphasize the need for joint interventions and market monitoring to stabilize prices effectively. Stakeholders are encouraged to monitor Hilsha’s market, consider coordinated interventions for Rohu and Catla, and establish data-sharing partnerships to enhance market stability. Additionally, promoting consumer awareness of price trends and sustainable practices can further support market resilience and long-term sustainability in the fisheries sector. Keywords: price transmission, cross commodity, fish, Bangladesh, CCF, SVAR, IRF
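The Hodrick-Prescott detrending step used before the SVAR has a closed form: the trend solves a penalized least-squares problem, tau = (I + lam * D'D)^(-1) y, where D takes second differences. A minimal sketch (the synthetic price series and the monthly smoothing parameter lam = 14400, a common choice, are assumptions):

```python
# Hodrick-Prescott filter as a direct linear solve; the cyclical component
# (y minus trend) is the detrended series that would feed the SVAR.
import numpy as np

def hp_filter(y, lam=14400.0):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference matrix
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend                      # trend and cyclical component

rng = np.random.default_rng(7)
months = np.arange(144)                          # 2012-2023 monthly, as in the data
price = 200 + 2.0 * months + 15 * np.sin(2 * np.pi * months / 12) \
        + rng.normal(0, 3, 144)
trend, cycle = hp_filter(price)
# The cycle has zero mean by construction and no long-run drift
```

Because D annihilates constants, the cyclical component sums exactly to zero, which is why the detrended series passes unit root tests more easily than the raw prices.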
Procedia PDF Downloads 12247 Affordable Aerodynamic Balance for Instrumentation in a Wind Tunnel Using Arduino
Authors: Pedro Ferreira, Alexandre Frugoli, Pedro Frugoli, Lucio Leonardo, Thais Cavalheri
Abstract:
The teaching of fluid mechanics in engineering courses is, in general, a source of great difficulty for learning. Experiments with didactic wind tunnels can facilitate the education of future professionals. The objective of this proposal is the development of a low-cost aerodynamic balance to be used in a didactic wind tunnel. The set comprises an Arduino microcontroller, programmed with open source software, linked to load cells built by students from another project. The didactic wind tunnel is 5.0 m long and the test section is 90.0 cm x 90.0 cm x 150.0 cm. The WEG® electric motor, model W-22 of 9.2 HP, moves a fan with nine blades, each blade 32.0 cm long. The WEG® frequency inverter, model CFW 08 (Vector Inverter), is responsible for wind speed control and for reversing the motor's direction of rotation. A flat-convex airfoil prototype was tested by measuring the drag and lift forces at certain angles of attack; the airflow conditions remained constant, monitored by a Pitot tube connected to an EXTECH® Instruments digital differential pressure manometer, model HD755. The results indicate good agreement with theory. The choice of components resulted in a low-cost product providing a high level of specific knowledge of fluid mechanics, which may be a good alternative for teaching in countries with scarce educational resources. The system also allows expansion to measure other parameters such as fluid velocity, temperature, and pressure, as well as the possibility of automating other functions. Keywords: aerodynamic balance, wind tunnel, strain gauge, load cell, Arduino, low-cost education
Procedia PDF Downloads 444246 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations
Authors: Kuniyoshi Abe
Abstract:
Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been most often used for efficiently solving the linear equation, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants. Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant
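The standard Bi-CGSTAB method discussed here is available off the shelf; a small sparse system illustrates the Ax = b setting (the tridiagonal test matrix is an illustrative choice, not one of the paper's benchmark problems):

```python
# Solve a sparse, diagonally dominant tridiagonal system with SciPy's Bi-CGSTAB.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

n = 200
# 1-D Poisson-like test matrix: -1 on the off-diagonals, 2.5 on the diagonal
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)                         # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Each iteration of this solver involves inner products; on a distributed machine those become the global reductions whose synchronization cost motivates the parallel variants compared in the paper.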
Procedia PDF Downloads 164245 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients situate AF among the most challenging medical issues. Automatic, early, and fast AF detection remains a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, the MIT-BIH AF Database, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features that discriminate between AF and normal sinus rhythm. Despite its very simple structure, the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications. Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
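The LVQ classifier named above is simple enough to sketch from scratch. This is an illustrative LVQ1 implementation on synthetic stand-in features, not the authors' model: the prototype count, learning rate schedule, and feature distributions are all assumptions.

```python
# Minimal LVQ1: prototypes are attracted to same-class samples and repelled
# from other-class samples; prediction is nearest-prototype lookup.
import numpy as np

class LVQ1:
    def __init__(self, n_prototypes_per_class=2, lr=0.1, epochs=30, seed=0):
        self.k, self.lr, self.epochs = n_prototypes_per_class, lr, epochs
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        protos, labels = [], []
        for c in np.unique(y):
            idx = self.rng.choice(np.flatnonzero(y == c), self.k, replace=False)
            protos.append(X[idx])                 # init at random class members
            labels.append(np.full(self.k, c))
        self.protos, self.labels = np.vstack(protos), np.concatenate(labels)
        for epoch in range(self.epochs):
            lr = self.lr * (1 - epoch / self.epochs)   # decaying learning rate
            for i in self.rng.permutation(len(X)):
                j = np.argmin(((self.protos - X[i]) ** 2).sum(axis=1))
                sign = 1.0 if self.labels[j] == y[i] else -1.0
                self.protos[j] += sign * lr * (X[i] - self.protos[j])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.protos[None, :, :]) ** 2).sum(axis=2)
        return self.labels[np.argmin(d, axis=1)]

rng = np.random.default_rng(3)
# Synthetic stand-in for four RR-interval features: AF vs normal sinus rhythm
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
y = np.repeat([0, 1], 200)
acc = (LVQ1().fit(X, y).predict(X) == y).mean()
```

After training, classification is a handful of subtractions and comparisons per window, which matches the paper's point about suitability for telecare devices.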
Procedia PDF Downloads 267244 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case when an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject ID from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results. Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
Procedia PDF Downloads 96243 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method
Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong
Abstract:
Earthquake is one of the main loads threatening dam safety. Once a dam is damaged, it brings huge losses of life and property to the country and its people; therefore, it is very important to research the seismic safety of dams. Due to complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensities, which can provide a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of vulnerability analysis. Based on the central composite test design method, material-seismic intensity samples are established. A response surface model with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is chosen to range from 0.1 g to 1.2 g, with an interval of 0.1 g and a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operations on the response surface model, which avoids 1,200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of vulnerability analysis is improved greatly. Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure
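The response-surface idea can be sketched in a few lines: fit a cheap quadratic surrogate mapping material parameters and intensity to crest displacement, then evaluate the surrogate algebraically instead of re-running the FE model. Everything here is illustrative: the stand-in "FE model", the parameter ranges, and the quadratic surrogate form are assumptions, not the paper's dam model.

```python
# Quadratic response surface fitted to samples of a stand-in FE model,
# then evaluated cheaply across the 12 intensity levels.
import numpy as np

rng = np.random.default_rng(5)

def fe_model(E, Sa):
    """Stand-in for a finite-element run: displacement grows with intensity
    Sa and shrinks with material modulus E (purely illustrative)."""
    return 50.0 * Sa / E + 2.0 * Sa ** 2 + rng.normal(0, 0.01)

# Central-composite-style samples over modulus E and intensity Sa(T1) in g
E_s = rng.uniform(20, 40, 60)
Sa_s = rng.uniform(0.1, 1.2, 60)
disp = np.array([fe_model(E, Sa) for E, Sa in zip(E_s, Sa_s)])

# Full quadratic basis: [1, E, Sa, E^2, Sa^2, E*Sa]
def design(E, Sa):
    return np.column_stack([np.ones_like(E), E, Sa, E**2, Sa**2, E * Sa])

coef, *_ = np.linalg.lstsq(design(E_s, Sa_s), disp, rcond=None)

def predict(E, Sa):
    return design(np.atleast_1d(E), np.atleast_1d(Sa)) @ coef

# Algebraic evaluation across 12 intensity levels for one material sample:
levels = np.arange(0.1, 1.21, 0.1)
curve = predict(np.full_like(levels, 30.0), levels)
```

Each additional material sample costs one matrix-vector product instead of a nonlinear dynamic analysis, which is exactly the saving (100 samples x 12 levels = 1,200 avoided FE runs) claimed in the abstract.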
Procedia PDF Downloads 240242 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks
Authors: Innocent Uzougbo Onwuegbuzie
Abstract:
Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across the AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring a prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments. Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy efficiency, network lifespan
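The energy-aware-metric idea can be illustrated on top of a shortest-path search: weight each hop by link cost divided by the next node's residual energy, so depleted relays are avoided. The cost function below is an illustrative assumption, not the AEAR model's actual metric.

```python
# Dijkstra with an energy-aware edge weight: cost / residual energy of the
# receiving node, so routes steer around nearly depleted relays.
import heapq

def energy_aware_path(graph, energy, src, dst):
    """graph: {node: {neighbor: link_cost}}; energy: {node: residual energy}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost / max(energy[v], 1e-9)  # penalize low-energy relays
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

graph = {"S": {"A": 1, "B": 1}, "A": {"D": 1}, "B": {"D": 1}, "D": {}}
energy = {"S": 10.0, "A": 0.5, "B": 8.0, "D": 10.0}   # relay A is nearly depleted
route = energy_aware_path(graph, energy, "S", "D")
# Routes via the healthier relay B even though both paths have equal hop count
```

With plain hop-count weights both routes tie; the energy term is what breaks the tie in favor of network lifespan.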
Procedia PDF Downloads 36241 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are the more common, economic, and reliable way to detect global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs) and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from static data. Damage in a structure changes its stiffness, so this method determines damage based on changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several damage detection scenarios are defined (a single-damage scenario and multiple-damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to verify that the best damage identification is achieved; when it is, the method can be considered reliable. This strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in the multiple-damage scenarios is also highly effective. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures. Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
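A toy version of this identification loop can be sketched with a chain of springs under an end load, "damage" modelled as per-element stiffness reduction factors, and a real-coded GA minimizing the gap between measured and model static displacements. The structural model, GA operators, and all settings below are illustrative assumptions, not the paper's truss or algorithm parameters.

```python
# Real-coded GA recovering per-element stiffness factors of a spring chain
# from static displacement measurements.
import numpy as np

rng = np.random.default_rng(11)
n_el, F, k0 = 6, 1000.0, 1e5                     # elements, end load, base stiffness

def displacements(alpha):
    """Static node displacements of a serial spring chain; alpha in (0, 1]
    scales each element stiffness (alpha < 1 means damage)."""
    return F * np.cumsum(1.0 / (alpha * k0))

true_alpha = np.ones(n_el)
true_alpha[2] = 0.6                              # 40% damage in element 3
u_measured = displacements(true_alpha)

def fitness(alpha):
    return -np.linalg.norm(displacements(alpha) - u_measured)

pop = rng.uniform(0.3, 1.0, (80, n_el))
for gen in range(150):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[::-1][:20]]   # truncation selection
    parents = elite[rng.integers(0, 20, (60, 2))]
    children = (parents[:, 0] + parents[:, 1]) / 2     # arithmetic crossover
    children += rng.normal(0, 0.02, children.shape)    # Gaussian mutation
    pop = np.clip(np.vstack([elite, children]), 0.05, 1.0)

best = pop[np.argmax([fitness(ind) for ind in pop])]
# best should show ~1.0 for healthy elements and a clear dip at element 3
```

The damaged element is simply the entry of `best` with the lowest recovered stiffness factor, mirroring how the paper reads damage location and severity off the optimized stiffness parameters.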
Procedia PDF Downloads 237240 Analysis of Real Time Seismic Signal Dataset Using Machine Learning
Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.
Abstract:
Due to the closeness between seismic signals and non-seismic signals, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic events and non-seismic events depending on their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, a recursive short-term average/long-term average (STA/LTA) trigger, and the Carl STA/LTA trigger for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection; this serves as motivation for compiling a dataset of features for the identification and forecasting of seismic signals. We focus on feature-vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). Wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively, make up the experimental dataset. Keywords: Carl STA/LTA, feature extraction, real time, dataset, machine learning, seismic detection
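The recursive STA/LTA trigger works because the short-term average of signal energy rises much faster than the long-term average when an event arrives; their ratio crossing a threshold marks the pick. A minimal sketch on a synthetic trace (the window lengths, trigger threshold of 5, and signal parameters are illustrative assumptions):

```python
# Recursive (exponentially weighted) STA/LTA on signal energy, with the
# trigger onset taken as the first sample where the ratio exceeds a threshold.
import numpy as np

def recursive_sta_lta(x, n_sta=20, n_lta=400):
    energy = x ** 2
    sta = lta = energy[:n_lta].mean()            # warm-start both averages
    a_s, a_l = 1.0 / n_sta, 1.0 / n_lta
    ratio = np.zeros(len(x))
    for i, v in enumerate(energy):
        sta = a_s * v + (1 - a_s) * sta          # fast average tracks the event
        lta = a_l * v + (1 - a_l) * lta          # slow average tracks background
        ratio[i] = sta / max(lta, 1e-12)
    return ratio

rng = np.random.default_rng(8)
trace = rng.normal(0, 1.0, 5000)                 # background noise
trace[3000:3200] += rng.normal(0, 8.0, 200)      # embedded seismic event
ratio = recursive_sta_lta(trace)
onset = int(np.argmax(ratio > 5.0))              # first sample above trigger ratio
```

Warm-starting both averages avoids spurious triggers during filter start-up; the picked onset then lands within a few samples of the true event start.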
Procedia PDF Downloads 124239 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets
Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar
Abstract:
The study of the primary flow velocity and its mixing with self-impinging secondary jet flows is important from both the fundamental research and the application points of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models due to the presence of combustion, swirl, and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization of the jet interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional k-omega standard turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets in non-reacting flows. Efforts have been made to examine the flow features of the TVC system with various secondary jets at different divergent locations and jet impinging angles, with the same inlet jet pressure and mass flow ratio. The results of the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, owing to the fact that fixing the self-impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle. Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles
Procedia PDF Downloads 588