Search results for: vector insects
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1273

253 DNA Prime/MVTT Boost Enhances Broadly Protective Immune Response against Mosaic HIV-1 Gag

Authors: Wan Liu, Haibo Wang, Cathy Huang, Zhiwu Tan, Zhiwei Chen

Abstract:

The tremendous diversity of HIV-1 has been a major challenge for effective AIDS vaccine development. The mosaic approach offers the potential for vaccine design aimed at global protection: a mosaic HIV-1 Gag antigen provides the antigenic breadth for vaccine-elicited immune responses against a wider spectrum of viral strains. However, the enhancement of the immune response depends on the vaccination strategy used, and heterologous prime/boost regimens have been shown to elicit high levels of immune responses. Here, we investigated whether priming with plasmid DNA delivered by electroporation, followed by boosting with the live replication-competent modified vaccinia virus TianTan (MVTT) vector, combined with the mosaic antigenic sequence, could elicit a greater and broader antigen-specific response against HIV-1 Gag in mice. Compared with DNA alone, MVTT alone, or the MVTT/MVTT regimen, the DNA/MVTT regimen yielded high frequencies of broadly reactive, Gag-specific, polyfunctional, long-lived, and cytotoxic CD8+ T cells, together with increased anti-Gag antibody titers. At the same time, the vaccination upregulated PD-1+ and Tim-3+ CD8+ T cells, myeloid-derived suppressor cells, and Treg cells, balancing the stronger immune response induced. Importantly, the prime/boost vaccination helped control challenge with EcoHIV and with mesothelioma AB1-gag. The stronger protective Gag-specific immunity induced by the mosaic DNA/MVTT vaccine corroborates the promise of the mosaic approach and the potential of these two acceptably safe vectors to enhance anti-HIV immunity and cancer prevention.

Keywords: DNA/MVTT vaccine, EcoHIV, mosaic antigen, mesothelioma AB1-gag

Procedia PDF Downloads 216
252 Peril's Environment of Energetic Infrastructure Complex System, Modelling by the Crisis Situation Algorithms

Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.

Abstract:

Crisis situations are investigated and modelled within a complex system of energetic critical infrastructure operating in a perilous environment. Every crisis situation and peril originates in the occurrence of an emergency or crisis event, and both require assessment of critical/crisis interfaces. An emergency event may be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities for coping with it; or it may be unexpected, with no pre-prepared scenario. Both, however, require operational coping by means of crisis management. The operation, forms, characteristics, behaviour, and utilization of crisis management vary in quality, depending on the actual perils facing the critical infrastructure organization and on its prevention and training processes. The aim is always better security and continuity of the organization, and achieving it requires finding and investigating the critical/crisis zones and functions in models of the critical infrastructure organization operating in the pertinent perilous environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. It is necessary to derive and create an identification algorithm for critical/crisis interfaces; the locations of these interfaces flag crisis situations in models of the critical infrastructure organization. The model of a crisis situation is then displayed for a real Czech energetic critical infrastructure subject in a real perilous environment. Efficient measures, necessary for protecting the infrastructure, are derived for peril mitigation, for coping with crisis situations, and for the environmentally friendly survival, continuity, and advanced sustainable development of the organization.

Keywords: algorithms, energetic infrastructure complex system, modelling, peril's environment

Procedia PDF Downloads 374
251 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors

Authors: Sudhir Kumar Singh, Debashish Chakravarty

Abstract:

Slope stability analysis is an important aspect of geotechnical engineering. It also matters from a safety and economic point of view, as any slope failure leads to loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases have been studied by analyzing slope stability with the popular finite element method, and the data thus obtained have been used as training data for the supervised machine learning models. The input data have been used to train different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data have been used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
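As a rough illustration of the workflow described above, the sketch below trains a Random Forest classifier on synthetic slope descriptors labeled stable/unstable by a toy Factor of Safety rule. The feature set, the FoS formula, and all data are placeholders, not the authors' FEM results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(20, 80, n),   # joint dip angle (deg)
    rng.uniform(0, 1, n),     # degree of saturation
    rng.uniform(0, 50, n),    # rainfall intensity (mm/h)
    rng.uniform(0, 0.3, n),   # horizontal seismic coefficient
])
# Toy Factor of Safety: degrades with steeper joints, saturation and shaking
fos = 2.8 - 0.02 * X[:, 0] - 0.5 * X[:, 1] - 0.01 * X[:, 2] - 2.0 * X[:, 3]
y = (fos >= 1.0).astype(int)  # 1 = stable, 0 = unstable

# Hold out distinct test data, as in the paper's evaluation protocol
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

In a real study, the rows of `X` and the FoS labels would come from the FEM simulations rather than a closed-form rule.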

Keywords: finite element method, geotechnical engineering, machine learning, slope stability

Procedia PDF Downloads 68
250 Public Debt Shocks and Public Goods Provisioning in Nigeria: Implication for National Development

Authors: Amenawo I. Offiong, Hodo B. Riman

Abstract:

Nigeria's public debt profile has continuously increased over the years. The drop in international crude oil prices has further worsened the country's revenue position, necessitating further acquisition of public debt to bridge the revenue deficit. Yet, looking back at the increasing public sector spending, there are concerns that government spending has not translated into an increase in the public goods provided for the country. Using data from 1980 to 2014, the study therefore investigates the factors responsible for the poor provision of public goods in the face of an increasing public debt profile. Governance and tax revenue were introduced into an unrestricted VAR model as structural variables. The results suggested that governance and tax revenue are structural determinants of the effectiveness of public goods provisioning in Nigeria, and the study identified weak governance as the major reason for the non-provision of public goods. While tax revenue exerted a positive influence on the provision of public goods, weak governance was observed to crowd out the benefits from increased tax revenue. The study therefore recommends a reappraisal of the governance system in Nigeria: elected officers should be more transparent and accountable to the electorates they represent. Furthermore, the study advocates annual auditing of all government MDA accounts by external auditors to ensure (a) accountability in public debt utilization, (b) transparency in the implementation of program support funds, (c) integrity of the agencies responsible for program management, and (d) measurement of program effectiveness against the amount of funds expended.
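The VAR-and-impulse-response machinery behind the study can be sketched with plain least squares: fit a first-order VAR on simulated stand-ins for public goods provisioning, public debt, and tax revenue, then trace the response of public goods to a debt shock. The coefficient matrix and series are illustrative, not the Nigerian data; in practice a library such as statsmodels would estimate the VAR and its IRFs.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
# true VAR(1) dynamics for (goods, debt, tax); chosen to be stable
A_true = np.array([[0.5, -0.2, 0.3],
                   [0.0,  0.8, 0.0],
                   [0.1,  0.0, 0.6]])
Y = np.zeros((T, 3))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, 3)

# OLS estimate of the VAR(1) coefficient matrix: Y_t ~ A Y_{t-1}
X_lag, Y_next = Y[:-1], Y[1:]
A_hat = np.linalg.lstsq(X_lag, Y_next, rcond=None)[0].T

# impulse response of 'goods' (index 0) to a unit 'debt' shock (index 1)
shock = np.array([0.0, 1.0, 0.0])
irf = [(np.linalg.matrix_power(A_hat, h) @ shock)[0] for h in range(11)]
print([round(v, 3) for v in irf])
```

The horizon-h response is simply the (goods, debt) entry of the h-th power of the estimated coefficient matrix.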

Keywords: impulse response function, public debt shocks, governance, public goods, tax revenue, vector auto-regression

Procedia PDF Downloads 221
249 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises

Authors: Jiří F. Urbánek, David Král

Abstract:

Small and Middle Enterprises (SMEs) have a specific mission, characteristics, and behavior in globally competitive business environments. They must respect policies, rules, requirements, and standards in all their internal and external processes of supplier-customer chains and networks. The aim and purpose of this paper is to introduce computational assistance that enables the use of the prevailing operating environment MS Office (SmartArt, etc.) for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method. In the SME's global environment, it provides the capability to meet the organization's commitments regarding the effectiveness of the quality management system in satisfying customer requirements, the continual improvement of the overall performance and efficiency of the organization's and SME's processes, and its societal security via continual planning improvement. The maps of the DYVELOP model, the blazons, can mathematically and graphically express the relationships among entities, actors, and processes, including the discovery and modeling of cyclical cases and their phases. The blazons call for a live PowerPoint presentation for better comprehension of this paper's mission of added-value analysis. The crisis management of SMEs is obliged to use these cycles for successful coping with crisis situations: repeated cycling of such cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and of its advanced possibilities for sustainable development.

Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises

Procedia PDF Downloads 316
248 The Relations between Language Diversity and Similarity and Adults' Collaborative Creative Problem Solving

Authors: Z. M. T. Lim, W. Q. Yow

Abstract:

Diversity in individual problem-solving approaches, culture, and nationality has been shown to have positive effects on collaborative creative processes in organizational and scholastic settings. For example, diverse graduate and organizational teams consisting of members with both structured and unstructured problem-solving styles were found to have more creative ideas on a collaborative idea generation task than teams comprised solely of members with either structured or unstructured problem-solving styles. However, being different may not always benefit the collaborative creative process. In particular, speaking different languages may hinder mutual engagement through impaired communication and thus collaboration. Instead, sharing similar languages may facilitate mutual engagement in collaborative tasks. However, no studies have explored the relations between language diversity and adults' collaborative creative problem solving. Sixty-four Singaporean English-speaking bilingual undergraduates were paired into similar or dissimilar language pairs based on the second language they spoke (e.g., for similar language pairs, both participants spoke English-Mandarin; for dissimilar language pairs, one participant spoke English-Mandarin and the other spoke English-Korean). Each participant completed the Raven's Progressive Matrices task individually. Next, they worked in pairs to complete a collaborative divergent thinking task in which they used mind-mapping techniques to brainstorm ideas on a given problem together (e.g., how to keep insects out of the house). Lastly, the pairs worked on a collaborative insight problem-solving task (the Triangle of Coins puzzle), in which they needed to flip a triangle of ten coins around by moving only three coins.
Pairs who had prior knowledge of the Triangle of Coins puzzle were asked to complete an equivalent matchstick task instead, in which they needed to make seven squares by moving only two matchsticks in a given array of matchsticks. Results showed that, after controlling for intelligence, similar language pairs completed the collaborative insight problem-solving task faster than dissimilar language pairs. Intelligence also moderated these relations: among adults of lower intelligence, similar language pairs solved the insight problem-solving task faster than dissimilar language pairs, whereas these differences in speed were not found in adults of higher intelligence. No differences were found between similar and dissimilar language pairs in the number of ideas generated in the collaborative divergent thinking task. Overall, sharing similar languages seems to enrich collaborative creative processes, and these effects were especially pertinent to pairs of lower intelligence, providing guidelines for the formation of groups based on shared languages in collaborative creative processes. However, the positive effects of shared languages appear to be limited to the insight problem-solving task and not the divergent thinking task. This could be due to the facilitative effects of other factors of diversity found in previous literature: background diversity, for example, may have a larger facilitative effect on the divergent thinking task than on the insight problem-solving task, given the varied experiences individuals bring to the task. In conclusion, this study contributes to the understanding of the effects of language diversity in collaborative creative processes and qualifies the generally positive effects that diversity has on these processes.

Keywords: bilingualism, diversity, creativity, collaboration

Procedia PDF Downloads 279
247 Preparation and Characterization of Chitosan Nanoparticles for Delivery of Oligonucleotides

Authors: Gyati Shilakari Asthana, Abhay Asthana, Dharm Veer Kohli, Suresh Prasad Vyas

Abstract:

Purpose: The therapeutic potential of oligonucleotides (ODNs) is primarily dependent upon their safe and efficient delivery to specific cells, overcoming degradation and maximizing cellular uptake in vivo. The present study focuses on designing low-molecular-weight (LMW) chitosan nanoconstructs to meet the requirements of safe and effectual delivery of ODNs. LMW chitosan is a biodegradable, water-soluble, biocompatible polymer that is useful as a non-viral vector for gene delivery due to its better stability in water. Methods: LMW chitosan-ODN nanoparticles (CHODN NPs) were formulated by a self-assembly method using various N/P ratios (molar ratios of amine groups of CH to phosphate moieties of ODNs: 0.5:1, 1:1, 3:1, 5:1, and 7:1). The developed CHODN NPs were evaluated with respect to gel retardation assay, particle size, zeta potential, cytotoxicity, and transfection efficiency. Results: Complete complexation of CH/ODN was achieved at charge ratios of 0.5:1 or above, and the CHODN NPs displayed resistance against DNase I. On increasing the N/P ratio of CH/ODN, the particle size of the NPs decreased, whereas the zeta potential (ZV) increased. No significant toxicity was observed at any CH concentration. The transfection efficiency increased on raising the N/P ratio from 1:1 to 3:1, whereas it decreased with further increments in the N/P ratio up to 7:1. Maximum transfection of CHODN NPs in both cell lines (RAW 264.7 cells and HeLa cells) was achieved at an N/P ratio of 3:1, suggesting that the transfection efficiency of CHODN NPs is dependent on the N/P ratio. Conclusion: The present study thus indicates that LMW chitosan nanoparticulate carriers would be an acceptable choice to improve transfection efficiency for in vitro as well as in vivo delivery of oligonucleotides.

Keywords: LMW-chitosan, chitosan nanoparticles, biocompatibility, cytotoxicity study, transfection efficiency, oligonucleotide

Procedia PDF Downloads 819
246 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge

Authors: Yulan Wu

Abstract:

The spread of fake news on social media has caused significant societal harm to the public and the nation, with threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain. However, when these methods are applied to social platforms whose news spans multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data, but such methods overlook the fact that a news article typically belongs to multiple domains, losing the domain knowledge contained within the news text. Research has also found that news records in different domains often use different vocabularies to describe their content. Building on these observations, this paper proposes a fake news detection framework that combines domain knowledge and expert knowledge. First, an unsupervised domain discovery module generates a low-dimensional vector for each news article, the domain embedding, which retains the multi-domain knowledge of the news content. Then, a feature extraction module uses the discovered domain embeddings to guide multiple experts in extracting news knowledge for the total feature representation. Finally, a classifier determines whether the news is fake or not. Experiments show that this approach improves multi-domain fake news detection performance while reducing the cost of manually labeling domain labels.
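A toy version of the pipeline can make the idea concrete. Below, soft k-means cluster memberships over TF-IDF vectors serve as a crude stand-in for the unsupervised domain discovery module (an article may belong to several "domains" at once), and a logistic regression replaces the expert-guided feature extractor and classifier. All headlines and labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

texts = [
    "central bank cuts interest rates amid recession fears",
    "miracle herb cures all known diseases overnight",
    "parliament passes new election funding bill",
    "vaccine secretly contains mind control chips",
    "quarterly earnings beat analyst expectations",
    "celebrity endorses investment scheme promising 1000% returns",
]
labels = np.array([0, 1, 0, 1, 0, 1])  # 0 = real, 1 = fake (toy labels)

tfidf = TfidfVectorizer().fit_transform(texts).toarray()

# Unsupervised "domain discovery": soft membership over k clusters.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(tfidf)
e = np.exp(-km.transform(tfidf))            # closer cluster -> larger weight
dom_emb = e / e.sum(axis=1, keepdims=True)  # rows sum to 1: a domain embedding

# Concatenate text features with the domain embedding, then classify.
features = np.hstack([tfidf, dom_emb])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

The real framework uses learned embeddings and a mixture of expert networks, but the shape of the data flow — text, then domain vector, then combined features, then classifier — is the same.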

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 33
245 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of the inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, their orientation, the volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed to build this predictive model, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN). Moreover, the research goes beyond prediction by delving into an inverse analysis using genetic algorithms, the intent being to unveil the intrinsic characteristics of composite materials from their thermomechanical responses. The foundation of this research is the establishment of a comprehensive database covering the array of input parameters mentioned earlier. This database, enriched with the diversity of input variables, serves as the bedrock for the creation of machine learning and genetic algorithm-based models, which are meticulously trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, with scores ranging between 0.97 and 0.99. This achievement demonstrates the potential of this innovative approach in the field of materials engineering.
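The inverse-analysis step can be sketched as follows: a genetic algorithm searches for the composite parameters (volume fraction and contrast, within the paper's stated ranges) whose predicted effective property matches a target response. The forward model here is a simple rule-of-mixtures placeholder, not the paper's trained ML surrogate, and the GA operators are deliberately minimal.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(vf, contrast, e_matrix=1.0):
    # Voigt-style rule of mixtures for the effective modulus (toy forward model)
    return (1 - vf) * e_matrix + vf * contrast * e_matrix

target = forward(0.25, 50.0)  # pretend this response was measured

def fitness(pop):
    return -np.abs(forward(pop[:, 0], pop[:, 1]) - target)

# population columns: volume fraction in [0.05, 0.4], contrast in [10, 200]
pop = np.column_stack([rng.uniform(0.05, 0.4, 100), rng.uniform(10, 200, 100)])
for _ in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]            # selection: keep the top 20
    children = parents[rng.integers(0, 20, 100)]  # clone parents
    children = children + rng.normal(0, [0.01, 2.0], (100, 2))  # mutation
    children[:, 0] = children[:, 0].clip(0.05, 0.4)
    children[:, 1] = children[:, 1].clip(10, 200)
    pop = children
best = pop[np.argmax(fitness(pop))]
print("best (vf, contrast):", best, "-> response", forward(best[0], best[1]))
```

Note that this inverse problem is non-unique (several volume-fraction/contrast pairs can produce the same effective modulus), which is exactly why the paper pairs the GA with a richer forward model and multiple response quantities.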

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 31
244 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen

Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez

Abstract:

Bee-pollen is an apicultural food product with growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15 – 20 GAE/g), and carotenoids (600 – 900 µg/g). These properties depend on the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g., China, the United States, Japan, and Spain, among others. Beekeepers collect bee-pollen with traps at the entrance of the hive; after the removal of foreign particles and drying, the product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor more rapid spoilage of foods with elevated water activity. These climatic conditions also trigger the proliferation of microorganisms and insects. This, added to the fact that beekeepers usually do not have adequate processing systems for bee-pollen, leads to deficiencies in the quality and safety of the product. At the same time, the Andean region of South America, lying on the equator, typically has a high production of bee-pollen of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2500 meters above sea level, with extreme solar ultraviolet radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids than those collected elsewhere. Considering this, the improvement of bee-pollen processing facilities through technical modifications and the implementation of an integrated cleaning and drying system for the product in an apiary in the area was proposed.
The beehives were modified through the installation of alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, a cyclone-type system was implemented for the separation of impurities, complementary to screening equipment. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of foreign impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed the processing of larger quantities of product with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, appropriate for the conservation of bee-pollen. Additionally, the contents of functional and nutritional compounds were not affected; indeed, an increase of up to 25% in phenol content was observed, with only a non-significant decrease in carotenoid content and antioxidant activity.

Keywords: beekeeping, drying, food processing, food safety

Procedia PDF Downloads 79
243 Identification of Odorant Receptors through the Antennal Transcriptome of the Grapevine Pest, Lobesia botrana (Lepidoptera: Tortricidae)

Authors: Ricardo Godoy, Herbert Venthur, Hector Jimenez, Andres Quiroz, Ana Mutis

Abstract:

In agriculture, grape production has great economic importance at the global level; in 2013, plantations of this fruit covered 7.4 million hectares (ha) worldwide. Chile is the world's leading exporter, with 800,000 tons. However, these figures have been threatened by the attack of the grapevine moth, Lobesia botrana (Denis & Schiffermuller) (Lepidoptera: Tortricidae), since its detection in 2008. Nowadays, semiochemicals, in particular the major component of the sex pheromone, (E,Z)-7,9-dodecadienyl acetate, are used in mating disruption methods to control L. botrana. How insect pests recognize these molecules is the subject of huge efforts to deorphanize their olfactory mechanisms at the molecular level. An interesting group of proteins has been identified in the antennae of insects: odorant-binding proteins (OBPs) are known to transport molecules to odorant receptors (ORs) and a co-receptor (ORCO), causing a behavioral change in the insect. Other proteins, such as chemosensory proteins (CSPs), ionotropic receptors (IRs), odorant-degrading enzymes (ODEs), and sensory neuron membrane proteins (SNMPs), also seem to be involved, but few studies have been performed so far. This has led to increasing interest in insect communication at the molecular level, which has contributed both to a better understanding of the olfaction process and to the design of new pest management strategies. To date, it has been reported that ORs can detect one odorant or a small group of odorants in a specific way. Therefore, the objective of this study is the identification of genes encoding these ORs using the antennal transcriptome of L. botrana. Total RNA was extracted from females and males of L. botrana, and the antennal transcriptomes were sequenced by a Next Generation Sequencing service on an Illumina HiSeq 2500 platform with 50 million reads per sample.
Unigenes were assembled using the Trinity v2.4.0 package, and transcript abundance was obtained using edgeR. Genes were identified using BLASTN and BLASTX installed locally on a Unix system, based on our own Tortricidae database. Unigenes related to ORs were characterized using ORFfinder and the protein BLASTp server. Finally, a phylogenetic analysis was performed with the candidate LbotOR amino acid sequences together with OR amino acid sequences from other moths, such as Bombyx mori and Cydia pomonella. Our findings suggest 61 genes encoding ORs and one gene encoding an ORCO in both sexes, the greatest difference being found in OR6, whose transcript abundance (FPKM) was 1.48 in females versus 324.00 in males. In addition, according to the phylogenetic analysis, OR6 is closely related to OR1 in Cydia pomonella and to OR6 and OR7 in Epiphyas postvittana, which have been described as pheromone receptors (PRs). These results represent the first evidence of ORs present in the antennae of L. botrana and a suitable starting point for further functional studies with selected ORs, such as OR6, which is potentially related to pheromone recognition.
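For readers unfamiliar with the abundance measure behind the OR6 comparison, FPKM (fragments per kilobase of transcript per million mapped fragments) normalizes a raw fragment count by transcript length and library depth. A minimal illustration with made-up numbers (a hypothetical 1.2 kb OR transcript in a 40-million-fragment library):

```python
def fpkm(fragments, transcript_len_bp, total_mapped_fragments):
    # fragments per kilobase of transcript per million mapped fragments
    return fragments / (transcript_len_bp / 1e3) / (total_mapped_fragments / 1e6)

# hypothetical: 2,000 fragments map to a 1.2 kb transcript,
# out of 40 million mapped fragments in the library
print(round(fpkm(2000, 1200, 40_000_000), 2))
```

The same normalization applied to the female and male libraries is what makes the 1.48 versus 324.00 comparison for OR6 meaningful across samples of different sequencing depth.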

Keywords: antennal transcriptome, lobesia botrana, odorant receptors (ORs), phylogenetic analysis

Procedia PDF Downloads 166
242 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying

Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra

Abstract:

Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the development and evaluation of the method over a moisture content range of 10 to 15 percent (w.b.) of the pasta. Prediction models based on partial least squares (PLS) regression were developed in the near-infrared range. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the comparison of three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). Spectra of the pasta samples were treated with the different mathematical pre-treatments before being used to build models relating the spectral information to moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R2 = 0.983), clearly indicating that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775); the MMN pre-processing method was found most suitable, with a maximum coefficient of determination (R2) of 0.9875 for the calibration model developed.

Keywords: FT-NIR, pasta, moisture determination, food engineering

Procedia PDF Downloads 228
241 Reducing the Imbalance Penalty Through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: Hayriye Anıl, Görkem Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries with promising potential in geothermal energy production, given its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand owing to the inadequacy of the production forecasts given in the day-ahead market. A better production forecast reduces market participants' imbalance penalties and improves balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. The modeling was carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimates on the feature-engineered dataset showed lower error rates than those on the basic dataset. It was concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, making the forecasts both optimal and profitable.
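A sketch of the best-performing setup: Support Vector Regression on lagged generation values plus simple engineered calendar features. The series below is synthetic (a daily cycle plus noise) standing in for the plant's hourly output, and the lag window, kernel parameters, and units are illustrative choices, not the study's.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
T = 24 * 60                                   # 60 days of hourly data
t = np.arange(T)
gen = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, T)  # MWh

# feature engineering: previous 24 hours as lags + hour-of-day encoding
lags = 24
X = np.column_stack([gen[i:T - lags + i] for i in range(lags)])
hour = t[lags:] % 24
X = np.column_stack([X, np.sin(2 * np.pi * hour / 24),
                        np.cos(2 * np.pi * hour / 24)])
y = gen[lags:]

split = len(y) - 24 * 7                       # hold out the final week
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
mae = mean_absolute_error(y[split:], model.predict(X[split:]))
print(f"MAE on held-out week: {mae:.2f} MWh")
```

The held-out forecast error is what would feed directly into the day-ahead imbalance penalty calculation the study performs.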

Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting

Procedia PDF Downloads 74
240 Economic Growth: The Nexus of Oil Price Volatility and Renewable Energy Resources among Selected Developed and Developing Economies

Authors: Muhammad Siddique, Volodymyr Lugovskyy

Abstract:

This paper explores how nations might mitigate the unfavorable impacts of oil price volatility on economic growth by switching to renewable energy sources. The impacts of uncertain factor prices on economic activity are examined by looking at the realized volatility (RV) of oil prices rather than the more traditional approach of looking at oil price shocks. The United States of America (USA), China, India, the United Kingdom (UK), Germany, Malaysia, and Pakistan are included to round out the traditional literature's focus on oil-importing and oil-exporting economies. Granger causality tests, impulse response functions, and variance decompositions demonstrate that, in a vector autoregressive (VAR) setting, the negative impacts of oil price volatility extend beyond what can be explained by oil price shocks alone for all of the nations in the sample. Nations differ in their vulnerability to changes in oil prices and in other relevant factors, such as sectoral composition and the energy mix; the conventional approach, which only considers whether a country is a net oil importer or exporter, is therefore inadequate. The potential economic advantages of initiatives to decouple the macroeconomy from volatile commodity markets are shown through simulations of volatility shocks under alternative energy mixes (with greater proportions of renewables). It is determined that in developing countries like Pakistan, increasing the use of renewable energy sources might lessen an economy's sensitivity to changes in oil prices, although country-specific study is required to identify particular policy actions. In sum, the research provides an innovative justification for reducing economic growth's dependence on stable oil prices in the sample countries.
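The realized volatility measure at the heart of the paper is straightforward to compute: the square root of the sum of squared higher-frequency log returns within each lower-frequency period. The sketch below builds monthly RV from simulated daily oil-price log returns; the return scale and calendar (21 trading days per month) are stand-ins, not Brent or WTI data.

```python
import numpy as np

rng = np.random.default_rng(0)
days_per_month, months = 21, 12
# simulated daily log returns of the oil price, one row per month
log_ret = rng.normal(0, 0.02, (months, days_per_month))
# monthly realized volatility: sqrt of the sum of squared daily returns
rv = np.sqrt((log_ret ** 2).sum(axis=1))
print(rv.round(3))
```

The resulting monthly RV series is what would enter the VAR alongside output and the other macro variables, in place of a discrete oil-price-shock dummy.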

Keywords: oil price volatility, renewable energy, economic growth, developed and developing economies

Procedia PDF Downloads 52
239 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression

Authors: J. S. Saini, P. P. K. Sandhu

Abstract:

The multi-hop nature of Wireless Mesh Networks (WMNs) and the rapid growth of throughput demands have led to multi-channel, multi-radio mesh structures, but co-channel interference reduces the total throughput, especially in multi-hop networks. Quality of Service (QoS) refers to a broad collection of networking technologies and techniques that guarantee a network's ability to deliver the desired services with predictable results. QoS can be directed at a network interface, at the performance of a specific server or router, or at specific applications. Because of interference among concurrent transmissions, QoS routing in multi-hop wireless networks is a formidable task; in a multi-channel wireless network, two transmissions using the same channel may still interfere with each other. This paper adopts the Destination Sequenced Distance Vector (DSDV) routing protocol to locate a secure and optimised path. The proposed technique also utilizes Lempel–Ziv–Welch (LZW) lossless data compression and intra-cluster data aggregation to enhance communication between the source and the destination. Clustering aggregates multiple packets and locates a single route through the clusters, improving intra-cluster data aggregation, while LZW lossless compression reduces the data packet size and hence consumes less energy, increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the projected technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques.
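The LZW step shrinks packets by replacing repeated phrases with dictionary codes learned on the fly. A minimal sketch of the compressor (textbook LZW, not the paper's MATLAB implementation) is:

```python
def lzw_compress(data: str):
    """Classic LZW: emit the dictionary code for the longest known prefix,
    then add the extended phrase to the dictionary."""
    dictionary = {chr(i): i for i in range(256)}  # all single bytes
    next_code = 256
    w, out = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                      # keep growing the current phrase
        else:
            out.append(dictionary[w])   # emit code for longest match
            dictionary[wc] = next_code  # learn the new phrase
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out
```

Repetitive payloads (common in sensor traffic) compress well, which is the source of the energy saving claimed above.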

Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control

Procedia PDF Downloads 307
238 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and dietary factors as compared to other cancer types. The aim of the study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms: Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. The analysis showed accuracy (in percent) and Brier score of Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27. The Naive Bayes algorithm outperforms the others, with very low false positive rates, a low Brier score, and good accuracy. Its classification results in predicting EGC are very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians to educate patients and the public, so that the mortality of gastric cancer can be reduced or avoided with this knowledge mining work.
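The two headline metrics, accuracy and the Brier score, can be computed directly from predicted positive-class probabilities and 0/1 labels. A minimal sketch (the paper's own evaluation code is not given) is:

```python
def brier_score(probs, labels):
    """Mean squared error between the predicted probability of the
    positive class and the 0/1 label; lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels, threshold=0.5):
    """Fraction of correct hard predictions at a probability threshold."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

Unlike accuracy, the Brier score rewards well-calibrated probabilities, which is why the abstract reports both.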

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 111
237 Eco-Friendly Cultivation

Authors: Shah Rucksana Akhter Urme

Abstract:

Agriculture is the main source of food for human consumption, and with the world's huge population to feed, the pressure on food supply is increasing day by day. Quality strains, improved plantation and farming technology, synthetic fertilizers, readily available irrigation, insecticides, and harvesting technology are the main factors that meet the huge global demand for food. However, dependence on these limited resources and excessive consumption of land, water, and fertilizers exhausts the resources and leaves severe climate effects for our future generations. Agriculture is a major contributor to global warming, emitting more greenhouse gases than all forms of transportation combined, largely from nitrous oxide released by fertilized fields and carbon dioxide from the cutting of rain forests to grow crops. Farming is the thirstiest user of our precious water supplies and a major polluter: runoff from fertilizers disrupts fragile lakes, rivers, and coastal ecosystems across the globe, accelerating the loss of biodiversity and crucial habitat and driving wildlife extinction. It is needless to say that we must be more concerned with conserving soil nutrients, storing water, and avoiding excessive dependence on synthetic fertilizers and insecticides. Eco-friendly cultivation could be a potential alternative that minimizes the effects of agriculture on our environment. This review paper addresses organic cultivation, following in particular biotechnological processes focused on bio-fertilizers and bio-pesticides. Intense use of chemical pesticides and insecticides has severe effects on both human life and biodiversity; this cultivation process offers farmers an alternative that is non-hazardous, cost-effective, and eco-friendly.
Organic fertilizers such as tea residue and ashes might be the best alternatives to synthetic fertilizer, as they play an important role in increasing soil nutrients and fertility. Ashes contain various essential and non-essential minerals that are required for plant growth. Organic pesticides such as neem spray are beneficial for crops, being toxic to pests and insects. Recycled and composted crop wastes and animal manures, crop rotation, green manures, legumes, etc. support soil fertility free from hazardous chemical practices. Finally, water hyacinth and algae are potential sources of nutrients, even serving as alternatives to soil for cultivation, while storing water for continuous supply. Under inorganic agricultural practice, consuming fruits and vegetables becomes a threat to both human life and the ecosystem, and synthetic fertilizers and pesticides are responsible for it. Farmers who practice eco-friendly farming take steps to protect the environment, particularly by severely limiting the use of pesticides and avoiding synthetic chemical fertilizers, so that organic systems pose reduced environmental harm and health risk.

Keywords: organic farming, biopesticides, organic nutrients, water storage, global warming

Procedia PDF Downloads 34
236 Affordable Aerodynamic Balance for Instrumentation in a Wind Tunnel Using Arduino

Authors: Pedro Ferreira, Alexandre Frugoli, Pedro Frugoli, Lucio Leonardo, Thais Cavalheri

Abstract:

The teaching of fluid mechanics in engineering courses is, in general, a source of great difficulty for learning. Experiments in didactic wind tunnels can facilitate the education of future professionals. The objective of this proposal is the development of a low-cost aerodynamic balance to be used in a didactic wind tunnel. The set comprises an Arduino microcontroller, programmed with open source software, linked to load cells built by students from another project. The didactic wind tunnel is 5.0 m long, and the test section is 90.0 cm x 90.0 cm x 150.0 cm. A WEG® electric motor, model W-22 of 9.2 HP, moves a fan with nine blades, each 32.0 cm long. A WEG® frequency inverter, model CFW 08 (vector inverter), is responsible for wind speed control and for reversing the motor's rotational direction. A flat-convex airfoil prototype was tested by measuring the drag and lift forces at certain angles of attack; the air flow conditions remained constant, monitored by a Pitot tube connected to an EXTECH® Instruments digital differential pressure manometer, model HD755. The results indicate good agreement with theory. The choice of components resulted in a low-cost product providing a high level of specific knowledge of fluid mechanics, which may be a good alternative for teaching in countries with scarce educational resources. The system also allows expansion to measure other parameters, such as fluid velocity, temperature, and pressure, as well as the possibility of automating other functions.
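A sketch of the data reduction such a balance needs: airspeed from the Pitot tube's dynamic pressure, and dimensionless lift/drag coefficients from the load-cell forces. The formulas are standard; the sea-level density default and the example numbers are assumptions, not readings from the paper.

```python
def airspeed_from_pitot(dynamic_pressure_pa, air_density=1.225):
    """Bernoulli: V = sqrt(2*q/rho), with q from the Pitot-static probe.
    Default density is sea-level standard air (assumption)."""
    return (2.0 * dynamic_pressure_pa / air_density) ** 0.5

def force_coefficient(force_n, dynamic_pressure_pa, ref_area_m2):
    """Nondimensionalize a measured force: C = F / (q * S).
    Works identically for lift (CL) and drag (CD)."""
    return force_n / (dynamic_pressure_pa * ref_area_m2)
```

With these two helpers, each Arduino sample (two load-cell forces plus one differential pressure) becomes a (CL, CD) point for the tested angle of attack.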

Keywords: aerodynamic balance, wind tunnel, strain gauge, load cell, Arduino, low-cost education

Procedia PDF Downloads 404
235 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior sometimes shows a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, and specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
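For reference, the standard (unpreconditioned, non-parallel) Bi-CGSTAB iteration that serves as the baseline above can be sketched as follows. This is the textbook van der Vorst formulation, not the authors' code.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Textbook Bi-CGSTAB for Ax = b, starting from x0 = 0."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    r_hat = r.copy()                     # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        rho_new = r_hat @ r              # inner product #1 (global reduction)
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)    # inner product #2
        s = r - alpha * v                # intermediate residual
        if np.linalg.norm(s) < tol:      # converged mid-step
            x = x + alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)        # stabilization parameter
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol:
            break
    return x
```

The inner products marked above are exactly the global reductions that the parallel variants try to batch or overlap, at the cost of the accuracy of the Bi-CG coefficients.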

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 132
234 From Biowaste to Biobased Products: Life Cycle Assessment of VALUEWASTE Solution

Authors: Andrés Lara Guillén, José M. Soriano Disla, Gemma Castejón Martínez, David Fernández-Gutiérrez

Abstract:

The worldwide population is increasing exponentially, which causes a rising demand for food, energy, and non-renewable resources. These demands must be addressed from a circular economy point of view. Under this approach, obtaining strategic products from biowaste is crucial for society to keep the current lifestyle while reducing the environmental and social issues linked to the linear economy. This is the main objective of the VALUEWASTE project. VALUEWASTE valorizes urban biowaste into proteins for food and feed and into biofertilizers, closing the loop of this waste stream. To achieve this objective, the project validates three value chains, which begin with the anaerobic digestion of the biowaste. From the anaerobic digestion, three by-products are obtained: i) methane, which is used by microorganisms that are transformed into microbial proteins; ii) digestate, which is fed to black soldier fly larvae, producing insect proteins; and iii) a nutrient-rich effluent, which is transformed into biofertilizers. VALUEWASTE is an innovative solution that combines different technologies to valorize biowaste entirely. However, it must also be demonstrated that the solution is greener than traditional technologies (the baseline systems). On the one hand, the proteins from microorganisms and insects are compared with reference protein production systems (gluten, whey, and soybean). On the other hand, the biofertilizers are compared with the production of mineral fertilizers (ammonium sulphate and synthetic struvite). Therefore, the aim of this study is to show that biowaste valorization can reduce the environmental impacts linked to both traditional protein manufacturing processes and mineral fertilizers, not only at pilot scale but also at industrial scale. In the present study, both the baseline systems and the VALUEWASTE solution are evaluated through Environmental Life Cycle Assessment (E-LCA).
The E-LCA is based on the ISO 14040 and 14044 standards, and the Environmental Footprint methodology was used to evaluate the environmental impacts. The results for the baseline cases show that food proteins from whey have the highest environmental impact on ecosystems compared to the other protein sources: 7.5 and 15.9 times higher than soybean and gluten, respectively. Comparing feed soybean and gluten, soybean has an environmental impact on human health 195.1 times higher. In the case of biofertilizers, synthetic struvite has higher impacts than ammonium sulphate: 15.3 (ecosystems) and 11.8 (human health) times higher, respectively. The results shown in the present study will be used as a reference to demonstrate the better environmental performance of the bio-based products obtained through the VALUEWASTE solution. The E-LCA performed in the VALUEWASTE project also has direct implications for investment and policy. On the one hand, better environmental performance, backed by the E-LCA, will help remove the barriers linked to these kinds of technologies and boost investment. On the other hand, it will seed the design of new policies fostering these types of solutions to achieve two key targets of the European Community: being self-sustainable and carbon neutral.

Keywords: anaerobic digestion, biofertilizers, circular economy, nutrients recovery

Procedia PDF Downloads 67
233 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation, but the published results do not show satisfactory classification accuracy. This work aims at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF Database, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR interval windows, and four specific features were calculated for each window. Two pattern recognition methods, Principal Component Analysis (PCA) and the Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the features that best discriminate between AF and normal sinus rhythm. Despite its very simple structure, the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detector has several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
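The abstract does not name the four window features, so the sketch below computes typical RR-interval irregularity measures (mean RR, SDNN, RMSSD, pNN50) as a stand-in; these are common AF screening features, assumed here rather than taken from the paper.

```python
import math

def rr_features(rr_ms):
    """Irregularity features for one window of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    mean_rr = sum(rr_ms) / len(rr_ms)
    # SDNN: standard deviation of the RR intervals.
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / len(rr_ms))
    # RMSSD: root mean square of successive differences.
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: fraction of successive differences exceeding 50 ms.
    pnn50 = sum(abs(d) > 50 for d in diffs) / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}
```

AF produces an "irregularly irregular" rhythm, so RMSSD and pNN50 separate AF windows from sinus rhythm well, and a feature vector like this is what PCA and the LVQ network would consume.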

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 237
232 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation is a motivating starting point. In this work, the authors extend NF neural networks to the setting where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variation in y not, or less, related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variation, due to factors such as lighting condition and subject ID, from the other random variation. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
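The change-of-variables identity at the heart of any normalizing flow, log p(y) = log p_base(z) + log|det dz/dy|, can be illustrated with a one-dimensional affine flow. The actual AP-CDE transform is a deep invertible network with the augmented latent [zₚ, zₙ]; this is only the scalar skeleton of the density computation.

```python
import math

def affine_flow_logpdf(y, mu, sigma):
    """log p(y) for the flow z = (y - mu) / sigma with a standard normal
    base density: log N(z; 0, 1) plus the log-Jacobian log|dz/dy|."""
    z = (y - mu) / sigma
    log_base = -0.5 * (z * z + math.log(2 * math.pi))
    log_det = -math.log(sigma)          # |dz/dy| = 1/sigma
    return log_base + log_det
```

In AP-CDE, conditioning enters by tying part of the latent (zₚ) to the predictive posterior for x, while the Jacobian bookkeeping stays exactly as above, just in high dimension.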

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 62
231 The Impact of Improved Grain Storage Technology on Marketing Behaviour and Livelihoods of Maize Farmers: A Randomized Controlled Trial in Ethiopia

Authors: Betelhem M. Negede, Maarten Voors, Hugo De Groote, Bart Minten

Abstract:

Farmers in Ethiopia produce most of their own food during one agricultural season per year. They therefore need on-farm storage technologies to bridge the lean season and benefit from price arbitrage. Maize stored in traditional storage bags has no protection from insects and molds, leading to high storage losses. In Ethiopia, access to and use of modern storage technologies are still limited, restraining farmers from benefiting from local maize price fluctuations. We used a randomized controlled trial among 871 maize farmers to evaluate the impacts of Purdue Improved Crop Storage (PICS) bags, also known as hermetic bags, on storage losses and especially on behavioral changes with respect to consumption, marketing, and income among maize farmers in Ethiopia. This study builds upon the limited previous experimental research that has tried to understand farmers' grain storage and post-harvest losses and to identify the mechanisms behind the persistence of these challenges. Our main hypothesis is that access to PICS bags allows farmers to increase production, storage, and maize income; lengthen maize storage; reduce post-harvest losses; and improve their food security. Our results show that even though farmers received only three PICS bags, representing 10 percent of their total maize stored, they delayed their maize sales by two weeks. However, we find no treatment effect on maize income, suggesting that an arbitrage of two weeks is too small. We also do not find any reduction in storage losses, because farmers reacted by selling early and by using cheap and readily available but potentially harmful storage chemicals. Examining heterogeneous treatment effects between highland and lowland villages, we find a decrease in the percentage of maize stored of 4 percent in the highland villages.
This confirms that location-specific factors, such as agro-ecology and proximity to markets, are important in determining whether and how much of the harvest a farmer stores. These findings highlight the benefits of hermetic storage bags in allowing farmers to make inter-temporal arbitrage and in reducing potential health risks from storage chemicals. The main policy recommendation that emanates from our study is that reducing post-harvest losses throughout the whole value chain is an important pathway to food and income security in Sub-Saharan Africa (SSA). However, future storage loss interventions with hermetic storage technologies should take into account the agro-ecology of the study area and quantify storage losses beyond farmers' self-reported losses, for example with the count-and-weigh method. Finally, studies on hermetic storage technologies indicate positive impacts on post-harvest losses and food security, but adoption and use of these technologies are still low in SSA. Future work on scaling up hermetic bags should therefore consider why farmers use PICS bags only to store grain for consumption, which is usually related to a safety-first approach or to a lack of incentives (a higher price for maize not treated with chemicals) and the absence of grain quality checks.
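The headline treatment effects (such as the two-week delay in sales) come from comparing treated and control group means. A minimal sketch of that RCT estimator, with made-up numbers rather than the study's data, is:

```python
import math

def ate_diff_in_means(treated, control):
    """Average treatment effect from an RCT: difference in group means,
    with the usual (Neyman) standard error from the two sample variances."""
    mt = sum(treated) / len(treated)
    mc = sum(control) / len(control)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    se = math.sqrt(vt / len(treated) + vc / len(control))
    return mt - mc, se
```

A heterogeneity analysis like the highland/lowland split amounts to running this estimator separately within each subgroup.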

Keywords: arbitrage, PICS hermetic bags, post-harvest storage loss, RCT

Procedia PDF Downloads 103
230 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong

Abstract:

Earthquake is one of the main loads threatening dam safety. Once a dam is damaged, it brings huge losses of life and property to the country and its people, so it is very important to research the seismic safety of dams. Due to the complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensity, which provides a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of vulnerability analysis. Based on the central composite design method, the material-seismic intensity samples are established. The response surface model with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is varied from 0.1g to 1.2g with an interval of 0.1g, for a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operation of the response surface model, which avoids 1,200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of vulnerability analysis is improved greatly.
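The key efficiency trick above is replacing expensive FE runs with algebraic evaluation of a fitted polynomial surface. A one-variable sketch (the paper's surface is multivariate over material and intensity samples; the quadratic form and the synthetic data here are assumptions) is:

```python
import numpy as np

def fit_quadratic_rsm(x, y):
    """Least-squares fit of y ≈ b0 + b1*x + b2*x², i.e. a simple
    response surface standing in for costly FE analyses."""
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_rsm(coef, x):
    """Cheap algebraic evaluation replacing a nonlinear dynamic run."""
    return coef[0] + coef[1] * x + coef[2] * x ** 2
```

Once `fit_quadratic_rsm` is calibrated on a handful of FE results, `predict_rsm` can be evaluated for the 100 material samples at each of the 12 intensity levels at essentially no cost.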

Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure

Procedia PDF Downloads 210
229 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are common, economic, and reliable ways to detect global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from static data. Damage changes a structure's stiffness, so this method determines damage from changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static response vectors of the damaged and healthy structures. Several damage detection scenarios are defined (single and multiple damage). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; obtaining the best result indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that multiple damage scenarios are also identified efficiently. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method in these structures.
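A bare-bones GA of the kind the method relies on can be sketched as follows. The truss model itself is not reproduced, so the objective below is a simple quadratic stand-in for the static-response mismatch; the operators (tournament selection, blend crossover, Gaussian mutation) and all parameters are illustrative assumptions.

```python
import random

def genetic_minimize(objective, n_vars, pop=40, gens=120, seed=1):
    """Real-coded GA minimizing `objective` over [0, 1]^n_vars,
    e.g. per-element stiffness reduction factors of a truss."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_vars)] for _ in range(pop)]
    best = min(population, key=objective)
    for _ in range(gens):
        new_pop = [best[:]]                              # elitism
        while len(new_pop) < pop:
            # Tournament selection of two parents (best of 3 each).
            a, b = (min(rng.sample(population, 3), key=objective)
                    for _ in range(2))
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.1)))
                     for g in child]                     # Gaussian mutation
            new_pop.append(child)
        population = new_pop
        best = min(population, key=objective)
    return best
```

In the damage detection setting, `objective` would evaluate the analytical truss model at the candidate stiffness reductions and return the norm of the difference from the measured static displacements.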

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 198
228 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Because of the closeness between seismic and non-seismic signals, it is difficult to detect earthquakes using conventional methods. To distinguish between seismic and non-seismic events depending on their amplitude, our study processes the data coming from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, the recursive short-term average/long-term average (STA/LTA), and the Carl STA/LTA for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on extracting significant features for machine learning-based seismic event detection, which motivated compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques because of the temporal complexity. The proposed features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). The experimental dataset consists of wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively.
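The core STA/LTA trigger compares a short-window average of signal amplitude against a long-window background average. A plain (non-recursive) sketch is below; the recursive and Carl variants used in the paper add exponential smoothing and weighting on top of this same idea.

```python
def sta_lta(signal, n_sta, n_lta):
    """Classic STA/LTA trigger ratio on a rectified signal: the short-term
    average of recent amplitude divided by the long-term background
    average. A ratio well above 1 flags a candidate seismic event."""
    ratios = []
    for i in range(n_lta, len(signal) + 1):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        ratios.append(sta / lta)
    return ratios
```

Choosing the trigger threshold on this ratio is exactly the seismic/non-seismic discrimination step the abstract refers to.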

Keywords: Carl STA/LTA, feature extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 62
227 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

The study of the mixing between the primary flow and self-impinging secondary jets is important from both fundamental research and application points of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models due to the presence of combustion, swirl, and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization of the interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional k-omega standard turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets in non-reacting flows. Efforts have been made to examine the flow features of the TVC system with various secondary jets at different divergent locations and jet impinging angles, with the same inlet jet pressure and mass flow ratio. The results from the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, because fixing the self-impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle.

Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles

Procedia PDF Downloads 557
226 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continuous existence of human life, as humans directly depend on it for the production of food. The exponential rise in population calls for a rapid increase in food, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by using computer vision and image processing to determine the soil nutrient composition and the right amount, right time, and right place for the application of farm input resources like fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the areas of image processing and data processing, which had been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images of vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the deep learning approach of the convolutional neural network, which is the most effective and generates excellent results for image classification. A deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results of the developed model yielded an average accuracy of 99.58%.
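The convolution operation at the core of the CNN can be written out directly. This is the generic valid-mode operation (implemented as cross-correlation, the deep learning convention), not the paper's trained network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution as used in CNN layers: slide the kernel
    over the image and sum the elementwise products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

A CNN stacks many such filters, with learned kernels, nonlinearities, and pooling between them; the soil-nutrient classifier described above is a deep stack of exactly this building block.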

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 285
225 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer; these techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), and five machine learning algorithms, Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We evaluated and compared the classifiers by selecting appropriate metrics to assess their performance and an appropriate tool to quantify it. The main purpose of the study is to predict and diagnose breast cancer by applying the algorithms mentioned, and to identify the most effective of them with respect to confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
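A minimal scikit-learn sketch (not the authors' pipeline) shows how several of the classical classifiers named above can be compared on the same Wisconsin diagnostic data; the train/test split, scaling choices, and hyperparameters are illustrative assumptions, and the CNN, MLP, and XGBoost models are omitted as they need additional libraries.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# scikit-learn ships a copy of the Breast Cancer Wisconsin Diagnostic dataset
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# SVM and KNN are distance-based, so they get feature standardization
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: {acc:.3f}")
```

Reported accuracies will differ from the paper's, since the split, preprocessing, and tuning here are arbitrary; the point is only the evaluate-and-compare workflow.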

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 39
224 Epidemiological Survey on Tick-Borne Pathogens with Zoonotic Potential in Dog Populations of Southern Ethiopia

Authors: Hana Tadesse, Marika Grillini, Giulia Simonato, Alessandra Mondin, Giorgia Dotto, Antonio Frangipane Di Regalbono, Bersissa Kumsa, Rudi Cassini, Maria Luisa Menandro

Abstract:

Dogs are known to host several tick-borne pathogens with zoonotic potential; however, scant information is available on the epidemiology of these pathogens in low-income tropical countries, and in particular in sub-Saharan Africa. To investigate a wide range of tick-borne pathogens (i.e., Rickettsia spp., Anaplasma spp., Ehrlichia spp., Borrelia spp., Hepatozoon spp., and Babesia spp.), 273 blood samples were collected from dogs in selected districts of Ethiopia and analyzed by real-time and/or end-point PCR. The results of the study showed that Hepatozoon canis was the most prevalent pathogen (53.8%), followed by Anaplasma phagocytophilum (7.0%), Babesia canis rossi (3.3%), Ehrlichia canis (2.6%), and Anaplasma platys (2.2%). Furthermore, five samples tested positive for Borrelia spp., identified as Borrelia afzelii (n = 3) and Borrelia burgdorferi (n = 2), and two samples for Rickettsia spp., identified as Rickettsia conorii (n = 1) and Rickettsia monacensis (n = 1). The finding of Anaplasma phagocytophilum and of different Borrelia and Rickettsia species with zoonotic potential was unexpected and alarming, and calls for further investigation of the role of dogs and of the tick species acting as vectors in this specific context. The other pathogens (Hepatozoon canis, Babesia canis rossi, Anaplasma platys, Ehrlichia canis) are already known to have an important impact on dogs' health but have minor zoonotic potential, as they have rarely or never been reported in humans. Dogs from rural areas were found to be at higher risk for several pathogens, probably due to the presence of other wild canids in the same environment. The findings of the present study contribute to a better knowledge of the epidemiology of tick-borne pathogens, which is relevant to both human and animal health.
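The prevalence figures above are point estimates from a finite sample (n = 273), so an uncertainty interval is a natural companion to them. A minimal Python sketch (not part of the study; the count of 147 positives is back-calculated from the reported 53.8% for Hepatozoon canis) computes a 95% Wilson score interval for a binomial proportion:

```python
import math

def wilson_ci(positives, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hepatozoon canis: 53.8% of 273 dogs ≈ 147 positive samples
lo, hi = wilson_ci(147, 273)
print(f"prevalence = {147 / 273:.1%}, 95% CI = [{lo:.1%}, {hi:.1%}]")
```

With these assumed counts the interval spans roughly 48% to 60%, which conveys how precisely a sample of this size pins down the true prevalence.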

Keywords: dogs, tick-borne pathogens, Africa, Ethiopia

Procedia PDF Downloads 56