Search results for: throughput
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 349

79 The Response of Soil Biodiversity to Agriculture Practice in Rhizosphere

Authors: Yan Wang, Guowei Chen, Gang Wang

Abstract:

Soil microbial diversity is an important indicator of soil fertility, soil health, and even ecosystem stability. In this study, we aimed to reveal differences between the soil microbial communities of the rhizosphere and the root zone, and to identify biomarkers shaped by long-term tillage practice across four treatments: no-tillage, ridge tillage, continuous corn cropping, and corn-soybean rotation. High-throughput sequencing was performed to compare the bacterial communities of rhizosphere and root-zone soils. The results showed a highly significant difference in species richness between rhizosphere and root-zone soil under the same crop rotation system (p < 0.01); significant differences in species richness were also found between continuous corn cropping and corn-soybean rotation in the rhizosphere, and between no-tillage and ridge tillage in root-zone soils. Beta diversity analysis further implied that both tillage method and crop rotation system influence soil microbial diversity and community structure to varying degrees, with the composition and community structure of rhizosphere and root-zone microbes clustering distinctly (p < 0.05). Linear discriminant analysis effect size (LEfSe) analysis of the total taxa identified more than 100 bacterial taxa that were significantly more abundant in the rhizosphere than in root-zone soils, whereas fewer biomarkers distinguished continuous corn cropping from crop rotation; the same pattern was found between the no-tillage and ridge tillage treatments. At large scales, bacterial communities are strongly shaped by the main environmental factors as a result of biological adaptation and acclimation, and this knowledge is beneficial for optimizing agricultural practices.

Keywords: tillage methods, biomarker, biodiversity, rhizosphere
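The alpha- and beta-diversity comparisons described above can be illustrated in miniature from a taxon-abundance table. Below is a minimal pure-Python sketch of Shannon diversity and Bray-Curtis dissimilarity with invented counts, not the authors' sequencing pipeline:

```python
import math

def shannon(counts):
    """Shannon diversity H' from raw taxon counts (an alpha-diversity metric)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two samples' taxon counts,
    a common basis for beta-diversity clustering (0 = identical, 1 = disjoint)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den

# Toy comparison: rhizosphere vs. root-zone counts for four taxa
rhizosphere = [40, 25, 20, 15]
root_zone = [10, 5, 60, 25]
print(shannon(rhizosphere), bray_curtis(rhizosphere, root_zone))
```

In practice such metrics are computed with dedicated packages over thousands of taxa and tested for significance, as in the p < 0.01 richness comparison reported above.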

Procedia PDF Downloads 163
78 Improving Grade Control Turnaround Times with In-Pit Hyperspectral Assaying

Authors: Gary Pattemore, Michael Edgar, Andrew Job, Marina Auad, Kathryn Job

Abstract:

As critical commodities become scarcer, significant time and resources have been devoted to understanding complicated ore bodies and extracting their full potential. These challenging ore bodies present several pain points for geologists and engineers; poor handling of these issues flows downstream to the processing plant, affecting throughput rates and recovery. Many open-cut mines utilise blast hole drilling to extract additional information to feed back into the modelling process. This method requires samples to be collected during or after blast hole drilling and then sent for assay, with turnaround times varying from 1 to 12 days. It is time-consuming and costly, requires human exposure on the bench, and collects elemental data only. To address this challenge, research has been undertaken on hyperspectral imaging across a broad spectrum to scan samples and collars, or to take down-hole measurements of mineral and moisture content and grade abundances. Automating this process with unmanned vehicles and on-board processing reduces in-pit human exposure, ensuring ongoing safety, and allows data to be integrated into modelling workflows immediately. The preliminary results demonstrate numerous direct and indirect benefits from this new technology, including rapid and accurate estimates of grade, moisture content, and mineralogy. These benefits allow faster geological model updates, better-informed mine scheduling, and improved downstream blending and processing practices. The paper presents recommendations for implementing the technology in open-cut mining environments.

Keywords: grade control, hyperspectral scanning, artificial intelligence, autonomous mining, machine learning

Procedia PDF Downloads 113
77 Polymer Mixing in the Cavity Transfer Mixer

Authors: Giovanna Grosso, Martien A. Hulsen, Arash Sarhangi Fard, Andrew Overend, Patrick D. Anderson

Abstract:

In many industrial applications, and in the polymer industry in particular, the quality of mixing between different materials is fundamental to guaranteeing the desired properties of finished products. However, properly modelling and understanding polymer mixing often presents noticeable difficulties because of the variety and complexity of the physical phenomena involved. This is the case for the Cavity Transfer Mixer (CTM), for which a clear understanding of the mixing mechanisms is still missing, as are clear guidelines for system optimization. This device, invented and patented by Gale at Rapra Technology Limited, is an add-on mounted downstream of existing extruders in order to improve distributive mixing. It consists of two concentric cylinders, the rotor and stator, both provided with staggered rows of hemispherical cavities. The inner cylinder (rotor) rotates, while the outer (stator) remains still; at the same time, the pressure load imposed upstream pushes the fluid through the CTM. Mixing is driven by the flow field generated by the complex interaction between the moving geometry, the imposed pressure load, and the rheology of the fluid. In this context, the present work proposes a complete and accurate three-dimensional model of the CTM and reports the results of a broad range of simulations assessing the impact on mixing of several geometric and operating parameters, among them the number of cavities per row, the number of rows, the size of the mixer, the rheology of the fluid, and the ratio between the rotation speed and the fluid throughput. The model comprises a flow part and a mixing part: a finite element solver computes the transient velocity field, which is then used in a mapping-method implementation to simulate the evolution of the concentration field. The simulation results are summarized in guidelines for optimizing the device.

Keywords: mixing, non-Newtonian fluids, polymers, rheology

Procedia PDF Downloads 379
76 Evaluation of Sustainable Business Model Innovation in Increasing the Penetration of Renewable Energy in the Ghana Power Sector

Authors: Victor Birikorang Danquah

Abstract:

Ghana's primary energy supply is heavily reliant on petroleum, biomass, and hydropower. Currently, Ghana gets its electricity from hydropower (Akosombo and Bui), thermal power plants fuelled by crude oil, natural gas, and diesel, solar power, and imports from La Cote d'Ivoire. Until the early 2000s, large hydroelectric dams dominated Ghana's electricity generation; due to unreliable weather patterns, Ghana then increased its reliance on thermal power. Thermal power now contributes the highest share of electricity generation in Ghana and is predominantly supplied by Independent Power Producers (IPPs). Ghana's electricity industry operates under the corporate utility business model. This model is typically 'vertically integrated': a single corporation sells the majority of power generated by its generation assets to its retail business, which then sells the electricity to retail market consumers. The corporate utility model has a straightforward value proposition based on increasing the number of energy units sold; this unit-volume business model drives the entire energy value chain to increase throughput, locking system users into unsustainable practices. This study uses a qualitative research approach to explore the electricity industry in Ghana, where there is a need to increase renewable energy, such as wind and solar, in electricity generation. The research recommends two critical business models for the penetration of renewable energy in Ghana's power sector. The first is the peer-to-peer electricity trading model, which relies on a software platform to connect consumers and generators so that they can trade energy directly with one another. The second encourages local energy generation, incentivizes optimal time-of-use behaviour, and allows any financial gains to be shared among community members.

Keywords: business model innovation, electricity generation, renewable energy, solar energy, sustainability, wind energy

Procedia PDF Downloads 180
75 Increase of the Nanofiber Degradation Rate Using PCL-PEO and PCL-PVP as a Shell in the Electrospun Core-Shell Nanofibers Using the Needleless Blades

Authors: Matej Buzgo, Erico Himawan, Ksenija Jašina, Aiva Simaite

Abstract:

Electrospinning is a versatile and efficient technology for producing nanofibers for biomedical applications. One of the most common polymers used to prepare nanofibers for regenerative medicine and drug delivery is polycaprolactone (PCL), a biocompatible and bioabsorbable material that can be used to stimulate the regeneration of various tissues. It is also a common material for drug delivery systems, produced by blending the polymer with small active molecules. However, for many drug delivery applications, e.g. cancer immunotherapy, a PCL biodegradation rate that may exceed 9 months is too long, and faster nanofiber dissolution is needed. In this paper, we investigate the dissolution and small-molecule release rates of PCL blends with two hydrophilic polymers: polyethylene oxide (PEO) and polyvinylpyrrolidone (PVP). We show that adding a hydrophilic polymer to PCL reduces the water contact angle, increases the dissolution rate, and strengthens the interactions between the hydrophilic drug and the polymer matrix that further sustain its release. Using this method, we were also able to increase the degradation rate of electrospun core-shell nanofibers in which PCL-PEO or PCL-PVP served as the shell, and to speed up the release of active proteins from their core. Electrospinning can produce core-shell nanofibers in which active ingredients are encapsulated in the core and their release rate is regulated by the shell. However, such fibers are usually prepared by coaxial electrospinning, which is an extremely low-throughput technique. An alternative is emulsion electrospinning, which can be scaled up using needleless blades. In this work, we investigate the possibility of using emulsion electrospinning for the encapsulation and sustained release of growth factors for the development of organotypic skin models.
The core-shell nanofibers were prepared using the optimized formulation, and the release rate of proteins from the fibers was investigated for 2 weeks under typical cell culture conditions.

Keywords: electrospinning, polycaprolactone (PCL), polyethylene oxide (PEO), polyvinylpyrrolidone (PVP)
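Cumulative release over a two-week window like the one studied above is often fitted to a simple first-order kinetic model. The sketch below is purely illustrative, with an invented rate constant; the abstract does not state which release model the authors fitted:

```python
import math

def first_order_release(t_hours, k_per_hour):
    """Cumulative fraction of drug released under first-order kinetics,
    one common empirical model for release from a polymer matrix."""
    return 1 - math.exp(-k_per_hour * t_hours)

# Hypothetical faster release from a more hydrophilic blend (larger k)
for k in (0.005, 0.02):  # per hour; invented values
    print(k, [round(first_order_release(24 * d, k), 3) for d in (1, 7, 14)])
```

A faster-dissolving shell corresponds to a larger rate constant, which moves the release curve toward completion well within the 2-week culture period.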

Procedia PDF Downloads 273
74 Economic Development Impacts of Connected and Automated Vehicles (CAV)

Authors: Rimon Rafiah

Abstract:

This paper presents a combination of two seemingly unrelated models: one for estimating the economic development impacts of transportation investment, and one for increasing CAV penetration in order to reduce congestion. Measuring the economic development impacts of transportation investments is becoming more widely recognized around the world; examples include the UK’s Wider Economic Benefits (WEB) model, Economic Impact Assessments in the USA, various input-output models, and additional models elsewhere. The economic impact model here is based on WEB and rests on the following premise: investments in transportation reduce the cost of personal travel, enabling firms to be more competitive, creating additional throughput (the same road allows more people to travel), and reducing workers' cost of travel to a new workplace. This reduction in travel costs was estimated in out-of-pocket terms in a given localized area and then translated into additional employment based on regional labor-supply elasticity. This additional employment was conservatively assumed to be at minimum-wage levels, translated into GDP terms, and from there into direct taxation (i.e., an increase in tax taken by the government). The CAV model is based on economic principles such as CAV usage, supply, and demand. CAV usage can increase capacity by a variety of means: increased automation (Levels I through IV) and also increased penetration and usage, which several forecasts predict will reach 50% by 2030, with possible full conversion by 2045-2050. Several countries have passed policies and/or legislation ending sales of new gasoline-powered vehicles from 2030 onwards. Supply was measured via the increased capacity of given infrastructure as a function of both CAV penetration and the implemented technologies.
The CAV model, as implemented in the USA, has shown significant savings in travel time and in vehicle operating costs, which can be translated into economic development impacts in terms of job creation, GDP growth, and salaries. The models have policy implications and can be adapted for use in Japan as well.

Keywords: CAV, economic development, WEB, transport economics
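The WEB-style causal chain described above (travel-cost reduction → extra employment via labor-supply elasticity → GDP at minimum wage → tax take) is simple enough to sketch as arithmetic. The function below is a hypothetical back-of-the-envelope illustration with invented parameter names and numbers, not the authors' calibrated model:

```python
def web_impacts(relative_cost_reduction, labor_supply_elasticity,
                baseline_employment, annual_min_wage, tax_rate):
    """Translate a relative travel-cost reduction into jobs, GDP, and tax.

    Jobs are valued conservatively at minimum wage, per the premise above.
    """
    extra_jobs = labor_supply_elasticity * relative_cost_reduction * baseline_employment
    gdp_gain = extra_jobs * annual_min_wage       # conservative GDP contribution
    tax_gain = tax_rate * gdp_gain                # direct taxation on that GDP
    return extra_jobs, gdp_gain, tax_gain

# Invented example: 5% cost cut, elasticity 0.2, 100k workers, $20k wage, 25% tax
print(web_impacts(0.05, 0.2, 100_000, 20_000, 0.25))
```

Each multiplication mirrors one link in the chain, so any one assumption (e.g. the elasticity) can be varied independently in a sensitivity analysis.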

Procedia PDF Downloads 74
73 Undersea Communications Infrastructure: Risks, Opportunities, and Geopolitical Considerations

Authors: Lori W. Gordon, Karen A. Jones

Abstract:

Today’s high-speed data connectivity depends on a vast global network of infrastructure across space, air, land, and sea, with undersea cable infrastructure (UCI) serving as the primary means of intercontinental and ‘long-haul’ communications. The UCI landscape is changing and includes an increasing variety of state actors, such as the growing economies of Brazil, Russia, India, China, and South Africa. Non-state commercial actors, such as hyper-scale content providers including Google, Facebook, Microsoft, and Amazon, are also seeking to control their data and networks through significant investments in submarine cables. Active investments by both state and non-state actors will invariably influence the growth, geopolitics, and security of this sector. Beyond these hyper-scale content providers, there are new commercial satellite communication providers, including traditional geosynchronous (GEO) satellites offering broad coverage, high-throughput GEO satellites offering high capacity with spot-beam technology, and low earth orbit (LEO) ‘mega-constellations’ offering global broadband services. Potential new entrants include High Altitude Platforms (HAPS), offering low-latency connectivity, and LEO constellations with high-speed optical mesh networks, i.e., ‘fiber in the sky.’ This paper focuses on understanding the role of submarine cables within the larger context of the global data commons, spanning space, terrestrial, air, and sea networks, including an analysis of national security policy and geopolitical implications. As network operators and commercial and government stakeholders plan for emerging technologies and architectures, hedging the risks to future connectivity will ensure that our data backbone remains secure for years to come.

Keywords: communications, global, infrastructure, technology

Procedia PDF Downloads 86
72 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry

Authors: Vadanasundari Vedarethinam, Kun Qian

Abstract:

Metabolic analysis is a more distal readout than proteomics or genomics in clinical use, and it calls for rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional approaches such as spectroscopy, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Four major challenges remain: (I) long sample pretreatment; (II) difficulty in the direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) construction of diagnostic tools on material- and device-based platforms for real-world biomedical applications. Developing chips with nanomaterials is a promising way to address these critical issues. Mass spectrometry (MS) displays high sensitivity, accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed, with mass measurements in seconds, and high sensitivity at low cost for large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS, fabricating a series of chips with gold nanoshells on the surface through controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscale experiments, affording direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF), or exosomes. Further, we demonstrated on-chip, direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use.

Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics

Procedia PDF Downloads 162
71 Expression of DNMT Enzymes-Regulated miRNAs Involving in Epigenetic Event of Tumor and Margin Tissues in Patients with Breast Cancer

Authors: Fatemeh Zeinali Sehrig

Abstract:

Background: miRNAs play an important role in the post-transcriptional regulation of genes, including genes involved in DNA methylation (DNMTs), and are also important regulators of oncogenic pathways. The study of microRNAs and DNMTs in breast cancer supports the development of targeted treatments and the early detection of this cancer. Methods and Materials: Institutional guidelines, including ethical approval and informed consent, were followed per the Ethics Committee (ethics code: IR.IAU.TABRIZ.REC.1401.063) of Tabriz Azad University, Tabriz, Iran. Tissues from 100 patients with breast cancer and from 100 healthy women were collected from Noor Nejat Hospital in Tabriz. The basic characteristics of the patients included: 1) tumor grade (Grade 3 = 5%, Grade 2 = 87.5%, Grade 1 = 7.5%); 2) lymph node involvement (yes = 87.5%, no = 12.5%); 3) family cancer history (yes = 47.5%, no = 41.3%, unknown = 11.2%); 4) abortion history (yes = 36.2%). In silico methods (data gathering, processing, and network building): Gene Expression Omnibus (GEO), a high-throughput genomics database, was queried for miRNA expression profiles in breast cancer. For the experimental protocol, tissue processing, total RNA isolation, complementary DNA (cDNA) synthesis, and quantitative real-time PCR (qRT-PCR) analysis were performed. Results: We found significant (p-value < 0.05) changes in the expression levels of miRNAs and DNMTs in patients with breast cancer. In the bioinformatics analysis, the GEO microarray dataset, consistent with the qPCR results, showed decreased expression of miRNAs and increased expression of DNMTs in breast cancer. Conclusion: Given the decreased miRNA expression and increased DNMT expression observed in breast cancer, these genes may serve as important diagnostic and therapeutic biomarkers.

Keywords: gene expression omnibus, microarray dataset, breast cancer, miRNA, DNMT (DNA methyltransferases)
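Relative expression in qRT-PCR studies such as this one is conventionally computed with the 2^-ΔΔCt (Livak) method. A minimal sketch, with invented Ct values for illustration:

```python
def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method.

    dCt normalizes the target gene to a reference gene within each sample;
    ddCt compares case against control; the result is the fold change.
    """
    ddct = (ct_target_case - ct_ref_case) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Invented Ct values: a miRNA needing one extra cycle in tumor tissue
# relative to the reference gene reads out as 0.5x (down-regulated)
print(fold_change(25.0, 20.0, 24.0, 20.0))
```

A fold change below 1 corresponds to the miRNA down-regulation pattern reported, and above 1 to the DNMT up-regulation.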

Procedia PDF Downloads 34
70 Root System Architecture Analysis of Sorghum Genotypes and Its Effect on Drought Adaptation

Authors: Hailemariam Solomon, Taye Tadesse, Daniel Nadew, Firezer Girma

Abstract:

Sorghum is an important crop in semi-arid regions and has shown resilience to drought stress. However, recurrent drought is affecting its productivity. Therefore, it is necessary to explore genes that contribute to drought stress adaptation to increase sorghum productivity. The aim of this study is to evaluate and determine the effect of root system traits, specifically root angle, on drought stress adaptation and grain yield performance in sorghum genotypes. A total of 428 sorghum genotypes from the Ethiopian breeding program were evaluated in three drought-stress environments. Field trials were conducted using a row-column design with three replications. Root system traits were phenotyped using a high-throughput phenotyping platform and analyzed using a row-column design with two replications. Data analysis was performed using R software and regression analysis. The study found significant variations in root system architecture among the sorghum genotypes. Non-stay-green genotypes had a grain yield ranging from 1.63 to 3.1 tons/ha, while stay-green genotypes had a grain yield ranging from 2.4 to 2.9 tons/ha. The analysis of root angle showed that non-stay-green genotypes had an angle ranging from 8.0 to 30.5 degrees, while stay-green genotypes had an angle ranging from 12.0 to 29.0 degrees. Improved varieties exhibited angles between 14.04 and 19.50 degrees. Positive and significant correlations were observed between leaf areas and shoot dry weight, as well as between leaf width and shoot dry weight. Negative correlations were observed between root angle and leaf area, as well as between root angle and root length. This research highlights the importance of root system architecture, particularly root angle traits, in enhancing grain yield production in drought-stressed conditions. It also establishes an association between root angle and grain yield traits for maximizing sorghum productivity.

Keywords: root system architecture, root angle, narrow root angle, wide root angle, drought
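The trait correlations reported above (e.g. leaf area vs. shoot dry weight positive, root angle vs. root length negative) are plain Pearson coefficients, which can be sketched without a statistics library. The data values below are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length trait vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented genotype measurements: root angle (degrees) vs. root length (cm)
root_angle = [8.0, 14.0, 19.5, 25.0, 30.5]
root_length = [52.0, 47.0, 44.0, 39.0, 33.0]
print(pearson(root_angle, root_length))  # negative, matching the reported trend
```

In the study itself such correlations would be computed in R across all 428 genotypes alongside the regression analysis.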

Procedia PDF Downloads 75
69 Performance Assessment of Carrier Aggregation-Based Indoor Mobile Networks

Authors: Viktor R. Stoynov, Zlatka V. Valkova-Jarvis

Abstract:

The intelligent management and optimisation of radio resource technologies will lead to a considerable improvement in the overall performance of Next Generation Networks (NGNs). Carrier Aggregation (CA) technology, also known as Spectrum Aggregation, enables more efficient use of the available spectrum by combining multiple Component Carriers (CCs) into a virtual wideband channel. LTE-A (Long Term Evolution–Advanced) CA technology can combine multiple adjacent or separate CCs in the same band or in different bands. In this way, increased data rates and dynamic load balancing can be achieved, resulting in more reliable and efficient operation of mobile networks and enabling high-bandwidth mobile services. In this paper, several distinct CA deployment strategies for the utilisation of spectrum bands are compared in indoor-outdoor scenarios, simulated via the recently developed Realistic Indoor Environment Generator (RIEG). We analyse the performance of the User Equipment (UE) by integrating the average throughput, the fairness of radio resource allocation, and other parameters into one summative assessment termed the Comparative Factor (CF). In addition, a comparison of non-CA and CA indoor mobile networks is carried out under different load conditions: varying numbers and positions of UEs. The experimental results demonstrate that CA technology can improve network performance, especially in indoor scenarios. Additionally, we show that an increase in carrier frequency does not necessarily lead to improved CF values, due to high wall-penetration losses. The performance of users under bad channel conditions, often located in the periphery of the cells, can be improved by intelligent CA placement. Furthermore, a combination of such a deployment and effective radio resource allocation management with respect to user fairness plays a crucial role in improving the performance of LTE-A networks.

Keywords: comparative factor, carrier aggregation, indoor mobile network, resource allocation
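The paper folds average throughput and allocation fairness into a single Comparative Factor (CF). Its exact definition is not given in the abstract, so the weighted combination below is a hypothetical illustration; the fairness term, however, is conventionally Jain's index:

```python
def jain_fairness(throughputs):
    """Jain's fairness index over per-UE throughputs: 1.0 = perfectly fair."""
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(t * t for t in throughputs))

def comparative_factor(throughputs, w_tp=0.5, w_fair=0.5, tp_ref=100.0):
    """Hypothetical CF: weighted sum of normalized mean throughput and fairness.

    Weights and the reference throughput tp_ref (Mbps) are invented here;
    the paper's actual CF definition may differ.
    """
    mean_tp = sum(throughputs) / len(throughputs)
    return w_tp * (mean_tp / tp_ref) + w_fair * jain_fairness(throughputs)

# An even allocation scores higher than an unfair one with the same total
print(comparative_factor([50, 50]), comparative_factor([95, 5]))
```

A metric of this shape explains the reported finding that raising carrier frequency alone need not raise CF: higher peak throughput can be offset by worse fairness for wall-shadowed peripheral users.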

Procedia PDF Downloads 178
68 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for location services, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity is one efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network could degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the relay nodes could be simplified to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to move these operations to the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
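The diversity gain that cooperative relaying preserves can be illustrated with a toy Monte Carlo: BPSK over Rayleigh fading with maximal-ratio combining (MRC) across independent branches. This is a generic diversity model standing in for relay branches, not the paper's actual combining rule:

```python
import math
import random

def bpsk_ber(snr_db, n_branches=1, trials=20000, seed=1):
    """Monte Carlo bit error rate of BPSK over Rayleigh fading with
    maximal-ratio combining across n_branches independent branches."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    errors = 0
    for _ in range(trials):
        bit = rng.choice((-1.0, 1.0))
        stat = 0.0
        for _ in range(n_branches):
            h = math.sqrt(rng.expovariate(1.0))     # |h|: Rayleigh fading gain
            noise = rng.gauss(0.0, math.sqrt(1 / (2 * snr)))
            stat += h * (h * bit + noise)           # MRC: weight branch by h
        if (stat >= 0) != (bit > 0):
            errors += 1
    return errors / trials

# Two diversity branches beat one at the same per-branch SNR
print(bpsk_ber(10, 1), bpsk_ber(10, 2))
```

The steeper BER decay with two branches is the cooperative-diversity benefit the proposed scheme retains while relocating the combining complexity to the source node.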

Procedia PDF Downloads 280
67 Anaerobic Digestion Batch Study of Taxonomic Variations in Microbial Communities during Adaptation of Consortium to Different Lignocellulosic Substrates Using Targeted Sequencing

Authors: Priyanka Dargode, Suhas Gore, Manju Sharma, Arvind Lali

Abstract:

Anaerobic digestion has been widely used to produce methane from different biowastes; however, the complexity of the microbial communities involved in the process is poorly understood. The productivity of a biogas process is closely coupled to its microbial community structure and to the syntrophic interactions among community members. The present study aims at understanding the taxonomic variations that occur as a starter inoculum acclimatises to different lignocellulosic biomass (LBM) feedstocks over the course of digestion. The work underlines the use of high-throughput Next Generation Sequencing (NGS) for validating changes in the taxonomic patterns of microbial communities. Biomethane Potential (BMP) batches were set up with different pretreated and non-pretreated LBM residues using the same microbial consortium, and samples were withdrawn to study changes in community structure and predominance with respect to the metabolic profile of the process. DNA from samples withdrawn at different time intervals, chosen with reference to performance changes in the digestion process, was extracted, followed by 16S rRNA amplicon sequencing on the Illumina platform. Biomethane potential and substrate consumption were monitored using gas chromatography (GC) and the reduction in COD (Chemical Oxygen Demand), respectively. Taxonomic analysis of the QIIME server data revealed that microbial community structure changes with different substrates as well as over time. The biomethane potential of each substrate was relatively similar, but the time required for substrate utilization and its conversion to biomethane differed between substrates. This could be attributed to the nature of the substrate and, consequently, to the differing dominance of microbial communities across substrates and across phases of the anaerobic digestion process.
Knowledge of the microbial communities involved would allow a rational, substrate-specific consortium design, which would help reduce the consortium adaptation period and enhance substrate utilisation, improving the efficacy of the biogas process.

Keywords: amplicon sequencing, biomethane potential, community predominance, taxonomic analysis

Procedia PDF Downloads 532
66 Biomolecules Based Microarray for Screening Human Endothelial Cells Behavior

Authors: Adel Dalilottojari, Bahman Delalat, Frances J. Harding, Michaelia P. Cockshell, Claudine S. Bonder, Nicolas H. Voelcker

Abstract:

Endothelial Progenitor Cell (EPC) based therapies continue to be of interest for treating ischemic events, given their proven role in promoting blood vessel formation and thus tissue re-vascularisation. Current strategies for the production of clinical-grade EPCs require the in vitro isolation of EPCs from peripheral blood, followed by cell expansion to provide sufficient quantities of EPCs for cell therapy. This study examines the use of different biomolecules to significantly improve the current strategy of EPC capture and expansion on collagen type I (Col I). Four different biomolecules were immobilised on a surface and investigated for their capacity to support EPC capture and proliferation. First, a cell microarray platform was fabricated by coating a glass surface with epoxy-functional allyl glycidyl ether plasma polymer (AGEpp) to mediate biomolecule binding. The four candidate biomolecules tested were Col I, collagen type II (Col II), collagen type IV (Col IV), and vascular endothelial growth factor A (VEGF-A), which were arrayed on the epoxy-functionalised surface using a non-contact printer. The surrounding area between the printed biomolecules was passivated with polyethylene glycol-bisamine (A-PEG) to prevent non-specific cell attachment. EPCs were seeded onto the microarray platform and cell numbers quantified after 1 h (to determine capture) and 72 h (to determine proliferation). All of the extracellular matrix (ECM) biomolecules printed demonstrated an ability to capture EPCs within 1 h of cell seeding, with Col II exhibiting the highest level of attachment. Interestingly, Col IV exhibited the highest EPC expansion after 72 h compared to Col I, Col II, and VEGF-A. These results provide information for significantly improving the capture and expansion of human EPCs for further application.

Keywords: biomolecules, cell microarray platform, cell therapy, endothelial progenitor cells, high throughput screening

Procedia PDF Downloads 290
65 Transcriptome Analysis of Saffron (crocus sativus L.) Stigma Focusing on Identification Genes Involved in the Biosynthesis of Crocin

Authors: Parvaneh Mahmoudi, Ahmad Moeni, Seyed Mojtaba Khayam Nekoei, Mohsen Mardi, Mehrshad Zeinolabedini, Ghasem Hosseini Salekdeh

Abstract:

Saffron (Crocus sativus L.) is one of the most important spice and medicinal plants. The three-branched style of C. sativus flowers is the most economically important part of the plant and is known as saffron, which has several medicinal properties. Despite the economic and biological significance of this plant, knowledge of its molecular characteristics is very limited. In the present study, we constructed, for the first time, a comprehensive dataset for the C. sativus stigma through de novo transcriptome sequencing using the Illumina paired-end sequencing technology. A total of 52,075,128 reads were generated and assembled into 118,075 unigenes, with an average length of 629 bp and an N50 of 951 bp. Of these, 66,171 unigenes (56%) were annotated in the non-redundant National Center for Biotechnology Information (NCBI) database, 30,938 (26%) were annotated in the Swiss-Prot database, and 10,273 (8.7%) were mapped to 141 pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, while 52,560 (44%) and 40,756 (34%) unigenes were assigned to Gene Ontology (GO) categories and Eukaryotic Orthologous Groups of proteins (KOG), respectively. In addition, 65 candidate genes involved in the three stages of crocin biosynthesis were identified. Finally, transcriptome sequencing of the saffron stigma was used to identify 6,779 potential microsatellite (SSR) molecular markers. High-throughput de novo transcriptome sequencing provided a valuable resource of C. sativus transcript sequences in public databases. Most of the candidate genes potentially involved in crocin biosynthesis were identified and could be further utilized in functional genomics studies. Furthermore, the numerous SSRs obtained may help address open questions about the origin of this amphiploid species, which probably has little genetic diversity.
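For readers unfamiliar with the N50 statistic quoted above (951 bp), it is the contig length at which the length-sorted contigs first cover half of the total assembly. A minimal sketch with invented contig lengths:

```python
def n50(lengths):
    """N50: the contig length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0  # empty input

# Invented contig lengths totalling 3000 bp (half = 1500 bp)
contigs = [1000, 800, 600, 300, 200, 100]
print(n50(contigs))  # -> 800, since 1000 + 800 = 1800 >= 1500
```

Assembly tools report the same statistic over all 118,075 unigenes; the logic is identical, only the input list is larger.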

Keywords: saffron, transcriptome, NGS, bioinformatics

Procedia PDF Downloads 100
64 Transcriptome Analysis for Insights into Disease Progression in Dengue Patients

Authors: Abhaydeep Pandey, Shweta Shukla, Saptamita Goswami, Bhaswati Bandyopadhyay, Vishnampettai Ramachandran, Sudhanshu Vrati, Arup Banerjee

Abstract:

Dengue virus infection is now considered one of the most important mosquito-borne infections in humans. The virus is known to promote vascular permeability and cerebral edema, leading to dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS). Dengue infection has been known to be endemic in India for over two centuries as a benign and self-limited disease. In the last couple of years, the disease symptoms have changed, manifesting severe secondary complications. So far, Delhi has experienced 12 outbreaks of dengue virus infection since 1997, with the last reported in 2014-15. Without specific antivirals, the case management of high-risk dengue patients relies entirely on supportive care, involving constant monitoring and timely fluid support to prevent hypovolemic shock. Nonetheless, the diverse clinical spectrum of dengue disease, as well as its initial similarity to other viral febrile illnesses, presents a challenge in the early identification of this high-risk group. WHO recommends the use of warning signs to identify high-risk patients, but warning signs generally appear during, or just one day before, the development of severe illness, thus providing only a narrow window for clinical intervention. The ability to predict which patients may develop DHF and DSS would improve triage and treatment. High-throughput RNA sequencing now allows us to understand disease progression at the genomic level. Here, we will collate the results of RNA sequencing data obtained recently from PBMCs of different categories of dengue patients from India and will discuss the possible role of deregulated genes and the long non-coding RNA NEAT1 in disease progression.

Keywords: long non-coding RNA (lncRNA), dengue, peripheral blood mononuclear cell (PBMC), nuclear enriched abundant transcript 1 (NEAT1), dengue hemorrhagic fever (DHF), dengue shock syndrome (DSS)

Procedia PDF Downloads 308
63 Gut Mycobiome Dysbiosis and Its Impact on Intestinal Permeability in Attention-Deficit/Hyperactivity Disorder

Authors: Liang-Jen Wang, Sung-Chou Li, Yuan-Ming Yeh, Sheng-Yu Lee, Ho-Chang Kuo, Chia-Yu Yang

Abstract:

Background: Dysbiosis in the gut microbial community might be involved in the pathophysiology of attention-deficit/hyperactivity disorder (ADHD). The fungal component of the gut microbiome, namely the mycobiota, is a hyperdiverse group of multicellular eukaryotes that can influence host intestinal permeability. This study therefore aimed to investigate the impact of fungal mycobiome dysbiosis and intestinal permeability on ADHD. Methods: Faecal samples were collected from 35 children with ADHD and from 35 healthy controls. Total DNA was extracted from the faecal samples, and the internal transcribed spacer (ITS) regions were sequenced using high-throughput next-generation sequencing (NGS). The fungal taxonomic classification was analysed using bioinformatics tools, and the differentially abundant fungal species between the ADHD and healthy control groups were identified. An in vitro permeability assay (Caco-2 cell layer) was used to evaluate the biological effects of fungal dysbiosis on intestinal epithelial barrier function. Results: The β-diversity (the species diversity between two communities), but not the α-diversity (the species diversity within a community), reflected the differences in fungal community composition between the ADHD and control groups. At the phylum level, the ADHD group displayed a significantly higher abundance of Ascomycota and a significantly lower abundance of Basidiomycota than the healthy control group. At the genus level, the abundance of Candida (especially Candida albicans) was significantly increased in ADHD patients compared to the healthy controls. In addition, the in vitro cell assay revealed that C. albicans secretions significantly enhanced the permeability of Caco-2 cells. Conclusions: The current study is the first to explore gut mycobiome dysbiosis in ADHD using an NGS platform. The findings indicate that dysbiosis of the fungal mycobiome and intestinal permeability might be associated with susceptibility to ADHD.
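The α- and β-diversity measures reported in the Results can be computed directly from taxon abundance tables. A minimal sketch using Shannon entropy for α-diversity and Bray-Curtis dissimilarity for β-diversity; the genus-level counts below are invented for illustration, not data from the study:

```python
import math

def shannon(counts):
    """Shannon alpha-diversity H' = -sum(p_i * ln p_i) within one sample."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance profiles
    over the same taxa (0 = identical, 1 = no shared taxa)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den

# Invented genus-level counts over the same four taxa
adhd    = [60, 20, 10, 10]
control = [20, 40, 20, 20]
print(round(shannon(adhd), 3))               # -> 1.089
print(round(bray_curtis(adhd, control), 3))  # -> 0.4
```

In practice these statistics are computed per sample (α) and per sample pair (β) across all ITS-derived taxa, and the pairwise β-diversity matrix is what separates the ADHD and control communities.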

Keywords: ADHD, fungus, gut–brain axis, biomarker, child psychiatry

Procedia PDF Downloads 113
62 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments

Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie

Abstract:

Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.

Keywords: antibody engineering, biosensor, phage display, unnatural amino acids

Procedia PDF Downloads 146
61 Microfluidic Chambers with Fluid Walls for Cell Biology

Authors: Cristian Soitu, Alexander Feuerborn, Cyril Deroy, Alfonso Castrejon-Pita, Peter R. Cook, Edmond J. Walsh

Abstract:

Microfluidics now stands as an academically mature technology: a quarter of a century of research has delivered a vast array of proofs of concept for many biological workflows. However, translation to industry remains poor, with only a handful of notable exceptions – e.g. digital PCR and DNA sequencing – mainly because of biocompatibility issues, the limited range of readouts supported, or the complex operation required. This technology exploits the domination of interfacial forces over gravitational ones at the microscale, replacing solid walls with fluid ones as building blocks for cell micro-environments. By employing only materials used by biologists for decades, the system is shown to be biocompatible, and easy to manufacture and operate. The method consists of displacing a continuous fluid layer into a pattern of isolated chambers overlaid with an immiscible liquid to prevent evaporation. The resulting fluid arrangements can be arrays of micro-chambers with rectangular footprints, which use the maximum surface area available, or structures with irregular patterns. Pliant, self-healing fluid walls confine volumes as small as 1 nl. Such fluidic structures can be reconfigured during assays, giving the platform an unprecedented level of flexibility. Common workflows in cell biology are demonstrated – e.g. cell growth and retrieval, cloning, cryopreservation, fixation and immunolabeling, CRISPR-Cas9 gene editing, and proof-of-concept drug tests. This fluid-shaping technology is shown to have potential for high-throughput cell- and organism-based assays. The ability to make and reconfigure on-demand microfluidic circuits on standard Petri dishes should find many applications in biology, and yield more relevant phenotypic and genotypic responses when compared to standard microfluidic assays.

Keywords: fluid walls, micro-chambers, reconfigurable, freestyle

Procedia PDF Downloads 193
60 Identification of Bioactive Metabolites from Ficus carica and Their Neuroprotective Effects of Alzheimer's Disease

Authors: Hanan Khojah, RuAngelie Edrada-Ebel

Abstract:

Neurodegenerative diseases, including Alzheimer's disease, are a major cause of long-term disability, and oxidative stress is frequently implicated as one of their key contributing factors. Protection against neuronal damage remains a great challenge for researchers. Ficus carica (commonly known as fig) is a species of great antioxidant nutritional value, with a protective mechanism against numerous health disorders related to oxidative stress, including Alzheimer's disease. The purpose of this work was to characterize the non-polar active metabolites in the Ficus carica endocarp, mesocarp, and exocarp. Crude extracts were prepared using several extraction solvents, including 1:1 water:ethyl acetate, acetone, and methanol. The dried extracts were then solvent-partitioned between equal volumes of water and ethyl acetate. Purification and fractionation were accomplished by high-throughput chromatography. The isolated metabolites were tested for their effect on a human neuroblastoma cell line by a cell viability test and a cell cytotoxicity assay with acrolein. Molecular weights of the active metabolites were determined via LC-HRESIMS and GC-EIMS. Metabolomic profiling was performed to identify the active metabolites using differential expression analysis software (MZmine) and SIMCA for multivariate analysis. Structural elucidation and identification of the active metabolites of interest were carried out by 1-D and 2-D NMR. Significant differences in bioactivity in a concentration-dependent assay against acrolein radicals were observed between the three fruit parts; metabolites obtained from the mesocarp and endocarp demonstrated the ability to scavenge ROS radicals. NMR profiling demonstrated that aliphatic compounds such as γ-sitosterol tend to induce neuronal bioactivity and exhibited bioactivity in the cell viability assay. γ-Sitosterol was found in higher concentrations in the mesocarp and is considered one of the major phytosterols in Ficus carica.

Keywords: Alzheimer's disease, Ficus carica, γ-sitosterol, metabolomics

Procedia PDF Downloads 344
59 Characterization of Transcription Factors Involved in Early Defense Response during Interaction of Oil Palm Elaeis guineensis Jacq. with Ganoderma boninense

Authors: Sakeh N. Mohd, Bahari M. N. Abdul, Abdullah S. N. Akmar

Abstract:

Oil palm production generates high export earnings for many countries, especially in the Southeast Asian region. Infection of oil palm by the necrotrophic fungus Ganoderma boninense results in basal stem rot, which compromises oil palm production and leads to significant economic loss. To date, no reliable disease treatment exists, nor has a promising resistant oil palm variety been cultivated to eradicate the disease. Thus, understanding the molecular mechanisms underlying early interactions of oil palm with Ganoderma boninense may be vital to promote preventive or control measures for the disease. In the present study, four-month-old oil palm seedlings were infected via artificial inoculation of Ganoderma boninense on rubber wood blocks. Roots of six biological replicates of treated and untreated oil palm seedlings were harvested at 0, 3, 7 and 11 days post inoculation. Next-generation sequencing was performed to generate high-throughput RNA-Seq data and identify differentially expressed genes (DEGs) during the early oil palm-Ganoderma boninense interaction. Based on de novo transcriptome assembly, a total of 427,122,605 paired-end clean reads were assembled into 30,654 unigenes. DEG analysis revealed upregulation of 173 transcription factors in Ganoderma boninense-treated oil palm seedlings. Sixty-one transcription factors were categorized as DEGs according to stringent cut-off values of log2 ratio (treated/untreated) ≥ 1.0 (corresponding to 2-fold or more upregulation) and P-value ≤ 0.01. Transcription factors responding to biotic stress will be distinguished from those responding to abiotic stress using reverse transcription polymerase chain reaction, and transcription factors unique to biotic stress will be verified using real-time polymerase chain reaction. The findings will help researchers pinpoint defense response mechanisms specific to Ganoderma boninense.
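The DEG cut-off described above amounts to a simple two-condition filter. A minimal sketch with invented expression values and p-values (the study screened for upregulation; the sketch uses the common two-sided, absolute-value form of the fold-change threshold, which also catches downregulation):

```python
import math

def is_deg(treated, untreated, p_value, fc_cutoff=1.0, p_cutoff=0.01):
    """Flag a gene as differentially expressed when the absolute
    log2 expression ratio reaches fc_cutoff (1.0 = 2-fold change)
    and the p-value passes the significance cutoff."""
    log2_ratio = math.log2(treated / untreated)
    return abs(log2_ratio) >= fc_cutoff and p_value <= p_cutoff

# Invented normalized expression values (treated, untreated, p-value)
print(is_deg(40.0, 10.0, 0.001))  # 4-fold change, significant -> True
print(is_deg(12.0, 10.0, 0.001))  # below 2-fold change -> False
print(is_deg(40.0, 10.0, 0.05))   # fails the p-value cutoff -> False
```

Applied gene by gene over the 30,654 unigenes, a filter of this shape yields the reported 61 stringent transcription-factor DEGs.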

Keywords: Ganoderma boninense, necrotrophic, next-generation sequencing, transcription factors

Procedia PDF Downloads 266
58 Cu₂(ZnSn)(S)₄ Electrodeposition from a Single Bath for Photovoltaic Applications

Authors: Mahfouz Saeed

Abstract:

Cu₂(ZnSn)(S)₄ (CTZS) offers potential advantages over CuInGaSe₂ (CIGS) as a solar thin film due to its higher band gap. Preparing such photovoltaic materials by electrochemical techniques is particularly attractive due to the lower processing cost and high throughput of these techniques. Several recent publications report CTZS electroplating; however, the electrochemical process still faces serious challenges, such as achieving a sulfur atomic ratio of about 50% of the total alloy. In this work, we introduce an improved electrolyte composition that enables the direct electrodeposition of CTZS from a single bath. The electrolyte is significantly more dilute than common baths described in the literature. The bath composition we introduce is: 0.0032 M CuSO₄, 0.0021 M ZnSO₄, 0.0303 M SnCl₂, 0.0038 M Na₂S₂O₃, and 0.3 mM Na₂S₂O₃. pHydrion is applied to buffer the electrolyte to pH = 2, and 0.7 M LiCl is applied as the supporting electrolyte. The electrochemical process was carried out at room temperature on a rotating disk electrode, which provides quantitative characterization of the flow. A comprehensive study of the electrochemical behavior at different electrode rotation rates is provided, and the effects of agitation on the atomic composition of the deposit and its adhesion to the molybdenum back contact are discussed. Post-treatment annealing was conducted under a sulfur atmosphere with no need for metal addition from the gas phase during annealing. The potential that produced the desired atomic ratio of CTZS was -0.82 V/NHE. A smooth deposit with uniform composition across the sample surface and depth was obtained at a rotation speed of 500 rpm, and the final sulfur atomic ratio was adjusted to 50.2%. The final composition was investigated using energy-dispersive X-ray spectroscopy (EDS), and XRD was used to analyze CTZS crystallography and thickness. Complete and functional CTZS PV devices with the desired optical properties were fabricated by depositing all the required layers in the correct order. Acknowledgments: The authors thank Case Western Reserve University for technical help and for the use of their instruments.

Keywords: photovoltaic, CTZS, thin film, electrochemical

Procedia PDF Downloads 240
57 Molecular Approach for the Detection of Lactic Acid Bacteria in the Kenyan Spontaneously Fermented Milk, Mursik

Authors: John Masani Nduko, Joseph Wafula Matofari

Abstract:

Many spontaneously fermented milk products are produced in Kenya, where they are integral to the human diet and play a central role in enhancing food security and income generation via small-scale enterprises. Fermentation enhances product properties such as taste, aroma, shelf-life, safety, texture, and nutritional value. Some of these products have demonstrated therapeutic and probiotic effects, although recent reports have linked some to death, biotoxin infections, and esophageal cancer. These products are mostly processed from poor-quality raw materials under unhygienic conditions, resulting in inconsistent product quality and limited shelf-lives. Though the products are very popular, research on their processing technologies is limited, and none of them has been produced under controlled conditions using starter cultures. To modernize the processing technologies for these products, our study aims at describing the microbiology, biochemistry, and chemical composition of a representative Kenyan spontaneously fermented milk product, Mursik, using modern biotechnology (DNA sequencing). Moreover, we will discuss co-creation processes that reflect stakeholders' experiences of traditional fermented milk production technologies and utilization, as well as their ideals and senses of value, allowing the generation of products on common ground for rapid progress. The value of clean starting raw material will be emphasized, the need to define fermentation parameters highlighted, and the employment of standard equipment to attain controlled fermentation discussed. This presentation will review the available information regarding the traditional fermented milk Mursik and highlight our current research on the application of molecular approaches (metagenomics) to the valorization of the Mursik production process through the isolation and identification of starter culture/probiotic strains, as well as the quality and safety aspects of the product. The importance of the research and future research areas on the same subject will also be highlighted.

Keywords: lactic acid bacteria, high throughput biotechnology, spontaneous fermentation, Mursik

Procedia PDF Downloads 292
56 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, calling for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, bringing to the fore the interplay amongst accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
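The k-mer representation at the heart of the approach can be sketched in a few lines. Below is a toy nearest-profile classifier with invented sequences and labels, meant only to show how a sequence becomes a k-mer feature vector; the study itself trained statistical models on 104 MTB genomes:

```python
from collections import Counter

def kmer_profile(seq, k):
    """Relative k-mer frequencies of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def similarity(p, q):
    """Overlap similarity between two k-mer profiles."""
    return sum(min(p.get(kmer, 0.0), q.get(kmer, 0.0)) for kmer in p)

# Invented toy sequences standing in for genomes of two phenotypes
resistant   = "ATGCGGATGCGGATGCGG"
susceptible = "ATTTAAATTTAAATTTAA"
query       = "ATGCGGATGCGG"

k = 4
pr = kmer_profile(resistant, k)
ps = kmer_profile(susceptible, k)
pq = kmer_profile(query, k)
label = "resistant" if similarity(pq, pr) > similarity(pq, ps) else "susceptible"
print(label)  # -> resistant
```

With real genomes the profiles are vectors over up to 4^k possible k-mers, which is why larger k (such as the study's k = 10) sharpens discrimination while inflating memory and compute cost.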

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
54 Establishing a Drug Discovery Platform to Progress Compounds into the Clinic

Authors: Sheraz Gul

Abstract:

The requirements for progressing a compound to clinical trials are well established and rely on the results of in-vitro and in-vivo animal tests to indicate that it is likely to be safe and efficacious when tested in humans. The typical data package required will include demonstrating compound safety, toxicity, bioavailability, pharmacodynamics (potential effects of the compound on body systems) and pharmacokinetics (how the compound is absorbed, distributed, metabolised and eliminated after dosing in humans). If the compound meets the clinical Candidate criteria and is deemed worthy of further development, a submission can be made to regulatory bodies such as the US Food & Drug Administration for an exploratory Investigational New Drug study. The purpose of such a study is to collect data establishing that the compound will not expose humans to unreasonable risks when used in limited, early-stage clinical studies in patients or normal volunteer subjects (Phase I). These studies are also designed to determine the metabolism and pharmacologic actions of the drug in humans and the side effects associated with increasing doses, and, if possible, to gain early evidence of effectiveness. To reach the above goals, we have developed a pre-clinical high-throughput Absorption, Distribution, Metabolism and Excretion-Toxicity (ADME-Toxicity) panel of assays to identify compounds that are likely to meet the Lead and Candidate compound acceptance criteria. This panel includes solubility studies in a range of biological fluids, cell viability studies in cancer and primary cell lines, mitochondrial toxicity, off-target effects (across the kinase, protease, histone deacetylase, phosphodiesterase and GPCR protein families), CYP450 inhibition (5 different CYP450 enzymes), CYP450 induction, cardiotoxicity (hERG) and genotoxicity.
This panel of assays has been applied to multiple compound series developed in a number of projects delivering Lead and clinical Candidates and examples from these will be presented.

Keywords: absorption, distribution, metabolism and excretion-toxicity, drug discovery, Food and Drug Administration, pharmacodynamics

Procedia PDF Downloads 173
53 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, and environmental qualification testing. The main performance characteristics of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated into the autonomous FORMOSAT-7 NSPO-built satellite, scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which currently only 50% of the throughput is used. In response to the rapid growth of global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms that carry a large computational load, such as signal acquisition and correlation and reused code, to an FPGA, whose processor is responsible for operational control, the navigation solution, orbit propagation and so on. Because FPGA technology is developing and evolving rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of a multi-mode, multi-band, high-precision navigation receiver, or a scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. 
This paper will explain the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
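The abstract names signal acquisition and correlation as the computation-heavy kernels moved to the FPGA. As an illustration only (not NSPO's implementation, and using an invented 8-chip sequence rather than a real 1023-chip C/A code), code-phase acquisition amounts to finding the offset that maximizes the circular correlation between the received stream and a local replica:

```python
def correlate(received, replica):
    """Circular cross-correlation of a received chip stream with a
    local replica code, evaluated at every code-phase offset."""
    n = len(replica)
    return [sum(received[(i + shift) % n] * replica[i] for i in range(n))
            for shift in range(n)]

# Invented +/-1 chip sequence standing in for a GPS spreading code
replica = [1, -1, 1, 1, -1, 1, -1, -1]
delay = 3
received = replica[-delay:] + replica[:-delay]  # replica delayed by 3 chips

peaks = correlate(received, replica)
print(peaks.index(max(peaks)))  # -> 3, the simulated code-phase delay
```

In a real receiver this search also spans Doppler bins and is typically done with FFTs; the brute-force loop above is O(n²) and is only meant to show the regular multiply-accumulate structure that makes the operation a natural fit for FPGA offloading.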

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 369
52 Statistical Pattern Recognition for Biotechnological Process Characterization Based on High Resolution Mass Spectrometry

Authors: S. Fröhlich, M. Herold, M. Allmer

Abstract:

Early-stage quantitative analysis of host cell protein (HCP) variations is challenging yet necessary for comprehensive bioprocess development. High resolution mass spectrometry (HRMS) provides a high-end technology for accurate identification alongside quantitative information. Here we describe a flexible HRMS assay platform to quantify HCPs relevant in microbial expression systems such as E. coli, in both upstream and downstream development, by means of MVDA tools. Cell pellets were lysed and proteins extracted; purified samples were not further treated before applying the SMART tryptic digest kit. Peptide separation was optimized using an RP-UHPLC separation platform. HRMS-MS/MS analysis was conducted on an Orbitrap Velos Elite applying CID. Quantification was performed label-free, taking into account ionization properties and physicochemical peptide similarities. Results were analyzed using SIEVE 2.0 (Thermo Fisher Scientific) and SIMCA (Umetrics AG). The developed HRMS platform was applied to an E. coli expression set with varying productivity and the corresponding downstream process. Selected HCPs were successfully quantified within the fmol range. Analysing HCP networks based on pattern analysis facilitated low-level quantification and enhanced validity. This approach is of high relevance for high-throughput screening experiments during upstream development, e.g. for titer determination, dynamic HCP network analysis or product characterization. Considering the downstream purification process, physicochemical clustering of identified HCPs is relevant for adjusting buffer conditions accordingly. Overall, the technology provides an innovative approach to label-free MS-based quantification relying on statistical pattern analysis and comparison. Absolute quantification based on physicochemical properties and a peptide similarity score requires no sophisticated sample preparation strategies and proves straightforward, sensitive, and highly reproducible for product characterization.

Keywords: process analytical technology, mass spectrometry, process characterization, MVDA, pattern recognition

Procedia PDF Downloads 249
51 Multicellular Cancer Spheroids as an in Vitro Model for Localized Hyperthermia Study

Authors: Kamila Dus-Szachniewicz, Artur Bednarkiewicz, Katarzyna Gdesz-Birula, Slawomir Drobczynski

Abstract:

In modern oncology, hyperthermia (HT) is defined as controlled tumor heating. HT treatment temperatures range from 40 to 48 °C and can selectively damage heat-sensitive cancer cells or limit their further growth, usually with minimal injury to healthy tissues. Despite many advantages, conventional whole-body and regional hyperthermia have clinically relevant side effects, including cardiac and vascular disorders. Additionally, the lack of accessibility of deep-seated tumor sites and the impaired targeting of micrometastases render HT less effective. It is believed that the above disadvantages can be significantly overcome by the application of biofunctionalized microparticles, which can specifically target tumor sites and become activated by an external stimulus to provide a sufficient cellular response. In our research, a unique optical tweezers system enabled the capture of silica microparticles, primary cells, and tumor spheroids in a highly controllable and reproducible environment to study the impact of localized heat stimulation on normal and pathological cells and within multicellular tumor spheroids. A high-throughput spheroid model was introduced to better mimic the response of tumors to HT treatment in vivo. Additionally, local heating of tumor spheroids was performed under strictly controlled conditions resembling the tumor microenvironment (temperature, pH, hypoxia, etc.), in response to localized and nonhomogeneous hyperthermia in the extracellular matrix, which promotes tumor progression and metastatic spread. The lack of precise control over these well-defined parameters in basic research leads to discrepancies in the response of tumor cells to the new treatment strategy in preclinical animal testing.
The developed approach also enables sorting out subclasses of cells that exhibit partial or total resistance to therapy, in order to understand fundamental aspects of the resistance shown by given tumor cells in response to a given therapy mode and conditions. This work was funded by the National Science Centre (NCN, Poland) under grant no. UMO-2017/27/B/ST7/01255.

Keywords: cancer spheroids, hyperthermia, microparticles, optical tweezers

Procedia PDF Downloads 133
50 Quantitative Polymerase Chain Reaction Analysis of Phytoplankton Composition and Abundance to Assess Eutrophication: A Multi-Year Study in Twelve Large Rivers across the United States

Authors: Chiqian Zhang, Kyle D. McIntosh, Nathan Sienkiewicz, Ian Struewing, Erin A. Stelzer, Jennifer L. Graham, Jingrang Lu

Abstract:

Phytoplankton plays an essential role in freshwater aquatic ecosystems and is the primary group synthesizing organic carbon and providing food sources or energy to ecosystems. Therefore, the identification and quantification of phytoplankton are important for estimating and assessing ecosystem productivity (carbon fixation), water quality, and eutrophication. Microscopy is the current gold standard for identifying and quantifying phytoplankton composition and abundance. However, microscopic analysis of phytoplankton is time-consuming, has a low sample throughput, and requires deep knowledge and rich experience in microbial morphology to implement. To improve this situation, quantitative polymerase chain reaction (qPCR) was considered for phytoplankton identification and quantification. Using qPCR to assess phytoplankton composition and abundance, however, has not been comprehensively evaluated. This study focused on: 1) conducting a comprehensive performance comparison of qPCR and microscopy techniques in identifying and quantifying phytoplankton and 2) examining the use of qPCR as a tool for assessing eutrophication. Twelve large rivers located throughout the United States were evaluated using data collected from 2017 to 2019 to understand the relationship between qPCR-based phytoplankton abundance and eutrophication. This study revealed that temporal variation of phytoplankton abundance in the twelve rivers was limited within years (from late spring to late fall) and among different years (2017, 2018, and 2019). Midcontinent rivers had moderately greater phytoplankton abundance than eastern and western rivers, presumably because midcontinent rivers were more eutrophic. The study also showed that qPCR- and microscope-determined phytoplankton abundance had a significant positive linear correlation (adjusted R² = 0.772, p-value < 0.001).
In addition, phytoplankton abundance assessed via qPCR showed promise as an indicator of the eutrophication status of those rivers, with oligotrophic rivers having low phytoplankton abundance and eutrophic rivers having (relatively) high phytoplankton abundance. This study demonstrated that qPCR could serve as an alternative tool to traditional microscopy for phytoplankton quantification and eutrophication assessment in freshwater rivers.

Keywords: phytoplankton, eutrophication, river, qPCR, microscopy, spatiotemporal variation

Procedia PDF Downloads 101