Search results for: powder processing

642 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Database Process Model, which starts with the selection of the datasets. The dataset used in this study was obtained from the Massachusetts Institute of Technology Lincoln Laboratory and was then pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used for testing to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross validation with the J48 decision tree algorithm and its default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L, and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested for developing an applicable system in this area.
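
For readers who want a feel for the modelling step, here is a minimal sketch of 10-fold cross validation of a decision tree and a Naïve Bayes classifier in Python. It is only a stand-in for the study's workflow: scikit-learn's CART-based DecisionTreeClassifier replaces J48 (C4.5), and the file name and label column are hypothetical.

```python
# Illustrative sketch only: scikit-learn stand-ins for the study's J48 and Naive Bayes models.
# Assumes the pre-processed dataset is already numeric; file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

data = pd.read_csv("intrusion_records.csv")          # hypothetical pre-processed training set
X = data.drop(columns=["class"])                     # features
y = data["class"]                                    # normal, DOS, U2R, R2L, probe

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)   # 10-fold cross validation
for name, model in [("decision tree", DecisionTreeClassifier(random_state=42)),
                    ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```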

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 296
641 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system that uses a grid of light markers. This approach is applied to several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining thick coal seams with release onto a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modelling, together with validation in laboratory conditions and calculation of relative errors, was carried out. A method for calculating the capability of an apron feeder based on a machine vision system, along with a simplified technology for three-dimensional modelling of the examined measuring area with machine vision, is offered. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder when working thick seams with longwall complexes and release onto a conveyor, with accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capability. A feature of the obstacle detection task is that obstacles produce distortions of the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on a machine vision obstacle detection system. A sample fragment of obstacle detection at the moment the laser grid is distorted is demonstrated.
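
The volume and capacity arithmetic described above can be sketched in a few lines using only the four basic operations; all grid sizes, heights and densities below are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of the volume/capacity arithmetic; every number here is assumed for illustration.
# Bed heights are taken to come from the light-marker grid (metres per grid cell).
cell_area = 0.05 * 0.05          # measuring-grid cell size, m^2 (assumed)
bulk_density = 900.0             # coal bulk density, kg/m^3 (assumed)
interval = 1.0                   # time between two measurements, s

heights_t0 = [[0.10, 0.12], [0.11, 0.09]]   # bed height per grid cell at time t0, m
heights_t1 = [[0.14, 0.15], [0.13, 0.12]]   # bed height per grid cell at time t1, m

def bed_volume(heights):
    # volume = sum over cells of (cell area * height); only +, -, *, / are needed
    total = 0.0
    for row in heights:
        for h in row:
            total = total + cell_area * h
    return total

volume_moved = bed_volume(heights_t1) - bed_volume(heights_t0)   # m^3 moved during the interval
capacity_kg_s = volume_moved * bulk_density / interval            # feeder capability, kg/s
print(round(capacity_kg_s, 2))
```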

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 114
640 The Interplay between Consumer Knowledge, Cognitive Effort, Financial Healthiness and Trust in the Financial Marketplace

Authors: Torben Hansen

Abstract:

While trust has long been regarded as one of the most critical variables for developing and maintaining well-functioning financial customer-seller relationships, it can be suggested that trust not only relates to customer trust in individual companies (narrow-scope trust) but also to the broader business context in which consumers carry out their financial behaviour (broad-scope trust). However, despite the well-recognized significance of trust in marketing research, only a few studies have investigated the role of broad-scope trust in consumer financial behaviour. Moreover, as one of its many serious outcomes, the global financial crisis has elevated the need for an improved understanding of the role of broad-scope trust in consumer financial services markets. Only a minority of US and European consumers are currently confident in financial companies, and ‘financial stability’ and ‘trust’ are now among the top reasons for choosing a bank. This research seeks to address this shortcoming in the marketing literature by investigating direct and moderating effects of broad-scope trust on consumer financial behaviour. Specifically, we take an ability-effort approach to consumer financial behaviour, whose basic premise is that the quality of consumer actions is influenced by ability factors, for example consumer knowledge and cognitive effort. Our study is based on two surveys. Survey 1 comprises 1,155 bank consumers, whereas survey 2 comprises 764 pension consumers. The results indicate that broad-scope trust negatively moderates the relationships between knowledge and financial healthiness and between cognitive effort and financial healthiness. In addition, it is demonstrated that broad-scope trust negatively influences cognitive effort. Specifically, the results suggest that broad-scope trust contributes to the financial well-being of consumers with limited financial knowledge and processing capabilities. Since financial companies depend on customers paying their loans and bills, they have a greater interest in developing relations with consumers who display healthy financial behaviour than with those who do not. Hence, financial managers should be engaged in monitoring and influencing broad-scope trust. To conclude, by taking into account the contextual effect of broad-scope trust, the present study adds to our understanding of the knowledge-effort-behaviour relationship in consumer financial markets.

Keywords: cognitive effort, customer-seller relationships, financial healthiness, knowledge, trust

Procedia PDF Downloads 441
639 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia

Authors: Nguyen-Thanh Son

Abstract:

Flood is among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at national and regional levels because policymakers need reliable spatial and temporal information on flood-prone areas to form successful monitoring programs and reduce possible impacts on the country’s economy and people’s livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time-series vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The results of flood mapping were verified against ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by the close agreement between the flood-mapped area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Overall, although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and low-resolution bias between the mapping results and the ground reference data, our methods yielded satisfactory results for delineating the spatiotemporal evolution of floods. The results, in the form of quantitative information on spatiotemporal flood distributions, could be beneficial to policymakers in evaluating their management strategies for mitigating the negative effects of floods on agriculture and people’s livelihoods in the country.
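
The accuracy-assessment step can be reproduced from a confusion matrix of mapped versus reference classes, as in the minimal sketch below; the counts are hypothetical, not the study's validation data.

```python
# Minimal sketch: overall accuracy and Cohen's kappa from a flood / non-flood confusion matrix.
# The counts below are hypothetical placeholders, not the study's data.
import numpy as np

confusion = np.array([[430, 40],    # rows: reference class, columns: mapped class
                      [ 55, 310]])  # [[flood, flood→non-flood], [non-flood→flood, non-flood]]

n = confusion.sum()
observed = np.trace(confusion) / n                                        # overall accuracy
expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2   # chance agreement
kappa = (observed - expected) / (1 - expected)                            # Cohen's kappa
print(f"overall accuracy = {observed:.3f}, kappa = {kappa:.3f}")
```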

Keywords: MODIS, flood, mapping, Cambodia

Procedia PDF Downloads 126
638 Sexual Health And Male Fertility: Improving Sperm Health With Focus On Technology

Authors: Diana Peninger

Abstract:

Over 10% of couples in the U.S. have infertility problems, with roughly 40% traceable to the male partner. Yet, little attention has been given to improving men’s contribution to the conception process. One solution that is showing promise in increasing conception rates for IVF and other assisted reproductive technology treatments is a first-of-its-kind semen collection device that has been engineered to mitigate sperm damage caused by traditional collection methods. Patients are able to collect semen at home and deliver it to clinics within 48 hours for use in fertility analysis and treatment, with less stress and improved specimen viability. This abstract shares these findings along with expert insight and tips to help attendees understand the key role sperm collection plays in addressing and treating reproductive issues, while helping to improve patient outcomes and success. Our research aimed to determine whether male reproductive outcomes can be improved by enhancing sperm specimen health with a focus on technology. We utilized a redesigned semen collection cup (patented as the Device for Improved Semen Collection, DISC, U.S. Patent 6864046, known commercially as ProteX) that met a series of physiological parameters. Previous research demonstrated significant improvement in semen parameters (forward motility, progression, viability, and longevity) and overall sperm biochemistry when the DISC is used for collection. Animal studies have also shown dramatic increases in pregnancy rates. Our current study compares samples collected in the DISC, the next-generation DISC (DISCng), and a standard specimen cup (SSC), collected dry, with a measured 1 mL of media, and with media in excess (5 mL). Both human and animal testing will be included. With sperm counts declining at alarming rates due to environmental, lifestyle, and other health factors, accurate evaluations of sperm health are critical to understanding reproductive health and the origins and treatments of infertility. An increase in sperm health, as measured by extensive semen parameter analysis, and semen parameters that remained stable for 48 hours, expanding the processing window from 1 hour to 48 hours, were also demonstrated.

Keywords: reproductive, sperm, male, infertility

Procedia PDF Downloads 129
637 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography

Authors: Nicole M. Martino

Abstract:

Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck; however, this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck – the location where damage originates. In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages, and their condition varied from brand new to in need of replacement. The goals of this work were to first verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see if combining the results of both methods would provide higher confidence than if the condition assessment were completed using only one method. The results from each method were presented as plan view color contour plots. The results from one of the decks assessed as a part of this research, including these plan view plots, are presented in this paper. Furthermore, in order to address the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.
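
As an illustration of the plan-view colour contour plots mentioned above, the sketch below grids synthetic GPR-style amplitude values with matplotlib; it is not the paper's data or processing chain.

```python
# Illustrative only: a plan-view colour contour plot of synthetic rebar reflection amplitudes.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 12, 60)          # distance along the deck, m
y = np.linspace(0, 8, 40)           # distance across the deck, m
X, Y = np.meshgrid(x, y)
amplitude = np.exp(-((X - 4)**2 + (Y - 3)**2) / 4) * -12 - 3   # synthetic attenuation patch, dB

fig, ax = plt.subplots(figsize=(7, 4))
contour = ax.contourf(X, Y, amplitude, levels=15, cmap="RdYlGn")
fig.colorbar(contour, ax=ax, label="Rebar reflection amplitude (dB)")
ax.set_xlabel("Longitudinal position (m)")
ax.set_ylabel("Transverse position (m)")
ax.set_title("Plan-view contour plot (synthetic example)")
plt.show()
```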

Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks

Procedia PDF Downloads 154
636 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model

Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck

Abstract:

Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, as with any biological nitrogen removal process, the potent greenhouse gas nitrous oxide (N2O) can be emitted from Nit/DNit. Challenges remain in understanding the fundamental mechanisms and in developing engineered mitigation strategies for N2O production. To provide answers, this work focuses on manure as a model, the biggest wasted nitrogen mass flow through our economies. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure supplemented with a co-substrate. Glycerin, a by-product of vegetable oil, was used as the external carbon source. Out-selection of nitrite oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O2/L), high temperature (35ºC) and relatively high free ammonia (FA) (initially 10 mg NH3-N/L). After reaching steady state, the process was able to remove 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N2O emissions (over 15% of the nitrogen loading) were observed under the baseline operational condition, and these increased further under nitrite accumulation and a low organic carbon to nitrogen ratio. Yet, higher DO (~2.2 mg O2/L) lowered aerobic N2O emissions and weakened the dependency of N2O on nitrite concentration, suggesting a shift of the N2O production pathway at elevated DO levels. Greenhouse gas emissions from such a system could be substantially minimized by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy. Promising steps forward are presented in this abstract, and at the conference insights from ongoing experiments will also be shared.

Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure

Procedia PDF Downloads 249
635 The Effect of Music on Consumer Behavior

Authors: Lara Ann Türeli, Özlem Bozkurt

Abstract:

There is a biochemical component to listening to music. The type of music listened to can lead to different levels of neurotransmitter and biochemical activity within the brain, resulting in brain stimulation and different moods. Therefore, music plays an important role in neuromarketing and consumer behavior. The quality of a commercial can be measured by the effect its music has on the audience. Thus, understanding how music can affect the brain can provide better marketing strategies for all businesses. The type of music used plays an important role in how a person responds to certain experiences. In the context of marketing and consumer behavior, music can determine whether a person will be intrigued to buy something. Depending on the type of music listened to, the music may trigger the release of pleasurable neurotransmitters such as dopamine. Dopamine is a neurotransmitter that plays an important role in reward pathways in the brain. When an individual experiences a pleasurable activity, increased levels of dopamine are produced, eventually leading to the formation of new reward pathways. Consequently, the increased dopamine activity triggered by music can result in new reward pathways in the brain. Selecting pleasurable music for commercials can therefore result in long-term brain stimulation, increasing consumerism. The effect of music on consumerism should be considered not only in commercials but also in the atmosphere it creates within stores. The type of music played in a store can affect consumer behavior and intention. Specifically, the rhythm, pitch, and pace of music contribute to the mood of a song, and the background music in a store can determine the consumer’s emotional presence and consequently affect their intentions. In conclusion, understanding the physiological, psychological, and neurochemical basis of the effect of music on brain stimulation is essential to understanding consumer behavior. The role of dopamine in the formation of reward pathways in response to music directly contributes to consumer behavior and to the tendency of a commercial or store to leave a long-term effect on the consumer. Careful consideration of the pitch, pace, and rhythm of a song in the selection of music can help companies not only predict but also determine the behavior of a consumer.

Keywords: sensory processing, neuropsychology, dopamine, neuromarketing

Procedia PDF Downloads 80
634 Hematological and Biochemical Indices of Starter Broiler Chickens Fed African Black Plum Seed Nut (Vitex Doniana) Meal

Authors: Obadire F. O., Obadire S. O., Adeoti R. F., Pirgozliev V.

Abstract:

An experiment was conducted to determine the efficacy of utilizing African black plum seed nut (ABPNBD) meal on the hematological and biochemical indices of broiler chickens fed rations formulated to substitute wheat offal. A total of 150 one-day-old male Agrited birds were reared for the 28 days of the experiment. The birds were assigned to five dietary treatments, with ten birds per treatment replicated 3 times, in a completely randomized design. Experimental diets were formulated by supplementing the milled African black plum nut at 0, 5, 10, 12.5, and 15% inclusion levels in the starter broiler ration, designated as T1 (control diet containing no ABPNBD) and treatments T2, T3, T4, and T5 containing ABPNBD at 5, 10, 12.5, and 15%, respectively. The hematological and biochemical indices of the birds were determined. The results revealed that all hematological parameters measured were significantly affected (P < 0.05) except for WBC. Increasing inclusion levels of ABPNBD decreased the PCV, HB, and RBC of the birds across the treatment groups, and birds fed the 12.5 and 15% ABPNBD diets recorded the lowest values for these parameters. The serum biochemical indices showed a significant (P < 0.05) influence for all parameters measured except for alanine transaminase (ALT), aspartate transaminase (AST), and creatinine. The total protein (TP), albumin, globulin, and glucose values were reduced across the treatment groups as ABPNBD inclusion increased. Birds fed above 10% ABPNBD recorded the lowest values of TP, albumin, globulin, and glucose when compared with birds on the control diet and other treatments. Uric acid ranged from 3.85 to 2.13 mmol/L, while creatinine ranged from 62.00 to 53.50 mmol/L. AST ranged from 8.50 u/l (5%) to 7.90 u/l (10%), and ALT ranged from 7.50 u/l (12.5%) to 5.50 u/l (5 and 10%). In conclusion, dietary inclusion of African black plum up to 10% has no detrimental effect on the health of starter chickens, whereas inclusion above 10% showed a negative effect on some of the blood parameters measured. Therefore, African black plum should be supplemented with probiotics or subjected to different processing methods if it is to be used at a 15% inclusion level for optimal results.

Keywords: African black plum seed, starter broiler chickens, hematological and serum biochemical indices, Vitex doniana

Procedia PDF Downloads 52
633 Studies of Single Nucleotide Polymorphism of Proteosomal Gene Complex and Their Association with HBV Infection Risk in India

Authors: Jasbir Singh, Devender Kumar, Davender Redhu, Surender Kumar, Vandana Bhardwaj

Abstract:

Single nucleotide polymorphisms (SNPs) of the proteosomal gene complex are involved in the pathogenesis of hepatitis B virus (HBV) infection. Members of this proteosomal gene complex include the large multifunctional proteins (LMP) and the transporters associated with antigen presentation, which help in antigen presentation. Both are involved in intracellular processing and presentation of viral antigens in association with Major Histocompatibility Complex (MHC) Class I molecules. A total of one hundred hepatitis B virus-infected samples and one hundred control samples from northern India were studied. Genomic DNA was extracted from all studied samples, and the PCR-RFLP method was used for genotyping at different positions of the LMP genes. Genotypes at a given position were inferred from the banding pattern, and genotype and haplotype frequencies were calculated. The homozygous SNP {A>C} observed at codon 145 of the LMP7 gene had a protective role against HBV, as its distribution was significantly higher among controls than among cases. The heterozygous SNP {A>C} at codon 145 of the LMP7 gene made individuals more susceptible to HBV infection, as its distribution was significantly higher among cases than among controls. An SNP {T>C} was observed at codon 60 of the LMP2 gene, but statistically significant differences were not observed between controls and cases. For codon 145 of LMP7 and codon 60 of LMP2, four haplotypes were constructed. Haplotype I (LMP2 ‘C’ and LMP7 ‘A’) made individuals carrying it more susceptible to HBV infection, as its distribution was significantly higher among cases than among controls. Haplotype II (LMP2 ‘C’ and LMP7 ‘C’) conferred protection against HBV infection, as its distribution was significantly higher among controls than among cases. Thus it can be concluded that the homozygous SNP {A>C} at codon 145 of LMP7 and Haplotype II (LMP2 ‘C’ and LMP7 ‘C’) have a protective role against HBV infection, whereas the heterozygous SNP {A>C} at codon 145 of LMP7 and Haplotype I (LMP2 ‘C’ and LMP7 ‘A’) make individuals more susceptible to HBV infection.
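
A case-control association of the kind reported above is commonly tested with a chi-square test on the genotype contingency table; the sketch below illustrates this with hypothetical counts, not the study's data.

```python
# Minimal sketch: chi-square test on a 2x3 table of genotype counts (cases vs. controls).
# The counts are hypothetical placeholders for one SNP (LMP7 codon 145), not the study's data.
from scipy.stats import chi2_contingency

# rows: cases, controls; columns: genotypes AA, AC, CC
table = [[38, 47, 15],
         [30, 35, 35]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference in genotype distribution between groups.
```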

Keywords: Hepatitis B Virus, single nucleotide polymorphism, low molecular weight proteins, transporters associated with antigen presentation

Procedia PDF Downloads 308
632 Language Choice and Language Maintenance of Northeastern Thai Staff in Suan Sunandha Rajabhat University

Authors: Napasri Suwanajote

Abstract:

The purposes of this research were to analyze and evaluate the factors behind successful OTOP production processes in order to develop a learning center on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality. The research was designed as a qualitative study gathering information from 30 OTOP producers in Bangkontee District, Samudsongkram Province. They were all interviewed on three main parts. Part 1 concerned the production process, including 1) production, 2) product development, 3) community strength, 4) marketing possibility, and 5) product quality. Part 2 evaluated the appropriate success factors, including 1) analysis of the success factors, 2) evaluation of the strategy based on the Sufficiency Economic Philosophy, and 3) the model of a learning center on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality. The results showed that the production did not affect the environment and had the potential to maintain standard production quality, using raw materials available in the country. Regarding the product and community strength over the past year, it was found that there was no appropriate packaging showing product identity according to global market standards; producers needed training on packaging, especially for food and drink products. Regarding product quality and product specification, it was found that the products were certified by the local OTOP standard, and there should be a responsible organization to help the uncertified producers pass the standard. However, there was a problem of food contamination, which is hazardous to consumers; the producers should cooperate with the government sector or educational institutes involved with food processing to reach the FDA standard. The results from the small group discussion showed that the community expected higher education and a better standard of living, and problems reported by the community included informal debt and drugs. Eight steps were identified for developing the model of a learning center on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality.

Keywords: production process, OTOP, sufficiency economic philosophy, language choice

Procedia PDF Downloads 237
631 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data

Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang

Abstract:

Background and Significance of the Study: Congenital Heart Defects (CHDs) are the most common malformations at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no DNA methylation biomarkers are used for early detection of CHDs. Existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome‐wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functionalities of selected genes with high sensitivity and specificity were then assessed in terms of molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both the training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%, respectively. GO analyses for the top 600 genes showed that these putative differentially methylated genes were primarily associated with the regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
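
The random forest and ROC steps can be sketched as follows with scikit-learn; the data shapes mirror the study design (48 infants, 600 selected genes) but the values are randomly generated placeholders.

```python
# Illustrative sketch of the classification step: a random forest on selected methylation
# features, evaluated with a ROC curve. The data below are random placeholders, not study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
X = rng.random((48, 600))            # methylation levels of 600 selected genes for 48 infants
y = np.array([0] * 24 + [1] * 24)    # 0 = healthy, 1 = CHD

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    stratify=y, random_state=0)
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)

probs = forest.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, probs)
print(f"test-set AUC = {auc(fpr, tpr):.2f}")

# Feature importances can then point to candidate biomarker genes.
top_features = np.argsort(forest.feature_importances_)[::-1][:3]
print("indices of the three most informative features:", top_features)
```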

Keywords: biomarker, congenital heart defects, DNA methylation, random forest

Procedia PDF Downloads 158
630 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object Relational Database Management System (ORDBMS) that has been in existence for a long time. Despite the superior features it wraps and packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL in order to encourage its utilization and elucidate its importance. PostgreSQL is also known as the world’s most elementary SQL-compliant open source ORDBMS. Yet users have not resolved to PostgreSQL, because its capabilities remain hidden under layers of complexity and its persistently textual environment is difficult for an introductory user. Simply stated, there is a dire need for an easy way to help users comprehend the procedures and standards by which databases are created, tables and the relationships among them are defined, and queries and their condition-based flow are manipulated in PostgreSQL, so that the community resolves to PostgreSQL at an augmented rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community hesitates to migrate to PostgreSQL’s environment is carried out. These findings are then modulated and tailored based on the scope and the constraints discovered. The research proposes a system that will serve as a designing platform as well as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user’s cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features offered by PostgreSQL over other existing systems, in order to convey the importance and simplicity it offers to a hesitant user.
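
For orientation, the sketch below shows, via psycopg2 rather than the proposed visual editor, the kind of workflow the tool is meant to teach: creating related tables and running a condition-based query. Connection details and table names are hypothetical.

```python
# A minimal sketch (not the proposed tool itself) of the taught workflow in PostgreSQL.
# Connection parameters, table names and the salary threshold are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="teaching_db", user="postgres",
                        password="secret", host="localhost")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS department (
        dept_id  SERIAL PRIMARY KEY,
        name     TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS employee (
        emp_id   SERIAL PRIMARY KEY,
        name     TEXT NOT NULL,
        salary   NUMERIC(10, 2),
        dept_id  INTEGER REFERENCES department(dept_id)   -- relationship between the tables
    );
""")
conn.commit()

# A query whose flow depends on a condition, joining the two related tables.
cur.execute("""
    SELECT e.name, d.name
    FROM employee e
    JOIN department d ON d.dept_id = e.dept_id
    WHERE e.salary > %s;
""", (50000,))
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()
```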

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 283
629 Spatial Analysis of Survival Pattern and Treatment Outcomes of Multi-Drug Resistant Tuberculosis (MDR-TB) Patients in Lagos, Nigeria

Authors: Akinsola Oluwatosin, Udofia Samuel, Odofin Mayowa

Abstract:

The study is aimed at a Geographic Information System (GIS)-based spatial analysis of the survival patterns and treatment outcomes of multi-drug resistant tuberculosis (MDR-TB) cases in Lagos, Nigeria, with the objective of informing priority areas for public health planning and resource allocation. Multi-drug resistant tuberculosis (MDR-TB) develops due to problems such as irregular drug supply, poor drug quality, inappropriate prescription, and poor adherence to treatment. The shapefiles for this study were already georeferenced to the Minna datum. Patient information from various hospitals was acquired in MS Excel and later converted to CSV files for easy processing in ArcMap. To superimpose the patient information on the spatial data, the addresses were geocoded to generate the longitude and latitude of each patient. The database was then used for SQL queries on the various treatment patterns. To show the pattern of disease spread, spatial autocorrelation analysis was used, and the result was displayed in a graphical format showing areas of dispersed, random, and clustered patient distribution in the study area. Hot and cold spot analysis was carried out to show high-density areas. The distance between the patients and the closest health facility was examined using buffer analysis. The results show that 22% of the points were successfully matched, while 15% were tied; however, a greater percentage remained unmatched, which reflects the fact that most of the streets within the State are unnamed and that many patients are likely to supply wrong addresses. MDR-TB patients of all age groups are concentrated within the Lagos-Mainland, Shomolu, Mushin, Surulere, Oshodi-Isolo, and Ifelodun LGAs. MDR-TB patients in the 30-47 years age group were the most numerous, at about 184. The ART treatment outcomes revealed that a high number of patients (300) were not on ART treatment, while only 45 patients were. The Z-score of the distribution is greater than 2.58, which means that the distribution is highly clustered at a significance level of 0.01.
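
The geocoding, autocorrelation and proximity steps described above can be approximated in open-source tools, as in the hedged sketch below using geopandas, libpysal and esda in place of ArcMap; file names, column names and the projection choice are assumptions.

```python
# Hedged sketch of the spatial steps with open-source libraries; all inputs are hypothetical.
import pandas as pd
import geopandas as gpd
from libpysal.weights import KNN
from esda.moran import Moran

patients = pd.read_csv("mdr_tb_patients.csv")        # geocoded records: lon, lat, on_art (0/1)
gdf = gpd.GeoDataFrame(patients,
                       geometry=gpd.points_from_xy(patients["lon"], patients["lat"]),
                       crs="EPSG:4326").to_crs(epsg=32631)   # project to metres (UTM 31N for Lagos)

# Global spatial autocorrelation of an attribute (here, ART treatment status).
w = KNN.from_dataframe(gdf, k=8)
moran = Moran(gdf["on_art"].values, w)
print(f"Moran's I = {moran.I:.3f}, z = {moran.z_norm:.2f}, p = {moran.p_norm:.4f}")
# A z-score above 2.58 would indicate significant clustering at the 0.01 level.

# Distance from each patient to the closest health facility (buffer-style proximity check).
facilities = gpd.read_file("health_facilities.shp").to_crs(gdf.crs)
gdf["dist_to_facility_m"] = gdf.geometry.distance(facilities.unary_union)
print(gdf["dist_to_facility_m"].describe())
```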

Keywords: tuberculosis, patients, treatment, GIS, MDR-TB

Procedia PDF Downloads 152
628 Tale of Massive Distressed Migration from Rural to Urban Areas: A Study of Mumbai City

Authors: Vidya Yadav

Abstract:

Migration is the demographic process that links rural to urban areas, generating or spurring the growth of cities. Evidence shows the role of the city in production processes; the city is viewed as a centre of power and a centre of change. It has been observed that not only professionals want to settle down in urban areas but rural labourers are also coming to cities for employment. These are people who are compelled to migrate to metropolises because of the lack of employment opportunities in their place of residence. However, the cities also fail to provide adequate employment because of limited job creation and capital-intensive industrialization. These masses of incoming migrants are therefore forced to take up whatever employment is available to them, particularly in urban informal activities. Ultimately, with such informal jobs, they are compelled to stay in slum areas, which are another form of deprived housing colonies. The paper seeks to examine the evidence of poverty-induced migration from rural to urban areas (particularly into the urban agglomeration). The present paper utilizes the rich census migration data (D-Series) for 1991-2001. Results show that Mumbai remains the most attractive destination for migrants. The migrants come mainly from the major states of Uttar Pradesh, Bihar, West Bengal, Jharkhand, Odisha, and Rajasthan. Male-dominated migration is related mostly to employment, while female migration is mostly due to marriage. The occupational absorption of migrants who moved for employment is cross-classified with educational status. Results show that illiterate males are primarily engaged in low-grade production and processing work, while illiterate females are engaged in service sectors; these are in fact very low-grade services in the urban informal sector in India, such as maid servants, domestic help, hawkers, vendors, or vegetable sellers. At higher educational levels, a small percentage of males and females were absorbed into professional or clerical work, but this percentage increased over the period 1991-2001.

Keywords: informal, job, migration, urban

Procedia PDF Downloads 283
627 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry

Authors: S. McLean, J. A. Scott

Abstract:

The research focuses on improving sustainable operation through the recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. There are significant advantages to this approach, including economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for appropriate recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers in a manner that makes it useful for repurposing in applications such as within an acid plant. This would be valuable to mining companies as an opportunity to reduce the cost of the process, as well as to decrease environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures result in a decrease in the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining whether the proposed heat recovery technique is economically viable, as well as assessing the environmental impact of the reduction in net energy consumption by the process. Therefore, comprehensive cost and impact analyses are carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of the thermal resources available to them and determine whether a full feasibility study should be carried out. The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced costs and CO₂ emissions.
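
A back-of-the-envelope version of the rapid scoping idea is sketched below: recoverable heat, heat-pump electricity input and a simple payback. Every number is an illustrative assumption, not site data from the study.

```python
# Illustrative scoping arithmetic only; all values below are assumptions, not measurements.
flow_rate = 50.0            # cooling water flow, kg/s
cp_water = 4.186            # specific heat of water, kJ/(kg K)
delta_t = 8.0               # temperature drop taken from the water stream, K
cop = 4.0                   # assumed heat-pump coefficient of performance

recovered_kw = flow_rate * cp_water * delta_t          # low-grade heat extracted, kW
electric_kw = recovered_kw / (cop - 1)                 # compressor input needed to upgrade it, kW
delivered_kw = recovered_kw + electric_kw              # useful heat delivered at higher temperature

hours = 8000                                           # operating hours per year (assumed)
fuel_price = 0.04                                      # $/kWh of displaced fuel heat (assumed)
power_price = 0.10                                     # $/kWh of electricity (assumed)
annual_saving = delivered_kw * hours * fuel_price - electric_kw * hours * power_price
capital_cost = 1.5e6                                   # installed heat-pump cost, $ (assumed)

print(f"delivered heat: {delivered_kw:.0f} kW, "
      f"simple payback: {capital_cost / annual_saving:.1f} years")
```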

Keywords: environment, heat recovery, mining engineering, sustainability

Procedia PDF Downloads 111
626 Physico-Chemical and Microbial Changes of Organic Fertilizers after Compositing Processes under Arid Conditions

Authors: Oustani Mabrouka, Halilat Med Tahar

Abstract:

The physico-chemical properties of poultry droppings indicate that this waste can be an excellent way to enrich soils of low fertility, as is the case for arid soils (low organic matter content), but its concentrations of certain microbial and chemical components make it a potentially dangerous and toxic contaminant if used directly in the fresh state. On the other hand, the accumulation of plant residues in crop areas can become a source of plant disease and affect the quality of the environment. The biotechnological process identified here appears to alleviate these problems: composting leads to the stabilization and processing of these wastes into a product of good hygienic quality and high fertilizer value. In this context, a composting trial was conducted in the region of Ouargla, located in southern Algeria. The composting test was conducted as a completely randomized design experiment. Three mixtures were prepared, in pits of 1 m³ volume for each mixture. Each pit was composed of a mixture of poultry droppings and crushed plant residues in amounts of 40 and 60%, respectively: C1: poultry droppings + straw (P.D+S), C2: poultry droppings + olive wastes (P.D+O.W), C3: poultry droppings + date palm residues (P.D+D.P). Before and after the composting process, physico-chemical parameters (temperature, moisture, pH, electrical conductivity, total carbon and total nitrogen) were studied. The stability of the biological system was noticed after 90 days. The physico-chemical and microbiological results obtained for the three mixtures show that, at the end of the composting process, the three final composts are of high agronomic and environmental interest, with good physico-chemical characteristics, in particular a low C/N ratio of 15.15, 10.01 and 15.36 for (P.D+S), (P.D+O.W) and (P.D+D.P), respectively, reflecting the stabilization and maturity of the composts. On the other hand, a significant increase in temperature was recorded in the first days of composting for all treatments, which is correlated with a strong reduction of the pathogenic microflora contained in the poultry droppings.

Keywords: arid environment, composting, date palm residues, olive wastes, pH, pathogenic microorganisms, poultry droppings, straw

Procedia PDF Downloads 235
625 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

The article presents results of a project which aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique, world-trend-based methodology of processing and interpretation, adjusted to the data, local variations and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g. the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data with the use of integrated geophysical investigations have been carried out. Construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modelling is carried out with Schlumberger’s Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results of the research indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas

Procedia PDF Downloads 228
624 Distribution of Antioxidants between Sour Cherry Juice and Pomace

Authors: Sonja Djilas, Gordana Ćetković, Jasna Čanadanović-Brunet, Vesna Tumbas Šaponjac, Slađana Stajčić, Jelena Vulić, Milica Vinčić

Abstract:

In recent years, interest in food rich in bioactive compounds, such as polyphenols, has increased the advantages of functional food products. Bioactive components help to maintain health and prevent diseases such as cancer, cardiovascular and many other degenerative diseases. Recent research has shown that fruit pomace, a byproduct generated from the production of juice, can be a potential source of valuable bioactive compounds. The use of fruit industrial waste in the processing of functional foods represents an important new step for the food industry. Sour cherries have considerable nutritional, medicinal, dietetic and technological value. In terms of the production volume of cherries, Serbia ranks seventh in the world, with a share of 7% of total production. The use of sour cherry pomace has so far been limited to animal feed, even though it is potentially a good source of polyphenols. For this study, the local sour cherry variety cv. ‘Feketićka’ was chosen for its more intensive taste and deeper red color, indicating a high anthocyanin content. The contents of total polyphenols, flavonoids and anthocyanins, as well as the radical scavenging activity on DPPH radicals and the reducing power of sour cherry juice and pomace, were compared using spectrophotometric assays. According to the results obtained, 66.91% of total polyphenols, 46.77% of flavonoids, 46.77% of total anthocyanins and 47.88% of anthocyanin monomers from sour cherry fruits were transferred to the juice. On the other hand, 29.85% of total polyphenols, 33.09% of flavonoids, 53.23% of total anthocyanins and 52.12% of anthocyanin monomers remained in the pomace. Regarding radical scavenging activity, 65.51% of the Trolox equivalents from sour cherries were exported to the juice, while 34.49% was left in the pomace. However, the reducing power of sour cherry juice was much stronger than that of the pomace (91.28% and 8.72% of the Trolox equivalents from sour cherry fruits, respectively). Based on our results, it can be concluded that sour cherry pomace is still a rich source of natural antioxidants, especially anthocyanins with coloring capacity; therefore, it can be used for dietary supplement development and food fortification.

Keywords: antioxidants, polyphenols, pomace, sour cherry

Procedia PDF Downloads 325
623 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, and it reflects the system response to the combination of hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification of two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis uses the Fourier transform to process the time-series groundwater-level observation data and analyse the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, i.e. the average rate of groundwater regression, is used to analyse the internal flux in the groundwater system and the flux caused by artificial behaviours. The decision tree uses the information obtained from the abovementioned analytical tools to optimize the estimation of the hydrogeological structure. The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
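
Two of the analytical tools, the daily-frequency amplitude from the Fourier transform and the rainfall/groundwater cross-correlation lag, can be sketched on a synthetic series as follows; the signal parameters are placeholders, not observation data.

```python
# Minimal sketch on a synthetic hourly series; real observation data would replace it.
import numpy as np

dt_hours = 1.0
t = np.arange(0, 24 * 365, dt_hours)                 # one year of hourly observations (8760 samples)
rng = np.random.default_rng(1)
level = 10 + 0.3 * np.sin(2 * np.pi * t / 24) + 0.01 * rng.standard_normal(t.size)

# Frequency analysis: amplitude spectrum of the groundwater level in cycles per day.
amp = np.abs(np.fft.rfft(level - level.mean())) * 2 / t.size
freq = np.fft.rfftfreq(t.size, d=dt_hours / 24)
print(f"amplitude near 1 cycle/day: {amp[np.argmin(np.abs(freq - 1.0))]:.3f} m")

# Cross-correlation between rainfall and groundwater level to estimate the replenishment lag.
rain = np.zeros_like(t)
rain[1000] = 30.0                                              # a single synthetic storm (mm)
response = np.roll(np.exp(-np.arange(t.size) / 72.0), 1072)    # level reacts ~3 days after the storm
lags = np.arange(-t.size + 1, t.size)
xcorr = np.correlate(response - response.mean(), rain - rain.mean(), mode="full")
print(f"lag of peak correlation: {lags[np.argmax(xcorr)] * dt_hours / 24:.1f} days")
```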

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 157
622 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method

Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter

Abstract:

This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study comprised three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with different spatial configurations and orientations (horizontal, vertical, and oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the more familiar pair in each of 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. The main difference was that older participants fixated more often on empty cells than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
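
The behavioural analysis amounts to one-sample t-tests of two-alternative forced-choice accuracy against the 0.5 chance level, as in the sketch below; the scores are synthetic placeholders, not the participants' data.

```python
# Minimal sketch: one-sample t-tests of 2AFC accuracy against chance (0.5) per age group.
# The scores are synthetic placeholders, not data from the study.
import numpy as np
from scipy import stats

groups = {
    "young adults":     np.array([0.65, 0.58, 0.71, 0.60, 0.55, 0.69]),
    "young-old adults": np.array([0.60, 0.57, 0.52, 0.63, 0.59, 0.55]),
    "old-old adults":   np.array([0.48, 0.53, 0.50, 0.47, 0.52, 0.49]),
}

for name, scores in groups.items():
    t_stat, p_value = stats.ttest_1samp(scores, popmean=0.5)
    print(f"{name}: mean = {scores.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant result (p < .05) with a mean above 0.5 indicates above-chance learning.
```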

Keywords: aging, eye tracking, implicit learning, visual statistical learning

Procedia PDF Downloads 77
621 Effect of Roasting Temperature on the Proximate, Mineral and Antinutrient Content of Pigeon Pea (Cajanus cajan) Ready-to-Eat Snack

Authors: Olaide Ruth Aderibigbe, Oluwatoyin Oluwole

Abstract:

Pigeon pea is one of the minor leguminous plants; though underutilised, it is used traditionally by farmers to alleviate hunger and malnutrition. Pigeon pea is cultivated in Nigeria by subsistence farmers. It is rich in protein and minerals; however, its utilisation as food is common only among the poor and rural populace who cannot afford expensive sources of protein. One of the factors contributing to its limited use is its high antinutrient content, which makes it indigestible, especially when eaten by children. The development of value-added products that can reduce the antinutrient content and make the nutrients more bioavailable will increase the utilisation of the crop and contribute to the reduction of malnutrition. This research, therefore, determined the effects of different roasting temperatures (130 °C, 140 °C, and 150 °C) on the proximate, mineral and antinutrient content of a pigeon pea snack. The brown variety of pigeon pea seeds was purchased from a local market (Otto) in Lagos, Nigeria. The seeds were cleaned, washed, and soaked in 50 ml of water containing sugar and salt (4:1) for 15 minutes, and thereafter roasted at 130 °C, 140 °C, and 150 °C in an electric oven for 10 minutes. Proximate, mineral, phytate, tannin and alkaloid content analyses were carried out in triplicate following standard procedures. The results of the three replicates were pooled and expressed as mean±standard deviation; a one-way analysis of variance (ANOVA) and the Least Significant Difference (LSD) test were carried out. The roasting temperatures significantly (P<0.05) affected the protein, ash, fibre and carbohydrate content of the snack. The ready-to-eat snack prepared by roasting at 150 °C had a significantly higher protein content (23.42±0.47%) than those roasted at 130 °C and 140 °C (18.38±1.25% and 20.63±0.45%, respectively). The same trend was observed for the ash content (3.91±0.11 for 150 °C, 2.36±0.15 for 140 °C and 2.26±0.25 for 130 °C), while the fibre and carbohydrate contents were highest at a roasting temperature of 130 °C. Iron, zinc, and calcium were not significantly (P>0.05) affected by the different roasting temperatures. Antinutrients decreased with increasing temperature. Phytate levels recorded were 0.02±0.00, 0.06±0.00, and 0.07±0.00 mg/g; tannin levels were 0.50±0.00, 0.57±0.00, and 0.68±0.00 mg/g, while alkaloid levels were 0.51±0.01, 0.78±0.01, and 0.82±0.01 mg/g for 150 °C, 140 °C, and 130 °C, respectively. These results show that roasting at high temperature (150 °C) can be utilised as a processing technique for increasing the protein content and decreasing the antinutrient content of pigeon pea.
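
The statistical treatment (one-way ANOVA followed by LSD comparisons) can be sketched as follows; the triplicate values are hypothetical placeholders chosen to resemble the reported protein means, not the raw data.

```python
# Illustrative sketch of one-way ANOVA plus an LSD-style pairwise threshold (equal group sizes).
# The triplicate protein values below are hypothetical, not the study's raw data.
import numpy as np
from scipy import stats

protein = {            # protein content (%) for three replicates per roasting temperature
    "130C": np.array([18.0, 18.3, 18.8]),
    "140C": np.array([20.2, 20.7, 21.0]),
    "150C": np.array([23.0, 23.4, 23.9]),
}

groups = list(protein.values())
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Least Significant Difference for comparing two treatment means.
n = 3
df_within = sum(len(g) - 1 for g in groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within
lsd = stats.t.ppf(0.975, df_within) * np.sqrt(2 * mse / n)
print(f"LSD (alpha = 0.05) = {lsd:.2f}")
print("150C vs 130C differ:", abs(protein["150C"].mean() - protein["130C"].mean()) > lsd)
```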

Keywords: antinutrients, pigeon pea, protein, roasting, underutilised species

Procedia PDF Downloads 143
620 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management. Furthermore, it draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in the form of structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data are often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data for disaster information. In this paper, an Optical Character Recognition approach is used to convert handouts, hard copies, images, and reports, printed or generated by scanners, into electronic documents. Following that, the converted disaster data are organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information based on Optical Character Recognition for unstructured data is an important element in the realm of smart disaster management. In this paper, Korean character recognition was improved to a rate of over 90% by using an upgraded OCR. In character recognition, the recognition rate depends on the fonts, size, and special symbols of the characters; we improved it through a machine learning algorithm. The converted structured data are managed in a standardized disaster information form connected with the disaster code system. The disaster code system ensures that the structured information is stored and retrieved over the entire disaster cycle, covering historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision making by combining artificial intelligence technologies and historical big data.
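
A minimal version of the OCR step can be put together with the open-source Tesseract engine via pytesseract, as sketched below; this is a stand-in for the upgraded OCR described in the paper, and it assumes Tesseract with Korean language data is installed. File names and the code-system entry are hypothetical.

```python
# Stand-in sketch only: Tesseract via pytesseract instead of the paper's upgraded OCR.
# Assumes Tesseract plus its Korean ('kor') language data are installed; inputs are hypothetical.
from PIL import Image
import pytesseract

scanned_report = Image.open("disaster_report_scan.png")       # scanned hard-copy report
text = pytesseract.image_to_string(scanned_report, lang="kor+eng")

# The recognised text can then be mapped onto a standardized disaster-code record.
record = {
    "disaster_code": "TYPHOON-2019-017",   # hypothetical code-system entry
    "raw_text": text,
}
print(record["raw_text"][:200])
```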

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 129
619 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which are especially important in explaining complex biological mechanisms.
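
The k-mer featurisation step described above can be sketched as follows. The toy genomes, phenotype labels, and the choice of logistic regression are illustrative assumptions rather than the authors' exact pipeline; only the idea of counting overlapping k-mers and feeding the counts to a classifier is taken from the abstract.

```python
# Sketch: represent each genome as a bag of overlapping k-mers, vectorise the
# counts, and fit a classifier to predict a binary phenotype.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def kmer_counts(sequence: str, k: int = 10) -> Counter:
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

# Hypothetical toy genomes and phenotype labels (e.g. resistant vs susceptible)
genomes = ["ATGCGTACGTTAGC" * 5, "ATGCGTTTTGTAGC" * 5, "GGGCGTACGTTACC" * 5]
labels = [1, 0, 1]

X = DictVectorizer(sparse=True).fit_transform(kmer_counts(g) for g in genomes)
model = LogisticRegression(max_iter=1000).fit(X, labels)
```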

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 167
618 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which are especially important in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 159
617 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in areas such as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as for preparation routes that meet the requirements of scalability. One way to solve these problems is the creation of nanomaterials, which often possess physicochemical properties that differ radically from those of their counterparts in the micro- or macroscopic state. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, among other methods, the hydrothermal technique seems suitable, as it allows a wide range of control over the synthesis conditions. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture: at the first level of the hierarchy, the materials consist of particles with a rough surface, and at the second level, of one-dimensional nanotubes. The products were found to have a high specific surface area and porosity with a narrow pore size distribution (about 6 nm). As is known, specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has a sufficiently high electrical conductivity. The synthesized hierarchically organized, porous sodium trititanate nanoarchitecture is therefore expected to be in practical demand, for example, in the field of next-generation electrochemical energy storage and conversion devices.

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 107
616 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is a key factor affecting the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure from different parameters. Some of these models use only drilling parameters to estimate pore pressure; others predict the formation pressure from log data. All of these models require a pressure trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and those who have used at most one or two AI methods. The objective of this research is to predict the pore pressure from both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous empirical models. The advantage of the new technique is its simplicity: it estimates the pore pressure without the need for a pressure trend, in contrast to other models, which require distinguishing between normal and abnormal pressure trends. Moreover, comparing the AI tools with each other shows that SVM has the advantage in pore pressure prediction owing to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
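
One of the AI methods named above, support vector machine regression, can be sketched as follows for mapping the listed drilling and log parameters to pore pressure. The feature names follow the abstract, but the data values and hyperparameters are hypothetical placeholders, not the authors' field data or tuned model.

```python
# Sketch: SVM regression from drilling/log parameters to pore pressure,
# trained on synthetic stand-in data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

FEATURES = ["weight_on_bit", "rotary_speed", "rate_of_penetration",
            "mud_weight", "bulk_density", "porosity", "delta_sonic_time"]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))             # stand-in for normalised field data
y = 9.0 + 0.3 * (X @ rng.normal(size=len(FEATURES)))  # stand-in pore pressure values

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)
print("Predicted pore pressure for one sample:", model.predict(X[:1])[0])
```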

Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)

Procedia PDF Downloads 149
615 A Facile One Step Modification of Poly(dimethylsiloxane) via Smart Polymers for Biomicrofluidics

Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta

Abstract:

Poly(dimethylsiloxane) (PDMS) is one of the most widely used materials in the fabrication of microfluidic devices. It is easily patterned and can replicate features down to nanometers. Its flexibility, gas permeability that allows oxygenation, and low cost also drive its wide adoption. However, a major drawback of PDMS is its hydrophobicity and fast hydrophobic recovery after surface hydrophilization. This results in significant non-specific adsorption of proteins as well as of small hydrophobic molecules such as therapeutic drugs, limiting the utility of PDMS in biomedical microfluidic circuitry. While silicon, glass, and thermoplastics have been used, they come with problems of their own, such as rigidity, high cost, and special tooling needs, which limit their use to a smaller user base. Many strategies to alleviate these common problems with PDMS lack general practical applicability or achieve modifications with limited shelf lives, which restricts large-scale implementation and adoption by the industrial and research communities. Accordingly, we aim to tailor biocompatible PDMS surfaces by developing a simple, one-step bulk modification approach with novel smart materials to reduce non-specific molecular adsorption and to stabilize long-term cell analysis with PDMS substrates. Smart polymers that are blended with PDMS during device manufacture spontaneously segregate to surfaces when in contact with aqueous solutions and create a < 1 nm layer that reduces non-specific adsorption of organic molecules and biomolecules. Our methods are fully compatible with existing PDMS device manufacture protocols without any additional processing steps. We have demonstrated that our modified PDMS microfluidic system is effective at blocking the adsorption of proteins while retaining the viability of primary rat hepatocytes and preserving the biocompatibility, oxygen permeability, and transparency of the material. We expect this work will enable the development of fouling-resistant biomedical materials, from microfluidics to hospital surfaces and tubing.

Keywords: cell culture, microfluidics, non-specific protein adsorption, PDMS, smart polymers

Procedia PDF Downloads 294
614 Corpus Stylistics and Multidimensional Analysis for English for Specific Purposes Teaching and Assessment

Authors: Svetlana Strinyuk, Viacheslav Lanin

Abstract:

Academic English has become the lingua franca of the international scientific community, which stimulates universities to introduce English for Academic Purposes (EAP) courses into the curriculum. Teaching EAP to L2 students can be supported by corpus technologies and digital stylistics. A software tool was developed to address the manifold task of teaching, assessing and researching the academic writing of L2 students on the basis of digital stylistics and multidimensional analysis. A set of annotations (style markers) – the grammatical, lexical and syntactic features most characteristic of academic writing – was built. Contrastive comparison of two corpora – a “model corpus” of subject-domain-limited papers published by competent writers in leading academic journals, and a “students’ corpus” of subject-domain-limited papers written by final-year students – yields data on the features of academic writing underused or overused by L2 EAP students. Both corpora are tagged with software created in GATE Developer. Style markers within the framework of the research may be replaced depending on the relevance and validity of the results obtained from the research corpora. Thus, by selecting relevant (high-frequency) style markers and excluding less relevant, i.e. less frequent, annotations, high validity of the model is achieved. The software compares the data obtained from processing the model corpus with the students’ corpus and generates reports that can be used in teaching and assessment. The less deviation from the model corpus students demonstrate in their writing, the higher their academic writing skill acquisition. The research showed that several style markers (hedging devices) were underused by L2 EAP students, whereas lexical linking devices were used excessively. The software, implemented into the teaching of EAP courses, serves as a successful visual aid and makes assessment more valid; it is indicative of the degree of writing skill acquisition and provides data for further research.
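
The contrastive frequency comparison described above can be sketched as a simple count of one class of style markers (hedging devices) in the two corpora, reported per 1,000 tokens. The marker list and the toy corpus texts are illustrative assumptions; the paper's actual tagging is performed with annotations in GATE Developer rather than a plain token count.

```python
# Sketch: relative frequency of hedging devices in a model corpus versus a
# students' corpus, expressed per 1,000 tokens.
import re

HEDGES = ["may", "might", "suggest", "appear", "possibly", "likely"]  # illustrative marker list

def hedge_rate(text: str) -> float:
    """Occurrences of hedging devices per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(tokens.count(h) for h in HEDGES)
    return 1000 * hits / max(len(tokens), 1)

# Toy stand-ins for the two corpora (real use would load the tagged corpus files)
model_corpus = "The results suggest that the effect may be smaller than previously assumed."
students_corpus = "The results prove that the effect is smaller than previously assumed."

print(f"model corpus:    {hedge_rate(model_corpus):.1f} hedges per 1,000 tokens")
print(f"students corpus: {hedge_rate(students_corpus):.1f} hedges per 1,000 tokens")
```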

Keywords: corpus technologies in EAP teaching, multidimensional analysis, GATE Developer, corpus stylistics

Procedia PDF Downloads 200
613 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors

Authors: Tom Nakotte, Hongmei Luo

Abstract:

Lead chalcogenide (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb-based materials have long been of interest for gamma- and x-ray detection due to their high absorption cross section and atomic number. The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the carrier transport properties of QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple coating techniques, such as spin coating and doctor blading. Current QD radiation detectors are based on QDs acting as fluorophores in a scintillation detector. Here, the viability of using QDs in solid-state radiation detectors, in which the incident radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobilities and diffusion lengths in the QD films. Pb chalcogenide-based QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing for in-solution ligand exchange and making the deposition of thick films in a single step possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand are dispersed in 2,6-difluoropyridine, resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce several-micrometer-thick films using doctor blading. Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics and nuclear security, in order to provide sufficient stopping power.

Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots

Procedia PDF Downloads 127