Search results for: image and video processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6438

678 A Discourse Analysis of Syrian Refugee Representations in Canadian News Media

Authors: Pamela Aimee Rigor

Abstract:

This study aims to examine the representation of Syrian refugees resettled in Vancouver and the Lower Mainland in local community and major newspapers. While there is strong support for immigration in Canada, public opinion towards refugees and asylum seekers is a bit more varied. Concerns about the legitimacy of refugee claims are among the common concerns of Canadians, and hateful or negative narratives are still present in Canadian media discourse, which affects how people view refugees. To counter these narratives, Syrian refugees are expected to publicly declare how grateful they are to have been resettled in Canada. The dominant media discourse is that these refugees should be grateful as they have been graciously accepted by Canada and Canadians, once again upholding the image of Canada as a generous and humanitarian nation. The study examined the representation of Syrian refugees and the Syrian refugee resettlement in Canadian newspapers from September 2015 to October 2017 – from around the time Prime Minister Trudeau came into power up to the time of the study. Using a combination of content and discourse analysis, it aimed to uncover how local community and major newspapers in Vancouver covered the Syrian refugee ‘crisis’ – more particularly, the arrival and resettlement of the refugees in the country. Using the qualitative data analysis software NVivo 12, the newspapers were analyzed and sorted into themes. Based on the initial findings, the discourse of Canada being a humanitarian country and Canadians being generous, as well as the idea of Syrian refugees having to publicly announce how grateful they are, is still present in the local community newspapers. This seems to be done to counter the hateful narratives of citizens who might view them as people who are abusing help provided by the community or the services provided by the government. However, compared to the major and national newspapers in Canada, many of these local community newspapers are very inclusive of Syrian refugee voices. Most of the News and Community articles interview Syrian refugees and ask them their personal stories of plight, survival, resettlement and starting a ‘new life’ in Canada. They are not seen as potential threats nor are they dismissed – the refugees were named and were allowed to share their personal experiences in these news articles. These community newspapers, even though their representations are far from perfect, actually address some aspects of the refugee resettlement issue and respond to their community’s needs. There are quite a number of news articles that announce community meetings and orientations about the Syrian refugee crisis, ways to help in the resettlement process, as well as community fundraising activities to help sponsor refugees or resettle newly arrived refugees. This study aims to promote awareness of how these individuals are socially constructed so that we can, in turn, be aware of the biases and stereotypes present and their implications for refugee laws and public response to the issue.

Keywords: forced migration and conflict, media representations, race and multiculturalism, refugee studies

Procedia PDF Downloads 225
677 The Gender Digital Divide in Education: The Case of Students from Rural Areas of the Republic of Moldova

Authors: Bărbuță Alina

Abstract:

The inter-causal relationship between social inequalities and the digital divide raises the issue of the relation between gender and information and communication technologies (ICT) - a key element in achieving sustainable development. In preparing generations as future digital citizens and for active socio-economic participation, ICT plays a key role in respecting gender equality. Although several studies over the years have shown that gender plays an important role in digital exclusion, in recent years many studies focusing on economically developed or developing countries identify an improvement in these aspects and a narrowing of the gap. By measuring students' digital competence levels, this paper aims to identify and analyse the existing gender digital inequalities among students. Our analyses are based on a sample of 1526 middle school students residing in rural areas of the Republic of Moldova (54.2% girls, mean age 14.00, SD = 1.02). During the online survey, they filled in a questionnaire adapted from the yDSI (The Youth Digital Skills Indicator). The instrument measures the level of five digital competence areas indicated in The European Digital Competence Framework (DigiCom 2.3). Our results, based on t-tests, indicate that there are no statistically significant gender differences in the levels of digital skills in three areas: information navigation and processing; communication and interaction; and problem solving. However, significant differences were identified in the level of digital skills in the areas of ‘Digital content creation’ [t(1425) = 4.20, p = .000] and ‘Safety’ [t(1421) = 2.49, p = .000], with higher scores recorded by girls. Our results contradict the general stereotype regarding the low level of digital competence among girls, with girls' scores in our sample being on par with boys' and even higher for knowledge related to digital content creation and online safety skills. Additional investigation of boys' competence in digital safety is necessary, as their low scores on this dimension may suggest boys' exposure to digital threats.
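
A minimal sketch of the group comparison described above, assuming a survey export with hypothetical column names ("gender", "content_creation"); this is illustrative only and not the authors' analysis code:

```python
# Minimal sketch (not the authors' code): independent-samples t-test comparing
# girls' and boys' scores on one yDSI competence area, as described in the abstract.
# The file name and column names are assumptions for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("ydsi_survey.csv")            # hypothetical survey export
girls = df.loc[df["gender"] == "girl", "content_creation"].dropna()
boys = df.loc[df["gender"] == "boy", "content_creation"].dropna()

t, p = stats.ttest_ind(girls, boys)            # two-sided, equal-variance t-test
print(f"t({len(girls) + len(boys) - 2}) = {t:.2f}, p = {p:.3f}")
```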

Keywords: digital divide, education, gender digital divide, digital literacy, remote learning

Procedia PDF Downloads 84
676 Algae for Wastewater Treatment and CO₂ Sequestration along with Recovery of Bio-Oil and Value Added Products

Authors: P. Kiran Kumar, S. Vijaya Krishna, Kavita Verma, V. Himabindu

Abstract:

Concern about global warming and energy security has led to increased biomass utilization as an alternative feedstock to fossil fuels. Biomass is a promising feedstock since it is abundant and cheap and can be transformed into fuels and chemical products. Microalgae biofuels are likely to have a much lower impact on the environment. Microalgae cultivation using sewage with industrial flue gases is a promising concept for integrated biodiesel production, CO₂ sequestration, and nutrient recovery. Autotrophic, mixotrophic, and heterotrophic are the three modes of cultivation for microalgae biomass. Several mechanical and chemical processes are available for the extraction of lipids/oily components from microalgae biomass. In organic solvent extraction methods, prior drying of the biomass and recovery of the solvent are required, which are energy-intensive. The hydrothermal process thus overcomes the drawbacks of conventional solvent extraction methods. In the hydrothermal process, the biomass is converted into oily components by processing in a hot, pressurized water environment. In this process, in addition to the lipid fraction of microalgae, other value-added products such as proteins, carbohydrates, and nutrients can also be recovered. In the present study, Scenedesmus quadricauda was isolated and cultivated autotrophically, heterotrophically, and mixotrophically using sewage wastewater and industrial flue gas in batch and continuous mode. The harvested algae biomass from S. quadricauda was used for the recovery of lipids and bio-oil. The lipids were extracted from the algal biomass using sonication as a cell disruption method followed by solvent (hexane) extraction, and the lipid yield obtained was 8.3 wt% with palmitic acid, oleic acid, and octadecanoic acid as fatty acids. The hydrothermal process was also carried out for extraction of bio-oil, and the yield obtained was 18 wt%. Bio-oil compounds such as nitrogenous compounds, organic acids and esters, phenolics, hydrocarbons, and alkanes were obtained by the hydrothermal processing of algal biomass. Nutrients such as NO₃⁻ (68%) and PO₄⁻ (15%) were also recovered along with bio-oil in the hydrothermal process.

Keywords: flue gas, hydrothermal process, microalgae, sewage wastewater, sonication

Procedia PDF Downloads 125
675 Thickness-Tunable Optical, Magnetic, and Dielectric Response of Lithium Ferrite Thin Film Synthesized by Pulsed Laser Deposition

Authors: Prajna Paramita Mohapatra, Pamu Dobbidi

Abstract:

Lithium ferrite (LiFe₅O₈) has potential applications as a component of microwave magnetic devices such as circulators and monolithic integrated circuits. For efficient device applications, spinel ferrites in the form of thin films are highly desirable. It is necessary to improve their magnetic and dielectric behavior by optimizing the processing parameters during deposition. The lithium ferrite thin films were deposited on Pt/Si substrates using the pulsed laser deposition (PLD) technique. As controlling the film thickness is the easiest way to tailor the strain, we deposited thin films of different thicknesses (160 nm, 200 nm, 240 nm) at an oxygen partial pressure of 0.001 mbar. The formation of a single phase with spinel structure (space group P4₁32) is confirmed by the XRD pattern and Rietveld analysis. The optical bandgap decreases with increasing thickness. FESEM confirmed the formation of uniform grains with well separated grain boundaries. Further, the film growth and roughness were analyzed by AFM. The root-mean-square (RMS) surface roughness decreased from 13.52 nm (160 nm) to 9.34 nm (240 nm). The room temperature magnetization was measured with a maximum field of 10 kOe. The saturation magnetization is enhanced monotonically with an increase in thickness. The magnetic resonance linewidth is obtained in the range of 450 – 780 Oe. The dielectric response was measured in the frequency range of 10⁴ – 10⁶ Hz and in the temperature range of 303 – 473 K. With an increase in frequency, the dielectric constant and the loss tangent of all the samples decreased continuously, which is typical behavior of a conventional dielectric material. The real part of the dielectric constant and the dielectric loss increase with an increase in thickness. The contributions of grains and grain boundaries are also analyzed by employing the equivalent circuit model. The highest dielectric constant is obtained for the film having a thickness of 240 nm at 10⁴ Hz. The obtained results demonstrate that the desired response can be obtained by tailoring the film thickness for microwave magnetic devices.

Keywords: PLD, optical response, thin films, magnetic response, dielectric response

Procedia PDF Downloads 83
674 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Database process model, which starts from the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After acquisition, the data were pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset in classifying new instances as normal, DOS, U2R, R2L and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested towards developing an applicable system in the area of the study.
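
A minimal sketch of the classification step described above, using scikit-learn's decision tree in place of WEKA's J48 (both are C4.5-style learners); file names and the "label" column are assumptions, not the authors' actual pipeline:

```python
# Minimal sketch (not the authors' WEKA workflow): a J48-like decision tree evaluated
# with 10-fold cross-validation, then scored on a held-out set, mirroring the
# 21,533 / 3,397 record split described in the abstract.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

train = pd.read_csv("kdd_train.csv")   # hypothetical pre-processed training set
test = pd.read_csv("kdd_test.csv")     # hypothetical held-out test set

X_tr, y_tr = train.drop(columns="label"), train["label"]   # classes: normal, DOS, U2R, R2L, probe
X_te, y_te = test.drop(columns="label"), test["label"]

clf = DecisionTreeClassifier()                              # default parameters, analogous to default J48
cv_acc = cross_val_score(clf, X_tr, y_tr, cv=10).mean()     # 10-fold cross-validation accuracy
test_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)            # accuracy on unseen records
print(f"10-fold CV accuracy: {cv_acc:.4f}, test accuracy: {test_acc:.4f}")
```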

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 280
673 The Interplay between Consumer Knowledge, Cognitive Effort, Financial Healthiness and Trust in the Financial Marketplace

Authors: Torben Hansen

Abstract:

While trust has long been regarded as one of the most critical variables for developing and maintaining well-functioning financial customer-seller relationships, it can be suggested that trust not only relates to customer trust in individual companies (narrow-scope trust). Trust also relates to the broader business context in which consumers may carry out their financial behaviour (broad-scope trust). However, despite the well-recognized significance of trust in marketing research, only a few studies have investigated the role of broad-scope trust in consumer financial behaviour. Moreover, as one of its many serious outcomes, the global financial crisis has elevated the need for an improved understanding of the role of broad-scope trust in consumer financial services markets. Only a minority of US and European consumers are currently confident in financial companies, and ‘financial stability’ and ‘trust’ are now among the top reasons for choosing a bank. This research seeks to address this shortcoming in the marketing literature by investigating direct and moderating effects of broad-scope trust on consumer financial behaviour. Specifically, we take an ability-effort approach to consumer financial behaviour. The ability-effort approach holds the basic premise that the quality of consumer actions is influenced by ability factors, for example consumer knowledge and cognitive effort. Our study is based on two surveys. Survey 1 comprises 1,155 bank consumers, whereas survey 2 comprises 764 pension consumers. The results indicate that broad-scope trust negatively moderates the relationships between knowledge and financial healthiness and between cognitive effort and financial healthiness. In addition, it is demonstrated that broad-scope trust negatively influences cognitive effort. Specifically, the results suggest that broad-scope trust contributes to the financial well-being of consumers with limited financial knowledge and processing capabilities. Since financial companies are dependent on customers paying their loans and bills, they have a greater interest in developing relations with consumers with healthy financial behaviour than with those without it. Hence, financial managers should be engaged in monitoring and influencing broad-scope trust. To conclude, by taking into account the contextual effect of broad-scope trust, the present study adds to our understanding of the knowledge-effort-behaviour relationship in consumer financial markets.
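
A minimal sketch of the kind of moderation test described above, with interaction terms between broad-scope trust and the ability variables; the file and variable names are hypothetical and this is not the authors' model specification:

```python
# Minimal sketch (not the authors' model): testing whether broad-scope trust (bst)
# moderates the knowledge -> financial healthiness and effort -> financial healthiness
# relationships via interaction terms. Column names are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bank_survey.csv")    # hypothetical survey data
# Negative, significant coefficients on knowledge:bst and effort:bst would indicate
# the negative moderation effects reported in the abstract.
model = smf.ols("healthiness ~ knowledge * bst + effort * bst", data=df).fit()
print(model.summary())
```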

Keywords: cognitive effort, customer-seller relationships, financial healthiness, knowledge, trust

Procedia PDF Downloads 421
672 Sexual Health and Male Fertility: Improving Sperm Health with Focus on Technology

Authors: Diana Peninger

Abstract:

Over 10% of couples in the U.S. have infertility problems, with roughly 40% traceable to the male partner. Yet, little attention has been given to improving men’s contribution to the conception process. One solution showing promise in increasing conception rates for IVF and other assisted reproductive technology treatments is a first-of-its-kind semen collection device engineered to mitigate the sperm damage caused by traditional collection methods. Patients are able to collect semen at home and deliver it to clinics within 48 hours for use in fertility analysis and treatment, with less stress and improved specimen viability. This abstract will share these findings along with expert insight and tips to help attendees understand the key role sperm collection plays in addressing and treating reproductive issues, while helping to improve patient outcomes and success. Our research aimed to determine whether male reproductive outcomes can be improved by improving sperm specimen health with a focus on technology. We utilized a redesigned semen collection cup (patented as the Device for Improved Semen Collection/DISC—U.S. Patent 6864046 – known commercially as ProteX) that met a series of physiological parameters. Previous research demonstrated significant improvement in semen parameters (forward motility, progression, viability, and longevity) and overall sperm biochemistry when the DISC is used for collection. Animal studies have also shown dramatic increases in pregnancy rates. Our current study compares samples collected in the DISC, the next-generation DISC (DISCng), and a standard specimen cup (SSC), dry, with a measured 1 mL amount of media, and with media in excess (5 mL). Both human and animal testing will be included. With sperm counts declining at alarming rates due to environmental, lifestyle, and other health factors, accurate evaluations of sperm health are critical to understanding reproductive health and the origins and treatments of infertility. An increase in sperm health, as measured by extensive semen parameter analysis, and semen parameters that remained stable for 48 hours, expanding the processing time from 1 hour to 48 hours, were also demonstrated.

Keywords: reproductive, sperm, male, infertility

Procedia PDF Downloads 115
671 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography

Authors: Nicole M. Martino

Abstract:

Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck, however this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck – the location where damage originates from.  In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages and their condition varied from brand new, to in need of replacement. The goals of this work were to first verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see if combining the results of both methods would provide a higher confidence than if the condition assessment was completed using only one method. The results from each method were presented as plan view color contour plots. The results from one of the decks assessed as a part of this research, including these plan view plots, are presented in this paper. Furthermore, in order to answer the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.
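
A minimal sketch of the plan-view colour contour presentation described above, assuming the GPR and IR data have already been gridded over the deck; the grid dimensions and placeholder arrays are illustrative only, not the guide's prescribed settings:

```python
# Minimal sketch (not the authors' processing chain): side-by-side plan-view contour
# plots of gridded GPR and infrared-thermography condition values over a deck, so
# healthy and damaged areas identified by both methods can be compared visually.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 30, 61)         # deck length (m), assumed grid
y = np.linspace(0, 10, 21)         # deck width (m), assumed grid
X, Y = np.meshgrid(x, y)
gpr = np.random.rand(*X.shape)     # placeholder for gridded GPR rebar-reflection amplitudes
ir = np.random.rand(*X.shape)      # placeholder for gridded IR surface temperatures

fig, axes = plt.subplots(1, 2, figsize=(10, 3), sharey=True)
for ax, data, title in zip(axes, (gpr, ir), ("GPR", "IR thermography")):
    cf = ax.contourf(X, Y, data, levels=15)
    fig.colorbar(cf, ax=ax)
    ax.set(title=title, xlabel="Deck length (m)", ylabel="Deck width (m)")
plt.tight_layout()
plt.show()
```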

Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks

Procedia PDF Downloads 138
670 Mitigating Nitrous Oxide Production from Nitritation/Denitritation: Treatment of Centrate from Pig Manure Co-Digestion as a Model

Authors: Lai Peng, Cristina Pintucci, Dries Seuntjens, José Carvajal-Arroyo, Siegfried Vlaeminck

Abstract:

Economic incentives drive the implementation of short-cut nitrogen removal processes such as nitritation/denitritation (Nit/DNit) to manage nitrogen in waste streams devoid of biodegradable organic carbon. However, as in any biological nitrogen removal process, the potent greenhouse gas nitrous oxide (N₂O) can be emitted from Nit/DNit. Challenges remain in understanding the fundamental mechanisms and developing engineered mitigation strategies for N₂O production. To provide answers, this work focuses on manure as a model, the biggest wasted nitrogen mass flow through our economies. A sequencing batch reactor (SBR; 4.5 L) was used to treat the centrate (centrifuge supernatant; 2.0 ± 0.11 g N/L of ammonium) from an anaerobic digester processing mainly pig manure, supplemented with a co-substrate. Glycerin, a by-product of vegetable oil production, was used as the external carbon source. Out-selection of nitrite oxidizing bacteria (NOB) was targeted using a combination of low dissolved oxygen (DO) levels (down to 0.5 mg O₂/L), high temperature (35 °C) and relatively high free ammonia (FA) (initially 10 mg NH₃-N/L). After reaching steady state, the process was able to remove 100% of the ammonium with minimal nitrite and nitrate in the effluent, at a reasonably high nitrogen loading rate (0.4 g N/L/d). Substantial N₂O emissions (over 15% of the nitrogen loading) were observed at the baseline operational condition, and these increased under nitrite accumulation and a low organic carbon to nitrogen ratio. Yet, higher DO (~2.2 mg O₂/L) lowered aerobic N₂O emissions and weakened the dependency of N₂O on nitrite concentration, suggesting a shift of the N₂O production pathway at elevated DO levels. The greenhouse gas emissions from such a system could be substantially minimized by increasing the external carbon dosage (a cost factor), but also through the implementation of an intermittent aeration and feeding strategy. Promising steps forward have been presented in this abstract, and at the conference the insights of ongoing experiments will also be shared.

Keywords: mitigation, nitrous oxide, nitritation/denitritation, pig manure

Procedia PDF Downloads 237
669 The Effect of Music on Consumer Behavior

Authors: Lara Ann Türeli, Özlem Bozkurt

Abstract:

There is a biochemical component to listening to music. The type of music listened to can lead to different levels of neurotransmitter and biochemical activity within the brain, resulting in brain stimulation and different moods. Therefore, music plays an important role in neuromarketing and consumer behavior. The quality of a commercial can be measured by the effect its music has on its audience. Thus, understanding how music can affect the brain can provide better marketing strategies for all businesses. The type of music used plays an important role in how a person responds to certain experiences. In the context of marketing and consumer behavior, music can determine whether a person will be intrigued to buy something. Depending on the type of music listened to by an individual, the music may trigger the release of pleasurable neurotransmitters such as dopamine. Dopamine is a neurotransmitter that plays an important role in reward pathways in the brain. When an individual experiences a pleasurable activity, increased levels of dopamine are produced, eventually leading to the formation of new reward pathways. Consequently, the increased dopamine activity within the brain triggered by music can result in new reward pathways along the dopamine pathways in the brain. Selecting pleasurable music for commercials can result in long-term brain stimulation, increasing consumerism. The effect of music on consumerism should be considered not only in commercials but also in the atmosphere it creates within stores. The type of music played in a store can affect consumer behavior and intention. Specifically, the rhythm, pitch, and pace of music contribute to the mood of a song. The background music in a store can determine the consumer’s emotional presence and consequently affect their intentions. In conclusion, understanding the physiological, psychological, and neurochemical basis of the effect of music on brain stimulation is essential to understanding consumer behavior. The role of dopamine in the formation of reward pathways as a result of music directly contributes to consumer behavior and the tendency of a commercial or store to leave a long-term effect on the consumer. Careful consideration of the pitch, pace, and rhythm of a song in the selection of music can help companies not only predict the behavior of a consumer but also determine it.

Keywords: sensory processing, neuropsychology, dopamine, neuromarketing

Procedia PDF Downloads 65
668 Hematological and Biochemical Indices of Starter Broiler Chickens Fed African Black Plum Seed Nut (Vitex doniana) Meal

Authors: Obadire F. O., Obadire S. O., Adeoti R. F., Pirgozliev V.

Abstract:

An experiment was conducted to determine the efficacy of utilizing African black plum seed nut (ABPNBD) meal, formulated to substitute wheat offal, on the hematological and biochemical indices of broiler chickens. A total of 150 one-day-old male Agrited birds were reared for the 28 days of the experiment. The birds were assigned to five dietary treatments, with ten birds per treatment, replicated 3 times. Experimental diets were formulated by supplementing the milled African black plum nut at 0, 5, 10, 12.5, and 15% inclusion levels in the starter broiler ration, designated as T1 (control diet containing no ABPNBD) and treatments T2, T3, T4 and T5 containing ABPNBD at 5, 10, 12.5, and 15%, respectively, in a completely randomized design. The hematological and biochemical indices of the birds were determined. The results revealed that all hematological parameters measured were significant (P < 0.05) except for WBC. Increasing inclusion levels of ABPNBD decreased the PCV, HB, and RBC of the birds across the treatment groups. Birds fed 12.5 and 15% ABPNBD diets recorded the lowest values for these parameters. The serum biochemical indices showed significant (P < 0.05) influence for all parameters measured except for alanine transaminase (ALT), aspartate transaminase (AST), and creatinine. The total protein (TP), albumin, globulin, and glucose values were reduced across the treatment groups as ABPNBD inclusion increased. Birds fed above 10% ABPNBD recorded the lowest values of TP, albumin, globulin, and glucose when compared with birds on the control diet and other treatments. Uric acid ranged from 3.85 to 2.13 mmol/L, while creatinine ranged from 62.00 to 53.50 mmol/L. AST ranged from 8.50 U/L (5%) to 7.90 U/L (10%). ALT ranged from 7.50 U/L (12.5%) to 5.50 U/L (5 and 10%). In conclusion, dietary inclusion of African black plum up to 10% has no detrimental effect on the health of starter chickens. Meanwhile, inclusion above 10% revealed a negative effect on some of the blood parameters measured. Therefore, African black plum should be supplemented with suitable probiotics or subjected to different processing methods if it is to be used at a 15% inclusion level for optimal results.

Keywords: African black plum seed, starter broiler chickens, hematological and serum biochemical indices, (Vitex doniana)

Procedia PDF Downloads 28
667 Studies of Single Nucleotide Polymorphism of Proteasomal Gene Complex and Their Association with HBV Infection Risk in India

Authors: Jasbir Singh, Devender Kumar, Davender Redhu, Surender Kumar, Vandana Bhardwaj

Abstract:

Single nucleotide polymorphism (SNP) of the proteasomal gene complex is involved in the pathogenesis of hepatitis B virus (HBV) infection. Components of this proteasomal gene complex include the large multifunctional proteins (LMP) and the transporters associated with antigen presentation, which help in antigen presentation. Both are involved in intracellular processing and presentation of viral antigens in association with Major Histocompatibility Complex (MHC) Class I molecules. A total of one hundred each of hepatitis B virus infected and control samples from northern India were studied. Genomic DNA was extracted from all studied samples, and the PCR-RFLP method was used for genotyping at different positions of the LMP genes. Genotypes at a given position were inferred from the pattern of bands, and genotype frequencies and haplotype frequencies were also calculated. A homozygous SNP {A>C} observed at codon 145 of the LMP7 gene has a protective role against HBV, as there was a statistically significant higher distribution of this SNP among controls than cases. A heterozygous SNP {A>C} observed at codon 145 of the LMP7 gene made individuals more susceptible to HBV infection, as there was a statistically significant higher distribution of this SNP among cases than controls. An SNP {T>C} was observed at codon 60 of the LMP2 gene, but statistically significant differences were not observed between controls and cases. For codon 145 of LMP7 and codon 60 of LMP2, four haplotypes were constructed. Haplotype I (LMP2 ‘C’ and LMP7 ‘A’) made individuals carrying it more susceptible to HBV infection, as there was a statistically significant higher distribution of this haplotype among cases than controls. Haplotype II (LMP2 ‘C’ and LMP7 ‘C’) made individuals carrying it more immune to HBV infection, as there was a statistically significant higher distribution of this haplotype among controls than cases. Thus it can be concluded that the homozygous SNP {A>C} at codon 145 of LMP7 and Haplotype II (LMP2 ‘C’ and LMP7 ‘C’) have a protective role against HBV infection, whereas the heterozygous SNP {A>C} at codon 145 of LMP7 and Haplotype I (LMP2 ‘C’ and LMP7 ‘A’) make individuals more susceptible to HBV infection.

Keywords: Hepatitis B Virus, single nucleotide polymorphism, low molecular weight proteins, transporters associated with antigen presentation

Procedia PDF Downloads 295
666 Language Choice and Language Maintenance of Northeastern Thai Staff in Suan Sunandha Rajabhat University

Authors: Napasri Suwanajote

Abstract:

The purposes of this research were to analyze and evaluate the success factors in the OTOP production process for the development of a learning center on the OTOP production process based on the Sufficiency Economy Philosophy for sustainable life quality. The research was designed as a qualitative study gathering information from 30 OTOP producers in Bangkontee District, Samudsongkram Province. They were all interviewed on 3 main parts. Part 1 was about the production process, including 1) production, 2) product development, 3) community strength, 4) marketing possibility, and 5) product quality. Part 2 evaluated appropriate success factors, including 1) analysis of the success factors, 2) evaluation of the strategy based on the Sufficiency Economy Philosophy, and 3) the model of a learning center on the OTOP production process based on the Sufficiency Economy Philosophy for sustainable life quality. The results showed that production did not affect the environment and had the potential to continue standard-quality production. The raw materials used were sourced within the country. Regarding product and community strength over the past year, it was found that there was no appropriate packaging showing product identity according to global market standards, and producers needed training on packaging, especially for food and drink products. Regarding product quality and product specification, it was found that the products were certified by the local OTOP standard. There should be a responsible organization to help the uncertified producers pass the standard. However, there was a problem of food contamination, which is hazardous to consumers. The producers should cooperate with the government sector or educational institutes involved with food processing to reach the FDA standard. The results from the small group discussion showed that the community expected higher education and a better standard of living. Some problems reported by the community included informal debt and drugs in the community. There were 8 steps in developing the model of the learning center on the OTOP production process based on the Sufficiency Economy Philosophy for sustainable life quality.

Keywords: production process, OTOP, sufficiency economy philosophy, language choice

Procedia PDF Downloads 221
665 Identification of Candidate Congenital Heart Defects Biomarkers by Applying a Random Forest Approach on DNA Methylation Data

Authors: Kan Yu, Khui Hung Lee, Eben Afrifa-Yamoah, Jing Guo, Katrina Harrison, Jack Goldblatt, Nicholas Pachter, Jitian Xiao, Guicheng Brad Zhang

Abstract:

Background and Significance of the Study: Congenital heart defects (CHDs) are the most common malformation at birth and one of the leading causes of infant death. Although the exact etiology remains a significant challenge, epigenetic modifications, such as DNA methylation, are thought to contribute to the pathogenesis of congenital heart defects. At present, no existing DNA methylation biomarkers are used for early detection of CHDs. The existing CHD diagnostic techniques are time-consuming and costly and can only be used to diagnose CHDs after an infant is born. The present study employed a machine learning technique to analyse genome-wide methylation data in children with and without CHDs, with the aim of finding methylation biomarkers for CHDs. Methods: The Illumina Human Methylation EPIC BeadChip was used to screen the genome‐wide DNA methylation profiles of 24 infants diagnosed with congenital heart defects and 24 healthy infants without congenital heart defects. Primary pre-processing was conducted using the RnBeads and limma packages. The methylation levels of the top 600 genes with the lowest p-values were selected and further investigated using a random forest approach. ROC curves were used to analyse the sensitivity and specificity of each biomarker in both the training and test sample sets. The functionalities of selected genes with high sensitivity and specificity were then assessed in molecular processes. Major Findings of the Study: Three genes (MIR663, FGF3, and FAM64A) were identified from both training and validation data by random forests, with an average sensitivity and specificity of 85% and 95%, respectively. GO analyses for the top 600 genes showed that these putative differentially methylated genes were primarily associated with regulation of lipid metabolic processes, protein-containing complex localization, and the Notch signalling pathway. The present findings highlight that aberrant DNA methylation may play a significant role in the pathogenesis of congenital heart defects.
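
A minimal sketch of the random forest and ROC steps described above, assuming the top 600 probes/genes have already been selected; array shapes, parameters and the placeholder data are illustrative, not the authors' exact settings:

```python
# Minimal sketch (not the authors' pipeline): a random forest trained on pre-selected
# methylation features, with ROC-AUC used to gauge how well candidate markers
# separate CHD cases from controls, and feature importances used to shortlist genes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X = np.random.rand(48, 600)        # placeholder: 48 infants x 600 pre-selected genes
y = np.array([1] * 24 + [0] * 24)  # 24 CHD cases, 24 healthy controls

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

# Rank features by importance to shortlist candidate biomarkers
top = np.argsort(rf.feature_importances_)[::-1][:3]
print(f"test AUC = {auc:.2f}, top feature indices: {top}")
```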

Keywords: biomarker, congenital heart defects, DNA methylation, random forest

Procedia PDF Downloads 142
664 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object Relational Database Management System (ORDBMS) that has been in existence for a while. Despite the superior features that it wraps and packages to manage databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on provisioning a better development environment for PostgreSQL in order to encourage its utilization and elucidate its importance. PostgreSQL is also known to be the world’s most advanced SQL-compliant open source ORDBMS. However, users have not yet turned to PostgreSQL, owing to the fact that it remains relatively obscure and to the complexity of its persistently textual environment for an introductory user. Simply stated, there is a dire need to explicate an easy way of helping users comprehend the procedures and standards by which databases are created, tables and the relationships among them are defined, and queries are manipulated and their flow controlled based on conditions in PostgreSQL, so as to help the community adopt PostgreSQL at an augmented rate. Hence, this research initially identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community is hesitant to migrate to PostgreSQL’s environment will be carried out. These will be modulated and tailored based on the scope and the constraints discovered. The research proposes a system that will serve as a design platform as well as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on conjuring viable solutions that analyze a user’s cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. By providing a visually draggable and manipulable environment in which to work with PostgreSQL databases and table queries, it is expected to highlight the elementary features offered by PostgreSQL over existing systems, in order to grasp and disseminate the importance and simplicity it offers to a hesitant user.
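
As a point of reference for the create/relate/query workflow the proposed editor is meant to teach, a minimal sketch using psycopg2 against a local PostgreSQL instance; the connection parameters and table names are placeholders, and this code is not part of the proposed system:

```python
# Minimal sketch: creating two related tables and running a conditional query against
# PostgreSQL with psycopg2, i.e. the create/relate/query steps a beginner currently
# performs in the textual environment.
import psycopg2

conn = psycopg2.connect(dbname="demo", user="postgres", password="secret", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS department (
            id   serial PRIMARY KEY,
            name text NOT NULL
        );
        CREATE TABLE IF NOT EXISTS employee (
            id      serial PRIMARY KEY,
            name    text NOT NULL,
            dept_id integer REFERENCES department(id)   -- relationship between the tables
        );
    """)
    cur.execute(
        "SELECT e.name, d.name FROM employee e JOIN department d ON e.dept_id = d.id "
        "WHERE d.name = %s;", ("Engineering",))
    print(cur.fetchall())
conn.close()
```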

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 259
663 Spatial Analysis of Survival Pattern and Treatment Outcomes of Multi-Drug Resistant Tuberculosis (MDR-TB) Patients in Lagos, Nigeria

Authors: Akinsola Oluwatosin, Udofia Samuel, Odofin Mayowa

Abstract:

The study is aimed at a Geographic Information System (GIS)-based spatial analysis of the survival pattern and treatment outcomes of multi-drug resistant tuberculosis (MDR-TB) cases in Lagos, Nigeria, with the objective of informing priority areas for public health planning and resource allocation. Multi-drug resistant tuberculosis (MDR-TB) develops due to problems such as irregular drug supply, poor drug quality, inappropriate prescription, and poor adherence to treatment. The shapefile(s) for this study were already georeferenced to the Minna datum. The patients' information was acquired from various hospitals in MS Excel and later converted to a .CSV file for easy processing in ArcMap. To superimpose the patients' information on the spatial data, the addresses were geocoded to generate the longitude and latitude of the patients. The database was used for SQL queries on the various patterns of treatment. To show the pattern of disease spread, spatial autocorrelation analysis was used. The result was displayed in a graphical format showing areas of dispersed, random and clustered patients in the study area. Hot and cold spot analysis was carried out to show high-density areas. The distance between these patients and the closest health facility was examined using buffer analysis. The results show that 22% of the points were successfully matched, while 15% were tied. However, the result table shows that a greater percentage were unmatched; this is evident in the fact that most of the streets within the State are unnamed, and then again, many of the patients are likely to have supplied wrong addresses. MDR-TB patients of all age groups are concentrated within Lagos-Mainland, Shomolu, Mushin, Surulere, Oshodi-Isolo, and Ifelodun LGAs. MDR-TB patients in the 30-47 year age group had the highest number, identified to be about 184. The outcome of patients on ART treatment revealed that a high number of patients (300) were not on ART treatment, while only 45 patients were on ART treatment. The results show that the Z-score of the distribution is greater than the critical value (>2.58), which means that the distribution is highly clustered at a significance level of 0.01.
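
A minimal sketch of a global spatial autocorrelation test of the kind reported above, using open-source Python libraries rather than ArcMap; the shapefile and column names are assumptions, not the authors' data:

```python
# Minimal sketch (not the authors' ArcMap workflow): global Moran's I on MDR-TB case
# counts per LGA, the analogue of the clustering/Z-score result quoted in the abstract.
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran

lgas = gpd.read_file("lagos_lgas.shp")   # hypothetical LGA polygons
y = lgas["mdrtb_cases"]                  # hypothetical per-LGA MDR-TB case counts

w = Queen.from_dataframe(lgas)           # contiguity-based spatial weights
w.transform = "r"                        # row-standardize the weights
mi = Moran(y, w)
# A z-score above the 0.01 critical value (2.58) indicates significant clustering.
print(f"Moran's I = {mi.I:.3f}, z = {mi.z_norm:.2f}, p = {mi.p_norm:.4f}")
```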

Keywords: tuberculosis, patients, treatment, GIS, MDR-TB

Procedia PDF Downloads 134
662 Tale of Massive Distressed Migration from Rural to Urban Areas: A Study of Mumbai City

Authors: Vidya Yadav

Abstract:

Migration is the demographic process that links rural to urban areas, generating or spurring the growth of cities. Evidence shows the role of the city in production processes; it views the city as a centre of power and a centre of change. It has been observed that not only do professionals want to settle down in urban areas, but rural labourers are also coming to cities for employment. These are the people who are compelled to migrate to metropolises because of the lack of employment opportunities in their place of residence. However, the cities also fail to provide adequate employment because of limited job opportunity creation and capital-intensive industrialization. So these masses of incoming migrants are forced to take up whatever employment is available to them, particularly in urban informal activities. Ultimately, with these informal jobs, they are compelled to stay in slum areas, which are another form of deprived housing colonies. The paper seeks to examine the evidence of poverty-induced migration from rural to urban areas (particularly in the urban agglomeration). The present paper utilizes an abundantly rich source of census migration data (D-Series) for 1991-2001. Results show that Mumbai remains the most attractive destination for migrants. The migrants are mainly from the major states of Uttar Pradesh, Bihar, West Bengal, Jharkhand, Odisha, and Rajasthan. Male-dominated migration is related mostly to employment, and female migration mostly to marriage. The occupational absorption of migrants who moved for employment was cross-classified with educational status. Results show that illiterate males are primarily engaged in low-grade production and processing work. Illiterate females are engaged in service sectors, but these are actually very low-grade services in the urban informal sector in India, such as maid servants, domestic help, hawkers, vendors or vegetable sellers. Among those with higher educational levels, a small percentage of males and females were absorbed in professional or clerical work, but this percentage increased over the period 1991-2001.

Keywords: informal, job, migration, urban

Procedia PDF Downloads 264
661 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry

Authors: S. McLean, J. A. Scott

Abstract:

The research topic is focusing on improving sustainable operation through recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. There are significant advantages to the application of this topic, including economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for appropriate recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers in a manner that makes it useful for repurposing in applications, such as within an acid plant. This would be valuable to mining companies as it would be an opportunity to reduce the cost of the process, as well as decrease environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures result in a decrease of the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining if the proposed heat recovery technique is economically viable, as well as assessing any environmental impact with the reduction in net energy consumption by the process. Therefore, comprehensive cost and impact analyses are carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of thermal resources available to them and determine if a full feasibility study should be carried out. The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced cost and CO₂ emissions.

Keywords: environment, heat recovery, mining engineering, sustainability

Procedia PDF Downloads 96
660 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE), backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As an effect, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as is feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost-functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As an effect, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
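
A schematic statement of the minimization described above, assuming an L2 mismatch norm and a plain gradient-descent update; the exact norm, regularizer order p, weight λ and step size α are not specified in the abstract:

```latex
J(u_0) \;=\; \tfrac{1}{2}\int_\Omega \big|u(x,1;u_0)-v_1(x)\big|^2 \,\mathrm{d}x
\;+\; \lambda \int_\Omega \big|\nabla^{2p} u_0(x)\big|^2 \,\mathrm{d}x ,
\qquad
u_0^{(k+1)} \;=\; u_0^{(k)} \;-\; \alpha\,\frac{\delta J}{\delta u_0}\bigg|_{u_0^{(k)}} .
```

Here each evaluation of δJ/δu0 corresponds to one forward NSE integration from t = 0 to t = 1 followed by one backward integration of the adjoint NSE from t = 1 to t = 0, which is the sense in which the AGM replaces a single unstable backward integration by multiple stable forward and adjoint sweeps.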

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 206
658 Physico-Chemical and Microbial Changes of Organic Fertilizers after Composting Processes under Arid Conditions

Authors: Oustani Mabrouka, Halilat Med Tahar

Abstract:

The physico-chemical properties of poultry droppings indicate that this waste can be an excellent way to enrich soils of low fertility, as is the case for arid soils (low organic matter content), but its concentrations of some microbial and chemical components make it a potentially dangerous and toxic contaminant if used directly in the fresh state. On the other hand, the accumulation of plant residues in crop areas can become a source of plant disease and affects the quality of the environment. The biotechnological processes that we have identified appear to alleviate these problems. They lead to the stabilization and processing of wastes into a product of good hygienic quality and high fertilizer value through composting. In this context, a composting trial was conducted in the region of Ouargla, located in southern Algeria. The composting test was conducted in a completely randomized design experiment. Three mixtures were prepared, in pits of 1 m³ volume for each mixture. Each pit was composed of a mixture of poultry droppings and crushed plant residues in amounts of 40% and 60%, respectively: C1: Poultry Droppings + Straw (P.D+S), C2: Poultry Droppings + Olive Wastes (P.D+O.W), C3: Poultry Droppings + Date Palm Residues (P.D+D.P). Before and after the composting process, physico-chemical parameters (temperature, moisture, pH, electrical conductivity, total carbon and total nitrogen) were studied. The stability of the biological system was noticed after 90 days. The physico-chemical and microbiological results for the three composts obtained from the mixtures C1: (P.D+S), C2: (P.D+O.W) and C3: (P.D+D.P) show that, at the end of the composting process, the final products are of high agronomic and environmental interest, with good physico-chemical characteristics, in particular a low C/N ratio of 15.15, 10.01 and 15.36 for (P.D+S), (P.D+O.W) and (P.D+D.P), respectively, reflecting the stabilization and maturity of the composts. On the other hand, a significant increase in temperature was recorded in the first days of composting for all treatments, which is correlated with a strong reduction of the pathogenic microflora contained in the poultry droppings.

Keywords: arid environment, composting, date palm residues, olive wastes, pH, pathogenic microorganisms, poultry droppings, straw

Procedia PDF Downloads 217
658 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient’s CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to study the comparison between CT dose monitor values and measured ones, as well as to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single and multislice scanners for the available kV, 5 mm slice thickness, 100 mA and FOV combinations used. A total of 100 CT scanners were involved in this study. Data with regard to 15,000 examinations of patients who underwent routine head, chest and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Out of the 15,000 examinations, 5000 were head CT examinations, 5000 were chest CT examinations and 5000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest and abdomen protocols mentioned above, respectively. Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer scan lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some of the CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners should select appropriate slice thickness and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest and abdomen procedures are optimized, then the dose indices would be optimal, leading to a lowering of CT doses. In the South Indian region, all the CT machines were routinely tested for QA once a year as per AERB requirements.
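
One plausible way to express the divergence figures quoted above is as a percentage deviation of the displayed value from the phantom measurement; the abstract does not state the exact formula, and the numbers below are placeholders:

```python
# Minimal sketch (assumed convention, not the authors' stated formula): percentage
# divergence between console-displayed and phantom-measured CTDIvol, averaged over
# scanners, for one protocol.
import numpy as np

displayed = np.array([14.2, 15.0, 13.8])   # placeholder CTDIvol shown on consoles (mGy)
measured = np.array([13.5, 13.9, 14.6])    # placeholder phantom-measured CTDIvol (mGy)

divergence = 100.0 * (displayed - measured) / measured
print(f"mean divergence = {divergence.mean():.1f}%")
```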

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 240
657 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

The article shows the results of a project which aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve the objective, the research team developed a world-trend-based but unique methodology of processing and interpretation, adjusted to the data, local variations and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well-logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g. the central part of the Carpathian Foredeep – the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data with the use of integrated geophysical investigations has been carried out. Construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modelling is carried out with Schlumberger’s Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results of the research indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas

Procedia PDF Downloads 216
656 Distribution of Antioxidants between Sour Cherry Juice and Pomace

Authors: Sonja Djilas, Gordana Ćetković, Jasna Čanadanović-Brunet, Vesna Tumbas Šaponjac, Slađana Stajčić, Jelena Vulić, Milica Vinčić

Abstract:

In recent years, interest in foods rich in bioactive compounds, such as polyphenols, has increased the appeal of functional food products. Bioactive components help to maintain health and to prevent diseases such as cancer, cardiovascular disease and many other degenerative diseases. Recent research has shown that fruit pomace, a byproduct generated during juice production, can be a potential source of valuable bioactive compounds. The use of fruit industrial waste in the processing of functional foods represents an important new step for the food industry. Sour cherries have considerable nutritional, medicinal, dietetic and technological value. In terms of cherry production volume, Serbia ranks seventh in the world, with a share of 7% of total production. The use of sour cherry pomace has so far been limited to animal feed, even though it is potentially a good source of polyphenols. For this study, the local sour cherry variety cv. ‘Feketićka’ was chosen for its more intensive taste and deeper red color, indicating a high anthocyanin content. The contents of total polyphenols, flavonoids and anthocyanins, as well as the radical scavenging activity on DPPH radicals and the reducing power of sour cherry juice and pomace, were compared using spectrophotometric assays. According to the results obtained, 66.91% of the total polyphenols, 46.77% of the flavonoids, 46.77% of the total anthocyanins and 47.88% of the anthocyanin monomers from the sour cherry fruits were transferred to the juice. On the other hand, 29.85% of the total polyphenols, 33.09% of the flavonoids, 53.23% of the total anthocyanins and 52.12% of the anthocyanin monomers remained in the pomace. Regarding radical scavenging activity, 65.51% of the Trolox equivalents from the sour cherries were exported to the juice, while 34.49% remained in the pomace. However, the reducing power of the sour cherry juice was much stronger than that of the pomace (91.28% and 8.72% of the Trolox equivalents from the sour cherry fruits, respectively). Based on our results, it can be concluded that sour cherry pomace is still a rich source of natural antioxidants, especially anthocyanins with coloring capacity; therefore, it can be used for the development of dietary supplements and for food fortification.
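A small illustrative sketch of the partitioning arithmetic used above, assuming each antioxidant pool is expressed relative to the content measured in the whole fruit; the quantities and variable names are placeholders, not the study's measurements.

```python
# Illustrative only: expressing the antioxidant content recovered in juice and
# retained in pomace as percentages of the fruit content, as in the abstract.
# All numbers are hypothetical placeholders.

def percent_of_fruit(fraction_amount_mg: float, fruit_amount_mg: float) -> float:
    """Share of the fruit's antioxidant pool found in a given fraction, in %."""
    return 100.0 * fraction_amount_mg / fruit_amount_mg

fruit_polyphenols_mg = 1000.0   # hypothetical total in the processed fruit
juice_polyphenols_mg = 670.0    # hypothetical amount recovered in the juice
pomace_polyphenols_mg = 300.0   # hypothetical amount remaining in the pomace

print(f"juice:  {percent_of_fruit(juice_polyphenols_mg, fruit_polyphenols_mg):.2f}%")
print(f"pomace: {percent_of_fruit(pomace_polyphenols_mg, fruit_polyphenols_mg):.2f}%")
# The two shares need not sum to 100% because of processing losses.
```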

Keywords: antioxidants, polyphenols, pomace, sour cherry

Procedia PDF Downloads 305
655 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines the hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to the time series of groundwater-level observations and analyzes the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, i.e., the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools to arrive at the best estimate of the hydrogeological structure, as sketched below. The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
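A rough sketch, under simplifying assumptions, of two of the analytical tools named above: extracting the daily-frequency amplitude of a groundwater-level series with the Fourier transform, and estimating the rainfall-to-peak-level lag via cross-correlation. The hourly sampling interval, function names, and the requirement that both series have equal length are assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code) of the frequency and cross-correlation analyses.
import numpy as np

def daily_amplitude(levels_hourly: np.ndarray) -> float:
    """Amplitude of the one-cycle-per-day component of an hourly level series."""
    detrended = levels_hourly - levels_hourly.mean()
    spectrum = np.fft.rfft(detrended)
    freqs = np.fft.rfftfreq(len(detrended), d=1.0)      # cycles per hour
    daily_bin = np.argmin(np.abs(freqs - 1.0 / 24.0))   # bin closest to 1 cycle / 24 h
    return 2.0 * np.abs(spectrum[daily_bin]) / len(detrended)

def response_lag_hours(rainfall: np.ndarray, levels: np.ndarray) -> int:
    """Lag (hours) at which rainfall and groundwater level correlate most strongly."""
    r = (rainfall - rainfall.mean()) / rainfall.std()
    g = (levels - levels.mean()) / levels.std()
    xcorr = np.correlate(g, r, mode="full")              # both series same length
    lags = np.arange(-len(r) + 1, len(r))
    positive = lags >= 0                                  # level responds after rainfall
    return int(lags[positive][np.argmax(xcorr[positive])])
```

The daily amplitude flags aquifers dominated by pumping cycles, while the lag feeds the decision tree as a proxy for replenishment time.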

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 141
654 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method

Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter

Abstract:

This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Previous research has suggested that this learning mechanism is robust throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with a different spatial configuration and orientation (horizontal, vertical, or oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the more familiar pair in each of 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase across all groups. The main difference was that older participants fixated empty cells of the grid more often than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
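A minimal sketch of the kind of above-chance comparison described (a one-sample t-test against the 50% chance level of the two-alternative forced-choice task); the per-participant scores are invented, and this is not the study's analysis script.

```python
# Illustrative sketch: is a group's mean proportion correct above chance (0.5)?
import numpy as np
from scipy import stats

chance_level = 0.5
scores = np.array([0.62, 0.58, 0.71, 0.54, 0.66, 0.60, 0.57, 0.63])  # hypothetical proportions correct

t_stat, p_two_sided = stats.ttest_1samp(scores, popmean=chance_level)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2  # above-chance direction

print(f"mean = {scores.mean():.3f}, t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
```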

Keywords: aging, eye tracking, implicit learning, visual statistical learning

Procedia PDF Downloads 60
653 Effect of Roasting Temperature on the Proximate, Mineral and Antinutrient Content of Pigeon Pea (Cajanus cajan) Ready-to-Eat Snack

Authors: Olaide Ruth Aderibigbe, Oluwatoyin Oluwole

Abstract:

Pigeon pea is one of the minor leguminous crops; though underutilised, it is used traditionally by farmers to alleviate hunger and malnutrition. Pigeon pea is cultivated in Nigeria by subsistence farmers. It is rich in protein and minerals; however, its utilisation as food is common only among the poor and rural populace who cannot afford more expensive sources of protein. One of the factors contributing to its limited use is its high antinutrient content, which makes it indigestible, especially when eaten by children. The development of value-added products that can reduce the antinutrient content and make the nutrients more bioavailable will increase the utilisation of the crop and contribute to the reduction of malnutrition. This research therefore determined the effects of different roasting temperatures (130 °C, 140 °C, and 150 °C) on the proximate, mineral and antinutrient components of a pigeon pea snack. Seeds of the brown variety of pigeon pea were purchased from a local market (Otto) in Lagos, Nigeria. The seeds were cleaned, washed, and soaked in 50 ml of water containing sugar and salt (4:1) for 15 minutes, and thereafter roasted at 130 °C, 140 °C, and 150 °C in an electric oven for 10 minutes. Proximate, mineral, phytate, tannin and alkaloid content analyses were carried out in triplicate following standard procedures. The results of the three replicates were pooled and expressed as mean ± standard deviation; a one-way analysis of variance (ANOVA) and a Least Significant Difference (LSD) test were carried out, as sketched below. The roasting temperature significantly (P<0.05) affected the protein, ash, fibre and carbohydrate content of the snack. The ready-to-eat snack prepared by roasting at 150 °C had a significantly higher protein content (23.42±0.47%) than those roasted at 130 °C and 140 °C (18.38±1.25% and 20.63±0.45%, respectively). The same trend was observed for the ash content (3.91±0.11 at 150 °C, 2.36±0.15 at 140 °C and 2.26±0.25 at 130 °C), while the fibre and carbohydrate contents were highest at a roasting temperature of 130 °C. Iron, zinc, and calcium were not significantly (P>0.05) affected by the different roasting temperatures. Antinutrient levels decreased with increasing temperature. Phytate levels were 0.02±0.00, 0.06±0.00, and 0.07±0.00 mg/g; tannin levels were 0.50±0.00, 0.57±0.00, and 0.68±0.00 mg/g; and alkaloid levels were 0.51±0.01, 0.78±0.01, and 0.82±0.01 mg/g at 150 °C, 140 °C, and 130 °C, respectively. These results show that roasting at a high temperature (150 °C) can be used as a processing technique to increase the protein content and decrease the antinutrient content of pigeon pea.
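A short sketch of the one-way ANOVA step referenced above, assuming triplicate protein measurements per roasting temperature; the replicate values are hypothetical, chosen only to mirror the reported means, so this is not the authors' dataset or script.

```python
# Minimal sketch: one-way ANOVA comparing protein content across roasting temperatures.
from scipy import stats

protein_130 = [17.2, 18.5, 19.4]   # % protein, hypothetical triplicates at 130 °C
protein_140 = [20.2, 20.6, 21.1]   # at 140 °C
protein_150 = [23.0, 23.4, 23.9]   # at 150 °C

f_stat, p_value = stats.f_oneway(protein_130, protein_140, protein_150)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a temperature effect
```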

Keywords: antinutrients, pigeon pea, protein, roasting, underutilised species

Procedia PDF Downloads 119
652 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not merely support disaster work; it is also the foundation of smart disaster management, and historical disaster information can be retrieved from it using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and reports, whether printed or generated by scanners, into electronic documents. The converted disaster data is then organized under the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information from unstructured data using Optical Character Recognition is an important element of smart disaster management. In this paper, a character recognition rate of over 90% was achieved for Korean characters by using an upgraded OCR. The recognition rate depends on the font, size, and special symbols of the characters; we improved it through a machine learning algorithm. The converted structured data are managed in a standardized disaster information form connected with the disaster code system. The disaster code system ensures that structured information can be stored and retrieved across the entire disaster cycle, covering historical disaster progress, damages, response, and recovery. The results of this research are expected to be applicable to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
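A hedged sketch of the scanned-report-to-structured-record pipeline described above. It uses the open-source Tesseract engine via pytesseract rather than the authors' upgraded OCR, and the disaster_code value, file name, and record fields are made-up placeholders.

```python
# Sketch only: scanned disaster report image -> OCR text -> coded record.
from PIL import Image
import pytesseract

def report_image_to_record(image_path: str, disaster_code: str) -> dict:
    """Convert a scanned disaster report image into a structured record."""
    text = pytesseract.image_to_string(Image.open(image_path), lang="kor")  # Korean OCR
    return {
        "disaster_code": disaster_code,   # hypothetical code from the disaster code system
        "source_file": image_path,
        "raw_text": text,                 # to be parsed into standardized fields downstream
    }

record = report_image_to_record("flood_report_2019.png", disaster_code="NAT-FLD-001")
```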

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 111
651 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn

Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh

Abstract:

It is extremely important to study the postmortem interval (PMI) in different causes of death, since it often assists greatly in forming an opinion on the exact cause of death following such an incident. With sound knowledge of the interval, an expert can establish that the cause of death is not feigned; hence, there is a great need to evaluate such a death at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation, or proteolysis, of a cardiac protein in cases of death due to burn as a marker of time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India) after informed consent from the relatives, and post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20±2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were burn victims without any prior history of disease who died in hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the postmortem hours increased. A significant difference was observed between <15 h and other postmortem hours (p<0.01). A significant difference in the cTnT level (% of intact) was also observed between 16-25 h and 56-65 h and >75 h (p<0.01). Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low-molecular-weight fragments. Overall, the PMI had a statistically significant effect on the cardiac tissue of the burned corpses, with the greatest amount of protein breakdown observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against time postmortem. A strong, significant positive correlation was found between cTnT and postmortem hours (r=0.87, p=0.0001), and the regression analysis explained a good share of the variability (R²=0.768). The post-mortem troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with potential for use as a predictor of the PMI in cases of burn deaths.
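A sketch, under the pseudo-first-order assumption stated above, of how a fitted decay of percent intact cTnT could be inverted to estimate the postmortem interval; the data points and the estimate_pmi helper are illustrative, not the study's values or code.

```python
# Illustrative sketch: pseudo-first-order fit of % intact cTnT vs. time postmortem.
import numpy as np

hours_pm = np.array([7.3, 18.2, 30.3, 41.2, 54.3, 65.2, 88.4])      # time postmortem (h)
pct_intact = np.array([92.0, 75.0, 60.0, 48.0, 35.0, 27.0, 15.0])   # hypothetical % intact cTnT

# Pseudo-first-order kinetics: ln(% intact) = intercept - k * t
slope, intercept = np.polyfit(hours_pm, np.log(pct_intact), deg=1)
k = -slope  # decay constant (per hour)

def estimate_pmi(percent_intact: float) -> float:
    """Invert the fitted decay to predict hours since death from % intact cTnT."""
    return (intercept - np.log(percent_intact)) / k

print(f"k = {k:.4f} /h, estimated PMI at 40% intact = {estimate_pmi(40.0):.1f} h")
```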

Keywords: burn, degradation, postmortem interval, troponin-T

Procedia PDF Downloads 429
650 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications, such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and use it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and that the discrimination becomes clearer as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay among accuracy, computing resources and explainability of classification results. The analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
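A minimal sketch of the k-mer representation on which the classification models are built, assuming simple overlapping k-mer counts over the A/C/G/T alphabet; the toy sequences, phenotype labels, and choice of k=3 are placeholders rather than the study's pipeline, which used whole MTB genomes and k-mer sizes up to 10.

```python
# Sketch only: turning DNA sequences into k-mer count vectors for classification.
from collections import Counter
from itertools import product

def kmer_vector(sequence: str, k: int, alphabet: str = "ACGT") -> list[int]:
    """Count occurrences of every possible k-mer in a DNA sequence."""
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    return [counts.get("".join(kmer), 0) for kmer in product(alphabet, repeat=k)]

# Toy example with k = 3; the feature space grows as 4**k, hence the computing cost at k = 10.
sequences = ["ACGTACGTGG", "TTGACGTTGA"]
labels = ["resistant", "susceptible"]            # hypothetical phenotypes
features = [kmer_vector(s, k=3) for s in sequences]
# `features` and `labels` would then be passed to a standard classifier.
```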

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 149