Search results for: relevance vector machines
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2575


55 Glocalization of Journalism and Mass Communication Education: Best Practices from an International Collaboration on Curriculum Development

Authors: Bellarmine Ezumah, Michael Mawa

Abstract:

Glocalization is often defined as the practice of conducting business according to both local and global considerations. This epitomizes the curriculum co-development collaboration between a journalism and mass communication professor from a university in the United States and Uganda Martyrs University in Uganda, where a brand-new journalism and mass communication program was recently co-developed. This paper presents the experiences and research results of this initiative, which was funded through the Institute of International Education (IIE) under the umbrella of the Carnegie African Diaspora Fellowship Program (CADFP). Vital international and national concerns were addressed. On a global level, scholars have questioned and criticized the Western model ingrained in journalism and mass communication curricula and have proposed a decolonization of journalism curricula. Another major criticism is the practice of Western-based educators transplanting their curricula verbatim to other regions of the world without paying attention to local needs. To address these two global concerns, an extensive assessment of local needs was conducted prior to the conceptualization of the new program. The needs assessment adopted a participatory action model and captured the knowledge and narratives of both internal and external stakeholders. It involved a review of pertinent documents, including the nation’s constitution, governmental briefs, and promulgations; interviews with government officials, media and journalism educators, media practitioners, and students; and benchmarking against the curricula of other tertiary institutions in the nation. Information gathered through this process served as the blueprint and frame of reference for all design decisions. In the area of local needs, several key factors were addressed. First, most media personnel in Uganda are both academically and professionally unqualified.
Second, practitioners with academic training were found to lack experience. Third, the current curricula offered at several tertiary institutions are not comprehensive and lack local relevance. The project addressed these problems as follows. First, the program was designed to cater to both traditional and non-traditional students, offering unqualified media practitioners the opportunity to obtain formal training through evening and weekend programs. Second, the challenge of inexperienced graduates was mitigated by designing the program around an experiential learning approach that many refer to as the ‘teaching hospital model’. This entails integrating practice with theory, similar to the way medical students engage in hands-on practice under the supervision of a mentor. The university drew up a Memorandum of Understanding (MoU) with reputable media houses so that students and faculty can use their studios for hands-on experience and seasoned media practitioners can guest-teach some courses. Given the converged functions of the media industry today, graduates should be trained to have adequate knowledge of other disciplines; the curriculum therefore integrated cognate courses that render graduates versatile. Ultimately, this research serves as a template for African colleges and universities to follow in their quest to glocalize their curricula. While the general concept of journalism may remain Western, journalism curriculum developers in Africa, through extensive needs assessment and a focus on those needs and other societal particularities, can adjust the Western model to fit their local contexts.

Keywords: curriculum co-development, glocalization of journalism education, international journalism, needs assessment

Procedia PDF Downloads 114
54 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference

Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev

Abstract:

Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complex dynamics of this disease and to study its drivers, a comprehensive model is essential: one capable of both robust forecasting and insightful inference of drivers while capturing the co-circulation of several virus strains. However, existing studies mostly focus on only one aspect at a time and do not carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and do not integrate the full diversity of transmission drivers. Purely data-driven methods, on the other hand, lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions of Sri Lanka at the weekly time scale. Ablation studies determined the lag effects of time-varying climate factors, allowing delays of up to 12 weeks. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about the drivers while adjusting for the dynamics and influence of immunity and the introduction of a new strain.
The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural versus urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important at the study location. Additionally, the model indicated sensitivity to the vegetation index, both maximum and average. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that pair biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential for both inference of drivers in complex disease dynamics and robust forecasting.
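A vector (SEI) to human (SEIR) compartmental core like the one described can be sketched as coupled difference equations. The sketch below uses forward-Euler stepping and purely illustrative parameter values, not the study's fitted, covariate-driven rates:

```python
# Minimal vector (SEI) to human (SEIR) sketch with forward-Euler stepping.
# All rates and population sizes are illustrative placeholders, not the
# fitted Sri Lanka values from the study.

def step(state, params, dt=1.0):
    """Advance the coupled SEI-SEIR system by one time step."""
    Sh, Eh, Ih, Rh, Sv, Ev, Iv = state
    Nh = Sh + Eh + Ih + Rh
    Nv = Sv + Ev + Iv
    new_inf_h = params["beta_hv"] * Sh * Iv / Nh  # vector-to-human infections
    new_inf_v = params["beta_vh"] * Sv * Ih / Nh  # human-to-vector infections

    dSh = -new_inf_h
    dEh = new_inf_h - params["sigma_h"] * Eh          # latent -> infectious
    dIh = params["sigma_h"] * Eh - params["gamma_h"] * Ih
    dRh = params["gamma_h"] * Ih                      # recovery
    dSv = params["mu_v"] * Nv - new_inf_v - params["mu_v"] * Sv
    dEv = new_inf_v - (params["sigma_v"] + params["mu_v"]) * Ev
    dIv = params["sigma_v"] * Ev - params["mu_v"] * Iv

    return [x + dt * dx for x, dx in
            zip(state, (dSh, dEh, dIh, dRh, dSv, dEv, dIv))]

params = dict(beta_hv=0.3, beta_vh=0.3,   # transmission rates
              sigma_h=1.2, gamma_h=1.0,   # human incubation/recovery
              sigma_v=0.7, mu_v=0.25)     # vector incubation, birth/death
state = [9990.0, 0.0, 10.0, 0.0, 50000.0, 0.0, 100.0]
for _ in range(520):                      # 52 weeks at dt = 0.1
    state = step(state, params, dt=0.1)
```

The human compartments sum to a constant by construction, which is a useful sanity check; in the actual hybrid model the rates would be time-varying functions of the climate and socioeconomic covariates.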

Keywords: compartmental model, climate, dengue, machine learning, socioeconomic

Procedia PDF Downloads 53
53 Aquaporin-1 as a Differential Marker in Toxicant-Induced Lung Injury

Authors: Ekta Yadav, Sukanta Bhattacharya, Brijesh Yadav, Ariel Hus, Jagjit Yadav

Abstract:

Background and Significance: Respiratory exposure to toxicants (chemicals or particulates) disrupts lung homeostasis, leading to lung toxicity/injury manifested as pulmonary inflammation, edema, and/or other effects depending on the type and extent of exposure. This emphasizes the need to investigate toxicant type-specific mechanisms in order to understand therapeutic targets. Aquaporins, also known as water channels, are known to play a role in lung homeostasis. In particular, the two major lung aquaporins, AQP5 and AQP1, expressed in the alveolar epithelium and vascular endothelium respectively, allow movement of fluid between the alveolar air space and the associated vasculature. In view of this, the current study focuses on the regulation of lung aquaporins and other targets during inhalation exposure to toxic chemicals (cigarette smoke chemicals) versus toxic particles (carbon nanoparticles), or co-exposure to both, to understand their relevance as markers of injury and intervention. Methodologies: C57BL/6 mice (5-7 weeks old) were used in this study following a protocol approved by the University of Cincinnati Institutional Animal Care and Use Committee (IACUC). The mice were exposed via oropharyngeal aspiration to a multiwall carbon nanotube (MWCNT) particle suspension once (33 µg/mouse) followed by housing for four weeks, to cigarette smoke extract (CSE) at a daily dose of 30 µl/mouse for four weeks, or to a co-exposure using the combined regime. Control groups received vehicles on the same dosing schedule. Lung toxicity/injury was assessed in terms of homeostasis changes in the lung tissue and lumen. Exposed lungs were analyzed for transcriptional expression of specific targets (AQPs, surfactant protein A, Mucin 5b) in relation to tissue homeostasis. Total RNA extracted from lungs using a TRI reagent kit was analyzed by qRT-PCR with gene-specific primers.
Total protein in bronchoalveolar lavage (BAL) fluid was determined with the DC protein estimation kit (Bio-Rad). GraphPad Prism 5.0 (La Jolla, CA, USA) was used for all analyses. Major findings: CNT exposure, alone or as co-exposure with CSE, increased the total protein content in the BAL fluid (lung lumen rinse), implying compromised membrane integrity and cellular infiltration into the lung alveoli. In contrast, CSE alone showed no significant effect. AQP1, required for water transport across the membranes of endothelial cells in the lungs, was significantly upregulated by CNT exposure but downregulated by CSE exposure, and showed an intermediate level of expression in the co-exposure group. Both CNT and CSE exposures significantly downregulated Muc5b and SP-A expression, and the co-exposure showed either no significant effect (Muc5b) or a significant downregulating effect (SP-A), suggesting an increased propensity for infection in the exposed lungs. Conclusions: The current mouse model study showed that both toxicant types, particles (CNT) and chemicals (CSE), cause similar downregulation of lung innate defense targets (SP-A, Muc5b), with a mostly summative effect on co-exposure. However, the two toxicant types differentially induce aquaporin-1, coinciding with correspondingly differential damage to alveolar integrity (vascular permeability). This implies the potential of AQP1 as a differential marker of toxicant type-specific lung injury.
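Relative transcript levels from qRT-PCR data like these are commonly quantified with the 2^(-ΔΔCt) method; a minimal sketch, where the gene pairing and Ct values are made-up illustrations rather than the study's data:

```python
# Relative expression by the 2^(-ΔΔCt) method, as commonly applied to
# qRT-PCR measurements such as the AQP1/Muc5b/SP-A comparisons above.
# The Ct values below are hypothetical, not the study's data.

def fold_change(ct_target_exposed, ct_ref_exposed,
                ct_target_control, ct_ref_control):
    """Fold change of a target gene, normalized to a reference gene."""
    d_ct_exposed = ct_target_exposed - ct_ref_exposed
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_exposed - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical AQP1 Ct in exposed vs control lung, normalized to a
# housekeeping gene: ΔΔCt = (22 - 18) - (24 - 18) = -2
fc = fold_change(22.0, 18.0, 24.0, 18.0)
print(fc)  # 4.0, i.e. ~4-fold upregulation
```

A fold change above 1 indicates upregulation relative to control, below 1 downregulation, matching the direction-of-change language used in the abstract.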

Keywords: aquaporin, gene expression, lung injury, toxicant exposure

Procedia PDF Downloads 157
52 Biophysical and Structural Characterization of Transcription Factor Rv0047c of Mycobacterium Tuberculosis H37Rv

Authors: Md. Samsuddin Ansari, Ashish Arora

Abstract:

Every year, 10 million people fall ill with tuberculosis, one of the oldest known diseases, caused by Mycobacterium tuberculosis. The success of M. tuberculosis as a pathogen stems from its ability to persist in host tissues. Cases of multidrug-resistant (MDR) mycobacteria increase every day, and this resistance is associated with efflux pumps controlled at the level of transcription. The transcription regulators of MDR transporters in bacteria belong to one of four regulatory protein families: AraC, MarR, MerR, and TetR. The phenolic acid decarboxylase repressor (PadR)-like family of transcription regulators is closely related to the MarR family. PadR was first identified as a transcription factor involved in regulating the phenolic acid stress response in various microorganisms, including Mycobacterium tuberculosis H37Rv. Recent research has shown that PadR family transcription factors are global, multifunctional transcription regulators. Rv0047c is a PadR subfamily-1 protein, and we are exploring its biophysical and structural characterization. The Rv0047c gene was amplified by PCR using primers containing EcoRI and HindIII restriction sites, cloned into the pET-NH6 vector, and overexpressed in E. coli DH5α and BL21 (λDE3) cells, followed by purification on a Ni2+-NTA column and size exclusion chromatography. Differential scanning calorimetry (DSC) was used to determine thermal stability: the protein has a transition temperature (Tm) of 55.29 ºC and an enthalpy change (ΔH) of 6.92 kcal/mol. Circular dichroism was used to probe secondary structure and conformation, and fluorescence spectroscopy to study the tertiary structure of the protein. To understand the effect of pH on the structure, function, and stability of Rv0047c, we employed spectroscopic techniques (circular dichroism, fluorescence, and absorbance measurements) over a wide pH range (pH 2.0 to pH 12.0).
At extremes of low and high pH, the protein shows drastic changes in secondary and tertiary structure. EMSA studies showed specific binding of Rv0047c to its own 30-bp promoter region. To determine the effect of complex formation on the secondary structure of Rv0047c, we examined CD spectra of the complex of Rv0047c with the promoter DNA of rv0047. The functional role of Rv0047c was characterized by overexpressing the gene under the control of the hsp60 promoter in Mycobacterium tuberculosis H37Rv. We have predicted the three-dimensional structure of Rv0047c using SWISS-MODEL and Modeller, with validity checked by Ramachandran plot. We performed molecular docking of Rv0047c with DnaA using PatchDock, followed by refinement with FireDock. This makes it possible to identify the binding hot-spot of the receptor molecule with the ligand, the nature of the interface itself, and the conformational change undergone by the protein. We are using X-ray crystallography to resolve the structure of Rv0047c. Overall, the studies show that Rv0047c may function in transcription regulation, provide insight into the activity of Rv0047c across the pH range of the subcellular environment, and help in understanding protein-protein interactions, suggesting a novel target for killing dormant bacteria and a potential strategy for tuberculosis control.

Keywords: mycobacterium tuberculosis, phenolic acid decarboxylase repressor, Rv0047c, circular dichroism, fluorescence spectroscopy, docking, protein-protein interaction

Procedia PDF Downloads 91
51 Metal Contamination in an E-Waste Recycling Community in Northeastern Thailand

Authors: Aubrey Langeland, Richard Neitzel, Kowit Nambunmee

Abstract:

Electronic waste, ‘e-waste’, refers generally to discarded electronics and electrical equipment, from cell phones and laptops to wires, batteries, and appliances. While e-waste represents a transformative source of income in low- and middle-income countries, informal e-waste workers use rudimentary methods to recover materials, simultaneously releasing harmful chemicals into the environment and creating a health hazard for themselves and surrounding communities. Valuable materials such as precious metals, copper, aluminum, ferrous metals, plastics, and components are recycled from e-waste. However, persistent organic pollutants, such as polychlorinated biphenyls (PCBs) and some polybrominated diphenyl ethers (PBDEs), and heavy metals are toxicants contained within e-waste and are of great concern to human and environmental health. The current study seeks to evaluate the environmental contamination resulting from informal e-waste recycling in a predominantly agricultural community in northeastern Thailand. To accomplish this objective, five types of environmental samples were collected between July 2016 and July 2017 and analyzed for concentrations of eight metals commonly associated with e-waste recycling. Rice samples from the community were collected after harvest and analyzed using inductively coupled plasma mass spectrometry (ICP-MS) and graphite furnace atomic absorption spectroscopy (GF-AAS). Soil samples were collected and analyzed using methods similar to those used for the rice samples. Surface water samples were collected and analyzed for three heavy metals using absorption colorimetry. Air samples were collected using a sampling pump and matched-weight PVC filters, then analyzed using inductively coupled argon plasma atomic emission spectroscopy (ICAP-AES). Finally, surface wipe samples were collected from surfaces in homes where e-waste recycling activities occur and were analyzed using ICAP-AES.
Preliminary results [1] indicate that some rice samples have concentrations of lead and cadmium significantly higher than the limits set by the United States Department of Agriculture (USDA) and the World Health Organization (WHO). Similarly, some soil samples show levels of copper, lead, and cadmium more than twice the maximum permissible level set by the USDA and WHO, and significantly higher than in other areas of Thailand. Surface water samples indicate that e-waste recycling activities, particularly the burning of e-waste products, increase levels of cadmium, lead, and copper in nearby surface waters. This is of particular concern given that many of the surface waters tested are used for crop irrigation. Surface wipe samples showed measurable concentrations of metals commonly associated with e-waste, suggesting a danger of ingestion of metals during cooking and other activities; surface contamination is especially relevant to child health. Finally, air sampling showed that the burning of e-waste presents a serious health hazard to workers and the environment through inhalation and deposition [2]. Our research suggests a need for improved methods of e-waste recycling that allow workers to continue this valuable revenue stream in a sustainable fashion that protects both human and environmental health. [1] Statistical analysis to be finished in October 2017 due to follow-up field studies occurring in July and August 2017. [2] Still awaiting complete analytic results.

Keywords: e-waste, environmental contamination, informal recycling, metals

Procedia PDF Downloads 347
50 Aerobic Biodegradation of a Chlorinated Hydrocarbon by Bacillus Cereus 2479

Authors: Srijata Mitra, Mobina Parveen, Pranab Roy, Narayan Chandra Chattopadhyay

Abstract:

Chlorinated hydrocarbons can be a major pollution problem in groundwater as well as soil. Many people come into contact with these chemicals daily, either accidentally or professionally in the laboratory. One of the most common sources of chlorinated hydrocarbon contamination of soil and groundwater is industrial effluent. The wide use and discharge of trichloroethylene (TCE), a volatile chlorinated hydrocarbon from the chemical industry, has led to major water pollution in rural areas. TCE is mainly used as an industrial metal degreaser. Biotransformation of TCE by consortia of anaerobic bacteria can yield the potent carcinogen vinyl chloride (VC), which motivates the search for aerobic degradation routes. For these reasons, the aim of the current study was to isolate and characterize the genes involved in TCE metabolism and to investigate those genes in silico. To our knowledge, only one aromatic dioxygenase system, the toluene dioxygenase of Pseudomonas putida F1, has been shown to be involved in TCE degradation, and this is the first instance of a member of the Bacillus cereus group being used in the biodegradation of trichloroethylene. A novel bacterial strain, 2479, was isolated from an oil depot site at Rajbandh, Durgapur (West Bengal, India) by the enrichment culture technique. It was identified based on a polyphasic approach and ribotyping. The bacterium is gram-positive, rod-shaped, endospore-forming, and capable of degrading trichloroethylene as the sole carbon source. On the basis of phylogenetic data and fatty acid methyl ester analysis, strain 2479 should be placed within the genus Bacillus and the species cereus. However, the present isolate (strain 2479) is unique and differs sharply from the usual Bacillus strains in its biodegrading ability. The Fujiwara test showed that strain 2479 could degrade TCE efficiently. The gene for TCE biodegradation was PCR-amplified from genomic DNA of Bacillus cereus 2479 using todC1 gene-specific primers.
The 600-bp amplicon was cloned into the expression vector pUC18 in the E. coli host XL1-Blue, expressed under the control of the lac promoter, and its nucleotide sequence was determined. The gene sequence was deposited at NCBI under accession no. GU183105. The in silico approach involved predicting the physico-chemical properties of the deduced Tce1 protein using the ProtParam tool. The tce1 gene contained a 342-bp ORF encoding 114 amino acids with a predicted molecular weight of 12.6 kDa; the theoretical pI of the polypeptide was 5.17, molecular formula C559H886N152O165S8, total number of atoms 1770, aliphatic index 101.93, instability index 28.60, and grand average of hydropathicity (GRAVY) 0.152. Three differentially expressed proteins (97.1, 40, and 30 kDa) directly involved in TCE biodegradation were found to react immunologically with antibodies raised against TCE-inducible proteins in Western blot analysis. The present study suggests that the cloned gene product (Tce1) is capable of degrading TCE, as verified chemically.
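The GRAVY value reported above is the mean Kyte-Doolittle hydropathy over all residues, which is straightforward to compute; a minimal sketch, where the example peptide is hypothetical and not the actual Tce1 sequence:

```python
# GRAVY (grand average of hydropathicity), as reported for Tce1 above,
# computed from standard Kyte-Doolittle hydropathy values. The example
# peptide is hypothetical, not the deposited Tce1 sequence.

KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(seq):
    """Average hydropathy over all residues (positive = hydrophobic)."""
    return sum(KYTE_DOOLITTLE[aa] for aa in seq) / len(seq)

print(round(gravy("MKVLIT"), 3))
```

A positive GRAVY, as found for Tce1 (0.152), indicates a mildly hydrophobic protein on average; tools like ProtParam apply exactly this definition over the full sequence.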

Keywords: cloning, Bacillus cereus, in silico analysis, TCE

Procedia PDF Downloads 376
49 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended particulate matter that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to its large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to tackle the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being the leading information processing method in the field of computational intelligence. The aim of this research is to show the influence of particular CNN parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense, distributed network of Airly measurement devices provides access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data.
Due to the specifics of a CNN, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected and compared with models based on linear regression. The numerical tests, carried out on real ‘big’ data, fully confirmed the positive properties of the presented method. Models based on the CNN technique predict PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
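The convolutional and pooling operations performed in the hidden layers can be illustrated in miniature on a 1-D hourly series. The kernel and readings below are made up, and a real CNN learns its weights over multi-channel tensors rather than using a fixed moving average:

```python
# Miniature illustration of the convolution and pooling operations a
# CNN's hidden layers perform, here on a single 1-D hourly PM10 series.
# The kernel is a fixed moving average; a trained CNN learns its weights.

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (no padding, stride 1)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def max_pool(x, size):
    """Non-overlapping max pooling: downsample while keeping peaks."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

# 24 hourly PM10 readings (made-up values, µg/m³)
pm10 = [35, 38, 40, 52, 61, 70, 66, 58, 50, 47, 45, 44,
        43, 46, 55, 63, 72, 80, 77, 69, 60, 54, 49, 46]

smoothed = conv1d(pm10, [1/3, 1/3, 1/3])  # a smoothing "feature map"
pooled = max_pool(smoothed, 2)            # coarser, peak-preserving map
print(len(smoothed), len(pooled))         # 22 11
```

Stacking such layers and ending with a fully connected output layer of 24 units gives the hour-by-hour prediction vector described above.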

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 121
48 The First Complete Mitochondrial Genome of Melon Thrips, Thrips palmi (Thripinae: Thysanoptera): Vector for Tospoviruses

Authors: Kaomud Tyagi, Rajasree Chakraborty, Shantanu Kundu, Devkant Singha, Kailash Chandra, Vikas Kumar

Abstract:

The melon thrips, Thrips palmi, is a serious pest of a wide range of agricultural crops and also acts as a vector for plant viruses (genus Tospovirus, family Bunyaviridae). More molecular data on this species are required to understand its cryptic speciation and evolutionary affiliations. Mitochondrial genomes have been widely used in phylogenetic and evolutionary studies of insects. So far, mitogenomes of five thrips species (Anaphothrips obscurus, Frankliniella intonsa, Frankliniella occidentalis, Scirtothrips dorsalis, and Thrips imaginis) are available in the GenBank database. In this study, we sequenced the first complete mitogenome of T. palmi and compared it with the available thrips mitogenomes. We assembled the mitogenome from whole genome sequencing data generated on an Illumina HiSeq 2500. Annotation was performed with the MITOS web server to estimate the locations of protein-coding genes (PCGs), transfer RNAs (tRNAs), and ribosomal RNAs (rRNAs) and their secondary structures. The boundaries of PCGs and rRNAs were confirmed manually in NCBI. Phylogenetic analyses were performed on the 13 PCGs using maximum likelihood (ML) in PAUP and Bayesian inference (BI) in MrBayes 3.2. The complete mitogenome of T. palmi is 15,333 base pairs (bp), larger than the genomes of A. obscurus (14,890 bp), F. intonsa (15,215 bp), F. occidentalis (14,889 bp), and the S. dorsalis South Asia strain (SA1) (14,283 bp), but smaller than the genomes of T. imaginis (15,407 bp) and the S. dorsalis East Asia strain (EA1) (15,343 bp). As in other thrips species, the mitochondrial genome of T. palmi comprises 37 genes: 13 PCGs, the large and small ribosomal RNA genes (rrnL and rrnS), 22 transfer RNA (tRNA) genes (with one extra gene for trn-Serine), and two A+T-rich control regions (CR1 and CR2). Thirty-one genes are located on the heavy (H) strand and six genes on the light (L) strand.
Six tRNA genes (trnG, trnK, trnY, trnW, trnF, and trnH) were found to be conserved across all thrips mitogenomes in their locations relative to an upstream or downstream protein-coding or rRNA gene. The gene arrangement of T. palmi is very close to that of T. imaginis except for rearrangements of tRNA genes: trnR (arginine) and trnE (glutamic acid), located between cox3 and CR2 in T. imaginis, are translocated to between atp6 and CR1 in T. palmi; trnL1 (leucine) and trnS1 (serine), located between atp6 and CR1 in T. imaginis, are translocated to between cox3 and CR2 in T. palmi. The location of CR1 upstream of the nad5 gene, suggested to be the ancestral condition for thrips of the subfamily Thripinae, was also observed in T. palmi. The maximum likelihood (ML) and Bayesian inference (BI) phylogenetic trees resulted in similar topologies, with T. palmi clustering with T. imaginis. We conclude that more molecular data on diverse thrips species from different hierarchical levels are needed to understand the phylogenetic and evolutionary relationships among them.
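The tRNA rearrangement described above amounts to a swap of flanking regions between the two genomes, which can be checked mechanically. The gene orders below are abridged to the four rearranged tRNAs and their two flanking regions as stated in the text:

```python
# Sketch of the tRNA rearrangement comparison described above: the same
# four tRNAs occupy swapped regions between T. imaginis and T. palmi.
# Gene orders are abridged to the regions named in the abstract.

imaginis = {"atp6-CR1": ["trnL1", "trnS1"], "cox3-CR2": ["trnR", "trnE"]}
palmi    = {"atp6-CR1": ["trnR", "trnE"],   "cox3-CR2": ["trnL1", "trnS1"]}

def translocated(a, b):
    """tRNAs whose flanking region differs between the two genomes."""
    loc_a = {g: region for region, genes in a.items() for g in genes}
    loc_b = {g: region for region, genes in b.items() for g in genes}
    return sorted(g for g in loc_a if loc_a[g] != loc_b.get(g))

print(translocated(imaginis, palmi))  # all four tRNAs have moved
```

Tools such as MITOS report gene order directly, so this kind of region-by-region comparison is how rearrangements between annotated mitogenomes are typically tabulated.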

Keywords: thrips, comparative mitogenomics, gene rearrangements, phylogenetic analysis

Procedia PDF Downloads 145
47 Advanced Bio-Fuels for Biorefineries: Incorporation of Waste Tires and Calcium-Based Catalysts to the Pyrolysis of Biomass

Authors: Alberto Veses, Olga Sanhauja, María Soledad Callén, Tomás García

Abstract:

The appropriate use of renewable sources emerges as a decisive point for minimizing the environmental impact of fossil fuel use. In particular, the use of lignocellulosic biomass is one of the most promising alternatives, since it is the only carbon-containing renewable source that can produce bioproducts similar to fossil fuels, and it does not compete with the food market. Among the processes that can valorize lignocellulosic biomass, pyrolysis is an attractive alternative because it is the only thermochemical process that produces a liquid biofuel (bio-oil) in a simple way, along with solid and gas fractions that can be used as energy sources to support the process. However, to incorporate bio-oils into current infrastructures and process them further in future biorefineries, their quality needs to be improved. Introducing low-cost catalysts and/or incorporating polymer residues into the process are simple, low-cost strategies for directly obtaining advanced bio-oils for future biorefineries in an economic way. Accordingly, based on previous thermogravimetric analyses, a local agricultural waste, grape seeds (GS), was selected as the lignocellulosic biomass, while waste tires (WT) were selected as the polymer residue. CaO was selected as the low-cost catalyst based on the group's previous experience. To this end, a specially designed fixed bed reactor with N₂ as carrier gas was used. This reactor incorporates a vertical mobile liner that allows the feedstock to be introduced into the oven once the selected temperature (550 ºC) is reached, ensuring the higher heating rates needed for the process. Obtaining a well-defined phase distribution in the resulting bio-oil is crucial to ensure the viability of the process.
Thus, once the experiments were carried out, not only were two well-defined layers observed for several mixtures (with up to 40 wt.% WT), but also an upgraded organic phase, which is the one considered for processing in further biorefineries. Radical interactions between GS and WT released during pyrolysis, together with dehydration reactions enhanced by CaO, can promote the formation of better-quality bio-oils. This was reflected in a reduction of the water and oxygen content of the bio-oil and, hence, a substantial increase in its heating value and stability. Moreover, not only was the sulphur content reduced relative to the pyrolysis of WT alone, but the potential negative issues related to the strongly acidic environment of conventional bio-oils were also minimized, as shown by the basic pH and lower total acid numbers. Acidic compounds formed in the pyrolysis, such as CO₂-like substances, can react with the CaO, minimizing the acidity problems associated with lignocellulosic bio-oils. Moreover, this CO₂ capture promotes H₂ production via the water-gas shift reaction, favoring hydrogen-transfer reactions and improving the final quality of the bio-oil. These results show the great potential of grape seeds for catalytic co-pyrolysis with plastic residues to produce a liquid bio-oil that can be considered a high-quality renewable vector.

Keywords: advanced bio-oils, biorefinery, catalytic co-pyrolysis of biomass and waste tires, lignocellulosic biomass

Procedia PDF Downloads 215
46 Shifting Contexts and Shifting Identities: Campus Race-related Experiences, Racial Identity, and Achievement Motivation among Black College Students during the Transition to College

Authors: Tabbye Chavous, Felecia Webb, Bridget Richardson, Gloryvee Fonseca-Bolorin, Seanna Leath, Robert Sellers

Abstract:

There has been recent renewed attention to Black students’ experiences at predominantly White U.S. universities (PWIs), e.g., the #BBUM (“Being Black at the University of Michigan”) and “I, Too, Am Harvard” social media campaigns, and subsequent student protest activities nationwide. These campaigns illuminate how many minority students encounter challenges to their racial/ethnic identities as they enter PWI contexts. Students routinely report experiences such as being ignored or treated as a token in classes, receiving messages of low academic expectations by faculty and peers, being questioned about their academic qualifications or belonging, being excluded from academic and social activities, and being racially profiled and harassed in the broader campus community. Researchers have linked such racial marginalization and stigma experiences to student motivation and achievement. One potential mechanism is through the impact of college experiences on students’ identities, given the relevance of the college context for students’ personal identity development, including personal belief systems around social identities salient in this context. However, little research examines the impact of the college context on Black students’ racial identities. This study examined change in Black college students’ (N=329) racial identity beliefs over the freshman year at three predominantly White U.S. universities. Using cluster analyses, we identified profile groups reflecting different patterns of stability and change in students’ racial centrality (importance of race to overall self-concept), private regard (personal group affect/group pride), and public regard (perceptions of societal views of Blacks) from beginning of year (Time 1) to end of year (Time 2). 
Multinomial logit regression analyses indicated that the racial identity change clusters were predicted by pre-college background (racial composition of high school and neighborhood), as well as college-based experiences (racial discrimination, interracial friendships, and perceived campus racial climate). In particular, experiencing campus racial discrimination was related to high, stable centrality and to decreases in private regard and public regard. Perceiving campus racial climate norms of institutional support for intergroup interactions was related to maintaining low and decreasing private and public regard. Multivariate analysis of variance results showed change cluster effects on achievement motivation outcomes at the end of students’ academic year. Having high, stable centrality and high private regard was related to more positive outcomes overall (academic competence, positive academic affect, academic curiosity, and persistence). Students decreasing in private regard and public regard were particularly vulnerable to negative motivation outcomes. Findings support scholarship indicating both stability in racial identity beliefs and the importance of critical context transitions in racial identity development and adjustment outcomes among emerging adults. Findings are also consistent with research suggesting promotive effects of a strong, positive racial identity on student motivation, as well as research linking awareness of racial stigma to decreased academic engagement.
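The first analytic step, grouping students into change profiles, can be sketched with a plain k-means pass over level and change scores. The abstract says only "cluster analyses"; the algorithm choice, the 1-7 scales, k = 4, and all data below are assumptions for illustration:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: X is (n, d); returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic Time 1 scores and Time 1 -> Time 2 changes for centrality,
# private regard, public regard (hypothetical 1-7 scales, N = 329).
rng = np.random.default_rng(1)
t1 = rng.uniform(1, 7, size=(329, 3))
t2 = np.clip(t1 + rng.normal(0, 0.8, size=(329, 3)), 1, 7)
profiles = np.hstack([t1, t2 - t1])       # cluster on level + change
labels, centers = kmeans(profiles, k=4)
```

Each cluster's center then summarizes a stability/change pattern, which downstream models (the multinomial logit, the MANOVA) treat as the categorical outcome or factor.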

Keywords: diversity, motivation, learning, ethnic minority achievement, higher education

Procedia PDF Downloads 492
45 Relevance of Dosing Time for Everolimus Toxicity on Thyroid Gland and Hormones in Mice

Authors: Dilek Ozturk, Narin Ozturk, Zeliha Pala Kara, Engin Kaptan, Serap Sancar Bas, Nurten Ozsoy, Alper Okyar

Abstract:

Most physiological processes in mammals oscillate in a rhythmic manner, including metabolism and energy homeostasis, locomotor activity, hormone secretion, and immune and endocrine system functions. Endocrine body rhythms are tightly regulated by the circadian timing system. The hypothalamic-pituitary-thyroid (HPT) axis is under circadian control at multiple levels, from the hypothalamus to the thyroid gland. Since the circadian timing system controls a variety of biological functions in mammals, circadian rhythms of biological functions may modify drug tolerability/toxicity depending on the dosing time. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer agent that is active against many cancers. It was also found to be active in medullary thyroid cancer. The aim of this study was to investigate the dosing time-dependent toxicity of everolimus on the thyroid gland and hormones in mice. Healthy C57BL/6J mice were synchronized with a 12h:12h light-dark cycle (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). Everolimus was administered orally to male (5 mg/kg/day) and female mice (15 mg/kg/day) at ZT1 (rest period) and ZT13 (activity period) for 4 weeks; body weight loss, clinical signs, and possible changes in serum thyroid hormone levels (TSH and free T4) were examined. Histological alterations in the thyroid gland were evaluated according to the following criteria: follicular size, colloid density and viscidity, height of the follicular epithelium, and the presence of necrotic cells. Statistical significance of differences was assessed with ANOVA. Study findings included everolimus-related diarrhea, decreased activity, decreased body weight gains, alterations in serum TSH levels, and histopathological changes in the thyroid gland. Decreases in mean body weight gains were more evident in mice treated at ZT1 than at ZT13 (p < 0.001 for both sexes). 
Control thyroid gland sections exhibited well-organized histoarchitecture compared to everolimus-treated groups. Everolimus caused histopathological alterations in the thyroid glands of male (5 mg/kg, slightly) and female mice (15 mg/kg; p < 0.01 for both ZTs compared to their controls) irrespective of dosing time. TSH levels were slightly decreased upon everolimus treatment at ZT13 in both males and females. Conversely, increases in TSH levels were observed when everolimus was administered at ZT1 in both males (5 mg/kg; p < 0.05) and females (15 mg/kg; slightly). No statistically significant alterations in serum free T4 levels were observed. TSH and free T4 are clinically important thyroid hormones, since a number of disease states have been linked to alterations in these hormones. Serum free T4 levels within the normal range in the presence of abnormal serum TSH levels in everolimus-treated mice may suggest subclinical thyroid disease, which may have repercussions on the cardiovascular system as well as on other organs and systems. Our study has revealed histological damage to the thyroid gland induced by subacute everolimus administration; this effect was independent of dosing time. However, based on the body weight changes and clinical signs upon everolimus treatment, tolerability was best following dosing at ZT13 in both males and females. The effects of everolimus on thyroid function deserve further study regarding their clinical importance and chronotoxicity.

Keywords: circadian rhythm, chronotoxicity, everolimus, thyroid gland, thyroid hormones

Procedia PDF Downloads 331
44 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. 
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
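The paper's core claim, that deliberately training past the usual stopping point can pay off on a tiny region-specific sample, can be sketched on a toy stand-in. A real pipeline would fine-tune a Mask R-CNN; the small logistic model, data sizes, and learning rate here are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))            # 40 labelled samples, 5 features
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)      # separable labels

w, b = np.zeros(5), 0.0
for _ in range(5000):                   # deliberately long: fit until overfit
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y                           # gradient of the logistic loss
    w -= 0.5 * X.T @ g / len(y)
    b -= 0.5 * g.mean()

# Training accuracy approaches 1.0: the model has memorized the sample,
# which is exactly the behavior the study exploits for a narrow region.
train_acc = (((X @ w + b) > 0) == y.astype(bool)).mean()
```

The trade-off the abstract describes is that such a model is only trusted inside the region it memorized, with the land-cover classifier acting as the quality check.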

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 153
43 Sustainable Recycling Practices to Reduce Health Hazards of Municipal Solid Waste in Patna, India

Authors: Anupama Singh, Papia Raj

Abstract:

Though Municipal Solid Waste (MSW) is a worldwide problem, its implications are enormous in developing countries, which are unable to provide proper Municipal Solid Waste Management (MSWM) for the large volume of MSW generated. As a result, collected wastes are dumped openly at landfill sites while uncollected wastes remain strewn on the roadside, often clogging drainage. Such unsafe and inadequate management of MSW causes various public health hazards. For example, MSW contaminates soil, surface water, and groundwater on direct contact or through leachate; open burning causes air pollution; and anaerobic digestion within piles of MSW releases greenhouse gases, i.e., carbon dioxide and methane (CO₂ and CH₄), into the atmosphere. Moreover, open dumping can facilitate the spread of vector-borne and waterborne diseases such as cholera, typhoid, and dysentery. Patna, the capital city of Bihar, one of the most underdeveloped states in India, is a unique representation of this situation. Patna has been identified as the ‘garbage city’. Over the last decade there has been an exponential increase in the quantity of MSW generated in Patna. Though a large proportion of this MSW is recyclable in nature, only a negligible portion is recycled. Plastic constitutes the major chunk of the recyclable waste. Plastics contain toxic compounds such as plasticizers, like adipates and phthalates. Pigmented plastic is highly toxic and contains harmful metals such as copper, lead, chromium, cobalt, selenium, and cadmium. The human population becomes vulnerable to an array of health problems as people are exposed to these toxic chemicals multiple times a day through air, water, dust, and food. 
Analysis of health data shows that in Patna there has been an increase in the incidence of specific diseases, such as diarrhoea, dysentery, acute respiratory infection (ARI), asthma, and other chronic respiratory diseases (CRD). This trend can be attributed to improper MSWM. The results were reiterated through a survey (N=127) conducted during 2014-15 in selected areas of Patna. A random sampling method of data collection was used to better understand the relationship between different variables affecting public health due to exposure to MSW and lack of MSWM. The results derived through bivariate and logistic regression analysis of the survey data indicate that segregation of wastes at source, segregation behavior, collection bins in the area, distance of collection bins from residential areas, and transportation of MSW are the major determinants of public health issues. Sustainable recycling is a robust method for MSWM, with environment, society, and economy as its principal concerns. It thus ensures minimal threat to the environment and ecology, consequently improving public health conditions. Hence, this paper concludes that sustainable recycling would be the most viable approach to manage MSW in Patna and would eventually reduce public health hazards.
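A single bivariate association of the kind such a survey analysis reports can be computed as an odds ratio with a 95% confidence interval from a 2x2 table. The counts below are invented, not the study's data:

```python
import math

# Rows: waste segregated at source (yes/no); columns: reported illness.
a, b = 12, 45        # segregation at source: ill, not ill
c, d = 33, 37        # no segregation:        ill, not ill

odds_ratio = (a * d) / (b * c)                  # OR < 1: segregation protective
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # Woolf's SE on the log scale
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

The logistic regression step generalizes this to several determinants at once, with each exponentiated coefficient playing the role of an adjusted odds ratio.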

Keywords: municipal solid waste, Patna, public health, sustainable recycling

Procedia PDF Downloads 300
42 Genetic Polymorphism and Insilico Study Epitope Block 2 MSP1 Gene of Plasmodium falciparum Isolate Endemic Jayapura

Authors: Arsyam Mawardi, Sony Suhandono, Azzania Fibriani, Fifi Fitriyah Masduki

Abstract:

Malaria is an infectious disease caused by Plasmodium sp. The disease has a high prevalence in Indonesia, especially in Jayapura. Vaccines currently under development have not been effective in overcoming malaria. This is due to the high polymorphism in the Plasmodium genome, especially in regions that encode Plasmodium surface proteins. Merozoite Surface Protein 1 (MSP1) of Plasmodium falciparum is a surface protein that plays a role in the invasion of human erythrocytes through the interaction of the Glycophorin A protein receptor and sialic acid on erythrocytes with the Reticulocyte Binding Protein (RBP) and Duffy Adhesion Protein (DAP) ligands on merozoites. MSP1 can be targeted as a specific antigen, and predicted epitope regions can be used for the development of malaria diagnostics and vaccines. MSP1 consists of 17 blocks; each block is dimorphic, marked by the K1 and MAD20 alleles, with the exception of block 2, which has three allelic families: K1, MAD20, and RO33. These polymorphisms cause allelic variations and are implicated in the severity of disease in patients infected with P. falciparum. In addition, MSP1 polymorphism in Jayapura isolates has not been reported, making it an interesting candidate for further characterization as a specific antigen. Therefore, in this study, we analyzed allele polymorphism and identified candidate epitope antigens in block 2 of P. falciparum MSP1. Clinical samples from malaria patients were selected following a consecutive sampling method, with malaria parasites examined microscopically in blood smears on glass slides. Plasmodium DNA was isolated from the blood of malaria-positive patients. The block 2 MSP1 gene was amplified by PCR and cloned using the pGEM-T Easy vector, then transformed into TOP10 E. coli. Positive colonies were selected by blue-white screening. The presence of the target DNA was confirmed by colony PCR and DNA sequencing. 
Furthermore, DNA sequences were analyzed through alignment and construction of a phylogenetic tree using MEGA 6 software, and in silico analysis was performed using IEDB software to predict epitope candidates for P. falciparum. Plasmodium DNA was isolated from a total of 15 patient samples. PCR amplification showed a target gene size of about 1049 bp. Alignment of the MSP1 nucleotide sequences revealed that block 2 MSP1 genes derived from the patient samples were distributed across four allele family groups: K1 (7), MAD20 (1), RO33 (0), and MSP1_Jayapura (10). The most frequently detected allele was the MSP1_Jayapura single allele. There was no significant association of sex, age, or parasitemia density with allele variation (Mann-Whitney U test, p > 0.05), while symptomatic signs differed significantly across detected allele variations (p < 0.05). The in silico study shows a new candidate epitope antigen from the MSP1_Jayapura allele, predicted to be recognized by B cells, 17 amino acids long, at amino acid positions 187 to 203.
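The reported output, a 17-residue B-cell epitope at positions 187-203, ultimately comes from per-residue prediction scores. The sketch below mimics only the final window-selection step; the per-residue scores are random stand-ins, and IEDB's actual predictors (e.g., BepiPred) work differently:

```python
import numpy as np

rng = np.random.default_rng(42)
scores = rng.random(250)                 # fake scores for a 250-aa protein

WIN = 17                                 # epitope length reported in the text
window_means = np.convolve(scores, np.ones(WIN) / WIN, mode="valid")
start = int(window_means.argmax())       # 0-based start of the best window
best = (start + 1, start + WIN)          # 1-based residue positions
```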

Keywords: epitope candidate, insilico analysis, MSP1 P. falciparum, polymorphism

Procedia PDF Downloads 162
41 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach

Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy

Abstract:

Finding a sustainable alternative energy to fossil fuel is an urgent need as various environmental challenges arise in the world. Formic acid (FA) decomposition has therefore been an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which makes it a prime candidate as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to retain the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of engrafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. Using an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2+CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water. 
The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, and measuring gas continuously, along with collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of catalyst preparation and opened new venues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups led to appreciable improvements in the dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of the process parameters, namely temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%), on gas yield was assessed by a Taguchi design-of-experiment based model. Experimental results showed that operating at the lower temperature range (35-50°C) yielded more gas, while catalyst loading and Pd doping wt.% were found to be the most significant factors, with p-values of 0.026 and 0.031, respectively.
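A Taguchi screening of four factors at three levels reduces to computing main effects over an orthogonal array. A minimal sketch with the paper's four factors but invented responses (the abstract reports only ranges and p-values, so the array assignment and the nine gas yields below are assumptions):

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels (0/1/2).
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["temperature", "stirring", "catalyst_loading", "Pd_doping"]
y = np.array([310., 280., 250., 330., 300., 260., 290., 270., 240.])  # mL, fake

# "Larger is better" signal-to-noise ratio; with one replicate per run
# it simplifies to 20*log10(y).
sn = 20 * np.log10(y)
effects = {f: [sn[L9[:, j] == lv].mean() for lv in range(3)]
           for j, f in enumerate(factors)}
ranges = {f: max(e) - min(e) for f, e in effects.items()}  # bigger = stronger
```

Ranking factors by their S/N range is the usual Taguchi screening step; formal p-values like those reported would then come from an ANOVA on the same responses.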

Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles

Procedia PDF Downloads 28
40 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are made of metal because CFRP fabrication costs are high in this size class. The fact that the CFRP manufacturing processes producing the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally more expensive than comparably performing metal parts, which are easier to produce. Fortunately, industry is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials plus an ability to harness Industry 4.0 tools. No longer limited to prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and end-use parts with high aesthetics, unmatched complexity, mass-customization opportunities, and high mechanical performance. 
A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) and post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform, developed by Zürich-based 9T Labs AG, is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used, are matched between filaments and tapes to ensure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low voids and excellent surface finish on A and B sides can be achieved. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be held cost-competitively at production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 75
39 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), the ‘sins’ of opacity and unfairness must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be amplified. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when black-box models are used. However, for this intended purpose, human analysts ‘on the loop’ might not be the best remedy consumers are looking for in credit. This study explores the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will take effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumers’ CWA is imperative. The proposed MAS framework consists of several layers arranged in a specific sequence: first, the Data Layer gathers legitimate predictor sets from traditional sources; then the Decision Support System Layer, whose neural network model is trained using k-fold cross-validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a ‘Bottom Stop’ for analysts to intervene in a timely manner. 
From the analysis, one vital component of this software is the XAI layer. It acts as a transparent curtain over the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search, based on genetic programming, for more advantageous solutions to incomplete evaluation appraisals. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
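What SHAP guarantees, per-feature contributions that sum with a base value to the model's output, can be shown with the closed form for a linear scorer. The weights, features, and zero-mean baseline below are invented; a real deployment would run a SHAP explainer over the trained network:

```python
import numpy as np

# For a linear scorer f(x) = b + w.x, exact Shapley values have the
# closed form phi_i = w_i * (x_i - E[x_i]).
feature_names = ["income", "debt_ratio", "age", "late_payments"]
w = np.array([0.8, -1.2, 0.1, -0.9])    # hypothetical trained weights
b = 0.2                                  # intercept
baseline = np.zeros(4)                   # background expectation E[x] = 0

x = np.array([1.4, 0.6, 0.1, 2.0])       # one applicant's scaled features
phi = w * (x - baseline)                 # per-feature contributions
score = b + w @ x

# Additivity: base value + contributions reconstruct the prediction,
# which is what lets an analyst audit each feature's push on the score.
contributions = dict(zip(feature_names, phi))
```

This additivity property is what makes SHAP suitable for the discrimination check the framework describes: a large negative contribution from a proxy feature is immediately visible to the oversight layer.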

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 16
37 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems

Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra

Abstract:

Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes, so developing standardized and automated methods to improve the speed, efficiency and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection on the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and intertrial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2 × 40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30 °C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a vest in a pouch between the scapulae. All data were analyzed following common procedures. Variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration time constant τ, in addition to relative horizontal force (F₀), theoretical maximal velocity (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity dropped by 0.4 m/s; 3. the time after peak velocity dropped by 0.6 m/s; and 4. the time when the distance integrated from the GPS/GNSS signal reached 40 m. 
Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For goodness-of-fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the 0.4 and 0.6 m/s velocity-decay methods, while the 40-m end had the highest RSS values. For intertrial reliability, the end-of-sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) MSS was achieved had very large to near-perfect ICCs, and the time at the 40-m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should implement end-of-sprint detection either when peak velocity is determined or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. However, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. The protocol was seamlessly integrated into usual training, which shows promise for sprint monitoring in the field with this technology.
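The modeling described above rests on the mono-exponential sprint model v(t) = MSS·(1 − exp(−t/τ)). The sketch below uses invented MSS and τ values (not the study's data), neglects air resistance, and illustrates end-of-sprint detection at near-zero acceleration (method 1) together with the FVP metrics that follow from the fitted parameters.

```python
import math

# Mono-exponential sprint model with illustrative parameters.
MSS, TAU, HZ = 8.5, 1.2, 10                       # m/s, s, GPS sample rate (Hz)

t = [i / HZ for i in range(120)]                  # 12 s of samples
v = [MSS * (1 - math.exp(-ti / TAU)) for ti in t] # modeled velocity trace
a = [MSS / TAU * math.exp(-ti / TAU) for ti in t] # modeled acceleration trace

# End of sprint (method 1 analogue): first sample where acceleration has
# decayed below 0.1 m/s^2, i.e. velocity has effectively plateaued at MSS.
end = next(i for i, ai in enumerate(a) if ai < 0.1)

# FVP metrics follow directly from the fitted parameters:
F0 = MSS / TAU            # relative horizontal force at zero velocity (N/kg)
V0 = MSS                  # theoretical maximal velocity (m/s)
Pmax = F0 * V0 / 4        # relative maximal horizontal power (W/kg)
print(f"end at t = {t[end]:.1f} s, F0 = {F0:.2f}, V0 = {V0:.2f}, Pmax = {Pmax:.2f}")
```

In practice MSS and τ would be estimated by non-linear least squares against the measured velocity trace; the sensitivity discussed in the abstract arises because the fitted τ shifts with the chosen end point.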

Keywords: automated, biomechanics, team-sports, sprint

Procedia PDF Downloads 101
36 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study

Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard

Abstract:

The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study on the implementation of a model for measuring sustainability, intended to inform policy actions for the improvement of sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators, which must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure implemented as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from a worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country, or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic and social aspects related to the evaluation units by means of a set of indicators (criteria). 
The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually, and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions while avoiding aggregation. The separate indices and graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not collapse the calculations into a single aggregate sustainability index, which is often cryptic and difficult to interpret. In addition, it is possible to develop a "back analysis" that explains the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since decision makers are often not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
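A minimal sketch of the TOPSIS logic described above, with an illustrative three-region, three-indicator decision matrix: all values and the equal weights are hypothetical, and for simplicity every criterion is treated as benefit-type (larger is better).

```python
import math

def topsis(matrix, weights):
    """Rank alternatives by closeness to the ideal point (benefit criteria)."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    weighted = [[w * x / n for x, n, w in zip(row, norms, weights)]
                for row in matrix]
    ideal = [max(col) for col in zip(*weighted)]   # best value per criterion
    worst = [min(col) for col in zip(*weighted)]   # worst value per criterion
    scores = []
    for row in weighted:
        d_plus = math.dist(row, ideal)             # distance to ideal point
        d_minus = math.dist(row, worst)            # distance to worst point
        scores.append(d_minus / (d_plus + d_minus))  # closeness coefficient
    return scores

# Three evaluation units x three indicators (environmental, economic, social):
regions = [[0.7, 0.5, 0.9], [0.4, 0.8, 0.6], [0.9, 0.3, 0.4]]
print(topsis(regions, [1 / 3, 1 / 3, 1 / 3]))
```

GeoUmbriaSUIT runs this procedure separately on the environmental, economic and social indicator sets, which is what yields the three unaggregated indices described above.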

Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development

Procedia PDF Downloads 259
35 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into user preferences. Instead of presenting plain information, classifying different aspects of browsing, such as Bookmarks, History, and the Download Manager, into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; they also have security constraints and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual and secure data that cannot be replicated. The proposal extracts different features of the webpage and runs an algorithm to classify it into multiple categories. A Naive Bayes based engine was chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. 
This solution can partition the model into multiple chunks, which in turn facilitates lower memory usage than loading a complete model. Classification of webpages through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. The solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% was achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to apply the classification to other use cases to enhance the browsing experience.
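The core of a lightweight multinomial Naive Bayes engine of the kind described can be sketched as a per-category table of word counts scored in log space with Laplace smoothing; the training snippets below are invented toy examples, not the DMOZ dataset, and the feature extraction from the DOM tree is omitted.

```python
import math
from collections import Counter, defaultdict

# Toy training data: (category, extracted text features). Invented examples.
train = [
    ("sports", "match score team goal league"),
    ("sports", "player team win tournament"),
    ("news",   "election government report today"),
    ("news",   "minister statement press report"),
]

class_docs = defaultdict(list)
for label, text in train:
    class_docs[label].extend(text.split())

vocab = {w for words in class_docs.values() for w in words}
priors = Counter(label for label, _ in train)
total_docs = sum(priors.values())

def classify(text):
    """Score each category in log space; unseen words hit the smoothed floor."""
    best_label, best_score = None, -math.inf
    for label, words in class_docs.items():
        counts = Counter(words)
        score = math.log(priors[label] / total_docs)
        for w in text.split():
            # Laplace (add-one) smoothing over the shared vocabulary.
            score += math.log((counts[w] + 1) / (len(words) + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Because the model is just count tables, the per-category tables could plausibly be stored and loaded as separate chunks, which is the partitioning idea the memory savings above rely on.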

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 139
34 Taiwanese Pre-Service Elementary School EFL Teachers’ Perception and Practice of Station Teaching in English Remedial Education

Authors: Chien Chin-Wen

Abstract:

Collaborative teaching has different teaching models, and station teaching is one type of collaborative teaching. Station teaching is not commonly practiced in elementary school English education, nor commonly introduced in language teacher education programs, in Taiwan. In station teaching, each teacher takes a small part of the instructional content and works with a small number of students. Students rotate between stations, where they receive assignments and instruction from different teachers. The teachers provide the same content to each group, but the instructional method can vary based upon the needs of each group of students. This study explores thirty-four Taiwanese pre-service elementary school English teachers' knowledge about station teaching and the competence they demonstrated in designing activities for, and delivery of, station teaching in English remedial education for six sixth graders in a local elementary school in northern Taiwan. The participants were simultaneously enrolled in an Elementary School English Teaching Materials and Methods class, part of an elementary school teacher education program in a northern Taiwan city. The instructor of the class (Jennifer, pseudonym) collaborated with an English teacher (Olivia, pseudonym) at Maureen Elementary School (pseudonym), an urban elementary school in a northwestern Taiwan city. Of Olivia's students, four male and two female sixth graders needed remedial English education. Olivia chose these six elementary school students because they were in the lowest 5% of their class in terms of English proficiency. The thirty-four pre-service English teachers signed up for and took turns teaching these six sixth graders every Thursday afternoon from four to five o'clock for twelve weeks. Participants signed up and taught in teams of three, except for the last team, which consisted of only two pre-service teachers. 
Each team designed a 40-minute lesson plan on the given language focus (words, sentence patterns, dialogue, phonics) of the assigned unit. Data in this study included the KWLA chart, activity designs, and semi-structured interviews. Data collection lasted for four months, from September to December 2014. Data were analyzed as follows. First, all the notes were read and marked with appropriate codes (e.g., "I don't know," co-teaching, etc.). Second, tentative categories were labeled (e.g., before, after, process, future implication, etc.). Finally, the data were sorted into topics that reflected the research questions on the basis of their relevance. This study has the following major findings. First of all, the majority of participants knew nothing about station teaching at the beginning of the study. After taking the Elementary School English Teaching Materials and Methods course, and after designing and delivering station teaching in an English remedial education program to six sixth graders, they learned that station teaching is co-teaching, and that it includes activity design for different stations and students' rotation from station to station. They demonstrated knowledge and skills in designing activities for vocabulary, sentence patterns, dialogue, and phonics. Moreover, they learned to interact with individual learners and to guide them step by step in learning vocabulary, sentence patterns, dialogue, and phonics. However, they were still incompetent in classroom management, time management, English, and designing diverse and meaningful activities for elementary school students at different English proficiency levels. 
Hence, language teacher education programs are recommended to integrate station teaching to help pre-service teachers become equipped with eight kinds of knowledge and competence: linguistic knowledge, content knowledge, general pedagogical knowledge, curriculum knowledge, knowledge of learners and their characteristics, pedagogical content knowledge, knowledge of educational contexts, and knowledge of education's ends and purposes.

Keywords: co-teaching, competence, knowledge, pre-service teachers, station teaching

Procedia PDF Downloads 406
33 Translation, Cross-Cultural Adaption, and Validation of the Vividness of Movement Imagery Questionnaire 2 (VMIQ-2) to Classical Arabic Language

Authors: Majid Alenezi, Abdelbare Algamode, Amy Hayes, Gavin Lawrence, Nichola Callow

Abstract:

The purpose of this study was to translate and culturally adapt the Vividness of Movement Imagery Questionnaire-2 (VMIQ-2) from English to produce a new Arabic version (VMIQ-2A), and to evaluate the reliability and validity of the translated questionnaire. The questionnaire assesses how vividly and clearly individuals are able to imagine themselves performing everyday actions. Its purpose is to measure individuals' ability to conduct movement imagery, which can be defined as "the cognitive rehearsal of a task in the absence of overt physical movement." Movement imagery has been introduced in physiotherapy as a promising intervention technique, especially when physical exercise is not possible (e.g., pain, immobilisation). Considerable evidence indicates that movement imagery interventions improve physical function, but to maximize efficacy it is important to know the imagery abilities of the individuals being treated. Given the increase in the global sharing of knowledge, it is desirable to use standard measures of imagery ability across languages and cultures, which motivated this project. The translation procedure followed guidelines from the Translation and Cultural Adaptation group of the International Society for Pharmacoeconomics and Outcomes Research and involved the following phases. Preparation: the original VMIQ-2 was adapted slightly to provide additional information and simplified grammar. Forward translation: three native speakers resident in Saudi Arabia translated the original VMIQ-2 from English to Arabic, following instructions to preserve meaning (not literal translation) and cultural relevance. Reconciliation: the project manager (first author), the primary translator and a physiotherapist reviewed the three independent translations to produce a reconciled first Arabic draft of the VMIQ-2A. Backward translation: a fourth translator (a native Arabic speaker fluent in English) literally translated the reconciled first Arabic draft back into English. 
The project manager and two study authors compared the English back-translation to the original VMIQ-2 and produced the second Arabic draft. Cognitive debriefing: to assess participants' understanding of the second Arabic draft, seven native Arabic speakers resident in the UK completed the questionnaire, rated the clarity of the questions, specified difficult words or passages, and wrote in their own words their understanding of key terms. Following review of this feedback, a final Arabic version was created. 142 native Arabic speakers completed the questionnaire in community meeting places or at home; a subset of 44 participants completed the questionnaire a second time one week later. Results showed the translated questionnaire to be valid and reliable. Correlation coefficients indicated good test-retest reliability, and Cronbach's α indicated high internal consistency. Construct validity was tested in two ways. Imagery ability scores have been found to be invariant across gender; this result was replicated within the current study, assessed by an independent-samples t-test. Additionally, experienced sports participants have higher imagery ability than those less experienced; this result was also replicated within the current study, assessed by analysis of variance, supporting construct validity. The results provide preliminary evidence that the VMIQ-2A is reliable and valid for use with a general population of native Arabic speakers. Future research will include validation of the VMIQ-2A in a larger sample, and testing of its validity in specific patient populations.
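For illustration, Cronbach's α, the internal-consistency statistic reported above, can be computed directly from an item-by-respondent score matrix; the responses below are fabricated, not the study's data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (same respondents)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent totals
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

responses = [          # 3 items answered by 5 respondents (1-5 ratings), invented
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 4, 2, 4, 5],
]
print(f"alpha = {cronbach_alpha(responses):.3f}")
```

Values above roughly 0.7-0.8 are conventionally read as acceptable-to-good internal consistency, which is the criterion behind the "high internal consistency" claim above.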

Keywords: motor imagery, physiotherapy, translation and validation, imagery ability

Procedia PDF Downloads 305
32 Implementing Equitable Learning Experiences to Increase Environmental Awareness and Science Proficiency in Alabama’s Schools and Communities

Authors: Carly Cummings, Maria Soledad Peresin

Abstract:

Alabama has a long history of racial injustice and unsatisfactory educational performance. In the 1870s, Jim Crow laws segregated public schools and disproportionately allocated funding and resources to white institutions across the South. Despite the Supreme Court ruling to integrate schools following Brown v. Board of Education in 1954, Alabama's school system continued to exhibit signs of segregation, compounded by "white flight" and the establishment of exclusive private schools, which still exist today. This discriminatory history has had a lasting impact on the state's education system, reflected in modern school demographics and achievement data. It is well known that Alabama struggles with educational performance, especially in science education. On average, minority groups score the lowest in science proficiency. In Alabama, minority populations are concentrated in a region known as the Black Belt, which was once home to countless slave plantations and was the epicenter of the Civil Rights Movement. Today, the Black Belt is characterized by a high density of woodlands and plays a significant role in Alabama's leading economic industry: forest products. Given the economic importance of forestry and agriculture to the state, environmental science proficiency is essential to its stability; however, it is neglected in the areas where it is needed most. To better understand the inequity of science education within Alabama, our study first investigates how geographic location, demographics and school funding relate to science achievement scores, using ArcGIS and Pearson's correlation coefficient. Additionally, our study explores the implementation of a relevant, problem-based, active-learning lesson in schools. Relevant learning engages students by connecting material to their personal experiences. Problem-based active learning involves real-world problem solving through hands-on experiences. 
Given Alabama's significant woodland coverage, educational materials on forest products were developed with consideration of their relevance to students, especially those located in the Black Belt. Furthermore, to incorporate problem solving and active learning, the lesson centered on students using forest products to solve environmental challenges such as water pollution, an increasing challenge within the state due to climate change. Pre- and post-assessment surveys were provided to teachers to measure the effectiveness of the lesson. In addition to pedagogical practices, community and mentorship programs are known to positively impact educational achievement. To this end, our work examines the results of surveys measuring education professionals' attitudes toward a local mentorship group within the Black Belt and its potential to address environmental and science literacy. Additionally, our study presents survey results from participants who attended an educational community event, gauging its effectiveness in increasing environmental awareness and science proficiency. Our results demonstrate positive improvements in environmental awareness and science literacy with relevant pedagogy, mentorship, and community involvement. Implementing these practices can help provide equitable and inclusive learning environments and can better equip students with the skills and knowledge needed to bridge this historic educational gap within Alabama.
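The Pearson correlation analysis mentioned above can be sketched in a few lines; the per-pupil funding and proficiency figures below are fabricated for illustration only, not Alabama data.

```python
import math

def pearson_r(xs, ys):
    """Pearson's r: covariance of the pairs divided by the product of
    the standard deviations (computed here via sums of squared deviations)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

funding = [8.2, 9.1, 7.5, 10.3, 6.8]   # hypothetical $k per pupil
scores = [61, 68, 55, 74, 52]          # hypothetical science proficiency %
print(f"r = {pearson_r(funding, scores):.3f}")
```

In the study itself such coefficients would be computed per school or district and then mapped in ArcGIS to expose the geographic pattern.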

Keywords: equitable education, environmental science, environmental education, science education, racial injustice, sustainability, rural education

Procedia PDF Downloads 47
31 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, after it was found that food heated almost instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material's ability to absorb electromagnetic energy and dissipate it in the form of heat. Many energy-intense industries could benefit from electromagnetic heating, since many raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material intensively used in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study continuous limestone processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell's equations, as well as the coupling between the heat transfer and chemical interfaces. In addition, a controller was implemented to optimize the overall heating efficiency and to control the stability of the numerical model. This was done by continuously matching the cavity impedance and predicting the energy required by the system, avoiding energy inefficiencies. The controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone. 
The numerical model considered this effect and took advantage of this interaction. The model proved highly unstable when solving non-linear temperature distributions: limestone has a dielectric loss response that increases with temperature and a low thermal conductivity, so it is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five different scenarios were tested, with material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating tube rotation for mixing enhancement proved beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed at the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% at the 80% fill ratio, indicating that this is the optimal material fill for the tube. Electric field peak detachment was observed for the 100% fill ratio, explaining its lower efficiency compared to 80%. Microwave technology has thus been demonstrated to be an important ally for the decarbonization of the cement industry.

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 156
30 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives, since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to recognize actions correctly. Although building action concepts is a life-long process, repeated throughout life, we are very efficient in applying our learned concepts to analyzing motions and recognizing actions. Experiments in which subjects observed actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture using growing grid layers is proposed. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. These time-invariant vectors are sent to the second-layer growing grid for categorization, which creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance was evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, and Pick Up and Throw. 
The growing grid architecture was trained using several random selections of generalization test data, taking on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior in the action recognition task: the SOM architecture takes around 150 epochs for each training of the first-layer SOM and 1200 epochs for each training of the second-layer SOM, and achieves an average recognition accuracy of 90% on generalization test data. In summary, the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids, the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever representational demand is high.
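The growth mechanism can be illustrated with a toy one-dimensional sketch: a SOM-style winner update plus a heuristic that periodically inserts a new unit beside the most frequently winning one, i.e. where representational demand is highest. All parameters and the 1-D input are illustrative; this is not the paper's implementation.

```python
import random

random.seed(0)

units = [0.2, 0.8]                 # initial unit weights (1-D inputs)
hits = [0, 0]                      # win counter per unit
LR, GROW_EVERY = 0.3, 25           # learning rate, growth interval (steps)

data = [random.random() for _ in range(200)]
for step, x in enumerate(data, 1):
    # Best-matching unit: nearest weight to the input.
    bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
    units[bmu] += LR * (x - units[bmu])            # move winner toward input
    for nb in (bmu - 1, bmu + 1):                  # weaker neighbourhood update
        if 0 <= nb < len(units):
            units[nb] += 0.5 * LR * (x - units[nb])
    hits[bmu] += 1
    # Growth phase: insert a new unit beside the busiest one.
    if step % GROW_EVERY == 0:
        busiest = max(range(len(units)), key=hits.__getitem__)
        nb = max(0, busiest - 1)
        units.insert(busiest, (units[busiest] + units[nb]) / 2)
        hits = [0] * len(units)

print(len(units), sorted(round(u, 2) for u in units))
```

The real architecture does this over a 2-D grid of high-dimensional posture vectors, but the principle is the same: the map's size is set by the data rather than fixed in advance, which is the flexibility advantage over a plain SOM noted above.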

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 138
29 Comparing Community Health Agents, Physicians and Nurses in Brazil's Family Health Strategy

Authors: Rahbel Rahman, Rogério Meireles Pinto, Margareth Santos Zanchetta

Abstract:

Background: Existing shortcomings of current health-service delivery include poor teamwork, competencies that do not address consumer needs, and episodic rather than continuous care. Brazil's Sistema Único de Saúde (Unified Health System, UHS) is acknowledged worldwide as a model for delivering community-based care through interdisciplinary Estratégia Saúde da Família (FHS; Family Health Strategy) teams, comprised of community health agents (in Portuguese, Agentes Comunitários de Saúde, ACS), nurses, and physicians. FHS teams are mandated to collectively offer clinical care, disease prevention services, vector control, health surveillance and social services. Our study compares medical providers (nurses and physicians) and community-based providers (ACS) on their perceptions of work environment, professional skills, cognitive capacities and job context. Global health administrators and policy makers can draw on similarities and differences across care providers to develop interprofessional training for community-based primary care. Methods: Cross-sectional data were collected from 168 ACS, 62 nurses and 32 physicians in Brazil. We compared providers' demographic characteristics (age, race, and gender) and job context variables (caseload, work experience, work proximity to the community, length of commute, and familiarity with the community). Providers' perceptions were compared with respect to their work environment (work conditions and work resources), professional skills (consumer input, interdisciplinary collaboration, efficacy of FHS teams, work methods and decision-making autonomy), and cognitive capacities (knowledge and skills, skill variety, confidence and perseverance). Descriptive and bivariate analyses, such as Pearson chi-square and analysis of variance (ANOVA) F-tests, were performed to draw comparisons across providers. Results: The majority of participants were ACS (64%), followed by nurses (24%) and physicians (12%). 
The majority of nurses and ACS identified as mixed race (ACS, n=85; nurses, n=27); most physicians identified as male (n=16; 52%) and white (n=18; 58%). Physicians were less likely to incorporate consumer input and demonstrated greater decision-making autonomy than nurses and ACS. ACS reported the highest levels of knowledge and skills but the least confidence compared to nurses and physicians. ACS, nurses, and physicians all believed that FHS teams improved the quality of health in their catchment areas, though nurses tended to disagree that interdisciplinary collaboration facilitated their work. Conclusion: To our knowledge, there has been no study comparing key demographic and cognitive variables across ACS, nurses, and physicians in the context of their work environment and professional training. We suggest that global health systems can leverage the diverse perspectives of providers to implement a community-based primary care model grounded in interprofessional training. Our study underscores the need for in-service training to instill reflective skills in providers, improve the communication skills of medical providers, and strengthen the curative skills of ACS. Greater autonomy needs to be extended to community-based providers to offer care integral to addressing consumer and community needs.
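The bivariate comparisons described above (Pearson chi-square for categorical variables, ANOVA F-tests for scale scores across the three provider groups) can be sketched with SciPy; the counts and scores below are invented for illustration only, though the column totals mirror the sample sizes reported (168 ACS, 62 nurses, 32 physicians):

```python
from scipy.stats import chi2_contingency, f_oneway

# Hypothetical 2x3 contingency table: gender (rows) by provider type
# (columns: ACS, nurses, physicians); counts are invented.
table = [[120, 50, 16],   # female
         [ 48, 12, 16]]   # male
chi2, p_gender, dof, expected = chi2_contingency(table)

# Hypothetical perceived-autonomy scores per provider group.
acs        = [2.1, 2.4, 2.0, 2.3, 2.2]
nurses     = [3.0, 3.2, 2.9, 3.1, 3.3]
physicians = [4.1, 4.0, 4.3, 4.2, 3.9]
f_stat, p_autonomy = f_oneway(acs, nurses, physicians)

print(p_gender, p_autonomy)
```

A small p-value in the first test would indicate that gender composition differs across provider types; in the second, that mean perceived autonomy differs across groups, which post-hoc comparisons would then localize.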

Keywords: global health systems, interdisciplinary health teams, community health agents, community-based care

Procedia PDF Downloads 215
28 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We propose a method for processing, analyzing, and presenting ground photographs with the following components: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones are supplemented; 4) single or multiple ground-level landscape photographs are used to develop specialized geoinformation layers, schematic maps, or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. Each photo is matched with a polygonal geoinformation layer: a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Visibility areas are calculated in a geoinformation system within each sector using a digital relief model of the study area and visibility analysis functions. Superposition of the visibility sectors corresponding to different camera viewpoints allows landscape photos to be matched with each other to create a complete and coherent representation of the space in question. User-defined data or phenomena can be marked on the images and then superimposed over the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language.
The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal visibility-sector layers for a large number of camera viewpoints are topologically superimposed, a layer is formed showing which sections of the entire study area are displayed in the photographs. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. This procedure makes it possible to determine which photos display a specific area and from which camera viewpoints it is visible; this information may be obtained either as a query on the map or as a query against the layer's attribute table. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 to 2023. It has been successfully used in combination with other ground-based and remote sensing methods of studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
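The sector-superposition logic (union of per-viewpoint visibility areas, gap detection, and querying which photos show a given area) can be illustrated with a minimal grid-based sketch in Python. The viewpoint names and visibility sets here are hypothetical; in the actual workflow, each viewshed would come from GIS visibility analysis over a digital relief model:

```python
# Each camera viewpoint maps to the set of grid cells (row, col) visible
# from it, i.e., a rasterized stand-in for its polygonal visibility sector.
visibility = {
    "viewpoint_A": {(0, 0), (0, 1), (1, 1)},
    "viewpoint_B": {(1, 1), (1, 2), (2, 2)},
}

# The study area as a 3x3 grid of cells.
study_area = {(r, c) for r in range(3) for c in range(3)}

# Topological superposition of sectors: cells covered by at least one photo.
covered = set().union(*visibility.values())

# Cells that never appear in any photo are assessed as gaps.
gaps = study_area - covered

def photos_showing(cell):
    """Return the viewpoints whose photos display the given cell."""
    return sorted(name for name, cells in visibility.items() if cell in cells)

print(photos_showing((1, 1)), sorted(gaps))
```

In a GIS, the same operations would be polygon union and difference over the visibility-sector layers, with `photos_showing` realized as an attribute-table or map query.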

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 32
27 Sexuality Education through Media and Technology: Addressing Unmet Needs of Adolescents in Bangladesh

Authors: Farhana Alam Bhuiyan, Saad Khan, Tanveer Hassan, Jhalok Ranjon Talukder, Syeda Farjana Ahmed, Rahil Roodsaz, Els Rommes, Sabina Faiz Rashid

Abstract:

'Breaking the Shame' is a three-year (2015-2018) qualitative implementation research project investigating several aspects of sexual and reproductive health and rights (SRHR) for adolescents living in Bangladesh, where the scope for adolescents to learn about SRHR issues is limited by cultural and religious taboos. This study adds to the ongoing discussions around adolescents' SRHR needs and aims to 1) understand the overall SRHR needs of urban and rural unmarried female and male adolescents and the challenges they face, 2) explore existing gaps in the content of the SRHR curriculum, and 3) address some critical knowledge gaps by developing and implementing innovative SRHR educational materials. 18 in-depth interviews (IDIs) and 10 focus-group discussions (FGDs) with boys, and 21 IDIs and 14 FGDs with girls, aged 13-19 and from both urban and rural settings, were conducted. Curriculum materials from two leading organizations, Unite for Body Rights (UBR) Alliance Bangladesh and the BRAC Adolescent Development Program (ADP), were also reviewed, along with discussions with 12 key program staff. This paper critically analyses the relevance of some of the SRHR topics that are covered, the challenges with existing pedagogic approaches, and key sexuality issues that are not covered in the content but are important for adolescents. Adolescents asked for content and guidance on a number of topics that remain missing from the core curriculum, such as emotional coping mechanisms (particularly in relationships), bullying, the impact of exposure to porn, and sexual performance anxiety. Other core areas of concern were the effects of masturbation, condom use, and sexual desire and orientation, which are mentioned in the content but never discussed properly, resulting in confusion. In the absence of open discussion around sexuality, porn becomes a source of information for adolescents.
For these reasons, several myths and misconceptions regarding SRHR issues such as the body, sexuality, agency, and gender roles persist. The pedagogical approach is very didactic, and teachers felt uncomfortable discussing certain SRHR topics due to cultural taboos, shame, and stigma. Certain topics are favored, such as family planning and menstruation, and presented with an emphasis on biology and risk. A rigid, formal teaching style and hierarchical power relations between students and most teachers discourage questions and frank conversations. Pedagogical approaches within classrooms play a critical role in the sharing of knowledge. The paper also describes the pilot approaches to implementing new content in the SRHR curriculum. After a review of findings, three areas raised by adolescents were selected as critically important: 1) myths and misconceptions, 2) emotional management challenges, and 3) how to use a condom. Technology-centric educational materials, such as a web-based information platform and YouTube videos, were chosen because they allow adolescents to bypass gatekeepers and learn facts and information from a legitimate educational site. In the era of social media, when information is always a click away, adolescents need sources that are reliable and not overwhelming. The research aims to ensure that adolescents learn and apply knowledge effectively by creating the new materials and making them accessible to adolescents.

Keywords: adolescents, Bangladesh, media, sexuality education, unmet needs

Procedia PDF Downloads 204
26 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper used the artificial intelligence patent database of the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, with early indicators of patents as features, the paper predicted the impact of patents along three dimensions: technical, social, and economic. These dimensions comprise the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (SHapley Additive exPlanations) method was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance.
Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data, from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered as they could have an impact on the influence of patents.
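The prediction-plus-attribution pipeline described above can be sketched in miniature. Since the paper's actual stack (LightGBM + SHAP) involves third-party libraries not shown here, this sketch substitutes scikit-learn's gradient boosting and permutation importance as stand-ins; the feature names and synthetic data are invented for illustration and are not the paper's dataset:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600
features = ["novelty", "n_owners", "n_backward_citations", "n_independent_claims"]

# Synthetic early indicators; "novelty" is made the dominant driver of the
# target, loosely mirroring the paper's finding for technical impact.
X = rng.normal(size=(n, len(features)))
y = 3.0 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance plays the role of SHAP here: how much does shuffling
# each feature degrade held-out predictions?
imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
print(ranked)
```

Unlike permutation importance, SHAP additionally gives the sign and per-instance magnitude of each feature's contribution, which is what allows the paper to report, for example, that the number of applicants has a negative effect on predicted social impact.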

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 23