Search results for: process data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 35057

17 Utilization of Developed Simple Sequence Repeat Markers for Dalmatian Pyrethrum (Tanacetum cinerariifolium) in Preliminary Genetic Diversity Study on Natural Populations

Authors: F. Varga, Z. Liber, J. Jakše, A. Turudić, Z. Šatović, I. Radosavljević, N. Jeran, M. Grdiša

Abstract:

Dalmatian pyrethrum (Tanacetum cinerariifolium (Trevir.) Sch. Bip.; Asteraceae), a source of the commercially dominant plant insecticide pyrethrin, is a species endemic to the eastern Adriatic. Genetic diversity of T. cinerariifolium was previously studied using amplified fragment length polymorphism (AFLP) markers. However, microsatellite markers (simple sequence repeats, SSRs) are more informative because they are codominant, highly polymorphic, locus-specific, and more reproducible, and thus are most often used to assess the genetic diversity of plant species. Dalmatian pyrethrum is an outcrossing diploid (2n = 18) whose large genome size and highly repetitive DNA content have prevented the success of the traditional approach to SSR marker development. The advent of next-generation sequencing, combined with a specifically developed method, recently enabled the development of, to the authors' best knowledge, the first set of SSRs for genomic characterization of Dalmatian pyrethrum, which is essential from the perspective of plant genetic resources conservation. To evaluate the effectiveness of the developed SSR markers in genetic differentiation of Dalmatian pyrethrum populations, a preliminary genetic diversity study was conducted on 30 individuals from three geographically distinct natural populations in Croatia (the northern Adriatic island of Mali Lošinj, the southern Adriatic island of Čiovo, and Mount Biokovo) based on 12 SSR loci. Analysis of molecular variance (AMOVA) by randomization test with 10,000 permutations was performed in Arlequin 3.5. The average number of alleles per locus, observed and expected heterozygosity, probability of deviations from Hardy-Weinberg equilibrium, and inbreeding coefficient were calculated using GENEPOP 4.4. Genetic distance based on the proportion of shared alleles (DPSA) was calculated using MICROSAT. Cluster analysis using the neighbor-joining method with 1,000 bootstraps was performed with PHYLIP to generate a dendrogram. The results of the AMOVA analysis showed that the total SSR diversity was 23% within and 77% between the three populations. A slight deviation from Hardy-Weinberg equilibrium was observed in the Mali Lošinj population. Allelic richness ranged from 2.92 to 3.92, with the highest number of private alleles observed in the Mali Lošinj population (17). The average observed DPSA among the 30 individuals was 0.557. The highest DPSA (0.875) was observed between several pairs of Dalmatian pyrethrum individuals from the Mali Lošinj and Mt. Biokovo populations, and the lowest between two individuals from the Čiovo population. Neighbor-joining trees, based on DPSA, grouped individuals into clusters according to their population affiliation. The separation of the Mt. Biokovo clade was supported (bootstrap value 58%), which is consistent with the previous AFLP study, in which the isolated Mt. Biokovo populations differed from the rest of the populations. The developed SSR markers are an effective tool for assessing the genetic diversity and structure of natural Dalmatian pyrethrum populations. These preliminary results are encouraging for a future comprehensive study with a larger sample size across the species' range. Combined with biochemical data, these highly informative markers could help identify potential genotypes of interest for future development of breeding lines and cultivars that are both resistant to environmental stress and high in pyrethrins. 
Acknowledgment: This work has been supported by the Croatian Science Foundation under the project ‘Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.) insecticidal potential’ (PyrDiv) (IP-06-2016-9034) and by project KK.01.1.1.01.0005, Biodiversity and Molecular Plant Breeding, at the Centre of Excellence for Biodiversity and Molecular Plant Breeding (CoE CroP-BioDiv), Zagreb, Croatia.
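For readers unfamiliar with the proportion-of-shared-alleles distance (DPSA) reported above, the following minimal Python sketch illustrates how such a distance can be computed from codominant diploid SSR genotypes. The genotype encoding and example values are hypothetical and do not reproduce the study's data or the exact MICROSAT implementation.

```python
# Illustrative sketch: proportion-of-shared-alleles distance (DPSA)
# between two diploid individuals genotyped at several SSR loci.
# Genotypes are hypothetical (allele, allele) tuples per locus.

def dpsa(ind_a, ind_b):
    """DPSA = 1 - mean over loci of (shared alleles / 2) for diploids."""
    assert len(ind_a) == len(ind_b), "individuals must share the same loci"
    shared_fraction = 0.0
    for (a1, a2), (b1, b2) in zip(ind_a, ind_b):
        alleles_b = [b1, b2]
        shared = 0
        for allele in (a1, a2):
            if allele in alleles_b:
                shared += 1
                alleles_b.remove(allele)   # each allele copy can be matched once
        shared_fraction += shared / 2.0    # 0, 0.5 or 1 per diploid locus
    return 1.0 - shared_fraction / len(ind_a)

# Hypothetical 12-locus genotypes (allele sizes in base pairs)
ind1 = [(150, 154), (201, 201), (98, 102)] + [(120, 124)] * 9
ind2 = [(150, 158), (201, 205), (98, 98)] + [(120, 124)] * 9

print(round(dpsa(ind1, ind2), 3))   # 0.125 for these toy genotypes
```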

Keywords: Asteraceae, genetic diversity, genomic SSRs, NGS, pyrethrum, Tanacetum cinerariifolium

Procedia PDF Downloads 95
16 Beyond Bindis, Bhajis, Bangles, and Bhangra: Exploring Multiculturalism in Southwest England Primary Schools, Early Research Findings

Authors: Suparna Bagchi

Abstract:

Education as a discipline will probably be shaped by the importance it places on a conceptual, curricular, and pedagogical need to shift the emphasis toward transformative classrooms working for positive change through cultural diversity. Awareness of cultural diversity and race equality has heightened following George Floyd’s killing in the USA in 2020. This increasing awareness is particularly relevant in areas of historically low ethnic diversity which have lately experienced a rise in ethnic minority populations and where inclusive growth is a challenge. This research study aims to explore the perspectives of practitioners, students, and parents towards multiculturalism in four South West England primary schools. A qualitative case study methodology framed by sociocultural theory has been adopted. Data were collected through virtually conducted semi-structured interviews with school practitioners and parents, observation of students’ classroom activities, and documentary analysis of classroom displays. Although one-third of the school population includes ethnically diverse children, BAME (Black, Asian, and Minority Ethnic) characters featured in children's books published in Britain in 2019 were almost invisible, let alone a BAME main character. The Office for Standards in Education, Children's Services and Skills (Ofsted) is vocal about extending the Curriculum beyond the academic and technical arenas for pupils’ broader development and the creation of an understanding and appreciation of cultural diversity. However, race equality and community cohesion, which could help in students’ broader development, are not among Ofsted’s school inspection criteria. The absence of culturally diverse content in the school curriculum, highlighted by the 1985 Swann Report and the 2007 Ajegbo Report, makes England’s National Curriculum look like a Brexit policy three decades before Brexit. A revised National Curriculum may be the starting point, with teachers as curriculum framers playing a significant part. Task design is crucial, as teachers can place equal importance on the interwoven elements of “how”, “what” and “why” the task is taught. Teachers need to build confidence in encouraging difficult conversations around racism, fear, indifference, and ignorance, breaking stereotypical barriers and thus helping to create students’ conception of a multicultural Britain. Research shows that trainee teachers in predominantly White areas often exhibit confined perspectives while educating children. Irrespective of geographical location, school teachers can be equipped with the culturally responsive initial and continuous professional development necessary to impart multicultural education. This may aid in the reduction of employees’ unconscious bias. This becomes distinctly pertinent to avoid horrific cases in the future like the recent one in Hackney, where a Black teenager was strip-searched while on her period after being wrongly suspected of cannabis possession. Early research findings show participants’ eagerness for more ethnic diversity content incorporated in teaching and learning. However, schools are considerably dependent on the knowledge-focused Primary National Curriculum in England. Moreover, they handle issues around the intersectionality of disability, poverty, and gender. Teachers were trained at a time when foregrounding ethnicity was not common practice. Therefore, preoccupied with Curriculum requirements, intersectionality issues, and teacher preparation, schools struggle to maintain momentum on ethnic diversity.

Keywords: case study, curriculum decolonisation, inclusive education, multiculturalism, qualitative research in COVID-19 times

Procedia PDF Downloads 91
15 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-Yne Click Reaction

Authors: Leila Safazadeh, Brad Berron

Abstract:

Self-assembled monolayers have tremendous impact in interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities in tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold surfaces (SPR or QCM), these surfaces must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a low-packed monolayer through a photoinitiated thiol-yne reaction. Orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl, amine, alcohol, and alkyl terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry, and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure which allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect this data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology science in the following manner: Molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted, and there is not a bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting of not only physical cues (as seen in current imprinted monolayer techniques) but also complementary chemical cues. This is accomplished through photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student to broaden my appreciation of sensor development beyond surface chemistry.

Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting

Procedia PDF Downloads 205
14 The Effect of Using EMG-Based Luna Neurorobotics for Strengthening of the Affected Side in Chronic Stroke Patients - A Retrospective Study

Authors: Surbhi Kaura, Sachin Kandhari, Shahiduz Zafar

Abstract:

Chronic stroke, characterized by persistent motor deficits, often necessitates comprehensive rehabilitation interventions to improve functional outcomes and mitigate long-term dependency. Luna neurorobotic devices, integrated with EMG feedback systems, provide an innovative platform for facilitating neuroplasticity and functional improvement in stroke survivors. This retrospective study aims to investigate the impact of EMG-based Luna neurorobotic interventions on the strengthening of the affected side in chronic stroke patients. Stroke is a debilitating condition that, when not effectively treated, can result in significant deficits and lifelong dependency. Common issues like neglecting the use of limbs can lead to weakness in chronic stroke cases. In rehabilitation, active patient participation significantly activates the sensorimotor network during motor control, unlike passive movement. This study aims to assess how electromyography-triggered (EMG-triggered) robotic treatments affect walking, ankle muscle force after an ischemic stroke, and the coactivation of agonist and antagonist muscles, which contributes to neuroplasticity with the assistance of biofeedback using robotics. Methods: The study utilized robotic techniques based on electromyography (EMG) for daily rehabilitation in long-term stroke patients, offering feedback and monitoring progress. Each patient received one session per day for two weeks, with the intervention group undergoing 45 minutes of robot-assisted training and exercise at the hospital, while the control group performed exercises at home. Eight participants with impaired motor function and gait after stroke were involved in the study. EMG-based biofeedback exercises were administered through the LUNA neurorobotic machine, progressing from trigger-and-release mode to trigger-and-hold, and later transitioning to dynamic mode. Assessments were conducted at baseline and after two weeks, including the Timed Up and Go (TUG) test, a 10-meter walk test (10m), the Berg Balance Scale (BBS), and gait parameters such as cadence and step length, along with upper limb strength measured by EMG threshold in microvolts and force in newton meters. Results: The study utilized these scales to assess motor strength and balance, illustrating the benefits of EMG biofeedback following LUNA robotic therapy. In the analysis of the left hemiparetic group, an increase in strength post-rehabilitation was observed. The pre-TUG mean value was 72.4 seconds, which decreased to 42.4 ± 0.03880133 seconds post-rehabilitation, with a significant difference indicated by a p-value below 0.05, reflecting a reduced task completion time. Similarly, in the force-based task, the pre-rehabilitation dynamic knee force was 18.2 Nm, which increased to 31.26 Nm during knee extension post-rehabilitation. A Student's t-test comparing pre- and post-rehabilitation values showed a p-value of 0.026, signifying a significant difference. This indicated an increase in the strength of the knee extensor muscles after LUNA robotic rehabilitation. Lastly, at baseline, the EMG value for ankle dorsiflexion was 5.11 µV, which increased to 43.4 ± 0.06 µV post-rehabilitation, signifying an increase in the threshold and the patient's ability to recruit more motor units during left ankle dorsiflexion. Conclusion: This study evaluated the impact of EMG and dynamic force-based rehabilitation devices on walking and strength of the affected side in chronic stroke patients, without nominal data comparisons among stroke patients. Additionally, it provides insights into the inclusion of EMG-triggered neurorehabilitation robots in the daily rehabilitation of patients.
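As a minimal sketch of the kind of pre/post comparison reported above (for example, the TUG times and knee-extension forces), the snippet below applies a paired Student's t-test with SciPy. The per-patient values are hypothetical placeholders, not the study's data.

```python
# Illustrative paired pre/post comparison, as used for TUG time and knee force.
# Values below are hypothetical placeholders, NOT the study's patient data.
from scipy import stats

# Timed Up and Go (seconds), one value per patient, before and after training
tug_pre  = [70.1, 75.3, 68.9, 74.2, 71.8, 73.0, 72.5, 73.4]
tug_post = [41.0, 44.2, 40.5, 43.8, 42.1, 42.9, 42.6, 42.3]

t_stat, p_value = stats.ttest_rel(tug_pre, tug_post)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant reduction in task completion time,
# analogous to the pre/post differences reported in the abstract.
```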

Keywords: neurorehabilitation, robotic therapy, stroke, strength, paralysis

Procedia PDF Downloads 39
13 Developing a Place-Name Gazetteer for Singapore by Mining Historical Planning Archives and Selective Crowd-Sourcing

Authors: Kevin F. Hsu, Alvin Chua, Sarah X. Lin

Abstract:

In Singapore's multilingual society, names for different parts of the city have changed over time. Residents included Indigenous Malays, dialect-speakers from China, European settler-colonists, and Tamil-speakers from South India. Each group would name locations in their own languages. Today, as ancestral tongues are increasingly supplanted by English, contemporary Singaporeans’ understanding of once-common place names is disappearing. After demolition or redevelopment, some urban places will only exist in archival records or in human memory. United Nations conferences on the standardization of geographical names have called attention to how place names relate to identity, well-being, and a sense of belonging. The Singapore Place-Naming Project responds to these imperatives by capturing past and present place names through digitizing historical maps, mining archival records, and applying selective crowd-sourcing to trace the evolution of place names throughout the city. The project ensures that both formal and vernacular geographical names remain accessible to historians, city planners, and the public. The project is compiling a gazetteer, a geospatial archive of place names, with streets, buildings, landmarks, and other points of interest (POI) appearing in the historic maps and planning documents of Singapore, currently held by the National Archives of Singapore, the National Library Board, university departments, and the Urban Redevelopment Authority. To create a spatial layer of information, the project links each place name to either a geo-referenced point, line segment, or polygon, along with the original source material in which the name appears. This record is supplemented by crowd-sourced contributions from civil service officers and heritage specialists, drawing from their collective memory to (1) define geospatial boundaries of historic places that appear in past documents but may be unfamiliar to users today, and (2) identify and record vernacular place names not captured in formal planning documents. An intuitive interface allows participants to demarcate feature classes, vernacular phrasings, time periods, and other knowledge related to historical or forgotten spaces. Participants are stratified into age bands and ethnicity to improve representativeness. Future iterations could allow additional public contributions. Names reveal the meanings that communities assign to each place. While existing historical maps of Singapore allow users to toggle between present-day and historical raster files, this project goes a step further by adding layers of social understanding and planning documents. Tracking place names illuminates linguistic, cultural, commercial, and demographic shifts in Singapore, in the context of transformations of the urban environment. The project also demonstrates how a moderated, selectively crowd-sourced effort can solicit useful geospatial data at scale, sourced from different generations, and at higher granularity than traditional surveys, while mitigating the negative impacts of unmoderated crowd-sourcing. Stakeholder agencies believe the project will achieve several objectives, including supporting heritage conservation and public education; safeguarding intangible cultural heritage; providing historical context for street, place, or development-renaming requests; enhancing place-making with deeper historical knowledge; facilitating emergency and social services by tagging legal addresses to vernacular place names; and encouraging public engagement with heritage by eliciting multi-stakeholder input.
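To make the gazetteer's data model concrete, here is a minimal, hypothetical sketch of a place-name record of the kind described above (name variants, feature class, geometry, source, time period, contributor). The field names and example values are illustrative assumptions and do not reflect the project's actual schema.

```python
# Hypothetical sketch of a gazetteer record linking a place name to a
# geo-referenced feature, its source document, and crowd-sourced attributes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PlaceNameRecord:
    name: str                          # name as it appears in the source
    language: str                      # e.g. "Malay", "Hokkien", "Tamil", "English"
    variant_names: List[str]           # vernacular or historical variants
    feature_class: str                 # "street", "building", "landmark", "POI"
    geometry_wkt: str                  # point, line segment, or polygon (WKT)
    source: str                        # archival map or planning document
    period: str                        # time period the name was in use
    contributor: Optional[str] = None  # e.g. "archival" or "crowd-sourced"
    notes: List[str] = field(default_factory=list)

record = PlaceNameRecord(
    name="Kampong Glam",
    language="Malay",
    variant_names=["Kampung Gelam"],
    feature_class="district",
    geometry_wkt="POLYGON((103.859 1.302, 103.861 1.302, 103.861 1.304, "
                 "103.859 1.304, 103.859 1.302))",
    source="hypothetical 1950s street directory sheet",
    period="19th century to present",
    contributor="archival",
)
print(record.name, record.feature_class)
```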

Keywords: collective memory, crowd-sourced, digital heritage, geospatial, geographical names, linguistic heritage, place-naming, Singapore, Southeast Asia

Procedia PDF Downloads 103
12 Sustainable Development Goal (SDG)-Driven Intercultural Citizenship Education through Dance-Fitness Development: A Classroom Research Project Based on History Research into Japanese Traditional Performing Art (Menburyu)

Authors: Stephanie Ann Houghton

Abstract:

SDG-driven intercultural citizenship education through performing arts and history research, combined with dance-fitness development inspired by performing arts, can provide a third space in which performing arts, local history, and contemporary society drive educational and social development, supporting the performing arts in student-generated ways that reflect their sense, priorities, and goals. Along a string of rugged volcanic peninsulas on the north-western coastline of the Ariake Sea, Kyushu, southern Japan, a range of traditional performing arts endangered in Japan’s ageing society can be found, including the Menburyu mask dance. From 2017, Menburyu culture and history were explored with Menburyu veterans and students within Houghton’s FURYU Educational Program (FEP) at Saga University. Through collaboration with professional fitness instructor Kazuki Miyata, basic Menburyu movements and concepts were blended into aerobics routines to generate Menburyu-Inspired Dance-Fitness (MIDF). Drawing on history, legends, and myths, three important storylines for understanding Menburyu, captured in students’ bilingual (English/Japanese) exhibition panels, emerged: harvest, demons and gods, and the Battle of Tadenawate 1530. Houghton and Miyata performed the first MIDF routine at the 22nd Traditional Performing Arts Festival at Yutoku Inari Shrine, Kashima, in September 2019. FEP exhibitions, dance-fitness events, and MIDF performances have been reported in the media locally and nationally. In an action research case study, a classroom research project was conducted with four female Japanese students over fifteen three-hour online lessons (April-July 2020). Part 1 of each lesson focused on Menburyu history. This included a guest lecture by Kensuke Ryuzoji. The three Menburyu storylines served as keys for exploring Menburyu history from international standpoints. Part 2 focused on the development of MIDF basic steps and an online MIDF event with outside guests. Through post-lesson reflective diaries and reports/videos documenting their experience, students engaged in heritage management, intercultural dialogue, health/fitness, technology, and art generation activities within the FEP, centring on UN Sustainable Development Goals (SDGs) including good health and well-being (SDG 3) and quality education (SDG 4), taking a glocal approach. In this presentation, qualitative analysis of student-generated reflective diaries and reports will be presented to reveal educational processes, learning outcomes, and apparent areas of (potential) social impact of this classroom research project. Data will be presented in two main parts: (1) The mutually beneficial relationship between local traditional performing arts research and local history research will be addressed. Each has the power both to inform and to illuminate the other, given their deep connections. This can drive the development of students’ intercultural history competence related to and through the performing arts. (2) The development of dance-fitness inspired by traditional performing arts provides a third space in which performing arts, local history, and contemporary society can be connected through SDG-driven education inside the classroom in ways that can also drive social innovation outside the classroom, potentially supporting the performing arts itself in student-generated ways that reflect their own sense, priorities, and social goals. Links will be drawn with intercultural citizenship, strengths and weaknesses of this teaching approach will be highlighted, and avenues for future research in this exciting new area will be suggested.

Keywords: cultural traditions, dance-fitness performance and participation, intercultural communication approach, mask dance origins

Procedia PDF Downloads 115
11 Integrating Personality Traits and Travel Motivations for Enhanced Small and Medium-sized Tourism Enterprises (SMEs) Strategies: A Case Study of Cumbria, United Kingdom

Authors: Delia Gabriela Moisa, Demos Parapanos, Tim Heap

Abstract:

The tourism sector mainly comprises small and medium-sized tourism enterprises (SMEs), which represent approximately 80% of global businesses in this field. These entities require focused attention and support to address challenges, ensuring their competitiveness and relevance in a dynamic industry characterized by continuously changing customer preferences. To address these challenges, it becomes imperative to consider not only socio-demographic factors but also to delve into the intricate interplay of psychological elements influencing consumer behavior. This study investigates the impact of personality traits and travel motivations on visitor activities in Cumbria, United Kingdom, an iconic region marked by UNESCO World Heritage Sites, including the Lake District National Park and Hadrian's Wall. With a £4.1 billion tourism industry primarily driven by SMEs, Cumbria serves as an ideal setting for examining the relationship between tourist psychology and activities. Employing the Big Five personality model and the Travel Career Pattern motivation theory, this study aims to explain the relationship between psychological factors and tourist activities. The study further explores SME perspectives on personality-based market segmentation, providing strategic insights into addressing evolving tourist preferences. This pioneering mixed-methods study integrates quantitative data from 330 visitor surveys, subsequently complemented by qualitative insights from tourism SME representatives. The findings reveal that socio-demographic factors do not exhibit statistically significant variations in the activities pursued by visitors in Cumbria. However, significant associations emerge between personality traits and motivations on the one hand and preferred visitor activities on the other. Open-minded tourists gravitate towards events and cultural activities, while Conscientious individuals favor cultural pursuits. Extraverted tourists lean towards adventurous, recreational, and wellness activities, while Agreeable personalities opt for lake cruises. Interestingly, a contrasting trend emerges as Extraversion increases, leading to a decrease in interest in cultural activities. Similarly, heightened Agreeableness corresponds to a decrease in interest in adventurous activities. Furthermore, travel motivations, including nostalgia and building relationships, drive event participation, while self-improvement and novelty-seeking lead to adventurous activities. Additionally, qualitative insights from tourism SME representatives underscore the value of targeted messaging aligned with visitor personalities for enhancing loyalty and experiences. This study contributes significantly to scholarship through its novel framework, integrating tourist psychology with activities and industry perspectives. The proposed conceptual model holds substantial practical implications for SMEs to formulate personalized offerings, optimize marketing, and strategically allocate resources tailored to tourist personalities. While the focus is on Cumbria, the methodology's universal applicability offers valuable insights for destinations globally seeking a competitive advantage. Future research addressing scale reliability and geographic specificity limitations can further advance knowledge on this critical relationship between visitor psychology, individual preferences, and industry imperatives. Moreover, by extending the investigation to other districts, future studies could draw comparisons and contrasts in the results, providing a more nuanced understanding of the factors influencing visitor psychology and preferences.
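As a hedged illustration of the kind of trait-activity association testing described above, the sketch below computes a Spearman rank correlation between a personality-trait score and participation in an activity. The trait scores and activity indicators are invented placeholders, and the study's actual analysis pipeline is not documented here.

```python
# Illustrative association test between a Big Five trait score and an
# activity preference; data are hypothetical placeholders, not survey results.
from scipy import stats

# Extraversion scores (1-5 Likert means) and whether the visitor chose an
# adventurous activity (1) or not (0), one pair per hypothetical respondent.
extraversion = [2.1, 3.4, 4.2, 1.8, 3.9, 4.8, 2.6, 3.1, 4.5, 2.0]
adventurous  = [0,   0,   1,   0,   1,   1,   0,   0,   1,   0]

rho, p = stats.spearmanr(extraversion, adventurous)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A positive, significant rho would mirror the reported tendency of more
# extraverted visitors to favour adventurous, recreational and wellness activities.
```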

Keywords: personality trait, SME, tourist behaviour, tourist motivation, visitor activity

Procedia PDF Downloads 47
10 Full Characterization of Heterogeneous Antibody Samples under Denaturing and Native Conditions on a Hybrid Quadrupole-Orbitrap Mass Spectrometer

Authors: Rowan Moore, Kai Scheffler, Eugen Damoc, Jennifer Sutton, Aaron Bailey, Stephane Houel, Simon Cubbon, Jonathan Josephs

Abstract:

Purpose: MS analysis of monoclonal antibodies (mAbs) at the protein and peptide levels is critical during the development and production of biopharmaceuticals. The compositions of current-generation therapeutic proteins are often complex due to various modifications which may affect efficacy. Intact proteins analyzed by MS are detected in higher charge states that also add complexity to mass spectra. Protein analysis in native or native-like conditions, with zero or minimal organic solvent and neutral or weakly acidic pH, decreases charge states, resulting in mAb detection at higher m/z ranges with more spatial resolution. Methods: Three commercially available mAbs were used for all experiments. Intact proteins were desalted online using size exclusion chromatography (SEC) or reversed-phase chromatography coupled on-line with a mass spectrometer. For streamlined use of the LC-MS platform, we used a single SEC column and alternately selected specific mobile phases to perform separations in either denaturing or native-like conditions: buffer A (20% ACN, 0.1% FA) with buffer B (100 mM ammonium acetate). For peptide analysis, mAbs were proteolytically digested with and without prior reduction and alkylation. The mass spectrometer used for all experiments was a commercially available Thermo Scientific™ hybrid Quadrupole-Orbitrap™ mass spectrometer, equipped with the new BioPharma option, which includes a new High Mass Range (HMR) mode that allows for improved high-mass transmission and mass detection up to 8000 m/z. Results: We have analyzed the profiles of three mAbs under denaturing and native conditions by direct infusion with offline desalting and with on-line desalting via size exclusion and reversed-phase type columns. The presence of high salt under denaturing conditions was found to influence the observed charge state envelope and impact mass accuracy after spectral deconvolution. The significantly lower charge states observed under native conditions improve the spatial resolution of protein signals and have significant benefits for the analysis of antibody mixtures, e.g., lysine variants, degradants, or sequence variants. This type of analysis requires the detection of masses beyond the standard mass range, up to 6000 m/z, requiring the extended capabilities available in the new HMR mode. We have compared each antibody sample analyzed individually with mixtures in various relative concentrations. For this type of analysis, we observed that apparent native structures persist and ESI is benefited by the addition of low amounts of acetonitrile and formic acid in combination with the ammonium acetate-buffered mobile phase. For analyses at the peptide level, we analyzed reduced/alkylated and non-reduced proteolytic digests of the individual antibodies separated via reversed-phase chromatography, aiming to retrieve as much information as possible regarding sequence coverage, disulfide bridges, post-translational modifications such as various glycans, sequence variants, and their relative quantification. All data acquired were submitted to a single software package for analysis, aiming to obtain a complete picture of the molecules analyzed. Here we demonstrate the capabilities of the mass spectrometer to fully characterize homogeneous and heterogeneous therapeutic proteins on one single platform. 
Conclusion: Full characterization of heterogeneous intact protein mixtures by improved mass separation on a quadrupole-Orbitrap™ mass spectrometer with extended capabilities has been demonstrated.
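The shift of intact-mAb signals to higher m/z under native conditions follows directly from the electrospray charge-state relation m/z = (M + z·1.00728)/z. The short sketch below illustrates this for a nominal 148 kDa antibody; the mass and the example charge states are illustrative values, not measurements from the study.

```python
# Illustrative charge-state arithmetic for an intact antibody.
# M is a nominal 148 kDa mAb mass; charge states are typical examples only.
PROTON = 1.00728  # Da

def mz(mass_da, charge):
    """m/z of a protonated ion: (M + z * proton mass) / z."""
    return (mass_da + charge * PROTON) / charge

mab_mass = 148_000.0  # Da, nominal intact mAb

# Denaturing ESI: high charge states, peaks well below 4000 m/z
for z in (60, 50, 40):
    print(f"denaturing  z = {z:2d}  ->  m/z = {mz(mab_mass, z):7.1f}")

# Native ESI: low charge states push peaks toward 5000-6500 m/z,
# which is why the extended (HMR) detection range up to 8000 m/z is needed.
for z in (28, 25, 23):
    print(f"native      z = {z:2d}  ->  m/z = {mz(mab_mass, z):7.1f}")
```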

Keywords: disulfide bond analysis, intact analysis, native analysis, mass spectrometry, monoclonal antibodies, peptide mapping, post-translational modifications, sequence variants, size exclusion chromatography, therapeutic protein analysis, UHPLC

Procedia PDF Downloads 344
9 Hydrocarbon Source Rocks of the Maragh Low

Authors: Elhadi Nasr, Ibrahim Ramadan

Abstract:

Biostratigraphical analyses of well sections from the Maragh Low in the Eastern Sirt Basin have allowed high-resolution correlations to be undertaken. Full integration of these data with available palaeoenvironmental, lithological, gravity, seismic, aeromagnetic, igneous, radiometric, and wireline log information, together with a geochemical analysis of source rock quality and distribution, has led to a more detailed understanding of the geological and structural history of this area. Pre-Sirt Unconformity, two superimposed rifting cycles have been identified. The oldest is represented by the Amal Group of sediments and is of Late Carboniferous, Kasimovian/Gzhelian, to Middle Triassic, Anisian age. Unconformably overlying is a younger rift cycle which is represented by the Sarir Group of sediments and is of Early Cretaceous, late Neocomian to Aptian age. Overlying the Sirt Unconformity is the marine Late Cretaceous section. An assessment of pyrolysis results and a palynofacies analysis have allowed hydrocarbon source facies and quality to be determined. There are a number of hydrocarbon source rock horizons in the Maragh Low; these are sometimes vertically stacked, and they are of fair to excellent quality. The oldest identified source rock is the Triassic Shale; this unit is unconformably overlain by sandstones belonging to the Sarir Group and conformably overlies a Triassic Siltstone unit. Palynological dating of the Triassic Shale unit indicates a Middle Triassic, Anisian age. The Triassic Shale is interpreted to have been deposited in a lacustrine palaeoenvironment. This is particularly evidenced by the dark, fine-grained, organic-rich nature of the sediment and is supported by palynofacies analysis and by the recovery of fish fossils. Geochemical analysis of the Triassic Shale indicates total organic carbon varying between 1.37% and 3.53%. S2 pyrolysate yields vary between 2.15 mg/g and 6.61 mg/g, and hydrogen indices vary between 156.91 and 278.91. The source quality of the Triassic Shale varies from fair to very good / rich. Linked to thermal maturity, it is now a very good source for light oil and gas. It was once a very good to rich oil source. The Early Barremian Shale was also deposited in a lacustrine palaeoenvironment. Recovered palynomorphs indicate an Early Cretaceous, late Neocomian to early Barremian age. The Early Barremian Shale is conformably underlain and overlain by sandstone units belonging to the Sarir Group of sediments, which are also of Early Cretaceous age. Geochemical analysis of the Early Barremian Shale indicates that it is a good oil source and was originally very good. Total organic carbon varies between 3.59% and 7%. S2 varies between 6.30 mg/g and 10.39 mg/g, and the hydrogen indices vary between 148.4 and 175.5. A Late Barremian Shale unit has also been identified in the central Maragh Low. Geochemical analyses indicate that total organic carbon varies between 1.05% and 2.38%, S2 pyrolysate between 1.6 and 5.34 mg/g, and the hydrogen index between 152.4 and 224.4. It is a good oil source rock which is now mature. In addition to the non-marine hydrocarbon source rocks pre-Sirt Unconformity, three formations in the overlying Late Cretaceous section also provide hydrocarbon-quality source rocks. Interbedded shales within the Rachmat Formation of Late Cretaceous, early Campanian age have total organic carbon ranging between 0.7% and 1.47%, S2 pyrolysate varying between 1.37 and 4.00 mg/g, and hydrogen indices varying between 195.7 and 272.1. The indication is that this unit would provide a fair gas source to a good oil source. Geochemical analyses of the overlying Tagrifet Limestone indicate that total organic carbon varies between 0.26% and 1.01%. S2 pyrolysate varies between 1.21 and 2.16 mg/g, and hydrogen indices vary between 195.7 and 465.4. For the overlying Sirt Shale Formation of Late Cretaceous, late Campanian age, total organic carbon varies between 1.04% and 1.51%, S2 pyrolysate varies between 4.65 mg/g and 6.99 mg/g, and the hydrogen indices vary between 151 and 462.9. The study has proven that both the Sirt Shale Formation and the Tagrifet Limestone are good to very good and rich sources for oil in the Maragh Low. High-resolution biostratigraphical interpretations have been integrated and calibrated with thermal maturity determinations (Vitrinite Reflectance (%Ro), Spore Colour Index (SCI), and Tmax (ºC)) and the determined present-day geothermal gradient of 25ºC/km for the Maragh Low. Interpretation of the generated basin modelling profiles allows a detailed prediction of the timing of maturation of these source horizons and leads to a determination of the amounts of missing section at major unconformities. From the results, the top of the oil window (0.72% Ro) is picked as high as 10,700’ and the base of the oil window (1.35% Ro), assuming a linear trend and by projection, is picked as low as 18,000’ in the Maragh Low. For the Triassic Shale, the early phase of oil generation was in the Late Palaeocene / Early to Middle Eocene, and the main phase of oil generation was in the Middle to Late Eocene. The Early Barremian Shale reached the main phase of oil generation in the Early Oligocene, with late generation being reached in the Middle Miocene. For the Rakb Group section (Rachmat Formation, Tagrifet Limestone, and Sirt Shale Formation), the early phase of oil generation started in the Late Eocene, with the main phase of generation being between the Early Oligocene and the Early Miocene. From studying maturity profiles and from regional considerations, it can be predicted that up to 500’ of sediment may have been deposited and eroded at the Sirt Unconformity in the central Maragh Low, while up to 2000’ of sediment may have been deposited and then eroded to the south of the trough.
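For readers less familiar with Rock-Eval parameters, the hydrogen index values quoted above follow the standard relation HI = 100 × S2 / TOC (mg HC per g TOC). The short sketch below reproduces that arithmetic for some of the reported values; the pairing of reported minima and maxima into single samples is assumed here purely for illustration.

```python
# Standard Rock-Eval hydrogen index: HI = 100 * S2 / TOC, in mg HC / g TOC.
def hydrogen_index(s2_mg_per_g, toc_percent):
    return 100.0 * s2_mg_per_g / toc_percent

# Reported Triassic Shale minima (assumed here to come from the same sample):
print(round(hydrogen_index(2.15, 1.37), 1))   # ~156.9, matching the quoted 156.91

# Reported Early Barremian Shale values:
print(round(hydrogen_index(6.30, 3.59), 1))   # ~175.5
print(round(hydrogen_index(10.39, 7.00), 1))  # ~148.4
```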

Keywords: geochemical analysis, hydrocarbon source rocks, Maragh Low, Eastern Sirt Basin

Procedia PDF Downloads 386
8 Unleashing Potential in Pedagogical Innovation for STEM Education: Applying Knowledge Transfer Technology to Guide a Co-Creation Learning Mechanism for the Lingering Effects Amid COVID-19

Authors: Lan Cheng, Harry Qin, Yang Wang

Abstract:

Background: COVID-19 has induced the largest digital learning experiment in history. There is also emerging research evidence that students have paid a high cost of learning loss from virtual learning. University-wide survey results demonstrate that digital learning remains difficult for students who struggle with learning challenges, isolation, or a lack of resources. Large-scale efforts are therefore increasingly utilized for digital education. To better prepare students in higher education for this grand scientific and technological transformation, STEM education has been prioritized and promoted as a strategic imperative in the ongoing curriculum reform essential for unfinished learning needs and whole-person development. Building upon five key elements identified in the STEM education literature (Problem-based Learning, Community and Belonging, Technology Skills, Personalization of Learning, and Connection to the External Community), this case study explores the potential of pedagogical innovation that integrates computational and experimental methodologies to support, enrich, and navigate STEM education. Objectives: The goal of this case study is to create a high-fidelity prototype design for STEM education with knowledge transfer technology that contains a Cooperative Multi-Agent System (CMAS), with the objectives of (1) conducting assessment to reveal a virtual learning mechanism and establishing strategies to facilitate scientific learning engagement, accessibility, and connection within and beyond the university setting, (2) exploring and validating an interactional co-creation approach embedded in project-based learning activities in the STEM learning context, which is being transformed by both digital technology and student behavior change, and (3) formulating and implementing a STEM-oriented campaign to guide learning network mapping, mitigate the loss of learning, enhance the learning experience, and scale up inclusive participation. Methods: This study applied a case study strategy and a methodology informed by Social Network Analysis Theory within a cross-disciplinary communication paradigm (students, peers, educators). Knowledge transfer technology is introduced to address learning challenges and to increase the efficiency of Reinforcement Learning (RL) algorithms. A co-creation learning framework was identified and investigated in a context-specific way with a learning analytic tool designed in this study. Findings: The results show that (1) CMAS-empowered learning support reduced students’ confusion, difficulties, and gaps during problem-solving scenarios while increasing learner capacity empowerment, (2) the co-creation learning phenomenon has been examined through the lens of the campaign, revealing that an interactive virtual learning environment helps students navigate scientific challenges independently and collaboratively, and (3) the deliverables from the STEM educational campaign provide a methodological framework both within the context of curriculum design and for external community engagement. Conclusion: This study brings a holistic and coherent pedagogy that cultivates students’ interest in STEM and develops a knowledge base for integrating and applying knowledge across different STEM disciplines. Through the co-designed, cross-disciplinary educational content and campaign promotion, the findings suggest factors that empower evidence-based learning practice while also piloting and tracking the impact of the scholastic value of co-creation in a dynamic learning environment. The data nested under the knowledge transfer technology situate learners’ scientific journeys and could pave the way for theoretical advancement and broader scientific endeavors within larger datasets, projects, and communities.

Keywords: co-creation, cross-disciplinary, knowledge transfer, STEM education, social network analysis

Procedia PDF Downloads 96
7 Supply Side Readiness for Universal Health Coverage: Assessing the Availability and Depth of Essential Health Package in Rural, Remote and Conflict Prone District

Authors: Veenapani Rajeev Verma

Abstract:

Context: Assessing facility readiness is paramount, as it can indicate the capacity of facilities to provide essential care for resilience to health challenges. In the context of decentralization, estimation of supply-side readiness indices at the sub-national level is imperative for effective evidence-based policy but remains a colossal challenge due to the lack of dependable and representative data sources. Setting: District Poonch of Jammu and Kashmir was selected for this study. It is a remote, rural district with unprecedented topographical barriers and is identified as high priority by the government. It is also a fragile area, as it is bounded by the Line of Control with Pakistan and bears the brunt of ceasefire violations, military skirmishes, and sporadic militant attacks. Hilly geographical terrain, a rudimentary or absent road network, and impoverishment are quintessential to this area. Objectives: The objectives of the study are to (a) evaluate the service readiness of health facilities and create a concise index subsuming a plethora of discrete indicators, and (b) ascertain supply-side barriers in service provisioning via stakeholder analysis. The study also strives to expand the analytical domain, unravelling context- and area-specific intricacies associated with service delivery. Methodology: A mixed-methods approach was employed to triangulate quantitative analysis with qualitative nuances. A facility survey encompassing 90 sub-centres, 44 primary health centres, 3 community health centres, and 1 district hospital was conducted to gauge general service availability and service-specific availability (depth of coverage). A compendium of checklists was designed using the Indian Public Health Standards (IPHS) in the form of a standard core questionnaire, and a scorecard was generated for each facility. Information was collected across the dimensions of amenities, equipment, medicines, laboratory capacity, and infection control protocols, as proposed in WHO's Service Availability and Readiness Assessment (SARA). Two-stage polychoric principal component analysis was employed to generate a parsimonious index by coalescing an array of tracer indicators. The OLS regression method was used to determine factors explaining the composite index generated from the PCA. Stakeholder analysis was conducted to discern qualitative information. A myriad of techniques, such as observations, key informant interviews, and focus group discussions using semi-structured questionnaires, were administered to both leaders and laggards for a critical stakeholder analysis. Results: The general readiness score of health facilities was found to be 0.48. Results indicated the poorest readiness for sub-centres and PHCs (the first point of contact), with composite scores of 0.47 and 0.41, respectively. For primary care facilities, the principal component was characterized by basic newborn care as well as preparedness for delivery. Results revealed that availability of equipment and surgical preparedness had the lowest scores (0.46 and 0.47) for facilities providing secondary care. The presence of contractual staff, more than a one-hour walk to the facility, location in zone A (most vulnerable to cross-border shelling), and inaccessibility due to snowfall and thick jungles were negatively associated with the readiness index. Nonchalant staff attitudes, unavailability of staff quarters, leakages, and constraints in the supply chain of drugs and consumables were other impediments identified. Conclusions/Policy Implications: It is pertinent to first strengthen primary care facilities in this setting. Complex dimensions such as geographic barriers and user and provider behavior are beyond the purview of this methodology.
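The study's readiness index is built with a two-stage polychoric principal component analysis. As a hedged, simplified sketch of the general idea, the snippet below derives a composite score from binary tracer indicators using an ordinary PCA in scikit-learn, which approximates but does not replicate the polychoric procedure, and it uses invented indicator data rather than the survey results.

```python
# Simplified sketch of a facility readiness index from binary tracer indicators.
# The study used two-stage polychoric PCA; ordinary PCA on standardized
# indicators is used here only as an approximation, with invented data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Rows = facilities (90 + 44 + 3 + 1 = 138), columns = tracer indicators
# (1 = available, 0 = missing), e.g. amenities, equipment, essential
# medicines, laboratory capacity, infection control.
indicators = rng.integers(0, 2, size=(138, 20))

scores = PCA(n_components=1).fit_transform(
    StandardScaler().fit_transform(indicators))

# Rescale the first principal component to a 0-1 readiness index.
readiness = (scores - scores.min()) / (scores.max() - scores.min())
print(f"mean readiness index: {readiness.mean():.2f}")
```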

Keywords: effective coverage, principal component analysis, readiness index, universal health coverage

Procedia PDF Downloads 88
6 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning strategies have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study offers a groundbreaking method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by sizable volatility and transformation in financial markets, affords a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insights into the market's buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate critical macroeconomic indicators such as interest rates, inflation rates, GDP growth, and unemployment rates into our model. Our GCN algorithm is adept at learning the relational patterns among the financial instruments represented as nodes in a comprehensive market graph. Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling our model to grasp the complex network of influences governing market movements. Complementing this, our LSTM algorithm is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive assessment of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared to conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-by-day price movements. The RMSE was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's predictive performance on directional market movements, it achieved an accuracy rate of 78%, significantly outperforming the benchmark models, which averaged an accuracy of 65%. This high degree of accuracy is instrumental for strategies that predict the direction of price movements. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to revolutionize investment techniques and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
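As a hedged, minimal sketch of the GCN-LSTM fusion described above (not the authors' implementation), the PyTorch snippet below propagates node features over a normalized market graph with a single graph-convolution layer and feeds the resulting per-asset embedding sequence to an LSTM that predicts the next return. The tensor shapes, layer sizes, adjacency construction, and random inputs are illustrative assumptions.

```python
# Minimal GCN + LSTM fusion sketch (illustrative only, not the paper's model).
# Assumes PyTorch; inputs are random placeholders standing in for price,
# volume and macro features per asset and per trading day.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(self.linear(norm_adj @ x))

class GCNLSTM(nn.Module):
    def __init__(self, n_features, gcn_dim=32, lstm_dim=64):
        super().__init__()
        self.gcn = GCNLayer(n_features, gcn_dim)
        self.lstm = nn.LSTM(gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, 1)            # next-day return per asset

    def forward(self, adj, x_seq):
        # x_seq: (days, assets, features) -> GCN applied day by day
        embeddings = torch.stack([self.gcn(adj, x_t) for x_t in x_seq])
        # reorder to (assets, days, gcn_dim): each asset is one LSTM sequence
        out, _ = self.lstm(embeddings.permute(1, 0, 2))
        return self.head(out[:, -1, :])               # prediction from last step

n_assets, n_days, n_features = 6, 30, 8               # e.g. indices + coins
adj = (torch.rand(n_assets, n_assets) > 0.5).float()  # toy co-movement graph
adj = ((adj + adj.T) > 0).float()                     # make it symmetric
x_seq = torch.randn(n_days, n_assets, n_features)     # price/volume/macro features

model = GCNLSTM(n_features)
print(model(adj, x_seq).shape)                        # torch.Size([6, 1])
```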

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 29
5 Acute Severe Hyponatremia in Patient with Psychogenic Polydipsia, Learning Disability and Epilepsy

Authors: Anisa Suraya Ab Razak, Izza Hayat

Abstract:

Introduction: The diagnosis and management of severe hyponatremia in neuropsychiatric patients present a significant challenge to physicians. Several factors contribute, including diagnostic overshadowing and attributing abnormal behavior to intellectual disability or psychiatric conditions. Hyponatremia is the commonest electrolyte abnormality in the inpatient population, ranging from mild/asymptomatic and moderate levels to severe levels with life-threatening symptoms such as seizures, coma, and death. There are several documented fatal case reports in the literature of severe hyponatremia secondary to psychogenic polydipsia, often diagnosed only at autopsy. This paper presents a case study of acute severe hyponatremia in a neuropsychiatric patient with early diagnosis and admission to intensive care. Case study: A 21-year-old Caucasian male with known epilepsy and learning disability was admitted from residential living with generalized tonic-clonic self-terminating seizures after refusing medications for several weeks. Evidence of superficial head injury was detected on physical examination. His laboratory data demonstrated mild hyponatremia (125 mmol/L). Computed tomography imaging of his brain demonstrated no acute bleed or space-occupying lesion. He exhibited abnormal behavior: restlessness, drinking water from bathroom taps, inability to engage, paranoia, and hypersexuality. No collateral history was available to establish his baseline behavior. He was loaded with intravenous sodium valproate and levetiracetam. Three hours later, he developed vomiting and a generalized tonic-clonic seizure lasting forty seconds. He remained drowsy for several hours and made only a minimal recovery of consciousness. A repeat set of blood tests demonstrated profound hyponatremia (117 mmol/L). Outcomes: He was referred to intensive care for peripheral intravenous infusion of 2.7% sodium chloride solution with two-hourly laboratory monitoring of sodium concentration. Laboratory monitoring identified dangerously rapid correction of serum sodium concentration, and hypertonic saline was switched to a 5% dextrose solution to reduce the risk of acute large-volume fluid shifts from the cerebral intracellular compartment to the extracellular compartment. He underwent urethral catheterization and produced 8 liters of urine over 24 hours. Serum sodium concentration remained stable after 24 hours of correction fluids. His GCS recovered to baseline after 48 hours, with improvement in behavior: he engaged with healthcare professionals, understood the importance of taking medications, and admitted to illicit drug use and drinking massive amounts of water. He was transferred from high-dependency care to ward level and was initiated on multiple trials of anti-epileptics before achieving seizure-free days two weeks after resolution of the acute hyponatremia. Conclusion: Psychogenic polydipsia is often found in young patients with intellectual disability or psychiatric disorders. Patients drink large volumes of water daily, ranging from ten to forty liters, resulting in acute severe hyponatremia with mortality rates as high as 20%. Poor outcomes are due to challenges faced by physicians in making an early diagnosis and treating acute hyponatremia safely. A low index of suspicion of water intoxication is required in this population, including patients with known epilepsy. Monitoring urine output proved to be clinically effective in aiding diagnosis. 
Early referral and admission to intensive care should be considered for safe correction of sodium concentration while minimizing risk of fatal complications e.g. central pontine myelinolysis.
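As an illustrative (non-clinical) sketch of why frequent laboratory monitoring matters during hypertonic saline correction, the snippet below applies the widely used Adrogué-Madias estimate of the serum sodium change expected from one litre of infusate. The body weight and infusate figures are hypothetical examples, not values from this case, and the calculation is shown only to clarify the reasoning in the abstract.

```python
# Illustrative Adrogue-Madias estimate (not clinical guidance, not case data):
# expected change in serum Na after 1 L of infusate
#   delta_Na = (infusate_Na - serum_Na) / (total_body_water + 1)
def delta_na_per_litre(infusate_na, serum_na, weight_kg, water_fraction=0.6):
    total_body_water = weight_kg * water_fraction   # ~0.6 for adult men
    return (infusate_na - serum_na) / (total_body_water + 1)

serum_na = 117        # mmol/L, as reported in the abstract
weight = 70           # kg, hypothetical
na_2_7_percent = 462  # mmol/L of Na in 2.7% NaCl (27 g/L / 58.44 g/mol)

rise = delta_na_per_litre(na_2_7_percent, serum_na, weight)
print(f"~{rise:.1f} mmol/L rise per litre of 2.7% saline")
# At roughly 8 mmol/L per litre, even modest volumes approach commonly cited
# daily correction limits, hence the two-hourly sodium monitoring described.
```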

Keywords: epilepsy, psychogenic polydipsia, seizure, severe hyponatremia

Procedia PDF Downloads 109
4 The Impact of the Macro-Level: Organizational Communication in Undergraduate Medical Education

Authors: Julie M. Novak, Simone K. Brennan, Lacey Brim

Abstract:

Undergraduate medical education (UME) curriculum notably addresses micro-level communications (e.g., patient-provider, intercultural, inter-professional), yet frequently under-examines the role and impact of organizational communication, a more macro-level. Organizational communication, however, functions as foundation and through systemic structures of an organization and thereby serves as hidden curriculum and influences learning experiences and outcomes. Yet, little available research exists fully examining how students experience organizational communication while in medical school. Extant literature and best practices provide insufficient guidance for UME programs, in particular. The purpose of this study was to map and examine current organizational communication systems and processes in a UME program. Employing a phenomenology-grounded and participatory approach, this study sought to understand the organizational communication system from medical students' perspective. The research team consisted of a core team and 13 medical student co-investigators. This research employed multiple methods, including focus groups, individual interviews, and two surveys (one reflective of focus group questions, the other requesting students to submit ‘examples’ of communications). To provide context for student responses, nonstudent participants (faculty, administrators, and staff) were sampled, as they too express concerns about communication. Over 400 students across all cohorts and 17 nonstudents participated. Data were iteratively analyzed and checked for triangulation. Findings reveal the complex nature of organizational communication and student-oriented communications. They reveal program-impactful strengths, weaknesses, gaps, and tensions and speak to the role of organizational communication practices influencing both climate and culture. With regard to communications, students receive multiple, simultaneous communications from multiple sources/channels, both formal (e.g., official email) and informal (e.g., social media). Students identified organizational strengths including the desire to improve student voice, and message frequency. They also identified weaknesses related to over-reliance on emails, numerous platforms with inconsistent utilization, incorrect information, insufficient transparency, assessment/input fatigue, tacit expectations, scheduling/deadlines, responsiveness, and mental health confidentiality concerns. Moreover, they noted gaps related to lack of coordination/organization, ambiguous point-persons, student ‘voice-only’, open communication loops, lack of core centralization and consistency, and mental health bridges. Findings also revealed organizational identity and cultural characteristics as impactful on the medical school experience. Cultural characteristics included program size, diversity, urban setting, student organizations, community-engagement, crisis framing, learning for exams, inefficient bureaucracy, and professionalism. Moreover, they identified system structures that do not always leverage cultural strengths or reduce cultural problematics. Based on the results, opportunities for productive change are identified. 
These include leadership visibly supporting and enacting overall organizational narratives, making greater efforts to consistently ‘close the loop’, regularly sharing how student input effects change, employing strategies of crisis communication more often, strengthening communication infrastructure, ensuring structures facilitate effective operations and change efforts, and highlighting change efforts in informational communication. Organizational communication and communications are not soft skills or of secondary concern within organizations; rather, they are foundational in nature and serve to educate and inform all stakeholders. As primary stakeholders, students and their success directly affect the accomplishment of organizational goals. This study demonstrates how inquiries into how students navigate their educational experience extend research-based knowledge and provide actionable knowledge for the improvement of organizational operations in UME.

Keywords: medical education programs, organizational communication, participatory research, qualitative mixed methods

Procedia PDF Downloads 96
3 Recent Developments in E-waste Management in India

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay, Ananya Mukhopadhyay, Harendra Nath Bhattacharya

Abstract:

This study investigates the global issue of electronic waste (e-waste), focusing on its prevalence in India and other regions. E-waste has emerged as a significant worldwide problem, with India contributing a substantial share of annual e-waste generation. The primary sources of e-waste in India are computer equipment and mobile phones. Many developed nations use India as a dumping ground for their e-waste, with major contributions from the United States, China, Europe, Taiwan, South Korea, and Japan. The study identifies Maharashtra, Tamil Nadu, Mumbai, and Delhi as prominent contributors to India's e-waste crisis. This issue is contextualized within the broader framework of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals (SDGs) and 169 associated targets addressing poverty, environmental preservation, and universal prosperity. The study underscores the interconnectedness of e-waste management with several SDGs, including health, clean water, economic growth, sustainable cities, responsible consumption, and ocean conservation. Central Pollution Control Board (CPCB) data reveal that e-waste generation surpasses that of plastic waste and is increasing at an annual rate of 31% (a simple compounding illustration follows this abstract). However, only 20% of electronic waste is recycled through organized and regulated methods in underdeveloped nations, and even in Europe the rate of effective e-waste management stands at just 35%. E-waste pollution poses serious threats to soil, groundwater, and public health due to toxic components such as mercury, lead, bromine, and arsenic. Long-term exposure to these toxins, notably arsenic in microchips, has been linked to severe health issues, including cancer, neurological damage, and skin disorders. Lead exposure, particularly concerning for children, can result in brain damage, kidney problems, and blood disorders. The study highlights the problematic transboundary movement of e-waste, with approximately 352,474 metric tonnes of electronic waste illegally shipped from Europe to developing nations annually, mainly to African countries including Nigeria, Ghana, and Tanzania. Effective e-waste management, underpinned by appropriate infrastructure, regulations, and policies, offers opportunities for job creation and aligns with the objectives of the 2030 Agenda for SDGs, especially in the realms of decent work, economic growth, and responsible production and consumption. E-waste represents both hazardous pollutants and valuable secondary resources, making it a focal point for anthropogenic resource exploitation. The United Nations estimates that e-waste holds potential secondary raw materials worth around 55 billion euros. The study also identifies numerous challenges in e-waste management, encompassing the sheer volume of e-waste, child labor, inadequate legislation, insufficient infrastructure, health concerns, the lack of incentive schemes, limited awareness, e-waste imports, the high costs associated with establishing recycling plants, and more. To mitigate these issues, the study offers several solutions, such as providing tax incentives for scrap dealers, implementing reward and reprimand systems for e-waste management compliance, offering training on e-waste handling, promoting responsible e-waste disposal, advancing recycling technologies, regulating e-waste imports, and ensuring the safe disposal of domestic e-waste. One proposed mechanism, buy-back programs, would compensate customers in cash when they deposit unwanted digital products.
Eligible e-waste could include any portable electronic device, such as cell phones, computers, and tablets. Addressing the e-waste predicament necessitates a multi-faceted approach involving government regulations, industry initiatives, public awareness campaigns, and international cooperation to minimize environmental and health repercussions while harnessing the economic potential of recycling and responsible management.
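As a rough, hypothetical illustration of the compounding implied by the cited 31% annual growth rate, the sketch below projects e-waste volumes forward from an assumed baseline tonnage; the starting volume and horizon are invented for demonstration only.

```python
# Minimal sketch: compound growth of e-waste volume at the 31% annual rate cited from CPCB data.
# The baseline tonnage and horizon below are hypothetical, chosen only to illustrate the compounding.

def project_ewaste(base_tonnes: float, annual_growth: float, years: int) -> list[float]:
    """Return projected volumes for year 0..years, assuming a constant growth rate."""
    volumes = [base_tonnes]
    for _ in range(years):
        volumes.append(volumes[-1] * (1.0 + annual_growth))
    return volumes

if __name__ == "__main__":
    projection = project_ewaste(base_tonnes=1_000_000, annual_growth=0.31, years=5)
    for year, tonnes in enumerate(projection):
        print(f"year {year}: {tonnes:,.0f} tonnes")
    # At 31% per year the volume roughly doubles every ~2.6 years (ln 2 / ln 1.31).
```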

Keywords: e-waste management, sustainable development goal, e-waste disposal, recycling technology, buy-back policy

Procedia PDF Downloads 63
2 A Study on the Use Intention of Smart Phone

Authors: Zhi-Zhong Chen, Jun-Hao Lu, Jr., Shih-Ying Chueh

Abstract:

Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), this study investigates people’s intention to use smartphones. The study additionally incorporates two new variables: 'self-efficacy' and 'attitude toward using'. Samples were collected by questionnaire survey, of which 240 were valid. After correlation analysis, reliability testing, ANOVA, t-tests, and multiple regression analysis, the study finds that social impact and self-efficacy have a positive effect on use intention, and that use intention in turn has a positive effect on use behavior.
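As a rough illustration of the multiple regression step described above, the minimal sketch below fits use intention against hypothetical survey scores for the constructs mentioned in the abstract using ordinary least squares; the simulated data, weights, and variable names are assumptions, not the study's dataset or results.

```python
# Minimal sketch of the multiple-regression step, using simulated Likert-style survey data.
# Predictor names mirror the constructs in the abstract; the data are NOT the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 240  # same number of valid questionnaires as reported

# Hypothetical 1-7 Likert scores for social impact, self-efficacy, and attitude toward using
X = rng.integers(1, 8, size=(n, 3)).astype(float)
# Simulated use intention with assumed positive weights on social impact and self-efficacy
y = 0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept column
X_design = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)

for name, b in zip(["intercept", "social impact", "self-efficacy", "attitude toward using"], coefs):
    print(f"{name:>22}: {b:+.3f}")
```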

Keywords: UTAUT, smartphone, use intention, self-efficacy, attitude toward using

Procedia PDF Downloads 382
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance

Authors: Mina Naeini, Thomas A. Adams II

Abstract:

Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which makes them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages of SOFCs over competing power generation units, this technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of degradation mechanisms and of the effects of operating conditions on SOFCs' long-term performance is required. The simplified SOFC models that exist in the current literature usually do not provide realistic results, since they underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although these models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of Yttria Stabilized Zirconia (YSZ) electrolytes, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes. The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anode, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. The model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources. The model is developed as a function of current density and H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model has wide validity over a range of parameters and is consistent across multiple studies by different independent groups. Simulations using the degradation-based model illustrated that SOFC voltage drops significantly in the first 1500 hours of operation. After that, cells exhibit a slower degradation rate.
The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs. In fact, the literature reports very different degradation rates because it is inconsistent in how the degradation rate is defined and calculated. In the literature, the degradation rate has typically been calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Because the voltage profile is nonlinear in time, the magnitude of this degradation rate depends on the size of the time step selected to calculate the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on the degradation rate compared to the other operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrated that a cell running at a lower current density performs better in the long term in terms of the total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and damaging impact of large current densities on the long-term performance of SOFCs, as explained by the model.
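To make the distinction between the windowed definition (slope over an interval, in % per 1000 h) and the instantaneous definition concrete, the following minimal sketch computes both on a synthetic voltage-time curve; the curve shape, time constants, and numbers are illustrative assumptions and are not outputs of the authors' model.

```python
# Minimal sketch: windowed vs. instantaneous degradation rate on a synthetic voltage profile.
# The voltage curve is a made-up stand-in for model output (fast initial drop, then slower decay).
import numpy as np

t = np.linspace(0, 10_000, 1_001)                        # operating time, hours
v = 0.85 - 0.04 * (1 - np.exp(-t / 1_500)) - 2e-6 * t    # synthetic cell voltage, volts

def windowed_rate(t, v, t_start, t_end):
    """Average rate over a window, as % voltage drop per 1000 h (a common literature definition)."""
    v0 = np.interp(t_start, t, v)
    v1 = np.interp(t_end, t, v)
    return (v0 - v1) / v0 * 100.0 / ((t_end - t_start) / 1000.0)

def instantaneous_rate(t, v):
    """Instantaneous rate, -dV/dt expressed as % of local voltage per 1000 h."""
    dvdt = np.gradient(v, t)                              # volts per hour
    return -dvdt / v * 100.0 * 1000.0

print("0-1500 h window      :", round(windowed_rate(t, v, 0, 1_500), 3), "% / 1000 h")
print("1500-10000 h window  :", round(windowed_rate(t, v, 1_500, 10_000), 3), "% / 1000 h")
print("instantaneous @500 h :", round(float(np.interp(500, t, instantaneous_rate(t, v))), 3), "% / 1000 h")
```

Because the synthetic curve flattens after its initial transient, the two window choices give noticeably different average rates, which is the ambiguity the instantaneous definition avoids.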

Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs

Procedia PDF Downloads 114