Search results for: lead free ceramic
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7619

359 Comparing Perceived Restorativeness in Natural and Urban Environment: A Meta-Analysis

Authors: Elisa Menardo, Margherita Pasini, Margherita Brondino

Abstract:

A growing body of empirical research from different areas of inquiry suggests that brief contact with natural environments restores mental resources. Attention Restoration Theory (ART) is the most widely used and empirically founded theory developed to explain why exposure to nature helps people recover cognitive resources. It assumes that contact with nature allows people to free (and then recover) voluntary attention resources and thus to recover from cognitive fatigue. However, it has been suggested that some people could derive more cognitive benefit from exposure to urban environments. The objective of this study is to report the results of a meta-analysis of studies (peer-reviewed articles) comparing the restorativeness (the quality of being restorative) perceived in natural environments with that perceived in urban environments. The meta-analysis was intended to estimate how much more restorative natural environments (forests, parks, boulevards) are perceived to be than urban ones, i.e., the magnitude of the difference in perceived restorativeness. Moreover, given the methodological differences between studies, it examined the potential role of moderator variables such as participants (students or others), instrument used (Perceived Restorativeness Scale or other), and procedure (in the laboratory or in situ). The PsycINFO, PsycARTICLES, Scopus, SpringerLINK, and Web of Science online databases were used to identify all peer-reviewed articles on restorativeness published to date (k = 167). Reference sections of the obtained papers were examined for additional studies. Only 22 independent studies (with a total of 1371 participants) met the inclusion criteria (direct exposure to the environment, comparison between one outdoor environment with natural elements and one without, and restorativeness measured by a self-report scale) and were included in the meta-analysis. To estimate the average effect size, a random-effects model (restricted maximum-likelihood estimator) was used because the included studies were conducted independently, using different methods and different populations, so no common effect size was expected. The presence of publication bias was checked using the trim-and-fill approach. Univariate moderator analyses (mixed-effects models) were run to determine whether the coded variables moderated the difference in perceived restorativeness. Results show that natural environments are perceived to be more restorative than urban environments, confirming empirically what is now considered established knowledge in environmental psychology. The relevant information emerging from this study is the magnitude of the estimated average effect size, which is particularly high (d = 1.99) compared with effects commonly observed in psychology. Significant heterogeneity between studies was found (Q(19) = 503.16, p < 0.001), and the studies' variability was very high (I² [C.I.] = 96.97% [94.61 - 98.62]). Subsequent univariate moderator analyses were not significant: methodological differences (participants, instrument, and procedure) did not explain the variability between studies. Other methodological differences (e.g., research design, environment characteristics, lighting conditions) could explain this variability. Alternatively, the variability may be due not to methodological differences but to individual differences (age, gender, education level) and characteristics (connection to nature, environmental attitude). Further moderator analyses are in progress.
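
As an illustration of the random-effects machinery described above, here is a minimal Python sketch. The abstract uses an REML estimator; the widely used DerSimonian-Laird moment estimator is shown instead for brevity, and the effect sizes and variances are invented for illustration:

```python
import numpy as np

def random_effects_meta(d, v):
    """Random-effects pooling of standardized mean differences.

    d : per-study effect sizes (Cohen's d)
    v : per-study sampling variances
    Uses the DerSimonian-Laird moment estimator of tau^2
    (the study itself used REML; DL is shown for brevity).
    """
    d, v = np.asarray(d, float), np.asarray(v, float)
    k = len(d)
    w = 1.0 / v                             # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)     # fixed-effect pooled estimate
    Q = np.sum(w * (d - d_fixed) ** 2)      # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)      # between-study variance
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100  # % variability due to heterogeneity
    w_re = 1.0 / (v + tau2)                 # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return d_re, se_re, Q, I2

# Hypothetical effect sizes: each study compares perceived
# restorativeness of a natural vs. an urban environment.
d = [2.4, 1.1, 3.0, 1.8, 0.9]
v = [0.12, 0.08, 0.20, 0.10, 0.09]
d_re, se_re, Q, I2 = random_effects_meta(d, v)
print(f"pooled d = {d_re:.2f} +/- {1.96*se_re:.2f}, Q = {Q:.2f}, I2 = {I2:.1f}%")
```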

Keywords: meta-analysis, natural environments, perceived restorativeness, urban environments

Procedia PDF Downloads 147
358 Analysis of Taxonomic Compositions, Metabolic Pathways and Antibiotic Resistance Genes in Fish Gut Microbiome by Shotgun Metagenomics

Authors: Anuj Tyagi, Balwinder Singh, Naveen Kumar B. T., Niraj K. Singh

Abstract:

Characterization of the diverse microbial communities in a specific environment plays a crucial role in better understanding their functional relationship with the ecosystem. It is now well established that the gut microbiome of fish is not a simple replication of the microbiota of the surrounding habitat, and extensive species, dietary, physiological and metabolic variations among fishes may have a significant impact on its composition. Moreover, the overuse of antibiotics in human, veterinary and aquaculture medicine has led to the rapid emergence and propagation of antibiotic resistance genes (ARGs) in the aquatic environment. Microbial communities harboring specific ARGs not only gain a preferential edge during selective antibiotic exposure but also pose a significant risk of ARG transfer to other, non-resistant bacteria within confined environments. This phenomenon may lead to the emergence of habitat-specific microbial resistomes and, subsequently, of virulent antibiotic-resistant pathogens, with severe consequences for fish and consumer health. In this study, the gut microbiota of a freshwater carp (Labeo rohita) was investigated by shotgun metagenomics to understand its taxonomic composition and functional capabilities. Metagenomic DNA, extracted from the fish gut, was sequenced on an Illumina NextSeq to generate paired-end (PE) 2 x 150 bp reads. After QC of the raw sequencing data with Trimmomatic, taxonomic analysis with the Kraken2 taxonomic sequence classification system revealed the presence of 36 phyla, 326 families and 985 genera in the fish gut microbiome. At the phylum level, Proteobacteria accounted for more than three-fourths of the total bacterial populations, followed by Actinobacteria (14%) and Cyanobacteria (3%). Commonly used probiotic bacteria (Bacillus, Lactobacillus, Streptococcus, and Lactococcus) were found to be far less prevalent in the fish gut. After assembly of the sequencing data with the MEGAHIT v1.1.2 assembler and annotation through the PROKKA automated analysis pipeline, pathway analysis revealed the presence of 1,608 MetaCyc pathways in the fish gut microbiome. Biosynthesis pathways were the most dominant (51%), followed by degradation (39%), energy metabolism (4%) and fermentation (2%). Almost one-third (33%) of the biosynthesis pathways were involved in the synthesis of secondary metabolites. Metabolic pathways for the biosynthesis of 35 antibiotic types were also present, accounting for 5% of the overall metabolic pathways in the fish gut microbiome. Fifty-one different types of antibiotic resistance genes (ARGs), belonging to 15 antimicrobial resistance (AMR) gene families and conferring resistance against 24 antibiotic types, were detected in the fish gut. More than 90% of the ARGs in the fish gut microbiome were against beta-lactams (penicillins, cephalosporins, penems, and monobactams). Resistance against tetracyclines, macrolides, fluoroquinolones, and phenicols ranged from 0.7% to 1.3%. Some of the ARGs for multi-drug resistance were also found to be located on sequences of plasmid origin. The presence of pathogenic bacteria and of ARGs on plasmid sequences suggests a potential risk of horizontal gene transfer in the confined gut environment.
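
A minimal sketch of how the read-processing steps named above (Trimmomatic → Kraken2 → MEGAHIT → PROKKA) might be chained from Python. File names, the database path, and the trimming thresholds are placeholders, and the flags follow each tool's standard command-line usage rather than the authors' exact settings:

```python
import subprocess

def run(cmd):
    """Run one pipeline stage, failing loudly if the tool errors out."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

r1, r2 = "gut_R1.fastq.gz", "gut_R2.fastq.gz"  # raw paired-end reads (placeholders)

# 1. Quality-trim raw reads with Trimmomatic (paired-end mode).
run(["trimmomatic", "PE", r1, r2,
     "trim_R1.fq.gz", "unpaired_R1.fq.gz",
     "trim_R2.fq.gz", "unpaired_R2.fq.gz",
     "SLIDINGWINDOW:4:20", "MINLEN:50"])

# 2. Taxonomic classification of trimmed reads with Kraken2.
run(["kraken2", "--db", "/path/to/kraken2_db", "--paired",
     "--report", "taxonomy_report.txt", "--output", "kraken2_hits.txt",
     "trim_R1.fq.gz", "trim_R2.fq.gz"])

# 3. Assemble trimmed reads into contigs with MEGAHIT.
run(["megahit", "-1", "trim_R1.fq.gz", "-2", "trim_R2.fq.gz",
     "-o", "assembly"])

# 4. Annotate the assembled contigs with PROKKA.
run(["prokka", "--outdir", "annotation", "--prefix", "gut",
     "assembly/final.contigs.fa"])
```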

Keywords: antibiotic resistance, fish gut, metabolic pathways, microbial diversity

Procedia PDF Downloads 113
357 A Corpus-based Study of Adjuncts in Colombian English as a Second Language (ESL) Argumentative Essays

Authors: E. Velasco

Abstract:

Meeting high standards of writing in a second language (L2) is extremely important for many students who wish to undertake studies at universities in both English- and non-English-speaking countries. University lecturers in English-speaking countries continue to express dissatisfaction with the apparent poor quality of essay writing skills displayed by English as a Second Language (ESL) students, whose essays are often criticised for their lack of cohesion and coherence. These critiques have extended to contexts such as Colombia, where many ESL students are criticised for their inability to write high-quality academic texts in L2-English, particularly at the tertiary level. If Colombian ESL students are expected to meet high standards of writing when studying locally and abroad, it makes sense to carry out specific research that can lead to recommendations to support their quest for improving argumentative strategies. Employing corpus linguistics methods within a Learner Corpus Research framework, and a combination of Log-Likelihood and Bayes Factor measures, this paper investigated argumentative essays written by Colombian ESL students. The study specifically aimed to analyse conjunctive adjuncts in argumentative essays to find out how Colombian ESL students connect their ideas in discourse. Results suggest that a) Colombian ESL learners need explicit instruction on specific areas of conjunctive adjuncts to counteract overuse, underuse and misuse; b) underuse of endophoric and evidential adjuncts highlights gaps between IELTS-like essays and good-quality tertiary-level essays and published papers, and these gaps are linked to the prior knowledge brought into the writing task, rhetorical functions in writing, and research processes before writing takes place; c) both Colombian ESL learners and L1-English writers (in a reference corpus) overuse some adjuncts and underuse endophoric and evidential adjuncts when compared to skilled L1-English and L2-English writers, so differences in the frequencies of adjuncts have little to do with the writers' L1 and are rather linked to the types of essays writers produce (e.g. ESL vs. university essays). The pedagogical recommendations deriving from the study are that: a) Colombian ESL learners need to be shown that overuse is not the only way of giving cohesion to argumentative essays and that there are other alternatives to cohesion (e.g., implicit adjuncts, lexical chains and collocations); b) syllabi and classroom input need to raise awareness of gaps in writing skills between IELTS-like and tertiary-level argumentative essays, and of how endophoric and evidential adjuncts are used to refer to anaphoric and cataphoric sections of essays, and to other people's work or ideas; c) syllabi and classroom input need to include essay-writing tasks based on previous research/reading which learners need to incorporate into their arguments, and tasks that raise awareness of referencing systems (e.g., APA); d) classroom input needs to include explicit instruction on the use of punctuation, functions and/or syntax with specific conjunctive adjuncts such as for example, for that reason, although, despite and nevertheless.
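
The Log-Likelihood keyness measure mentioned above can be stated concretely. A minimal sketch following the standard Rayson-Garside formulation, with the BIC approximation to the Bayes factor (Wilson); the corpora and word counts are invented for illustration:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Log-likelihood (G2) keyness of one word across two corpora.

    freq_a/freq_b: raw frequencies of the word in corpus A and B
    size_a/size_b: total token counts of corpus A and B
    """
    expected_a = size_a * (freq_a + freq_b) / (size_a + size_b)
    expected_b = size_b * (freq_a + freq_b) / (size_a + size_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2.0 * ll

def bic_bayes_factor(ll, size_a, size_b):
    """BIC approximation to the Bayes factor: positive values
    favour a real frequency difference between the corpora."""
    return ll - math.log(size_a + size_b)

# Hypothetical counts: 'however' in a learner corpus vs. a reference corpus.
ll = log_likelihood(freq_a=312, size_a=150_000, freq_b=940, size_b=1_000_000)
print(f"G2 = {ll:.2f}, BIC-BF = {bic_bayes_factor(ll, 150_000, 1_000_000):.2f}")
```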

Keywords: argumentative essays, colombian english as a second language (esl) learners, conjunctive adjuncts, corpus linguistics

Procedia PDF Downloads 53
356 Surface-Enhanced Raman Detection in Chip-Based Chromatography via a Droplet Interface

Authors: Renata Gerhardt, Detlev Belder

Abstract:

Raman spectroscopy has attracted much attention as a structurally descriptive and label-free detection method. It is particularly suited for chemical analysis, as it is non-destructive and molecules can be identified via the fingerprint region of the spectra. This work investigates how Raman spectroscopy can be integrated as a detection method for chip-based chromatography by making use of a droplet interface. A demanding task in lab-on-a-chip applications is the specific and sensitive detection of analytes at low concentrations in small volumes. Fluorescence detection is frequently utilized but is restricted to fluorescent molecules, and no structural information is provided. Another often-applied technique is mass spectrometry, which enables the identification of molecules based on their mass-to-charge ratio; additionally, the obtained fragmentation pattern gives insight into the chemical structure. However, it is only applicable as end-of-the-line detection because analytes are destroyed during measurement. In contrast to mass spectrometry, Raman spectroscopy can be applied on-chip, and substances can be processed further downstream after detection. A major drawback of Raman spectroscopy is the inherent weakness of the Raman signal, which is due to the small cross-sections associated with the scattering process. Enhancement techniques, such as surface-enhanced Raman spectroscopy (SERS), are employed to overcome the poor sensitivity, even allowing detection at the single-molecule level. In SERS measurements, the Raman signal intensity is improved by several orders of magnitude if the analyte is in close proximity to nanostructured metal surfaces or nanoparticles. The main gain of lab-on-a-chip technology is the building-block-like ability to seamlessly integrate different functionalities, such as synthesis, separation, derivatization and detection, on a single device. We intend to utilize this powerful toolbox to realize Raman detection in chip-based chromatography. By interfacing on-chip separations with a droplet generator, the separated analytes are encapsulated into numerous discrete containers. These droplets can then be injected with a silver nanoparticle solution and investigated via Raman spectroscopy. Droplet microfluidics is a sub-discipline of microfluidics which operates with segmented rather than continuous flow. Segmented flow is created by merging two immiscible phases (usually an aqueous phase and oil), thus forming small discrete volumes of one phase in the carrier phase. The study surveys different chip designs for coupling chip-based chromatography with droplet microfluidics. With regard to maintaining a sufficient flow rate for chromatographic separation and ensuring stable eluent flow over the column, different flow rates of the eluent and oil phases are tested. Furthermore, the detection of analytes in droplets with surface-enhanced Raman spectroscopy is examined. The compartmentalization of separated compounds preserves the analytical resolution, since the continuous phase restricts dispersion between the droplets. The droplets are ideal vessels for the insertion of silver colloids, thus exploiting the surface-enhancement effect and improving the sensitivity of the detection. The long-term goal of this work is the first realization of coupling chip-based chromatography with droplet microfluidics to employ surface-enhanced Raman spectroscopy as the means of detection.

Keywords: chip-based separation, chip LC, droplets, Raman spectroscopy, SERS

Procedia PDF Downloads 222
355 A Practical Methodology for Evaluating Water, Sanitation and Hygiene Education and Training Programs

Authors: Brittany E. Coff, Tommy K. K. Ngai, Laura A. S. MacDonald

Abstract:

Many organizations in the Water, Sanitation and Hygiene (WASH) sector provide education and training in order to increase the effectiveness of their WASH interventions. A key challenge for these organizations is measuring how well their education and training activities contribute to WASH improvements. It is crucial for implementers to understand the returns of their education and training activities so that they can improve and make better progress toward the desired outcomes. The Centre for Affordable Water and Sanitation Technology (CAWST) has developed a methodology for evaluating education and training activities, so that organizations can understand the effectiveness of their WASH activities and improve accordingly; this paper presents CAWST's development and piloting of that evaluation methodology. CAWST developed the methodology through a series of research partnerships, followed by staged field pilots in Nepal, Peru, Ethiopia and Haiti. During the research partnerships, CAWST collaborated with universities in the UK and Canada to review a range of available evaluation frameworks, investigate existing practices for evaluating education activities, and develop a draft methodology for evaluating education programs. The draft methodology was then piloted in three separate studies to evaluate the WASH education programs of CAWST and its partners. Each of the pilot studies evaluated education programs in different locations, with different objectives, and at different times within the project cycles. The evaluations in Nepal and Peru were conducted in 2013 and investigated the outcomes and impacts of CAWST's WASH education services in those countries over the preceding 5-10 years. In 2014, the methodology was applied to complete a rigorous evaluation of a 3-day WASH Awareness training program in Ethiopia, one year after the training had occurred. In 2015, the methodology was applied in Haiti to complete a rapid assessment of a Community Health Promotion program, which informed the development of an improved training program. After each pilot evaluation, the methodology was reviewed and improvements were made. A key concept within the methodology is that in order for training activities to lead to improved WASH practices at the community level, it is not enough for participants to acquire new knowledge and skills; they must also apply the new skills and influence the behavior of others following the training. The steps of the methodology include: development of a Theory of Change for the education program, application of the Kirkpatrick model to develop indicators, development of data collection tools, data collection, data analysis and interpretation, and use of the findings for improvement. The methodology was applied in different ways for each pilot and was found to be practical to apply and to adapt to the needs of each case. It was useful in gathering specific information on the outcomes of the education and training activities, and in developing recommendations for program improvement. Based on the results of the pilot studies, CAWST is developing a set of support materials to enable other WASH implementers to apply the methodology. By using this methodology, more WASH organizations will be able to understand the outcomes and impacts of their training activities, leading to higher-quality education programs and improved WASH outcomes.

Keywords: education and training, capacity building, evaluation, water and sanitation

Procedia PDF Downloads 283
354 Strengths Profiling: An Alternative Approach to Assessing Character Strengths Based on Personal Construct Psychology

Authors: Sam J. Cooley, Mary L. Quinton, Benjamin J. Parry, Mark J. G. Holland, Richard J. Whiting, Jennifer Cumming

Abstract:

Practitioners draw attention to people's character strengths to promote empowerment and well-being. This paper explores the possibility that existing approaches for assessing character strengths (e.g., the Values in Action survey; VIA-IS) could be even more autonomy-supportive and empowering when combined with strengths profiling, an idiographic tool informed by personal construct theory (PCT). A PCT approach ensures that: (1) knowledge is co-created (i.e., the practitioner is not seen as the 'expert' who leads the process); (2) individuals are not required to 'fit' within a prescribed list of characteristics; and (3) individuals are free to use their own terminology and interpretations. A combined strengths profiling and VIA approach was used in a sample of homeless youth (aged 16-25), who are commonly perceived as 'hard to engage' through traditional forms of assessment. Strengths profiling was completed face-to-face in small groups. Participants (N = 116) began by listing a variety of personally meaningful characteristics. Participants gave each characteristic a score out of ten for how important it was to them (1 = not so important; 10 = very important), their ideal competency, and their current competency (1 = poor; 10 = excellent). A discrepancy score was calculated for each characteristic (discrepancy = (ideal score - current score) x importance), whereby a lower discrepancy score indicated greater satisfaction. Strengths profiling was used at the beginning and end of a 10-week positive youth development programme. Experiences were captured through video diary-room entries made by participants and through reflective notes taken by the facilitators. Participants were also asked to complete a pre- and post-programme questionnaire measuring perceptions of well-being, self-worth, and resilience. All of the young people who attended the strengths profiling session agreed to complete a profile, and the majority became highly engaged in the process. Strengths profiling was found to be an autonomy-supportive and empowering experience, with each participant identifying an average of 10 character strengths (M = 10.27, SD = 3.23). In total, 215 different character strengths were identified, with varying terms and definitions that differed greatly between participants and demonstrated the value of soliciting personal constructs. Using the participants' definitions, 98% of characteristics were categorized deductively into the VIA framework. Bravery, perseverance, and hope were the character strengths that featured most, whilst temperance and courage received the highest discrepancy scores. Discrepancy scores were negatively correlated with well-being, self-worth, and resilience, and meaningful improvements were recorded following the intervention. These findings support the use of strengths profiling as a theoretically driven and novel way to engage disadvantaged youth in identifying and monitoring character strengths. When young people are given the freedom to express their own characteristics, the resulting terminologies extend beyond the language used in existing frameworks. This added freedom and control over the process of strengths identification encouraged youth to take ownership of their profiles and apply their strengths. In addition, the ability to transform characteristics post hoc into the VIA framework means that strengths profiling can be used to explore aggregated/nomothetic hypotheses, whilst still benefiting from its idiographic roots.
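
A minimal sketch of the discrepancy calculation described above, with invented profile entries; the (ideal - current) x importance form follows standard performance-profiling practice:

```python
# Hypothetical strengths profile for one participant:
# (characteristic, importance 1-10, ideal 1-10, current 1-10)
profile = [
    ("bravery",      9, 10, 7),
    ("perseverance", 8,  9, 8),
    ("hope",         7, 10, 5),
]

# Discrepancy = (ideal - current) x importance; lower = greater satisfaction.
for name, importance, ideal, current in profile:
    discrepancy = (ideal - current) * importance
    print(f"{name:<13} discrepancy = {discrepancy}")
```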

Keywords: idiographic, nomothetic, positive youth development, VIA-IS, assessment, homeless youth

Procedia PDF Downloads 174
353 Humanizing Industrial Architecture: When Form Meets Function and Emotion

Authors: Sahar Majed Asad

Abstract:

Industrial structures have historically focused on functionality and efficiency, often disregarding aesthetics and human experience. However, a new approach is emerging that prioritizes humanizing industrial architecture and creating spaces that promote well-being, sustainability, and social responsibility. This study explores the motivations and design strategies behind this shift towards more human-centered industrial environments, providing practical guidance for architects, designers, and other stakeholders interested in incorporating these principles into their work. Through in-depth interviews with architects, designers, and industry experts, as well as a review of relevant literature, this study uncovers the reasons for this change in industrial design. The findings reveal that the shift is driven by a desire to create environments that prioritize the needs and experiences of the people who use them. The study identifies strategies such as incorporating natural elements, flexible design, and advanced technologies as crucial to achieving human-centric industrial design. It also emphasizes that effective communication and collaboration among stakeholders are essential for successful human-centered design outcomes. This paper provides a comprehensive analysis of the motivations and design strategies behind the humanization of industrial architecture. It begins by examining the history of industrial architecture, highlighting its focus on functionality and efficiency. The paper then explores the emergence of human-centered design principles in industrial architecture, discussing the benefits of this approach, including the creation of more sustainable and socially responsible environments. The paper explains specific design strategies that prioritize the human experience of industrial spaces. It outlines how incorporating natural elements like greenery and natural lighting can create more visually appealing and comfortable environments for industrial workers. Flexible design solutions, such as movable walls and modular furniture, can make spaces more adaptable to changing needs and promote a sense of ownership and creativity among workers. Advanced technologies, such as sensors and automation, can improve the efficiency and safety of industrial spaces while also enhancing the human experience. To provide practical guidance, the paper offers recommendations for incorporating human-centered design principles into industrial structures. It emphasizes the importance of understanding the needs and experiences of the people who use these spaces and provides specific examples of how natural elements, flexible design, and advanced technologies can be incorporated into industrial structures to promote human well-being. In conclusion, this study demonstrates that the humanization of industrial architecture is a growing trend with great potential for creating more sustainable and socially responsible built environments. By prioritizing the human experience of industrial spaces, designers can create environments that promote well-being, sustainability, and social responsibility. The study provides practical guidance for architects, designers, and other stakeholders interested in incorporating human-centered design principles into their work, demonstrating that a human-centered approach can lead to functional and aesthetically pleasing industrial spaces that promote human well-being and contribute to a better future for all.

Keywords: human-centered design, industrial architecture, sustainability, social responsibility

Procedia PDF Downloads 133
352 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations while adding no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability: written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, which quantifies the alignment of each triangle with its primary axis. This metric provides insight into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process; by addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
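
The paper's method is a GLSL compute shader; as a language-agnostic illustration of the core idea (sweeping a triangle's interior with equidistant, parallel scan-lines and marking each cell at most once), here is a 2D pixel-grid sketch in Python. It is a didactic reduction, not the authors' shader:

```python
import numpy as np

def scanline_rasterize(tri, grid_w, grid_h):
    """Mark grid cells covered by a 2D triangle using equidistant
    horizontal scan-lines through cell centers. Each cell is written
    at most once -- the property the paper exploits on the GPU."""
    grid = np.zeros((grid_h, grid_w), dtype=bool)
    ys = [p[1] for p in tri]
    y0, y1 = int(np.floor(min(ys))), int(np.ceil(max(ys)))
    edges = [(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    for row in range(max(y0, 0), min(y1, grid_h)):
        y = row + 0.5                             # scan-line through cell centers
        xs = []
        for (ax, ay), (bx, by) in edges:
            if (ay <= y < by) or (by <= y < ay):  # edge crosses this line
                t = (y - ay) / (by - ay)
                xs.append(ax + t * (bx - ax))
        if len(xs) == 2:                          # interior span between crossings
            lo, hi = sorted(xs)
            for col in range(max(int(np.ceil(lo - 0.5)), 0),
                             min(int(np.floor(hi - 0.5)) + 1, grid_w)):
                grid[row, col] = True             # each cell touched exactly once
    return grid

tri = [(1.0, 1.0), (14.0, 3.0), (6.0, 12.0)]      # toy triangle
print(scanline_rasterize(tri, 16, 14).astype(int))
```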

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 43
351 Possible Involvement of DNA-methyltransferase and Histone Deacetylase in the Regulation of Virulence Potential of Acanthamoeba castellanii

Authors: Yi H. Wong, Li L. Chan, Chee O. Leong, Stephen Ambu, Joon W. Mak, Priyadashi S. Sahu

Abstract:

Background: Acanthamoeba is a free-living opportunistic protist which is ubiquitously distributed in the environment. Virulent Acanthamoeba can cause fatal encephalitis in immunocompromised patients and potentially blinding keratitis in immunocompetent contact lens wearers. Approximately 24 species have been identified, but only A. castellanii, A. polyphaga and A. culbertsoni are commonly associated with human infections. To date, the precise molecular basis for Acanthamoeba pathogenesis remains unclear. Previous studies reported that Acanthamoeba virulence can be diminished through prolonged axenic culture but revived through serial mouse passages. As no clear explanation for this reversible pathogenesis has been established, we postulate here that the epigenetic regulators DNA-methyltransferases (DNMT) and histone deacetylases (HDAC) could be involved in granting the virulence plasticity of Acanthamoeba spp. Methods: Four rounds of mouse passages were conducted to revive the virulence potential of a virulence-attenuated Acanthamoeba castellanii strain (ATCC 50492). Briefly, each mouse (n=6/group) was inoculated intraperitoneally with Acanthamoeba cells (2 × 10⁵ trophozoites/mouse) and incubated for 2 months. Acanthamoeba cells were isolated from infected mouse organs by culture and subjected to the subsequent mouse passage. In vitro cytopathic, encystment and gelatinolytic assays were conducted to evaluate the virulence characteristics of the Acanthamoeba isolates from each passage. PCR primers targeting the two members (DNMT1 and DNMT2) and five members (HDAC1 to 5) of the DNMT and HDAC gene families, respectively, were custom designed. Quantitative real-time PCR (qPCR) was performed to detect and quantify the relative expression of the two gene families in each Acanthamoeba isolate. Beta-tubulin of A. castellanii (GenBank accession no: XP_004353728) was included as the housekeeping gene for data normalisation. PCR mixtures were also analyzed by electrophoresis for amplicon detection. All statistical analyses were performed using the paired one-tailed Student's t test. Results: Our pathogenicity tests showed that the virulence-reactivated Acanthamoeba had a higher degree of cytopathic effect on Vero cells, greater resistance to encystment challenge and higher gelatinolytic activity, which was catalysed by serine protease. The qPCR assay showed that DNMT1 expression was significantly higher in the virulence-reactivated than in the virulence-attenuated Acanthamoeba strain (p ≤ 0.01). The specificity of the primers targeting DNMT1 was confirmed by sequence analysis of the PCR amplicons, which showed a 97% similarity to the published DNA-methyltransferase gene of A. castellanii (GenBank accession no: XM_004332804.1). Of the five primer pairs targeting the HDAC family genes, only HDAC4 expression was significantly different between the two variant strains. In contrast to DNMT1, HDAC4 expression was much higher in the virulence-attenuated Acanthamoeba strain. Conclusion: Our mouse passages successfully restored the virulence of the attenuated strain. Our findings suggest that DNA-methyltransferase (DNMT1) and histone deacetylase (HDAC4) expression is associated with the virulence potential of Acanthamoeba spp.
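
The abstract normalises target-gene expression to beta-tubulin; a common way to do this for qPCR data is the 2^(-ΔΔCt) method (Livak and Schmittgen), sketched below with invented Ct values, since the study's raw data are not given and its exact quantification formula is not stated:

```python
def relative_expression(ct_target, ct_housekeeping,
                        ct_target_ref, ct_housekeeping_ref):
    """Fold change of a target gene by the 2^(-ddCt) method.

    'ref' values come from the calibrator condition
    (here: the virulence-attenuated strain).
    """
    d_ct_sample = ct_target - ct_housekeeping   # normalise to beta-tubulin
    d_ct_ref = ct_target_ref - ct_housekeeping_ref
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Invented Ct values for DNMT1 in virulence-reactivated vs. attenuated amoebae.
fold = relative_expression(ct_target=24.1, ct_housekeeping=20.3,
                           ct_target_ref=26.7, ct_housekeeping_ref=20.5)
print(f"DNMT1 fold change (reactivated vs. attenuated): {fold:.2f}")
```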

Keywords: acanthamoeba, DNA-methyltransferase, histone deacetylase, virulence-associated proteins

Procedia PDF Downloads 264
350 Generative Syntaxes: Macro-Heterophony and the Form of ‘Synchrony’

Authors: Luminiţa Duţică, Gheorghe Duţică

Abstract:

One of the most powerful language innovations in twentieth-century music was heterophony, a hypostasis of vertical syntax that entered the sphere of interest of many composers, such as George Enescu, Pierre Boulez, Mauricio Kagel, György Ligeti and others. The heterophonic syntax has a history of growth, that is, a succession of different concepts and writing techniques. The trajectory along which this phenomenon settled does not necessarily follow chronology: there are highly complex primary stages and advanced stages that return to simple forms of writing. In folklore, plurimelodic simultaneities are free or random and originate in the (unintentional) differences/'deviations' from the state of unison, through a variety of ornaments, melismas, imitations, elongations and abbreviations, all within a flexible, non-periodic/immeasurable rhythmic framework proper to parlando-rubato rhythmics. Within the general framework of multivocal organization, the heterophonic syntax in its elaborate (academic) version imposed itself relatively late compared with polyphony and homophony. The explanation is simple if we consider the causal relationship between the elements of the sound vocabulary – in this case, modalism – and the typologies of vertical organization appropriate to it. Therefore, completing the 'classic' pathway of writing typologies (monody – polyphony – homophony), heterophony – applied equally to structures of modal, serial or synthetic vocabulary – necessarily claims a macrotemporal form of its own, in the sense of the analogies enshrined by the evolution of musical styles and languages: polyphony→fugue, homophony→sonata. Concerned with the prospect of edifying a new musical ontology, the composer Ştefan Niculescu explored – along with the mathematical organization of heterophony according to his own original methods – the possibility of extrapolating this phenomenon to the macrostructural plane, arriving in this way at the unique form of 'synchrony'. Founded on the coincidentia oppositorum principle (involving the 'one-multiple' binomial), the sound architecture imagined by Ştefan Niculescu consists of one (temporal) model/algorithm articulating two sound states: 1. the monovocality state (principle of identity) and 2. the multivocality state (principle of difference). In this context, heterophony becomes an (auto)generative mechanism of macrotemporal amplitude, a strategy the composer would develop practically throughout his creation (see the works Ison I, Ison II, Unisonos I, Unisonos II, Duplum, Triplum, Psalmus, Héterophonies pour Montreux (Homages to Enescu and Bartók), etc.). For the present demonstration, we selected one of the most edifying works of Ştefan Niculescu – Symphony II, Opus dacicum – in which the form of (heterophony-)synchrony acquires monumental-symphonic features, representing an emblematic case of the level of complexity achieved by this type of vertical syntax in twentieth-century music.

Keywords: heterophony, modalism, serialism, synchrony, syntax

Procedia PDF Downloads 319
349 Red Dawn in the Desert: A World-Systems Analysis of the Maritime Silk Road Initiative

Authors: Toufic Sarieddine

Abstract:

The current debate on the hegemonic impact of China's Belt and Road Initiative (BRI) comprises two opposing strands: resilient and absolute US hegemony on the one hand, and various models of multipolar hegemony, such as bifurcation, on the other. Bifurcation theories describe an unprecedented division of hegemonic functions between China and the US, whereby Beijing becomes the world's economic hegemon, leaving Washington the world's military hegemon and security guarantor. While consensus points to China being the main driver of unipolarity's rupturing, the debate among bifurcationists concerns the location of the first rupture. In this regard, the Middle East and North Africa (MENA) region has seen increasing Chinese foreign direct investment in recent years while that to other regions has declined, ranking it second in 2018 as part of the financing for the Maritime Silk Road Initiative (MSRI). China has also become the top trade partner of 11 states in the MENA region, as well as its top source of machine imports, surpassing the US and achieving an overall trade surplus almost double Washington's. These are among other features outlined in the world-systems analysis (WSA) literature that correspond with the emergence of a new hegemon. WSA is further utilized to gauge other facets of China's increasing involvement in MENA and to assess whether bifurcation is unfolding therein. These features of hegemony include the adoption of China's modus operandi; economic dominance in production, trade, and finance; military capacity; cultural hegemony in ideology, education, and language; and the promotion of a general interest around which to rally potential peripheries (MENA states in this case). China's modus operandi has seen some adoption with regard to support against the United Nations Convention on the Law of the Sea, oil bonds denominated in the yuan, and financial institutions such as the Shanghai Gold Exchange enjoying increasing Arab patronage. However, recent elections in Qatar, as well as liberal reforms in Saudi Arabia, demonstrate Washington's stronger normative influence. Meanwhile, Washington's economic dominance is challenged by China's sizable machine exports, increasing overall imports, and widening trade surplus, but retains some clout via dominant arms and transport exports, as well as free-trade deals across the region. Militarily, Washington bests Beijing in arms exports, has a dominant and well-established presence in the region, and successfully blocked Beijing's attempt to penetrate through the UAE. Culturally, Beijing enjoys higher favorability in Arab public opinion, and its broadcast networks have found some resonance with Arab audiences. In education, the West remains MENA students' preferred destination. Further, while Mandarin has become increasingly available in schools across MENA, its usage and availability still lag far behind English. Finally, Beijing's general interest in infrastructure provision and in prioritizing economic development over social justice and democracy provides an avenue for increased incorporation between Beijing and the MENA region. The overall analysis shows solid progress towards bifurcation in MENA.

Keywords: belt and road initiative, hegemony, Middle East and North Africa, world-systems analysis

Procedia PDF Downloads 82
348 Energy Metabolism and Mitochondrial Biogenesis in Muscles of Rats Subjected to Cold Water Immersion

Authors: Bosiacki Mateusz, Anna Lubkowska, Dariusz Chlubek, Irena Baranowska-Bosiacka

Abstract:

Exposure to cold temperatures can be considered a stressor that leads to adaptive responses. The present study hypothesized a positive effect of cold-water exercise on mitochondrial biogenesis and muscle energy metabolism in aging rats. The purpose of this study was to evaluate the effects of cold-water exercise on energy status, purine compounds, and mitochondrial biogenesis in the muscles of aging rats, and the usefulness of these measures in monitoring adaptive changes. The study was conducted on 64 aging rats of both sexes, 15 months old at the time of the experiment. The rats (males and females separately) were randomly assigned to the following study groups: control, sedentary animals; 5°C groups, animals training by swimming in cold water at 5°C; and 36°C groups, animals training by swimming in water at thermal comfort temperature. The study was conducted with the approval of the Local Ethical Committee for Animal Experiments. The animals were subjected to swimming training for 9 weeks. During the first week of the study, the duration of the first swimming session was 2 minutes (on the first day), increasing daily by 0.5 minutes up to 4 minutes on the fifth day of the first week. From the second week onward, the swimming training was 4 minutes per day, five days a week. At the end of the study, forty-eight hours after the last swimming session, the animals were dissected. In the thigh skeletal muscle tissue of the rats, we determined the concentrations of ATP, ADP, AMP, and adenosine (HPLC), PGC-1a protein expression (Western blot), and PGC1A, Mfn1, Mfn2, Opa1, and Drp1 gene expression (qRT-PCR). The study showed that swimming in water at a thermally comfortable temperature improved the energy metabolism of the aging rat muscles by increasing the metabolic rate (increases in ATP, ADP, the total adenine nucleotide pool (TAN) and the adenylate energy charge (AEC)) and enhancing mitochondrial fusion (increased mRNA expression of the regulatory proteins Mfn1 and Mfn2). Cold-water swimming improved muscle energy metabolism in aging rats by increasing the rate of muscle energy metabolism (increases in ATP, ADP, TAN, and AEC concentrations) and enhancing mitochondrial biogenesis and dynamics (increased mRNA expression of the fusion-regulating factors Mfn1, Mfn2, and Opa1, and of the mitochondrial fission-regulating factor Drp1). The concentration of high-energy compounds and the expression of proteins regulating mitochondrial dynamics in the muscle may be useful indicators for monitoring the adaptive changes occurring in aging muscles under the influence of exercise in cold water. This represents a short-term adaptation to changing environmental conditions and has a beneficial effect on maintaining the bioenergetic capacity of muscles in the long term. Conclusion: Exercise in cold water can exert positive effects on the energy metabolism, biogenesis and dynamics of mitochondria in aging rat muscles. Enhancement of mitochondrial dynamics under cold-water exercise conditions can improve mitochondrial function and optimize the bioenergetic capacity of mitochondria in aging rat muscles.
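
The TAN and AEC indices reported above follow standard definitions (TAN = ATP + ADP + AMP; Atkinson's adenylate energy charge = (ATP + 0.5·ADP)/TAN). A minimal sketch with invented concentrations, as the study's raw values are not given:

```python
def adenylate_indices(atp, adp, amp):
    """Total adenine nucleotides (TAN) and Atkinson's adenylate
    energy charge (AEC) from nucleotide concentrations
    (all in the same units, e.g. nmol/mg protein)."""
    tan = atp + adp + amp
    aec = (atp + 0.5 * adp) / tan
    return tan, aec

# Invented muscle concentrations for a control and a cold-trained rat.
for label, (atp, adp, amp) in {"control": (4.8, 1.1, 0.25),
                               "5°C swim": (5.9, 1.3, 0.20)}.items():
    tan, aec = adenylate_indices(atp, adp, amp)
    print(f"{label:>9}: TAN = {tan:.2f}, AEC = {aec:.3f}")
```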

Keywords: cold water immersion, adaptive responses, muscle energy metabolism, aging

Procedia PDF Downloads 59
347 Surveying Adolescent Males in India Regarding Mobile Phone Use and Sexual and Reproductive Health Education

Authors: Rohan M. Dalal, Elena Pirondini, Shanu Somvanshi

Abstract:

Introduction: The current state of reproductive health outcomes in lower-income countries is poor, with inadequate knowledge among adolescent boys, who have traditionally not been a priority target. To explore the opportunity to provide adolescent boys in the developing world with accurate reproductive health information, this study investigates how they engage with and use technology, particularly cell phones. This electronic survey and video interview study was conducted to determine the feasibility of a mobile phone platform for an educational video game specifically designed for boys that would improve health knowledge, influence behavior, and change health outcomes, namely teen pregnancies. Methods: With the assistance of Plan India, a subsidiary of Plan International, informed consent was obtained from the parents of adolescent males who participated in an electronic survey and in video interviews via Microsoft Teams. The electronic survey comprised 27 questions covering mobile phone usage, gaming preferences, and sexual and reproductive health, with a sample size of 181 adolescents, ages 11-25, near New Delhi, India. The interview questions were written to explore topics in more depth after completion of the electronic survey. Eight boys, aged 15, were interviewed for 40 minutes about gaming, usage of mobile phones, and sexual and reproductive health. Data/Results: 154 boys and 27 girls completed the survey, rating their English fluency as relatively high. 97% of boys (149/154) had access to mobile phones. The majority of phones were smartphones (97%, 143/148). 48% (71/149) of boys borrowed cell phones. The most popular phone platform was Samsung (22%, 33/148). 36% (54/148) of adolescent males looked at their phones 1-10 times per day for 1-2 hours. 55% (81/149) of the boys had parental restrictions. 51% (76/148) had 32 GB of storage on their phone. 78% (117/150) of the boys had wifi access. 80% (120/150) of respondents reported ease in downloading apps. 97% (145/150) of male adolescents used social media, including WhatsApp, Facebook, and YouTube. 58% (87/150) played video games. Favorite video games included Free Fire, PubG, and other shooting games. In the video interviews, the boys revealed what made games fun and engaging: customized avatars, progression to higher levels, realistic interactive platforms, shooting/guns, the ability to perform multiple actions, and a variety of worlds/settings/adventures. Ideas to improve engagement in sexual and reproductive health classes included open discussions in the community, enhanced access to information, and posting on social media. Conclusion: This study, involving an electronic survey and video interviews, provides an initial foray into understanding mobile phone usage among adolescent males and sexual and reproductive health education in New Delhi, India. The data gathered support using mobile phone platforms and will inform the creation of a serious video game to educate adolescent males about sexual and reproductive health, in an attempt to lower the rate of unwanted pregnancies in the world.

Keywords: adolescent males, India, mobile phone, sexual and reproductive health

Procedia PDF Downloads 104
346 The Technique of Mobilization of the Colon for Pull-Through Procedure in Hirschsprung's Disease

Authors: Medet K. Khamitov, Marat M. Ospanov, Vasiliy M. Lozovoy, Zhenis N. Sakuov, Dastan Z. Rustemov

Abstract:

In children with Hirschsprung's disease and a high rectosigmoid transitional zone, the superior rectal, sigmoid, and left colic arteries are ligated during the pull-through of the descending colon. As a result, the inferior mesenteric artery ceases to participate in the blood supply of the descending colon, which is then supplied only by the middle colic artery originating from the superior mesenteric artery. Insufficient blood supply to the pulled-through colon causes chronic hypoxia of the intestinal wall or necrosis of the pulled-through descending colon. Some surgeons prefer to preserve the left colic artery; however, this can stretch the mesentery, which may lead to bowel retraction, anastomotic leaks, and stenosis. Chronic hypoxia of the pulled-through colon, in turn, is a cause of acquired (secondary) aganglionosis. The highest frequency of anastomotic leaks is observed in children older than five years. The purpose of this work is to reduce the risk of complications of the pull-through procedure for the descending colon in patients with Hirschsprung's disease by ensuring its sufficient mobility while maintaining blood supply from the inferior mesenteric artery. Methodology and procedure: Two children, aged 5 and 7 years, with Hirschsprung's disease were operated on at the hospital in Nur-Sultan. The diagnosis was made using an x-ray contrast enema and histological examination. Operative technique: After revision of the left colon and assessment of the architectonics of its blood vessels via laparotomy access, parietal mobilization of the affected sigmoid colon and rectum was performed while preserving the arterial and venous terminal arcades of the sigmoid vessels. Then, the descending branch of the left colic artery was divided (if the length of the pulled-through bowel is insufficient, the left colic artery itself may also be divided). This maneuver provides additional mobility of the pulled-through descending colon. The resulting 'windows' in the mesentery of the pulled-through bowel were sutured to prevent the development of an internal hernia. A well-perfused, sufficiently long graft, formed from the transverse colon at the splenic flexure and the descending colon and supplied by both the superior and inferior mesenteric arteries, was brought down freely, without tension, to the rectal zone, with a coloanal anastomosis 1.5 cm above the dentate line. Results: The postoperative period was uneventful, and the patients were discharged on the 7th day. Observation was carried out for six months. In no case was there bowel retraction, anastomotic leak, anastomotic stenosis, or any other complication. Conclusion: The presented technique of colon mobilization for the pull-through procedure in Hirschsprung's disease with a high rectosigmoid transitional zone maintains normal blood supply to the distal colon and avoids tension on it, reducing the risk of anastomotic leak, bowel necrosis, and chronic ischemia, and preventing colon retraction and anastomotic stenosis.

Keywords: blood supply, children, colon mobilization, Hirschsprung's disease, pull-through

Procedia PDF Downloads 125
345 An Indispensable Parameter in Lipid Ratios to Discriminate between Morbid Obesity and Metabolic Syndrome in Children: High Density Lipoprotein Cholesterol

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity is a low-grade inflammatory disease and may lead to health problems such as hypertension, dyslipidemia, and diabetes. It is also associated with important risk factors for cardiovascular diseases. This requires the detailed evaluation of obesity, particularly in children. The aim of this study is to elucidate potential associations between lipid ratios and obesity indices and to identify those with discriminating features among children with obesity and metabolic syndrome (MetS). A total of 408 children (aged between six and eighteen years) participated in the study. Informed consent forms were obtained from the participants and their parents, and Ethical Committee approval was obtained. Anthropometric measurements such as weight and height, as well as waist, hip, head, and neck circumferences and body fat mass, were taken. Systolic and diastolic blood pressure values were recorded. Body mass index (BMI), diagnostic obesity notation model assessment index-II (D2 index), and waist-to-hip and head-to-neck ratios were calculated. Total cholesterol, triglycerides, high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) analyses were performed on blood samples drawn from 110 children with normal body weight, 164 morbidly obese (MO) children, and 134 children with MetS. Age- and sex-adjusted BMI percentiles tabulated by the World Health Organization were used to classify the groups: normal body weight, MO, and MetS. The 15th-to-85th percentiles were used to define normal-body-weight children; children whose values were above the 99th percentile were classified as MO. MetS criteria were defined. Data were evaluated statistically with SPSS Version 20, and the degree of statistical significance was accepted as p≤0.05. Mean±standard deviation values of BMI for normal-body-weight children, MO children, and those with MetS were 15.7±1.1, 27.1±3.8, and 29.1±5.3 kg/m², respectively. Corresponding values for the D2 index were 3.4±0.9, 14.3±4.9, and 16.4±6.7. Both BMI and the D2 index were capable of discriminating the groups from one another (p≤0.01). As far as the other obesity indices were concerned, the waist-to-hip and head-to-neck ratios did not exhibit any statistically significant difference between the MO and MetS groups (p≥0.05). The D2 index was correlated with the triglycerides-to-HDL-C ratio in the normal-body-weight and MO groups (r=0.413, p≤0.01 and r=0.261, p≤0.05, respectively). The total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios showed statistically significant differences between normal body weight and MO, as well as between MO and MetS (p≤0.05). The only group in which these two ratios were significantly correlated with the waist-to-hip ratio was the MetS group (r=0.332 and r=0.334, p≤0.01, respectively). The lack of correlation between the D2 index and the triglycerides-to-HDL-C ratio was another important finding in the MetS group. In this study, parameters and ratios whose associations with increased cardiovascular risk or cardiac death were defined previously have been evaluated along with obesity indices in children with morbid obesity and MetS, and their profiles during childhood have been investigated. Aside from the nature of the correlation between the D2 index and the triglycerides-to-HDL-C ratio, the total cholesterol-to-HDL-C and LDL-C-to-HDL-C ratios, together with their correlations with the waist-to-hip ratio, showed that a combination of obesity-related parameters predicts better than any single parameter and appears helpful in discriminating MO children from the MetS group.
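
The ratio-based comparisons above are straightforward to reproduce on any cohort table. A minimal sketch with invented values (the D2 index formula is specific to the cited work and is not reproduced here; only the standard lipid ratios and a Pearson correlation are shown):

```python
import numpy as np
from scipy import stats

# Invented per-child lipid panel (mg/dL) and waist/hip circumferences (cm).
total_chol = np.array([160, 185, 201, 176, 220, 195])
hdl        = np.array([ 55,  42,  38,  48,  35,  40])
ldl        = np.array([ 88, 118, 135, 102, 150, 128])
waist      = np.array([ 61,  78,  84,  70,  92,  80])
hip        = np.array([ 74,  88,  92,  82,  99,  90])

tc_hdl  = total_chol / hdl          # total cholesterol-to-HDL-C ratio
ldl_hdl = ldl / hdl                 # LDL-C-to-HDL-C ratio
whr     = waist / hip               # waist-to-hip ratio

# Pearson correlation of each lipid ratio with the waist-to-hip ratio,
# analogous to the r values reported for the MetS group.
for name, ratio in [("TC/HDL-C", tc_hdl), ("LDL-C/HDL-C", ldl_hdl)]:
    r, p = stats.pearsonr(ratio, whr)
    print(f"{name:>12} vs waist-to-hip: r = {r:.3f}, p = {p:.3f}")
```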

Keywords: children, lipid ratios, metabolic syndrome, obesity indices

Procedia PDF Downloads 138
344 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution, or development, of humans' representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of 'thing') means to use, in the mind or in an external medium, a sign that stands for it. The sign can be used as a proxy for the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble what they represent to abstract symbols that are related to their representata through conventions. Relying on the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. This examination should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence, synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo erectus was able to use both icons and symbols: icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a purely causal sensory-motor schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemas are tied to specific contexts of organism-environment interaction and are activated only within these contexts. For a representation of an object to be possible, this schema must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to the selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups, to the benefit of the group, was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities required for having iconic and then symbolic representations were present in Homo erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of the more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increased cranial size and a restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 29
343 Role of Baseline Measurements in Assessing Air Quality Impact of Shale Gas Operations

Authors: Paula Costa, Ana Picado, Filomena Pinto, Justina Catarino

Abstract:

Environmental impact associated with large-scale shale gas development is of major concern to the public, policy makers and other stakeholders. To assess this impact on the atmosphere, it is important to monitor ambient air quality prior to and during all shale gas operation stages. Baseline observations can provide a benchmark of the pre-development state of the environment. The lack of baseline concentrations has been identified as an important knowledge gap in assessing the impact of air emissions due to shale gas operations. In fact, baseline air quality monitoring is missing in several regions where there is a strong possibility of future shale gas exploration. This makes it difficult to properly identify, quantify and characterize the environmental impacts that may be associated with shale gas development. The implementation of a baseline air monitoring program is imperative to be able to assess the total emissions related to shale gas operations; indeed, any monitoring programme should be designed to provide indicative information on background levels. A baseline air monitoring program should identify and characterize targeted air pollutants, most frequently described from monitoring and emission measurements as well as those expected from hydraulic fracturing activities, and establish ambient air conditions prior to the start-up of potential emission sources from shale gas operations. This program has to be planned for at least one year, to account for ambient variations. In the literature, in addition to GHG emissions of CH4, CO2 and nitrogen oxides (NOx), fugitive emissions from shale gas production can release volatile organic compounds (VOCs), aldehydes (formaldehyde, acetaldehyde) and hazardous air pollutants (HAPs). The VOCs include, among others, benzene, toluene, ethyl benzene, xylenes, hexanes, 2,2,4-trimethylpentane and styrene. The concentrations of six air pollutants (ozone, particulate matter (PM), carbon monoxide (CO), nitrogen oxides (NOx), sulphur oxides (SOx), and lead), whose regional ambient air levels are regulated by the Environmental Protection Agency (EPA), are often discussed. However, the main concern among the air emissions associated with shale gas operations seems to be the leakage of methane, which is identified as a compound of major concern due to its strong global warming potential. The identification of methane leakage from shale gas activities is complex due to the existence of several other CH4 sources (e.g. landfills, agricultural activity or gas pipelines/compressor stations). An integrated monitoring study of methane emissions may be a suitable means of distinguishing the contributions of different methane sources to ambient levels. All data need to be interpreted carefully, also taking into account the meteorological conditions of the site; this may require the implementation of a more intensive monitoring programme. It is therefore essential to develop a low-cost sampling strategy suitable for establishing pre-operations baseline data, as well as an integrated monitoring program to assess the emissions from shale gas operation sites. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 640715.

Keywords: air emissions, baseline, greenhouse gases, shale gas

Procedia PDF Downloads 305
342 Chemical and Electrochemical Syntheses of Two Organic Components of Ginger

Authors: Adrienn Kiss, Karoly Zauer, Gyorgy Keglevich, Rita Molnarne Bernath

Abstract:

Ginger (Zingiber officinale) is a perennial plant from Southeast Asia, widely used as a spice, herb, and medicine for many illnesses since its beneficial health effects were observed thousands of years ago. Among the compounds found in ginger, zingerone [4-(4-hydroxy-3-methoxyphenyl)-2-butanone] deserves special attention: it has anti-inflammatory and antispasmodic effects, can be used in cases of diarrheal disease, helps to prevent the formation of blood clots, has antimicrobial properties, and may also play a role in preventing Alzheimer's disease. Ferulic acid [(E)-3-(4-hydroxy-3-methoxyphenyl)prop-2-enoic acid] is another cinnamic acid derivative in ginger with promising properties. Like many phenolic compounds, ferulic acid is an antioxidant. Based on the results of animal experiments, it is assumed to have a direct antitumoral effect in lung and liver cancer. It also deactivates free radicals that can damage the cell membrane and the DNA, and it helps to protect the skin against UV radiation. The aim of this work was to synthesize these two compounds by new methods. Some of the reactions were based on the hydrogenation of dehydrozingerone [4-(4-hydroxy-3-methoxyphenyl)-3-buten-2-one] to zingerone. Dehydrozingerone can be synthesized by a relatively simple method from acetone and vanillin in good yield (80%, melting point: 41 °C). Hydrogenation can be carried out chemically, for example by reaction with zinc and acetic acid, or with Grignard magnesium and ethyl alcohol. Another way to perform the reduction is the electrochemical pathway. The electrolysis of dehydrozingerone without a diaphragm in aqueous media was attempted to produce ferulic acid, in the presence of sodium carbonate and potassium iodide, using platinum electrodes. The electrolysis of dehydrozingerone in the presence of potassium carbonate and acetic acid to prepare zingerone was carried out similarly. Ferulic acid was expected to be converted to dihydroferulic acid [3-(4-hydroxy-3-methoxyphenyl)propanoic acid] in potassium hydroxide solution using iron electrodes, separating the anode and cathode spaces with a Soxhlet paper sheath impregnated with saturated magnesium chloride solution. For this reaction, ferulic acid was synthesized from vanillin and malonic acid in the presence of pyridine and piperidine (yield: 88.7%, melting point: 173 °C). Unfortunately, in many cases the expected transformations did not happen or took place at low conversions, although gas evolution occurred; thus, a deeper understanding of these experiments and further optimization are needed. Since both compounds are found in different plants, they can also be obtained by alkaline extraction or steam distillation from distinct plant parts (ferulic acid from ground bamboo shoots, zingerone from grated ginger root). The extracts obtained this way are rich in several other organic compounds as well; therefore, their separation must be solved to obtain the desired pure material. The products of the reactions described above were characterized by infrared spectral data and melting points; these two simple methods are informative for confirming the formation of the products. In the future, we would like to study the ferulic acid and zingerone content of other plants and extract them efficiently. The optimization of the electrochemical reactions and the use of other test methods are also among our plans.

Keywords: ferulic acid, ginger, synthesis, zingerone

Procedia PDF Downloads 153
341 Clinicomycological Pattern of Superficial Fungal Infections among Primary School Children in Communities in Enugu, Nigeria

Authors: Nkeiruka Elsie Ezomike, Chinwe L. Onyekonwu, Anthony N. Ikefuna, Bede C. Ibe

Abstract:

Superficial fungal infections (SFIs) are among the common cutaneous infections that affect children worldwide. They may lead to school absenteeism or drop-out and hence set back the education of the child. Community-based studies in any locality are good reflections of the health conditions within that area. There is a dearth of information in the literature about SFIs among primary school children in Enugu. This study aimed to determine the clinicomycological pattern of SFIs among primary school children in rural and urban communities in Enugu. This was a comparative descriptive cross-sectional study among primary school children in Awgu (rural) and Enugu North (urban) Local Government Areas (LGAs). Subjects were selected over 6 months using a multi-stage sampling method. Information such as age, sex, parental education, and occupation was collected using questionnaires. The socioeconomic classes of the children were determined using the classification proposed by Oyedeji et al. Samples were collected from subjects with SFIs, and potassium hydroxide tests were done on them. Samples that tested positive were cultured by inoculation onto Sabouraud's dextrose chloramphenicol actidione agar. The isolates were identified according to their morphological features using Mycology Online, Atlas 2000, and Mycology Review 2003. Equal numbers of children were recruited from the two LGAs, for a total of 1662 pupils. The mean ages of the study subjects were 9.03 ± 2.10 years in the rural and 10.46 ± 2.33 years in the urban communities. The male-to-female ratio was 1.6:1 in the rural and 1:1.1 in the urban communities. The personal hygiene of the children was significantly related to the presence of SFIs. The overall prevalence of SFIs among the study participants was 45%: 29.6% in the rural and 60.4% in the urban communities. The types of SFIs were tinea capitis (the commonest), tinea corporis, pityriasis versicolor, tinea unguium, and tinea manuum, with prevalence rates lower in rural than in urban communities. The clinical patterns were the gray patch and black dot types of non-inflammatory tinea capitis, kerion, tinea corporis with trunk and limb distributions, and pityriasis versicolor with face, trunk and limb distributions. Gray patch was the most frequent pattern of SFI seen in both rural and urban communities; the black dot type was more frequent in rural than in urban communities. SFIs were frequent among children aged 5 to 8 years in the rural and 9 to 12 years in the urban communities. SFIs were more common in males in the rural communities, whereas a female predominance was observed in the urban ones. SFIs were also more common in children from a low social class and in those with poor hygiene. Trichophyton tonsurans and Trichophyton soudanense were the common mycological isolates in the rural and urban communities, respectively. In conclusion, SFIs were less prevalent in rural than in urban communities, and Trichophyton species were the most common fungal isolates. Health education of mothers and their children on SFIs and good personal hygiene will reduce the incidence of SFIs.

Keywords: clinicomycological pattern, communities, primary school children, superficial fungal infections

Procedia PDF Downloads 102
340 The Interrelation of Institutional Care and Successful Aging

Authors: Naphaporn Sapsopha

Abstract:

The aging population has been growing rapidly in Thailand. Due to several factors, namely the declining size of the average Thai family, changing family structure, higher survival rates of women, and job migration patterns, there are fewer working-age citizens able to care for and support their aging family members. When a family can no longer provide for its elders, the responsibility shifts to the government. Many non-profit institutional care facilities for older adults have already been established, but having such institutions is not enough. In addition to the reliable shelter such facilities provide, older adults also need efficient social services, physical wellness, and mental health, all of which are crucial for successful aging. Yet, to date, there is no consensus or well-accepted definition of what constitutes successful aging. The issue is further complicated by cultural expectations and the gendered experience of older adults. These issues need to be better understood to promote effective care and wellness. This qualitative research investigates the relationship between institutional care and successful aging among institutionalized Thai older adults at a non-profit facility in Bangkok, Thailand. Specifically, it examines: a) How do institutionalized older adults define successful aging? b) What factors do they believe contribute to successful aging? and c) Do their beliefs vary by gender? Data were collected using a phenomenological research approach that included focus groups and in-depth interviews with open-ended questions, conducted with 10 institutionalized older adults (5 men and 5 women) aged 60 or over. Interview transcripts were coded and analyzed using grounded theory methodology. The participants were aged between 70 and 91 years and varied in terms of gender, education, occupation, and life background. The results revealed that Thai institutionalized older adults viewed successful aging as the result of multiple interrelated factors: maintaining physical health and good mental and cognitive abilities. Remarkably, the attributes participants identified as successful aging include independence in self-care and financial support, adherence to moral principles and religious practice, seeing the success of their loved ones, and making social contributions to their community. In addition, three primary themes were identified as coping strategies for aging successfully: self-acceptance, by being sufficient and satisfied with all aspects of life; preparedness and adaptation for every stage of life; and self-esteem, by maintaining a sense of self. These beliefs are shared across gender and age differences. However, participants highlighted the interrelationship among these attributes, as well as the need for a secure environment and for the thoughtfulness and social support of institutional care, in order to maintain a positive attitude and well-being. As the Thai population ages rapidly, many older adults will find themselves living in institutional care; it is therefore important to understand thoroughly how older adults view successful aging, what constitutes it, and what can be done to promote it. Interventions to enhance successful aging may include meaningful practice along with effective coping strategies, in order to improve the quality of life of those living in institutional care.

Keywords: institutional care, older adults, self-acceptance, successful aging

Procedia PDF Downloads 297
339 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and it is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. The top-rated wells, in terms of high achieved ROP, are then distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions, not explored before through offset wells. The data are then consolidated into a heat-map as a function of ROP; a more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
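As an illustration of the first-phase parameter estimation, the sketch below computes drilling parameters as an IDW-conditioned mean over offset wells; the coordinates, parameter values, and power exponent p are hypothetical, not taken from the paper.

```python
import numpy as np

def idw_parameters(offset_locations, offset_params, target_location, p=2.0):
    """Estimate drilling parameters (e.g., WOB, RPM, GPM) at a target well
    as an Inverse Distance Weighted mean of offset-well values.

    offset_locations : (n, 2) array of offset-well coordinates
    offset_params    : (n, k) array of parameter values per offset well
    target_location  : (2,) coordinates of the planned well
    p                : IDW power exponent (assumed; commonly 1-3)
    """
    d = np.linalg.norm(offset_locations - target_location, axis=1)
    if np.any(d == 0):                      # target coincides with an offset well
        return offset_params[np.argmin(d)]
    w = 1.0 / d**p                          # closer wells weigh more
    return (w[:, None] * offset_params).sum(axis=0) / w.sum()

# Hypothetical example: three offset wells, parameters (WOB [klbf], RPM, GPM)
locs = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 2.0]])
params = np.array([[25.0, 120.0, 800.0],
                   [28.0, 130.0, 850.0],
                   [22.0, 110.0, 780.0]])
print(idw_parameters(locs, params, np.array([0.8, 0.8])))
```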

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 101
338 The Network Effect on Green Information on Taiwan Social Network Sites

Authors: Pi Hsia Liang

Abstract:

The rise of Facebook, Twitter, and other social networks has significantly changed the interconnections between people, enhancing the process of information dissemination and amplifying the influence of that information. Therefore, developing informational efficiency, or a signaling-equilibrium type of information environment, among social networks without adverse selection effects becomes an important issue, since someone may deliberately post information serving a personal interest in an attempt to create marginal influence. Economists are thus seeking to establish theories of informational efficiency in the social network environment in order to resolve adverse selection issues. Reputation could be one of the important factors in the process of creating informational efficiency. Additionally, how investors process green information, that is, information on corporate social responsibility, is an important subject of study. This study employs an experimental design to examine how investors use stock-relevant green information on Facebook and various local Taiwanese networks. Facebook and the blogs of Money DJ, Technews and cnYES, respectively, are the primary sites for this examination, which also allows effects on Facebook to be differentiated from those on local social networks. A questionnaire was developed for this experimental test; it allows the study to group respondents by, for example, decision frequency and length of time spent on social networks, which are used to discriminate investor type and the competence of informed investors. The study selects 500 investors, separated into a control group of 250 and an experimental group of 250; this sample size is sufficient for the statistical significance of the experimental study. The empirical results can be used to explain how financial information related to corporate social responsibility is disseminated on social websites, leading to better interpretation of price/earnings-relationship studies and of empirical studies of the usefulness of green information or of informational efficiency; note that earlier empirical studies of this kind considered neither social networks nor corporate social responsibility annual reports. This study expects to find that both network degree and network clustering significantly affect green information dissemination frequency; in other words, investors with more connections and with highly clustered connections might exert a greater influence on the green information dissemination process. Frequent users of financial social networks could make better stock decisions, which could amplify the effects of green information. In addition, Facebook is expected to be more influential than local Taiwanese financial social networks, even though Facebook is not a specialized financial social network; in other words, the popularity and reputation effects of Facebook contribute significantly to the usefulness and influence of green information. Third, rumors or deceptive information are more likely to be found in local Taiwanese financial social networks than on Facebook; in other words, Facebook possesses a reputation effect, or better informational efficiency. Alternatively, even though local Taiwanese financial social networks have marginal informational effects on stock prices, the shortage of informational efficiency or of a monitoring system means that information could become a tool for those possessing superior information.
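To make the hypothesized network measures concrete, the sketch below (not from the paper) computes degree and clustering coefficients for investors in a toy follower graph using the networkx library:

```python
import networkx as nx

# Toy undirected graph of investors; edges denote follower/friend ties (hypothetical)
G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

degree = dict(G.degree())        # number of connections per investor
clustering = nx.clustering(G)    # how tightly each investor's neighbors interlink

for node in G.nodes():
    print(node, degree[node], round(clustering[node], 2))
```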

Keywords: network effect on financial services, informational efficiency theory, social networks, social websites

Procedia PDF Downloads 217
337 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is a recent and promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue in using scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2×10⁻⁶ mm³ was developed. A prototype of the dose verification system was built based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA water tank phantoms, following the international Technical Report Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters, such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides an entirely linear response with dose in the 4 cGy to 800 cGy range, independently of the field size selected, from 5 x 5 cm² down to 0.5 x 0.5 cm². Excellent repeatability (0.2% variation from the average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD response is linear with dose rate (R² = 1) from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for different small-field irradiations. Field profiles as small as 0.5 x 0.5 cm² were characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, accounting for energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. It is therefore anticipated that the IR-ISD system can proceed to validation in direct clinical investigations, such as dose verification and quality control in the Treatment Planning System (TPS).
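For reference, the PDD quantity reported above is conventionally obtained by normalizing the depth-dose readings to the build-up maximum. A minimal sketch follows, using hypothetical readings rather than the paper's measurements:

```python
import numpy as np

# Hypothetical depth-dose readings (relative count rate vs. depth in water)
depth_mm = np.array([0, 5, 10, 15, 20, 50, 100, 150, 200])
reading = np.array([0.45, 0.80, 0.96, 1.00, 0.98, 0.86, 0.66, 0.50, 0.38])

pdd = 100.0 * reading / reading.max()   # normalize to the maximum dose
d_max = depth_mm[np.argmax(reading)]    # build-up depth (15 mm in this toy data)

print(f"d_max = {d_max} mm")
for d, p in zip(depth_mm, pdd):
    print(f"{d:4d} mm : {p:5.1f} %")
```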

Keywords: IR scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 158
336 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a process for managing energy consumption with a view to energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of the appliance features required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating regions of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz (one sample per minute). The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For this, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
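As one concrete instance of the unsupervised matching used for identification, a dynamic time warping (DTW) distance between a stored appliance signature and an observed power segment can be computed as sketched below; the sequences and the fridge example are hypothetical, not drawn from the LPG or REDD data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance
    between two 1-D power sequences (W), sampled at 1/60 Hz."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical signature of a fridge cycle vs. an observed segment
signature = np.array([0, 120, 125, 122, 118, 0], dtype=float)
segment = np.array([0, 0, 118, 124, 121, 119, 117, 0], dtype=float)
print(dtw_distance(signature, segment))  # small distance -> likely the same appliance
```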

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 51
335 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing

Authors: Yohann R. J. Thomas, Sébastien Solan

Abstract:

Conventional battery technology involves the assembly of electrode/separator/electrode stacks by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are only used for the electrode process. These processes are suited for the large-scale production of batteries and perfectly adapted to plenty of application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped, efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new design of lithium-ion battery: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Directly and fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors) or RFID (communication function), on a common substrate to produce fully integrated, thin and flexible new devices. To fulfill those specifications, a high-resolution printing process has been selected: aerosol jet printing. To fit the parameters of this process, we worked on nanomaterial formulations for current collectors and electrodes. In addition, an advanced printed polymer electrolyte was developed to be implemented directly in the printing process, in order to avoid the liquid electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, and then flash sintering was applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. Finally, carbon nanotubes were also printed at high resolution with well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were successfully printed; for anodes, LTO and graphite have proven to be good candidates for the fully printed battery. The electrochemical performance of these materials was evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those obtained with a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte was developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks: first, synthesizing and formulating new specific nanomaterials based on metal oxides; then, producing a fully printed device and evaluating its electrochemical performance.

Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes

Procedia PDF Downloads 224
334 Timely Palliative Screening and Interventions in Oncology

Authors: Jaci Marie Mastrandrea, Rosario Haro

Abstract:

Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening and intervention are directly associated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet had no protocol in place for identifying patients with palliative needs and no standardized referral process. The aim of this quality improvement project was to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline. The "Palliative and Supportive Needs Assessment" (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms, and data were collected over a period of eight weeks. Patients were screened by nurses on the SLCTC oncology treatment team; nurses responsible for screening received an educational in-service prior to implementation. Patients with a PSNA score of three or higher received an educational handout and education about PC and symptom management. A score of five or higher indicates that PC referral is strongly recommended, and the patient's EHR is flagged for the oncology provider to review orders for PC referral. The PSNA tool was approved by Sky Lakes administration for full integration into Epic-Beacon. The project lead collaborated with the Sky Lakes information systems team and representatives from Epic on the tool's appearance and functionality within the Epic system. SLCTC nurses and physicians were educated on how to document the PSNA within Epic and where to view results. Results: Prior to the implementation of the PSNA screening tool, the SLCTC had zero referrals to PC in the preceding year, excluding referrals to hospice. Data were collected from the completed screening assessments of 100 patients under active treatment at the SLCTC. Seventy-three percent of patients met the criteria for PC referral with a score greater than or equal to three. Of those patients, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who met the criteria but were not referred to PC were flagged in Epic for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the 0-12 scale. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.
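A minimal sketch of the two-threshold triage logic described above (a score of three or higher triggers patient education, five or higher flags the chart for referral review); the function name and the fallback action for low scores are illustrative assumptions, not part of the Epic build:

```python
def psna_actions(score):
    """Map a PSNA score (0-12) to the follow-up actions described in the protocol."""
    if not 0 <= score <= 12:
        raise ValueError("PSNA score must be between 0 and 12")
    actions = []
    if score >= 3:
        actions.append("provide palliative care education handout")
    if score >= 5:
        actions.append("flag EHR for provider to review palliative care referral")
    if not actions:
        # assumption: low scores are simply re-screened at a later visit
        actions.append("re-screen at next visit")
    return actions

print(psna_actions(4))  # education only
print(psna_actions(7))  # education + referral flag
```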

Keywords: oncology, palliative and supportive care, symptom management, outpatient oncology, palliative screening tool

Procedia PDF Downloads 86
333 Biomaterials Solutions to Medical Problems: A Technical Review

Authors: Ashish Thakur

Abstract:

This technical paper was written with a focus on biomaterials and their various applications in modern industries. The author elaborates not only on the medical applications but in fact on many applications in other industries. The scope of the research area covers the wide range of physical, biological and chemical sciences that underpin the design of biomaterials and the clinical disciplines in which they are used. A biomaterial is now defined as a substance that has been engineered to take a form which, alone or as part of a complex system, is used to direct, by control of interactions with components of living systems, the course of any therapeutic or diagnostic procedure. Biomaterials are invariably in contact with living tissues; thus, interactions between the surface of a synthetic material and the biological environment must be well understood. This paper reviews the benefits and challenges associated with surface modification of metals in biomedical applications. It also elaborates how the surface characteristics of metallic biomaterials, such as surface chemistry, topography, surface charge, and wettability, influence protein adsorption and subsequent cell behavior in terms of adhesion, proliferation, and differentiation at the biomaterial-tissue interface. The paper also highlights various techniques required for the surface modification and coating of metallic biomaterials, including physicochemical and biochemical surface treatments and calcium phosphate and oxide coatings. In this review, attention is focused on biomaterial-associated infections, from which the need for anti-infective biomaterials originates. Biomaterial-associated infections differ markedly in epidemiology, aetiology and severity, depending mainly on the anatomic site, on the time of biomaterial application, and on the depth of the tissues harbouring the prosthesis. Here, the diversity and complexity of the different scenarios where medical devices are currently utilised are explored, providing an overview of the emblematic applicative fields and of the requirements for anti-infective biomaterials. In addition, the paper introduces nanomedicine and the use of both natural and synthetic polymeric biomaterials, focuses on specific current polymeric nanomedicine applications and research, and concludes with the challenges of nanomedicine research. Infection is currently regarded as the most severe and devastating complication associated with the use of biomaterials. Osteoporosis is a worldwide disease with a very high prevalence in humans older than 50; its main clinical consequences are bone fractures, which often lead to patient disability or even death. A number of commercial biomaterials are currently used to treat osteoporotic bone fractures, but most of these have not been specifically designed for that purpose. Many drug- or cell-loaded biomaterials have been proposed in research laboratories, but very few have received approval for commercial use. Polymeric nanomaterial-based therapeutics play a key role in the field of medicine, in treatment areas such as drug delivery, tissue engineering, cancer, diabetes, and neurodegenerative diseases. Advantages in the use of polymers over other materials for nanomedicine include increased functionality, design flexibility, improved processability, and, in some cases, biocompatibility.

Keywords: nanomedicine, tissue, infections, biomaterials

Procedia PDF Downloads 240
332 Functionalizing Gold Nanostars with Ninhydrin as Vehicle Molecule for Biomedical Applications

Authors: Swati Mishra

Abstract:

In recent years, there has been an explosion in gold nanoparticle (GNP) research, with a rapid increase in publications in diverse fields, including imaging, bioengineering, and molecular biology. GNPs exhibit unique physicochemical properties, including surface plasmon resonance (SPR), and bind amine and thiol groups, allowing surface modification and use in biomedical applications. Nanoparticle functionalization is the subject of intense research at present, with rapid progress being made towards developing biocompatible, multi-functional particles. In the present study, a photochemical method was used to functionalize variously shaped GNPs, such as nanostars, with molecules such as ninhydrin. Ninhydrin is bactericidal, virucidal, fungicidal, and antigen-antibody reactive, and is used in fingerprint technology in forensics. GNPs efficiently functionalized with ninhydrin will bind to the amino acids on a target protein, which is of eminent importance during the pandemic, especially where long-term treatments of COVID-19 bring many drug side effects. The photochemical method was adopted as it provides low thermal load, selective reactivity, selective activation, and radiation controlled in time, space, and energy. The GNPs exhibit their characteristic spectrum, but a distinct blue- or red-shift of the peak will be observed after UV irradiation, indicating efficient ninhydrin binding. The bound ninhydrin in the GNP carrier, upon chemically reacting with any amino acid, will then lead to the formation of Ruhemann's purple. A common method of GNP production is the citrate reduction of Au[III] derivatives such as chloroauric acid (HAuCl₄) in water to Au[0], a one-step synthesis of size-tunable GNPs. The following reagents were prepared to validate the approach: Reagent A, solution 1, i.e., 0.0175 g of ninhydrin in 5 ml of Millipore water; Reagent B, 30 µl of HAuCl₄·3H₂O in 3 ml of solution 1; Reagent C, 1 µl of gold nanostars in 3 ml of solution 1; Reagent D, 6 µl of cetrimonium bromide (CTAB) in 3 ml of solution 1; Reagent E, 1 µl of gold nanostars in 3 ml of ethanol; Reagent F, 30 µl of HAuCl₄·3H₂O in 3 ml of ethanol; Reagent G, 30 µl of HAuCl₄·3H₂O in 3 ml of solution 2; Reagent H, solution 2, i.e., 0.0087 g of ninhydrin in 5 ml of Millipore water; Reagent I, 30 µl of HAuCl₄·3H₂O in 3 ml of water. The reagents were irradiated at 254 nm for 15 minutes, followed by UV-visible spectroscopy. The wavelength was selected based on the one reported for the excitation of a similar molecule, phthalimide. It was observed that solutions B and G deviate around 600 nm, while C peaks distinctly at 567.25 nm and 983.9 nm. Though it is difficult to specify the chemical reaction taking place, ATR-FTIR of the reagents will ensure that ninhydrin does not form Ruhemann's purple in the absence of amino acids. Through these experiments, therefore, we achieved the functionalization of gold nanostars with ninhydrin, corroborated by the deviation in the spectrum obtained from a mixture of GNPs and ninhydrin irradiated with UV light. This prepares them as carrier molecules to take up amino acids for targeted delivery or germicidal action.

Keywords: gold nanostars, ninhydrin, photochemical method, UV-visible spectroscopy

Procedia PDF Downloads 126
331 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels, such as the genome, transcriptome and epigenome, is inevitable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to characterize cancer genesis, progression and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and to clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables the recognition of complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases, here gliomas, melanomas and colon cancer, at the molecular level. As an important new challenge, we address the combined portrayal of different omics data, such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
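To make the SOM portrayal concrete, the following is a minimal pure-NumPy training sketch on a random expression-like matrix; the grid size, learning schedule, and data are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a self-organizing map: each grid node holds a prototype profile;
    each sample pulls its best-matching node (and its neighbors) toward itself."""
    h, w = grid
    n_features = data.shape[1]
    weights = rng.random((h, w, n_features))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # decay the learning rate and neighborhood radius over time
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            # best-matching unit (BMU): node whose prototype is closest to x
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), (h, w))
            # Gaussian neighborhood around the BMU on the grid
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

# Toy "expression matrix": 200 samples x 50 features
data = rng.normal(size=(200, 50))
som = train_som(data)
print(som.shape)  # (10, 10, 50): one prototype profile per map node
```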

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 122
330 Train Timetable Rescheduling Using Sensitivity Analysis: Application of Sobol, Based on Dynamic Multiphysics Simulation of Railway Systems

Authors: Soha Saad, Jean Bigeon, Florence Ossart, Etienne Sourdille

Abstract:

Developing better solutions for train rescheduling problems has been drawing the attention of researchers for decades. Most research in this field deals with minor incidents that affect a large number of trains due to cascading effects; it focuses on timetables, rolling stock and crew duties, but does not take into account infrastructure limits. The present work addresses electric infrastructure incidents that limit the power available for train traction, and hence the transportation capacity of the railway system. Rescheduling is needed in order to optimally share the available power among the different trains. We propose a rescheduling process based on dynamic multiphysics railway simulations that include the mechanical and electrical properties of all the system components and calculate physical quantities such as the train speed profiles, the voltage along the catenary lines, temperatures, etc. The optimization problem to be solved has a large number of continuous and discrete variables, several output constraints due to physical limitations of the system, and a high computation cost. Our approach includes a sensitivity analysis phase in order to analyze the behavior of the system and support the decision-making process and/or a more precise optimization. This approach is a quantitative method based on simulation statistics of the dynamic railway system, considering a predefined range of variation of the input parameters. Three important settings are defined. Factor prioritization detects the input variables that contribute the most to the variation of the outputs. Factor fixing then identifies the input variables that do not influence the outputs and can therefore be held constant. Lastly, factor mapping is used to study which ranges of input values lead to model realizations that correspond to feasible solutions according to defined criteria or objectives. Generalized Sobol indices are used for factor prioritization and factor fixing. The approach is tested on a simple railway system, with nominal traffic running on a single-track line. The considered incident is the loss of a feeding power substation, which limits the available power and the train speed. Rescheduling is needed, and the variables to be adjusted are the trains' departure times, the train speed reduction at a given position, and the number of trains (cancellation of some trains if needed). The results show that the spacing between train departure times is the most critical variable, contributing more than 50% of the variation of the model outputs. In addition, we identify the reduced range of variation of this variable which guarantees that the output constraints are respected. Optimal solutions are extracted according to different potential objectives: minimizing the traveling time, the train delays, the traction energy, etc. A Pareto front is also built.
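For illustration, first-order (S1) and total-order (ST) Sobol indices can be estimated as sketched below with the SALib library on a stand-in objective; the variable names, bounds, and toy delay function are assumptions, not the railway model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in problem: three controllable variables of the rescheduling model
problem = {
    "num_vars": 3,
    "names": ["departure_spacing_s", "speed_reduction_pct", "n_trains"],
    "bounds": [[60, 600], [0, 30], [5, 20]],
}

def surrogate_delay(x):
    """Toy stand-in for the multiphysics simulator's total-delay output."""
    spacing, speed_red, n_trains = x
    return 1e4 / spacing + 5.0 * speed_red + 2.0 * n_trains

X = saltelli.sample(problem, 1024)              # Saltelli sampling scheme
Y = np.apply_along_axis(surrogate_delay, 1, X)  # run the (toy) model on each sample
Si = sobol.analyze(problem, Y)                  # estimate Sobol indices

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:22s} S1={s1:6.3f}  ST={st:6.3f}")
```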

Keywords: optimization, rescheduling, railway system, sensitivity analysis, train timetable

Procedia PDF Downloads 379