Search results for: fundamental particle
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3318

108 Fighting the Crisis with 4.0 Competences: Higher Education Projects in the Times of Pandemic

Authors: Jadwiga Fila, Mateusz Jezowski, Pawel Poszytek

Abstract:

The outbreak of the global COVID-19 pandemic started a time of crisis full of uncertainty, especially in the field of transnational cooperation projects based on the international mobility of their participants. This is notably the case of the Erasmus+ Program for higher education, the flagship European initiative boosting cooperation between educational institutions, businesses, and other actors, enabling student and staff mobility as well as strategic partnerships between different parties. The aim of this study is to determine whether competences 4.0 can empower Erasmus+ project leaders to sustain their international cooperation in times of global crisis, widespread online learning, and common project disruption or cancellation. The concept of competences 4.0 emerged from the notion of Industry 4.0, and it relates to skills that are fundamental for the current labor market. For the purposes of this study, four main 4.0 competences were distinguished: digital, managerial, social, and cognitive competence. The hypothesis for the study stipulated that the above-mentioned highly developed competences may act as a protective shield against the pandemic's challenges in terms of projects' sustainability and continuation. The objective of the research was to assess to what extent individual competences are useful in managing projects in times of crisis. For this purpose, a study was conducted involving, among others, 141 Polish higher education project leaders who were running their cooperation projects during the peak of the COVID-19 pandemic (March-November 2020). The research explored the self-perception of the above-mentioned competences among Erasmus+ project leaders and the contextual data regarding the sustainability of the projects.
The quantitative character of the data permitted validation of the scales (Cronbach's alpha), and the use of factor analysis made it possible to create a distinct variable for each competence and its dimensions. Finally, logistic regression was used to examine the association of competences and other factors with project status. The study shows that the project leaders' competence profile attributed the highest score to digital competence (4.36 on a 1-5 scale). Slightly lower values were obtained for cognitive competence (3.96) and managerial competence (3.82). The lowest score went to one specific dimension of social competence: adaptability and the ability to manage stress (1.74), which shows that the pandemic was a real challenge that project coordinators had to face. Among the higher education projects, 10% were suspended or prolonged because of the COVID-19 pandemic, whereas 90% were undisrupted (continued or already successfully finished). The quantitative analysis showed a positive relationship between the leaders' levels of competences and project status. For all competences, the scores were higher for project leaders who finished their projects successfully than for leaders who suspended or prolonged them. The research demonstrated that, in the demanding times of the COVID-19 pandemic, competences 4.0 do, to a certain extent, play a significant role in the successful management of Erasmus+ projects. The implementation and sustainability of international educational projects, despite mobility and sanitary obstacles, depended, among other factors, on the level of leaders' competences.
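The scale-validation step mentioned above can be sketched numerically. The following is a minimal illustration of Cronbach's alpha on a hypothetical respondents-by-items score matrix; the study's actual data, factor analysis, and logistic regression pipeline are not reproduced here:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# hypothetical 1-5 self-ratings of five respondents on a 3-item competence scale
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
], dtype=float)
alpha = cronbach_alpha(scores)  # values above ~0.7 are conventionally acceptable
```

In practice such analyses are run with dedicated statistics packages; the formula itself is standard.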

Keywords: Competences 4.0, COVID-19 pandemic, Erasmus+ Program, international education, project sustainability

Procedia PDF Downloads 71
107 Fake News Domination and Threats on Democratic Systems

Authors: Laura Irimies, Cosmin Irimies

Abstract:

The public space all over the world is currently confronted with an aggressive assault of fake news that has lately impacted public agenda setting, collective decisions, and social attitudes. Top leaders constantly call out mainstream news as "fake news", and public opinion gets more confused. "Fake news" is generally defined as false, often sensational, information disseminated under the guise of news reporting; it was declared word of the year 2017 by Collins Dictionary and has been one of the most debated socio-political topics of recent years. Websites which, deliberately or not, publish misleading information are often shared on social media, where they essentially increase their reach and influence. According to international reports, exposure to fake news is an undeniable reality all over the world: exposure to completely invented information reaches 31 percent in the US, and it is even higher in Eastern European countries such as Hungary (42%) and Romania (38%) and in Mediterranean countries such as Greece (44%) or Turkey (49%), while it is lower in Northern and Western European countries: Germany (9%), Denmark (9%), or the Netherlands (10%). While the study of fake news (its mechanisms and effects) is still in its infancy, it has become truly relevant, as the phenomenon seems to have a growing impact on democratic systems. Studies conducted by the European Commission show that 83% of respondents out of a total of 26,576 interviewees consider the existence of news that misrepresents reality a threat to democracy. Studies recently conducted at Arizona State University show that people with higher education can more easily spot fake headlines, but over 30 percent of them can still be trapped by fake information.
If we were to refer only to some of the most recent situations in Romania, fake news issues and hidden-agenda suspicions related to the massive and extremely violent public demonstrations held on August 10th, 2018, with strong participation of the Romanian diaspora, have been widely reflected by the international media and have generated serious debates within the European Commission. Considering the above framework, the study raises four main research questions: 1. Is fake news a problem or just a natural consequence of mainstream media decline and the abundance of sources of information? 2. What are the implications for democracy? 3. Can fake news be controlled without restricting fundamental human rights? 4. How could the public be properly educated to detect fake news? The research uses mostly qualitative but also quantitative methods: content analysis of studies, websites and media content, official reports, and interviews. The study will demonstrate the real threat that fake news represents, as well as the need for proper media literacy education, and will draw basic guidelines for developing a new and essential skill: that of detecting fake news in a society overwhelmed by sources of information that constantly churn out massive amounts of content, increasing the risk of misinformation and leading to inadequate public decisions that could affect democratic stability.

Keywords: agenda setting, democracy, fake news, journalism, media literacy

Procedia PDF Downloads 100
106 Evaluating the ‘Assembled Educator’ of a Specialized Postgraduate Engineering Course Using Activity Theory and Genre Ecologies

Authors: Simon Winberg

Abstract:

The landscape of professional postgraduate education is changing: the focus of these programmes is moving from preparing candidates for a life in academia towards training in the expert knowledge and skills needed to support industry. This is especially pronounced in engineering disciplines, where increasingly complex products draw on a depth of knowledge from multiple fields. This connects strongly with the broader notion of Industry 4.0, where technology and society are being brought together to achieve more powerful and desirable products, but products whose inner workings are also more complex than before. The changes in what we do, and how we do it, have a profound impact on what industry would like universities to provide. One such change is the increased demand for taught doctoral and Master's programmes. These programmes aim to provide skills and training for professionals, to expand their knowledge of state-of-the-art tools and technologies. This paper investigates one such course, namely a Software Defined Radio (SDR) Master's degree course. The teaching support for this course had to be drawn from an existing pool of academics, none of whom were specialists in this field. The paper focuses on the kind of educator, a 'hybrid academic', assembled from available academic staff and bolstered by research. The conceptual framework for this paper combines Activity Theory and Genre Ecology. Activity Theory is used to reason about learning and interactions during the course, and Genre Ecology is used to model the building and sharing of technical knowledge related to using tools and artifacts. Data were obtained from meetings with students and lecturers, logs, project reports, and course evaluations. The findings show how the course, which was initially academically oriented, metamorphosed into a tool-dominant peer-learning structure, largely supported by the sharing of technical tool-based knowledge.
While the academic staff could address gaps in the participants’ fundamental knowledge of radio systems, the participants brought with them extensive specialized knowledge and tool experience which they shared with the class. This created a complicated dynamic in the class, which centered largely on engagements with technology artifacts, such as simulators, from which knowledge was built. The course was characterized by a richness of ‘epistemic objects’, which is to say objects that had knowledge-generating qualities. A significant portion of the course curriculum had to be adapted, and the learning methods changed to accommodate the dynamic interactions that occurred during classes. This paper explains the SDR Masters course in terms of conflicts and innovations in its activity system, as well as the continually hybridizing genre ecology to show how the structuring and resource-dependence of the course transformed from its initial ‘traditional’ academic structure to a more entangled arrangement over time. It is hoped that insights from this paper would benefit other educators involved in the design and teaching of similar types of specialized professional postgraduate taught programmes.

Keywords: professional postgraduate education, taught masters, engineering education, software defined radio

Procedia PDF Downloads 68
105 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM

Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi

Abstract:

FH2 (formin homology-2) domains of several proteins, collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers. Chain dimerization through ring-structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. Proper single-chain configuration and specific interactions between its various regions are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins have been shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, molecular dynamics simulations of chain A of the FH2 domain of DAAM, solvated in a water box with 50 mM NaCl, were conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and Amber Tools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained from the I-TASSER web server. The calcium- and ATP-bound G-actin structure (PDB 3hbt) was used as a reference protein with well-described denaturation dynamics. Topology and parameter information from the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB force field for proteins was used in Amber Tools 21. The systems were energy-minimized for the first 1000 steps, then equilibrated and run in the NPT ensemble for 1 ns using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the molecular dynamics of chain A of the FH2 domain of DAAM revealed similarly insignificant changes in the total molecular average RMSD values of the FH2 domain at temperatures from 293.15 to 353.15 K.
In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, which corresponds to the denaturation of the G-actin molecule at this temperature and its transition from a native, ordered state to a denatured, disordered one, a transition well described in the literature. The RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM were higher than the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids of the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its functions in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD values, between the native and denatured states of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K makes it possible to attribute these formins to the group of intrinsically disordered proteins rather than to the group of intrinsically ordered proteins such as G-actin.
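As a purely illustrative aside, the RMSD metric underlying this analysis reduces to a per-atom displacement average once trajectory frames are superposed on the reference structure; the superposition step (normally done with tools such as VMD) is assumed here, and the coordinates are toy values:

```python
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Root-mean-square deviation between two (n_atoms x 3) coordinate sets.

    Assumes the structures are already superposed (e.g., by a least-squares
    fit in VMD or a Kabsch alignment); no fitting is performed here.
    """
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# toy example: one of two atoms displaced by 1 A along x
ref = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
frame = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
```

Trajectory-wide RMSD curves like those discussed above are simply this quantity evaluated frame by frame against the reference.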

Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics

Procedia PDF Downloads 109
104 The Influence of Mechanical and Physicochemical Characteristics of Perfume Microcapsules on Their Rupture Behaviour and How This Relates to Performance in Consumer Products

Authors: Andrew Gray, Zhibing Zhang

Abstract:

The ability of consumer products to deliver a sustained perfume response can be a key driver for a variety of applications. Many compounds in perfume oils are highly volatile, meaning they readily evaporate once the product is applied, and the longevity of the scent is poor. Perfume capsules have been introduced as a means of abating this evaporation once the product has been delivered. The impermeable capsules are designed to be stable within the formulation and to remain intact during delivery to the desired substrate, rupturing to release the core perfume oil only when mechanical force is applied by the consumer. This opens up the possibility of obtaining an olfactive response hours, weeks or even months after delivery, depending on the nature of the desired application. Tailoring the properties of the polymeric capsules to better address the needs of the application is not a trivial challenge, and capsule design is currently done largely by trial and error. The aim of this work is to provide more predictive methods for capsule design depending on the consumer application. This means refining formulations such that they rupture at the right time for the specific consumer application: not too early, not too late. Finding the right balance between these extremes is essential if a benefit is sought with respect to the neat addition of perfume to formulations. It is important to understand the forces that influence capsule rupture, first by quantifying the magnitude of these different forces, and then by assessing bulk rupture in real-world applications to understand how capsules actually respond. Samples were provided by an industrial partner, and the mechanical properties of individual capsules within the samples were characterized via a micromanipulation technique developed by Professor Zhang at the University of Birmingham.
The capsules were synthesized so as to change one particular physicochemical property at a time, such as the core-to-wall material ratio or the average capsule size. Analysis of shell thickness via transmission electron microscopy, of the size distribution via a Mastersizer, and a variety of other techniques confirmed that only one physicochemical property was altered in each sample. The mechanical analysis was subsequently undertaken, showing the effect that changing certain capsule properties had on the response under compression. It was, however, important to link this fundamental mechanical response to capsule performance in real-world applications. As such, the capsule samples were introduced into a formulation and exposed to full-scale stresses. GC-MS headspace analysis of the perfume oil released from broken capsules enabled quantification of what the relative strengths of capsules truly mean for product performance. Correlations have been found between the mechanical strength of capsule samples and performance in terms of perfume release in consumer applications. A better understanding of the key parameters that drive performance benefits the design of future formulations by offering better guidelines on the parameters that can be adjusted without worrying about performance effects, and singles out those parameters that are essential in finding the sweet spot for capsule performance.
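The strength-versus-release correlation described here can be quantified with, for example, a Pearson coefficient; the numbers below are purely hypothetical placeholders, not the study's data:

```python
import numpy as np

# hypothetical rupture forces (mN) from micromanipulation, paired with
# hypothetical GC-MS headspace peak areas (arbitrary units) after application
rupture_force = np.array([4.2, 6.8, 9.1, 12.5])
headspace_signal = np.array([0.8, 1.9, 2.6, 3.9])

# Pearson correlation between capsule strength and perfume release
r = np.corrcoef(rupture_force, headspace_signal)[0, 1]
```

A strong positive r would indicate that stronger capsules survive processing and release more perfume on demand, which is the kind of relationship the study reports qualitatively.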

Keywords: consumer products, mechanical and physicochemical properties, perfume capsules, rupture behaviour

Procedia PDF Downloads 114
103 Evaluation of Herbal Extracts for Their Potential Application as Skin Prebiotics

Authors: Anja I. Petrov, Milica B. Veljković, Marija M. Ćorović, Ana D. Milivojević, Milica B. Simović, Katarina M. Banjanac, Dejan I. Bezbradica

Abstract:

One of the fundamental requirements for overall human well-being is a stable and balanced microbiome. Aside from the microorganisms that reside within the body, a large number of microorganisms, especially bacteria, swarm the human skin in homeostasis with the host and represent the skin microbiota. Even though the immune system of the skin is capable of distinguishing between commensal and potentially harmful transient bacteria, the cutaneous microbial balance can be disrupted under certain circumstances. In that case, a reduction in skin microbiota diversity, as well as changes in metabolic activity, results in dermal infections and inflammation. Probiotics and prebiotics have the potential to play a significant role in the treatment of these skin disorders. The most common resident bacterium found on the skin, Staphylococcus epidermidis, can act as a potential skin probiotic, contributing to the protection of healthy skin from colonization by pathogens such as Staphylococcus aureus, which is related to atopic dermatitis exacerbation. However, as it is difficult to meet the regulations for probiotics in cosmetic products, another therapeutic approach could be topical prebiotic supplementation of the skin microbiota. In recent research, polyphenols have attracted scientists' interest as biomolecules with possible prebiotic effects on the skin microbiota. This research aimed to determine how herbal extracts rich in different polyphenolic compounds (lemon balm, St. John's wort, coltsfoot, pine needle, and yarrow) affect the growth of S. epidermidis and S. aureus. The first part of the study involved screening the plants to determine whether they could be regarded as probable skin prebiotic candidates. The effect of each plant on bacterial growth was examined by supplementing the nutrient medium with its extract and comparing it with control samples (without extract).
The results obtained after 24 h of incubation showed that all tested extracts influenced the growth of the examined bacteria to some extent. Since the lemon balm and St. John's wort extracts displayed bactericidal activity against S. epidermidis, whereas coltsfoot inhibited both bacteria equally, they were not explored further. On the other hand, the pine needle and yarrow extracts led to an increase in the S. epidermidis/S. aureus ratio, making them prospective candidates for use as skin prebiotics. By examining the prebiotic effect of the two extracts at different concentrations, it was revealed that, in the case of yarrow, 0.1% of extract dry matter in the fermentation medium was optimal, while for the pine needle extract a concentration of 0.05% was preferred, since it selectively stimulated S. epidermidis growth and inhibited S. aureus proliferation. Additionally, the total polyphenol and flavonoid contents of the two extracts were determined, revealing different concentrations and polyphenol profiles. Since the yarrow and pine extracts affected the growth of skin bacteria in a dose-dependent manner, by carefully selecting the quantities of these extracts, and thus the polyphenol content, it is possible to achieve desirable alterations of skin microbiota composition, which may be suitable for the treatment of atopic dermatitis.

Keywords: herbal extracts, polyphenols, skin microbiota, skin prebiotics

Procedia PDF Downloads 149
102 A Conceptual Model of Sex Trafficking Dynamics in the Context of Pandemics and Provisioning Systems

Authors: Brian J. Biroscak

Abstract:

In the United States (US), "sex trafficking" is defined at the federal level in the Trafficking Victims Protection Act of 2000 as encompassing a number of processes, such as the recruitment, transportation, and provision of a person for the purpose of a commercial sex act, in which the act is induced by force, fraud, or coercion, or in which the person induced to perform it has not attained 18 years of age. Accumulating evidence suggests that sex trafficking is exacerbated by social and environmental stressors (e.g., pandemics). Given that "provision" is a key part of the definition, "provisioning systems" may offer a useful lens through which to study sex trafficking dynamics. Provisioning systems are the social systems connecting individuals, small groups, entities, and embedded communities as they seek to satisfy their needs and wants for goods, services, experiences and ideas through value-based exchange in communities. This project presents a conceptual framework for understanding sex trafficking dynamics in the context of the COVID pandemic. The framework is developed as a system dynamics simulation model based on published evidence, social and behavioral science theory, and key informant interviews with stakeholders from the Protection, Prevention, Prosecution, and Partnership sectors in one US state. This "4 P Paradigm" has been described as fundamental to the US government's anti-trafficking strategy. The present research question is: "How do sex trafficking systems (e.g., supply, demand and price) interact with other provisioning systems (e.g., networks of organizations that help sexually exploited persons) to influence trafficking over time vis-à-vis the COVID pandemic?" Semi-structured interviews with stakeholders (n = 19) were analyzed using grounded theory and combined for computer simulation. The first step (Problem Definition) was completed by open coding of video-recorded interviews, supplemented by a literature review.
The model depicts the provision of services for victims and survivors of sex trafficking as declining in March 2020, coincident with COVID, but eventually rebounding. The second modeling step (Dynamic Hypothesis Formulation) was completed by open and axial coding of interview segments, as well as by consulting the peer-reviewed literature. Part of the hypothesized explanation for changes over time is that the sex trafficking system behaves somewhat like a commodities market, with each of the other subsystems exhibiting delayed responses but collectively keeping trafficking levels below what they would otherwise be. The next steps (Model Building & Testing) led to a proof-of-concept model that can be used to conduct simulation experiments and test various action ideas by letting model users step outside the entire system and see it whole. If sex trafficking dynamics unfold as hypothesized, e.g., oscillate post-COVID, then one potential leverage point is to address the lack of information feedback loops between the actual occurrence and consequences of sex trafficking and those who seek to prevent its occurrence, prosecute the traffickers, protect the victims and survivors, and partner with other anti-trafficking advocates. Implications for researchers, administrators, and other stakeholders are discussed.
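A system dynamics model of this kind is built from stocks, flows, and delayed feedback. As a heavily simplified sketch (all values and time constants are hypothetical, not taken from the study), a single service-capacity stock adjusting toward its target with a delay, hit by a temporary pandemic shock, already reproduces the decline-and-rebound shape the model depicts:

```python
def simulate(steps: int = 48, dt: float = 1.0) -> list:
    """Euler-integrate one stock adjusting toward a target with a delay.

    A hypothetical shock halves the target during months 12-17,
    mimicking a pandemic disruption; the stock dips, then rebounds.
    """
    capacity, target, adjustment_time = 100.0, 100.0, 6.0
    history = []
    for t in range(steps):
        shock = 0.5 if 12 <= t < 18 else 1.0     # temporary disruption
        effective_target = target * shock
        capacity += dt * (effective_target - capacity) / adjustment_time
        history.append(capacity)
    return history

trajectory = simulate()
```

Full system dynamics models add many such stocks with coupled feedback loops; this fragment only illustrates the basic integration step.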

Keywords: pandemics, provisioning systems, sex trafficking, system dynamics modeling

Procedia PDF Downloads 58
101 A Model for Teaching Arabic Grammar in Light of the Common European Framework of Reference for Languages

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The complexity of Arabic grammar poses challenges for learners, particularly in relation to its arrangement, classification, abundance, and bifurcation. This challenge results from the contextual factors that gave rise to the grammatical rules in question, as well as from the pedagogical approach employed at the time, which was tailored to the needs of learners during that particular historical period. Consequently, modern-day students encounter this same obstacle. This calls for a thorough examination of the arrangement and categorization of Arabic grammatical rules based on particular criteria, as well as an assessment of their objectives. Additionally, it is necessary to identify the prevalent and renowned grammatical rules, as well as those that are infrequently encountered, obscure, or disregarded. This paper presents a compilation of grammatical rules that require arrangement and categorization in accordance with the standards outlined in the Common European Framework of Reference for Languages (CEFR). In addition to facilitating comprehension of the curriculum, accommodating learners' requirements, and establishing the fundamental competences for achieving proficiency in Arabic, it is necessary to identify the rules that language learners need, in alignment with explicitly defined benchmarks such as the CEFR criteria. The aim of this study is to reduce the number of grammatical rules typically presented to non-native Arabic speakers in Arabic textbooks. This reduction is expected to strengthen learners' motivation to continue their Arabic language acquisition and to approach the proficiency of native speakers. The primary obstacle faced by learners is the intricate nature of Arabic grammar. The proliferation and complexity of rules evident in Arabic language textbooks designed for non-native speakers is noteworthy.
The inadequate organisation and delivery of the material create the impression that the grammar is being imparted to a student with the intention of memorising "Alfiyyat-Ibn-Malik." Consequently, the sequence of grammatical rules instruction was altered, with rules originally intended for later instruction being presented first and those intended for earlier instruction being presented subsequently. Students often focus on learning grammatical rules that are not necessarily required while neglecting the rules that are commonly used in everyday speech and writing. Non-Arab students are taught Arabic grammar chapters that are infrequently utilised in Arabic literature and may be a topic of debate among grammarians. The aforementioned findings are derived from the statistical analysis and investigations conducted by the researcher, which will be disclosed in due course of the research. To instruct non-Arabic speakers on grammatical rules, it is imperative to discern the most prevalent grammatical frameworks in grammar manuals and linguistic literature (study sample). The present proposal suggests the allocation of grammatical structures across linguistic levels, taking into account the guidelines of the CEFR, as well as the grammatical structures that are necessary for non-Arabic-speaking learners to generate a modern, cohesive, and comprehensible language.
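The frequency-based selection of structures described here amounts to a count-and-rank step over a tagged corpus. The structure labels below are hypothetical stand-ins, not the study's actual sample:

```python
from collections import Counter

# hypothetical labels for grammatical structures observed in a tagged corpus sample
observed_structures = [
    "nominal_sentence", "verbal_sentence", "idafa", "nominal_sentence",
    "verbal_sentence", "nominal_sentence", "idafa", "tamyiz",
]

frequency = Counter(observed_structures)
# the most frequent structures would be assigned to the lower CEFR levels (A1/A2),
# rarer ones deferred to higher levels or dropped from the teaching sequence
ranked = [structure for structure, _ in frequency.most_common()]
```

The actual mapping of ranked structures to CEFR levels is a pedagogical decision the paper addresses; only the counting step is sketched here.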

Keywords: grammar, Arabic, functional, framework, problems, standards, statistical, popularity, analysis

Procedia PDF Downloads 65
100 Research on the Spatial Evolution of Tourism-Oriented Rural Settlements: Take the Xiaochanfangyu Village, Dongshuichang Village, Maojiayu Village in Jixian County, Tianjin City as Examples

Authors: Yu Zhang, Jie Wu, Li Dong

Abstract:

Rural tourism is the service industry that regards agricultural production, rural life, and rural natural and cultural landscapes as tourist attractions. It aims to meet the needs of city tourists for country sightseeing, vacation, and leisure. According to differences in tourist resources, rural settlements can be divided into different types: the tourism-resource type, the scenic-spot type, and the peri-urban type. In the past ten years, rural tourism has promoted industrial transformation and economic growth in rural areas of China. It is conducive to the coordinated development of urban and rural areas and has greatly improved the ecological environment and the standard of living of farmers in rural areas. At the same time, a large number of buildings and sites have been built in the countryside in order to enhance tourist attraction and reception capacity and to increase travel comfort and convenience, which has a significant influence on the spatial evolution of village settlements. This article takes the XiangYing Subdistrict, in the JinPu District of Dalian, China, as an example and uses Remote Sensing (RS), Geographic Information System (GIS) and landscape spatial analysis technologies to study the influence of rural tourism development on rural settlement spaces in four steps.
First, remote sensing image data at different times were acquired for the 8 administrative villages of the XiangYing Subdistrict using the remote sensing application ERDAS 8.6. Second, base maps of the XiangYing Subdistrict, including its land-use map, were vectorized with ArcGIS 9.3, associated with social and economic attribute data of the rural settlements, and the rural evolution was analyzed visually. Third, the settlement patches were quantitatively compared using the landscape spatial analysis application Fragstats 3.3, and the evolution of the spatial structure of settlements was analyzed at the macro and meso scales. Finally, the evolution characteristics and internal causes of tourism-oriented rural settlements were summarized. The main findings of this article include: first, the evolution of the spatial structure differs between developing and undeveloped rural settlements among the eight administrative villages; second, villages relying on surrounding tourist attractions, villages developing agricultural ecological gardens, and villages with natural or historical and cultural resources follow different laws of development; third, rural settlements whose tourism development is in the germination, development, or mature period show different characteristics of spatial evolution; finally, the different evolution modes of tourism-oriented rural settlement space have different influences on the protection and inheritance of the village scene. The development of tourism has a significant impact on the spatial evolution of rural settlements. The intensive use of rural land and natural resources is the fundamental principle for protecting the rural cultural landscape and ecological environment, as well as the critical way to improve the attraction of rural tourism and promote the sustainable development of the countryside.
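Landscape metrics of the kind Fragstats reports, such as patch count and mean patch area, can be illustrated on a toy raster. This minimal 4-connected flood-fill sketch assumes a binary settlement grid and is not the study's actual workflow:

```python
import numpy as np

def patch_stats(grid) -> tuple:
    """Return (patch count, mean patch area in cells) for 4-connected
    patches of 1s in a binary raster, akin to NP and AREA_MN in Fragstats."""
    g = np.asarray(grid, dtype=bool)
    seen = np.zeros_like(g)
    rows, cols = g.shape
    areas = []
    for r in range(rows):
        for c in range(cols):
            if g[r, c] and not seen[r, c]:
                stack, area = [(r, c)], 0
                seen[r, c] = True
                while stack:  # flood-fill one patch
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and g[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    mean_area = sum(areas) / len(areas) if areas else 0.0
    return len(areas), mean_area
```

Comparing such metrics across image dates is what reveals fragmentation or consolidation of settlement patches over time.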

Keywords: landscape pattern, rural settlement, spatial evolution, tourism-oriented, Xiangying Subdistrict

Procedia PDF Downloads 259
99 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools owing to their inherently high selectivity towards their bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change in charge transfer resistance (Rct) resulting from the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor's ability to recognize various staphylococcal strains was also successfully demonstrated with clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor: a remarkable change in Rct was observed only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus.
In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria like Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and no significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in real-case scenarios. Different concentrations of S. aureus, from 10⁸ to 10⁰ CFU/mL, were diluted at a ratio of 1:10 in nasal swab matrices collected from healthy donors, and three different sensors were used to measure the various bacterial concentrations. Our sensor showed high selectivity for detecting S. aureus in biological matrices while avoiding time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenges associated with pathogen detection, ongoing research is focused on assessing its analytical performance in different biological samples and on discovering new phage bioreceptors.
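A limit of detection like the one reported here is commonly derived from a log-linear calibration via the 3.3σ/slope criterion. The numbers below (calibration points, blank standard deviation) are invented for illustration and are not the authors' data; a minimal sketch assuming NumPy:

```python
import numpy as np

# Hypothetical impedimetric calibration: Rct change (%) vs log10(concentration)
log_conc = np.array([1.0, 2.0, 3.0, 4.0])     # 10^1 ... 10^4 CFU/mL
signal = np.array([8.0, 16.5, 24.0, 32.5])    # illustrative Rct change (%)

# Linear fit of signal against log10(concentration)
slope, intercept = np.polyfit(log_conc, signal, 1)

sd_blank = 0.2                                # assumed blank standard deviation
lod = 10 ** (3.3 * sd_blank / slope)          # 3.3*sigma/slope, back-transformed
print(f"slope = {slope:.2f} %/decade, LOD ~ {lod:.2f} CFU/mL")
```

The back-transformation through 10** is needed because the calibration is linear in the logarithm of concentration, not in concentration itself.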

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 41
98 Raman Spectroscopy of Fossil-like Feature in Sooke #1 from Vancouver Island

Authors: J. A. Sawicki, C. Ebrahimi

Abstract:

The first geochemical, petrological, X-ray diffraction, Raman, Mössbauer, and oxygen isotopic analyses of the very intriguing 13-kg Sooke #1 stone, covered over 70% of its surface with black fusion crust and recovered from Sooke Basin, near the Juan de Fuca Strait in British Columbia, were reported as poster #2775 at LPSC52 in March. Our further analyses, reported in poster #6305 at 84AMMS in August, and comparisons with the Mössbauer spectra of Martian meteorite MIL03346 and of Martian rocks in Gusev Crater reported by Morris et al. suggest that the Sooke #1 find could be a stony achondrite of Martian polymict breccia type ejected from early watery Mars. Here, the Raman spectra of a carbon-rich ~1-mm² fossil-like white area identified on a polished cut surface of this rock have been examined in more detail. The low-intensity 532 nm and 633 nm beams of a Renishaw inVia microscope were used to avoid any destructive effects. The beam was focused through the microscope objective to a 2 μm spot on the sample, and the backscattered light collected through this objective was recorded with a CCD detector. Raman spectra of dark areas outside the fossil showed bands of clinopyroxene at 320, 660, and 1020 cm⁻¹ and small peaks of forsteritic olivine at 820-840 cm⁻¹, in agreement with the results of the X-ray diffraction and Mössbauer analyses. Raman spectra of the white area showed a broad band D at ~1310 cm⁻¹, consisting of the main A1g mode at 1305 cm⁻¹, an E2g mode at 1245 cm⁻¹, and an E1g mode at 1355 cm⁻¹, due to stretching of diamond-like sp³ bonds in the diamond polytype lonsdaleite, as in the study of Ovsyuk et al. The band near 1600 cm⁻¹ consists mostly of the D2 band at 1620 cm⁻¹ rather than the narrower G band at 1583 cm⁻¹ due to E2g stretching of the planar sp² bonds that are the fundamental building blocks of the carbon allotropes graphite and graphene. In addition, broad second-order Raman bands were observed with the 532 nm beam at 2150, ~2340, ~2500, 2650, 2800, 2970, 3140, and ~3300 cm⁻¹ shifts.
Second-order bands in diamond and other carbon structures are ascribed to combinations of the bands observed in the first-order region: here, 2650 cm⁻¹ as the 2D band, 2970 cm⁻¹ as D+G, and 3140 cm⁻¹ as 2G. Nanodiamonds are abundant in the Universe and are found in meteorites, interplanetary dust particles, comets, and carbon-rich stars. Diamonds in meteorites are presently being intensely investigated using Raman spectroscopy. Such particles can be formed by a CVD process or during major impact shocks at ~1000-2300 K and ~30-40 GPa. It cannot be excluded that the fossil discovered in Sooke #1 could be the remnant of an alien carbon organism transformed under shock impact into nanodiamonds. We trust that, for the benefit of research in the astro-bio-geology of meteorites, asteroids, Martian rocks, and soil, this find deserves further, more thorough investigation. If possible, the Raman SHERLOCK spectrometer operating on the Perseverance rover should also search for such objects in Martian rocks.
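The combination-band assignments can be checked arithmetically against the first-order positions; offsets of several tens of cm⁻¹ are typical because overtone positions shift with excitation wavelength (dispersion). The band positions below are taken from the text, while the check itself is our illustrative sketch, not the authors' assignment procedure:

```python
# Arithmetic check of second-order assignments as combinations of the
# first-order D (~1310 cm^-1) and G-region (~1600 cm^-1) bands.
D, G = 1310.0, 1600.0
assignments = {"2D": (2 * D, 2650.0),    # (predicted, observed) in cm^-1
               "D+G": (D + G, 2970.0),
               "2G": (2 * G, 3140.0)}
for label, (predicted, observed) in assignments.items():
    print(f"{label}: predicted {predicted:.0f} cm^-1, observed {observed:.0f} cm^-1, "
          f"offset {observed - predicted:+.0f} cm^-1")
```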

Keywords: achondrite, nanodiamonds, lonsdaleite, Raman spectra

Procedia PDF Downloads 126
97 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents

Authors: Amesh P, Suneesh A S, Venkatesan K A

Abstract:

Magnetic solid-phase extraction is a relatively new method among solid-phase extraction techniques for separating metal ions from aqueous solutions such as mine water, groundwater, and contaminated wastes. However, bare magnetic particles (Fe3O4) exhibit poor selectivity due to the absence of target-specific functional groups for sequestering metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands to the magnetic surfaces. Magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. In addition, the solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to the nanometer scale, which offers further advantages such as enhanced extraction kinetics and higher extraction capacity. Conventionally, magnetite (Fe3O4) particles are prepared by hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since covalent linking of task-specific functionalities to Fe3O4 is difficult, and since Fe3O4 is susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify its surface by silica coating. This coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the magnetite surface to yield thin silica-coated magnetite particles. Since silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area of such magnetic particles usually falls in the range of 50 to 150 m².g⁻¹, which offers advantages such as quick phase separation compared to other solid-phase extraction systems.
In addition, magnetic (Fe3O4) particles covalently linked to a mesoporous silica matrix (MCM-41) bearing task-specific ligands offer further advantages in terms of extraction kinetics, stability, number of reuse cycles, and metal extraction capacity, owing to the large surface area, ample porosity, and greater number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and with studies on the extraction of uranium from aqueous solutions spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phase was studied under several conditions: the effect of pH, the initial concentration of uranium, the rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions such as CO3²⁻, Na⁺, Fe²⁺, Ni²⁺, and Cr³⁺. A maximum extraction capacity of 233 mg.g⁻¹ was obtained for Fe-DETA, and a much higher capacity of 1047 mg.g⁻¹ for MCM-Fe-DETA. The extraction mechanism, uranium speciation, extraction studies, reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
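Maximum capacities such as the 233 mg.g⁻¹ reported here are commonly extracted by fitting a Langmuir isotherm to equilibrium uptake data. The abstract does not state which isotherm model the authors used, so the model choice is an assumption, and the data points below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: uptake q (mg/g) vs equilibrium concentration Ce (mg/L)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Illustrative data generated from a qmax = 233 mg/g curve (not measured values)
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q = langmuir(Ce, 233.0, 0.05)

# Nonlinear least-squares fit recovers the plateau capacity qmax
popt, _ = curve_fit(langmuir, Ce, q, p0=[100.0, 0.01])
qmax_fit, KL_fit = popt
print(f"fitted qmax = {qmax_fit:.1f} mg/g, KL = {KL_fit:.3f} L/mg")
```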

Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment

Procedia PDF Downloads 140
96 Distribution Routes Redesign through the Vehicle Routing Problem in the Havana Distribution Center

Authors: Sonia P. Marrero Duran, Lilian Noya Dominguez, Lisandra Quintana Alvarez, Evert Martinez Perez, Ana Julia Acevedo Urquiaga

Abstract:

Cuban business and economic policy is in constant update, while companies face ever more knowledgeable and demanding clients. For that reason, it becomes fundamental for companies to build competitiveness through the optimization of their processes and services. One of Cuba's pillars, sustained since the triumph of the Cuban Revolution in 1959, is free health service for all those who need it. This service is offered without any charge under the principle of preserving human life, but it implies costly management processes and logistics services to supply the necessary medicines to all the units that provide health care. One of the key actors in the medicine supply chain is the Havana Distribution Center (HDC), which is responsible for the delivery of medicines in the province, as well as for the acquisition of medicines from national and international producers and their subsequent transport to health care units and pharmacies on time and with the required quality. This HDC also supplies the other distribution centers in the country. Given the evident need for an actor in the supply chain that specializes in medicine supply, the possibility of centralizing this operation in a logistics service provider is analyzed. Under this arrangement, pharmacies operate as clients of the logistics service center, whose main function is to centralize all logistics operations associated with the medicine supply chain. The HDC is precisely the logistics service provider in Havana, and it is the subject of this research. In 2017, pharmacies suffered from poor availability of medicines due to deficiencies in the distribution routes. This is caused by the fact that the routes are not based on routing studies, in addition to the long distribution cycle. The distribution routes are fixed, serve only one type of customer, and respond to a territorial location by municipality.
Taking into consideration the above-mentioned problem, the objective of this research is to optimize the route system of the Havana Distribution Center. To accomplish this objective, the techniques applied were document analysis, random sampling, and statistical inference, together with tools such as the Ishikawa diagram and the computerized software ArcGIS, OsmAnd, and MapInfo. As a result, four distribution alternatives were analyzed: the current routes, routes by customer type, routes by municipality, and a combination of the last two. It was demonstrated that the territorial-location alternative does not take full advantage of transportation capacities or trip distances, which leads to elevated costs, breaking with the current ways of distribution and the current characteristics of the clients. The principal finding of the investigation is that the optimal distribution route design is the fourth one, formed by hospitals on one hand and, on the other, pharmacies, stomatology clinics, polyclinics, and maternal and elderly homes grouped together. This solution breaks the territorial location by municipality and permits different distribution cycles depending on medicine consumption and transport availability.
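As a minimal illustration of the routing side of such a study, a nearest-neighbor construction heuristic (a common starting point for vehicle routing problems, though far simpler than a full VRP solver) can be sketched as follows; the depot and pharmacy coordinates are invented:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor construction: always visit the closest unvisited stop."""
    route, remaining, current = [], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)                       # hypothetical distribution-center location
pharmacies = [(2.0, 1.0), (1.0, 0.5), (5.0, 5.0), (1.5, 2.0)]
route = nearest_neighbor_route(depot, pharmacies)
print(route)
```

In practice, such a constructed route would then be improved with local-search moves and constrained by vehicle capacities and delivery cycles.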

Keywords: computerized geographic software, distribution, distribution routes, vehicle routing problem (VRP)

Procedia PDF Downloads 138
95 Development of Mesoporous Gel Based Nonwoven Structure for Thermal Barrier Application

Authors: R. P. Naik, A. K. Rakshit

Abstract:

In recent years, with the rapid development of science and technology, people have increasing requirements for clothing with new functions, which creates opportunities for the incorporation of new technologies and novel materials. In this context, textiles act as fast heat-absorbing or fast heat-radiating media, which directly affects the comfort of textile articles. The microstructure and texture of textiles play a vital role in determining the heat-moisture comfort level of the human body, because clothing serves as a barrier to the outside environment and as a transporter of heat and moisture from the body to the surroundings, keeping a thermal balance between the heat the body produces and the heat it loses. The main bottlenecks that prevent textile materials from succeeding as thermal insulation materials can be enumerated as follows. First, a high loft or bulkiness of material is needed to provide a predetermined amount of insulation by trapping sufficient air. Second, insulation is undermined by forced convection; such convective heat loss cannot be prevented by a textile material alone. Third, a textile alone cannot reach a thermal conductivity lower than that of air (0.025 W/m.K); nanofibers perhaps can, but their mass production and cost-effectiveness remain a problem. Finally, such high-loft materials for thermal insulation become heavy and difficult to manage, especially when they must be carried on the body. The proposed work aims at developing lightweight, effective thermal insulation textiles in combination with nanoporous silica gel, which provides the fundamental basis for optimizing material properties to achieve good performance of the clothing system. This flexible nonwoven silica-gel composite fabric, in an intact monolith, was successfully developed by reinforcing SiO₂ gel in a thermally bonded nonwoven fabric via sol-gel processing.
The ambient pressure drying method was chosen for silica gel preparation to keep manufacturing cost-effective. The structure of the formed nonwoven/SiO₂-gel composites was analyzed, and their transfer properties were measured. The effects of structure and fibre on the thermal properties of the SiO₂-gel composites were evaluated. Treated samples were then tested against untreated samples of the same GSM in order to study the effect of the SiO₂-gel application on various properties of the nonwoven fabric. The aerogel-reinforced nonwoven fabric composites, which showed an intact monolith structure, were also analyzed for surface structure, functional groups present, and microscopic morphology. The developed product reveals a significant reduction in pore size and air permeability compared with the conventional nonwoven fabric. The composite made from polyester fibre with lower GSM shows the lowest thermal conductivity. The results were statistically analyzed with the STATISTICA 6 software for their level of significance: univariate tests of significance were run for the various parameters, giving P values for the significance levels, and regression summaries for the dependent variables were also studied to obtain correlation coefficients.
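The through-thickness insulation of such a layered fabric can be estimated with a simple series thermal-resistance model. The layer thicknesses and conductivities below are assumed values for illustration, not measurements from this work:

```python
def effective_conductivity(layers):
    """Effective through-thickness conductivity of stacked layers (series model).

    layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    """
    total_t = sum(t for t, _ in layers)
    total_R = sum(t / k for t, k in layers)  # thermal resistances add in series
    return total_t / total_R

# Hypothetical stack: polyester nonwoven + silica-gel composite layer
stack = [(0.003, 0.040),   # 3 mm nonwoven, assumed conductivity
         (0.001, 0.020)]   # 1 mm silica-gel layer, assumed conductivity
print(f"k_eff = {effective_conductivity(stack):.4f} W/m.K")
```

The series model shows why even a thin low-conductivity gel layer lowers the effective conductivity of the whole stack.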

Keywords: silica-gel, heat insulation, nonwoven fabric, thermal barrier clothing

Procedia PDF Downloads 92
94 Chemical Synthesis and Microwave Sintering of SnO2-Based Nanoparticles for Varistor Films

Authors: Glauco M. M. M. Lustosa, João Paulo C. Costa, Leinig Antônio Perazolli, Maria Aparecida Zaghete

Abstract:

SnO2 owes its electrical conductivity to excess electrons and structural defects, its electrical behavior being highly dependent on sintering temperature and chemical composition. The addition of metal modifiers into the crystalline structure can improve and control the behavior of some semiconductor oxides, which can therefore be developed for different applications such as varistors (ceramics with non-ohmic behavior between current and voltage, i.e., conductive during normal operation and resistive during overvoltage). The polymeric precursor method, based on a complexation reaction between a metal ion and a polycarboxylic acid followed by polymerization with ethylene glycol, was used to obtain the ceramic nanopowders. The immobilization of the metal reduces its segregation during the decomposition of the polyester, resulting in a crystalline oxide with high chemical homogeneity. The preparation of films from ceramic nanoparticles using the electrophoretic deposition (EPD) method brings prospects for a new generation of smaller devices with easy technology integration. EPD allows time and current to be controlled, and therefore the thickness, surface roughness, and density of the film, quickly and with low production costs. The sintering process is key to controlling the size and grain-boundary density of the film. In this step, the diffusion of metals promotes densification, controls or changes intrinsic defects, and thereby forms and modifies the potential barrier at the grain boundary. The use of a microwave oven for sintering is advantageous due to its fast and homogeneous heating rate, promoting diffusion and densification without irregular grain growth. In this research, a comparative study of sintering temperatures was carried out using zinc as the modifier agent to verify its influence on the sintering step, aiming to promote densification and grain growth, which influence the formation of the potential barrier and hence the electrical behavior.
SnO2 nanoparticles were obtained with 1 mol% of ZnO + 0.05 mol% of Nb2O5 (SZN) and deposited as films through EPD (voltage 2 kV, time 10 min) on Si/Pt substrates. Sintering was performed in a microwave oven at 800, 900, and 1000 °C. For complete coverage of the substrate by nanoparticles with low surface roughness and uniform thickness, 0.02 g of solid iodine was added to the alcoholic SnO2 suspension to increase the particle surface charge. A magnet was also used in the EPD system, which improved the deposition rate and produced a compact film. Using a high-resolution scanning electron microscope (SEM-FEG), nanoparticles with an average size between 10-20 nm were observed; after sintering, the average size was 150 to 200 nm and the film thickness 5 µm. It was also verified that 1000 °C was the most efficient sintering temperature, and the best sintering time was determined to be 40 minutes. After sintering, the films were covered with a Cr³⁺ ion layer by EPD and then thermally treated again. The electrical characterizations (nonlinear coefficient of 11.4, breakdown voltage of ~60 V, and leakage current of 4.8x10⁻⁶ A) allow the new methodology to be considered suitable for preparing SnO2-based varistors for the development of electrical protection devices for low voltage.
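The nonlinear coefficient reported above is conventionally obtained from two points of the current-voltage curve, assuming I ∝ V^α. The I-V points below are illustrative values, not the paper's measurements; a minimal sketch:

```python
import math

def nonlinear_coefficient(V1, I1, V2, I2):
    """Varistor nonlinearity alpha, assuming I ~ V**alpha between the two points."""
    return math.log10(I2 / I1) / math.log10(V2 / V1)

# Illustrative points: current rises 10x for a ~22% voltage step
alpha = nonlinear_coefficient(60.0, 1e-3, 73.3, 1e-2)
print(f"alpha = {alpha:.1f}")
```

A larger alpha means a sharper switch from resistive to conductive behavior at the breakdown voltage, which is the figure of merit for protection devices.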

Keywords: chemical synthesis, electrophoretic deposition, microwave sintering, tin dioxide

Procedia PDF Downloads 246
93 Purple Spots on Historical Parchments: Confirming the Microbial Succession at the Basis of Biodeterioration

Authors: N. Perini, M. C. Thaller, F. Mercuri, S. Orlanducci, A. Rubechini, L. Migliore

Abstract:

The preservation of cultural heritage is one of the major challenges of today's society, because future generations have a fundamental right to inherit it as the continuity of their historical and cultural identity. Parchments, consisting of a semi-solid matrix of collagen produced from animal skin (i.e., sheep or goat), are a significant part of this heritage, having been used as writing material for many centuries. Due to their animal origin, parchments easily undergo biodeterioration. The most common biological damage is characterized by isolated or coalescent purple spots that often lead to the detachment of the superficial layer and the loss of the written historical content of the document. Although many parchments with the same biodegradative features have been analyzed, no common causative agent has been found so far. Very recently, a study was performed on a purple-damaged parchment roll dated back to 1244 A.D., the A.A. Arm. I-XVIII 3328, belonging to the oldest collection of the Vatican Secret Archive (Fondo 'Archivum Arcis'), by comparing uncolored undamaged and purple damaged areas of the same document. As a whole, the study gave interesting results supporting a model of biodeterioration consisting of a microbial succession acting in two main phases: the first, common to all damaged parchments, is driven by halophilic and halotolerant bacteria fostered by the salty environment within the parchment, possibly induced by the brining of the hides; the second, which varies with the individual history of each parchment, determines the identity of its colonizers. The design of this model was pivotal to the present study, performed by different labs of the Tor Vergata University (Rome, Italy) in collaboration with the Vatican Secret Archive.
Three documents, belonging to a collection of dramatically damaged parchments archived as 'Faldone Patrizi A 19' (dated back to the XVII century A.D.), were analyzed through a multidisciplinary approach including three up-to-date technologies: (i) Next Generation Sequencing (NGS, Illumina) to describe the microbial communities colonizing the damaged and undamaged areas, (ii) Raman spectroscopy to analyze the purple pigments, and (iii) Light Transmitted Analysis (LTA) to evaluate the kind and extent of the damage to native collagen. The metagenomic analysis obtained from NGS revealed DNA sequences belonging to Halobacterium salinarum, mainly in the undamaged areas. Raman spectroscopy detected pigments within the purple spots, mainly bacteriorhodopsin/rhodopsin-like pigments; bacteriorhodopsin is a purple transmembrane protein containing retinal and present in Halobacteria. The LTA technique revealed extremely damaged collagen structures in both the damaged and undamaged areas of the parchments. In the light of these data, the study represents a first confirmation of the microbial succession model described above. The demonstration of this model is pivotal to starting any new restoration strategy to bring historical parchments back to their original beauty, and it also opens opportunities for intervention on a huge number of documents.

Keywords: biodeterioration, parchments, purple spots, ecological succession

Procedia PDF Downloads 146
92 Legal Pluralism and Ideology: The Recognition of the Indigenous Justice Administration in Bolivia through the "Indigenismo" and "Decolonisation" Discourses

Authors: Adriana Pereira Arteaga

Abstract:

In many Latin American countries, the transition towards legal pluralism has developed over the last thirty years as part of what is called Latin American constitutionalism. The aim of this paper is to discuss how legal pluralism in its current form in Bolivia may produce exclusion and violence. Legal sources and discourse analysis, as an approach to examining written language in discourse documentation, will be used to develop this paper. With the constitution of 2009, Bolivia was symbolically 're-founded' as a multi-nation state. This shift goes hand in hand with the 'indigenista' and 'decolonisation' ideologies developing since the early 20th century. Discourses based on these ideologies reflect the rejection of the liberal and western premises on which the Bolivian republic was originally built after independence. According to the 'indigenista' movements, the liberal nation-state generates institutions corresponding to a homogeneous society. These liberal institutions not only ignore the Bolivian multi-nation reality, but also maintain social structures originating from colonial times, based on prejudices against the indigenous. These claims were elaborated through an image highlighted by the constitution's preamble: the indigenous people humiliated by a cruel western system. This narrative had a considerable impact on people's sensitivity and received great social support. Therefore, the proposal to change the structures of the nation-state is charged with an emancipatory message of restoring the pre-Columbian order, an order at times romantically described as perfect. Legally, this connotes a rejection of the positivistic national legal system based on individual rights and the promotion of constitutional recognition of indigenous justice administration. The pluralistic constitution is supposed to promote tolerance and a peaceful coexistence among nations, so that the unity and integrity of the country can be maintained.
In its current form, legal pluralism in Bolivia is justified by pre-existing rights contained, for example, in the International Labour Organization Convention 169, but it rests more on the discursive constructions described above. Over time, these discursive constructions have created inconsistencies in putting indigenous justice administration into practice. First, legal pluralism has been developed mainly at the level of political discourse, so no real interaction between the national and the indigenous jurisdictions can be observed; there are no clear coordination and cooperation mechanisms. Second, since the recently reformed constitution is based on deeply felt experiences, little is said about the general legal principles on which a pluralistic administration of justice in Bolivia should be based. Third, basic rights, liberties, and constitutional guarantees are also affected by the antagonized image of the national justice administration. As a result, fundamental rights could be violated on a large scale, because many indigenous justice administration practices run counter to these constitutional rules. These problems are not merely Bolivian but may also be encountered in other countries of the region with similar backgrounds, such as Ecuador.

Keywords: discourse, indigenous justice, legal pluralism, multi-nation

Procedia PDF Downloads 425
91 Nonequilibrium Effects in Photoinduced Ultrafast Charge Transfer Reactions

Authors: Valentina A. Mikhailova, Serguei V. Feskov, Anatoly I. Ivanov

Abstract:

In the last decade, nonequilibrium charge transfer has attracted considerable interest from the scientific community. Examples of such processes are charge recombination in excited donor-acceptor complexes and intramolecular electron transfer from the second excited electronic state. In these reactions, the charge transfer proceeds predominantly in the nonequilibrium mode. In excited donor-acceptor complexes, the nuclear nonequilibrium is created by the pump pulse; in intramolecular electron transfer from the second excited electronic state, it is created by the forward electron transfer. The kinetics of these nonequilibrium reactions demonstrate a number of peculiar properties. The most important of them are: (i) the absence of the Marcus normal region in the free energy gap law for charge recombination in excited donor-acceptor complexes, (ii) the extremely low quantum yield of the thermalized charge-separated state in ultrafast charge transfer from the second excited state, (iii) the nonexponential charge recombination dynamics in excited donor-acceptor complexes, and (iv) the dependence of the charge transfer rate constant on the excitation pulse frequency. This report shows that most of these kinetic features can be well reproduced in the framework of a stochastic multichannel point-transition model. The model involves an explicit description of the formation of the nonequilibrium excited state by the pump pulse and accounts for the reorganization of intramolecular high-frequency vibrational modes and their relaxation, as well as for solvent relaxation. The model is able to quantitatively reproduce the complex nonequilibrium charge transfer kinetics observed in modern experiments.
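For contrast with these nonequilibrium kinetics, the equilibrium (thermal) limit is the classical Marcus free energy gap law, whose rate peaks at ΔG = -λ with normal and inverted branches on either side. A minimal numerical sketch in arbitrary units, with an assumed electronic coupling, assuming NumPy:

```python
import numpy as np

kT = 0.0257  # thermal energy in eV at room temperature

def marcus_rate(dG, lam, V=0.01):
    """Classical (thermal, equilibrium) Marcus electron-transfer rate, arb. units."""
    return V**2 / np.sqrt(4.0 * np.pi * lam * kT) * np.exp(-(dG + lam)**2 / (4.0 * lam * kT))

dG = np.linspace(-2.0, 0.0, 201)   # driving-force range in eV
rates = marcus_rate(dG, lam=1.0)   # reorganization energy lambda = 1 eV (assumed)
dG_max = dG[np.argmax(rates)]
print(f"rate is maximal at dG = {dG_max:.2f} eV, i.e., at -lambda")
```

The missing normal region in the measured free energy gap law is precisely a deviation from this equilibrium picture.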
The interpretation of the nonequilibrium effects from a unified point of view, in terms of the stochastic multichannel point-transition model, makes it possible to see the similarities and differences of the electron transfer mechanism in various molecular donor-acceptor systems and to formulate general regularities inherent in these phenomena. The nonequilibrium effects in photoinduced ultrafast charge transfer studied over the last 10 years are analyzed. Methods of suppressing ultrafast charge recombination, and the similarities and dissimilarities of the electron transfer mechanism in different molecular donor-acceptor systems, are discussed. The extremely low quantum yield of the thermalized charge-separated state observed in ultrafast charge transfer from the second excited state in the complex consisting of 1,2,4-trimethoxybenzene and tetracyanoethylene in acetonitrile solution directly demonstrates that the effectiveness of the nonequilibrium pathway can be close to unity. This experimental finding supports the idea that nonequilibrium charge recombination in excited donor-acceptor complexes can also be very effective, so that the fraction of thermalized complexes is negligible. The regularities inherent in equilibrium and nonequilibrium reactions and their fundamental differences are analyzed, namely, the opposite dependencies of the charge transfer rates on the dynamical properties of the solvent: an increase in solvent viscosity decreases the thermal rate but increases the nonequilibrium rate. The dependencies of the rates on the solvent reorganization energy and on the free energy gap can also differ considerably. This work was supported by the Russian Science Foundation (Grant No. 16-13-10122).

Keywords: charge recombination, higher excited states, free energy gap law, nonequilibrium

Procedia PDF Downloads 296
90 A Functional Analysis of a Political Leader in Terms of Marketing

Authors: Aşina Gülerarslan, M. Faik Özdengül

Abstract:

The new economic, social, and political world order has led to the emergence of a wide range of persuasion strategies and practices based on an ever-expanding marketing axis that involves organizations, ideas, and persons as well as products and services. Since the 1990s, a wide variety of competitive marketing ideas have been offered systematically to target audiences in the field of politics, as in other fields. When the components of marketing are taken into consideration, all kinds of communication efforts involving 'political leaders', who are conceptualized as products in political marketing, serve a process of social persuasion that cannot be restricted to election periods only, and a manageable 'image'. In this context, image, which concerns how the political product is perceived, involves not only the political discourses shared with the public but also all kinds of biographical information about the leader, the leader's specific way of living and routines, and his or her attitudes and behaviors in private life, all of which are regarded as components of the 'product image'. While a leader's verbal or supra-verbal references serve the way the 'spirit of the product' is perceived, just as in brand positioning, they also reveal the leader's self-esteem level, in other words, how leaders perceive themselves. Indeed, self-esteem levels are evaluated in three fundamental categories in Functional Analysis, namely parent, child, and adult, and the words, tone of voice, and body language a person uses make it easy to understand at which self-esteem level that person is. In this context, words, tone of voice, and body language, which provide important clues to a person's 'self', also indicate how political leaders evaluate both themselves and the audience in the communication they establish with their audiences.
From the perspective of Turkey, the self-esteem levels in the relationships that political leaders establish with the masses are also important in revealing how our society is seen through the eyes of a specific leader. Since the leader, as a product, is part of a political party’s marketing strategy, this evaluation is significant for understanding the forms of relationship between political institutions in our country and society. In this study, the self-esteem level in the documentary entitled “Master’s Story”, which tells the life history of Recep Tayyip Erdoğan, is analyzed in terms of words, tone of voice and body language. Within the scope of the study, the self-esteem level displayed by Recep Tayyip Erdoğan in “Master’s Story”, a documentary broadcast on Beyaz TV, was investigated using the content analysis method. First, based on the Functional Analysis literature, a transactional-approach scale was created for the parent, adult and child self-esteem levels. On the basis of this scale, the prime minister’s self-esteem level was determined in three basic groups: “tone of voice”, “the words he used” and “body language”. Descriptive analyses were applied to the data within the framework of these criteria, and the self-esteem level at which the prime minister spoke throughout the documentary was revealed.

Keywords: political marketing, leader image, level of self-esteem, transactional approach

Procedia PDF Downloads 315
89 The Effects of Aging on Visuomotor Behaviors in Reaching

Authors: Mengjiao Fan, Thomson W. L. Wong

Abstract:

It is unavoidable that older adults may have to deal with aging-related motor problems, and aging is highly likely to affect motor learning and control as well. For example, older adults may suffer from poor motor function and quality of life due to age-related eye changes; these adverse changes in vision result in impaired movement automaticity. Reaching is a fundamental component of various complex movements and is therefore a useful task for exploring changes and adaptation in visuomotor behaviors. The current study aims to explore how aging affects visuomotor behaviors by comparing motor performance and gaze behaviors between two age groups (i.e., young and older adults). Visuomotor behaviors in reaching under conditions providing or blocking online visual feedback (simulated visual deficiency) were investigated in 60 healthy young adults (mean age = 24.49 years, SD = 2.12) and 37 older adults (mean age = 70.07 years, SD = 2.37) with normal or corrected-to-normal vision. Participants in each group were randomly allocated into two subgroups. Subgroup 1 was provided with online visual feedback of the hand-controlled mouse cursor, whereas in subgroup 2 visual feedback was blocked to simulate visual deficiency. The experimental task required participants to complete 20 reaching trials to a target by controlling the mouse cursor on the computer screen. In all 20 trials, the start position was at the center of the screen and the target appeared at a position randomly selected by a tailor-made computer program. Primary outcomes of motor performance and gaze behavior data were recorded by the EyeLink II (SR Research, Canada). The results suggested that aging significantly affects the performance of reaching tasks in both visual feedback conditions.
In both age groups, blocking online visual feedback of the cursor in reaching resulted in longer hand movement time (p < .001), longer reaching distance away from the target center (p < .001) and poorer reaching motor accuracy (p < .001). Concerning gaze behaviors, blocking online visual feedback increased the first fixation duration in young adults (p < .001) but decreased it in older adults (p < .001). Besides, under the condition providing online visual feedback of the cursor, older adults maintained a longer fixation dwell time on the target throughout reaching than young adults (p < .001), although the effect was not significant under the blocked visual feedback condition (p = .215). Therefore, the results suggest that different levels of visual feedback during movement execution can affect gaze behaviors differently in older and young adults. Differential effects of aging on visuomotor behaviors appear under the two visual feedback patterns (i.e., blocking or providing online visual feedback of the hand-controlled cursor in reaching). Several specific gaze behaviors were found among the older adults, which imply that blocking visual feedback may act as a stimulus inducing extra perceptual load during movement execution, and that age-related visual degeneration might further deteriorate the situation. This provides insight for the future development of potential rehabilitative training methods (e.g., well-designed errorless training) to enhance visuomotor adaptation in the aging population, improving movement automaticity by facilitating compensation for visual degeneration.

Keywords: aging effect, movement automaticity, reaching, visuomotor behaviors, visual degeneration

Procedia PDF Downloads 294
88 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations

Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos

Abstract:

Accurate information about the terrain is extremely important in disaster-management activities or conflict. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an actual Digital Surface Model (DSM). This methodology allows obtaining this fundamental information for post-disaster comprehension of the terrain in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18 m x 18 m); HLZ 2 – medium size (23 m x 23 m); HLZ 3 – large size (28 m x 28 m); AZ (100 m x 100 m). The Decision Tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees, with different random sets of samples used as sampled objects. The result of classification from each tree for each object is called a class vote, and the final classification is decided by a majority of class votes.
In this case, 200 trees were used for the execution of RF in the software WEKA 3.8, and the classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2) or small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller-area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain to serve as an important decision-support tool.
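The majority-vote classification described above can be sketched with a generic Random Forest implementation. The study used WEKA; scikit-learn is substituted here, and the feature values and labels below are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-ins for per-area attributes: e.g. mean declivity and
# two principal components of the multispectral mosaic (illustrative).
X = rng.normal(size=(400, 3))
# Classes 0-3 stand for HLZ 1, HLZ 2, HLZ 3 and AZ; labels follow a toy
# rule here so the forest has a pattern to learn.
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # 200 trees, as in the WEKA run
clf.fit(X, y)

# Each tree casts a "class vote"; predict() returns the majority class.
votes = clf.predict(X[:5])
```

Each candidate area would, in practice, be represented by attributes extracted from the orthomosaic and DSM rather than random numbers.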

Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest

Procedia PDF Downloads 151
87 A Distributed Smart Battery Management System (sBMS) for Stationary Energy Storage Applications

Authors: António J. Gano, Carmen Rangel

Abstract:

Currently, electric energy storage systems for stationary applications have attracted increasing interest, namely with the integration of local renewable energy power sources into energy communities. Li-ion batteries are considered the leading electric storage devices to achieve this integration, and Battery Management Systems (BMS) are decisive for their control and optimum performance. In this work, the development of a smart BMS (sBMS) prototype with a modular distributed topology is described. The system, still under development, has a distributed architecture with modular characteristics to operate with different battery pack topologies and charge capacities, integrating adaptive algorithms for real-time monitoring and management of the functional state of multicellular Li-ion batteries, and is intended for application in the context of a local energy community fed by renewable energy sources. This sBMS includes several developed hardware units: (1) Cell monitoring units (CMUs) for interfacing with each individual cell or module within the battery pack; (2) a Battery monitoring and switching unit (BMU) for global battery pack monitoring, thermal control and functional operating state switching; (3) a Main management and local control unit (MCU) for local sBMS management and control, also serving as a communications gateway to external systems and devices. This architecture is fully expandable to battery packs with a large number of cells or modules interconnected in series, as the several units have local data acquisition and processing capabilities, communicate over a standard CAN bus, and will be able to operate almost autonomously. The CMUs are intended for Li-ion cells but can be used with other cell chemistries with output voltages within the 2.5 to 5 V range. The characteristics and specifications of the different units are described, including the implemented hardware solutions.
The developed hardware supports both passive and active methods for charge equalization, considered fundamental functionalities for optimizing the performance and useful lifetime of a Li-ion battery pack. The functional characteristics of the different units of this sBMS, including the acquisition of different process variables using a flexible set of sensors, can support the development of custom algorithms for estimating the parameters defining the functional states of the battery pack (State-of-Charge, State-of-Health, etc.) as well as different charge-equalizing strategies and algorithms. This sBMS is intended to interface with other systems and devices using standard communication protocols, such as those used by the Internet of Things. In the future, this sBMS architecture can evolve to a fully decentralized topology, with all units using Wi-Fi protocols and forming a mesh network, making the MCU unit unnecessary. The status of the work in progress is reported, leading to conclusions on the system implemented so far, considering the developed hardware not only as a fully functional, advanced and configurable battery management system but also as a platform for developing custom algorithms and optimization strategies to achieve better performance of stationary electric energy storage devices.
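As an illustration of the kind of functional-state algorithm the acquired data could feed, a minimal coulomb-counting State-of-Charge estimator is sketched below. The function and parameter names are illustrative and not part of the sBMS firmware described above:

```python
def update_soc(soc, current_a, dt_s, capacity_ah):
    """Advance a coulomb-counting State-of-Charge estimate by one sample.

    soc         -- current estimate in [0, 1]
    current_a   -- pack current in amperes (discharge positive)
    dt_s        -- time elapsed since the last sample, in seconds
    capacity_ah -- rated pack capacity in ampere-hours
    """
    soc -= (current_a * dt_s) / (capacity_ah * 3600.0)
    return min(max(soc, 0.0), 1.0)  # clamp to physical bounds

# Discharging a 2 Ah pack at 2 A for one hour (1 s samples) empties it from full:
soc = 1.0
for _ in range(3600):
    soc = update_soc(soc, 2.0, 1.0, 2.0)
```

Practical estimators correct the accumulated drift of pure coulomb counting, e.g. by resetting against open-circuit voltage, which is one role the CMU voltage measurements could play.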

Keywords: Li-ion battery, smart BMS, stationary electric storage, distributed BMS

Procedia PDF Downloads 74
86 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for it being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques, such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique applied to separated flows are very scarce in the literature, and most studies evaluating the Reynolds number effect in separated flows rely on numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at various Reynolds numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results were compared to those obtained using other measuring techniques. To enable comparison with other researchers, the step height, expansion ratio and the positions upstream and downstream of the step were reproduced. The post-processing of the ADV records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component.
The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out for the measured variables using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were, in general, relatively low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flow. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and is thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data to obtain low noise levels and thus decrease the uncertainty.
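The noise-level evaluation step can be illustrated with a simple one-sided periodogram of a velocity record. This is a sketch only: the study's customized filtering code is not reproduced, and the sampling rate and signal below are assumed synthetic values:

```python
import numpy as np

def psd_onesided(u, fs):
    """One-sided power spectral density of a demeaned velocity record.

    Normalized so the PSD integrates to the signal variance, which lets
    the flat high-frequency plateau be read off as the Doppler noise level.
    """
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    n = u.size
    spec = np.abs(np.fft.rfft(u)) ** 2 / (fs * n)
    spec[1:-1] *= 2.0  # fold the negative-frequency half onto the positive
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    return f, spec

# Example: 60 s of synthetic stream-wise velocity sampled at an assumed 200 Hz
fs = 200.0
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(1)
u = 0.30 + 0.02 * np.sin(2 * np.pi * 5.0 * t) + 0.005 * rng.standard_normal(t.size)
f, spec = psd_onesided(u, fs)
```

In practice the record would first be despiked and filtered, since raw ADV spikes inflate the apparent noise floor.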

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 113
85 Prompt Photons Production in Compton Scattering of Quark-Gluon and Annihilation of Quark-Antiquark Pair Processes

Authors: Mohsun Rasim Alizada, Azar Inshalla Ahmdov

Abstract:

Prompt photons are perhaps the most versatile tools for studying the dynamics of relativistic collisions of heavy ions. The study of photon radiation is of interest because, in most hadron interactions, photons fly out as a background to the other signals being studied. The production of prompt photons in nucleon-nucleon collisions was previously studied in experiments at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Due to the large energy of the colliding nucleons, many different elementary particles are produced in addition to prompt photons; these additional particles make it difficult to determine accurately the effective cross-section of prompt photon production. From this point of view, the experiments planned at the Nuclotron-based Ion Collider Facility (NICA) complex will have a great advantage, since the energy attained by the colliding heavy ions will reduce the number of additionally produced elementary particles. Of particular importance is the study of prompt photon production for determining the gluon distribution in hadrons, since the photon carries information about the hard subprocess. In the present paper, the production of prompt photons in Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes is investigated. The matrix elements of the Compton scattering of quark-gluon and annihilation of quark-antiquark pair processes have been written, and the squares of the matrix elements have been calculated in FeynCalc. The phase volume of the subprocesses has been determined, and an expression to calculate the differential cross-section of the subprocesses has been obtained. Given the resulting expressions for the square of the matrix element in the differential cross-section expression, we see that the differential cross-section depends not only on the energy of the colliding protons but also on the mass of the quarks, etc. The differential cross-section of the subprocesses is estimated.
It is shown that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The asymmetry coefficient with respect to the polarization of the colliding protons is determined. The calculation showed that the squares of the matrix element of the Compton scattering process with and without the polarization of the colliding protons taken into account are identical. The asymmetry coefficient of this subprocess is zero, which is consistent with the literature. It is known that in any single-polarization process involving a photon, the squares of the matrix elements with and without the polarization of the initial particle taken into account must coincide, that is, the terms in the square of the matrix element proportional to the degree of polarization are equal to zero. The coincidence of the squares of the matrix elements indicates that the parity of the system is preserved. The asymmetry coefficient of the annihilation of a quark-antiquark pair process decreases linearly from +1 to -1 with increasing product of the polarization degrees of the colliding protons. Thus, it was obtained that the differential cross-section of the subprocesses decreases with increasing energy of the colliding protons. The value of the asymmetry coefficient is maximal when the polarizations of the colliding protons are opposite and minimal when they are aligned. Taking into account the polarization of only the initial quarks and gluons in Compton scattering does not contribute to the differential cross-section of the subprocess.
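For clarity, the asymmetry coefficient discussed above is conventionally defined as the normalized difference of cross-sections for parallel and antiparallel proton polarizations (this standard definition is supplied here for the reader and is not quoted from the paper):

$$A \;=\; \frac{d\sigma(\uparrow\uparrow) - d\sigma(\uparrow\downarrow)}{d\sigma(\uparrow\uparrow) + d\sigma(\uparrow\downarrow)}$$

With this convention, $A = 0$ for the Compton subprocess, while for the annihilation subprocess $A$ varies linearly between $+1$ and $-1$ with the product of the polarization degrees, as stated above.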

Keywords: annihilation of a quark-antiquark pair, coefficient of asymmetry, Compton scattering, effective cross-section

Procedia PDF Downloads 128
84 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs

Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino

Abstract:

Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish, and consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can persist in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 species of bacteria have been demonstrated to be capable of entering this state. VBNC cells cannot grow in conventional culture media but are viable and maintain metabolic activity, and may therefore constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods to detect V. parahaemolyticus VBNC cells and to investigate their presence in frozen bivalve molluscs regularly marketed. Materials and Methods: propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific for V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (in alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl, with incubation at 30°C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye; PMA-treated samples were exposed for 45 min to halogen lights (650 W). Total DNA was extracted from the cell suspension in the homogenate samples according to a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus, in a final volume of 20 µL containing 10 µL of SYBR Green Mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM) and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. The PMA-qPCR allowed the identification of VBNC V. parahaemolyticus in 20.78% of the samples evaluated, with values between Log 10-1 and Log 10-3 CFU/g. Only clam samples were positive by PMA-qPCR detection. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used was applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
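Quantification against a standard curve of the kind described amounts to a linear fit of log concentration against Ct (threshold cycle). The sketch below uses invented calibration numbers, not the study's data:

```python
import numpy as np

# Hypothetical calibration: Ct values for serial dilutions spanning the
# 1.0-8.0 log CFU/mL linear range (illustrative numbers only; an ideal
# ~100%-efficiency assay loses about 3.34 Ct per 10-fold dilution).
log_cfu = np.arange(1.0, 9.0)            # 1..8 log CFU/mL
ct = 38.5 - 3.34 * log_cfu               # synthetic Ct measurements

slope, intercept = np.polyfit(ct, log_cfu, 1)   # fitted standard curve

def quantify(ct_sample):
    """Convert a sample Ct value to log CFU/mL via the standard curve."""
    return slope * ct_sample + intercept
```

A real calibration would use measured Ct values and report the fit's correlation coefficient, as the study does (r = 0.9999).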

Keywords: food safety, frozen bivalve molluscs, PMA dye, Real-time PCR, VBNC state, Vibrio parahaemolyticus

Procedia PDF Downloads 110
83 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel

Authors: Valentina Dolmova

Abstract:

What consumers choose to buy and whom voters select on election days are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of these groups, and whether we can predict or control it, fuels a steady stream of research in a range of fields. Looking only at the past 40 years, more than 70 thousand scientific papers have been published in each field – consumer behavior and political psychology, respectively. From marketing, economics and the science of persuasion to political and cognitive psychology, we have all remained heavily engaged. Ever-evolving technology, inevitable socio-cultural shifts, global economic conditions and much more play an important role in choice equations regardless of context. On the one hand, this keeps the research efforts relevant and needed; on the other, the relatively low number of cross-field collaborations, which seem to be picking up only in more recent years, leaves the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find not only that there is a definitive common ground between the behaviors of consumers and voters but that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication for what we should focus on when predicting or navigating buyers’ and voters’ behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters: together, personal, psychological, social and cultural factors create a holistic framework through which all stimuli relating to a particular product or political party are filtered. The analogy “consumer-voter” solidifies further.
Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories ((1) opportunity costs; (2) the memory of the past; (3) recognisable figures/faces; and (4) conflict), arguing that the level of expertise a person has determines the prevalence of particular factors or stimuli. Our efforts take into account global trends such as the establishment of “celebrity politics” and the image of “ethically concerned consumer brands”, which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields in social psychology. Existing blind spots, as well as the limited amount of research conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed-method study tests three main hypotheses: the first two focus on the irrelevance of context when comparing voting and consumer behavior, through both the factor and the stimulus lenses; the third examines whether the level of expertise in any field skews the weight of the prism we are more likely to choose when evaluating options.

Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology

Procedia PDF Downloads 133
82 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions

Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra

Abstract:

In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in the form of aerosol, are deposited on the inner surface of the piping system mainly by gravitational settling and thermophoretic deposition. The removal processes in the complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure and flow rates. These parameters generally vary with time and must therefore be carefully monitored to predict the aerosol behavior in the piping system. The removal of aerosol depends on the size of the particles, which determines how many particles are deposited or travel across the bends and reach the other end of the piping system. The released aerosol is deposited onto the inner surface of the piping system by various mechanisms, such as gravitational settling, Brownian diffusion and thermophoretic deposition, among others. To quantify deposition correctly, the identification and understanding of the aforementioned deposition mechanisms are of great importance; these mechanisms are significantly affected by different flow and thermodynamic conditions, and thermophoresis in particular plays a significant role in particle deposition. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in dry environments to study the spatial distribution of particle mass and number concentration and their depletion due to various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT was used to determine the distribution of temperature, velocity, pressure and turbulence quantities in the piping system.
In addition to the built-in models for turbulence, heat transfer and flow in the commercial CFD code (FLUENT), a population balance model (PBM) sub-model is used to describe the coagulation process and to compute the number concentration, along with the size distribution, at different sections of the piping. In this sub-model, the coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD-modelled results. It is found that most of the Zn particles (more than 35%) deposit near the inlet of the plenum chamber, and low deposition is obtained in the piping sections. The mass median aerodynamic diameter (MMAD) decreases along the length of the test assembly, which shows that large particles are deposited or removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of a bend was also observed: the relative loss in mass concentration at bends is greater in the case of a high flow rate. The simulation results show that the thermophoretic and depositional effects are more dominant for the small and large sizes than for the intermediate particle sizes. Both SEM and XRD analyses of the collected samples show that the particles are highly agglomerated, non-spherical and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
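The coagulation process handled by the PBM sub-model is governed by the Smoluchowski equation. A minimal explicit-Euler sketch with a constant kernel is shown below; this is a simplification for illustration, not the UDF-based kernels actually used in the study:

```python
import numpy as np

def coagulation_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equation:

        dn_k/dt = 0.5 * sum_{i+j=k} K[i,j] n_i n_j  -  n_k * sum_j K[k,j] n_j,

    where n[k] is the number concentration of particles of size k+1 monomers.
    """
    m = n.size
    dn = np.zeros(m)
    for k in range(m):
        # birth: collisions of smaller particles whose sizes sum to k+1
        birth = 0.5 * sum(K[i, k - 1 - i] * n[i] * n[k - 1 - i] for i in range(k))
        # death: particle k sticking to anything else
        death = n[k] * sum(K[k, j] * n[j] for j in range(m))
        dn[k] = birth - death
    return n + dt * dn

# Start from monomers only; coagulation moves number into larger sizes
n0 = np.zeros(8)
n0[0] = 1.0
K = np.ones((8, 8))          # constant kernel (illustrative)
n1 = coagulation_step(n0, K, 0.01)
```

Total particle number decreases with each step, as expected for pure coagulation, while mass is conserved within the truncated size range.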

Keywords: aerosol, CFD, deposition, coagulation

Procedia PDF Downloads 126
81 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency of the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact costs more significantly than membrane surface fouling, which, unlike clogging, is largely mitigated by chemical cleaning. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify ragging and sludging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can form within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred both for a cotton wool standard and for samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat-sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. The bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ).
The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the rate of pressure increase against flux via the flux-step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but being less shear-thinning than the municipal. Fouling, as manifested by the pressure increase rate Δp/Δt as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the sCOD. Results indicated that, whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation); the clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
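The image-based sludging metric described above (the ratio of clogged to unclogged channel area obtained by processing a photograph of the test cell) can be sketched as a simple thresholding computation. This is a minimal illustration only: the function name, the 0-1 grayscale convention, and the threshold value are assumptions, not details from the paper, which does not specify its image-processing procedure.

```python
import numpy as np

def sludged_fraction(channel_image: np.ndarray, threshold: float = 0.5) -> float:
    """Estimate the sludged (clogged) fraction of a membrane channel image.

    `channel_image` is a 2D grayscale array scaled to [0, 1]; pixels darker
    than `threshold` are counted as sludge-filled. Both the scaling and the
    threshold are illustrative choices for this sketch.
    """
    clogged = channel_image < threshold
    return float(clogged.mean())

# Synthetic example: left half of a 10x10 channel filled with dark sludge.
img = np.ones((10, 10))
img[:, :5] = 0.1
ratio = sludged_fraction(img)  # 0.5 for this synthetic image
```

In practice, a fixed threshold would be replaced by an adaptive one (e.g. Otsu's method) to cope with varying illumination between photographs.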

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 157
80 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the functional ability of the meniscus and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in the normal and injured states is carried out by using FE analyses. First, an FE model of the human knee joint in the normal (‘intact’) state was constructed by using magnetic resonance (MR) tomography images and the image construction code Materialize Mimics. Next, two types of meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied among the intact and two meniscal tear models. These compressive stress values can be used to establish a threshold for pathological change for diagnostic purposes. 
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage and menisci, was constructed based on MR images of a human knee joint; the image-processing code Materialize Mimics was used to generate the tetrahedral FE mesh. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model; the material properties of the meniscus and articular cartilage were determined by curve fitting to the experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other, both higher than the intact one; both meniscal tears induced stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
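For reference, the generalized Kelvin model named in conclusion 2 is commonly written in its one-dimensional relaxation form as a Prony series; this generic sketch is not the paper's tensorial, anisotropic formulation, which is not reproduced in the abstract:

```latex
% Relaxation modulus as a Prony series (generalized Kelvin chain)
G(t) = G_\infty + \sum_{i=1}^{n} G_i \, e^{-t/\tau_i},
\qquad
\sigma(t) = \int_0^{t} G(t-s)\,\dot{\varepsilon}(s)\,\mathrm{d}s
```

where the moduli $G_i$ and relaxation times $\tau_i$ are the parameters determined by curve fitting to the compressive and tensile stress-strain data.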

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 220
79 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion the aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles, such as optical absorption and size distribution, can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently, a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Ångström Exponent (AAE). In this approach, the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. Recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. Therefore, they also open up novel possibilities for source apportionment through the measurement of light absorption. 
In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAEff and AAEwb were deduced from the measured data using the defined correlation between the AOC(1064nm)/AOC(266nm) and N100/N20 ratios. σff(λ) and σwb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with a simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant sources of carbonaceous emission.
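Once AAEff and AAEwb are fixed, the two-component optical apportionment described above reduces to a 2×2 linear system: the measured absorption at each of two wavelengths is written as the sum of a traffic (fossil-fuel) component and a wood-burning component, each following a power law in wavelength with its own exponent. A minimal sketch follows; the function name, the choice of AAE values, and the example absorption numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

def apportion_absorption(b_lam1, b_lam2, lam1, lam2, aae_ff, aae_wb):
    """Split the measured absorption at lam1 into traffic and wood-burning parts.

    Model: b_x(lam) = b_x(lam1) * (lam / lam1) ** (-AAE_x) for x in {ff, wb},
    with the measured total b(lam) = b_ff(lam) + b_wb(lam) at both wavelengths.
    """
    r = lam2 / lam1
    # Two equations (one per wavelength) in the two unknowns b_ff(lam1), b_wb(lam1).
    A = np.array([[1.0, 1.0],
                  [r ** -aae_ff, r ** -aae_wb]])
    rhs = np.array([b_lam1, b_lam2])
    b_ff, b_wb = np.linalg.solve(A, rhs)
    return b_ff, b_wb

# Synthetic check at the instrument's 1064 nm and 266 nm wavelengths:
# build totals from known components (5 and 3 Mm^-1), then recover them.
lam1, lam2, aae_ff, aae_wb = 1064.0, 266.0, 1.0, 2.0
r = lam2 / lam1
b1 = 5.0 + 3.0
b2 = 5.0 * r ** -aae_ff + 3.0 * r ** -aae_wb
print(apportion_absorption(b1, b2, lam1, lam2, aae_ff, aae_wb))  # ≈ (5.0, 3.0)
```

The separation is only as good as the assumed exponents: the closer AAEff and AAEwb are to each other, the more ill-conditioned the system becomes, which is why deriving them from independent data (here, the size distribution ratios) matters.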

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 210