Search results for: interfaces of processes
715 Humic Acid and Azadirachtin Derivatives for the Management of Crop Pests
Authors: R. S. Giraddi, C. M. Poleshi
Abstract:
Organic cultivation of crops is gaining importance as consumer awareness of pesticide-residue-free foodstuffs is increasing globally. This is also because of the high costs of synthetic fertilizers and pesticides, which make conventional farming non-remunerative. In India, organic manures (such as vermicompost) are an important input in organic agriculture. Though vermicompost obtained through earthworm- and microbe-mediated processes is known to contain most of the crop nutrients, they are present in small amounts, necessitating enrichment so that crop nourishment is complete. Another characteristic of organic manures is that pest infestations are kept in check due to induced resistance put up by the crop plants. In the present investigation, deoiled neem cake containing azadirachtin, copper ore tailings (COT) as a source of micro-nutrients, and microbial consortia were added for enrichment of vermicompost. Neem cake is a by-product obtained during the process of oil extraction from neem plant seeds. Three enriched vermicompost blends were prepared using vermicompost (at 70, 65 and 60%), deoiled neem cake (25, 30 and 35%), and microbial consortia plus COT wastes (5%). The enriched vermicompost was thoroughly mixed, moistened (25±5%), packed and incubated for 15 days at room temperature. In the crop response studies, field trials on chili (Capsicum annuum var. longum) and soybean (Glycine max cv. JS 335) were conducted during Kharif 2015 at the Main Agricultural Research Station, UAS, Dharwad, Karnataka, India. The vermicompost blend enriched with neem cake (known to possess higher amounts of nutrients) and plain vermicompost were applied to the crops at two dosages and at two intervals of the crop cycle (at sowing and 30 days after sowing) as per the treatment plan, along with 50% of the recommended dose of fertilizer (RDF). Ten plants selected randomly in each plot were studied for pest density and plant damage. At maturity, the crops were harvested, the yields were recorded as per the treatments, and the data were analyzed using appropriate statistical tools and procedures. In both crops, chili and soybean, crop nourishment with neem-enriched vermicompost reduced insect density and plant damage significantly compared to the other treatments. These treatments registered as much yield (16.7 to 19.9 q/ha) as that realized in conventional chemical control (18.2 q/ha) in soybean, while 72 to 77 q/ha of green chili was harvested in the same treatments, comparable to the chemical control (74 q/ha). The yield superiority of the treatments was of the order neem-enriched vermicompost > conventional chemical control > neem cake > vermicompost > untreated control. The significant features of the result are that it reduces the use of inorganic fertilizers by 50% and synthetic chemical insecticides by 100%.
Keywords: humic acid, azadirachtin, vermicompost, insect-pest
Procedia PDF Downloads 277
714 Food Security in Germany: Inclusion of the Private Sector through Law Reform Faces Challenges
Authors: Agnetha Schuchardt, Jennifer Hartmann, Laura Schulte, Roman Peperhove, Lars Gerhold
Abstract:
If critical infrastructures fail, even for a short period of time, it can have significant negative consequences for the affected population. This is especially true for the food sector, which is strongly interlinked with other sectors like the power supply. A blackout could leave several cities without food supply for numerous days, simply because cash register systems no longer work properly. According to public opinion, securing the food supply in emergencies is considered a task of the state; in the German context, however, the key players are private enterprises and private households. Neither is aware of its responsibility, and neither can be forced to take any preventive measures prior to an emergency. This problem became evident to officials and politicians, so the law covering food security was revised in order to include private stakeholders in mitigation processes. The paper will present a scientific review of governmental and regulatory literature. The focus is the inclusion of the food industry through a law reform and the challenges that still exist. Together with legal experts, an analysis of regulations will be presented that explains the development of the law reform concerning food security and emergency storage in Germany. The main findings are that the existing public food emergency storage is outdated, insufficient and too expensive. The state is required to protect food as a critical infrastructure but does not have the capacities to live up to this role. Through a law reform in 2017, new structures were to be established. The innovation was to include the private sector in the civil defense concept, since it has the required knowledge and experience. But the food industry is still reluctant. Preventive measures do not serve economic purposes; on the contrary, they cost money. The paper will discuss respective examples, like equipping supermarkets with emergency power supply or self-sufficient cash register systems, and why neither the state nor the economy is willing to cover the costs of these measures. The biggest problem with the new law is that private enterprises can only be forced to support food security once the state of emergency has already occurred, and not one minute earlier. The paper will cover two main results: the literature review and an expert workshop that will be conducted in summer 2018 with stakeholders from different parts of the food supply chain as well as officials of the public food emergency concept. The results from this participative process will be presented, and recommendations will be offered that show how the private economy could be better included in a modern food emergency concept (e.g., tax reductions for stockpiling).
Keywords: critical infrastructure, disaster control, emergency food storage, food security, private economy, resilience
Procedia PDF Downloads 188
713 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model's dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even in the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation's output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
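To make the scale-transformation argument concrete, the sketch below runs a standard bounded-confidence (Deffuant-style) model twice: once on opinions measured on one scale and once on the same opinions passed through a monotone non-linear rescaling. The model choice, the parameter values and the cubic rescaling are our own assumptions for illustration; the abstract does not specify which models were used.

```python
# Illustrative sketch (not the authors' code): a minimal Deffuant-style
# bounded-confidence model run on two ordinally equivalent opinion scales,
# showing that a monotone rescaling can change the simulated outcome.
import numpy as np

rng = np.random.default_rng(0)

def deffuant(opinions, epsilon=0.2, mu=0.5, steps=50_000):
    """Pairwise bounded-confidence updates on a copy of `opinions` in [0, 1]."""
    x = opinions.copy()
    n = len(x)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        if abs(x[i] - x[j]) < epsilon:  # interact only if opinions are close enough
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def count_clusters(x, tol=0.05):
    """Crude cluster count: gaps larger than `tol` separate clusters."""
    xs = np.sort(x)
    return 1 + int(np.sum(np.diff(xs) > tol))

initial = rng.random(500)        # opinions as measured on scale A
rescaled = initial ** 3          # the same opinions on a non-linear scale B

print("clusters on scale A:", count_clusters(deffuant(initial)))
print("clusters on scale B:", count_clusters(deffuant(rescaled)))
```

Because the rescaling compresses part of the opinion range relative to the fixed confidence bound, the two runs can end with different numbers of clusters even though the underlying opinions are ordinally identical.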
Procedia PDF Downloads 119
712 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN
Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran
Abstract:
In the view of Cognitive Linguistics (CL), knowledge and experience of things and events are used by human beings in expressing concepts, especially in their daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, and CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought, of course, has been addressed by many scholars. CL, however, strongly emphasizes specific features of this relation. Through experience, we receive knowledge of our lives. Concrete, familiar things serve as source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper refers to applying this theory in pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang, Viet Nam. We conducted the study with two groups of third-year students taking English pragmatics lessons. To clarify this study, the data from these two classes were collected for analyzing linguistic perspectives in the view of CL and in traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. Both groups were taught how to transfer the meanings of expressions used in daily life: one group through the view of CL and the other through the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities. However, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compared CL and traditional teaching approaches to identify benefits and challenges associated with incorporating CL into the curriculum. It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding and explore how CL can enhance cognitive processes in language learning in general, and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.
Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS
Procedia PDF Downloads 36
711 Public Participation for an Effective Flood Risk Management: Building Social Capacities in Ribera Alta Del Ebro, Spain
Authors: Alba Ballester Ciuró, Marc Pares Franzi
Abstract:
While coming decades are likely to see higher flood risk in Europe and greater socio-economic damages, traditional flood risk management has become inefficient. In response to this, new approaches such as capacity building and public participation have recently been incorporated into natural hazards mitigation policy (e.g., the Sendai Framework for Action, Intergovernmental Panel on Climate Change reports and the EU Floods Directive). Integrating capacity building and public participation, we present research concerning the promotion of participatory social capacity building actions for flood risk mitigation at the local level. Social capacities have been defined as the resources and abilities available at the individual and collective level that can be used to anticipate, respond to, cope with, recover from and adapt to external stressors. Social capacity building is understood as a process of identifying communities' social capacities and of applying collaborative strategies to improve them. This paper presents a proposed systematization of the participatory social capacity building process for flood risk mitigation, and its implementation in an area of the Ebro river basin at high risk of flooding: Ribera Alta del Ebro. To develop this process, we designed and tested a tool that allows measuring and building five types of social capacities: knowledge, motivation, networks, participation and finance. The tool's implementation has allowed us to assess social capacities in the area. Based on the results of the assessment, we developed a co-decision process with stakeholders and flood risk management authorities on which participatory activities could be employed to improve social capacities for flood risk mitigation. Based on the results of this process, and focused on the weaker social capacities, we developed a set of participatory actions in the area oriented to the general public and stakeholders: informative sessions on the flood risk management plan and flood insurance, interpretative river descents on flood risk management (with journalists, teachers, and the general public), an interpretative visit to the floodplain, a workshop on agricultural insurance, a deliberative workshop on project funding, and deliberative workshops in schools on flood risk management (playing with a flood risk model). The combination of obtaining data through a mixed-methods approach of qualitative inquiry and quantitative surveys, as well as action research through co-decision processes and pilot participatory activities, shows the significant impact of public participation on social capacity building for flood risk mitigation and contributes to the understanding of which main factors intervene in this process.
Keywords: flood risk management, public participation, risk reduction, social capacities, vulnerability assessment
Procedia PDF Downloads 211
710 Examining the European Central Bank's Marginal Attention to Human Rights Concerns during the Eurozone Crisis through the Lens of Organizational Culture
Authors: Hila Levi
Abstract:
Respect for human rights is a fundamental element of the European Union's (EU) identity and law. Surprisingly, however, the protection of human rights has been significantly restricted in the austerity programs ordered by the International Monetary Fund (IMF), the European Central Bank (ECB) and the European Commission (EC) (often labeled 'the Troika') in return for financial aid to the crisis-hit countries. This paper focuses on the role of the ECB in the crisis management. While other international financial institutions, such as the IMF or the World Bank, may opt to neglect human rights obligations, one might expect greater respect for human rights from the ECB, which is bound by the EU Charter of Fundamental Rights. However, this paper argues that ECB officials made no significant effort to protect human rights or strike an adequate balance between competing financial and human rights needs while coping with the crisis. ECB officials were preoccupied with the need to stabilize the economy and prevent a collapse of the Eurozone, and paid only marginal attention to human rights concerns in the design and implementation of the Troika's programs. This paper explores the role of Organizational Culture (OC) in explaining this marginalization. While International Relations (IR) research on the behavior of Intergovernmental Organizations (IGOs) has traditionally focused on the external interests of powerful member states and on national and economic considerations, this study focuses on particular institutions' internal factors and independent processes. OC characteristics have been identified in the OC literature as an important determinant of organizational behavior. This paper suggests that cultural characteristics are also vital for the examination of IGOs, and particularly for understanding the ECB's behavior during the crisis. In order to assess the OC of the ECB and the impact it had on its policies and decisions during the Eurozone crisis, the paper uses the results of numerous qualitative interviews conducted with high-ranking officials and staff members of the ECB involved in the crisis management. It further reviews primary sources of the ECB (such as the ECB statutes and the Memoranda of Understanding signed between the crisis countries and the Troika) and secondary sources (such as the report of the UN High Commissioner for Human Rights on austerity measures and economic, social, and cultural rights). It thus analyzes the interaction between the ECB's culture and the almost complete absence of human rights considerations in the Eurozone crisis resolution scheme. This paper highlights the importance and influence of internal ideational factors on IGO behavior. From a more practical perspective, this paper may contribute to understanding one of the obstacles in the process of human rights implementation in international organizations, and provide instruments for better protection of social and economic rights.
Keywords: European Central Bank, Eurozone crisis, intergovernmental organizations, organizational culture
Procedia PDF Downloads 155
709 Immobilization of Horseradish Peroxidase onto Bio-Linked Magnetic Particles with Allium Cepa Peel Water Extracts
Authors: Mirjana Petronijević, Sanja Panić, Aleksandra Cvetanović, Branko Kordić, Nenad Grba
Abstract:
Peroxidase enzymes are biological catalysts that play a major role in phenolic wastewater treatments and other environmental applications. The most studied species from the peroxidase family is horseradish peroxidase (HRP). In environmental processes, HRP can be used in its free or immobilized form. Enzyme immobilization onto a solid support is performed to improve the enzyme's properties, prolong its lifespan and operational stability, and allow its reuse in industrial applications. One of the enzyme supports of a newer generation is magnetic particles (MPs). Fe₃O₄ MPs are the most widely pursued supports for enzyme immobilization owing to their remarkable advantages of biocompatibility and non-toxicity. Also, MPs can be easily separated and recovered from water by applying an external magnetic field. On the other hand, metals and metal oxides are not suitable for the covalent binding of enzymes, so it is necessary to modify their surface. Fe₃O₄ MP functionalization can be performed during synthesis if it takes place in the presence of plant extracts. Extracts of plant material, such as wild plants, herbs, and even waste materials of the food and agricultural industry (bark, shell, leaves, peel), are rich in various bioactive components such as polyphenols, flavonoids, sugars, etc. When the synthesis of magnetite is performed in the presence of plant extracts, bioactive components are incorporated into the surface of the magnetite, thereby contributing to its functionalization. In this paper, the suitability of bio-magnetite as a solid support for covalent immobilization of HRP via glutaraldehyde was examined. The activity of immobilized HRP at different pH values (4-9) and temperatures (20-80°C) and its reusability were examined. Bio-MPs were synthesized by the co-precipitation method from Fe(II) and Fe(III) sulfate salts in the presence of a water extract of Allium cepa peel. The water extract showed 81% antiradical potential (according to the DPPH assay), which is connected with its high content of polyphenols. According to the FTIR analysis, the bio-magnetite contains oxygen functional groups (-OH, -COOH, C=O) suitable for binding to glutaraldehyde, after which the enzyme is covalently immobilized. The immobilized enzyme showed high activity at ambient temperature and pH 7 (30 U/g) and retained ≥ 80% of its activity over a wide range of pH (5-8) and temperature (20-50°C). The HRP immobilized onto bio-MPs showed remarkable stability towards temperature and pH variations compared to the free enzyme form. On the other hand, immobilized HRP showed low reusability: after the first washing cycle the enzyme retained 50% of its activity, and after the third washing cycle only 22%.
Keywords: bio-magnetite, enzyme immobilization, water extracts, environmental protection
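For readers unfamiliar with the activity unit quoted above, the sketch below shows one common way an immobilized-peroxidase activity in U/g could be obtained from a spectrophotometric assay. The assay type (guaiacol read at 470 nm), the absorption coefficient and the example numbers are our assumptions and are not taken from the paper; 1 U is taken as 1 µmol of product formed per minute.

```python
# Minimal sketch (assumed protocol, not the authors'): converting a measured
# rate of absorbance change into an apparent activity in U per gram of
# magnetic support, assuming a guaiacol-type assay with a molar absorption
# coefficient of ~26.6 mM^-1 cm^-1 for the coloured product.
def activity_u_per_g(dA_per_min, reaction_volume_ml, support_mass_g,
                     epsilon_mM_cm=26.6, path_cm=1.0):
    """Return apparent activity of the immobilized enzyme in U/g of support."""
    # dA/min divided by (epsilon * path) gives mM/min = umol/mL/min,
    # multiplied by the reaction volume in mL gives umol/min.
    umol_per_min = dA_per_min / (epsilon_mM_cm * path_cm) * reaction_volume_ml
    return umol_per_min / support_mass_g

# Hypothetical numbers chosen so the result is of the order reported (30 U/g).
print(round(activity_u_per_g(dA_per_min=0.80, reaction_volume_ml=10.0,
                             support_mass_g=0.010), 1), "U/g")
```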
Procedia PDF Downloads 224
708 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper discusses how an innovative cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing today at a rapid speed. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting Jan 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, we are seeing pressures building up in Europe, China and many other countries that would require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can be the leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, it is far from capable of tracing a lot and serial number beyond the enterprise and making this information easily available in real time. Solution: The solution described here involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists, given the huge investment made in Dubai Healthcare City, to use technology and infrastructure to attract more FDI by providing such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators/companies to run and host such a cloud-based solution and become a global hub for such traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
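As an illustration of the kind of record such a platform would have to capture for lot- and serial-level tracing, here is a minimal sketch of a traceability event and a chain-of-custody query. The field names, identifiers and example events are our own assumptions, not the paper's actual data model or API.

```python
# Illustrative sketch only (not the proposed platform): a minimal shipment
# event for lot/serial traceability and a helper that reconstructs the
# end-to-end chain of custody for one serial number.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class TraceEvent:
    product_code: str   # e.g. a GTIN-like product identifier (hypothetical)
    lot: str
    serial: str
    sender: str
    receiver: str
    timestamp: datetime

def chain_of_custody(events: List[TraceEvent], serial: str) -> List[TraceEvent]:
    """Return all shipment events for one serial number, oldest first."""
    return sorted((e for e in events if e.serial == serial),
                  key=lambda e: e.timestamp)

# Hypothetical movements of one device from manufacturer to pharmacy.
events = [
    TraceEvent("00312345678906", "LOT42", "SN001", "Manufacturer", "Wholesaler",
               datetime(2015, 3, 1)),
    TraceEvent("00312345678906", "LOT42", "SN001", "Wholesaler", "Pharmacy",
               datetime(2015, 3, 9)),
]
for e in chain_of_custody(events, "SN001"):
    print(e.timestamp.date(), e.sender, "->", e.receiver)
```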
Procedia PDF Downloads 527
707 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [for tonic-accent languages] are not suitable and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its corresponding word in Ekaη, and each Ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to any text of a folksong in a tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speaking of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theorization produced a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: music, language, entanglement, science, research
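To give a flavour of the idea of writing a tonal word on a staff, the toy sketch below maps tone-annotated syllables to pitches. The tone inventory, the pitch choices and the example word are all assumptions made purely for demonstration; they are not the dictionary's actual encoding.

```python
# Illustrative sketch only (not the authors' application): a toy mapping from
# tone-annotated syllables to musical pitches, in the spirit of transcribing a
# tonal-language word onto a staff. All mappings below are hypothetical.
TONE_TO_PITCH = {
    "H": "G4",       # high tone    -> higher pitch
    "M": "E4",       # mid tone     -> middle pitch
    "L": "C4",       # low tone     -> lower pitch
    "HL": "G4-C4",   # falling tone -> descending pair
    "LH": "C4-G4",   # rising tone  -> ascending pair
}

def syllables_to_staff(word):
    """word: list of (syllable, tone) pairs -> list of (syllable, pitch)."""
    return [(syl, TONE_TO_PITCH.get(tone, "E4")) for syl, tone in word]

# Hypothetical tone-annotated word, for illustration only.
example = [("e", "L"), ("ka", "H"), ("ng", "M")]
print(syllables_to_staff(example))
```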
Procedia PDF Downloads 69
706 Forging A Distinct Understanding of Implicit Bias
Authors: Benjamin D Reese Jr
Abstract:
Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often targeted toward specific groups of people based on their gender, race, age, perceived sexual orientation or other social categories. Since the late 1980s, there has been a proliferation of research that hypothesizes that the operation of implicit bias is the result of the brain needing to process millions of bits of information every second. Hence, one's prior individual learning history provides 'shortcuts'. As soon as one sees someone of a certain race, one has immediate associations based on past learning and might make assumptions about their competence, skill, or danger. These assumptions are outside of conscious awareness. In recent years, an alternative conceptualization has been proposed. The 'bias of crowds' theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive 'shortcuts'. Therefore, attempts to modify individual perceptions or learning might have negligible impact on the embedded environmental systems or policies that exist within certain contexts or communities. From the 'bias of crowds' perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community, rather than by focusing on training or education aimed at reducing an individual's biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of the dynamics of how implicit biases impact cognitions, actions, decisions, and interactions, so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual's learning history and the important variables within the 'bias of crowds' theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider.
Keywords: implicit bias, unconscious bias, bias, inequities
Procedia PDF Downloads 8
705 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles
Authors: Antonina A. Shumakova, Sergey A. Khotimchenko
Abstract:
The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs), and on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further examined in in vitro studies. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments: (1) with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver, while in other organs (kidneys, spleen, testes and brain) the lead content was lower than in animals of the control group; (2) studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with an increase in the dose of Al2O3 NPs, whereas for the other organs the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; (3) with the combined administration of lead and SiO2 NPs at different doses, there was no increase in lead accumulation in any of the studied organs. Based on the data obtained, it can be assumed that there are at least three scenarios of the combined effects of ENMs and chemical contaminants on the body: (1) ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or less accessible) for absorption; in this case, it can be expected that the toxicity of both the ENMs and the contaminants will decrease; (2) the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can play the role of a kind of conductor for contaminants, and thus their penetration into the internal environment of the body increases, thereby increasing the toxicity of the contaminants; (3) ENMs and contaminants do not interact with each other in any way, so the toxicity of each of them is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
Keywords: absorption, cadmium, engineered nanomaterials, lead
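The batch-adsorption comparison behind such in vitro tests rests on a simple mass balance; the sketch below shows the standard quantities that would be reported when comparing how strongly different ENMs bind a contaminant such as lead. The example concentrations, volume and sorbent mass are hypothetical and not taken from the study.

```python
# Minimal sketch (assumed quantities, not the authors' protocol): the standard
# batch-adsorption mass balance used to compare contaminant binding by ENMs
# under conditions simulating the small intestine.
def adsorbed_fraction(c0_mg_l, ce_mg_l):
    """Fraction of the contaminant removed from solution at equilibrium."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l

def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, sorbent_mass_g):
    """q_e in mg of contaminant per g of nanomaterial: q = (C0 - Ce) * V / m."""
    return (c0_mg_l - ce_mg_l) * volume_l / sorbent_mass_g

# Hypothetical example: 10 mg/L lead, 50 mL of simulated fluid, 25 mg of NPs,
# 4 mg/L left in solution after equilibration.
print(adsorbed_fraction(10.0, 4.0))                  # 0.6 -> 60 % adsorbed
print(adsorption_capacity(10.0, 4.0, 0.050, 0.025))  # 12.0 mg/g
```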
Procedia PDF Downloads 87
704 Reconceptualising the Voice of Children in Child Protection
Authors: Sharon Jackson, Lynn Kelly
Abstract:
This paper proposes a conceptual review of the interdisciplinary literature which has theorised the concept of 'children's voices'. The primary aim is to identify and consider the theoretical relevance of conceptual thought on 'children's voices' for research and practice in child protection contexts. Attending to the 'voice of the child' has become a core principle of social work practice in contemporary child protection contexts. Discourses of voice permeate the legislative, policy and practice frameworks of child protection practices within the UK and internationally. Voice is positioned within a 'child-centred' moral imperative to 'hear the voices' of children and take their preferences and perspectives into account. This practice is now considered to be central to working in a child-centred way. The genesis of this call to voice is revealed through sociological analysis of twentieth-century child welfare reform as rooted, inter alia, in intersecting political, social and cultural discourses which have situated children and childhood as sites of state intervention, as enshrined in the 1989 United Nations Convention on the Rights of the Child, ratified by the UK government in 1991, and more specifically in Article 12 of the convention. From a policy and practice perspective, the professional 'capturing' of children's voices has come to saturate child protection practice. This has incited a stream of directives, resources, advisory publications and 'how-to' guides which attempt to articulate practice methods to 'listen', 'hear' and, above all, 'capture' the 'voice of the child'. The idiom 'capturing the voice of the child' is frequently invoked within the literature to express the requirements of the child-centred practice task to be accomplished. Despite the centrality of voice, and an obsession with 'capturing' voices, evidence from research, inspection processes, serious case reviews, and child abuse and death inquiries has consistently highlighted professional neglect of 'the voice of the child'. Notable research studies have highlighted the relative absence of the child's voice in social work assessment practices, a troubling lack of meaningful engagement with children, and the need to more thoroughly examine communicative practices in child protection contexts. As a consequence, the project of capturing 'the voice of the child' has intensified, and there has been an increasing focus on developing methods and professional skills to attend to voice. This has been guided by a recognition that professionals often lack the skills and training to engage with children in age-appropriate ways. We argue, however, that the problem with 'capturing' and [re]representing 'voice' in child protection contexts is, more fundamentally, a failure to adequately theorise the concept of 'voice' in the 'voice of the child'. For the most part, 'the voice of the child' incorporates psychological conceptions of child development. While these concepts are useful in the context of direct work with children, they fail to consider other strands of sociological thought, which position 'the voice of the child' within an agentic paradigm to emphasise the active agency of the child.
Keywords: child-centered, child protection, views of the child, voice of the child
Procedia PDF Downloads 136
703 Human Creativity through Dooyeweerd's Philosophy: The Case of Creative Diagramming
Authors: Kamaran Fathulla
Abstract:
Human creativity knows no bounds. More than a millennium ago, humans expressed their knowledge on cave walls and on clay artefacts. Visuals such as diagrams and paintings have always provided us with a natural and intuitive medium for expressing such creativity. Making sense of human-generated visualisation has been influenced by Western scientific philosophies, which are often reductionist in nature. Theoretical frameworks such as those delivered by Peirce have dominated our views of how to make sense of visualisation, where a visual is seen as an emergent property of our thoughts. Others have reduced the richness of human-generated visuals to mere shapes drawn on a piece of paper or on a screen. This paper introduces an alternative framework in which the centrality of human functioning is given explicit and richer consideration through the multi-aspectual philosophical works of Herman Dooyeweerd. Dooyeweerd's framework for understanding reality was based on fifteen aspects of reality, each having a distinct core meaning. The totality of the aspects formed a 'rainbow'-like spectrum of meaning. The thesis of this approach is that meaningful human functioning in most cases involves the diversity of all aspects working in synergy and harmony. The foundations and applicability of this approach are illustrated in the case of humans' use of diagramming for creative purposes, particularly within an educational context. Diagrams play an important role in education. Students and lecturers use diagrams as a powerful tool to aid their thinking. However, research into the role of diagrams used in education continues to reveal difficulties students encounter during both the interpretation and the construction of diagrams. Their main problems stem from the ever-increasing diversity of diagram types, coupled with the fact that most real-world diagrams often contain a mix of these different types, such as boxes and lines, bar charts, surfaces, routes, shapes dotted around the drawing area, and so on, with each type having its own distinct set of static and dynamic semantics. We argue that the persistence of these problems is grounded in our existing ways of understanding diagrams, which are often reductionist in their underpinnings and driven by a single perspective or formalism. In this paper, we demonstrate the limitations of these approaches in dealing with these problems. Consequently, we propose, discuss, and demonstrate the potential of a non-reductionist framework for understanding diagrams based on Symbolic and Spatial Mappings (SySpM), underpinned by Dooyeweerd's philosophy. The potential of the framework to account for the meaning of diagrams is demonstrated by applying it to a real-world physics diagram case study.
Keywords: SySpM, drawing style, mapping
Procedia PDF Downloads 238
702 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption
Authors: M. François, L. Sigot, C. Vallières
Abstract:
Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a 1 cm diameter glass column packed to a 2 cm MOF bed height. The MOF was sieved to 630 µm to 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). Acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant as well as the impact of HKUST-1 humidity on the adsorption process. Consequently, different MOF water content conditions were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The raw material, without any pretreatment and containing 30% water, serves as a reference. First, conclusions can be drawn from the comparison of the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. The saturation of the raw material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time defined at C/Co = 10% appears earlier for the raw material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the breakthrough at 10%: an abrupt increase of the outlet concentration is observed for the material with the lower humidity, in comparison to a smooth increase for the raw material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what can explain the shape of the breakthrough curves associated with the pretreatments of HKUST-1 and which mechanisms take place in the adsorption process between the MOF, the pollutant, and the water.
Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence
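For readers less familiar with breakthrough analysis, the sketch below shows how the two quantities discussed above, the breakthrough time at C/Co = 10% and the adsorbed amount, are typically extracted from a measured curve. The curve, the bed mass and the approximate conversion of 460 ppmv to mg/L are synthetic assumptions, not the authors' data.

```python
# Illustrative post-processing sketch (synthetic data, not the measured
# curves): breakthrough time at C/Co = 0.1 and adsorption capacity from
# q = (F * Co / m) * integral of (1 - C/Co) dt.
import numpy as np

def breakthrough_time(t_h, c_over_c0, threshold=0.10):
    """First time at which the outlet ratio C/Co reaches the threshold."""
    c = np.asarray(c_over_c0)
    return float(np.asarray(t_h)[np.argmax(c >= threshold)])

def adsorbed_amount(t_h, c_over_c0, flow_l_per_h, c0_mg_per_l, bed_mass_g):
    """Adsorbed mass per gram of MOF (mg/g) from the breakthrough curve."""
    t = np.asarray(t_h)
    removed = 1.0 - np.asarray(c_over_c0)
    integral_h = np.sum(0.5 * (removed[1:] + removed[:-1]) * np.diff(t))  # trapezoid rule
    return flow_l_per_h * c0_mg_per_l * integral_h / bed_mass_g

# Synthetic, roughly sigmoidal curve standing in for a measured one.
t = np.linspace(0.0, 5.0, 200)                   # time, h
ratio = 1.0 / (1.0 + np.exp(-4.0 * (t - 1.5)))   # outlet C/Co
print("breakthrough time (C/Co = 10%):", round(breakthrough_time(t, ratio), 2), "h")
# 0.7 L/min ~ 42 L/h; 460 ppmv acetaldehyde ~ 0.83 mg/L at 25 degC (approximate
# conversion); the 1 g bed mass is purely hypothetical.
print("capacity:", round(float(adsorbed_amount(t, ratio, 42.0, 0.83, 1.0)), 1), "mg/g")
```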
Procedia PDF Downloads 238
701 The Effects of Grape Waste Bioactive Compounds on the Immune Response and Oxidative Stress in Pig Kidney
Authors: Mihai Palade, Gina Cecilia Pistol, Mariana Stancu, Veronica Chedea, Ionelia Taranu
Abstract:
Nutrition is an important determinant of general health status, with a special focus on the prevention and/or attenuation of inflammation-associated pathologies. People with chronic kidney disease can experience chronic inflammation that can lead to cardiovascular disease and even an increased rate of death. There are important links between chronic kidney diseases, inflammation and nutritional strategies that may prevent or protect against undesirable inflammation and oxidative stress. Grape by-products, either seeds or pomace, are rich in polyphenols, which may exert beneficial anti-inflammatory, antioxidant and antimicrobial effects. As a model for studying the impact of grape seeds on renal inflammation and oxidative stress, we used weaned piglets in this study. After a feeding trial of 30 days with a control diet and an experimental diet containing 5% grape seed (GS), kidney samples were collected. In renal tissues, the expression and activity of important markers of the immune response and oxidative stress were determined: pro-inflammatory cytokines (TNF-alpha, IL-1 beta, IL-6, IL-8, IFN-gamma), anti-inflammatory cytokines (IL-4, IL-10), anti-oxidant enzymes (catalase CAT, superoxide dismutase SOD, glutathione peroxidase GPx) and important mediators belonging to the nuclear receptors (NF-kB1, Nrf-2 and PPAR-gamma). Gene expression was evaluated by qPCR, whereas protein concentration was determined using proteomic techniques (ELISA). The activity of the anti-oxidant enzymes was determined using specific kits. Our results showed that GS enriched in polyphenols had no effect on TNF-alpha, IL-6 and IL-1 beta gene expression and protein concentration in the kidney. By contrast, the gene expression and protein level of IL-8 and IFN-gamma were decreased in GS kidney. The gene levels of the anti-inflammatory cytokines IL-4 and IL-10 were increased in kidneys collected from GS piglets in comparison with controls, with no modification of protein levels between the two groups. The activities of the anti-oxidant enzymes CAT and GPx were increased in the kidney by GS, whereas SOD activity was unmodified in comparison with control samples. Also, the GS diet did not modulate the mRNA expression of the nuclear receptors NF-kB1, Nrf-2 and PPAR-gamma in the kidney. In conclusion, our results demonstrated that GS enriched in bioactive compounds such as polyphenols could modulate inflammation and oxidative stress markers in kidney tissues. Further studies are necessary to elucidate the mechanism of action of GS compounds in the case of kidney inflammation associated with oxidative stress, and the signalling molecules involved in these mechanisms.
Keywords: animal model, kidney inflammation, oxidative stress, grape seed
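The abstract states only that gene expression was evaluated by qPCR without naming the quantification model; a common choice is the 2^(-ΔΔCt) (Livak) method, sketched below under that assumption with purely hypothetical Ct values.

```python
# Sketch of the 2^(-delta delta Ct) relative-quantification method. The choice
# of this model and the Ct values below are our assumptions for illustration,
# not data from the study.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of the target gene in treated vs control samples."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values for a target cytokine normalized to a housekeeping
# gene: a positive delta-delta-Ct gives a fold change below 1, i.e. decreased
# expression in the grape-seed group, as reported here for IL-8 and IFN-gamma.
print(relative_expression(26.0, 18.0, 24.5, 18.0))   # ~0.35-fold
```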
Procedia PDF Downloads 298
700 Material Chemistry Level Deformation and Failure in Cementitious Materials
Authors: Ram V. Mohan, John Rivas-Murillo, Ahmed Mohamed, Wayne D. Hodo
Abstract:
Cementitious materials, an excellent example of highly complex, heterogeneous material systems, are cement-based systems that include cement paste, mortar, and concrete and are heavily used in civil infrastructure; though commonly used, they are among the most complex materials in terms of morphology and structure, far more so than, for example, crystalline metals. Processes and features occurring at nanometer-sized morphological structures affect the performance and deformation/failure behavior at larger length scales. In addition, cementitious materials undergo chemical and morphological changes, gaining strength during the transient hydration process. Hydration in cement is a very complex process, creating complex microstructures and associated molecular structures that vary with hydration. A fundamental understanding can be gained through multi-scale modeling of the behavior and properties of cementitious materials, starting from the material chemistry level at the atomistic scale, to further explore their role and the manifested effects at larger length and engineering scales. This predictive modeling enables understanding and studying the influence of material chemistry level changes and nanomaterial additives on the expected resultant material characteristics and deformation behavior. Atomistic molecular dynamics modeling is required to couple material science to engineering mechanics. Starting at the molecular level, a comprehensive description of the material's chemistry is required to understand the fundamental properties that govern behavior occurring across each relevant length scale. Material chemistry level models and molecular dynamics modeling and simulations are employed in our work to describe the molecular-level chemistry features of calcium-silicate-hydrate (CSH), one of the key hydrated constituents of cement paste, and their associated deformation and failure. The molecular-level atomic structure of CSH can be represented by the Jennite mineral structure, which has been widely accepted by researchers and is typically used to represent the molecular structure of the CSH gel formed during the hydration of cement clinkers. This paper will focus on our recent work on the shear and compressive deformation and failure behavior of CSH represented by Jennite. The deformation and failure behavior under shear and compression loading in traditionally hydrated CSH, the effect of material chemistry changes on the predicted stress-strain behavior, the transition from linear to non-linear behavior, and the identification of the onset of failure based on the material chemistry structure of CSH Jennite and changes in its chemistry structure will be discussed.
Keywords: cementitious materials, deformation, failure, material chemistry modeling
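One simple way to mark the linear-to-nonlinear transition mentioned above is to fit the initial elastic slope of a simulated stress-strain curve and flag the strain at which the response departs from that fit by more than a chosen tolerance. The sketch below uses synthetic data and arbitrary tolerances; it is an illustration of the post-processing idea, not the authors' MD workflow or results.

```python
# Illustrative post-processing sketch (synthetic data, not the MD results):
# detect the onset of nonlinearity in a stress-strain curve.
import numpy as np

def onset_of_nonlinearity(strain, stress, linear_points=10, tol=0.05):
    """Return (apparent modulus, onset strain) from deviation off the linear fit."""
    slope = np.polyfit(strain[:linear_points], stress[:linear_points], 1)[0]
    predicted = slope * strain
    deviation = np.abs(stress - predicted) / np.maximum(np.abs(predicted), 1e-12)
    idx = np.argmax(deviation > tol)   # index of the first point exceeding tol
    return slope, strain[idx]

# Synthetic curve: linear up to ~2 % strain, softening beyond (values are
# arbitrary, only roughly of the order reported for CSH-like systems).
strain = np.linspace(0.0, 0.10, 101)
stress = 60.0 * strain - 900.0 * np.maximum(strain - 0.02, 0.0) ** 2
modulus, onset = onset_of_nonlinearity(strain, stress)
print(f"apparent modulus ~ {modulus:.1f} GPa, nonlinearity onset ~ {onset:.3f} strain")
```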
Procedia PDF Downloads 286
699 Mesoporous BiVO4 Thin Films as Efficient Visible Light Driven Photocatalyst
Authors: Karolina Ordon, Sandrine Coste, Malgorzata Makowska-Janusik, Abdelhadi Kassiba
Abstract:
Photocatalytic processes play a key role in the production of new energy sources (such as hydrogen), in the design of self-cleaning surfaces, and in environmental preservation. The most challenging task concerns water purification with high efficiency. In this process, organic pollutants in solution are decomposed into simple, non-toxic compounds such as H2O and CO2. The best-known photocatalytic materials are the ZnO, CdS and TiO2 semiconductors, with TiO2 in particular being an efficient photocatalyst despite a high band gap of 3.2 eV, which exploits only the UV part of the solar spectrum. However, a promising material with visible-light-induced photoactivity has been sought in the monoclinic polytype of BiVO4, which has an energy gap of about 2.4 eV. As in any heterogeneous photocatalysis, a high contact surface is required. Thus, BiVO4 as a photocatalyst can be optimized by increasing its surface area through the synthesis of a mesoporous structure. The main goal of the present work is the synthesis and characterization of a mesoporous BiVO4 thin film. The sol-gel-based synthesis was carried out using standard surfactants such as P123 and F127. The thin film was deposited by spin and dip coating. The structural analysis of the obtained material was then performed by X-ray diffraction (XRD) and Raman spectroscopy. The surface of the resulting structure was investigated using scanning electron microscopy (SEM). Computer simulations modeling the optical and electronic properties of bulk BiVO4 using DFT (density functional theory) methodology were carried out. The semi-empirical parameterized method PM6 was used to compute the physical properties of BiVO4 nanostructures. The Raman and IR absorption spectra were also measured for the synthesized mesoporous material, and the results were compared with the theoretical predictions. The simulations of nanostructured BiVO4 pointed out the occurrence of quantum confinement for nanosized clusters, leading to a widening of the band gap. This result limits the relevance of nanosized objects for harvesting a wide part of the solar spectrum. Therefore, a balance was sought experimentally through the mesoporous nature of the films, which enhances the contact surface required for heterogeneous catalysis without lowering the nanocrystallite size below the critical size that induces an increased band gap. The present contribution will discuss the relevant features of the mesoporous films with respect to their photocatalytic responses.
Keywords: bismuth vanadate, photocatalysis, thin film, quantum-chemical calculations
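As a back-of-the-envelope check of the visible-light claim (our own calculation, not taken from the paper), the absorption edge corresponding to a band gap E_g follows from lambda = hc/E_g, approximately 1240 nm eV / E_g:

\[
\lambda = \frac{hc}{E_g} \approx \frac{1240\ \text{nm eV}}{E_g}
\quad\Rightarrow\quad
\lambda_{\mathrm{BiVO_4}} \approx \frac{1240}{2.4} \approx 517\ \text{nm},
\qquad
\lambda_{\mathrm{TiO_2}} \approx \frac{1240}{3.2} \approx 388\ \text{nm}.
\]

So monoclinic BiVO4 can absorb a substantial part of the visible spectrum (up to roughly green light), whereas TiO2 is limited to the near-UV.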
Procedia PDF Downloads 324
698 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050
Authors: Ali Hashemifarzad, Jens Zum Hingst
Abstract:
The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and also worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050, according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that they cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is to predict the energy requirement by 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for energy efficiency increase and demand reduction are set very ambitiously. To provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which were in the past significant drivers of the increase in energy demand. The potential for energy demand reduction and efficiency increase (on the demand side) was also investigated. In particular, current and future technological developments in the energy consumption sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included. Here, in addition to the traditional electricity sector, heat and fuel-based consumption in different sectors such as households, commerce, industry and transport are taken into account, supporting the idea that for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps and approx. 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand rises to a total of 1,682 TWh/a, with a higher electricity share of almost 1,138 TWh/a. It has also been estimated that 50% of the electricity yield must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), this would mean that the electricity requirement for the very ambitious scenario would increase to 1,227 TWh/a.
Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production
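One reading of these figures (our own reconstruction, since the abstract does not spell out the calculation) is that half of the 818 TWh/a of electricity demand has to pass through storage with roughly 50% round-trip losses, so that delivering the stored half requires twice as much generation:

\[
E_{\text{direct}} = 0.5 \times 818 \approx 409\ \text{TWh/a},\qquad
E_{\text{via storage}} = \frac{409}{0.5} = 818\ \text{TWh/a},
\]
\[
E_{\text{total}} \approx 409 + 818 = 1227\ \text{TWh/a},
\]

which matches the 1,227 TWh/a quoted for the very ambitious scenario.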
Procedia PDF Downloads 134
697 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamics Waves
Authors: Piotr Wisniewski, Sławomir Dykas
Abstract:
Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of steam, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles which influence the phase change process. It is known that the phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the physical properties of the fluid and might affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., an expansion rate of P ≈ 1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house experimental test rig, where a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for a supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurements of static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for the analysis of the occurring flow phenomena. The analysis of the number, diameter, and character (solid or liquid) of the suspended particles revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat. If solid particles are considered, with an increasing number of them, condensation is triggered upstream of the nozzle throat, decreasing the strength of the condensation wave. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they partially evaporate at the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius, they do not affect the expanding flow significantly; however, they might be of major importance when considering compression phenomena, as they tend to evaporate at the shock wave. This research demonstrates the need for further study of phase change phenomena in supersonic flow, especially considering the interaction of droplets with the aerodynamic waves in the flow. Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows
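As a rough illustration of the quoted expansion rate, the sketch below evaluates P = -(1/p)(dp/dt), the definition commonly used in condensing nozzle-flow studies, along a synthetic pressure history; both the definition as applied here and the exponential pressure decay are assumptions made for illustration only.

```python
import numpy as np

# Hedged sketch: estimating the expansion rate P = -(1/p) * dp/dt from a
# pressure history p(t) followed by a fluid element; the data are synthetic.

t = np.linspace(0.0, 2e-3, 200)          # s, flight time of a fluid element
p = 1.0e5 * np.exp(-1000.0 * t)          # Pa, synthetic pressure decay

expansion_rate = -np.gradient(p, t) / p  # 1/s
print(expansion_rate.mean())             # ~1000 s^-1, the order quoted in the abstract
```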
Procedia PDF Downloads 119
696 Translation and Validation of the Pain Resilience Scale in a French Population Suffering from Chronic Pain
Authors: Angeliki Gkiouzeli, Christine Rotonda, Elise Eby, Claire Touchet, Marie-Jo Brennstuhl, Cyril Tarquinio
Abstract:
Resilience is a psychological concept of possible relevance to the development and maintenance of chronic pain (CP). It refers to the ability of individuals to maintain reasonably healthy levels of physical and psychological functioning when exposed to an isolated and potentially highly disruptive event. Extensive research in recent years has supported the importance of this concept in the CP literature. Increased levels of resilience were associated with lower levels of perceived pain intensity and better mental health outcomes in adults with persistent pain. The ongoing project seeks to include the concept of pain-specific resilience in the French literature in order to provide more appropriate measures for assessing and understanding the complexities of CP in the near future. To the best of our knowledge, there is currently no validated version of the pain-specific resilience measure, the Pain Resilience Scale (PRS), for French-speaking populations. Therefore, the present work aims to address this gap, firstly by performing a linguistic and cultural translation of the scale into French and secondly by studying the internal validity and reliability of the PRS for French CP populations. The forward-translation/back-translation methodology was used to achieve as faithful a cultural and linguistic translation as possible, according to the recommendations of the COSMIN (Consensus-based Standards for the selection of health Measurement Instruments) group, and an online survey is currently being conducted among a representative sample of the French population suffering from CP. To date, the survey has involved one hundred respondents, with a total target of around three hundred participants at its completion. We further seek to study the metric properties of the French version of the PRS, "L’Echelle de Résilience à la Douleur spécifique pour les Douleurs Chroniques" (ERD-DC), in French patients suffering from CP, assessing the level of pain resilience in the context of CP. Finally, we will explore the relationship between the level of pain resilience in the context of CP and other variables of interest commonly assessed in pain research and treatment (i.e., general resilience, self-efficacy, pain catastrophising, and quality of life). This study will provide an overview of the methodology used to address our research objectives. We will also present the main findings for the first time and further discuss the validity of the scale in the field of CP research and pain management. We hope that this tool will provide a better understanding of how CP-specific resilience processes can influence the development and maintenance of this disease. This could ultimately result in better treatment strategies specifically tailored to individual needs, thus leading to reduced healthcare costs and improved patient well-being. Keywords: chronic pain, pain measure, pain resilience, questionnaire adaptation
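Internal consistency of the translated scale would typically be summarised with a coefficient such as Cronbach's alpha; the sketch below shows one standard way to compute it on synthetic Likert-type responses. The 14-item layout and the random data are assumptions made purely for illustration, not study data.

```python
import numpy as np

# Hedged sketch: Cronbach's alpha as one common index of internal consistency
# for a scale such as the ERD-DC (rows = respondents, columns = items).

def cronbach_alpha(items: np.ndarray) -> float:
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
fake_responses = rng.integers(1, 6, size=(100, 14))  # 100 respondents, 14 items (illustrative)
# Random, uncorrelated responses give an alpha near zero; real scale data
# would be expected to yield a substantially higher value.
print(cronbach_alpha(fake_responses))
```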
Procedia PDF Downloads 90
695 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to enhance the distribution of the load applied to the fibres as well as to protect the fibres from the effect of harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite usually begins with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture which, if it becomes entrapped, will lead to voids in the subsequently cured polymer. Therefore, degassing is normally utilised after mixing, and this is often achieved by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage, and if a method of mixing could be found that degassed the resin mixture at the same time, this would lead to shorter production times, more effective degassing and fewer voids in the final polymer. In this study the effect of four different methods for mixing and degassing the pre-polymer with hardener and accelerator was investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with ImageJ analysis software, to study morphological changes, void content and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest void percentage of all the mixing methods in the study. In addition, the void percentage found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times. Keywords: degassing, low frequency ultrasound, polymer composites, voids
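The void-content measurement with ImageJ essentially amounts to thresholding a micrograph and reporting the void area fraction; the sketch below illustrates that idea on a synthetic image with an arbitrary fixed threshold. Real analyses would use calibrated thresholding (e.g. Otsu) and the image scale, so this is only an assumption-laden illustration of the principle.

```python
import numpy as np

# Hedged sketch of a void-content estimate: threshold a grayscale micrograph
# so that dark voids become foreground, then report the void area fraction.
# The "micrograph" here is synthetic and the fixed threshold is an assumption.

rng = np.random.default_rng(1)
micrograph = rng.uniform(0.6, 1.0, size=(512, 512))  # bright resin matrix
micrograph[100:140, 200:260] = 0.1                   # synthetic dark void
micrograph[300:330, 50:90] = 0.15                    # another synthetic void

void_mask = micrograph < 0.5                         # simple fixed threshold
void_fraction = void_mask.mean()                     # area fraction of voids
print(f"void content: {100 * void_fraction:.2f} %")
```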
Procedia PDF Downloads 296
694 Educational Leadership Preparation Program Review of Employer Satisfaction
Authors: Glenn Koonce
Abstract:
There is a need to address the improvement of university educational leadership preparation programs through the processes of accreditation and continuous improvement. The program faculty at a university in the eastern part of the United States has incorporated an employer satisfaction focus group to address its national accreditation standard requiring that employers be satisfied with completers' preparation for the position of principal or assistant principal. Using the proficiencies required by the Council for the Accreditation of Educator Preparation (CAEP), the following research questions are investigated: 1) which proficiencies do completers perform most strongly? 2) which proficiencies need to be strengthened? 3) what other strengths beyond the required proficiencies do completers demonstrate? 4) what other areas of responsibility beyond the required proficiencies do completers demonstrate? and 5) how can the program improve in preparing candidates for their positions? This study focuses on employers in one public school district that employs a large number of educational leadership completers as principals and assistant principals. Central office directors who evaluate principals and principals who evaluate assistant principals are the focus group participants. The focus group questions were constructed based on recommendations from an accreditation regulatory specialist, reviewed by an expert panel, and piloted by an experienced focus group leader. The focus group session was audio recorded, transcribed, and analyzed using NVivo Version 14 software. After constructing folders in NVivo, the focus group transcript was loaded and examined to identify significant statements and assess core ideas for developing primary themes. These themes were aligned to address the research questions. From the transcript, codes were assigned to the themes, and NVivo provided a coding hierarchy chart, or graphical illustration, for framing the coding. A final report of the coding process was produced using the primary themes and pertinent codes supported by excerpts from the transcript. The outcome of this study is to identify themes that provide evidence that the educational leadership program is meeting its mission to improve PreK-12 student achievement through well-prepared completers who have achieved the position of principal or assistant principal. These considerations will be used to derive a composite profile of employers' satisfaction with program completers and their capacity to serve, influence, and thrive as educational leaders. Analysis of the identified themes will also reveal issues that may challenge university educational leadership programs to improve. Results, conclusions, and recommendations are used for continuous improvement, which is another national accreditation standard required for the program. Keywords: educational leadership preparation, CAEP accreditation, principal & assistant principal evaluations, continuous improvement
Procedia PDF Downloads 28
693 Electret: A Solution of Partial Discharge in High Voltage Applications
Authors: Farhina Haque, Chanyeop Park
Abstract:
The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium to high voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling capability, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium and high voltage applications. PD, which occurs actively in voids, triple points, and airgaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, airgaps, triple points, and bubbles are common defects that exist in any medium to high voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high power density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, airgaps, sharp edges, and bubbles, electrets are developed and incorporated into high voltage applications. Electrets are electric-field-emitting dielectric materials with electrical charges embedded on the surface and in the bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films based on the widely used triode corona discharge method. To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments is conducted on both charged and uncharged PVDF films under square voltage stimuli that represent PWM waveforms. In addition to single-layer electrets, multiple layers of electrets are also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that the development of an ultimate solution to this decades-long dielectric challenge would be possible with further developments in the fabrication process of electrets. Keywords: electrets, high power density, partial discharge, triode corona discharge
Procedia PDF Downloads 203
692 The Importance of Urban Pattern and Planting Design in Urban Transformation Projects
Authors: Mustafa Var, Yasin Kültiğin Yaman, Elif Berna Var, Müberra Pulatkan
Abstract:
This study deals with the real application of an urban transformation project in Trabzon, Turkey. It aims to highlight the significance of using native species in the planting design of transformation projects, which will also promote the sustainability of urban identity. Urban identity is a phenomenon shaped not only by physical, but also by natural, spatial, social, historical and cultural factors. Urban areas face continuous change, which can occur in either a positive or a negative way. If it occurs in a negative way, it may have destructive effects on urban identity. To address this problematic issue, urban renewal movements initially started after the 1840s around the world, especially in cities with ports. This process was later followed in places that had suffered greatly from fires and then expanded all over the world. In Turkey, these processes have been experienced mostly after the 1980s, as the country suffered the worst effects of unplanned urbanization, especially in the 1950-1990 period. Old squares, streets, meeting points, green areas and Ottoman bazaars have also changed slowly. This change resulted in the alienation of inhabitants from their environments. As a solution, several actions were taken, such as the Mass Housing Laws enacted in 1981 and 1984, and urban transformation projects. Although projects between 1990 and 2000 tried to satisfy the expectations of local inhabitants with the help of several design solutions to promote cultural identity, those modern projects unfortunately also resulted in the alienation of inhabitants from their urban environments. These projects were initially carried out by TOKI (Housing Development Administration of Turkey) and, after 2011, by the Ministry of Environment and Urbanization. Although they had significant potential to create healthy urban environments, they could not use this opportunity effectively. The reason for their failure is that their architectural styles and planting designs are disrespectful of local identity and environments. Generally, it can be said that most of the urban transformation projects implemented in Turkey show little concern for locality. However, such projects can be used as a positive tool for enhancing the urban identity of cities by means of local planting material. For instance, Kyoto can be identified by Japanese maple trees, and Seattle by dahlias. In the same way, in Turkey, the city of Istanbul can be identified by Judas and stone pine trees, and the city of Giresun by cherry trees. Thus, in this paper, the importance of conserving urban identity is discussed specifically with reference to the use of local planting elements. After revealing the mistakes made during urban transformation projects, the techniques and design criteria for preserving and promoting urban identity are examined. In the end, it is emphasized that every city should have its own original, local character and specific planting design, which can be used to highlight its identity as well as its architectural elements. Keywords: urban identity, urban transformation, planting design, landscape architecture
Procedia PDF Downloads 546
691 Formation of the Water Assisted Supramolecular Assembly in the Transition Structure of Organocatalytic Asymmetric Aldol Reaction: A DFT Study
Authors: Kuheli Chakrabarty, Animesh Ghosh, Atanu Roy, Gourab Kanti Das
Abstract:
The aldol reaction is an important class of carbon-carbon bond forming reactions. One popular way to impose asymmetry in the aldol reaction is the introduction of a chiral auxiliary that binds the approaching reactants and creates dissymmetry in the reaction environment, which finally evolves into enantiomeric excess in the aldol products. The last decade has witnessed the use of natural amino acids as chiral auxiliaries to control the stereoselectivity in various carbon-carbon bond forming processes. In this context, L-proline was found to be an effective organocatalyst in asymmetric aldol additions. In the last few decades, the use of water as a solvent or co-solvent in asymmetric organocatalytic reactions has increased sharply. Simple amino acids like L-proline do not catalyze the asymmetric aldol reaction in aqueous medium; moreover, in organic solvent medium, a high catalyst loading (~30 mol%) is required to achieve moderate to high asymmetric induction. In this context, considerable efforts have been made to modify L-proline and 4-hydroxy-L-proline to prepare organocatalysts for the aqueous-medium asymmetric aldol reaction. Here, we report the results of our DFT calculations on the asymmetric aldol reaction of benzaldehyde, p-NO2-benzaldehyde and t-butyraldehyde with a number of ketones, using L-proline hydrazide as the organocatalyst under wet, solvent-free conditions. The Gaussian 09 program package and the GaussView program were used for the present work. Geometry optimizations were performed using the B3LYP hybrid functional and the 6-31G(d,p) basis set. Transition structures were confirmed by Hessian and IRC calculations. As the reactions were carried out under solvent-free conditions, no solvent effects were studied theoretically. The present study has revealed, for the first time, the direct involvement of two water molecules in the aldol transition structures. In the TS, the enamine and the aldehyde are connected through hydrogen bonding with the assistance of two intervening water molecules, forming a supramolecular network. The formation of this type of supramolecular assembly is possible due to the presence of the protonated -NH2 group in the L-proline hydrazide moiety, which is responsible for the favorable entropy contribution to the aldol reaction. The present study also reveals that the water-assisted TS is energetically more favorable than the TS without any water molecules. It can be concluded from this study that the insertion of a polar group capable of hydrogen bond formation into the L-proline skeleton can lead to a favorable aldol reaction with significantly high enantiomeric excess under wet, solvent-free conditions by reducing the activation barrier of the reaction. Keywords: aldol reaction, DFT, organocatalysis, transition structure
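The link between the computed activation barriers and the predicted enantiomeric excess follows from transition state theory: the free-energy difference between the competing diastereomeric transition structures sets the ratio of the two pathways. The sketch below shows that relationship for an illustrative barrier difference of 2 kcal/mol, which is not a value reported in the study.

```python
import math

# Hedged sketch: predicted enantiomeric excess (ee) from the free-energy
# difference DDG‡ between the major and minor diastereomeric TSs at 298 K.

R = 1.987e-3      # kcal / (mol K)
T = 298.15        # K

def enantiomeric_excess(ddg_kcal: float, temperature: float = T) -> float:
    """ee predicted from the barrier difference between major and minor pathways."""
    minor_over_major = math.exp(-ddg_kcal / (R * temperature))
    return (1.0 - minor_over_major) / (1.0 + minor_over_major)

print(f"{100 * enantiomeric_excess(2.0):.1f} % ee")  # ~93% ee for a 2 kcal/mol difference
```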
Procedia PDF Downloads 435
690 Current Status of Scaled-Up Synthesis/Purification and Characterization of a Potentially Translatable Tantalum Oxide Nanoparticle Intravenous CT Contrast Agent
Authors: John T. Leman, James Gibson, Peter J. Bonitatibus
Abstract:
There have been no potentially clinically translatable developments in intravenous CT contrast materials for decades, and iodinated contrast agents (ICA) remain the only FDA-approved media for CT. Small-molecule ICA used to highlight vascular anatomy give weak CT signals in large-to-obese patients because of their rapid redistribution from plasma into interstitial fluid, which dilutes their intravascular concentration, and because of a mismatch between iodine’s K-edge and the high kVp settings needed to image this patient population. The use of ICA is also contraindicated in a growing population of renally impaired patients who are hypersensitive to these contrast agents; a transformative intravenous contrast agent with improved capabilities is urgently needed. Tantalum oxide nanoparticles (TaO NPs) with zwitterionic siloxane polymer coatings have high potential as clinically translatable general-purpose CT contrast agents because of (1) substantially improved imaging efficacy compared to ICA in swine/phantoms emulating medium-sized and larger adult abdomens and superior contrast enhancement of thoracic arteries and veins in rabbit, (2) promising biological safety profiles showing near-complete renal clearance and low tissue retention at 3x the anticipated clinical dose (ACD), and (3) clinically acceptable physicochemical parameters as concentrated bulk solutions (250-300 mg Ta/mL). Here, we review the requirements for general-purpose intravenous CT contrast agents in terms of patient safety, X-ray attenuating properties and contrast-producing capabilities, and physicochemical and pharmacokinetic properties. We report the current status of a TaO NP-based contrast agent, including chemical process technology developments and the results of newly defined scaled-up processes for NP synthesis and purification, yielding reproducible formulations with appropriate size and concentration specifications. We discuss recent results of pre-clinical in vitro immunology, non-GLP high-dose tolerability in rats (10x ACD), non-GLP long-term biodistribution in rats at 3x ACD, and non-GLP repeat dosing in rats at the ACD. We also include a discussion of NP characterization, in particular size-stability testing results under accelerated conditions (37 °C), and insights into TaO NP purity, surface structure, and bonding of the zwitterionic siloxane polymer coating by multinuclear (1H, 13C, 29Si) and multidimensional (2D) solution NMR spectroscopy. Keywords: nanoparticle, imaging, diagnostic, process technology, nanoparticle characterization
Procedia PDF Downloads 37
689 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census
Authors: Jaroslav Kraus
Abstract:
Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011. This is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Single-person households and one-family households make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and the increase in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions such as the capital Prague and some others with changing patterns. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) a pattern analysis approach as a starting point for more in-depth analyses (geospatial regression) in the future. For the analysis of this type of data, the numbers of households by type are treated as distinct objects. All events in a meaningfully delimited study region (e.g., municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at the NUTS3 level) and of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used. Identifying that clustering exists in census household datasets does not provide a detailed picture of the nature and pattern of the clustering, but it is helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, any particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran’s I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to the development of localized variants of almost any standard summary statistic. Local Moran’s I will give an indication of the homogeneity and diversity of household data at the municipal level. Keywords: census, geo-demography, households, the Czech Republic
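Global Moran's I, named above as the main autocorrelation measure, can be written compactly; the sketch below implements the standard formula on synthetic municipal shares with a simple binary contiguity matrix (illustrative values, not census data).

```python
import numpy as np

# Hedged sketch of global Moran's I for a municipal-level ratio,
# e.g. the share of single-person households per municipality.

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()
    num = (w * np.outer(z, z)).sum()   # spatially weighted cross-products
    den = (z ** 2).sum()               # total variance term
    return len(x) / w.sum() * num / den

# Five synthetic "municipalities" arranged in a row; neighbours share an edge.
shares = np.array([0.42, 0.40, 0.38, 0.25, 0.24])
w = np.zeros((5, 5))
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0

print(morans_i(shares, w))  # positive value -> neighbouring municipalities are similar
```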
Procedia PDF Downloads 96
688 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research relates to the development of a recently proposed technique for the formation of composite materials, such as optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained at the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics that provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is a value directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of the experimental technique and simulation which allow these parameters to be determined. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at a minimum of two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6x6x6 mm3) underwent the ion-exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion-exchange processing resulted in vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of the diffusant concentrations and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all the above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters which provided reliable agreement between the simulation and the experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains. Keywords: diffusion, glass-ceramics, ion exchange, vitrification
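The abstract does not give the model equations, so the sketch below only illustrates the general fitting workflow for (β, γ): a hypothetical placeholder profile model is fitted to a synthetic grain-size profile by least squares, a step that would be repeated for each temperature. Both the model function and the data are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hedged sketch of extracting (beta, gamma) by fitting a decrystallization model
# to a measured grain-size profile. `simulate_grain_profile` is a hypothetical
# placeholder, not the authors' model, and the "measured" data are synthetic.

def simulate_grain_profile(depth_um, beta, gamma):
    # Placeholder: grains shrink towards the exchanged surface with a
    # beta/gamma-dependent decay; the real model solves coupled
    # diffusion/dissolution equations.
    return 1.0 - beta * np.exp(-gamma * depth_um / 100.0)

depth = np.linspace(0, 500, 26)                     # um below the exchanged surface
measured = simulate_grain_profile(depth, 0.6, 2.0)  # pretend experimental profile
measured += np.random.default_rng(2).normal(0, 0.01, depth.size)

def residuals(params):
    beta, gamma = params
    return simulate_grain_profile(depth, beta, gamma) - measured

fit = least_squares(residuals, x0=[0.5, 1.0])
print(fit.x)  # recovered (beta, gamma); repeat at each processing temperature
```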
Procedia PDF Downloads 269
687 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique
Authors: Stefano Iannello, Massimiliano Materazzi
Abstract:
Fluidized bed reactor technologies are one of the most valuable pathways for the thermochemical conversion of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a “lift” effect on the particle itself by dragging it up to the bed surface. This phenomenon leads to a high release of volatile matter into the freeboard and limited mass and heat transfer with the particles of the bed inventory. The aim of this work is to gain a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis has been undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to represent the biomass and plastic fractions present in waste materials, respectively. The non-invasive X-ray technique was coupled with particle tracking algorithms to characterize the motion of a single feedstock particle during devolatilization with high resolution. A high-energy X-ray beam passes through the vessel, where absorption occurs depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised with the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed at up to 72 fps (frames per second). A comprehensive mathematical model has been developed in order to validate the experimental results. Beechwood and polypropylene particles showed very different dynamic behaviour during the pyrolysis stage. When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the presence of the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversions, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles and therefore lead to a weaker drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations. Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray
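A minimal sketch of the centroid-tracking step used to follow a single particle through successive X-ray frames is shown below; the frames, threshold and particle motion are synthetic stand-ins for the actual imaging data and tracking algorithms.

```python
import numpy as np

# Hedged sketch of simple centroid tracking: in each frame the strongly
# absorbing feedstock particle appears as a dark blob on a brighter bed,
# so thresholding and averaging pixel coordinates gives its position.

def particle_centroid(frame: np.ndarray, threshold: float) -> tuple[float, float]:
    mask = frame < threshold          # particle absorbs more X-rays -> darker pixels
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()   # centroid in pixel coordinates

rng = np.random.default_rng(3)
trajectory = []
for k in range(5):                    # five synthetic frames
    frame = rng.uniform(0.7, 1.0, size=(256, 256))
    r0 = 200 - 30 * k                 # particle rising towards the bed surface
    frame[r0:r0 + 10, 120:130] = 0.2  # dark blob representing the particle
    trajectory.append(particle_centroid(frame, threshold=0.5))

print(trajectory)                     # row coordinate decreases frame by frame (particle rises)
```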
Procedia PDF Downloads 172
686 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme
Authors: Eleanor Nel
Abstract:
Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often outpaces the development of robust, widely accepted tools for additionally measuring research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publication does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve its impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results from publication writing programmes more effectively to institutional strategic objectives to improve research performance and quality, as well as on what should be included in a comprehensive evaluation framework. Keywords: evaluation, framework, impact, research output
Procedia PDF Downloads 76