Search results for: continuous cycle industrial technological processes
778 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper presents an innovative cloud-based, analytics-enabled solution to a major industry challenge that is approaching all of us globally faster than one might think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting January 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similar pressures are building in Europe, China and many other countries that will require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to become compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can lead in developing a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, as recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs entering through ports such as Dubai remains a serious concern, according to the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. This lack of traceability puts consumer confidence at risk, and any leading provider risks losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such systems are far from able to trace a lot and serial number beyond the enterprise and make this information easily available in real time.
Solution: The proposed solution involves a service provider that allows all subscribers to take advantage of the service. It enables the provider, regardless of its physical location, to host a cloud-based traceability and analytics solution covering millions of distribution transactions that capture the lot of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? The opportunity builds on the substantial investment already made in Dubai Healthcare City and on using technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this regulatory pressure globally in the near future. More interestingly, Dubai can attract the innovators and companies needed to run and host such a cloud-based solution and become a global hub for traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
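A minimal sketch of the kind of serialized, end-to-end event ledger such a platform would maintain. The data model, field names, and actors below are hypothetical illustrations, not the paper's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TraceEvent:
    serial: str     # unit-level serial number (DSCSA serialization)
    lot: str        # manufacturing lot number
    actor: str      # manufacturer, wholesaler, pharmacy, ...
    action: str     # e.g. "shipped", "received", "dispensed"
    timestamp: str  # ISO 8601

class TraceLedger:
    """Append-only chain-of-custody log indexed by serial number."""
    def __init__(self):
        self._events = {}

    def record(self, event: TraceEvent):
        self._events.setdefault(event.serial, []).append(event)

    def chain_of_custody(self, serial: str):
        """Return the ordered (actor, action) history for one unit."""
        return [(e.actor, e.action) for e in self._events.get(serial, [])]

ledger = TraceLedger()
ledger.record(TraceEvent("SN001", "LOT-A", "Manufacturer", "shipped", "2015-01-05T09:00:00Z"))
ledger.record(TraceEvent("SN001", "LOT-A", "Wholesaler", "received", "2015-01-07T14:00:00Z"))
print(ledger.chain_of_custody("SN001"))
```

A real deployment would back this with a shared cloud store and standardized event formats so that the otherwise disconnected systems of each supply-chain participant can contribute to one traceable history.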
Procedia PDF Downloads 527
777 The French Ekang Ethnographic Dictionary: The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western pattern of tonic-accent languages are not suitable for tonal languages and do not account for them phonologically, which is why this prosodic and phonological ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker to pronounce the words as a native would. It is a dictionary adapted to tonal languages, built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that to say and to sing were once the same thing. Each word in the French dictionary is matched to its equivalent in the Ekang language (ekaη), and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation and artificial intelligence. When you apply this theory to any folk-song text in a tonal language, you not only reconstruct the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music.
The experimentation confirming the theorization resulted in a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: music, language, entanglement, science, research
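The paper's actual tone-to-staff modeling is not reproduced here; as a purely hypothetical illustration of the general idea, the sketch below maps tone-marked syllables to pitches. The tone inventory (H/M/L) and the MIDI pitch assignments are assumptions for the example, not the dictionary's notation:

```python
# Hypothetical tone-to-pitch table: high, mid, low tones
# mapped to G4, E4, C4 (MIDI numbers 67, 64, 60).
TONE_TO_MIDI = {"H": 67, "M": 64, "L": 60}

def syllables_to_melody(syllables):
    """Map (syllable, tone) pairs to (syllable, MIDI pitch) pairs."""
    return [(s, TONE_TO_MIDI[t]) for s, t in syllables]

# Illustrative tone-marked word (not a real ekaη transcription)
word = [("e", "L"), ("ka", "H"), ("ng", "M")]
print(syllables_to_melody(word))
```

The real system works the other way as well, recovering the spoken tones from the staff notation; this sketch only shows the forward text-to-pitch direction.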
Procedia PDF Downloads 69
776 Forging a Distinct Understanding of Implicit Bias
Authors: Benjamin D Reese Jr
Abstract:
Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often targeted toward specific groups of people based on their gender, race, age, perceived sexual orientation or other social categories. Since the late 1980s, there has been a proliferation of research hypothesizing that implicit bias arises because the brain must process millions of bits of information every second; one's prior individual learning history therefore provides 'shortcuts'. As soon as one sees someone of a certain race, one has immediate associations based on past learning and might make assumptions about their competence, skill, or danger. These assumptions are outside of conscious awareness. In recent years, an alternative conceptualization has been proposed. The 'bias of crowds' theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive 'shortcuts.' Therefore, attempts to modify individual perceptions or learning might have negligible impact on the embedded environmental systems or policies within certain contexts or communities.
From the 'bias of crowds' perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community, rather than by focusing on training or education aimed at reducing an individual's biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of the dynamics of how implicit biases impact cognitions, actions, decisions, and interactions, so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual's learning history and the important variables within the 'bias of crowds' theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider.
Keywords: implicit bias, unconscious bias, bias, inequities
Procedia PDF Downloads 6
775 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles
Authors: Antonina A. Shumakova, Sergey A. Khotimchenko
Abstract:
The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacity of ENMs, which was further studied in vitro. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. The obtained data are in good agreement with the results of in vivo experiments:
- with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver; in other organs (kidneys, spleen, testes and brain), the lead content was lower than in animals of the control group;
- studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with an increase in the dose of Al2O3 NPs; for other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead;
- with the combined administration of lead and SiO2 NPs in different doses, there was no increase in lead accumulation in any of the studied organs.
Based on the data obtained, it can be assumed that there are at least three scenarios for the combined effects of ENMs and chemical contaminants on the body:
- ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or poorly accessible) for absorption; in this case, the toxicity of both the ENMs and the contaminants can be expected to decrease;
- the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, ENMs can act as a kind of conductor for contaminants, increasing their penetration into the internal environment of the body and thereby increasing their toxicity;
- ENMs and contaminants do not interact with each other at all, so the toxicity of each is determined only by its own quantity and does not depend on the quantity of the other component.
The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a distinctive characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
Keywords: absorption, cadmium, engineered nanomaterials, lead
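The adsorption capacity that distinguishes these scenarios is often summarized with an isotherm model. The abstract does not name the model used, so the following is only an illustrative sketch of a Langmuir isotherm with made-up parameters:

```python
def langmuir_q(c_eq, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q at equilibrium
    concentration c_eq.
    q_max: monolayer capacity (mg/g); k_l: affinity constant (L/mg)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Illustrative parameters, not measured values from the study
q_max, k_l = 50.0, 0.2
for c in (1.0, 10.0, 100.0):
    print(c, round(langmuir_q(c, q_max, k_l), 2))
```

A firmly binding ENM (scenario one) would show a large k_l, so most of the contaminant is sequestered even at low concentrations, whereas a non-interacting pair (scenario three) corresponds to negligible adsorption at any concentration.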
Procedia PDF Downloads 87
774 Reconceptualising the Voice of Children in Child Protection
Authors: Sharon Jackson, Lynn Kelly
Abstract:
This paper proposes a conceptual review of the interdisciplinary literature that has theorised the concept of 'children's voices'. The primary aim is to identify and consider the theoretical relevance of conceptual thought on 'children's voices' for research and practice in child protection contexts. Attending to the 'voice of the child' has become a core principle of social work practice in contemporary child protection contexts. Discourses of voice permeate the legislative, policy and practice frameworks of child protection within the UK and internationally. Voice is positioned within a 'child-centred' moral imperative to 'hear the voices' of children and take their preferences and perspectives into account, a practice now considered central to working in a child-centred way. Sociological analysis of twentieth-century child welfare reform reveals the genesis of this call to voice as rooted, inter alia, in intersecting political, social and cultural discourses that have situated children and childhood as sites of state intervention, as enshrined in the 1989 United Nations Convention on the Rights of the Child, ratified by the UK government in 1991, and more specifically in Article 12 of the convention. From a policy and practice perspective, the professional 'capturing' of children's voices has come to saturate child protection practice. This has incited a stream of directives, resources, advisory publications and 'how-to' guides that attempt to articulate practice methods to 'listen', 'hear' and, above all, 'capture' the 'voice of the child'. The idiom 'capturing the voice of the child' is frequently invoked within the literature to express the child-centred practice task to be accomplished.
Despite the centrality of voice, and an obsession with 'capturing' voices, evidence from research, inspection processes, serious case reviews, and child abuse and death inquiries has consistently highlighted professional neglect of 'the voice of the child'. Notable research studies have highlighted the relative absence of the child's voice in social work assessment practices, a troubling lack of meaningful engagement with children, and the need to examine communicative practices in child protection contexts more thoroughly. As a consequence, the project of capturing 'the voice of the child' has intensified, with an increasing focus on developing methods and professional skills to attend to voice, guided by a recognition that professionals often lack the skills and training to engage with children in age-appropriate ways. We argue, however, that the problem with 'capturing' and [re]presenting 'voice' in child protection contexts is, more fundamentally, a failure to adequately theorise the concept of 'voice' in the 'voice of the child'. For the most part, 'the voice of the child' incorporates psychological conceptions of child development. While these concepts are useful in the context of direct work with children, they fail to consider other strands of sociological thought, which position 'the voice of the child' within an agentic paradigm that emphasises the active agency of the child.
Keywords: child-centred, child protection, views of the child, voice of the child
Procedia PDF Downloads 136
773 Human Creativity through Dooyeweerd's Philosophy: The Case of Creative Diagramming
Authors: Kamaran Fathulla
Abstract:
Human creativity knows no bounds. For more than a millennium, humans have expressed their knowledge on cave walls and on clay artefacts. Visuals such as diagrams and paintings have always provided a natural and intuitive medium for expressing such creativity. Making sense of human-generated visualisation has been influenced by Western scientific philosophies, which are often reductionist in nature. Theoretical frameworks such as those delivered by Peirce have dominated our views of how to make sense of visualisation, where a visual is seen as an emergent property of our thoughts. Others have reduced the richness of human-generated visuals to mere shapes drawn on a piece of paper or on a screen. This paper introduces an alternative framework in which the centrality of human functioning is given explicit and richer consideration through the multi-aspectual philosophical work of Herman Dooyeweerd. Dooyeweerd's framework for understanding reality is based on fifteen aspects of reality, each having a distinct core meaning; the totality of the aspects forms a 'rainbow'-like spectrum of meaning. The thesis of this approach is that meaningful human functioning in most cases involves the diversity of all aspects working in synergy and harmony. The foundations and applicability of this approach are illustrated through the case of humans' use of diagramming for creative purposes, particularly within an educational context. Diagrams play an important role in education: students and lecturers use them as a powerful tool to aid their thinking. However, research into the role of diagrams used in education continues to reveal difficulties students encounter during both the interpretation and the construction of diagrams.
The ever-increasing diversity of diagram types is compounded by the fact that most real-world diagrams contain a mix of these types, such as boxes and lines, bar charts, surfaces, routes, and shapes dotted around the drawing area, with each type having its own distinct set of static and dynamic semantics. We argue that the persistence of these problems is grounded in our existing ways of understanding diagrams, which are often reductionist in their underpinnings, driven by a single perspective or formalism. In this paper, we demonstrate the limitations of these approaches in dealing with these problems. Consequently, we propose, discuss, and demonstrate the potential of a non-reductionist framework for understanding diagrams based on Symbolic and Spatial Mappings (SySpM), underpinned by Dooyeweerd's philosophy. The potential of the framework to account for the meaning of diagrams is demonstrated by applying it to a real-world physics diagram case study.
Keywords: SySpM, drawing style, mapping
Procedia PDF Downloads 238
772 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco
Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi
Abstract:
In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s, Morocco embarked on an ambitious program to increase the number of dams to boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m3/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers resulting from soil erosion. Soil erosion is an important driving force in processes affecting the landscape and has become one of the most serious environmental problems, attracting much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practice, and estimating soil loss risk is the first step toward successful control of water erosion. The aim of this study is to estimate soil loss risk and its spatial distribution across the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) using remote sensing and GIS, the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds around the world, particularly for large-scale basins, due to the simplicity of its formulation and the easy availability of the required datasets. The spatial distribution of annual soil loss was elaborated by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in the watersheds of Morocco varies from 0 to 50 t/ha/year.
Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and, more particularly, in the central part of Morocco (High Atlas Mountains). This variation in vulnerability is highly correlated with slope variation, which indicates that topography is the main agent of soil erosion within these catchments. These results could be helpful for planning natural resource management and for implementing the sustainable long-term management strategies necessary for soil conservation and for extending the projected economic life of existing dams.
Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco
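RUSLE combines the factors listed above multiplicatively, A = R × K × LS × C × P. A minimal per-cell sketch of the computation follows; the factor values are illustrative only, not taken from the Moroccan study:

```python
def rusle_soil_loss(r, k, ls, c, p):
    """RUSLE: A = R * K * LS * C * P.
    R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness
    factor, C: cover-management factor, P: support-practice factor.
    Returns average annual soil loss A (t/ha/year)."""
    return r * k * ls * c * p

# Illustrative factor values for a single raster cell
a = rusle_soil_loss(r=120.0, k=0.3, ls=2.5, c=0.2, p=1.0)
print(round(a, 1), "t/ha/year")
```

In a GIS workflow, each factor is a raster layer and the same multiplication is applied cell by cell, yielding the spatial soil-loss map used to rank watersheds for intervention.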
Procedia PDF Downloads 461
771 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard
Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni
Abstract:
The damage reported by oil and gas industrial facilities has revealed the acute vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may have devastating and long-lasting consequences for built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks are likely to experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then used to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported.
The model that yields the most accurate predictions is then employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, and that data-driven surrogates represent a viable alternative to computationally expensive numerical simulation models.
Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model
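As a minimal sketch of one of the surrogate algorithms investigated (k-nearest neighbors), the snippet below implements the classifier from scratch on synthetic feature vectors. The feature definitions, labels, and values are hypothetical stand-ins, not the study's reconnaissance data set:

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """k-nearest-neighbors classifier: majority vote among the k
    training points closest (Euclidean distance) to the query."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Synthetic features: (normalized diameter, liquid-fill ratio, PGA in g)
train_x = [(0.2, 0.5, 0.1), (0.3, 0.6, 0.15), (0.8, 0.9, 0.6), (0.9, 0.8, 0.7)]
train_y = ["no damage", "no damage", "severe", "severe"]
print(knn_predict(train_x, train_y, (0.85, 0.85, 0.65)))
```

Once trained on real field damage records, such a surrogate answers a damage query in microseconds, which is what makes it a practical replacement for repeated finite element runs when sweeping hazard intensity levels.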
Procedia PDF Downloads 143
770 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption
Authors: M. François, L. Sigot, C. Vallières
Abstract:
Volatile organic compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently detected in dwellings' air, notably due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-organic frameworks (MOFs) are a promising class of materials for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are attractive thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a glass column of 1 cm diameter packed with a 2 cm MOF bed. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). Acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant, as well as the role of HKUST-1 humidity in the adsorption process. Consequently, different MOF water content conditions were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material (without any pretreatment), containing 30% water, serves as a reference.
First, conclusions can be drawn from comparing the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. Saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined at C/Co = 10%, appears earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: an abrupt increase in outlet concentration is observed for the material with lower humidity, compared to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what explains the shape of the breakthrough curves associated with the HKUST-1 pretreatments, and which mechanisms govern the adsorption process between the MOF, the pollutant, and water.
Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence
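The breakthrough time at C/Co = 10% can be read off a measured curve by linear interpolation between the two samples that bracket the threshold. A sketch with illustrative (not measured) data points:

```python
def breakthrough_time(times, c_over_co, threshold=0.10):
    """Linearly interpolate the time at which C/Co first crosses
    the breakthrough threshold (10% by default)."""
    pairs = list(zip(times, c_over_co))
    for (t0, r0), (t1, r1) in zip(pairs, pairs[1:]):
        if r0 < threshold <= r1:
            return t0 + (threshold - r0) * (t1 - t0) / (r1 - r0)
    return None  # threshold never reached

# Illustrative breakthrough curve (hours vs C/Co)
times = [0.0, 0.5, 1.0, 1.5, 2.0]
ratio = [0.0, 0.02, 0.05, 0.12, 0.60]
print(round(breakthrough_time(times, ratio), 2))
```

The same routine applied to each pretreatment's curve gives the 0.75 h vs 1.4 h comparison reported above, independent of how steeply each curve rises afterward.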
Procedia PDF Downloads 238
769 The Effects of Grape Waste Bioactive Compounds on the Immune Response and Oxidative Stress in Pig Kidney
Authors: Mihai Palade, Gina Cecilia Pistol, Mariana Stancu, Veronica Chedea, Ionelia Taranu
Abstract:
Nutrition is an important determinant of general health status, with a special focus on the prevention and/or attenuation of inflammation-associated pathologies. People with chronic kidney disease can experience chronic inflammation that can lead to cardiovascular disease and even an increased rate of death. There are important links between chronic kidney disease, inflammation, and nutritional strategies that may prevent or protect against undesirable inflammation and oxidative stress. Grape by-products, either seeds or pomace, are rich in polyphenols, which may exert beneficial anti-inflammatory, antioxidant and antimicrobial effects. As a model for studying the impact of grape seeds on renal inflammation and oxidative stress, weaned piglets were used in this study. After a 30-day feeding trial with a control diet and an experimental diet containing 5% grape seed (GS), kidney samples were collected. In renal tissues, the expression and activity of important markers of immune response and oxidative stress were determined: pro-inflammatory cytokines (TNF-alpha, IL-1 beta, IL-6, IL-8, IFN-gamma), anti-inflammatory cytokines (IL-4, IL-10), antioxidant enzymes (catalase (CAT), superoxide dismutase (SOD), glutathione peroxidase (GPx)) and important mediators belonging to the nuclear receptor family (NF-kB1, Nrf-2 and PPAR-gamma). Gene expression was evaluated by qPCR, whereas protein concentration was determined using proteomic techniques (ELISA). The activity of the antioxidant enzymes was determined using specific kits. Our results showed that polyphenol-enriched GS has no effect on TNF-alpha, IL-6 and IL-1 beta gene expression and protein concentration in the kidney. By contrast, the gene expression and protein levels of IL-8 and IFN-gamma were decreased in GS kidney. Gene levels of the anti-inflammatory cytokines IL-4 and IL-10 were increased in kidneys collected from GS piglets in comparison with controls, with no difference in protein levels between the two groups.
The activities of the antioxidant enzymes CAT and GPx were increased in the kidney by GS, whereas SOD activity was unmodified in comparison with control samples. The GS diet was also associated with no modulation of mRNA levels of the nuclear receptors NF-kB1, Nrf-2 and PPAR-gamma in the kidney. In conclusion, our results demonstrated that GS enriched in bioactive compounds such as polyphenols can modulate inflammation and oxidative stress markers in kidney tissue. Further studies are necessary to elucidate the mechanism of action of GS compounds in kidney inflammation associated with oxidative stress, and the signalling molecules involved in these mechanisms.
Keywords: animal model, kidney inflammation, oxidative stress, grape seed
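The abstract does not state how qPCR gene-expression changes were quantified; a standard choice for such relative comparisons is the 2^-ΔΔCt method, sketched below with illustrative Ct values (not data from the study):

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative gene expression by the 2^-ΔΔCt method:
    ΔCt = Ct(target) - Ct(reference gene);
    ΔΔCt = ΔCt(treated) - ΔCt(control)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(d_ct_treated - d_ct_control)

# Illustrative Ct values: target cytokine vs a reference gene,
# in GS-fed (treated) and control animals
print(fold_change(24.0, 18.0, 26.0, 18.0))  # → 4.0 (4x the control level)
```

A fold change above 1 corresponds to the up-regulation reported for IL-4 and IL-10, and below 1 to the down-regulation of IL-8 and IFN-gamma.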
Procedia PDF Downloads 298
768 Greenhouse Gasses' Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems
Authors: Alexander J. Severinsky
Abstract:
The radiative forcing of greenhouse gases (GHG) increases the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. Air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The inquiry is motivated by the author's skepticism that current changes can be explained by a '~1 °C' rise in global average surface temperature within the last 50-60 years; the only other plausible cause to explore is a rise in atmospheric temperature. The study analyzes air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative forcing of GHGs with a high level of scientific understanding is near 4.7 W/m2 on average over the Earth's entire surface in 2018, compared to pre-industrial times in the mid-1700s. The additional radiative forcing of fast feedbacks from various forms of water adds approximately 15 W/m2. In 2018, these radiative forcings heated the atmosphere by approximately 5.1 °C, which will produce a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increase in the concentration of GHGs, primarily carbon dioxide and methane. These findings on the radiative forcing of GHGs in 2018 were applied to estimate the effects on major Earth ecosystems.
This additional force of nearly 20 W/m2 causes an increase in ice melting by an additional rate of over 90 cm/year, green leaves temperature increase by nearly 5 oC, and a work energy increase of air by approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 oC increase in average global surface temperature in the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere.Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air
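The link between the quoted forcings and the quoted temperature rise is simple linear arithmetic; the sketch below back-derives the effective sensitivity coefficient from the abstract's own numbers. The linear model and the derived coefficient are illustrative assumptions of this sketch, not values established by the paper.

```python
# Hedged sketch: relating the abstract's radiative forcings to its quoted
# atmospheric temperature rise via a linear model, dT = lambda * dF.
# lambda_eff is back-derived from the abstract's figures (an illustrative
# assumption, not an independently established climate sensitivity).
F_GHG = 4.7        # direct GHG radiative forcing, W/m^2 (2018 vs. mid-1700s)
F_FEEDBACK = 15.0  # fast water feedbacks, W/m^2 (approximate, per abstract)

total_forcing = F_GHG + F_FEEDBACK        # ~20 W/m^2, as stated in the text
lambda_eff = 5.1 / total_forcing          # K per W/m^2, back-derived
delta_t = lambda_eff * total_forcing      # recovers ~5.1 K by construction
print(f"total forcing ≈ {total_forcing:.1f} W/m^2, dT ≈ {delta_t:.1f} K")
```

The decomposition makes explicit that roughly three quarters of the claimed warming comes from the water feedback term rather than the direct GHG forcing.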
Procedia PDF Downloads 104
767 Antagonistic Potential of Epiphytic Bacteria Isolated in Kazakhstan against Erwinia amylovora, the Causal Agent of Fire Blight
Authors: Assel E. Molzhigitova, Amankeldi K. Sadanov, Elvira T. Ismailova, Kulyash A. Iskandarova, Olga N. Shemshura, Ainur I. Seitbattalova
Abstract:
Fire blight is a quarantine bacterial disease that is very harmful to commercial apple and pear production. To date, several methods have been proposed for disease control, including the use of copper-based preparations and antibiotics, which are not always reliable or effective. The use of bacteria as biocontrol agents is one of the most promising and eco-friendly alternatives. Bacteria with protective activity against the causal agent of fire blight are often present among the epiphytic microorganisms of the phyllosphere of host plants. Therefore, the main objective of our study was the screening of local epiphytic bacteria as possible antagonists of Erwinia amylovora, the causal agent of fire blight. Samples of infected organs of apple and pear trees (shoots, leaves, fruits) were collected from industrial horticulture areas in various agro-ecological zones of Kazakhstan. Epiphytic microorganisms were isolated by standard and modified methods on specific nutrient media. The primary screening of the selected microorganisms under laboratory conditions for the ability to suppress the growth of Erwinia amylovora was performed by an agar diffusion test. Among 142 bacteria isolated from the fire blight host plants, 5 isolates, belonging to the genera Bacillus, Lactobacillus, Pseudomonas, Paenibacillus and Pantoea, showed the highest antagonistic activity against the pathogen. The diameters of the inhibition zones depended on the species and ranged from 10 mm to 48 mm. The maximum inhibition zone diameter (48 mm) was exhibited by B. amyloliquefaciens. A weaker inhibitory effect was shown by Pantoea agglomerans PA1 (19 mm). The study of the inhibitory effect of Lactobacillus species against E. amylovora showed that, among the 7 isolates tested, only one (Lactobacillus plantarum 17M) produced an inhibition zone (30 mm).
In summary, this study was devoted to detecting beneficial epiphytic bacteria from the plant organs of pear and apple trees for fire blight control in Kazakhstan. The results of the in vitro experiments showed that the most efficient bacterial isolates are Lactobacillus plantarum 17M, Bacillus amyloliquefaciens MB40, and Pantoea agglomerans PA1. These antagonists are suitable for development as biocontrol agents for fire blight control. Their efficacy will be further evaluated in biological tests under in vitro and field conditions in our future work.
Keywords: antagonists, epiphytic bacteria, Erwinia amylovora, fire blight
Procedia PDF Downloads 167
766 Material Chemistry Level Deformation and Failure in Cementitious Materials
Authors: Ram V. Mohan, John Rivas-Murillo, Ahmed Mohamed, Wayne D. Hodo
Abstract:
Cementitious materials are cement-based systems that include cement paste, mortar, and concrete, heavily used in civil infrastructure. They are an excellent example of highly complex, heterogeneous material systems: though commonly used, they are among the most complex materials in terms of morphology and structure, far more so than, for example, crystalline metals. Processes and features occurring at nanometer-sized morphological structures affect the performance and deformation/failure behavior at larger length scales. In addition, cementitious materials undergo chemical and morphological changes, gaining strength during the transient hydration process. Hydration in cement is a very complex process, creating intricate microstructures and associated molecular structures that vary with hydration. A fundamental understanding can be gained through multi-scale modeling of the behavior and properties of cementitious materials, starting from the material chemistry level at the atomistic scale, to explore their role and the manifested effects at larger length and engineering scales. Such predictive modeling enables understanding and studying the influence of material chemistry level changes and nanomaterial additives on the expected material characteristics and deformation behavior. Atomistic molecular dynamics modeling is required to couple materials science to engineering mechanics: starting at the molecular level, a comprehensive description of the material's chemistry is needed to understand the fundamental properties that govern behavior across each relevant length scale. Material chemistry level models and molecular dynamics simulations are employed in our work to describe the molecular-level chemistry features of calcium-silicate-hydrate (CSH), one of the key hydrated constituents of cement paste, and their associated deformation and failure.
The molecular-level atomic structure of CSH can be represented by the Jennite mineral structure, which has been widely accepted by researchers and is typically used to represent the molecular structure of the CSH gel formed during the hydration of cement clinkers. This paper focuses on our recent work on the shear and compressive deformation and failure behavior of CSH represented by the Jennite structure. The deformation and failure behavior under shear and compression loading in traditionally hydrated CSH, the effect of material chemistry changes on the predicted stress-strain behavior, the transition from linear to non-linear behavior, and the identification of the onset of failure based on the material chemistry structure of CSH Jennite and changes to it will be discussed.
Keywords: cementitious materials, deformation, failure, material chemistry modeling
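As a generic illustration of the "transition from linear to non-linear behavior" analysis mentioned above, the sketch below fits the initial elastic slope of a stress-strain curve and reports where the curve departs from it by a set tolerance. The synthetic curve and the 5% tolerance are assumptions for illustration only, not MD output from this study.

```python
# Hedged sketch: flag the linear-to-nonlinear transition of a stress-strain
# curve by fitting the initial elastic slope and finding the first point that
# deviates from the linear prediction by more than a relative tolerance.
import numpy as np

def nonlinearity_onset(strain, stress, fit_points=10, tol=0.05):
    """Return the strain at which stress first deviates from the initial linear fit."""
    slope = np.polyfit(strain[:fit_points], stress[:fit_points], 1)[0]
    deviates = np.abs(stress - slope * strain) > tol * np.abs(slope * strain)
    idx = np.argmax(deviates[fit_points:]) + fit_points  # first flagged index
    return strain[idx]

# Synthetic softening curve (illustrative, units arbitrary): linear term with
# a quadratic softening correction, loosely mimicking pre-failure behavior.
strain = np.linspace(1e-4, 0.2, 200)
stress = 50.0 * strain - 180.0 * strain**2
print(f"onset strain ≈ {nonlinearity_onset(strain, stress):.3f}")
```

The same post-processing idea applies unchanged to stress-strain data extracted from an MD trajectory.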
Procedia PDF Downloads 286
765 Mesoporous BiVO4 Thin Films as Efficient Visible Light Driven Photocatalyst
Authors: Karolina Ordon, Sandrine Coste, Malgorzata Makowska-Janusik, Abdelhadi Kassiba
Abstract:
Photocatalytic processes play a key role in the production of new energy sources (such as hydrogen), the design of self-cleaning surfaces, and environmental preservation. The most challenging task deals with the purification of water with high efficiency. In this process, organic pollutants in solution are decomposed into simple, non-toxic compounds such as H2O and CO2. The best-known photocatalytic materials are the ZnO, CdS and TiO2 semiconductors, with TiO2 particularly widely used as an efficient photocatalyst despite its high band gap of 3.2 eV, which exploits only the UV part of the solar spectrum. A promising material with visible-light-induced photoactivity was sought in the monoclinic polytype of BiVO4, which has an energy gap of about 2.4 eV. As in all heterogeneous photocatalysis, a high contact surface is required; BiVO4 as a photocatalyst can therefore be optimized by increasing its surface area through the synthesis of a mesoporous structure. The main goal of the present work is the synthesis and characterization of BiVO4 mesoporous thin films. The sol-gel-based synthesis was carried out using standard surfactants such as P123 and F127, and the thin films were deposited by spin- and dip-coating. The structural analysis of the obtained material was performed by X-ray diffraction (XRD) and Raman spectroscopy, and the surface of the resulting structure was investigated using scanning electron microscopy (SEM). Computer simulations modeling the optical and electronic properties of bulk BiVO4 with DFT (density functional theory) methodology were carried out, and the semiempirical parameterized method PM6 was used to compute the physical properties of BiVO4 nanostructures. The Raman and IR absorption spectra were also measured for the synthesized mesoporous material, and the results were compared with the theoretical predictions.
The simulations of nanostructured BiVO4 pointed out the occurrence of quantum confinement for nanosized clusters, leading to a widening of the band gap. This result undermines the suitability of overly small nano-objects for harvesting a wide part of the solar spectrum. A balance was therefore sought experimentally through the mesoporous nature of the films, intended to enhance the contact surface required for heterogeneous catalysis without lowering the nanocrystallite size below the critical values that induce an increased band gap. The present contribution will discuss the relevant features of the mesoporous films with respect to their photocatalytic responses.
Keywords: bismuth vanadate, photocatalysis, thin film, quantum-chemical calculations
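The confinement-induced band gap widening reported by the simulations can be illustrated with the Brus effective-mass model, a generic textbook estimate. The effective masses and dielectric constant used below are assumed illustrative values, not parameters taken from this study.

```python
# Hedged sketch: band-gap widening of a spherical semiconductor nanocluster
# via the Brus effective-mass model. Effective masses (me_rel, mh_rel) and the
# relative permittivity are illustrative assumptions, not measured BiVO4 data.
import math

HBAR = 1.054571817e-34   # J*s
E_CHARGE = 1.602176634e-19  # C
M_E = 9.1093837015e-31   # kg
EPS0 = 8.8541878128e-12  # F/m

def brus_gap(eg_bulk_ev, radius_nm, me_rel, mh_rel, eps_rel):
    """Confined band gap (eV) of a spherical cluster of the given radius."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2 / (2.0 * r**2)) * (
        1.0 / (me_rel * M_E) + 1.0 / (mh_rel * M_E))
    coulomb = 1.786 * E_CHARGE**2 / (4.0 * math.pi * eps_rel * EPS0 * r)
    return eg_bulk_ev + (confinement - coulomb) / E_CHARGE

# Bulk gap 2.4 eV as quoted in the abstract; other parameters assumed.
for r_nm in (1.0, 2.0, 5.0):
    print(f"R = {r_nm} nm -> Eg ≈ {brus_gap(2.4, r_nm, 0.9, 0.7, 68.0):.2f} eV")
```

The model reproduces the qualitative trend in the abstract: the gap widens steeply as the cluster radius shrinks below a few nanometers, while large crystallites stay near the bulk value.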
Procedia PDF Downloads 324
764 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamics Waves
Authors: Piotr Wisniewski, Sławomir Dykas
Abstract:
Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of steam, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles, which influence the phase change process. It is known that the phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the fluid's physical properties and might affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., with expansion rate P ≈ 1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house test rig, where a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurements of static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for the analysis of the occurring flow phenomena. The analysis of the number, diameter, and character (solid or liquid) of the suspended particles revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat.
If solid particles are considered, condensation is triggered upstream of the nozzle throat as their number increases, decreasing the condensation wave strength. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they evaporate partially at the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius they do not affect the expanding flow significantly; however, they might be of major importance when considering compression phenomena, as they will tend to evaporate at the shock wave. This research proves the need for further study of phase change phenomena in supersonic flow, especially considering the interaction of droplets with the aerodynamic waves in the flow.
Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows
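For orientation, the expansion rate quoted above is commonly defined along the flow as P = -(1/p)(dp/dt). The sketch below evaluates this definition for a synthetic pressure trace; the exponential decay constant is an assumption for illustration, not data from the test rig.

```python
# Hedged sketch: evaluating the expansion rate P = -(1/p) dp/dt for a
# synthetic pressure history along a streamline. The decay constant of the
# trace is chosen so that P lands near the 1000 1/s quoted in the abstract.
import numpy as np

t = np.linspace(0.0, 2e-3, 2001)          # time along the streamline [s]
p = 1.0e5 * np.exp(-1000.0 * t)           # synthetic pressure trace [Pa]

expansion_rate = -np.gradient(p, t) / p   # P = -(1/p) dp/dt, pointwise
print(f"mean expansion rate ≈ {expansion_rate.mean():.0f} 1/s")
```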
Procedia PDF Downloads 118
763 Industrial Wastewater from Paper Mills Used for Biofuel Production and Soil Improvement
Authors: Karin M. Granstrom
Abstract:
Paper mills produce wastewater with a high content of organic substances. Treatment usually consists of sedimentation, biological treatment in activated sludge basins, and chemical precipitation. The resulting sludges are currently a waste problem, deposited in landfills or used as low-grade fuels for incineration. There is a growing awareness of the need for energy efficiency and environmentally sound management of sludge. A resource-efficient method would be to digest the wastewater sludges anaerobically to produce biogas, refine the biogas to biomethane for use in the transportation sector, and utilize the resulting digestate for soil improvement. The biomethane yield of pulp and paper wastewater sludge is comparable to that of straw or manure. As a bonus, the digestate has improved dewaterability compared to the feedstock biosludge. The limitations of this process are predominantly its weak economic viability, which necessitates sufficiently large-scale paper production to supply the necessary large amounts of wastewater sludge, and the unresolved questions on the certifiability of the digestate and thus its sales price. A way to improve the practical and economic feasibility of using paper mill wastewater for biomethane production and soil improvement is to co-digest it with other feedstocks. In this study, pulp and paper sludge was co-digested with (1) silage and manure, (2) municipal sewage sludge, (3) food waste, or (4) microalgae. Biomethane yield analysis was performed in 500 ml batch reactors, using an Automatic Methane Potential Test System at thermophilic temperature, with a 20-day test duration.
The results show that (1) the harvesting season of the grass silage and the time of manure collection were important factors for methane production, with spring feedstocks producing much more than autumn feedstocks, and pulp mill sludge benefitting the most from co-digestion; (2) pulp and paper mill sludge is a suitable co-substrate to add when a high nitrogen content causes impaired biogas production due to ammonia inhibition; (3) the combination of food waste and paper sludge gave a higher methane yield than either of the substrates digested separately; (4) pure microalgae gave the highest methane yield. In conclusion, although pulp and paper mills are an almost untapped resource for biomethane production, their wastewater is a suitable feedstock for such a process. Furthermore, through co-digestion, pulp and paper mill wastewater and mill sludges can aid biogas production from more nutrient-rich waste streams from other industries. Such co-digestion also enhances the soil improvement properties of the residual digestate.
Keywords: anaerobic, biogas, biomethane, paper, sludge, soil
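A first-order way to reason about co-digestion yields is a volatile-solids-weighted average of the single-substrate methane yields. The yield figures below are illustrative assumptions rather than measurements from this study, and real mixtures (such as the food waste plus paper sludge combination here) can exceed this additive estimate through synergy.

```python
# Hedged sketch: additive (no-synergy) estimate of a co-digestion mixture's
# methane yield as the VS-weighted average of single-substrate yields.
# The yield numbers are illustrative assumptions, not data from the study.
def mixture_yield(fractions, yields):
    """fractions: VS mass fractions summing to 1; yields: mL CH4 per g VS."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * y for f, y in zip(fractions, yields))

# e.g. 50/50 paper sludge (assumed 150 mL/g VS) and food waste (assumed 450)
print(mixture_yield([0.5, 0.5], [150.0, 450.0]))  # → 300.0
```

Measured yields above this baseline indicate synergistic co-digestion; yields below it point to inhibition, e.g. by ammonia.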
Procedia PDF Downloads 259
762 Bauhaus Exhibition 1922: New Weapon of Anti-Colonial Resistance in India
Authors: Suneet Jagdev
Abstract:
The development of the original Bauhaus occurred at a time, in the beginning of the 20th century, when the industrialization of Germany had reached a climax. The cities were a reflection of the new living conditions of an industrialized society. The Bauhaus can be interpreted as an ambitious attempt to find appropriate answers to these challenges through architecture, urban development, and design. The core elements of the conviction of the day were the belief in the necessary crossing of boundaries between the various disciplines and the courage to experiment for a better solution. Even after 100 years, the situation in our cities is shaped by similar complexity, and the urban consequences of developments are difficult to estimate and predict. The paper critically reflects on the central aspects of the history of the Bauhaus and its role in bringing modernism to India, through comparative studies of the methodologies adopted by the artists and designers in both countries. The paper discusses in detail how the Bauhaus Exhibition of 1922 offered Indian artists a new weapon of anti-colonial resistance. The original Bauhaus fought its aesthetic and political battles in the context of economic instability and the rise of German fascism. Indians had access to dominant global languages, in particular English; the availability of print media and a vibrant indigenous intellectual culture provided Indian people with a tool to accept technology while denying both its dominant role in culture and the inevitability of only one form of modernism. The indigenous was thus less an engagement with their own culture, as in the West, than a tool of anti-colonial struggle.
We show how Indian people used the Bauhaus as a critique of colonialism itself, through an undermining of its typical modes of representation, as a means of incorporating the Indian desire for spirituality into art, and as providing the cultural basis for a non-materialistic and anti-industrial form of what we might now term development. The paper reflects on how, through painting, the Bauhaus entered the artistic consciousness of the subcontinent not only for its stylistic and technical innovations but as a tool for a critical and even utopian modernism. This modernism could challenge both the hegemony of academic and orientalist art and act as the bearer of a transnational avant-garde, as much political as artistic, and as such the basis of a non-Eurocentric but genuinely cosmopolitan alternative to the hierarchies of oppression and domination that had long bound India and were at that moment rising once again to a tragic crescendo in Europe. We also discuss how the Bauhaus of today can offer an innovative orientation for discourse around architecture and design.
Keywords: anti-colonial struggle, art over architecture, Bauhaus exhibition of 1922, industrialization
Procedia PDF Downloads 259
761 Translation and Validation of the Pain Resilience Scale in a French Population Suffering from Chronic Pain
Authors: Angeliki Gkiouzeli, Christine Rotonda, Elise Eby, Claire Touchet, Marie-Jo Brennstuhl, Cyril Tarquinio
Abstract:
Resilience is a psychological concept of possible relevance to the development and maintenance of chronic pain (CP). It refers to the ability of individuals to maintain reasonably healthy levels of physical and psychological functioning when exposed to an isolated and potentially highly disruptive event. Extensive research in recent years has supported the importance of this concept in the CP literature: increased levels of resilience were associated with lower levels of perceived pain intensity and better mental health outcomes in adults with persistent pain. The ongoing project seeks to introduce the concept of pain-specific resilience into the French literature in order to provide more appropriate measures for assessing and understanding the complexities of CP in the near future. To the best of our knowledge, there is currently no validated version of the pain-specific resilience measure, the Pain Resilience Scale (PRS), for French-speaking populations. The present work aims to address this gap, firstly by performing a linguistic and cultural translation of the scale into French and secondly by studying the internal validity and reliability of the PRS for French CP populations. The forward-translation/back-translation methodology was used to achieve as faithful a cultural and linguistic translation as possible, following the recommendations of the COSMIN (Consensus-based Standards for the selection of health Measurement Instruments) group, and an online survey is currently being conducted among a representative sample of the French population suffering from CP. To date, the survey has involved one hundred respondents, with a total target of around three hundred participants at its completion. We further seek to study the metric properties of the French version of the PRS, ''L'Echelle de Résilience à la Douleur spécifique pour les Douleurs Chroniques'' (ERD-DC), in French patients suffering from CP, assessing the level of pain resilience in the context of CP.
Finally, we will explore the relationship between the level of pain resilience in the context of CP and other variables of interest commonly assessed in pain research and treatment (i.e., general resilience, self-efficacy, pain catastrophising, and quality of life). This study will provide an overview of the methodology used to address our research objectives. We will also present, for the first time, the main findings and further discuss the validity of the scale in the field of CP research and pain management. We hope that this tool will provide a better understanding of how CP-specific resilience processes can influence the development and maintenance of this disease. This could ultimately result in better treatment strategies specifically tailored to individual needs, leading to reduced healthcare costs and improved patient well-being.
Keywords: chronic pain, pain measure, pain resilience, questionnaire adaptation
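Internal consistency of a translated scale such as the ERD-DC is typically quantified with Cronbach's alpha. The sketch below shows the standard computation on made-up item scores; the data are purely illustrative and are not drawn from the ongoing survey.

```python
# Hedged sketch: Cronbach's alpha for internal consistency of a multi-item
# scale. The respondent-by-item score matrix below is hypothetical.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Illustrative 5-respondent, 4-item data (hypothetical 1-5 Likert scores)
scores = [[4, 4, 5, 4], [2, 3, 2, 2], [5, 5, 4, 5], [3, 3, 3, 4], [1, 2, 1, 1]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values above roughly 0.7-0.8 are conventionally taken as acceptable internal consistency for a research instrument.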
Procedia PDF Downloads 90
760 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcement is mainly responsible for the strength and stiffness of the composite, whilst the main role of the epoxy polymer matrix is to distribute the applied load among the fibres as well as to protect them from harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite usually begins with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture which, if it becomes entrapped, will lead to voids in the subsequent cured polymer. Therefore, degassing is normally utilised after mixing, often by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is an additional process stage; if a method of mixing could be found that simultaneously degassed the resin mixture, this would lead to shorter production times, more effective degassing and fewer voids in the final polymer. In this study the effects of four different methods of mixing and degassing the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe.
The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with the ImageJ analysis software, to study morphological changes, void content and void distribution. Three-point bending tests and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the 20 kHz ultrasonic probe gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing the 40 kHz ultrasonic bath was only slightly higher than with magnetic stirring followed by vacuum degassing. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
Keywords: degassing, low frequency ultrasound, polymer composites, voids
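The void content obtained from micrograph analysis with ImageJ essentially amounts to thresholding the image and counting the dark (void) pixels. The sketch below reproduces that idea on a synthetic image; the threshold value and the image itself are illustrative assumptions, not micrographs from the study.

```python
# Hedged sketch: void percentage from a grayscale micrograph by simple
# thresholding, mirroring what an ImageJ particle/area analysis reports.
# The synthetic image and the threshold of 128 are illustrative assumptions.
import numpy as np

def void_fraction(gray_image, threshold):
    """Fraction of pixels darker than `threshold`, interpreted as voids."""
    img = np.asarray(gray_image)
    return (img < threshold).mean()

# Synthetic 8-bit micrograph: bright resin (200) with one dark square void (30)
img = np.full((100, 100), 200, dtype=np.uint8)
img[40:60, 40:60] = 30                       # 20x20 void = 4% of the area
print(f"void content ≈ {100 * void_fraction(img, 128):.1f} %")
```

Real micrographs additionally need illumination correction and a defensible threshold choice before the counts are comparable between mixing methods.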
Procedia PDF Downloads 296
759 Exploratory Study to Obtain a Biolubricant Base from Transesterified Oils of Animal Fats (Tallow)
Authors: Carlos Alfredo Camargo Vila, Fredy Augusto Avellaneda Vargas, Debora Alcida Nabarlatz
Abstract:
Due to the current need to implement environmentally friendly technologies, the possibility of using renewable raw materials to produce bioproducts such as biofuels, or, in this case, biolubricant bases, from residual oils (tallow) originating from the bovine industry has been studied. It is hypothesized that, through the study and control of the operating variables involved in the reverse transesterification method, a high-performance biolubricant base can be obtained on a laboratory scale using animal fats from the bovine industry as raw material, as an alternative for material recovery and environmental benefit. To implement this process, esterification of the crude tallow oil must first be carried out, which lowers the acid value (from > 1 mg KOH/g oil) by means of acid catalysis with sulfuric acid and methanol: 7.5:1 methanol:tallow molar ratio, 1.75% w/w catalyst, at 60 °C for 150 minutes. Once this conditioning is complete, biodiesel is produced from the improved tallow, for which an experimental design for the transesterification method is implemented, evaluating the effects of the process variables, namely the methanol:improved tallow molar ratio and the catalyst percentage (KOH), on the methyl ester content (% FAME). The highest % FAME (92.5%) is obtained with a 7.5:1 methanol:improved tallow ratio and 0.75% catalyst at 60 °C for 120 minutes. Although the % FAME of the biodiesel produced does not make it suitable for commercialization, it does ( > 90%) qualify it as a raw material for obtaining biolubricant bases. Finally, once the biodiesel is obtained, an experimental design is carried out to obtain biolubricant bases using the reverse transesterification method, which allows the study of the effects of the biodiesel:TMP (trimethylolpropane) molar ratio and the percentage of catalyst on viscosity and yield as response variables.
As a result, a biolubricant base is obtained that meets the ISO VG 32 requirements (viscosity and viscosity index; classification for industrial lubricants according to ASTM D 2422) for commercial lubricant bases, using a 4:1 biodiesel:TMP molar ratio and 0.51% catalyst at 120 °C, at a pressure of 50 mbar, for 180 minutes. It should be highlighted that the product obtained consists of two phases, one liquid and one solid, the first being the object of study, while the classification and possible applications of the second remain unknown. It is therefore recommended to carry out more in-depth studies to characterize both phases, as well as to improve the production method by optimizing the process variables and thus achieve superior results.
Keywords: biolubricant base, bovine tallow, renewable resources, reverse transesterification
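The viscosity index behind the ISO VG 32 qualification follows ASTM D2270: VI = 100(L - U)/(L - H), where U is the oil's kinematic viscosity at 40 °C and L, H are reference viscosities tabulated against the oil's 100 °C viscosity. The L and H values in the sketch below are hypothetical placeholders to show the arithmetic, not actual ASTM table entries.

```python
# Hedged sketch: the ASTM D2270 viscosity-index formula for oils with VI <= 100.
# L and H must be read from the ASTM tables for the measured 100 C viscosity;
# the values used here are illustrative placeholders, not real table entries.
def viscosity_index(u_40c, l_ref, h_ref):
    """VI = 100 * (L - U) / (L - H), all viscosities in cSt at 40 C."""
    return 100.0 * (l_ref - u_40c) / (l_ref - h_ref)

# Hypothetical oil: 110 cSt at 40 C, with assumed table values L=180, H=100
print(viscosity_index(110.0, 180.0, 100.0))  # → 87.5
```

A higher VI means the oil's viscosity varies less with temperature, which is why both the viscosity at 40 °C and the index must be checked against the ISO VG 32 limits.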
Procedia PDF Downloads 117
758 Electret: A Solution of Partial Discharge in High Voltage Applications
Authors: Farhina Haque, Chanyeop Park
Abstract:
The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium and high voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling capability, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium and high voltage applications. PD, which occurs actively in voids, triple points, and airgaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, airgaps, triple points, and bubbles are common defects in any medium or high voltage device. These defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high power density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, airgaps, sharp edges, and bubbles, electrets are developed and incorporated into high voltage applications. Electrets are electric-field-emitting dielectric materials that carry embedded electrical charges on the surface and in the bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films using the widely used triode corona discharge method.
To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments is conducted on both charged and uncharged PVDF films under square voltage stimuli that represent a PWM waveform. In addition to single-layer electrets, multiple layers of electrets are also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that an ultimate solution to this decades-long dielectric challenge would be possible with further development of the electret fabrication process.
Keywords: electrets, high power density, partial discharge, triode corona discharge
Procedia PDF Downloads 203
757 Pressure-Robust Approximation for the Rotational Fluid Flow Problems
Authors: Medine Demir, Volker John
Abstract:
Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows considered in ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and the centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using the classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. However, this approach might come with a modification of the meshes, like the use of barycentric-refined grids in case of Scott--Vogelius pairs. However, this strategy requires the finite element code to have control on the mesh generator which is not realistic in many engineering applications and might also be in conflict with the solver for the linear system. 
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be carried over to other types of flow problems, which is left for future study. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott–Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces
Procedia PDF Downloads 67
756 The Outcome of Early Balance Exercises and Agility Training in Sports Rehabilitation for Patients Post Anterior Cruciate Ligament (ACL) Reconstruction
Authors: S. M. A. Ismail, M. I. Ibrahim, H. Masdar, F. M. Effendi, M. F. Suhaimi, A. Suun
Abstract:
Introduction: It is generally known that the rehabilitation process is as important as the reconstruction surgery. Several studies have focused on how early the rehabilitation modalities can be initiated after the surgery to ensure a safe return of patients to sports, or at least regaining the pre-injury level of function, following an ACL reconstruction. Objectives: The main objective is to study and evaluate the outcome of early balance exercises and agility training in sports rehabilitation for patients post ACL reconstruction, and to compare an intervention with a control (material versus non-material): patients were recruited either for material exercise (balance exercises and agility training with strengthening) or for a strengthening-only rehabilitation protocol (non-material), following a prospective intervention trial design. Materials and Methods: Post-operative ACL reconstruction patients operated on in Selayang and Sg Buloh Hospitals from 2012 to 2014 were selected for this study. They were taken from the Malaysian Knee Ligament Registry (MKLR), and all patients had single bundle reconstruction with autograft hamstring tendon (semitendinosus and gracilis). ACL injuries from any type of sport were included. Subjects performed various types of rehabilitation activity over 18 weekly sessions; all subjects attended all 18 sessions, and evaluation was done during the first, 9th and 18th sessions. The evaluation format was based on clinical assessment (anterior drawer, Lachmann, pivot shift, laxity with rolimeter, the end point and thigh circumference) and scoring (Lysholm Knee scoring and Tegner Activity Level scale). The rehabilitation protocol was initiated 24 weeks after the surgery.
Results and Discussion: 100 patients were selected, of which 94 were male and 6 female. Ages ranged from 18 to 54 years, with an average of 28 years. All patients were evaluated from 24 weeks after the surgery. 50 of them were recruited for material exercise (balance exercises and agility training with strengthening) and 50 for the strengthening-only rehabilitation protocol (non-material). Demographically, 85% suffered sports injuries, mainly from futsal and football. 39% of them had an abnormal BMI (26-38), and injuries mostly involved the left knee. All patients had a basic radiographic x-ray of the knee, and 98% had MRI. All patients had negative anterior drawer, Lachman and pivot shift tests after ACL reconstruction and completion of rehabilitation. There were 95 subjects who sustained a grade I injury, 5 grade II and none grade III, with 90% of them having a soft end-point. Overall, they scored badly on presentation, with a Lysholm score of 53% (poor) and a Tegner activity score of level 3/10. After completing 9 weeks of exercises, 90% of the material group had grade I laxity, 75% a firm end-point, a Lysholm score of 71% (fair) and Tegner activity level 5/10, compared with the non-material group, which had 62% grade I laxity, 54% firm end-point, a Lysholm score of 62% (poor) and Tegner activity level 4/10. After completing 18 weeks of exercises, the material group maintained 90% grade I laxity with 100% firm end-point, the Lysholm score increased to 91% (excellent) and Tegner activity level to 7/10, compared with the non-material group, which had 69% grade I laxity but maintained 54% firm end-point, a Lysholm score of 76% (fair) and Tegner activity level 5/10.
These results showed that improvement was achieved faster in the material group, which reached a satisfactory level after the 9th session of exercises at 75% (15/20), compared with the non-material group, which achieved only 54% (7/13) after completing the 18th session. Most of them were grade I. These concepts are consolidated into our approach to prepare patients for return to play, including field testing and maintenance training. Conclusions: The basic approach in ACL rehabilitation is to ensure return to sports at post-operative 6 months. Grade I and II laxity has a favourable and early satisfactory outcome based on clinical assessment and the Lysholm and Tegner scoring points. Reduction of laxity grading indicates a satisfactory outcome. A firm end-point shows the adequacy of rehabilitation before resuming previous sports. Material exercise (balance exercises and agility training with strengthening) was beneficial and reliable in achieving a favourable and early satisfactory outcome compared with strengthening only (non-material). We have identified that rehabilitation protocols vary between different patients. Therefore, future post ACL reconstruction rehabilitation guidelines should focus on rehabilitation techniques instead of time.
Keywords: post anterior cruciate ligament (ACL) reconstruction, single bundle, hamstring tendon, sports rehabilitation, balance exercises, agility balance
Procedia PDF Downloads 255
755 Formation of the Water Assisted Supramolecular Assembly in the Transition Structure of Organocatalytic Asymmetric Aldol Reaction: A DFT Study
Authors: Kuheli Chakrabarty, Animesh Ghosh, Atanu Roy, Gourab Kanti Das
Abstract:
The aldol reaction is an important class of carbon-carbon bond forming reactions. One popular way to impose asymmetry in the aldol reaction is the introduction of a chiral auxiliary that binds the approaching reactants and creates dissymmetry in the reaction environment, which finally evolves into enantiomeric excess in the aldol products. The last decade has witnessed the use of natural amino acids as chiral auxiliaries to control the stereoselectivity of various carbon-carbon bond forming processes. In this context, L-proline was found to be an effective organocatalyst in asymmetric aldol additions. In the last few decades, the use of water as a solvent or co-solvent in asymmetric organocatalytic reactions has increased sharply. Simple amino acids like L-proline do not catalyze the asymmetric aldol reaction in aqueous medium; moreover, in organic solvent media a high catalyst loading (~30 mol%) is required to achieve moderate to high asymmetric induction. In this context, huge efforts have been made to modify L-proline and 4-hydroxy-L-proline to prepare organocatalysts for aqueous medium asymmetric aldol reactions. Here, we report the results of our DFT calculations on the asymmetric aldol reaction of benzaldehyde, p-NO2-benzaldehyde and t-butyraldehyde with a number of ketones using L-proline hydrazide as organocatalyst under wet, solvent-free conditions. The Gaussian 09 program package and the GaussView program were used for the present work. Geometry optimizations were performed using the B3LYP hybrid functional and the 6-31G(d,p) basis set. Transition structures were confirmed by Hessian and IRC calculations. As the reactions were carried out under solvent-free conditions, no bulk solvent effects were studied theoretically. The present study has revealed, for the first time, the direct involvement of two water molecules in the aldol transition structures.
In the TS, the enamine and the aldehyde are connected through hydrogen bonding with the assistance of two intervening water molecules, forming a supramolecular network. Formation of this type of supramolecular assembly is possible due to the presence of a protonated -NH2 group in the L-proline hydrazide moiety, which is responsible for the favorable entropy contribution to the aldol reaction. It is also revealed from the present study that the water-assisted TS is energetically more favorable than a TS without involvement of any water molecule. It can be concluded from this study that insertion of a polar group capable of hydrogen bond formation into the L-proline skeleton can lead to a favorable aldol reaction with significantly high enantiomeric excess under wet, solvent-free conditions by reducing the activation barrier of this reaction.
Keywords: aldol reaction, DFT, organocatalysis, transition structure
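As an illustrative sketch of the computational protocol described above (the abstract does not reproduce its input files, so the checkpoint file name, memory setting, and coordinate placeholder here are hypothetical), a Gaussian 09 job for locating and verifying such a transition structure at the B3LYP/6-31G(d,p) level could look like:

```
%chk=aldol_ts.chk
%mem=4GB
#P B3LYP/6-31G(d,p) Opt=(TS,CalcFC,NoEigenTest) Freq

L-proline hydrazide aldol TS guess (hypothetical example)

0 1
<Cartesian coordinates of the TS guess, including the two water molecules>
```

The Freq keyword performs the Hessian calculation that confirms the TS (a single imaginary frequency), and a follow-up job with IRC=(CalcFC) traces the reaction path toward reactants and products, matching the verification steps mentioned in the abstract.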
Procedia PDF Downloads 435
754 Current Status of Scaled-Up Synthesis/Purification and Characterization of a Potentially Translatable Tantalum Oxide Nanoparticle Intravenous CT Contrast Agent
Authors: John T. Leman, James Gibson, Peter J. Bonitatibus
Abstract:
There have been no potentially clinically translatable developments in intravenous CT contrast materials for decades, and iodinated contrast agents (ICA) remain the only FDA-approved media for CT. Small-molecule ICA used to highlight vascular anatomy have weak CT signals in large-to-obese patients due to their rapid redistribution from plasma into interstitial fluid, thereby diluting their intravascular concentration, and because of a mismatch between iodine’s K-edge and the high kVp settings needed to image this patient population. The use of ICA is also contraindicated in a growing population of renally impaired patients who are hypersensitive to these contrast agents; a transformative intravenous contrast agent with improved capabilities is urgently needed. Tantalum oxide nanoparticles (TaO NPs) with zwitterionic siloxane polymer coatings have high potential as clinically translatable general-purpose CT contrast agents because of (1) substantially improved imaging efficacy compared to ICA in swine/phantoms emulating medium-sized and larger adult abdomens and superior contrast enhancement of thoracic arteries and veins in rabbit, (2) promising biological safety profiles showing near-complete renal clearance and low tissue retention at 3x the anticipated clinical dose (ACD), and (3) clinically acceptable physicochemical parameters as concentrated bulk solutions (250-300 mgTa/mL). Here, we review requirements for general-purpose intravenous CT contrast agents in terms of patient safety, X-ray attenuating properties and contrast-producing capabilities, and physicochemical and pharmacokinetic properties. We report the current status of a TaO NP-based contrast agent, including chemical process technology developments and results of newly defined scaled-up processes for NP synthesis and purification, yielding reproducible formulations with appropriate size and concentration specifications.
We discuss results of recent pre-clinical studies: in vitro immunology, non-GLP high-dose tolerability in rats (10x ACD), non-GLP long-term biodistribution in rats at 3x ACD, and non-GLP repeat dosing in rats at ACD. We also include a discussion of NP characterization, in particular size-stability testing results under accelerated conditions (37 °C), and insights into TaO NP purity, surface structure, and bonding of the zwitterionic siloxane polymer coating by multinuclear (1H, 13C, 29Si) and multidimensional (2D) solution NMR spectroscopy.
Keywords: nanoparticle, imaging, diagnostic, process technology, nanoparticle characterization
Procedia PDF Downloads 37
753 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census
Authors: Jaroslav Kraus
Abstract:
Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together these create more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decrease in fertility and increase in divorce, but also from the possibility of separate living. There are regions in the Czech Republic with traditional demographic behavior, and regions, like the capital Prague and some others, with a changing pattern. The population census is based, according to international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) pattern analysis as a starting point for more in-depth analyses (geospatial regression) in the future. For the analysis of this type of data, the numbers of households by type are the distinct objects; all events in a meaningfully delimited study region (e.g. municipalities) will be included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at NUTS3 level) and of the median center; standard distance, weighted standard distance and standard deviational ellipses will also be used.
Identifying that clustering exists in census household datasets does not by itself provide a detailed picture of the nature and pattern of clustering, but it will be helpful to apply simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran’s I, which will be applied to municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and can be applied to develop localized variants of almost any standard summary statistic. Local Moran’s I will give an indication of the homogeneity and diversity of household data at the municipal level.
Keywords: census, geo-demography, households, the Czech Republic
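Global Moran’s I, mentioned above, compares each unit’s deviation from the mean with the deviations of its neighbours: I = (N/W) · Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². A minimal sketch is given below, assuming binary rook-contiguity weights on a regular lattice; the abstract does not specify the weight matrix used for the Czech municipalities, so this choice is illustrative only:

```python
import numpy as np

def morans_i(grid):
    """Global Moran's I on a 2-D lattice with rook (shared-edge) contiguity.

    grid : 2-D array of the attribute, e.g. the share of single-person
    households per areal unit.  Weights are binary: w_ij = 1 for
    edge-adjacent cells, 0 otherwise.
    """
    x = np.asarray(grid, dtype=float)
    n_rows, n_cols = x.shape
    d = x - x.mean()              # deviations from the global mean
    num = 0.0                     # sum of w_ij * d_i * d_j
    w_sum = 0.0                   # total weight W
    for i in range(n_rows):
        for j in range(n_cols):
            for di, dj in ((1, 0), (0, 1)):   # right and down neighbours
                ni, nj = i + di, j + dj
                if ni < n_rows and nj < n_cols:
                    # each undirected edge counted twice (w_ij = w_ji = 1)
                    num += 2.0 * d[i, j] * d[ni, nj]
                    w_sum += 2.0
    n = x.size
    return (n / w_sum) * (num / (d ** 2).sum())
```

A smooth gradient of values yields a positive I (similar values cluster), a checkerboard yields a negative I (dispersion), and values near zero indicate spatial randomness; for irregular municipal polygons one would build the contiguity weights from shared boundaries instead of a lattice.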
Procedia PDF Downloads 96
752 Effect of Pollutions on Mangrove Forests of Nayband National Marine Park
Authors: Esmaeil Kouhgardi, Elaheh Shakerdargah
Abstract:
The mangrove ecosystem is a complex of various inter-related elements in the land-sea interface zone which is linked with other natural systems of the coastal region such as corals, sea-grass, coastal fisheries and beach vegetation. The mangrove ecosystem consists of water, muddy soil, trees, shrubs, and their associated flora, fauna and microbes. It is a very productive ecosystem sustaining various forms of life. Its waters are nursery grounds for fish, crustaceans and mollusks and also provide habitat for a wide range of aquatic life, while the land supports a rich and diverse flora and fauna; pollution, however, may affect these characteristics. Although Iran has the lowest share of Persian Gulf pollution among the eight littoral states, environmental experts are still deeply concerned about the serious consequences of the pollution in the oil-rich gulf. Prolongation of critical conditions in the Persian Gulf has endangered its aquatic ecosystem. Water purification equipment, refineries, wastewater emitted by onshore installations, especially petrochemical plants, urban sewage, population density and the extensive oil operations of Arab states are factors contaminating Persian Gulf waters. Population density has been the major cause of pollution and environmental degradation in the Persian Gulf. The Persian Gulf is a closed marine environment which is connected to open waters through only one waterway. It usually takes between three and four years for the gulf's water to be completely replaced. Therefore, any pollution entering the water will remain there for a relatively long time. Presently, the high temperature and excessive salt level in the water have exposed marine creatures to extra threats, which means they have to survive very tough conditions.
The natural environment of the Persian Gulf is very rich, with good fishing grounds, extensive coral reefs and pearl oysters in abundance, but it has come under increasing pressure due to heavy industrialization and, in particular, the repeated major oil spillages associated with the various recent wars fought in the region. Pollution may cause the mortality of mangrove forests by affecting the roots, leaves and soil of the area. The study showed a high correlation between industrial pollution and mangrove forest health in the south of Iran, and that the increase in population, coupled with economic growth, inevitably caused the use of mangrove lands for various purposes such as the construction of roads, ports and harbors, industries and urbanization.
Keywords: mangrove forest, pollution, Persian Gulf, population, environment
Procedia PDF Downloads 399
751 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while a related theoretical description was only given in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulations which allow determining these parameters. It is shown that these parameters can be deduced from data on the space distributions of diffusant concentrations and average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6x6x6 mm3) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h).
The ion exchange processing resulted in glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find profiles of diffusant concentrations and of the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all the above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
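The abstract does not reproduce the model equations, so the following is only a hypothetical, dimensionless sketch of how the two fitted parameters might enter such a simulation: the diffusant enters from the surface by ordinary diffusion, and grains dissolve wherever its concentration exceeds a solubility threshold β, at a rate scaled by γ (the dissolution-to-diffusion time ratio). The function name and the specific rate law are illustrative assumptions, not the authors' published model:

```python
import numpy as np

def decrystallization_profile(beta=0.1, gamma=1.0, n_x=100, n_t=2000):
    """Toy 1-D model of ion-exchange-induced decrystallization.

    Dimensionless equations (illustrative only):
      dc/dt = d2c/dx2                         diffusant, c = 1 at surface x = 0
      dr/dt = -gamma * max(c - beta, 0) * r   grains shrink where c > beta
    Returns the diffusant profile c(x) and grain-size profile r(x).
    """
    dx = 1.0 / n_x
    dt = 0.4 * dx * dx                    # below the explicit stability limit
    c = np.zeros(n_x + 1)
    c[0] = 1.0                            # fixed surface concentration
    r = np.ones(n_x + 1)                  # normalized average grain radius
    for _ in range(n_t):
        c[1:-1] += dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0] = 1.0                        # Dirichlet condition at the surface
        c[-1] = c[-2]                     # no-flux condition on the far side
        r -= dt * gamma * np.maximum(c - beta, 0.0) * r
    return c, r
```

Running the sketch shows grains shrinking near the surface, where the diffusant exceeds β, while remaining untouched in the bulk, qualitatively reproducing the vitrified subsurface layer described above; fitting β and γ against measured concentration and grain-size profiles at several temperatures would then follow the procedure the authors outline.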
Procedia PDF Downloads 269
750 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme
Authors: Eleanor Nel
Abstract:
Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period.
To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve its impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results from publication writing programmes more effectively to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.
Keywords: evaluation, framework, impact, research output
Procedia PDF Downloads 76
749 Exploring the Potential of PVDF/CCB Composites Filaments as Potential Materials in Energy Harvesting Applications
Authors: Fawad Ali, Mohammad Albakri
Abstract:
The increasing demand for advanced multifunctional materials has led to significant research in polymer composites, particularly polyvinylidene fluoride (PVDF) and conducting carbon black (CCB) composites. This paper explores the development and application of PVDF/CCB conducting electrodes for energy harvesting applications. PVDF is renowned for its piezoelectric response, chemical resistance, thermal stability, and mechanical strength, making it an ideal matrix for composite materials in demanding environments. When combined with CCB, known for its excellent electrical conductivity, the resulting composite electrodes not only retain the advantageous properties of PVDF but also gain enhanced electrical conductivity. This synergy makes PVDF/CCB composites suitable for energy-harvesting devices that require both durability and electrical functionality. These electrodes can be used in sensors, actuators, and flexible electronics where efficient energy conversion is critical. The study provides a comprehensive overview of PVDF/CCB conducting electrodes, from synthesis and characterization to practical applications, and discusses challenges in optimizing these materials for industrial use and future development. This research aims to contribute to the understanding of conductive polymer composites and their potential in advancing sustainable energy technologies.
Keywords: additive manufacturing, polyvinylidene fluoride (PVDF), conducting polymer composite, energy harvesting, materials characterization
Procedia PDF Downloads 19