Search results for: compression and expansion waves
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2807

227 Investigating the Governance of Engineering Services in the Aerospace and Automotive Industries

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration with suppliers is key to the development of process innovations. Access to resources and expertise that are not available within the business, obtaining a cost advantage, or reducing the time needed to carry out innovation are some of the benefits associated with the process. However, the success of this collaborative process is compromised when clear rules governing the relationship have not been established from the beginning. Abundant studies in the field of innovation emphasize the strategic importance of the concept of "governance". Despite this, few papers have analyzed how the governance of the relationship must be designed and managed to ensure the success of the cooperation process. The lack of literature in this area reflects the wide diversity of contexts in which collaborative innovation takes place. Thus, in sectors such as the car industry, there is a strong collaborative tradition between manufacturers and the suppliers that form part of the value chain. In this case, it is common to establish mechanisms and procedures that fix formal and clear objectives to regulate the relationship and establish the rights and obligations of each of the parties involved. By contrast, in other sectors, collaborative relationships to innovate are not a common way of working, particularly when their aim is the development of process improvements. It is in this case that the lack of mechanisms to establish and regulate the behavior of those involved can give rise to conflicts and the failure of the cooperative relationship. Because of this, the present paper analyzes the similarities and differences in the governance of collaboration with engineering R&D service providers in the European aerospace industry. 
With these ideas in mind, the aims of this research are twofold: to understand the importance of governance as a key element in the success of cooperation in the development of process innovations, and to establish the mechanisms and procedures that ensure the proper management of cooperation processes. Following a case-study methodology, we analyze the way in which manufacturers and suppliers cooperate in the development of new processes in two industries with different levels of technological intensity and collaborative tradition: automotive and aerospace. Identifying the elements that play a key role in establishing successful governance and relationship management, and comprehending the mechanisms of regulation and control in place in the automotive sector, can be used to propose solutions to some of the conflicts that currently arise in the aerospace industry. The paper concludes by analyzing the strategic implications that the adoption of some practices traditionally used in other industrial sectors entails for the aerospace industry. Finally, it is important to highlight that this paper presents the first results of a research project currently in progress, describing a model of governance that explains how to manage engineering services outsourced to suppliers in the European aerospace industry, through the analysis of companies in the sector located in Germany, France and Spain.

Keywords: innovation management, innovation governance, managing collaborative innovation, process innovation

Procedia PDF Downloads 298
226 Experimental Analysis of the Performance of a System for Freezing Fish Products Equipped with a Modulating Vapour Injection Scroll Compressor

Authors: Domenico Panno, Antonino D’amico, Hamed Jafargholi

Abstract:

This paper presents an experimental analysis of the performance of a system for freezing fish products equipped with a modulating vapour injection scroll compressor operating with R448A refrigerant. Freezing is a critical process for the preservation of seafood products, as it influences quality, food safety, and environmental sustainability. The use of a modulating scroll compressor with vapour injection, combined with the R448A refrigerant, is proposed as a solution to optimize the performance of the system, reducing energy consumption and mitigating the environmental impact. The vapour injection modulating scroll compressor is an advanced technology that allows the compressor capacity to be adjusted to the actual cooling needs of the system. Vapour injection allows the optimization of the refrigeration cycle, reducing the evaporation temperature and improving the overall efficiency of the system. The use of R448A refrigerant, with a low Global Warming Potential (GWP), is part of an environmental sustainability perspective, helping to reduce the climate impact of the system. The aim of this research was to evaluate the performance of the system through a series of experiments conducted on a pilot plant for the freezing of fish products. Several operational variables were monitored and recorded, including evaporation temperature, condensation temperature, energy consumption, and freezing time of seafood products. The results of the experimental analysis highlighted the benefits deriving from the use of the modulating vapour injection scroll compressor with the R448A refrigerant. In particular, a significant reduction in energy consumption was recorded compared to conventional systems. The modulating capacity of the compressor made it possible to adapt cold production to variations in the thermal load, ensuring optimal operation of the system and reducing energy waste. 
Furthermore, the use of an electronic expansion valve highlighted greater precision in the control of the evaporation temperature, with minimal deviation from the desired set point. This helped ensure better quality of the final product, reducing the risk of damage due to temperature changes and ensuring uniform freezing of the fish products. The freezing time of seafood has been significantly reduced thanks to the configuration of the entire system, allowing for faster production and greater production capacity of the plant. In conclusion, the use of a modulating vapour injection scroll compressor operating with R448A has proven effective in improving the performance of a system for freezing fish products. This technology offers an optimal balance between energy efficiency, temperature control, and environmental sustainability, making it an advantageous choice for food industries.
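The performance comparison described above can be made concrete with the system's headline metric, the Energy Efficiency Ratio (EER, listed in the keywords). The figures below are hypothetical, chosen only to illustrate how a reduction in compressor power input translates into EER and energy-saving numbers; they are not measurements from the paper's pilot plant.

```python
def eer(cooling_capacity_w: float, power_input_w: float) -> float:
    """Energy Efficiency Ratio: useful cooling delivered per unit of
    electrical power absorbed by the compressor."""
    return cooling_capacity_w / power_input_w

# Hypothetical readings: same cooling duty, lower power draw with
# modulating vapour injection than with a fixed-capacity baseline.
baseline_eer = eer(cooling_capacity_w=35_000, power_input_w=14_000)
injection_eer = eer(cooling_capacity_w=35_000, power_input_w=12_500)
energy_saving_pct = (1 - 12_500 / 14_000) * 100
```

With these assumed numbers, the EER rises from 2.5 to 2.8 and the power draw falls by roughly 11%; the experimental point of the abstract is that modulation lets the compressor track the thermal load so that such savings hold across operating conditions.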

Keywords: scroll compressor, vapor injection, refrigeration system, EER

Procedia PDF Downloads 43
225 Open Theism in Confinement: A Conversation between Open and Confined Views of God

Authors: Charles Atkins

Abstract:

Anakainosis-desmios is the experience of spiritual renewal during incarceration. "Anakainosis" is a Greek word for "renovation or renewal" that has taken on profound meaning in Christocentric theology, where it is defined as the phenomenon of spiritual renewal, or a change of heart, that is achieved by God's power. "Desmios" is another Greek word found in the Bible, which stands for "one who is bound, or a prisoner." Anakainosis-desmios occurs when a person, while residing in an environment of surveillance and coercion, has his consciousness renewed in such a way that he generates unexpected emancipatory and hospitable attitudes. Those who experience it express an awareness of the prison environment and a willingness to engage that environment through their transformed relationships with time, space, matter, and people. By the end of the 20th century, Open Theism gained the attention of many American evangelicals and theologians. Open Theism was born out of the concerns people had about those scriptures which demonstrate a dynamic God who has unparalleled wisdom instead of omniscience, liberating power instead of omnipotence, and abiding faithfulness instead of immutability, all of these attributes being aspects of God's love for humanity. Scriptural exegesis is one of the primary factors that informed the creation of the open view of God, and many who hold this view claim that the divine attributes of omniscience, omnipotence and immutability are not necessarily Scriptural but rather philosophical attempts to define the nature of God. Scriptures that do not support such divine attributes have been a source of distress for many. Some would say that open theists have created lenses that enable a Bible student to gain comfort from those scriptures which seem to show God demonstrating repentance, disappointment and a readiness to learn. This paper will bring Open Theism into conversation with anakainosis-desmios. 
For open theists, the reading of Scripture is an important part of the foundation of their perspectives. Open theists focus on certain Scriptures which demonstrate God showing repentance, disappointment and a readiness to learn. This focus led to their questioning of the systematic theologies that have been created and the biblical hermeneutics that have been used historically as lenses for interpreting such Scriptures. The perspective of anakainosis-desmios is also significantly influenced by the reading of Scripture. Spiritual renewal while incarcerated can occur largely through the religious practice of Bible study. Studying Scriptures during incarceration has supported many people who are seeking to develop new renderings of reality that empower them to flourish in some way despite the hostile environment of prisons. A conversation between the two points of view on the God of the Bible will lead to an expansion of both and to a deepening of a person's experience of Scripture study.

Keywords: open theism, anakainosis-desmios, religion in prison, open theology, practical theology, Bible, scripture, openness of God, incarceration, prison

Procedia PDF Downloads 64
224 Low- and High-Temperature Methods of CNTs Synthesis for Medicine

Authors: Grzegorz Raniszewski, Zbigniew Kolacinski, Lukasz Szymanski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza

Abstract:

One of the most promising areas for carbon nanotube (CNT) application is medicine, and one of the most devastating diseases is cancer. Carbon nanotubes may be used as carriers of a slowly released drug, and it is possible to use electromagnetic waves to destroy cancer cells via CNTs. In our research we focused on thermal ablation by ferromagnetic carbon nanotubes (Fe-CNTs). In cancer cell hyperthermia, functionalized carbon nanotubes are exposed to a radio-frequency electromagnetic field. Properly functionalized Fe-CNTs attach to the cancer cells. Heat generated in the nanoparticles connected to the nanotubes warms up the nanotubes and then the target tissue. When the temperature in the tumor tissue exceeds 316 K, necrosis of the cancer cells may be observed. Several techniques can be used for Fe-CNT synthesis. In our work, we use high-temperature methods where an arc discharge is applied; the low-temperature systems are microwave plasma-assisted chemical vapor deposition (MPCVD) and hybrid physical-chemical vapor deposition (HPCVD). In the arc-discharge system, the plasma reactor works at a He pressure of up to 0.5 atm. The electric arc burns between two graphite rods. Carbon vapors move from the anode through a short arc column and form CNTs, which can be collected either from the reactor walls or from the cathode deposit. This method is suitable for the production of multi-wall and single-wall CNTs. Disadvantages of high-temperature methods are low purity, short length, random size and multi-directional distribution. In the MPCVD system, plasma is generated in a waveguide connected to the microwave generator. The plasma flux, containing carbon and ferromagnetic elements, then flows into the quartz tube. Additional resistance heating can be applied to increase the reaction effectiveness and efficiency. CNT nucleation occurs on the quartz tube walls. It is also possible to use substrates to improve carbon nanotube growth. 
The HPCVD system involves both chemical decomposition of carbon-containing gases and vaporization of a solid or liquid source of catalyst. In this system, a tube furnace is applied. A mixture of working and carbon-containing gases passes through a quartz tube placed inside the furnace. Ferrocene vapors can be used as a catalyst. Fe-CNTs may then be collected either from the quartz tube walls or from the substrates. Low-temperature methods are characterized by a higher-purity product. Moreover, carbon nanotubes from the tested CVD systems were partially filled with iron. Regardless of the method of Fe-CNT synthesis, the final product always needs to be purified for applications in medicine. The simplest method of purification is oxidation of the amorphous carbon. Carbon nanotubes dedicated to cancer cell thermal ablation additionally need to be treated with acids to amplify defects on the CNT surface, which facilitates biofunctionalization. The application of ferromagnetic nanotubes for cancer treatment is a promising method of fighting cancer in the next decade. Acknowledgment: The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.

Keywords: arc discharge, cancer, carbon nanotubes, CVD, thermal ablation

Procedia PDF Downloads 448
223 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering

Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher

Abstract:

Land-cover and land-use change information is important because of its practical uses in various applications, including deforestation, damage assessment, disaster monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detecting change through optical remote sensing images is not a trivial task, due to many factors including the vagueness of the boundaries between changed and unchanged regions and the spatial dependence of pixels on their neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most optical remote sensing images, the transition between the two clusters (change and no change) is overlapping, and existing methods are incapable of providing accurate cluster boundaries. In this regard, a methodology is proposed which uses fuzzy c-means clustering to tackle the vagueness between the changed and unchanged classes by formulating soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, input patterns are generated for each pixel from the bi-temporal images using 3×3, 5×5 and 7×7 windows. The between-image and within-image spatial dependence of pixels on their neighborhood is quantified using the Pearson product-moment correlation and Moran's I statistic, respectively. The proposed technique consists of two phases. First, between-image and within-image spatial correlation is calculated to utilize the information that pixels at different locations may not be independent. Second, fuzzy c-means clustering is used to produce two clusters from the input features, not only handling the vagueness between the changed and unchanged classes but also exploiting the spatial correlation of the pixels. 
To show the effectiveness of the proposed technique, experiments are conducted on multispectral, bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 Thematic Mapper optical image of Los Angeles, California, is used in this study, covering a long forest fire that continued from July until October 2009. The early and late forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on the earth's surface) and is compared with the existing K-means clustering technique. Experimental results show that the proposed technique performs better than the existing one. The proposed technique is easily extendable to optical hyperspectral images and is suitable for many practical applications.
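The clustering step described above can be sketched with a minimal fuzzy c-means implementation. This is not the authors' full method (it omits the correlation-based neighborhood features and runs on a plain 1-D change-magnitude feature); it only illustrates how soft memberships separate "change" from "no change" pixels. All data and parameter values are illustrative.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means for a 1-D feature (e.g. per-pixel change
    magnitude). Returns the membership matrix and the cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m                               # fuzzified memberships
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12  # distance of each pixel to each center
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return u, centers

# Synthetic change magnitudes: 50 "unchanged" pixels near 0.1 and
# 50 "changed" pixels near 0.9; the soft boundary lies between them.
rng = np.random.default_rng(1)
x = np.r_[rng.normal(0.1, 0.02, 50), rng.normal(0.9, 0.02, 50)]
u, centers = fuzzy_c_means(x)
labels = u.argmax(axis=1)            # hard labels for the final binary change map
```

Pixels near the overlap region receive memberships close to 0.5 in both clusters, which is exactly the "soft boundary" property the paper exploits; the hard K-means baseline would assign such pixels with full confidence.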

Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing

Procedia PDF Downloads 168
222 Possibilities and Limits for the Development of Care in Primary Health Care in Brazil

Authors: Ivonete Teresinha Schulter Buss Heidemann, Michelle Kuntz Durand, Aline Megumi Arakawa-Belaunde, Sandra Mara Corrêa, Leandro Martins Costa Do Araujo, Kamila Soares Maciel

Abstract:

Primary Health Care is defined as the level of a system of services that enables answers to health needs to be achieved. This level of care produces services and actions of attention to the person across the life cycle and in their health conditions or diseases. Primary Health Care refers to a conception of the care model and organization of the health system that, in Brazil, seeks to reorganize the principles of the Unified Health System. This system is based on the principle of health as a citizen's right and a duty of the State. Primary health care has family health as a priority strategy for its organization, according to the precepts of the Unified Health System, structured in the logic of new sectoral practices associating clinical work and health promotion. Thus, this study seeks to know the possibilities and limits of the care developed by professionals working in Primary Health Care. It was conducted with a qualitative, participant-action approach based on Paulo Freire's Research Itinerary, which comprises three moments: Thematic Investigation; Encoding and Decoding; and Critical Unveiling. The themes were investigated in a health unit through the development of a culture circle with 20 professionals from a municipality in southern Brazil, in the first half of 2021. The participants revealed as possibilities the involvement, bonding and strengthening of the interpersonal relationships of the professionals who work in the context of primary care. Promoting welcoming in primary care has favoured care and teamwork, as well as improved access. They also highlighted that care planning, the use of technologies in the communication process and the orientation of the population enhance problem-solving capacity and the organization of services. As limits, the lack of professional recognition and the scarcity of material and human resources were revealed, conditions that generate tensions in health care. 
The reduction in the number of professionals and low salaries are pointed out as elements that undermine the motivation of the health team in carrying out its work. The participants revealed that, due to COVID-19, the flow of care prioritized the pandemic situation, which affected health care in primary care, and prevention and health promotion actions were canceled. The study demonstrated that empowerment and professional involvement are fundamental to promoting comprehensive and problem-solving care. However, limits are observed when the teams exercise their activities; these are related to the lack of human and material resources, and the expansion of public health policies is urgent.

Keywords: health promotion, primary health care, health professionals, welcoming

Procedia PDF Downloads 97
221 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has fatter tails than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations, obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases. 
Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed and, hence, the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
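The efficiency argument can be illustrated numerically. The sketch below is not the authors' modified maximum likelihood estimator; it is a generic simulation, under assumed Student-t(3) errors, showing that a simple robust location estimator (a trimmed mean, standing in for the MML idea of downweighting extreme order statistics) has lower variance than the least-squares-type estimator (the sample mean) when tails are fat.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 50, 2000
# Fat-tailed "errors": Student's t with 3 degrees of freedom, true location 0.
samples = rng.standard_t(df=3, size=(reps, n))

# Least-squares-type estimator of location: the sample mean.
mean_est = samples.mean(axis=1)
# Robust alternative: 10%-trimmed mean (drop the 5 lowest and 5 highest of 50).
trimmed_est = np.sort(samples, axis=1)[:, 5:-5].mean(axis=1)

# Monte Carlo variances of the two estimators across the 2000 replications.
var_mean = mean_est.var()
var_trimmed = trimmed_est.var()
```

Under normal errors the sample mean would win; under t(3) errors the occasional extreme draws inflate its variance, and the trimmed estimator is markedly more efficient, which is the qualitative point the abstract makes for MML versus least squares.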

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 396
220 Public-Private Partnership for Critical Infrastructure Resilience

Authors: Anjula Negi, D. T. V. Raghu Ramaswamy, Rajneesh Sareen

Abstract:

Road infrastructure is emphatically one of the most critical infrastructures for the Indian economy. The country's road network of around 3.3 million km is the second largest in the world. Nationwide statistics released by the Ministry of Road Transport and Highways reveal that an accident happens every minute and a death occurs every 3.7 minutes. This scale is a matter of grave concern for safety and economically represents a national loss of 3% of GDP. The Union Budget 2016-17 allocated USD 12 billion annually for the development and strengthening of roads, an increase of 56% over the previous year, highlighting the importance of roads as critical infrastructure. National highways alone represent only 1.7% of total road linkages but carry over 40% of traffic. Further, trends analysed from 2002-2011 on national highways indicate that, in less than a decade, a 22% increase in accidents was reported but a 68% increase in deaths. The paramount inference is that accident severity has increased with time. Over these years, many measures have been taken to increase road safety, lessen damage to physical assets, and reduce vulnerabilities, building up towards resilient road infrastructure. In the context of the national highway development program, policy makers proposed implementing around 20% of such road length on a PPP mode. These roads were taken up on high-density traffic considerations and for qualitative implementation. In order to understand the resilience impacts and safety parameters enshrined in the various PPP concession agreements executed with private sector partners, such highway-specific projects would be appraised. This research paper attempts to assess the safety measures taken and the possible reasons behind the increase in accident severity through these PPP case study projects. 
It delves further into safety features to understand the policy measures adopted in these cases and to introspect on the reasons for severity: whether it is an outcome of increased speeds, faulty road design and geometrics, driver negligence, or a lack of lane discipline at increased speeds. The assessment exercise would study these aspects in pre-PPP and post-PPP project structures, based on a literature review and opinion surveys with sectoral experts. On the way forward, it is understood that the Ministry of Road Transport and Highways' estimate for strengthening the national highway network is USD 77 billion over the next five years. The outcome of this paper would provide policy makers with an understanding of the resilience measures adopted and possible options for an accessible and safe road network and its expansion, supporting possible policy initiatives and funding allocation in securing critical infrastructure.
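The headline accident statistics quoted above imply annual totals that are easy to verify. The short check below only restates the abstract's "one accident every minute, one death every 3.7 minutes" figures as yearly counts; it introduces no data beyond that.

```python
# Minutes in a non-leap year.
MINUTES_PER_YEAR = 365 * 24 * 60            # 525,600

# "An accident every minute" and "a death every 3.7 minutes".
accidents_per_year = MINUTES_PER_YEAR / 1.0
deaths_per_year = MINUTES_PER_YEAR / 3.7    # roughly 142,000 deaths per year
```

These implied totals (over half a million accidents and on the order of 142,000 deaths annually) are what make the 3%-of-GDP loss figure plausible.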

Keywords: national highways, policy, PPP, safety

Procedia PDF Downloads 257
219 Prosodic Transfer in Foreign Language Learning: A Phonetic Crosscheck of Intonation and F₀ Range between Italian and German Native and Non-Native Speakers

Authors: Violetta Cataldo, Renata Savy, Simona Sbranna

Abstract:

Background: Foreign Language Learning (FLL) is characterised by prosodic transfer phenomena regarding pitch accent placement, intonation patterns, and pitch range excursion from the learners' mother tongue to their Foreign Language (FL), which suggests that the gradual development of general linguistic competence in the FL does not imply a correspondingly equal improvement in prosodic competence. Topic: The present study aims to monitor the development of the prosodic competence of learners of Italian and German throughout the FLL process. The primary object of this study is to investigate the intonational features and the f₀ range excursion of Italian and German from a cross-linguistic perspective; analyses of native speakers' productions point out the differences between this pair of languages and provide models for the Target Language (TL). A subsequent crosscheck compares the L2 productions in Italian and German by non-native speakers to the Target Language models, in order to verify the occurrence of prosodic interference phenomena, i.e., their type, degree, and modalities. Methodology: The subjects of the research are university students belonging to two groups: Italian native speakers learning German as an FL and German native speakers learning Italian as an FL. Both groups have been divided into three subgroups according to FL proficiency level (beginner, intermediate, advanced). The dataset consists of wh-questions placed in situational contexts, uttered in both the speakers' L1 and FL. Using a phonetic approach, the analyses consider three domains of the intonational contour (Initial Profile, Nuclear Accent, and Terminal Contour) and two dimensions of the f₀ range parameter (span and level), which provide a basis for comparison between L1 and L2 productions. 
Findings: Results highlight a strong presence of prosodic transfer phenomena affecting the L2 productions of the majority of both Italian and German learners, irrespective of their FL proficiency level; the transfer concerns all three domains of the contour taken into account, although with different modalities and characteristics. L2 productions of German learners currently show a pitch span compression in the domain of the Terminal Contour, relative to their L1 and towards the TL; furthermore, German learners tend to use lower pitch range values, deviating from their L1, as their general linguistic competence in Italian improves. Results regarding pitch range span and level in the L2 productions of Italian learners are still in progress. At present, they show a tendency to expand the pitch span and to raise the pitch level, which also reveals a deviation from the L1, possibly in the direction of the German TL. Conclusion: Intonational features seem to be 'resistant' parameters to which learners appear not to be particularly sensitive. By contrast, learners show a certain sensitivity to FL pitch range dimensions. Clarifying which parameters are the most resistant and which the most sensitive when learning FL prosody could lay the groundwork for the development of prosodic training through which learners could finally acquire clear and natural pronunciation and intonation.
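The two f₀ range dimensions analysed above, span and level, are commonly operationalized as the max-to-min excursion in semitones and a central-tendency value in Hz. The sketch below uses that common operationalization with made-up contour values; it is not necessarily the exact measure the authors computed.

```python
import math

def f0_span_semitones(f0_hz):
    """Pitch span: max-to-min f0 excursion expressed in semitones
    (12 semitones = one octave, i.e. a doubling of frequency)."""
    lo, hi = min(f0_hz), max(f0_hz)
    return 12.0 * math.log2(hi / lo)

def f0_level_hz(f0_hz):
    """Pitch level: central tendency of the contour (median here)."""
    s = sorted(f0_hz)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

# Hypothetical f0 samples (Hz) from one wh-question contour.
contour = [110.0, 140.0, 180.0, 220.0, 160.0, 120.0]
span = f0_span_semitones(contour)   # 220/110 is one octave, i.e. 12 semitones
level = f0_level_hz(contour)        # median of the contour, in Hz
```

Working in semitones rather than raw Hz is the standard way to make span comparable across speakers with different baseline pitch, which matters when comparing L1 and L2 productions of the same speaker group.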

Keywords: foreign language learning, German, Italian, L2 prosody, pitch range, transfer

Procedia PDF Downloads 284
218 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles

Authors: Antonina A. Shumakova, Sergey A. Khotimchenko

Abstract:

The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs) and, on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this is the difference in the adsorption capacity of the ENMs, which was further studied in vitro. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments: - with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in rat liver; in the other organs (kidneys, spleen, testes and brain), the lead content was lower than in animals of the control group; - when studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in rat liver was observed with increasing dose of Al2O3 NPs; for the other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; - with the combined administration of lead and SiO2 NPs at different doses, there was no increase in lead accumulation in any of the studied organs. 
Based on the data obtained, at least three scenarios of the combined effects of ENMs and chemical contaminants on the body can be assumed: - the ENMs bind contaminants quite firmly in the gastrointestinal tract and such a complex becomes inaccessible (or less accessible) for absorption; in this case, the toxicity of both the ENMs and the contaminants can be expected to decrease; - the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or physiological barriers of the body; in this case, the ENMs can act as a kind of conductor for contaminants, so that their penetration into the internal environment of the body increases, thereby increasing the toxicity of the contaminants; - the ENMs and contaminants do not interact with each other in any way, so the toxicity of each is determined only by its own quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.

Keywords: absorption, cadmium, engineered nanomaterials, lead

Procedia PDF Downloads 86
217 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada

Authors: Stefan W. Kienzle

Abstract:

The increasing number of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and associated stresses on the public in coping with a changing climate. A climate index breaks down a daily climate time series into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends in phenomena such as the length of the growing season differ between hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which provides interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, and for the quantification of changes and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and their timing, freeze-thaw days, growing degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, longest period of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the nonparametric Sen's slope estimator.
A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5 °C in the south and 6-7 °C in the north, summers show the weakest warming over the same period, ranging from about 0.5-1.5 °C. New agricultural opportunities exist in central regions, where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20 °C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the numbers of days with heat waves and with cold spells have both increased two- to four-fold over the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of their region.
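The two trend measures named in the abstract can be sketched in a few lines (the series below is synthetic, not the Alberta grid data): the Mann-Kendall S statistic counts concordant minus discordant pairs, and Sen's slope takes the median of all pairwise slopes.

```python
import itertools
import math

# Hypothetical annual climate index (e.g., frost days) with a built-in decline;
# synthetic values, not the Alberta grid data
years = list(range(2000, 2020))
index = [120 - 0.8 * (y - 2000) + ((y * 7) % 5 - 2) for y in years]

def sign(x):
    return (x > 0) - (x < 0)

n = len(index)

# Mann-Kendall S statistic: concordant minus discordant pairs
S = sum(sign(index[j] - index[i]) for i, j in itertools.combinations(range(n), 2))

# Normal approximation for the significance test (no tie correction)
var_S = n * (n - 1) * (2 * n + 5) / 18
z = (S - sign(S)) / math.sqrt(var_S)  # |z| > 1.96 -> significant at the 5% level

# Sen's slope: median of all pairwise slopes (index units per year)
slopes = sorted((index[j] - index[i]) / (years[j] - years[i])
                for i, j in itertools.combinations(range(n), 2))
m = len(slopes)
sen = slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])
```

Both statistics are rank-based, so they tolerate the non-normal, outlier-prone distributions typical of climate indices, which is presumably why the study chose them.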

Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes

Procedia PDF Downloads 92
216 Social Value of Travel Time Savings in Sub-Saharan Africa

Authors: Richard Sogah

Abstract:

The significance of transport infrastructure investments for economic growth and development has been central to the World Bank's strategy for poverty reduction. Among conventional surface transport infrastructures, road infrastructure is significant in facilitating the movement of human capital, goods, and services. When transport projects (i.e., roads, super-highways) are implemented, they bring some negative social values (costs), such as increased noise and air pollution for residents living near these facilities, displaced individuals, etc. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified. For example, the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, and increases accessibility, trade expansion, and safety. Travel time savings (TTSs), which are the major economic benefit of urban and inter-urban transport projects and therefore integral to their economic assessment, are nevertheless often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, and the inability of models replicated from the developed world to capture the actual value of travel time savings in the presence of large-scale unemployment, underemployment, and other labor-market distortions, have contributed to the failure to assign a value to travel time savings when appraising transport schemes in developing countries.
This omission of the value of travel time savings from the benefits of transport projects in developing countries makes it difficult for investors and stakeholders to accept or dismiss projects, and biases appraisal toward schemes that reduce vehicle operating costs and other parameters rather than those that ease congestion, increase average speeds, facilitate walking and head-loading, and thus save travel time. Given the complex reality of estimating the value of travel time savings and the prevalence of informal labour activities in Sub-Saharan Africa, we construct a "nationally ranked distribution of time values" and estimate the value of travel time savings from the area beneath this distribution. Compared with other approaches, our method captures both formal-sector workers and people who work outside the formal sector, for whom changes in time allocation occur in the informal economy and in household production activities. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, and other institutions.
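The "area beneath the distribution" idea can be illustrated with a toy sketch. All numbers here are synthetic, and the construction is only one reading of the abstract, not the authors' actual model: rank individual time values, formal and informal alike, and the per-person area under the ranked curve gives the population-average value of time used to price a project's time savings.

```python
import numpy as np

# Synthetic hourly time values ($/h): formal-sector wages plus imputed values for
# informal work and household production (illustrative numbers only)
formal = [4.0, 3.2, 2.5]
informal = [1.5, 1.0, 0.8, 0.5]

ranked = np.sort(np.array(formal + informal))[::-1]  # nationally ranked distribution

# Area beneath the ranked step curve, normalised by population size, equals the
# population-mean value of time
vot = ranked.sum() / ranked.size

# Social value of a hypothetical project saving 0.25 h per person-trip, on average
social_value_per_trip = vot * 0.25
```

The point of ranking the whole population rather than sampling wages is that informal and household time values enter the area with the same weight as formal wages, instead of being dropped.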

Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Sahara Africa

Procedia PDF Downloads 107
215 Community Perception towards the Major Drivers for Deforestation and Land Degradation of Choke Afro-alpine and Sub-afro alpine Ecosystem, Northwest Ethiopia

Authors: Zelalem Teshager

Abstract:

The Choke Mountains host several endangered and endemic wildlife species and provide important ecosystem services. Despite their environmental importance, the Choke Mountains are in a precarious condition. This raised the need to evaluate the community's perception of deforestation and its major drivers, and to suggest possible solutions, in the Choke Mountains of northwestern Ethiopia. For this purpose, household surveys, key informant interviews, and focus group discussions were used, with a total sample of 102 informants. A purposive sampling technique was applied to select the participants for in-depth interviews and focus group discussions. Both qualitative and quantitative data analyses were used; descriptive statistics such as means, percentages, and frequencies, together with tables, figures, and graphs, were used to organize, analyze, and interpret the data. The study found that smallholder agricultural land expansion, fuelwood collection, population growth, encroachment, free grazing, high demand for construction wood, unplanned resettlement, unemployment, border conflict, the lack of a strong forest-protection system, and drought were the most serious causes of forest depletion reported by local communities. Loss of land productivity, soil erosion, declining soil fertility, increasing wind velocity, rising temperatures, and more frequent drought were the most widely perceived impacts of deforestation. Most of the farmers have a holistic understanding of forest cover change. Strengthening forest protection, improving soil and water conservation, enrichment planting, awareness creation, payment for ecosystem services, and zero-grazing campaigns were mentioned as possible responses to the current state of deforestation. Intervention measures such as animal fattening, beekeeping, and fruit production can help reduce the drivers of deforestation and improve communities' livelihoods.
In addition, concerted conservation efforts will ensure that the forest ecosystems continue to provide increased ecosystem services. The major drivers of deforestation should be addressed through government intervention that changes dependency on forest resources, diversifies people's income sources, and strengthens the institutional set-up of the forestry sector. Overall, a further reduction in anthropogenic pressure is urgent and crucial for the recovery of the afro-alpine vegetation and the interrelated endangered wildlife of the Choke Mountains.

Keywords: choke afro-alpine, deforestation, drivers, intervention measures, perceptions

Procedia PDF Downloads 53
214 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050

Authors: Ali Hashemifarzad, Jens Zum Hingst

Abstract:

The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. According to the federal climate protection plan, Germany aims for complete decarbonization of the energy sector by 2050. The Renewable Energy Sources Act 2017 stipulates that renewable sources cover at least 80% of electricity demand in 2050, and at least 60% of gross final energy consumption. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for developing such a sustainable energy supply from 100% renewables is to predict the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for energy efficiency gains and demand reduction are set very ambitiously; to provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as projected population development and economic growth, which have historically been significant drivers of increasing energy demand. The potential for demand reduction and efficiency gains on the demand side was also investigated. In particular, current and future technological developments in the energy-consuming sectors and options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included.
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that for a 100% renewable supply, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approx. 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand has a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It was also estimated that 50% of the electricity output must be stored to compensate for daily and annual fluctuations. Due to conversion and storage losses (about 50%), the electricity requirement in the very ambitious scenario would increase to 1,227 TWh/a.
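The scenario arithmetic can be checked with a quick back-of-the-envelope calculation. This is one reading of the stated assumptions (half of the electricity passes through storage at roughly 50% round-trip efficiency, so that share must be generated twice over), not the study's actual model:

```python
# Final energy demand, very ambitious scenario (TWh/a, figures from the abstract)
electricity = 818.0
ambient_heat = 229.0
non_electric = 315.0
total_final = electricity + ambient_heat + non_electric  # 1,362 TWh/a as stated

# Storage adjustment: 50% of the electricity is stored with ~50% conversion and
# storage losses, so the stored share requires twice its delivered amount
stored_share = 0.5
round_trip_efficiency = 0.5
generation = (electricity * (1 - stored_share)
              + electricity * stored_share / round_trip_efficiency)
# -> 409 + 818 = 1,227 TWh/a, matching the abstract's adjusted figure
```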

Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production

Procedia PDF Downloads 133
213 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump’s Victory Speech from English into Arabic

Authors: Hanan Al-Jabri

Abstract:

Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time pressure, different linguistic systems, and stress, pose a great challenge to most interpreters, and these constraints are likely to be maximised when the interpreting is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs, and is mostly asked to interpret high-profile events, which raises his or her stress levels and further complicates the task. Under these constraints, which require fast and efficient performance, TV interpreters at four channels were asked to render Trump's victory speech into Arabic, while also dealing with the burden of rendering the English emotive overtones employed by the speaker into an entirely different linguistic system. The current study investigates how TV interpreters working in the simultaneous mode handled this task; it explores and evaluates the interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded, or abandoned in their renditions. It also explores the difficulties and challenges that emerged during this process and might have influenced the interpreters' linguistic choices. To achieve these aims, the study analysed Trump's victory speech, delivered on November 9, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro framework and a micro framework. The former presents an overview of the wider context of the English speech as well as of the speaker and his political background, to help understand the linguistic choices he made; the latter investigates the linguistic tools the speaker employed to stir people's emotions.
These tools were investigated based on Shamaa's (1978) classification of emotive meaning by linguistic level: phonological, morphological, syntactic, and semantic-lexical. At this level, the study also investigates the patterns of rendition detected in the Arabic deliveries. The results identified different rendition patterns, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments, among others. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as of the practices of TV interpreters when rendering emotiveness, especially since little is known about interpreting practices in the field of TV, particularly between Arabic and English.

Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting

Procedia PDF Downloads 159
212 The Scenario Analysis of Shale Gas Development in China by Applying Natural Gas Pipeline Optimization Model

Authors: Meng Xu, Alexis K. H. Lau, Ming Xu, Bill Barron, Narges Shahraki

Abstract:

As an emerging unconventional energy source, shale gas has been an economically viable step towards a cleaner energy future in the U.S. China also has shale resources, estimated to be potentially the largest in the world, and enormous unmet demand for a clean alternative to coal. Nonetheless, the geological complexity of China's shale basins and issues of water scarcity potentially impose serious constraints on shale gas development in China. Further, even if China could replicate the U.S. shale gas boom to a significant degree, it faces the problem of transporting the gas efficiently overland with its limited pipeline network throughput capacity and coverage. The aim of this study is to identify the potential bottlenecks in China's gas transmission network, as well as to examine how shale gas development affects particular supply locations and demand centers. We examine this through three scenarios of projected domestic shale gas supply by 2020 (optimistic, medium, and conservative), taking as references the International Energy Agency's (IEA's) projections and China's shale gas development plans. Separately, we project gas demand at the provincial level, since shale gas will have a more significant impact regionally than nationally. To quantitatively assess each shale gas development scenario, we formulated a gas pipeline optimization model. We used ArcGIS to generate the connectivity parameters and pipeline segment lengths; other parameters were collected from provincial "twelfth five-year" plans and the "China Oil and Gas Pipeline Atlas". The multi-objective optimization model, implemented in GAMS and MATLAB, seeks to minimize the demand that cannot be met while simultaneously minimizing total gas supply and transmission costs.
The results indicate that, even when the primary objective is to meet the projected gas demand rather than to minimize cost, there is a shortfall of 9% in meeting total demand under the medium scenario. Comparing the optimistic and medium shale gas supply scenarios, almost half of the shale gas produced in Sichuan province and Chongqing cannot be transmitted out by pipeline. On the demand side, with increased shale gas supply the demand gaps of Henan province and Shanghai could be filled by as much as 82% and 39%, respectively. To conclude, the pipeline network in China is currently not sufficient to meet the projected natural gas demand in 2020 under the medium and optimistic scenarios, indicating the need for substantial capacity expansion of parts of the existing network and the importance of constructing new pipelines from particular supply sites to demand sites. If the pipeline constraint is overcome, the gas demand gaps of Beijing, Shanghai, Jiangsu, and Henan could potentially be filled, and China could thereby reduce its dependency on LNG imports by almost 25% under the optimistic scenario.
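The structure of such a model can be sketched as a small linear program. This is a generic formulation with toy numbers, not the authors' GAMS model: slack variables capture unmet demand, and a large penalty makes meeting demand lexicographically more important than transmission cost.

```python
from scipy.optimize import linprog

# Toy network: 2 supply nodes -> 2 demand nodes, one pipeline arc per pair
# Decision vector x = [f_11, f_12, f_21, f_22, u_1, u_2]
# f_ij: flow from supply i to demand j (bcm/a); u_j: unmet demand at j
supply = [20.0, 15.0]            # available gas at each supply node
demand = [18.0, 25.0]            # projected demand at each demand node
cap = [10.0, 12.0, 8.0, 9.0]     # pipeline capacity per arc
cost = [1.0, 2.5, 3.0, 1.5]      # unit transmission cost per arc
PENALTY = 1000.0                 # demand first, cost second

c = cost + [PENALTY, PENALTY]

# Demand balance: f_1j + f_2j + u_j = demand_j
A_eq = [[1, 0, 1, 0, 1, 0],
        [0, 1, 0, 1, 0, 1]]
b_eq = demand

# Supply limits: f_i1 + f_i2 <= supply_i
A_ub = [[1, 1, 0, 0, 0, 0],
        [0, 0, 1, 1, 0, 0]]
b_ub = supply

bounds = [(0, cap[k]) for k in range(4)] + [(0, None), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
unmet = res.x[4:]  # positive entries flag pipeline bottlenecks at demand nodes
```

In this toy instance total supply and arc capacities leave part of the second demand node unserved, which is exactly the kind of shortfall the study's medium scenario reports at the provincial level.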

Keywords: energy policy, energy systematic analysis, scenario analysis, shale gas in China

Procedia PDF Downloads 283
211 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food, when it was found that food heats almost instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material's ability to absorb electromagnetic energy and dissipate it in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating, since many raw materials are dielectric at high temperatures. Limestone, a sedimentary rock, is a dielectric material used intensively in the cement industry to produce unslaked lime. A 3D numerical model was implemented in COMSOL Multiphysics to study the continuous processing of limestone under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell's equations, as well as the coupling between the heat transfer and chemistry interfaces. In addition, a controller was implemented to optimize the overall heating efficiency and to stabilize the numerical model, by continuously matching the cavity impedance and predicting the energy required by the system, thereby avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model: the chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations of the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone.
The numerical model considered this effect and took advantage of this interaction. The model proved to be highly unstable when solving non-linear temperature distributions: limestone has a dielectric loss response that increases with temperature and a low thermal conductivity, and for this reason it is prone to thermal runaway under electromagnetic heating, as well as to numerical instabilities. Five scenarios were tested, with material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating tube rotation for mixing enhancement proved beneficial and crucial for all loads considered: when a uniform temperature distribution is achieved, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency tended to stabilize around 90% for loads higher than 50%. The process achieved a maximum microwave efficiency of 75% for the 80% fill ratio, suggesting that this is an optimal fill of the tube. Electric field peak detachment was observed for the 100% fill ratio, explaining its lower efficiency compared to 80%. Microwave technology was demonstrated to be an important ally for the decarbonization of the cement industry.
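The runaway mechanism described above can be caricatured with a lumped-parameter sketch. The constants are illustrative choices, not values from the COMSOL model: when the dielectric loss, and hence the absorbed power, rises with temperature faster than heat can be removed, the energy balance has no stable equilibrium.

```python
# Minimal lumped-parameter model of microwave heating with a temperature-dependent
# dielectric loss (illustrative constants, not the study's COMSOL parameters)
def simulate(alpha, t_end=600.0, dt=0.1):
    """Integrate dT/dt = (P_abs(T) - h*(T - T_amb)) / C with forward Euler.

    P_abs grows linearly with T when alpha > 0, mimicking limestone's
    loss factor rising with temperature.
    """
    T = T_amb = 300.0            # K
    C, h, P0 = 500.0, 2.0, 800.0  # heat capacity [J/K], loss coeff [W/K], power [W]
    t = 0.0
    while t < t_end:
        P_abs = P0 * (1.0 + alpha * (T - T_amb))  # absorbed power rises with T
        T += dt * (P_abs - h * (T - T_amb)) / C
        t += dt
    return T

T_const = simulate(alpha=0.0)     # constant loss: settles toward an equilibrium
T_runaway = simulate(alpha=0.01)  # rising loss: absorption outpaces heat removal
```

With alpha = 0 the bracket (P_abs - h*(T - T_amb)) vanishes at a finite temperature, while with alpha large enough that P0*alpha exceeds h the bracket stays positive and the temperature grows without bound, which is the feedback loop behind both the physical runaway and the model instabilities the abstract mentions.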

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 174
210 The Spatial Circuit of the Audiovisual Industry in Argentina: From Monopoly and Geographic Concentration to New Regionalization and Democratization Policies

Authors: André Pasti

Abstract:

Historically, the communication sector in Argentina has been characterized by intense monopolization and geographical concentration in the city of Buenos Aires. In 2000, the four major media conglomerates in operation (Clarín, Telefónica, America, and Hadad) controlled 84% of the national media market. By 2009, new policies had been implemented as a result of civil society organizations' demands; legally, a new regulatory framework was approved: Law 26,522 on Audiovisual Communication Services. Ostensibly, these policies intend to create new conditions for the development of the audiovisual economy in the territory of Argentina, with the regionalization of audiovisual production and the democratization of channels and access to media among the priorities. This paper analyses the main changes and continuities in the organization of the spatial circuit of the audiovisual industry in Argentina brought about by these new policies, which aim to increase the diversity of audiovisual producers and to promote regional audiovisual industries. For this purpose, a national program for the development of audiovisual centers within the country was created. This program fostered a federalized production network based on nine audiovisual regions and 40 nodes; each node created the technical, financial, and organizational conditions to gather different actors in audiovisual production, such as SMEs, social movements, and local associations. The expansion of access to technical networks was also a concern of other policies, such as 'Argentina Connected', whose objective was to expand access to broadband Internet; the Open Digital Television network also received considerable investment. Furthermore, measures were carried out to impose limits on the concentration of ownership, to eliminate oligopolies, and to ensure more competition in the sector, intended to force the media conglomerates to divide into smaller groups.
Nevertheless, the corporations that compose these conglomerates have resisted strongly, making full use of their economic and judicial power. Indeed, the lack of effective impact of such measures is attested by the fact that the audiovisual industry remains strongly concentrated in Argentina. Overall, these new policies were properly designed to decentralize audiovisual production and expand the regional diversity of the audiovisual industry; however, the effective transformation of the organization of the audiovisual circuit in the territory faced several forms of resistance. This can be explained first and foremost by the ideological and economic power of the media conglomerates; second, by the inertia inherited from the unequal distribution of the objects needed for audiovisual production and consumption; and lastly, by financial constraints and the excessive dependence on the state for the promotion of regional audiovisual production.

Keywords: Argentina, audiovisual industry, communication policies, geographic concentration, regionalization, spatial circuit

Procedia PDF Downloads 214
209 A Comparative Semantic Network Study between Chinese and Western Festivals

Authors: Jianwei Qian, Rob Law

Abstract:

With the expansion of globalization and the intensification of market competition, festivals, especially traditional ones, have demonstrated their vitality in the new context. As tourist attractions, festivals play a critically important role in promoting the tourism economy, because a festival can engage more tourists, generate more revenue, and win wider media attention. However, at the current stage in China, traditional festivals as a way to disseminate national culture are facing the challenge of foreign festivals and the culture associated with them. Unlike special events created solely for economic development, traditional festivals have their own culture and connotations. Therefore, it is necessary to study not only how to protect this tradition but also how to promote its development. This study compares the development of China's Valentine's Day and Western Valentine's Day in the Chinese context, drawing on newspaper reports in China from 2000 to 2016. Based on the literature, two main research focuses are established: one concerns a festival's impact, and the other concerns tourists' motivation to engage in a festival. Newspaper reports serve as the research discourse and cover both focal points. With the assistance of content mining techniques, semantic networks for the two festivals were constructed separately to depict their status quo in China. Based on the networks, two models were established to show the key components of traditional festivals, in the hope of strengthening the positive role festival tourism plays in the promotion of the economy and culture. According to the semantic networks, newspaper reports on the two festivals have both similarities and differences. The differences mainly reflect cultural connotations, because Westerners and Chinese may show their love in different ways.
Nevertheless, the two festivals share more common ground in terms of economy, tourism, and society; they also have a similar living environment and similar stakeholders, and can thus be promoted together to revitalize some traditions in China. Three strategies are proposed to this end. First, localize international festivals to suit the Chinese context so that they function better. Second, facilitate the internationalization of traditional Chinese festivals so that they receive more recognition worldwide. Finally, allow traditional festivals to compete with foreign ones so that they learn from each other and inform the development of other festivals. If all this can be realized, traditional Chinese festivals and foreign ones alike can look forward to a more promising future. Accordingly, the paper contributes to the theoretical construction of festival images through the presentation of the semantic networks. Meanwhile, the identified features and issues of festivals from two different cultures can inform the organization and marketing of festivals as a vital tourism activity. In the long run, the study can strengthen the festival as a key attraction supporting the sustainable development of both the economy and society.
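A minimal version of the co-occurrence approach behind such semantic networks can be sketched as follows (toy corpus, not the study's 2000-2016 newspaper data): terms appearing in the same report are linked, edge weights count co-occurrences, and weighted degree approximates a term's centrality.

```python
import itertools
from collections import Counter

# Toy "newspaper reports" (synthetic; the study mined real news archives)
reports = [
    "festival tourism boosts local economy and culture",
    "valentine festival attracts tourists and boosts economy",
    "traditional culture shapes festival tourism",
]
stopwords = {"and", "the", "of", "a"}

# Edge weights: number of reports in which two terms co-occur
edges = Counter()
for doc in reports:
    words = sorted({w for w in doc.split() if w not in stopwords})
    edges.update(itertools.combinations(words, 2))

# Weighted degree of each term approximates its centrality in the network
degree = Counter()
for (w1, w2), w in edges.items():
    degree[w1] += w
    degree[w2] += w

top_terms = degree.most_common(3)  # candidate "key components" of the network
```

Running the same construction on two corpora (reports about each festival) and comparing the resulting central terms is one simple way to operationalize the paper's similarity/difference comparison.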

Keywords: Chinese context, comparative study, festival tourism, semantic network analysis, valentine’s day

Procedia PDF Downloads 230
208 A Study on Development Strategies of Marine Leisure Tourism Using AHP

Authors: Da-Hye Jang, Woo-Jeong Cho

Abstract:

Marine leisure tourism contributes greatly to the national economies of countries bordering the sea, and many countries use marine tourism to create added value. The interest and investment of national and local governments in marine leisure tourism, a major trend within marine tourism, are steadily increasing. However, indiscriminate investment in marine leisure tourism, such as duplicated businesses, wastes limited resources; in other words, national and local governments need to select and concentrate on the goals they pursue by prioritizing marine leisure tourism policies. The purpose of this study is to analyze development strategies for marine leisure tourism on the supply side and thus provide a comprehensive and rational framework for developing marine leisure tourism. To achieve this purpose, the study analyzes the priority of each evaluation criterion of marine leisure tourism development policies using the Analytic Hierarchy Process (AHP). A questionnaire was used as the survey tool, developed on the basis of previous studies, government and regional reports, and related theses and literature on marine leisure tourism. The questionnaire was constructed by verifying the validity of its contents with an expert group on marine leisure tourism after first and second preliminary surveys. The AHP survey was administered to experts (university professors, researchers, field specialists, and related public officials) from April 6, 2018 to April 30, 2018, in person or by e-mail. Of the 123 questionnaires distributed, 68 valid questionnaires were used for data analysis, and 4 factors with 12 detailed strategies were analyzed using Excel. The extracted factors of the development strategies for marine leisure tourism are infrastructure, popularization, law & system improvement, and advancement.
In conclusion, the pairwise comparison of the four major factors at the first level ranked them, in order, as infrastructure, popularization, law & system improvement, and advancement. Second, marine waterfront space maintenance had a higher priority than marina facility expansion and the establishment of a marine leisure education center. Third, improving marine leisure safety culture had a higher priority than strengthening experience and education programs and holding open promotional events. Fourth, the specialization and clustering of marine leisure tourism had a higher priority than a business support system for marine leisure tourism. Fifth, the revision of the Water-Related Leisure Activities Safety Act had a higher priority than the enactment of a marine tourism promotion act and the fostering of the marina service industry. Finally, marine waterfront space maintenance was the most important development measure for boosting marine leisure tourism.
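The priority derivation in AHP can be sketched as follows. The judgment matrix here is illustrative, not the study's aggregated expert data: the principal eigenvector of the pairwise comparison matrix gives the weights, and the consistency ratio checks that the judgments are not self-contradictory.

```python
import numpy as np

factors = ["infrastructure", "popularization", "law & system improvement", "advancement"]

# A[i][j] = how much more important factor i is than factor j (Saaty's 1-9 scale);
# reciprocal entries below the diagonal. Illustrative judgments only.
A = np.array([
    [1.0,   2.0,   3.0, 4.0],
    [1 / 2, 1.0,   2.0, 3.0],
    [1 / 3, 1 / 2, 1.0, 2.0],
    [1 / 4, 1 / 3, 1 / 2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # Perron (principal) eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized priority vector

lambda_max = eigvals[k].real
n = len(A)
ci = (lambda_max - n) / (n - 1)                # consistency index
cr = ci / 0.90                                 # random index RI = 0.90 for n = 4
# cr < 0.1 is the usual acceptability threshold for expert judgments
```

In the study itself, each expert's matrix would be checked for consistency and the individual judgments aggregated (typically by geometric mean) before computing the final factor ranking.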

Keywords: marine leisure tourism, marine leisure, marine tourism, analytic hierarchy process

Procedia PDF Downloads 165
207 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries

Authors: Fang Li, Jiazhao Wang, Jianmin Ma

Abstract:

The practical application of Li-S batteries is hampered by poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities, namely strong chemical adsorption and high conductivity, are highly desired in an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), a conductive polymer, has been widely studied as a matrix for sulfur cathodes due to its high conductivity and strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole-coated separator was designed for flexible Li-S batteries. The PPy materials show strong interaction with dissolved polysulfides, which suppresses the shuttle effect and improves cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of the sulfur materials and restrains the volume expansion, enhancing structural stability during cycling. To further enhance cycling stability, a PPy-coated separator was also applied, which confines polysulfides to the cathode side to alleviate the shuttle effect. Moreover, the PPy layer coated on the commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery was designed and fabricated to test the practical application of the designed cathode and separator; it could power a device consisting of 24 light-emitting diode (LED) lights. Moreover, the soft-packaged flexible battery still shows relatively stable cycling performance after repeated bending, indicating its potential application in flexible batteries. A novel vapor-phase deposition method was also applied to prepare a uniform polypyrrole layer coated on a sulfur/graphene aerogel composite.
The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfide dissolution through strong chemical interaction. Density functional theory (DFT) calculations reveal that polypyrrole can trap lithium polysulfides through stronger bonding energies. In addition, the deflation of the sulfur/graphene hydrogel during the vapor-phase deposition process enhances the contact of sulfur with the matrix, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole-coated sulfur/graphene aerogel composite delivers specific discharge capacities of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C, respectively. The capacity is maintained at 698 mAh g⁻¹ at 0.5 C after 500 cycles, showing an ultra-slow decay rate of 0.03% per cycle.

Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries

Procedia PDF Downloads 140
206 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theorem, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector, built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed.
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated from the four voltage levels alone, without recovery of the signal waveform, is 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is important because limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
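The recovery principle can be illustrated on synthetic data. The sketch below (synthetic Gaussian pulses standing in for scintillator signals, not J-PET data) builds a PCA basis from a training set and recovers a full waveform from eight samples via a Tikhonov-regularized least-squares fit whose penalty is weighted by the PCA prior variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training" waveforms: pulses of random amplitude and width
t = np.linspace(0, 1, 100)
train = np.array([a * np.exp(-((t - 0.5) / w) ** 2)
                  for a, w in zip(rng.uniform(0.5, 1.5, 200),
                                  rng.uniform(0.05, 0.15, 200))])

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:8]                            # 8 leading PCA components
prior_var = (s[:8] ** 2) / len(train)     # prior variance per component

# Sampling matrix: the waveform is observed at only 8 time points
idx = np.linspace(5, 94, 8).astype(int)
M = np.zeros((8, len(t)))
M[np.arange(8), idx] = 1.0

def recover(samples, lam=1e-3):
    """Tikhonov-regularized recovery in the PCA basis:
    minimize ||M(mean + B^T c) - y||^2 + lam * sum_i c_i^2 / var_i."""
    A = M @ basis.T
    reg = lam * np.diag(1.0 / prior_var)
    c = np.linalg.solve(A.T @ A + reg, A.T @ (samples - M @ mean))
    return mean + basis.T @ c

true = 1.1 * np.exp(-((t - 0.5) / 0.1) ** 2)
rec = recover(true[idx])    # full 100-point waveform from 8 samples
```

The prior-weighted penalty damps components with small training variance, which is what lets eight samples determine a 100-point waveform stably.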

Keywords: plastic scintillators, positron emission tomography, statistical analysis, tikhonov regularization

Procedia PDF Downloads 445
205 Investigation Studies of WNbMoVTa and WNbMoVTaCr₀.₅Al Refractory High Entropy Alloys as Plasma-Facing Materials

Authors: Burçak Boztemur, Yue Xu, Laima Luo, M. Lütfi Öveçoğlu, Duygu Ağaoğulları

Abstract:

Tungsten (W) is used chiefly as a plasma-facing material. However, it has some problems, such as brittleness after plasma exposure. Refractory high-entropy alloys (RHEAs) offer a new opportunity to address this deficiency, so in this study the behavior of the WNbMoVTa and WNbMoVTaCr₀.₅Al compositions under He⁺ irradiation was examined. The mechanical and irradiation properties of the WNbMoVTa base composition were investigated upon addition of the Al and Cr elements. Mechanical alloying (MA) for 6 hours was applied to obtain the RHEA powders. According to X-ray diffraction (XRD), a body-centered cubic (BCC) phase and an NbTa phase, with a small amount of WC impurity originating from the vials and balls, were determined after 6 h of MA. The RHEA powders were then consolidated by spark plasma sintering (SPS) at 1500 °C and 30 MPa for 10 min. After SPS, (Nb,Ta)C and W₂C₀.₈₅ phases were obtained through the decomposition of the WC and of the stearic acid added during MA, based on the XRD results. The BCC phase was retained in both samples. While an Al₂O₃ phase of small intensity was seen in the WNbMoVTaCr₀.₅Al sample, a Ta₂VO₆ phase was determined in the base sample. These phases were observed as three different regions by scanning electron microscopy (SEM). According to electron probe micro-analysis (EPMA) coupled with a wavelength dispersive spectroscope (WDS), all elements were distributed homogeneously in the white region. The grey region of the WNbMoVTa sample was rich in Ta, V, and O, whereas the amounts of Al and O were higher in the grey region of the WNbMoVTaCr₀.₅Al sample. High amounts of Nb, Ta, and C were determined in both samples. The Archimedes densities, measured in an alcohol medium, were close to the theoretical densities of the RHEAs. These values are important for the microhardness and irradiation resistance of the compositions.
While the Vickers microhardness of the WNbMoVTa sample was measured as ~11 GPa, this value increased to nearly 13 GPa for the WNbMoVTaCr₀.₅Al sample. These values were consistent with the wear behavior: the wear volume loss decreased from 1.25×10⁻⁴ mm³ to 0.16×10⁻⁴ mm³ upon the addition of Al and Cr to WNbMoVTa. He⁺ irradiation was conducted on the samples to observe surface damage. After irradiation, the XRD patterns shifted to the left because of defects and dislocations: He⁺ ions infused under the surface created lattice expansion. The peak shift of the WNbMoVTaCr₀.₅Al sample was smaller than that of the WNbMoVTa base sample, indicating less irradiation damage. A small amount of fuzz was observed on the base sample; this structure was removed and transformed into a wavy structure by the addition of Cr and Al. Deformation hardening also occurred after irradiation, and based on the change in microhardness values, a lower amount of hardening was obtained for the WNbMoVTaCr₀.₅Al sample. Overall, surface deformation was reduced in the WNbMoVTaCr₀.₅Al sample.
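The link between a leftward XRD peak shift and lattice expansion follows directly from Bragg's law: a smaller diffraction angle implies a larger plane spacing. A minimal sketch (hypothetical peak positions and an assumed Cu Kα source, not the authors' measured data):

```python
import math

CU_KA = 1.5406  # Cu K-alpha wavelength in angstroms (assumed source)

def d_spacing(two_theta_deg, lam=CU_KA):
    """Bragg's law, lam = 2 d sin(theta): lattice plane spacing
    from a diffraction peak position given as 2-theta."""
    return lam / (2 * math.sin(math.radians(two_theta_deg / 2)))

# Hypothetical BCC peak before and after He+ irradiation:
d0 = d_spacing(40.2)        # pristine sample
d1 = d_spacing(40.0)        # peak shifted left (lower 2-theta)
strain = (d1 - d0) / d0     # positive value -> lattice expansion
```

A positive lattice strain of this kind is the quantitative signature of the He⁺-induced expansion described above.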

Keywords: refractory high entropy alloy, microhardness, wear resistance, He⁺ irradiation

Procedia PDF Downloads 64
204 Spray Nebulisation Drying: Alternative Method to Produce Microparticulated Proteins

Authors: Josef Drahorad, Milos Beran, Ondrej Vltavsky, Marian Urban, Martin Fronek, Jiri Sova

Abstract:

Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in spray-drying technologies led to the introduction of the demonstrator ATOMIZER and a new technology of Carbon Dioxide-Assisted Spray Nebulization Drying (CASND). The equipment combines, in an original way, spray-drying technology, in which the liquid to be dried is atomized by a rotary atomizer, with the Carbon Dioxide Assisted Nebulization - Bubble Dryer (CAN-BD) process. A solution, emulsion, or suspension is saturated with carbon dioxide at pressures up to 80 bar before the drying process. The atomization then takes place in two steps. In the first step, primary droplets are produced at the outlet of a rotary atomizer of special construction. In the second step, the primary droplets are divided into secondary droplets by the expansion of CO2 from inside the primary droplets. The secondary droplets, usually in the form of microbubbles, are rapidly dried by a warm air stream at temperatures up to 60 °C, and solid particles are formed in a drying chamber. The powder particles are separated from the drying air stream in a high-efficiency fine-powder separator. The product is frequently in the form of submicron hollow spheres. The CASND technology has been used to produce microparticulated protein concentrates for human nutrition from alternative plant sources - hemp and canola seed filtration cakes. Alkali extraction was used to extract the proteins from the filtration cakes, and the resulting protein solutions were dried with the demonstrator ATOMIZER. The aerosol particle size distribution and concentration in the drying chamber were determined by two different on-line aerosol spectrometers: SMPS (Scanning Mobility Particle Sizer) and APS (Aerodynamic Particle Sizer). The protein powders were in the form of hollow spheres with an average particle diameter of about 600 nm. The particles were characterized by SEM.
The functional properties of the microparticulated protein concentrates were compared with those of the same protein concentrates dried by the conventional spray-drying process. The microparticulated proteins were shown to have improved foaming and emulsifying properties and water and oil absorption capacities, and to form long-term stable water dispersions. This work was supported by research grant TH03010019 of the Technology Agency of the Czech Republic.

Keywords: carbon dioxide-assisted spray nebulization drying, canola seed, hemp seed, microparticulated proteins

Procedia PDF Downloads 164
203 Deforestation, Vulnerability and Adaptation Strategies of Rural Farmers: The Case of Central Rift Valley Region of Ethiopia

Authors: Dembel Bonta Gebeyehu

Abstract:

In the study area, the impacts of deforestation on environmental degradation and on farmers' livelihoods manifest in different ways. Farmers are especially vulnerable because they depend on rain-fed agriculture and on nearby natural forests. On the other hand, after seedlings are planted, the disposal and management of their plastic covers is poorly practiced and administered in the country in general and in the study area in particular. If this situation continues, the plastic waste will further accentuate land degradation. Besides, comprehensive empirical studies on the issue are lacking; the results of this study could therefore inform intervention schemes and contribute to existing knowledge. The study employed a qualitative approach based on intensive fieldwork data collected via various tools, namely open-ended interviews, focus group discussions, key-informant interviews, and non-participant observation. The collected data were duly transcribed and later categorized under different labels based on pre-determined themes for further analysis. The major causes of deforestation were the expansion of agricultural land, poor administration, population growth, and the absence of conservation methods. The farmers are vulnerable to soil erosion and soil infertility culminating in low agricultural production; loss of grazing land and decline of livestock production; climate change; and deterioration of social capital. Their adaptation and coping strategies include natural conservation measures, diversification of income sources, safety-net programs, and migration. Due to participatory natural resource conservation measures, soil erosion has decreased, and protected indigenous woodlands have started to regenerate. These changes brought about a change in farmers' attitudes. The existing forestation program nevertheless has many flaws; in particular, after seedlings are planted, there is no mechanism for plastic waste disposal and management.
Organizational challenges among the mandated offices were also found. In the study area, deforestation is aggravated by a number of factors that have made the farmers vulnerable, and the current forestation programs are not well planned, implemented, or coordinated. Sustainable and efficient methods for collecting and reusing seedling plastic covers should be devised. This is possible through creating awareness and through organizing micro and small enterprises to reuse, and generate income from, the collected plastic.

Keywords: land-cover and land-dynamics, vulnerability, adaptation strategy, mitigation strategies, sustainable plastic waste management

Procedia PDF Downloads 387
202 Strategies for Drought Adaptation and Mitigation via Wastewater Management

Authors: Simrat Kaur, Fatema Diwan, Brad Reddersen

Abstract:

The unsustainable and injudicious use of natural renewable resources beyond the self-replenishment limits of our planet has proved catastrophic. Most of the Earth's resources, including land, water, minerals, and biodiversity, have been overexploited. Owing to this, there is a steep rise in global natural calamities of contrasting nature, such as torrential rains, storms, heat waves, rising sea levels, and megadroughts. These are all interconnected through common elements, namely oceanic currents and the land's green cover. Deforestation fueled by the 'economic elites', the global players, has already cleared massive forests and ecological biomes in every region of the globe, including the Amazon. These were natural carbon sinks that had prevailed and performed CO2 sequestration for millions of years. The forest biomes have been turned into monocultivation farms to produce feedstock crops such as soybean, maize, and sugarcane, which are among the biggest greenhouse gas emitters. Such unsustainable agricultural practices only provide feedstock for livestock and food-processing industries with huge carbon and water footprints, two factors that have a 'cause and effect' relationship in the context of climate change. In contrast to organic and sustainable farming, monocultivation practices that produce food, fuel, and feedstock using chemicals deprive the soil of its fertility, abstract surface and ground waters beyond the limits of replenishment, emit greenhouse gases, and destroy biodiversity. There are numerous cases across the planet where, due to overuse, the levels of surface water reservoirs, such as Lake Mead in the southwestern USA, and of groundwater, such as in Punjab, India, have deeply shrunk.
Unlike the rain-fed food production system on which the poor communities of the world rely, blue-water (surface and ground water) dependent monocropping for industrial and processed food creates a water deficit that puts the burden on domestic users. Excessive abstraction of both surface and ground waters for high-water-demand feedstock (soybean, maize, sugarcane), cereal crops (wheat, rice), and cash crops (cotton) has a dual and synergistic impact on global greenhouse gas emissions and the prevalence of megadroughts. Both these factors have elevated global temperatures, causing cascading events such as soil water deficits, flash fires, and the unprecedented burning of woods, creating megafires on multiple continents, namely in the USA, South America, Europe, and Australia. It is therefore imperative to reduce the green and blue water footprints of the agricultural and industrial sectors through the recycling of black and grey waters. This paper explores various opportunities for the successful implementation of wastewater management for drought preparedness in high-risk communities.

Keywords: wastewater, drought, biodiversity, water footprint, nutrient recovery, algae

Procedia PDF Downloads 100
201 High Impact Biostratigraphic Study

Authors: Njoku, Joy

Abstract:

The re-calibration of the Campanian to Maastrichtian of some parts of the Anambra Basin was carried out using samples from two exploration wells, Amama-1 (219 m - 1829 m) and Bara-1 (317 m - 1594 m). Palynological and paleontological analyses were carried out on 100 ditch cutting samples. The faunal and floral successions were of terrestrial and marine origin, as described and logged. The wells penetrated four stratigraphic units of the Anambra Basin (the Nkporo, Mamu, Ajali, and Nsukka) and yielded well-preserved foraminifera and palynomorphs. Amama-1 yielded 53 species of foraminifera and 69 species of palynomorphs, with 12 genera; Bara-1 yielded 25 species of foraminifera and 101 species of palynomorphs. Amama-1 permitted the recognition of 21 genera, with 31 foraminiferal assemblage zones, 32 pollen and 37 spore assemblage zones, and dinoflagellate cyst biozonation, ranging from the late Campanian to the early Paleocene; Bara-1 yielded 60 pollen, 41 spore assemblage zones, and 18 dinoflagellate cysts. The zones, in stratigraphically ascending order for the foraminifera and palynomorphs, are as follows. Foraminifera, Amama-1: Biozone A - Globotruncanella havanensis zone, Late Campanian - Maastrichtian (695 - 1829 m); Biozone B - Morozovella velascoensis zone, Early Paleocene (165 - 695 m). Bara-1: Biozone A - Globotruncanella havanensis zone, Late Campanian (1512 m); Biozone B - Bolivina afra, B. explicata zone, Maastrichtian (634 - 1204 m); Biozone C - indeterminate (305 - 634 m). Palynomorphs, Amama-1: A - Ctenolophonidites costatus zone, Early Maastrichtian (1829 m); B - Retidiporites miniporatus zone, Late Maastrichtian (1274 m); Constructipollenites ineffectus zone, Early Paleocene (695 m). Bara-1: Droseridites senonicus zone, Late Campanian (994 - 1600 m); B - Ctenolophonidites costatus zone, Early Maastrichtian (713 - 994 m); C - Retidiporites miniporatus zone, Late Maastrichtian (305 - 713 m). The paleo-environments of deposition were determined to range from non-marine to outer neritic.
A detailed categorization of the palynomorphs into terrestrially derived and marine-derived forms, based on the distribution of three broad vegetation types (mangrove, freshwater swamp, and hinterland communities), was used to evaluate sea-level fluctuations with respect to the sediments deposited in the basins, linked with particular depositional system tracts. Amama-1 recorded 4 maximum flooding surfaces (MFS) at depths of 165 - 1829 m, dated between 61 Ma and 76 Ma, and three sequence boundaries (SB) at depths of 1048 m - 1533 m and 1581 m; Bara-1 recorded surfaces between 634 m and 1387 m, dated 69.5 Ma - 82 Ma, and four sequence boundaries (SB) at 552 m - 876 m, dated 68 Ma - 77.5 Ma, respectively. The ecostratigraphic description is characterised by the prominent expansion of the hinterland component, consisting of the Mangrove to Lowland Rainforest and Afromontane - Savannah vegetation.

Keywords: foraminifera, palynomorphs, campanian, maastrichtian, ecostratigraphy, anambra

Procedia PDF Downloads 28
200 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by normally innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the scoring systems proposed by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews.
Patient demographics, the type and distribution of TN, the response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. The scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score greater than 3 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score greater than 2 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI, when compared with the radiologist's, had a greater correlation with pain-free outcomes 1 year post-MVD.
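Odds ratios and their confidence intervals of the kind reported here derive from the fitted logistic-regression coefficient and its standard error: OR = exp(beta), with the CI obtained by exponentiating beta ± 1.96 SE. A minimal sketch with a hypothetical coefficient and standard error (not the study's fitted values):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% confidence interval from a
    logistic-regression coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a binary predictor such as
# "composite score > 3" -- illustrative numbers only.
or_, lo, hi = odds_ratio_ci(beta=0.593, se=0.155)
```

Note that because the interval is symmetric on the log-odds scale, it is asymmetric around the odds ratio itself.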

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 72
199 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada

Authors: Amy Buckley

Abstract:

The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. 
Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinions of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, to name just a few. This article undertakes a case study on Canada in particular, as it is a jurisdiction less commonly cited in the academic literature concerned with hyper-rigidity and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem: the consequences of constitutional hyper-rigidity are in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada's e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency's amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether its conclusions can be generalised.

Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism

Procedia PDF Downloads 76
198 Innovative Technologies of Distant Spectral Temperature Control

Authors: Leonid Zhukov, Dmytro Petrenko

Abstract:

Optical thermometry has no alternative in many cases of the most effective continuous industrial temperature control. Classical optical thermometry technologies can be used on controlled objects that are accessible to pyrometers and have stable radiation characteristics and a stable transmissivity of the intermediate medium. Without temperature corrections, this is possible in the case of a "black" body for energy pyrometry, and in the cases of "black" and "grey" bodies for spectral ratio pyrometry; with corrections, it is possible for any colored bodies. Consequently, with an increasing number of operating waves, the possibilities of optical thermometry to reduce methodical errors expand significantly. That is why, over the last 25-30 years, research has been reoriented toward more advanced spectral (multicolor) thermometry technologies. There are two physical substances operated on in optical thermometry: matter (the controlled object) and the electromagnetic field (thermal radiation). Heat is transferred by radiation; therefore, radiation has energy, entropy, and temperature. Optical thermometry originated simultaneously with the development of thermal radiation theory, when the concept and term "radiation temperature" were not yet used, and therefore the concepts and terms "conditional temperatures" or "pseudo-temperatures" of controlled objects were introduced. These do not correspond to the physical sense and definitions of temperature in thermodynamics, molecular-kinetic theory, and statistical physics. The discussion launched by the scientific thermometric society about the possibility of measuring the temperatures of objects, including colored bodies, using the temperatures of their radiation is not finished. Is the information about controlled objects transferred by their radiation sufficient for temperature measurements? The positive and negative answers to this fundamental question have divided experts into two opposite camps.
Recent achievements of spectral thermometry develop events in its favour and leave little hope for the skeptics. This article presents the results of investigations and developments in the field of spectral thermometry carried out by the authors in the Department of Thermometry and Physico-Chemical Investigations. The authors have many years of experience in the field of modern optical thermometry technologies. Innovative technologies of continuous optical temperature control have been developed: symmetric-wave and two-color compensative technologies, and, based on the obtained nonlinearity equation of the spectral emissivity distribution, linear, two-range, and parabolic technologies. The technologies are based on direct measurements of the physically substantiated radiation temperatures proposed by Prof. L. Zhukov, with subsequent calculation of the controlled object's temperature from these radiation temperatures and the corresponding mathematical models. The technologies significantly increase the metrological characteristics of continuous contactless and light-guide temperature control in energy, metallurgical, ceramic, glass, and other productions. For example, under the same conditions, the methodical errors of the proposed technologies are smaller than the errors of the known spectral and classical technologies by factors of 2 and 3-13, respectively. The innovative technologies provide quality products at the lowest possible resource costs, including energy costs. More than 600 publications have been published on the completed developments, including more than 100 domestic patents, as well as 34 patents in Australia, Bulgaria, Germany, France, Canada, the USA, Sweden, and Japan. The developments have been implemented in enterprises in the USA, as well as in Western Europe and Asia, including Germany and Japan.
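The emissivity cancellation that makes spectral ratio pyrometry exact for "grey" bodies, as noted above, can be sketched in a few lines. Under the Wien approximation, with illustrative wavelengths and temperature (not the authors' instrument parameters), the ratio of radiances at two wavelengths yields the temperature with the emissivity dropping out:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, T, eps):
    """Spectral radiance in the Wien approximation (the c1 factor
    is omitted, since it cancels in the two-wavelength ratio)."""
    return eps * lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(L1, L2, lam1, lam2):
    """Spectral-ratio (two-color) temperature: exact for grey
    bodies, whose emissivity cancels out of L1 / L2."""
    return (C2 * (1 / lam1 - 1 / lam2)
            / (5 * math.log(lam2 / lam1) - math.log(L1 / L2)))

lam1, lam2 = 0.65e-6, 0.90e-6      # two operating wavelengths, m
T_true, eps = 1800.0, 0.4          # grey body: same eps at both waves
L1 = wien_radiance(lam1, T_true, eps)
L2 = wien_radiance(lam2, T_true, eps)
T_est = ratio_temperature(L1, L2, lam1, lam2)   # recovers T_true
```

For colored bodies, where emissivity differs between the two wavelengths, this simple ratio acquires a methodical error, which is what the multicolor technologies described above are designed to reduce.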

Keywords: emissivity, radiation temperature, object temperature, spectral thermometry

Procedia PDF Downloads 98