Search results for: routing problem
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7349

5639 The Emancipatory Methodological Approach to the Organizational Problems Management

Authors: Slavica P. Petrovic

Abstract:

One of the key dimensions of management problems in organizations concerns the relations between stakeholders. The relevant research subject is management problems characterized by conflict and coercion, in which participants do not agree on ends and means, and in which different groups or individuals strive, using the power they have, to impose their favoured strategy and decisions on others. Creatively managing coercive problems in organizations, in which the sources of power can be identified, implies the emancipatory paradigm and the use of a corresponding systems methodology. The main research aim is to critically reassess the theoretical foundations and the methodological and methodical development of Critical Systems Heuristics (CSH), as a valid representative of the emancipatory paradigm, in order to determine the conditions, ways, and achievements of its application in managing coercive problems in organizations. The basic hypothesis is that CSH, as an emancipatory methodology, given its theoretical foundations and methodological-methodical development, can be employed in a scientifically based and practically useful manner in creatively addressing coercive problems. The scientific instrumentarium corresponding to this research aim is critical systems thinking, with its three key commitments: a) critical awareness of the strengths and weaknesses of each research instrument (theory, methodology, method, technique, model) for structuring problem situations in organizations; b) improvement of the management of coercive problems in organizations; and c) pluralism, i.e., respecting different perceptions and interpretations of problem situations and enabling the combined use of research instruments. The relevant research result is that CSH, considering its theoretical foundations and methodological and methodical development, makes it possible to reveal the normative content of proposed or existing designs of organizational systems.
Accordingly, it can be concluded that, through the use of critically heuristic categories and dialectical debate between those involved in designing organizational systems and those affected by but not included in the designs, CSH endeavours, in application, to support the process of improving the position of all stakeholders.

Keywords: coercion and conflict in organizations, creative management, critical systems heuristics, the emancipatory systems methodology

Procedia PDF Downloads 442
5638 Inverse Matrix in the Theory of Dynamical Systems

Authors: Renata Masarova, Bohuslava Juhasova, Martin Juhas, Zuzana Sutova

Abstract:

In dynamical system theory, a mathematical model is often used to describe a system's properties. In order to find the transfer matrix of a dynamic system, we need to calculate an inverse matrix. The paper fuses the classical theory with the procedures used in the theory of automated control for calculating the inverse matrix. The final part of the paper models the given problem in MATLAB.
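As a minimal illustration of the link between the inverse matrix and the transfer matrix, the relation G(s) = C(sI − A)⁻¹B + D can be evaluated numerically. This is a sketch in Python/NumPy rather than the paper's MATLAB, and the second-order system below is an illustrative example, not one from the paper:

```python
import numpy as np

def transfer_matrix(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a given complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

# Illustrative system: x'' + 3x' + 2x = u, output y = x,
# whose transfer function is G(s) = 1 / (s^2 + 3s + 2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

G = transfer_matrix(A, B, C, D, s=1.0)  # expect 1 / (1 + 3 + 2) = 1/6
```

In MATLAB the same step is typically done with `inv(s*eye(n) - A)` or, symbolically, with the Symbolic Math Toolbox.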

Keywords: dynamic system, transfer matrix, inverse matrix, modeling

Procedia PDF Downloads 516
5637 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia

Authors: Nino Paresashvili, Nino Abesadze

Abstract:

The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries, and the current processes in the global environment affect countries with different cultures differently. However, alleviating poverty and improving living conditions remain basic challenges for the majority of countries, because much of the population still lives below the official poverty threshold. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and to develop sound mechanisms for regulating the labour market. Such planning requires a thorough study and analysis of the existing reality, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the labour market's key mechanisms, and the corresponding statistical methods, namely observation, data gathering, grouping, and calculation of generalized indicators, should be used in the study process. Unemployment is one of the most severe socio-economic problems in Georgia. According to past as well as current statistics, unemployment rates have always been among the most problematic issues for policy makers to resolve, and analytical work on this problem will be the basis for the next sustainable steps towards solving it. The results of the study showed that young people's choice of profession is often driven neither by their inclinations and interests nor by labour market demand; this wrong professional orientation in most cases leads to their unemployment. At the same time, it was shown that a number of professions in the labour market are in high demand because of a deficit of appropriately trained specialists.
To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take into account regional infrastructure specifications.

Keywords: unemployment, analysis, methods, tendencies, regulation mechanisms

Procedia PDF Downloads 378
5636 Violent Conflict and the Protection of Women from Sex and Gender-Based Violence: A Third World Feminist Critique of the United Nations Women, Peace, and Security Agenda

Authors: Seember Susan Aondoakura

Abstract:

This paper examines the international legal framework established to address the challenges women and girls experience in situations of violent conflict. The United Nations (UN) women, peace, and security agenda (hereafter the WPS agenda, the Agenda) aspires to make wars safer for women. It recognizes both women's agency in armed conflict and their victimization, and formulates measures for their protection. The Agenda also acknowledges women's participation in conflict transformation and post-conflict reconstruction, calls for the involvement of women in conflict transformation, encourages the protection of women from sex and gender-based violence (SGBV), and provides relief and recovery from conflict-related SGBV. Using Third World critical feminist theory, this paper argues that the WPS agenda's overriding focus on protecting women from SGBV occurring in the less developed and conflict-ridden states of the global south obscures the complicity of western states and economies in the problem, and silences the privileges that such states derive from war economies that continue to fuel conflict. This protectionist approach of the UN also obliterates other equally pressing problems in need of attention, like the high rates of economic degradation in conflict-ravaged societies of the global south. Prioritising protection also 'others' the problem, obliterating any sense of interconnection across geographical locations and situating women in the less developed economies of the global south as the victims and their men as the perpetrators. Prioritising protection ultimately casts western societies as saviours of Third World women without acknowledging their role in engendering and sustaining war. The paper demonstrates that this saviour mentality forecloses any meaningful coalition between the local and the international in framing and addressing the issue, as solutions are formulated through a specific lens: the white hegemonic lens.

Keywords: conflict, protection, security, SGBV

Procedia PDF Downloads 96
5635 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that uses semantic technology to retrieve information and display it meaningfully. Semantic search engines are not widely used on the web, as the majority are still in beta or under construction. Current semantic search applications face many problems: the major one is analyzing and calculating the meaning of a query in order to retrieve relevant information; others are maintaining and updating an ontology-based index, and ranking results according to concept meaning and its relation to the query. In this paper, we offer a light meta-engine (QESM) that uses Google search, and therefore Google's index, adapting the returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. First, the engine finds synonyms of each query term entered by the user, based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences; these are generated randomly by combining the found synonyms with the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate the semantic similarity between each query and the description of each page's content returned by Google. The generated sentences are sent to the Google engine one by one, and the results are then ranked all together with the adapted ranking method (QESM). Finally, our system places the Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries and observed that the QESM ranking frequently differed from Google's original ordering. With our experimental queries, QESM frequently achieves better accuracy than Google; in the worst cases, it behaves like Google.
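A minimal sketch of the expand-and-rerank idea described above, assuming a toy synonym lexicon and a plain bag-of-words cosine in place of the paper's lexical database and semantic similarity measure (all names and data below are illustrative, not from QESM itself):

```python
import itertools
from collections import Counter
from math import sqrt

# Hypothetical synonym lexicon standing in for a real lexical database.
SYNONYMS = {"car": ["automobile", "vehicle"], "cheap": ["inexpensive", "affordable"]}

def expand_query(query):
    """Generate semantically analogous queries by substituting synonyms."""
    options = [[t] + SYNONYMS.get(t, []) for t in query.split()]
    return [" ".join(combo) for combo in itertools.product(*options)]

def cosine(a, b):
    """Bag-of-words cosine similarity between two sentences."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def rerank(query, page_descriptions):
    """Re-rank page descriptions by their best similarity to any expanded query."""
    expanded = expand_query(query)
    scored = [(max(cosine(q, d) for q in expanded), d) for d in page_descriptions]
    return [d for _, d in sorted(scored, reverse=True)]

pages = ["buy an inexpensive automobile today", "history of the automobile industry"]
ranking = rerank("cheap car", pages)  # the synonym-rich page should rank first
```

The real engine would replace the toy lexicon with a lexical database and score against the page descriptions returned by Google rather than a local list.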

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 425
5634 Time, Uncertainty, and Technological Innovation

Authors: Xavier Everaert

Abstract:

Ever since the publication of "The Problem of Social Cost", Coasean insights on externalities, transaction costs, and the reciprocal nature of harms have been widely debated. What has been largely neglected, however, is the role of technological innovation in mitigating negative externalities or transaction costs. Incorporating future uncertainty about negligence standards or expected restitution costs, and the profit opportunities these uncertainties reveal to entrepreneurs, allows us to frame problems of social cost within the reality of rapid technological evolution.

Keywords: environmental law and economics, entrepreneurship, commons, pollution, wildlife

Procedia PDF Downloads 421
5633 X-Ray Detector Technology Optimization in CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices, and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of traces and connections required by front-illuminated diodes. In backlit diodes, electronic noise is improved because the reduced routing lowers the load capacitance; this translates into better image quality in low-signal applications, improving low-dose imaging across large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by patients. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions that optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased use of spectral, or dual-energy, CT, in which projection data at two different tube potentials are collected. One approach, a technology called fast-kVp switching, switches the tube voltage between 80 kVp and 140 kVp in a fraction of a millisecond; to reduce cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples.
In addition, this paper presents an overview of detector technologies and image-chain improvements investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners, both in routine examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation used to improve the scatter-to-primary ratio; the scintillator material properties, such as light output, afterglow, primary speed, and crosstalk, that affect spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 271
5632 Kemmer Oscillator in Cosmic String Background

Authors: N. Messai, A. Boumali

Abstract:

In this work, we aim to solve the two-dimensional Kemmer equation, including a Dirac oscillator interaction term, in the background space-time generated by a cosmic string subjected to a uniform magnetic field. The eigenfunctions and eigenvalues of our problem have been found, and the influence of the cosmic string space-time on the energy spectrum has been analyzed.

Keywords: Kemmer oscillator, cosmic string, Dirac oscillator, eigenfunctions

Procedia PDF Downloads 584
5631 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning

Authors: Abdullah Bal

Abstract:

This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), a set of non-class samples lying outside the boundary of the positive (target) class, which is formed from limited training data, has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target-class samples has been designed using the statistical properties of the feature vectors in the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines, referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a favorable family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with such kernels leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels discarded before the final decision-making process. Inspired by ensemble learning (EL), MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed to weight and then select the sub-classifiers using discriminability and diversity measures based on eigenvalue ratios, assessed from their regions in the FKT subspaces. Comparative experiments performed on various low- and high-dimensional data against state-of-the-art algorithms confirm the effectiveness of our techniques, especially under small sample size (SSS) conditions.
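The core FKT step underlying the method, whitening the summed class covariances so that the two classes share eigenvectors with complementary eigenvalues, can be sketched as follows. This is an illustrative NumPy implementation of the standard linear FKT, not the authors' code, and the synthetic data are assumptions:

```python
import numpy as np

def fkt(X1, X2):
    """Fukunaga-Koontz transform: after whitening S1 + S2, the two classes
    share eigenvectors, and their eigenvalues sum to one, so directions that
    best represent class 1 are the worst for class 2 (and vice versa)."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    # Whitening operator P such that P (S1 + S2) P^T = I.
    vals, vecs = np.linalg.eigh(S1 + S2)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T
    # Shared eigenvectors U and class-1 eigenvalues lam of the whitened S1.
    lam, U = np.linalg.eigh(P @ S1 @ P.T)
    return P, U, lam

# Synthetic two-class data with opposite dominant directions.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 3)) * [3.0, 1.0, 0.5]
X2 = rng.normal(size=(200, 3)) * [0.5, 1.0, 3.0]
P, U, lam = fkt(X1, X2)
# Complementarity check: U diagonalises the whitened S2 with eigenvalues 1 - lam.
mu = np.diag(U.T @ P @ np.cov(X2, rowvar=False) @ P.T @ U)
```

The eigenvalue ratios mentioned in the abstract are computed from exactly these complementary spectra, which is what makes them usable for scoring sub-kernels.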

Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification

Procedia PDF Downloads 21
5630 The Changes of Chemical Composition of Rice Straw Treated by a Biodecomposer Developed from Rumen Bacterial of Buffalo

Authors: A. Natsir, M. Nadir, S. Syahrir, A. Mujnisa

Abstract:

In tropical countries such as Indonesia, rice straw plays an important role in fulfilling the feed needs of ruminants, especially during the dry season, when the availability of forage is very limited. However, the main problem with using rice straw as a feedstuff is its low digestibility, due to the links between lignin and cellulose or hemicellulose and the imbalance of its mineral content. One alternative for solving this problem is the application of a biodecomposer (BS) derived from the rumen bacteria of ruminants. This study was designed to assess the effects of BS application on the changes in the chemical composition of rice straw. Four adult local buffalo raised under typical feeding conditions were used as the source of inoculum for BS development. The animals were fed for a month with a diet consisting of rice straw and elephant grass before rumen fluid samples were taken. Samples of rumen fluid were inoculated in carboxymethyl cellulose (CMC) media under anaerobic conditions for 48 hours at 37°C. The mixture of CMC media and microbes was ready to be used as a biodecomposer following incubation under anaerobic conditions for 7 days at 45°C. The effectiveness of the BS was then assessed by applying it to the straw according to a completely randomized design consisting of four treatments and three replications. One hundred g of coarsely ground rice straw was used as the substrate. The BS was applied to the rice straw substrate with the following composition: rice straw without BS (P0), rice straw + 5% BS (P1), rice straw + 10% BS (P2), and rice straw + 15% BS (P3). The mixture of rice straw and BS was then fermented under anaerobic conditions for four weeks. Following the fermentation, the chemical composition of the rice straw was evaluated. The results indicated that the crude protein content of the rice straw increased significantly (P < 0.05) as the level of BS increased.
On the other hand, the concentration of crude fiber in the rice straw decreased significantly (P < 0.05) as the level of BS increased. Other nutrients, such as minerals, did not change (P > 0.05) with the treatments. In conclusion, the application of a BS developed from the rumen bacteria of buffalo has a promising prospect as a biological agent for improving the quality of rice straw as feed for ruminants.

Keywords: biodecomposer, local buffalo, rumen microbial, chemical composition

Procedia PDF Downloads 208
5629 Phytoextraction of Heavy Metals in a Contaminated Site in Assam, India Using Indian Pennywort and Fenugreek: An Experimental Study

Authors: Chinumani Choudhury

Abstract:

Heavy metal contamination is an alarming problem that poses a serious risk to human health and the surrounding geology. Soils become contaminated with heavy metals through the unregulated industrial discharge of toxic, metal-rich effluents. Under such conditions, the remediation of contaminated sites becomes imperative for a sustainable, safe, and healthy environment. Phytoextraction, which involves the removal of heavy metals from the soil through root absorption and uptake, is a viable remediation technique that can extract toxic inorganic compounds from the soil even at low concentrations. The soil in the Silghat region of Assam, India, is mostly contaminated with zinc (Zn) and lead (Pb), at concentrations high enough to cause a serious environmental problem if proper measures are not taken. In the present study, an extensive experimental investigation was carried out to understand the effectiveness of two commonly planted species in Assam, namely i) Indian pennywort and ii) fenugreek, in removing heavy metals from contaminated soil. A basic characterization of the soil at the contaminated Silghat site was performed, and the field concentrations of Zn and Pb were recorded. Long-term laboratory pot tests were carried out by sowing the seeds of Indian pennywort and fenugreek in soil spiked with very high dosages of Zn and Pb. The tests were carried out for different concentrations of each heavy metal, and the plants' individual effectiveness in absorbing the metal was studied. The concentration in the soil was monitored regularly to assess the rate of depletion and the simultaneous uptake of the heavy metal from the soil into the plant. The amount of heavy metal taken up by each plant was also quantified by analyzing plant samples at the end of the testing period.
Finally, the study throws light on the applicability of the studied plants for the effective field remediation of the contaminated sites of Assam.

Keywords: phytoextraction, heavy-metals, Indian pennywort, fenugreek

Procedia PDF Downloads 120
5628 Numerical Investigation of Gas Leakage in RCSW-Soil Combinations

Authors: Mahmoud Y. M. Ahmed, Ahmed Konsowa, Mostafa Sami, Ayman Mosallam

Abstract:

The Fukushima nuclear accident (Japan, 2011) drew attention to the issue of gas leakage from hazardous facilities through building boundaries. Rapidly increasing investment in nuclear stations has made the ability to predict, and prevent, gas leakage a crucial issue both environmentally and economically. Leakage monitoring for underground facilities is complicated by the combination of a Reinforced Concrete Shear Wall (RCSW) and soil. In the framework of recent research conducted by the authors, the gas insulation capabilities of the RCSW-soil combination were investigated in lab-scale experimental work. Despite their accuracy, experimental investigations are expensive, time-consuming, hazardous, and inflexible. Numerically simulating gas leakage as a fluid flow problem based on a Computational Fluid Dynamics (CFD) modeling approach offers a potential alternative; this novel implementation of the CFD approach is the topic of the present paper. The paper discusses the aspects of modeling gas flow through porous media that resemble the RCSW, both isolated and combined with normal soil. A commercial CFD package is utilized to simulate this fluid flow problem. A fixed RCSW layer thickness is assumed, air is taken as the leaking gas, and the soil layer is represented as clean sand with variable properties: sand layer thickness, fines fraction, and moisture content. The CFD simulation results largely reproduce the experimental findings: a soil layer attached to a cracked reinforced concrete section plays a significant role in reducing gas leakage from that section, and this role is strongly dependent on the soil specifications.
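The full CFD model cannot be reproduced from the abstract, but the basic intuition, that a soil layer in series with a cracked RCSW section throttles the leakage, can be illustrated with a one-dimensional Darcy-flow estimate. All property values below are illustrative assumptions, not figures from the study:

```python
def series_permeability(layers):
    """Effective Darcy permeability of porous layers in series
    (thickness-weighted harmonic mean), layers = [(thickness_m, k_m2), ...]."""
    total_thickness = sum(t for t, _ in layers)
    return total_thickness / sum(t / k for t, k in layers)

def darcy_flow(k_eff, area, dp, mu, length):
    """Volumetric leakage rate Q = k A dP / (mu L) through the combination."""
    return k_eff * area * dp / (mu * length)

# Illustrative values: a cracked RCSW layer in series with a sand layer.
layers = [(0.30, 1e-14), (0.50, 1e-12)]     # (thickness m, permeability m^2)
k_eff = series_permeability(layers)         # dominated by the tighter layer
Q = darcy_flow(k_eff, area=1.0, dp=5e3, mu=1.8e-5, length=0.80)
```

Because the harmonic mean is dominated by the least permeable layer, changing the sand properties shifts the leakage rate only when the sand resistance is comparable to that of the cracked concrete, which is consistent with the soil-dependence reported above.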

Keywords: RCSW, gas leakage, Pressure Decay Method, hazardous underground facilities, CFD

Procedia PDF Downloads 418
5627 Formulation, Nutritive Value Assessment and Effect on Weight Gain of Infant Formulae Prepared from Locally Available Materials

Authors: J. T. Johnson, R. A. Atule, E. Gbodo

Abstract:

The widespread problem of infant malnutrition in developing countries has stirred research, development, and extension efforts by both local and international organizations. As a result, the formulation and development of nutritious, cost-effective weaning foods from local and readily available raw materials has become imperative in many developing countries. Thus, local and readily available raw materials were used to compound and develop a nutritious new infant formula. The materials used for this study were maize, millet, cowpea, pumpkin, fingerlings, and fish bone. The materials were dried and blended to powder. The powders were weighed in the ratio 4:4:4:3:1:1, respectively, and then mixed thoroughly. The nutritive value of the formula was analysed and compared with the NAN-2 standard; the results reveal that the formula had reasonable amounts of moisture, lipids, carbohydrate, protein, and fibre. Although NAN-2 was superior in both carbohydrate and protein, the new infant formula was higher in mineral elements, vitamins, fibre, and lipids. All the essential vitamins and both macro- and micro-minerals were found in appreciable quantities, capable of meeting the biochemical and physiological demands of the body, while the anti-nutrient composition was significantly below the FAO and WHO safe limits. Finally, the compounded infant formula was fed to one set of albino Wistar rats while another set was fed NAN-2 for a period of twenty-seven (27) days, and body weight was measured at three-day intervals. The results for body weight change were striking: the weights of the rats fed the new formula overshot or almost doubled those of the animals fed NAN-2 at each point of measurement.
The results suggest that the widespread problem of infant malnutrition in the developing world, especially among the low-income segment of society, can now be reduced, if not totally eradicated, since nutritive and cost-effective weaning formulae can be prepared locally from common, readily available materials.

Keywords: formulation, nutritive value, local, materials

Procedia PDF Downloads 378
5626 Educational Engineering Tool on Smartphone

Authors: Maya Saade, Rafic Younes, Pascal Lafon

Abstract:

This paper explores the transformative impact of smartphones on pedagogy and presents a smartphone application developed specifically for engineering problem-solving and educational purposes. The widespread availability and advanced capabilities of smartphones have revolutionized the way we interact with technology, including in education. The ubiquity of smartphones allows learners to access educational resources anytime and anywhere, promoting personalized and self-directed learning. The first part of this paper discusses the overall influence of smartphones on pedagogy, emphasizing their potential to improve learning experiences through mobile technology. In the context of engineering education, this paper focuses on the development of a dedicated smartphone application that serves as a powerful tool for both engineering problem-solving and education. The application features an intuitive and user-friendly interface, allowing engineering students and professionals to perform complex calculations and analyses on their smartphones. The smartphone application primarily focuses on beam calculations and serves as a comprehensive beam calculator tailored to engineering education. It caters to various engineering disciplines by offering interactive modules that allow students to learn key concepts through hands-on activities and simulations. With a primary emphasis on beam analysis, this application empowers users to perform calculations for statically determinate beams, statically indeterminate beams, and beam buckling phenomena. Furthermore, the app includes a comprehensive library of engineering formulas and reference materials, facilitating a deeper understanding and practical application of the fundamental principles in beam analysis. 
By offering a wide range of features specifically tailored to beam calculation, this application provides an invaluable tool for engineering students and professionals looking to enhance their understanding and proficiency in this crucial aspect of structural engineering.
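As an illustration of the kind of statically determinate beam calculation such an app performs, here is the textbook closed-form result for a simply supported beam with a central point load. This is a standard formula sketch with illustrative numbers, not the app's actual code:

```python
def simply_supported_point_load(P, L, E, I):
    """Maximum deflection and bending moment for a simply supported beam
    with a central point load P (classic statically determinate case):
    delta_max = P L^3 / (48 E I) at midspan, M_max = P L / 4."""
    delta_max = P * L**3 / (48 * E * I)   # midspan deflection (m)
    M_max = P * L / 4                     # midspan bending moment (N*m)
    return delta_max, M_max

# Illustrative steel beam: P = 10 kN, span 4 m, E = 210 GPa, I = 8.36e-6 m^4.
delta, M = simply_supported_point_load(10e3, 4.0, 210e9, 8.36e-6)
```

Statically indeterminate cases and buckling, which the app also covers, replace these closed forms with compatibility equations and the Euler critical load, respectively.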

Keywords: mobile devices in education, solving engineering problems, smartphone application, engineering education

Procedia PDF Downloads 66
5625 Anti-Obesity Activity of Garcinia xanthochymus: Biochemical Characterization and In vivo Studies in High Fat Diet-Rat Model

Authors: Mahesh M. Patil, K. A. Anu-Appaiah

Abstract:

Overweight and obesity are serious medical problems, increasing in prevalence and affecting millions worldwide. Investigators have been trying for decades to articulate the burden of obesity and its related risk factors. To address this problem, we suggest new therapeutic anti-obesity compounds from the Garcinia xanthochymus fruit. However, there is little published scientific information on non-hydroxycitric-acid Garcinia species. Our findings include a biochemical characterization of the fruit and in vivo toxicity and bio-efficacy studies of G. xanthochymus in a high-fat-diet Wistar rat model. We observed that the Garcinia pericarp is a rich source of organic acids, polyphenols, and mono- (40.63%) and poly-unsaturated fatty acids (16.45%; omega-3: 10.02%). Toxicological studies showed that Garcinia is safe, with a no-observed-adverse-effect level up to 400 mg/kg/day. Body weight and food intake were significantly (P<0.05) reduced in rats treated by oral gavage (sonicated Garcinia powder) over 13 weeks. Subcutaneous fat was significantly (P<0.05) reduced in Garcinia-treated rats. Hepatocytes significantly (P<0.05) overexpressed sterol regulatory element binding protein 2, liver X receptor-α, liver X receptor-β, lipoprotein lipase, and monoacylglycerol lipase, while fatty acid binding protein 1 and peroxisome proliferator-activated receptor-α were down-regulated, as assessed by real-time qPCR. Currently, our research is focused on adipocyte obesity-related gene expression, the effect of Garcinia on 3T3 adipocyte cell lines, and a high-fat-diet-induced mouse model. These in vivo pre-clinical data suggest that G. xanthochymus may have clinical utility for the treatment of obesity; however, further studies are required to establish its potency.

Keywords: Garcinia xanthochymus, anti-obesity, high fat diet, real time qPCR

Procedia PDF Downloads 252
5624 Sericulture a Way for Bio-Diversity Conservation, Employment Generation and Socio-Economic Change: A-Comparison of Two Tribal Block of Raigarh, India

Authors: S. K. Dewangan, K. R. Sahu, S. Soni

Abstract:

Unemployment is today a basic socio-economic problem, eroding national income and living standards and aggravating national development and poverty alleviation. Farmers are therefore encouraged to take up non-agricultural practices integrated with sericulture. Sericulture is one of the primary occupations providing a livelihood for poor people in tribal areas, and most tribal households are involved in it. Tasar and eri are the main forest-based cultivations, and sericulture is the major crop adopted by the tribals and practiced in the respective areas. Out of the 6,38,588 villages in India, sericulture is practiced in about 69,000 villages, providing employment to about 7.85 million people and a livelihood for 9,47,631 families. India continues to be the second largest producer of silk in the world. Among the four varieties of silk produced, in 2012-13, mulberry accounted for 18,715 MT, eri 3,116 MT, tasar 1,729 MT, and muga 119 MT of the total raw silk production in the country. Sericulture, with its unique features, plays an important role in upgrading the socio-economic conditions of rural people and in providing employment opportunities to educated rural youth and women. In view of the importance of the sericulture enterprise for biodiversity conservation as well as its cultural bondage, the paper discusses the significance of sericulture and the strategies to be taken for employment generation in the Indian sericulture industry. The paper explores the possible employment opportunities derived from problem analysis and the strategies to be adopted, aiming at biodiversity conservation in the study area. It highlights that sericulture is a way of conserving biodiversity and generating employment in Raigarh district, and that it acts as a tool for socio-economic change for tribals. The study concludes with some suggestions to improve the long-term feasibility of sericulture.

Keywords: bio-diversity, employment, sericulture, tribal, income, socio-economic

Procedia PDF Downloads 365
5623 Mapping Potential Soil Salinization Using Rule Based Object Oriented Image Analysis

Authors: Zermina Q., Wasif Y., Naeem S., Urooj S., Sajid R. A.

Abstract:

Land degradation, a leading environemtnal problem and a decrease in the quality of land has become a major global issue, caused by human activities. By land degradation, more than half of the world’s drylands are affected. The worldwide scope of main saline soils is approximately 955 M ha, whereas inferior salinization affected approximately 77 M ha. In irrigated areas, a total of 58% of these soils is found. As most of the vegetation types requires fertile soil for their growth and quality production, salinity causes serious problem to the production of these vegetation types and agriculture demands. This research aims to identify the salt affected areas in the selected part of Indus Delta, Sindh province, Pakistan. This particular mangroves dominating coastal belt is important to the local community for their crop growth. Object based image analysis approach has been adopted on Landsat TM imagery of year 2011 by incorporating different mathematical band ratios, thermal radiance and salinity index. Accuracy assessment of developed salinity landcover map was performed using Erdas Imagine Accuracy Assessment Utility. Rain factor was also considered before acquiring satellite imagery and conducting field survey, as wet soil can greatly affect the condition of saline soil of the area. Dry season considered best for the remote sensing based observation and monitoring of the saline soil. These areas were trained with the ground truth data w.r.t pH and electric condutivity of the soil samples. The results were obtained from the object based image analysis of Keti bunder and Kharo chan shows most of the region under low saline soil.Total salt affected soil was measured to be 46,581.7 ha in Keti Bunder, which represents 57.81 % of the total area of 80,566.49 ha. High Saline Area was about 7,944.68 ha (9.86%). Medium Saline Area was about 17,937.26 ha (22.26 %) and low Saline Area was about 20,699.77 ha (25.69%). 
Where as total salt affected soil was measured to be 52,821.87 ha in Kharo Chann, which represents 55.87 % of the total area of 94,543.54 ha. High Saline Area was about 5,486.55 ha (5.80 %). Medium Saline Area was about 13,354.72 ha (14.13 %) and low Saline Area was about 33980.61 ha (35.94 %). These results show that the area is low to medium saline in nature. Accuracy of the soil salinity map was found to be 83 % with the Kappa co-efficient of 0.77. From this research, it was evident that this area as a whole falls under the category of low to medium saline area and being close to coastal area, mangrove forest can flourish. As Mangroves are salt tolerant plant so this area is consider heaven for mangrove plantation. It would ultimately benefit both the local community and the environment. Increase in mangrove forest control the problem of soil salinity and prevent sea water to intrude more into coastal area. So deforestation of mangrove should be regularly monitored.

Keywords: indus delta, object based image analysis, soil salinity, thematic mapper

Procedia PDF Downloads 619
5622 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t=Zw with ‖w‖=1. Denoting by Z' the transpose of Z, we define herein, a latent variable by t=ZZ’q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w=Z’q is constrained to be a linear combination of the rows of Z. More importantly, t could be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u=YY’q to which we associate the variable t= XX’YY’q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY’XX’YY’ associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u=YY’q, we consider its ‘projection’ on the space generated by the variables of each block Xk (k=1, ..., K) namely, tk= XkXk'YY’q. 
Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k=1, ..., K). The solution to this problem is given by q, eigenvector of YY’XX’YY’, where X is the dataset obtained by horizontally merging datasets Xk (k=1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t= XX’YY’q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. Methods are illustrated on the basis of case studies and performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 147
5621 Implementing Equitable Learning Experiences to Increase Environmental Awareness and Science Proficiency in Alabama’s Schools and Communities

Authors: Carly Cummings, Maria Soledad Peresin

Abstract:

Alabama has a long history of racial injustice and unsatisfactory educational performance. In the 1870s Jim Crow laws segregated public schools and disproportionally allocated funding and resources to white institutions across the South. Despite the Supreme Court ruling to integrate schools following Brown vs. the Board of Education in 1954, Alabama’s school system continued to exhibit signs of segregation, compounded by “white flight” and the establishment of exclusive private schools, which still exist today. This discriminatory history has had a lasting impact of the state’s education system, reflected in modern school demographics and achievement data. It is well known that Alabama struggles with education performance, especially in science education. On average, minority groups scored the lowest in science proficiency. In Alabama, minority populations are concentrated in a region known as the Black Belt, which was once home to countless slave plantations and was the epicenter of the Civil Rights Movement. Today the Black Belt is characterized by a high density of woodlands and plays a significant role in Alabama’s leading economic industry-forest products. Given the economic importance of forestry and agriculture to the state, environmental science proficiency is essential to its stability; however, it is neglected in areas where it is needed most. To better understand the inequity of science education within Alabama, our study first investigates how geographic location, demographics and school funding relate to science achievement scores using ArcGIS and Pearson’s correlation coefficient. Additionally, our study explores the implementation of a relevant, problem-based, active learning lesson in schools. Relevant learning engages students by connecting material to their personal experiences. Problem-based active learning involves real-world problem-solving through hands-on experiences. 
Given Alabama’s significant woodland coverage, educational materials on forest products were developed with consideration of its relevance to students, especially those located in the Black Belt. Furthermore, to incorporate problem solving and active learning, the lesson centered around students using forest products to solve environmental challenges, such as water pollution- an increasing challenge within the state due to climate change. Pre and post assessment surveys were provided to teachers to measure the effectiveness of the lesson. In addition to pedagogical practices, community and mentorship programs are known to positively impact educational achievements. To this end, our work examines the results of surveys measuring educational professionals’ attitudes toward a local mentorship group within the Black Belt and its potential to address environmental and science literacy. Additionally, our study presents survey results from participants who attended an educational community event, gauging its effectiveness in increasing environmental and science proficiency. Our results demonstrate positive improvements in environmental awareness and science literacy with relevant pedagogy, mentorship, and community involvement. Implementing these practices can help provide equitable and inclusive learning environments and can better equip students with the skills and knowledge needed to bridge this historic educational gap within Alabama.

Keywords: equitable education, environmental science, environmental education, science education, racial injustice, sustainability, rural education

Procedia PDF Downloads 68
5620 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case of Study in Lima

Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano

Abstract:

In big urbanized cities of Latin America, motor vehicles have priority over non-motor vehicles and pedestrians. There is an important problem that affects people's health and quality of life; lack of inclusion towards pedestrians makes it difficult for them to move smoothly and safely since the city has been planned for the transit of motor vehicles. Faced with the new trend for sustainable and economical transport, the city is forced to develop infrastructure in order to incorporate pedestrians and users with non-motorized vehicles in the transport system. The present research aims to study the influence of non-motorized vehicles on an avenue, the optimization of a cycle using traffic lights based on simulation in Synchro software, to improve the flow of non-motor vehicles. The evaluation is of the microscopic type; for this reason, field data was collected, such as vehicular, pedestrian, and non-motor vehicle user demand. With the values of speed and travel time, it is represented in the current scenario that contains the existing problem. These data allow to create a microsimulation model in Vissim software, later to be calibrated and validated so that it has a behavior similar to reality. The results of this model are compared with the efficiency parameters of the proposed model; these parameters are the queue length, the travel speed, and mainly the travel times of the users at this intersection. The results reflect a reduction of 27% in travel time, that is, an improvement between the proposed model and the current one for this great avenue. The tail length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in the quality of life of users.

Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time

Procedia PDF Downloads 174
5619 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most of multi-slices Computed Tomography (CT) scanners are built with detectors composed of scintillator - photodiodes arrays. The photodiodes arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiode for systems of 64 slices or more. The designs based on back-illuminated photodiodes were being investigated for CT machines to overcome the challenge of the higher number of runs and connection required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the routing reduction. This is translated by a better image quality in low signal application, improving low dose imaging in large patient population. With the fast development of multi-detector-rows CT (MDCT) scanners and the increasing number of examinations, the clinical community has raised significant concerns on radiation dose received by the patient in both medical and regulatory community. In order to reduce individual exposure and in response to the recommendations of the International Commission on Radiological Protection (ICRP) which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. The added demands on the CT detector performance also comes from the increased utilization of spectral CT or dual-energy CT in which projection data of two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching in which the tube voltage is switched between 80 kVp and 140 kVp in fraction of a millisecond. To reduce the cross-contamination of signals, the scintillator based detector temporal response has to be extremely fast to minimize the residual signal from previous samples. 
In addition, this paper will present an overview of detector technologies and image chain improvement which have been investigated in the last few years to improve the signal-noise ratio and the dose efficiency CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general and in the detector technology contribute in the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio, the scintillator material properties such as light output, afterglow, primary speed, crosstalk to improve the spectral imaging, the photodiode design characteristics and the data acquisition system (DAS) to optimize for crosstalk, noise and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 194
5618 Collective Problem Solving: Tackling Obstacles and Unlocking Opportunities for Young People Not in Education, Employment, or Training

Authors: Kalimah Ibrahiim, Israa Elmousa

Abstract:

This study employed the world café method alongside semi-structured interviews within a 'conversation café' setting to engage stakeholders from the public health and primary care sectors. The objective was to collaboratively explore strategies to improve outcomes for young people not in education, employment, or training (NEET). The discussions were aimed at identifying the underlying causes of disparities faced by NEET individuals, exchanging experiences, and formulating community-driven solutions to bolster preventive efforts and shape policy initiatives. A thematic analysis of the qualitative data gathered emphasized the importance of community problem-solving through the exchange of ideas and reflective discussions. Healthcare professionals reflected on their potential roles, pinpointing a significant gap in understanding the specific needs of the NEET population and the unclear distribution of responsibilities among stakeholders. The results underscore the necessity for a unified approach in primary care and the fostering of multi-agency collaborations that focus on addressing social determinants of health. Such strategies are critical not only for the immediate improvement of health outcomes for NEET individuals but also for informing broader policy decisions that can have long-term benefits. Further research is ongoing, delving deeper into the unique challenges faced by this demographic and striving to develop more effective interventions. The study advocates for continued efforts to integrate insights from various sectors to create a more holistic and effective response to the needs of the NEET population, ensuring that future strategies are informed by a comprehensive understanding of their circumstances and challenges.

Keywords: multi-agency working, primary care, public health, social inequalities

Procedia PDF Downloads 39
5617 Intrusion Detection Techniques in NaaS in the Cloud: A Review

Authors: Rashid Mahmood

Abstract:

The network as a service (NaaS) usage has been well-known from the last few years in the many applications, like mission critical applications. In the NaaS, prevention method is not adequate as the security concerned, so the detection method should be added to the security issues in NaaS. The authentication and encryption are considered the first solution of the NaaS problem whereas now these are not sufficient as NaaS use is increasing. In this paper, we are going to present the concept of intrusion detection and then survey some of major intrusion detection techniques in NaaS and aim to compare in some important fields.

Keywords: IDS, cloud, naas, detection

Procedia PDF Downloads 320
5616 Capacity Oversizing for Infrastructure Sharing Synergies: A Game Theoretic Analysis

Authors: Robin Molinier

Abstract:

Industrial symbiosis (I.S) rely on two basic modes of cooperation between organizations that are infrastructure/service sharing and resource substitution (the use of waste materials, fatal energy and recirculated utilities for production). The former consists in the intensification of use of an asset and thus requires to compare the incremental investment cost to be incurred and the stand-alone cost faced by each potential participant to satisfy its own requirements. In order to investigate the way such a cooperation mode can be implemented we formulate a game theoretic model integrating the grassroot investment decision and the ex-post access pricing problem. In the first period two actors set cooperatively (resp. non-cooperatively) a level of common (resp. individual) infrastructure capacity oversizing to attract ex-post a potential entrant with a plug-and-play offer (available capacity, tariff). The entrant’s requirement is randomly distributed and known only after investments took place. Capacity cost exhibits sub-additive property so that there is room for profitable overcapacity setting in the first period under some conditions that we derive. The entrant willingness-to-pay for the access to the infrastructure is driven by both her standalone cost and the complement cost to be incurred in case she chooses to access an infrastructure whose the available capacity is lower than her requirement level. The expected complement cost function is thus derived, and we show that it is decreasing, convex and shaped by the entrant’s requirements distribution function. For both uniform and triangular distributions optimal capacity level is obtained in the cooperative setting and equilibrium levels are determined in the non-cooperative case. Regarding the latter, we show that competition is deterred by the first period investor with the highest requirement level. 
Using the non-cooperative game outcomes which gives lower bounds for the profit sharing problem in the cooperative one we solve the whole game and describe situations supporting sharing agreements.

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 440
5615 Remote Sensing and GIS Based Methodology for Identification of Low Crop Productivity in Gautam Buddha Nagar District

Authors: Shivangi Somvanshi

Abstract:

Poor crop productivity in salt-affected environment in the country is due to insufficient and untimely canal supply to agricultural land and inefficient field water management practices. This could further degrade due to inadequate maintenance of canal network, ongoing secondary soil salinization and waterlogging, worsening of groundwater quality. Large patches of low productivity in irrigation commands are occurring due to waterlogging and salt-affected soil, particularly in the scarcity rainfall year. Satellite remote sensing has been used for mapping of areas of low crop productivity, waterlogging and salt in irrigation commands. The spatial results obtained for these problems so far are less reliable for further use due to rapid change in soil quality parameters over the years. The existing spatial databases of canal network and flow data, groundwater quality and salt-affected soil were obtained from the central and state line departments/agencies and were integrated with GIS. Therefore, an integrated methodology based on remote sensing and GIS has been developed in ArcGIS environment on the basis of canal supply status, groundwater quality, salt-affected soils, and satellite-derived vegetation index (NDVI), salinity index (NDSI) and waterlogging index (NSWI). This methodology was tested for identification and delineation of area of low productivity in the Gautam Buddha Nagar district (Uttar Pradesh). It was found that the area affected by this problem lies mainly in Dankaur and Jewar blocks of the district. The problem area was verified with ground data and was found to be approximately 78% accurate. The methodology has potential to be used in other irrigation commands in the country to obtain reliable spatial data on low crop productivity.

Keywords: remote sensing, GIS, salt affected soil, crop productivity, Gautam Buddha Nagar

Procedia PDF Downloads 287
5614 Multimedia Container for Autonomous Car

Authors: Janusz Bobulski, Mariusz Kubanek

Abstract:

The main goal of the research is to develop a multimedia container structure containing three types of images: RGB, lidar and infrared, properly calibrated to each other. An additional goal is to develop program libraries for creating and saving this type of file and for restoring it. It will also be necessary to develop a method of data synchronization from lidar and RGB cameras as well as infrared. This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. Autonomous cars are increasingly breaking into our consciousness. No one seems to have any doubts that self-driving cars are the future of motoring. Manufacturers promise that moving the first of them to showrooms is the prospect of the next few years. Many experts believe that creating a network of communicating autonomous cars will be able to completely eliminate accidents. However, to make this possible, it is necessary to develop effective methods of detection of objects around the moving vehicle. In bad weather conditions, this task is difficult on the basis of the RGB(red, green, blue) image. Therefore, in such situations, you should be supported by information from other sources, such as lidar or infrared cameras. The problem is the different data formats that individual types of devices return. In addition to these differences, there is a problem with the synchronization of these data and the formatting of this data. The goal of the project is to develop a file structure that could be containing a different type of data. This type of file is calling a multimedia container. A multimedia container is a container that contains many data streams, which allows you to store complete multimedia material in one file. Among the data streams located in such a container should be indicated streams of images, films, sounds, subtitles, as well as additional information, i.e., metadata. 
This type of file could be used in autonomous vehicles, which would certainly facilitate data processing by the intelligent autonomous vehicle management system. As shown by preliminary studies, the use of combining RGB and InfraRed images with Lidar data allows for easier data analysis. Thanks to this application, it will be possible to display the distance to the object in a color photo. Such information can be very useful for drivers and for systems in autonomous cars.

Keywords: an autonomous car, image processing, lidar, obstacle detection

Procedia PDF Downloads 226
5613 Telogen Effluvium: A Modern Hair Loss Concern and the Interventional Strategies

Authors: Chettyparambil Lalchand Thejalakshmi, Sonal Sabu Edattukaran

Abstract:

Hair loss is one of the main issues that contemporary society is dealing with. It can be attributable to a wide range of factors, listing from one's genetic composition and the anxiety we experience on a daily basis. Telogen effluvium [TE] is a condition that causes temporary hair loss after a stressor that might shock the body and cause the hair follicles to temporarily rest, leading to hair loss. Most frequently, women are the ones who bring up these difficulties. Extreme illness or trauma, an emotional or important life event, rapid weight loss and crash dieting, a severe scalp skin problem, a new medication, or ceasing hormone therapy are examples of potential causes. Men frequently do not notice hair thinning with time, but women with long hair may be easily identified when shedding, which can occasionally result in bias because women tend to be more concerned with aesthetics and beauty standards of the society, and approach frequently with the concerns .The woman, who formerly possessed a full head of hair, is worried about the hair loss from her scalp . There are several cases of hair loss reported every day, and Telogen effluvium is said to be the most prevalent one of them all without any hereditary risk factors. While the patient has loss in hair volume, baldness is not the result of this problem . The exponentially growing Dermatology and Aesthetic medical division has discovered that this problem is the most common and also the easiest to cure since it is feasible for these people to regrow their hair, unlike those who have scarring alopecia, in which the follicle itself is damaged and non-viable. Telogen effluvium comes in two different forms: acute and chronic. Acute TE occurs in all the age groups with a hair loss of less than three months, while chronic TE is more common in those between the ages of 30 and 60 with a hair loss of more than six months . Both kinds are prevalent throughout all age groups, regardless of the predominance. 
It takes between three and six months for the lost hair to come back, although this condition is readily reversed by eliminating stresses. After shedding their hair, patients frequently describe having noticeable fringes on their forehead. The current medical treatments for this condition include topical corticosteroids, systemic corticosteroids, minoxidil and finasteride, CNDPA (caffeine, niacinamide, panthenol, dimethicone, and an acrylate polymer) .Individual terminal hair growth was increased by 10% as a result of the innovative intervention CNDPA. Botulinum Toxin A, Scalp Micro Needling, Platelet Rich Plasma Therapy [PRP], and sessions with Multivitamin Mesotherapy Injections are some recently enhanced techniques with partially or completely reversible hair loss. Also, it has been shown that supplements like Nutrafol and Biotin are producing effective outcomes. There is virtually little evidence to support the claim that applying sulfur-rich ingredients to the scalp, such as onion juice, can help TE patients' hair regenerate.

Keywords: dermatology, telogen effluvium, hair loss, modern hair loass treatments

Procedia PDF Downloads 90
5612 Methodological Issues of Teaching Vocabulary in a Technical University

Authors: Elza Salakhova

Abstract:

The purpose of this article is to consider some common difficulties encountered in teaching vocabulary in technical higher educational institutions. It deals with the problem of teaching special vocabulary in the process of teaching a foreign language. There have been analyzed some problems in teaching a foreign language to learners of a technical higher establishment. There are some recommendations for teachers to motivate their students to learn and master a foreign language through learning terminology.

Keywords: professionally-oriented study, motivation, technical university, foreign language

Procedia PDF Downloads 155
5611 Decreasing Non-Compliance with the Garbage Collection Fee Payment: A Case Study from the Intervention in a Municipality in the Slovak Republic

Authors: Anetta Caplanova, Eva Sirakovova, Estera Szakadatova

Abstract:

Non-payment of taxes and fees represents a problem, which occurs at national and local government levels in many countries. An effective tax collection is key for generating government and local government budget revenues to finance public services and infrastructure; thus, there is the need to address this problem. The standard approach considers as a solution raising taxes/fees to boost public revenues, which may be politically challenging and time-consuming to implement. An alternative approach is related to using behavioral interventions. These can be usually implemented relatively quickly, and in most cases, they are associated with low cost. In the paper, we present the results of the behavioral experiment focused on raising the level of compliance with the payment of garbage collection fees in a selected municipality in the Slovak Republic. The experiment was implemented using the leaflets sent to residential households together with the invoice for the garbage collection in the municipality Hlohovec, Western Slovakia, in Spring 2021. The sample of about 10000 households was divided into three random groups, a control group and two intervention groups. Households in intervention group 1 were sent a leaflet using the social norm nudge, while households in intervention group 2 were sent a leaflet using the deterrence nudge. The social norm framing leaflet pointed out that in the municipality, the prevailing majority of people paid the garbage collection fee and encouraged recipients to join this majority. The deterrent leaflet reminded the recipients that if they did not pay the fee on time, enforcement proceedings would follow. This was aimed to increase the subjective perception of citizens of the enforcement proceedings in case of noncompliance. In the paper, we present and discuss the results from the experiment and formulate relevant generalizations for other municipalities.

Keywords: municipal governments, garbage fee collection, behavioural intervention, social norm, deterrence nudge

Procedia PDF Downloads 191
5610 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur

Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh

Abstract:

Blast Furnace is a counter current process where burden descends from top and hot gases ascend from bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is the erratic burden descent inside furnace. Sometimes this problem is so acute that burden descent stops resulting in Hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. This situation becomes more adverse when blast furnaces are operated at low coke rate and high coal injection rate with adverse raw materials like high alumina ore and high coke ash. For last three years, H-Blast Furnace Tata Steel was able to reduce coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm which are close to world benchmarks and expand profitability. To sustain this regime, elimination of irregularities of blast furnace like hanging, channeling, and scaffolding is very essential. In this paper, sustaining of zero hanging spell for consecutive three years with low coke rate operation by improvement in burden characteristics, burden distribution, changes in slag regime, casting practices and adequate automation of the furnace operation has been illustrated. Models have been created to comprehend and upgrade the blast furnace process understanding. A model has been developed to predict the process of maintaining slag viscosity in desired range to attain proper burden permeability. A channeling prediction model has also been developed to understand channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at H-Blast Furnace of Tata Steel, Jamshedpur and thus achieving process stability for last three years.

Keywords: hanging, channelling, blast furnace, coke

Procedia PDF Downloads 195