Search results for: MSW quantity prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3197

467 Combat Plastic Entering in Kanpur City, Uttar Pradesh, India Marine Environment

Authors: Arvind Kumar

Abstract:

The city of Kanpur is located in the terrestrial plain area on the bank of the river Ganges and is the second largest city in the state of Uttar Pradesh. The city generates approximately 1400-1600 tons per day of MSW. Kanpur has been known as a major point- and non-point-source pollution hotspot for the river Ganges. The city hosts a major industrial hub, probably the largest in the state, catering to the manufacturing and recycling of plastic and other dry waste streams. There are 4 to 5 major drains flowing across the city, which receive a significant quantity of waste leakage that subsequently adds to the Ganges flow and is carried to the Bay of Bengal. A river-to-sea flow approach has been established to account for waste leaked into urban drains, leading to the build-up of marine litter. Throughout its journey, the river accumulates plastic (macro, meso, and micro) from various sources and transports it towards the sea. The Ganges network forms the second-largest plastic-polluting catchment in the world, with over 0.12 million tonnes of plastic discharged into marine ecosystems per year, and is among 14 continental rivers into which over a quarter of global waste is discarded. About 3.150 kilotons of plastic waste is generated in Kanpur, of which 10%-13% is leaked into the local drains and water flow systems. With the support of Kanpur Municipal Corporation, a 1 TPD capacity materials recovery facility (MRF) for drain waste management was established at Krishna Nagar, Kanpur, and a German startup, Plastic Fisher, was identified to provide a solution for capturing the drain waste and recycling it in a sustainable manner with a circular economy approach. The team at Plastic Fisher conducted joint surveys and identified locations on 3 drains at Kanpur using GIS maps developed during the survey. It suggested putting floating 'boom barriers' across the drains made of a low-cost material, which reduced their cost to only 2000 INR per barrier.
The project was built upon a self-sustaining financial model. It includes activities in which a cost-efficient model is developed and adopted for a socially inclusive model. The project recommended the use of low-cost floating boom barriers for capturing waste from drains, which involves a one-time cost and no operational cost. Manpower is engaged in fishing out and capturing the immobilized waste, with salaries paid by Plastic Fisher. The captured material is sun-dried and transported to a designated place, where the shed and power connection that act as the MRF are provided by the city municipal corporation. Material aggregation, baling, and transportation costs to end-users are borne by Plastic Fisher as well.

Keywords: Kanpur, marine environment, drain waste management, plastic fisher

Procedia PDF Downloads 71
466 Designing Energy Efficient Buildings for Seasonal Climates Using Machine Learning Techniques

Authors: Kishor T. Zingre, Seshadhri Srinivasan

Abstract:

Energy consumption by the building sector is increasing at an alarming rate throughout the world and is leading to more building-related CO₂ emissions into the environment. In buildings, the main contributors to energy consumption are heating, ventilation, and air-conditioning (HVAC) systems, lighting, and electrical appliances. It is hypothesised that energy efficiency in buildings can be achieved by implementing sustainable technologies such as i) enhancing the thermal resistance of fabric materials to reduce heat gain (in hotter climates) and heat loss (in colder climates), ii) enhancing daylighting and lighting systems, iii) improving HVAC systems, and iv) occupant localization. The energy performance of the various sustainable technologies is highly dependent on climatic conditions. This paper investigates the use of machine learning techniques for accurate prediction of air-conditioning energy in seasonal climates. The data required to train the machine learning techniques are obtained from computational simulations performed on a 3-story commercial building using the EnergyPlus program plugged in with OpenStudio and Google SketchUp. The EnergyPlus model was calibrated against experimental measurements of surface temperatures and heat flux prior to being employed for the simulations. It has been observed from the simulations that the performance of sustainable fabric materials (for walls, roofs, and windows) such as phase change materials, insulation, cool roofs, etc. varies with climate conditions. Various renewable technologies were also applied to the building flat roofs in various climates to investigate the potential for electricity generation. It has been observed that the proposed technique overcomes the shortcomings of existing approaches, such as local linearization or over-simplifying assumptions. In addition, the proposed method can be used for real-time estimation of building air-conditioning energy.
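
The regression step behind such a simulation-trained predictor can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the features (fabric U-value, window-to-wall ratio, mean outdoor temperature), their coefficients, and the targets are invented stand-ins for the EnergyPlus simulation outputs, and a linear ridge fit stands in for the paper's machine learning techniques, which richer models (e.g., neural networks) would replace at the fitting step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the EnergyPlus training set: each row is one
# simulation case; values and coefficients are illustrative only.
n = 200
U = rng.uniform(0.2, 2.0, n)      # fabric U-value, W/m^2K
wwr = rng.uniform(0.1, 0.6, n)    # window-to-wall ratio
tout = rng.uniform(24, 36, n)     # mean outdoor temperature, deg C
energy = 40 * U + 120 * wwr + 6 * (tout - 24) + rng.normal(0, 3, n)  # kWh/day

# Ridge regression: a simple baseline that maps simulation features to
# air-conditioning energy; swapping in a nonlinear model at this step is
# what avoids the local-linearization shortcoming the paper criticizes.
X = np.column_stack([U, wwr, tout, np.ones(n)])
lam = 1e-3
coef = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ energy)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - energy) ** 2))
print(f"in-sample RMSE ≈ {rmse:.2f} kWh/day")
```

With enough simulated cases, the fit recovers the underlying sensitivities (here, roughly 40 kWh/day per unit U-value), which is what makes real-time energy estimation from measured inputs feasible.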

Keywords: building energy efficiency, energyplus, machine learning techniques, seasonal climates

Procedia PDF Downloads 114
465 Shear Strength and Consolidation Behavior of Clayey Soil with Vertical and Radial Drainage

Authors: R. Pillai Aparna, S. R. Gandhi

Abstract:

Soft clay deposits having low strength and high compressibility are found all over the world. Preloading with vertical drains is a widely used method for improving such soils. The coefficient of consolidation, irrespective of the drainage type, plays an important role in the design of vertical drains and controls accurate prediction of the rate of consolidation of the soil. The increase in shear strength of soil with consolidation is another important factor considered in preloading or staged construction. To the best of our knowledge, no clear guidelines are available for estimating the increase in shear strength for a particular degree of consolidation (U) at various stages during construction. Various methods are available for determining the consolidation coefficient. This study mainly focuses on the variation of the consolidation coefficient, found using different methods, and of shear strength with pressure intensity. The variation of shear strength with the degree of consolidation was also studied. The consolidation test was performed on two types of highly compressible clays with vertical, radial, and, in a few cases, combined drainage. The test was carried out at different pressure intensities and, for each pressure intensity, once the target degree of consolidation was achieved, a vane shear test was done at different locations in the sample in order to determine the shear strength. The shear strength of clayey soils under the application of vertical stress with vertical and radial drainage was studied at target U values of 70% and 90%. It was found that there is not much variation in the cv or cr value beyond a pressure intensity of 80 kPa. Correlations were developed between the shear strength ratio and consolidation pressure based on laboratory testing under controlled conditions. It was observed that the shear strength of a sample with a target U value of 90% is about 1.4 to 2 times that of a 70% consolidated sample.
Settlement analysis was done using Asaoka's and the hyperbolic methods. The variation of strength with the depth of the sample was also studied using a large-scale consolidation test. Based on the present study, it was found that the gain in strength is greater in the top half of the clay layer, and that the shear strength of samples with radial drainage is slightly higher than that of samples with vertical drainage.
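
The Asaoka settlement extrapolation used above can be sketched numerically. This is a generic illustration with synthetic readings, not the study's data: successive settlements observed at equal time intervals are regressed as s_i = b0 + b1*s_{i-1}, and the ultimate settlement is the fixed point b0/(1 - b1).

```python
import numpy as np

def asaoka_ultimate_settlement(s):
    """Asaoka's method: fit s_i = b0 + b1 * s_{i-1} for settlement
    readings taken at equal time intervals; the ultimate settlement
    is the fixed point b0 / (1 - b1)."""
    s = np.asarray(s, dtype=float)
    s_prev, s_curr = s[:-1], s[1:]
    b1, b0 = np.polyfit(s_prev, s_curr, 1)  # slope, intercept
    return b0 / (1.0 - b1)

# Synthetic readings approaching an ultimate settlement of 100 mm
t = np.arange(0, 10)
s = 100.0 * (1 - np.exp(-0.3 * t))
print(round(asaoka_ultimate_settlement(s), 1))  # → 100.0
```

Because consolidation settlement decays roughly exponentially, the successive readings fall on a straight line in the (s_{i-1}, s_i) plot, and the extrapolated fixed point recovers the final settlement before it is reached in the field.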

Keywords: consolidation coefficient, degree of consolidation, PVDs, shear strength

Procedia PDF Downloads 239
464 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model contributes to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, with 80% used for training and 20% for testing. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, a grid search (used to tune and select optimum parameters) with 5-fold cross-validation is applied to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared models. Of the two, LASSO regression achieved the better predictive performance, acquiring PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the LASSO-trained model is the most acceptable and offers higher estimation performance than existing models in the literature.
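
The normalize/split/tune/evaluate pipeline described above can be sketched with scikit-learn. The dataset below is a random stand-in (the 21-project story-point data are not reproduced here), so the metric values printed are illustrative, not the paper's results:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
# Hypothetical stand-in for the story-point dataset: 5 project
# features and a positive effort target.
X = rng.random((120, 5))
y = 10 + X @ np.array([3.0, 1.5, 0.0, 0.0, 2.0]) + rng.normal(0, 0.1, 120)

# Preprocessing: normalize, then an 80/20 train-test split.
X = MinMaxScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Phase 1 uses default parameters; phase 2 tunes via grid search with
# 5-fold cross-validation, as in the proposed model.
for name, model, grid in [
    ("LASSO", Lasso(), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    ("ElasticNet", ElasticNet(), {"alpha": [0.001, 0.01, 0.1],
                                  "l1_ratio": [0.2, 0.5, 0.8]}),
]:
    default_score = model.fit(X_tr, y_tr).score(X_te, y_te)  # phase 1
    tuned = GridSearchCV(model, grid, cv=5).fit(X_tr, y_tr)  # phase 2
    pred = tuned.predict(X_te)
    mmre = np.mean(np.abs(pred - y_te) / np.abs(y_te))  # MMRE metric
    print(f"{name}: default R2={default_score:.3f}, tuned MMRE={mmre:.4f}")
```

MMRE (mean magnitude of relative error) is one of the metrics reported in the abstract; the PRED(x) metrics would simply count the fraction of predictions whose relative error is below x.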

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 71
463 Study and Fine Characterization of the SS 316L Microstructures Obtained by Laser Beam Melting Process

Authors: Sebastien Relave, Christophe Desrayaud, Aurelien Vilani, Alexey Sova

Abstract:

Laser beam melting (LBM) is an additive manufacturing process that enables complex 3D parts to be designed. This process is now commonly employed for applications, such as chemistry or energy, that require the use of stainless steel grades. LBM can offer mechanical properties comparable, and sometimes superior, to those of wrought materials. However, we observed an anisotropic microstructure resulting from the process, caused by the very high thermal gradients along the building axis. This microstructure can be harmful depending on the application. For this reason, control and prediction of the microstructure are important to ensure the improvement and reproducibility of the mechanical properties. This study focuses on the 316L SS grade and aims at understanding the solidification and transformation mechanisms during the process. Experiments were performed to analyse the nucleation and growth of the microstructure obtained by the LBM process under several conditions. The samples were designed on two different types of support, bulk and lattice, and produced on a ProX DMP 200 LBM device. For the two conditions, the analysis of the microstructures by SEM and EBSD revealed a single-phase austenite with preferential crystallite growth along the (100) plane. The microstructure presented a hierarchical structure consisting of columnar grains with sizes in the range of 20-100 µm and a sub-grain structure of size 0.5 µm. These sub-grains were found in different shapes (columnar and cellular). This difference can be explained by a variation of the thermal gradient and cooling rate or by element segregation, although no sign of element segregation was found at the sub-grain boundaries. A high dislocation concentration was observed at the sub-grain boundaries. These sub-grains are separated by very-low-misorientation walls (< 2°), which causes lattice curvature inside the large grains.
A discussion is proposed on the formation of these microstructures with regard to the LBM process conditions.

Keywords: selective laser melting, stainless steel, microstructure

Procedia PDF Downloads 157
462 The Interplay between Autophagy and Macrophages' Polarization in Wound Healing: A Genetic Regulatory Network Analysis

Authors: Mayada Mazher, Ahmed Moustafa, Ahmed Abdellatif

Abstract:

Background: Autophagy is a eukaryotic, highly conserved catabolic process implicated in many pathophysiologies, such as wound healing. Autophagy-associated genes serve as a scaffolding platform for signal transduction of macrophage polarization during the inflammatory phase of wound healing and the tissue repair process. In the current study, we report a model for the interplay between autophagy-associated genes and macrophage polarization-associated genes. Methods: In silico analysis was performed on 249 autophagy-related genes retrieved from the public autophagy database and on gene expression data retrieved from Gene Expression Omnibus (GEO): the GSE81922 and GSE69607 microarray data, yielding 199 differentially expressed genes (DEGs) for macrophage polarization. An integrated protein-protein interaction network was constructed for the autophagy and macrophage gene sets. The gene sets were then used for GO term pathway enrichment analysis. Common transcription factors for autophagy and macrophage polarization were identified. Finally, microRNAs enriched in both autophagy and macrophages were predicted. Results: In silico prediction of common transcription factors in the DEG macrophage and autophagy gene sets revealed a new role for the transcription factors HOMEZ, GABPA, ELK1 and REL, which commonly regulate the macrophage-associated genes IL6, IL1M, IL1B, NOS1 and SOC3 and the autophagy-related genes Atg12, Rictor, Rb1cc1, Gaparab1 and Atg16l1. Conclusions: Autophagy and macrophage polarization are interdependent cellular processes, and both autophagy-related proteins and macrophage polarization-related proteins coordinate in tissue remodelling via a transcription factor and microRNA regulatory network. The current work highlights a potential new role for the transcription factors HOMEZ, GABPA, ELK1 and REL in wound healing.
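
Conceptually, the "common transcription factors" step reduces to intersecting regulator target sets across the two gene sets. A toy sketch, with placeholder TF-target assignments that are not the study's actual regulatory annotations:

```python
# Hypothetical TF-to-target assignments (placeholders for illustration).
tf_targets = {
    "ELK1":  {"IL6", "NOS1", "Atg12", "Rictor"},
    "GABPA": {"IL1B", "Atg16l1", "SOC3"},
    "HOMEZ": {"IL6", "Rb1cc1"},
}
autophagy = {"Atg12", "Rictor", "Rb1cc1", "Gaparab1", "Atg16l1"}
macrophage = {"IL6", "IL1M", "IL1B", "NOS1", "SOC3"}

# A TF is "common" if it regulates at least one gene in each process.
common = [tf for tf, targets in tf_targets.items()
          if targets & autophagy and targets & macrophage]
print(sorted(common))  # TFs regulating genes in both processes
```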

Keywords: autophagy related proteins, integrated network analysis, macrophages polarization M1 and M2, tissue remodelling

Procedia PDF Downloads 152
461 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveys have shown inconsistencies in distress classification and measurement, which can potentially impact the planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify and quantify pavement distresses through processes that require no or very minimal human intervention, principally through the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment.
Variations in types and severity of distresses, different pavement surface textures and colors and presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves collection, interpretation, and processing of the surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and to predict pavement performance. This can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
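
Once an automated survey has produced a binary crack map, a crack density index can be computed directly from it. A minimal sketch follows; the area-fraction definition used here is one plausible choice (agencies may instead define density per unit crack length or per wheel path), and the toy data are illustrative:

```python
import numpy as np

def crack_density(crack_mask, pixel_size_m):
    """Crack density as the cracked fraction of the surveyed surface.

    crack_mask   : 2-D boolean array, True where a crack was detected
    pixel_size_m : side length of one pixel in metres
    """
    crack_area = crack_mask.sum() * pixel_size_m ** 2   # m^2 of cracks
    total_area = crack_mask.size * pixel_size_m ** 2    # m^2 surveyed
    return crack_area / total_area                      # dimensionless index

# Toy 100 x 100 pixel patch with one straight 100-pixel crack
mask = np.zeros((100, 100), dtype=bool)
mask[50, :] = True
print(crack_density(mask, pixel_size_m=0.005))  # ≈ 0.01 (1% cracked)
```

Weighting the mask by crack severity class before summing would extend the same index to reflect severity as well as extent.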

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 185
460 Assessment of Predictive Confounders for the Prevalence of Breast Cancer among Iraqi Population: A Retrospective Study from Baghdad, Iraq

Authors: Nadia H. Mohammed, Anmar Al-Taie, Fadia H. Al-Sultany

Abstract:

Although breast cancer prevalence continues to increase, mortality has been decreasing as a result of early detection and improvements in adjuvant systemic therapy. Nevertheless, this disease requires further efforts to understand and identify the associated potential risk factors that could play a role in the prevalence of this malignancy among Iraqi women. The objective of this study was to assess the impact of certain predictive risk factors on the prevalence of breast cancer types among a sample of Iraqi women diagnosed with breast cancer. This was a retrospective observational study carried out at the National Cancer Research Center, College of Medicine, Baghdad University, from November 2017 to January 2018. Data from 100 patients with breast cancer whose biopsies were examined at the National Cancer Research Center were included in this study. Data were collected to structure a detailed assessment of the patients' demographic, medical and cancer records. The majority of study participants (94%) suffered from ductal breast cancer, with a mean age of 49.57 years. Among these women, 48.9% were obese, with a body mass index (BMI) of 35 kg/m². 68.1% of them had a positive family history of breast cancer, and 66% had low parity. 40.4% had stage II ductal breast cancer, followed by 25.5% with stage III. It was found that 59.6% and 68.1% had positive oestrogen receptor sensitivity and positive human epidermal growth factor (HER2/neu) receptor sensitivity, respectively. Regarding the predictive impact of certain variables on the incidence of ductal breast cancer, a positive family history of breast cancer (P < 0.0001), low parity (P < 0.0001), stage I and II breast cancer (P = 0.02) and positive HER2/neu status (P < 0.0001) were significant predictive factors among the study participants. The results of this study provide relevant evidence for a significant positive association between certain risk factors and the prevalence of breast cancer among Iraqi women.

Keywords: ductal breast cancer, hormone sensitivity, Iraq, risk factors

Procedia PDF Downloads 128
459 Extraction of Rice Bran Protein Using Enzymes and Polysaccharide Precipitation

Authors: Sudarat Jiamyangyuen, Tipawan Thongsook, Riantong Singanusong, Chanida Saengtubtim

Abstract:

Rice is a staple food as well as an exported commodity of Thailand. Rice bran, a 10.5% constituent of the rice grain, is a by-product of the rice milling process. Rice bran is normally used as a raw material for rice bran oil production or sold as feed at a low price. Therefore, this study aimed to increase the value of the defatted rice bran obtained after the extraction of rice bran oil. Conventionally, the protein in defatted rice bran is extracted using alkaline extraction and acid precipitation, which results in the reduction of nutritious components in the bran. Rice bran protein concentrate is suitable for those who are allergic to proteins from other sources, e.g., milk or wheat. In addition to its hypoallergenic property, rice bran protein also contains a good quantity of lysine. Thus it may act as a suitable ingredient for infant food formulations while adding variety to the restricted diets of children with food allergies. The objectives of this study were to compare the properties of rice bran protein concentrate (RBPC) extracted from defatted rice bran using enzymes, together with a precipitation step using polysaccharides (alginate and carrageenan), to those of a control sample extracted using the conventional method. The results showed that extraction of protein from rice bran using enzymes exhibited higher protein recovery than extraction with alkali. Extraction using alcalase at 2% (v/w), 50 °C and pH 9.5 gave the highest protein (2.44%) and yield (32.09%) in the extracted solution compared to the other enzymes. Rice bran protein concentrate powder prepared with a precipitation step using alginate (protein in solution:alginate of 1:0.006) exhibited the highest protein (27.55%) and yield (6.62%); precipitation using alginate was better than acid precipitation. RBPC extracted with alkali (ALK) or the enzyme alcalase (ALC) and then precipitated with alginate (AL) (samples RBP-ALK-AL and RBP-ALC-AL) yielded precipitation rates of 75% and 91.30%, respectively.
Protein precipitation using alginate was therefore selected. The amino acid profiles of the control sample and the sample precipitated with alginate, compared to casein and soy protein isolate, showed that the control sample had the highest content among all samples. A functional property study of the RBPC showed that the highest nitrogen solubility occurred at pH 8-10. There was no statistically significant difference between the emulsion capacity and emulsion stability of the control and the sample precipitated with alginate. However, the control sample showed higher foaming capacity and lower foam stability compared to the sample precipitated with alginate. The findings were successful in terms of minimizing the chemicals used in the extraction and precipitation steps of preparing rice bran protein concentrate. This research involves the production of a value-added product in which double the protein content (28%) of the original rice bran (14%) could be beneficial as an addition to food products, e.g., a healthy drink high in protein and fiber. In addition, basic knowledge of the functional properties of rice bran protein concentrate was obtained, which can be used to appropriately select applications for this value-added product from rice bran.

Keywords: alginate, carrageenan, rice bran, rice bran protein

Procedia PDF Downloads 295
458 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling

Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari

Abstract:

A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t could be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear to the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' on the space generated by the variables of each block Xk (k = 1, ..., K), namely tk = XkXk'YY'q.
Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with tk (k = 1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on the basis of case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.
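
The first Rd-PLS component defined above can be computed in a few lines of linear algebra. A sketch with synthetic centred data, following the abstract's definitions (q is the leading eigenvector of YY'XX'YY', u = YY'q, t = XX'YY'q):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 30, 6, 4
X = rng.normal(size=(n, p)); X -= X.mean(axis=0)  # centred predictors
Y = rng.normal(size=(n, m)); Y -= Y.mean(axis=0)  # centred responses

# q: unit-norm leading eigenvector of the symmetric PSD matrix YY'XX'YY'
M = Y @ Y.T @ X @ X.T @ Y @ Y.T
eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
q = eigvecs[:, -1]

u = Y @ (Y.T @ q)                 # latent variable for Y
t = X @ (X.T @ (Y @ (Y.T @ q)))   # latent variable for X

# The loading vector w = X'YY'q makes t a linear combination t = Xw,
# as stated in the abstract.
w = X.T @ (Y @ (Y.T @ q))
assert np.allclose(t, X @ w)
print(float(t @ u))  # the covariance (up to 1/n) maximised by this q
```

Since t'u = q'Mq, the unit vector maximising the covariance is exactly the top eigenvector of M; higher-order components would deflate X and Y with respect to t and repeat.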

Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis

Procedia PDF Downloads 147
457 Exploring the Interplay of Attention, Awareness, and Control: A Comprehensive Investigation

Authors: Venkateswar Pujari

Abstract:

This study investigates the complex interplay between control, awareness, and attention in human cognitive processes. Attention, awareness, and control are fundamental elements of cognitive functioning that play a significant role in shaping perception, decision-making, and behavior. Understanding how they interact can clarify how our minds work and may advance cognitive science and its therapeutic applications. The study uses an empirical methodology to examine the relationships between attention, awareness, and control by integrating different experimental paradigms and neuropsychological tests. To ensure the generalizability of the findings, a wide sample of participants is chosen, including people with various cognitive profiles and ages. The study is structured into four primary parts, each of which focuses on one component of how attention, awareness, and control interact: 1. Evaluation of attentional capacity and selectivity: in this stage, participants complete established attention tests, including the Stroop task and visual search tasks. 2. Evaluation of awareness degrees: in the second stage, participants' degrees of conscious and unconscious awareness are assessed using perceptual awareness tasks such as masked priming and binocular rivalry tasks. 3. Investigation of cognitive control mechanisms: in the third phase, reaction inhibition, cognitive flexibility, and working memory capacity are investigated using exercises such as the Wisconsin Card Sorting Test and the Go/No-Go paradigm. 4. Results integration and analysis: data from all phases are integrated and analyzed in the final phase. To investigate potential links and predictive correlations between attention, awareness, and control, correlational and regression analyses are carried out. The study's conclusions shed light on the intricate relationships that exist between control, awareness, and attention throughout cognitive function.
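
The phase-4 correlational analysis can be sketched as follows; the participant scores below are simulated placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80  # hypothetical number of participants

# Simulated composite scores for the three constructs (e.g., a Stroop
# score, a masked-priming awareness score, a Go/No-Go control score).
attention = rng.normal(0, 1, n)
awareness = 0.5 * attention + rng.normal(0, 1, n)
control = 0.4 * attention + 0.3 * awareness + rng.normal(0, 1, n)

# Pairwise Pearson correlations between the three constructs.
scores = np.vstack([attention, awareness, control])
r = np.corrcoef(scores)
print(np.round(r, 2))
```

Regression analyses would extend this by predicting, say, the control score from the attention and awareness scores jointly.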
The findings may have consequences for cognitive psychology, neuroscience, and clinical psychology by providing new understandings of cognitive dysfunctions linked to deficiencies in attention, awareness, and control systems.

Keywords: attention, awareness, control, cognitive functioning, neuropsychological assessment

Procedia PDF Downloads 91
456 Comparison and Improvement of the Existing Cone Penetration Test Results: Shear Wave Velocity Correlations for Hungarian Soils

Authors: Ákos Wolf, Richard P. Ray

Abstract:

Due to the introduction of Eurocode 8, structural design for seismic and dynamic effects has become more significant in Hungary. This has emphasized the need for more effort to describe the behavior of structures under these conditions. Soil conditions have a significant effect on the response of structures by modifying the stiffness and damping of the soil-structural system and by modifying the seismic action as it reaches the ground surface. Shear modulus (G) and shear wave velocity (vs), which are often measured in the field, are the fundamental dynamic soil properties for foundation vibration problems, liquefaction potential and earthquake site response analysis. There are several laboratory and in-situ measurement techniques for evaluating dynamic soil properties, but unfortunately, they are often too expensive for general design practice. However, a significant number of correlations have been proposed to determine shear wave velocity or shear modulus from Cone Penetration Tests (CPT), which are used more and more in geotechnical design practice in Hungary. This allows the designer to analyze and compare CPT and seismic test results in order to select the best correlation equations for Hungarian soils and to improve the recommendations for Hungarian geologic conditions. Based on a literature review, as well as research experience in Hungary, the influence of various parameters on the accuracy of the results will be shown. This study can serve as a basis for selecting and modifying correlation equations for Hungarian soils. Test data are taken from seven locations in Hungary with similar geologic conditions. The shear wave velocity values were measured by seismic CPT. Several factors are analyzed, including soil type, behavior index, measurement depth, geologic age, etc., for their effect on the accuracy of predictions. The final results show an improved prediction method for Hungarian soils.
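
Fitting and checking a CPT-to-vs correlation of the commonly used power-law form vs = a·qc^b can be sketched as below. The data points and the resulting coefficients are purely illustrative, not the Hungarian measurements or the study's recommended equation:

```python
import numpy as np

# Hypothetical paired CPT / seismic-CPT calibration data.
qc = np.array([1.2, 2.5, 4.0, 6.3, 9.1, 14.0])        # cone resistance, MPa
vs = np.array([145., 178., 205., 232., 260., 295.])   # measured vs, m/s

# Fit vs = a * qc^b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(qc), np.log(vs), 1)
a = np.exp(log_a)

vs_pred = a * qc ** b
rel_err = np.abs(vs_pred - vs) / vs  # per-point relative error
print(f"vs ≈ {a:.1f} * qc^{b:.3f}, max rel. error {rel_err.max():.1%}")
```

In practice, additional predictors such as sleeve friction, depth, behavior index and geologic age would enter as further terms, and the relative-error statistic is how competing published correlations can be ranked against the seismic CPT measurements.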

Keywords: CPT correlation, dynamic soil properties, seismic CPT, shear wave velocity

Procedia PDF Downloads 246
455 Experimental Pain Study Investigating the Distinction between Pain and Relief Reports

Authors: Abeer F. Almarzouki, Christopher A. Brown, Richard J. Brown, Anthony K. P. Jones

Abstract:

Although relief is commonly assumed to be a direct reflection of pain reduction, it appears to be driven by complex emotional interactions in which pain reduction is only one component. For example, the termination of a painful/aversive event may be both relieving and rewarding. Accordingly, this study investigated whether terminating an aversive negative prediction of pain would be reflected in a greater experience of relief, with a view to separating the effects of the manipulation on pain and relief. We used an aversive conditioning paradigm to investigate the perception of relief in an aversive (threat) vs. a positive context. Participants received positive predictors of a non-painful outcome, presented within either a congruent positive (non-painful) context or an incongruent threat (painful) context that had been previously conditioned; trials were followed by identical laser stimuli in both conditions. Participants were asked to rate the perceived intensity of pain as well as their perception of relief in response to the cue predicting the outcome. The results demonstrated that participants reported more pain in the aversive context than in the positive context. Conversely, participants reported more relief in the aversive context than in the positive context. The rating of relief in the threat context was not correlated with the pain reports, suggesting that relief is not dependent on pain intensity. Consistent with this, relief in the threat context was greater than that in the positive expectancy condition, while the opposite pattern was obtained for the pain ratings. The value of relief in this study is better appreciated in the context of an impending negative threat, which is apparent in the higher pain ratings in the negative expectancy condition compared to the positive expectancy condition. Moreover, the more threatening the context (as manifested by higher unpleasantness and higher state anxiety scores), the more the relief is appreciated.
The study highlights the importance of monitoring relief and pain intensity separately when evaluating pain-related suffering. The results also illustrate that the perception of painful input may be largely shaped by context and is not necessarily stimulus-related.

Keywords: aversive context, pain, predictions, relief

Procedia PDF Downloads 139
454 Grid and Market Integration of Large Scale Wind Farms using Advanced Predictive Data Mining Techniques

Authors: Umit Cali

Abstract:

The integration of intermittent energy sources such as wind farms into the electricity grid has become an important challenge for the operation and control of electric power systems because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability and safety issues increase the importance of predicting power output for wind farm operators. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. Wind forecasts are relatively precise only for a horizon of a few hours and are therefore most relevant to the spot and intraday markets. In this work, predictive data mining techniques are applied to identify statistical and neural network models, or sets of models, that can be used to predict the power output of large onshore and offshore wind farms. These advanced data analytic methods help to condense very large meteorological, oceanographic and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts benefit wind plant operators, utility operators, and utility customers: an accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electrical customers. This study also gives in-depth consideration to issues such as the comparison of day-ahead and short-term wind power forecasting results, determination of the accuracy of the wind power prediction, and the evaluation of the energy-economic and technical benefits of wind power forecasting.
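As a minimal illustration of the statistical side of such forecasting (not the authors' actual models), a linear autoregressive baseline can be fitted by least squares; the synthetic power series, lag count and units below are all assumptions:

```python
import numpy as np

# Sketch: a linear autoregressive (AR) baseline for short-term wind power
# forecasting. The synthetic series stands in for normalised SCADA power
# output; a production system would add meteorological (NWP) features.
rng = np.random.default_rng(0)
t = np.arange(500)
power = 0.5 + 0.3 * np.sin(2 * np.pi * t / 48) + 0.05 * rng.standard_normal(500)

lags = 3
X = np.column_stack([power[i:len(power) - lags + i] for i in range(lags)])
y = power[lags:]
X = np.column_stack([np.ones(len(X)), X])      # intercept term
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares AR fit

pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"AR({lags}) in-sample RMSE: {rmse:.4f}")
```

Neural network models of the kind the abstract describes would replace the linear map above with a nonlinear one, trained on the same lagged inputs plus weather features.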

Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids

Procedia PDF Downloads 518
453 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks

Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee

Abstract:

Anomalies such as leakages and bursts in water, hydraulic or petrochemical pipeline networks have significant economic and environmental implications. To ensure pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become a powerful tool for monitoring critical water, oil and gas pipeline infrastructure. The loss of water, oil and gas is strongly linked to financial costs and environmental problems, and its avoidance saves economic resources: substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks, including, among others, acoustic sensors, flow and pressure measurements, and statistical analysis of abrupt changes. The leak quantification problem is to estimate, given some observations about a network, the size and location of one or more leaks in it. In detecting background leakage, however, there is greater uncertainty in these methodologies, since their output is less reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. Pressure data were collected with acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to locate a leakage introduced at a predetermined point between two consecutive nodes, which caused a substantial pressure difference in the pipeline network.
After de-noising the signals from the sensors at the nodes, we successfully recovered the exact point at which the local leakage had been introduced, using the correlation difference model we developed.
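The time-difference-of-arrival idea behind correlating signals from two neighbouring nodes can be sketched as follows; the sampling rate, wave speed, sensor spacing and synthetic leak signal are all assumed values, not the paper's data:

```python
import numpy as np

# Sketch: locating a leak between two acoustic sensor nodes from the peak of
# their cross-correlation. All parameter values here are assumptions.
fs = 10_000    # Hz, sampling rate
c = 1200.0     # m/s, acoustic wave speed in the pipe
L = 120.0      # m, distance between the two sensor nodes

rng = np.random.default_rng(1)
leak = rng.standard_normal(3000)       # broadband noise emitted by the leak
d1 = 40.0                              # true leak position, 40 m from node 1
lag1 = round(d1 / c * fs)              # arrival delay at node 1 (samples)
lag2 = round((L - d1) / c * fs)        # arrival delay at node 2 (samples)

n = 4000
s1 = np.zeros(n); s1[lag1:lag1 + len(leak)] = leak
s2 = np.zeros(n); s2[lag2:lag2 + len(leak)] = leak

xcorr = np.correlate(s1, s2, mode="full")
tdoa = (np.argmax(xcorr) - (n - 1)) / fs    # time difference of arrival (s)
x_leak = (L + c * tdoa) / 2                 # estimated distance from node 1
print(f"estimated leak position: {x_leak:.1f} m from node 1")
```

The peak lag gives (d1 - d2)/c, which combined with d1 + d2 = L yields the leak position; de-noising before correlating, as in the paper, sharpens that peak.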

Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)

Procedia PDF Downloads 109
452 Numerical Investigation of Dynamic Stall over a Wind Turbine Pitching Airfoil by Using OpenFOAM

Authors: Mahbod Seyednia, Shidvash Vakilipour, Mehran Masdari

Abstract:

Computations of two-dimensional flow past stationary and harmonically pitching wind turbine airfoils at a moderate Reynolds number (400,000) are carried out by progressively increasing the angle of attack for the stationary airfoil and at fixed pitching frequencies for the oscillating one. The incompressible Navier-Stokes equations, in conjunction with the Unsteady Reynolds-Averaged Navier-Stokes (URANS) approach for turbulence modeling, are solved with the OpenFOAM package to investigate the aerodynamic phenomena occurring under stationary and pitching conditions on a NACA 6-series wind turbine airfoil. The aim of this study is to enhance the accuracy of numerical simulation in predicting the aerodynamic behavior of an oscillating airfoil in OpenFOAM. Hence, for turbulence modeling, the k-ω SST model with low-Reynolds correction is employed to capture the unsteady phenomena in both stationary and oscillating motion of the airfoil. Using aerodynamic and pressure coefficients along with flow patterns, the unsteady aerodynamics in the pre-, near-, and post-static-stall regions are analyzed for the harmonically pitching airfoil, and the results are validated against the corresponding experimental data possessed by the authors. The results indicate that the chosen turbulence model leads to accurate prediction of the static-stall angle for the stationary airfoil and of flow separation, the dynamic stall phenomenon, and flow reattachment on the airfoil surface for the pitching one. Due to the geometry of the studied 6-series airfoil, the vortex on the upper surface of the airfoil during upstrokes forms at the trailing edge. The flow patterns obtained by our numerical simulations therefore represent the formation and evolution of the trailing-edge vortex in the near- and post-stall regions, a process that governs the dynamic stall phenomenon.
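For reference, the pitching kinematics and reduced frequency used in such simulations take a standard form; the numerical values below (mean angle, amplitude, frequency, chord, free-stream speed) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Sketch of harmonic pitching kinematics: alpha(t) = a_mean + a_amp*sin(2*pi*f*t).
# Parameter values are assumed; U_inf and chord are chosen so that
# Re = U_inf * chord / nu is roughly 4e5 for air (nu ~ 1.5e-5 m^2/s).
alpha_mean = 10.0   # deg, mean angle of attack
alpha_amp = 5.0     # deg, pitch amplitude
f = 2.0             # Hz, pitching frequency
chord = 0.25        # m
U_inf = 24.0        # m/s

k = np.pi * f * chord / U_inf          # reduced frequency k = pi*f*c/U
t = np.linspace(0.0, 1.0 / f, 201)     # one pitching cycle
alpha = alpha_mean + alpha_amp * np.sin(2 * np.pi * f * t)
print(f"reduced frequency k = {k:.3f}, "
      f"alpha in [{alpha.min():.1f}, {alpha.max():.1f}] deg")
```

The reduced frequency k determines how "unsteady" the case is; the angle-of-attack history above is what a solver's mesh-motion routine would prescribe each time step.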

Keywords: CFD, moderate Reynolds number, OpenFOAM, pitching oscillation, unsteady aerodynamics, wind turbine

Procedia PDF Downloads 203
451 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at Wadi El-Abtal Gauging Station (Northern Algeria)

Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini

Abstract:

Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions, as is the case in the Maghreb and more particularly in Algeria. It continues to reach disturbing proportions in Northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Predicting it is essential in order to identify its intensity and define the actions necessary for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to find and highlight regressive power-law relationships that can explain the suspended solid flow in terms of the measured liquid flow, and to build artificial neural network models linking the flow, month and precipitation parameters with the solid flow. The results obtained show that the power function of the sediment rating curve and artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, fairly conclusively, a neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) daily solid flows even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of average daily liquid flows and daily precipitation since 1953/1954.
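The power-law rating curve the abstract refers to, Qs = a·Q^b, is conventionally fitted by least squares in log-log space; the sketch below uses synthetic discharge/solid-flow pairs, not the Wadi El-Abtal records:

```python
import numpy as np

# Sketch: fitting the power-law rating curve Qs = a * Q**b by ordinary least
# squares in log-log space. The discharge/solid-flow pairs are synthetic;
# real data would come from the Wadi El-Abtal gauging records.
rng = np.random.default_rng(2)
Q = rng.uniform(0.5, 50.0, 200)                   # liquid flow, m^3/s
Qs = 0.8 * Q ** 1.6                               # underlying "true" solid flow
Qs = Qs * np.exp(0.1 * rng.standard_normal(200))  # multiplicative noise

b, log_a = np.polyfit(np.log(Q), np.log(Qs), 1)
a = np.exp(log_a)
print(f"fitted rating curve: Qs = {a:.2f} * Q^{b:.2f}")
```

A neural network model as in the study generalizes this by adding the month and station precipitation as extra inputs.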

Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria

Procedia PDF Downloads 103
450 A Study of The Factors Predicting Radiation Exposure to Contacts of Saudi Patients Treated With Low-Dose Radioactive Iodine (I-131)

Authors: Khalid A. Salman, Shereen Wagih, Tariq Munshi, Musaed Almalki, Safwan Zatari, Zahid Khan

Abstract:

Aim: To measure exposure levels of family members and caregivers of Saudi patients treated with low-dose I-131 therapy, and household radiation exposure rates, in order to identify the factors that can affect radiation exposure. Patients and methods: All self-dependent adult patients with hyperthyroidism or thyroid cancer referred for low-dose radioactive I-131 therapy on an outpatient basis were included. Radiation protection procedures were explained in detail to each participant and their family members. TLDs were dispensed to each participant in sufficient quantity for the family members living in the household. TLDs were collected on the fifth day post-dispensing from patients who agreed to a home visit, during which the household was inspected and the level of radiation contamination of surfaces was measured. Results: Thirty-two patients were enrolled in the current study, with a mean age of 43.1 ± 17.1 years; 25 of them (78%) were female. I-131 therapy was given for thyroid cancer in twenty patients (63%) and for toxic goiter in the remaining twelve patients (37%), with an overall mean I-131 dose of 24.1 ± 7.5 mCi that was relatively higher in the former group. The households comprised 139 family members and helpers, 77 female (55.4%) and 62 male (44.6%), with a mean age of 29.8 ± 17.6 years. The mean period of contact with the patient was 7.6 ± 5.6 hours. The cumulative radiation exposure of all family members was below the exposure constraint (1 mSv), with a range of 109 to 503 uSv and a mean value of 220.9 ± 91 uSv. The data show a slightly higher exposure rate for family members of patients who received a higher dose of I-131 (thyroid cancer patients) and for household members who spent longer with the patient, yet the differences are statistically insignificant (P>0.05). Besides, no significant correlation was found between the cumulative exposure of family members and their gender, age, socioeconomic standard, educational level or residential factors. In the 21 home visits, all readings from bedrooms, reception areas and kitchens were below hazardous limits (0.5 uSv/h), apart from bathrooms, which gave slightly higher readings of 0.57 ± 0.39 uSv/h in the households of thyroid cancer patients, who received the higher radiation dose. A statistically significant difference was found between the radiation exposure rate in bathrooms used by the patient and those used by family members only, with mean exposure rates of 0.701 ± 0.21 uSv/h and 0.17 ± 0.82 uSv/h respectively (p = 0.018, <0.05). Conclusion: Family members of patients treated with low-dose I-131 on an outpatient basis comply well with radiation protection instructions if these are given properly, with cumulative radiation exposures evidently below the exposure constraint of 1 mSv. The given I-131 dose, hours spent with the patient, age, gender, socioeconomic standard, educational level and residential factors showed no significant correlation with cumulative radiation exposure. Patient bathrooms exhibit a higher radiation exposure rate, requiring stricter instructions on bathroom use and hygiene.

Keywords: family members, radiation exposure, radioactive iodine therapy, radiation safety

Procedia PDF Downloads 276
449 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model captures the dynamics of the cell, which supports energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC) and battery aging, and this has a great impact on the performance of the model. Therefore, to improve the performance of the equivalent circuit model, parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation, so it is difficult to select the optimal model structure. Increasing the model order generally improves accuracy, but a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously; in the time domain, it also becomes difficult to solve the higher-order differential equations. This problem can be mitigated by frequency-domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. Selective frequency-domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing specific bands, from the most dominating to the least, while carrying out least-squares, recursive-least-squares and Kalman-filter-based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen. For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
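The frequency response of such a second-order ECM can be written down directly; the sketch below uses assumed (not fitted) parameter values to show what the estimators would work against:

```python
import numpy as np

# Sketch of the impedance of a second-order ECM: a series resistance R0 plus
# two RC branches. Parameter values are assumed for illustration, not fitted.
R0, R1, C1, R2, C2 = 0.01, 0.015, 2000.0, 0.02, 50.0   # ohms / farads

def ecm_impedance(f_hz):
    """Complex impedance Z(j*2*pi*f) of the second-order equivalent circuit."""
    w = 2.0 * np.pi * f_hz
    return (R0
            + R1 / (1.0 + 1j * w * R1 * C1)
            + R2 / (1.0 + 1j * w * R2 * C2))

for f_hz in (1e-4, 1e-2, 1.0, 1e2):
    z = ecm_impedance(f_hz)
    print(f"f = {f_hz:8.4f} Hz  |Z| = {abs(z) * 1e3:6.2f} mOhm  "
          f"phase = {np.degrees(np.angle(z)):7.2f} deg")
```

At very low frequency the impedance tends to R0 + R1 + R2 and at high frequency to R0; the dominant bands in between, found by subspace decomposition of the measured input/output, are where the selective least-squares estimation operates.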

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 150
448 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model

Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik

Abstract:

In a fast-growing region, conversion of agricultural land surrounded by new development sites occurs sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). Around Indonesia's capital city, rapid land conversion is an unavoidable process. It expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity invites greater costs than benefits. The current land use is the use that maximizes the land's value. In order to maintain land for agricultural activity or conservation, some effort is needed to keep the land value of these activities as high as possible, which requires knowledge of the functional relationship between land value and its driving forces. In a fast-growing region, development externalities are assumed to be the dominant driving force. Land value is the product of past decisions about the land's use; it is also affected by local characteristics and by the surrounding land use observed in the previous period (externalities). The effect of each factor on land value has dynamic and spatial dimensions, so an empirical spatial dynamic land value model is well suited to capture them. The model can be used to test and estimate the extent of land use externalities on land value in the short run as well as in the long run, and it serves as a basis for formulating an effective urban growth management policy. This study applies the model to land values in the fringe of Jakarta Metropolitan and uses it further to predict the effect of externalities on land value, in the form of a prediction map. For the case of Jakarta's fringe, there is evidence of the significance of neighborhood urban activity (negative externalities), previous land value, and local accessibility on land value. The effects accumulate dynamically over the years, fully affecting land value only after six years.
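The dynamic accumulation described above follows the usual geometric pattern of a dynamic panel V_t = λV_{t-1} + βX_t; the persistence λ and impact β below are assumed values chosen so the effect is essentially complete by year six, as in the abstract's finding:

```python
# Sketch: how a permanent externality shock accumulates in a dynamic panel
# of the form V_t = lam * V_{t-1} + beta * X_t. lam and beta are assumed
# illustrative values, not the study's estimates.
lam, beta = 0.55, 1.0
long_run = beta / (1.0 - lam)        # long-run effect of a permanent shock

shares = []
for year in range(1, 9):
    cum = beta * (1.0 - lam ** year) / (1.0 - lam)   # cumulative effect
    shares.append(cum / long_run)
    print(f"year {year}: {100 * shares[-1]:.1f}% of the long-run effect")
```

With λ = 0.55, over 97% of the long-run effect is realised by year six, which is the kind of convergence horizon the study reports.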

Keywords: growth management, land use externalities, land value, spatial panel dynamic

Procedia PDF Downloads 256
447 Evaluation of Compatibility between Produced and Injected Waters and Identification of the Causes of Well Plugging in a Southern Tunisian Oilfield

Authors: Sonia Barbouchi, Meriem Samcha

Abstract:

Scale deposition during water injection into the aquifer of oil reservoirs is a serious problem in the oil production industry. One of the primary causes of scale formation and injection-well plugging is the mixing of two incompatible waters. Considered individually, the waters may be quite stable at system conditions and present no scale problems; once they are mixed, however, reactions between the ions dissolved in the individual waters may form insoluble products. The purpose of this study is to identify the causes of well plugging in a southern Tunisian oilfield, where fresh water has been injected into the producing wells to counteract the salinity of the formation waters and inhibit the deposition of halite. X-ray diffraction (XRD) mineralogical analysis was carried out on scale samples collected from the blocked well. Samples of the formation water and the injected water were analysed using inductively coupled plasma atomic emission spectroscopy, ion chromatography and other standard laboratory techniques. The results of the complete water analyses were the input parameters used to determine scaling tendency. Saturation indices for CaCO3, CaSO4, BaSO4 and SrSO4 scales were calculated for water mixtures at different mixing ratios and under various temperature conditions, using a computerized scale prediction model. The compatibility study showed that mixing the two waters tends to increase the probability of barite deposition. XRD analysis confirmed this result, since the analysed deposits consisted predominantly of barite with minor galena. At the studied temperature conditions, the tendency for barite scale increases significantly with the share of fresh water in the mixture. The future scale inhibition and removal strategies to be implemented in the oilfield derive in large part from the results of the present study.
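The saturation-index calculation behind such compatibility studies can be sketched for barite; the solubility product and both water analyses below are invented illustrative values, and activities are crudely approximated by concentrations:

```python
import math

# Simplified barite (BaSO4) saturation-index check for water mixing.
# Activities are approximated by concentrations and Ksp is an assumed 25 C
# value; a real scale-prediction model corrects for ionic strength and
# temperature. All water analyses below are invented for illustration.
KSP_BARITE = 10.0 ** -9.97        # approximate solubility product of BaSO4

def saturation_index(ba, so4):
    """SI = log10(IAP / Ksp); positive values indicate supersaturation."""
    return math.log10(ba * so4 / KSP_BARITE)

def mix(fw, iw, share_iw):
    """Concentration after mixing formation water (fw) with injection water (iw)."""
    return fw * (1.0 - share_iw) + iw * share_iw

ba_fw, so4_fw = 2e-4, 1e-7        # Ba-rich, sulphate-poor formation water (mol/kg)
ba_iw, so4_iw = 1e-8, 5e-4        # sulphate-rich fresh injection water (mol/kg)

for share in (0.0, 0.5, 1.0):
    si = saturation_index(mix(ba_fw, ba_iw, share), mix(so4_fw, so4_iw, share))
    print(f"injection-water share {share:.1f}: SI(barite) = {si:+.2f}")
```

Each end-member water is undersaturated on its own, while the 50/50 mixture is strongly supersaturated with respect to barite, which is exactly the incompatibility mechanism the abstract describes.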

Keywords: compatibility study, produced water, scaling, water injection

Procedia PDF Downloads 166
446 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies face an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g. at a contract manufacturer) or is undesirable from a logistical point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity of production orders (e.g. in work-content hours) arriving at workstations, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply; for example, a uniformly high level of equipment utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance losses or disproportionately fluctuating WIP, which negatively affects the logistics objectives. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, outsourcing of operations, and shifting work to other workstations. These measures adjust load to capacity supply and thus reduce load scattering. If load cannot be fully adapted to capacity, flexible capacity may have to be used so that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacity normally raises costs, adjusting load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly offers qualitative statements for describing load scattering; quantitative evaluation methods that describe load mathematically are rare.
In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of a possibility for normalization. These approaches form the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current quantification approaches. After presenting this approach, the method of batch splitting is described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. The method is then explicitly analyzed in the context of Nyhuis's logistic curve theory, using the stretch factor α1, in order to evaluate its impact on load scattering and on the logistic curves. The article concludes by showing how the methods and approaches presented can help companies in a turbulent environment to accurately quantify the work-load scattering that occurs and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistical objectives is improved without causing additional costs.
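One possible normalised scattering measure, and the smoothing effect of batch splitting, can be sketched numerically; the coefficient-of-variation measure and the daily loads below are illustrative assumptions, not the authors' quantification approach:

```python
import statistics

# Sketch of one possible normalised load-scattering measure: the coefficient
# of variation of the daily input load, together with the smoothing effect of
# batch splitting. The loads (work-content hours) are invented example values.
load = [40, 5, 0, 35, 0, 50, 10]    # daily load arriving at a workstation

def scattering(loads):
    """Coefficient of variation: dimensionless, comparable across stations."""
    return statistics.pstdev(loads) / statistics.fmean(loads)

def split_batches(loads, parts=2):
    """Split each day's batch into `parts` equal lots over consecutive days."""
    out = [0.0] * (len(loads) + parts - 1)
    for day, work in enumerate(loads):
        for p in range(parts):
            out[day + p] += work / parts
    return out

before = scattering(load)
after = scattering(split_batches(load, parts=2))
print(f"load scattering before: {before:.2f}, after batch splitting: {after:.2f}")
```

Splitting leaves the total work content unchanged but spreads it over more days, so the normalised scattering measure drops, which is the effect the article evaluates against the logistic curves.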

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 399
445 Improvement in Blast Furnace Performance Using a Softening-Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The productivity of a blast furnace and the quality of the hot metal produced depend significantly on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening-melting zone that forms inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better-shaped softening-melting zone enhances the performance of the blast furnace, reducing fuel rates and improving furnace life. A predictive model of the softening-melting zone profile can therefore be used to control and improve furnace operation. The shape of the softening-melting zone depends on the physical and chemical properties of the agglomerates and iron ore charged into the furnace. Variations in the agglomerate proportion of the burden at G Blast Furnace disturbed furnace stability. Analysis showed that under these circumstances a W-shaped softening-melting zone profile formed inside the furnace. The W-shaped zone resulted in poor bed permeability and non-uniform gas flow, with a significant increase in heat loss in the lower zone of the furnace; fuel demand increased, and huge production losses were incurred. Visibility of the softening-melting zone profile was therefore necessary in order to proactively optimize the process parameters and operate the furnace smoothly. Using stave temperatures, a model was developed that predicts the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly when the zone had an inverse-V shape, and poorly when it was W-shaped. This model helped to control heat loss, optimize burden distribution and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. Details of the process are discussed in this paper.
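A toy version of the shape inference can be sketched from a radial temperature profile; the peak-counting rule and the temperature readings are invented for illustration and stand in for the paper's stave-temperature model:

```python
# Toy sketch of inferring the softening-melting zone shape from a radial
# stave-temperature profile: an inverse-V profile shows one central peak,
# while a W-shaped profile shows two off-centre peaks. The temperature
# readings are invented; the real model uses many stave sensors per row.
def zone_shape(temps):
    peaks = [i for i in range(1, len(temps) - 1)
             if temps[i] > temps[i - 1] and temps[i] > temps[i + 1]]
    if len(peaks) == 1:
        return "inverse-V"
    if len(peaks) == 2:
        return "W-shape"
    return "irregular"

print(zone_shape([60, 80, 120, 80, 60]))   # stable operation
print(zone_shape([60, 110, 70, 115, 65]))  # unstable, W-shaped zone
```

Classifying the profile per stave row in this spirit lets operators react before permeability and heat loss deteriorate.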

Keywords: agglomerate, blast furnace, permeability, softening-melting

Procedia PDF Downloads 252
444 Role of P53, Ki67 and Cyclin A Immunohistochemical Assay in Predicting Wilms’ Tumor Mortality

Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy

Abstract:

Introduction and Objective: Tumour staging and grading do not usually reflect the future behavior of Wilms' tumor (WT) with regard to mortality. Therefore, in this study, P53, Ki67 and cyclin A immunohistochemistry (IHC) were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination and radiological investigations, were retrieved; the patients were then reviewed at the outpatient clinic of a tertiary care center by history-taking, clinical examination and radiological investigations to determine the oncological outcome. Cases that received preoperative chemotherapy or died of causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were placed on positively charged slides for IHC with p53, Ki67 and cyclin A. All specimens were examined by an experienced histopathologist devoted to urological practice and blinded to the patients' clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining), 1 (<10% nuclear staining), 2 (10-50% nuclear staining) and 3 (>50% nuclear staining). The Ki67 proliferation index (PI) was graded as low, borderline or high. Results: Of the 75 cases, 40 (53.3%) were male and 35 (46.7%) were female, and the median age was 36 months (range 2-216). With a mean follow-up of 78.6±31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. The Kaplan-Meier curve was used for survival analysis, and groups were compared using the log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) showed statistical significance (P=.02), whereas the other significant factor (residual tumor) had few cases. Conclusions: Cyclin A IHC should be considered a marker for the prediction of WT CSS.
Prospective studies with a larger sample size are needed.
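For readers unfamiliar with the survival analysis used here, the Kaplan-Meier product-limit estimate can be sketched in a few lines; the follow-up times and event flags below are made-up values, not the study's data:

```python
import numpy as np

# Minimal Kaplan-Meier sketch of the survival analysis mentioned above.
# Times (months) and event flags (1 = cancer-specific death, 0 = censored)
# are made-up values, not the study's data.
times = np.array([12, 20, 20, 34, 50, 61, 75, 80, 92, 100])
events = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])

surv = 1.0
for t in np.unique(times[events == 1]):        # distinct event times
    at_risk = np.sum(times >= t)               # still under observation at t
    deaths = np.sum((times == t) & (events == 1))
    surv *= 1.0 - deaths / at_risk             # product-limit update
print(f"Kaplan-Meier survival at end of follow-up: {surv:.3f}")
```

The log-rank test then compares such curves between groups (e.g. cyclin A score strata), which is how the significance reported in the abstract would be assessed.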

Keywords: wilms’ tumour, nephroblastoma, urology, survival

Procedia PDF Downloads 67
443 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data

Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill

Abstract:

Health-care management systems are of great interest because they allow simple and fast management of all aspects relating to a patient, not only medical ones. Moreover, there are more and more pathologies for which diagnosis and treatment can be carried out only by using medical imaging techniques. With their ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted from the independent variables, and forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; these models provide a platform whereby product developers can choose engineering characteristics so as to satisfy consumer preferences before developing the product. Recent analysis shows that such fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model against a linear regression model.
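The proposed exponential-versus-linear comparison can be sketched on synthetic data; the series below (standing in for exponentially growing health-care record counts) and the fitting method are illustrative assumptions:

```python
import numpy as np

# Sketch of the proposed comparison: an exponential model y = a*exp(b*x)
# (fitted by log-linear least squares) versus a plain linear model, on a
# synthetic exponentially growing series standing in for health-care records.
rng = np.random.default_rng(3)
x = np.arange(1.0, 25.0)                      # e.g. months of records
y = 50.0 * np.exp(0.12 * x) * np.exp(0.05 * rng.standard_normal(x.size))

b, log_a = np.polyfit(x, np.log(y), 1)        # exponential fit (log space)
m, c = np.polyfit(x, y, 1)                    # linear fit for comparison
sse_exp = np.sum((y - np.exp(log_a + b * x)) ** 2)
sse_lin = np.sum((y - (m * x + c)) ** 2)
print(f"SSE exponential: {sse_exp:.1f}, SSE linear: {sse_lin:.1f}")
```

On genuinely exponential data the exponential model's residual error is far smaller, which is the kind of strength test the abstract proposes; a fuzzy variant would replace the crisp coefficients with fuzzy membership functions.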

Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function

Procedia PDF Downloads 279
442 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur

Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh

Abstract:

A blast furnace is a counter-current process in which the burden descends from the top while hot gases ascend from the bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and high coal injection rate with adverse raw materials such as high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace, Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, figures close to world benchmarks, and to expand profitability. To sustain this regime, the elimination of blast furnace irregularities such as hanging, channeling, and scaffolding is essential. This paper illustrates how a zero-hanging spell was sustained for three consecutive years of low-coke-rate operation through improvements in burden characteristics and burden distribution, changes in the slag regime, casting practices, and adequate automation of furnace operation. Models have been created to improve understanding of the blast furnace process. One model predicts how to maintain slag viscosity in the desired range to attain proper burden permeability; a channeling prediction model has also been developed to recognize channeling symptoms so that early action can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace, Tata Steel, Jamshedpur, thereby achieving process stability for the last three years.

Keywords: hanging, channeling, blast furnace, coke

Procedia PDF Downloads 195
441 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning

Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic

Abstract:

Following the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These provisions have been implemented in member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false public image was created that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. In recent years, machine learning has gained wider application in various fields, as it has proven to give good results where large amounts of data must be processed to recognize patterns and correlations among the relevant parameters, as well as where the problem is too complex for human intelligence to solve. One machine learning method, the decision tree, has achieved an accuracy of over 92% in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of installing heat cost allocators in a single building among multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
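To make the decision-tree idea concrete, the sketch below fits a minimal single-split tree ("stump") regressor in pure Python. The feature (heated floor area) and consumption figures are invented for illustration; a real study would use a full tree library and measured billing data.

```python
# Minimal decision-tree regressor (a single-split "stump"), illustrating the
# decision-tree method in general terms. All data below are hypothetical.

def fit_stump(x, y):
    """Find the split on x that minimizes total squared error."""
    best = None
    for threshold in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= threshold]
        right = [yi for xi, yi in zip(x, y) if xi > threshold]
        if not left or not right:
            continue  # a split must leave samples on both sides
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((v - ml) ** 2 for v in left) +
               sum((v - mr) ** 2 for v in right))
        if best is None or err < best[0]:
            best = (err, threshold, ml, mr)
    _, threshold, ml, mr = best
    return lambda xi: ml if xi <= threshold else mr

# Hypothetical data: heated floor area (m^2) vs annual heat consumption (kWh).
area = [40, 45, 50, 80, 85, 90]
kwh  = [5000, 5200, 5100, 9000, 9300, 9100]
predict = fit_stump(area, kwh)
print(predict(48), predict(88))   # small flat vs large flat estimates
```

A full decision tree simply applies this split search recursively; adding allocator-installation status as a feature is what would let the model isolate its effect on consumption.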

Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree, regression analysis, logistic regression, support vector machines, random forest

Procedia PDF Downloads 249
440 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation

Authors: Min L. Stewart, Patrick Johnston

Abstract:

Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. In particular, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors, etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category), in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes in expected visual sequences in high- relative to low-AT individuals, and (2) significantly reduced neural responses to violations of contextually induced expectations in high- relative to low-AT individuals. Preliminary frequency analysis data comparing high- and low-AT groups show larger and later event-related potentials (ERPs) over occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project intended to inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
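The frequency analysis behind fast periodic visual stimulation rests on a simple principle: a neural response locked to a periodic stimulation rate appears as a spectral peak at exactly that frequency. The sketch below demonstrates this on a synthetic signal; the sampling rate, frequencies, and amplitudes are illustrative choices, not the study's parameters.

```python
# Sketch of frequency tagging: responses locked to periodic stimulation rates
# show up as spectral peaks at those rates. Signal and rates are synthetic.
import cmath
import math

def amplitude_spectrum(signal, fs):
    """Naive DFT; returns (frequency_hz, amplitude) pairs up to Nyquist."""
    n = len(signal)
    spec = []
    for k in range(n // 2 + 1):
        s = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        spec.append((k * fs / n, 2 * abs(s) / n))
    return spec

fs = 60.0                    # samples per second
base, oddball = 6.0, 1.2     # Hz: base presentation rate, periodic-change rate
t = [i / fs for i in range(300)]   # 5 s of data (integer cycles of both rates)
eeg = [math.sin(2 * math.pi * base * ti) +
       0.5 * math.sin(2 * math.pi * oddball * ti) for ti in t]

peaks = sorted(amplitude_spectrum(eeg, fs), key=lambda p: -p[1])[:2]
print(peaks)   # the two largest peaks fall at 6.0 Hz and 1.2 Hz
```

In an actual FPVS study the amplitude at the periodic-change frequency (and its harmonics) is compared against neighbouring noise bins, which is what permits group comparisons such as high- versus low-AT.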

Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding

Procedia PDF Downloads 111
439 Valorization of Underutilized Fish Species Through a Multidisciplinary Approach

Authors: Tiziana Pepe, Gerardo Manfreda, Adriana Ianieri, Aniello Anastasio

Abstract:

The sustainable exploitation of marine biological resources is among the most important objectives of the EU's Common Fisheries Policy (CFP). Currently, Europe imports about 65% of its fish products, indicating that domestic production does not meet consumer demand. Despite the availability of numerous commercially significant fish species, European consumption is concentrated on a limited number of products (e.g., sea bass, sea bream, shrimp). Many native species, present in large quantities in the Mediterranean Sea, are little known to consumers and are therefore considered 'fishing by-products'. All the data presented so far indicate a significant waste of local resources and the overexploitation of a few fish stocks. It is therefore necessary to develop strategies that guide the market towards sustainable conversion. The objective of this work was to valorize underutilized fish species of the Mediterranean Sea through a multidisciplinary approach. To this end, three fish species were sampled: Atlantic horse mackerel (Trachurus trachurus), bogue (Boops boops), and common dolphinfish (Coryphaena hippurus). Nutritional properties (water, fat, protein, ash, and salt content), physical/chemical properties (TVB-N, histamine, pH), and rheological properties (color, texture, viscosity) were analyzed. The analyses were conducted on both fillets and processing by-products. Additionally, mitochondrial DNA (mtDNA) was extracted from the muscle of each species and sequenced using the Illumina NGS technique. The analysis of nutritional properties classified the fillets of the sampled species as lean or semi-fat, as they had a fat content of less than 3%, while the by-products showed a higher lipid content (2.7-5%). The protein percentage for all fillets was 22-23%, while for processing by-products the protein concentration was 18-19% for all species. Rheological analyses showed an increase in viscosity in saline solution in all species, indicating their potential suitability for industrial processing. Complete mtDNA of high quality and quantity was extracted from all analyzed species, and the complete mitochondrial genome sequences were successfully obtained and annotated. The results of this study suggest that all analyzed species are suitable for both human consumption and feed production. The sequencing of the complete mtDNA and its availability in international databases will be useful for accurate phylogenetic analysis and proper species identification, even in prepared and processed products. Underutilized fish species represent an important economic resource. Encouraging their consumption could limit the phenomenon of overfishing, protecting marine biodiversity. Furthermore, the valorization of these species will increase national fish production, supporting the local economy and cultural and gastronomic traditions, and optimizing the exploitation of Mediterranean resources in accordance with the CFP.

Keywords: mtDNA, nutritional analysis, sustainable fisheries, underutilized fish species

Procedia PDF Downloads 30
438 Numerical Modeling and Prediction of Nanoscale Transport Phenomena in Vertically Aligned Carbon Nanotube Catalyst Layers by the Lattice Boltzmann Simulation

Authors: Seungho Shin, Keunwoo Choi, Ali Akbar, Sukkee Um

Abstract:

In this study, the nanoscale transport properties and catalyst utilization of vertically aligned carbon nanotube (VACNT) catalyst layers are computationally predicted by three-dimensional lattice Boltzmann simulation based on a quasi-random nanostructural model, with the aim of improving fuel cell catalyst performance. A series of catalyst layers is randomly generated with statistical significance at the 95% confidence level to reflect the heterogeneity of catalyst layer nanostructures. The nanoscale gas transport phenomena inside the catalyst layers are simulated by the D3Q19 (i.e., three-dimensional, 19-velocity) lattice Boltzmann method, and the corresponding mass transport characteristics are mathematically modeled in terms of structural properties. Considering the nanoscale reactant transport phenomena, a transport-based effective catalyst utilization factor is defined and statistically analyzed to determine the influence of structure and transport on catalyst utilization. The tortuosity of the reactant mass transport path in VACNT catalyst layers is calculated directly from the streaklines. Subsequently, the corresponding effective mass diffusion coefficient is statistically predicted by applying the pre-estimated tortuosity factors to the Knudsen diffusion coefficient in the VACNT catalyst layers. The statistical estimation results clearly indicate that the morphological structure of VACNT catalyst layers reduces the tortuosity of the reactant mass transport path compared to conventional catalyst layers and significantly improves the consequent effective mass diffusion coefficient. Furthermore, catalyst utilization of the VACNT catalyst layer is substantially improved by enhanced mass diffusion and electric current paths, despite the relatively poor interconnection of the ion transport paths.
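The final estimation step described above, applying a tortuosity factor to the Knudsen diffusion coefficient, can be sketched numerically. The standard kinetic-theory form is D_Kn = (d_p/3)·sqrt(8RT/(πM)), and a common effective-medium correction is D_eff = (ε/τ)·D_Kn. The pore diameter, porosity, and tortuosity values below are illustrative placeholders, not the paper's measured results.

```python
# Worked sketch: Knudsen diffusivity corrected by porosity and tortuosity.
# Geometry values (pore size, porosity, tortuosity) are hypothetical.
import math

R = 8.314  # J/(mol K), universal gas constant

def knudsen_diffusivity(pore_diameter, temperature, molar_mass):
    """D_Kn = (d_p / 3) * sqrt(8 R T / (pi M)), in m^2/s (SI inputs)."""
    return (pore_diameter / 3.0) * math.sqrt(
        8.0 * R * temperature / (math.pi * molar_mass))

def effective_diffusivity(d_kn, porosity, tortuosity):
    """D_eff = (epsilon / tau) * D_Kn."""
    return porosity / tortuosity * d_kn

# Oxygen transport in ~60 nm pores at 353 K (a typical fuel cell temperature).
d_kn = knudsen_diffusivity(60e-9, 353.0, 32e-3)   # O2, M = 32 g/mol

# At equal porosity, a less tortuous (aligned) structure yields a higher
# effective diffusivity than a more tortuous conventional layer.
print(effective_diffusivity(d_kn, 0.5, 1.2))   # aligned CNTs, hypothetical tau
print(effective_diffusivity(d_kn, 0.5, 2.0))   # conventional, hypothetical tau
```

This is exactly the structure-transport link the abstract reports: lowering τ at fixed ε directly scales up D_eff.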

Keywords: lattice Boltzmann method, nanoscale transport phenomena, polymer electrolyte fuel cells, vertically aligned carbon nanotubes

Procedia PDF Downloads 201