Search results for: mining industry
296 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, they also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable to all kinds of dyes, as most dyes are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency, and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it produces high-quality treated effluent and can remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. These variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and a high uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant.
Adsorption studies mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage and initial dye concentration settings. A Full Factorial Design was the chosen statistical framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models and the model parameters were estimated, giving further insight into the nature of the adsorption taking place. Critical data required to design batch adsorption systems for removal of Acid Orange 10 dye and the identification of factors that critically influence the separation efficiency are the key outcomes of this research.
Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design
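The factor screening described above can be sketched in a few lines of code. The coded design matrix, the response values, and the choice of three factors below are illustrative assumptions, not the authors' data; a real study would use the measured removal percentages at each factor-level combination.

```python
import numpy as np
from itertools import product

# Hypothetical 2^3 full factorial design for three of the studied factors
# (coded -1/+1 levels). Response values are invented % dye removal figures.
levels = np.array(list(product([-1, 1], repeat=3)))  # 8 runs
response = np.array([61.0, 66.5, 63.2, 70.1, 72.4, 79.8, 75.0, 84.3])

def main_effect(factor_col):
    """Mean response at the +1 level minus mean response at the -1 level."""
    high = response[levels[:, factor_col] == 1].mean()
    low = response[levels[:, factor_col] == -1].mean()
    return high - low

effects = {name: main_effect(i) for i, name in
           enumerate(["temperature", "stirring_speed", "dosage"])}

def interaction(i, j):
    """Two-factor interaction: effect of the product of two coded columns."""
    prod_col = levels[:, i] * levels[:, j]
    return response[prod_col == 1].mean() - response[prod_col == -1].mean()
```

Ranking the absolute values of `effects` (and the pairwise interactions) is what identifies the factors that critically influence the separation efficiency.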
Procedia PDF Downloads 169
295 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical
Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani
Abstract:
Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary. With the elimination of these sites the monitoring network can be optimized, and it can operate more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the given level of significance (α = 0.05) the sampling sites do not form a homogeneous group. Because the sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between the sampling sites belonging to the same or different groups on scatterplots.
Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can decrease to approximately half the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the location of the greatest differences. These results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.
Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality
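The CCDA decision rule described above can be sketched on synthetic data. The site groupings and values below are invented, and a leave-one-out nearest-centroid classifier stands in for the linear discriminant step purely for compactness; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for water-quality observations at six sampling sites:
# two genuinely different groups of sites (all data here are synthetic).
group_a = rng.normal(0.0, 1.0, size=(60, 4))   # e.g., three upstream sites
group_b = rng.normal(2.0, 1.0, size=(60, 4))   # e.g., three downstream sites
X = np.vstack([group_a, group_b])
labels = np.array([0] * 60 + [1] * 60)

def correct_ratio(X, y):
    """Leave-one-out ratio of correctly classified observations
    (nearest-centroid classifier as a simple discriminant stand-in)."""
    classes = np.unique(y)
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        centroids = [X[mask & (y == g)].mean(axis=0) for g in classes]
        pred = classes[np.argmin([np.linalg.norm(X[i] - c) for c in centroids])]
        hits += int(pred == y[i])
    return hits / len(y)

# CCDA rule: the preconceived grouping is significant (the sites do NOT form
# one homogeneous group) if its ratio beats at least 95% of random groupings.
observed = correct_ratio(X, labels)
random_ratios = [correct_ratio(X, rng.permutation(labels)) for _ in range(99)]
significant = observed > np.quantile(random_ratios, 0.95)
```

When `significant` is false for a set of sites, they carry redundant information and some of them are candidates for elimination, which is the basis of the network reductions reported above.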
Procedia PDF Downloads 348
294 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit
Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar
Abstract:
Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases like CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute substantially to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on sulphur content in transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary process in the refinery for gasoline and diesel production. The CO-combustion promoter additive and the gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, along with the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for retention and high activity of the active metals, as these catalyst additives are used in low concentration compared with the main FCC catalyst. The first part of the present paper discusses the development of a high-density microsphere of nanocrystalline alumina by a hydrothermal method for the CO combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and showed CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. The characterization and micro-activity tests using a heavy combined hydrocarbon feedstock at FCC unit conditions for evaluating gasoline sulphur reduction activity are also presented.
These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption analysis, and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization and alkylation functionalities were established to rank GSR additives by their activity, selectivity, and gasoline sulphur removal efficiency. Sulphur shifting into other liquid products such as heavy naphtha, light cycle oil, and clarified oil was also studied. PIONA analysis of the liquid products reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of the gasoline or its olefins content.
Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction
Procedia PDF Downloads 96
293 Gas Metal Arc Welding of Clad Plates API 5L X-60/316L Applying External Magnetic Fields during Welding
Authors: Blanca A. Pichardo, Victor H. Lopez, Melchor Salazar, Rafael Garcia, Alberto Ruiz
Abstract:
Clad pipes, in comparison to plain carbon steel pipes, offer the oil and gas industry high corrosion resistance, a reduction in economic losses due to pipeline failures and maintenance, lower labor risk, and prevention of pollution and environmental damage from hydrocarbon spills caused by deteriorated pipelines. In this context, it is paramount to establish reliable welding procedures to join bimetallic plates or pipes. Thus, the aim of this work is to study the microstructure and mechanical behavior of clad plates welded by the gas metal arc welding (GMAW) process. A clad of 316L stainless steel was deposited onto API 5L X-60 plates by overlay welding with the GMAW process. Welding parameters were: 22.5 V, 271 A, heat input 1.25 kJ/mm, shielding gas 98% Ar + 2% O₂, reverse polarity, torch displacement speed 3.6 mm/s, feed rate 120 mm/s, electrode diameter 1.2 mm, and application of an electromagnetic field of 3.5 mT. The overlay welds were subjected to macro-structural and microstructural characterization. After manufacturing the clad plates, a single-V groove joint was machined with a 60° bevel and 1 mm root face. GMA welding of the bimetallic plates was performed in four passes with ER316L-Si filler for the root pass and an ER70S-6 electrode for the subsequent welding passes. For joining the clad plates, an electromagnetic field was applied with two purposes: to improve the microstructural characteristics and to stabilize the electric arc during welding in order to avoid magnetic arc blow. The welds were macro- and microstructurally characterized, and the mechanical properties were also evaluated. Vickers microhardness (100 g load for 10 s) measurements were made across the welded joints at three levels. The first profile, at the 316L stainless steel cladding, was quite even, with a value of approximately 230 HV.
The second microhardness profile showed high values in the weld metal, ~400 HV; this was due to the formation of a martensitic microstructure by dilution of the first welding pass with the second. The third profile crossed the third and fourth welding passes, and an average value of 240 HV was measured. In the tensile tests, yield strength was between 400 and 450 MPa, with a tensile strength of ~512 MPa. In the Charpy impact tests, the results were 86 and 96 J for specimens with the notch in the face and in the root of the weld bead, respectively. The mechanical properties were in the range of the API 5L X-60 base material. The overlay welding process used for cladding is not suitable for large components; however, it guarantees a metallurgical bond, unlike the most commonly used processes, such as thermal expansion. For welding bimetallic plates, control of the temperature gradients is key to avoiding distortion. Besides, the dissimilar nature of the bimetallic plates gives rise to the formation of a martensitic microstructure during welding.
Keywords: clad pipe, dissimilar welding, gas metal arc welding, magnetic fields
Procedia PDF Downloads 152
292 A Strategy to Reduce Salt Intake: The Use of a Seasoning Obtained from Wine Pomace
Authors: María Luisa Gonzalez-SanJose, Javier Garcia-Lomillo, Raquel Del Pino, Miriam Ortega-Heras, Maria Dolores Rivero-Perez, Pilar Muñiz-Rodriguez
Abstract:
One of the most pressing problems related to the diet of Western societies is high salt intake. In Spain, salt intake is almost twice that recommended by the World Health Organization (WHO). Many negative health effects of high sodium intake have been described, with hypertension and cardiovascular and coronary diseases among the most important. Because of this, governments and other institutions are working on a gradual reduction of this consumption. Meat products have been described as the main processed products bringing salt to the diet, followed by snacks and savory crackers. Fortunately, the food industry has also become aware of this problem and is working intensely on it; in recent years it has attempted to reduce the salt content in processed products and is developing special lines with low sodium content. It is important to consider that processed foods are the main source of sodium in Western countries. One possible strategy to reduce the salt content in food is to find substitutes that can emulate its taste properties without adding much sodium, or products that mask or substitute salty sensations with other flavors and aromas. In this sense, multiple products have been proposed and used until now. Potassium salts produce similar salty sensations without bringing sodium; however, their intake should also be limited, for health reasons. Furthermore, some potassium salts show bitter notes. Other alternatives are the use of flavor enhancers, spices, aromatic herbs, sea-plant derived products, etc. Wine pomace is rich in potassium salts and contains organic acids and other flavor substances; therefore, it could be an interesting raw material from which to obtain products useful as alternative 'seasonings'. Considering the previous comments, the main aim of this study was to evaluate the possible use of a natural seasoning, made from red wine pomace, in two different foods: crackers and burgers.
The seasoning was made in the food technology pilot plant of the University of Burgos, where the studied crackers and burgers were also made. Different members of the University (students, teaching staff and administrative personnel) tasted the products, and a trained panel evaluated salty intensity. In addition to potassium, the seasoning contains significant levels of dietary fiber and phenolic compounds, which also makes it interesting as a functional ingredient. Both burgers and crackers made with the seasoning tasted better than those made without salt. Obviously, they showed a lower sodium content than the normal formulation and were richer in potassium, antioxidants and fiber. Consequently, they showed lower Na/K ratios. All these facts are correlated with 'healthier' products, especially for people with hypertension and other coronary dysfunctions.
Keywords: healthy foods, low salt, seasoning, wine pomace
Procedia PDF Downloads 274
291 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters
Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng
Abstract:
Poly(3,4-ethylene dioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture well known for its semiconducting properties and widely used in the coating industry, thanks to its visible transparency and high electronic conductivity (up to 4600 S/cm), as a transparent non-metallic electrode and in organic light-emitting diodes (OLEDs). It also possesses strong absorption in the near-infrared (NIR) range (λ between 900 nm and 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT:PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique in order to produce uniform coatings with controlled thicknesses ranging from ≈400 nm to 2 µm. The blade-coating technique gives good control of deposit thickness and uniformity through the tuning of several experimental conditions (blade velocity, evaporation rate, temperature, etc.). This liquid coating technique is a well-known, inexpensive way to realize thin-film coatings on various substrates. For coatings on glass substrates destined for solar-insulation applications, the ideal coating would be made of a material able to transmit the whole visible range while reflecting the NIR range perfectly, but materials with similar properties still have unsatisfactory opacity in the visible (for example, titanium dioxide nanoparticles). NIR-absorbing thin films are a more realistic alternative for such an application. Under solar illumination, PEDOT:PSS thin films heat up due to absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. While they screen some NIR radiation, they also generate heat, which is conducted into the substrate; the substrate then re-emits this energy by thermal emission in every direction.
In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting a calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real time using thermocouples, and a black-painted Peltier sensor measures the total entering flux (the sum of transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin film/glass substrate system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Finally, the effect of additives such as dimethyl sulfoxide (DMSO) or optical scatterers (particles) on performance is also studied, as the first can drastically alter the IR absorption properties of PEDOT:PSS and the second can increase the apparent optical path of light within the thin-film material.
Keywords: PEDOT:PSS, blade-coating, heat, thin film, solar spectrum
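As a rough illustration of how an SHGC can be assembled from flux fractions of the kind measured above, the following sketch uses invented placeholder numbers, not measured values, and a simplified one-node energy balance rather than the authors' thin film/glass model.

```python
# Back-of-the-envelope SHGC estimate for a NIR-absorbing film on glass.
# Every number here is an illustrative assumption, not a measured value.
irradiance = 1000.0        # W/m^2, incident flux from the solar simulator
transmitted = 0.55         # fraction transmitted directly through film + glass
absorbed = 0.30            # fraction absorbed by the film and substrate
reflected = 1.0 - transmitted - absorbed
inward_fraction = 0.4      # share of absorbed heat conducted/re-emitted inward

# SHGC = directly transmitted fraction + inward-flowing part of the absorbed one
shgc = transmitted + inward_fraction * absorbed
entering_flux = shgc * irradiance   # W/m^2 reaching the interior
```

In the actual experiment, `transmitted` and the re-emitted contribution are what the black-painted Peltier sensor captures as the total entering flux.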
Procedia PDF Downloads 162
290 In Response to Worldwide Disaster: Academic Libraries’ Functioning During COVID-19 Pandemic Without a Policy
Authors: Dalal Albudaiwi, Mike Allen, Talal Alhaji, Shahnaz Khadimehzadah
Abstract:
As a pandemic, COVID-19 has impacted the whole world since November 2019. In other words, every organization, industry, and institution has been negatively affected by the coronavirus. The uncertainty of how long the pandemic would last caused chaos at all levels. As with any other institution, public libraries were affected and transitioned to online services and resources. Internationally, it has been observed that some public libraries were well prepared for disasters such as the pandemic; even so, collections, users, services, technologies, staff, and budgets were all affected. Public libraries' policies did not mention any plan for such a pandemic; instead, their guidelines contain several rules about disasters in general, such as natural disasters. In this pandemic situation, libraries have found themselves in difficult circumstances. However, it has always been apparent to public libraries what role they play in serving their communities in good and critical times. This dwells on the traditional role public libraries play in providing information services and sources to satisfy their community's information needs, remarkably increasing people's awareness of the importance of informational enrichment and enhancing society's skills in dealing with information and information sources. Under critical circumstances, libraries play a different role: it goes beyond the traditional part of information provider to the untraditional role of a social institution that serves the community with whatever capabilities it has. This study takes two significant directions. The first focuses on investigating how libraries have responded to COVID-19 and how they manage disasters within their organization. The second focuses on how libraries help their communities to act during disasters and how to recover from the consequences.
The current study examines how libraries prepare for disasters and the role of public libraries during disasters. We also propose 'measures' to serve as a model that libraries can use to evaluate the effectiveness of their response to disasters. We intend to focus on how libraries responded to this new disaster. Therefore, this study aims to develop a comprehensive policy that includes responding to a crisis such as COVID-19. An analytical lens inside the library as an organization and outside the organization's walls will be documented, based on analyzing disaster-related literature published in LIS publications. The study employs content analysis (CA) methodology; CA is widely used in library and information science. The critical contribution of this work lies in the solutions it provides to libraries and planners for preparing crisis-management plans/policies, specifically to face a new global disaster such as the COVID-19 pandemic. Moreover, the study will help library directors to evaluate their strategies and to improve them properly. The significance of this study lies in guiding library directors to refine the goals of their libraries with regard to crucial issues such as: saving time, avoiding loss, saving budget, acting quickly during a crisis, maintaining the library's role during pandemics, finding the best response to disasters, and creating a plan/policy as a sample for all libraries.
Keywords: Covid-19, policy, preparedness, public libraries
Procedia PDF Downloads 80
289 In Search of Innovation: Exploring the Dynamics of Innovation
Authors: Michal Lysek, Mike Danilovic, Jasmine Lihua Liu
Abstract:
HMS Industrial Networks AB has been recognized as one of the most innovative companies in the industrial communication industry worldwide. The creation of their Anybus innovation during the 1990s contributed considerably to the company’s success. From inception, HMS’ employees were innovating for the purpose of creating new business (the creation phase). After the Anybus innovation, they began the process of internationalization (the commercialization phase), which in turn led them to concentrate on cost reduction, product quality, delivery precision, operational efficiency, and increasing growth (the growth phase). As a result of this transformation, performing new radical innovations has become more complicated. The purpose of our research was to explore the dynamics of innovation at HMS from the aspect of key actors, activities, and events over the three phases, in order to understand what led to the creation of their Anybus innovation, and why it has become increasingly challenging for HMS to create new radical innovations for the future. Our research methodology was based on a longitudinal, retrospective study from the inception of HMS in 1988 to 2014: a single case study inspired by the grounded theory approach. We conducted 47 interviews and collected 1,024 historical documents. Our analysis has revealed that HMS’ success in creating the Anybus, and in developing a successful business around the innovation, was based on three main capabilities: cultivating customer relations on different managerial and organizational levels, inspiring business relations, and balancing complementary human assets for the purpose of business creation. The success of HMS has turned management’s attention away from the past activities of key actors, their behavior, and how they influenced and stimulated the creation of radical innovations. Nowadays, they are rhetorically focusing on creativity and innovation.
All the while, their real actions put emphasis on growth, cost reduction, product quality, delivery precision, operational efficiency, and moneymaking. In the process of becoming an international company, HMS gradually refocused. In so doing they became profitable and successful, but they also forgot what made them innovative in the first place. Fortunately, HMS’ management has come to realize that this is the case, and they are now in search of recapturing innovation once again. Our analysis indicates that HMS’ management is facing several barriers to innovation related to path dependency and other lock-in phenomena. HMS’ management has been captured, trapped in their mindset and actions, by the success of the past. But now their future has to be secured, and they have come to realize that moneymaking is not everything. In recent years, HMS’ management has begun to search for innovation once more, in order to recapture their past capabilities for creating radical innovations. This requires unlocking their managerial perceptions of customer needs and their counter-innovation-driven activities and events, so as to utilize the full potential of their employees and capture the innovation opportunity for the future.
Keywords: barriers to innovation, dynamics of innovation, in search of excellence and innovation, radical innovation
Procedia PDF Downloads 379
288 Mathematical Modelling of Bacterial Growth in Products of Animal Origin in Storage and Transport: Effects of Temperature, Use of Bacteriocins and pH Level
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cordova
Abstract:
Pathogen growth in animal-source foods is a common problem in the food industry, causing monetary losses due to the spoiling of products or food intoxication outbreaks in the community. In this sense, the quality of the product is reflected by the population of deteriorating agents present in it, which are mainly bacteria. The factors most likely associated with freshness in animal-source foods are temperature and processing, storage, and transport times. However, the level of deterioration of products depends, in turn, on the characteristics of the bacterial population causing the decomposition or spoiling, such as pH level and toxins. Knowing the growth dynamics of the agents involved in product contamination allows monitoring for more efficient processing. This means better quality and reasonable costs, along with a better estimation of the time and temperature intervals necessary for transport and storage in order to preserve product quality. The objective of this project is to design a secondary model that allows measuring the impact of temperature on bacterial growth and the competition for pH adequacy and release of bacteriocins, in order to describe such phenomena and, thus, estimate food product half-life with the least possible risk of deterioration or spoiling. To achieve this objective, the authors propose the analysis of a three-dimensional system of ordinary differential equations which includes: logistic bacterial growth, extended by the inhibitory action of bacteriocins and including the effect of the medium's pH; change in the medium's pH through an adaptation of the Luedeking-Piret kinetic model; and bacteriocin concentration, modeled similarly to the pH level. All three dimensions are influenced by the temperature at all times.
This differential system is then expanded, taking into consideration variable temperature and the concentration of pulsed bacteriocins, which represent characteristics inherent to the modeled situations, such as transport and storage, as well as the incorporation of substances that inhibit bacterial growth. The main results show that temperature changes in an early stage of transport increased the bacterial population significantly more than if the temperature had increased during the final stage. On the other hand, the incorporation of bacteriocins, as in other investigations, proved to be efficient in the short and medium term since, although the population of bacteria decreased, once the bacteriocins were depleted or degraded over time, the bacteria eventually returned to their regular growth rate. The efficacy of the bacteriocins at low temperatures decreased slightly, consistent with the fact that their natural degradation rate also decreased. In summary, the implementation of the mathematical model allowed the simulation of a set of possible bacteria present in animal-based products, along with their properties, in various transport and storage situations, which led us to state that the optimum for inhibiting bacterial growth is to combine constant low temperatures with the initial use of bacteriocins.
Keywords: bacterial growth, bacteriocins, mathematical modelling, temperature
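A minimal forward-Euler sketch of a coupled system of this shape, with invented rate constants, temperature dependencies, and initial values rather than the authors' parameterization, might look like:

```python
import numpy as np

# Toy version of the three coupled states:
#   N  - bacterial population (logistic growth, inhibited by bacteriocin B),
#   pH - medium pH with a Luedeking-Piret-type acidification term,
#   B  - bacteriocin concentration, degrading at a temperature-dependent rate.
# All coefficients below are illustrative assumptions.
def simulate(T_celsius, hours=48.0, dt=0.01, B0=0.0):
    mu = 0.05 * np.exp(0.1 * (T_celsius - 4.0))      # growth rises with T
    k_inhib, K = 0.8, 1e9                             # inhibition, carrying cap.
    alpha, beta = 1e-7, 1e-8                          # Luedeking-Piret-type coeffs
    k_deg = 0.05 * np.exp(0.05 * (T_celsius - 4.0))   # bacteriocin degradation

    N, pH, B = 1e4, 6.5, B0
    for _ in range(int(hours / dt)):
        dN = mu * N * (1 - N / K) - k_inhib * B * N
        dpH = -(alpha * dN + beta * N)                # acidification with growth
        dB = -k_deg * B
        N, pH, B = max(N + dN * dt, 0.0), pH + dpH * dt, B + dB * dt
    return N, pH, B

cold, warm = simulate(4.0), simulate(12.0)            # refrigerated vs abused
with_bacteriocin = simulate(12.0, B0=0.5)             # initial bacteriocin dose
```

Even this crude sketch reproduces the qualitative trends reported above: warmer storage yields far more growth, and an initial bacteriocin dose suppresses the population until the compound degrades, after which regular growth resumes.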
Procedia PDF Downloads 135
287 A Practical Construction Technique to Enhance the Performance of Rock Bolts in Tunnels
Authors: Ojas Chaudhari, Ali Nejad Ghafar, Giedrius Zirgulis, Marjan Mousavi, Tommy Ellison, Sandra Pousette, Patrick Fontana
Abstract:
In Swedish tunnel construction, a critical issue that has been repeatedly acknowledged is corrosion and, consequently, failure of the rock bolts in rock support systems. Defective installation of rock bolts results in the formation of cavities in the cement mortar that is regularly used to fill the area under the dome plates. These voids allow water ingress to the rock bolt assembly, which results in corrosion of rock bolt components and eventually failure. In addition, the current installation technique consists of several manual, labor-intensive steps that are usually carried out in uncomfortable and exhausting conditions, e.g., under the roof of the tunnels. Such intense tasks also lead to considerable waste of materials and execution errors. Moreover, adequate quality control of the execution is hardly possible with the current technique. To overcome these issues, a non-shrinking/expansive cement-based mortar filled into paper packaging has been developed in this study, which properly fills the area under the dome plates with few or no remaining cavities and ultimately diminishes the potential for corrosion. This article summarizes the development process and the experimental evaluation of this technique for the installation of rock bolts. In the development process, the cementitious mortar was first developed using specific cement and shrinkage-reducing/expansive additives. The mechanical and flow properties of the mortar were then evaluated using compressive strength, density, and slump-flow measurement methods. In addition, isothermal calorimetry and shrinkage/expansion measurements were used to elucidate the hydration and durability attributes of the mortar. After obtaining the desired properties in both fresh and hardened conditions, the developed dry mortar was filled into specific permeable paper packaging and then submerged in a water bath for specific intervals before installation.
The tests were enhanced progressively by optimizing different parameters such as the shape and size of the packaging, the characteristics of the paper used, the immersion time in water, and even some minor characteristics of the mortar. Finally, the developed prototype was tested in a lab-scale rock bolt assembly at various angles to analyze the efficiency of the method in a real-life scenario. The results showed that the new technique improves the performance of the rock bolts by reducing material wastage, improving environmental performance, facilitating and accelerating the labor, and finally enhancing the durability of the whole system. Accordingly, this approach provides an efficient alternative to the traditional way of tunnel bolt installation, with considerable advantages for the Swedish tunneling industry.
Keywords: corrosion, durability, mortar, rock bolt
286 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements
Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga
Abstract:
Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is a common practice to approximate the Earth’s subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations, namely:
- The analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions. For arbitrary resistivity distributions, the solution of the system of ODEs is currently unknown.
- In geo-steering, we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method. Thus, we need to compute the corresponding derivatives. However, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge.
The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method.
The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once, and are used for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at a negligible additional cost by using an adjoint state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones where the latter are available.
Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform
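The offline/online decomposition described above can be illustrated with a minimal sketch (assuming, purely for illustration, a toy tridiagonal matrix standing in for the 1D finite element system of one Hankel mode; this is not the authors' code): the system matrix depends only on the resistivity distribution, so it is factorized once offline, and each logging position then requires only a cheap solve.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Toy stand-in for the 1D finite element system of one Hankel mode:
# the matrix K depends only on the resistivity distribution, not on the
# tool position, so its factorization is an offline computation.
n = 200
K = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
lu, piv = lu_factor(K)  # offline: factorize once, reuse for all positions

def solve_for_tool_position(pos):
    """Online step: only the right-hand side depends on the logging
    position, so each new position costs one cheap back-substitution."""
    b = np.zeros(n)
    b[pos] = 1.0  # point source at the (hypothetical) tool position
    return lu_solve((lu, piv), b)

solutions = [solve_for_tool_position(p) for p in range(10, 190, 20)]
```

The same pattern underlies the cheap adjoint-based derivatives: once the factorization exists, each additional right-hand side (including adjoint sources) is nearly free.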
285 The Beauty and the Cruel: The Price of Ethics
Authors: Camila Lee Park, Mauro Fracarolli Nunes
Abstract:
Understood as the preference for products and services that do not involve moral dilemmas, ethical consumption has been increasingly discussed by scholars, practitioners, and consumers. Among its diverse trends, the defense of animal rights and welfare seems to have gained particular momentum in past decades. Not surprisingly, companies, governments, ideologues, and virtually any institution or group interested in (re)shaping society invest in the building of narratives oriented to influence consumption behavior. The animal rights movement, for example, is devoted to the elimination of the use of animals in science, as well as of commercial animal agriculture and hunting activities. Although advances in ethical consumption may be observed in practice, it still seems more popular as rhetoric. Diverse scholars have addressed the disparities between self-professed ethical consumers and their actual purchase patterns, with differences being attributed to factors such as price sensitivity, lack of information, quality, cynicism, and limited availability. The gap is also linked to the 'consumer sovereignty myth', according to which consumers are only able to choose from a pre-determined range of choices made before products reach them. On the other hand, academics also argue that ethical consumption behavior is more likely to occur when it assumes compliance with social norms. As sustainability becomes a permanent issue, customers may tend to adhere to ethical consumption, either because of an individual value or due to a social one. Regardless of these efforts, the actual value attributed to ethical businesses remains unclear. Likewise, the power of stakeholders’ initiatives to influence corporate strategies is dubious. In search of new perspectives on these matters, the present study concentrates on the following research questions: Do customers value products/companies that respect animal rights?
If so, does such enhanced value convert into actions on the part of the companies? Broadly, we aim to understand whether customers’ perception holds performative traits (i.e., whether it is capable of either triggering or contributing to changes in organizational behaviour around the respect for animal rights). In addressing these issues, two preliminary behavioral vignette-based experiments were conducted, with the perspectives of 307 participants being assessed. Building on a case from the cosmetics industry, social, emotional, and functional values were hypothesized as directly impacting positive word-of-mouth, which, in turn, would carry direct effects on purchase intention. A first structural equation model was analyzed with the combined samples of studies I and II. Results suggest that emotional value strongly impacts both positive word-of-mouth and purchase intention. The data confirm initial expectations that customers value products and companies that comply with ethical postures concerning animals, especially if social-oriented practices are also present.
Keywords: animal rights, business ethics, emotional value, ethical consumption
284 A Case Study on Problems Originated from Critical Path Method Application in a Governmental Construction Project
Authors: Mohammad Lemar Zalmai, Osman Hurol Turkakin, Cemil Akcay, Ekrem Manisali
Abstract:
In public construction projects, determining the contract period in the award phase is one of the most important factors. The contract period establishes the baseline for creating the cash flow curve and progress payment planning in the post-award phase. An overestimated project duration causes losses for both the owner and the contractor. Therefore, it is essential to base construction project duration on reliable forecasting. In Turkey, schedules are usually built using the bar chart (Gantt) schedule, especially by governmental construction agencies. The usage of these schedules is limited to bidding purposes. Although the bar-chart schedule is useful in some cases, it lacks logical connections between activities; it is harder to identify the activities that affect the project's total duration most, especially in large, complex projects. In this study, a construction schedule is prepared with the Critical Path Method (CPM) to address the above-mentioned discrepancies. CPM is a simple and effective method that displays the project duration and critical paths, showing the results of forward and backward calculations while considering the logical relationships between activities; it is a powerful tool for planning and managing all kinds of construction projects and is a very convenient method for the construction industry. CPM provides a much more useful and precise approach than the traditional bar-chart diagrams that form the basis of construction planning and control. CPM has two main application utilities in the construction field: the first is obtaining the project duration, via a so-called as-planned schedule that includes as-planned activity durations with relationships between subsequent activities. The other utility arises during project execution: each activity is tracked, and its duration is recorded in order to obtain the as-built schedule, which serves as a black box of the project.
The latter is more useful for delay analysis and conflict resolution. These features of CPM have been popular around the world. However, it has not yet been extensively used in Turkey. In this study, a real construction project is investigated as a case study; CPM-based scheduling is used for establishing both the as-built and as-planned schedules. Problems that emerged during the construction phase are identified and categorized. Subsequently, solutions are suggested. Two scenarios were considered. In the first scenario, CPM was used to track and manage project progress based on real-time data. In the second scenario, project progress was supposedly tracked under the assumption that the Gantt chart was used. The S-curves of the two scenarios are plotted and interpreted. Comparing the results, possible faults of the latter scenario are highlighted, and solutions are suggested. The importance of CPM implementation is emphasized, and it is proposed that CPM-based construction schedules be made mandatory in public construction project contracts.
Keywords: as-built, case-study, critical path method, Turkish government sector projects
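The forward and backward calculations that CPM rests on can be sketched in a few lines (a hedged illustration with an invented four-activity network, not the case-study schedule): the forward pass yields earliest start/finish times, the backward pass yields latest start/finish times, and activities with zero total float form the critical path.

```python
from collections import defaultdict

# Invented activity network: name -> (duration, [predecessors]),
# listed in topological order.
activities = {
    "A": (3, []),          # e.g., site preparation
    "B": (5, ["A"]),       # foundations
    "C": (2, ["A"]),       # utilities
    "D": (4, ["B", "C"]),  # structure
}

def cpm(acts):
    """Forward pass: earliest start/finish. Backward pass: latest
    start/finish. Zero total float (ls == es) marks the critical path."""
    order = list(acts)
    es, ef = {}, {}
    for a in order:                        # forward pass
        dur, preds = acts[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    project_duration = max(ef.values())
    succs = defaultdict(list)
    for a, (_, preds) in acts.items():
        for p in preds:
            succs[p].append(a)
    ls, lf = {}, {}
    for a in reversed(order):              # backward pass
        dur, _ = acts[a]
        lf[a] = min((ls[s] for s in succs[a]), default=project_duration)
        ls[a] = lf[a] - dur
    critical = [a for a in order if ls[a] == es[a]]
    return project_duration, critical

duration, critical_path = cpm(activities)  # duration = 12, critical path A-B-D
```

Because activity C has two days of float, it never appears on the critical path; a delay there would not move the project end date, whereas any delay on A, B, or D would.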
283 Sustainability in Space: Material Efficiency in Space Missions
Authors: Hamda M. Al-Ali
Abstract:
From addressing fundamental questions about the history of the solar system to exploring other planets for any signs of life, such goals have always been at the core of human space exploration. This triggered humans to explore whether other planets such as Mars could support human life. Therefore, many space missions to other planets have been designed and conducted to examine the feasibility of human survival on them. However, space missions are expensive and consume a large quantity of various resources to be successful. To overcome these problems, material efficiency should be maximized through the use of reusable launch vehicles (RLVs) rather than disposable and expendable ones. Material efficiency is defined as a way to achieve service requirements using fewer materials in order to reduce CO2 emissions from industrial processes. Materials such as aluminum-lithium alloys, steel, Kevlar, and reinforced carbon-carbon composites used in the manufacturing of spacecraft could be reused in closed-loop cycles, either directly or after adding a protective coat. Material efficiency is a fundamental principle of a circular economy. The circular economy aims to cut back waste and reduce pollution by maximizing material efficiency so that businesses can succeed and endure. Five strategies have been proposed to improve material efficiency in the space industry, which include minimizing waste, introducing Key Performance Indicators (KPIs) to measure material efficiency, and introducing policies and legislation to improve material efficiency in the space sector. Another strategy to boost material efficiency is to maximize resource and energy efficiency through material reusability. Furthermore, the environmental effects associated with the rapid growth in the number of space missions include black carbon emissions that lead to climate change. The levels of emissions must be tracked and tackled to ensure the safe utilization of space in the future.
The aim of this research paper is to examine and suggest effective methods to improve material efficiency in space missions so that space and Earth become more environmentally and economically sustainable. The objectives used to fulfill this aim are to identify the materials used in space missions that are suitable for reuse in closed-loop cycles, considering material efficiency indicators and circular economy concepts. An explanation of how spacecraft materials could be reused is provided, along with proposed strategies for maximizing material efficiency in order to make RLVs possible, so that access to space becomes affordable and reliable. Also, the economic viability of RLVs is examined to show the extent to which their use reduces space mission costs. The environmental and economic implications of the increase in the number of space missions as a result of the use of RLVs are also discussed. These research questions are studied through a detailed critical analysis of the literature, such as published reports, books, scientific articles, and journals. A combination of keywords such as material efficiency, circular economy, RLVs, and spacecraft materials was used to search for appropriate literature.
Keywords: access to space, circular economy, material efficiency, reusable launch vehicles, spacecraft materials
282 Parents as a Determinant for Students' Attitudes and Intentions toward Higher Education
Authors: Anna Öqvist, Malin Malmström
Abstract:
Attaining a higher level of education has become an increasingly important prerequisite for people’s economic and social independence and mobility. Young people who do not pursue higher education are not as attractive as potential employees in the modern work environment. Although completing a higher education degree is not a guarantee of getting a job, it substantially increases the chances of employment and, consequently, the chances of a better life. Despite this, fewer students in several regions of Sweden are choosing to engage in higher education. Similar trends have been noted in, for instance, the US, where high dropout rates among young people have been observed. This is a threat to future employment and industry development in these regions, because society's future employment base depends upon students’ willingness to invest in higher education. Much of the prior research has focused on the role of parents’ involvement in their children’s schoolwork and the positive influence that involvement has on children's school performance. Parental influence on education in general has been a topic of interest among those concerned with optimal developmental and educational outcomes for children and youth in pre-, secondary, and high school. Across a range of studies, a strong conclusion has emerged that parental influence on children's and youths' education generally benefits their learning and school success. Arguably, then, we could expect parents' influence on whether or not to pursue a higher education to be important for understanding young people's choice to engage in higher education. Accordingly, understanding what drives students’ intentions to pursue higher education is an essential component of motivating students to aspire to make the most of their potential in their future work life.
Drawing on the theory of planned behavior, this study examines the role of parental influence on students’ attitudes about whether higher education can be beneficial to their future work life. We used a qualitative approach, collecting interview data from 18 high school students in Sweden to capture the cognitive and motivational mechanisms (attitudes) that influence intentions to engage in higher education. We found that parents may positively or negatively influence students’ attitudes and, subsequently, a student's intention to pursue higher education. Accordingly, our results show that parents’ own attitudes toward and expectations of their children are key to influencing students’ attitudes and intentions regarding higher education. Further, our findings illuminate the mechanisms that drive students in one direction or the other. As such, our findings show that the same categories of arguments are used to drive students’ attitudes and intentions in two opposite directions, namely financial arguments and work-life benefit arguments. Our results contribute to the existing literature by showing that parents do affect young people’s intentions to engage in higher studies. The findings contribute to the theory of planned behavior, have implications for the literature on higher education and educational psychology, and also provide guidance on how to inform students in school about the facts of higher studies.
Keywords: higher studies, intentions, parents' influence, theory of planned behavior
281 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become increasingly important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional basic long-term feature vectors, NMF-based feature vectors are proposed for use alongside them in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values for each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database was used for training and testing. It is composed of 10 genres with 100 songs per genre. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features such as the statistical features and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV together with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
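The test-stage idea of representing a song by its weights on pre-trained, per-genre NMF bases and classifying with an SVM can be sketched as follows (a toy illustration with synthetic "spectral" data; the nonnegative least-squares projection is a simplification of the actual NMF weight extraction, and all sizes and names are invented):

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_genres, songs_per_genre, n_bins = 3, 30, 40

# Toy "spectral" data: each genre is a noisy copy of its own template.
templates = rng.random((n_genres, n_bins))
X = np.vstack([t + 0.1 * rng.random((songs_per_genre, n_bins))
               for t in templates])
y = np.repeat(np.arange(n_genres), songs_per_genre)

# Training stage: learn one NMF basis per genre, then pool the bases.
bases = np.vstack([
    NMF(n_components=1, max_iter=500, random_state=0).fit(X[y == g]).components_
    for g in range(n_genres)
])

# Test stage (simplified): a song's feature vector is its weights on the
# pooled bases, here a least-squares projection clipped to be nonnegative.
W = np.clip(X @ np.linalg.pinv(bases), 0.0, None)

clf = SVC(kernel="rbf").fit(W, y)
accuracy = clf.score(W, y)  # training accuracy on the toy data
```

The compactness advantage reported above shows up directly here: the SVM sees only one weight per pooled basis vector rather than the full spectral dimensionality.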
280 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in a pool fire, jet fire, and even an explosion when the release meets one of the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting among the traditional detection methods. The current best-performing method, the Real-Time Transient Model (RTTM), requires the analysis of closely positioned pressure, flow, and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not a viable alternative from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy.
While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling in the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
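The simulation-to-classifier workflow described above can be sketched as follows (synthetic pressure and flow data stand in for CFD output; the feature names, operating point, and leak signatures are invented for illustration only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
leak = rng.integers(0, 2, n)            # 1 = simulated leak scenario
inlet_p = rng.normal(60.0, 1.0, n)      # bar, invented operating point

# In the simulated data, a leak depresses downstream pressure and raises
# the inlet/outlet flow imbalance (illustrative signatures only).
outlet_p = inlet_p - rng.normal(5.0, 0.5, n) - 2.0 * leak
flow_imbalance = rng.normal(0.0, 0.1, n) + 0.8 * leak

X = np.column_stack([inlet_p, outlet_p, flow_imbalance])
X_tr, X_te, y_tr, y_te = train_test_split(X, leak, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)      # held-out detection accuracy
```

In the approach the abstract describes, the synthetic rows above would instead come from CFD simulations of the pipeline under leak and no-leak conditions, which is what makes a large labeled training set affordable.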
279 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the “bundle of rights” theory, this paper establishes a specific three-level structure of data rights. This paper analyzes the cases Google v. Vidal-Hall, Halliday v. Creation Consumer Finance, Douglas v. Hello! Limited, Campbell v. MGN, and Imerman v. Tchenquiz. It concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, big data
278 Collaborative Procurement in the Pursuit of Net-Zero: A Converging Journey
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
The Architecture, Engineering, and Construction (AEC) sector plays a critical role in the global transition toward sustainable and net-zero built environments. However, the industry faces unique challenges in planning for net-zero while struggling with low productivity, cost overruns, and an overall resistance to change. Traditional practices fall short due to their inability to meet the requirements for systemic change, especially as governments increasingly demand transformative approaches. Working in silos within rigid hierarchies, together with a short-term, client-centric approach that prioritises immediate gains over long-term benefit, stands in stark contrast to the fundamental requirements for the realisation of net-zero objectives. These practices have limited capacity to effectively integrate AEC stakeholders and promote the essential knowledge sharing required to address the multifaceted challenges of achieving net-zero. In the context of the built environment, procurement may be described as the method by which a project proceeds from inception to completion. Collaborative procurement methods under the Integrated Practices (IP) umbrella have the potential to align more closely with net-zero objectives. This paper explores the synergies between collaborative procurement principles and the pursuit of net-zero in the AEC sector, drawing upon the shared values of cross-disciplinary collaboration, Early Supply Chain Involvement (ESI), use of standards and frameworks, digital information management, strategic performance measurement, integrated decision-making principles, and contractual alliancing. To investigate the role of collaborative procurement in advancing net-zero objectives, a structured research methodology was employed. First, the study conducts a systematic review of the application of collaborative procurement principles in the AEC sphere. Next, a comprehensive analysis is conducted to identify common clusters of these principles across multiple procurement methods.
An evaluative comparison between traditional procurement methods and collaborative procurement for achieving net-zero objectives is presented. Then, the study identifies the intersection between collaborative procurement principles and net-zero requirements. Lastly, key insights for AEC stakeholders are explored, focusing on the implications and practical applications of these findings. Directions for the future development of this research are recommended. Adopting collaborative procurement principles can serve as a strategic framework for guiding the AEC sector towards realising net-zero. Synergising these approaches overcomes fragmentation, fosters knowledge sharing, and establishes a net-zero-centered ecosystem. In the context of ongoing efforts to improve project efficiency within the built environment, a critical realisation of their central role becomes imperative for AEC stakeholders. When effectively leveraged, collaborative procurement emerges as a powerful tool to surmount existing challenges in attaining net-zero objectives.
Keywords: collaborative procurement, net-zero, knowledge sharing, architecture, built environment
277 Immune Responses and Pathological Manifestations in Chicken to Oral Infection with Salmonella typhimurium
Authors: Mudasir Ahmad Syed, Raashid Ahmd Wani, Mashooq Ahmad Dar, Uneeb Urwat, Riaz Ahmad Shah, Nazir Ahmad Ganai
Abstract:
Salmonella enterica serovar Typhimurium (Salmonella Typhimurium) is a primary avian pathogen responsible for severe intestinal pathology in younger chickens and for economic losses. Salmonella Typhimurium is also able to cause infection in humans, characterized by typhoid fever and acute gastro-intestinal disease. A study was conducted to investigate the pathological, histopathological, haemato-biochemical, and immunological responses and the expression kinetics of the NRAMP (natural resistance associated macrophage protein) gene family (NRAMP1 and NRAMP2) in broiler chickens at 0, 1, 3, 5, 7, 9, 11, 13, and 15 days following experimental infection with Salmonella Typhimurium. Infection was established in birds through the oral route at 2×10⁸ CFU/ml. Clinical symptoms appeared 4 days post infection (dpi), and after one week the birds showed progressive weakness, anorexia, diarrhea, and lowering of the head. On postmortem examination, the liver showed congestion, hemorrhage, and necrotic foci on its surface, while the spleen, lungs, and intestines revealed congestion and hemorrhages. Histopathological alterations were principally observed in the liver in the second week post infection. Changes in the liver comprised congestion, areas of necrosis, and reticuloendothelial hyperplasia in association with mononuclear cell and heterophilic infiltration. Hematological studies confirmed a significant decrease (P<0.05) in RBC count, Hb concentration, and PCV. The white blood cell count showed a significant increase throughout the experimental study. An increase in heterophils was found up to 7 dpi, and a decreasing pattern was observed afterwards. Initial lymphopenia followed by lymphocytosis was found in infected chicks. Biochemical studies showed a significant increase in glucose, AST, and ALT concentrations and a significant decrease (P<0.05) in total protein and albumin levels in the infected group. Immunological studies showed higher titers of IgG in the infected group as compared to the control group.
The real-time gene expression of the NRAMP1 and NRAMP2 genes increased significantly (P<0.05) in the infected group as compared to controls. The peak expression of the NRAMP1 gene was seen in the liver, spleen and caecum of infected birds at 3 dpi, 5 dpi and 7 dpi respectively, while the peak expression of the NRAMP2 gene in the liver, spleen and caecum of infected chickens was seen at 9 dpi, 5 dpi and 9 dpi respectively. This study has a role in diagnostics and prognostics in the poultry industry for the detection of Salmonella infections at early stages of poultry development. Keywords: biochemistry, histopathology, NRAMP, poultry, real time expression, Salmonella Typhimurium
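The abstract does not state how the real-time expression values were quantified; a common choice for relative qPCR data of this kind is the 2^-ΔΔCt (Livak) method, in which the target gene (e.g., NRAMP1) is normalized against a housekeeping reference gene and the infected group is compared with the uninfected control. The sketch below assumes that method, and all Ct values in it are hypothetical.

```python
def fold_change(ct_target_infected, ct_ref_infected, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method.

    dCt normalizes the target gene to a reference (housekeeping) gene;
    ddCt compares infected vs. control; fold change = 2 ** -ddCt.
    """
    d_ct_infected = ct_target_infected - ct_ref_infected
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_infected - d_ct_control
    return 2 ** -dd_ct

# Hypothetical Ct values: NRAMP1 in infected liver at 3 dpi vs. control,
# each normalized to a housekeeping reference gene.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # 8.0
```

With these illustrative numbers, a 3-cycle drop in normalized Ct corresponds to an 8-fold up-regulation.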
Procedia PDF Downloads 332276 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites
Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu
Abstract:
Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or arise from the easy doping of these materials with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications does not come only from the use of graphene or graphene oxide alone, or from their prior functionalization with different moieties; they are also building blocks and important components in many composite devices, their addition bringing new functionalities to the final composite or strengthening those already present in the parent product. An attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties that are comparable to the starting material or even worse: if the amount added is too high (greater than 3% by mass relative to the parent material), the graphene oxide agglomerates become tearing points in the final material. Two different types of tests were performed on the obtained materials, the standard hardness test and the standard tensile strength test, both before and after the aging process. For the aging process, accelerated aging was used in order to simulate the effect of natural aging over a long period of time. The accelerated aging was carried out in extreme heat.
FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison, due to the very small amounts introduced into the final composite along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength test and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged. Keywords: graphene, graphene oxide, mechanical properties, doping effect
Procedia PDF Downloads 314275 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites
Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga
Abstract:
Improving the thermal performance of buildings is a high priority for the construction industry. With the increase in environmental concerns, new types of construction materials are being developed, including bio-based insulation materials. They capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer still raises some issues. Because of their high porosity, mass transfer is more significant in these materials than in mineral insulation materials. They can therefore be more sensitive to moisture disorders such as mold growth, condensation, or a decrease in the wall's energy efficiency. For this reason, the initial moisture content on the construction site is crucial knowledge. Measuring moisture content in a laboratory is a well-mastered task. Diverse methods exist, but the easiest, and the reference method, is gravimetric: a material is weighed dry and wet, and its moisture content is deduced mathematically. Non-destructive (NDT) methods are promising tools for determining moisture content easily and quickly, in a laboratory or on construction sites. However, the quality and reliability of the measurements are influenced by several factors. Classical portable NDT devices usable on-site measure the capacitance or the resistivity of materials. Water's electrical properties are very different from those of construction materials, which is why the water content can be deduced from these measurements. However, most moisture meters are made to measure wooden materials; some of them can be adapted to construction materials with calibration curves, but these devices are almost never calibrated for insulation materials. The main objective of this study is to determine the reliability of moisture meters in the measurement of bio-based insulation materials.
The study determines which of the capacitive and resistive methods is the more accurate and which device gives the best results. Several bio-based insulation materials are tested: recycled cotton, two types of wood fiber of different densities (53 and 158 kg/m³), and a mix of linen, cotton, and hemp. Since it is important to also assess the behavior of a mineral material, glass wool is measured as well. An experimental campaign is performed in a laboratory. A gravimetric measurement of the materials is carried out for every level of moisture content. These levels are set using a climatic chamber, by setting the relative humidity level at a constant temperature. The mass-based moisture contents measured are considered the reference values, and the results given by the moisture meters are compared to them. A complete analysis of the measurement uncertainty is also performed. These results are used to analyze the reliability of the moisture meters depending on the materials and their water content. This makes it possible to determine whether the moisture meters are reliable, and which one is the most accurate. They will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, on both new-build and renovation projects. Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing
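The gravimetric reference measurement described above reduces to a one-line formula; the sketch below states it explicitly on a dry-mass basis (the sample masses are illustrative, not taken from the study).

```python
def moisture_content_dry_basis(m_wet_g, m_dry_g):
    """Gravimetric (dry-basis) moisture content in percent:
    u = (m_wet - m_dry) / m_dry * 100
    """
    if m_dry_g <= 0:
        raise ValueError("dry mass must be positive")
    return (m_wet_g - m_dry_g) / m_dry_g * 100.0

# Example with illustrative masses for a wood-fibre sample:
u = moisture_content_dry_basis(m_wet_g=11.2, m_dry_g=10.0)
print(f"{u:.1f} %")  # 12.0 %
```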
Procedia PDF Downloads 124274 Pioneering Conservation of Aquatic Ecosystems under Australian Law
Authors: Gina M. Newton
Abstract:
Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., ecosystems) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: the contribution of invasive species to conservation status; how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and the identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State level remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place.
There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing, mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least over immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and the ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome of this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged – the recognition that while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly. Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species
Procedia PDF Downloads 132273 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most accessible means of communication, allowing us to exchange feelings and thoughts quickly. Quite often, people who can communicate orally cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and likewise easier to listen to audio played from a device than to extract output from computers or devices. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an 'Audio-Visual Co-Data Processing Pipeline.' This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains information about the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text.
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels. The pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, implemented by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can introduce a new speech command format by including examples of that format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded using this pipeline so that the user gives speech commands and the output is played from the device. Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
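In the pipeline described above, GPT-3 extracts the target labels and time interval from the transcribed command. As a rough, rule-based stand-in for that step, the sketch below parses one plausible command format with regular expressions; the format string and the small label subset are assumptions for illustration, not the project's actual prompt or label list.

```python
import re

def parse_command(transcript):
    """Extract target object labels and a (start, end) second interval from
    a transcribed command such as:
        "find person and dog between 10 and 45 seconds"
    """
    m = re.search(r"between (\d+) and (\d+) seconds?", transcript)
    start, end = (int(m.group(1)), int(m.group(2))) if m else (None, None)
    # Subset of the 80 YOLO labels, for illustration only:
    labels = [w for w in ("person", "dog", "car", "bicycle")
              if re.search(rf"\b{w}\b", transcript)]
    return labels, (start, end)

print(parse_command("find person and dog between 10 and 45 seconds"))
# (['person', 'dog'], (10, 45))
```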
Procedia PDF Downloads 80272 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine
Authors: Luphumlo Ncanywa, Paul Watts
Abstract:
Malaria is a disease that kills approximately one million people annually, and children and pregnant women in sub-Saharan Africa are disproportionately affected. Malaria continues to be one of the major causes of death, especially in poor countries in Africa; decreasing the burden of malaria and saving lives is essential. There is a major concern about malaria parasites developing resistance to antimalarial drugs, and people are still dying due to the lack of affordable medicines in less well-off countries. If the cost of drugs were reduced so that more people could receive treatment, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability in many African countries; however, one has to question how Africa would actually manufacture the drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, hence increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at production level. This emerging technique will be used to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production.
Using an approach referred to as scale-out or numbering-up, a reaction is first optimized within the laboratory using a single micro-reactor, and in order to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize the synthesis of antimalarial drugs in continuous processing. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing lumefantrine in a batch process are transferred into a continuous flow process in order to develop an even better and reproducible process. The development of an appropriate synthetic route for lumefantrine is therefore significant for the pharmaceutical industry. If better (and cheaper) manufacturing routes to antimalarial drugs can be developed and implemented where needed, it is far more likely that antimalarial drugs will be available to those in need. Keywords: antimalarial, flow, lumefantrine, synthesis
Procedia PDF Downloads 202271 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids
Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka. B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh
Abstract:
The best-known structures of lyotropic liquid crystalline systems are the two-dimensional hexagonal phase of cylindrical micelles, with positive interfacial curvature, and the lamellar phase of flat bilayers, with zero interfacial curvature. In aqueous surfactant solutions, concentration-dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentration, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature are still unexplored steps on the route of the phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important, as it determines the complex rheological behavior of the system, which is one of the main interests of the soft matter industry. Sodium dodecyl sulfate (SDS) is an anionic surfactant and can be considered a unique system in which to tune the electrostatics with cationic additives. In the present study, imidazolium-based ionic liquids (ILs) with different numbers of carbon atoms in their single hydrocarbon chain were used as additives in aqueous solutions of SDS. At a fixed concentration of the total non-aqueous components (SDS and IL), the molar ratio of these components was changed, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is modified and, correspondingly, the structure of the hexagonal liquid crystalline phase is transformed into other phases. Polarizing optical microscopy of the SDS and imidazolium-based IL systems exhibited different textures of the liquid crystalline phases as a function of increasing IL concentration.
The small-angle synchrotron x-ray diffraction (SAXD) study indicated that the hexagonal phase of direct cylindrical micelles transforms to a rectangular phase in the presence of a short-chain (two-carbon) IL. However, the hexagonal phase is transformed to a lamellar phase in the presence of a long-chain (ten-carbon) IL. Interestingly, in the presence of a medium-chain (four-carbon) IL, the hexagonal phase is transformed into another hexagonal phase of direct cylindrical micelles through the lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small-angle x-ray diffraction study revealed the lattice parameters of these phases to be similar to each other, their rheological behavior has been distinctly different. These rheological studies have shed light on how the phases differ in their viscoelastic behavior. Finally, the packing parameters, calculated for these phases based on the geometry of the aggregates, explain the formation of the self-assembled aggregates. Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction
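The packing parameter mentioned above is p = v / (a0 · lc), where v and lc are the volume and extended length of the hydrocarbon tail and a0 is the effective head-group area. A minimal sketch using Tanford's empirical tail formulas follows; the head-group area of 62 Å² for SDS is an assumed, literature-typical value, not a number reported in this study.

```python
def tanford_tail_volume_A3(n_carbons):
    # Tanford's estimate of hydrocarbon tail volume (cubic angstroms)
    return 27.4 + 26.9 * n_carbons

def tanford_tail_length_A(n_carbons):
    # Tanford's estimate of the fully extended tail length (angstroms)
    return 1.5 + 1.265 * n_carbons

def packing_parameter(n_carbons, a0_A2):
    # p = v / (a0 * lc): p < 1/3 spheres, 1/3..1/2 cylinders, ~1 bilayers
    return tanford_tail_volume_A3(n_carbons) / (a0_A2 * tanford_tail_length_A(n_carbons))

# SDS has a C12 tail; a0 ~ 62 A^2 is an assumed head-group area.
p = packing_parameter(12, 62.0)
print(round(p, 2))  # ~0.34: cylindrical micelles, consistent with a hexagonal phase
```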
Procedia PDF Downloads 138270 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high-added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; clearly, proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses; in this context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process, and as a consequence traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm.
Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option for determining the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique. Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
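The KMT method compared above is an iterative, multivariate extension of dynamic time warping; at its core, however, lies the classic dynamic-programming alignment of two trajectories of unequal length. A minimal single-variable sketch of that core follows (the example trajectories are synthetic, not conching data).

```python
def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two 1-D trajectories
    of possibly different lengths; returns the cumulative alignment cost."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A batch sampled at a slower rate still aligns perfectly with the reference:
ref = [0, 1, 2, 3, 2, 1, 0]
slow = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
print(dtw_distance(ref, slow))       # 0.0 - identical shape despite duration
print(dtw_distance(ref, [0, 5, 0]))  # larger cost for a different profile
```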
Procedia PDF Downloads 167269 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits
Authors: Bryan Low
Abstract:
In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net-zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries around the world, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves as a nutrient in coastal ecosystems at lower concentrations; at higher concentrations, however, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, given the prevailing hydrodynamic conditions, and its environmental impact. This will be essential as ammonia bunkering becomes more commonplace in Singapore's ports and around the world.
Specifically, the system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures, or to choose a time window with ideal hydrodynamic conditions, so that ammonia bunkering operations can be conducted with minimal risk. In this paper, we present the first part of such a forecasting system: a coupled hydrodynamic-water quality model that captures how advection-diffusion processes driven by ocean currents influence plume behaviour, and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, impact on local ecosystems, and mitigation measures for future bunkering operations conducted in the Singapore Straits. Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling
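As a much-simplified illustration of the advection-diffusion transport that the coupled model resolves, the sketch below integrates a one-dimensional pulse release with upwind advection and central diffusion; all parameter values are illustrative and are not calibrated to the Singapore Straits.

```python
# Minimal 1-D advection-diffusion sketch of a dissolved ammonia pulse
# carried by a current; illustrative parameters, not Strait-calibrated.
u, D = 0.5, 5.0          # current speed (m/s), diffusivity (m^2/s)
dx, dt, nx = 10.0, 2.0, 400
c = [0.0] * nx
c[50] = 100.0            # instantaneous release (arbitrary concentration units)

steps = 500              # 1000 s of simulated transport
for _ in range(steps):
    new = c[:]
    for i in range(1, nx - 1):
        adv = -u * (c[i] - c[i - 1]) / dx                   # upwind advection
        dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2  # central diffusion
        new[i] = c[i] + dt * (adv + dif)
    c = new

peak = max(range(nx), key=lambda i: c[i])
print(peak * dx - 50 * dx)  # plume centre moved roughly u * t = 500 m downstream
```

Both stability conditions of the explicit scheme hold here (u·dt/dx = 0.1 and D·dt/dx² = 0.02), and total dissolved mass is conserved away from the boundaries.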
Procedia PDF Downloads 83268 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inextricably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. From a material performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom led to serious questioning or to challenges in defining what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. Second, its reliance on wood assemblies forming walls, floors and roofs conventionally nailed together through simple plate surfaces is structurally inefficient; it requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, looking back at the history of wood construction in the airplane- and boat-building industries, we see a significant transformation in the relationship of structure to skin. Boat construction evolved from indigenous wood practices such as birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material.
The monocoque, which translates to 'single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Monocoque systems have largely existed outside the domain of architecture; however, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper will examine the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing. Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 78267 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century
Authors: G. Iacoviello, A. Lazzini
Abstract:
This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries, and on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce and public administration. From a political perspective, the period was characterized by two dominant movements - liberalism (1861-1922) and fascism (1922-1945) - that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources are numerous original documents issued between 1890 and 1935 by the government and maintained in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronted by the critical intellectual in finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful and political process, providing students with values, ideas, and models that remain close to them and that they will subsequently use to discipline themselves. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed to explain how power operates within society and how mechanisms of power affect everyday lives. Power is employed at all levels and through many dimensions, including government. Schools exercise 'epistemological power' - a power to extract a knowledge of individuals from individuals.
Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the 'power-knowledge' interrelation. Individuals have become both objects and subjects in the development of knowledge. Just as education plays a fundamental role in shaping all aspects of communities, the structural changes resulting from economic, social and cultural development affect educational systems. Analogously, the important changes related to social and economic development required legislative intervention to regulate the functioning of different areas of society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy's education system is coherent with the idea that power and knowledge do not exist independently but are instead coterminous. This research aims to reduce the gap in the literature by analysing the role of the state in the development of accounting education in Italy. Keywords: education system, government, knowledge, power
Procedia PDF Downloads 139