Search results for: Croatian industries
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1861


121 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode

Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel

Abstract:

In recent years, with the decreasing supply of fossil fuels, demand for renewable energy has risen significantly. Biodiesel, a fatty acid methyl ester derived from vegetable oils, is an alternative fuel for diesel engines. It can be produced by transesterification of vegetable oils, such as palm oil, sunflower oil, and rapeseed oil, with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to about 10 wt% of total biodiesel production. To date, owing to the rapid growth of biodiesel production worldwide, the crude glycerol supply has also increased rapidly, resulting in a significant price drop for glycerol. Extensive research has therefore been devoted to using glycerol as a feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, and acrolein. The industrial processes usually involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by an electrochemical approach is rarely discussed. Currently, that approach is mainly focused on electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals; electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds electrochemically under galvanostatic mode. This work aimed to identify the compounds produced from glycerol by electrochemical means in a one-pot electrolysis cell. The electro-organic synthesis from glycerol was carried out in a single-compartment reactor for 8 hours, using platinum cathode and anode electrodes under acidic conditions. The effects of electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C) were evaluated.

The products obtained were characterized by gas chromatography–mass spectrometry equipped with an aqueous-stable polyethylene glycol stationary-phase column. Under the optimized reaction conditions, glycerol conversion reached as high as 95%. The glycerol was successfully converted into various added-value chemicals such as ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7%, and 5%, respectively. Based on the products obtained, a reaction mechanism for the process is proposed. In conclusion, this study successfully converted glycerol into a wide variety of added-value compounds. These chemicals have high market value and can be used in the pharmaceutical, food, and cosmetic industries. The study effectively opens a new approach to the electrochemical conversion of glycerol. For further enhancement of product selectivity, the electrode material is an important parameter to consider.
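A defining feature of galvanostatic operation is that the charge passed is directly proportional to electrolysis time. As an illustrative back-of-the-envelope calculation (not taken from the paper), the total charge and moles of electrons delivered over the 8-hour run at each current setting can be sketched as:

```python
FARADAY = 96485.0  # C per mole of electrons

def electrons_passed(current_a, hours):
    """Moles of electrons delivered at constant current (Q = I * t)."""
    charge_c = current_a * hours * 3600.0
    return charge_c / FARADAY

# Currents studied in the abstract, over the 8-hour electrolysis
for current in (1.0, 2.0, 3.0):
    print(f"{current:.1f} A -> {electrons_passed(current, 8):.3f} mol e-")
```

At 1.0 A, roughly 0.3 mol of electrons are delivered over 8 hours; tripling the current triples the charge, which is why current and temperature are the natural parameters to vary under this mode.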

Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode

Procedia PDF Downloads 177
120 Toxic Chemicals from Industries into Pacific Biota: Investigation of Polychlorinated Biphenyls (PCBs), Dioxins (PCDD), Furans (PCDF) and Polybrominated Diphenyl Ethers (PBDE No. 47) in Tuna and Shellfish in Kiribati, Solomon Islands and the Fiji Islands

Authors: Waisea Votadroka, Bert Van Bavel

Abstract:

The most commonly consumed marine species in the Pacific, shellfish and tuna, were investigated for the occurrence of a range of brominated and chlorinated contaminants in order to establish current levels. Polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs) and polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) were analysed in the muscle of the tuna species Katsuwonus pelamis, yellowfin tuna, and shellfish species from the Fiji Islands. The investigation of PCBs, furans (PCDFs) and polybrominated diphenyl ethers (PBDE No. 47) in tuna and shellfish in Kiribati, Solomon Islands and Fiji is necessary because of the lack of research data in the Pacific region. The health risks involved in consuming marine foods laced with toxic organochlorinated and brominated compounds make the analysis of these compounds in marine foods important, particularly when Pacific communities rely on these resources as their main diet. The samples were homogenized in a mortar with anhydrous sodium sulphate in a ratio of 1:3 (muscle) and 1:4-1:5 (roe and butter). The tuna and shellfish samples were homogenized and freeze-dried at the sampling location at the Institute of Applied Science, Fiji. All samples were stored in amber glass jars at -18 °C until extraction at Orebro University. PCDD/Fs, PCBs and pesticides were all analysed on an AutoSpec Ultima HRGC/HRMS operating at 10,000 resolution with EI ionization at 35 eV. All measurements were performed in selective ion recording (SIR) mode, monitoring the two most abundant ions of the molecular cluster (PCDD/Fs and PCBs). Results indicated concentration ranges of 0.7-238.6 pg/g lipid for the Fiji composite sample of Batissa violacea; 1.6-808.6 pg/g lipid for the Fiji composite of Anadara antiquata; 7.5-3770.7 pg/g lipid for Solomon Islands Katsuwonus pelamis; 2.1-778.4 pg/g lipid for Solomon Islands yellowfin tuna; and 4.8-1410 pg/g lipid for Kiribati Katsuwonus pelamis.

The study has demonstrated that these species are good bio-indicators of the presence of these toxic organic pollutants in edible marine foods. Our results suggest that among the pesticides, p,p'-DDE is the most dominant for all groups and is highest, at 565.48 pg/g lipid, in the composite Batissa violacea from Fiji. For PBDE No. 47, comparing all samples, the composite Batissa violacea from Fiji had the highest level, 118.20 pg/g lipid. Based on this study, the contamination levels found in the study species were considerably lower than levels reported in impacted ecosystems around the world.

Keywords: polychlorinated biphenyls, polybrominated diphenyl ethers, pesticides, organochlorinated pesticides, PBDEs

Procedia PDF Downloads 355
119 The Social Ecology of Serratia entomophila: Pathogen of Costelytra giveni

Authors: C. Watson, T. Glare, M. O'Callaghan, M. Hurst

Abstract:

The endemic New Zealand grass grub (Costelytra giveni, Coleoptera: Scarabaeidae) is an economically significant grassland pest in New Zealand. Due to their impact on production within the agricultural sector, one of New Zealand's primary industries, several methods are used to either control or prevent the establishment of new grass grub populations in pasture. One such method involves a biopesticide based on the bacterium Serratia entomophila. This species is one of the causative agents of amber disease, a chronic disease of the larvae which results in death via septicaemia after approximately 2 to 3 months. The ability of S. entomophila to cause amber disease is dependent upon the presence of the amber disease associated plasmid (pADAP), which encodes the key virulence determinants required for the establishment and maintenance of the disease. Following the collapse of grass grub populations within the soil, resulting from either natural population build-up or application of the bacteria, non-pathogenic plasmid-free Serratia strains begin to predominate within the soil. Whilst the interactions between S. entomophila and grass grub larvae are well studied, less is known about the interactions between plasmid-bearing and plasmid-free strains, particularly the potential impact of these interactions upon the efficacy of an applied biopesticide. Using a range of constructed strains with antibiotic tags, in vitro (broth culture) and in vivo (soil and larvae) experiments were conducted using inoculants comprised of differing ratios of isogenic pathogenic and non-pathogenic Serratia strains, enabling the relative growth of pADAP+ and pADAP- strains under competition to be assessed.

In nutrient-rich broth culture, the non-pathogenic pADAP- strain outgrew the pathogenic pADAP+ strain by day 3 when inoculated in equal quantities, and by day 5 when applied as the minority inoculant; however, there was an overall gradual decline in the number of viable bacteria of both strains over a 7-day period. Similar results were obtained in additional experiments using the same strains and continuous broth cultures re-inoculated at 24-hour intervals, although in these cultures the viable cell count did not diminish over the 7-day period. When the same ratios were assessed in soil microcosms with limited available nutrients, the strains remained relatively stable over a 2-month period. Additionally, in vivo grass grub co-infection assays using the same ratios of tagged Serratia strains gave results similar to those observed in soil, but there was also evidence of horizontal transfer of pADAP from the pathogenic to the non-pathogenic strain within the larval gut after a period of 4 days. Whilst the influence of competition is more apparent in broth cultures than within soil or larvae, further testing is required to determine whether this competition between pathogenic and non-pathogenic Serratia strains has any influence on efficacy and disease progression, and how it may affect the ability of S. entomophila to cause amber disease within grass grub larvae when applied as a biopesticide.

Keywords: biological control, entomopathogen, microbial ecology, New Zealand

Procedia PDF Downloads 130
118 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements

Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang

Abstract:

Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automotive industries. To evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements. In this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams, and the notion of local plasticity is no longer valid. Moreover, the deformed shape of the lattice structure does not correspond to the one obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerically hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. The approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic lattice structures with z-struts (BCCZ) and without z-struts (BCC).

However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the hybrid-model results is investigated. For BCCZ lattice structures, the results are not affected by the junction size. This also holds for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Given the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.
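For context on the ellipse parametrization, the three fitted parameters map naturally onto beam-element section properties. The sketch below is an assumption for illustration (the actual LATANA-to-SpaceClaim pipeline is not detailed in the abstract): it computes the cross-sectional area and principal second moments of area of an elliptical microbeam section; the fitted rotation angle would only orient these principal axes in space.

```python
import math

def ellipse_section_properties(semi_major, semi_minor):
    """Area and principal second moments of area of an elliptical
    cross-section with semi-axes a (major) and b (minor)."""
    area = math.pi * semi_major * semi_minor
    i_major_axis = math.pi * semi_major * semi_minor**3 / 4.0  # bending about the major axis
    i_minor_axis = math.pi * semi_major**3 * semi_minor / 4.0  # bending about the minor axis
    return area, i_major_axis, i_minor_axis

# Hypothetical fitted ellipse: a = 0.6 mm, b = 0.5 mm
area, i_maj, i_min = ellipse_section_properties(0.6, 0.5)
```

An eccentric section is stiffer in bending about its minor axis (the a³ term), which is exactly the kind of defect-induced anisotropy a circular-beam idealization would miss.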

Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure

Procedia PDF Downloads 92
117 Belarus Rivers Runoff: Current State, Prospects

Authors: Aliaksandr Volchak, Maryna Barushka

Abstract:

The territory of Belarus is well studied in terms of hydrology, but runoff fluctuations over time require more detailed research in order to forecast future changes in river runoff. River runoff is generally shaped by natural climatic factors, but human impact has lately grown so large that it is comparable to natural processes in shaping runoff. In Belarus, heavy anthropogenic pressure on the environment was caused by large-scale land reclamation in the 1960s. The lands of southern Belarus were reclaimed most extensively, which contributed to changes in runoff. In addition, global warming influences runoff: today we observe an increase in air temperature, a decrease in precipitation, and changes in wind velocity and direction. These result from cyclic climate fluctuations and, to some extent, from the growing concentration of greenhouse gases in the air. Climate change affects Belarus's water resources in many areas: the hydropower industry, other water-consuming industries, water transportation, agriculture, and flood risks. In this research we assessed river runoff according to the scenarios of climate change and the global climate forecast presented in the 4th and 5th Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC), later specified and adjusted by experts from Vilnius Gediminas Technical University using a regional climatic model. In order to forecast changes in climate and runoff, we analyzed their changes from 1961 up to the present. This period is divided in two: the changes from 1986 onward are compared with those observed from 1961 to 1985. Such a division is common practice worldwide. The assessment revealed that, on average, changes in runoff are insignificant all over the country, with a minor increase of 0.5-4.0% in the catchments of the Western Dvina River and the north-eastern part of the Dnieper River.

However, changes in runoff have become more irregular in terms of both the catchment area and the inter-annual distribution over seasons and along river lengths. Rivers in southern Belarus (the Pripyat, the Western Bug, the Dnieper, the Neman) experience a reduction of runoff all year round except in winter, when their runoff increases. The Western Bug catchment is an exception: its runoff decreases all year round. Significant changes are observed in spring: the runoff of spring floods decreases, but the flood comes much earlier. Trends in runoff changes differ between spring, summer, and autumn. In summer in particular, we observe runoff reduction in the south and west of Belarus, with growth in the north and north-east. Our runoff forecast up to 2035 confirms the trend revealed in 1961-2015. According to it, there will be a strong difference between northern and southern Belarus, and between small and big rivers. Although we predict minor changes in total runoff, it is quite possible that they will be uneven across seasons or particular months. In the south of Belarus, runoff may change most in summer but decrease in the remaining seasons, whereas in the northern part runoff is predicted to change insignificantly.
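The two-period comparison described above reduces to a percentage change between period means. A minimal sketch with made-up annual runoff values (stand-ins only, not data from the study) illustrates the calculation:

```python
from statistics import mean

# Hypothetical annual mean runoff values (m^3/s), split at 1985 as in the study
baseline_1961_1985 = [120.0, 118.0, 121.0, 119.0]
recent_1986_on = [121.0, 122.0, 120.5, 123.0]

change_pct = ((mean(recent_1986_on) - mean(baseline_1961_1985))
              / mean(baseline_1961_1985) * 100.0)
print(f"runoff change: {change_pct:+.1f}%")
```

With these stand-in numbers the change is under 2%, of the same order as the 0.5-4.0% increase the study reports for the Western Dvina and north-eastern Dnieper catchments.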

Keywords: assessment, climate fluctuation, forecast, river runoff

Procedia PDF Downloads 104
116 The Cost of Beauty: Insecurity and Profit

Authors: D. Cole, S. Mahootian, P. Medlock

Abstract:

This research contributes to existing knowledge of the complexities surrounding women’s relationship to beauty standards by examining their lived experiences. While there is much academic work on the effects of culturally imposed and largely unattainable beauty standards, the arguments tend to fall into two paradigms. On the one hand is the radical feminist perspective, which argues that women are subjected to absolute oppression within the patriarchal system in which beauty standards have been constructed. This position advocates a complete restructuring of social institutions to liberate women from all types of oppression. On the other hand, there are liberal feminist arguments that focus on choice, arguing that women’s agency in how they present themselves is empowerment. These arguments center on what women do within the patriarchal system in order to liberate themselves. However, there is very little research on the lived experiences of women negotiating these two realms: the complex negotiation between the pressure to adhere to cultural beauty standards and the agency of self-expression and empowerment. By exploring beauty standards through the intersection of societal messages (including macro-level processes such as social media and advertising as well as smaller-scale interactions such as families and peers) and lived experiences, this study seeks to provide a nuanced understanding of how women navigate and negotiate their own presentation and sense of self-identity. Current research sees a rise in the incidence of body dysmorphia, depression, and anxiety since the advent of social media. Approximately 91% of women are unhappy with their bodies and resort to dieting to achieve their ideal body shape, yet only 5% of women naturally possess the body type often portrayed in American movies and media. It is, therefore, crucial that we begin talking about the processes that are affecting self-image and mental health.
A question that arises is that, given these negative effects, why do companies continue to advertise and target women with standards that very few could possibly attain? One obvious answer is that keeping beauty standards largely unattainable enables the beauty and fashion industries to make large profits by promising products and procedures that will bring one up to “standard”. The creation of dissatisfaction for some is profit for others. This research utilizes qualitative methods: interviews, questionnaires, and focus groups to investigate women’s relationships to beauty standards and empowerment. To this end, we reached out to potential participants through a video campaign on social media: short clips on Instagram, Facebook, and TikTok and a longer clip on YouTube inviting users to take part in the study. Participants are asked to react to images, videos, and other beauty-related texts. The findings of this research have implications for policy development, advocacy and interventions aimed at promoting healthy inclusivity and empowerment of women.

Keywords: women, beauty, consumerism, social media

Procedia PDF Downloads 23
115 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures

Authors: Haytam Kasem

Abstract:

The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, during the last decade a new field of adhesion science has emerged, essentially inspired by animals and insects which, during their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts to apply bio-inspired patterned adhesive surfaces in the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material.

The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra, could not explain the topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining several parameters: the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ), and the mean asperity slope (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurements. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load.

Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model

Procedia PDF Downloads 218
114 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability and reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to ensure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.

First, a benchmarking analysis of current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, the size of the target market for the MSA tool was analyzed. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to ensure real-time management of ‘big data’. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
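As a rough illustration of the kind of statistic such a tool automates, the sketch below estimates repeatability (equipment variation) and reproducibility (appraiser variation) from a small crossed gauge study. The data and the simplified variance decomposition are assumptions for illustration only, not the full ANOVA method of the AIAG MSA-4 manual:

```python
from statistics import mean, pvariance

# Hypothetical crossed study: measurements[operator][part] -> repeated trials
measurements = {
    "op1": {"p1": [10.1, 10.2, 10.1], "p2": [12.0, 11.9, 12.1]},
    "op2": {"p1": [10.3, 10.2, 10.4], "p2": [12.2, 12.1, 12.3]},
}

# Repeatability: pooled within-cell (same operator, same part) variance
cells = [trials for parts in measurements.values() for trials in parts.values()]
repeatability_var = mean(pvariance(c) for c in cells)

# Reproducibility (simplified): variance of the operator grand means
operator_means = [mean(t for trials in parts.values() for t in trials)
                  for parts in measurements.values()]
reproducibility_var = pvariance(operator_means)

gauge_rr_var = repeatability_var + reproducibility_var
```

A production tool would additionally separate part-to-part variation, apply the MSA-4 acceptance thresholds, and pull the trial data directly from the instruments over IoT rather than from a hand-entered table.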

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 188
113 The Distribution and Environmental Behavior of Heavy Metals in Jajarm Bauxite Mine, Northeast Iran

Authors: Hossein Hassani, Ali Rezaei

Abstract:

Heavy metals are naturally occurring elements that have a high atomic weight and a density at least five times greater than that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Protecting the environment against various pollutants, such as heavy metals released by industries, mines and modern technologies, is a concern for researchers and industry. To assess soil contamination, the distribution and environmental behavior of heavy metals were investigated. The Jajarm bauxite mine is the most important bauxite deposit discovered in Iran, with a reserve of about 22 million tons; its main ore mineral is diaspore. To estimate the heavy metal content of the Jajarm bauxite mine area and evaluate the pollution level, 50 samples were collected and analyzed for the heavy metals As, Cd, Cu, Hg, Ni and Pb using inductively coupled plasma mass spectrometry (ICP-MS). In this study, evaluation criteria including the contamination factor (CF), average concentration (AV), enrichment factor (EF) and geoaccumulation index (GI) were determined to assess the risk of pollution from heavy metals (As, Cd, Cu, Hg, Ni and Pb) in the Jajarm bauxite mine. In the studied samples, the average recorded concentrations of arsenic, cadmium, copper, mercury, nickel and lead were 18, 0.11, 12, 0.07, 58 and 51 mg/kg, respectively. Comparison of the average heavy metal concentrations and toxic potential in the samples with the world average for uncontaminated soils shows that the averages of Pb and As exceed the world average values.

The contamination factor for the studied elements was calculated on the basis of soil background concentrations and categorized relative to the world average for uncontaminated soils according to the Hakanson classification. The calculated modified degree of contamination for the average soil sample of the study area, based on background values and the world average for uncontaminated soils, indicates a moderate degree of contamination (1.55-2.0). The contamination factor calculations show that, at some stations, the average concentrations of lead and arsenic exceed background values; these unnatural metal concentrations within the study area result from the mining and mineral extraction process. The calculated geoaccumulation indices of the soil samples indicate that copper, nickel, cadmium, arsenic, lead and mercury are at uncontaminated levels. In general, the results indicate that the Jajarm bauxite mine area is uncontaminated with respect to heavy metal pollution and that extracting ore from the mine does not create environmental hazards in the region.
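The contamination factor and geoaccumulation index used above follow standard definitions (Hakanson's CF and Müller's Igeo). A minimal sketch, using the abstract's average As and Pb values but assumed background concentrations (the paper's actual background values are not given in the abstract):

```python
import math

background = {"As": 6.8, "Pb": 27.0}   # assumed background values (mg/kg)
averages = {"As": 18.0, "Pb": 51.0}    # study averages from the abstract

def contamination_factor(c_sample, c_background):
    """Hakanson CF: measured concentration over background."""
    return c_sample / c_background

def geoaccumulation_index(c_sample, c_background):
    """Mueller Igeo; the factor 1.5 compensates for natural
    fluctuation of the background."""
    return math.log2(c_sample / (1.5 * c_background))

for element in ("As", "Pb"):
    cf = contamination_factor(averages[element], background[element])
    igeo = geoaccumulation_index(averages[element], background[element])
    print(f"{element}: CF = {cf:.2f}, Igeo = {igeo:.2f}")
```

On the usual scales, Igeo below 1 corresponds to "uncontaminated to moderately contaminated", which is consistent with the paper's overall conclusion even where CF exceeds 1.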

Keywords: enrichment factor, geoaccumulation index, heavy metals, Jajarm bauxite mine, pollution

Procedia PDF Downloads 266
112 Strength Performance and Microstructure Characteristics of Natural Bonded Fiber Composites from Malaysian Bamboo

Authors: Shahril Anuar Bahari, Mohd Azrie Mohd Kepli, Mohd Ariff Jamaludin, Kamarulzaman Nordin, Mohamad Jani Saad

Abstract:

Formaldehyde release from wood-based panel composites can be very toxicity and may increase the risk of human health as well as environmental problems. A new bio-composites product without synthetic adhesive or resin is possible to be developed in order to reduce these problems. Apart from formaldehyde release, adhesive is also considered to be expensive, especially in the manufacturing of composite products. Natural bonded composites can be termed as a panel product composed with any type of cellulosic materials without the addition of synthetic resins. It is composed with chemical content activation in the cellulosic materials. Pulp and paper making method (chemical pulping) was used as a general guide in the composites manufacturing. This method will also generally reduce the manufacturing cost and the risk of formaldehyde emission and has potential to be used as an alternative technology in fiber composites industries. In this study, the natural bonded bamboo fiber composite was produced from virgin Malaysian bamboo fiber (Bambusa vulgaris). The bamboo culms were chipped and digested into fiber using this pulping method. The black liquor collected from the pulping process was used as a natural binding agent in the composition. Then the fibers were mixed and blended with black liquor without any resin addition. The amount of black liquor used per composite board was 20%, with approximately 37% solid content. The composites were fabricated using a hot press machine at two different board densities, 850 and 950 kg/m³, with two sets of hot pressing time, 25 and 35 minutes. Samples of the composites from different densities and hot pressing times were tested in flexural strength and internal bonding (IB) for strength performance according to British Standard. Modulus of elasticity (MOE) and modulus of rupture (MOR) was determined in flexural test, while tensile force perpendicular to the surface was recorded in IB test. 
Results show that the strength performance of the 850 kg/m³ composites was significantly higher than that of the 950 kg/m³ composites, especially for samples hot pressed for 25 minutes. Composites pressed for 25 minutes generally outperformed those pressed for 35 minutes. The maximum mean values of strength performance were recorded for composites with 850 kg/m³ density and 25 minutes pressing time: 3251.84 MPa (MOE), 16.88 MPa (MOR) and 0.27 MPa (IB). Only the MOE result conformed to the high density fiberboard (HDF) requirement (2700 MPa) in the British Standard for Fiberboard Specification, BS EN 622-5:2006. Microstructure characteristics can also be related to the strength performance of the composites: the fiber damage observed in composites of 950 kg/m³ density, together with overheating of the black liquor, led to low strength properties, especially in the IB test.
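The MOE and MOR values quoted above come from flexural testing. As a minimal sketch, the standard three-point bending formulas can be applied to illustrate the calculation; all dimensions and loads below are invented for illustration, not the paper's raw data.

```python
# Hypothetical sketch: standard three-point bending formulas used to
# derive MOR and MOE from a flexural test. Inputs in N and mm give MPa.

def modulus_of_rupture(f_max, span, width, thickness):
    """MOR = 3 * F_max * L / (2 * b * t^2)."""
    return 3 * f_max * span / (2 * width * thickness ** 2)

def modulus_of_elasticity(d_force, d_deflection, span, width, thickness):
    """MOE = L^3 * (dF/da) / (4 * b * t^3), using the slope of the
    linear region of the load-deflection curve."""
    return span ** 3 * (d_force / d_deflection) / (4 * width * thickness ** 3)

# Illustrative numbers only:
mor = modulus_of_rupture(f_max=300.0, span=150.0, width=50.0, thickness=6.0)
moe = modulus_of_elasticity(d_force=120.0, d_deflection=1.0,
                            span=150.0, width=50.0, thickness=6.0)
print(mor, moe)  # MPa
```

With these invented inputs the sketch yields MOR = 37.5 MPa and MOE = 9375 MPa; only the formulas, not the numbers, correspond to the test described in the abstract.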

Keywords: bamboo fiber, natural bonded, black liquor, mechanical tests, microstructure observations

Procedia PDF Downloads 233
111 Investigating the Key Success Factors of Supplier Collaboration Governance in the Aerospace Industry

Authors: Maria Jose Granero Paris, Ana Isabel Jimenez Zarco, Agustin Pablo Alvarez Herranz

Abstract:

In the industrial sector, collaboration with suppliers is key to the development of process innovations. Access to resources and expertise that are not available within the business, a cost advantage, and a reduction of the time needed to innovate are some of the benefits associated with this process. However, the success of the collaboration is compromised when clear rules governing the relationship are not established from the beginning. Abundant studies in the field of innovation emphasize the strategic importance of the concept of “governance”. Despite this, few papers have analyzed how the governance of the relationship must be designed and managed to ensure the success of the collaboration. The scarcity of literature in this area reflects the wide diversity of contexts in which collaborative innovation takes place. In sectors such as the car industry, there is a strong collaborative tradition between manufacturers and the suppliers that form part of the value chain. In this case, it is common to establish mechanisms and procedures that fix formal and clear objectives to regulate the relationship and establish the rights and obligations of each of the parties involved. By contrast, in other sectors, collaborative relationships to innovate are not a common way of working, particularly when their aim is the development of process improvements. In such cases, the lack of mechanisms to establish and regulate the behavior of those involved can give rise to conflicts and to the failure of the cooperative relationship.
For this reason, the present paper analyzes the similarities and differences in the governance of supplier collaboration in the European aerospace industry. With these ideas in mind, the aims of the research are twofold: to understand the importance of governance as a key element of successful collaboration in the development of product and process innovations, and to establish the mechanisms and procedures that ensure the proper management of collaborative processes. Following a case study methodology, we analyze the way manufacturers and suppliers cooperate in the development of new products and processes in two industries with different levels of technological intensity and collaborative tradition: automotive and aerospace. Identifying the elements that play a key role in successful governance and relationship management, and understanding the mechanisms of regulation and control in place in the automotive sector, can be used to propose solutions to some of the conflicts that currently arise in the aerospace industry. The paper concludes by analyzing the strategic implications for the aerospace industry of adopting some of the practices traditionally used in other industrial sectors. Finally, this paper presents the first results of a research project currently in progress that describes a model of governance explaining how outsourced services to suppliers are managed in the European aerospace industry, through the analysis of companies in the sector located in Germany, France and Spain.

Keywords: supplier collaboration, supplier relationship governance, innovation management, product innovation, process innovation

Procedia PDF Downloads 433
110 Methodology to Assess the Circularity of Industrial Processes

Authors: Bruna F. Oliveira, Teresa I. Gonçalves, Marcelo M. Sousa, Sandra M. Pimenta, Octávio F. Ramalho, José B. Cruz, Flávia V. Barbosa

Abstract:

The EU Circular Economy Action Plan, launched in 2020, is one of the major initiatives to promote the transition to a more sustainable industry. The circular economy is a concept now embraced by many companies. Some industries are better positioned for this transition than others; the tannery industry is a sector that needs particular attention due to its strong environmental impact, caused by its size, intensive resource consumption, the limited recyclability and second use of its products, and the industrial effluents generated by its manufacturing processes. For these reasons, the zero-waste goal and the European objectives are still far from being achieved. In this context, an effective methodology is needed to determine the level of circularity of tannery companies. Given the complexity of the circular economy concept, few factories have a sustainability specialist to assess the company’s circularity or the ability to implement circular strategies that could benefit their manufacturing processes. Although several methodologies exist to assess circularity in specific industrial sectors, there is no straightforward, ready-to-use methodology applied in factories aiming for cleaner production. Therefore, a straightforward methodology to assess the level of circularity, in this case of a tannery, is presented and discussed in this work, allowing any company to measure the impact of its activities. The methodology consists of calculating an Overall Circular Index (OCI) by evaluating the circularity of four key areas -energy, material, economy and social- in a specific factory. The index is a value between 0 and 1, where 0 means a linear economy and 1 a completely circular economy. Each key area has a sub-index, obtained through key performance indicators (KPIs) for that theme, and the OCI is the average of the four sub-indexes.
Some fieldwork in the selected company was required to obtain the necessary data. Because the sub-indexes are kept separate, one can observe which areas are more linear than others and work on the most critical ones by implementing strategies to increase the OCI. After these strategies are implemented, the OCI is recalculated to check the improvements made and any other changes in the remaining sub-indexes. As such, the methodology works through continuous improvement, constantly re-evaluating and improving the circularity of the factory. The methodology is also flexible enough to be implemented in any industrial sector by adapting the KPIs. It was applied in a selected Portuguese small and medium-sized enterprise (SME) tannery and proved to be a relevant tool for measuring the circularity level of the factory. It was observed that the methodology makes it easier for non-specialists to evaluate circularity, identify possible solutions to increase its value, and learn how a given action can impact their environment. In the end, energy and environmental inefficiencies were identified and corrected, increasing the sustainability and circularity of the company. This work thus provides important contributions, helping Portuguese SMEs to achieve the European and UN 2030 sustainability goals.
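The OCI computation described above reduces to simple averaging once the KPIs are normalised. The sketch below illustrates it; the KPI names and values are entirely hypothetical, and each sub-index is assumed to be the mean of its already-normalised (0 to 1) KPIs, since the abstract does not specify the aggregation within a key area.

```python
# Minimal sketch of the Overall Circular Index (OCI). KPI names and
# values are invented; 0 = fully linear economy, 1 = fully circular.

def sub_index(kpis):
    """Average of normalised KPIs (each in [0, 1]) for one key area."""
    return sum(kpis.values()) / len(kpis)

def overall_circular_index(areas):
    """OCI = mean of the four key-area sub-indexes."""
    return sum(sub_index(kpis) for kpis in areas.values()) / len(areas)

areas = {
    "energy":   {"renewable_share": 0.40, "energy_recovered": 0.20},
    "material": {"recycled_input": 0.30, "waste_reused": 0.50},
    "economy":  {"circular_revenue": 0.25, "repair_services": 0.15},
    "social":   {"training_coverage": 0.60, "local_employment": 0.80},
}
print(round(overall_circular_index(areas), 3))
```

Keeping the sub-indexes separate, as the abstract notes, is what lets a non-specialist see at a glance which area ("economy" in this toy example) is dragging the overall index down.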

Keywords: circular economy, circularity index, sustainability, tannery industry, zero-waste

Procedia PDF Downloads 44
109 How to Assess the Attractiveness of Business Location According to the Mainstream Concepts of Comparative Advantages

Authors: Philippe Gugler

Abstract:

Goal of the study: The concept of competitiveness has been addressed by economic theorists and policymakers for hundreds of years, with both groups trying to understand the drivers of economic prosperity and social welfare. The goal of this contribution is to review the major theoretical contributions that allow the main drivers of a territory’s competitiveness to be identified. We first present the major contributions found in the classical and neo-classical theories. We then concentrate on two major schools that provide significant insights into the competitiveness of locations: the Economic Geography (EG) school and the International Business (IB) school. Methodology: The study is based on a literature review of the classical and neo-classical theories, the economic geography theories and the international business theories, establishing links between these theoretical mainstreams. The review is designed to respond to our research question and to support further research in this field. Results: The pioneering classical and neo-classical theories provide initial insights that territories differ and that these differences explain the discrepancies in their levels of prosperity and standards of living. These theories emphasize different factors affecting the level and growth of productivity in a given area and therefore the degree of its competitiveness. However, they are not sufficient to identify more precisely the drivers and enablers of location competitiveness, and in particular the factors that drive the creation and expansion of economic activities, the creation of new firms and the attraction of foreign firms. Prosperity is due to economic activities created by firms.
We therefore need further theoretical insights to scrutinize the competitive advantages of territories or, in other words, their ability to offer the best conditions enabling economic agents to achieve higher rates of productivity in open markets. Two major theories provide, to a large extent, the needed insights: economic geography theory and international business theory. The economic geography studies scrutinized here, from Marshall to Porter, aim to explain the drivers of the concentration of specific industries and activities in specific locations. These agglomerations of activity may be due to the creation of new enterprises, the expansion of existing firms, and the attraction of firms located elsewhere. Regarding this last possibility, the international business (IB) theories focus on the comparative advantages of locations as far as the strategies of multinational enterprises (MNEs) are concerned. According to international business theory, the comparative advantages of a location serve firms not only in exploiting their ownership advantages (mostly in market-seeking, resource-seeking and efficiency-seeking investments) but also in augmenting and/or creating new ownership advantages (strategic asset-seeking investments). The impact of a location on the competitiveness of firms is considered from both sides: the MNE’s home country and the MNE’s host country.

Keywords: competitiveness, economic geography, international business, attractiveness of businesses

Procedia PDF Downloads 117
108 Vortex Control by a Downstream Splitter Plate in Pseudoplastic Fluid Flow

Authors: Sudipto Sarkar, Anamika Paul

Abstract:

Pseudoplastic fluids (n < 1, where n is the power-law index) are of great importance in the food, pharmaceutical and chemical process industries and therefore deserve close attention. Unfortunately, due to their complex flow behavior, little research is available even for the laminar flow regime. The present work addresses a practical problem by numerical simulation: controlling the vortex shedding from a square cylinder using a horizontal splitter plate placed in the downstream flow region. The plate lies on the centerline of the cylinder at varying distances from it, so as to determine the critical gap-ratio; if the plate is placed within this critical gap, vortex shedding from the cylinder is suppressed completely. The Reynolds number considered here lies in the unsteady laminar vortex shedding regime, Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity, a is the side of the cylinder and ν is the maximum kinematic viscosity of the fluid). Flow behavior has been studied for three gap-ratios (G/a = 2, 2.25 and 2.5, where G is the gap between cylinder and plate) and for fluids with three flow behavior indices (n = 1, 0.8 and 0.5). The flow domain was constructed using Gambit 2.2.30, which was also used to generate the mesh and impose the boundary conditions. For G/a = 2, the domain size is 37.5a × 16a with 316 × 208 grid points in the streamwise and flow-normal directions respectively, chosen after a thorough grid independence study. Fine, uniform grid spacing is used close to the geometry to capture the vortices shed from the cylinder and the boundary layer developed over the flat plate; away from the geometry the mesh is stretched with unequal spacing. For the other gap-ratios, proportionate domain sizes and grid counts are used with a similar mesh distribution.
A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry (free-slip) conditions at the upper and lower domain boundaries are used for the simulation. No-slip wall conditions (u = v = 0) are imposed on both the cylinder and the splitter plate surfaces. The discretized forms of the fully conservative 2-D unsteady Navier-Stokes equations are then solved with Ansys Fluent 14.5 using the SIMPLE algorithm, a default finite-volume solver included in Fluent. The results obtained for Newtonian fluid flow agree well with previous works, supporting Fluent’s usefulness in academic research. A thorough analysis of instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It is observed that as the value of n decreases, the stretching of the shear layers also decreases and the layers tend to roll up before reaching the plate. For flow with high pseudoplasticity (n = 0.5), the nature of the vortex shedding changes and the value of the critical gap-ratio decreases. These are notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
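The Reynolds number definition used above, Re = U∞a/ν with ν the maximum kinematic viscosity, can be made concrete for a power-law fluid, whose apparent viscosity follows the Ostwald-de Waele relation μ = K·γ̇^(n−1). The sketch below uses entirely hypothetical fluid properties and geometry (the abstract does not give K, ρ or a); for n < 1 the viscosity peaks at the lowest shear rate of interest.

```python
# Hedged sketch: power-law apparent viscosity and the Reynolds number
# definition Re = U_inf * a / nu used in the abstract. All numeric
# values (K, n, rho, gamma_min, a) are invented for illustration.

def apparent_viscosity(k_index, n, gamma_dot):
    """Ostwald-de Waele apparent viscosity mu = K * gamma_dot**(n-1), Pa*s."""
    return k_index * gamma_dot ** (n - 1)

def reynolds_number(u_inf, a, nu):
    """Re = U_inf * a / nu (nu: kinematic viscosity, m^2/s)."""
    return u_inf * a / nu

k_index, n, rho = 0.01, 0.8, 1000.0  # consistency (Pa*s^n), flow index, density
gamma_min = 0.5                      # smallest shear rate of interest, 1/s
mu_max = apparent_viscosity(k_index, n, gamma_min)  # n < 1: max at low shear
nu_max = mu_max / rho                # maximum kinematic viscosity, m^2/s
a = 0.01                             # cylinder side, m
u_inf = 100.0 * nu_max / a           # free-stream velocity giving Re = 100
print(round(reynolds_number(u_inf, a, nu_max), 6))
```

Setting n = 1 recovers the Newtonian case (viscosity independent of shear rate), matching the n = 1 reference simulations in the study.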

Keywords: CFD, pseudoplastic fluid flow, wake-boundary layer interactions, critical gap-ratio

Procedia PDF Downloads 89
107 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact

Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze

Abstract:

Reinforcing composites with fabrics considerably improves their mechanical properties, including resistance to impact loads and energy absorption. Warp-knitted spacer fabrics (WKSF) consist of two layers of warp-knitted fabric connected by pile yarns; these connections create a space between the layers, filled by the pile yarns, and give the fabric a three-dimensional shape. Because of their unique properties, spacer fabrics are now widely used in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF has much better moisture transfer and compressive properties and lower heat resistance than PU foam. The use of warp-knitted spacer fabric reinforced PU foam (WKSFRF) may therefore yield a composite with better energy absorption than the foam alone, improved moldability, and improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally, together with the contribution of each structural parameter of the WKSF to the absorption of impact energy. For this purpose, WKSFs with different structures were produced: two thicknesses, small and large mesh sizes, and meshes either facing each other or not. Six types of composite samples with different structural parameters were then fabricated. Physical properties such as weight per unit area and fiber volume fraction were measured for three samples of each composite type. A low-velocity impact with an initial energy of 5 J was carried out on three samples of each type.
The output of the low-velocity impact test is an acceleration-time (A-T) curve containing many outlying points; to obtain usable results, these points were removed using the FILTFILT function of MATLAB R2018a. Using Newton’s laws, a force-displacement (F-D) curve was derived from the A-T curve; the energy absorbed equals the area under the F-D curve. The maximum energy absorption, 2.858 J, was recorded for the samples reinforced with fabric of large mesh, high thickness, and meshes not facing each other. An index called energy absorption efficiency (EAE) was defined as the absorbed energy of a composite divided by its fiber volume fraction. By this index, the best EAE among the samples is 21.6, obtained by the sample with large mesh, high thickness, and meshes facing each other; the EAE of this sample is 15.6% better than the average EAE of the other composite samples. On average, energy absorption increased by 21.2% with increasing thickness, by 9.5% when the mesh size was increased from small to large, and by 47.3% when the mesh position was changed from facing to non-facing.
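The post-processing pipeline described above (smooth the A-T signal, integrate the F-D curve, divide by fiber volume fraction) can be sketched as follows. A centred moving average stands in for MATLAB's zero-phase FILTFILT here, and all signals and values are synthetic, invented purely for illustration.

```python
import numpy as np

# Hedged sketch of the impact-test post-processing: smoothing, area
# under the force-displacement curve, and the EAE index. Synthetic data.

def smooth(signal, window=5):
    """Centred moving average; a simple stand-in for zero-phase filtering."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def absorbed_energy(force, displacement):
    """Area under the force-displacement curve (trapezoidal rule), J."""
    dx = np.diff(displacement)
    return float(np.sum(dx * (force[:-1] + force[1:]) / 2.0))

def energy_absorption_efficiency(energy, fiber_volume_fraction):
    """EAE as defined in the abstract: absorbed energy / fiber volume fraction."""
    return energy / fiber_volume_fraction

# Synthetic half-sine impact pulse (not the paper's data):
d = np.linspace(0.0, 0.01, 101)        # displacement, m
f = 500.0 * np.sin(np.pi * d / 0.01)   # force, N; analytic area = 10/pi J
e = absorbed_energy(f, d)
print(round(energy_absorption_efficiency(e, 0.15), 2))
```

The trapezoidal integral of the synthetic pulse is close to its analytic area of 10/π ≈ 3.18 J; dividing by an assumed fiber volume fraction of 0.15 gives the dimensionally analogous EAE value.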

Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric

Procedia PDF Downloads 142
106 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach

Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake

Abstract:

Transportation and handling of delicate, lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics through theoretical and experimental investigation. It is demonstrated that the ultrasonic transducer prototype is better suited in terms of levitation capability, although it presents operating and mechanical design difficulties. For accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components and integrated circuits, non-contact, oil-free, ultra-precision and low-wear transport along the production line is crucial. The first design (design A), called the ultrasonic chuck, is built around an ultrasonic transducer (Langevin, FBI 28452 HS). The second (design B) is a vibrating plate design consisting of a plain rectangular aluminium plate, 200 × 100 × 2 mm, firmly fastened at both ends. Four round piezoelectric actuators, 28 mm in diameter and 0.5 mm thick, are glued to the underside of the plate, and the plate is clamped at both ends in the horizontal plane by a steel supporting structure. The dynamics of levitation for both designs were investigated on the basis of squeeze-film levitation (SFL). The input apparatus used with the designs consists of a sine wave signal generator connected to an amplifier, type ENP-1-1U (Echo Electronics), which magnifies the sine wave voltage produced by the generator.
The measured maximum levitation heights for three semiconductor wafers weighing 52, 70 and 88 g are, for design A, 240, 205 and 187 µm, respectively, whereas for design B the average separation distance for a 5 g disk reaches 70 µm. Using squeeze-film levitation, it is thus possible to hold an object in a non-contact manner. The analysis of the investigation outcomes indicates that design A provides better non-contact levitation than design B, although design A is more complicated to manufacture. To identify an adequate non-contact SFL design, the two designs are compared in terms of the following issues: floating component geometries and material type constraints; the resulting pressure distributions; harmful interactions with the surrounding space; working environment constraints; and the complexity and compactness of the mechanical design. Considering all these matters is essential to distinguish the better SFL design reliably.

Keywords: ANSYS, floating, piezoelectric, squeeze-film

Procedia PDF Downloads 125
105 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. The study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories (animals, sports, food, landscapes, and vehicles) along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the reduction in that error after fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered for quantifying the reconstruction error. The results indicate a strong correlation between both the reconstruction error and the distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, suggesting that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
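The two image-level measures defined above can be sketched on synthetic vectors; no real autoencoder, VGG weights or MemCat data are loaded here, and the reconstruction error is illustrated as a plain pixel MSE (one of several loss functions the study considers).

```python
import numpy as np

# Illustrative sketch of the measures described above, on synthetic data.

def reconstruction_error(original, reconstructed):
    """Per-image mean squared error between original and reconstructed pixels."""
    return float(np.mean((np.asarray(original) - np.asarray(reconstructed)) ** 2))

def distinctiveness(latents, i):
    """Euclidean distance from latents[i] to its nearest other latent code."""
    d = np.linalg.norm(latents - latents[i], axis=1)
    d[i] = np.inf                        # exclude the image itself
    return float(d.min())

rng = np.random.default_rng(0)
latents = rng.normal(size=(100, 16))     # stand-in latent codes
scores = rng.uniform(size=100)           # stand-in memorability scores

dists = np.array([distinctiveness(latents, i) for i in range(len(latents))])
r = float(np.corrcoef(dists, scores)[0, 1])  # Pearson correlation, as in the study
print(round(r, 3))
```

On real data, the study reports this correlation to be strongly positive; with the random stand-in vectors above it is, of course, near zero, which is why the sketch only demonstrates the computation, not the finding.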

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 48
104 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances

Authors: Chris Thurlbourne

Abstract:

Given the continual demand for building material and the limited environmental credentials of 'normal' building materials, there is a need to examine new and reconditioned material types, both biogenic and non-biogenic, along with the field of research that accompanies them. This research focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In the building industry, and in all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, biogenic and non-biogenic, and both play significant roles in shaping our built environment. Regardless of their properties, all building materials originate from the earth, and many are modified through processing to resist the 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. These modifications offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value. We assume control of all building materials through rigorous quality control specifications and regulations to ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, to which materials naturally act and react through weathering, patination and discoloring, and through natural chemical reactions such as rusting. The purpose of the paper is to present recent research exploring the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how understanding materials' natural processes of transformation when exposed to the external climate can inform initial design decisions.
With their capacity to receive in a transient and contingent manner, the ecological relationships between a material, its colonizing organisms and the resulting performances invite new design explorations that benefit both human society and our natural environment. The research pursues design for the benefit of both, engaging in biogenic and non-biogenic material engineering while embracing the continual demand for colonization, human and environmental, and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing severe deterioration: embracing weathering, patination and discoloring while at the same time establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated through explorations of specific material performances, from the laboratory to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and on facilitating longer-term patinas of material surfaces that extend aesthetics beyond common judgments. Experiments therefore focus on how inherent material qualities drive a design brief toward specific investigations, exploring the aesthetics induced through production, patinas and colonization obtained over time while exposed to, and interacting with, external climate conditions.

Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina

Procedia PDF Downloads 60
103 Howard Mold Count of Tomato Pulp Commercialized in the State of São Paulo, Brazil

Authors: M. B. Atui, A. M. Silva, M. A. M. Marciano, M. I. Fioravanti, V. A. Franco, L. B. Chasin, A. R. Ferreira, M. D. Nogueira

Abstract:

Fungi attack large quantities of fruit, and fruits that have suffered surface injury are especially susceptible to fungal growth, since the fungi possess pectinolytic enzymes that destroy the edible portion, forming an amorphous, soft mass. Spores can reach the plant by wind, rain and insects, and the fruit surface may carry, besides contaminants from the fruit trees, soil and water, a flora composed mainly of yeasts and molds. Further contamination can come from harvesting equipment, from contaminated boxes and washing water, and from storage in unclean places. Hyphae in tomato products indicate the use of contaminated raw material or unsuitable hygiene conditions during processing. Although fungi are inactivated in the heat processing step, their hyphae remain in the final product, and their detection and quantification is an indicator of raw material quality. The Howard method, which counts fungal mycelia in industrialized pulps, evaluates the amount of decayed fruit in the raw material. Brazilian legislation governing processed and packaged products sets a limit of 40% positive fields in tomato pulps. The aim of this study was to evaluate the quality of tomato pulp sold in greater São Paulo through monitoring across the four seasons of the year. Throughout 2010, 110 samples were examined: 21 taken in spring, 31 in summer, 31 in fall and 27 in winter, all from different lots and trademarks, purchased in several stores located in the city of São Paulo. The Howard method recommended by the AOAC (19th ed., 2011, method 965.41) was used. All of the samples contained fungal mycelia. The average mycelium count per season was 23%, 28%, 8.2% and 9.9% in spring, summer, fall and winter, respectively. Of the 21 spring samples analyzed, 14.3% were outside the limits set by the legislation.
All of the fall and winter samples complied with the legislation, and their mean mycelial filament counts did not exceed 20%, which can be explained by the low temperatures in that time of year. The samples acquired in summer and spring showed high percentages of fungal mycelium in the final product, related to the high temperatures of those seasons. Considering the limit of 40% positive fields accepted by Brazilian legislation (RDC No. 14/2014), 3 spring samples (14%) and 6 summer samples (19%) were over this limit and subject to legal penalties. According to the data gathered, 82% of the manufacturers of this product manage to keep fungal mycelia in their product at acceptable levels. In conclusion, only 9.2% of the samples exceeded the limits established by Resolution RDC No. 14/2014, showing that the 40% limit is feasible and can be met by industries in this segment. The mycelial filament count by the Howard method is thus an important tool in microscopic analysis, since it measures the quality of the raw material used in the production of tomato products.
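The pass/fail decision applied above is a simple proportion test: a microscope field is counted as positive when it contains fungal mycelium, and the sample fails when the percentage of positive fields exceeds the 40% legal limit. The sketch below illustrates this; the field data are invented, and the per-field positivity criterion of the Howard method itself is not reproduced here.

```python
# Minimal sketch of the Howard mold count decision. Field data invented.

HOWARD_LIMIT_PCT = 40.0   # limit for tomato pulp under RDC No. 14/2014

def percent_positive_fields(fields):
    """fields: booleans, one per microscope field (True = mycelium present)."""
    return 100.0 * sum(fields) / len(fields)

def conforms(fields, limit=HOWARD_LIMIT_PCT):
    """True when the sample meets the Brazilian legal limit."""
    return percent_positive_fields(fields) <= limit

sample = [True] * 8 + [False] * 17      # 8 positive fields out of 25 = 32%
print(percent_positive_fields(sample), conforms(sample))
```

With 8 of 25 fields positive (32%), this hypothetical sample passes; at 12 of 25 (48%) it would fail, like the spring and summer samples reported above.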

Keywords: fungi, Howard method, tomato pulps

Procedia PDF Downloads 356
102 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management

Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal

Abstract:

Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse such massive data. BDA can assist organisations to capture, store, and analyse data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant, since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at high speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, it raises the question of whether the use of BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the potential of BDA in the whole pharmaceutical supply chain rather than focusing on a single entity.
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to gain improved visibility over the whole supply chain and the market; it enables entities, especially manufacturers, to monitor consumption and demand rates in real time and to make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making allows entities to source and manage their stocks more effectively. This can help address drug demand at hospitals and respond to unanticipated issues such as drug shortages. Earlier studies explored BDA in the context of clinical healthcare; this presentation, in contrast, investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers' insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality in which managers may opt for analytics for improved decision-making in supply chain processes.

Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management

Procedia PDF Downloads 84
101 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence

Authors: Sogand Barghi

Abstract:

The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. 
This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
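The fraud-detection idea sketched above, continuously monitoring transactions and flagging suspicious patterns, can be illustrated with a deliberately minimal statistical screen. This is a generic z-score rule for illustration only, not the method of any specific accounting product; real systems use far richer models (supervised classifiers, graph analysis, rules engines):

```python
# Minimal illustrative fraud screen: flag transactions whose amount deviates
# from the historical mean by more than `threshold` standard deviations.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the transaction amounts whose z-score exceeds the threshold."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A single outsized transaction among routine ones gets flagged:
ledger = [100.0] * 20 + [10000.0]
print(flag_anomalies(ledger))  # [10000.0]
```

In practice such a rule would only be a first-pass filter feeding human review, which matches the abstract's framing of AI freeing accountants for analytical work rather than replacing them.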

Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting

Procedia PDF Downloads 37
100 Iron Oxide Reduction Using Solar Concentration and Carbon-Free Reducers

Authors: Bastien Sanglard, Simon Cayez, Guillaume Viau, Thomas Blon, Julian Carrey, Sébastien Lachaize

Abstract:

The need to develop clean production processes is a key challenge for any industry. Steel and iron industries are particularly concerned since they emit 6.8% of global anthropogenic greenhouse gas emissions. One key step of the process is the high-temperature reduction of iron ore using coke, leading to large amounts of CO2 emissions. One route to decrease these impacts is to eliminate fossil fuels by changing both the heat source and the reducer. The present work aims at investigating experimentally the possibility of using concentrated solar energy and carbon-free reducing agents. Two sets of experiments were carried out. First, in situ X-ray diffraction on pure and industrial hematite powders was performed to study the phase evolution as a function of temperature during reduction under hydrogen and ammonia. Secondly, experiments were performed on industrial iron ore pellets, which were reduced by NH3 or H2 in a "solar furnace" composed of a controllable 1600 W xenon lamp, which simulates the concentrated solar irradiation of a glass reactor, and a diaphragm to control the light flux. Temperature and pressure were recorded during each experiment via thermocouples and pressure sensors. The fraction of iron oxide converted to iron (called hereafter the "reduction ratio") was determined through Rietveld refinement. The power of the light source and the reduction time were varied. Results obtained in the diffractometer reaction chamber show that iron begins to form at 300°C with pure Fe2O3 powder and at 400°C with industrial iron ore when maintained at these temperatures for 60 minutes and 80 minutes, respectively. Magnetite and wuestite are detected in both powders during reduction under hydrogen; under ammonia, iron nitride is also detected at temperatures between 400°C and 600°C.
All the iron oxide was converted to iron after a reaction of 60 min at 500°C, whereas a conversion ratio of 96% was reached with the industrial powder after a reaction of 240 min at 600°C under hydrogen. Under ammonia, full conversion was also reached after 240 min of reduction at 600°C. For the experiments in the solar furnace with iron ore pellets, the lamp power and the shutter opening were varied. A conversion ratio of 83.2% was obtained with a light power of 67 W/cm² without turning over the pellets. Under the same conditions, turning over the pellets in the middle of the experiment made it possible to reach a conversion ratio of 86.4%. A reduction ratio of 95% was reached after an exposure of 16 min by turning over the pellets at half time with a flux of 169 W/cm². Similar or slightly better results were obtained under an ammonia reducing atmosphere: under the same flux, the highest reduction yield of 97.3% was obtained after 28 minutes of exposure. The chemical reaction itself, including the solar heat source, does not produce any greenhouse gases, so solar metallurgy represents a serious way to reduce the greenhouse gas emissions of the metallurgy industry. Nevertheless, the ecological impact of the reducers must be investigated, which will be done in future work.
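The hematite-to-iron conversion described above, with magnetite and wuestite as intermediate phases, follows the standard stepwise reduction sequence under hydrogen. For reference, a sketch of those textbook reactions (the abstract itself does not write them out):

```latex
% Stepwise hydrogen reduction of hematite, consistent with the
% Fe2O3 -> Fe3O4 -> FeO -> Fe phase sequence observed by XRD:
3\,\mathrm{Fe_2O_3} + \mathrm{H_2} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + \mathrm{H_2O}
\mathrm{Fe_3O_4} + \mathrm{H_2} \;\rightarrow\; 3\,\mathrm{FeO} + \mathrm{H_2O}
\mathrm{FeO} + \mathrm{H_2} \;\rightarrow\; \mathrm{Fe} + \mathrm{H_2O}
```

The only gaseous product is water vapor, which is the basis for the abstract's claim that the reduction step itself emits no greenhouse gases.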

Keywords: solar concentration, metallurgy, ammonia, hydrogen, sustainability

Procedia PDF Downloads 108
99 Fucoidan: A Potent Seaweed-Derived Polysaccharide with Immunomodulatory and Anti-inflammatory Properties

Authors: Tauseef Ahmad, Muhammad Ishaq, Mathew Eapen, Ahyoung Park, Sam Karpiniec, Vanni Caruso, Rajaraman Eri

Abstract:

Fucoidans are complex, fucose-rich sulfated polymers found in brown seaweeds. Fucoidans are popular around the world, particularly in the nutraceutical and pharmaceutical industries, due to their promising medicinal properties. Fucoidans have been shown to have a variety of biological activities, including anti-inflammatory effects. They are known to inhibit inflammatory processes through a variety of mechanisms, including enzyme inhibition and selectin blockade. Inflammation is part of the complex biological response of living systems to damaging stimuli, and it plays a role in the pathogenesis of a variety of disorders, including arthritis, inflammatory bowel disease, cancer, and allergies. In the current investigation, fucoidan extracts from Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica were assessed for inhibition of pro-inflammatory cytokine production (TNF-α, IL-1β, and IL-6) in an LPS-induced human macrophage cell line (THP-1) and in human peripheral blood mononuclear cells (PBMCs). Furthermore, we also sought to rank these extracts based on their anti-inflammatory effects in the current in vitro cell model. Materials and Methods: To assess the cytotoxicity of the fucoidan extracts, the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) cell viability assay was performed. Furthermore, a dose-response study of the fucoidan extracts was performed in LPS-induced THP-1 cells and PBMCs after pre-treatment for 24 hours, and the levels of TNF-α, IL-1β, and IL-6 cytokines were measured using an Enzyme-Linked Immunosorbent Assay (ELISA). Results: The MTT cell viability assay demonstrated that the fucoidan extracts exhibited no evidence of cytotoxicity in THP-1 cells or PBMCs after 48 hours of incubation. The sandwich ELISA revealed that all fucoidan extracts suppressed cytokine production in LPS-stimulated PBMCs and human THP-1 cells in a dose-dependent manner.
Notably, at lower concentrations, the lower-molecular-weight fucoidan (5-30 kDa) extract from Macrocystis pyrifera was a highly efficient inhibitor of pro-inflammatory cytokines. Fucoidan extracts from all species, including Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica, exhibited significant anti-inflammatory effects. These findings on several fucoidan extracts provide insight into strategies for improving their efficacy against inflammation-related diseases. Conclusion: In the current research, we have successfully ranked several fucoidan extracts based on their efficiency in downregulating the key pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) in LPS-induced macrophages and PBMCs; these cytokines are prospective targets in human inflammatory illnesses. Further research would provide more information on the mechanism of action, allowing fucoidan to be tested for therapeutic purposes as an anti-inflammatory medication.

Keywords: fucoidan, PBMCs, THP-1, TNF-α, IL-1β, IL-6, inflammation

Procedia PDF Downloads 40
98 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process

Authors: Marek Vondra, Petr Bobák

Abstract:

Vacuum evaporation is a reliable and well-proven technology with a wide application range, frequently used in the food, chemical, and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e., merging different types of desalination technologies. Some studies are concerned with the basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct, and test an up-to-date test rig with an advanced measurement system which provides real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g.
seawater, waste water), sophisticated system for acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy) and integrated transparent features which enable a direct visual control of selected physical mechanisms (water evaporation in chambers, water level right before brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. First experimental tests confirm correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process.
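The core physics the rig is built to observe, flashing of superheated brine as it enters a lower-pressure chamber, is governed by a simple enthalpy balance. A sketch in textbook form (illustrative constants, not the rig's measured values):

```python
# Simplified single-stage flash balance: when brine at t_in enters a chamber
# held at saturation temperature t_stage, the flashed vapor mass fraction x
# follows from cp * (t_in - t_stage) = x * h_fg.

def flash_vapor_fraction(t_in_c: float, t_stage_c: float,
                         cp: float = 4.0, h_fg: float = 2300.0) -> float:
    """Vapor fraction flashed off per stage.

    cp   -- brine specific heat, kJ/(kg*K) (approximate value assumed here)
    h_fg -- latent heat of vaporization at stage pressure, kJ/kg (assumed)
    """
    return cp * (t_in_c - t_stage_c) / h_fg

# A 5 K flashdown releases under 1% of the brine mass as distillate,
# which is why MSF plants chain many stages (8 in the rig described above):
print(round(flash_vapor_fraction(45.0, 40.0), 4))  # 0.0087
```

The small per-stage yield illustrated here is the design rationale for multi-stage operation: total recovery is roughly the per-stage fraction times the number of stages.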

Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation

Procedia PDF Downloads 366
97 Evaluation Of A Start Up Business Strategy In Movie Industry: Case Study Of Visinema

Authors: Stacia E. H. Sitohang, S.Mn., Socrates Rudy Sirait

Abstract:

The first movie theater in Indonesia was established in December 1900. The movie industry started with the penetration of international movies. After a while, local movie producers started to rise and created local Indonesian movies. The industry has grown through ups and downs in Indonesia. In 2008, Visinema was founded in Jakarta, Indonesia, by Angga Dwimas Sasongko, one of the most respected movie directors in Indonesia. After earning achievements and recognition, Visinema chose to grow the company horizontally rather than only growing vertically to gain further similar achievements. Visinema chose to build an ecosystem that enables it to obtain many more opportunities and generate business sustainability. The company operates as an agile company. It created several business subsidiaries to support the company's Intellectual Property (IP) development. This research was done through interviews with the key persons in the company and a questionnaire to get market insights regarding Visinema. The company is able to transform its IP, which initially started from movies, into different kinds of business models. Interestingly, Angga chose to use the start-up approach to create Visinema. In 2019, the company successfully gained Series A funding from Intudo Ventures and obtained various other investment schemes to support the business. In early 2020, the Covid-19 pandemic negatively impacted many industries in Indonesia, especially the entertainment and leisure businesses. Fortunately, Visinema did not face any significant survival problems during the pandemic; there were no lay-offs or work-hour reductions. Instead, they were thinking of much bigger opportunities and problems. While other companies suffered during the pandemic, Visinema created the first focused Transactional Video On Demand (TVOD) service in Indonesia, named Bioskop Online. The platform was created to keep the company innovating and adapting to the new online market that resulted from the Covid-19 pandemic.
Other than a digital platform, Visinema invested heavily in animation to target the kids and family segment. They believed that penetrating the technology and animation market is going to be the biggest opportunity in Visinema's road map. Besides huge opportunities, Visinema is also facing problems. The first is company brand positioning. Angga, as the founder, felt the need to detach his name from the brand image of Visinema to create system sustainability and scalability. Second, the company has to create a strategy to refocus on a particular business area to maintain and improve its competitive advantages. Third, IP piracy is a huge structural problem in Indonesia; the company considers IP thieves its biggest competitors, rather than other production companies. As a recommendation, we suggest a set of branding and management strategies to detach the founder's name from Visinema's brand and improve the competitive advantages. We also suggest Visinema invest in system building to prevent IP piracy in the entertainment industry, which could later become another business subsidiary of Visinema.

Keywords: business ecosystem, agile, sustainability, scalability, start-up, intellectual property, digital platform

Procedia PDF Downloads 113
96 Disrupting Traditional Industries: A Scenario-Based Experiment on How Blockchain-Enabled Trust and Transparency Transform Nonprofit Organizations

Authors: Michael Mertel, Lars Friedrich, Kai-Ingo Voigt

Abstract:

Based on principal-agent theory, an information asymmetry exists in the traditional donation process. Consumers cannot verify whether nonprofit organizations (NPOs) use raised funds according to the designated cause after the transaction has taken place (hidden action). Therefore, charity organizations have tried to appear transparent and gain trust by using the same marketing instruments for decades (e.g., releasing project success reports). However, none of these measures can guarantee consumers that charities will use their donations for the intended purpose. With awareness of the misuse of donations rising due to the Ukraine conflict (e.g., funding crime), consumers are increasingly concerned about the destination of their donations. Therefore, innovative charities like the Human Rights Foundation have started to accept donations via blockchain. Blockchain technology has the potential to establish profound trust and transparency in the donation process: consumers can publicly track the progress of their donation at any time after deciding to donate. This ensures that the charity is not using donations against their original intent. Hence, the aim is to investigate the effect of blockchain-enabled transactions on the willingness to donate. Sample and Design: To investigate consumers' behavior, we use a scenario-based experiment. After removing participants (e.g., due to failed attention checks), 3192 potential donors participated (47.9% female, 62.4% bachelor's degree or above). Procedure: We randomly assigned the participants to one of two scenarios. In both conditions, the participants read a scenario about a fictive charity organization called "Helper NPO." Afterward, the participants answered questions regarding their perception of the charity. Manipulation: The first scenario (n = 1405) represents a typical donation process, where consumers donate money without any option to track and trace.
The second scenario (n = 1787) represents a donation process via blockchain, where consumers can track and trace their donations. Using t-statistics, the findings demonstrate a positive effect of donating via blockchain on participants' willingness to donate (mean difference = 0.667, p < .001, Cohen's d effect size = 0.482). A mediation analysis shows significant effects for the mediation of transparency (estimate = 0.199, p < .001), trust (estimate = 0.144, p < .001), and transparency and trust together (estimate = 0.158, p < .001). The total effect of blockchain usage on participants' willingness to donate (estimate = 0.690, p < .001) consists of the direct effect (estimate = 0.189, p < .001) and the indirect effects of transparency and trust (estimate = 0.501, p < .001). Furthermore, consumers' affinity for technology moderates the direct effect of blockchain usage on participants' willingness to donate (estimate = 0.150, p < .001). Donating via blockchain is a promising way for charities to engage consumers for several reasons: (1) Charities can emphasize trust and transparency in their advertising campaigns. (2) Established charities can target new customer segments by specifically engaging technology-affine consumers. (3) Charities can raise international funds without previous barriers (e.g., setting up bank accounts). Nevertheless, increased transparency can also backfire (e.g., through the disclosure of costs). Such cases require further research.
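The Cohen's d effect size reported above is the mean difference scaled by the pooled standard deviation of the two scenario groups. A minimal sketch of that arithmetic (with made-up illustrative data, not the study's responses):

```python
# Cohen's d for two independent groups: mean difference divided by the
# pooled (sample) standard deviation.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    # Pooled standard deviation, weighting each group's variance by its
    # degrees of freedom:
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Illustrative: two small groups whose means differ by one pooled SD.
print(cohens_d([5, 6, 7], [4, 5, 6]))  # 1.0
```

By the usual rule of thumb, the study's d = 0.482 sits between a "small" (0.2) and "medium" (0.5) effect.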

Keywords: blockchain, social sector, transparency, trust

Procedia PDF Downloads 69
95 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems

Authors: Ramprasad Srinivasan

Abstract:

Engineers create inventions and put their ideas into concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plates, and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformations, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels, helicopter and wind turbine rotor blades, etc. TWCB demonstrate many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which make the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking, particularly in slender beams, lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC).
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, lower convergence due to mesh distortion etc. This mandates frequent re-meshing to even achieve an acceptable mesh (satisfy stringent quality metrics) for analysis leading to significant cycle time. Besides, currently, there is a need for separate formulations (u/p) to model incompressible materials, and a single unified formulation is missing in the literature. Hence coupled field formulation (CFF) is a unified formulation proposed by the author for the solution of complex 1D and 2D structures addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods shall be presented in this paper.

Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation

Procedia PDF Downloads 50
94 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries belonging to the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has not yet produced comprehensive studies of the problem: the scientific literature deals with single aspects of the issue and, moreover, addresses it at the level of individual countries, losing sight of the global implications for the entire EU. The aim of the research is to fill these gaps: the technological, plant engineering, environmental, economic, and employment aspects of the energy transition in question are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years until the total disposal of the internal combustion engine vehicle fleet in the entire EU. The methodologies adopted by the research consist of the analysis of the entire life cycle of electric vehicles and batteries, through the use of specific databases, and the dynamic simulation, using specific calculation codes, of the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards.
Energy balances will be drawn up (to evaluate the net energy saved), plant balances (to determine the surplus demand for power and electrical energy and the sizing of new renewable-source plants required to cover electricity needs), economic balances (to determine the investment costs of this transition, the savings during the operation phase, and the payback times of the initial investments), environmental balances (under the different energy mix scenarios foreseen for 2035, determining the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries), and employment balances (estimating how many jobs will be lost and recovered in the reconversion of the automotive industry, related industries, and the refining, distribution, and sale of petroleum products, and how many will be created through technological innovation, the increase in electricity demand, and the construction and management of street charging columns). New algorithms for forecast optimization are developed, tested, and validated. Compared to other published material, the research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating the issues and improvement solutions in an organic overall view of the topic. The results achieved allow us to identify the strengths and weaknesses of the energy transition, to determine possible solutions to mitigate these weaknesses, and to simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible.

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 38
93 Learning the Most Common Causes of Major Industrial Accidents and Apply Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and confirm that the causes and consequences are often identical. The question remains why we continue to experience similar process incidents even with the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes. The objective of this paper is to investigate the most common causes of major industrial incidents and to present industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, manufacturing, etc. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. One case study illustrates the importance of near-miss incident investigation; the incident was a safe-operating-limit excursion. The case describes deficiencies in management programs, the competency of employees, and the culture of the corporation, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Several investigations of major incidents discovered that each showed several warning signs before occurring and, most importantly, that all were preventable. The author will discuss why preventable incidents were not prevented and review the common causes of learning failures from past major incidents.
The leading causes of past incidents are summarized below. First, management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility. For example, a poorly executed hazard study, such as a HAZID, PHA, or LOPA, is one of the leading causes of this failure. Second, management failure to maintain the integrity of safety-critical systems and equipment: in most of the incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were bypassed, disabled, or not maintained. Third, management failure to learn from, and/or apply the lessons of, past incidents: there were several precursors before those incidents, and these precursors were either ignored altogether or not taken seriously. This paper will conclude by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing risks and preventing major incidents.
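The hazard studies mentioned above include LOPA, whose core arithmetic is simple: the mitigated event frequency is the initiating-event frequency multiplied by the probability of failure on demand (PFD) of each independent protection layer (IPL). As a hedged illustration (not taken from the paper; all numbers below are assumed, order-of-magnitude values typical of LOPA practice), that calculation can be sketched as:

```python
# Minimal Layer of Protection Analysis (LOPA) sketch: the mitigated event
# frequency equals the initiating-event frequency times the PFD of each
# independent protection layer (IPL). Parameter values are illustrative.

def mitigated_frequency(initiating_freq_per_yr, ipl_pfds):
    """Multiply the initiating-event frequency by each IPL's PFD."""
    freq = initiating_freq_per_yr
    for pfd in ipl_pfds:
        freq *= pfd
    return freq

# Assumed example: a 0.1/yr initiating event protected by a basic process
# control loop (PFD 0.1), operator response to an alarm (PFD 0.1), and a
# SIL-2 safety instrumented function (PFD 0.01).
freq = mitigated_frequency(0.1, [0.1, 0.1, 0.01])
print(f"{freq:.1e} events/yr")  # 1.0e-05 events/yr
```

The resulting frequency is then compared against the organization's risk tolerance criteria; if it is still too high, additional or more reliable IPLs are specified.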

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 30
92 Fuels and Platform Chemicals Production from Lignocellulosic Biomass: Current Status and Future Prospects

Authors: Chandan Kundu, Sankar Bhattacharya

Abstract:

A significant disadvantage of fossil fuel energy production is the considerable amount of carbon dioxide (CO₂) released, which is one of the contributors to climate change. Apart from environmental concerns, volatile fossil fuel prices have pushed society gradually towards renewable energy sources in recent years. Biomass is a plentiful and renewable resource and a source of carbon, and recent years have seen increased research interest in generating fuels and chemicals from it. Unlike fossil-based resources, biomass is composed of lignocellulosic material, which does not contribute to the increase in atmospheric CO₂ over the longer term. These considerations underlie the chemical industry's current move from non-renewable feedstocks to renewable biomass. This presentation focuses on generating bio-oil and two major platform chemicals with the potential for environmental benefit. Thermochemical processes such as pyrolysis are considered viable methods for producing bio-oil and biomass-based platform chemicals, and fluidized bed reactors are known to boost bio-oil yields during pyrolysis owing to their superior mixing and heat transfer characteristics, as well as their scalability. This review and the associated experimental work focus on the thermochemical conversion of biomass to bio-oil and two high-value platform chemicals, levoglucosenone (LGO) and 5-chloromethylfurfural (5-CMF), in a fluidized bed reactor. These two active molecules, with their distinct features, are potentially useful monomers in the chemical and pharmaceutical industries, since they are well suited to the manufacture of biologically active products. The process involves several careful steps. First, the biomass was delignified using a peracetic acid pretreatment: because of its complex structure, biomass must be pretreated to remove the lignin, increasing access to the carbohydrate components so that they can be converted to platform chemicals.
The biomass was then characterized by thermogravimetric analysis, synchrotron-based THz spectroscopy, and in-situ DRIFTS in the laboratory. Based on the results, a continuous-feeding fluidized bed reactor system was constructed to generate platform chemicals from pretreated biomass using hydrogen chloride (HCl) acid gas as a catalyst. The procedure also yields biochar, which has a number of potential applications, including soil remediation, wastewater treatment, electrode production, and use as an energy resource; consequently, this research also includes a preliminary experimental evaluation of the biochar's prospective applications, in which the biochar obtained was evaluated for its CO₂ and steam reactivity. The presentation will cover the following: biomass pretreatment for effective delignification; a mechanistic study of the thermal and thermochemical conversion of biomass; thermochemical conversion of untreated and pretreated biomass in the presence of an acid catalyst to produce LGO and 5-CMF; a thermo-catalytic process for the production of LGO and 5-CMF in a continuously fed fluidized bed reactor with efficient separation of the chemicals; and use of the biochar generated from platform chemicals production through gasification.
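The thermogravimetric analysis mentioned above is commonly interpreted, as a first approximation, with a single-step first-order Arrhenius model of mass loss, dX/dt = A·exp(−E/(RT))·(1 − X), where X is conversion. The following is a minimal sketch of that standard model under a linear heating ramp; it is not the authors' kinetic model, and the pre-exponential factor, activation energy, and heating rate are assumed, illustrative values:

```python
# Hedged sketch: single-step, first-order Arrhenius pyrolysis kinetics,
# dX/dt = A * exp(-E/(R*T)) * (1 - X), integrated with forward Euler over
# a linear TGA heating ramp. Kinetic parameters are assumptions, not data.
import math

R = 8.314          # J/(mol K), universal gas constant
A = 1.0e10         # 1/s, pre-exponential factor (assumed)
E = 150_000.0      # J/mol, activation energy (assumed)
BETA = 10.0 / 60   # heating rate: 10 K/min expressed in K/s (assumed)

def conversion_profile(t_start_k=400.0, t_end_k=900.0, dt=0.1):
    """Return (temperature, conversion) pairs along the heating ramp."""
    X, T = 0.0, t_start_k
    profile = []
    while T < t_end_k:
        k = A * math.exp(-E / (R * T))   # rate constant at current T
        X = min(1.0, X + k * (1.0 - X) * dt)
        T += BETA * dt                   # linear temperature ramp
        profile.append((T, X))
    return profile

profile = conversion_profile()
# Conversion rises monotonically from ~0 toward 1 as temperature climbs.
print(f"final T = {profile[-1][0]:.0f} K, X = {profile[-1][1]:.3f}")
```

In practice, fitting such a model to measured TGA curves (often at several heating rates) yields the apparent kinetic parameters used to size and operate pyrolysis reactors; real biomass usually requires multi-component or distributed-activation-energy models.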

Keywords: biomass, pretreatment, pyrolysis, levoglucosenone

Procedia PDF Downloads 105