Search results for: industry applications
11 Observations on Cultural Alterity and Environmental Conservation: Populations "Delayed" and Excluded from Health and Public Hygiene Policies in Mexico (1890-1930)
Authors: Marcela Davalos Lopez
Abstract:
The history of the circulation of hygienic knowledge and the consolidation of public health in Latin American cities towards the end of the 19th century is well known. Mexico City, among others, was inserted into international politics, strengthened its institutions and medical knowledge, applied the parameters of modernity and built sanitary engineering works. Despite the power this hygienist system achieved, its scope was relative: it cannot be generalized to all cities. A comparative and contextual analysis will show that the conclusions of modern urban historiography present, from our contemporary vantage point, fractures. Between 1890 and 1930, the small cities and areas surrounding the Mexican capital adapted international and federal public health regulations in their own way. This will be shown for neighborhoods located around Mexico City and for a medium-sized city about 80 km from the capital, Cuernavaca. While the inhabitants of the neighborhoods witnessed at close range the evolving forms that public hygiene policies were taking (because they were both witnesses and affected in their territories), in Cuernavaca the dictates arrived as an echo. While the capital was drained, large roads were opened, roundabouts were erected, residents were expelled, and drains, sewers and drinking water pipes were built, Cuernavaca remained sheltered in other times and practices. What was this due to? Undoubtedly, the time and energy that politicians and the group of "scientists" devoted to these enormous works in the Mexican capital kept them from addressing the issue in remote villages. It was not until the 20th century that federal hygiene policy began to be strengthened. Even so, other factors emphasize the particularities of each site. I would like to draw attention here to the different receptions that each town gave to public hygiene. 
We will see that Cuernavaca responded to its own semi-rural culture, history, orography and functions, prolonging for much longer, for example, the use of its deep ravines as sewers. For their part, the neighborhoods surrounding the capital, although affected by and excluded from hygienist policies, chose to move away from them and remedy the deficiencies with their own resources (they drew on what remained of the drained Lake of Mexico to continue their lacustrine practices). All of this points to a paradox that shapes our contemporary concerns: on the one hand, the benefits derived from medical knowledge and its technological applications (here referring particularly to the urban health system) and, on the other, the alteration it caused in environmental settings. Places like Cuernavaca (classified as backward by nineteenth-century hygienists and those of the first decades of the twentieth century), as well as landscapes such as the neighborhoods affected by advances in sanitary engineering, keep in their memory buried practices that we observe today as possible ways to reestablish environmental balances: alternative uses of water; recycling of organic materials; local uses of fauna; various systems for breaking down excreta, and so on. In sum, what the nineteenth and first half of the twentieth centuries graded as levels of backwardness or progress turns out to be key information for rethinking the routes of environmental conservation. When we return to the observations of the scientists, politicians and lawyers of that period, we find a historically rejected cultural alterity. Populations such as Cuernavaca that, owing to their history, orography and/or the insufficiency of federal policies, kept different relationships with the environment today give us clues for reorienting basic elements of cities: alternative uses of water, recycling of organic raw materials, or consumption of local products, among others. 
It is, therefore, a matter of unearthing the rejected that cries out to emerge to the surface.
Keywords: sanitary hygiene, Mexico City, cultural alterity, environmental conservation, environmental history
Procedia PDF Downloads 166
10 Effect of Inoculation with Consortia of Plant-Growth Promoting Bacteria on Biomass Production of the Halophyte Salicornia ramosissima
Authors: Maria João Ferreira, Natalia Sierra-Garcia, Javier Cremades, Carla António, Ana M. Rodrigues, Helena Silva, Ângela Cunha
Abstract:
Salicornia ramosissima, a halophyte that grows naturally in coastal areas of the northern hemisphere, is often considered the most promising halophyte candidate for extensive crop cultivation and saline agriculture practices. The expanding interest in this plant goes beyond its use as gourmet food and includes its potential application as a source of bioactive compounds for the pharmaceutical industry. Although it grows well in saline soils, sustainable and ecologically friendly techniques to enhance crop production and the nutritional value of this plant are still needed. The root microbiome of S. ramosissima has proved to be a source of taxonomically diverse plant growth-promoting bacteria (PGPB). Halotolerant strains of Bacillus, Salinicola, Pseudomonas, and Brevibacterium, among other genera, exhibit a broad spectrum of plant-growth-promotion traits [e.g., indole-3-acetic acid (IAA) production, 1-aminocyclopropane-1-carboxylic acid (ACC) deaminase, siderophores, phosphate solubilization, nitrogen fixation] and express a wide range of extracellular enzyme activities. In this work, three plant growth-promoting bacterial strains (Brevibacterium casei EB3, Pseudomonas oryzihabitans RL18, and Bacillus aryabhattai SP20) isolated from the rhizosphere and the endosphere of S. ramosissima roots from different salt marshes along the Portuguese coast were inoculated onto S. ramosissima seeds. Plants germinated from inoculated seeds were grown for three months in pots filled with a mixture of perlite and estuarine sediment (1:1) under greenhouse conditions and later transferred to a growth chamber, where they were maintained for two months with controlled photoperiod, temperature, and humidity. Pots were placed on trays containing the irrigation solution (20% Hoagland’s solution supplemented with 10‰ marine salt). Before reaching the flowering stage, plants were collected, and the fresh and dry weight of the aerial parts was determined. Non-inoculated seeds were used as a negative control. 
Selected dried stems from the most promising treatments were later analyzed by GC-TOF-MS for primary metabolite composition. The efficiency of inoculation and the persistence of the inoculum were assessed by next-generation sequencing. Inoculation with the single strain EB3 and co-inoculations with EB3+RL18 and EB3+RL18+SP20 (the "All" treatment) resulted in significantly higher biomass production (fresh and dry weight) compared to non-inoculated plants. Considering fresh weight alone, inoculation with isolates SP20 and RL18 also had a significant positive effect. Combined inoculation with the consortia SP20+EB3 or SP20+RL18 did not significantly improve biomass production. The analysis of the primary metabolite profile will provide clues to the mechanisms by which the growth-enhancement effect of the inoculants operates in the plants. These results support promising prospects for the use of rhizospheric and endophytic PGPB as biofertilizers, reducing the environmental impacts and operational costs of agrochemicals and contributing to the sustainability and cost-effectiveness of saline agriculture. Acknowledgments: This work was supported by project Rhizomis PTDC/BIA-MIC/29736/2017 financed by Fundação para a Ciência e Tecnologia (FCT) through the Regional Operational Program of the Center (02/SAICT/2017) with FEDER funds (European Regional Development Fund, FNR, and OE) and by FCT through CESAM (UIDP/50017/2020 + UIDB/50017/2020) and LAQV-REQUIMTE (UIDB/50006/2020). We also acknowledge FCT/FSE for financial support to Maria João Ferreira through PhD grant PD/BD/150363/2019. We are grateful to Horta dos Peixinhos for their help and support during sampling and seed collection. 
We also thank Glória Pinto for her collaboration in providing the use of the growth chambers during the final months of the experiment, and Enrique Mateos-Naranjo and Jennifer Mesa-Marín of the Departamento de Biología Vegetal y Ecología, University of Sevilla, for their advice on growing Salicornia plants in greenhouse conditions.
Keywords: halophytes, PGPB, rhizosphere engineering, biofertilizers, primary metabolite profiling, plant inoculation, Salicornia ramosissima
Procedia PDF Downloads 160
9 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model for a three-bladed horizontal-axis wind turbine (HAWT) suited for control applications. It is based on an 8-DOF, lumped-parameter structural dynamics coupled with a quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and subsequently validate, this model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of Theodorsen's theory to thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restrict that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of the relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by radius R = 61.5 m, mean chord c = 3 m, and nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88◦ is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller. 
In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied owing to their high precision, simple implementation and low performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it consists of a control block Crc(s), usually added to an existing feedback control system. The control block contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). It should be noticed that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures; using the phase shifting, the iterative algorithm can reach convergence at high frequencies as well. Notice that, in our case study, the shift of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a C(s) = PD(s), in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller. In particular, starting from wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s). 
Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type.
Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems
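As a rough illustration of the control law described in this abstract (a toy sketch, not the authors' aeroelastic model), the following applies a discrete repetitive term with phase shifting, coupled with a PD controller, to reject a periodic disturbance acting on a simple first-order plant. The plant, gains, filter constant q, period N, and disturbance are all illustrative assumptions.

```python
import numpy as np

# Hedged sketch: discrete repetitive control with phase shifting plus a PD
# term, regulating a toy first-order plant against a period-N disturbance.
# The repetitive memory u_rc[n] = q * u_rc[n - (N - gamma_k)] + e[n] mimics
# the delayed positive-feedback block e^(-(T - gamma_k)s) with low-pass gain q.
def simulate(N=100, gamma_k=0, cycles=40, q=0.95, kp=0.8, kd=0.1):
    n_steps = N * cycles
    x = 0.0                       # plant state to regulate to zero
    u_rc = np.zeros(n_steps)      # repetitive-control memory
    err = np.zeros(n_steps)
    prev_e = 0.0
    for n in range(n_steps):
        d = 0.5 * np.sin(2 * np.pi * n / N)          # periodic disturbance
        e = 0.0 - x                                   # tracking error
        err[n] = e
        delay = N - gamma_k                           # one period minus shift
        delayed = u_rc[n - delay] if n >= delay else 0.0
        u_rc[n] = q * delayed + e
        u = kp * e + kd * (e - prev_e) + u_rc[n]      # PD + repetitive action
        prev_e = e
        x = 0.9 * x + 0.1 * (u + d)                   # toy stable plant
    return err

err = simulate()
rms_first = float(np.sqrt(np.mean(err[:100] ** 2)))   # error in the first period
rms_last = float(np.sqrt(np.mean(err[-100:] ** 2)))   # error after learning
```

Because the disturbance repeats with the same period as the delay line, the repetitive memory gradually learns and cancels it, so the last-period error falls well below the first-period error.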
Procedia PDF Downloads 251
8 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis
Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García
Abstract:
Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. 
As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods of grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect on the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus in which the relevant actors and discourse positions are analysed through conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships. 
Simultaneously, these can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis
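As a toy illustration of the (semi-)automated coding step described above, the sketch below proposes candidate codes from co-occurring entities. The documents, entity lists (standing in for NER/entity-linking output), and support threshold are invented examples, not the D-WISE tools.

```python
from collections import Counter
from itertools import combinations

# Hypothetical documents with already-extracted entities (stand-ins for the
# output of the NER / entity-linking stage described in the abstract).
docs = [
    {"id": 1, "entities": ["health app", "data protection", "GDPR"]},
    {"id": 2, "entities": ["health app", "insurance", "data protection"]},
    {"id": 3, "entities": ["GDPR", "data protection", "patient record"]},
]

def propose_codes(documents, min_support=2):
    # count how often each entity pair co-occurs within a document
    pairs = Counter()
    for doc in documents:
        for a, b in combinations(sorted(set(doc["entities"])), 2):
            pairs[(a, b)] += 1
    # a candidate code links two entities that co-occur in enough documents
    return [pair for pair, count in pairs.items() if count >= min_support]

codes = propose_codes(docs)   # candidate coding paradigms for the human coder
```

In the blended-reading workflow sketched here, such proposals would then be accepted, refined, or rejected during manual closed reading.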
Procedia PDF Downloads 228
7 Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science encompasses the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is the product of collective scholarly and social collaboration involving all stakeholders, and that knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science; to accelerate the advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges; and to enable growth and innovation through the reuse of scientific results by all stakeholders at all levels of society, ultimately contributing to the growth and competitiveness of the global society. Open Science is a global movement to improve the accessibility and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. 
Open Science represents a novel systematic approach to the scientific process, shifting from the standard practice of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and the diffusion of scholarly knowledge with no barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research are done. Open Science is the ongoing transition in how open research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative and closer to society. Open Science involves various movements aiming to remove the barriers to sharing any kind of output, resource, method or tool, at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, or research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing in the open science philosophy. Revolutionary open science policies are motivated by ethical, moral or utilitarian arguments, such as the right to access digital research literature for open source research or science data accumulation, research indicators, transparency in the field of academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices. 
Researchers use open science applications to their own advantage: to attract more offers, citations, media attention, potential collaborators, career opportunities, donations and funding. In the open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in scientific research creation, collaboration, communication, and evaluation compared with more traditional closed science practices. Open science also considers concerns such as the rigor of peer review, practical matters such as financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.
Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Procedia PDF Downloads 133
6 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods
Authors: Murat Arıbaş, Uğur Özcan
Abstract:
Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a hard process to compare projects and determine which is better when the projects serve different purposes. In addition, the evaluation process has become complicated since there is more than one evaluator and there are multiple criteria (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) framed project selection as a Multi-Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi-Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilités Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Réalité), MAUT (Multiattribute Utility Theory), and GRA (Grey Relational Analysis). Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of the problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). 
This study aims to develop a systematic decision process for grant support applications that are to be evaluated for scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in our country, was investigated. First, the criteria to be used in project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and the extensive impact of the project. Moreover, 2-4 sub-criteria were defined for each main criterion, so projects were evaluated over 13 sub-criteria in total. Because AHP is well suited to determining criteria weights and TOPSIS readily ranks a large number of alternatives, the two methods are used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative-ideal solution point is calculated using the Euclidean distance. In the study, main criteria and sub-criteria were compared pairwise using questionnaires developed on an importance scale by four groups of respondents (TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. 
These calculated criteria weights were then used as input to the TOPSIS method, and a sample of 200 projects was ranked on its merits. This new system made it possible to incorporate the views of the people who take part in the project process, including preparation, evaluation and implementation, in the evaluation of academic research projects. Moreover, instead of using four main criteria with equal weight to evaluate projects, a systematic decision-making process was developed using 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects.
Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method
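A minimal sketch of the two methods as combined here, under invented numbers (the pairwise judgments and project scores below are illustrative assumptions, not TÜBİTAK data): AHP weights are taken as the principal eigenvector of the pairwise comparison matrix, and TOPSIS ranks alternatives by Euclidean distance from the ideal and negative-ideal points.

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    # principal-eigenvector weights via power iteration
    w = np.ones(len(pairwise))
    for _ in range(iters):
        w = pairwise @ w
        w /= w.sum()            # normalize so weights sum to 1
    return w

def topsis(scores, weights):
    # all criteria treated as benefit-type for simplicity
    norm = scores / np.sqrt((scores ** 2).sum(axis=0))   # vector normalization
    v = norm * weights                                   # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))      # distance to ideal
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))       # distance to anti-ideal
    return d_neg / (d_pos + d_neg)   # closeness coefficient, higher is better

# Illustrative pairwise judgments over the four main criteria
# (originality, methodology, team/opportunities, impact).
A = np.array([[1.0, 2.0, 3.0, 2.0],
              [0.5, 1.0, 2.0, 1.0],
              [1/3, 0.5, 1.0, 0.5],
              [0.5, 1.0, 2.0, 1.0]])
w = ahp_weights(A)

# Three hypothetical projects scored on the four criteria.
scores = np.array([[7.0, 8.0, 6.0, 9.0],
                   [9.0, 6.0, 7.0, 7.0],
                   [6.0, 7.0, 8.0, 6.0]])
cc = topsis(scores, w)
ranking = np.argsort(-cc)     # best project first
```

In the study itself, the same mechanics would operate over 13 weighted sub-criteria and 200 projects rather than this toy setup.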
Procedia PDF Downloads 591
5 Times2D: A Time-Frequency Method for Time Series Forecasting
Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan
Abstract:
Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and Exponential Smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality issues, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle with capturing relationships between distant time points due to the locality of one-dimensional convolution kernels. 
Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success. Despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates, in parallel, 2D spectrogram and derivative heatmap techniques. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the use of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrated that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. 
Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
Keywords: derivative patterns, spectrogram, time series forecasting, Times2D, 2D representation
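The 2D transformation can be illustrated with a small numpy sketch (an assumption-laden toy, not the authors' implementation): a short-time Fourier magnitude serves as the spectrogram view, and first differences folded by an assumed period form the derivative heatmap.

```python
import numpy as np

def spectrogram(x, win=32, hop=16):
    # short-time Fourier magnitude: frequency x time image
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

def derivative_heatmap(x, period):
    # first differences highlight sharp fluctuations and turning points;
    # folding them by the period gives a cycles x phase image
    dx = np.diff(x)
    n = (len(dx) // period) * period
    return dx[:n].reshape(-1, period)

rng = np.random.default_rng(0)
t = np.arange(512)
x = np.sin(2 * np.pi * t / 32) + 0.1 * rng.normal(size=512)  # period-32 toy series

S = spectrogram(x)                # 2D frequency-domain representation
H = derivative_heatmap(x, 32)     # 2D time-domain derivative representation
```

The dominant row of S sits at the series' fundamental frequency, while repeated vertical patterns in H expose turning points at the same phase of every cycle; either image can then be fed to 2D (computer-vision) layers.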
Procedia PDF Downloads 44
4 Impacts of Transformational Leadership: Petronas Stations in Sabah, Malaysia
Authors: Lizinis Cassendra Frederick Dony, Jirom Jeremy Frederick Dony, Cyril Supain Christopher
Abstract:
The purpose of this paper is to improve devotion to leadership through the implementation of HR practices at PETRONAS stations. It emphasizes the importance of personal grooming and customer-care hospitality training for front-line individuals and teams at PETRONAS stations in Sabah. Based on the Thomas Edison International Leadership Journal, theory, research, education and development practice and application may all affect or be affected by leadership. FINDINGS – PETRONAS, short for Petroliam Nasional Berhad, is a Malaysian oil and gas company founded on August 17, 1974. Wholly owned by the Government of Malaysia, the corporation is vested with the entire oil and gas resources of Malaysia and is entrusted with the responsibility of developing and adding value to these resources. Fortune ranked PETRONAS as the 68th largest company in the world in 2012, the 12th most profitable company in the world and the most profitable in Asia. As of the end of March 2005, the PETRONAS Group comprised 103 wholly owned subsidiaries, 19 partly owned outfits and 57 associated companies. The group is engaged in a wide spectrum of petroleum activities, including upstream exploration and production of oil and gas, downstream oil refining, marketing and distribution of petroleum products, trading, gas processing and liquefaction, gas transmission pipeline network operations, marketing of liquefied natural gas, petrochemical manufacturing and marketing, shipping, automotive engineering and property investment. PETRONAS has been growing its marketing channels in a competitive market, combining resources to pursue common goals. PETRONAS provides university students in Malaysia with the opportunity to carry out industrial training job placements of 6-8 months. 
The Industrial Training exposed students to a real working environment, in some cases acting on behalf of the General Manager for almost one year. Management education and reward-incentive schemes have thus inspired working teams to transform and develop good leadership. Furthermore, knowledge and experience are very important in human capital development and transformation. The analysis covered 280 questionnaires processed in SPSS and a further 81 processed in Excel, collected through face-to-face interviews with customers, PETRONAS dealers and front-desk staff at 17 stations in Kota Kinabalu, Sabah. Hence, this research study aims to improve service quality innovation and business sustainability performance optimization. ORIGINALITY / VALUE – Transformational leadership practices have influenced working teams' behaviour as brand ambassadors of PETRONAS. The correlation findings indicated that PETRONAS stations need more HR practices in order to deploy additional customer care retention resources to mitigate the business challenges in the oil and gas industry. Therefore, as businesses face stiff competition globally (Cooper, 2006; Marques and Simon, 2006), it is crucial that team management be capable of minimizing noise risk, financial risk and any other risks as a whole at the optimum level. CONCLUSION – This research found that both transformational and transactional contingent-reward leadership were positively correlated with ratings of platoon potency, and that ratings of leadership for the platoon leader and sergeant were moderately intercorrelated. Accordingly, we recommend that PETRONAS management offer quality team management at PETRONAS stations through a broader variety of specialized leadership training aimed at operational efficiency in front-desk Customer Care hospitality.
The reliability and validity of job experiences leverage diversity, teamwork and cross-collaboration. Beyond this leveraging factor, PETRONAS will also strengthen front-liners' interpersonal effectiveness and enhance the quality of interaction through effective communication. Finally, numerous correlation and regression studies relate PETRONAS performance to Corporate Social Performance and several control variables. CSR model activities can be mis-specified if not controlled for R&D, as evidenced by feedback collected from local communities; the younger generation is inclined to higher financial expectations of PETRONAS. However, the company has created a huge impact on nation building as part of its social adaptability, exceeding its business stakeholders' satisfaction in Sabah.
Keywords: human resources practices implementation (hrpi), source of competitive advantage in people's development (socaipd), corporate social responsibility (csr), service quality at front desk stations (sqafd), impacts of petronas leadership (iopl)
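As an illustration of the kind of questionnaire correlation analysis reported above (the survey items, scores and data here are hypothetical, not the study's SPSS dataset), a Pearson correlation between a leadership score and a service-quality rating can be sketched in a few lines:

```python
import numpy as np

# Hypothetical Likert-scale responses (1-5) from a survey of 280 respondents:
# a transformational-leadership score and a service-quality rating.
rng = np.random.default_rng(0)
leadership = rng.integers(1, 6, size=280).astype(float)
# Construct service-quality ratings loosely tied to the leadership scores,
# so the two variables are positively associated by design.
service_quality = np.clip(leadership + rng.normal(0.0, 1.0, size=280), 1, 5)

# Pearson correlation coefficient, the statistic typically read off SPSS output.
r = np.corrcoef(leadership, service_quality)[0, 1]
print(f"Pearson r = {r:.2f}")
```

A statistically significant positive r of this kind is what underlies statements such as "transformational leadership was positively correlated with ratings of potency" in the abstract.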
Procedia PDF Downloads 352
3 Numerical Simulation of Von Karman Swirling Bioconvection Nanofluid Flow from a Deformable Rotating Disk
Authors: Ali Kadir, S. R. Mishra, M. Shamshuddin, O. Anwar Beg
Abstract:
Motivation- Rotating disk bio-reactors are fundamental to numerous medical/biochemical engineering processes including oxygen transfer, chromatography, purification and swirl-assisted pumping. The modern upsurge in biologically-enhanced engineering devices has embraced new phenomena including bioconvection of micro-organisms (photo-tactic, oxy-tactic, gyrotactic, etc.). The proven thermal performance superiority of nanofluids, i.e. base fluids doped with engineered nanoparticles, has also stimulated immense implementation in biomedical designs. Motivated by these emerging applications, we present a numerical thermofluid dynamic simulation of the transport phenomena in bioconvection nanofluid rotating disk bioreactor flow. Methodology- We study analytically and computationally the time-dependent three-dimensional viscous gyrotactic bioconvection in swirling nanofluid flow from a rotating disk configuration. The disk is also deformable, i.e. able to extend (stretch) in the radial direction. Stefan blowing is included. The Buongiorno dilute nanofluid model is adopted, wherein Brownian motion and thermophoresis are the dominant nanoscale effects. The primitive conservation equations for mass, radial, tangential and axial momentum, heat (energy), nanoparticle concentration and micro-organism density function are formulated in a cylindrical polar coordinate system with appropriate wall and free-stream boundary conditions. A mass convective condition is also incorporated at the disk surface. Forced convection is considered, i.e. buoyancy forces are neglected. This highly nonlinear, strongly coupled system of unsteady partial differential equations is normalized with the classical Von Karman and other transformations to render the boundary value problem (BVP) into an ordinary differential system, which is solved with the efficient Adomian decomposition method (ADM). Validation against earlier Runge-Kutta shooting computations in the literature is also conducted.
Extensive computations are presented (with the aid of MATLAB symbolic software) for radial and circumferential velocity components, temperature, nanoparticle concentration, micro-organism density number and gradients of these functions at the disk surface (radial local skin friction, local circumferential skin friction, Local Nusselt number, Local Sherwood number, motile microorganism mass transfer rate). Main Findings- Increasing radial stretching parameter decreases radial velocity and radial skin friction, reduces azimuthal velocity and skin friction, decreases local Nusselt number and motile micro-organism mass wall flux whereas it increases nano-particle local Sherwood number. Disk deceleration accelerates the radial flow, damps the azimuthal flow, decreases temperatures and thermal boundary layer thickness, depletes the nano-particle concentration magnitudes (and associated nano-particle species boundary layer thickness) and furthermore decreases the micro-organism density number and gyrotactic micro-organism species boundary layer thickness. Increasing Stefan blowing accelerates the radial flow and azimuthal (circumferential flow), elevates temperatures of the nanofluid, boosts nano-particle concentration (volume fraction) and gyrotactic micro-organism density number magnitudes whereas suction generates the reverse effects. Increasing suction effect reduces radial skin friction and azimuthal skin friction, local Nusselt number, and motile micro-organism wall mass flux whereas it enhances the nano-particle species local Sherwood number. Conclusions - Important transport characteristics are identified of relevance to real bioreactor nanotechnological systems not discussed in previous works. ADM is shown to achieve very rapid convergence and highly accurate solutions and shows excellent promise in simulating swirling multi-physical nano-bioconvection fluid dynamics problems. 
Furthermore, it provides an excellent complement to more general commercial computational fluid dynamics simulations.
Keywords: bio-nanofluids, rotating disk bioreactors, Von Karman swirling flow, numerical solutions
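For context, the classical steady Von Karman similarity system that underlies the model above (before the nanofluid, bioconvection, disk-stretching and Stefan-blowing terms are added) reduces the governing PDEs to a small ODE boundary value problem. A minimal sketch of solving it with SciPy's collocation BVP solver, as an alternative to the ADM and Runge-Kutta shooting approaches cited in the abstract:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Classical steady Von Karman rotating-disk similarity equations
# (hydrodynamic problem only, no nanoscale or bioconvection terms):
#   F'' = F^2 - G^2 + H F'     (radial momentum)
#   G'' = 2 F G + H G'         (azimuthal momentum)
#   H'  = -2 F                 (continuity)
# State vector y = [F, G, H, F', G'].
def rhs(eta, y):
    F, G, H, Fp, Gp = y
    return np.vstack([Fp, Gp, -2.0 * F,
                      F**2 - G**2 + H * Fp,
                      2.0 * F * G + H * Gp])

def bc(ya, yb):
    # No-slip on the disk: F(0)=0, G(0)=1, H(0)=0; quiescent far field.
    return np.array([ya[0], ya[1] - 1.0, ya[2], yb[0], yb[1]])

eta = np.linspace(0.0, 12.0, 200)
y0 = np.zeros((5, eta.size))
y0[1] = np.exp(-eta)              # rough initial guess for the swirl profile
sol = solve_bvp(rhs, bc, eta, y0, tol=1e-6)

Fp0, Gp0 = sol.y[3, 0], sol.y[4, 0]   # wall gradients (skin-friction values)
print(f"F'(0) = {Fp0:.4f}, G'(0) = {Gp0:.4f}")
```

The wall gradients F'(0) ≈ 0.510 and G'(0) ≈ -0.616 are the classical benchmark values for the rotating-disk problem and are a common validation target for extended models such as the one in this abstract.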
Procedia PDF Downloads 157
2 Tackling the Decontamination Challenge: Nanorecycling of Plastic Waste
Authors: Jocelyn Doucet, Jean-Philippe Laviolette, Ali Eslami
Abstract:
The end-of-life management and recycling of polymer wastes remains a key environmental issue in ongoing efforts to increase resource efficiency and attain GHG emission-reduction targets. Half of all the plastics ever produced were made in the last 13 years, and only about 16% of that plastic waste is collected for recycling, while 25% is incinerated, 40% is landfilled, and 19% is unmanaged and leaks into the environment and waterways. In addition to the collection issue, the UN recently published a report on chemicals in plastics, which adds another layer of challenge when integrating recycled content containing toxic products into new products. Tackling these important issues requires innovative solutions. Chemical recycling of plastics provides new complementary alternatives to the current recycled-plastic market by converting waste material into a high-value chemical commodity that can be reintegrated into a variety of applications, making the total market size of the output (virgin-like, high-value products) larger than the market size of the input (plastic waste). Access to high-quality feedstock also remains a major obstacle, primarily due to material contamination. Pyrowave approaches this challenge with its innovative nano-recycling technology, which purifies polymers at the molecular level, removing undesirable contaminants and restoring the resin to its virgin state without having to depolymerize it. This breakthrough approach expands the range of plastics that can be effectively recycled, including mixed plastics with various contaminants such as lead, inorganic pigments, and flame retardants. The technology brings residual contaminant levels below 100 ppm, and purity can be adjusted to extremely low levels depending on the customer's specifications.
The separation of polymer and contaminants in Pyrowave's nano-recycling process offers the unique ability to tailor the solution to the targeted additives and contaminants to be removed, based on differences in molecular size. This precise control enables a final polymer purity equivalent to virgin resin. The patented process involves dissolving the contaminated material using a specially formulated solvent, purifying the mixture at the molecular level, and subsequently extracting the solvent to yield a purified polymer resin that can be directly reintegrated into new products without further treatment. Notably, this technology offers simplicity, effectiveness, and flexibility while minimizing environmental impact and preserving valuable resources in the manufacturing circuit. Pyrowave has successfully applied this nano-recycling technology to decontaminate polymers and supply purified, high-quality recycled plastics to critical industries, including food-contact-compliant applications. The technology is low-carbon, electrified, and provides 100% traceable resins with properties identical to those of virgin resins. Additionally, low recycling rates and the limited market for traditionally hard-to-recycle plastic waste have fueled the need for new complementary alternatives. Chemical recycling, such as Pyrowave's microwave depolymerization, presents a sustainable and efficient solution by converting plastic waste into high-value commodities. By employing microwave catalytic depolymerization, Pyrowave enables a truly circular economy of plastics, particularly in treating polystyrene waste to produce virgin-like styrene monomers. This approach boasts low energy consumption, high yields, and a reduced carbon footprint. Pyrowave offers a portfolio of sustainable, low-carbon, electric solutions to give plastic waste a second life and paves the way to the new circular economy of plastics.
Here, particularly for polystyrene, we show that styrene monomer yields from Pyrowave's polystyrene microwave depolymerization reactor are 1.5 to 2.2 times higher than those of conventional thermal pyrolysis. In addition, we provide a detailed understanding of microwave-assisted depolymerization by analyzing the effects of microwave power, pyrolysis time, microwave receptor and temperature on styrene product yields. Furthermore, we investigate the life-cycle environmental impact of microwave-assisted pyrolysis of polystyrene at commercial-scale production. Finally, it is worth pointing out that Pyrowave is able to treat several tons of polystyrene to produce virgin styrene monomers and to manage waste/contaminated polymeric materials in a truly circular economy.
Keywords: nanorecycling, nanomaterials, plastic recycling, depolymerization
Procedia PDF Downloads 66
1 Detailed Degradation-Based Model for Solid Oxide Fuel Cells Long-Term Performance
Authors: Mina Naeini, Thomas A. Adams II
Abstract:
Solid Oxide Fuel Cells (SOFCs) feature high electrical efficiency and generate substantial amounts of waste heat, which makes them suitable for integrated community energy systems (ICEs). By harvesting and distributing the waste heat through hot water pipelines, SOFCs can meet the thermal demand of communities. Therefore, they can replace traditional gas boilers and reduce greenhouse gas (GHG) emissions. Despite these advantages over competing power generation units, the technology has not been successfully commercialized at large scale to replace traditional generators in ICEs. One reason is that SOFC performance deteriorates over long-term operation, which makes it difficult to find the proper sizing of the cells for a particular ICE system. In order to find the optimal sizing and operating conditions of SOFCs in a community, proper knowledge of degradation mechanisms and of the effects of operating conditions on SOFCs' long-term performance is required. The simplified SOFC models in the current literature usually do not provide realistic results, since they tend to underestimate the rate of performance drop by making too many assumptions or generalizations. In addition, some of these models have been obtained from experimental data by curve-fitting methods. Although such models are valid for the range of operating conditions in which the experiments were conducted, they cannot be generalized to other conditions and so have limited use for most ICEs. In the present study, a general, detailed degradation-based model is proposed that predicts the performance of conventional SOFCs over a long period of time at different operating conditions. Conventional SOFCs are composed of Yttria Stabilized Zirconia (YSZ) electrolytes, Ni-cermet anodes, and La₁₋ₓSrₓMnO₃ (LSM) cathodes.
The following degradation processes are considered in this model: oxidation and coarsening of nickel particles in the Ni-cermet anodes, changes in the anode pore radius, degradation of the electrolyte and anode electrical conductivities, and sulfur poisoning of the anode compartment. This model helps decision makers discover the optimal sizing and operation of the cells for stable, efficient performance with the fewest assumptions, and it is suitable for a wide variety of applications. Sulfur contamination of the anode compartment is an important cause of performance drop in cells supplied with hydrocarbon-based fuel sources. H₂S, which is often added to hydrocarbon fuels as an odorant, can diminish the catalytic behavior of Ni-based anodes by lowering their electrochemical activity and hydrocarbon conversion properties. Therefore, the existing models in the literature for H₂-supplied SOFCs cannot be applied to hydrocarbon-fueled SOFCs, as they only account for the reduction in electrochemical activity. A regression model is developed in the current work for sulfur contamination of SOFCs fed with hydrocarbon fuel sources, expressed as a function of current density and H₂S concentration in the fuel. To the best of the authors' knowledge, it is the first model that accounts for the impact of current density on sulfur poisoning of cells supplied with hydrocarbon-based fuels. The proposed model is valid over a wide range of parameters and is consistent with multiple studies by different independent groups. Simulations using the degradation-based model illustrate that SOFC voltage drops significantly in the first 1500 hours of operation; after that, cells exhibit a slower degradation rate. The present analysis allowed us to discover the reason for the various degradation rate values reported in the literature for conventional SOFCs.
In fact, the literature reports very different degradation rates because it is inconsistent in how the degradation rate is defined and calculated. Commonly, the degradation rate is calculated as the slope of the voltage-versus-time plot, expressed as the percentage voltage drop per 1000 hours of operation. Because the voltage profile is nonlinear in time, the resulting magnitude depends on the size of the time steps chosen to compute the curve's slope. To avoid this issue, the instantaneous rate of performance drop is used in the present work. According to a sensitivity analysis, current density has the highest impact on degradation rate among the operating factors, while temperature and hydrogen partial pressure affect SOFC performance less. The findings demonstrate that a cell running at a lower current density performs better in the long term in terms of total average energy delivered per year, even though it initially generates less power than it would at a higher current density. This is because of the dominant and damaging impact of large current densities on the long-term performance of SOFCs, as explained by the model.
Keywords: degradation rate, long-term performance, optimal operation, solid oxide fuel cells, SOFCs
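The window-dependence of slope-based degradation rates described above can be illustrated on a synthetic voltage-decay curve (the functional form and coefficients below are hypothetical, chosen only to mimic a fast initial drop followed by slower decay, not taken from the authors' model):

```python
import numpy as np

# Hypothetical nonlinear voltage decay: a fast drop over roughly the first
# 1500 h, then a slow linear tail (qualitatively matching the abstract).
def voltage(t_hours):
    return 0.80 - 0.04 * (1.0 - np.exp(-t_hours / 1500.0)) - 2e-6 * t_hours

def avg_rate_pct_per_1000h(t0, t1):
    """Slope-based rate: % voltage drop per 1000 h averaged over [t0, t1]."""
    v0, v1 = voltage(t0), voltage(t1)
    return (v0 - v1) / v0 * 100.0 * 1000.0 / (t1 - t0)

# The same cell yields very different "degradation rates" depending on
# which time window the slope is computed over:
early = avg_rate_pct_per_1000h(0.0, 1000.0)
late = avg_rate_pct_per_1000h(5000.0, 6000.0)
print(f"0-1000 h:    {early:.2f} %/1000 h")
print(f"5000-6000 h: {late:.2f} %/1000 h")
```

Because the decay is nonlinear, the 0-1000 h slope comes out several times larger than the 5000-6000 h slope for the same cell, which is why an instantaneous rate is the less ambiguous metric.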
Procedia PDF Downloads 133