Search results for: climacteric produce
587 Robust Batch Process Scheduling in Pharmaceutical Industries: A Case Study
Authors: Tommaso Adamo, Gianpaolo Ghiani, Antonio Domenico Grieco, Emanuela Guerriero
Abstract:
Batch production plants give rise to a wide range of scheduling problems. In pharmaceutical industries a batch process is usually described by a recipe, consisting of an ordering of tasks to produce the desired product. In this research work we focused on pharmaceutical production processes requiring the culture of a microorganism population (i.e. bacteria, yeasts or antibiotics). Several sources of uncertainty may influence the yield of the culture processes, including (i) low performance and quality of the cultured microorganism population or (ii) microbial contamination. For these reasons, robustness is a valuable property for the considered application context. In particular, a robust schedule will not collapse immediately when a cell of microorganisms has to be thrown away due to a microbial contamination. Indeed, a robust schedule should change locally and in small proportions, and the overall performance measure (i.e. makespan, lateness) should change little if at all. In this research work we formulated a constraint optimization programming (COP) model for the robust planning of antibiotics production. We developed a discrete-time model with a multi-criteria objective, ordering the different criteria and performing a lexicographic optimization. A feasible solution of the proposed COP model is a schedule of a given set of tasks onto available resources. The schedule has to satisfy task precedence constraints, resource capacity constraints and time constraints. In particular, the time constraints model task due dates and resource availability time windows. To improve schedule robustness, we modeled the concept of (a, b) super-solutions, where (a, b) are input parameters of the COP model. An (a, b) super-solution is one in which, if a variables (i.e. the completion times of a culture tasks) lose their values (i.e. the cultures are contaminated), the solution can be repaired by assigning these variables new values (i.e. 
the completion times of backup culture tasks) and changing at most b other variables (i.e. delaying the completion of at most b other tasks). The efficiency and applicability of the proposed model are demonstrated by solving instances taken from Sanofi Aventis, a French pharmaceutical company. Computational results showed that the determined super-solutions are near-optimal.
Keywords: constraint programming, super-solutions, robust scheduling, batch process, pharmaceutical industries
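The (a, b) repair property described in this abstract can be illustrated with a small sketch. This is hypothetical Python, not the authors' COP implementation: it only checks, for given schedules, that a repair which reassigns the a lost completion times changes the completion times of at most b other tasks.

```python
def is_valid_repair(original, repaired, lost_tasks, b):
    """Check the (a, b) super-solution repair property: after the tasks in
    `lost_tasks` lose their completion times (e.g. contaminated cultures),
    the repaired schedule may reassign those tasks freely but must change
    the completion times of at most `b` other tasks.

    `original` and `repaired` map task name -> completion time."""
    changed_others = [
        t for t in original
        if t not in lost_tasks and repaired[t] != original[t]
    ]
    return len(changed_others) <= b

# Illustrative schedule: culture task C1 is lost and redone as a backup run;
# only task T2 is delayed, so this is a valid (1, 1) repair.
original = {"C1": 10, "T2": 14, "T3": 20}
repaired = {"C1": 16, "T2": 18, "T3": 20}  # C1 redone, T2 delayed, T3 untouched
print(is_valid_repair(original, repaired, {"C1"}, b=1))  # True
```

In the actual model this check would be enforced as a constraint over all contamination scenarios, not verified after the fact.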
Procedia PDF Downloads 618
586 Microstructure and Mechanical Properties Evaluation of Graphene-Reinforced AlSi10Mg Matrix Composite Produced by Powder Bed Fusion Process
Authors: Jitendar Kumar Tiwari, Ajay Mandal, N. Sathish, A. K. Srivastava
Abstract:
Over the last decade, graphene has attracted great attention for the development of multifunctional metal matrix composites, which are in high demand in industries developing energy-efficient systems. This study covers two advanced aspects of current scientific endeavor, i.e., graphene as reinforcement in metallic materials and additive manufacturing (AM) as a processing technology. Herein, high-quality graphene and AlSi10Mg powder were mechanically mixed by very low energy ball milling at 0.1 wt. % and 0.2 wt. % graphene. The mixed powder was directly subjected to the powder bed fusion process, i.e., an AM technique, to produce composite samples along with a bare counterpart. The effects of graphene on porosity, microstructure, and mechanical properties were examined in this study. The volumetric distribution of pores was observed under X-ray computed tomography (CT). On the basis of relative density measurement by X-ray CT, it was observed that porosity increases after graphene addition, and the pore morphology also transformed from spherical pores to enlarged flaky pores due to improper melting of the composite powder. Furthermore, the microstructure suggests grain refinement after graphene addition. The columnar grains were able to cross the melt pool boundaries in the bare sample, unlike the composite samples. Smaller columnar grains were formed in the composites due to heterogeneous nucleation by graphene platelets during solidification. The tensile properties were affected by the induced porosity irrespective of graphene reinforcement. The optimized tensile properties were achieved at 0.1 wt. % graphene. The increments in yield strength and ultimate tensile strength were 22% and 10%, respectively, for the 0.1 wt. % graphene reinforced sample in comparison to the bare counterpart, while elongation decreased by 20% for the same sample. The hardness indentations were taken mostly on the solid region in order to avoid the collapse of the pores. 
The hardness of the composite increased progressively with graphene content: around 30% increment in hardness was achieved after the addition of 0.2 wt. % graphene. Therefore, it can be concluded that powder bed fusion can be adopted as a suitable technique to develop graphene reinforced AlSi10Mg composites, though some further process modification is required to avoid the porosity induced by the addition of graphene, which can be addressed in future work.
Keywords: graphene, hardness, porosity, powder bed fusion, tensile properties
Procedia PDF Downloads 127
585 An Analysis of Emmanuel Macron's Campaign Discourse
Authors: Robin Turner
Abstract:
In the context of strengthening conservative movements such as “Brexit” and the election of US President Donald Trump, the global political stage was shaken up by the election of Emmanuel Macron to the French presidency, defeating the far-right candidate Marine Le Pen. The election itself was a first for the Fifth Republic in that neither final candidate was from the traditional two major political parties: the left Parti Socialiste (PS) and the right Les Républicains (LR). Macron, who served as Minister of the Economy under his predecessor, founded the centrist liberal political party En Marche! in April 2016 before resigning from his post in August to launch his bid for the presidency. Between the time of the party’s creation and the first round of elections a year later, Emmanuel Macron and En Marche! had garnered enough support to make it to the run-off election, finishing far ahead of many seasoned national political figures. Now months into his presidency, the youngest President of the Republic shows no sign of losing momentum anytime soon. His unprecedented success raises many questions with respect to international relations, economics, and the evolving relationship between the French government and its citizens. The effectiveness of Macron’s campaign, of course, relies on many factors, one of which is his manner of communicating his platform to French voters. Using data from oral discourse and primary material from Macron and En Marche! in sources such as party publications and Twitter, the study categorizes linguistic instruments (address, lexicon, tone, register, and syntax) to identify prevailing patterns of speech and communication. The linguistic analysis in this project is two-fold. In addition to these findings’ stand-alone value, the discourse patterns are contextualized against comparable discourse from other 2017 presidential candidates, with particular emphasis on that of Marine Le Pen. 
Secondly, to provide an alternative approach, the study contextualizes Macron’s discourse using those of two immediate predecessors representing the traditional stronghold political parties, François Hollande (PS) and Nicolas Sarkozy (LR). These comparative methods produce an analysis that gives insight not only into a contributing factor to Macron’s successful 2017 campaign but also into how Macron’s platform presents itself differently from previous presidential platforms. Furthermore, this study supplies data that contributes to a wider analysis of the defeat of “traditional” French political parties by the “start-up” movement En Marche!.
Keywords: Emmanuel Macron, French, discourse analysis, political discourse
Procedia PDF Downloads 261
584 Modeling the Impact of Aquaculture in Wetland Ecosystems Using an Integrated Ecosystem Approach: Case Study of Setiu Wetlands, Malaysia
Authors: Roseliza Mat Alipiah, David Raffaelli, J. C. R. Smart
Abstract:
This research takes a new approach by integrating information from both the environmental and social sciences to inform effective management of the wetlands. A three-stage research framework was developed for modelling the drivers and pressures imposed on the wetlands and their impacts on the ecosystem and the local communities. Firstly, a Bayesian Belief Network (BBN) was used to predict the probability of anthropogenic activities affecting the delivery of different key wetland ecosystem services under different management scenarios. Secondly, Choice Experiments (CEs) were used to quantify the relative preferences which a key wetland stakeholder group (aquaculturists) held for the delivery of different levels of these key ecosystem services. Thirdly, a Multi-Criteria Decision Analysis (MCDA) was applied to produce an ordinal ranking of the alternative management scenarios, accounting for their impacts on ecosystem service delivery as perceived through the preferences of the aquaculturists. This integrated ecosystem management approach was applied to a wetland ecosystem in Setiu, Terengganu, Malaysia, which currently supports a significant level of aquaculture activity. This research has produced clear guidelines to inform policy makers considering alternative wetland management scenarios: Intensive Aquaculture, Conservation or Ecotourism, in addition to the Status Quo. The findings of this research are as follows. Firstly, the BBN revealed that current aquaculture activity is likely to have significant impacts on water column nutrient enrichment, but trivial impacts on caged fish biomass, especially under the Intensive Aquaculture scenario. Secondly, the best-fitting CE models identified several stakeholder sub-groups among the aquaculturists, each with distinct sets of preferences for the delivery of key ecosystem services. Thirdly, the MCDA identified Conservation as the most desirable scenario overall based on ordinal ranking in the eyes of most of the stakeholder sub-groups. 
The Ecotourism and Status Quo scenarios were the next most preferred, and Intensive Aquaculture was the least desirable scenario. The methodologies developed through this research provide an opportunity to improve the planning and decision-making processes that aim to deliver sustainable management of wetland ecosystems in Malaysia.
Keywords: Bayesian belief network (BBN), choice experiments (CE), multi-criteria decision analysis (MCDA), aquaculture
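The MCDA step described in this abstract produces one ordinal ranking of the scenarios per stakeholder sub-group, which then has to be combined into an overall order. The paper does not specify its aggregation rule; the following Python sketch uses a Borda-style count purely as an illustration of how per-group ordinal rankings can be merged. All group preferences shown are hypothetical.

```python
def aggregate_rankings(rankings):
    """Aggregate per-group ordinal rankings of scenarios with a Borda count:
    in a ranking of n scenarios, 1st place scores n points, 2nd scores n-1,
    and so on. `rankings` is a list of lists, each ordered best-first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, scenario in enumerate(ranking):
            scores[scenario] = scores.get(scenario, 0) + (n - position)
    # Sort scenarios by total score, best first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical sub-group preferences over the four management scenarios
group_rankings = [
    ["Conservation", "Ecotourism", "Status Quo", "Intensive Aquaculture"],
    ["Conservation", "Status Quo", "Ecotourism", "Intensive Aquaculture"],
    ["Ecotourism", "Conservation", "Status Quo", "Intensive Aquaculture"],
]
print(aggregate_rankings(group_rankings))
# → ['Conservation', 'Ecotourism', 'Status Quo', 'Intensive Aquaculture']
```

With these invented inputs, Conservation wins overall even though one sub-group prefers Ecotourism, mirroring the kind of outcome the study reports.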
Procedia PDF Downloads 294
583 Honey Intoxication: A Unique Cause of Sudden Cardiac Collapse
Authors: Bharat Rawat, Shekhar Rajbhandari, Yadav Bhatta, Jay Prakash Jaiswal, Shivaji Bikram Silwal, Rajiv Shrestha, Shova Sunuwar
Abstract:
Introduction: Honey produced by bees fed on Rhododendron species containing grayanotoxin is known as mad honey. Grayanotoxin is found in honey obtained from the nectar of Rhododendron species growing on the mountains of the Black Sea region of Turkey and also in Japan, Nepal, Brazil, parts of North America, and Europe. Although the incidence of grayanotoxin poisoning is rare, there is concern that the number of cases per year will rise with the increasing demand for organic products. Mad honey intoxication may present with mild cardiovascular, gastrointestinal and neurological symptoms, or in a life-threatening form with AV block and cardiovascular collapse. In this article, we summarize five cases that came to our hospital with mad honey related cardiac complications. Findings: In the last year, five cases presented to the emergency department with sudden onset of loss of consciousness, dizziness, and shortness of breath. Symptoms developed after the consumption of 1-3 teaspoons of wild honey, brought mostly from rural parts of Nepal such as Khotang. Some patients also presented with vomiting, dizziness, and loose stool. On examination, most had severe bradycardia and low blood pressure. No abnormalities were detected on systemic examination. In one patient, ECG and cardiac enzymes showed features of acute coronary syndrome, but his treadmill test done a few days later was normal. All patients were managed with injected atropine, IV normal saline, and other supportive measures, and were discharged in stable condition within one or two days. Conclusions: The rhododendron is the national flower of Nepal. The specific species of rhododendron found in Nepal which contains the toxin is not known. Bees feeding on these rhododendrons are known to transfer the grayanotoxin to the honey they produce. Most symptoms are mild and resolve without medical intervention. 
Signs and symptoms of grayanotoxin poisoning rarely last more than 24 hours and are usually not fatal. Signs of mad honey poisoning include bradycardia, cardiac arrhythmia, hypotension, nausea and vomiting. Patients respond to close monitoring and appropriate supportive treatment. Normally, patients recover completely with no residual damage to the heart or its conduction system.
Keywords: rhododendron, honey, grayanotoxin, bradycardia
Procedia PDF Downloads 348
582 Molecular Implication of Interaction of Human Enteric Pathogens with Phylloplane of Tomato
Authors: Shilpi, Indu Gaur, Neha Bhadauria, Susmita Goswami, Prabir K. Paul
Abstract:
Cultivation and consumption of organically grown fruits and vegetables have increased severalfold. However, human enteric pathogens (HEPs) present on the surface of organically grown vegetables cause gastro-intestinal diseases and are most likely introduced through contaminated water and the fecal matter of farm animals. HEPs are adapted to colonize the human gut but also colonize plant surfaces. Microbes on the plant surface communicate with each other to establish quorum sensing. Studying this cross-talk is important because enteric pathogens on the phylloplane have been reported to mask the beneficial resident bacteria of the plant. In the present study, HEPs and resident bacterial colonizers were identified using 16S rRNA sequencing. Microbial colonization patterns after interaction between HEPs and natural bacterial residents on the tomato phylloplane were studied. Tomato plants raised under aseptic conditions were inoculated with a mixture of Serratia fonticola and Klebsiella pneumoniae. The molecules involved in the cross-talk between HEPs and regular bacterial colonizers were isolated and identified using molecular techniques and HPLC. The colonization pattern was studied by the leaf imprint method after 48 hours of incubation. The associated protein-protein interactions in the host cytoplasm were studied with crosslinkers. From treated leaves, the cross-talk molecules and interacting proteins were separated on 1D SDS-PAGE and analyzed by MALDI-TOF-TOF. The study is critical for understanding the molecular aspects of HEP adaptation to the phylloplane. The study revealed that human enteric pathogens interact aggressively among themselves and with resident bacteria. HEPs induced the establishment of a signaling cascade through protein-protein interaction in the host cytoplasm. 
The study revealed that the adaptation of human enteric pathogens to the phylloplane of Solanum lycopersicum involves the establishment of complex molecular interactions between the microbe and the host, including microbe-microbe interactions leading to the establishment of quorum sensing. The outcome will help in minimizing the HEP load on fresh farm produce, thereby curtailing incidences of food-borne diseases.
Keywords: crosslinkers, human enteric pathogens (HEPs), phylloplane, quorum sensing
Procedia PDF Downloads 279
581 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case
Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza
Abstract:
Reverse engineering processes are widely used in industry with the main goal of determining the materials and manufacturing processes used to produce a component. Many characterization techniques and computational tools are used to obtain this information. A case study of reverse engineering applied to a secondary sandwich-hybrid type structure used in a helicopter is presented. The methodology used consists of five main steps, which can be applied to any other similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material properties characterization, and manufacturing process characterization, allowing all the traceability records of the materials and processes of aeronautical products that ensure their airworthiness to be obtained. A detailed explanation of each step is covered. The criticality and functionality of each part were analyzed, together with state-of-the-art information and information obtained from interviews with the technical groups of the helicopter’s operators; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation allowed all the characteristics of the materials used in the manufacture of the component to be obtained. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon and glass fibers, aluminum honeycomb core, epoxy resin and epoxy adhesive. The stacking sequence and fiber volume fraction are critical for the mechanical behavior; an acid digestion method was used to determine them. This also helped in the determination of the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests. 
These results were compared with those obtained during reverse engineering, allowing the conclusion that the materials and manufacturing process were correctly determined. Tooling for the manufacture was designed and built according to the geometry and manufacturing process requirements. The part was manufactured, and the required mechanical and environmental tests were also performed. Finally, geometric characterization and non-destructive techniques allowed the quality of the part to be verified.
Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype
Procedia PDF Downloads 418
580 Performance of a Sailing Vessel with a Solid Wing Sail Compared to a Traditional Sail
Authors: William Waddington, M. Jahir Rizvi
Abstract:
A sail used to propel a vessel functions in a similar way to an aircraft wing. Traditionally, cloth and ropes were used to produce sails. However, there is one major problem with traditional sail design: increased turbulence and flow separation compared to an aircraft wing with the same camber. This has led to the development of the solid wing sail, focusing mainly on the sail shape. Traditional cloth sails are manufactured as a single element, whereas a solid wing sail is made of two segments. To the authors’ best knowledge, the phenomena behind the performance of this type of sail at various angles of wind direction with respect to the sailing vessel’s direction (known as the angle of attack) are still an area of mystery. Hence, in this study, the thrusts of a sailing vessel produced by wing sails constructed with various angles (22°, 24°, 26° and 28°) between the two segments have been compared to that of a traditional cloth sail made of carbon-fiber material. The reason for using carbon-fiber material is to achieve the correct and exact shape of a commercially available mainsail. NACA 0024 and NACA 0016 foils have been used to generate the two-segment wing sail shape, which incorporates a flap between the first and second segments. Both two-dimensional and three-dimensional sail models designed in the commercial CAD software SolidWorks have been analyzed through computational fluid dynamics (CFD) techniques using Ansys CFX, considering an apparent wind speed of 20.55 knots at an apparent wind angle of 31°. The results indicate that the thrust from the traditional sail increases from 8.18 N to 8.26 N when the angle of attack is increased from 5° to 7°. However, the thrust decreases if the angle of attack is increased further. A solid wing sail with a 20° angle between its two segments produces thrusts from 7.61 N to 7.74 N as the angle of attack increases from 7° to 8°. 
The thrust remains steady up to a 9° angle of attack and drops sharply beyond 9°. The highest thrust values obtained for the solid wing sails with 22°, 24°, 26° and 28° angles between the two segments are 8.75 N, 9.10 N, 9.29 N and 9.19 N, respectively. The optimum angle of attack for each of these solid wing sails, at which these thrust values are obtained, is 7°. Therefore, all the thrust values predicted for solid wing sails with angles between the two segments above 20° are higher than the thrust predicted for the traditional sail. However, the best performance from a solid wing sail is expected when the sail is created with an angle between the two segments above 20° but at or below 26°. In addition, 1/29th-scale models have been tested in a wind tunnel to observe the flow behavior around the sails. The experimental results support the numerical observations, as the flow behaviors are exactly the same.
Keywords: CFD, drag, sailing vessel, thrust, traditional sail, wing sail
Procedia PDF Downloads 279
579 Plant Genetic Diversity in Home Gardens and Its Contribution to Household Economy in Western Part of Ethiopia
Authors: Bedilu Tafesse
Abstract:
Home gardens are important social and cultural spaces where knowledge related to agricultural practice is transmitted and through which households may improve their income and livelihood. High levels of inter- and intra-specific plant genetic diversity are preserved in home gardens. Plant diversity is threatened by rapid and unplanned urbanization, which increases environmental problems such as heating, pollution, loss of habitats and ecosystem disruption. Tropical home gardens have played a significant role in conserving plant diversity while providing substantial benefits to households. This research aimed to understand the relationship between household characteristics and plant diversity in western Ethiopian home gardens and the contributions of plants to the household economy. Plant diversity and the different uses of plants were studied in a random sample of 111 suburban home gardens in the Ilu Ababora, Jima and Wellega suburban areas of western Ethiopia, based on complete garden inventories followed by household surveys on socio-economic status during 2012. A total of 261 plant species were observed, of which 41% were ornamental plants, 36% food plants, and 22% medicinal plants. Of these, 16% were sold commercially to produce income. Avocados, bananas, and other fruits were produced in excess. Home gardens contributed the equivalent of 7% of total annual household income in terms of food and commercial sales. Multiple regression analysis showed that education, time spent gardening, land for cultivation, household expenses, primary conservation practices, and the use of special techniques explained 56% of the total plant diversity. Food, medicinal and commercial plant species had significant positive relationships with time spent gardening and the land area used for gardening. Education and conservation practices significantly affected food and medicinal plant diversity. 
Special techniques used in gardening showed significant positive relations with ornamental and commercial plants. Reassessment of different suburban and urban home gardens and proper documentation using the same methodology are essential to build a firm policy for enhancing plant diversity and its related values to households and their surroundings.
Keywords: plant genetic diversity, urbanization, suburban home gardens, Ethiopia
Procedia PDF Downloads 303
578 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning
Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga
Abstract:
Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including noise and air pollution, which are considered hot topics (cf. the Clean Air Act of London and the soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industry, which produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and both are notoriously produced by combustion at high temperatures (i.e., car engines or thermal power stations). The same process applies to industrial plants. What has to be investigated, and is the topic of this paper, is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, low-cost methodologies will be used. For noise measurements, the OpeNoise app will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone and calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are plugged. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period covering both weekdays and weekend days; in this way it will be possible to see how the situation changes during the week. 
The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to apply in the analyzed area, because it makes it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter
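The conversion of raw readings onto a common 0-100% scale and the correlation check described in this abstract can be sketched in a few lines. This is an illustrative Python sketch, not the authors' processing pipeline; the min-max normalization and Pearson coefficient are assumed choices, and all readings shown are invented.

```python
def to_percent(values):
    """Min-max normalize raw sensor readings onto a 0-100 % scale.
    Assumes the readings are not all identical (range > 0)."""
    lo, hi = min(values), max(values)
    return [100.0 * (v - lo) / (hi - lo) for v in values]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical hourly readings from one coupled sensor pair:
# noise in dB(A), NO2 in ug/m3
noise_db = [55, 62, 70, 68, 58, 52]
no2_ugm3 = [30, 41, 55, 50, 33, 28]
noise_pct, no2_pct = to_percent(noise_db), to_percent(no2_ugm3)
print(round(pearson(noise_pct, no2_pct), 3))
```

Note that Pearson correlation is invariant under min-max scaling, so the percentage conversion matters for the comparative graphs and maps rather than for the correlation coefficient itself.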
Procedia PDF Downloads 212
577 The Paradox of Design Aesthetics and the Sustainable Design
Authors: Asena Demirci, Gozen Guner Aktaş, Nur Ayalp
Abstract:
Nature provides a living space for humans; in contrast, it is destroyed by humans for their personal needs and ambitions. To reduce this damage to nature, solutions have begun to be generated and developed, and precautions have been implemented. After the 1960s, especially once the ozone layer was harmed and thinned by toxic substances from man-made structures, environmental problems began to affect humans' daily activities, and environmental solutions and precautions became a priority issue for scientists. Most environmental problems are caused by buildings and factories built without any concern for protecting nature. This situation has created awareness of environmental issues, and terms like sustainability and renewable energy have appeared in the building, construction and architecture sectors to provide environmental protection. From this perspective, the design disciplines should also be respectful of nature and sustainability. Designs that are sustainable, renewable and ecological are less detrimental to the environment than designs that are not. Furthermore, such designs produce their own energy for consumption, so they do not deplete natural resources; they do not contain harmful substances, and they are made of recyclable materials. Thus, they become environmentally friendly structures. There is a common concern among designers about sustainable design: they believe that the idea of sustainability inhibits creativity and that works of sustainable design all resemble each other in aesthetics and technology. In addition, there is a concern about design ethics, in which aesthetic design cannot be accepted as a priority. For these reasons, there are few designs around the world that are both eco-friendly and well designed with aesthetic concerns in mind. 
As in other design disciplines, the concept of sustainability is becoming more important each day in interior architecture and interior design. Since human beings spend 90% of their lives in interior spaces, the importance of this concept in interior spaces is obvious. Aesthetics is another vital concern in interior space design. Most of the time, sustainable materials and sustainable interior design applications conflict with personal aesthetic parameters. This study aims to discuss the great paradox between design aesthetics and sustainable design. Does the sustainable approach in interior design disturb the design aesthetic? This is one of the most popular questions that has been discussed for a while. In this paper, this question will be evaluated with a case study that analyzes the aesthetic perceptions and preferences of users and designers in sustainable interior spaces.
Keywords: aesthetics, interior design, sustainable design, sustainability
Procedia PDF Downloads 289
576 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs
Authors: S. Lertsakulpasuk, S. Athichanagorn
Abstract:
As depletion-drive gas reservoirs are abandoned when the production rate becomes insufficient due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to its high cost, however, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a promising new approach to increase gas recovery by maintaining reservoir pressure at much lower cost than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs, commonly found in the Gulf of Thailand, was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after the gas production rate started to decline due to the continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer to increase the pressure of the gas reservoir in order to drive gas towards the producer. Two main operational parameters, the wellhead pressure of the producer and the time to start water dumpflood, were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery by up to 12% of OGIP, depending on operational conditions and system parameters. 
For systems having a large aquifer and a large distance between wells, it is best to start water dumpflood while the gas rate is still high, since the long distance between the gas producer and the dumpflood well helps delay water breakthrough at the producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for systems having a small or moderate aquifer and a short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better, because water is more likely to break through early when the distance is short. Water dumpflood into multiple nearly depleted or depleted gas reservoirs is a novel approach: the idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated in detail. This detailed study will help practicing engineers understand the benefits of the method and implement it with minimum cost and risk.
Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs
Procedia PDF Downloads 444
575 Configuring Resilience and Environmental Sustainability to Achieve Superior Performance under Differing Conditions of Transportation Disruptions
Authors: Henry Ataburo, Dominic Essuman, Emmanuel Kwabena Anin
Abstract:
Recent trends of catastrophic events, such as the Covid-19 pandemic, the Suez Canal blockage, the Russia-Ukraine conflict, the Israel-Hamas conflict, and the climate change crisis, continue to devastate supply chains and the broader society. Prior authors have advocated for a simultaneous pursuit of resilience and sustainability as crucial for navigating these challenges. Nevertheless, the relationship between resilience and sustainability is a rather complex one: resilience and sustainability are considered unrelated, substitutes, or complements. Scholars also suggest that different firms prioritize resilience and sustainability differently for varied strategic reasons. However, we know little about whether, how, and when these choices produce different typologies of firms to explain differences in financial and market performance outcomes. This research draws inferences from the systems configuration approach to organizational fit to contend that a taxonomy of firms may emerge based on how firms configure resilience and environmental sustainability. The study further examines the effects of these taxonomies on financial and market performance in differing transportation disruption conditions. Resilience is operationalized as a firm’s ability to adjust current operations, structure, knowledge, and resources in response to disruptions, whereas environmental sustainability is operationalized as the extent to which a firm deploys resources judiciously and keeps the ecological impact of its operations to the barest minimum. 
Using primary data from 199 firms in Ghana and cluster analysis as an analytical tool, the study identifies four clusters of firms based on how they prioritize resilience and sustainability: Cluster 1 - "strong, moderate resilience, high sustainability firms," Cluster 2 - "high resilience, high sustainability firms," Cluster 3 - "high resilience, strong, moderate sustainability firms," and Cluster 4 - "weak, moderate resilience, strong, moderate sustainability firms". In addition, ANOVA and regression analysis revealed the following findings: only clusters 1 and 2 were significantly associated with both market and financial performance. Under high transportation disruption conditions, cluster 1 firms perform better on market performance, whereas cluster 2 firms perform better on financial performance. Conversely, under low transportation disruption conditions, cluster 1 firms perform better on financial performance, whereas cluster 2 firms perform better on market performance. The study provides theoretical and empirical evidence of how resilience and environmental sustainability can be configured to achieve specific performance objectives under different disruption conditions.
Keywords: resilience, environmental sustainability, developing economy, transportation disruption
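The analytical pipeline the abstract describes, clustering firms on two configuration dimensions and then testing whether a performance outcome differs across clusters, can be sketched as follows. This is a minimal illustration on synthetic data: the blob centres, the Likert-like scale, and the performance outcome are all invented stand-ins, not the study's measures.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for the survey data: 199 "firms" scored on two
# constructs (resilience, sustainability) on a 1-7 Likert-like scale.
# The four blob centres are invented, not the study's cluster centres.
centres = [(5.5, 6.2), (6.3, 6.2), (6.2, 5.4), (4.8, 5.2)]
data = np.vstack([rng.normal(c, 0.4, size=(50, 2)) for c in centres])[:199]

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, then update."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centroids) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

labels, centroids = kmeans(data, k=4)

def anova_f(groups):
    """One-way ANOVA F statistic: between-group vs within-group variance."""
    grand = np.concatenate(groups).mean()
    ss_b = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, sum(len(g) for g in groups) - len(groups)
    return (ss_b / df_b) / (ss_w / df_w)

# Hypothetical performance outcome that depends on the configuration
perf = data @ np.array([0.5, 0.5]) + rng.normal(0, 0.3, len(data))
groups = [perf[labels == j] for j in range(4) if (labels == j).any()]
print("cluster sizes:", sorted(len(g) for g in groups))
print("ANOVA F:", round(anova_f(groups), 1))
```

With real survey data one would use validated multi-item scales and post-hoc comparisons; scikit-learn's `KMeans` and scipy's `f_oneway` give equivalent results with less code.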
Procedia PDF Downloads 67
574 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery
Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi
Abstract:
Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). Rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes’ return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes’ absence from sports are of great concern to athletes and coaches; estimating the safe time of RTS is therefore of crucial importance. Using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competition. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of ACL were defined: healthy, six months post-ACLR surgery, and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the surgery. Surface electromyography (sEMG) signals were recorded from five knee muscles, namely Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF), and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat mapping technique and fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into three health levels.
Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals in estimating RTS time, which will assist in evaluating the recovery process of ACLR in athletes.
Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network
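The signal-processing front end of the pipeline, turning an sEMG recording into a 2-D, image-like energy map that a convolutional network can consume, can be sketched as below. As a simplification, a short-time Fourier energy map stands in for the Pseudo-Wigner-Ville distribution used in the study, the input is a synthetic sEMG-like burst, and the window and hop sizes are arbitrary choices.

```python
import numpy as np

def stft_energy_map(signal, win=64, hop=16):
    """Short-time Fourier energy map -- a simplified stand-in for the
    Pseudo-Wigner-Ville distribution used in the study."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # energy per time-freq bin
    return spec.T  # rows = frequency, cols = time

def to_heatmap(energy):
    """Normalize the energy distribution to [0, 1] so it can be fed to a
    2-D convolutional network as an image-like input."""
    e = 10 * np.log10(energy + 1e-12)  # dB scale compresses dynamic range
    return (e - e.min()) / (e.max() - e.min())

# Synthetic sEMG-like burst: a tone with a Gaussian amplitude envelope
# plus background noise (illustrative only, not recorded muscle data)
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
emg = (np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
       + 0.1 * rng.standard_normal(len(t)))

img = to_heatmap(stft_energy_map(emg))
print(img.shape, float(img.min()), float(img.max()))
```

In the study proper, one such heat map per muscle and task would form the image-like input to the DCNN classifier.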
Procedia PDF Downloads 78
573 A Triad Pedagogy for Increased Digital Competence of Human Resource Management Students: Reflecting on Human Resource Information Systems at a South African University
Authors: Esther Pearl Palmer
Abstract:
Driven by the increased pressure on Higher Education Institutions (HEIs) to produce work-ready graduates for the modern world of work, this study reflects on triad teaching and learning practices to increase student engagement and employability. In the South African higher education context, the employability of graduates is imperative in strengthening the country’s economy and in increasing competitiveness. Within this context, the field of Human Resource Management (HRM) calls for innovative methods and approaches to teaching and learning and assessing the skills and competencies of graduates to render them employable. Digital competency in Human Resource Information Systems (HRIS) is an important component and prerequisite for employment in HRM. The purpose of this research is to reflect on the subject HRIS developed by lecturers at the Central University of Technology, Free State (CUT), with the intention to actively engage students in real-world learning activities and increase their employability. The Enrichment Triad Model (ETM) was used as theoretical framework to develop the subject as it supports a triad teaching and learning approach to education. It is, furthermore, an inter-structured model that supports collaboration between industry, academics and students. The study follows a mixed-method approach to reflect on the learning experiences of the industry, academics and students in the subject field over the past three years. This paper is a work in progress and seeks to broaden the scope of extant studies about student engagement in work-related learning to increase employability. Based on the ETM as theoretical framework and pedagogical practice, this paper proposes that following a triad teaching and learning approach will increase work-related skills of students. 
Findings from the study show that students, academics, and industry alike regard educational opportunities that incorporate active learning experiences from the world of work as enhancing student engagement in learning and rendering graduates more employable.
Keywords: digital competence, enriched triad model, human resource information systems, student engagement, triad pedagogy
Procedia PDF Downloads 91
572 Valorization of Seafood and Poultry By-Products as Gelatin Source and Quality Assessment
Authors: Elif Tugce Aksun Tumerkan, Umran Cansu, Gokhan Boran, Fatih Ozogul
Abstract:
Gelatin is a mixture of peptides obtained from collagen by partial thermal hydrolysis. It is an important and useful biopolymer used in food, pharmaceutical, and photographic products. Generally, gelatins are sourced from pig skin and bones and from beef bone and hide, but within the last decade alternative gelatin resources have attracted some interest. In this study, the functional properties of gelatin extracted from seafood and poultry by-products were evaluated. For this purpose, skins of skipjack tuna (Katsuwonus pelamis) and frog (Rana esculenta) were used as seafood by-products and chicken skin as a poultry by-product for gelatin extraction. Following extraction, all gelatin samples were lyophilized and stored in plastic bags at room temperature. To compare the gelatins obtained, the chemical composition and common quality parameters, including bloom value, gel strength, and viscosity, as well as melting and gelling temperatures, hydroxyproline content, and colorimetric parameters, were determined. The results showed that the highest protein content was obtained in frog gelatin (90.1%) and the highest hydroxyproline content in chicken gelatin (7.6%). Frog gelatin showed a significantly higher (P < 0.05) melting point (42.7°C) compared to that of fish (29.7°C) and chicken (29.7°C) gelatins. The bloom value of gelatin from frog skin (363 g) was higher than those of chicken and fish gelatins (352 and 336 g, respectively) (P < 0.05). While fish gelatin had a higher lightness (L*) value (92.64) compared to chicken and frog gelatins, the redness/greenness (a*) value was significantly higher in frog skin gelatin. Based on the results obtained, it can be concluded that skins of different animals with high commercial value may be utilized as alternative sources to produce gelatin with high yield and desirable functional properties.
The functional and quality analyses of gelatin from frog, chicken, and tuna skin showed that poultry and seafood by-products can be used as alternative gelatin sources to mammalian gelatin. The functional properties, including bloom strength, melting point, and viscosity, of gelatin from frog skin were more favorable than those of chicken and tuna skin. Among the gelatin groups, significant differences in characteristics such as gel strength and physicochemical properties were observed, based not only on the raw material but also on the extraction method.
Keywords: chicken skin, fish skin, food industry, frog skin, gel strength
Procedia PDF Downloads 163
571 Syngas From Polypropylene Gasification in a Fluidized Bed
Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo
Abstract:
In recent years, the world population has enormously increased its use of plastic products for everyday needs, in particular for transporting and storing consumer goods such as food and beverages. Plastics are widely used in the automotive industry, in the construction of electronic equipment, and in clothing and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons, and about 20% of the latter quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into terrestrial and marine environments, which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made from mixtures of polymers that are incompatible with each other and contain different additives. The products obtained are always of lower quality, and after two or three recycling cycles they must be eliminated, either by thermal treatment to produce heat or by disposal in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO and CO₂ suitable for the production of chemicals, with consequent savings in fossil resources. A hydrogen-rich syngas can be obtained by a gasification process in a fluidized bed reactor, in the presence of steam as the fluidization medium. The fluidized bed reactor allows the gasification of plastics to be carried out at a constant temperature and allows the use of different plastics with different compositions and grain sizes. Furthermore, during the gasification process the use of steam increases the gasification of the char produced by the initial pyrolysis/devolatilization of the plastic particles.
The bed inventory can be made of particles having catalytic properties, such as olivine, capable of catalysing the steam reforming reactions of heavy hydrocarbons (normally called tars), with a consequent increase in the quantity of gas produced. The plant is composed of a fluidized bed reactor made of AISI 310 steel, with an internal diameter of 0.1 m, containing 3 kg of olivine particles as bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot producer gases exiting the reactor, after being cooled, are quantified using a mass flow meter. Gas analyzers instantly measure the volumetric composition of H₂, CO, CO₂, CH₄ and NH₃. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.
Keywords: gasification, fluidized bed, hydrogen, olivine, polypropylene
Procedia PDF Downloads 27
570 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40–80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, so Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, inflows, and outflows were applied in a fifty-year simulation. The water balance is dominated by rain and evaporation, and the model simulations are validated with Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamic model can evaluate the effects of the wind stress exerted by the wind on the lake surface, which impacts the lake water level.
The model can also evaluate the effects of expected climate change, as manifested in future changes to rainfall over the catchment area of Lake Victoria.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
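The abstract's key empirical finding, that the lake's single outflow behaves like a simple linear control law on mean water level while rain and evaporation are in near-balance, can be illustrated with a toy daily water balance. All coefficients below are illustrative assumptions, not calibrated Lake Victoria values; only the lake surface area is taken from the abstract.

```python
import numpy as np

AREA = 68_800e6   # lake surface, m^2 (68,800 km^2, from the abstract)
K_OUT = 2.0e-7    # linear outflow coefficient, 1/s per m of head (assumed)
H_REF = 0.0       # reference (datum) water level, m

def simulate(days, rain, evap, inflow, h0=0.5, dt=86_400.0):
    """Forward-Euler water balance:
    dh/dt = (P - E + Qin/A) - k * (h - h_ref)."""
    h = h0
    levels = []
    for _ in range(days):
        outflow = K_OUT * max(h - H_REF, 0.0)  # linear outflow control law
        dhdt = rain - evap + inflow / AREA - outflow
        h += dhdt * dt
        levels.append(h)
    return np.array(levels)

# Near-balance of rain and evaporation, as the abstract reports: the level
# relaxes slowly toward an equilibrium set by river inflow and the outflow law.
rain = 3.5e-8    # m/s (~3 mm/day, illustrative)
evap = 3.4e-8    # m/s
inflow = 600.0   # m^3/s river inflow (illustrative)
levels = simulate(3650, rain, evap, inflow)
print("level after 10 years:", round(float(levels[-1]), 3), "m")
```

The equilibrium level is `(P - E + Qin/A) / k` above datum; because the net rain-evaporation term is tiny, the river inflow and the linear outflow law dominate where the level settles.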
Procedia PDF Downloads 227
569 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the largest number of end-use technologies are small-scale but numerous appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We will present tests of a burner designed for space heating and optimized for natural gas, burning blends of increasing hydrogen content (starting from 25%), and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. Results will show emissions, temperature, and flame length as a function of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may be absolutes in determining allowable hydrogen percentages. The first of these is flame blowback.
Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We will describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation
Procedia PDF Downloads 23
568 Production of Bio-Composites from Cocoa Pod Husk for Use in Packaging Materials
Authors: L. Kanoksak, N. Sukanya, L. Napatsorn, T. Siriporn
Abstract:
A growing population and demand for packaging are driving up the usage of natural resources as raw materials in the pulp and paper industry, and long-term environmental effects are disrupting people's way of life all across the planet. Finding pulp sources to replace wood pulp is therefore necessary. Various other potential plants or plant parts can be employed as substitute raw materials: for example, pulp and paper made from agricultural residues can be used in place of wood pulp. In this study, cocoa pod husks, an agricultural residue of the cocoa and chocolate industries, were used to develop composite materials to replace wood pulp in packaging materials. The paper was coated with polybutylene adipate-co-terephthalate (PBAT). Fresh cocoa pod husks were selected, cleaned, reduced in size, and dried, and their morphology and elemental composition were studied. To evaluate the mechanical and physical properties, the dried cocoa husks were pulped using the soda-pulping process. After selecting the best formulations, paper with a PBAT bioplastic coating was produced on a paper-forming machine, and its physical and mechanical properties were studied. Field Emission Scanning Electron Microscopy with Energy Dispersive X-Ray Spectroscopy (FESEM/EDS) revealed the main components of the dried cocoa pod husks. No porous structure was observed; the fibers were firmly bound, making the husks suitable as a raw material for pulp manufacturing. Dry cocoa pod husks contain carbon (C) and oxygen (O) as major elements, while magnesium (Mg), potassium (K), and calcium (Ca) are minor elements present at very low levels. Among the soda-pulping formulations, the SAQ5 formula gave the most favorable pulp yield, moisture content, and water drainage.
Cocoa pod husk pulp and modified starch were mixed to achieve the basis weight specified by the TAPPI T205 sp-02 standard. The paper was then coated with PBAT bioplastic, produced as a film from bioplastic resin by the blown film extrusion technique. Contact angle measurements, together with the dispersion and polar surface-energy components, showed that the coated paper is an effective hydrophobic material for rigid packaging applications.
Keywords: cocoa pod husks, agricultural residue, composite material, rigid packaging
Procedia PDF Downloads 76
567 Using Optimal Cultivation Strategies for Enhanced Biomass and Lipid Production of an Indigenous Thraustochytrium sp. BM2
Authors: Hsin-Yueh Chang, Pin-Chen Liao, Jo-Shu Chang, Chun-Yen Chen
Abstract:
Biofuel has drawn much attention as a potential substitute for fossil fuels. However, biodiesel from waste oil, oil crops, or other oil sources can only satisfy part of the existing demand for transportation fuels. Being clean, green, and viable for mass production, microalgae are regarded as a possible biodiesel feedstock for a low-carbon and sustainable society. In particular, Thraustochytrium sp. BM2, an indigenous heterotrophic microalga, possesses the potential for metabolizing glycerol to produce lipids. Hence, it is considered a promising microalgae-based oil source for biodiesel production and other applications. This study aimed to optimize the culture pH, scale up the process, assess the feasibility of producing microalgal lipid from crude glycerol, and apply operation strategies following the optimal results from the shake flask system in a 5 L stirred-tank fermenter to further enhance lipid productivity. Cultivation of Thraustochytrium sp. BM2 without pH control resulted in the highest lipid production of 3944 mg/L and biomass production of 4.85 g/L. Next, when the initial glycerol and corn steep liquor (CSL) amounts were increased five times (50 g and 62.5 g, respectively), the overall lipid productivity reached 124 mg/L/h. However, when crude glycerol was used as the sole carbon source, its direct addition inhibited culture growth. Therefore, acid and metal salt pretreatment methods were utilized to purify the crude glycerol. Crude glycerol pretreated with acid and CaCl₂ gave the greatest overall lipid productivity, 131 mg/L/h, when used as a carbon source and proved to be a good substitute for pure glycerol in the Thraustochytrium sp. BM2 cultivation medium. Engineering operation strategies such as fed-batch and semi-batch operation were applied in the cultivation of Thraustochytrium sp. BM2 to improve lipid production.
Under the fed-batch operation strategy, 132.60 g of biomass and 69.15 g of lipid were harvested. The lipid yield of 0.20 g/g glycerol was the same as in batch cultivation, although the overall lipid productivity was lower (107 mg/L/h). Under the semi-batch operation strategy, the overall lipid productivity reached 158 mg/L/h due to the shorter cultivation time; harvested biomass and lipid reached 232.62 g and 126.61 g, respectively, and the lipid yield improved from 0.20 to 0.24 g/g glycerol. In addition, the product costs of the three operation strategies were calculated. The lowest product cost, 12.42 NTD/g lipid, was obtained with the semi-batch operation strategy, a 33% reduction compared with the batch operation strategy.
Keywords: heterotrophic microalga Thraustochytrium sp. BM2, microalgal lipid, crude glycerol, fermentation strategy, biodiesel
Procedia PDF Downloads 148
566 Development of Oral Biphasic Drug Delivery System Using a Natural Resourced Polymer, Terminalia catappa
Authors: Venkata Srikanth Meka, Nur Arthirah Binti Ahmad Tarmizi Tan, Muhammad Syahmi Bin Md Nazir, Adinarayana Gorajana, Senthil Rajan Dharmalingam
Abstract:
Biphasic drug delivery systems are designed to release drug at two different rates, either fast/prolonged or prolonged/fast. A fast/prolonged release system provides a burst drug release at the initial stage followed by a slow release over a prolonged period of time; in a prolonged/fast release system, the release pattern is reversed. Terminalia catappa gum (TCG) is a natural polymer that has been successfully proven as a novel pharmaceutical excipient. The main objective of the present research is to investigate the applicability of the natural polymer Terminalia catappa gum in the design of an oral biphasic drug delivery system in the form of mini tablets, using buspirone HCl as a model drug. This investigation aims to produce a biphasic release drug delivery system of buspirone by combining immediate release and prolonged release mini tablets into a capsule. For the immediate release mini tablets, a dose of 4.5 mg buspirone was prepared by varying the concentration of the superdisintegrant crospovidone. On the other hand, prolonged release mini tablets were produced using different concentrations of the natural polymer TCG with a buspirone dose of 3 mg. All mini tablets were characterized for weight variation, hardness, friability, disintegration, content uniformity, and dissolution. The optimized formulations of immediate and prolonged release mini tablets were finally combined in a capsule, which was evaluated in release studies. FTIR and DSC studies were conducted to study drug-polymer interaction. All formulations of immediate release and prolonged release mini tablets passed all the in-process quality control tests according to the US Pharmacopoeia. The disintegration time of the immediate release mini tablets varied from 2-6 min across formulations, and maximum drug release was achieved in less than 60 min, whereas the prolonged release mini tablets made with TCG showed good drug-retarding properties.
Drug release was prolonged for about 4-10 h with varying concentrations of TCG: as the concentration of TCG increased, so did its drug-release-retarding effect. The optimised mini tablets were packed in capsules and evaluated for their release profile. The capsule dosage form clearly exhibited the biphasic release of buspirone, indicating that TCG is a suitable natural polymer for this study. FTIR and DSC studies proved that there was no interaction between the drug and the polymer. Based on these positive results, it can be concluded that TCG is a suitable polymer for biphasic drug delivery systems.
Keywords: Terminalia catappa gum, biphasic release, mini tablets, tablet in capsule, natural polymers
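The intended capsule behaviour, a fast initial phase from the immediate-release minitablets plus a slow phase from the TCG-matrix minitablets, can be sketched as the sum of two first-order release profiles. The doses (4.5 mg and 3 mg) are taken from the abstract; both rate constants are illustrative values chosen only to respect the reported time scales (immediate release complete in under 60 min, prolonged release over roughly 4-10 h), not fitted parameters.

```python
import numpy as np

D_IR, D_PR = 4.5, 3.0  # mg, doses from the abstract
K_IR = 6.0             # 1/h -> ~99% released within 1 h (assumed)
K_PR = 0.45            # 1/h -> ~97% released by 8 h (assumed)

def released(t_hours):
    """Cumulative buspirone released (mg) at time t, summing the
    immediate-release and prolonged-release phases (both first-order)."""
    ir = D_IR * (1 - np.exp(-K_IR * t_hours))
    pr = D_PR * (1 - np.exp(-K_PR * t_hours))
    return ir + pr

for t in (0.5, 1.0, 4.0, 8.0):
    print(f"t = {t:>4} h : {released(t):.2f} mg of {D_IR + D_PR} mg")
```

The plateau after the first hour followed by a slow climb toward the full 7.5 mg is the biphasic signature the dissolution study looks for; a matrix system is often better described by a Higuchi or Korsmeyer-Peppas term, which would replace the second exponential here.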
Procedia PDF Downloads 393
565 Production of Rhamnolipids from Different Resources and Estimating the Kinetic Parameters for Bioreactor Design
Authors: Olfat A. Mohamed
Abstract:
Rhamnolipid biosurfactants have distinct properties that give them importance in many industrial applications, especially promising future applications in the cosmetic and pharmaceutical industries. These applications have encouraged the search for diverse and renewable resources to control the cost of production. The experimental results were then applied to find a suitable mathematical model for obtaining the design criteria of a batch bioreactor. This research aims to produce rhamnolipids from different oily wastewater sources, such as petroleum crude oil (PO) and vegetable oil (VO), using Pseudomonas aeruginosa ATCC 9027. Different concentrations of PO and VO were added to the media broth separately (0.5, 1, 1.5, 2, 2.5% v/v and 2, 4, 6, 8, 10% v/v, respectively). The effect of the initial concentration of oil residues, and of the addition of glycerol and palmitic acid as inducers, on rhamnolipid production and the surface tension of the broth was investigated. It was found that 2% PO and 6% VO were the best initial substrate concentrations for rhamnolipid production (2.71 and 5.01 g rhamnolipid/L, respectively). Addition of glycerol (10-20% v glycerol/v PO) to the 2% PO fermentation broth increased rhamnolipid production about 1.8-2 fold. However, the addition of palmitic acid (5 and 10 g/L) to fermentation broth containing 6% VO barely enhanced the production rate. The experimental data for 2% initial PO were used to estimate the various kinetic parameters.
The following results were obtained: maximum reaction rate (Vmax = 0.06417 g/l.hr), yield of cell weight per unit weight of substrate utilized (Yx/s = 0.324 g Cx/g Cs), maximum specific growth rate (μmax = 0.05791 hr⁻¹), yield of rhamnolipid weight per unit weight of substrate utilized (Yp/s = 0.2571 g Cp/g Cs), maintenance coefficient (Ms = 0.002419), Michaelis-Menten constant (Km = 6.1237 gmol/l), and endogenous decay coefficient (Kd = 0.002375 hr⁻¹). The estimated parameters and the corresponding mathematical models were then applied to evaluate the batch time of the bioreactor. The results were 123.37, 129, and 139.3 hours with respect to microbial biomass, substrate, and product concentration, respectively, compared with an experimental batch time of 120 hours in all cases. The proposed mathematical models are compatible with the laboratory results and can therefore be considered tools for describing the actual system.
Keywords: batch bioreactor design, glycerol, kinetic parameters, petroleum crude oil, Pseudomonas aeruginosa, rhamnolipids biosurfactants, vegetable oil
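Using the reported kinetic parameters, a batch model of this kind can be sketched as a Monod growth law with maintenance and endogenous decay, integrated by forward Euler. The initial biomass and substrate concentrations below are illustrative assumptions (the study used 2% v/v crude oil), and the reported Km is used as the Monod saturation constant Ks, so the predicted batch time is only indicative, not the authors' exact model.

```python
import numpy as np

# Kinetic parameters as reported in the abstract
MU_MAX = 0.05791   # 1/h, maximum specific growth rate
KS     = 6.1237    # g/L, saturation constant (reported Km, assumed = Ks)
YXS    = 0.324     # g biomass / g substrate
YPS    = 0.2571    # g rhamnolipid / g substrate
KD     = 0.002375  # 1/h, endogenous decay coefficient
MS     = 0.002419  # g substrate / g biomass / h, maintenance coefficient

def batch(x0=0.1, s0=20.0, dt=0.01, t_end=300.0):
    """Forward-Euler batch run; returns time, biomass, substrate, product."""
    x, s, p, t = x0, s0, 0.0, 0.0
    ts, xs, ss, ps = [0.0], [x0], [s0], [0.0]
    while t < t_end and s > 1e-6:
        mu = MU_MAX * s / (KS + s)        # Monod specific growth rate
        dx = (mu - KD) * x                # growth minus endogenous decay
        ds = -(mu / YXS + MS) * x         # growth plus maintenance demand
        dp = (mu / YXS) * YPS * x         # product tied to substrate uptake
        x, s, p = x + dx * dt, max(s + ds * dt, 0.0), p + dp * dt
        t += dt
        ts.append(t); xs.append(x); ss.append(s); ps.append(p)
    return np.array(ts), np.array(xs), np.array(ss), np.array(ps)

ts, xs, ss, ps = batch()
print(f"substrate exhausted at ~{ts[-1]:.0f} h; "
      f"X = {xs[-1]:.2f} g/L, P = {ps[-1]:.2f} g/L")
```

With these assumed initial conditions the sketch exhausts the substrate on a time scale of roughly 100-150 hours, the same order as the 123-139 hour batch times the abstract reports; a calibrated run would fit x0 and s0 to the actual inoculum and 2% PO medium.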
Procedia PDF Downloads 131
564 Stems of Prunus avium: An Unexplored By-product with Great Bioactive Potential
Authors: Luís R. Silva, Fábio Jesus, Catarina Bento, Ana C. Gonçalves
Abstract:
Over the last few years, traditional medicine has gained ground at the nutritional and pharmacological level. Natural products and their derivatives are of great importance in several drugs used in modern therapeutics, and plant-based systems continue to play an essential role in primary healthcare. Additionally, the utilization of plant parts such as leaves, stems, and flowers in nutraceutical and pharmaceutical products can add high value in the natural products market, not just through the nutritional value of their significant levels of phytochemicals, but also through the benefit to producers and manufacturers. Stems of Prunus avium L. are a by-product of cherry processing and have been consumed over the years as infusions and decoctions due to their bioactive properties, being used as sedative, diuretic, and draining agents and for the relief of renal stones, edema, and hypertension. In this work, we prepared hydroethanolic and infusion extracts from stems of P. avium collected in the Fundão region (Portugal), and evaluated the phenolic profile by LC/DAD, the antioxidant capacity, the α-glucosidase inhibitory activity, and the protection of human erythrocytes against oxidative damage. The LC-DAD analysis allowed the identification of 19 phenolic compounds, catechin and 3-O-caffeoylquinic acid being the main ones. In general, the hydroethanolic extract proved to be more active than the infusion. This extract had the best antioxidant activity against DPPH• (IC50 = 22.37 ± 0.28 µg/mL) and the superoxide radical (IC50 = 13.93 ± 0.30 µg/mL). Furthermore, it was the most active concerning inhibition of hemoglobin oxidation (IC50 = 13.73 ± 0.67 µg/mL), hemolysis (IC50 = 1.49 ± 0.18 µg/mL), and lipid peroxidation (IC50 = 26.20 ± 0.38 µg/mL) in human erythrocytes. On the other hand, the infusion proved more efficient towards α-glucosidase inhibitory activity (IC50 = 3.18 ± 0.23 µg/mL) and against the nitric oxide radical (IC50 = 99.99 ± 1.89 µg/mL).
The sweet cherry sector is very important in the Fundão region (Portugal), and the opportunity to turn the large amount of waste generated during cherry processing into added-value products, such as food supplements, cannot be ignored. Our results demonstrate that P. avium stems possess remarkable antioxidant and free radical scavenging properties. It is therefore suggested that P. avium stems can be used as a natural antioxidant with high potential to prevent or slow the progress of human diseases mediated by oxidative stress. Keywords: stems, Prunus avium, phenolic compounds, biological potential
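The IC50 values reported above are the concentrations giving 50% of the maximal effect in each assay. As a minimal sketch of how such a value can be read off a dose-response series, the snippet below interpolates linearly between the two measured concentrations that bracket 50% inhibition; the concentrations and inhibition percentages are hypothetical illustrations, not the study's raw assay data.

```python
# Sketch: estimating an IC50 (concentration giving 50% inhibition) by linear
# interpolation between the two measured points that bracket 50%.
# The readings below are hypothetical, not the study's data.

def ic50(concentrations, inhibition_pct):
    """Return the concentration at 50% inhibition by linear interpolation.

    Assumes inhibition increases monotonically with concentration and
    crosses 50% within the measured range.
    """
    pairs = sorted(zip(concentrations, inhibition_pct))
    for (c_lo, i_lo), (c_hi, i_hi) in zip(pairs, pairs[1:]):
        if i_lo <= 50.0 <= i_hi:
            frac = (50.0 - i_lo) / (i_hi - i_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical DPPH assay readings (µg/mL, % inhibition):
conc = [5, 10, 20, 40, 80]
inhib = [18, 32, 47, 63, 81]
print(round(ic50(conc, inhib), 2))  # → 23.75
```

In practice a sigmoidal (four-parameter logistic) fit over the whole curve is more robust than two-point interpolation, but the bracketing idea is the same.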
Procedia PDF Downloads 297
563 The Foucaultian Relationship between Power and Knowledge: Genealogy as a Method for Epistemic Resistance
Authors: Jana Soler Libran
Abstract:
The primary aim of this paper is to analyze the relationship between power and knowledge suggested in Michel Foucault's theory. Taking into consideration the role of power in knowledge production, the goal is to evaluate to what extent genealogy can be presented as a practical method for epistemic resistance. To do so, the methodology consists of a review of Foucault's writings on the topic. In this sense, conceptual analysis is applied in order to understand the effect of the double dimension of power on knowledge production. In its negative dimension, power is conceived as an organ of repression, vetoing certain instances of knowledge considered deceitful. In its positive dimension, by contrast, power works as an organ of the production of truth by means of institutionalized discourses. This double dimension of power leads to the first main findings of the present analysis: no truth or knowledge can lie outside power's action, and power is constituted through accepted forms of knowledge. To support these statements, Foucaultian discourse formations are evaluated, presenting external exclusion procedures as paradigmatic practices that demonstrate how power creates and shapes the validity of certain epistemes. Thus, taking into consideration power's mechanisms for producing and reproducing institutionalized truths, this paper accounts for the Foucaultian praxis of genealogy as a method to reveal power's intentions, instruments, and effects in the production of knowledge. In this sense, genealogy is considered a practice which, firstly, reveals which instances of knowledge are subjugated to power and, secondly, promotes these peripheral discourses as a form of epistemic resistance. To counterbalance these main theses, objections to Foucault's work from Nancy Fraser, Linda Nicholson, Charles Taylor, Richard Rorty, Alvin Goldman, and Karen Barad are discussed. 
In essence, understanding the Foucaultian relationship between power and knowledge is essential to analyzing how contemporary discourses are produced by both traditional institutions and new forms of institutionalized power, such as mass media or social networks. Therefore, Michel Foucault's practice of genealogy is relevant not only for its philosophical contribution as a method to uncover the effects of power in knowledge production, but also because it constitutes a valuable theoretical framework for political theory and sociological studies concerning the formation of societies and individuals in the contemporary world. Keywords: epistemic resistance, Foucault's genealogy, knowledge, power, truth
Procedia PDF Downloads 124
562 Micro-Milling Process Development of Advanced Materials
Authors: M. A. Hafiz, P. T. Matevenga
Abstract:
Micro-level machining of metals is a developing field which has been shown to be a promising approach to producing features on parts in the range of a few to a few hundred microns with acceptable machining quality. It is known that the mechanics (i.e. the material removal mechanism) of micro-machining and conventional machining differ significantly due to scaling effects associated with tool geometry, tool material and workpiece material characteristics. Shape memory alloys (SMAs) are metal alloys which display two exceptional properties, pseudoelasticity and the shape memory effect (SME). Nickel-titanium (NiTi) alloys are one such unique family of metal alloys. NiTi alloys are known to be difficult-to-cut materials, specifically with conventional machining techniques, due to their distinctive properties. Their high ductility, high degree of strain hardening and unusual stress–strain behaviour are the main properties responsible for their poor machinability in terms of tool wear and workpiece quality. The motivation of this research work was to address the challenges and issues of micro-machining combined with those of machining NiTi alloys, both of which can affect the desired performance level of machining outputs. To explore the significance of a range of cutting conditions on surface roughness and tool wear, machining tests were conducted on NiTi. The influence of different cutting conditions and cutting tools on surface and sub-surface deformation in the workpiece was investigated. A design-of-experiments strategy (L9 orthogonal array) was applied to determine the key process variables, and the dominant cutting parameters were determined by analysis of variance. The findings showed that feed rate was the dominant factor for surface roughness, whereas depth of cut was the dominant factor for tool wear. 
The lowest surface roughness was achieved at a feed rate equal to the cutting edge radius, whereas the lowest flank wear was observed at the lowest depth of cut. Repeated machining trials have yet to be carried out in order to observe tool life, sub-surface deformation and strain-induced hardening, which are also expected to be among the critical issues in micro-machining of NiTi. Machining performance using different cutting fluids and strategies has yet to be studied. Keywords: nickel titanium, micro-machining, surface roughness, machinability
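The L9 analysis described above ranks factors by how much the mean response shifts across their levels. As a minimal sketch of that level-mean (range) analysis, the snippet below assigns three levels each of cutting speed, feed rate and depth of cut to an L9 array and picks the dominant factor; the array assignment and the roughness responses are hypothetical illustrations, not the study's measurements.

```python
# Sketch: level-mean (range) analysis for a Taguchi L9 design, the usual first
# step before ANOVA when ranking factors. The responses are hypothetical.

L9 = [  # columns: speed level, feed level, depth level (1..3)
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]
Ra = [0.42, 0.55, 0.71, 0.40, 0.58, 0.69, 0.44, 0.57, 0.73]  # hypothetical µm

def level_means(factor_idx):
    """Mean response at each level of one factor column."""
    sums, counts = {}, {}
    for run, y in zip(L9, Ra):
        lvl = run[factor_idx]
        sums[lvl] = sums.get(lvl, 0.0) + y
        counts[lvl] = counts.get(lvl, 0) + 1
    return {lvl: sums[lvl] / counts[lvl] for lvl in sums}

for name, idx in [("speed", 0), ("feed", 1), ("depth", 2)]:
    means = level_means(idx)
    delta = max(means.values()) - min(means.values())  # effect range
    print(f"{name}: delta={delta:.3f}")
# The factor with the largest delta dominates; with these numbers, feed rate.
```

ANOVA then formalizes the ranking by comparing each factor's sum of squares against the residual, but the delta ordering usually tells the same story.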
Procedia PDF Downloads 340
561 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast Spoiled Gradient-Echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare three rapid MRI techniques (SSFSE, FIESTA and FSPGR) for fetal MRI examinations and to explore the image quality and influencing factors of each. A 1.5T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA and FSPGR scanning. Multi-oriented and multi-slice images were acquired, and the MR images were then interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among the three rapid imaging techniques. Vessel signals are higher on FIESTA images than on SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but FIESTA is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from maternal and fetal movements. The scan times for the three sequences were 25 s (T2W-SSFSE), 20 s (FIESTA) and 18 s (FSPGR). 
In conclusion, all three rapid MR scanning sequences can produce high-contrast, high-spatial-resolution images. The scan time can be shortened by incorporating parallel imaging techniques, so that motion artifacts caused by fetal movements are reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis. Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
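The scan-time shortening from parallel imaging mentioned above follows simple arithmetic: a 2D Cartesian acquisition takes roughly TR × (phase-encode lines) × NEX, and an acceleration factor R divides the number of acquired lines. The sketch below illustrates that relationship with hypothetical parameter values, not the protocol settings used in the study.

```python
# Sketch: back-of-the-envelope 2D Cartesian scan-time arithmetic, showing how
# a parallel imaging acceleration factor R shortens acquisition (and thereby
# the window for fetal motion). All parameter values are hypothetical.

def scan_time_s(tr_ms, phase_encodes, nex=1, accel=1.0):
    """Approximate scan time of a 2D Cartesian sequence in seconds."""
    return tr_ms / 1000.0 * phase_encodes * nex / accel

base = scan_time_s(tr_ms=100, phase_encodes=200)            # unaccelerated
fast = scan_time_s(tr_ms=100, phase_encodes=200, accel=2)   # with R = 2
print(base, fast)  # → 20.0 10.0
```

The trade-off, not modeled here, is that SNR drops with acceleration (by at least the square root of R, worsened by the coil geometry factor).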
Procedia PDF Downloads 530
560 Determining a Sustainability Business Model Using Materiality Matrices in an Electricity Bus Factory
Authors: Ozcan Yavas, Berrak Erol Nalbur, Sermin Gunarslan
Abstract:
A materiality matrix is a tool that organizations use to prioritize their activities and adapt to the sustainability requirements that have been increasing in recent years. For the materiality matrix to move from the business model to the sustainability business model stage, it must be built with all partners across the raw material, supply, production, product and end-of-life stages. Within the scope of this study, a materiality matrix was used to transform the business model into a sustainability business model and to create a sustainability roadmap in a factory producing electric buses. Such a matrix lays out the roadmap needed for all stakeholders to participate in the process, especially in sectors that produce sustainable products, such as the electric vehicle sector, and to act together under the cradle-to-cradle approach of sustainability roadmaps. A Global Reporting Initiative analysis was conducted with 1150 stakeholders, who were asked 43 questions under the main headings 'Legal Compliance Level,' 'Environmental Strategies,' 'Risk Management Activities,' 'Impact of Sustainability Activities on Products and Services,' 'Corporate Culture,' 'Responsible and Profitable Business Model Practices' and 'Achievements in Leading the Sector,' grouped into the dimensions Economic, Governance, Environment, Social and Other. Based on the results, five first-priority issues and four second-priority issues were included in the organization's short- and medium-term sustainability strategies. When the short-term work is evaluated in terms of sustainability and environmental risk management, it is seen that activities are still limited to legal compliance (60%) and to individual initiatives in line with the strategies (20%). 
At the same time, the stakeholders expect the company to integrate sustainability activities into its business model within five years (35%) and to carry out projects to become the first company that comes to mind with its success leading the sector (20%). Another result obtained within the study's scope is identifying barriers to implementation. It is seen that the most critical obstacles identified by stakeholders with climate change and environmental impacts are financial deficiency and lack of infrastructure in the dissemination of sustainable products. These studies are critical for transitioning to sustainable business models for the electric vehicle sector to achieve the EU Green Deal and CBAM targets.Keywords: sustainability business model, materiality matrix, electricity bus, carbon neutrality, sustainability management
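The prioritization step behind a materiality matrix can be sketched mechanically: each topic gets a stakeholder-importance score and a business-impact score (e.g. averaged survey responses), and topics above both thresholds land in the first-priority tier. The topic names, scores and threshold below are hypothetical illustrations, not the study's survey results.

```python
# Sketch: tiering topics on a materiality matrix. Topics scoring above the
# threshold on both axes are first priority, on one axis second priority.
# All names and scores are hypothetical.

topics = {  # topic: (stakeholder importance, business impact), 1-5 scale
    "legal compliance": (4.6, 4.8),
    "carbon neutrality": (4.4, 4.2),
    "supply-chain risk": (3.9, 4.4),
    "corporate culture": (3.2, 3.8),
    "community projects": (2.8, 2.5),
}

def prioritize(topics, threshold=4.0):
    """Classify each topic into first/second priority or monitor."""
    tiers = {"first": [], "second": [], "monitor": []}
    for name, (stakeholder, business) in topics.items():
        above = (stakeholder >= threshold) + (business >= threshold)
        tiers[{2: "first", 1: "second", 0: "monitor"}[above]].append(name)
    return tiers

print(prioritize(topics))
```

Real materiality assessments weight the two axes and validate thresholds with stakeholders, but the quadrant logic is the same.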
Procedia PDF Downloads 61
559 Application of Zeolite Nanoparticles in Biomedical Optics
Authors: Vladimir Hovhannisyan, Chen Yuan Dong
Abstract:
Recently, nanoparticles (NPs) have been introduced into biomedicine as effective agents for cancer-targeted drug delivery and noninvasive tissue imaging. The most important requirements for these agents are non-toxicity, biocompatibility and stability. In view of these criteria, zeolite (ZL) nanoparticles may be considered excellent candidates for biomedical applications. ZLs are crystalline aluminosilicates consisting of oxygen-sharing SiO4 and AlO4 tetrahedral groups united by common vertices in a three-dimensional framework and containing pores with diameters from 0.3 to 1.2 nm. Generally, the behavior and physical properties of ZLs are studied by SEM, X-ray spectroscopy and AFM, whereas optical spectroscopic and microscopic approaches are not effective enough because of strong scattering in common ZL bulk materials and powders. This light scattering can be reduced by using ZL NPs. ZL NPs have a large external surface area, high dispersibility in both aqueous and organic solutions, high photo- and thermal stability, and an exceptional ability to adsorb various molecules and atoms in their nanopores. In this report, using multiphoton microscopy and nonlinear spectroscopy, we investigate the nonlinear optical properties of clinoptilolite-type ZL micro- and nanoparticles with average diameters of 2200 nm and 240 nm, respectively. Multiphoton imaging is achieved using a laser scanning microscope system (LSM 510 META, Zeiss, Germany) coupled to a femtosecond titanium:sapphire laser (repetition rate 80 MHz, pulse duration 120 fs, radiation wavelength 720-820 nm) (Tsunami, Spectra-Physics, CA). Two Zeiss Plan-Neofluar objectives (air immersion 20×/NA 0.5 and water immersion 40×/NA 1.2) are used for imaging. For detection of the nonlinear response, we use two detection channels with 380-400 nm and 435-700 nm spectral bandwidths. 
We demonstrate that ZL micro- and nanoparticles can produce a nonlinear optical response under near-infrared femtosecond laser excitation. The interaction of hypericin, chlorin e6 and other dyes with ZL NPs and their photodynamic activity are investigated. In particular, multiphoton imaging shows that individual ZL NPs adsorb Zn-tetraporphyrin molecules but do not adsorb fluorescein molecules. In addition, the nonlinear spectral properties of ZL NPs in native biotissues are studied. Nonlinear microscopy and spectroscopy may open new perspectives in the research and application of ZL NPs in biomedicine, and the results may help to introduce novel approaches into the clinical environment. Keywords: multiphoton microscopy, nanoparticles, nonlinear optics, zeolite
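A standard control when attributing a detected signal to a nonlinear (two-photon) process, not described in detail in the abstract but routine in multiphoton work, is to check that the signal scales quadratically with excitation power: the slope of log(signal) versus log(power) should be close to 2. The sketch below assumes hypothetical detector readings, not the study's measurements.

```python
# Sketch: checking two-photon character of a signal via its power dependence.
# A log-log slope near 2 indicates quadratic scaling. Readings hypothetical.

import math

def loglog_slope(powers_mw, signals):
    """Least-squares slope of log(signal) against log(power)."""
    xs = [math.log(p) for p in powers_mw]
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

powers = [5, 10, 20, 40]        # mW at the sample (hypothetical)
signals = [25, 100, 400, 1600]  # counts, exactly quadratic in this example
print(round(loglog_slope(powers, signals), 2))  # → 2.0
```

Slopes near 3 would instead indicate a three-photon process, and saturation or photodamage bends the curve at high power.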
Procedia PDF Downloads 417
558 Effects of Exercise Training in the Cold on Browning of White Fat in Obese Rats
Authors: Xiquan Weng, Chaoge Wang, Guoqin Xu, Wentao Lin
Abstract:
Objective: Cold exposure and exercise serve as two powerful physiological stimuli for launching the conversion of fat-accumulating white adipose tissue (WAT) into energy-dissipating brown adipose tissue (BAT). So far, it remains to be elucidated whether exercise plus cold exposure produces an additive effect in promoting WAT browning. Methods: 64 SD rats were fed high-fat, high-sugar diets for 9 weeks to establish an obesity model. They were randomly divided into 8 groups, including a normal control group (NC), normal exercise group (NE), continuous cold control group (CC), continuous cold exercise group (CE), intermittent cold control group (IC) and intermittent cold exercise group (IE). For continuous cold exposure, the rats stayed in a cold environment all day; for intermittent cold exposure, the rats were exposed to cold for only 4 h per day. The treadmill exercise protocol was as follows: speed 25 m/min, slope 0°, 30 min per bout with a 10-min interval between two bouts, twice every two days, for 5 weeks. Sampling was conducted at the end of the 5th week. The body length and weight of the rats were measured, and Lee's index was calculated. The visceral fat rate (VFR), subcutaneous fat rate (SFR), brown fat rate (BrFR) and body fat rate (BoFR) were measured by Micro-CT (LCT200), and the expression of UCP1 protein in inguinal fat was examined by Western blot. SPSS 22.0 was used for statistical analysis, with ANOVA performed between groups (P < 0.05 considered significant). Results: (1) Compared with the NC group, body weight declined significantly in the NE, CE and IE groups (P < 0.05), and Lee's index declined significantly in the CE group (P < 0.05). Compared with the NE group, body weight declined significantly in the CE and IE groups (P < 0.05). 
(2) Compared with the NC group, the VFR and BoFR of the rats declined significantly in the NE, CE and IE groups (P < 0.05), the SFR declined significantly in the CE and IE groups (P < 0.05), and the BrFR was significantly higher in the CC and IC groups (P < 0.05). Compared with the NE group, the VFR and BoFR declined significantly in the CE group (P < 0.05), the SFR was significantly higher in the CC and IC groups (P < 0.05), and the BrFR was significantly higher in the IC group (P < 0.05). (3) Compared with the NC group, up-regulation of UCP1 protein expression in inguinal fat was significant in the NE, CC, CE, IC and IE groups (P < 0.05). Compared with the NE group, up-regulation of UCP1 protein expression in inguinal fat was significant in the CC, CE and IE groups (P < 0.05). Conclusions: Exercise in continuous or intermittent cold, especially the former, can effectively reduce the weight and body fat rate of obese rats. This is related to the effect of cold and exercise on the browning of white fat. Keywords: cold, browning of white fat, exercise, obesity
Procedia PDF Downloads 131