Search results for: animal feed industry and fertilizer industry
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7479

1209 Sustainability in Space: Material Efficiency in Space Missions

Authors: Hamda M. Al-Ali

Abstract:

Questions about the history of the solar system and the search for signs of life on other planets have always been at the core of human space exploration. This has driven humans to investigate whether planets such as Mars could support human life, and many space missions to other planets have been designed and conducted to examine the feasibility of human survival on them. However, space missions are expensive and consume large amounts of various resources. To overcome these problems, material efficiency should be maximized through the use of reusable launch vehicles (RLVs) rather than disposable and expendable ones. Material efficiency is defined as a way to achieve service requirements using fewer materials, thereby reducing CO2 emissions from industrial processes. Materials such as aluminum-lithium alloys, steel, Kevlar, and reinforced carbon-carbon composites used in the manufacturing of spacecraft could be reused in closed-loop cycles, either directly or after the addition of a protective coat. Material efficiency is a fundamental principle of the circular economy, which aims to cut back waste and reduce pollution by maximizing material efficiency so that businesses can succeed and endure. Five strategies are proposed to improve material efficiency in the space industry, including waste minimization, the introduction of Key Performance Indicators (KPIs) to measure material efficiency, and the introduction of policies and legislation to improve material efficiency in the space sector. Another strategy is to maximize resource and energy efficiency through material reusability. Furthermore, the environmental effects associated with the rapid growth in the number of space missions include black carbon emissions that contribute to climate change; emission levels must be tracked and tackled to ensure the safe utilization of space in the future. The aim of this research paper is to examine and suggest effective methods for improving material efficiency in space missions so that space and Earth become more environmentally and economically sustainable. The objectives used to fulfill this aim are to identify the materials used in space missions that are suitable for reuse in closed-loop cycles, considering material efficiency indicators and circular economy concepts. An explanation of how spacecraft materials could be reused is given, and strategies to maximize material efficiency are proposed in order to make RLVs possible so that access to space becomes affordable and reliable. The economic viability of RLVs is also examined to show the extent to which their use reduces space mission costs, and the environmental and economic implications of the resulting increase in the number of space missions are discussed. These research questions are studied through a detailed critical analysis of the literature, including published reports, books, scientific articles, and journals. A combination of keywords such as material efficiency, circular economy, RLVs, and spacecraft materials was used to search for appropriate literature.

Keywords: access to space, circular economy, material efficiency, reusable launch vehicles, spacecraft materials

Procedia PDF Downloads 93
1208 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes have been compared with respect to the attributes that affect the countries' financial development. The study shows, through the application of ANN algorithms, how the most significant attributes within the relevant dataset are identified via the Holder functions (polynomial and exponential).
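
A minimal sketch of the idea, assuming synthetic series, a crude log-log oscillation regression as the local Holder exponent estimate, and scikit-learn's MLPClassifier standing in for the FFBP/CFBP networks used in the paper:

```python
# Illustrative sketch only: estimate a local Holder-type regularity exponent by
# regressing log(local oscillation) on log(scale), then classify series with a
# feed-forward neural network. Synthetic data; not the authors' dataset or method.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def local_holder_exponent(x, t, scales=(2, 4, 8, 16)):
    """Crude pointwise regularity estimate: slope of log(oscillation) vs log(scale)."""
    osc = []
    for s in scales:
        window = x[max(0, t - s):min(len(x), t + s + 1)]
        osc.append(window.max() - window.min() + 1e-12)   # avoid log(0)
    slope, _ = np.polyfit(np.log(scales), np.log(osc), 1)
    return slope

rng = np.random.default_rng(0)
X, y = [], []
for label, smooth in [(0, 5), (1, 1)]:                    # two synthetic "country groups"
    for _ in range(50):
        series = np.cumsum(rng.normal(size=256))
        if smooth > 1:                                     # smoother series = higher regularity
            series = np.convolve(series, np.ones(smooth) / smooth, mode="same")
        X.append([local_holder_exponent(series, t) for t in range(32, 224, 32)])
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```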

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 235
1207 Study of the Feasibility of Submerged Arc Welding (SAW) on Mild Steel Plate IS 2062 Grade B at Zero Degree Celsius

Authors: Ajay Biswas, Swapan Bhaumik, Saurav Datta, Abhijit Bhowmik

Abstract:

A series of experiments was carried out to study the feasibility of submerged arc welding (SAW) on mild steel plate of designation IS 2062 grade B whose specimen temperature was reduced to zero degrees Celsius, while the ambient temperature was about 25-27 degrees Celsius. To observe this, bead-on-plate submerged arc welds were formed on specimen plates of heavy-duty mild steel of designation IS 2062 grade B, fitted on a special fixture that keeps the specimen plate at zero degrees Celsius. Sixteen cold samples were welded by varying the most influential parameters, viz. voltage, wire feed rate, travel speed, and electrode stick-out, at four different levels. Another sixteen specimens at normal room temperature were welded by applying the same combinations of parameters. These sixteen parameter combinations were selected based on Taguchi's L16 orthogonal array design of experiments, with the intention of reducing the number of experimental runs. Different attributes of the bead geometry of all samples for both situations were measured and compared. It is established that submerged arc welding is feasible at zero degrees Celsius on mild steel plate of designation IS 2062 grade B, and the process parameters can also be optimized since a clear response to the parameters is obtained.
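
A minimal sketch of the Taguchi-style analysis, assuming an L16 orthogonal array built over GF(4), a made-up bead-penetration response and a larger-the-better signal-to-noise ratio; only the factor names follow the abstract:

```python
# Illustrative sketch: build an L16 array (4-level factors) over GF(4), attach a
# hypothetical response, and rank factor levels by mean signal-to-noise ratio.
import numpy as np

GF4_MUL = {  # multiplication table entries for GF(4); addition in GF(4) is bitwise XOR
    (2, 0): 0, (2, 1): 2, (2, 2): 3, (2, 3): 1,
    (3, 0): 0, (3, 1): 3, (3, 2): 1, (3, 3): 2,
}

def l16_four_level(n_factors=4):
    """Orthogonal L16 array for up to 5 factors at 4 levels (levels coded 0..3)."""
    rows = []
    for a in range(4):
        for b in range(4):
            rows.append([a, b, a ^ b, a ^ GF4_MUL[(2, b)], a ^ GF4_MUL[(3, b)]][:n_factors])
    return np.array(rows)

design = l16_four_level(4)                                   # 16 runs x 4 factors
rng = np.random.default_rng(1)
# Hypothetical bead-penetration response (mm); not measured data.
penetration = 3.0 + 0.4 * design[:, 0] + 0.2 * design[:, 2] + rng.normal(0, 0.1, 16)

sn = -10 * np.log10(1.0 / penetration ** 2)                  # larger-the-better S/N, single replicate

for f, name in enumerate(["voltage", "wire feed rate", "travel speed", "stick-out"]):
    means = [sn[design[:, f] == lvl].mean() for lvl in range(4)]
    print(f"{name:>15s}: mean S/N per level {np.round(means, 2)} -> best level {int(np.argmax(means))}")
```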

Keywords: submerged arc welding, zero degree Celsius, Taguchi’s design of experiment, geometry of weldment

Procedia PDF Downloads 436
1206 Feasibility Study of Submerged Arc Welding (SAW) on Mild Steel Plate IS 2062 Grade B at Zero Degree Celsius

Authors: Ajay Biswas, Abhijit Bhowmik, Saurav Datta, Swapan Bhaumik

Abstract:

A series of experiments was carried out to study the feasibility of submerged arc welding (SAW) on mild steel plate of designation IS 2062 grade B whose specimen temperature was reduced to zero degrees Celsius, while the ambient temperature was about 25-27 degrees Celsius. To observe this, bead-on-plate submerged arc welds were formed on specimen plates of heavy-duty mild steel of designation IS 2062 grade B, fitted on a special fixture that keeps the specimen plate at zero degrees Celsius. Sixteen cold samples were welded by varying the most influential parameters, viz. voltage, wire feed rate, travel speed and electrode stick-out, at four different levels. Another sixteen specimens at normal room temperature were welded by applying the same combinations of parameters. These sixteen parameter combinations were selected based on Taguchi's L16 orthogonal array design of experiments, with the intention of reducing the number of experimental runs. Different attributes of the bead geometry of all samples for both situations were measured and compared. It is established that submerged arc welding is feasible at zero degrees Celsius on mild steel plate of designation IS 2062 grade B, and the process parameters can also be optimized since a clear response to the parameters is obtained.

Keywords: geometry of weldment, submerged arc welding, Taguchi’s design of experiment, zero degree Celsius

Procedia PDF Downloads 422
1205 Parents as a Determinant for Students' Attitudes and Intentions toward Higher Education

Authors: Anna Öqvist, Malin Malmström

Abstract:

Attaining a higher level of education has become an increasingly important prerequisite for people's economic and social independence and mobility. Young people who do not pursue higher education are not as attractive as potential employees in the modern work environment. Although completing a higher education degree is not a guarantee of getting a job, it substantially increases the chances of employment and, consequently, the chances of a better life. Despite this, in several regions of Sweden fewer students are choosing to engage in higher education. Similar trends have been noted in, for instance, the US, where high dropout rates among young people have been observed. This is a threat to future employment and industry development in these regions, because the future employment base of society depends on students' willingness to invest in higher education. Much prior research has focused on parents' involvement in their children's school work and its positive influence on school performance. Parental influence on education in general has been a topic of interest among those concerned with optimal developmental and educational outcomes for children and youth in pre-school, secondary and high school. Across a range of studies, a strong conclusion has emerged that parental influence generally benefits children's and young people's learning and school success. Arguably, then, we could expect parents' influence on whether or not to pursue higher education to be important for understanding young people's choice to engage in it. Accordingly, understanding what drives students' intentions to pursue higher education is an essential component of motivating students to make the most of their potential in their future working life. Drawing on the theory of planned behavior, this study examines the role of parents' influence on students' attitudes about whether higher education can be beneficial to their future working life. We used a qualitative approach, collecting interview data from 18 high school students in Sweden to capture the cognitive and motivational mechanisms (attitudes) that influence intentions to engage in higher education. We found that parents may positively or negatively influence students' attitudes and, subsequently, a student's intention to pursue higher education. Our results show that parents' own attitudes and expectations of their children are key to influencing students' attitudes and intentions toward higher education. Further, our findings illuminate the mechanisms that drive students in one direction or the other: the same categories of arguments are used to drive students' attitudes and intentions in two opposite directions, namely financial arguments and work-life benefit arguments. Our results contribute to existing literature by showing that parents do affect young people's intentions to engage in higher studies. The findings contribute to the theory of planned behavior, have implications for the literature on higher education and educational psychology, and provide guidance on how to inform students in school about the realities of higher studies.

Keywords: higher studies, intentions, parents' influence, theory of planned behavior

Procedia PDF Downloads 245
1204 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally occurring radioactive tracers, as well as in the nuclear industry linked to the mining sector. In geological samples, the location and identification of radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is interesting for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy through a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra of the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within that area. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geo-materials, coupled with scanning electron microscope characterization. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments; the measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
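
A minimal sketch of how such a resolution figure can be obtained, assuming a synthetic Gaussian alpha peak and SciPy's curve_fit in place of the detector's actual spectrum processing:

```python
# Illustrative sketch: fit a Gaussian to a synthetic alpha peak and express its FWHM
# as a percentage of the peak energy, mirroring the "17.2% at 4647 keV" figure.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amplitude, mu, sigma):
    return amplitude * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
energy = np.linspace(3500, 5500, 400)                 # keV
true_sigma = 0.172 * 4647 / 2.355                     # sigma implied by a 17.2% FWHM
counts = gaussian(energy, 1000, 4647, true_sigma) + rng.poisson(5, energy.size)

(amplitude, mu, sigma), _ = curve_fit(gaussian, energy, counts, p0=(800, 4600, 300))
fwhm = 2.355 * abs(sigma)                             # FWHM = 2*sqrt(2*ln 2)*sigma
print(f"Peak at {mu:.0f} keV, FWHM {fwhm:.0f} keV ({100 * fwhm / mu:.1f} %)")
```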

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 134
1203 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is being provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured based on the timbre features such as mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) the statistical features such as mean, variance, minimum, and maximum of the timbre features and (2) the modulation spectrum features such as spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. Not only these conventional basic long-term feature vectors, but also NMF based feature vectors are proposed to be used together for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, an entire full band spectrum was used. However, for NMF-BFV, only low band spectrum was used since high frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as a classifier. The GTZAN multi-genre music database was used for training and testing. It is composed of 10 genres and 100 songs for each genre. To increase the reliability of the experiments, 10-fold cross validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features such as statistical features and modulation spectrum features, the NMF features provided the increased accuracy with a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required dimensionality of 460, but NMF-LSM and NMF-BFV required dimensionalities of 10 and 10, respectively. Combining the basic features, NMF-LSM and NMF-BFV together with the SVM with a radial basis function (RBF) kernel produced the significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
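
A minimal sketch of the NMF-feature idea, assuming synthetic spectra instead of GTZAN and a single 10-component scikit-learn NMF in place of the per-genre bases described above:

```python
# Illustrative sketch: derive 10-dimensional NMF activation features from synthetic
# log-magnitude "spectra" and classify them with an RBF-kernel SVM under 10-fold CV.
# (For brevity the NMF is fitted on all data; a stricter pipeline would fit it per fold.)
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genres, songs_per_genre, n_bins = 10, 100, 64

X_spec, y = [], []
for g in range(n_genres):
    template = rng.random(n_bins)                           # fake genre-specific spectral pattern
    for _ in range(songs_per_genre):
        X_spec.append(np.abs(template + 0.3 * rng.normal(size=n_bins)))
        y.append(g)
X_spec, y = np.array(X_spec), np.array(y)

nmf = NMF(n_components=n_genres, init="nndsvda", max_iter=500, random_state=0)
features = nmf.fit_transform(X_spec)                        # activation weights = NMF features

svm = SVC(kernel="rbf", gamma="scale")
print("10-fold CV accuracy:", cross_val_score(svm, features, y, cv=10).mean())
```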

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 281
1202 Sampling Error and Its Implication for Capture Fisheries Management in Ghana

Authors: Temiloluwa J. Akinyemi, Denis W. Aheto, Wisdom Akpalu

Abstract:

Capture fisheries in developing countries provide significant animal protein and directly support the livelihoods of several communities. However, misperception of biophysical dynamics, owing to a lack of adequate scientific data, has contributed to suboptimal management of marine capture fisheries, because yield and catch potentials are sensitive to the quality of catch and effort data. Yet studies on fisheries data collection practices in developing countries are hard to find. This study investigates the data collection methods utilized by fisheries technical officers within the four fishing regions of Ghana. We found that the officers employed data collection and sampling procedures that were not consistent with the technical guidelines curated by the FAO. For example, 50 instead of 166 landing sites were sampled, and 290 instead of 372 canoes were sampled. We argue that such sampling errors could result in the over-capitalization of capture fisheries and significant losses in resource rents.

Keywords: fisheries data quality, fisheries management, Ghana, sustainable fisheries

Procedia PDF Downloads 76
1201 Plackett-Burman Design for Microencapsulation of Blueberry Bioactive Compounds

Authors: Feyza Tatar, Alime Cengiz, Dilara Sandikçi, Muhammed Dervisoglu, Talip Kahyaoglu

Abstract:

Blueberries are known for their bioactive properties, such as high anthocyanin content, antioxidant activity and potential health benefits. However, anthocyanins are sensitive to environmental conditions during processing. The objective of this study was to evaluate the effects of spray drying conditions on blueberry microcapsules using a Plackett-Burman experimental design. Inlet air temperature (120 and 180°C), feed pump rate (20% and 40%), DE of maltodextrin (6 and 15 DE), coating concentration (10% and 30%) and source of blueberry (Duke and Darrow) were the independent variables, tested at high (+1) and low (-1) levels. Encapsulation efficiency (based on total phenols) of the blueberry microcapsules was the dependent variable. In addition, anthocyanin content, antioxidant activity, water solubility, water activity and bulk density were measured for the blueberry powders. The antioxidant activity of the blueberry powders ranged from 72 to 265 mmol Trolox/g, and the anthocyanin content ranged from 528 to 5500 mg GAE/100g. Encapsulation efficiency was significantly affected (p<0.05) by inlet air temperature and coating concentration; it increased with increasing inlet air temperature and decreasing coating concentration. The highest encapsulation efficiency could be produced by spray drying at 180°C inlet air temperature, 40% pump rate, 6 DE maltodextrin, 13% maltodextrin concentration and Duke blueberries as the source.
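
A minimal sketch of a Plackett-Burman analysis, assuming the standard 12-run generator row, the five factors named above, and an invented encapsulation-efficiency response:

```python
# Illustrative sketch: build the 12-run Plackett-Burman design by cyclic shifts of the
# published generator row, keep 5 columns, and estimate main effects on fake data.
import numpy as np

generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])   # classic N = 12 row
design = np.vstack([np.roll(generator, s) for s in range(11)] + [-np.ones(11, dtype=int)])[:, :5]

factors = ["inlet T", "pump rate", "maltodextrin DE", "coating conc.", "blueberry source"]
rng = np.random.default_rng(0)
# Hypothetical efficiency: rises with inlet temperature, falls with coating concentration.
efficiency = 70 + 6 * design[:, 0] - 5 * design[:, 3] + rng.normal(0, 1, 12)

for i, name in enumerate(factors):
    effect = efficiency[design[:, i] == 1].mean() - efficiency[design[:, i] == -1].mean()
    print(f"{name:>17s}: main effect = {effect:+.1f}")
```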

Keywords: blueberry, microencapsulation, Plackett-Burman design, spray drying

Procedia PDF Downloads 277
1200 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences all begins with loss of containment: oil and gas released through leakage or spillage from primary containment results in pool fires, jet fires and even explosions when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points along the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable to experimental data with very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets to justify the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
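
A minimal sketch of the simulate-then-learn idea, assuming crude synthetic pressure/flow snapshots in place of CFD output and a random forest as the supervised model:

```python
# Illustrative sketch: generate leak / no-leak sensor snapshots (a stand-in for CFD
# simulation output) and train a supervised classifier to flag loss of containment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_cases, n_sensors = 2000, 6

def simulate_case(leak):
    pressure = 50 + rng.normal(0, 1, n_sensors)             # bar, along the pipeline
    flow_in, flow_out = 100 + rng.normal(0, 2), 100 + rng.normal(0, 2)
    if leak:
        loc = rng.integers(1, n_sensors)                    # pressure drop downstream of the leak
        pressure[loc:] -= rng.uniform(2, 8)
        flow_out -= rng.uniform(3, 10)                      # mass imbalance
    return np.concatenate([pressure, [flow_in, flow_out, flow_in - flow_out]])

y = rng.integers(0, 2, n_cases)
X = np.array([simulate_case(label) for label in y])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["intact", "leak"]))
```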

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 172
1199 Advanced Real-Time Fluorescence Imaging System for Rat's Femoral Vein Thrombosis Monitoring

Authors: Sang Hun Park, Chul Gyu Song

Abstract:

Changes in artery and vein occlusion observed in patients and experimental animals are symptoms that remain difficult to explain. As fat accumulated in the cardiovascular system ruptures, it causes vascular blockage; consequently, early detection of cardiovascular disease can be useful for treatment. In this study, we used the mouse femoral occlusion model to observe arterial and venous occlusion changes without a darkroom. We observed changes in the femoral arterial flow pattern with the proposed fluorescence imaging system using an animal model of thrombosis. We adjusted the near-infrared light source current in order to control the intensity of the light emitted by the fluorescent substance. We obtained clear fluorescence images, and the femoral artery flow pattern was measured at 5-minute intervals. The results showed that the fluorescent substance flowing in the femoral arteries accumulated in the thrombus as time passed, while the fluorescence of the other vessels gradually decreased.

Keywords: thrombus, fluorescence, femoral, arteries

Procedia PDF Downloads 326
1198 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional “one object, one right” theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the “bundle of rights” theory, it establishes a specific three-level structure of data rights and analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz. It concludes that recognizing property rights over personal data and protecting data within the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 11
1197 Collaborative Procurement in the Pursuit of Net-Zero: A Converging Journey

Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John

Abstract:

The Architecture, Engineering, and Construction (AEC) sector plays a critical role in the global transition toward sustainable and net-zero built environments. However, the industry faces unique challenges in planning for net-zero while struggling with low productivity, cost overruns and an overall resistance to change. Traditional practices fall short because they cannot meet the requirements for systemic change, especially as governments increasingly demand transformative approaches. Working in silos within rigid hierarchies, together with a short-term, client-centric approach that prioritises immediate gains over long-term benefit, stands in stark contrast to the fundamental requirements for realising net-zero objectives. These practices have limited capacity to effectively integrate AEC stakeholders and promote the knowledge sharing essential to address the multifaceted challenges of achieving net-zero. In the context of the built environment, procurement may be described as the method by which a project proceeds from inception to completion. Collaborative procurement methods under the Integrated Practices (IP) umbrella have the potential to align more closely with net-zero objectives. This paper explores the synergies between collaborative procurement principles and the pursuit of net-zero in the AEC sector, drawing upon the shared values of cross-disciplinary collaboration, Early Supply Chain Involvement (ESI), use of standards and frameworks, digital information management, strategic performance measurement, integrated decision-making principles and contractual alliancing. To investigate the role of collaborative procurement in advancing net-zero objectives, a structured research methodology was employed. First, the study presents a systematic review of the application of collaborative procurement principles in the AEC sphere. Next, a comprehensive analysis is conducted to identify common clusters of these principles across multiple procurement methods. An evaluative comparison between traditional procurement methods and collaborative procurement for achieving net-zero objectives is then presented, and the intersection between collaborative procurement principles and net-zero requirements is identified. Lastly, key insights for AEC stakeholders, focusing on the implications and practical applications of these findings, are explored, and directions for future development of this research are recommended. Adopting collaborative procurement principles can serve as a strategic framework for guiding the AEC sector towards realising net-zero. Synergising these approaches overcomes fragmentation, fosters knowledge sharing, and establishes a net-zero-centred ecosystem. In the context of ongoing efforts to improve project efficiency within the built environment, it is imperative that AEC stakeholders recognise the central role of procurement. When effectively leveraged, collaborative procurement emerges as a powerful tool to surmount existing challenges in attaining net-zero objectives.

Keywords: collaborative procurement, net-zero, knowledge sharing, architecture, built environment

Procedia PDF Downloads 61
1196 Immune Responses and Pathological Manifestations in Chicken to Oral Infection with Salmonella typhimurium

Authors: Mudasir Ahmad Syed, Raashid Ahmd Wani, Mashooq Ahmad Dar, Uneeb Urwat, Riaz Ahmad Shah, Nazir Ahmad Ganai

Abstract:

Salmonella enterica serovar Typhimurium (Salmonella Typhimurium) is a primary avian pathogen responsible for severe intestinal pathology in younger chickens and for economic losses. However, Salmonella Typhimurium is also able to cause infection in humans, characterized by typhoid fever and acute gastro-intestinal disease. A study was conducted to investigate the pathological, histopathological, haemato-biochemical and immunological changes, as well as the expression kinetics of the NRAMP (natural resistance associated macrophage protein) gene family (NRAMP1 and NRAMP2), in broiler chickens at 0, 1, 3, 5, 7, 9, 11, 13 and 15 days following experimental infection with Salmonella Typhimurium. Infection was established through the oral route at 2×10⁸ CFU/ml. Clinical symptoms appeared 4 days post infection (dpi), and after one week birds showed progressive weakness, anorexia, diarrhea and lowering of the head. On postmortem examination, the liver showed congestion, hemorrhage and necrotic foci on its surface, whereas the spleen, lungs and intestines revealed congestion and hemorrhages. Histopathological alterations were principally observed in the liver in the second week post infection; changes comprised congestion, areas of necrosis, and reticuloendothelial hyperplasia in association with mononuclear cell and heterophil infiltration. Hematological studies confirmed a significant decrease (P<0.05) in RBC count, Hb concentration and PCV. The white blood cell count showed a significant increase throughout the experimental study. An increase in heterophils was found up to 7 dpi and a decreasing pattern was observed afterwards. Initial lymphopenia followed by lymphocytosis was found in infected chicks. Biochemical studies showed a significant increase in glucose, AST and ALT concentrations and a significant decrease (P<0.05) in total protein and albumin levels in the infected group. Immunological studies showed higher titers of IgG in the infected group as compared to the control group. The real-time gene expression of the NRAMP1 and NRAMP2 genes increased significantly (P<0.05) in the infected group as compared to controls. The peak expression of the NRAMP1 gene was seen in the liver, spleen and caecum of infected birds at 3 dpi, 5 dpi and 7 dpi respectively, while the peak expression of the NRAMP2 gene in the liver, spleen and caecum of infected chickens was seen at 9 dpi, 5 dpi and 9 dpi respectively. This study has a role in diagnostics and prognostics in the poultry industry for the detection of Salmonella infections at early stages of poultry development.

Keywords: biochemistry, histopathology, NRAMP, poultry, real time expression, Salmonella Typhimurium

Procedia PDF Downloads 322
1195 Fermented Unripe Plantain (Musa paradisiaca) Peel Meal as a Replacement for Maize in the Diet of Nile Tilapia (Oreochromis niloticus) Fingerlings

Authors: N. A. Bamidele, S. O. Obasa, I. O. Taiwo, I. Abdulraheem, O. C. Odebiyi, A. A. Adeoye, O. E. Babalola, O. V. Uzamere

Abstract:

A feeding trial was conducted to investigate the effect of fermented unripe plantain peel meal (FUP) on the growth performance, nutrient digestibility and economic indices of production of Nile tilapia, Oreochromis niloticus, fingerlings. Fingerlings (150) of Nile tilapia (1.70±0.1 g) were stocked at 10 per plastic tank. Five iso-nitrogenous diets containing 40% crude protein, in which maize meal was replaced by fermented unripe plantain peel meal at 0% (FUP0), 25% (FUP25), 50% (FUP50), 75% (FUP75) and 100% (FUP100), were formulated and prepared. The fingerlings were fed at 5% body weight per day for 56 days. There was no significant difference (p > 0.05) in any of the growth parameters among the treatments. The feed conversion ratio of 1.35 in fish fed diet FUP25 was not significantly different (p > 0.05) from 1.42 in fish fed diet FUP0. Apparent protein digestibility of 86.94% in fish fed diet FUP100 was significantly higher (p < 0.05) than 70.37% in fish fed diet FUP0, while apparent carbohydrate digestibility of 88.34% in fish fed diet FUP0 was significantly different (p < 0.05) from 70.29% for FUP100. The red blood cell count (4.30 ml/mm3) of fish fed diet FUP100 was not significantly different from 4.13 ml/mm3 in fish fed diet FUP50. The highest percentage profit of 88.85% in fish fed diet FUP100 was significantly higher than 66.68% in fish fed diet FUP0, while the profit index of 1.89 in fish fed diet FUP100 was significantly different from 1.67 in fish fed diet FUP0. Therefore, fermented unripe plantain peel meal can completely replace maize in the diet of O. niloticus fingerlings.
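
A worked illustration of the two economic indices quoted above, assuming their common definitions (feed fed per unit weight gain; value of fish produced over cost of feed fed) and invented numbers:

```python
# Hypothetical numbers only, used to show how the quoted indices are computed.
initial_weight_g = 1.70
final_weight_g = 9.20            # assumed final mean weight
feed_fed_g = 10.1                # assumed total dry feed per fish over 56 days

fcr = feed_fed_g / (final_weight_g - initial_weight_g)       # feed conversion ratio

fish_value_per_g = 0.008         # assumed currency units per gram of fish
feed_cost_per_g = 0.004          # assumed currency units per gram of feed
profit_index = (final_weight_g * fish_value_per_g) / (feed_fed_g * feed_cost_per_g)

print(f"FCR = {fcr:.2f}, profit index = {profit_index:.2f}")
```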

Keywords: fermentation, fish diets, plantain peel, tilapia

Procedia PDF Downloads 519
1194 Gas Network Noncooperative Game

Authors: Teresa Azevedo PerdicoúLis, Paulo Lopes Dos Santos

Abstract:

The conceptualisation of the problem of network optimisation as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) whose optimisation objectives differ, so that a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition needed to design a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players which communicate through network connectivity constraints–the pipeline model. That is, in a scheme similar to tâtonnement, the players appoint their best settings and then interact to check for network feasibility. The resulting degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify that solution. The proposed scheme is proven correct on an abstract prototype and three example networks.
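
A toy sketch of the two-phase best-response idea, assuming two players with quadratic costs and a shared feasibility penalty standing in for the pipeline connectivity model:

```python
# Toy tatonnement-style iteration: each "player" (e.g., a compressor station) repeatedly
# picks its own best setting given the other's, with a shared penalty on infeasibility.
# Quadratic costs are used only so the best responses have closed forms.

target = 10.0          # total flow the network must deliver (feasibility surrogate)
rho = 5.0              # weight on network infeasibility
a1, a2 = 1.0, 2.0      # individual operating-cost coefficients

def best_response(other, a):
    # argmin_x of a*x**2 + rho*(x + other - target)**2
    return rho * (target - other) / (a + rho)

x1, x2 = 0.0, 0.0
for it in range(100):
    x1_new = best_response(x2, a1)
    x2_new = best_response(x1_new, a2)
    delta = abs(x1_new - x1) + abs(x2_new - x2)
    x1, x2 = x1_new, x2_new
    if delta < 1e-9:
        break

print(f"converged after {it} iterations: x1={x1:.3f}, x2={x2:.3f}, "
      f"residual infeasibility={x1 + x2 - target:+.3f}")
```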

Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition

Procedia PDF Downloads 139
1193 New Method to Increase the Contrast of Electron Micrographs of Rat Tissue Sections

Authors: Lise Paule Labéjof, Raíza Sales Pereira Bizerra, Galileu Barbosa Costa, Thaísa Barros dos Santos

Abstract:

Since the beginning of microscopy, improving image quality has always been a concern of its users. For transmission electron microscopy (TEM) in particular, the problem is even more important due to the complexity of the sample preparation technique and the many variables that can affect the preservation of structures, the proper operation of the equipment used, and hence the quality of the images obtained. Animal tissues being transparent, it is necessary to apply a contrast agent in order to identify the elements of their ultrastructural morphology. Several methods of contrasting tissues for TEM imaging have already been developed; the most used are "en bloc" staining and "in situ" staining. This report presents an alternative technique of applying the contrast agent in vivo, i.e. before sampling. With this new method, electron micrographs of tissue sections have better contrast compared to in situ staining and present no contrast-agent precipitation artefacts. Another advantage is that only a small amount of contrast agent is needed to obtain a good result, which matters given that most contrast agents are expensive and extremely toxic.

Keywords: image quality, microscopy research, staining technique, ultra thin section

Procedia PDF Downloads 418
1192 Graphene-Graphene Oxide Doping Effect on the Mechanical Properties of Polyamide Composites

Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu

Abstract:

Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or arise from its easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications does not come only from the use of graphene or graphene oxide alone, or after prior functionalization with different moieties; they are also building blocks and important components in many composite devices, their addition bringing new functionalities to the final composite or strengthening those already present in the parent product. An attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving its hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties that are comparable to the starting material or even worse, with graphene oxide agglomerates becoming tearing points in the final material if the amount added is too high (greater than 3% by mass relative to the parent material). Two standard tests, hardness and tensile strength, were performed on the obtained materials before and after the aging process. For the aging process, accelerated aging in extreme heat was used to simulate the effect of natural aging over a long period of time. FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison, due to the very small amounts introduced in the final composite along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged.

Keywords: graphene, graphene oxide, mechanical properties, doping effect

Procedia PDF Downloads 301
1191 Survey of Campylobacter Contamination in Poultry Meat and By-Products in Khuzestan Province

Authors: Ali Bagherpour, Masoud Soltanialvar

Abstract:

Campylobacter species are common bacterial pathogens associated with human gastroenteritis and are generally transmitted through foods of animal origin. This study was carried out to determine the prevalence of Campylobacter species in poultry meat and by-products in the city of Dezful, Iran. From April 2012 to July 2013, a total of 400 samples, including meat (n = 100), liver (n = 100), gizzard (n = 100), and poultry heart (n = 100), were randomly collected from the Dezful industrial poultry abattoir and examined for the presence of Campylobacter species. According to the culture test, 251 of the 400 samples under study (69%) were contaminated with Campylobacter species. The highest prevalence of Campylobacter species was observed in poultry liver (78.3%), followed by gizzard (75.8%), heart (65%) and meat (56.7%). The most commonly isolated species was C. jejuni (90.9%), with the remainder being C. coli (9.1%). There was a significant difference (P < 0.05) in the prevalence of Campylobacter species for the meat samples taken in the summer (86.7%). The results of this study indicate the importance of edible poultry offal as a potential source of Campylobacter infections.

Keywords: Campylobacter jejuni, Campylobacter coli, poultry, meat, products

Procedia PDF Downloads 593
1190 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites

Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga

Abstract:

Improving the thermal performance of buildings is a major concern for the construction industry. With the increase in environmental issues, new types of construction materials are being developed, including bio-based insulation materials. They capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer still raises some issues. Because of their high porosity, mass transfer is greater in these materials than in mineral insulation, so they can be more sensitive to moisture disorders such as mold growth, condensation, or a decrease in the wall's energy efficiency. For this reason, the initial moisture content on the construction site is crucial knowledge. Measuring moisture content in a laboratory is a well-mastered task. Diverse methods exist, but the easiest, and the reference method, is gravimetric: a material is weighed dry and wet, and its moisture content is mathematically deduced. Non-destructive testing (NDT) methods are promising tools for determining moisture content easily and quickly, in a laboratory or on construction sites. However, the quality and reliability of the measurements are influenced by several factors. Classical portable NDT devices usable on-site measure the capacitance or the resistivity of materials. Water's electrical properties are very different from those of construction materials, which is why the water content can be deduced from these measurements. However, most moisture meters are made to measure wooden materials, and some of them can be adapted to construction materials with calibration curves. In any case, these devices are almost never calibrated for insulation materials. The main objective of this study is to determine the reliability of moisture meters in the measurement of bio-based insulation materials: to determine whether the capacitive or the resistive method is more accurate, and which device gives the best results. Several bio-based insulation materials are tested: recycled cotton, two types of wood fiber of different densities (53 and 158 kg/m3), and a mix of linen, cotton, and hemp. Since it seems important to also assess the behavior of a mineral material, glass wool is measured as well. An experimental campaign is performed in a laboratory. A gravimetric measurement of the materials is carried out for every level of moisture content; these levels are set using a climatic chamber, by setting the relative humidity level at a constant temperature. The mass-based moisture contents measured in this way are considered reference values, and the results given by the moisture meters are compared to them. A complete measurement uncertainty analysis is also carried out. These results are used to analyze the reliability of the moisture meters depending on the materials and their water content. This makes it possible to determine whether the moisture meters are reliable and which one is the most accurate. The selected device will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, in both new-build and renovation projects.
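
A minimal sketch of the gravimetric reference calculation, assuming dry-basis reporting and an invented balance resolution for the uncertainty estimate:

```python
# Dry-basis moisture content from wet and oven-dry masses, with a crude first-order
# uncertainty propagated from the balance resolution. Example numbers are invented.
def moisture_content_dry_basis(m_wet_g, m_dry_g):
    """u = (m_wet - m_dry) / m_dry, usually reported as a percentage of dry mass."""
    return (m_wet_g - m_dry_g) / m_dry_g

def moisture_uncertainty(m_wet_g, m_dry_g, balance_resolution_g=0.01):
    du_dwet = 1.0 / m_dry_g                      # partial derivative w.r.t. wet mass
    du_ddry = -m_wet_g / m_dry_g ** 2            # partial derivative w.r.t. dry mass
    return ((du_dwet * balance_resolution_g) ** 2 + (du_ddry * balance_resolution_g) ** 2) ** 0.5

m_wet, m_dry = 12.84, 11.95                      # grams, hypothetical wood-fibre specimen
u = moisture_content_dry_basis(m_wet, m_dry)
su = moisture_uncertainty(m_wet, m_dry)
print(f"moisture content = {100 * u:.2f} % +/- {100 * su:.2f} % (dry basis)")
```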

Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing

Procedia PDF Downloads 97
1189 Compensatory Increased Activities of Mitochondrial Respiratory Chain Complexes from Eyes of Glucose-Immersed Zebrafish

Authors: Jisun Jun, Eun Ko, Sooim Shin, Kitae Kim, Moonsung Choi

Abstract:

Diabetes is a metabolic disease characterized by hyperglycemia, insulin resistance and mitochondrial dysfunction. It is associated with the development of diabetic retinopathy, resulting in worsening vision and eventual blindness. In this study, eyes were enucleated from glucose-immersed zebrafish, which is a good animal model for inducing diabetes, and mitochondria were isolated to evaluate the activities of the mitochondrial electron transfer complexes. Surprisingly, the amount of isolated mitochondria was increased in eyes from glucose-immersed zebrafish compared to those from non-glucose-immersed zebrafish. Spectrophotometric analysis measuring the activities of mitochondrial complexes I, II, III, and IV revealed that mitochondrial function was even enhanced in eyes from glucose-immersed zebrafish. These results indicate that 3 or 7 days of glucose immersion to induce diabetes in zebrafish might trigger a metabolic compensatory mechanism that restores mitochondrial homeostasis in the eye at the early stage of diabetes.

Keywords: diabetes, glucose immersion, mitochondrial complexes, zebrafish

Procedia PDF Downloads 187
1188 Activation of Apoptosis in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Exposed to Various Cadmium Concentration

Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska

Abstract:

The digestive system of insects is composed of three distinct regions: fore-, mid- and hindgut. The middle region (the midgut) is treated as one of the barriers that protect the organism against stressors originating from the external environment, e.g. toxic metals. Such factors can activate cell death in epithelial cells to protect the entire tissue/organ against degeneration. Different mechanisms involved in homeostasis maintenance have been described, but studies of animals under field conditions do not make it possible to conclude whether subsequent generations can inherit tolerance mechanisms. This is possible only with a multigenerational strain of an animal reared under laboratory conditions and exposed to a selected toxic factor that is also present in polluted ecosystems. The main purpose of the project was to check whether the changes that appear in the midgut epithelium after Cd treatment can become fixed during the following generations of insects, with special emphasis on apoptosis. As the animal for these studies we chose the 5th larval stage of the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a pest of many vegetable crops. Animals were divided into several experimental groups: K, Cd, KCd, Cd1, Cd2 and Cd3. The control group (K) was fed a standard diet and maintained for XX generations; the cadmium group (Cd) was fed a standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food) for XXX generations. A reference Cd group (KCd) was initiated in which control insects were fed a Cd-supplemented diet (44 mg Cd per kg of dry weight of food). Experimental groups Cd1, Cd2 and Cd3 were developed from the control group and received 5, 10 and 20 mg Cd per kg of dry weight of food, respectively. We were interested in the activation of apoptosis during the following generations in all experimental groups. Therefore, during the 1st year of the experiment, measurements were made for 6 generations in all experimental groups. The intensity and the course of apoptosis were examined using transmission electron microscopy (TEM), confocal microscopy and flow cytometry. During apoptosis the cell started to shrink, extracellular spaces appeared between digestive and neighboring cells, and the nucleus acquired a lobular shape. Eventually, the apoptotic cells were discharged into the midgut lumen. A quantitative analysis revealed that the number of apoptotic cells depends significantly on the generation, the tissue and the cadmium concentration in the insect rearing medium. Over the following 6 generations, we observed that the percentage of apoptotic cells in the midguts from cadmium-exposed groups decreased gradually according to the following order of strains: Cd1, Cd2, Cd3 and KCd. At the same time, it was still higher than the percentage of apoptotic cells in the same tissues of insects from the control and the multigenerational cadmium strain. The results of our studies suggest that changes caused by cadmium treatment were preserved during 6-generational development of lepidopteran larvae. The study has been financed by the National Science Centre Poland, grant no 2016/21/B/NZ8/00831.

Keywords: cadmium, cell death, digestive system, ultrastructure

Procedia PDF Downloads 201
1187 Pioneering Conservation of Aquatic Ecosystems under Australian Law

Authors: Gina M. Newton

Abstract:

Australia's Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., ecosystem-like units) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial listings to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: the contribution of invasive species to conservation status; how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and the identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State level remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of the protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems – for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing, mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least over immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and the ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome of this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged – while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly.

Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species

Procedia PDF Downloads 120
1186 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most accessible means of communication, through which we can quickly exchange our feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and it is likewise easier to listen to audio played from a device than to read its output. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-Trained Transformer-3 (GPT-3) natural language model. Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model. Based on user preference, a new speech command format can be added by including some examples of that format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded using this pipeline so that the user gives speech commands and the output is played from the device.
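
To make the stage-by-stage data flow concrete, the following is a minimal structural sketch of such a pipeline in Python. The function names, stub return values and command format are illustrative assumptions; they are not the OpenVINO Model Zoo APIs or the exact prompts used in this work, and each stub only marks where the corresponding model (QuartzNet ASR, GPT-3 summarization, YOLO detection, text-to-speech) would be invoked.

    # Structural sketch of the audio-visual co-data pipeline (hypothetical names).
    # Each stage is a stub; in practice it would wrap the corresponding model.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Command:
        targets: List[str]   # object labels to look for, e.g. ["person", "car"]
        start_s: float       # start of the interval to search, in seconds
        end_s: float         # end of the interval to search, in seconds


    def transcribe(audio_path: str) -> str:
        """ASR stage: speech command -> text (QuartzNet in the paper)."""
        return "find person and car between 10 and 40 seconds"  # stub output


    def summarise(text: str) -> Command:
        """NLU stage: text -> structured command (GPT-3 prompt in the paper)."""
        return Command(targets=["person", "car"], start_s=10.0, end_s=40.0)  # stub


    def detect_targets(video_path: str, cmd: Command) -> List[int]:
        """Detection stage: run YOLO on frames inside [start_s, end_s] and keep
        the frame numbers where any target label is found."""
        return [312, 575, 901]  # stub: pretend three frames matched


    def speak(text: str) -> None:
        """TTS stage: read the result back to the user."""
        print("TTS:", text)


    def run_pipeline(audio_path: str, video_path: str) -> None:
        text = transcribe(audio_path)
        cmd = summarise(text)
        frames = detect_targets(video_path, cmd)
        speak(f"Targets {', '.join(cmd.targets)} found in frames {frames}")


    if __name__ == "__main__":
        run_pipeline("command.wav", "warehouse.mp4")
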

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 68
1185 Sustainable Building Law - The Legal Issues Abound

Authors: Richard J. Sobelsohn

Abstract:

Green building and sustainable development help fight climate change and protect the ozone layer, animal habitats, air quality, and groundwater. The reasons to go green have multiplied to the point that a developer building from the ground up or renovating/retrofitting a property has a plethora of choices for reaching the green goal post. Sustainability not only affects the bottom line but also satisfies corporate mandates (ESG), consumer demand, market requirements, and the many laws dictating green building practices. The good news is that there are many paths a property owner can take to become green. The bad news is that there are many paths a property owner can take to become green, and they need to choose which direction to take. Certification of a building used to be the highest achievement in the green building world. Now there are so many variables and laws with which a property owner must comply that the legal analysis has mushroomed. Operation and maintenance have also become among the most important functions for a prudent green building owner. So, in addition to the “development/retrofit” parties involved in the sustainable building legal world, we now need to include all those people who keep the building green, and there are a lot of them!

Keywords: green building, sustainable development, legal issues, greenwashing, green cleaning, compliance, ESG

Procedia PDF Downloads 106
1184 A Greener Approach towards the Synthesis of an Antimalarial Drug Lumefantrine

Authors: Luphumlo Ncanywa, Paul Watts

Abstract:

Malaria is a disease that kills approximately one million people annually, and children and pregnant women in sub-Saharan Africa are among those most likely to lose their lives to it. Malaria continues to be one of the major causes of death, especially in poor countries in Africa, so decreasing the burden of malaria and saving lives is essential. There is a major concern about malaria parasites developing resistance towards antimalarial drugs, and people are still dying due to the lack of affordable medicines in less well-off countries. If more people could receive treatment because the cost of drugs was reduced, the number of deaths in Africa could be massively reduced. There is a shortage of pharmaceutical manufacturing capability within many countries in Africa, so one has to question how Africa would actually manufacture the drugs, active pharmaceutical ingredients or medicines developed within these research programs. It is quite likely that such manufacturing would be outsourced overseas, increasing the cost of production and potentially limiting the full benefit of the original research. As a result, the last few years have seen major interest in developing more effective and cheaper technology for manufacturing generic pharmaceutical products. Micro-reactor technology (MRT) is an emerging technique that enables those working in research and development to rapidly screen reactions utilizing continuous flow, leading to the identification of reaction conditions that are suitable for use at a production level; this technique will be used here to develop antimalarial drugs. It is this system flexibility that has the potential to reduce both the time taken and the risk associated with transferring reaction methodology from research to production. Using an approach referred to as scale-out or numbering up, a reaction is first optimized within the laboratory using a single micro-reactor, and in order to increase production volume, the number of reactors employed is simply increased. The overall aim of this research project is to develop and optimize the synthesis of antimalarial drugs in continuous flow. This will provide a step change in pharmaceutical manufacturing technology that will increase the availability and affordability of antimalarial drugs on a worldwide scale, with a particular emphasis on Africa in the first instance. The research will determine the best chemistry and technology to define the lowest-cost manufacturing route to pharmaceutical products. We are currently developing a method to synthesize Lumefantrine in continuous flow, using the batch process as a benchmark. Lumefantrine is a dichlorobenzylidene derivative effective for the treatment of various types of malaria; it is used with artemether for the treatment of uncomplicated malaria. The results obtained when synthesizing Lumefantrine in a batch process are transferred to a continuous flow process in order to develop an even better and reproducible process. The development of an appropriate synthetic route for Lumefantrine is therefore significant for the pharmaceutical industry. If better (and cheaper) manufacturing routes to antimalarial drugs can be developed and implemented where needed, antimalarial drugs are far more likely to be available to those in need.
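
As a rough illustration of the scale-out (numbering-up) approach described above, the sketch below estimates how many identical micro-reactors running in parallel would be needed to reach a target annual output, given a single-reactor throughput measured in the laboratory. All figures are placeholder assumptions, not data from this study.

    # Minimal numbering-up estimate for continuous-flow production (illustrative
    # numbers only; no values here come from the Lumefantrine work itself).

    import math

    def reactors_needed(target_kg_per_year: float,
                        single_reactor_g_per_hour: float,
                        operating_hours_per_year: float = 8000.0) -> int:
        """Number of identical micro-reactors required to hit a target output,
        assuming each reactor runs at the lab-optimised throughput."""
        per_reactor_kg_per_year = (single_reactor_g_per_hour / 1000.0
                                   * operating_hours_per_year)
        return math.ceil(target_kg_per_year / per_reactor_kg_per_year)

    # Example: a hypothetical 5 g/h lab reactor and a 2,000 kg/year demand
    print(reactors_needed(target_kg_per_year=2000, single_reactor_g_per_hour=5))
    # -> 50 reactors operating in parallel
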

Keywords: antimalarial, flow, lumefantrine, synthesis

Procedia PDF Downloads 184
1183 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids

Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh

Abstract:

The most well-known structures of lyotropic liquid crystalline systems are the two-dimensional hexagonal phase of cylindrical micelles with a positive interfacial curvature and the lamellar phase of flat bilayers with zero interfacial curvature. In aqueous solutions of surfactants, concentration-dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentration, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature are still unexplored steps towards understanding the route of the phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important, as it determines the complex rheological behavior of the system, which is one of the main interests of the soft matter industry. Sodium dodecyl sulfate (SDS) is an anionic surfactant and can be considered a unique system in which the electrostatics can be tuned by cationic additives. In the present study, imidazolium-based ionic liquids (ILs) with different numbers of carbon atoms in their single hydrocarbon chain were used as additives in aqueous solutions of SDS. At a fixed concentration of total non-aqueous components (SDS and IL), the molar ratio of these components was changed, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is observed to change and, correspondingly, the structure of the hexagonal liquid crystalline phase is transformed into other phases. Polarizing optical microscopy of the SDS and imidazolium-based IL systems exhibited different textures of the liquid crystalline phases as a function of increasing IL concentration. The small angle synchrotron x-ray diffraction (SAXD) study indicated that the hexagonal phase of direct cylindrical micelles transforms into a rectangular phase in the presence of a short-chain IL (two carbon atoms). However, the hexagonal phase transforms into a lamellar phase in the presence of a long-chain IL (ten carbon atoms). Interestingly, in the presence of a medium-chain IL (four carbon atoms), the hexagonal phase is transformed into another hexagonal phase of direct cylindrical micelles through the lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small angle x-ray diffraction study revealed the lattice parameters of these phases to be similar to each other, their rheological behavior is distinctly different, and the rheological studies shed light on how these phases differ in their viscoelastic behavior. Finally, the packing parameters, calculated for these phases based on the geometry of the aggregates, explain the formation of the self-assembled aggregates.
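
The packing-parameter argument invoked at the end of the abstract commonly uses the standard relation P = v / (a0 · lc), where v is the hydrocarbon-chain volume, a0 the effective headgroup area and lc the extended chain length; roughly, 1/3 < P < 1/2 favors cylindrical micelles (hexagonal phases) and 1/2 < P < 1 favors bilayers (lamellar phases). Below is a minimal sketch using the Tanford estimates for the chain geometry, a typical literature headgroup area for SDS, and the standard relation between the first SAXD peak and the hexagonal lattice parameter; these are assumed values and textbook formulas, not the quantities determined in this study.

    # Standard surfactant packing-parameter estimate (Tanford relations), used to
    # rationalise cylinder (hexagonal) vs. bilayer (lamellar) curvature.  The
    # headgroup area below is a typical literature value for SDS, not a value
    # measured in this study.

    import math

    def tanford_chain(n_carbons: int):
        """Approximate chain volume (A^3) and extended chain length (A)."""
        v = 27.4 + 26.9 * n_carbons
        lc = 1.5 + 1.265 * n_carbons
        return v, lc

    def packing_parameter(n_carbons: int, a0_A2: float) -> float:
        v, lc = tanford_chain(n_carbons)
        return v / (a0_A2 * lc)

    def hexagonal_lattice_parameter(q10_inv_A: float) -> float:
        """Lattice parameter a (A) of a 2D hexagonal phase from the first
        diffraction peak position q10 (A^-1): a = 4*pi / (sqrt(3)*q10)."""
        return 4.0 * math.pi / (math.sqrt(3) * q10_inv_A)

    P = packing_parameter(n_carbons=12, a0_A2=62.0)   # SDS-like tail, assumed a0
    print(f"P = {P:.2f}")   # ~0.34 -> 1/3 < P < 1/2, i.e. cylindrical micelles
    print(f"a = {hexagonal_lattice_parameter(0.15):.1f} A for q10 = 0.15 A^-1")
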

Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction

Procedia PDF Downloads 125
1182 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is therefore key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process, and as a consequence traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when fed into the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing the milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
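
For readers unfamiliar with DTW, the sketch below shows the basic dynamic-programming alignment cost between two univariate trajectories of unequal length. It is a generic illustration only: the KMT method evaluated in the paper is a weighted, multivariate extension of this scheme, and the trajectories here are mock data, not the conching measurements.

    # Basic dynamic time warping between two univariate batch trajectories of
    # unequal length.  Illustrative only; the KMT method compared in the paper
    # is a weighted, multivariate extension of this idea.

    import numpy as np

    def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                # allow match, insertion and deletion steps on the warping path
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Two mock trajectories sampled at different batch durations
    ref = np.sin(np.linspace(0, 3, 495))         # reference batch, 495 samples
    new = np.sin(np.linspace(0, 3, 610)) + 0.05  # new batch, 610 samples
    print(f"DTW distance: {dtw_distance(ref, new):.2f}")

In a KNN setting, a warping-based cost of this kind (or the synchronized trajectories it produces) would take the place of a plain Euclidean distance between batches of equal length.
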

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 155
1181 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits

Authors: Bryan Low

Abstract:

In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net zero carbon emissions by 2050. To meet this target, the Maritime and Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries around the world, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves as a nutrient in coastal ecosystems at lower concentrations; at higher concentrations, however, it has been found to be toxic to many local species, and it may have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, given prevailing hydrodynamic conditions, and its environmental impact. This will be essential as ammonia bunkering becomes more commonplace in Singapore’s ports and around the world. Specifically, the system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or to choose a time window with ideal hydrodynamic conditions for conducting ammonia bunkering operations with minimal risk. In this paper, we present the first part of such a forecasting system: a coupled hydrodynamic and water quality model that captures how advection-diffusion processes driven by ocean currents influence plume behaviour and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, impact on local ecosystems, and mitigation measures for future bunkering operations in the Singapore Straits.
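
As a schematic of the advection-diffusion transport that such a coupled model resolves, the following minimal sketch integrates a one-dimensional advection-diffusion equation with a first-order uptake term for a depth-averaged ammonia concentration. The current speed, diffusivity, uptake rate and spill size are placeholder values chosen only to keep the explicit scheme stable; they are not calibrated parameters for the Singapore Straits, and a real forecast would use the 2D/3D hydrodynamic fields described in the paper.

    # 1D advection-diffusion-uptake sketch for a depth-averaged ammonia plume:
    #   dC/dt + u dC/dx = D d2C/dx2 - k C
    # Explicit upwind scheme; all parameter values are illustrative placeholders.

    import numpy as np

    L, dx = 10_000.0, 10.0            # 10 km domain, 10 m grid spacing
    u, D, k = 0.5, 5.0, 1e-5          # current (m/s), diffusivity (m^2/s), uptake (1/s)
    dt, t_end = 5.0, 7200.0           # time step (s), simulate 2 hours

    x = np.arange(0.0, L, dx)
    C = np.zeros_like(x)
    C[(x > 1000) & (x < 1100)] = 10.0  # idealised spill: 10 mg/L over a 100 m patch

    assert u * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5  # explicit-scheme stability

    for _ in range(int(t_end / dt)):
        adv = -u * (C - np.roll(C, 1)) / dx               # upwind advection (u > 0)
        dif = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
        C = C + dt * (adv + dif - k * C)
        C[0] = C[-1] = 0.0                                # open boundaries

    print(f"peak concentration after 2 h: {C.max():.2f} mg/L "
          f"at x = {x[C.argmax()] / 1000:.1f} km")
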

Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling

Procedia PDF Downloads 57
1180 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction

Authors: Bruce Wrightsman

Abstract:

Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) and the structure. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom led to serious questioning of what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. Second, its reliance on wood assemblies forming walls, floors and roofs, conventionally nailed together through simple plate surfaces, is structurally inefficient; it requires additional material in plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, the history of wood construction in the airplane and boat manufacturing industries shows a significant transformation in the relationship between structure and skin. Boat construction evolved from indigenous wood practices such as birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of a thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to 'single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Such systems have largely existed outside the domain of architecture, yet this uniting of divergent systems has been demonstrated to be lighter and to use less material than traditional wood building practices. This paper will examine the role monocoque systems have played in the history of wood construction through the lineage of the boat and airplane building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster a deeper, more honest discourse regarding the limitations and impact of traditional wood framing.

Keywords: wood building systems, material histories, monocoque systems, construction waste

Procedia PDF Downloads 68