Search results for: main cable

1854 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering

Authors: R. Nandhini, Gaurab Mudbhari

Abstract:

Ranging from health care to self-driving cars, machine learning and deep learning algorithms have revolutionized fields that rely on images and visual data. Segmentation, regression, classification, clustering, and dimensionality reduction are some of the tasks through which machine learning and deep learning models have become state-of-the-art in domains where images are the key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate, high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Because of the distinctive structural attributes of images, conventional methods often fail to capture spatial patterns effectively, which has driven the development of models that use more advanced architectures and attention mechanisms. For image classification, we investigated both CNNs and ViTs. CNNs, well known for their ability to detect spatial hierarchies, serve as one core model in our study. ViTs serve as the other core model, reflecting a modern classification approach whose self-attention mechanism allows it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of the two architectures in terms of accuracy, precision, recall, and F1-score across different image datasets, analyzing their suitability for various categories of images. In the clustering domain, we assess DEC, Variational Autoencoders (VAEs), and conventional techniques such as k-means applied to embeddings derived from CNN models. DEC has attracted attention because it combines feature learning and clustering in a single framework, with the goal of improving clustering quality through better feature representation. VAEs, in turn, group similar images through a probabilistic approach that uses latent embeddings and requires no prior labels.
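As a hedged illustration of the evaluation protocol described above (accuracy, precision, recall, and F1-score for classifiers, plus k-means applied to learned embeddings), the following minimal scikit-learn sketch is not the authors' code; the dataset and the simple classifier stand in for CNN/ViT features and heads.

```python
# Minimal sketch (not the authors' pipeline): evaluating a classifier with
# accuracy/precision/recall/F1 and clustering embeddings with k-means.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, adjusted_rand_score
from sklearn.cluster import KMeans

X, y = load_digits(return_X_y=True)                 # stand-in for CNN/ViT features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)   # stand-in classifier head
y_pred = clf.predict(X_test)
acc = accuracy_score(y_test, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_test, y_pred, average="macro")
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} F1={f1:.3f}")

# Clustering on (stand-in) embeddings, as done with k-means on CNN features.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
print("ARI vs. ground truth:", adjusted_rand_score(y, labels))
```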

Keywords: machine learning, deep learning, image classification, image clustering

Procedia PDF Downloads 18
1853 Understanding the Interplay between Consumer Knowledge, Trust and Relationship Satisfaction in Financial Services

Authors: Torben Hansen, Lars Gronholdt, Alexander Josiassen, Anne Martensen

Abstract:

Consumers often exhibit a bias in their knowledge; they often think that they know more or less than they actually do. The concept of 'knowledge over/underconfidence' (O/U) has been used in previous studies to investigate such knowledge bias. O/U appears as a combination of subjective and objective knowledge. Subjective knowledge relates to consumers’ perception of their knowledge, while objective knowledge relates to consumers’ absolute knowledge measured against objective standards. This separation leads to three scenarios: the consumer can either be knowledge calibrated (subjective and objective knowledge are similar), overconfident (subjective knowledge exceeds objective knowledge) or underconfident (objective knowledge exceeds subjective knowledge). Knowledge O/U is a highly useful concept in understanding consumer choice behavior. For example, knowledge-overconfident individuals are likely to exaggerate their ability to make the right choices, are more likely to opt out of necessary information search, spend less time carrying out a specific task than less knowledge-confident consumers, and are more likely to show high financial trading volumes. Using financial services as a case study, this study contributes to previous research by examining how consumer knowledge O/U affects two types of trust (broad-scope trust and narrow-scope trust) and consumer relationship satisfaction. Trust concerns not only consumer trust in individual companies (i.e., narrow-scope trust, NST), but also consumer confidence in the broader business context in which consumers plan and implement their behavior (i.e., broad-scope trust, BST). NST is defined as 'the expectation that the service provider can be relied on to deliver on its promises', while BST is defined as 'the expectation that companies within a particular business type can generally be relied on to deliver on their promises.' This study expands our understanding of the interplay between consumer knowledge bias, consumer trust, and relationship marketing in two main ways: First, it is demonstrated that the more knowledge over/underconfident a consumer becomes, the higher/lower NST and relationship satisfaction will be. Second, it is demonstrated that BST has a negative moderating effect on the relationship between knowledge O/U and satisfaction, such that knowledge O/U has a stronger positive/negative effect on relationship satisfaction when BST is low rather than high. The data for this study comprise 756 mutual fund investors. Trust is particularly important in consumers’ mutual fund behavior because mutual funds have important responsibilities in providing financial advice and in managing consumers’ funds.
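Moderation effects of the kind described (BST moderating the link between knowledge O/U and satisfaction) are commonly tested with an interaction term in a regression model. The sketch below, with hypothetical column names and simulated data, only illustrates that type of analysis; it is not the authors' model.

```python
# Illustrative moderation analysis (hypothetical column names, simulated data):
# satisfaction ~ knowledge_ou * bst tests whether BST moderates the O/U effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 756                                   # sample size reported in the abstract
df = pd.DataFrame({"knowledge_ou": rng.normal(size=n), "bst": rng.normal(size=n)})
df["satisfaction"] = 0.4 * df.knowledge_ou - 0.3 * df.knowledge_ou * df.bst + rng.normal(size=n)

model = smf.ols("satisfaction ~ knowledge_ou * bst", data=df).fit()
print(model.summary().tables[1])          # the knowledge_ou:bst term is the moderation effect
```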

Keywords: knowledge, cognitive bias, trust, customer-seller relationships, financial services

Procedia PDF Downloads 302
1852 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)

Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin

Abstract:

The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active and selective processes for biomass transformation to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked considerable interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which was then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂ and CHx fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made by visualizing the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration. The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.
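Adsorption geometries and energies of the kind compared here for Pt(111) and Ni(111) are typically computed on periodic slab models. The sketch below uses the ASE toolkit with its toy EMT potential and CO as a simple example adsorbate, purely to illustrate the slab/adsorbate workflow; the actual study used DFT with acetic acid and acetate species.

```python
# Illustrative slab/adsorbate setup with ASE (EMT is a toy potential, used here
# only so the example runs without a DFT code; the study itself used DFT).
from ase.build import fcc111, add_adsorbate, molecule
from ase.calculators.emt import EMT

slab = fcc111("Pt", size=(3, 3, 4), vacuum=10.0)    # 4-layer Pt(111) slab
ads = molecule("CO")                                 # simple example adsorbate
add_adsorbate(slab, ads, height=2.0, position="ontop")

slab.calc = EMT()
print("total energy (EMT, illustrative only):", slab.get_potential_energy())
```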

Keywords: acetic acid, platinum, nickel, infrared reflection absorption spectroscopy, temperature programmed desorption, density functional theory

Procedia PDF Downloads 111
1851 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to an expressive amount of electromagnetic energy available in the environment and to the expansion of smart-application technologies. These applications have been used in Internet of Things devices and in 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, their use imposes huge challenges in terms of an efficient and reliable power supply that avoids the traditional battery. Radio-frequency energy harvesting is especially suitable for powering wireless sensors by using a rectenna, since it can be completely integrated into the structure hosting the distributed sensors, reducing cost, maintenance and environmental impact. The rectenna is a device composed of an antenna and a rectifier circuit. The antenna function is to collect as much radio-frequency radiation as possible and transfer it to the rectifier, a nonlinear circuit that converts the very low input radio-frequency energy into direct-current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used for the inner coating of buildings while simultaneously harvesting electromagnetic energy from the environment, is proposed. Each individual rectenna is composed of a 2.45 GHz patch antenna and a voltage-doubler rectifier circuit built on the same paper substrate. The antenna contains a rectangular radiator element and a microstrip transmission line that were designed and optimized using CST simulation software in order to obtain S11 values below -10 dB at 2.45 GHz. To increase the amount of harvested power, eight individual rectennas incorporating metamaterial cells were connected in parallel, forming a system denominated the Electromagnetic Wall (EW). To evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results showed that if more than one rectenna is associated in parallel, a power level sufficient to feed very-low-consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides an expressive growth in the amount of electromagnetic energy harvested, which increased from 0.2 mW to 0.6 mW.
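For context, the physical size of a 2.45 GHz rectangular patch follows from the standard transmission-line design equations. The short calculation below is a generic sketch: the relative permittivity and thickness assumed for a paper substrate are illustrative values, not the ones used by the authors or the CST-optimized geometry.

```python
# Generic microstrip patch sizing at 2.45 GHz (transmission-line model).
# The substrate permittivity and thickness below are assumptions for paper,
# not the values used in the paper.
import math

c = 3e8            # speed of light, m/s
f = 2.45e9         # design frequency, Hz
eps_r = 3.2        # assumed relative permittivity of the paper substrate
h = 0.5e-3         # assumed substrate thickness, m

W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))                       # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL                      # patch length

print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm (eps_eff = {eps_eff:.2f})")
```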

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 170
1850 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point

Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee

Abstract:

Human skin, being at a temperature above absolute zero, emits infrared radiation related to the body temperature. Differences in the infrared radiation from the skin surface reflect abnormalities present in the human body. Accordingly, detecting and forecasting the temperature variation of the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature distribution of the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. The intensity of the thermogram measures the inflammation of the skin surface related to pain in the human body. Analysis of thermograms provides automated detection of anomalies associated with suspicious pain regions by following several image processing steps. The paper presents a rigorous survey of the processing and analysis of thermograms, based on previous works published in the area of infrared thermal imaging for detecting inflammatory pain diseases such as arthritis, spondylosis, shoulder impingement, etc. The study also summarizes, in tabular format, the performance of thermogram processing together with thermogram acquisition protocols, thermography camera specifications and the types of pain detected by thermography. The tabular format provides a clear structural overview of the past works. As its major contribution, the paper introduces a new thermogram acquisition standard for inflammatory pain detection in the human body to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis. The survey of previous research highlights that intensity-distribution-based comparison of comparable and symmetric regions of interest, together with their statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
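The symmetric region-of-interest comparison highlighted in the survey can be illustrated with a minimal sketch: extract intensity statistics from two mirrored ROIs of a thermogram array and test their difference. The data and ROI coordinates below are synthetic placeholders, not measurements from the reviewed studies.

```python
# Minimal sketch of symmetric-ROI comparison on a thermogram (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
thermogram = rng.normal(loc=33.0, scale=0.4, size=(240, 320))      # skin temperature map, °C
thermogram[100:140, 200:240] += 1.2                                # simulated inflamed area

left_roi = thermogram[100:140, 80:120].ravel()                     # hypothetical ROI
right_roi = thermogram[100:140, 200:240].ravel()                   # mirrored counterpart

print("mean temperature difference:", right_roi.mean() - left_roi.mean())
t, p = stats.ttest_ind(right_roi, left_roi, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.3g}")                           # asymmetry suggests an anomaly
```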

Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis

Procedia PDF Downloads 345
1849 Application of the Building Information Modeling Planning Approach to the Factory Planning

Authors: Peggy Näser

Abstract:

Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which is dependent on the preceding phase and makes use of particular methods and tools, and extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling (BIM) has not yet been established in factory planning but has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the factory equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the digital factory, are based on the use of a comprehensive data model. It is therefore necessary to examine how the BIM approach can be extended in the context of factory planning in such a way that equipment planning as well as building planning can be integrated in a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, which defines the quality criteria against which the planning will be evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach is developed, in particular for the integrated planning of equipment and buildings and for continuous digital planning. For this purpose, the individual factory planning phases are detailed in the sense of the integration of the BIM approach. A comprehensive software concept for tool support is outlined. In addition, the prerequisites required for this integrated planning are presented. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning improved, data quality increased, and expensive implementation errors avoided.

Keywords: building information modeling, digital factory, digital planning, factory planning

Procedia PDF Downloads 271
1848 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms

Authors: Habtamu Ayenew Asegie

Abstract:

Wealth, as opposed to income or consumption, implies a more stable and permanent status. Natural and human-made difficulties can diminish households' economies and push their well-being into trouble. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. This study therefore aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. Design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA) database. Various data pre-processing techniques were employed, and model training was conducted using the scikit-learn Python library. Model evaluation was carried out using metrics such as accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for the algorithms was selected through the grid search function for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is better suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms have accuracies of 91.53%, 88.44%, and 58.55%, respectively. The findings suggest that features such as ‘Age of household head’, ‘Total children ever born’ in a family, ‘Main roof material’ of the house, ‘Region’ of residence, whether a household uses ‘Electricity’ or not, and ‘Type of toilet facility’ of a household are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved 82.28% in the domain experts' evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
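A hedged sketch of the training and tuning loop described (grid-searched hyper-parameters, evaluation by accuracy, precision, recall, F1 and AUC-ROC) using scikit-learn and a random forest; the dataset, features, and parameter grid below are placeholders rather than the EDHS variables or the authors' settings.

```python
# Sketch of grid-searched random forest training and evaluation (placeholder data,
# not the EDHS dataset or the authors' exact hyper-parameter grid).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report, roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

grid = {"n_estimators": [200, 400], "max_depth": [None, 10], "min_samples_leaf": [1, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=5, scoring="f1")
search.fit(X_tr, y_tr)

best = search.best_estimator_
print("best params:", search.best_params_)
print(classification_report(y_te, best.predict(X_te)))
print("AUC-ROC:", roc_auc_score(y_te, best.predict_proba(X_te)[:, 1]))
```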

Keywords: ensemble machine learning, households wealth status, predictive model, wealth status prediction

Procedia PDF Downloads 44
1847 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers

Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang

Abstract:

Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the major way to find PM disease, which is not only labor intensive but also makes monitoring disease severity almost impossible. To reduce the loss caused by PM disease and achieve faster automatic detection, this paper proposes an approach for detecting the disease based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology of the proposed study is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction and classification. Two strawberry fields were used in this study. Images were acquired of healthy leaves and of leaves infected with PM (Sphaerotheca macularis) under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for the extraction of textural features. Forty textural features, related to physiological parameters of the leaves, were extracted from CCMs of National Television System Committee (NTSC) luminance and hue, saturation and intensity (HSI) images. The normalized feature data were utilized for training and validation, respectively, using the developed classifiers. The classifiers were evaluated with internal, external and cross-validation. The best classifier was selected based on performance and accuracy. Experimental results suggested that the SVM classifier achieved 98.33%, 85.33%, 87.33%, 93.33% and 95.0% accuracy on internal, external-I, external-II, 4-fold cross and 5-fold cross-validation, respectively, whereas the kNN results were 90.0%, 72.00%, 74.66%, 89.33% and 90.3%, respectively. The outcome of this study demonstrated that the SVM classified PM disease with the highest overall accuracy of 91.86% and a processing time of 1.1211 seconds. Therefore, the overall results concluded that the proposed approach can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier.
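As an illustration of the texture-feature step feeding the SVM and kNN classifiers, the sketch below computes grey-level co-occurrence features with scikit-image and cross-validates an SVM. The images and labels are synthetic placeholders; the study itself used colour co-occurrence matrices on NTSC luminance and HSI channels of strawberry leaf images.

```python
# Illustrative co-occurrence texture features and SVM cross-validation
# (synthetic image patches, not the strawberry leaf dataset).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# 40 synthetic "healthy" and 40 "infected" patches with different texture statistics.
healthy = [rng.integers(0, 128, (64, 64), dtype=np.uint8) for _ in range(40)]
infected = [rng.integers(64, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
X = np.array([texture_features(im) for im in healthy + infected])
y = np.array([0] * 40 + [1] * 40)

print("5-fold CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```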

Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors

Procedia PDF Downloads 122
1846 Mathematical Modeling for Continuous Reactive Extrusion of Poly Lactic Acid Formation by Ring Opening Polymerization Considering Metal/Organic Catalyst and Alternative Energies

Authors: Satya P. Dubey, Hrushikesh A Abhyankar, Veronica Marchante, James L. Brighton, Björn Bergmann

Abstract:

Aims: To develop a mathematical model that simulates the ROP of PLA, taking into account the effect of alternative energies, to be implemented in a continuous reactive extrusion production process of PLA. Introduction: The production of large amounts of waste is one of the major challenges at the present time, and polymers represent 70% of global waste. PLA has emerged as a promising polymer as it is a compostable, biodegradable thermoplastic made from renewable sources. However, the main limitation for the application of PLA is the traces of toxic metal catalyst in the final product. Thus, a safe and efficient production process needs to be developed to avoid the potential hazards and toxicity. It has been found that alternative energy sources (LASER, ultrasound, microwaves) could be a prominent option to facilitate the ROP of PLA via continuous reactive extrusion. This process may result in complete removal of the metal catalysts and facilitate the use of less active organic catalysts. Methodology: Initial investigations were performed using the data available in the literature for the reaction mechanism of the ROP of PLA based on the conventional metal catalyst stannous octoate. A mathematical model has been developed by considering significant parameters such as different initial concentration ratios of catalyst, co-catalyst and impurity. The effects of temperature variation and alternative energies have been implemented in the model. Results: The mathematical model has been validated using data from the literature as well as actual experiments. Validation of the model including alternative energies is in progress, based on experimental data from partners of the InnoREX project consortium. Conclusion: The model developed reproduces the polymerisation reaction accurately when alternative energy is applied. Alternative energies have a strong positive effect, increasing the conversion and molecular weight of the PLA. This model could be a very useful tool to complement the Ludovic® software in predicting the large-scale production process when using reactive extrusion.
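A minimal sketch of the kind of kinetic model referred to: coupled rate equations for catalyst activation and monomer consumption in ROP, integrated with SciPy. The two-step mechanism, rate constants, and initial concentrations below are simplified assumptions for illustration, not the stannous-octoate scheme or the parameters validated in the paper.

```python
# Simplified ROP kinetics sketch (assumed two-step scheme and arbitrary rate
# constants, not the authors' validated model): catalyst C + co-catalyst A form
# active species I, which consumes monomer M.
import numpy as np
from scipy.integrate import solve_ivp

k_act, k_p = 0.05, 0.5          # assumed activation and propagation constants, L/(mol*s)

def rhs(t, y):
    C, A, I, M = y
    r_act = k_act * C * A       # activation: C + A -> I
    r_p = k_p * I * M           # propagation: I + M -> I (chain growth)
    return [-r_act, -r_act, r_act, -r_p]

y0 = [0.001, 0.002, 0.0, 1.0]   # illustrative initial concentrations, mol/L
sol = solve_ivp(rhs, (0, 3600), y0, dense_output=True)

t = np.linspace(0, 3600, 7)
conversion = 1 - sol.sol(t)[3] / y0[3]
print(dict(zip(t.astype(int), conversion.round(3))))   # monomer conversion vs. time (s)
```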

Keywords: polymer, poly-lactic acid (PLA), ring opening polymerization (ROP), metal-catalyst, bio-degradable, renewable source, alternative energy (AE)

Procedia PDF Downloads 363
1845 The Impact of the Method of Extraction on 'Chemchali' Olive Oil Composition in Terms of Oxidation Index and Chemical Quality

Authors: Om Kalthoum Sallem, Saidakilani, Kamiliya Ounaissa, Abdelmajid Abid

Abstract:

Introduction and purposes: Olive oil is the main oil used in the Mediterranean diet. Virgin olive oil is valued for its organoleptic and nutritional characteristics and is resistant to oxidation due to its high monounsaturated fatty acid (MUFA) content, low polyunsaturated fatty acid (PUFA) content and the presence of natural antioxidants such as phenols, tocopherols and carotenoids. The fatty acid composition, especially the MUFA content, and the natural antioxidants provide health advantages. The aim of the present study was to examine the impact of the extraction method on the chemical profile of the ‘Chemchali’ olive oil variety, which is cultivated in the city of Gafsa, and to compare it with the ‘Chetoui’ and ‘Chemlali’ varieties. Methods: Our study is a qualitative prospective study of the ‘Chemchali’ olive oil variety. Analyses were conducted during three months (from December to February) in different oil mills in the city of Gafsa. We compared ‘Chemchali’ olive oil obtained by the continuous method to that obtained by the superpress method. We then analyzed quality index parameters, including free fatty acid content (FFA), acidity, and UV spectrophotometric characteristics, as well as other physico-chemical data (oxidative stability, ß-carotene, and chlorophyll pigment composition). Results: Olive oil from the superpress method, compared with the continuous method, is less acidic (0.612 vs. 0.976), less oxidizable (K232: 2.478 vs. 2.592; K270: 0.216 vs. 0.228), richer in oleic acid (61.61% vs. 66.99%), less rich in linoleic acid (13.38% vs. 13.98%), richer in total chlorophyll pigments (6.22 ppm vs. 3.18 ppm) and in ß-carotene (3.128 mg/kg vs. 1.73 mg/kg). ‘Chemchali’ olive oil showed a more equilibrated total fatty acid content compared with the ‘Chemlali’ and ‘Chetoui’ varieties. Gafsa’s ‘Chemchali’ variety has significantly fewer saturated and polyunsaturated fatty acids, whereas it has a higher content of the monounsaturated fatty acid C18:1 (oleic acid), compared with the two other varieties. Conclusion: The use of the superpress method had beneficial effects on the general chemical characteristics of ‘Chemchali’ olive oil, maintaining the highest quality according to the Ecocert legal standards. In light of the results obtained in this study, a more detailed study is required to establish whether the differences in the chemical properties of the oils are mainly due to agronomic and climate variables or to the processing employed in the oil mills.

Keywords: olive oil, extraction method, fatty acids, chemchali olive oil

Procedia PDF Downloads 384
1844 Saco Sweet Cherry from Fundão Region, Portugal: Chemical Profile and Health-Promoting Properties

Authors: Luís R. Silva, Ana C. Gonçalves, Catarina Bento, Fábio Jesus, Branca M. Silva

Abstract:

Prunus avium Linnaeus, better known as sweet cherry, is one of the most appreciated fruits worldwide. In Portugal, most of this fruit is produced in the Fundão region, with Saco being the most widely produced cultivar. Saco is very rich in bioactive compounds, especially phenolics, and presents great antioxidant capacity. The purpose of the present study was to investigate the chemical profile and biological potential, concerning antioxidant and anti-diabetic activity and protective effects towards erythrocytes, of Saco sweet cherry collected from the Fundão region (Portugal). Hydroethanolic extracts were prepared and passed through a C18 solid-phase extraction column. The phenolic profile analyzed by the LC-DAD method allowed the identification of 22 phenolic compounds, 16 non-coloured phenolics and 6 anthocyanins. With respect to the non-coloured phenolics, 3-O-caffeoylquinic and p-coumaroylquinic acids were the main ones. Concerning the anthocyanins, cyanidin-3-O-rutinoside was found in the highest amounts. Regarding biological potential, Saco showed great antioxidant potential in the DPPH and NO radical assays, with IC50 = 16.24 ± 0.46 µg/mL and IC50 = 176.69 ± 3.35 µg/mL for DPPH and NO, respectively. These results were similar to those obtained for the ascorbic acid control (IC50 = 16.92 ± 0.69 and IC50 = 162.66 ± 1.31 μg/mL for DPPH and NO, respectively). With respect to antidiabetic potential, Saco showed the capacity to inhibit α-glucosidase in a dose-dependent manner (IC50 = 10.79 ± 0.40 µg/mL), being much more active than the positive control acarbose (IC50 = 306.66 ± 0.84 μg/mL). Additionally, Saco extracts revealed protective effects against ROO•-mediated toxicity generated by AAPH in human blood erythrocytes, inhibiting hemoglobin oxidation (IC50 = 38.57 ± 0.96 μg/mL) and hemolysis (IC50 = 73.03 ± 1.48 μg/mL) in a concentration-dependent manner. However, Saco extracts were less effective than the quercetin control (IC50 = 3.10 μg/mL and IC50 = 0.7 μg/mL for inhibition of hemoglobin oxidation and hemolysis, respectively). The results obtained show that Saco is an excellent source of phenolic compounds. These are natural antioxidants that readily capture reactive species. This work presents new insights into sweet cherry antioxidant properties, which may be useful for the future development of new therapeutic strategies for preventing or attenuating oxidative-stress-related disorders.
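IC50 values of this kind are normally obtained by fitting a dose-response curve to percent-inhibition data. The sketch below fits a four-parameter logistic model with SciPy to synthetic data, purely to illustrate the calculation; the concentrations and inhibition values are not the cherry-extract measurements.

```python
# Illustrative IC50 estimation: fit a four-parameter logistic (4PL) curve to
# synthetic percent-inhibition data (not the Saco extract measurements).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    # inhibition rises from `bottom` to `top`; x = ic50 gives the midpoint
    return bottom + (top - bottom) * x**hill / (ic50**hill + x**hill)

conc = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)    # µg/mL (synthetic)
inhib = np.array([8, 15, 30, 48, 68, 85, 93], dtype=float)  # % inhibition (synthetic)

popt, _ = curve_fit(four_pl, conc, inhib, p0=[0, 100, 10, 1])
print(f"estimated IC50 ≈ {popt[2]:.1f} µg/mL")
```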

Keywords: antioxidant capacity, health benefits, phenolic compounds, saco

Procedia PDF Downloads 318
1843 Use of AI for the Evaluation of the Effects of Steel Corrosion in Mining Environments

Authors: Maria Luisa de la Torre, Javier Aroba, Jose Miguel Davila, Aguasanta M. Sarmiento

Abstract:

Steel is one of the most widely used materials in polymetallic sulfide mining installations. One of the main problems suffered by these facilities is the economic loss due to the corrosion of this material, which is accelerated and aggravated by contact with the acid waters generated in these mines when sulfides come into contact with oxygen and water. This generation of acidic water is, in turn, accelerated by the presence of acidophilic bacteria. In order to gain a more detailed understanding of this corrosion process and the interaction between steel and acidic water, a laboratory experiment was carried out in which carbon steel plates were immersed for 27 days in four different solutions: distilled water (BK), intended to reproduce the effect of rain on this material; an acid solution of water from a mine with a high Fe²⁺/Fe³⁺ ratio (PO); an acid solution of water from another mine with a high Fe³⁺/Fe²⁺ ratio (PH); and, finally, one that reproduced acid mine water with a high Fe²⁺/Fe³⁺ ratio but contained no bacteria (ST). Every 24 hours, physicochemical parameters were measured and water samples were taken for analysis of the dissolved elements. The results of these measurements were processed using an explainable AI model based on fuzzy logic. In all cases there was an increase in pH, as well as in the concentrations of Fe and, in particular, Fe(II), as a consequence of the oxidation of the steel plates. Proportionally, the increase in Fe concentration was higher in PO and ST than in PH, because Fe precipitates formed in the latter. The rise in Fe(II) was proportionally much higher in PH, especially in the first hours of exposure, because it started from a lower initial concentration of this ion. Although to a lesser extent than in PH, the increase in Fe(II) also occurred faster in PO than in ST, a consequence of the action of the catalytic bacteria. On the other hand, Cu concentrations decreased throughout the experiment (with the exception of distilled water, which initially had no Cu), as a result of an electrochemical process that precipitates Cu together with Fe hydroxides. This decrease is lower in PH because the high total acidity keeps Cu in solution for a longer time. With the application of an artificial intelligence tool, it has been possible to evaluate the effects of steel corrosion in mining environments, corroborating and extending the results obtained by means of classical statistics. Acknowledgments: This work has been supported by MCIU/AEI/10.13039/501100011033/FEDER, UE, through the project PID2021-123130OB-I00.
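The explainable, fuzzy-logic style of reasoning used to interpret such measurements can be illustrated with a small NumPy sketch: triangular membership functions over observed pH and Fe(II) increases and a single rule evaluation. The labels, ranges, and rule below are assumptions for illustration only, not the model built in the study.

```python
# Toy fuzzy-rule sketch (assumed membership ranges and rule, not the study's model):
# rule: IF pH increase is HIGH AND Fe(II) increase is HIGH THEN corrosion is SEVERE.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

ph_rise = 1.4        # observed pH increase over the experiment (example value)
fe2_rise = 250.0     # observed Fe(II) increase in mg/L (example value)

mu_ph_high = tri(ph_rise, 0.5, 2.0, 3.5)       # degree of "HIGH pH increase"
mu_fe_high = tri(fe2_rise, 100, 300, 500)      # degree of "HIGH Fe(II) increase"

severity = min(mu_ph_high, mu_fe_high)         # Mamdani-style AND (minimum)
print(f"degree of 'severe corrosion': {severity:.2f}")
```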

Keywords: carbon steel, corrosion, acid mine drainage, artificial intelligence, fuzzy logic

Procedia PDF Downloads 27
1842 Development of a Multi-User Country Specific Food Composition Table for Malawi

Authors: Averalda van Graan, Joelaine Chetty, Malory Links, Agness Mwangwela, Sitilitha Masangwi, Dalitso Chimwala, Shiban Ghosh, Elizabeth Marino-Costello

Abstract:

Food composition data is becoming increasingly important, as dealing with food insecurity and malnutrition, in its persistent form of under-nutrition, is now coupled with increasing over-nutrition and its related ailments in the developing world, and Malawi is not spared. In the absence of a food composition database (FCDB) inherent to our dietary patterns, efforts were made to develop a country-specific FCDB for nutrition practice, research, and programming. The main objective was to develop a multi-user, country-specific food composition database and table from existing published and unpublished scientific literature. A multi-phased approach guided by the project framework was employed. Phase 1 comprised a scoping mission to assess the nutrition landscape for compilation activities. Phase 2 involved training of a compiler and data collection from various sources, primarily institutional libraries, online databases, and food industry nutrient data. Phase 3 covered the evaluation and compilation of data using FAO and INFOODS standards and guidelines. Phase 4 concluded the process with quality assurance. A total of 316 Malawian food items, categorized into eight food groups, were captured for 42 components. The majority were from the baby food group (27%), followed by the staple (22%) and animal (22%) food groups. Fats and oils comprised the fewest food items (2%), followed by fruits (6%). Proximate values are well represented; however, the percentage of missing data is large for some components, including Se (68%), I (75%), vitamin A (42%), and the lipid profile: saturated fat 53%, monounsaturated fat 59%, polyunsaturated fat 59% and cholesterol 56%. The multi-phased approach following the project framework led to the development of the first Malawian FCDB and table. The table reflects inherent Malawian dietary patterns and nutritional concerns. The FCDB can be used by various professionals in nutrition and health. Rising over-nutrition, non-communicable diseases, and changing diets create demand for nutrient profiles of processed foods and complete lipid profiles.

Keywords: analytical data, dietary pattern, food composition data, multi-phased approach

Procedia PDF Downloads 95
1841 Process Modeling in an Aeronautics Context

Authors: Sophie Lemoussu, Jean-Charles Chaudemar, Robertus A. Vingerhoeds

Abstract:

Many innovative projects exist in the field of aeronautics, each addressing specific areas such as weight reduction, increased autonomy, reduction of CO2, etc. In many cases, such innovative developments are carried out by very small enterprises (VSEs) or small and medium-sized enterprises (SMEs). A good example concerns airships, which are being studied as a real alternative for passenger and cargo transportation. Today, no international regulations propose a precise and sufficiently detailed framework for the development and certification of airships. The absence of such a regulatory framework requires very close contact with regulatory instances. However, VSEs/SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses an additional challenge for those VSEs/SMEs, in particular those that have system integration responsibilities and must provide all the necessary evidence to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The main objective of this research is to provide a methodological framework enabling VSEs/SMEs with limited resources to organize the development of airships while taking into account the constraints of safety, cost, time and performance. This paper contributes to this problem by proposing a Model-Based Systems Engineering approach. Through a comprehensive process modeling approach applied to the development processes, the regulatory constraints, existing best practices, etc., a clear picture can be obtained of the process landscape that may influence the development of airships. To this effect, not only is the necessary regulatory information taken on board, but other international standards and norms on systems engineering and project management are also modeled and taken into account. In a next step, the model can be used to analyse the specific situation of a given development, derive critical paths for the development, identify possible conflicts between the norms, standards, and regulatory expectations, or identify those areas where not enough information is available. Once the critical paths are known, optimization approaches can be used and decision support techniques can be applied so as to better support VSEs/SMEs in their innovative developments. This paper reports on the adopted modeling approach, the retained modeling languages, and how they all fit together.

Keywords: aeronautics, certification, process modeling, project management, regulation, SME, systems engineering, VSE

Procedia PDF Downloads 164
1840 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor

Authors: Sourabh Jain, S. S. Jain

Abstract:

Intelligent transportation systems (ITS) apply technologies to develop user-friendly transportation and to improve the safety and efficiency of urban transportation systems in developing countries. These systems involve vehicles, drivers, passengers, road operators and managers of transport services, all interacting with each other and with the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computing, display, and control process technologies, both in the vehicle and on the road. Intelligent transportation systems are a product of the revolution in information and communications technologies that is the hallmark of the digital age. Basic ITS technology is oriented along three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, an attempt has been made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of six-lane as well as eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, stopwatch, radar gun, and mobile GPS (GPS Tracker Lite). From the analysis, the performance interpretations included the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting the speed contours. The paper proposes urban corridor management strategies based on sensors integrated into both vehicles and roads; these strategies have to be efficiently executable, cost-effective, and familiar to road users. They will be useful in reducing congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to users.
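The corridor performance measures mentioned (spot speed, delay, congestion at mid-block sections) reduce to simple computations on the field data. The sketch below shows space-mean speed and delay relative to free-flow travel time for one hypothetical mid-block section; the observations and free-flow speed are assumed values, not the study corridor's data.

```python
# Corridor performance sketch for one mid-block section (hypothetical observations).
import numpy as np

length_km = 1.2                                         # section length
travel_times_min = np.array([2.4, 2.9, 3.5, 3.1, 2.7])  # observed runs, minutes
free_flow_speed_kmh = 50.0                              # assumed free-flow speed

space_mean_speed = length_km / (travel_times_min.mean() / 60.0)   # km/h
free_flow_time_min = length_km / free_flow_speed_kmh * 60.0
delay_min = travel_times_min.mean() - free_flow_time_min          # average delay per run

print(f"space-mean speed ≈ {space_mean_speed:.1f} km/h, "
      f"average delay ≈ {delay_min:.2f} min over {length_km} km")
```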

Keywords: ITS strategies, congestion, planning, mobility, safety

Procedia PDF Downloads 180
1839 Lactate Biostimulation for Remediation of Aquifers Affected by Recalcitrant Sources of Chloromethanes

Authors: Diana Puigserver Cuerda, Jofre Herrero Ferran, José M. Carmona Perez

Abstract:

In the transition zone between aquifers and basal aquitards, DNAPL pools of chlorinated solvents are more recalcitrant than at other depths in the aquifer. Although degradation of carbon tetrachloride (CT) and chloroform (CF) occurs in this zone, it is a slow process, which is why an adequate remediation strategy is necessary. The working hypothesis of this study is that biostimulation of the transition zone of an aquifer contaminated by CT and CF can be an effective remediation strategy. This hypothesis was tested at a site on an unconfined aquifer in which the major contaminants were CT and CF of industrial origin and where the hydrochemical background was rich in other compounds that can hinder the natural attenuation of chloromethanes. Field studies and five laboratory microcosm experiments were carried out on groundwater and sediments to identify: i) the degradation processes of CT and CF; ii) the structure of the microbial communities; and iii) the microorganisms implicated in this degradation. For this, concentrations of contaminants and co-contaminants (nitrate and sulfate), Compound Specific Isotope Analysis, molecular techniques (Denaturing Gradient Gel Electrophoresis) and clone library analysis were used. The main results were: i) degradation of CT and CF occurred in groundwater and in the less conductive sediments; ii) sulfate-reducing conditions in the transition zone were strong and similar to those in the source of contamination; iii) two microorganisms (Azospira suillum and a bacterium of the order Clostridiales) compatible with the role of carrying out the reductive dechlorination of CT, CF and their degradation products (dichloromethane and chloromethane) were identified in the transition zone in the field and lab experiments; iv) these two microorganisms were present at the high starting concentrations of the microcosm experiments (similar to those in the DNAPL source) and remained present until the last day of the lactate biostimulation; and v) the lactate biostimulation gave rise to the fastest and highest degradation rates and promoted the elimination of other electron acceptors (e.g., nitrate and sulfate). All these results are evidence that lactate biostimulation can be effective in remediating the source and plume, especially in the transition zone, and highlight the environmental relevance of treating contaminated transition zones in industrial contexts similar to the one studied.
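Compound Specific Isotope Analysis results of this kind are commonly interpreted with the Rayleigh equation to estimate the extent of biodegradation. The short calculation below is a generic sketch with assumed δ13C values and an assumed enrichment factor; it is not the site's measured data.

```python
# Generic Rayleigh-equation estimate of biodegradation extent from CSIA data
# (assumed delta values and enrichment factor, not the field measurements).
import math

delta0 = -30.0   # initial δ13C of CT in the source (permil), assumed
delta_t = -24.0  # δ13C measured downgradient (permil), assumed
epsilon = -15.0  # assumed carbon enrichment factor for CT dechlorination (permil)

# Rayleigh: ln[(δt + 1000)/(δ0 + 1000)] = (ε/1000) * ln f, with f the remaining fraction
f = math.exp(math.log((delta_t + 1000.0) / (delta0 + 1000.0)) * 1000.0 / epsilon)
print(f"estimated fraction remaining f ≈ {f:.2f}, biodegradation ≈ {(1 - f) * 100:.0f}%")
```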

Keywords: Azospira suillum, lactate biostimulation of carbon tetrachloride and chloroform, reductive dechlorination, transition zone between aquifer and aquitard

Procedia PDF Downloads 178
1838 Prevalence of Hepatitis B Virus Infection and Its Determinants among Pregnant Women in East Africa: Systematic Review and Meta-Analysis

Authors: Bantie Getnet Yirsaw, Muluken Chanie Agimas, Gebrie Getu Alemu, Tigabu Kidie Tesfie, Nebiyu Mekonnen Derseh, Habtamu Wagnew Abuhay, Meron Asmamaw Alemayehu, Getaneh Awoke Yismaw

Abstract:

Introduction: Hepatitis B virus (HBV) is one of the major public health problems globally and needs an urgent response. It is one of the most important causes of mortality among the five hepatitis viruses, and it affects almost every class of individuals. Thus, the main objective of this study was to determine the pooled prevalence of HBV infection and its determinants among pregnant women in East Africa. Methods: We searched for studies using PubMed, Scopus, Embase, ScienceDirect, Google Scholar, and grey literature published between January 1, 2020 and January 30, 2024. The studies were assessed using the Newcastle-Ottawa Scale (NOS) quality assessment tool. A random-effects (DerSimonian and Laird) model was used to determine the pooled prevalence and the associated factors of HBV among pregnant women. Heterogeneity was assessed by the I² statistic, sub-group analysis, and sensitivity analysis. Publication bias was assessed by the Egger test, and the analysis was done using STATA version 17. Results: A total of 45 studies with 35,639 pregnant women were included in this systematic review and meta-analysis. The overall pooled prevalence of HBV among pregnant women in East Africa was 6.0% (95% CI: 6.0%-7.0%, I² = 89.7%). The highest prevalence, of 8% (95% CI: 6%, 10%; I² = 91.08%), was seen in 2021, and the lowest prevalence, of 5% (95% CI: 4%, 6%; I² = 52.52%), was observed in 2022. The pooled meta-analysis showed that history of a surgical procedure (OR = 2.14 (95% CI: 1.27, 3.61)), having multiple sexual partners (OR = 3.87 (95% CI: 2.52, 5.95)), history of body tattooing (OR = 2.55 (95% CI: 1.62, 4.01)), history of tooth extraction (OR = 2.09 (95% CI: 1.29, 3.39)), history of abortion (OR = 2.20 (95% CI: 1.38, 3.50)), history of sharing sharp material (OR = 1.88 (95% CI: 1.07, 3.31)), blood transfusion (OR = 2.41 (95% CI: 1.62, 3.57)), family history of HBV (OR = 4.87 (95% CI: 2.95, 8.05)) and history of needle injury (OR = 2.62 (95% CI: 1.20, 5.72)) were significant risk factors associated with HBV infection among pregnant women. Conclusions: The pooled prevalence of HBV infection among pregnant women in East Africa was at an intermediate level and differed across countries, ranging from 1.5% to 22.2%. This pooled prevalence indicates the need for screening, prevention, and control of HBV infection among pregnant women in the region. Therefore, early identification of risk factors, awareness creation on the mode of transmission of HBV, and implementation of preventive measures are essential in reducing the burden of HBV infection among pregnant women.
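For illustration, random-effects (DerSimonian and Laird) pooling of prevalences can be reproduced in a few lines on logit-transformed proportions. The study-level counts below are made-up placeholders, not the 45 studies included in the review.

```python
# DerSimonian-Laird random-effects pooling of prevalences (illustrative made-up
# study counts, not the 45 studies included in the review).
import numpy as np

cases  = np.array([12, 30, 8, 45, 20])        # HBsAg-positive pregnant women
totals = np.array([300, 500, 150, 900, 400])  # women screened per study

p = cases / totals
y = np.log(p / (1 - p))                       # logit-transformed prevalence
v = 1 / cases + 1 / (totals - cases)          # approximate variance of the logit

w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1 / (v + tau2)                       # random-effects weights
y_re = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
pooled = 1 / (1 + np.exp(-y_re))              # back-transform to a proportion
ci = 1 / (1 + np.exp(-(y_re + np.array([-1.96, 1.96]) * se)))
I2 = max(0.0, (Q - (k - 1)) / Q) * 100

print(f"pooled prevalence = {pooled:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f}), I² = {I2:.0f}%")
```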

Keywords: hepatitis B virus, prevalence, determinants, pregnant women, meta-analysis, East Africa

Procedia PDF Downloads 47
1837 Science and Mathematics Instructional Strategies, Teaching Performance and Academic Achievement in Selected Secondary Schools in Upland

Authors: Maria Belen C. Costa, Liza C. Costa

Abstract:

Teachers have an important influence on students’ academic achievement. They play a crucial role in educational attainment because they stand at the interface of the transmission of knowledge, values, and skills in the learning process through the instructional strategies they employ in the classroom. The level of achievement of students in school depends on the degree of effectiveness of the instructional strategies used by the teacher. Thus, this study was conceptualized and conducted to examine the instructional strategies preferred and used by Science and Mathematics teachers and the impact of those strategies on their teaching performance and on students’ academic achievement in Science and Mathematics. The participants comprised 61 teachers, chosen through total enumeration, and 610 students, selected using a two-stage random sampling technique. A descriptive correlational design was used, with a self-made questionnaire as the main data-gathering tool. Relationships among variables were tested and analyzed using the Spearman rank correlation coefficient and the Wilcoxon signed-rank statistic. The teacher participants mainly belonged to the 'young' age group (35 years and below) and most were females who were 'very much experienced' (16 years and above) in teaching. Teaching performance was found to be 'very satisfactory', while academic achievement in Science and Mathematics was found to be 'satisfactory'. The demographic profile and teaching performance of the teacher participants were not significantly related to their instructional strategy preferences, implying that age, sex, level of education and length of service do not affect teachers' preference for a particular instructional strategy. However, the teacher participants' extent of use of the different instructional strategies was significantly related to their teaching performance: the instructional strategies used by the teachers had a direct effect on their teaching performance. The academic achievement of the student participants was significantly related to the teacher participants' instructional strategy preferences; the teachers' preferences had a significant effect on the students' academic performance. On the other hand, the teacher participants' extent of use of instructional strategies was not significantly related to the academic achievement of students in Science and Mathematics; the instructional strategy used by the teachers did not affect the level of student performance. The results also revealed significant differences between the teacher participants' instructional strategy preferences and those of the student participants, as well as between the teacher participants' extent of use and the student participants' perceived level of use of the different instructional strategies. The findings thus indicate a discrepancy between the teaching strategy preferences of students and the strategies implemented by teachers.
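The two statistics named (Spearman rank correlation and the Wilcoxon signed-rank test) are illustrated below with SciPy on synthetic scores; the variables are placeholders for the questionnaire measures, not the study's data.

```python
# Illustrative use of the two statistics named in the abstract (synthetic scores,
# not the questionnaire data).
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(0)
extent_of_use = rng.integers(1, 6, size=61)                    # teachers' 5-point ratings
performance = extent_of_use + rng.normal(0, 1.2, size=61)      # related outcome measure

rho, p_rho = spearmanr(extent_of_use, performance)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3g}")

teacher_pref = rng.integers(1, 6, size=61)                     # paired preference ratings
student_pref = np.clip(teacher_pref + rng.integers(-2, 3, size=61), 1, 5)
stat, p_w = wilcoxon(teacher_pref, student_pref)
print(f"Wilcoxon signed-rank W = {stat:.1f}, p = {p_w:.3g}")
```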

Keywords: academic achievement, extent of use, instructional strategy, preferences

Procedia PDF Downloads 313
1836 Effects of Mental Skill Training Programme on Direct Free Kick of Grassroot Footballers in Lagos, Nigeria

Authors: Mayowa Adeyeye, Kehinde Adeyemo

Abstract:

The direct free kick is considered a great opportunity to score a goal, but this is not always the case among Nigerian and other elite footballers. This study therefore examined the extent to which an 8-week mental skills training programme is effective for improving accuracy in the direct free kick in football. Sixty (n = 60) students of the Pepsi Football Academy participated in the study. They were randomly distributed into a positive self-talk group (intervention, n = 30) and a control group (n = 30). The instruments used in the collection of data included a standard football goal post, while the research materials included a dummy soccer wall, a cord, an improvised vanishing spray, a clipboard, writing materials, a recording sheet, a self-talk log book, six standard size-5 footballs, cones, an audiotape and a compact disc. The Weinberg and Gould (2011) mental skills training manual was used. The reliability coefficient of the apparatus following a pilot study stood at 0.72. Before the commencement of the mental skills training programme, the participants were asked to take six simulated direct free kicks. At the end of each physical skills training session after the pre-test, the researcher spent at least 15 minutes with the groups exposing them to the intervention. The mental skills training programme, alongside physical skills training, took place in two different locations for the two groups under study: Agege Stadium main bowl football pitch (intervention group) and Ogba Ijaye (control group). The mental skills training programme lasted for eight weeks. After its completion, all the participants were asked to take another six simulated direct free kick attempts on the same field used for the pre-test to determine the efficacy of the treatment. The pre-test and post-test data were analysed using the inferential t-test statistic, with the alpha level set at 0.05. The results revealed significant differences between the positive self-talk and control groups. Based on the findings, it is recommended that athletes be exposed to positive self-talk alongside their normal physical skills training for quality delivery of accurate direct free kicks during training and competition.
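The pre-test/post-test comparison analysed with a t-test at α = 0.05 can be sketched as follows with SciPy; the free-kick scores out of six attempts below are synthetic, not the academy data.

```python
# Sketch of the pre/post t-test at alpha = 0.05 (synthetic free-kick scores,
# not the Pepsi Football Academy data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
pre = rng.binomial(6, 0.3, size=30)                         # goals from 6 attempts, pre-test
post = np.clip(pre + rng.binomial(2, 0.6, size=30), 0, 6)   # post-test after self-talk

t, p = ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```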

Keywords: accuracy, direct free kick, pepsi football academy, positive self-talk

Procedia PDF Downloads 350
1835 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time of origin up to a certain failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies the question of causality. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. Causality often differs from the simple association between the response variable and predictors. Causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of the semiparametric transformation models have the advantage of including the time-varying effect in the model. Finally, the finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time is estimated after adjusting for biases arising from the high correlation between left-truncation and the possibly time-varying covariates. The bias in the covariates is corrected by estimating a density function for the left-truncation. In addition, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and unspecified monotone transformation functions. In summary, the ratio of the cumulative hazard functions between the treated and untreated groups serves as the average causal effect for the entire population.
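As context for the final point, the ratio of cumulative hazards between arms can be estimated nonparametrically with Nelson-Aalen estimators. The sketch below does this with NumPy on simulated right-censored data; it does not implement the paper's modified estimating equations or its EM algorithm.

```python
# Nelson-Aalen cumulative hazards per arm and their ratio at a fixed time
# (simulated data; this is not the paper's semiparametric transformation model).
import numpy as np

def nelson_aalen(time, event, t_eval):
    """Cumulative hazard H(t_eval) = sum over event times <= t_eval of d_i / n_i."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    H, n = 0.0, len(time)
    for i, (t, d) in enumerate(zip(time, event)):
        if t > t_eval:
            break
        if d:                      # event (not censored)
            H += 1.0 / (n - i)     # 1 / number still at risk
    return H

rng = np.random.default_rng(0)
t_treat = rng.exponential(12, 200); c_treat = rng.uniform(0, 20, 200)
t_ctrl = rng.exponential(8, 200);  c_ctrl = rng.uniform(0, 20, 200)
obs_tr, ev_tr = np.minimum(t_treat, c_treat), (t_treat <= c_treat).astype(int)
obs_ct, ev_ct = np.minimum(t_ctrl, c_ctrl), (t_ctrl <= c_ctrl).astype(int)

ratio = nelson_aalen(obs_tr, ev_tr, 10.0) / nelson_aalen(obs_ct, ev_ct, 10.0)
print(f"cumulative-hazard ratio at t = 10: {ratio:.2f}")
```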

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 179
1834 Enhancing the Quality of Silage Bales Produced by a Commercial Scale Silage Producer in Northern province, Sri Lanka: A Step Toward Supporting Smallholder Dairy Farmers in the Northern Province Sri Lanka

Authors: Harithas Aruchchunan

Abstract:

Silage production is an essential aspect of dairy farming, used to provide high-quality feed to ruminants. However, dairy farmers in Northern Province, Sri Lanka are facing multiple challenges that compromise the quality and quantity of silage produced. To tackle these challenges, promoting silage feeding has become an essential component of sustainable dairy farming practices. In this study, silage bale samples were collected from a newly started silage baling factory in Jaffna, Northern Province, and their quality was analysed at the Veterinary Research Institute laboratory in Kandy in March 2023. The results show the nutritional composition of three Napier grass cultivars: Super Napier, CO6, and Indian Red Napier (BH18). The main parameters analysed were dry matter, pH, lactic acid, soluble carbohydrate, ammonia nitrogen, ash, crude protein, NDF, and ADF. The results indicate that Super Napier and CO6 have higher crude protein content and lower ADF levels, making them suitable for producing high-quality silage. The pH levels of all three cultivars were safe, and the ammonia nitrogen levels were considered appropriate. However, the laboratory results indicate that the quality of the silage bales produced can be further enhanced. Dairy farmers should be encouraged to adopt these cultivars to achieve better yields, as they are high in protein and better suited to Northern Province's soil and climate. Therefore, it is vital to educate small-scale fodder producers, who supply the raw material to silage factories, on the best practices for cultivating these new cultivars. To improve silage bale production and quality in Northern Province, Sri Lanka, we recommend increasing public awareness about silage feeding, providing education and training to dairy farmers and small-scale fodder producers on modern silage production techniques, and improving the availability of raw materials for silage production. Additionally, Napier grass cultivars need to be promoted among dairy farmers for better production and quality of silage bales. Failing to improve the quality and quantity of silage bale production could not only lead to the decline of dairy farming in Northern Province, Sri Lanka, but also negatively impact the economy.

Keywords: silage bales, dairy farming, economic crisis, Sri Lanka

Procedia PDF Downloads 94
1833 A Review of Type 2 Diabetes and Diabetes-Related Cardiovascular Disease in Zambia

Authors: Mwenya Mubanga, Sula Mazimba

Abstract:

Background: In Zambia, much of the focus on nutrition and health has been on reducing micronutrient deficiencies, wasting and underweight malnutrition, and not on the rising global trends in obesity and type 2 diabetes. The aim of this review was to identify and collate studies on the prevalence of obesity, diabetes and diabetes-related cardiovascular disease conducted in Zambia, to summarize their findings, and to identify areas that need further research. Methods: The Medical Literature Analysis and Retrieval System (MEDLINE) database was searched for peer-reviewed articles on the prevalence of, and factors associated with, obesity, type 2 diabetes, and diabetes-related cardiovascular disease amongst Zambian residents using a combination of search terms. The period of the search was from 1 January 2000 to 31 December 2016. We expanded the search terms to include all possible synonyms and spellings in the search strategy. Additionally, we performed a manual search for other articles and for the references of peer-reviewed articles. Results: In Zambia, the current prevalence of obesity and type 2 diabetes is estimated at 13-16% and 2.0-3.0%, respectively. Risk factors such as the adoption of Western dietary habits, the social stigmatization associated with rapid weight loss due to tuberculosis and/or human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), and rapid urbanization have all been blamed for fueling the increased risk of obesity and type 2 diabetes. However, unlike traditional Western populations, those with no formal education were less likely to be obese than those who attained secondary or tertiary level education. Approximately 30% of those surveyed were unaware of their diabetes diagnosis, and more than 60% were not on treatment despite a known diabetic status. Socio-demographic factors such as older age, female sex, urban dwelling, lack of tobacco use and marital status were associated with an increased risk of obesity, impaired glucose tolerance and type 2 diabetes. We were unable to identify studies that specifically examined diabetes-related cardiovascular disease. Conclusion: Although the prevalence of obesity and type 2 diabetes in Zambia appears low, more representative studies focusing on parts of the country outside the main industrial zone need to be conducted. Research on diabetes-related cardiovascular disease is also needed. National surveillance, monitoring and evaluation of all non-communicable diseases need to be prioritized, and policies that address underweight, obesity and type 2 diabetes developed.
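
As a rough illustration of the search strategy described in the Methods, the snippet below assembles a boolean query of the kind that could be submitted to MEDLINE via PubMed; the term lists are hypothetical stand-ins for the authors' actual terms, and the date filter uses standard PubMed year-range syntax.

```python
# Build a boolean MEDLINE/PubMed-style query combining condition and population terms.
# The specific synonyms listed here are hypothetical, not the authors' exact strategy.
condition_terms = ["obesity", "type 2 diabetes", "diabetes mellitus, type 2",
                   "cardiovascular disease", "cardiovascular diseases"]
population_terms = ["Zambia", "Zambian"]

def or_block(terms):
    """Join a list of synonyms into a single OR-ed, quoted block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{or_block(condition_terms)} AND {or_block(population_terms)}"
query += " AND 2000:2016[dp]"   # restrict by publication date range
print(query)
```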

Keywords: type 2 diabetes, Zambia, obesity, cardiovascular disease

Procedia PDF Downloads 256
1832 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual-energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual-energy scanning (dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations of 5.0 mg/ml, 7 mg/ml, 10 mg/ml and 15 mg/ml. The vials were scanned using the dual-energy scan mode on a Siemens Somatom Force at 80 kV/Sn150 kV and 100 kV/Sn150 kV kilovoltage pairings. The same vials were scanned using the spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the dual-energy data and Philips Intellispace Portal (Ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual-layer CT scanner. The dual-source images showed a greater degree of deviation in measured iodine density for all vials, although the dataset acquired at 80 kV/Sn150 kV was the more accurate of the two. Conclusion: Spectral CT scanning by the dual-layer technique has higher accuracy for quantitative measurement of iodine density than the dual-source technique.
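
To make the accuracy comparison concrete, the sketch below rebuilds the known-concentration series from the increments listed above and scores a set of measured values against it using mean error and Pearson correlation; the "measured" readings are hypothetical placeholders, and the study itself used intraclass correlation rather than these simpler metrics.

```python
# Rebuild the dilution series described in the Methods and score hypothetical
# ROI measurements against the known concentrations.
import numpy as np
from scipy.stats import pearsonr

known = np.concatenate([
    np.arange(0.1, 1.01, 0.1),    # 0.1-1.0 mg/ml in 0.1 mg/ml steps
    np.arange(1.5, 4.51, 0.5),    # 1.5-4.5 mg/ml in 0.5 mg/ml steps
    [5.0, 7.0, 10.0, 15.0],       # additional higher concentrations
])

rng = np.random.default_rng(1)
measured = known + rng.normal(0, 0.15, known.size)   # hypothetical scanner readings

mae = np.mean(np.abs(measured - known))
bias = np.mean(measured - known)
r, _ = pearsonr(known, measured)
print(f"MAE = {mae:.3f} mg/ml, bias = {bias:+.3f} mg/ml, Pearson r = {r:.4f}")
```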

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 123
1831 Literary Theatre and Embodied Theatre: A Practice-Based Research in Exploring the Authorship of a Performance

Authors: Rahul Bishnoi

Abstract:

Theatre, as Anne Ubersfeld calls it, is a paradox. At once, it is both a literary work and a physical representation. Theatre as a text is eternal, reproducible, and identical, while as a performance, theatre is momentary and never identical to previous performances. In this dual existence of theatre, who is the author? Is the author the playwright who writes the dramatic text, the director who orchestrates the performance, or the actor who embodies the text? From the poststructuralist lens of Barthes, the author is dead. Barthes' argument of discrete temporality, i.e., the author is the before and the text is the after, does not hold true for theatre. A published literary work is written, edited, printed, distributed and then consumed by the reader. On the other hand, theatrical production is immediate; an actor performs and the audience witnesses it instantaneously. Time, so to speak, no longer separates the author, the text, and the reader. The question of authorship gets further complicated in Augusto Boal's "Theatre of the Oppressed" movement, where the audience is a direct participant like the actors in the performance. In this research, through an experimental performance, the duality of theatre is explored alongside the authorship discourse, and the conventional definition of authorship is subjected to additional complexity by erasing the distinction between actor and audience. The design/methodology of the experimental performance is as follows: the audience will be asked to produce a text under an anonymous virtual alias. The text, as it is being produced, will be read and performed by the actor. The audience, who are also collectively "authoring" the text, will watch this performance and write further until everyone has contributed one input each. The cycle of writing, reading, performing, witnessing, and writing will continue until the end. The intention is to create a dynamic system of writing/reading with the embodiment of the text through the actor. The actor gives up the power to the audience to write the spoken word, stage instructions and direction, while keeping the agency of interpreting that input and performing it in the chosen manner. This rapid conversation between the actor and the audience also creates a conversion of authorship. The main conclusion of this study is a perspective on the nature of the dynamic authorship of theatre, containing a critical enquiry into the collaboratively produced text, the individually performed act, and the collectively witnessed event. Using practice as a methodology, this paper contests the poststructuralist notion of the author as merely a 'scriptor' and breaks it further by involving the audience in the authorship as well.

Keywords: practice based research, performance studies, post-humanism, Avant-garde art, theatre

Procedia PDF Downloads 111
1830 Constructed Wetlands with Subsurface Flow for Nitrogen and Metazachlor Removal from Tile Drainage: First Year Results

Authors: P. Fucik, J. Vymazal, M. Seres

Abstract:

Pollution from agricultural drainage is a severe issue for water quality and a major reason for failure to achieve 'good chemical status' under the Water Framework Directive, especially due to the high nitrogen and pesticide burden of receiving waters. Constructed wetlands were proposed as a suitable measure for removal of nitrogen from agricultural drainage in the early 1990s. Until now, the vast majority of constructed wetlands designed to treat tile drainage have been free-water-surface constructed wetlands. In 2018, three small experimental constructed wetlands with horizontal subsurface flow were built in the Czech Highlands to treat tile drainage from a 15.73 ha watershed. The wetlands have surface areas of 79, 90 and 98 m² and were planted with Phalaris arundinacea and Glyceria maxima in parallel bands. The substrate in the first two wetlands is gravel (4-8 mm) mixed with birch woodchips (10:1 volume ratio). In one of these wetlands, the water level is kept 10 cm above the surface; in the second, the water is kept below the surface. The third wetland has a 20 cm layer of birch woodchips on top of the gravel. The drainage outlet, as well as the wetland outlets, are equipped with automatic discharge-gauging devices, temperature probes and automatic water samplers (Teledyne ISCO). During the monitored period (2018-2019), flows were unexpectedly low due to a drop in the shallow groundwater level, the main source of water for the monitored drainage system, as experienced in many areas of the Czech Republic. The mean water residence time in the wetlands, determined with a KBr tracer, was 16, 9 and 27 days, respectively. The mean total nitrogen concentration removals over the one-year period were 61.2%, 62.6%, and 70.9% for wetlands 1, 2, and 3, respectively. The average load removals amounted to 0.516, 0.323, and 0.399 g N m-2 d-1, or 1885, 1180 and 1457 kg ha-1 yr-1, in wetlands 1, 2 and 3, respectively. Plant uptake and nitrogen sequestration in aboveground biomass contributed only marginally to the overall nitrogen removal. Among the three variants, the one with shallow water on the surface was revealed to be the most effective for removal of nitrogen from drainage water. In August 2019, the herbicide metazachlor was experimentally dosed over a period of 2 hours at the drainage outlet at a concentration of 250 ug/l to determine the removal rates of the aforementioned wetlands. Water samples were taken every six hours on the first day, and one water sample was taken each day for the next nine days. The removal rates were 94, 69 and 99%, respectively; the most effective wetland was the one with the longest water residence time and the birch woodchip layer on top of the gravel.
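
A quick back-of-the-envelope check of the load-removal units quoted above can be scripted as follows; it converts the areal rates from g N m-2 d-1 to kg ha-1 yr-1 (multiply by 10,000 m² per ha and 365 days per year, divide by 1,000 g per kg).

```python
# Convert the reported areal nitrogen removal rates to annual per-hectare loads.
daily_rates = {"wetland 1": 0.516, "wetland 2": 0.323, "wetland 3": 0.399}  # g N m-2 d-1

for name, rate in daily_rates.items():
    annual = rate * 10_000 * 365 / 1_000   # kg N ha-1 yr-1
    print(f"{name}: {rate} g N m-2 d-1 -> {annual:.0f} kg ha-1 yr-1")

# Gives roughly 1883, 1179 and 1456 kg ha-1 yr-1, in line with the reported
# 1885, 1180 and 1457; the small differences reflect rounding of the daily rates.
```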

Keywords: constructed wetlands, metazachlor, nitrogen, tile drainage

Procedia PDF Downloads 152
1829 Classification of Emotions in Emergency Call Center Conversations

Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko

Abstract:

The study of emotions expressed in emergency phone calls is presented, covering both statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions. They influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and mentally exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy) and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. As we concluded, basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior of callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistaken or neutral request for information (calm, surprise, sometimes with amusement) and pretension/insisting (anger, fury). The frequencies of these profiles were 51%, 21%, 18% and 8% of recordings, respectively. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters, as well as Mel-Frequency Cepstral Coefficients (MFCCs), was used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees and discriminant analysis. Results of classification with several methods will be presented and compared with state-of-the-art results obtained for the classification of basic emotions. Future work will include optimization of the algorithm to run in real time in order to track changes of emotions during a conversation.
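
As a rough sketch of the acoustic pipeline named above (MFCC features plus Gaussian mixture modelling), the following minimal example trains one GMM per emotion class and classifies a recording by average log-likelihood; the file names, labels and parameter choices are hypothetical, and the study's prosodic features, decision trees and discriminant analysis are not reproduced here.

```python
# Minimal MFCC + GMM emotion classifier sketch; paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # frames x coefficients

# Hypothetical training sets: {emotion label: list of wav paths}
train = {"alarm": ["alarm_01.wav"], "calm": ["calm_01.wav"]}

models = {}
for label, paths in train.items():
    frames = np.vstack([mfcc_features(p) for p in paths])
    models[label] = GaussianMixture(n_components=8, covariance_type="diag",
                                    random_state=0).fit(frames)

def classify(path):
    frames = mfcc_features(path)
    scores = {label: gmm.score(frames) for label, gmm in models.items()}
    return max(scores, key=scores.get)   # highest average log-likelihood wins

print(classify("unknown_call.wav"))
```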

Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning

Procedia PDF Downloads 400
1828 Reformed Land: Extent of Use and Contribution to Livelihoods in the Waterberg District

Authors: A. J. Netshipale, M. L. Mashiloane, S. J. Oosting, I. J. M. De Boer, E. N. Raidimi

Abstract:

The three-tier land reform programme (land restitution, land redistribution and land tenure reform) has been implemented over the past two decades in South Africa with the aim of redressing the unjust land ownership patterns of the past. Land restitution and redistribution sought to make land available for beneficiaries' ownership based on policy guidelines. Attention given to the two sub-programmes focused mostly on land reform itself, with the quantity of land that changed ownership used as the measure of success and little regard for how the land is used by beneficiaries for their livelihoods. In the few cases where land use was assessed for the two sub-programmes, the assessment was done on a case-by-case basis or for a few selected cases. The current study intended to shed light on a broader scope. This study investigated the extent to which land reform farms were used and the contribution made by the farms to the livelihoods of active beneficiaries. Seventy-six farms representing the restitution (16 farms) and redistribution (60 farms) programmes were selected for the land use investigation. Land use data were collected from farm representatives by means of a semi-structured questionnaire. A stratified sample of 87 households (38 for restitution and 49 for redistribution) was selected for the livelihood investigation. Data on income-generating activities and passive income sources were collected from household heads using a semi-structured questionnaire. Additional data were collected through focus group discussions and from stakeholders through key-informant interviews. Under the land redistribution programme, an average of 77% of total farm land was in use, with livestock production accounting for the largest share (45% on average). Land restitution transformed crop farms into mixed farms and brought unused farms into use, while land redistribution converted conservation land into agricultural land and likewise brought unused farms into use. Livestock production contributed on average 25% to the livelihoods of 48% of the households, whereas crop production contributed on average 31% to the livelihoods of 67% of the households. Government grants had the highest contribution, 54% on average, and contributed to the most households (72%). Agriculture was the sole source of livelihood for only three per cent of the households. The largest group of households (40%) had a mix of three livelihood sources as their livelihood strategy. It could be concluded that the use of reformed land is mainly influenced by the agro-ecological conditions of the area, and that agriculture could not be the main source of livelihoods for households that benefited from land reform. Land reform policies which accommodate diverse livelihood activities could contribute to sustainable livelihoods.

Keywords: active beneficiaries, households, land reform, land use, livelihoods

Procedia PDF Downloads 199
1827 Emerging VC Industry and the Important Role of Marketing Expectations in Project Selection: Evidence on Russian Data

Authors: I. Rodionov, A. Semenov, E. Gosteva, O. Sokolova

Abstract:

Currently, venture capital is becoming a more and more advanced and effective source of financing for innovation projects, which are associated with a high level of risk. In developed countries, it plays a key role in transforming innovation projects into successful businesses and in creating the prosperity of the modern economy. In Russia, many of the necessary preconditions for creating an effective venture investment system exist: a network of public institutes for innovation financing operates, and there is a significant number of small and medium-sized enterprises capable of selling products with good market potential. However, in practice, the current system has not demonstrated the necessary level of efficiency, which can be substantially explained by the absence of a clear plan of action for forming the national venture model and by the lack of experience with successful venture deals and profitable exits in the Russian economy. This paper studies the influence of various factors on the development of the venture industry using the example of the IT sector in Russia. The choice of sector is based on the fact that this segment is the main driver of venture capital market growth in Russia and that the necessary data are available. The size of the second-round investment is used as the dependent variable. To analyse the influence of the previous round, the volume of the first-round investment is used as a determinant. A dummy variable is also included in the regression to examine whether the participation of an investor with a high reputation and experience in the previous round influences the size of the next investment round. The regression analysis of short-term interrelations between the studied variables reveals the prevailing influence of the volume of first-round investment on the volume of second-round venture investment. According to the results, the participation of investors with a first-class reputation has only a small impact on the value of the second-round investment. The expected positive dependence of second-round investment on the market growth rate forecast at the time of the deal is also rejected. Thus, the most important determinant of the value of the second-round investment is the value of the first-round investment, meaning that the most competitive start-up teams on the Russian market are those able to attract more money at the start, while target market growth is not a factor of crucial importance.
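
The regression specification described above can be sketched roughly as follows with statsmodels; the dataset file and column names are hypothetical, and the formula is an illustration of the described setup rather than the authors' exact model.

```python
# Sketch of a second-round investment regression: size of round 2 on size of round 1,
# a reputation dummy for round-1 investors, and the forecast market growth rate.
import pandas as pd
import statsmodels.formula.api as smf

deals = pd.read_csv("it_venture_rounds.csv")   # hypothetical dataset of IT-sector deals

model = smf.ols(
    "round2_size ~ round1_size + top_investor_round1 + forecast_market_growth",
    data=deals,
).fit()
print(model.summary())   # the coefficient on round1_size is expected to dominate
```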

Keywords: venture industry, venture investment, determinants of the venture sector development, IT-sector

Procedia PDF Downloads 357
1826 Seismotectonic Deformations along Strike-Slip Fault Systems of the Maghreb Region, Western Mediterranean

Authors: Abdelkader Soumaya, Noureddine Ben Ayed, Mojtaba Rajabi, Mustapha Meghraoui, Damien Delvaux, Ali Kadri, Moritz Ziegler, Said Maouche, Ahmed Braham, Aymen Arfaoui

Abstract:

The northern Maghreb region (Western Mediterranean) is a key area for studying seismotectonic deformation across the Africa-Eurasia convergent plate boundary. On the basis of young geologic fault-slip data and stress inversion of focal mechanisms, we defined a first-order transpression-compatible stress field and a second-order spatial variation of tectonic regime across the Maghreb region, with a relatively stable SHmax orientation from east to west. The present-day active contraction of the western Africa-Eurasia plate boundary is therefore accommodated by (1) E-W strike-slip faulting with a reverse component along the Eastern Tell and Saharan-Tunisian Atlas, (2) predominantly NE-trending thrust faulting with a strike-slip component in the Western Tell, and (3) a conjugate strike-slip faulting regime with a normal component in the Alboran/Rif domain. This spatial variation of the active stress field and tectonic regime is broadly in agreement with stress information inferred from neotectonic features. According to newly suggested structural models, we highlight the role of the main geometrically complex shear zones in the present-day stress pattern of the Maghreb region. Different geometries of these major preexisting strike-slip faults and related fractures (V-shaped conjugate fractures, horsetail splay faults, and Riedel fractures) impose their imprint on the second- and third-order stress regimes. Smoothed present-day and neotectonic stress maps (mean SHmax orientation) reveal that plate boundary forces acting on the colliding Africa and Eurasia plates control the long-wavelength pattern of the stress field in the Maghreb. The seismotectonic deformations and the upper crustal stress field in the study area are governed by the interplay of oblique plate convergence (i.e., Africa-Eurasia), lithosphere-mantle interaction, and preexisting tectonic weakness zones.
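
Because the paper repeatedly refers to mean SHmax orientations, a small illustration of how such a mean can be computed for axial data (azimuths defined only modulo 180°) may be helpful; the sample azimuths below are hypothetical.

```python
# Mean SHmax orientation for axial data: double the azimuths, average as unit
# vectors, halve the result. Sample azimuths are hypothetical.
import numpy as np

def mean_shmax(azimuths_deg):
    theta = np.deg2rad(2.0 * np.asarray(azimuths_deg))    # double the axial angles
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mean = 0.5 * np.rad2deg(np.arctan2(S, C)) % 180.0     # back to the 0-180 deg range
    R = np.hypot(C, S)                                    # concentration (0 = scattered, 1 = aligned)
    return mean, R

azimuths = [165, 172, 158, 178, 5, 170]   # hypothetical SHmax azimuths in degrees
mean_az, R = mean_shmax(azimuths)
print(f"mean SHmax ~ {mean_az:.1f} deg, concentration R = {R:.2f}")
```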

Keywords: Maghreb, strike-slip fault, seismotectonic, focal mechanism, inversion

Procedia PDF Downloads 124
1825 Modeling the Downstream Impacts of River Regulation on the Grand Lake Meadows Complex using Delft3D FM Suite

Authors: Jaime Leavitt, Katy Haralampides

Abstract:

Numerical modelling has been used to investigate the long-term impact of a large dam on downstream wetland areas, specifically in terms of changing sediment dynamics in the system. The Mactaquac Generating Station (MQGS) is a 672 MW run-of-the-river hydroelectric facility commissioned in 1968 on the mainstem of the Wolastoq|Saint John River in New Brunswick, Canada. New Brunswick Power owns and operates the dam and has been working closely with the Canadian Rivers Institute at UNB Fredericton on a multi-year, multi-disciplinary project investigating the impact the dam has on its surrounding environment. With a focus on the downstream river, this research discusses the initialization, set-up, calibration, and preliminary results of a 2-D hydrodynamic model built with the Delft3D Flexible Mesh Suite (successor of the Delft3D 4 Suite). The flexible mesh allows the model grid to be structured in the main channel and unstructured in the floodplains and other downstream regions with complex geometry. The combination of grid types improves computational efficiency and output quality. As the movement of water governs the movement of sediment, the calibrated and validated hydrodynamic model was applied to sediment transport simulations, particularly of the fine suspended sediments. Several provincially significant Protected Natural Areas and federally significant National Wildlife Areas are located 60 km downstream of the MQGS. These broad, low-lying floodplains and wetlands are known as the Grand Lake Meadows Complex (GLM Complex). There is added pressure to investigate the impacts of river regulation on these protected regions, which rely heavily on natural river processes like sediment transport and flooding. It is hypothesized that the fine suspended sediment would naturally travel to the floodplains for nutrient deposition and replenishment, particularly during the freshet and large storms. The purpose of this research is to investigate the impacts of river regulation on downstream environments and to use the model as a tool for informed decision-making to protect and maintain biologically productive wetlands and floodplains.
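
As a conceptual aside (not Delft3D code), cohesive-sediment deposition in such models is often represented by a Krone-type flux that switches on only where bed shear stress falls below a critical deposition threshold, which is why quiet flooded meadows trap fine sediment while the energetic main channel does not; the parameter values in the sketch below are hypothetical.

```python
# Krone-type deposition flux for fine (cohesive) suspended sediment.
def deposition_flux(c_susp, w_s, tau_bed, tau_crit_dep):
    """Deposition flux in kg/m2/s; c_susp in kg/m3, w_s in m/s, stresses in Pa."""
    if tau_bed >= tau_crit_dep:
        return 0.0                      # flow too energetic: sediment stays in suspension
    return w_s * c_susp * (1.0 - tau_bed / tau_crit_dep)

# Hypothetical values: main channel during freshet vs. a flooded meadow
print(deposition_flux(c_susp=0.05, w_s=5e-4, tau_bed=0.8,  tau_crit_dep=0.1))  # 0.0
print(deposition_flux(c_susp=0.05, w_s=5e-4, tau_bed=0.02, tau_crit_dep=0.1))  # ~2e-5
```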

Keywords: hydrodynamic modelling, national wildlife area, protected natural area, sediment transport

Procedia PDF Downloads 13