Search results for: image processing of electrical impedance tomography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8367

387 High Throughput LC-MS/MS Studies on Sperm Proteome of Malnad Gidda (Bos Indicus) Cattle

Authors: Kerekoppa Puttaiah Bhatta Ramesha, Uday Kannegundla, Praseeda Mol, Lathika Gopalakrishnan, Jagish Kour Reen, Gourav Dey, Manish Kumar, Sakthivel Jeyakumar, Arumugam Kumaresan, Kiran Kumar M., Thottethodi Subrahmanya Keshava Prasad

Abstract:

Spermatozoa are highly specialized, transcriptionally and translationally inactive haploid male gametes. An understanding of the sperm proteome is indispensable for exploring the mechanisms of sperm motility and fertility. Although there are a large number of human sperm proteomic studies, in-depth proteomic information on Bos indicus spermatozoa is not yet well established. Therefore, we profiled the sperm proteome of the indigenous cattle breed Malnad Gidda (Bos indicus) using high-resolution mass spectrometry. In the current study, two semen ejaculates from 3 breeding bulls were collected using the artificial vagina method. Spermatozoa were isolated by 45% Percoll purification. Protein was extracted using a lysis buffer containing 2% Sodium Dodecyl Sulphate (SDS), and the protein concentration was estimated. Fifty micrograms of protein from each individual were pooled for further downstream processing. The pooled sample was fractionated using SDS-Polyacrylamide Gel Electrophoresis, followed by in-gel digestion. The peptides were subjected to C18 Stage Tip clean-up and analyzed on an Orbitrap Fusion Tribrid mass spectrometer interfaced with a Proxeon Easy-nano LC II system (Thermo Scientific, Bremen, Germany). We identified a total of 6773 peptides with 28426 peptide spectral matches, belonging to 1081 proteins. Gene ontology analysis was carried out to determine the biological processes, molecular functions and cellular components associated with the sperm proteins. The biological processes chiefly represented in our data are the oxidation-reduction process (5%), spermatogenesis (2.5%) and spermatid development (1.4%). The highlighted molecular functions are ATP and GTP binding (14%), and the most prominent cellular components observed in our data were the nuclear membrane (1.5%), acrosomal vesicle (1.4%), and motile cilium (1.3%). Seventeen percent of the sperm proteins identified in this study were involved in metabolic pathways.
To the best of our knowledge, these data represent the first total sperm proteome from the indigenous cattle breed Malnad Gidda. We believe that our preliminary findings provide a strong base for the future understanding of bovine sperm proteomics.

Keywords: Bos indicus, Malnad Gidda, mass spectrometry, spermatozoa

Procedia PDF Downloads 196
386 Implementation of Ecological and Energy-Efficient Building Concepts

Authors: Robert Wimmer, Soeren Eikemeier, Michael Berger, Anita Preisler

Abstract:

A relatively large percentage of energy and resource consumption occurs in the building sector. This concerns the production of building materials, the construction of buildings and also the energy consumption during the use phase. Therefore, the overall objective of the EU LIFE project “LIFE Cycle Habitation” (LIFE13 ENV/AT/000741) is to demonstrate innovative building concepts that significantly reduce CO₂ emissions, mitigate climate change and contain a minimum of grey energy over their entire life cycle. The project is being realised with the contribution of the LIFE financial instrument of the European Union. The ultimate goal is to design and build prototypes for carbon-neutral and “LIFE cycle”-oriented residential buildings and to make energy-efficient settlements the standard of tomorrow, in line with the EU 2020 objectives. To this end, a resource- and energy-efficient building compound is being built in Böheimkirchen, Lower Austria, which includes 6 living units and a community area as well as 2 single-family houses, with a total usable floor area of approximately 740 m². Different innovative straw bale construction types (load-bearing and prefabricated non-load-bearing modules), together with a highly innovative energy-supply system based on the maximum use of thermal energy for thermal energy services, are going to be implemented. Only renewable resources and alternative energies are therefore used to generate thermal as well as electrical energy. This includes the use of solar energy for space heating, hot water and household appliances such as the dishwasher or washing machine, as well as a cooking place for the community area operated with thermal oil as the heat transfer medium at a higher temperature level. Solar collectors in combination with a biomass cogeneration unit and photovoltaic panels provide thermal and electric energy for the living units according to the seasonal demand.
The building concepts are optimised with the support of dynamic simulations. A particular focus is on the production and use of modular prefabricated components and building parts made of regionally available, highly energy-efficient, CO₂-storing renewable materials such as straw bales. The building components will be produced collaboratively by local SMEs organised in an efficient way. The whole building process and its results are monitored and prepared for knowledge transfer and dissemination, including trial living in the residential units to test and monitor the energy supply system and to involve stakeholders in the evaluation and dissemination of the applied technologies and building concepts. The realised building concepts should then serve as templates for a further modular extension of the settlement in a second phase.

Keywords: energy-efficiency, green architecture, renewable resources, sustainable building

Procedia PDF Downloads 149
385 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications

Authors: Ashima Sharma, Tapan K. Chaudhuri

Abstract:

Human Serum Albumin (HSA) is one of the most demanded therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it carries the risk of contamination by various blood-derived pathogens. This issue has led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of recombinant HSA (rHSA). However, no host has yet proven commercially effective for rHSA production because of their respective limitations. Thus, there is an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient hosts, and it has contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. E. coli grows rapidly, and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, far greater than for any other available host system. Therefore, E. coli-derived recombinant products have greater economic potential, as fermentation processes are cheaper than those of the other available expression hosts. Despite all the mentioned advantages, E. coli had not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation: the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time-consuming, laborious and expensive. Because of this limitation, the E. coli host system was neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work has targeted E. coli as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we have developed a novel and innovative method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of cellular growth, folding and environmental parameters, leading to significantly improved expression levels as well as a greater functional and soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. In the present case, we have therefore filled a gap in the literature by exploiting the most well-studied host system, Escherichia coli, which is low-cost, fast-growing, scalable and yet neglected, for the enhanced functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.

Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing

Procedia PDF Downloads 347
384 Comparing Deep Architectures for Selecting Optimal Machine Translation

Authors: Despoina Mouratidis, Katia Lida Kermanidis

Abstract:

Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy results are obtained when LSTM layers are used in the schema. In terms of balance across the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with certain figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to investigate why all the classifiers led to worse accuracy results in Italian than in Greek, given that the linguistic features employed are language-independent.
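As an illustrative sketch only (the paper uses multi-layer neural networks over word embeddings; this stand-in uses a single perceptron over two hand-crafted string features, and all feature names and toy sentences are invented):

```python
# Pairwise-classification sketch: represent each (candidate, reference)
# pair by simple string-level features, take the difference of the SMT
# and NMT feature vectors, and train a tiny perceptron to pick the
# better output. Toy data is illustrative, not from the paper.

def features(candidate, reference):
    c, r = candidate.split(), reference.split()
    overlap = len(set(c) & set(r)) / max(len(set(r)), 1)  # token overlap
    length_ratio = min(len(c), len(r)) / max(len(c), len(r), 1)
    return [overlap, length_ratio]

def pair_vector(smt, nmt, ref):
    # difference of feature vectors: positive components favour SMT
    fs, fn = features(smt, ref), features(nmt, ref)
    return [a - b for a, b in zip(fs, fn)]

def train_perceptron(pairs, labels, epochs=50, lr=0.1):
    w, b = [0.0] * len(pairs[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(pairs, labels):          # y = 1 if SMT is better
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy corpus: (reference, SMT output, NMT output, label); label 1 = SMT better
data = [
    ("the cat sat on the mat", "the cat sat on mat", "a feline rested", 1),
    ("he reads a book", "book reads", "he reads a book", 0),
]
X = [pair_vector(s, n, r) for r, s, n, _ in data]
y = [lbl for *_, lbl in data]
w, b = train_perceptron(X, y)
pred = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
```

The same pair-difference representation feeds the dense, CNN, and LSTM variants in the paper; only the classifier on top changes.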

Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification

Procedia PDF Downloads 132
383 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucination", the generation of outputs that are not grounded in the input data, which hinders their adoption into production. A common practice for mitigating the hallucination problem is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, the RAG system is not suitable for tabular data and subsequent data analysis tasks, for multiple reasons such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market insight and data visualization needs with high accuracy and extensive coverage, abstracting away the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills.
The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
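The plan-then-execute loop described above can be sketched schematically. In the real system an LLM produces both the plan and the code; here both are hard-coded stand-ins, and the table, question, and snippet names are invented for illustration:

```python
# Schematic plan-and-execute agent over tabular data: decompose the
# question into sub-tasks, run a generated code segment per sub-task,
# and feed each result into the next step.

rows = [  # toy listings table standing in for real estate data
    {"district": "A", "price": 500_000},
    {"district": "A", "price": 700_000},
    {"district": "B", "price": 300_000},
]

def plan(question):
    # An LLM planner would decompose the question; this stub returns
    # a fixed two-step plan for the demo question.
    return ["filter district A", "average price"]

def generate_code(step):
    # An LLM code generator would emit these snippets at runtime.
    snippets = {
        "filter district A": "result = [r for r in data if r['district'] == 'A']",
        "average price": "result = sum(r['price'] for r in data) / len(data)",
    }
    return snippets[step]

def execute(question, table):
    data = table
    for step in plan(question):
        env = {"data": data}
        exec(generate_code(step), env)   # run the generated segment
        data = env["result"]             # output becomes next step's input
    return data

answer = execute("What is the average price in district A?", rows)
```

The final response-generation step would then phrase `answer` in natural language for the end user.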

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 87
382 Impact of Microwave and Air Velocity on Drying Kinetics and Rehydration of Potato Slices

Authors: Caiyun Liu, A. Hernandez-Manas, N. Grimi, E. Vorobiev

Abstract:

Drying is one of the most widely used methods of food preservation; it extends the shelf life of food and makes transportation, storage and packaging easier and more economic. The most common method is hot-air drying. However, its disadvantages are low energy efficiency and long drying times. Because of the high temperature during hot-air drying, undesirable changes in pigments, vitamins and flavoring agents occur, which degrade the quality parameters of the product. The drying process can also cause shrinkage, case hardening, dark color, browning, loss of nutrients and other defects. Recently, new processes have been developed to avoid these problems. For example, the application of a pulsed electric field provokes cell membrane permeabilisation, which increases the drying kinetics and the moisture diffusion coefficient. Microwave drying technology also has several advantages over conventional hot-air drying, such as higher drying rates and thermal efficiency, shorter drying time, and significantly improved product quality and nutritional value. The rehydration kinetics of a dried product is a very important characteristic, and current research indicates that the rehydration ratio and the coefficient of rehydration depend on the processing conditions during drying. The present study compares the efficiency of two processes (1: room-temperature air drying; 2: microwave/air drying) in terms of drying rate, product quality and rehydration ratio. In this work, potato slices (≈2.2 g) with a thickness of 2 mm and a diameter of 33 mm were placed in the microwave chamber and dried. Drying kinetics and drying rates of the different methods were determined. The process parameters studied were inlet air velocity (1 m/s, 1.5 m/s, 2 m/s) and microwave power (50 W, 100 W, 200 W and 250 W). The evolution of temperature during microwave drying was measured.
The drying power had a strong effect on the drying rate, and microwave/air drying resulted in a 93% decrease in drying time when the air velocity was 2 m/s and the microwave power was 250 W. Based on the Lewis model, drying rate constants (kDR) were determined. An increase from kDR = 0.0002 s⁻¹ for air drying at 2 m/s to kDR = 0.0032 s⁻¹ for microwave/air drying (at 2 m/s and 250 W) was observed. The effective moisture diffusivity was calculated using Fick's law. The results show an increase in effective moisture diffusivity from 7.52×10⁻¹¹ to 2.64×10⁻⁹ m²/s for air drying at 2 m/s and microwave/air drying (at 2 m/s and 250 W), respectively. The temperature of the potato slices increased at higher microwave powers but decreased at higher air velocities. The rehydration ratio, defined as the weight of the sample after rehydration divided by the weight of the dried sample, was determined at different water temperatures (25 °C, 50 °C, 75 °C). The rehydration ratio increased with the water temperature and reached its maximum under the following conditions: 200 W microwave power, 2 m/s air velocity and 75 °C water temperature. The present study shows the interest of microwave drying for food preservation.
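The Lewis model assumed above treats the moisture ratio as a single exponential, MR(t) = exp(-kDR·t). A minimal sketch with the two rate constants reported in the abstract (the 5% target moisture ratio is an arbitrary illustration) shows that the ratio of the constants implies a time reduction of about 94%, consistent with the reported 93%:

```python
import math

# Lewis thin-layer drying model: MR(t) = exp(-k * t), so the time to
# reach a target moisture ratio is t = -ln(MR) / k.

def time_to_moisture_ratio(k, target_mr):
    """Time (s) for the moisture ratio to fall to target_mr."""
    return -math.log(target_mr) / k

k_air = 0.0002        # s^-1, hot air at 2 m/s (from the abstract)
k_microwave = 0.0032  # s^-1, microwave/air at 2 m/s and 250 W

t_air = time_to_moisture_ratio(k_air, 0.05)        # time to 5% moisture ratio
t_mw = time_to_moisture_ratio(k_microwave, 0.05)
reduction = 1 - t_mw / t_air   # fractional drying-time saving, ~0.94
```

Note that under this first-order model the fractional time saving equals 1 - k_air/k_microwave regardless of the target moisture ratio.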

Keywords: drying, microwave, potato, rehydration

Procedia PDF Downloads 269
381 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the wastewater crisis in the Lagos metropolis. The septic tank is still the most prevalent wastewater holding system. Unfortunately, there is a dearth of septage treatment infrastructure. Public wastewater treatment statistics relative to the 23 million people in Lagos State are worrisome: 1.85 billion cubic meters of wastewater are generated on a daily basis, and only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explores a community-participatory wastewater management alternative in the Oworonshoki municipality of Lagos. The study is underpinned by decentralized wastewater management systems in built-up areas. The initiative addresses five wastewater issues, namely generation, storage, collection, processing and disposal, through participatory decision-making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focused group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in primary open drainage channels and in the natural retention ponds of the fringing wetlands is traceable to the frequent climate-change-induced tidal influences of recent decades. A rise in the water table resulting in septic-tank leakage and water pollution is reported to be responsible for the increase in waterborne infirmities documented in primary health centers, in addition to the unhealthy dumping of solid wastes in the drainage channels. Such uncontrolled disposal renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand-barrier-lagoon urban ecosystems.
A cluster decentralized system was conceptualized to serve 255 households, and stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 174
380 A Study on the Shear-Induced Crystallization of Aliphatic-Aromatic Copolyester

Authors: Ramin Hosseinnezhad, Iurii Vozniak, Andrzej Galeski

Abstract:

Shear-induced crystallization, which originates from the orientation of chains along the flow direction, is an inevitable part of most polymer processing technologies. It plays a dominant role in determining the final product properties and is affected by many factors such as shear rate, cooling rate, and total strain. Investigation of the shear-induced crystallization process becomes of great importance for the preparation of nanocomposites, which requires crystallization of sheared nanofibrous inclusions at higher temperatures. Thus, the effects of shear time, shear rate, and the thermal conditions of cooling on the crystallization of two aliphatic-aromatic copolyesters have been investigated. This was performed using a Linkam optical shearing system (CSS450) for both Ecoflex® F Blend C1200, produced by BASF, and a synthesized copolyester of butylene terephthalate and a mixture of butylene esters (adipate, succinate, and glutarate), PBASGT, containing 60% aromatic comonomer. The crystallization kinetics of these biodegradable copolyesters were studied under two different shearing conditions. First, a sample with a thickness of 60 µm was heated to 60 °C above its melting point and subsequently subjected to different shear rates (100–800 s⁻¹) while cooling at specific rates. Second, the same type of sample was cooled down after shearing at constant temperature had finished. The intensity of transmitted depolarized light, recorded by a camera attached to the optical microscope, was used to follow the crystallization. Temperature dependencies of the conversion degree of the samples during cooling were collected and used to determine the half-temperature (Th), at which a 50% conversion degree is reached. Shearing Ecoflex films for 45 seconds at a shear rate of 100 s⁻¹ resulted in a significant increase of Th from 56 °C to 70 °C. Moreover, the temperature range for the transition of the molten samples to the crystallized state decreased from 42 °C to 20 °C.
A comparatively low shift of 10 °C in Th towards higher temperature was observed for PBASGT films sheared at 600 s⁻¹ for 45 seconds. However, insufficient melt flow strength and non-laminar flow due to Taylor vortices hindered reaching a more elevated Th at very high shear rates (600–800 s⁻¹). The shift in Th was smaller for the samples sheared at a constant temperature and subsequently cooled down. This may be attributed to the longer time gap between the cessation of shearing and the onset of crystallization: the longer this gap, the greater the possibility for crystal nuclei to re-melt at temperatures above Tm and for polymer chains to recoil and relax. It is found that the crystallization temperature, crystallization induction time and spherulite growth of aliphatic-aromatic copolyesters are dramatically influenced by both the cooling rate and the shear imposed during processing.
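Extracting Th from a conversion-degree curve amounts to interpolating the temperature at which the curve crosses 50%. A minimal sketch on invented conversion data (not the measured Ecoflex or PBASGT curves):

```python
# Half-temperature Th: the temperature at which the conversion degree
# reaches 50% during cooling, found here by linear interpolation.

def half_temperature(temps, conversion, level=0.5):
    """Temperature where conversion crosses `level`.
    Assumes temps are in cooling order and conversion is non-decreasing."""
    for (t1, c1), (t2, c2) in zip(zip(temps, conversion),
                                  zip(temps[1:], conversion[1:])):
        if c1 <= level <= c2:
            frac = (level - c1) / (c2 - c1)
            return t1 + frac * (t2 - t1)
    raise ValueError("conversion never crosses the requested level")

temps = [80, 75, 70, 65, 60, 55]            # °C, cooling (made-up data)
conversion = [0.0, 0.1, 0.4, 0.6, 0.9, 1.0]  # degree of conversion
th = half_temperature(temps, conversion)     # crosses 0.5 between 70 and 65 °C
```

The same crossing logic applied at, say, 5% and 95% conversion would give the melt-to-crystal transition range also discussed above.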

Keywords: induced crystallization, shear rate, aliphatic-aromatic copolyester, ecoflex

Procedia PDF Downloads 448
379 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. These chains face numerous challenges, including product wastage, low margins, long break-even periods, and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modelling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers were identified from the literature and through semi-structured interviews conducted with managers having relevant experience in e-grocery supply chains. The experts were contacted through professional and social networks using a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using the open and axial coding method. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, a hierarchical model of the enablers was developed, and MICMAC analysis identified their driving and dependence powers. Based on driving and dependence power, the enablers were categorized into four clusters, namely independent, autonomous, dependent and linkage. The analysis found that information technology (IT) and manpower training act as key enablers in reducing lead time and enhancing online service quality. Many of the enablers fall into the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness and customized applications for different stakeholders, marking these as critical in online food and grocery supply chains. Considering the perishable nature of the products being handled, the impact of the enablers on product quality was also identified.
Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among supply chain enablers for fresh food in e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and should therefore be applicable in developing-economy contexts, where supply chains are still evolving.
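The MICMAC step above reduces to simple matrix arithmetic: driving power is the row sum of the final reachability matrix, dependence power is the column sum, and each enabler falls into one of four quadrants. A sketch with a made-up four-enabler matrix (the enabler names and links are illustrative, not the paper's data):

```python
# MICMAC sketch: compute driving power (row sums) and dependence power
# (column sums) from a final reachability matrix, then classify each
# enabler into independent / linkage / dependent / autonomous clusters.

reachability = {            # 1 = row enabler reaches column enabler
    "IT":        {"IT": 1, "training": 1, "lead_time": 1, "quality": 1},
    "training":  {"IT": 0, "training": 1, "lead_time": 1, "quality": 1},
    "lead_time": {"IT": 0, "training": 0, "lead_time": 1, "quality": 1},
    "quality":   {"IT": 0, "training": 0, "lead_time": 0, "quality": 1},
}

names = list(reachability)
driving = {n: sum(reachability[n].values()) for n in names}
dependence = {n: sum(reachability[m][n] for m in names) for n in names}

mid = (len(names) + 1) / 2   # midpoint splitting high from low power

def cluster(n):
    hi_drive, hi_dep = driving[n] >= mid, dependence[n] >= mid
    if hi_drive and hi_dep:
        return "linkage"
    if hi_drive:
        return "independent"   # key driver
    if hi_dep:
        return "dependent"
    return "autonomous"

clusters = {n: cluster(n) for n in names}
```

In this toy matrix, "IT" and "training" come out as independent (driver) enablers while "lead_time" and "quality" are dependent, mirroring the driver/outcome split reported in the abstract.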

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 204
378 MigrationR: An R Package for Analyzing Bird Migration Data Based on Satellite Tracking

Authors: Xinhai Li, Huidong Tian, Yumin Guo

Abstract:

Bird migration is a fantastic natural phenomenon. In recent years, the use of GPS transmitters has generated a vast amount of data, and the Movebank platform has made these data publicly accessible. What researchers need are data analysis tools. Although there are approximately 90 R packages dedicated to animal movement analysis, the capacity for comprehensive processing of bird migration data remains limited. Hence, we introduce a novel package called migrationR. This package enables the calculation of movement speed, direction, changes in direction, flight duration, and daily and annual movement distances. Furthermore, it can pinpoint the starting and ending dates of migration, estimate nest site locations and stopovers, and visualize movement trajectories at various time scales. migrationR distinguishes individuals through NMDS (non-metric multidimensional scaling) coordinates based on movement variables such as speed, flight duration, path tortuosity, and migration timing. A distinctive aspect of the package is the development of a hetero-occurrences species distribution model that takes into account the daily rhythm of individual birds across different landcover types. Habitat use for foraging and roosting differs significantly for many waterbirds. For example, White-naped Cranes at Poyang Lake in China typically forage in croplands and roost in shallow-water areas, and both of these occurrence types are of equal importance. Optimal habitats consist of a combination of croplands and shallow waters, whereas suboptimal habitats lack both, forcing birds to fly extensively. With migrationR, we conduct species distribution modeling for foraging and roosting separately and use the moving distance between croplands and shallow-water areas as an index of overall habitat suitability.
This approach offers a more nuanced understanding of the habitat requirements of migratory birds and enhances our ability to analyze and interpret their movement patterns effectively. The functions of migrationR are demonstrated using our own tracking data from 78 White-naped Crane individuals collected from 2014 to 2023, comprising over one million valid locations in total. migrationR can be installed from its GitHub repository by executing the following command: remotes::install_github("Xinhai-Li/migrationR").
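migrationR itself is an R package; as a language-neutral illustration of the kind of per-step statistic it derives, the sketch below computes great-circle distance and speed between consecutive GPS fixes via the haversine formula. The coordinates and timestamps are invented:

```python
import math

# Per-step movement statistics from a GPS track: distance and speed
# between consecutive fixes, using the haversine great-circle formula.

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def step_stats(fixes):
    """Distance (km) and speed (km/h) for each pair of consecutive
    (t_hours, lat, lon) fixes."""
    out = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        d = haversine_km(la1, lo1, la2, lo2)
        out.append({"distance_km": d, "speed_kmh": d / (t2 - t1)})
    return out

# Invented three-fix track: (time in hours, latitude, longitude)
track = [(0.0, 47.0, 107.0), (1.0, 47.5, 107.0), (3.0, 48.0, 107.5)]
stats = step_stats(track)
```

Summing such step distances per day or per year gives the daily and annual movement distances mentioned above, and the step-to-step change in bearing gives path tortuosity.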

Keywords: bird migration, hetero-occurrences species distribution model, migrationR, R package, satellite telemetry

Procedia PDF Downloads 63
377 Correlation Between Cytokine Levels and Lung Injury in the Syrian Hamster (Mesocricetus Auratus) Covid-19 Model

Authors: Gleb Fomin, Kairat Tabynov, Nurkeldy Turebekov, Dinara Turegeldiyeva, Rinat Islamov

Abstract:

The levels of major cytokines in the blood of patients with COVID-19 vary greatly depending on age, gender, duration and severity of infection, and comorbidity. Two clinically significant cytokines, IL-6 and TNF-α, increase in level in patients with severe COVID-19. However, in a hamster model of COVID-19, TNF-α levels are unchanged or reduced, while the expression of other cytokines reflects the cytokine profile found in patients' plasma. The aim of our study was to evaluate the relationship between cytokine levels in the blood and lungs and lung damage in a Syrian hamster (Mesocricetus auratus) model infected with a SARS-CoV-2 strain. The study used outbred female and male Syrian hamsters (n=36, 4 groups) weighing 80-110 g and aged 5 months (IACUC protocol #4, 09/22/2020). Animals were infected intranasally with the hCoV-19/Kazakhstan/KazNAU-NSCEDI-481/2020 strain and euthanized at 3 d.p.i. The levels of the cytokines IL-6, TNF-α, IFN-α, and IFN-γ were determined by hamster-specific ELISA (MyBioSource, USA). Lung samples were subjected to histological processing, and pathological changes in the histological preparations were assessed on a 3-point scale. The work was carried out in an ABSL-3 laboratory. The data were analyzed in GraphPad Prism 6.00 (GraphPad Software, La Jolla, California, USA). The work was supported by MES RK grant AP09259865. In the blood, the level of TNF-α increased in males (p=0.0012), and IFN-γ increased in both males and females (p=0.0001). On the contrary, IFN-α production decreased (p=0.0006). In lung tissues, only the TNF-α level increased (p=0.0011). Correlation analysis showed a negative relationship between the level of IL-6 in the blood and lung damage in males (r = -0.71, p=0.0001) and females (r = -0.57, p=0.025). In contrast, in males, the level of IL-6 in the lungs correlated positively with the lung damage score (r = 0.80, p=0.01).
The level of IFN-γ in the blood (r = -0.64, p=0.035) and lungs (r = -0.72, p=0.017) in males correlates negatively with lung damage. No links were found for TNF-α and IFN-α. The study showed a positive association between lung injury and tissue levels of IL-6 in male hamsters. It is known that in humans, high concentrations of IL-6 in the lungs are associated with suppression of cellular immunity and, as a result, with an increase in the severity of COVID-19. TNF-α and IFN-γ play a key role in the pathogenesis of COVID-19 in hamsters; however, the mechanisms of their activity require more detailed study. IFN-α plays a lesser role in direct lung injury in the Syrian hamster model. We have shown the significance of tissue IL-6 and IFN-γ as predictors of the severity of lung damage in COVID-19 in the Syrian hamster model. Changes in cytokine levels in the blood may not always reflect the pathological processes in the lungs in COVID-19.
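The r values above are Pearson correlation coefficients. For reference, a pure-Python sketch of the statistic; the paired measurements below are invented illustration, not the hamster data:

```python
import math

# Pearson correlation coefficient: covariance of the two variables
# divided by the product of their standard deviations.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

il6_blood = [10, 8, 6, 4, 2]   # hypothetical cytokine levels
lung_score = [1, 1, 2, 2, 3]   # hypothetical 3-point lung-damage scores
r = pearson_r(il6_blood, lung_score)  # negative: higher IL-6, lower score
```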

Keywords: syrian hamster, COVID-19, cytokines, biological model

Procedia PDF Downloads 92
376 Development of Academic Software for Medial Axis Determination of Porous Media from High-Resolution X-Ray Microtomography Data

Authors: S. Jurado, E. Pazmino

Abstract:

Determination of the medial axis of a porous media sample is a non-trivial problem of interest for several disciplines, e.g., hydrology, fluid dynamics, contaminant transport, filtration, oil extraction, etc. However, the computational tools available for researchers are limited and restricted. The primary aim of this work was to develop a series of algorithms to extract porosity, medial axis structure, and pore-throat size distributions from porous media domains. A complementary objective was to provide the algorithms as free computational software available to the academic community, comprising researchers and students interested in 3D data processing. The burn algorithm was tested on porous media data obtained from High-Resolution X-Ray Microtomography (HRXMT) and idealized computer-generated domains. The real data and idealized domains were discretized in voxel domains of 550³ elements and binarized to denote solid and void regions to determine porosity. Subsequently, the algorithm identifies the layer of void voxels next to the solid boundaries. An iterative process removes, or 'burns', void voxels layer by layer until all the void space is characterized. Multiple strategies were tested to optimize the execution time and use of computer memory, i.e., segmentation of the overall domain into subdomains, vectorization of operations, and extraction of single burn-layer data during the iterative process. The medial axis determination was conducted by identifying regions where burnt layers collide. The final medial axis structure was refined to avoid concave-grain effects and utilized to determine the pore-throat size distribution. A graphic user interface software was developed to encompass all these algorithms, including the generation of idealized porous media domains. The software allows input of HRXMT data to calculate porosity, medial axis, and pore-throat size distribution, and provides output in tabular and graphical formats. 
Preliminary tests of the software developed during this study achieved medial axis, pore-throat size distribution and porosity determination of 100³, 320³ and 550³ voxel porous media domains in 2, 22, and 45 minutes, respectively, on a personal computer (Intel i7 processor, 16 GB RAM). These results indicate that the software is a practical and accessible tool for postprocessing HRXMT data for the academic community.
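The layer-by-layer burn described above can be sketched in a few lines of NumPy. This is a minimal illustration of the iterative burn on a binarized voxel domain, not the authors' implementation: it omits their optimizations (subdomain segmentation, single-layer extraction) and the collision detection that yields the medial axis, and it treats voxels outside the domain as solid.

```python
import numpy as np

def burn_layers(void):
    """Iteratively 'burn' void voxels layer by layer, starting from the
    solid boundaries. `void` is a boolean array (True = void, False = solid);
    voxels outside the domain are treated as solid. Returns the burn layer
    of each void voxel (1 = adjacent to solid); solid voxels get 0."""
    v = np.pad(void, 1, constant_values=False)      # add a solid border
    layers = np.zeros(v.shape, dtype=int)
    step = 0
    while v.any():
        step += 1
        burnt = ~v                                  # solid or already burnt
        front = np.zeros_like(v)
        # a void voxel burns this step if any face-neighbour is burnt
        for axis in range(v.ndim):                  # 6-connectivity in 3D
            for shift in (1, -1):
                front |= np.roll(burnt, shift, axis=axis)
        front &= v
        layers[front] = step
        v &= ~front                                 # remove the burnt layer
    return layers[(slice(1, -1),) * void.ndim]      # strip the padding
```

The medial axis would then be located where burn fronts arriving from different boundaries collide, i.e., at local maxima of the returned layer array, and porosity is simply `void.mean()` on the binarized domain.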

Keywords: medial axis, pore-throat distribution, porosity, porous media

Procedia PDF Downloads 115
375 E-Business Role in the Development of the Economy of Sultanate of Oman

Authors: Mairaj Salim, Asma Zaheer

Abstract:

Oman has accomplished as much or more than its fellow Gulf monarchies, despite starting from scratch considerably later, having less oil income to utilize, dealing with a larger and more rugged geography, and resolving a bitter civil war along the way. Of course, Oman's progress in the past 30-plus years has not been without problems and missteps, but the balance is squarely on the positive side of the ledger. Oil has been the driving force of the Omani economy since Oman began commercial production in 1967. The oil industry supports the country’s high standard of living and is primarily responsible for its modern and expansive infrastructure, including electrical utilities, telephone services, roads, public education and medical services. In addition to extensive oil reserves, Oman also has substantial natural gas reserves, which are expected to play a leading role in the Omani economy in the twenty-first century. To reduce the country’s dependence on oil revenues, the government is restructuring the economy by directing investment to non-oil activities. Since the start of the 21st century, IT has changed how tasks are performed. To manage affairs for the benefit of organizations and the economy, the Omani government has adopted e-business technologies for development. 
E-Business is important because it allows:
• Transformation of old economy relationships (vertical/linear relationships) to new economy relationships characterized by end-to-end relationship management solutions (integrated or extended relationships)
• Facilitation and organization of networks; small firms depend on ‘partner’ firms for supplies and product distribution to meet customer demands
• SMEs to outsource back-end processes or cost centers, enabling the SME to focus on their core competence
• ICT to connect, manage and integrate processes internally and externally
• SMEs to join networks and enter new markets through shortened supply chains to increase market share, customers and suppliers
• SMEs to take up the benefits of e-business to reduce costs, increase customer satisfaction, improve client referral and attract quality partners
• New business models of collaboration for SMEs to increase their skill base
• SMEs to enter the virtual trading arena and increase their market reach
A national strategy for the advancement of information and communication technology (ICT) has been worked out, mainly to introduce e-government, e-commerce, and a digital society. An information technology complex, KOM (Knowledge Oasis Muscat), has been established, consisting of a section for information technology, incubator services, a shopping center for technology software and hardware, ICT colleges, e-government services and other relevant services. All these efforts play a vital role in the development of the Omani economy.

Keywords: ICT, ITA, CRM, SCM, ERP, KOM, SMEs, e-commerce and e-business

Procedia PDF Downloads 251
374 Progress Toward More Resilient Infrastructures

Authors: Amir Golalipour

Abstract:

In recent years, resilience has emerged as an important topic in transportation infrastructure practice, planning, and design to address the myriad stressors of future climate facing the Nation. Climate change has increased the frequency of extreme weather events and also causes climate and weather patterns to diverge from historic trends, culminating in circumstances where transportation infrastructure and assets are operating outside the scope of their design. To design and maintain transportation infrastructure that can continue meeting objectives over the infrastructure’s design life, these systems must be made adaptable to the changing climate by incorporating resilience wherever practically and financially feasible. This study is focused on adaptation strategies and the incorporation of resilience in infrastructure construction, maintenance, rehabilitation, and preservation processes. This study will include highlights from some of the recent FHWA activities on resilience. This study describes existing resilience planning and decision-making practices related to transportation infrastructure; mechanisms to identify, analyze, and prioritize adaptation options; and the strain that future climate and extreme weather event pressures place on existing transportation assets and the stressors these systems face for both single and combined stressor scenarios. Results of two case studies from Transportation Engineering Approaches to Climate Resiliency (TEACR) projects, with a focus on temperature and precipitation impacts on transportation infrastructure, will be presented. These case studies looked at the impact on infrastructure performance of using future temperature and precipitation projections compared to traditional climate design parameters. The research team used the adaptation decision-making assessment and the Coupled Model Intercomparison Project (CMIP) processing tool to determine which solution is best to pursue. 
The CMIP tool provided projected climate data for temperature and precipitation, which could then be incorporated into the design procedure to estimate performance. As a result, using the future climate scenarios would impact the design. These changes were noted to bring only a slight increase in costs; however, it is acknowledged that network-wide these costs could be significant. This study will also focus on what we have learned from recent storms, floods, and climate-related events that will help us be better prepared to ensure our communities have a resilient transportation network. It should be highlighted that standardized mechanisms to incorporate resilience practices are required to encourage widespread implementation, mitigate the effects of climate stressors, and ensure the continuance of transportation systems and assets in an evolving climate.

Keywords: adaptation strategies, extreme events, resilience, transportation infrastructure

Procedia PDF Downloads 1
373 Occult Haemolacria Paradigm in the Study of Tears

Authors: Yuliya Huseva

Abstract:

Aim: To investigate the contents of tears to detect latent blood. Methods: Tear samples from 72 women were studied by microscopy of tears aspirated with a capillary and stained by Nocht, and by a chemical method using test strips with chromogen. Statistical data processing was carried out using Statistica 10.0 for Windows, with calculation of Pearson's chi-square test and Yule's association coefficient, and determination of sensitivity and specificity. Results: Erythrocytes were revealed microscopically in 30.6% (22) of tear samples. A correlation between the presence of erythrocytes in the tear and the phase of the menstrual cycle was discovered. In the follicular phase of the cycle, erythrocytes were found in 59.1% (13) of women, significantly more (χ²=4.2, p=0.041) than in the luteal phase, 40.9% (9) of women. The predominance of erythrocytes in the tears of the women examined during the first seven days of the follicular phase testifies in favour of vicarious bleeding from the mucous membranes of extragenital organs in sync with menstruation. Of the other cellular elements in tear samples with latent haemolacria, neutrophils prevailed (45.5%, 10), while lymphocytes were less common (27.3%, 6), because neutrophil exudation is accompanied by vasodilatation of the conjunctiva and the release of erythrocytes into the conjunctival cavity. The prognostic significance of the chemical method was found to be 0.53 that of the microscopic method. In contrast to microscopy, which detected blood in tear samples of 30.6% (22) of women, blood was detected chemically in the tears of 16.7% (12). An association between latent haemolacria and endometriosis was found (k=0.75, p≤0.05). Microscopically, in the tears of patients with endometriosis, erythrocytes were detected in 70% of cases, while in healthy women without endometriosis, in 25% of cases. 
The proportion of women with erythrocytes in tears, determined by the chemical method, was 41.7% among patients with endometriosis, significantly more (χ²=6.5, p=0.011) than the 11.7% among women without endometriosis. The data obtained can be explained by the etiopathogenesis of extragenital endometriosis, which is caused by hematogenous spread of endometrial tissue into the orbit. In endometriosis, erythrocytes are found against a background of accumulations of epithelial cells. In the tear samples of 4 women with endometriosis, glandular cuboidal epithelial cells, morphologically similar to endometrial cells, were found, which may indicate a generalization of the disease. Conclusions: Single erythrocytes can normally be found in tears; their number depends on the phase of the menstrual cycle, increasing in the follicular phase. Erythrocytes found in tears against a background of accumulations of epitheliocytes with glandular atypia may indicate a manifestation of extragenital endometriosis. Both methods used (microscopic and chemical) are informative in revealing latent haemolacria. The microscopic method is more sensitive, reveals intact erythrocytes, and also provides information about other cells. At the same time, the chemical method is faster and technically simpler, determines the presence of haemoglobin and its metabolic products, and can be used for screening.
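The 41.7% vs. 11.7% comparison above can be reproduced with a Pearson chi-square test on a 2×2 contingency table, as sketched below. The group sizes are an assumption inferred from the reported percentages and totals, not taken from the paper's raw data.

```python
from scipy.stats import chi2_contingency

# Contingency table consistent with the proportions reported above
# (rows: endometriosis yes/no; columns: chemical test positive/negative).
# The exact counts are inferred, hypothetical values.
table = [[5, 7],    # 5/12 = 41.7% positive among patients with endometriosis
         [7, 53]]   # 7/60 = 11.7% positive among women without endometriosis

# Pearson chi-square without Yates continuity correction
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.1f}, p = {p:.3f}")  # close to the reported chi2=6.5, p=0.011
```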

Keywords: tear, blood, microscopy, epitheliocytes

Procedia PDF Downloads 120
372 Disseminating Positive Psychology Resources Online: Current Research and Future Directions

Authors: Warren Jared, Bekker Jeremy, Salazar Guy, Jackman Katelyn, Linford Lauren

Abstract:

Introduction: Positive Psychology research has burgeoned in the past 20 years; however, relatively few evidence-based resources to cultivate positive psychology skills are widely available to the general public. The positive psychology resources at www.mybestself101.org were developed to assist individuals in cultivating well-being using a variety of techniques, including gratitude, purpose, mindfulness, self-compassion, savoring, personal growth, and supportive relationships. These resources are empirically based and are built to be accessible to a broad audience. Key Objectives: This presentation highlights results from two recent randomized intervention studies of specific MBS101 learning modules. A key objective of this research is to empirically assess the efficacy and usability of these online resources. Another objective of this research is to encourage the broad dissemination of online positive psychology resources; thus, recommendations for further research and dissemination will be discussed. Methods: In both interventions, we recruited adult participants using social media advertisements. The participants completed several well-being and positive psychology construct-specific measures (savoring and self-compassion measures) at baseline and post-intervention. Participants in the experimental condition were also given a feedback questionnaire to gather qualitative data on how participants viewed the modules. Participants in the self-compassion study were randomly split between an experimental group, who received the treatment, and a control group, who were placed on a waitlist. There was no control group for the savoring study. Participants were instructed to read content on the module and practice savoring or self-compassion strategies listed in the module for a minimum of twenty minutes a day for 21 days. 
The intervention was semi-structured, as participants were free to choose which module activities they would complete from a menu of research-based strategies. Participants tracked which activities they completed and how long they spent on the modules each day. Results: In the savoring study, participants increased in savoring ability as indicated by multiple measures. In addition, participants increased in well-being from pre- to post-treatment. In the self-compassion study, repeated measures mixed model analyses revealed that compared to waitlist controls, participants who used the MBS101 self-compassion module experienced significant improvements in self-compassion, well-being, and body image with effect sizes ranging from medium to large. Attrition was 10.5% for the self-compassion study and 71% for the savoring study. Overall, participants indicated that the modules were generally helpful, and they particularly appreciated the specific strategy menus. Participants requested more structured course activities, more interactive content, and more practice activities overall. Recommendations: Mybestself101.org is an applied positive psychology research program that shows promise as a model for effectively disseminating evidence-based positive psychology resources that are both engaging and easily accessible. Considerable research is still needed, both to test the efficacy and usability of the modules currently available and to improve them based on participant feedback. Feedback received from participants in the randomized controlled trial led to the development of an expanded, 30-day online course called The Gift of Self-Compassion and an online mindfulness course currently in development called Mindfulness For Humans.

Keywords: positive psychology, intervention, online resources, self-compassion, dissemination, online curriculum

Procedia PDF Downloads 204
371 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry

Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay

Abstract:

The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other in the system. Universities and research centers are accepted to have a critical role in these systems for the creation and development of innovations. However, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, combining high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: the embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. Our approach makes three main contributions. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas. There are also studies mentioning the importance of research center locations and universities in the regional development of the optics and photonics industry. These studies are mostly limited to the number of patents received within a short period of time or to limited survey results. Therefore, the first contribution of our approach is a comprehensive analysis of the state and recent history of photonics and optics research in the US. 
For this purpose, both the research centers specialized in optics and photonics and the related research groups in various departments of institutions (e.g., Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution of the paper is the analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US. Then the profiles and activities of these companies are gathered by extracting and integrating the related data from the National Establishment Time Series (NETS) database, the ES-202 database, and data sets from the regional photonics clusters. The number of start-ups, their employee numbers and sales are some examples of the extracted data for the industry. Our third contribution is the utilization of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and periodical conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.

Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers

Procedia PDF Downloads 406
370 EverPro as the Missing Piece in the Plant Protein Portfolio to Aid the Transformation to Sustainable Food Systems

Authors: Aylin W Sahin, Alice Jaeger, Laura Nyhan, Gregory Belt, Steffen Münch, Elke K. Arendt

Abstract:

Our current food systems cause an increase in malnutrition, resulting in more people being overweight or obese in the Western world. Additionally, our natural resources are under enormous pressure, and greenhouse gas emissions increase yearly with a significant contribution to climate change. Hence, transforming our food systems is of the highest priority. Plant-based food products have a lower environmental impact compared to their animal-based counterparts, representing a more sustainable protein source. However, most plant-based protein ingredients, such as soy and pea, are lacking in indispensable amino acids and extremely limited in their functionality and, thus, in their food application potential. They are known to have a low solubility in water and to change their properties during processing. The low solubility presents the biggest challenge in the development of milk alternatives, leading to inferior protein content and protein quality in dairy alternatives on the market. Moreover, plant-based protein ingredients often possess an off-flavour, which makes them less attractive to consumers. EverPro, a plant-protein isolate originating from brewers' spent grain, the most abundant by-product of the brewing industry, represents the missing piece in the plant protein portfolio. With a protein content of >85%, it is of high nutritional value, including all indispensable amino acids, which allows closing the protein quality gap of plant proteins. Moreover, it possesses high techno-functional properties. It is fully soluble in water (101.7 ± 2.9%), has a high fat absorption capacity (182.4 ± 1.9%), and a foaming capacity superior to soy or pea protein. This makes EverPro suitable for a vast range of food applications. Furthermore, it does not cause changes in viscosity during heating and cooling of dispersions, such as beverages. 
Besides its outstanding nutritional and functional characteristics, the production of EverPro has a much lower environmental impact compared to dairy or other plant protein ingredients. Life cycle assessment analysis showed that EverPro has the lowest impact on global warming compared to soy protein isolate, pea protein isolate, whey protein isolate, and egg white powder. It also contributes significantly less to freshwater eutrophication, marine eutrophication and land use compared to the protein sources mentioned above. EverPro is the prime example of sustainable ingredients, and the type of plant protein the food industry was waiting for: nutritious, multi-functional, and environmentally friendly.

Keywords: plant-based protein, upcycled, brewers' spent grain, low environmental impact, highly functional ingredient

Procedia PDF Downloads 80
369 The Effects of Labeling Cues on Sensory and Affective Responses of Consumers to Categories of Functional Food Carriers: A Mixed Factorial ANOVA Design

Authors: Hedia El Ourabi, Marc Alexandre Tomiuk, Ahmed Khalil Ben Ayed

Abstract:

The aim of this study is to investigate the effects of the labeling cues traceability (T), health claim (HC), and verification of health claim (VHC) on consumer affective response and sensory appeal toward a wide array of functional food carriers (FFC). Predominantly, research in the food area has tended to examine the effects of these information cues independently on cognitive responses to food product offerings. Investigations and findings of potential interaction effects among these factors on affective response and sensory appeal are therefore scant. Moreover, previous studies have typically emphasized single or limited sets of functional food products and categories. In turn, this study considers five food product categories enriched with omega-3 fatty acids, namely: meat products, eggs, cereal products, dairy products and processed fruits and vegetables. It is, therefore, exhaustive in scope rather than exclusive. An investigation of the potential simultaneous effects of these information cues on the affective responses and sensory appeal of consumers should give rise to important insights for both functional food manufacturers and policymakers. A mixed (2 x 3) x (2 x 5) between-within subjects factorial ANOVA design was implemented in this study. T (two levels: completely traceable or non-traceable) and HC (three levels: functional health claim, or disease risk reduction health claim, or disease prevention health claim) were treated as between-subjects factors, whereas VHC (two levels: by a government agency and by a non-government agency) and FFC (five food categories) were modeled as within-subjects factors. Subjects were randomly assigned to one of the six between-subjects conditions. A total of 463 questionnaires were obtained from a convenience sample of undergraduate students at various universities in the Montreal and Ottawa areas (in Canada). 
Consumer affective response and sensory appeal were respectively measured via the following statements assessed on seven-point semantic differential scales: ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unlikeable (1) / Likeable (7)’ and ‘Your evaluation of [food product category] enriched with omega-3 fatty acids is Unappetizing (1) / Appetizing (7).’ Results revealed a significant interaction effect between HC and VHC on consumer affective response as well as on sensory appeal toward foods enriched with omega-3 fatty acids. On the other hand, the three-way interaction effect between T, HC, and VHC on either of the two dependent variables was not significant. However, the triple interaction effect among T, VHC, and FFC was significant on consumer affective response, and the interaction effect among T, HC, and FFC was significant on consumer sensory appeal. Findings of this study should serve as an impetus for functional food manufacturers to closely cooperate with policymakers in order to improve on and legitimize the use of health claims in their marketing efforts through credible verification practices and protocols put in place by trusted government agencies. Finally, both functional food manufacturers and retailers may benefit from the socially-responsible image which is conveyed by product offerings whose ingredients remain traceable from farm to kitchen table.

Keywords: functional foods, labeling cues, affective appeal, sensory appeal

Procedia PDF Downloads 164
368 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters defining their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, complex geometries of finished components continue to bring several challenges to designers coping with manufacturing defects on site. Wrinkling, e.g., is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are presented by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composites forming simulations, the finite element (FE) method is a longstanding tool used for prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the best processing condition that can yield optimized positioning of the fibers upon forming (or robot handling in the automated-process case). However, the need for small time steps in explicit FE codes, numerical instabilities, as well as large computational times, are among the notable drawbacks of current FE tools, hindering their extensive use as fast and yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique through the application of the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations for fabric material handling and forming processes. 
Therefore, this method has the ability to enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components. This enables designers to virtually develop, test, and optimize their processes based on either algorithmic or machine learning applications. As a preliminary case study, forming of a hemispherical plain weave is shown, and the results are compared to FE simulations, as well as experiments.
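The particle-grid transfer at the heart of MPM can be illustrated with a minimal one-dimensional time step: particle mass and momentum are mapped to a background grid with linear shape functions, the grid momentum is updated, and the updated grid velocity is mapped back to advect the particles. This is a generic textbook-style sketch under gravity only, not the authors' fabric model, which would add membrane and bending stress contributions to the grid forces.

```python
import numpy as np

def mpm_step(xp, vp, mp, n_cells, dx, dt, gravity=-9.81):
    """One explicit 1-D MPM step with linear (tent) shape functions.
    xp, vp, mp: particle positions, velocities, masses."""
    n_nodes = n_cells + 1
    mass = np.zeros(n_nodes)
    mom = np.zeros(n_nodes)
    # particle-to-grid (P2G): scatter mass and momentum to the two
    # nodes of the cell containing each particle
    idx = (xp / dx).astype(int)
    frac = xp / dx - idx
    for i, (c, f) in enumerate(zip(idx, frac)):
        for node, w in ((c, 1 - f), (c + 1, f)):
            mass[node] += w * mp[i]
            mom[node] += w * mp[i] * vp[i]
    # grid update: apply the external force (gravity only in this sketch)
    mom += mass * gravity * dt
    vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
    # grid-to-particle (G2P): gather updated velocity, advect particles
    vp_new = np.empty_like(vp)
    for i, (c, f) in enumerate(zip(idx, frac)):
        vp_new[i] = (1 - f) * vel[c] + f * vel[c + 1]
    xp_new = xp + vp_new * dt
    return xp_new, vp_new
```

Because the state lives on the particles while the equations of motion are solved on a fixed grid that is reset every step, the mesh never tangles under large deformations, which is what permits the larger stable time steps claimed above.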

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 181
367 A Review of Data Visualization Best Practices: Lessons for Open Government Data Portals

Authors: Bahareh Ansari

Abstract:

Background: The Open Government Data (OGD) movement in the last decade has encouraged many government organizations around the world to make their data publicly available to advance democratic processes. But current open data platforms have not yet reached their full potential in supporting all interested parties. To make the data useful and understandable for everyone, scholars have suggested that opening the data should be supplemented by visualization. However, different visualizations of the same information can dramatically change an individual’s cognitive and emotional experience in working with the data. This study reviews the data visualization literature to create a list of the methods empirically tested to enhance users’ performance and experience in working with a visualization tool. This list can be used in evaluating OGD visualization practices and informing future open data initiatives. Methods: Previous reviews of the visualization literature categorized visualization outcomes into four categories: recall/memorability, insight/comprehension, engagement, and enjoyment. To identify the papers, a search for these outcomes was conducted in the abstracts of the publications of top-tier visualization venues, including IEEE Transactions on Visualization and Computer Graphics, Computer Graphics, and proceedings of the CHI Conference on Human Factors in Computing Systems. The search results are complemented with a search in the references of the identified articles, and a search for 'open data visualization' and 'visualization evaluation' keywords in the IEEE Xplore and ACM digital libraries. Articles are included if they provide empirical evidence through controlled user experiments, or provide a review of such empirical studies. The qualitative synthesis of the studies focuses on identifying and classifying the methods, and the conditions under which they are examined to positively affect the visualization outcomes. 
Findings: The keyword search yields 760 studies, of which 30 are included after the title/abstract review. The classification of the included articles shows five distinct methods: interactive design, aesthetic (artistic) style, storytelling, decorative elements that do not provide extra information (including text, images, and embellishments on the graphs), and animation. Studies on decorative elements show consistency on the positive effects of these elements on user engagement and recall but are less consistent in their examination of user performance. This inconsistency could be attributable to the particular data type or specific design method used in each study. The interactive design studies are consistent in their findings of a positive effect on the outcomes. Storytelling studies show some inconsistencies regarding the design effect on user engagement, enjoyment, recall, and performance, which could be indicative of the specific conditions required for the use of this method. The last two methods, aesthetics and animation, have been less frequent in the included articles and provide consistent positive results on some of the outcomes. Implications for e-government: The review of visualization best-practice methods shows that each of these methods is beneficial under specific conditions. By using these methods in potentially beneficial conditions, OGD practices can encourage a wide range of individuals to engage and work with government data and ultimately participate in government policy-making procedures.

Keywords: best practices, data visualization, literature review, open government data

Procedia PDF Downloads 105
366 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and the geometry of the track structure. Ballast also holds the track in place as the trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as a quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, excavation width, volume of the track skeleton (sleepers and rails), and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network by using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) is generated and the ballast profiles are automatically extracted from it.
The surplus in ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects ballast surpluses whose volumes closely match the total quantities of spoil ballast actually excavated.
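The per-meter excavation volume described in the abstract can be sketched in a few lines of Python. The formula follows the abstract's description (excavated cross-section minus the track skeleton occupying it); all numerical values below are illustrative placeholders, not SNCF parameters.

```python
def ballast_volume_per_meter(excavation_depth, excavation_width,
                             skeleton_volume_per_sleeper, sleeper_spacing):
    """Excavated ballast volume per meter of track (m^3/m):
    the excavated cross-section minus the portion of it occupied
    by the track skeleton (sleepers and rails)."""
    cross_section = excavation_depth * excavation_width                  # m^2
    skeleton_per_meter = skeleton_volume_per_sleeper / sleeper_spacing   # m^3/m
    return cross_section - skeleton_per_meter

# Illustrative figures: a 0.3 m deep, 4.0 m wide cut, 0.12 m^3 of
# skeleton per sleeper, sleepers spaced every 0.6 m.
volume = ballast_volume_per_meter(0.3, 4.0, 0.12, 0.6)
```

Multiplying this per-meter figure along a renewal segment gives the total quantity to be excavated and, by difference with the theoretical profile, the surplus to be recycled.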

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 109
365 The Effect of Extruded Full-Fat Rapeseed on Productivity and Eggs Quality of Isa Brown Laying Hens

Authors: Vilma Sasyte, Vilma Viliene, Agila Dauksiene, Asta Raceviciute-Stupeliene, Romas Gruzauskas, Saulius Alijosius

Abstract:

The eight-week feeding trial was conducted with 27-wk-old Isa Brown laying hens to study the effect of dry extrusion processing on the partial reduction of the total glucosinolates content of locally produced rapeseed, and on the productivity and egg quality parameters of laying hens. Thirty-six hens were randomly assigned to one of three treatments (CONTR, AERS and HERS), each comprising 12 individually caged layers. The main composition of the diets was the same, but extruded soya bean seed was replaced with 2.5% extruded rapeseed in the AERS group and 4.5% in the HERS group. Rapeseed was extruded together with faba beans. Due to the extrusion process, the glucosinolates content was reduced by 7.83 µmol/g of rapeseed. The results of the trial show that, over the whole experimental period, egg production parameters such as the average feed intake (6529.17 vs. 6257 g/hen/14 day; P < 0.05) and laying intensity (94.35% vs. 89.29%; P < 0.05) were statistically different for HERS and CONTR laying hens, respectively. Only the feed conversion ratio to produce 1 kg of eggs was 11% lower in the AERS group than in the CONTR group (P < 0.05). Analysing the effect of extruded rapeseed on egg mass, no statistical differences between treatments were determined. The dietary treatments did not affect egg weight, albumen height, Haugh units, or albumen and yolk pH. However, the HERS group produced eggs with a more intensive yolk colour and higher redness (a) and yellowness (b) values. The inclusion of full-fat extruded rapeseed had no effect on eggshell quality parameters, i.e. shell breaking strength, shell weight with and without coat, and shell index, but the experimental groups produced eggs with thinner shells (P < 0.05). The internal egg quality analysis showed that at the higher extruded rapeseed level (4.5%) in the diet, the total cholesterol in the egg yolk decreased by 1.92 mg/g in comparison with the CONTR group (P < 0.05).
Eggs laid by hens fed the diets containing 2.5% and 4.5% extruded rapeseed showed a higher ∑PNRR/∑SRR ratio and a lower ∑(n-6)/∑(n-3) ratio of egg yolk fatty acids than those in the CONTR group. Eggs of hens fed different amounts of extruded rapeseed presented an n-6 : n-3 ratio that changed from 5.17 to 4.71. The analysis of the ratio of hypocholesterolemic to hypercholesterolemic fatty acids (H/H), which is based on the functional properties of fatty acids, found that this ratio was significantly higher in laying hens fed diets supplemented with 4.5% extruded rapeseed than in the CONTR group, demonstrating the positive effects of extruded rapeseed on egg quality. The results of the trial confirmed that extruded full-fat rapeseed at up to 4.5% is suitable to replace soybean in the compound feed of laying hens.
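As an illustration of the fatty acid ratio calculations discussed above, the sketch below computes an n-6:n-3 ratio and an H/H index from a yolk fatty acid profile. The profile percentages are hypothetical (not the trial's measured data), and the acid groupings follow one common formulation of the H/H index rather than anything stated in the paper.

```python
def ratio(numerator_acids, denominator_acids, profile):
    """Sum-over-sum ratio of two groups of fatty acids in a profile
    given as {acid name: percentage of total fatty acids}."""
    num = sum(profile[a] for a in numerator_acids)
    den = sum(profile[a] for a in denominator_acids)
    return num / den

# Hypothetical yolk fatty acid percentages, for illustration only:
profile = {"C14:0": 0.4, "C16:0": 24.0, "C18:1": 42.0,
           "C18:2": 15.0, "C18:3": 1.5, "C20:4": 1.8, "C22:6": 1.0}

n6 = ["C18:2", "C20:4"]             # omega-6 acids
n3 = ["C18:3", "C22:6"]             # omega-3 acids
hypo = ["C18:1", "C18:2", "C18:3"]  # hypocholesterolemic acids
hyper = ["C14:0", "C16:0"]          # hypercholesterolemic acids

n6_n3 = ratio(n6, n3, profile)      # lower is nutritionally better
hh = ratio(hypo, hyper, profile)    # higher is nutritionally better
```

With a real chromatographic profile, the same two calls reproduce the ratios used to compare the CONTR, AERS, and HERS groups.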

Keywords: egg quality, extruded full-fat rapeseed, laying hens, productivity

Procedia PDF Downloads 215
364 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features also promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. Participants seek the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the cause of the outbreak of the ‘Hot-Potato Effect,’ which in turn demands a better and more efficient model. The model should be flexible and have diverse applications; therefore, a model with applications in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be further extended and adapted for future research, and equipped with tools that suit it to the field of finance. In this case, the Stochastic Pi Calculus model seems an ideal fit for financial applications, owing to its proven record in the field of biology.
It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided the application of this model is further extended. This model focuses on solving the problem which led to the ‘Flash Crash’, namely the ‘Hot-Potato Effect.’ The model consists of small sub-systems, which can be integrated to form a large system. It is designed such that the behavior of ‘noise traders’ is treated as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the ‘Flash Crash,’ the ‘Hot-Potato Effect,’ evaluation of orders, and time delay in further detail. Future work needs to focus on the calibration of the modules so that they interact correctly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
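The stochastic machinery underlying stochastic pi calculus reduces, in simulation, to a Gillespie-style race between communication channels: each channel fires after an exponentially distributed waiting time, chosen with probability proportional to its rate. The sketch below illustrates that mechanism only; the channel names and rates are hypothetical and are not taken from the paper's model.

```python
import random

def gillespie_step(rates, rng):
    """One reduction step in the style of stochastic pi calculus:
    the system waits an Exp(total_rate) time, then one channel fires,
    chosen with probability proportional to its rate."""
    total = sum(rates.values())
    wait = rng.expovariate(total)
    r = rng.uniform(0, total)
    for channel, rate in rates.items():
        r -= rate
        if r <= 0:
            return channel, wait
    return channel, wait  # guards against floating-point round-off

rng = random.Random(42)
# Hypothetical channel rates (events per second), illustration only:
rates = {"mm_quote": 50.0, "fund_trade": 5.0, "noise_trade": 20.0}
events = [gillespie_step(rates, rng)[0] for _ in range(1000)]
```

Over many steps, fast channels (here, market-maker quoting) dominate the event stream, which is how such a simulation can reproduce activity patterns like rapid inventory passing between intermediaries.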

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 77
363 Forecasting Residential Water Consumption in Hamilton, New Zealand

Authors: Farnaz Farhangi

Abstract:

Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand’s fastest growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and have no idea how much water they use. Hence, one of the proposed objectives of this research is forecasting water consumption using different methods. The residential water consumption time series exhibits seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with seasonality and affects the forecasting results in different ways. According to the forecasting literature, pre-processing (de-trending and de-seasonalization) is essential to obtain better forecasting results, while other researchers maintain that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the accuracy and robustness of the combined method (hybrid model) against the individual methods. In order to use ARIMA, the data should be stationary.
ANN also has successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the methods. Because water demand is dominated by different seasonalities, I combine different methods to find their sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is the reduction of errors through averaging the individual models’ forecasts. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions by the different methods vary. ANN produces more accurate forecasts than the other methods, and pre-processing is essential when using seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
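The hybrid averaging step can be sketched directly: each model produces a forecast vector, and the hybrid forecast is their elementwise mean. The five-day forecast and actual values below are invented for illustration and are not the Hamilton data.

```python
def hybrid_forecast(forecasts):
    """Combine the individual models' forecasts by simple averaging,
    which tends to cancel their opposing errors."""
    horizon = len(forecasts[0])
    return [sum(f[t] for f in forecasts) / len(forecasts)
            for t in range(horizon)]

def mae(actual, predicted):
    """Mean absolute error of a forecast against the observed series."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical 5-day-ahead forecasts (consumption units arbitrary):
arima  = [102.0,  98.0, 105.0, 110.0,  96.0]  # double seasonal ARIMA
ann    = [ 98.0, 101.0,  99.0, 104.0, 100.0]  # artificial neural network
actual = [100.0, 100.0, 102.0, 107.0,  98.0]

hybrid = hybrid_forecast([arima, ann])
errors = {"ARIMA": mae(actual, arima), "ANN": mae(actual, ann),
          "hybrid": mae(actual, hybrid)}
```

In this toy example the two models err on opposite sides of the actual series, so the averaged forecast has a much lower error than either model alone, which is exactly the effect the hybrid approach relies on.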

Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model

Procedia PDF Downloads 337
362 Treatment of Onshore Petroleum Drill Cuttings via Soil Washing Process: Characterization and Optimal Conditions

Authors: T. Poyai, P. Painmanakul, N. Chawaloesphonsiya, P. Dhanasin, C. Getwech, P. Wattana

Abstract:

Drilling is a key activity in oil and gas exploration and production. Drilling always requires the use of drilling mud for lubricating the drill bit and controlling the subsurface pressure. As drilling proceeds, a considerable amount of cuttings, or rock fragments, is generated. In general, water or Water Based Mud (WBM) serves as the drilling fluid for the top hole section. The cuttings generated from this section are non-hazardous and are normally applied as fill materials. On the other hand, drilling the bottom hole to the reservoir section uses Synthetic Based Mud (SBM), which is composed of synthetic oils. The bottom-hole cuttings (SBM cuttings) are regarded as a hazardous waste, in accordance with government regulations, due to the presence of hydrocarbons. Currently, the SBM cuttings are disposed of as an alternative fuel and raw material in cement kilns. Instead of burning, this work aims to propose an alternative for drill cuttings management with two ultimate goals: (1) reduction of hazardous waste volume; and (2) making use of the cleaned cuttings. Soil washing was selected as the major treatment process. The physicochemical properties of the drill cuttings were analyzed, such as size fraction, pH, moisture content, and hydrocarbons. The particle size of the cuttings was analyzed via the light scattering method. Oil present in the cuttings was quantified in terms of total petroleum hydrocarbon (TPH) through gas chromatography equipped with a flame ionization detector (GC-FID). Other components were measured by the standard methods for soil analysis. The effects of different washing agents, liquid-to-solid (L/S) ratio, washing time, mixing speed, rinse-to-solid (R/S) ratio, and rinsing time were also evaluated. It was found that the drill cuttings had an electrical conductivity of 3.84 dS/m, a pH of 9.1, and a moisture content of 7.5%. The TPH in the cuttings was in the diesel range, with concentrations from 20,000 to 30,000 mg/kg dry cuttings.
The majority of the cuttings particles had a mean diameter of 50 µm, representing the silt fraction. The results also suggested that a green solvent was the most promising for cuttings treatment regarding occupational health, safety, and environmental benefits. The optimal washing conditions were obtained at an L/S of 5, a washing time of 15 min, a mixing speed of 60 rpm, an R/S of 10, and a rinsing time of 1 min. After the washing process, three fractions (clean cuttings, spent solvent, and wastewater) were considered, and recommendations were provided for each. Residual TPH of less than 5,000 mg/kg was detected in the clean cuttings, which can then be used for various purposes. The spent solvent had a calorific value higher than 3,000 cal/g and can be used as an alternative fuel; otherwise, the used solvent can be recovered using distillation or chromatography techniques. Finally, the generated wastewater can be combined with the produced water and managed by re-injection into the reservoir.
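The reported optimal ratios translate directly into batch quantities for a washing operation. The sketch below assumes the L/S and R/S ratios are mass-based and that the liquids have unit density; the 100 kg batch size is also an illustrative assumption, not a figure from the study.

```python
def washing_volumes(cuttings_mass_kg, ls_ratio, rs_ratio,
                    solvent_density=1.0, water_density=1.0):
    """Solvent and rinse-water volumes (L) for one soil-washing batch,
    from the liquid-to-solid (L/S) and rinse-to-solid (R/S) ratios.
    Densities in kg/L; unit density is assumed by default."""
    solvent_volume = cuttings_mass_kg * ls_ratio / solvent_density
    rinse_volume = cuttings_mass_kg * rs_ratio / water_density
    return solvent_volume, rinse_volume

# Optimal conditions reported in the abstract: L/S = 5, R/S = 10.
solvent, rinse = washing_volumes(100.0, ls_ratio=5, rs_ratio=10)
```

Scaling these two numbers by the actual cuttings throughput gives the solvent inventory and wastewater load that the spent-solvent recovery and re-injection steps must handle.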

Keywords: drill cuttings, green solvent, soil washing, total petroleum hydrocarbon (TPH)

Procedia PDF Downloads 153
361 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies

Authors: Praniil Nagaraj

Abstract:

This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions can be categorized into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities: San Francisco, New York City, and Washington D.C., facilitating simulations. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples.
The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.
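The regression analysis described in the second contribution can be sketched with a minimal one-predictor ordinary least squares fit. The city-level numbers below are hypothetical, invented purely for illustration, and are not the collected dataset.

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x: a minimal stand-in for
    the statistical regression analysis applied to the collected data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical data: x = median rent index per neighborhood,
# y = unsheltered homeless count per 10k residents.
rent = [1.0, 1.4, 1.8, 2.2, 2.6]
unsheltered = [12.0, 15.0, 19.0, 22.0, 26.0]

intercept, slope = ols_fit(rent, unsheltered)
```

A positive fitted slope would be read as a correlation between housing costs and the unsheltered population; the multi-factor version (adding unemployment rates and labor demand) extends the same least squares machinery to several predictors.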

Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis

Procedia PDF Downloads 44
360 Comparing the Effectiveness of the Crushing and Grinding Route of Comminution to That of the Mine to Mill Route in Terms of the Percentage of Middlings Present in Processed Lead-Zinc Ore Samples

Authors: Chinedu F. Anochie

Abstract:

The presence of gangue particles in recovered metal concentrates has been a serious challenge to ore dressing engineers. Middlings lower the quality of concentrates and, in most cases, drastically affect the smelter terms, owing to the exorbitant amounts paid by mineral processing industries as treatment charges. Models which encourage the optimization of liberation operations have been utilized in most ore beneficiation industries to reduce the presence of locked particles in valuable concentrates. Moreover, methods such as the incorporation of regrind mills and scavenger, rougher, and cleaner cells into the milling and flotation plants have been widely employed to tackle these concerns and to optimize the grade-recovery relationship of metal concentrates. This work compared the crushing and grinding route of liberation to the mine-to-mill route by evaluating the proportion of middlings present in selectively processed complex Pb-Zn ore samples. To establish the effect of size reduction operations on the percentage of locked particles present in recovered concentrates, two similar samples of complex Pb-Zn ores were processed. Following the blasting operation, the first ore sample was ground directly in a ball mill (the mine-to-mill route of comminution), while the other sample was manually crushed and subsequently ground in the ball mill (the crushing and grinding route of comminution). The two samples were separately sieved in a mesh to obtain the desired representative particle sizes. An equal amount of each sample to be processed in the flotation circuit was then obtained with the aid of a weighing balance. These weighed fine particles were simultaneously processed in the flotation circuit using the selective flotation technique. Sodium cyanide, methyl isobutyl carbinol, sodium ethyl xanthate, copper sulphate, sodium hydroxide, lime, and isopropyl xanthate were the reagents used to effect differential flotation of the two ore samples.
Analysis and calculations showed that the degree of liberation obtained for the ore sample which went through the conventional crushing and grinding route of comminution was higher than that of the directly milled run-of-mine (ROM) ore. Similarly, the proportion of middlings obtained from the separated galena (PbS) and sphalerite (ZnS) concentrates was lower for the crushed and ground ore sample. Concise data has been established which shows that the mine-to-mill method of size reduction is not the ideal technique for the recovery of quality metal concentrates.
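The comparison in the abstract rests on the degree of liberation, commonly defined as the fraction of the valuable mineral occurring as free particles rather than locked in middlings. The sketch below uses that standard definition with illustrative masses, not the study's measured data.

```python
def degree_of_liberation(free_mass, locked_mass):
    """Degree of liberation: fraction of the valuable mineral present
    as free particles rather than locked in middlings (gangue-bound)."""
    return free_mass / (free_mass + locked_mass)

# Illustrative masses (g) of galena in each processed sample:
crush_grind = degree_of_liberation(free_mass=85.0, locked_mass=15.0)
mine_to_mill = degree_of_liberation(free_mass=70.0, locked_mass=30.0)
```

A higher value for the crushed-and-ground sample, as in this toy comparison, mirrors the study's finding that the conventional route liberates more of the valuable mineral and leaves fewer middlings in the concentrate.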

Keywords: comminution, degree of liberation, middlings, mine to mill

Procedia PDF Downloads 133
359 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development

Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan

Abstract:

Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, owing to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model. SSHP plants were designed for the selected sites, and the designs were priced using pricing models covering the civil, mechanical, and electrical aspects. Following feasibility studies on the designed and priced SSHP plants, a feasibility analysis was done and a design chart developed for future similar potential SSHP plant projects. The methodology followed in conducting the feasibility analysis for other potential sites consisted of developing cost and income/saving formulae, net present value (NPV) formulae, a Capital Cost Comparison Ratio (CCCR), and levelised cost formulae for SSHP projects for the different types of plant installations. It included setting up a model for the development of a design chart for an SSHP, calculating the NPV, CCCR, and levelised cost for the different scenarios within the model by varying parameters within the developed formulae, setting up the design chart for the different scenarios, and analyzing and interpreting the results.
From the interpretation of the developed design charts for feasible SSHP, it can be seen that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP. High-head, short-transmission-line, and islanded mini-grid SSHP installations are the most feasible, and the levelised cost of SSHP is high for low power generation sites. The main conclusion from the study is that the levelised cost of SSHP projects indicates that the cost of SSHP for low energy generation is high compared with the levelised cost of grid-connected electricity supply; however, the remoteness of SSHP sites for rural electrification and the cost of infrastructure to connect remote rural communities to the local or national electricity grid result in a low CCCR and render SSHP for rural electrification feasible on this basis.
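The NPV and levelised cost calculations at the heart of the methodology can be sketched with standard discounted cash flow formulae. The paper's exact formulae are not given in the abstract, so the sketch below uses the textbook definitions, and the capital cost, savings, discount rate, and energy figures are illustrative placeholders.

```python
def npv(capital_cost, annual_saving, rate, years):
    """Net present value of an SSHP plant: annual energy savings
    discounted over the project life, minus the capital outlay."""
    return sum(annual_saving / (1 + rate) ** t
               for t in range(1, years + 1)) - capital_cost

def levelised_cost(capital_cost, annual_om, annual_energy_kwh, rate, years):
    """Levelised cost of energy: discounted lifetime cost divided
    by discounted lifetime energy output."""
    disc = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    total_cost = capital_cost + annual_om * disc
    total_energy = annual_energy_kwh * disc
    return total_cost / total_energy

# Illustrative figures only (currency units arbitrary):
project_npv = npv(capital_cost=500_000, annual_saving=60_000,
                  rate=0.08, years=20)
lcoe = levelised_cost(capital_cost=500_000, annual_om=10_000,
                      annual_energy_kwh=350_000, rate=0.08, years=20)
```

Sweeping these functions over head, flow, and transmission line length scenarios is what populates a design chart: a positive NPV and an LCOE below the grid-connection alternative mark a site as feasible.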

Keywords: cost, feasibility, rural electrification, small-scale hydropower

Procedia PDF Downloads 224
358 Approach to Honey Volatiles' Profiling by Gas Chromatography and Mass Spectrometry

Authors: Igor Jerkovic

Abstract:

Biodiversity of flora provides many different nectar sources for the bees. Unifloral honeys possess distinctive flavours, mainly derived from their nectar sources (characteristic volatile organic components (VOCs)). Specific or nonspecific VOCs (chemical markers) can be used for unifloral honey characterisation in addition to melissopalynological analysis. The main honey volatiles belong, in general, to three principal categories: terpenes, norisoprenoids, and benzene derivatives. Some of these substances have been described as characteristic of the floral source, while other compounds, such as several alcohols, branched aldehydes, and furan derivatives, may be related to the microbial purity of the honey and to its processing and storage conditions. Selection of the extraction method for honey volatiles profiling should consider that heating the honey produces artefacts, and therefore conventional methods of VOC isolation (such as hydrodistillation) cannot be applied to honey. A two-way approach to the isolation of honey VOCs was applied, using headspace solid-phase microextraction (HS-SPME) and ultrasonic solvent extraction (USE). The extracts were analysed by gas chromatography and mass spectrometry (GC-MS). HS-SPME (with fibers of different polarity, such as polydimethylsiloxane/divinylbenzene (PDMS/DVB) or divinylbenzene/carboxene/polydimethylsiloxane (DVB/CAR/PDMS)) enabled the isolation of highly volatile headspace VOCs of the honey samples. Among them, some characteristic or specific compounds can be found, such as 3,4-dihydro-3-oxoedulan (in Centaurea cyanus L. honey) or 1H-indole, methyl anthranilate, and cis-jasmone (in Citrus unshiu Marc. honey). USE with different solvents (mainly dichloromethane or the mixture pentane : diethyl ether 1 : 2 v/v) enabled the isolation of less volatile and semi-volatile VOCs of the honey samples.
Characteristic compounds from C. unshiu honey extracts were caffeine, 1H-indole, 1,3-dihydro-2H-indol-2-one, methyl anthranilate, and phenylacetonitrile. Sometimes, the selection of a solvent sequence was useful for more complete profiling, such as sequence I: pentane → diethyl ether, or sequence II: pentane → pentane/diethyl ether (1:2, v/v) → dichloromethane. The extracts with diethyl ether contained hydroquinone and 4-hydroxybenzoic acid as the major compounds, while (E)-4-(r-1’,t-2’,c-4’-trihydroxy-2’,6’,6’-trimethylcyclo-hexyl)but-3-en-2-one predominated in the dichloromethane extracts of Allium ursinum L. honey. With this two-way approach, it was possible to obtain a more detailed insight into the honey volatile and semi-volatile compounds and to minimize the risk of compound discrimination due to partial extraction, which is of significant importance for complete honey profiling and the identification of chemical biomarkers that can complement the pollen analysis.

Keywords: honey chemical biomarkers, honey volatile compounds profiling, headspace solid-phase microextraction (HS-SPME), ultrasonic solvent extraction (USE)

Procedia PDF Downloads 202