Search results for: mass flow ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11592

192 Developing a Deep Understanding of the Immune Response in Hepatitis B Virus Infected Patients Using a Knowledge Driven Approach

Authors: Hanan Begali, Shahi Dost, Annett Ziegler, Markus Cornberg, Maria-Esther Vidal, Anke R. M. Kraft

Abstract:

Chronic hepatitis B virus (HBV) infection can be treated with nucleos(t)ide analogues (NAs), which inhibit HBV replication. However, NAs have hardly any influence on the functional cure of HBV, which is defined by hepatitis B surface antigen (HBsAg) loss. NAs need to be taken life-long, which is not feasible for all patients worldwide. Additionally, NA-treated patients are still at risk of developing cirrhosis, liver failure, or hepatocellular carcinoma (HCC). Although each patient has the same components of the immune system, immune responses vary between patients. Therefore, a deeper understanding of the immune response against HBV in different patients is necessary to understand the parameters leading to HBV cure and to use this knowledge to optimize HBV therapies. This requires seamless integration of an enormous amount of diverse and fine-grained data on viral markers, e.g., hepatitis B core-related antigen (HBcrAg) and hepatitis B surface antigen (HBsAg). The data integration system relies on the assumption that profiling human immune systems requires the analysis of various variables (e.g., demographic data, treatments, pre-existing conditions, immune cell response, or HLA typing) rather than only one. However, the values of these variables are collected independently. They are presented in a myriad of formats, e.g., Excel files, textual descriptions, lab book notes, and images of flow cytometry dot plots. Additionally, patients can be identified differently in these analyses. This heterogeneity complicates the integration of variables, as data management techniques are needed to create a unified view in which individual formats and identifiers are transparent when profiling the human immune system. The proposed study (HBsRE) aims at integrating heterogeneous data sets of 87 chronically HBV-infected patients, e.g., clinical data, immune cell response, and HLA typing, with knowledge encoded in biomedical ontologies and open-source databases into a knowledge-driven framework. This new technique enables us to harmonize and standardize heterogeneous datasets within the defined data integration model, which will be evaluated in a knowledge graph (KG). KGs are data structures that represent knowledge and data as factual statements using a graph data model. Finally, an analytic data model will be applied on top of the KG in order to develop a deeper understanding of the immune profiles of various patients and to evaluate factors playing a role in a holistic profile of patients with HBsAg loss. Additionally, our objective is to utilize this unified approach to stratify patients for new, effective treatments. This study is developed in the context of the project “Transforming big data into knowledge: for deep immune profiling in vaccination, infectious diseases, and transplantation (ImProVIT)”, which is carried out by a multidisciplinary team composed of computer scientists, infection biologists, and immunologists.
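
As an illustration of the triple-based integration described above, the following minimal sketch (Python with rdflib) shows how two heterogeneous records for the same patient could be merged into one knowledge graph. The ontology terms, identifiers, and values are invented for illustration and are not those of the HBsRE study.

```python
# Minimal sketch: harmonizing two heterogeneous patient records into one RDF
# knowledge graph. Predicate and class names are illustrative assumptions,
# not the ontology terms used by the HBsRE study.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/hbv/")

g = Graph()
g.bind("ex", EX)

# Record from a clinical Excel sheet (patient identified as "P-001")
patient = EX["patient/P-001"]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.hbsAgLevel, Literal(250.0, datatype=XSD.float)))   # IU/mL
g.add((patient, EX.treatment, EX.NucleosideAnalogue))

# Record from an immunology lab book (same patient under a lab alias)
alias = EX["patient/LAB-17"]
g.add((alias, EX.sameAs, patient))          # identifier reconciliation predicate (illustrative)
g.add((alias, EX.hlaType, Literal("HLA-A*02:01")))

# A simple pass over the unified view: every factual statement now in the graph
for s, p, o in g:
    print(s, p, o)
```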

Keywords: chronic hepatitis B infection, immune response, knowledge graphs, ontology

Procedia PDF Downloads 108
191 The Effect of Post Spinal Hypotension on Cerebral Oxygenation Using Near-Infrared Spectroscopy and Neonatal Outcomes in Full Term Parturient Undergoing Lower Segment Caesarean Section: A Prospective Observational Study

Authors: Shailendra Kumar, Lokesh Kashyap, Puneet Khanna, Nishant Patel, Rakesh Kumar, Arshad Ayub, Kelika Prakash, Yudhyavir Singh, Krithikabrindha V.

Abstract:

Introduction: Spinal anesthesia is considered the standard anesthesia technique for caesarean delivery. The incidence of spinal hypotension during caesarean delivery is 70-80%. Spinal hypotension may cause cerebral hypoperfusion in the mother, but cerebral autoregulatory mechanisms normally prevent cerebral hypoxia. Cerebral blood flow remains constant within the 50-150 mmHg range of Cerebral Perfusion Pressure (CPP). Near-infrared spectroscopy (NIRS) is a non-invasive technology that detects Cerebral Desaturation Events (CDEs) immediately, compared with other conventional intraoperative monitoring techniques. Objective: The primary aim of the study was to correlate the change in cerebral oxygen saturation measured with NIRS with the fall in mean blood pressure after spinal anaesthesia, and to find out the effects of spinal hypotension on neonatal APGAR score, neonatal acid-base variations, and the presence of postoperative delirium (POD). Methodology: NIRS sensors were attached to the forehead of all patients, and baseline readings of cerebral oxygenation over the right and left frontal regions and of mean blood pressure were noted. The subarachnoid block was given with hyperbaric 0.5% bupivacaine plus fentanyl, the dose being determined by the individual anaesthesiologist. Co-loading with IV crystalloid solutions was given to the patient. Blood pressure and cerebral saturation were recorded every minute for 30 min. Hypotension was defined as a fall in MAP of more than 20% from baseline values. Patients who developed hypotension were treated with an IV bolus of phenylephrine/ephedrine. Umbilical cord blood samples were taken for blood gas analysis, and the neonatal APGAR score was noted by a neonatologist. Study design: A prospective observational study conducted in a population of thirty ASA II and III parturients scheduled for lower segment caesarean section (LSCS). Results: The mean fall in regional cerebral saturation was 28.48 ± 14.7% with respect to a mean fall in blood pressure of 38.92 ± 8.44 mm Hg. The correlation coefficient between the fall in saturation and the fall in mean blood pressure after the subarachnoid block was 0.057 (p = 0.7). A fall in regional cerebral saturation occurred 2 ± 1 min before the fall in mean blood pressure. Twenty-nine out of thirty patients required vasopressors during hypotension. The first dose of vasopressor was needed at 6.02 ± 2 min after the block. The mean APGAR score was 7.86 and 9.74 at 1 and 5 min of birth, respectively, and the mean umbilical arterial pH was 7.3 ± 0.1. According to the DRS-98 (Delirium Rating Scale), the mean delirium rating scores on postoperative days 1 and 2 were 0.1 and 0.7, respectively. Discussion: There was a fall in regional cerebral oxygen saturation, which started before the fall in mean blood pressure readings but was not statistically significant. The maximal fall in blood pressure requiring vasopressors occurred within 10 min of the subarachnoid block. Neonatal APGAR scores and acid-base variations remained in the normal range despite maternal hypotension, and there was no incidence of postoperative delirium in patients with post-spinal hypotension.
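
A minimal sketch of the reported correlation analysis is shown below (Python/SciPy); the per-patient values are placeholders rather than the study's data.

```python
# Sketch: Pearson correlation between per-patient fall in regional cerebral
# saturation (%) and fall in mean arterial pressure (mm Hg). Values below are
# placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

fall_rsco2 = np.array([30.1, 12.5, 45.0, 28.3, 22.7, 35.9, 18.4])   # %
fall_map   = np.array([40.2, 35.1, 44.8, 30.5, 38.0, 47.3, 36.6])   # mm Hg

r, p = pearsonr(fall_rsco2, fall_map)
print(f"r = {r:.3f}, p = {p:.3f}")   # the study reported r = 0.057, p = 0.7
```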

Keywords: cerebral oxygenation, LSCS, NIRS, spinal hypotension

Procedia PDF Downloads 69
190 Supply Chain Improvement of the Halal Goat Industry in the Autonomous Region in Muslim Mindanao

Authors: Josephine R. Migalbin

Abstract:

Halal is an Arabic word meaning "lawful" or "permitted". When it comes to food and consumables, halal is the dietary standard of Muslims. The Autonomous Region in Muslim Mindanao (ARMM) has a comparative advantage when it comes to the halal industry because it is the only Muslim region in the Philippines and the natural starting point for the establishment of a halal industry in the country. The region has identified goat production not only for domestic consumption but also for the export market. Goat production is one of its strengths due to cultural compatibility. There is high demand for goats during Ramadhan and Eid ul-Adha. The study aimed to provide an overview of the ARMM halal goat industry; to map out the specific supply chain of halal goat; and to analyze the performance of the halal goat supply chain in terms of efficiency, flexibility, and overall responsiveness. It also aimed to identify areas for improvement in the supply chain, such as behavioural, institutional, and process aspects, in order to provide recommendations for improvement towards efficient and effective production and marketing of halal goats, subsequently improving the plight of the actors in the supply chain. Generally, the raising of goats is characterized by backyard production (92.02%). There are four interrelated factors significantly affecting the production of goats: breeding prolificacy, prevalence of diseases, feed abundance, and pre-weaning mortality rate. The institutional buyers are mostly traders, restaurants/eateries, supermarkets, and meat shops, among others. The municipalities of Midsayap and Pikit, in another region, and Parang, among other municipalities in ARMM, are the major goat sources. In addition to these major supply centers, Siquijor, an island province in the Visayas, is becoming a key source of goats. Goats are usually gathered by traders/middlemen and brought to the public markets. Meat vendors purchase goats directly from raisers; the animals are slaughtered and sold fresh in wet markets. It was observed that demand is increasing at 2% per year and that supply is not enough to meet the demand. The farm gate price is 2.04 USD to 2.11 USD/kg liveweight. Industry information is shared by three key participants: raisers, traders, and buyers. All respondents reported that information is exchanged personally, built upon past experience, and that there is no full disclosure of information among the key participants in the chain. The information flow in the industry is fragmented in nature, such that no total industry picture exists. In the last five years, numerous local and foreign agencies have undertaken several initiatives for the development of the halal goat industry in ARMM. The major issues include productivity, which is the greatest challenge, difficulties in accessing technical support channels, and lack of market linkage and consolidation. To address the various issues and concerns of the industry players, there is a need to intensify appropriate technology transfer through extension activities, improve marketing channels by grouping producers, strengthen veterinary services, and provide capital windows to improve facilities and reduce logistics and transaction costs in the entire supply chain.

Keywords: autonomous region in Muslim Mindanao, halal, halal goat industry, supply chain improvement

Procedia PDF Downloads 335
189 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth, from a single transformed cancer cell up to a clinically apparent mass, spans a wide range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of tumor development. Tumor development prognosis can now be made (without making patients undergo annoying medical examinations or painful invasive procedures) if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies with application to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modeling with deterministic and stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest and should be explored up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improving the performance of these models by guaranteeing efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, …) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques that speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is tested using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
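
The generation-level synchronization described here (worker threads each update a band of the grid, then all are joined before the next generation starts) can be sketched as below. This is a toy stochastic growth rule written in Python for illustration only; it is not the Poleszczuk-Enderling model nor the authors' Java/C++ implementation.

```python
# Sketch of the synchronization pattern for a parallel stochastic CA:
# each worker computes the next state of a band of rows from the read-only
# current grid; all workers are joined before the next generation starts.
# Toy growth rule only, not the Poleszczuk-Enderling model.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N, GENERATIONS, WORKERS, P_DIVIDE = 200, 50, 4, 0.2

grid = np.zeros((N, N), dtype=np.uint8)
grid[N // 2, N // 2] = 1          # single transformed cell in the centre

def update_band(current, row0, row1, seed):
    """Next state for rows [row0, row1): an empty site becomes occupied with
    probability P_DIVIDE if it has at least one occupied neighbour."""
    local_rng = np.random.default_rng(seed)
    band = current[row0:row1].copy()
    for i in range(row0, row1):
        for j in range(N):
            if current[i, j] == 0:
                neigh = current[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                if neigh.any() and local_rng.random() < P_DIVIDE:
                    band[i - row0, j] = 1
    return row0, band

bands = [(k * N // WORKERS, (k + 1) * N // WORKERS) for k in range(WORKERS)]
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    for gen in range(GENERATIONS):
        futures = [pool.submit(update_band, grid, r0, r1, gen * WORKERS + k)
                   for k, (r0, r1) in enumerate(bands)]
        new_grid = np.empty_like(grid)
        for fut in futures:                 # implicit barrier: join all bands
            r0, band = fut.result()
            new_grid[r0:r0 + band.shape[0]] = band
        grid = new_grid

print("occupied cells after", GENERATIONS, "generations:", int(grid.sum()))
```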

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 244
188 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination happened in the 1980s, when an England-based mercury reclamation processing plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations that exceeded acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. This environmental issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory storage facility for over two decades. Recently, the theft of some containers of the toxic substance from the Thor Chemicals warehouse and the subsequent fire that ravaged the facility put the factory further under scrutiny, escalating the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, the Umgeni River, and the Inanda Dam, used as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions favour the microbial activity that methylates mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are blooming. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sample site from the point source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam. One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference post hoc test to determine whether mercury contamination varies with distance from the source point of pollution. Flow injection atomic spectrometry (FIAS) analysis will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and among aquatic macrophytes, water hyacinth is expected to accumulate a higher concentration of mercury than terrestrial plants and crops.
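
A minimal sketch of the planned statistical comparison is given below (Python/SciPy). The concentrations are placeholders, and unadjusted pairwise t-tests stand in for the LSD post hoc test.

```python
# Sketch of the planned analysis: one-way ANOVA of Hg concentration (ug/g)
# across sampling sites, followed by pairwise comparisons. Values are
# placeholders, not field measurements.
from scipy.stats import f_oneway, ttest_ind

sites = {
    "Thor_point_source": [12.4, 10.8, 14.1, 11.9],
    "Mngceweni_River":   [3.2, 4.1, 2.8, 3.6],
    "Umgeni_River":      [1.1, 0.9, 1.4, 1.2],
    "Inanda_Dam":        [0.4, 0.6, 0.5, 0.3],
}

f_stat, p_value = f_oneway(*sites.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Crude stand-in for the LSD post hoc test: unadjusted pairwise t-tests
names = list(sites)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        t, p = ttest_ind(sites[names[i]], sites[names[j]])
        print(f"{names[i]} vs {names[j]}: p = {p:.4f}")
```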

Keywords: mercury, phytoremediation, Thor Chemicals, water hyacinth

Procedia PDF Downloads 223
187 Study of the Association between Salivary Microbiological Data, Oral Health Indicators, Behavioral Factors, and Social Determinants among Post-COVID Patients Aged 7 to 12 Years in Tbilisi City

Authors: Lia Mania, Ketevan Nanobashvili

Abstract:

Background: The coronavirus disease COVID-19 became the cause of a global health crisis during the pandemic. This study aims to fill the paucity of epidemiological studies on the impact of COVID-19 on the oral health of pediatric populations. Methods: An observational, cross-sectional study was conducted in Tbilisi (the capital of Georgia) among 7- to 12-year-old PCR- or rapid-test-confirmed post-COVID populations in all districts of Tbilisi (10 districts in total). 332 beneficiaries who had been infected with COVID-19 within one year were included in the study. The population was selected in schools of Tbilisi according to the principle of cluster selection, with simple random selection within the selected clusters. According to this principle, an equal number of beneficiaries was selected in all districts of Tbilisi. By July 1, 2022, according to National Center for Disease Control and Public Health data (NCDC.Ge), the number of test-confirmed cases in the population aged 0-18 in Tbilisi was 115,137 children (17.7% of all confirmed cases). The number of patients to be examined was determined by the sample size. Oral screening, microbiological examination of saliva, and administration of oral health questionnaires to guardians were performed. Statistical processing of the data was done with SPSS-23. Risk factors were estimated by odds ratio and logistic regression with 95% confidence intervals. Results: Statistically reliable differences between the means of oral health indicators in the asymptomatic and symptomatic COVID-infected groups were: for caries intensity (DMF+def), t = 4.468, p = 0.000; for the modified gingival index (MGI), t = 3.048, p = 0.002; for the simplified oral hygiene index (S-OHI), t = 4.853, p = 0.000. Symptomatic COVID infection has a reliable effect on the oral microbiome (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis); (n = 332; 77.3% vs. n = 332; 58.0%; OR = 2.46, 95% CI: 1.318-4.617). According to the logistic regression, the severity of the COVID infection has a significant effect on the frequency of pathogenic and conditionally pathogenic bacteria in the oral cavity, B = 0.903, AOR = 2.467 (CI 1.318-4.617). Symptomatic COVID infection affects oral health indicators regardless of the presence of other risk factors, such as parental employment status, tooth brushing behaviors, carbohydrate meals, and fruit consumption (p < 0.05). Conclusion: Risk factors (parental employment status, tooth brushing behaviors, carbohydrate consumption) were associated with poorer oral health status in a post-COVID population of 7- to 12-year-old children. However, a risk factor such as a symptomatic course of COVID infection affected the oral microbiome in terms of the abundant growth of pathogenic and conditionally pathogenic bacteria (Staphylococcus aureus, Candida albicans, Pseudomonas aeruginosa, Streptococcus pneumoniae, Staphylococcus epidermidis) and further worsened oral health indicators. Thus, a close association was established between symptomatic COVID infection and microbiome changes in the post-COVID period, as well as between oral health indicator variables and the symptomatic course of COVID infection.
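
For illustration, a minimal sketch of an odds ratio with a 95% Wald confidence interval from a 2x2 table is given below; the cell counts are placeholders, not the study's data.

```python
# Sketch: odds ratio with a 95% Wald confidence interval from a 2x2 table
# (pathogen-positive saliva vs. symptomatic / asymptomatic COVID-19 course).
# Cell counts are placeholders, not the study's data.
import math

a, b = 120, 35   # symptomatic:  pathogen-positive, pathogen-negative
c, d = 100, 77   # asymptomatic: pathogen-positive, pathogen-negative

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.3f}-{hi:.3f}")
```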

Keywords: oral microbiome, COVID-19, population based research, oral health indicators

Procedia PDF Downloads 70
186 Effect of Fertilization and Combined Inoculation with Azospirillum brasilense and Pseudomonas fluorescens on Rhizosphere Microbial Communities of Avena sativa (Oats) and Secale Cereale (Rye) Grown as Cover Crops

Authors: Jhovana Silvia Escobar Ortega, Ines Eugenia Garcia De Salamone

Abstract:

Cover crops are an agri-technological alternative for improving soil properties. Cover crops such as oats and rye can be used to reduce erosion and favor system sustainability when they are grown in the same agricultural cycle as the soybean crop. Soybean is very profitable, but its low contribution of easily decomposable residues, due to its low C/N ratio, leaves the soil exposed to erosion and raises the need to reduce its monoculture. Furthermore, inoculation with plant growth promoting rhizobacteria contributes to the establishment, development, and production of several cereal crops. However, there is little information on its effects on forage crops, which are often used as cover crops to improve soil quality. In order to evaluate the effect of combined inoculation with Azospirillum brasilense and Pseudomonas fluorescens on rhizosphere microbial communities, field experiments were conducted in the west of Buenos Aires province, Argentina, with a split-split plot randomized complete block factorial design with three replicates. The factors were type of cover crop, inoculation, and fertilization. In the main plot, two levels of fertilization (0 and 7 40-0-5 NPKS) were established at sowing. Rye (Secale cereale cultivar Quehué) and oats (Avena sativa var. Aurora) were sown in the subplots. In the sub-subplots, two inoculation treatments were applied: without and with application of a combined inoculant containing A. brasilense and P. fluorescens. Because the growth of cover crops usually has to be stopped with the herbicide glyphosate, rhizosphere soil from the 0-20 and 20-40 cm layers was sampled at three sampling times: before glyphosate application (BG), a month after glyphosate application (AG), and at soybean harvest (SH). Community-level physiological profiles (CLPP) and the Shannon index of microbial diversity (H) were obtained by multivariate Principal Component analysis. Also, the most probable number (MPN) of nitrifiers and cellulolytics was determined using selective liquid media for each functional group. The CLPP of rhizosphere microbial communities showed significant differences between sampling times. There was no interaction between sampling times and either type of cover crop or inoculation. Rhizosphere microbial communities of samples obtained BG had different CLPP with respect to the samples obtained at the AG and SH sampling times. Fertilizer and depth of sampling also caused changes in the CLPP. The H diversity index of rhizosphere microbial communities of rye at the BG sampling time was higher than that associated with oats. The MPN of both microbial functional types was lower in the deeper layer, since these microorganisms are mostly aerobic. The MPN of nitrifiers decreased in the rhizosphere of both cover crops only AG. At the BG sampling time, the MPN of both microbial types was larger than those obtained for AG and SH. This may mean that the glyphosate application could cause fairly permanent changes in these microbial communities, which can be considered bio-indicators of soil quality. Inoculation and fertilizer inputs could be included to improve the management of these cover crops because they can have a significant positive effect on the sustainability of the agro-ecosystem.
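
A minimal sketch of the Shannon diversity index computation (H = -Σ pᵢ ln pᵢ) from CLPP-style substrate responses is shown below, with placeholder values rather than measured well responses.

```python
# Sketch: Shannon diversity index H = -sum(p_i * ln p_i) from community-level
# physiological profile (CLPP) well responses. Values are placeholders.
import numpy as np

# colour development (substrate use) for one rhizosphere sample, 10 substrates
well_response = np.array([0.80, 0.45, 0.10, 0.95, 0.30, 0.00, 0.60, 0.25, 0.70, 0.15])

p = well_response / well_response.sum()          # relative activity per substrate
p = p[p > 0]                                     # 0 * ln(0) is taken as 0
H = -np.sum(p * np.log(p))
print(f"Shannon index H = {H:.3f}")
```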

Keywords: community level of physiological profiles, microbial diversity, plant growth promoting rhizobacteria, rhizosphere microbial communities, soil quality, system sustainability

Procedia PDF Downloads 408
185 Application of Pedicled Perforator Flaps in Large Cavities of the Breast

Authors: Neerja Gupta

Abstract:

Objective: Reconstruction of large cavities of the breast without contralateral symmetrisation. Background: Reconstruction of the breast includes a wide spectrum of procedures, from displacement to regional and distant flaps. Pedicled perforator flaps cover a wide spectrum of reconstruction surgery for all quadrants of the breast, especially in patients with comorbidities. These axial flaps, used singly or as an adjunct, are based on a near-constant perforator vessel; a ratio of 2:1 at its entry into the flap is adequate to maintain vascularity. The perforators of the lateral chest wall, viz. LICAP and LTAP, have overlapping perforasomes without clear demarcation. LTAP is localized in the narrow zone between the lateral breast fold and the anterior axillary line, 2.5-3.8 cm from the fold. MICAP are localized 1-2 cm from the sternum. Being 1-2 mm in diameter, a single perforator is enough to maintain the flap. LICAP has a dominant perforator in the 6th-11th spaces, while LTAP has higher-placed dominant perforators in the 4th and 5th spaces. Methodology: Six consecutive patients who underwent reconstruction of the breast with pedicled perforator flaps were retrospectively analysed. Selection of the flap was based on the size and location of the tumour, anticipated volume loss, willingness to undergo contralateral symmetrisation, cosmetic expectations, and the finances available. Three patients underwent vertical LTAP, the distal limit of the flap being the inframammary crease. Three patients underwent MICAP, oriented along the axis of the rib, the distal limit being the anterior axillary line. Preoperative identification was done using a unidirectional handheld Doppler. The flap was raised caudal to cranial, the pivot point of rotation being the vessel's entry into the skin. The donor area was determined by the skin pinch. Flap harvest time was 20-25 minutes. Intraoperative vascularity was assessed with dermal bleed. The patients' immediate pre-operative, post-operative, and follow-up photographs were compared independently by two breast surgeons. Patients were given the (licensed) BREAST-Q questionnaire for scoring. Results: The median age of the six patients was 46. Each patient had a hospital stay of 24 hours. None of the patients was willing to have contralateral symmetrisation. The specimen dimensions ranged from 8 x 6.8 x 4 cm to 19 x 16 x 9 cm. The reconstructed breast volume ranged from 30 percent to 45 percent. All wide excisions had free margins on frozen section. The mean flap dimensions were 12 x 5 x 4.5 cm. One LTAP underwent marginal necrosis and delayed wound healing due to seroma. Three patients had phyllodes tumours, of which one was borderline and two were benign on final histopathology. The other three patients had invasive ductal cancer and have completed their radiation. The median follow-up is 7 months; the satisfaction scores at the median follow-up of 7 months are 90 for physical wellbeing and 85 for surgical results. Surgeons scored the results fair to good on the Harvard score. Conclusion: Pedicled perforator flaps are a valuable option for defects of up to three-eighths of breast volume. LTAP is preferred for tumours in the central, upper, and outer quadrants of the breast and MICAP for the inner and lower quadrants. The vascularity of the flap depends on the angiosomal territories and on adequate venous and cavity drainage.

Keywords: breast, oncoplasty, pedicled, perforator

Procedia PDF Downloads 187
184 Farm-Women in Technology Transfer to Foster the Capacity Building of Agriculture: A Forecast from a Drought-Prone Rural Setting in India

Authors: Pradipta Chandra, Titas Bhattacharjee, Bhaskar Bhowmick

Abstract:

The foundation of the economy in India is primarily based on agriculture, yet agriculture is the most neglected sector in rural settings. Household women, in particular, take part in agriculture with high involvement. However, because of their lower education, women have limited access to financial decisions, land ownership, and technology, even though they play a vital role at the individual family level. There are limited studies on institution-wise training barriers with a focus on gender disparity. The main purpose of this paper is to find out the factors behind institution-wise training (non-formal education) barriers in technology transfer, with a focus on the participation of rural women in agriculture. For this study, primary and secondary data were collected following both qualitative and quantitative approaches. Qualitative data were collected through several field visits in the areas adjacent to Seva-Bharati, Seva Bharati Krishi Vigyan Kendra, using semi-structured questionnaires. In the next stage, detailed field surveys were conducted with close-ended questionnaires scored on a seven-point Likert scale. The sample size was 162. During data collection the focus was on including women, although some bias on the part of respondents and the interviewer might exist due to dissimilarities in observation, views, etc. In addition, the heterogeneity of the sample is not very high, although female participation is more than fifty percent. Data were analyzed using the Exploratory Factor Analysis (EFA) technique, yielding three significant factors of training barriers in technology adoption by farmers: (a) Failure of technology transfer training (TTT) comprehension, meaning that the technology takers, i.e., farmers, cannot understand the technology, either because of a language barrier or because of the way it is demonstrated by the experts/trainers. (b) Failure of TTT customization, meaning that training is not tailored to the individual farmer, gender, crop, or season. (c) Failure of TTT generalization, meaning that the absence of common training methods for individual trainers for specific crops is more prominent at the community level. The central finding is that the technology transfer training methods cannot fulfill the needs of farmers in an economically challenged area. The impact of such a study is very high in the dry lateritic and resource-scarce area of Jangalmahal in Paschim Medinipur district, West Bengal, and in areas with a similar socio-economy. At the policy level, this research may help in framing digital agriculture through the implementation of appropriate information technology for the farming community, effective and timely investment by the government with proper selection of beneficiaries, and the formation of farmers' clubs/farm science clubs. The most important research implication of this study lies in its contribution to the knowledge diffusion mechanism of the agricultural sector in India. Farmers may overcome the barriers and achieve higher productivity through adoption of modern farm practices. Corporates may take an interest in the agro-sector through investment under corporate social responsibility (CSR). The research will help in framing public and industry policy and land use patterns. Consequently, a huge mass of rural farm-women will be empowered, and the farmer community will benefit.
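
For illustration, a minimal sketch of an exploratory-factor-analysis step on Likert-scale items is given below. It uses synthetic responses, and scikit-learn's FactorAnalysis stands in for the study's actual EFA extraction and rotation settings.

```python
# Sketch: extracting latent factors from seven-point Likert survey items,
# loosely mirroring the exploratory factor analysis step. Synthetic data;
# scikit-learn's FactorAnalysis stands in for the study's EFA settings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents, n_items = 162, 12
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=3)              # three barrier factors
fa.fit(responses)
loadings = fa.components_.T                      # items x factors
for item, row in enumerate(loadings, start=1):
    print(f"item {item:2d}: " + "  ".join(f"{v:+.2f}" for v in row))
```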

Keywords: dry lateritic zone, institutional barriers, technology transfer in India, farm-women participation

Procedia PDF Downloads 375
183 After Schubert’s Winterreise: Contemporary Aesthetic Journeys

Authors: Maria de Fátima Lambert

Abstract:

Following previous studies about Writing and Seeing, this paper focuses on the aesthetic assumptions within the concept of the Winter Journey (Voyage d'Hiver/Winterreise), both in Georges Perec's saga and the Oulipo Group, vis-à-vis the creations by William Kentridge and Michael Borremans. The aesthetic and artistic connections are widespread. Nevertheless, we can identify common poetical principles shared by these different authors, not only according to the notion of ekphrasis but also following the procedures of contemporary creation in literature and the visual arts. The analysis of the ongoing process of the French writers, as individuals and as a group, and of the visual artists' practice might contribute to another, crossed definition of contemporary conception. The same title/theme was a challenge and a goal for them. Let us ask how deeply the concept inspired them and which symbolic underpinnings were directing their poetical achievements. The idea of an inner journey became the main point and opened, “over” and “across”, a shared path worth following. The authors were chosen due to the resilient content of their visual and written images, and in search of the reasons that might have driven their conceptual basis. In Perec's “Winter Journey”, as in the subsequent fictions by Jacques Roubaud, Hervé le Tellier, Jacques Jouet and Hugo Vernier (who emerges from Perec's fiction and becomes a real author), powerful aesthetic and enigmatic reflections grow, connected with a poetic (and aesthetic) understanding of walkscapes. They might be read as ironic fictions and poetical drifts. Arising from different logics, the overwhelming impact of Schubert's Winterreise Lieder, after Wilhelm Müller's poems, is a major reference in present-day authorial creations. The texts of both Perec and the Oulipo authors are powerfully ekphrastic, although we should not forget that they follow their own goals, frameworks, and identities. For the reader, they induce powerful imagery, cinematic or cinematographic, that flows in our minds. This is well matched by William Kentridge's animated video Winter Journey (2014) and the creations (sharing the same title) of Michael Borremans (2014) for the KlaraFestival, Bozar, Cité de la musique, in Belgium. Both were prompted by Schubert's Winterreise. Several metaphors fill the new Winter Journeys (or Travels) achieved in contemporary art and literature, as they once did in the 19th century. Perhaps the contemporary authors and artists were compelled by a consciousness of nothingness, although coming from different aesthetics and ontological sources. The unbearable knowledge of the road's end, and also the urge to fill the void, might be elements common to all of them. As Schopenhauer once wrote, after all, Art is the only human subjective power that we can call upon in life. These newer aesthetic meanings, released from these winter journeys, are surely open to wider approaches that may appear in other poetic works to come.

Keywords: aesthetics, Voyage d'Hiver, Georges Perec & Oulipo, William Kentridge & Michael Borremans, Schubert's Winterreise

Procedia PDF Downloads 208
182 Geotechnical Evaluation and Sizing of the Reinforcement Layer on Soft Soil in the Construction of the North Triage Road Clover, in Brasilia Federal District, Brazil

Authors: Rideci Farias, Haroldo Paranhos, Joyce Silva, Elson Almeida, Hellen Silva, Lucas Silva

Abstract:

The constant growth of the vehicle fleet in big cities keeps engineering dynamic with respect to new solutions for traffic flow in general. In the Federal District (DF), Brazil, it is no different. The city of Brasilia, capital of Brazil and a UNESCO World Cultural Heritage site, was designed for 500 thousand inhabitants; today more than 3 million people circulate in the city, with a fleet of more than one vehicle for every two inhabitants. The growth of the city towards the northern region required urban planning to present solutions for the constantly growing fleet. In this context, a complex of viaducts, road accesses, new carriageways, and the duplication of the Bragueto bridge over Paranoa Lake in the northern part of the city was designed, giving access to the BR-020 highway, and denominated the Clover of North Triage (TTN). In the geopedological context, the region is composed of hydromorphic soils, with the water table present at some times of the year. From the geotechnical point of view, these are soils with SPT < 4 and undrained shear strength Su < 50 kPa. According to urban planning in Brasilia, special structures (bridges and viaducts) cannot rise above the urban landscape, in contrast with the urban characteristics defined by the architects Lucio Costa and Oscar Niemeyer, who were hired to design the new capital of Brazil. This urban criterion created a technical impasse, resulting in the need to 'bury' the structures and, in turn, the access ramps at different levels, in regions of low-bearing-capacity soil and outcropping water table, which generally induced the need for this study and design. For the adoption of the appropriate solution, Standard Penetration Test (SPT), vane test, dynamic probing light (DPL), and auger boring campaigns were carried out. By comparing the results of these tests, soil resistance profiles and water levels were established in the studied sections. Geometric factors such as existing sidewalks and the lack of elevation for the discharge of deep drainage water ruled out traditional techniques for the total removal of the soft soils, thus avoiding the use of temporary drawdown and shoring of excavations. Thus, a structural layer was designed to reinforce the subgrade by 'needling' the soft soil, without the need for longitudinal drains. In this context, the article presents the geological and geotechnical studies carried out, as well as the dimensioning of the reinforcement layer on the soft soil, with the main objective of this solution being to allow the execution of the civil works without interfering with the roads in use, to allow the execution of services in rainy periods, and to present a solution compatible with the drainage characteristics and the soft soil reinforcement.

Keywords: layer, reinforcement, soft soil, clover of north triage

Procedia PDF Downloads 229
181 Effects of the In-Situ Upgrading Project in Afghanistan: A Case Study on the Formally and Informally Developed Areas in Kabul

Authors: Maisam Rafiee, Chikashi Deguchi, Akio Odake, Minoru Matsui, Takanori Sata

Abstract:

Cities in Afghanistan have been rapidly urbanized; however, many parts of these cities have been developed with no detailed land use plan or infrastructure. In other words, they have been informally developed without any government leadership. The new government started the In-situ Upgrading Project in Kabul to upgrade roads, the water supply network system, and the surface water drainage system on the existing street layout in 2002, with the financial support of international agencies. This project is an appropriate emergency improvement for living life, but not an essential improvement of living conditions and infrastructure problems because the life expectancies of the improved facilities are as short as 10–15 years, and residents cannot obtain land tenure in the unplanned areas. The Land Readjustment System (LRS) conducted in Japan has good advantages that rearrange irregularly shaped land lots and develop the infrastructure effectively. This study investigates the effects of the In-situ Upgrading Project on private investment, land prices, and residents’ satisfaction with projects in Kart-e-Char, where properties are registered, and in Afshar-e-Silo Lot 1, where properties are unregistered. These projects are located 5 km and 7 km from the CBD area of Kabul, respectively. This study discusses whether LRS should be applied to the unplanned area based on the questionnaire and interview responses of experts experienced in the In-situ Upgrading Project who have knowledge of LRS. The analysis results reveal that, in Kart-e-Char, a lot of private investment has been made in the construction of medium-rise (five- to nine-story) buildings for commercial and residential purposes. Land values have also incrementally increased since the project, and residents are commonly satisfied with the road pavement, drainage systems, and water supplies, but dissatisfied with the poor delivery of electricity as well as the lack of public facilities (e.g., parks and sport facilities). In Afshar-e-Silo Lot 1, basic infrastructures like paved roads and surface water drainage systems have improved from the project. After the project, a few four- and five-story residential buildings were built with very low-level private investments, but significant increases in land prices were not evident. The residents are satisfied with the contribution ratio, drainage system, and small increase in land price, but there is still no drinking water supply system or tenure security; moreover, there are substandard paved roads and a lack of public facilities, such as parks, sport facilities, mosques, and schools. The results of the questionnaire and interviews with the four engineers highlight the problems that remain to be solved in the unplanned areas if LRS is applied—namely, land use differences, types and conditions of the infrastructure still to be installed by the project, and time spent for positive consensus building among the residents, given the project’s budget limitation.

Keywords: in-situ upgrading, Kabul city, land readjustment, land value, planned area, private investment, residents' satisfaction, unplanned area

Procedia PDF Downloads 205
180 Elements of Creativity and Innovation

Authors: Fadwa Al Bawardi

Abstract:

In March 2021, the Saudi Arabian Council of Ministers issued a decision to form a committee called the "Higher Committee for Research, Development and Innovation," linked to the Council of Economic and Development Affairs, chaired by the Chairman of the Council of Economic and Development Affairs, and concerned with the development of the research, development, and innovation sector in the Kingdom. In order to talk about the dimensions of this wonderful step, let us first try to answer the following questions. Is there a difference between creativity and innovation? What are the factors of creativity in the individual? Are they genetic mental factors, or are they factors that an individual acquires through learning? The methodology included surveys conducted on more than 500 individuals, male and female, between the ages of 18 and 60. And the answers are as follows. "Creativity" is the creation of a new idea, while "innovation" is the development of an already existing idea in a new, successful way. They are two sides of the same coin, as the "creative idea" needs to be developed and transformed into an "innovation" in order to achieve either strategic achievements at the level of countries and institutions, enhancing organizational intelligence, or achievements at the level of individuals. For example, the beginning of smartphones was just a creative idea from IBM in 1994, but the actual successful innovation in the manufacture, development, and marketing of these phones came through Apple later. Nor does creativity have to be hereditary. There are three basic factors for creativity. The first factor is "the presence of a challenge or an obstacle" that the individual faces and seeks to overcome by thinking of solutions, even if thinking requires a long time. The second factor is the "surrounding environment" of the individual, which includes science, training, experience gained, the ability to use techniques, as well as the ability to assess whether the idea is feasible or not. To achieve this factor, the individual must be aware of their own skills, strengths, hobbies, and the aspects in which they can be creative, and the individual must also be self-confident and courageous enough to suggest new ideas. The third factor is "experience and the ability to accept risk and a lack of initial success," and then to learn from mistakes and try again tirelessly. There are some tools and techniques that help the individual to reach creative and innovative ideas, such as the Mind Maps tool, through which the available information is drawn by writing a short word for each piece of information and arranging all related information with clear lines, which helps in logical thinking and correct vision. There is also a tool called "flow charts", which are graphics that show the sequence of data and expected results according to an ordered scenario of events and workflow steps, giving clarity to the ideas, their sequence, and what is expected of them. There are also other useful tools, such as the Six Hats tool, which is applied by a group of people for effective planning and detailed logical thinking, and the Snowball tool. All of them are tools that greatly help in organizing and arranging mental thoughts and in making the right decisions. All of these tools and techniques are also easy to learn, apply, and use to reach creative and innovative solutions. The detailed figures and results of the conducted surveys are available upon request, with charts showing the percentages by gender, age group, and job category.

Keywords: innovation, creativity, factors, tools

Procedia PDF Downloads 55
179 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features

Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski

Abstract:

A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed. Metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between the aerobic and anaerobic indices and classical parameters of a person's physiological state, including blood biochemistry, glycogen concentration, and VO2max changes. During the study, 9 participants, healthy, physically active, medium-trained men and women who trained 2-4 times per week for at least 9 weeks, underwent (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from a single-lead wrist-wearable device were processed with detection of the QRS complex. The aerobic index (AI) was derived as the normalized slope of the QR segment. The anaerobic index (ANI) was derived as the normalized slope of the SJ segment. Biochemical parameters, glycogen content, and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycloergometry, and BIA. Negative correlations between AI and blood markers of the muscles' functional status, including creatine phosphokinase (r = -0.238, p < 0.008), aspartate aminotransferase (r = -0.249, p < 0.004), and uric acid (r = -0.293, p < 0.004), were observed. ANI was also correlated with creatine phosphokinase (r = -0.265, p < 0.003), aspartate aminotransferase (r = -0.292, p < 0.001), and lactate dehydrogenase (LDH) (r = -0.190, p < 0.050). So, when the level of muscular enzymes increases during post-exercise fatigue, AI and ANI decrease. During recovery, the level of metabolites is restored, and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect the physiology of the muscles during recovery. One of the markers of an athlete's physiological state is the ratio between testosterone and cortisol (TCR). TCR provides a relative indication of anabolic-catabolic balance and is considered to be more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r = -0.437, p < 0.001) and correctly represents post-exercise physiology. In order to reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions. A negative correlation between AI and VO2max (r = -0.342, p < 0.001) was obtained. These data, which suggest that VO2max rises during fatigue, are controversial. However, some studies have revealed an increased stroke volume after training, which agrees with our findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete is ready for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. Negative correlations registered for ANI with glycogen (r = -0.303, p < 0.001), albumin (r = -0.205, p < 0.021), and creatinine (r = -0.268, p < 0.002) reflect the dehydration status of participants after training. The correlations between the designed metabolic indices and physiological parameters revealed in this study can be considered sufficient evidence to use these indices for assessing the state of a person's aerobic and anaerobic metabolic systems after training, during fatigue, recovery, and supercompensation.
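
A minimal sketch of a "normalized segment slope" feature in the spirit of AI (QR slope) and ANI (SJ slope) is given below. The abstract does not specify the normalization; division by the R-peak amplitude and a 512 Hz sampling rate are illustrative assumptions, and the beat is synthetic.

```python
# Sketch: a "normalized segment slope" from ECG fiducial points, loosely
# mirroring the aerobic index (QR slope) and anaerobic index (SJ slope).
# The exact normalization used by the authors is not given; dividing by the
# R-peak amplitude is an illustrative assumption.
import numpy as np

FS = 512                                   # sampling rate, Hz (assumed)

def normalized_slope(ecg, i_start, i_end, i_r_peak):
    """Slope (mV/s) of the segment between two fiducial samples, normalized
    by the absolute R-peak amplitude of the same beat."""
    dt = (i_end - i_start) / FS
    slope = (ecg[i_end] - ecg[i_start]) / dt
    return slope / abs(ecg[i_r_peak])

# one synthetic beat: Q at sample 100, R at 110, S at 118, J point at 130
ecg = np.zeros(400)
ecg[100], ecg[110], ecg[118], ecg[130] = -0.12, 1.05, -0.25, 0.02

aerobic_index   = normalized_slope(ecg, 100, 110, 110)   # QR segment
anaerobic_index = normalized_slope(ecg, 118, 130, 110)   # SJ segment
print(f"AI = {aerobic_index:.2f}, ANI = {anaerobic_index:.2f}")
```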

Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation

Procedia PDF Downloads 115
178 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products for transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature. The increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimization of the pricing policy for remanufactured products, which maximizes total profit and minimizes product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive study of the quantitative evaluation and performance of the model has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
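
As an illustration of the strategic-planning level, the sketch below poses the collection-center-to-remanufacturer flow as a plain transportation LP in SciPy. It is a simplification of the paper's physical-programming formulation, and all costs, supplies, and capacities are hypothetical.

```python
# Sketch: the strategic level as a plain transportation LP -- ship end-of-life
# product from collection centers to remanufacturing facilities at minimum
# cost, subject to supply and capacity limits. Simplified stand-in for the
# paper's physical-programming formulation; all numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.5, 9.0],      # $/unit, 2 collection centers x 3 plants
                 [7.5, 3.0, 5.5]])
supply = [400, 300]                    # units available at each collection center
capacity = [350, 250, 300]             # units each remanufacturing plant can take

c = cost.ravel()                       # decision vars x[i, j], row-major

# each collection center ships out exactly its supply
A_eq = np.zeros((2, 6))
A_eq[0, 0:3] = 1
A_eq[1, 3:6] = 1
b_eq = supply

# each plant receives no more than its capacity
A_ub = np.zeros((3, 6))
for j in range(3):
    A_ub[j, [j, j + 3]] = 1
b_ub = capacity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print("minimum transport cost:", res.fun)
print("shipments:\n", res.x.reshape(2, 3))
```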

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 160
177 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much greater than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for compromising between the overhead introduced by fault tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered, depending on the use-case application. Given the proposed threshold margin in our model, a failure occurs only when the difference between the erroneous output value and the expected output value is more than this margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. In our paper, a test bench for emulating SEUs and calculating the ACMVF is implemented on the Zynq-7000 FPGA platform. This system makes use of the Soft Error Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the counted number of failures is reduced by 41% to 59% compared with the number of failures counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in the output results. Note that less than 10% deviation in the addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
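
A minimal sketch of the ACMVF computation as defined above (a failure is counted only when the deviation exceeds a user-set margin, then divided by the total number of injections) is given below, using synthetic fault-injection outputs rather than results from the Zynq-7000 test bench.

```python
# Sketch: Approximate-based Configuration Memory Vulnerability Factor (ACMVF)
# as defined in the abstract -- an injected SEU counts as a failure only when
# the output deviates from the golden output by more than a user-set margin.
# Outputs below are synthetic, not results from the Zynq-7000 test bench.
import numpy as np

rng = np.random.default_rng(1)
golden = 1000                                     # expected 32-bit adder output
injections = 10_000
# synthetic faulty outputs: mostly correct, some slightly off, a few far off
observed = golden + rng.choice([0, 1, -2, 5, 400, -900],
                               p=[0.70, 0.10, 0.08, 0.06, 0.03, 0.03],
                               size=injections)

def acmvf(observed, golden, margin_fraction):
    deviation = np.abs(observed - golden) / abs(golden)
    failures = np.count_nonzero(deviation > margin_fraction)
    return failures / len(observed)

print("conventional VF (0% margin):", acmvf(observed, golden, 0.0))
print("ACMVF with 10% margin     :", acmvf(observed, golden, 0.10))
```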

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 199
176 Balloon Analogue Risk Task (BART) Performance Indicators Help Predict Outcomes of Matched Savings Program

Authors: Carlos M. Parra, Matthew Sutherland, Ranjita Poudel

Abstract:

Reduced mental bandwidth related to low socioeconomic status (low SES) might lead to impulsivity and risk-taking behavior, which poses a major hurdle to asset building (savings) behavior. Understanding the relationship between risk-related personality metrics as well as laboratory risk behavior and real-life savings behavior can help facilitate the development of effective asset building programs, which are vital for mitigating financial vulnerability and income inequality. As such, this study explored the relationship between personality metrics, laboratory behavior in a risky decision-making task, and real-life asset building (savings) behaviors among individuals with low SES from Miami, Florida (FL). Study participants (12 male, 15 female) included racially and ethnically diverse adults (mean age 41.22 ± 12.65 years), with incomplete higher education (18% had a High School Diploma, 30% an Associate degree, and 52% Some College), and low annual income (mean $13,872 ± $8,020.43). Participants completed eight self-report surveys and played a widely used risky decision-making paradigm called the Balloon Analogue Risk Task (BART). Specifically, participants played three runs of BART (20 trials in each run; 60 trials in total). In addition, asset building behavior data were collected for 24 participants who opened and used savings accounts and completed a 6-month savings program that involved monthly matches and a final reward for completing the savings program without any interim withdrawals. Each participant's total savings at the end of this program was the main asset building indicator considered. In addition, a new effective use of average pump bet (EUAPB) indicator was developed to characterize each participant's ability to place winning bets. This indicator takes the ratio of each participant's total BART earnings to the average pump bet (APB) across all 60 trials. Our findings indicated that EUAPB explained more than a third of the variation in total savings among participants. Moreover, participants who managed to obtain BART earnings of at least 30 cents per APB also tended to exhibit better asset building (savings) behavior. In particular, using this criterion to separate participants into high and low EUAPB groups, the nine participants with high EUAPB (mean BART earnings of 35.64 cents per APB) ended up with higher mean total savings ($255.11), while the 15 participants with low EUAPB (mean BART earnings of 22.50 cents per APB) obtained lower mean total savings ($40.01). All mean differences are statistically significant (2-tailed p < .0001), indicating that the relation between higher EUAPB and higher total savings is robust. Overall, these findings can help refine asset building interventions implemented by policy makers and practitioners interested in reducing financial vulnerability among low-SES populations, specifically by helping identify individuals who are likely to readily take advantage of savings opportunities (such as matched savings programs) and by avoiding the stipulation of unnecessary and expensive financial coaching programs for these individuals. This study was funded by J.P. Morgan Chase (JPMC) and carried out by scientists from Florida International University (FIU) in partnership with Catalyst Miami.
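
A minimal sketch of the EUAPB indicator (total BART earnings divided by the average pump bet) and the 30-cents-per-APB grouping is given below. The trial data and the 5-cents-per-pump stake are illustrative assumptions, not the study's payoff scheme.

```python
# Sketch: the "effective use of average pump bet" (EUAPB) indicator described
# in the abstract -- total BART earnings divided by the average pump bet --
# and the 30-cents-per-APB split into high/low groups. Trial data and the
# per-pump stake are made up for illustration.
import numpy as np

# per-trial records for one participant over 60 BART trials
pumps  = np.array([8, 3, 12, 5, 0, 7, 9, 4] * 7 + [6, 2, 10, 5])   # 60 trials
popped = np.array([0, 0, 1, 0, 0, 0, 1, 0] * 7 + [0, 0, 1, 0]) == 1
cents_per_pump = 5                      # assumed stake per pump

earnings_per_trial = np.where(popped, 0, pumps * cents_per_pump)
total_earnings = earnings_per_trial.sum()
average_pump_bet = (pumps * cents_per_pump).mean()

euapb = total_earnings / average_pump_bet
group = "high" if euapb >= 30 else "low"
print(f"total earnings = {total_earnings} cents, APB = {average_pump_bet:.1f} cents")
print(f"EUAPB = {euapb:.1f} cents per APB -> {group}-EUAPB group")
```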

Keywords: balloon analogue risk task (BART), matched savings programs, asset building capability, low-SES participants

Procedia PDF Downloads 145
175 Manual Wheelchair Propulsion Efficiency on Different Slopes

Authors: A. Boonpratatong, J. Pantong, S. Kiattisaksophon, W. Senavongse

Abstract:

In this study, an integrated sensing and modeling system for manual wheelchair propulsion measurement and propulsion efficiency calculation was used to indicate the level of overuse. Seven subjects participated in the measurement. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. By contrast, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5. The results are supported by previously reported wheeling resistance and propulsion torque relationships, implying a margin of overuse. Upper limb musculoskeletal injuries and syndromes in manual wheelchair riders are common, chronic, and may be caused at different levels by overuse, i.e., repetitive riding on steep inclines. Qualitative analysis, such as the mechanical effectiveness of manual wheeling, to establish the relationship between riding difficulties, mechanical efforts and propulsion outputs is scarce, possibly due to the challenge of simultaneously measuring those factors in conventional manual wheelchairs and everyday environments. In this study, the integrated sensing and modeling system was used to measure manual wheelchair propulsion efficiency in conventional manual wheelchairs and everyday environments. The sensing unit comprises contact pressure and inertia sensors, which are portable and universal. Four healthy male and three healthy female subjects participated in the measurement on level and 15-degree incline surfaces. Subjects were asked to perform manual wheelchair riding at three different self-selected speeds on the level surface and only at the preferred speed on the 15-degree incline. Five trials were performed in each condition. The kinematic data of the subject’s dominant hand, a spoke, and the trunk of the wheelchair were collected through the inertia sensors. The compression force applied from the thumb of the dominant hand to the push rim was collected through the contact pressure sensors. The signals from all sensors were recorded synchronously. The subject-selected speeds for slow, preferred and fast riding on the level surface and the subject-preferred speed on the 15-degree incline were recorded. The propulsion efficiency, the ratio between the pushing force in the tangential direction of the push rim and the net force resulting from the three-dimensional riding motion, was derived by solving the inverse dynamics problem in the modeling unit. The intra-subject variability of the riding speed did not differ significantly as the self-selected speed increased on the level surface. Since the riding speed on the 15-degree incline was difficult to regulate, the intra-subject variability was not considered there. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. However, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5 for all subjects at their preferred speed. The results are supported by the previously reported relationship between wheeling resistance and propulsion torque, in which the wheelchair axle torque increased but muscle activity did not when the resistance was high. This implies that the margin of dynamic effort at relatively high resistance is similar to the margin of overuse indicated by the restricted propulsion efficiency on the 15-degree incline.
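
The efficiency measure described above can be sketched as follows; the force vectors and the helper function are illustrative assumptions, not the authors' modeling unit.

```python
# Illustrative sketch (hypothetical values): propulsion efficiency taken as the
# ratio of the tangential pushing force on the push rim to the magnitude of the
# resultant three-dimensional force, as described in the abstract.
import numpy as np

def propulsion_efficiency(force_xyz, tangent_dir):
    """force_xyz: 3D force applied to the push rim [N];
    tangent_dir: unit vector tangential to the push rim at the contact point."""
    tangent_dir = np.asarray(tangent_dir, dtype=float)
    tangent_dir /= np.linalg.norm(tangent_dir)
    f = np.asarray(force_xyz, dtype=float)
    f_tangential = float(np.dot(f, tangent_dir))   # component that drives the wheel
    f_resultant = float(np.linalg.norm(f))         # net force from the 3D motion
    return f_tangential / f_resultant

# e.g. a push sampled mid-stroke (values are made up for illustration)
print(propulsion_efficiency([60.0, 25.0, 85.0], [0.0, 0.0, 1.0]))  # about 0.79
```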

Keywords: contact pressure sensor, inertia sensor, integrating sensing and modeling system, manual wheelchair propulsion efficiency, manual wheelchair propulsion measurement, tangential force, resultant force, three-dimensional riding motion

Procedia PDF Downloads 290
174 Challenges and Recommendations for Medical Device Tracking and Traceability in Singapore: A Focus on Nursing Practices

Authors: Zhuang Yiwen

Abstract:

The paper examines the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. One of the major challenges identified is the lack of a standard coding system for medical devices, which makes it difficult to track them effectively. The paper suggests the use of the Unique Device Identifier (UDI) as a single standard for medical devices to improve tracking and reduce errors. The paper also explores the use of barcoding and image recognition to identify and document medical devices in nursing practices. In nursing practices, the use of barcodes for identifying medical devices is common. However, the information contained in these barcodes is often inconsistent, making it challenging to identify which segment contains the model identifier. Moreover, the use of barcodes may be improved with the use of UDI, but many subsidized accessories may still lack barcodes. The paper suggests that readiness for UDI and barcode standardization requires standardized information, fields, and logic in electronic medical record (EMR), operating theatre (OT), and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. Nursing workflow and data flow also need to be taken into account. The paper also explores the use of image recognition, specifically the Tesseract OCR engine, to identify and document implants in public hospitals due to limitations in barcode scanning. The study found that this solution requires an implant information database and checking of the recognition output against that database. The solution also requires customization of the algorithm, cropping out objects affecting text recognition, and applying adjustments. The solution requires additional resources and costs for a mobile/hardware device, which may pose space constraints and require maintenance of sterile criteria. Integration with the EMR is also necessary, and the solution requires changes in the user's workflow. The paper further suggests the long-term use of Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) as a supporting terminology to improve clinical documentation and data exchange in healthcare. SNOMED CT provides a standardized way of documenting and sharing clinical information with respect to procedure, patient and device documentation, which can facilitate interoperability and data exchange. In conclusion, the paper highlights the challenges facing the Singapore healthcare system related to the tracking and traceability of medical devices. The paper suggests the use of UDI and barcode standardization to improve tracking and reduce errors. It also explores the use of image recognition to identify and document medical devices in nursing practices. The paper emphasizes the importance of standardized information, fields, and logic in EMR, OT, and billing systems, as well as barcode scanners that can read various formats and selectively parse barcode segments. These recommendations could help the Singapore healthcare system improve the tracking and traceability of medical devices and ultimately enhance patient safety.
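
A minimal sketch of the selective parsing of barcode segments discussed above, assuming the human-readable GS1 form with parenthesised application identifiers; real scanner output and the hospital systems' parsing logic will differ.

```python
# Hedged sketch of selectively parsing UDI-style barcode segments. It assumes
# the human-readable GS1 form with parenthesised application identifiers, e.g.
# "(01)00889842000000(17)250630(10)A123"; actual scanner output and AI handling
# vary, so this is illustrative only, not a production parser.
import re

AI_NAMES = {"01": "device_identifier", "17": "expiry_yymmdd", "10": "lot", "21": "serial"}

def parse_udi(barcode: str) -> dict:
    # each segment: a 2-4 digit application identifier in parentheses, then its value
    segments = re.findall(r"\((\d{2,4})\)([^(]+)", barcode)
    return {AI_NAMES.get(ai, f"AI_{ai}"): value for ai, value in segments}

print(parse_udi("(01)00889842000000(17)250630(10)A123"))
# {'device_identifier': '00889842000000', 'expiry_yymmdd': '250630', 'lot': 'A123'}
```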

Keywords: medical device tracking, unique device identifier, barcoding and image recognition, systematized nomenclature of medicine clinical terms

Procedia PDF Downloads 79
173 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling. However, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing theory parameters to relate the transitions between models. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time. Business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that will not finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario. All methodologies will discard requests if the rapidness of the burst is high enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests to observe how approaches that add a different number of instances can handle the load with less business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
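
A hedged sketch of one reactive scaling step of the kind modelled above; the class, parameter names and policy are illustrative and do not reproduce the authors' exact formulation.

```python
# Hedged sketch of a reactive scaling decision loosely following the parameters
# named above (sampling frequency, cooldown period, per-instance capacity,
# incoming request rate). Names and policy are illustrative only.
import math
import time

class ReactiveScaler:
    def __init__(self, capacity_per_instance, cooldown_s, headroom=0.8):
        self.capacity = capacity_per_instance  # requests/s one instance can handle
        self.cooldown = cooldown_s             # minimum time between scaling actions
        self.headroom = headroom               # target utilisation to avoid saturation
        self.last_action = -math.inf

    def desired_instances(self, incoming_rate):
        # keep utilisation below `headroom` to bound response time
        return max(1, math.ceil(incoming_rate / (self.capacity * self.headroom)))

    def step(self, incoming_rate, current_instances, now=None):
        now = time.monotonic() if now is None else now
        target = self.desired_instances(incoming_rate)
        if target != current_instances and now - self.last_action >= self.cooldown:
            self.last_action = now
            return target          # scale to this number of instances
        return current_instances   # within cooldown or no change needed

scaler = ReactiveScaler(capacity_per_instance=50, cooldown_s=30)
print(scaler.step(incoming_rate=430, current_instances=8, now=0.0))  # -> 11
```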

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 96
172 The System-Dynamic Model of Sustainable Development Based on the Energy Flow Analysis Approach

Authors: Inese Trusina, Elita Jermolajeva, Viktors Gopejenko, Viktor Abramov

Abstract:

Global challenges require a transition from the existing linear economic model to a model that considers nature as a life support system for development on the way to social well-being within the frame of the ecological economics paradigm. The objective of the article is to present the results of the analysis of socio-economic systems in the context of sustainable development, using the method of analyzing changes in system power (energy flows) and Kaldor's structural model of GDP. In accordance with the principles of life's development and the ecological concept, the tasks of sustainable development of open, non-equilibrium, stable socio-economic systems were formalized using the energy flows analysis method. The methodology for monitoring sustainable development and the level of life was considered during the research of interactions in the system ‘human - society - nature’, using the theory of a unified system of space-time measurements. Based on the results of the analysis, the time series of energy consumption and the economic structural model were formulated for the level, degree and tendencies of sustainable development of the system, and the conditions of growth, degrowth and stationarity were formalized. In order to design the future state of socio-economic systems, a concept was formulated, and the first models of energy flows in systems were created using the tools of system dynamics. During the research, the authors calculated and used a system of universal indicators of sustainable development in an invariant coordinate system in energy units. In the context of the proposed approach and methods, universal sustainable development indicators were calculated as models of development for the USA and China. The calculations used data from the World Bank database for the period from 1960 to 2019. Main results: 1) In accordance with the proposed approach, the heterogeneous energy resources of the countries were reduced to universal power units, summarized and expressed as a unified number. 2) The values of universal indicators of the level of life were obtained and compared with generally accepted similar indicators. 3) The system of indicators, in accordance with the requirements of sustainable development, can be considered as a basis for monitoring development trends. This work can make a significant contribution to overcoming the difficulties of forming socio-economic policy, which are largely due to the lack of information that allows one to have an idea of the course and trends of socio-economic processes. The existing monitoring methods do not fully meet this requirement, since indicators from different areas have different units of measurement and, as a rule, reflect the reaction of socio-economic systems to actions already taken, moreover with a time shift. Currently, the inconsistency of measures across heterogeneous social, economic, environmental, and other systems is the reason that social systems are managed in isolation from the general laws of living systems, which can ultimately lead to a systemic crisis.
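
As a small illustration of reducing heterogeneous energy resources to universal power units, the following sketch (with assumed conversion factors and figures, not the study's World Bank data) converts annual consumption to an average power in watts.

```python
# Illustrative sketch (assumed conversion factors and figures) of reducing
# heterogeneous annual energy resources to a single power figure in watts,
# in the spirit of the universal power-unit indicators described above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# assumed energy contents, in joules per unit
J_PER_TOE = 41.868e9          # tonne of oil equivalent
J_PER_KWH = 3.6e6             # kilowatt-hour

annual_consumption = {
    "oil_mtoe": 850.0,            # million tonnes of oil equivalent per year
    "electricity_twh": 4200.0,    # terawatt-hours per year
}

energy_j = (annual_consumption["oil_mtoe"] * 1e6 * J_PER_TOE
            + annual_consumption["electricity_twh"] * 1e9 * J_PER_KWH)

power_w = energy_j / SECONDS_PER_YEAR   # average power of the system, in watts
print(f"average system power: {power_w / 1e12:.2f} TW")
```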

Keywords: sustainability, system dynamic, power, energy flows, development

Procedia PDF Downloads 60
171 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf life and outlife limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are made of metal due to the high cost of fabricating CFRP parts in this size class. The fact that the CFRP manufacturing processes that produce the highest performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally higher in cost than comparably performing metal parts, which are easier to produce. Fortunately, business is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to just prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass customization opportunities, and high mechanical performance. A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster to market) with those of post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS (although nearly any high-quality commercial thermoplastic tapes and filaments can be used), are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low voids and excellent surface finish on A and B sides can be produced. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be cost-competitively held in production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 96
170 Effect of Salinity and Heavy Metal Toxicity on Gene Expression, and Morphological Characteristics in Stevia rebaudiana Plants

Authors: Umara Nissar Rafiqi, Irum Gul, Nazima Nasrullah, Monica Saifi, Malik Z. Abdin

Abstract:

Background: Stevia rebaudiana, a member of the Asteraceae family, is an important medicinal plant that produces a commercially used non-caloric natural sweetener and is also an alternative herbal treatment for diabetes. Steviol glycosides are the main sweetening compounds present in these plants. Secondary metabolites are crucial to the adaptation of plants to the environment and to overcoming stress conditions. In agricultural practice, abiotic stresses such as salinity, heavy metal toxicity and drought, in particular, are responsible for the majority of the reduction that differentiates yield potential from harvestable yield. Salt stress and heavy metal toxicity lead to increased production of reactive oxygen species (ROS). To avoid oxidative damage due to ROS and osmotic stress, plants have a system of antioxidant enzymes along with several stress-induced enzymes. This helps in scavenging ROS and relieving osmotic stress in different cell compartments. However, whether stress-induced toxicity modulates the activity of these enzymes in Stevia rebaudiana is poorly understood. Aim: The present study focused on the effects of salinity and heavy metal toxicity (lead and mercury) on physiological traits and the transcriptional profile of Stevia rebaudiana. Method: Stevia rebaudiana plants were collected from the Central Institute of Medicinal and Aromatic Plants (CIMAP), Patnagar, India, and maintained under controlled conditions in a greenhouse at Hamdard University, Delhi, India. The plants were subjected to different concentrations of salt (0, 25, 50 and 75 mM) and of the heavy metals lead and mercury (0, 100, 200 and 300 µM). Physiological traits such as shoot length, root number and leaf growth were evaluated. Samples were collected at different developmental stages and analysed for transcription profiling by RT-PCR. The transcriptional studies in Stevia rebaudiana involve important antioxidant enzymes, catalase (CAT), superoxide dismutase (SOD) and cytochrome P450 monooxygenase (CYP), as well as the stress-induced aquaporin (AQU), auxin-repressed protein (ARP-1) and Ndhc genes. The data were analysed using GraphPad Prism and expressed as mean ± SD. Result: Low salinity and lower metal toxicity did not affect the fresh weight of the plant. However, fresh weight was substantially decreased, by 55%, under high salinity and heavy metal treatment. With increasing salinity and heavy metal toxicity, the values of all studied physiological traits decreased significantly. Chlorosis in treated plants was also observed, which could be due to changes in the Fe:Zn ratio. At low concentrations (up to 25 mM) of NaCl and heavy metals, we did not observe any significant difference in gene expression between treated and control plants. Interestingly, at high salt concentration and high metal toxicity, a significant increase in the expression of stress-induced genes was observed in treated plants compared to controls (p < 0.005). Conclusion: Stevia rebaudiana is tolerant to lower salt and heavy metal concentrations. This study also suggests that with increasing concentrations of salt and heavy metals, the harvest yield of S. rebaudiana is hampered.
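
A hedged sketch of the kind of expression comparison reported above, using made-up relative expression values; the study used GraphPad Prism, while SciPy is used here purely for illustration.

```python
# Hedged sketch (made-up expression values) of the comparison reported above:
# relative transcript levels of a stress-induced gene in control vs. high-salinity
# plants, summarised as mean ± SD and tested with a two-sample t-test.
import numpy as np
from scipy import stats

control = np.array([1.00, 0.92, 1.08, 0.97])   # relative expression, control
treated = np.array([2.35, 2.10, 2.60, 2.48])   # relative expression, 75 mM NaCl

for name, x in (("control", control), ("treated", treated)):
    print(f"{name}: {x.mean():.2f} +/- {x.std(ddof=1):.2f}")

t, p = stats.ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p:.4g}")   # p < 0.005 would mirror the reported significance
```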

Keywords: Stevia rebaudiana, natural sweetener, salinity, heavy metal toxicity

Procedia PDF Downloads 197
169 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing

Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev

Abstract:

The creation of a new generation of micro- and nanoelectronic elements opens up broad possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter is growing every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, for its technical implementation, it is necessary to fulfill a number of conditions for the basic parameters of electronic memory, such as non-volatility, multi-bit operation, high integration density, and low power consumption. Several types of memory are available in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) is especially distinguished due to its multi-bit property, which is necessary for manufacturing neuromorphic systems. ReRAM is based on the effect of resistive switching: a change in the resistance of an oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One of the methods for the technical implementation of neuromorphic systems is cross-bar structures, which are ReRAM cells interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO2, ZnO, NiO, ZrO2, HfO2) exhibit a resistive switching effect. It is worth noting that the manufacture of nanocomposites based on these materials allows highlighting the advantages and hiding the disadvantages of each material. Therefore, it was decided to use a ZnₓTiᵧHfzOᵢ nanocomposite as the basis for manufacturing the neuromorphic structures. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not require electroforming, a process that degrades the parameters of the formed ReRAM elements. Currently, this material is not well studied; therefore, the study of the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free ZnₓTiᵧHfzOᵢ nanocomposite thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO2/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes. During the measurements, the TiN film was grounded. The analysis of the obtained current-voltage characteristics showed resistive switching from HRS to LRS at +1.87±0.12 V and from LRS to HRS at -2.71±0.28 V. Endurance testing showed that HRS was 283.21±32.12 kΩ and LRS was 1.32±0.21 kΩ over 100 measurements. The HRS/LRS ratio was about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free ZnₓTiᵧHfzOᵢ nanocomposite films in neuromorphic systems manufacturing. This work was supported by RFBR, according to research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.
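
A minimal sketch (synthetic resistance values drawn around the reported means) of how the HRS/LRS memory window at the 0.6 V read voltage could be summarised from endurance data.

```python
# Minimal sketch with synthetic numbers: summarising endurance data as mean HRS
# and LRS resistances and their ratio at the quoted 0.6 V read voltage.
import numpy as np

read_voltage = 0.6  # V

# resistance values over repeated switching cycles (synthetic, in ohms)
hrs = np.random.normal(283.21e3, 32.12e3, size=100)   # high-resistance state
lrs = np.random.normal(1.32e3, 0.21e3, size=100)      # low-resistance state

print(f"HRS = {hrs.mean()/1e3:.2f} +/- {hrs.std(ddof=1)/1e3:.2f} kOhm")
print(f"LRS = {lrs.mean()/1e3:.2f} +/- {lrs.std(ddof=1)/1e3:.2f} kOhm")
print(f"HRS/LRS window at {read_voltage} V read: {hrs.mean()/lrs.mean():.1f}")
```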

Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect

Procedia PDF Downloads 132
168 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ of cognitive tasks, while consecutive interpreting (CI) does not require sharing processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demand and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study seeks to examine the potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, or even more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize sequences from the source text into the output in different ways in order to minimize the respective cognitive load. We reason about the results within the framework that cognitive demand is exerted on both the maintenance and coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI makes interpreters keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
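
Two of the lexical indices named above can be sketched as follows; the toy token list and the small function-word set are illustrative stand-ins for the corpus resources actually used.

```python
# Illustrative sketch of two lexical indices: type-token ratio and lexical
# density (content words / all words), computed on a toy token list. The
# function-word list is a small stand-in, not the study's actual resource.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "that", "is", "was"}

def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

def lexical_density(tokens):
    content = [t for t in tokens if t not in FUNCTION_WORDS]
    return len(content) / len(tokens)

tokens = "the delegates adopted the resolution that was proposed in the morning".split()
print(f"TTR = {type_token_ratio(tokens):.2f}")            # 0.82 for this toy sentence
print(f"lexical density = {lexical_density(tokens):.2f}")  # 0.45 for this toy sentence
```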

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 181
167 Significant Aspects and Drivers of Germany and Australia's Energy Policy from a Political Economy Perspective

Authors: Sarah Niklas, Lynne Chester, Mark Diesendorf

Abstract:

Geopolitical tensions, climate change and recent movements favouring a transformative shift in institutional power structures have influenced the economics of conventional energy supply for decades. This study takes a multi-dimensional approach to illustrate the potential of renewable energy (RE) technology to provide a pathway to a low-carbon economy driven by ecologically sustainable, independent and socially just energy. This comparative analysis identifies the economic, political and social drivers that shaped the adoption of RE policy in two significantly different economies, Germany and Australia, with strong and weak commitments to RE respectively. Two complementary political-economy theories frame the document-based analysis. Régulation Theory, inspired by Marxist ideas and strongly influenced by contemporary economic problems, provides the background to explore the social relationships contributing to the adoption of RE within the macro-economy. Varieties of Capitalism theory, a more recently developed micro-economic approach, examines the nature of state-firm relationships. Together these approaches provide a comprehensive lens of analysis. Germany’s energy policy transformed substantially over the second half of the last century. The development is characterised by the coordination of societal, environmental and industrial demands throughout the advancement of capitalist regimes. In the Fordist regime, mass production based on coal drove Germany’s astounding economic recovery during the post-war period. Economic depression and the instability of institutional arrangements necessitated the urgent pursuit of national security and energy independence. During the post-war Flexi-Fordist period, quality-based production, innovation and technology-based competition schemes, particularly with regard to political power structures in and across Europe, favoured the adoption of RE. Innovation, knowledge and education were institutionalized, leading to the legislation of environmental concerns. Lastly, the establishment of government-industry coordinative programs supported the phase-out of nuclear power and the increased adoption of RE during the last decade. Australia’s energy policy is shaped by the country’s richness in mineral resources. Energy policy has largely served coal mining, historically and currently one of the most capital-intensive industries. Assisted by the macro-economic dimensions of institutional arrangements, social and financial capital is oriented towards the export-led and strongly demand-oriented economy. Here, energy policy serves the maintenance of capital accumulation in the mining sector and the emerging Asian economies. The adoption of supportive renewable energy policy would challenge the distinct role of the mining industry within the (neo)liberal market economy. The state’s protective role towards the mining sector has resulted in weak commitment to RE policy and investment uncertainty in the energy sector. Recent developments, driven by strong public support for RE, emphasize the sense of community in urban and rural areas and the emergence of a bottom-up approach to adopting renewables. Thus, political economy frameworks on both the macro-economic (Régulation Theory) and micro-economic (Varieties of Capitalism theory) scales can together explain the strong commitment to RE in Germany vis-à-vis the weak commitment in Australia.

Keywords: political economy, regulation theory, renewable energy, social relationships, energy transitions

Procedia PDF Downloads 384
166 Physico-Chemical Characterization of Vegetable Oils from Oleaginous Seeds (Croton megalocarpus, Ricinus communis L., and Gossypium hirsutum L.)

Authors: Patrizia Firmani, Sara Perucchini, Irene Rapone, Raffella Borrelli, Stefano Chiaberge, Manuela Grande, Rosamaria Marrazzo, Alberto Savoini, Andrea Siviero, Silvia Spera, Fabio Vago, Davide Deriu, Sergio Fanutti, Alessandro Oldani

Abstract:

According to the Renewable Energy Directive II, the use of palm oil in diesel will be gradually reduced from 2023 and should reach zero in 2030 due to the deforestation caused by its production. Eni aims at finding alternative feedstocks for its biorefineries in order to eliminate the use of palm oil by 2023. Therefore, the ideal vegetable oils to be used in bio-refineries are those obtainable from plants that grow on marginal lands and with low impact on the food-and-feed chain; hence, Eni research is studying the possibility of using oleaginous seeds, such as castor, croton, and cotton, to extract oils to be exploited as feedstock in bio-refineries. To verify their suitability for the upgrading processes, an analytical protocol for their characterization has been drawn up and applied. The analytical characterizations include water and ash content determination, elemental analysis (CHNS analysis, X-Ray Fluorescence, Inductively Coupled Plasma - Optical Emission Spectroscopy, ICP - Mass Spectrometry), and total acid number determination. Gas chromatography coupled to a flame ionization detector (GC-FID) is used to quantify the lipid content in terms of free fatty acids, mono-, di- and triacylglycerols, and fatty acid composition. Finally, Nuclear Magnetic Resonance and Fourier Transform-Infrared spectroscopies are exploited, together with GC-MS and Fourier Transform-Ion Cyclotron Resonance, to study the composition of the oils. This work focuses on the GC-FID analysis of the lipid fraction of these oils, as the main constituent and of greatest interest for bio-refinery processes. Specifically, the lipid component of the extracted oil was quantified after sample silanization and transmethylation: silanization allows the elution of high-boiling compounds and is useful for determining the quantity of free acids and glycerides in oils, while transmethylation leads to a mixture of fatty acid esters and glycerol, thus allowing evaluation of the composition of glycerides in terms of Fatty Acid Methyl Esters (FAME). Cotton oil was extracted from cotton oilcake, croton oil was obtained by seed pressing and by ASE extraction of seeds and oilcake, while castor oil comes from seed pressing (not performed in Eni laboratories). GC-FID analyses showed that cotton oil consists of 90% triglycerides and about 6% diglycerides, while free fatty acids account for about 2%. In terms of FAME, C18 acids make up 70% of the total, and linoleic acid is the major constituent. Palmitic acid is present at 17.5%, while the other acids are in low concentration (<1%). Both analyses show the presence of compounds not amenable to gas chromatography. Croton oils from seed pressing and extraction mainly contain triglycerides (98%). Concerning FAME, the main component is linoleic acid (approx. 80%). Oilcake croton oil shows a higher abundance of diglycerides (6% vs. ca. 2%) and a lower content of triglycerides (38% vs. 98%) compared to the previous oils. Finally, castor oil is mostly constituted of triacylglycerols (about 69%), followed by diglycerides (about 10%). About 85.2% of total FAME is ricinoleic acid, as a constituent of triricinolein, the most abundant triglyceride of castor oil. Based on the analytical results, these oils represent feedstocks of interest for possible exploitation as advanced biofuels.
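
A hedged sketch of the area-normalisation step commonly used to express GC-FID results as a percentage FAME composition; the peak areas are invented, and the study's actual calibration and response factors are not reproduced.

```python
# Hedged sketch (invented peak areas) of expressing GC-FID results as a
# percentage FAME composition by simple area normalisation.
peak_areas = {              # integrated GC-FID peak areas after transmethylation
    "palmitic (C16:0)": 1750,
    "stearic (C18:0)": 260,
    "oleic (C18:1)": 1900,
    "linoleic (C18:2)": 5900,
    "linolenic (C18:3)": 190,
}

total = sum(peak_areas.values())
composition = {fame: 100.0 * area / total for fame, area in peak_areas.items()}

for fame, pct in sorted(composition.items(), key=lambda kv: -kv[1]):
    print(f"{fame:>18}: {pct:5.1f} %")
```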

Keywords: analytical protocol, biofuels, biorefinery, gas chromatography, vegetable oil

Procedia PDF Downloads 147
165 Using Low-Calorie Gas to Generate Heat and Electricity

Authors: Аndrey Marchenko, Oleg Linkov, Alexander Osetrov, Sergiy Kravchenko

Abstract:

Low-calorie gases include biogas, coal gas, coke oven gas, associated petroleum gas, sewage gas, etc. These gases are usually released into the atmosphere or burned in flares, causing substantial damage to the environment. However, with the right approach, low-calorie gas fuel can become a valuable source of energy. This determines the relevance of work related to the development of low-calorific gas utilization technologies. As an example, this work considers one way of utilizing coal mine gas, because Ukraine ranks fourth in the world in terms of coal mine gas emissions (4.7% of total global emissions, or 1.2 billion m³ per year). Experts estimate that coal mine gas is actively released in 70-80 percent of the existing mines in Ukraine. The main component of coal mine gas is methane (25-60%). Methane has a 21 times greater impact on the greenhouse effect than carbon dioxide, so its disposal has become increasingly important in the context of the growing need to address the problems of climate, ecology and environmental protection. These emissions thus cause negative effects of both a local and a global nature. The efforts of the United Nations and the World Bank led to the adoption of the program 'Zero Routine Flaring by 2030', dedicated to ceasing the burning of these gases in flares and to disposing of them in ways that generate heat and electricity. This study proposes to use coal mine gas as a fuel for gas engines to generate heat and electricity. Analysis of the physical-chemical properties of low-calorie gas fuels allowed a suitable engine to be chosen and the influence of the fuel composition on its techno-economic indicators to be estimated. The most suitable engine for low-calorie gas is one with pre-combustion chamber jet ignition. Ukraine has accumulated extensive experience in the production and operation of gas engines of type GD100 (10GDN 207/2 * 254) with a capacity of 1100 kW fueled by natural gas. The use of pre-combustion chamber jet ignition and quality control of the mixture in GD100-type engines allows burning lean fuel mixtures, which in turn decreases the concentration of harmful substances in the exhaust gases. The main problems of coal mine gas as a fuel for internal combustion engines (ICE) are its low calorific value, the presence of components that adversely affect combustion processes and engine operating conditions, the instability of its composition, and weak ignition. In some cases, these problems can be solved by adapting the engine design to coal mine gas as a fuel (changing the compression ratio, increasing the fuel injection quantity, changing the ignition timing, increasing the spark energy, etc.). It is shown that the use of coal mine gas in engines with a prechamber did not lead to significant changes in the indicated parameters (ηi = 0.43-0.45). However, it significantly increases the volumetric fuel consumption, which requires an increased fuel injection quantity to ensure constant nominal engine power. Thus, the utilization of low-calorie gas fuels in stationary GD100-type gas engines will significantly reduce emissions of harmful substances into the atmosphere while generating cheap electricity and heat.
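
A back-of-the-envelope sketch of why volumetric fuel consumption rises at constant power when switching to coal mine gas; the heating values are assumed, and only the engine power and indicated efficiency range come from the abstract.

```python
# Back-of-the-envelope sketch (assumed heating values): volumetric fuel flow at
# constant engine power, flow ~ power / (indicated efficiency x volumetric LHV).
def fuel_flow_m3_per_h(power_kw, efficiency, lhv_mj_per_m3):
    power_mj_per_h = power_kw * 3.6          # 1 kWh = 3.6 MJ
    return power_mj_per_h / (efficiency * lhv_mj_per_m3)

POWER_KW = 1100          # nominal power of the GD100-type engine
ETA_I = 0.44             # indicated efficiency within the reported 0.43-0.45 range

natural_gas = fuel_flow_m3_per_h(POWER_KW, ETA_I, lhv_mj_per_m3=35.8)  # assumed LHV
mine_gas = fuel_flow_m3_per_h(POWER_KW, ETA_I, lhv_mj_per_m3=12.5)     # ~35% CH4, assumed

print(f"natural gas: {natural_gas:6.1f} m3/h")
print(f"mine gas:    {mine_gas:6.1f} m3/h  (x{mine_gas / natural_gas:.1f} volume)")
```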

Keywords: gas engine, low-calorie gas, methane, pre-combustion chamber, utilization

Procedia PDF Downloads 265
164 Innovative Technologies of Distant Spectral Temperature Control

Authors: Leonid Zhukov, Dmytro Petrenko

Abstract:

In many cases of industrial continuous temperature control, optical thermometry has no effective alternative. Classical optical thermometry technologies can be used on objects accessible to pyrometers that have stable radiation characteristics and a stable transmissivity of the intermediate medium. Without temperature corrections, this is possible for a “black” body in energy pyrometry and for “black” and “grey” bodies in spectral ratio pyrometry; with corrections, it is possible for any colored bodies. Consequently, as the number of operating wavelengths increases, the possibilities of optical thermometry to reduce methodical errors expand significantly. That is why, in the last 25-30 years, research has been reoriented towards more advanced spectral (multicolor) thermometry technologies. Optical thermometry operates with two physical entities: the substance (the controlled object) and the electromagnetic field (thermal radiation). Heat is transferred by radiation; therefore, radiation has energy, entropy, and temperature. Optical thermometry originated simultaneously with the development of thermal radiation theory, when the concept and term "radiation temperature" were not yet used, and therefore the concepts and terms "conditional temperatures" or "pseudo temperature" of controlled objects were introduced. They do not correspond to the physical sense and definitions of temperature in thermodynamics, molecular-kinetic theory, and statistical physics. The discussion launched by the scientific thermometry community about the possibility of measuring the temperatures of objects, including colored bodies, using the temperatures of their radiation is not finished. Is the information about controlled objects carried by their radiation sufficient for temperature measurements? The positive and negative answers to this fundamental question divided experts into two opposite camps. Recent achievements of spectral thermometry tip the balance in its favour and leave little hope for the skeptics. This article presents the results of investigations and developments in the field of spectral thermometry carried out by the authors in the Department of Thermometry and Physics-Chemical Investigations. The authors have many years of experience in the field of modern optical thermometry technologies. Innovative technologies of continuous optical temperature control have been developed: symmetric-wave, two-color compensative, and, based on the obtained nonlinearity equation of the spectral emissivity distribution, linear, two-range, and parabolic. The technologies are based on direct measurements of the radiation temperatures physically substantiated and proposed by Prof. L. Zhukov, with subsequent calculation of the controlled object temperature using these radiation temperatures and corresponding mathematical models. The technologies significantly improve the metrological characteristics of continuous contactless and light-guide temperature control in the energy, metallurgical, ceramic, glass, and other industries. For example, under the same conditions, the methodical errors of the proposed technologies are 2 and 3-13 times smaller than the errors of known spectral and classical technologies, respectively. The innovative technologies enable obtaining quality products at the lowest possible resource costs, including energy costs.
More than 600 works have been published on the completed developments, including more than 100 domestic patents, as well as 34 patents in Australia, Bulgaria, Germany, France, Canada, the USA, Sweden, and Japan. The developments have been implemented in enterprises in the USA, as well as in Western Europe and Asia, including Germany and Japan.
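
For reference, classical two-wavelength (spectral ratio) pyrometry under Wien's approximation for a grey body can be sketched as follows; the wavelengths and radiance ratio are illustrative values, not measurements from the described developments.

```python
# Sketch of classical two-wavelength (spectral-ratio) pyrometry under Wien's
# approximation for a grey body, as a reference point for the multi-wavelength
# technologies discussed above. Inputs are illustrative values only.
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(radiance_ratio, lam1_m, lam2_m):
    """Colour (ratio) temperature from L(lam1)/L(lam2); grey body assumed."""
    numerator = C2 * (1.0 / lam1_m - 1.0 / lam2_m)
    denominator = 5.0 * math.log(lam2_m / lam1_m) - math.log(radiance_ratio)
    return numerator / denominator

# e.g. spectral radiances measured at 0.65 um and 0.9 um with a ratio of ~0.167
print(f"{ratio_temperature(0.167, 0.65e-6, 0.9e-6):.0f} K")  # about 1800 K
```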

Keywords: emissivity, radiation temperature, object temperature, spectral thermometry

Procedia PDF Downloads 99
163 Horizontal Stress Magnitudes Using Poroelastic Model in Upper Assam Basin, India

Authors: Jenifer Alam, Rima Chatterjee

Abstract:

The Upper Assam sedimentary basin is one of the oldest commercially producing basins of India. As the basin lies in a tectonically active zone, estimation of tectonic strain and stress magnitudes has wide application in hydrocarbon exploration and exploitation. This east-northeast to west-southwest trending shelf-slope basin encompasses the Brahmaputra valley, extending from the Mikir Hills in the southwest to the Naga foothills in the northeast. The Assam Shelf, lying between the Main Boundary Thrust (MBT) and the Naga Thrust area, is comparatively free from thrust tectonics and exhibits a normal faulting mechanism. The study area is bounded by the MBT and the Main Central Thrust in the northwest. The Belt of Schuppen in the southeast, bordered by the Naga and Disang thrusts, marks the lower limit of the study area. The entire Assam basin shows low-level seismicity compared to other regions of northeast India. Pore pressure (PP), vertical stress magnitude (SV) and horizontal stress magnitudes have been estimated from two wells, N1 and T1, located in Upper Assam. N1 is located in the Assam gap below the Brahmaputra river, while T1 lies in the Belt of Schuppen. N1 penetrates geological formations from the top Alluvium through Dhekiajuli, Girujan, Tipam, Barail, Kopili, Sylhet and Langpur to the granitic basement, while T1, in the thrusted zone, crosses the Girujan Suprathrust, Tipam Suprathrust and Barail Suprathrust to reach the Naga Thrust. A normal compaction trend is drawn through shale points in both wells for estimation of PP using the conventional Eaton sonic equation with an exponent of 1.0, validated with Modular Dynamic Tester data and mud weight. The observed pore pressure gradient ranges from 10.3 to 11.1 MPa/km. SV has a gradient from 22.20 to 23.80 MPa/km. Minimum and maximum horizontal principal stress (Sh and SH) magnitudes under isotropic conditions are determined using the poroelastic model. This approach determines biaxial tectonic strain utilizing static Young’s modulus, Poisson’s ratio, SV, PP, leak-off test (LOT) data and SH derived from breakouts using prior information on unconfined compressive strength. Breakout-derived SH information is used for obtaining tectonic strain due to the lack of measured SH data from minifrac or hydrofracturing tests. Tectonic strain varies from 0.00055 to 0.00096 along the x direction and from -0.0010 to 0.00042 along the y direction. After obtaining tectonic strains at each well, the principal horizontal stress magnitudes are calculated from the linear poroelastic model. The Sh and SH gradients in the normal faulting region are 12.5 and 16.0 MPa/km, while in the thrust-faulted region the gradients are 17.4 and 20.2 MPa/km, respectively. The model-predicted Sh and SH match well with the LOT data and the breakout-derived SH data in both wells. It is observed from this study that the stress order SV>SH>Sh prevails in the shelf region, corresponding to a normal faulting regime, while near the Naga foothills the regime changes to SH≈SV>Sh. Hence, this model is a reliable tool for predicting stress magnitudes from well logs under an active tectonic regime in the Upper Assam Basin.
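
A hedged sketch of the linear poroelastic horizontal-strain equations in a commonly used form; the elastic constants, Biot coefficient and strains below are illustrative values within the reported ranges, not the wells' calibrated inputs.

```python
# Hedged sketch of the linear poroelastic horizontal-strain equations in the
# form commonly used for this kind of estimate (all inputs below are illustrative):
#   Sh = nu/(1-nu)*(Sv - a*Pp) + a*Pp + E/(1-nu^2)*(eps_h + nu*eps_H)
#   SH = nu/(1-nu)*(Sv - a*Pp) + a*Pp + E/(1-nu^2)*(eps_H + nu*eps_h)
def horizontal_stresses(Sv, Pp, E, nu, eps_h, eps_H, alpha=1.0):
    base = nu / (1.0 - nu) * (Sv - alpha * Pp) + alpha * Pp
    plane = E / (1.0 - nu**2)
    Sh = base + plane * (eps_h + nu * eps_H)   # minimum horizontal stress
    SH = base + plane * (eps_H + nu * eps_h)   # maximum horizontal stress
    return Sh, SH

# illustrative values for a depth of about 3 km (stresses and E in MPa)
Sv, Pp = 69.0, 32.1              # from roughly 23 and 10.7 MPa/km gradients
Sh, SH = horizontal_stresses(Sv, Pp, E=15000.0, nu=0.25,
                             eps_h=-0.0005, eps_H=0.0006)
print(f"Sh = {Sh:.1f} MPa, SH = {SH:.1f} MPa")   # about 38.8 and 52.0 MPa
```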

Keywords: Eaton, strain, stress, poroelastic model

Procedia PDF Downloads 216