Search results for: multipoint optimal minimum entropy deconvolution

161 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in the transition towards low-carbon energy systems in the European Union by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigate anthropogenic CO₂ emissions. Its advantage with respect to other CCS technologies lies in the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of both solvent kinetics and CO₂ cyclic capacity. Both factors affect the performance of the overall CO₂ absorption process by reducing the solvent flow-rate required for a given CO₂ removal efficiency. Lower solvent flow-rates decrease the reboiler duty during the regeneration stage and also reduce equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture: their molecular structure keeps the energy requirement of the regeneration process low. However, their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA, using a lab-scale semi-batch reactor. The CO₂ content of the synthetic flue gas was varied from 15% v/v (conventional coal combustion) to 60% v/v (the maximum CO₂ concentration allowable for optimal partial oxy-combustion operation). The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophoronediamine (IF), which is sterically hindered at one of its amino groups; its free amino group increases its cyclic capacity. In general, a higher CO₂ concentration in the flue gas accelerated CO₂ absorption, producing higher CO₂ absorption rates. In addition, the CO₂ loading also reached higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance gives this solvent a hybrid behavior, between fast and slow kinetic solvents. The kinetic rates observed in all experiments carried out with AMP were higher than those of MEA but lower than those of IF. The kinetic enhancement experienced by AMP at high CO₂ concentration was slightly over 60%, compared with 70%–80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15% v/v to 60% v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading increased from 1.10 to 1.34 mol CO₂ per mol of solvent between 15% v/v and 60% v/v CO₂, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.
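
As a rough illustration of the kind of quantities compared in this study, the sketch below (not the authors' code; the flow rate and outlet compositions are invented placeholders) estimates a CO₂ absorption rate from a gas-phase balance over a semi-batch reactor and the relative increase in cyclic capacity implied by the reported IF loadings.

```python
# Minimal sketch (not the authors' code): estimating the CO2 absorption rate and the
# solvent loading from inlet/outlet gas measurements in a semi-batch absorption test.
# All variable names and the illustrative numbers below are assumptions.

def absorption_rate(gas_flow_mol_s, y_co2_in, y_co2_out):
    """Molar CO2 absorption rate (mol/s) from a gas-phase balance,
    assuming the inert gas flow is unchanged across the reactor."""
    inert = gas_flow_mol_s * (1.0 - y_co2_in)          # mol/s of non-CO2 gas
    co2_in = gas_flow_mol_s * y_co2_in                 # mol/s CO2 entering
    co2_out = inert * y_co2_out / (1.0 - y_co2_out)    # mol/s CO2 leaving
    return co2_in - co2_out

def loading(absorbed_mol_co2, solvent_mol):
    """Cyclic capacity expressed as mol CO2 per mol of amine."""
    return absorbed_mol_co2 / solvent_mol

# Illustrative comparison of the two flue-gas cases discussed in the abstract
# (15% v/v vs 60% v/v CO2); flow and outlet values are invented placeholders.
r_15 = absorption_rate(gas_flow_mol_s=0.010, y_co2_in=0.15, y_co2_out=0.05)
r_60 = absorption_rate(gas_flow_mol_s=0.010, y_co2_in=0.60, y_co2_out=0.30)
print(f"rate at 15% CO2: {r_15:.2e} mol/s, rate at 60% CO2: {r_60:.2e} mol/s")

# Relative increase in cyclic capacity implied by the IF loadings quoted above:
print(f"loading increase: {(1.34 - 1.10) / 1.10 * 100:.1f} %")   # about 21.8 %
```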

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 191
160 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have developed defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Chosen colonies are then grown on media containing antibiotic(s), using micro-diffusion discs (a second 24 h culturing stage), in order to determine bacterial susceptibility. Other approaches, such as genotyping methods, the E-test, and automated systems, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, combining IR spectroscopy with new bioinformatics analyses yields a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories at Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using infrared spectroscopy together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
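
A minimal sketch of the type of multivariate workflow described here, assuming spectra are already preprocessed into a samples-by-wavenumbers matrix; this is not the authors' pipeline, and the PCA-plus-linear-SVM choice and the random placeholder data are illustrative assumptions only.

```python
# Minimal sketch (not the authors' pipeline): classifying FTIR absorbance spectra
# into antibiotic-sensitive vs. resistant isolates with PCA + a linear SVM.
# X would hold one preprocessed spectrum per row; here it is random placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 900          # e.g. spectra sampled over 900 wavenumbers
X = rng.normal(size=(n_samples, n_wavenumbers))
y = rng.integers(0, 2, size=n_samples)       # 0 = sensitive, 1 = resistant (per disc assay)

model = make_pipeline(
    StandardScaler(),                        # per-wavenumber scaling
    PCA(n_components=20),                    # multivariate dimension reduction
    SVC(kernel="linear"),                    # supervised classifier
)
scores = cross_val_score(model, X, y, cv=5)  # success rate estimated by cross-validation
print(f"mean classification accuracy: {scores.mean():.2f}")
```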

Keywords: antibiotics, E.coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 174
159 Partially Aminated Polyacrylamide Hydrogel: A Novel Approach for Temporary Oil and Gas Well Abandonment

Authors: Hamed Movahedi, Nicolas Bovet, Henning Friis Poulsen

Abstract:

Following the advent of the Industrial Revolution, there has been a significant increase in the extraction and utilization of hydrocarbon and fossil fuel resources. However, a new era has emerged, characterized by a shift towards sustainable practices, namely the reduction of carbon emissions and the promotion of renewable energy generation. Given the substantial number of mature oil and gas wells that have been developed within petroleum reservoirs, it is imperative to establish an environmental strategy and adopt appropriate measures to effectively seal and decommission these wells. In general, a cement plug serves as the plugging material. Nevertheless, there exist scenarios in which the durability of such a plug is compromised, leading to the potential escape of hydrocarbons via fissures and fractures within the cement. Furthermore, cement is often not considered a practical solution for temporary plugging, particularly in the case of well sites that have the potential for future gas storage or CO₂ injection. The Danish oil and gas industry is a promising candidate for future carbon dioxide (CO₂) injection, thereby contributing to the implementation of carbon capture strategies within Europe. The primary reservoir component consists of chalk, a rock characterized by limited permeability. This work focuses on the development and characterization of a novel hydrogel variant. The hydrogel is designed to be injected into a low-permeability reservoir, where it subsequently transforms into a high-viscosity gel. The primary objective of this research is to explore the potential of this hydrogel as a new solution for effectively plugging well flow. Initially, polyacrylamide was synthesized by radical polymerization in a reaction flask. Subsequently, through the Hofmann rearrangement, the polymer chain is partially aminated, enabling it to react with the crosslinker and form a hydrogel in the next stage. The organic crosslinker glutaraldehyde was employed to induce gel formation, which occurred when the polymeric solution was heated within a specified range of reservoir temperatures. Additionally, a rheological survey and gel time measurements were conducted on several polymeric solutions to determine the optimal concentration. The findings indicate that the gel time depends on the starting concentration and ranges from 4 to 20 hours, allowing it to be tuned to accommodate diverse injection strategies. Moreover, the gel can be formed in acidic and highly saline environments, which makes this material suitable for application in challenging reservoir conditions. The rheological investigation indicates that the polymeric solution exhibits the characteristics of a Herschel-Bulkley fluid with a somewhat elevated yield stress prior to solidification.

Keywords: polyacrylamide, Hofmann rearrangement, rheology, gel time

Procedia PDF Downloads 78
158 Description of Decision Inconsistency in Intertemporal Choices and Representation of Impatience as a Reflection of Irrationality: Consequences in the Field of Personalized Behavioral Finance

Authors: Roberta Martino, Viviana Ventre

Abstract:

Empirical evidence has, over time, confirmed that the behavior of individuals is inconsistent with the descriptions provided by the Discounted Utility Model, an essential reference for calculating the utility of intertemporal prospects. The model assumes that individuals calculate the utility of an intertemporal prospect by summing, over all outcomes, the cardinal utility of each outcome multiplied by the discount function evaluated at the time the outcome is received. The shape of the discount function is crucial for the preferences of the decision maker because it represents the perception of the future, and it determines whether preferences are temporally consistent or temporally inconsistent. In particular, because different formulations of the discount function lead to different conclusions in predicting choice, the descriptive ability of models with a hyperbolic trend is greater than that of linear or exponential models. Choices that are suboptimal from any point in time are the consequence of this mechanism, whose psychological factors are encapsulated in the trend of the discount rate. In addition, analyzing the decision-making process from a psychological perspective, there is an equivalence between the selection of dominated prospects and a degree of impatience that decreases over time. The first part of the paper describes and investigates the anomalies of the Discounted Utility Model by relating the cognitive distortions of the decision maker to the emotional factors that are generated during the evaluation and selection of alternatives. Specifically, by studying the degree to which impatience decreases, it is possible to quantify how the psychological and emotional mechanisms of the decision maker result in a lack of decision persistence. This description also presents inconsistency as the consequence of an inconsistent attitude towards time-delayed choices. The second part of the paper presents an experimental phase in which we show the relationship between inconsistency and impatience in different contexts. Analysis of the degree to which impatience decreases confirms the influence of the decision maker's emotional impulses for each anomaly of the utility model discussed in the first part of the paper. This work provides an application in the field of personalized behavioral finance. Indeed, the numerous behavioral differences, evident even in the degrees of decrease in impatience observed in the experimental phase, support the idea that optimal strategies may not satisfy individuals in the same way. With the aim of homogenizing the categories of investors and providing a personalized approach to advice, the results obtained in the experimental phase are used, together with findings from behavioral finance, to implement the Analytic Hierarchy Process model for intertemporal choices, which is useful for strategic personalization. In the construction of the Analytic Hierarchy Process, the degree of decrease in impatience is understood as reflecting irrationality in decision-making and is therefore used to construct the weights between anomalies and behavioral traits.
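
A worked sketch, with illustrative parameters only, of the discounting mechanism discussed above: under the Discounted Utility form U = Σ D(t)·u(x_t), an exponential discount function yields time-consistent rankings, whereas a hyperbolic one (declining impatience) produces the preference reversals that the paper treats as inconsistency.

```python
# Minimal sketch (illustrative parameters only): exponential vs. hyperbolic discounting
# in the Discounted Utility framework, U = sum_t D(t) * u(x_t).
# A hyperbolic D(t) makes the implied discount rate fall with delay, which is what
# produces the preference reversals (time inconsistency) discussed above.

def d_exp(t, delta=0.90):
    return delta ** t                # constant per-period discount rate

def d_hyp(t, k=0.30):
    return 1.0 / (1.0 + k * t)       # impatience that declines with delay

def present_value(amount, delay, discount):
    return amount * discount(delay)

for discount, name in [(d_exp, "exponential"), (d_hyp, "hyperbolic")]:
    # Choice 1: 100 now vs. 120 in one period.
    near = present_value(100, 0, discount) > present_value(120, 1, discount)
    # Choice 2: the same pair pushed 10 periods into the future.
    far = present_value(100, 10, discount) > present_value(120, 11, discount)
    print(f"{name:12s} prefers smaller-sooner now: {near}, in 10 periods: {far}")
# Exponential: the two answers agree (temporally consistent preferences).
# Hyperbolic: the smaller-sooner reward wins only when it is imminent (a reversal).
```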

Keywords: analytic hierarchy process, behavioral finance, financial anomalies, impatience, time inconsistency

Procedia PDF Downloads 68
157 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated from a linear transformation of the training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at the voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated from the information at four voltage levels, without recovery of the signal waveform, equals 1.05 cm. When the information from the four voltage levels is applied to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution equals 0.93 cm. This is very important information, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
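
A minimal numerical sketch, not the J-PET implementation, of the recovery idea described above: a PCA basis is learned from training waveforms, and a waveform is reconstructed from a few samples by Tikhonov regularization, whose quadratic form admits the explicit closed-form solution mentioned in the abstract. The synthetic pulse shape, the eight sample positions, and the regularization weight are all assumptions.

```python
# Minimal sketch (not the J-PET code): Tikhonov-regularized recovery of a waveform
# from a handful of samples, using a PCA basis learned from training waveforms.
# All sizes, the synthetic pulse shape, and lambda are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_t = 200                                     # time bins of the full waveform
t = np.linspace(0, 1, n_t)

# Training set: pulse-like signals with random amplitude/width (stand-in data).
def pulse(a, w):
    return a * t * np.exp(-t / w)
train = np.array([pulse(rng.uniform(0.8, 1.2), rng.uniform(0.05, 0.15)) for _ in range(500)])

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:8].T                                  # PCA basis: 8 principal components (n_t x 8)

# Measurement operator A: pick 8 sample times (stand-in for the threshold crossings).
idx = np.linspace(5, n_t - 5, 8).astype(int)
A = np.zeros((len(idx), n_t)); A[np.arange(len(idx)), idx] = 1.0

x_true = pulse(1.0, 0.1)
b = A @ x_true + rng.normal(scale=0.005, size=len(idx))

# Tikhonov solution in PCA coordinates: min ||A(mean + V c) - b||^2 + lam * ||c||^2,
# solved in closed form because the objective is quadratic in c.
M = A @ V
lam = 1e-3
c = np.linalg.solve(M.T @ M + lam * np.eye(V.shape[1]), M.T @ (b - A @ mean))
x_rec = mean + V @ c

err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```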

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 447
156 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to consume cloud services on a 'pay per usage' basis. This technology is growing at a fast pace, and so are its security threats. Storage is one of the various services provided by the cloud, and in this service security plays a vital role in both authenticating legitimate users and protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness give user-behaviour-based biometrics greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here include not a single trait but multiple traits, namely iris and fingerprints. The coordinating stage of the authentication system relies on an ensemble Support Vector Machine (SVM): after each individual SVM of the ensemble is trained, the weights of the base SVMs are assembled and optimized for the SVM ensemble by the Artificial Fish Swarm Algorithm (AFSA). This helps generate a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect the records from hackers by preventing the ciphertext from being broken back into the original text. The proposed double cryptographic key scheme is thus capable of providing better user authentication and better security, distinguishing between genuine and fake users. There are three important modules in the proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique has been developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been intercepted. The results show that the authentication process is optimal and the stored information is secure.
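
A minimal sketch of a weighted SVM ensemble of the kind described above; it is not the authors' system. The Artificial Fish Swarm Algorithm that tunes the ensemble weights in the paper is replaced here by simple accuracy-proportional weights, and the multimodal feature matrix is random placeholder data.

```python
# Minimal sketch (not the authors' system): a weighted ensemble of base SVMs.
# In the paper the combination weights are tuned by the Artificial Fish Swarm
# Algorithm; here accuracy-proportional weights stand in for that optimizer.
# X is random placeholder data standing in for fused iris/fingerprint features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))              # stand-in multimodal feature vectors
y = rng.integers(0, 2, size=300)            # genuine (1) vs. impostor (0) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Train base SVMs on bootstrap resamples of the training set.
base, weights = [], []
for seed in range(5):
    idx = np.random.default_rng(seed).integers(0, len(X_tr), len(X_tr))
    clf = SVC(kernel="rbf", probability=True).fit(X_tr[idx], y_tr[idx])
    base.append(clf)
    weights.append(accuracy_score(y_tr, clf.predict(X_tr)))   # crude stand-in weight

weights = np.array(weights) / np.sum(weights)

# Weighted fusion of the base classifiers' probability outputs.
proba = sum(w * clf.predict_proba(X_te)[:, 1] for w, clf in zip(weights, base))
y_pred = (proba > 0.5).astype(int)
print(f"ensemble accuracy: {accuracy_score(y_te, y_pred):.2f}")
```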

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 260
155 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries

Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni

Abstract:

In a context of increasing stress placed on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, storage has the highest potential when connected close to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the competitive advantage of fossil fuel-based technologies regarding regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as PS is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage levels. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. An effective PS strategy must, therefore, include a smart charge-recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail: first, algorithms using a constant threshold or power rate for charge recovery; second, algorithms using the State of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast, the impact of whose accuracy is discussed. A set of performance metrics was defined in order to quantitatively evaluate their operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power is far from optimal: a single value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, these depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfactory performance, making them a strong alternative when a reliable forecast is not available.
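
A toy simulation, with invented parameters, of the SOC-based category of peak-shaving algorithms described above: the battery discharges whenever demand exceeds a threshold, and charge recovery is allowed only while demand is below the threshold and the state of charge is under a target, so that recharging cannot create a new peak.

```python
# Minimal sketch (illustrative only): an SOC-based peak-shaving strategy.
# The battery discharges whenever household demand exceeds a threshold and recovers
# charge only while demand is low and the state of charge (SOC) is below a target,
# so recharging does not create a new peak. All parameters are invented placeholders.
import numpy as np

rng = np.random.default_rng(2)
demand = np.clip(rng.normal(0.5, 0.4, size=24 * 60), 0, None)  # kW, 1-min resolution

capacity_kwh, p_max_kw = 5.0, 2.0
threshold_kw, soc_target = 1.2, 0.8
dt_h = 1.0 / 60.0

soc, grid = 0.5, np.zeros_like(demand)       # SOC as a fraction of capacity
for i, load in enumerate(demand):
    if load > threshold_kw and soc > 0.05:
        # Discharge to shave the part of the load above the threshold.
        p = min(load - threshold_kw, p_max_kw, soc * capacity_kwh / dt_h)
        soc -= p * dt_h / capacity_kwh
        grid[i] = load - p
    elif load < threshold_kw and soc < soc_target:
        # Charge recovery, capped so the grid draw never exceeds the threshold.
        p = min(threshold_kw - load, p_max_kw, (soc_target - soc) * capacity_kwh / dt_h)
        soc += p * dt_h / capacity_kwh
        grid[i] = load + p
    else:
        grid[i] = load

print(f"original peak: {demand.max():.2f} kW, peak after shaving: {grid.max():.2f} kW")
```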

Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm

Procedia PDF Downloads 118
154 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity

Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido

Abstract:

Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of a medicinal plant is Ruta graveolens L. For a long time, the Ruta g. herb has been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the content of these substances is highly dependent on environmental factors such as temperature, humidity, and soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or finish line. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of a new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in the reversed-phase system in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase to the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of successive fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow. This effect produces unreliable and unrepeatable results, causing blurring and deformation of the substance zones. In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer with controlled velocity, by a moving pipette driven by a 3D machine. The rate of solvent delivery to the adsorbent layer is equal to or lower than that of conventional development; therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also enables convenient continuous gradient elution to be performed practically without the so-called gradient delay. In the study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. Fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.

Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens

Procedia PDF Downloads 289
153 Nutritional Genomics Profile Based Personalized Sport Nutrition

Authors: Eszter Repasi, Akos Koller

Abstract:

Our genetic information determines our appearance, physiology, sports performance, and all our other features. Efforts to maximize athletes' performance have adopted a science-based approach to nutritional support. Nowadays, genetic studies have blended with the nutritional sciences, and a dynamically evolving new research field has appeared. Nutritional genomics needs to be used by nutrition experts. This recent field of nutritional science can provide a way to reach the best sport performance using correlations between the athlete's genome, nutrients, and molecules, including the human microbiome (the links between food, the microbiome, and epigenetics), nutrigenomics, and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technologies to obtain information about athletes, such as a nutritional genomics profile (including determination of the oral and gut microbiome and DNA-coded reactions to food components), which can modify the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics. The heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in health and well-being, and there are further links between food or nutrition and the composition of the human microbiome, which can also lead to diseases and epigenetic changes. A nutritional genomics-based profile of an athlete can be the best technique for a dietitian to create a unique sports nutrition plan. Using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best response to the effect of nutrients on health, as nutrients act through the genome to promote metabolites and result in changes in physiology. Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of those nutrients. When nutritional deficiencies and failures are controlled, deterioration of health status is prevented, or a newly discovered food intolerance is monitored by a proper medical team, better sports performance can be supported. It is important that the dietetics profession be informed about gene-diet interactions, which may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for documenting and monitoring health status data and risk factors can support the medical team, warn it so that early action can be taken, and help provide proper health services in time. This model can provide personalized nutrition advice from status assessment, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to modify the composition of the microbiome and the environmental and genetic risk factors in athletes.

Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan

Procedia PDF Downloads 172
152 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach

Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca

Abstract:

The production of major crops in Northern Ethiopia, especially in the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since cactus pear is a drought-resistant plant, it is considered a lifesaving fruit and a strategy for poverty reduction in the drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector is experiencing many constraints, with limited attention given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies for doing so, thereby improving production and smallholders' income. Both probability and non-probability sampling techniques were employed to collect the data. Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling was employed to survey 30 households from each of the two peasant associations, and a semi-structured questionnaire was used as the data collection tool. Moreover, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather more information about cactus pear production, post-harvest losses, the strategies used to reduce these losses, and suggestions for improving post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs. The analysis also employed a chain map, correlations, a stakeholder matrix, and gross margins, together with mean comparisons between variables such as ANOVA and t-tests. The analysis shows that the present cactus pear value chain involves main actors and supporters; however, there is inadequate information flow and informal market linkage among actors in the chain. Farmers' gross margin is higher when they sell to the processor than when they sell to collectors. The largest post-harvest loss in the cactus pear value chain occurs at the producer level, followed by wholesalers and retailers. The maximum and minimum volumes of post-harvest losses at the producer level are 4212 and 240 kg per season, respectively. The post-harvest losses were caused by farmers' limited skills in farm management and harvesting, low market prices, limited market information, the absence of producer organizations, poor post-harvest handling, the absence of cold storage and collection centers, poor infrastructure, inadequate credit access, traditional transportation systems, the absence of quality control, illegal traders, inadequate research and extension services, and the use of inappropriate packaging materials. Therefore, some of the recommendations are to provide adequate practical training, form producer organizations, and construct collection centers.
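
A simplified sketch of the margin and mean-comparison calculations mentioned above; the prices, costs, and loss samples are invented placeholders (only the 240 kg and 4212 kg extremes echo the reported producer-level range) and do not reproduce the study's data.

```python
# Minimal sketch (invented figures, not the study's data): the kind of gross-margin
# and mean-comparison calculations mentioned in the abstract.
import numpy as np
from scipy import stats

def gross_margin(selling_price, unit_cost):
    """Gross margin expressed as a share of the selling price."""
    return (selling_price - unit_cost) / selling_price

# Hypothetical prices (per kg) received in two marketing channels vs. production cost.
print(f"margin when selling to the processor: {gross_margin(6.0, 2.5):.0%}")
print(f"margin when selling to collectors:    {gross_margin(4.0, 2.5):.0%}")

# Welch t-test comparing seasonal post-harvest losses (kg) of two farmer groups;
# the samples are placeholders spanning the reported 240-4212 kg producer-level range.
losses_group_a = np.array([240, 800, 1500, 2600, 4212], dtype=float)
losses_group_b = np.array([180, 650, 1200, 2100, 3500], dtype=float)
t_stat, p_value = stats.ttest_ind(losses_group_a, losses_group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```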

Keywords: cactus pear, post-harvest losses, profit margin, value-chain

Procedia PDF Downloads 132
151 An Unusual Manifestation of Spirituality: Kamppi Chapel of Helsinki

Authors: Emine Umran Topcu

Abstract:

In both urban design and architecture, the primary goal is considered to be understanding the ways in which people feel and think about space and place. Humans, in general, see place as security and space as freedom; they feel attached to place and long for space. Contemporary urban design manifests itself by addressing basic physical and psychological human needs, while not much attention is paid to transcendence. There seems to be a gap in the hierarchy of human needs. Usually, the social aspects of public space are addressed through urban design, while the more personal and intimately scaled needs of the individual are neglected. How does built form contribute to an individual's growth, contemplation, and exploration, in other words, to a greater meaning in the immediate environment? Architects love to talk about meaning, poetics, attachment, and other ethereal aspects of space that are not visible attributes of places. This paper aims to describe spirituality through built form, drawing on a personal experience of the Kamppi Chapel of Helsinki. Experience covers the various modes through which a person unfolds or constructs reality; perception, sensation, emotion, and thought can be counted among these modes. To experience is to get to know, and what can be known is a construct of experience. Feelings and thoughts about space and place are very complex in human beings; they grow out of life experiences. The author had the chance to visit the Kamppi Chapel in April 2017, out of which this experience grew. The Kamppi Chapel is located on the south side of the busy Narinkka Square in central Helsinki. It offers a place to quiet down and compose oneself in a most lively urban space. With its curved wooden facade, the small building looks more like a museum than a chapel; it can be called a museum for contemplation. With its gently shaped interior, it embraces visitors and shields them from the hustle and bustle of the city outside. Places of worship in all faiths signify sacred power. The author, having origins in a part of the world where domes and minarets dominate the cityscape, was impressed by the size and architectural visibility of the Chapel. Anyone born and trained in such a tradition shares the inherent values and psychological mechanisms of spirituality, sacredness, and the modest realities of their environment. Spirituality in all cultural traditions has not been analyzed and reinterpreted in new conceptual frameworks. Fundamentalists may reject this positivist attitude, but the Kamppi Chapel, as it stands, does not claim to be a model to be followed. It simply faces the task of representing a religious facility in an urban setting largely shaped by modern urban planning, which seems to the author to be searching for a new definition of individual status. The tension between the established and the new is the demand for modern efficiency versus dogmatic rigidity. The architecture here has played a very promising and rewarding role for spirituality: the designers have acted as translators of the human desire for a better life and an aesthetic environment, to the optimal satisfaction of local citizens and visitors alike.

Keywords: architecture, Kamppi Chapel, spirituality, urban

Procedia PDF Downloads 183
150 Cereal Bioproducts Conversion to Higher Value Feed by Using Pediococcus Strains Isolated from Spontaneous Fermented Cereal, and Its Influence on Milk Production of Dairy Cattle

Authors: Vita Krungleviciute, Rasa Zelvyte, Ingrida Monkeviciene, Jone Kantautaite, Rolandas Stankevicius, Modestas Ruzauskas, Elena Bartkiene

Abstract:

The environmental impact of agricultural bioproducts from the processing of food crops is an increasing concern worldwide. Currently, cereal bran is used as a low-value ingredient for both human consumption and animal feed. The most popular bioprocessing technologies for increasing the nutritional and technological functionality of cereal bran are enzymatic processing and fermentation, and the most popular starters in fermented feed production are lactic acid bacteria (LAB), including pediococci. However, the ruminant digestive system is unique: billions of microorganisms help the cow digest and utilize the nutrients in the feed. To achieve efficient feed utilization and a high milk yield, these microorganisms must have optimal conditions, and an imbalance of this system is highly undesirable. The strains Pediococcus acidilactici BaltBio01 and Pediococcus pentosaceus BaltBio02 were isolated from spontaneously fermented rye, identified (by the rep-PCR method), and characterized by their growth (Thermo Bioscreen C automatic turbidometer), acidification rate (2 hours at pH 2.5), gas production (Durham method), and carbohydrate metabolism (API 50 CH test). The antimicrobial activities of the isolated pediococci against a variety of pathogenic and opportunistic bacterial strains previously isolated from diseased cattle, as well as their resistance to antibiotics, were evaluated (EFSA-FEEDAP method). The isolated Pediococcus strains were cultivated in a barley/wheat bran (90/10, m/m) substrate, and the developed supplements, with a high content of valuable pediococci, were used for feeding Lithuanian Black-and-White dairy cows. In addition, the influence of the supplements on milk production and composition was determined; milk composition was evaluated with the LactoScope FTIR FT1.0 2001 (Delta Instruments, Holland). P. acidilactici BaltBio01 and P. pentosaceus BaltBio02 demonstrated versatile carbohydrate metabolism, growth at 30°C and 37°C, and acid tolerance. The isolated strains proved to be non-resistant to antibiotics and showed antimicrobial activity against undesirable microorganisms. By fermenting barley/wheat bran with the selected Pediococcus strains, it is possible to produce a safer feed stock (with reduced Enterobacteriaceae, total aerobic bacteria, and yeast and mold counts) with a high content of pediococci. A significantly higher milk yield could be obtained after 33 days of feeding dairy cows the mixed Pediococcus supplement, while a similar effect could be achieved with the separate strains after 66 days of feeding. It can be stated that barley/wheat bran could be used to produce higher-value feed in order to increase milk production. Further research is needed to identify the main mechanism of this positive action.

Keywords: barley/wheat bran, dairy cattle, fermented feed, milk, pediococcus

Procedia PDF Downloads 308
149 Antibacterial Bioactive Glasses in Orthopedic Surgery and Traumatology

Authors: V. Schmidt, L. Janovák, N. Wiegand, B. Patczai, K. Turzó

Abstract:

Large bone defects are not able to heal spontaneously. Bioactive glasses seem to be appropriate (bio)materials for bone reconstruction: they are osteoconductive and osteoinductive and therefore play a useful role in bone regeneration and repair. Because of their suboptimal mechanical properties (e.g., brittleness, low bending strength, and low fracture toughness), their applications are limited. Bioactive glass can, however, be used as a coating material applied to metal surfaces. In this way, when such coated metals are used as implants, the excellent mechanical properties of metals and the biocompatibility and bioactivity of glasses are both utilized. Furthermore, ion release effects of bioactive glasses on osteogenic and angiogenic responses have been shown. Silicate bioactive glasses (45S5 Bioglass) induce the release and exchange of soluble Si, Ca, P, and Na ions at the material surface. This leads to specific cellular responses inducing bone formation, which is favorable for the biointegration of orthopedic prostheses. The incorporation of additional elements into the silicate network, such as fluorine, magnesium, iron, silver, potassium, or zinc, has also been demonstrated, as the local delivery of these ions is able to enhance specific cell functions. Although hip and knee prostheses present a high success rate, bacterial infections, mainly implant-associated, are serious and frequent complications. Infection can also develop after the implantation of hip prostheses, and its elimination means more surgeries for the patient and additional costs for the clinic. Prosthesis-related infection is a severe complication of orthopedic surgery, which often causes prolonged illness, pain, and functional loss. While international efforts are being made to reduce the risk of these infections, orthopedic surgical site infections (SSIs) continue to occur in high numbers. It is currently estimated that up to 2.5% of primary hip and knee surgeries and up to 20% of revision arthroplasties are complicated by periprosthetic joint infections (PJIs). According to some authors, these numbers are underestimated, and they are also increasing. Staphylococcus aureus is the leading cause of both SSIs and PJIs, and the prevalence of methicillin-resistant S. aureus (MRSA) is on the rise, particularly in the United States. These deep infections lead to implant removal and consequently increase morbidity and mortality. The study targets this clinical problem using our experience so far with Ag-doped polymer coatings on titanium implants. Non-modified or modified (e.g., doped with antibacterial agents such as Ag) bioactive glasses could play a role in the prevention of infections or the therapy of infected tissues. Bioactive glasses have excellent biocompatibility, as proven by in vitro cell culture studies of human osteoblast-like MG-63 cells. Ag-doped bioactive glass scaffolds have good antibacterial ability against Escherichia coli and other bacteria. It may be concluded that these scaffolds have great potential in the prevention and therapy of implant-associated bone infections.

Keywords: antibacterial agents, bioactive glass, hip and knee prosthesis, medical implants

Procedia PDF Downloads 193
148 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a major challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points of discussion. However, the challenge is the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of infrastructure failure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained deep learning algorithm for risk assessment that is capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and by imitating human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
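
A minimal sketch of one selection layer of the group method of data handling (GMDH) named above, not the authors' model: each candidate neuron is a quadratic polynomial of a pair of inputs, fitted by least squares on a training split and ranked on a validation split (the external criterion). The risk-indicator data are random placeholders.

```python
# Minimal sketch (not the authors' model): one selection layer of the Group Method of
# Data Handling (GMDH). Each candidate neuron is a quadratic polynomial of a pair of
# inputs, fitted by least squares on a training split and ranked on a validation split.
# The risk data below are random placeholders.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))                 # e.g. near-miss indicators for a pipeline
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)  # toy risk score

def poly_features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

n_train = 140
candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    P = poly_features(X[:, i], X[:, j])
    coef, *_ = np.linalg.lstsq(P[:n_train], y[:n_train], rcond=None)
    val_err = np.mean((P[n_train:] @ coef - y[n_train:]) ** 2)   # external criterion
    candidates.append((val_err, (i, j), coef))

# Keep the best neurons; in a full GMDH their outputs feed the next layer,
# and layers are added while the external criterion keeps improving.
best = sorted(candidates, key=lambda c: c[0])[:3]
for err, (i, j), _ in best:
    print(f"inputs ({i}, {j}): validation MSE = {err:.3f}")
```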

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
147 'Go Baby Go'; Community-Based Integrated Early Childhood and Maternal Child Health Model Improving Early Childhood Stimulation, Care Practices and Developmental Outcomes in Armenia: A Quasi-Experimental Study

Authors: Viktorya Sargsyan, Arax Hovhannesyan, Karine Abelyan

Abstract:

Introduction: During the last decade, scientific studies have proven the importance of Early Childhood Development (ECD) interventions. These interventions have been shown to create strong foundations for children's intellectual, emotional, and physical well-being and to shape learning and economic outcomes as children mature into adulthood. Many children in rural Armenia fail to reach their full developmental potential due to a lack of early brain stimulation (playing, singing, reading, etc.) from their parents and a lack of community tools and services to follow up on children's neurocognitive development. This is exacerbated by high rates of stunting and anemia among children under 3 (CU3). This research study tested the effectiveness of an integrated ECD and Maternal, Newborn and Child Health (MNCH) model, called “Go Baby, Go!” (GBG), against the traditional MNCH strategy, which focuses solely on preventive health and nutrition interventions. The hypothesis of this quasi-experimental study was: children exposed to GBG will have better neurocognitive and nutrition outcomes than those receiving only the MNCH intervention. The secondary objective was to assess the effect of GBG on parental child care and nutrition practices. Methodology: The 14-month-long study targeted all 1,300 children aged 0 to 23 months living in 43 study communities in the Gavar and Vardenis regions (Gegharkunik province, Armenia). Twenty-three intervention communities (680 children) received GBG, and 20 control communities (630 children) received MNCH interventions only. Baseline and evaluation data on child development, nutrition status, and parental child care and nutrition practices were collected (caregiver interviews, direct child assessment). In the intervention sites, in addition to MNCH activities (maternity schools, supportive supervision for Health Care Providers (HCPs)), trained GBG facilitators conducted six interactive group sessions for mothers (key messages, information, group discussions, role playing, video watching, and toy/book preparation, according to the GBG curriculum) and two condensed GBG sessions for adult family members (husbands, grandmothers). The trained HCPs received quality supervision for ECD counseling and screening. Findings: The GBG model proved to be effective in improving ECD outcomes. Children in the intervention sites had 83% higher odds of a better total ECD composite score (cognitive, language, motor) compared with children in the control sites (aOR 1.83; 95% CI: 1.08-3.09; p=0.025). Caregivers also demonstrated better child care and nutrition practices: minimum dietary diversity in the intervention sites was 55% higher than in the control sites (aOR 1.55, 95% CI 1.10-2.19, p=0.013), as was support for learning and disciplining practices (aOR 2.22, 95% CI 1.19-4.16, p=0.012). However, there was no evidence of stunting reduction in either study arm. The effect of the integrated model was more prominent in Vardenis, a community characterized by high food insecurity and limited knowledge of positive parenting skills. Conclusion: The GBG model is effective and could be applied in target areas with the greatest economic disadvantages and parenting challenges to improve ECD, care practices, and developmental outcomes. Longitudinal studies are needed to assess the long-term effects of GBG on learning and school readiness.
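
A minimal sketch, on simulated data rather than the study's, of how an adjusted odds ratio (aOR) and its 95% confidence interval of the kind reported above are typically obtained from a logistic regression of a binary outcome on the intervention arm plus covariates; the covariates and effect sizes are assumptions.

```python
# Minimal sketch (simulated data, not the study's): deriving an adjusted odds ratio
# (aOR) and its 95% CI from a logistic regression of a binary outcome on the
# intervention arm plus covariates such as child age and region.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1300
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),          # 1 = GBG arm, 0 = MNCH-only arm
    "age_months": rng.uniform(0, 24, n),
    "region": rng.integers(0, 2, n),                # e.g. Gavar vs. Vardenis
})
# Simulate a binary outcome (e.g. composite ECD score above a cut-off) with a true effect.
logit = -0.5 + 0.6 * df["intervention"] + 0.02 * df["age_months"] + 0.1 * df["region"]
df["good_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("good_outcome ~ intervention + age_months + C(region)", data=df).fit(disp=0)
aor = np.exp(model.params["intervention"])
ci_low, ci_high = np.exp(model.conf_int().loc["intervention"])
print(f"aOR = {aor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```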

Keywords: early childhood development, integrated interventions, parental practices, quasi-experimental study

Procedia PDF Downloads 172
146 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada

Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone

Abstract:

Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now accounts for both greater effort and greater retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and as having low predictive power, since it is often limited to sampling fragments of fluid, temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. The study involved continuous monitoring at high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, and the proportion of the fishery management area represented by the cameras. The study then demonstrated how this approach can substantially enhance the temporal resolution of fishery monitoring through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also detected considerable off-season fishing, currently unaccounted for in the existing monitoring framework. These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail unavailable in the current monitoring framework, each of which has important implications for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.
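
A minimal sketch, with synthetic timestamps, of how per-image detector output could be turned into the effort summaries described above (hourly counts, a diel activity profile, monthly totals); the 5-minute frame interval and detection rates are assumed, and this is not the authors' analysis code.

```python
# Minimal sketch (synthetic timestamps, not the authors' analysis): turning per-image
# vessel detections into effort summaries: hourly counts, a diel activity profile,
# and monthly totals. The detector is assumed to have already produced one row per
# image with a timestamp and the number of vessels detected in that frame.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
frames = pd.DataFrame({
    "timestamp": pd.date_range("2021-06-01", periods=10_000, freq="5min"),
    "n_vessels": rng.poisson(0.3, size=10_000),     # detector output per image
})

# Thin the dataset: keep only frames with at least one detection.
detections = frames[frames["n_vessels"] > 0]

hourly = detections.set_index("timestamp")["n_vessels"].resample("1h").sum()
diel = detections.groupby(detections["timestamp"].dt.hour)["n_vessels"].mean()
monthly = detections.groupby(detections["timestamp"].dt.month)["n_vessels"].sum()

print("busiest hour of day (mean detections per frame):", int(diel.idxmax()))
print("monthly detection totals:")
print(monthly)
```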

Keywords: cameras, monitoring, recreational fishing, stock assessment

Procedia PDF Downloads 123
145 Communicating Safety: A Digital Ethnography Investigating Social Media Use for Workplace Safety

Authors: Kelly Jaunzems

Abstract:

Social media is a powerful instrument of communication, enabling the presentation of information in multiple forms and modes, amplifying the interactions between people, organisations, and stakeholders, and increasing the range of communication channels available. Younger generations are highly engaged with social media and more likely to use this channel than any other to seek information. Given this, it may appear extraordinary that occupational safety and health professionals have yet to seriously engage with social media for communicating safety messages to younger audiences who, in many industries, might be statistically more likely to encounter workplace harm or injury. Millennials, defined as those born between 1981 and 2000, have distinctive characteristics that also impact their interaction patterns, rendering many traditional occupational safety and health communication channels sub-optimal or near obsolete. Used to immediate responses, 280-character communication, shares, likes, and visual imagery, millennials struggle to take seriously the low-tech, top-down communication channels such as safety noticeboards, toolbox meetings, and passive tick-box online inductions favoured by traditional OSH professionals. This paper draws upon well-established communication findings, which argue that it is important to know a target audience and reach them using their preferred communication pathways, particularly if the aim is to impact attitudes and behaviours. Health practitioners have adopted social media as a communication channel with great success, yet safety practitioners have failed to follow this lead. Using a digital ethnography approach, this paper examines seven organisations' Facebook posts from two one-month periods one year apart, one in 2018 and one in 2019. Each year informs organisation-based case studies. Comparing, contrasting, and drawing upon these case studies, the paper discusses and evaluates the (non-)use of social media for communicating safety information in terms of user engagement, shareability, and overall appeal. The success of health practitioners' use of social media provides a compelling template for the implementation of social media into organisations' safety communication strategies. Highly visible content such as that found on social media allows an organisation to become more responsive and engage in two-way conversations with its audience, creating more engaged and participatory conversations around safety. Further, using social media to address younger audiences with a range of tonal qualities (for example, the use of humour) can achieve cut-through in a way that grim statistics fail to do. On the basis of 18 months of interviews, field work, and data analysis, the paper concludes with recommendations for communicating safety information via social media. It proposes exploration of a social media communication formula that, when utilised by safety practitioners, may create an effective social media presence. It is anticipated that such social media use will increase engagement, expand the number of followers, and reduce the likelihood and severity of safety-related incidents. The tools offered may provide a path for safety practitioners to reach a disengaged generation of workers and build a cohesive and inclusive conversation around ways to keep people safe at work.

Keywords: social media, workplace safety, communication strategies, young workers

Procedia PDF Downloads 119
144 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work

Authors: Anna-Lisa Osvalder, Jonas Borell

Abstract:

There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and an accurate size support proper use, while discomfort, poor fit, and difficulties in understanding how the PPE should be handled inhibit correct usage. The need to wear several items of protective equipment simultaneously can also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used in construction work. Correct usage was analysed as guessability, i.e., users' perceptions of how to don, adjust, use, and doff the equipment, and whether it was used as intended. The PPE items tested, individually or in combination, were a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothing, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Then usability tests were conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective ratings were used. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but mostly of minor severity, causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, some of which could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, goggles together with the face mask caused pressure, chafing at the nose, and heat rash on the face. This combination also limited the field of vision. The helmet, in combination with the goggles and ear protectors, did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE items. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability of how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or positioned on the skeletal bones. Discomfort occurred when the straps were tightened too much. The straps could not be adjusted for all body constitutions, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can result in misuse, non-use, or reduced performance. If people who are not regular users are to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and increased possibilities for individual adjustment. The results of this study can serve as a basis for re-design ideas for PPE, especially when items are to be used in combination.

Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability

Procedia PDF Downloads 88
143 Kidney Supportive Care in Canada: A Constructivist Grounded Theory of Dialysis Nurses’ Practice Engagement

Authors: Jovina Concepcion Bachynski, Lenora Duhn, Idevania G. Costa, Pilar Camargo-Plazas

Abstract:

Kidney failure is a life-limiting condition for which treatment, such as dialysis (hemodialysis and peritoneal dialysis), can exact a tremendously high physical and psychosocial symptom burden. Kidney failure can be severe enough to require a palliative approach to care. The term supportive care can be used in lieu of palliative care to avoid the misunderstanding that palliative care is synonymous with end-of-life or hospice care. Kidney supportive care, encompassing advance care planning, is an approach to care that improves the quality of life for people receiving dialysis through early identification and treatment of symptoms throughout the disease trajectory. Advance care planning involves ongoing conversations about the values, goals, and preferences for future care between individuals and their healthcare teams. Kidney supportive care is underutilized and often initiated late in this population. There is evidence to indicate nurses are not providing the necessary elements of kidney supportive care. Dialysis nurses' delay or lack of engagement in supportive care until close to the end of life may result in people dying without receiving optimal palliative care services. Using Charmaz's constructivist grounded theory, the purpose of this doctoral study is to develop a substantive theory that explains the process of engagement in supportive care by nurses working in dialysis settings in Canada. Through initial purposeful and subsequent theoretical sampling, 23 nurses with current or recent work experience in outpatient hemodialysis, home hemodialysis, and peritoneal dialysis settings across Canada were recruited to participate in two intensive interviews using the Zoom© teleconferencing platform. Concurrent data collection and analysis, constant comparative analysis of initial and focused codes until theoretical saturation, memo-writing, and researcher reflexivity were undertaken to aid the emergence of concepts, categories, and, ultimately, the constructed theory. At the time of abstract submission, data analysis was at the second level of coding (i.e., the focused coding stage). Preliminary categories include: (a) focusing on biomedical care; (b) multi-dimensional challenges to having the conversation; (c) connecting and setting boundaries with patients; (d) difficulty articulating kidney supportive care; and (e) unwittingly practising kidney supportive care. The resulting theory will be presented at the conference. Nurses working in dialysis are well positioned to ensure the delivery of quality kidney supportive care. This study will help to determine the process and the factors enabling and impeding nurse engagement in supportive care in dialysis, to effect change towards normalizing advance care planning conversations in the clinical setting. This improved practice will have substantial beneficial implications for the many individuals living with kidney failure and their supporting loved ones.

Keywords: dialysis, kidney failure, nursing, supportive care

Procedia PDF Downloads 103
142 Development of Alternative Fuels Technologies for Transportation

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej

Abstract:

Currently, vehicles in road transport are powered almost exclusively by hydrocarbon-based fuels. As the consumption of hydrocarbon fuels increases, quality parameters are being tightened to protect the environment, and at the same time efforts are being made to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are to increase vehicle efficiency, to reduce environmental impact, to cut greenhouse gas emissions, and to save limited oil resources. Significant progress has been made on alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME), and biodiesel. In addition, the largest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol are well suited to spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the exhaust gases produced. Ethanol is produced by distillation of plant products whose value as food may make this use irrational. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass, and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be converted directly into liquid fuels, although such arrangements have been proposed in the literature; going through an intermediate stage is still inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or to olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced processes. With respect to pollutant emissions, an attractive vehicle fuel is LPG, which is used as an engine fuel in many countries. The production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents only a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas can also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since similar production processes and raw materials are used. The most important fuel in the campaign to protect the environment against pollution is natural gas, which may be used either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can serve as a basic feedstock for the chemical industry, as an important raw material in refinery processes, and as a fuel for vehicle transportation. CNG represents an excellent compromise, relying on proven technology that is relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative energy sources that are harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide.
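Since hydrogen production from natural gas by steam reforming is mentioned above, the underlying chemistry can be summarised by the standard reforming and water-gas-shift reactions below; the enthalpy values are textbook figures quoted for orientation only and are not results of this work.

```latex
% Steam methane reforming (strongly endothermic)
\mathrm{CH_4 + H_2O \;\rightleftharpoons\; CO + 3\,H_2}, \qquad \Delta H^{\circ}_{298} \approx +206\ \mathrm{kJ\,mol^{-1}}
% Water-gas shift (mildly exothermic)
\mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2}, \qquad \Delta H^{\circ}_{298} \approx -41\ \mathrm{kJ\,mol^{-1}}
```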

Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)

Procedia PDF Downloads 183
141 Characterization and Evaluation of the Dissolution Increase of Molecular Solid Dispersions of Efavirenz

Authors: Leslie Raphael de M. Ferraz, Salvana Priscylla M. Costa, Tarcyla de A. Gomes, Giovanna Christinne R. M. Schver, Cristóvão R. da Silva, Magaly Andreza M. de Lyra, Danilo Augusto F. Fontes, Larissa A. Rolim, Amanda Carla Q. M. Vieira, Miracy M. de Albuquerque, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a drug used as first-line treatment of AIDS. However, it has poor aqueous solubility and wettability, presenting problems for absorption in the gastrointestinal tract and for bioavailability. One of the most promising strategies to improve solubility is the use of solid dispersions (SD). Therefore, this study aimed to characterize SDs of EFV with the polymers PVP-K30, PVPVA 64, and Soluplus® in order to find an optimal formulation for a future pharmaceutical product for AIDS therapy. Initially, physical mixtures (PM) and SDs with the polymers were obtained containing 10, 20, 50, and 80% of drug (w/w) by the solvent method. The best SD formulation was selected by an in vitro dissolution test. Finally, the chosen drug-carrier systems, in all ratios obtained, were analyzed by the following techniques: differential scanning calorimetry (DSC), polarization microscopy, scanning electron microscopy (SEM), and absorption spectrophotometry in the infrared region (IR). From the dissolution profiles of EFV, PM, and SD, the values of the area under the curve (AUC) were calculated. The data showed that the AUC of all PMs is greater than that of EFV alone; this result derives from the hydrophilic properties of the polymers, which reduce the surface tension between the drug and the dissolution medium and thereby increase the wettability of the drug. In parallel, it was found that the SDs with the highest AUC values were those with the greatest amount of polymer (only 10% drug). As the amount of drug increases, these results either decrease or remain statistically similar. The AUC values of the SDs prepared with the three polymers followed this decreasing order: SD PVPVA 64-EFV 10% > SD PVP-K30-EFV 10% > SD Soluplus®-EFV 10%. The DSC curves of the SDs did not show the endothermic event characteristic of the drug melting process, suggesting that EFV was converted to its amorphous state. Polarized light microscopy showed significant birefringence for the PMs, but this was not observed in films of the SDs, again suggesting conversion of the drug from the crystalline to the amorphous state. In electron micrographs of all PMs, independently of the percentage of drug, the crystal structure of EFV was clearly detectable. Moreover, in electron micrographs of the SDs in the different ratios investigated, particles with irregular size and morphology were observed, together with an extensive change in the appearance of the polymer, such that the two components could no longer be differentiated. The IR spectra of the PMs correspond to the superposition of the polymer and EFV bands, indicating that there is no interaction between them, unlike the spectra of all SDs, which showed complete disappearance of the band related to the axial deformation of the NH group of EFV. Therefore, this study obtained a suitable formulation to overcome the solubility limitations of EFV, since SD PVPVA 64-EFV 10% was chosen as the best system for delaying crystallization of the drug, reaching the highest levels of supersaturation.
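The AUC comparison described above amounts to a numerical integration of each dissolution profile (percent dissolved versus time). A minimal sketch of that calculation, using the trapezoidal rule, is given below; the time points and dissolution values are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical dissolution profile: time (min) vs. percent of drug dissolved.
# Replace these arrays with the measured profiles of EFV, PM, or SD samples.
time = np.array([0, 5, 10, 15, 30, 45, 60], dtype=float)        # minutes
dissolved = np.array([0, 12, 25, 38, 62, 75, 81], dtype=float)  # % dissolved

# Area under the dissolution curve by the trapezoidal rule (% * min).
auc = np.sum((dissolved[1:] + dissolved[:-1]) / 2.0 * np.diff(time))
print(f"AUC(0-60 min) = {auc:.1f} %*min")
```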

Keywords: characterization, dissolution, Efavirenz, solid dispersions

Procedia PDF Downloads 631
140 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System

Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold

Abstract:

In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring of mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction in both size and power consumption of electric components and the increasing complexity of mechanical systems, interest in creating dense sensor-node networks has become very salient. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a valid option when lifetime, size, and the effort of replacing them are considered. Among the possible durable power sources usable in mechanical components, vibrations represent a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household appliances, but also civil engineering structures such as buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the devices most commonly used to mitigate unwanted vibration of structures. This device transfers the primary structural vibration to an auxiliary system, so that the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of this energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements, in which the energy of the vibration is converted into electricity rather than dissipated. The proposed device is designed to mitigate torsional vibrations as a conventional rotational TVA would, while harvesting energy as a power source for immediate use or storage. The rotational multi-degree-of-freedom (MDOF) system is first reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used to evaluate the optimal mechanical parameters of the initial DVA for the SDOF system defined. The performance of the TVA is assessed operationally, and the vibration reduction at the original resonance frequency is measured. The design is then modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented: a DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage, and, by introducing a large capacitor, the energy stored is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
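Because the optimal absorber parameters are obtained from Den Hartog's theory applied to the reduced SDOF model, the classical tuning rules are sketched below; the 5% mass ratio is a placeholder assumption, not a value taken from the prototype.

```python
import math

def den_hartog_optimum(mass_ratio: float) -> tuple[float, float]:
    """Classical Den Hartog tuning of a damped vibration absorber attached
    to an undamped primary system under harmonic forcing."""
    f_opt = 1.0 / (1.0 + mass_ratio)  # absorber/primary natural-frequency ratio
    zeta_opt = math.sqrt(3.0 * mass_ratio / (8.0 * (1.0 + mass_ratio) ** 3))
    return f_opt, zeta_opt

# Hypothetical mass ratio (absorber mass / equivalent primary mass).
mu = 0.05
f_opt, zeta_opt = den_hartog_optimum(mu)
print(f"optimal frequency ratio = {f_opt:.3f}, optimal damping ratio = {zeta_opt:.3f}")
```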

Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber

Procedia PDF Downloads 148
139 Using Differentiated Instruction Applying Cognitive Approaches and Strategies for Teaching Diverse Learners

Authors: Jolanta Jonak, Sylvia Tolczyk

Abstract:

Educational systems are tasked with preparing students for future success in academic or work environments. Schools strive to achieve this goal, but it is often challenging because conventional teaching approaches are frequently ineffective in increasingly diverse educational systems. In today's global society, educational systems are becoming increasingly diverse in terms of cultural and linguistic differences, learning preferences and styles, and ability and disability. Through improved understanding of disabilities and better identification processes, students with some form of disability tend to be identified earlier than in the past, meaning that more students with identified disabilities are being supported in our classrooms. In addition, a large majority of students with disabilities are educated in general education environments. Due to their cognitive makeup and life experiences, students have varying learning styles and preferences, which affect how they receive and express what they are learning. Many students come from bi- or multilingual households with varying proficiency in English, further affecting their learning. All these factors need to be considered seriously when developing learning opportunities for students. Educators try to adjust their teaching practices as they discover that conventional methods are often ineffective in reaching each student's potential. Many teachers do not have the educational background or training needed to teach students whose learning needs are more unique and may vary from the norm. This is further complicated by the fact that many classrooms lack consistent access to interventionists or coaches who are adequately trained in evidence-based approaches to meet the needs of all students, whatever their academic needs may be. One evidence-based way to provide successful education for all students is to incorporate cognitive approaches and strategies that tap into the affective, recognition, and strategic networks in the student's brain. This can be done through differentiated instruction (DI). Differentiated instruction is an increasingly recognized model built on the basic principles of Universal Design for Learning. This form of support ensures that, regardless of students' learning preferences and cognitive learning profiles, they have opportunities to learn through approaches suited to their needs. The approach improves the educational outcomes of students with special needs, and it benefits other students as well, since it accommodates the range of learning styles and unique learning needs evident in a typical classroom. Differentiated instruction is also recognized as an evidence-based best practice in education and is highly effective when implemented within the tiered system of the Response to Intervention (RTI) model. Recognition of DI is becoming more common; however, there is still limited understanding of how to implement it effectively and of the strategies that can create unique learning environments for each student within the same setting. By employing knowledge of a variety of instructional strategies, general and special education teachers can facilitate optimal learning for all students, with and without a disability. A desired by-product of DI is that it can eliminate inaccurate perceptions about students' learning abilities, unnecessary referrals for special education evaluations, and inaccurate decisions about the presence of a disability.

Keywords: differentiated instruction, universal design for learning, special education, diversity

Procedia PDF Downloads 222
138 Impact of Elevated Temperature on Spot Blotch Development in Wheat and Induction of Resistance by Plant Growth Promoting Rhizobacteria

Authors: Jayanwita Sarkar, Usha Chakraborty, Bishwanath Chakraborty

Abstract:

Plants are constantly interacting with various abiotic and biotic stresses. In a changing climate scenario, plants continuously modify physiological processes to adapt to changing environmental conditions, which profoundly affects plant-pathogen interactions. Spot blotch in wheat is a fast-rising disease in the warmer plains of South Asia, where the rise in minimum average temperature over most of the year is already affecting wheat production. Hence, this study was undertaken to explore the role of elevated temperature in spot blotch disease development and the modulation of antioxidative responses by plant growth promoting rhizobacteria (PGPR) for biocontrol of spot blotch at high temperature. Elevated temperature significantly increases the susceptibility of wheat plants to the spot blotch pathogen Bipolaris sorokiniana. Two PGPR, Bacillus safensis (W10) and Ochrobactrum pseudogrignonense (IP8), isolated from wheat (Triticum aestivum L.) and blady grass (Imperata cylindrica L.) rhizospheres respectively, and showing in vitro antagonistic activity against Bipolaris sorokiniana, were tested for growth promotion and induction of resistance against spot blotch in wheat. GC-MS analysis showed that Bacillus safensis (W10) and Ochrobactrum pseudogrignonense (IP8) produced antifungal and antimicrobial compounds in culture. Seed priming with these two bacteria significantly increased growth, modulated antioxidative signaling, induced resistance, and eventually reduced disease incidence in wheat plants at optimum as well as elevated temperature, which was further confirmed by an indirect immunofluorescence assay using a polyclonal antibody raised against Bipolaris sorokiniana. Application of the PGPR led to enhanced activities of the plant defense enzymes phenylalanine ammonia lyase, peroxidase, chitinase, and β-1,3-glucanase in infected leaves. Immunolocalization of chitinase and β-1,3-glucanase in PGPR-primed and pathogen-inoculated leaf tissue was further confirmed by transmission electron microscopy using polyclonal antibodies against chitinase and β-1,3-glucanase and gold-labelled conjugates. Activities of ascorbate-glutathione redox cycle enzymes such as ascorbate peroxidase, superoxide dismutase, and glutathione reductase, along with antioxidants such as carotenoids, glutathione, and ascorbate, and the accumulation of osmolytes like proline and glycine betaine, also increased during disease development in PGPR-primed plants in comparison with unprimed plants at high temperature. Real-time PCR analysis revealed enhanced expression of the defense genes chalcone synthase and phenylalanine ammonia lyase. Overexpression of heat shock proteins such as HSP 70 and small HSP 26.3 and the heat shock factor HsfA3 in PGPR-primed plants effectively protected plants against spot blotch infection at elevated temperature as compared with control plants. Our results reveal a dynamic biochemical crosstalk between elevated temperature and spot blotch disease development and, furthermore, highlight a PGPR-mediated array of antioxidative and molecular alterations responsible for induction of resistance against spot blotch at elevated temperature, which appears to be associated with up-regulation of defense genes, heat shock proteins and heat shock factors, reduced ROS production and membrane damage, increased expression of redox enzymes, and accumulation of osmolytes and antioxidants.
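Real-time PCR expression results of the kind reported above are commonly expressed as fold changes via the 2^-ΔΔCt method; the sketch below only illustrates that standard calculation with hypothetical Ct values and is not derived from the study's measurements.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative gene expression by the 2^-delta-delta-Ct method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # primed vs. unprimed plants
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for a defense gene vs. a housekeeping gene:
print(fold_change_ddct(22.1, 18.0, 25.3, 18.2))  # ~8-fold up-regulation
```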

Keywords: antioxidative enzymes, defense enzymes, elevated temperature, heat shock proteins, PGPR, Real-Time PCR, spot blotch, wheat

Procedia PDF Downloads 172
137 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved over the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means to the same ends, as 3D mesh models are in increasing use in scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where the frames have regular structures in both space and time, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform-domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial-domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has proved useful for hiding data; however, doing so with minimal surface distortion to the mesh has attracted significant research. A blind 3D mesh watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method is developed to reduce the errors, minimizing the distortion that the 3D object may experience due to the watermarking process and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the distributions that best fit each mesh and hence to set the bin sizes. Several optimizations were introduced concerning the mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative quality, and that it withstands both geometry and connectivity attacks. Moreover, the probability of true-positive detection was evaluated against the probability of false-positive detection; to validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they showed robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
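As a rough illustration of the geometry statistics the method operates on, the sketch below computes vertex norms relative to the object centre and groups them into bins whose means and variances a norm-based spatial watermark could then perturb to encode bits; this is a simplified reading of the approach, and the function name and bin count are chosen only for illustration.

```python
import numpy as np

def vertex_norm_bins(vertices: np.ndarray, n_bins: int = 32):
    """Per-bin mean and variance of vertex norms (distances to the mesh centre),
    the statistics a norm-based spatial watermark would modify."""
    center = vertices.mean(axis=0)
    norms = np.linalg.norm(vertices - center, axis=1)
    edges = np.linspace(norms.min(), norms.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(norms, edges) - 1, 0, n_bins - 1)
    return [(b, norms[bin_idx == b].mean(), norms[bin_idx == b].var())
            for b in range(n_bins) if np.any(bin_idx == b)]

# Usage with a hypothetical (N, 3) vertex array loaded from a mesh file:
# stats = vertex_norm_bins(np.load("mesh_vertices.npy"))
```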

Keywords: watermarking, mesh objects, local roughness, Laplacian smoothing

Procedia PDF Downloads 161
136 The Digital Transformation of Life Insurance Sales in Iran with the Emergence of Personal Financial Planning Robots: Opportunities and Challenges

Authors: Pedram Saadati, Zahra Nazari

Abstract:

Anticipating and identifying the future opportunities and challenges that the emergence of personal financial planning technologies poses for industry actors, and providing practical solutions, is one of the goals of this research. For this purpose, a futures-research approach based on the opinions of the main players in the insurance industry was used. The research method comprised four stages: (1) a survey of the specialist life insurance salesforce to identify the variables; (2) ranking of the variables by selected experts using a researcher-made questionnaire; (3) an expert panel aimed at understanding the mutual effects of the variables; and (4) statistical analysis of the cross-impact matrix in the MICMAC software. The integrated analysis of the variables influencing the future was carried out with the structural analysis method, one of the efficient and innovative methods of futures research. A list of opportunities and challenges was identified through a survey of best-selling life insurance representatives, who were selected by snowball sampling. In order to prioritize and identify the most important issues, all the issues raised were sent, via a researcher-made questionnaire, to experts selected theoretically. The respondents scored the importance of 36 variables so that the opportunity and challenge variables could be prioritized. Eight of the variables identified in the first stage were removed by the selected experts, leaving 28 variables for the third stage; to facilitate the examination, these were divided into six categories: organization and management (11 variables), marketing and sales (7), social and cultural (6), technological (2), rebranding (1), and insurance (1). The reliability of the researcher-made questionnaire was confirmed with a Cronbach's alpha of 0.96. In the third stage, a panel of five insurance industry experts reached consensus on the mutual influence of the factors, and the ranking of the variables was entered into the matrix. The matrix contained the interrelationships of the 28 variables, which were investigated using the structural analysis method. Analysis of the matrix with the MICMAC software indicates that the variables "correct training in the use of the software", "weakness of insurance companies' technology in personalizing products", "adopting a customer-equipping approach", and "honesty in declaring when a customer does not need insurance" are the most important influencing challenges, while "a salesforce-equipping approach", "product personalization based on customer needs assessment", "the customer's pleasant experience of being advised by consulting robots", "improvement of the insurance company's business due to the use of these tools", "increased efficiency of the issuance process", and "optimal customer purchase" were identified as the most important influencing opportunities.
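The core of the structural (cross-impact) analysis run in MICMAC can be pictured as ranking variables by the row and column sums of the direct-influence matrix (influence and dependence); the toy 4×4 matrix below uses hypothetical scores purely to show that step, not the study's 28-variable matrix.

```python
import numpy as np

# Hypothetical direct-influence matrix: entry [i, j] scores how strongly
# variable i influences variable j (0 = none ... 3 = strong).
M = np.array([[0, 2, 1, 3],
              [1, 0, 2, 0],
              [0, 1, 0, 2],
              [2, 3, 1, 0]])

influence = M.sum(axis=1)   # row sums: how strongly each variable drives the others
dependence = M.sum(axis=0)  # column sums: how strongly each variable is driven
for i, (inf, dep) in enumerate(zip(influence, dependence)):
    print(f"variable {i}: influence = {inf}, dependence = {dep}")
```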

Keywords: personal financial planning, wealth management, advisor robots, life insurance, digital transformation

Procedia PDF Downloads 47
135 Energy Efficiency of Secondary Refrigeration with Phase Change Materials and Impact on Greenhouse Gases Emissions

Authors: Michel Pons, Anthony Delahaye, Laurence Fournaison

Abstract:

Secondary refrigeration consists of splitting large direct-cooling units into volume-limited primary cooling units complemented by secondary loops for transporting and distributing cold. Such a design reduces refrigerant leaks, which are a source of greenhouse gases emitted into the atmosphere. However, inserting a secondary circuit between the primary unit and the users' heat exchangers (UHX) increases the energy consumption of the whole process, which induces an indirect emission of greenhouse gases. It is thus important to check whether that efficiency loss is sufficiently limited for the change to be globally beneficial to the environment. Among the likely secondary fluids, phase change slurries offer several advantages: they transport latent heat, they stabilize the heat exchange temperature, and the former evaporators can still be used as UHX. The temperature level can also be adapted to the desired cooling application. Herein, the slurry {ice in mono-propylene-glycol solution} (melting temperature Tₘ of 6°C) is considered for food preservation, and the slurry {mixed hydrate of CO₂ + tetra-n-butyl-phosphonium-bromide in an aqueous solution of this salt + CO₂} (melting temperature Tₘ of 13°C) is considered for air conditioning. For the sake of thermodynamic consistency, the analysis encompasses the whole process, primary cooling unit plus secondary slurry loop, and the various properties of the slurries, including their non-Newtonian viscosity. The design of the whole process is optimized according to the properties of the chosen slurry and under explicit constraints. As a first constraint, all the units must deliver the same cooling power to the user. The other constraints concern the heat exchange areas, which are prescribed, and the flow conditions, which must prevent deposition of the solid particles transported in the slurry and their agglomeration. Minimization of the total energy consumption leads to the optimal design. In addition, the results are analyzed in terms of exergy losses, which highlights the couplings between the primary unit and the secondary loop. One important difference between the ice slurry and the mixed-hydrate slurry is the presence of gaseous carbon dioxide in the latter case. When the mixed-hydrate crystals melt in the UHX, CO₂ vapor is generated at a rate that depends on the phase change kinetics; the flow in the UHX and its heat and mass transfer properties are significantly modified. This effect has never been investigated before. Lastly, inserting the secondary loop between the primary unit and the users increases the temperature difference between the refrigerated space and the evaporator. This results in a loss of global energy efficiency and therefore in increased energy consumption. The analysis shows that this loss of efficiency is not critical in the first case (Tₘ = 6°C), while the second case leads to more ambiguous results, partly because of the higher melting temperature. The consequences in terms of greenhouse gas emissions are also analyzed.
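The efficiency penalty discussed above can be pictured with the ideal (Carnot) coefficient of performance of the primary unit, which drops as the evaporation temperature must be lowered to accommodate the extra temperature difference of the secondary loop; the relation is quoted only for orientation, with temperatures in kelvin, and is not the model used in the study.

```latex
\mathrm{COP}_{\mathrm{Carnot}} = \frac{T_{\mathrm{evap}}}{T_{\mathrm{cond}} - T_{\mathrm{evap}}}
```

Lowering T_evap thus reduces the COP and raises the electricity consumption, which is the indirect-emission effect that has to be weighed against the avoided refrigerant leaks.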

Keywords: exergy, hydrates, optimization, phase change material, thermodynamics

Procedia PDF Downloads 132
134 Characterisation, Extraction of Secondary Metabolite from Perilla frutescens for Therapeutic Additives: A Phytogenic Approach

Authors: B. M. Vishal, Monamie Basu, Gopinath M., Rose Havilah Pulla

Abstract:

Although there are several methods of synthesizing silver nanoparticles, green synthesis has its own merits: ranging from cost-effectiveness to ease of synthesis, the process is simplified as far as possible, and it is one of the most explored topics. This study of extracting secondary metabolites from Perilla frutescens and using them for therapeutic additives has its own significance. Unlike previous work, this study aims to synthesize silver nanoparticles from Perilla frutescens using three available forms of the plant: leaves, seeds, and commercial leaf extract powder. Perilla frutescens, commonly known as the 'beefsteak plant', is a perennial plant belonging to the mint family. The plant comprises two varieties, frutescens crispa and frutescens frutescens. The variety frutescens crispa (commonly known as 'Shisho' in Japanese) is generally used for edible purposes. Its leaves occur in two forms that differ in color: red with purple streaks, and green with a crinkly pattern. This variety is aromatic due to the presence of two major classes of compounds, polyphenols and perillaldehyde. The red (purple-streaked) form of this plant owes its color to the pigment perilla anthocyanin. The variety frutescens frutescens (commonly known as 'Egoma' in Japanese) is the main source of perilla oil. This variety is also aromatic, but in this case the major compound responsible for the aroma is perilla ketone, or egoma ketone. Shisho grows shorter than wild sesame, and both produce seeds. The seeds of wild sesame are large and soft, whereas those of Shisho are small and hard. The seeds have a large proportion of lipids, about 38-45 percent, and in addition contain substantial quantities of omega-3 and omega-6 fatty acids, including linoleic acid. Perilla leaf extract has also been reported to support the formation of gold and silver nanoparticles. The yields in all the cases were compared, and the optimal process conditions were adjusted with the efficiencies in mind. The characterization of the secondary metabolites includes GC-MS and FTIR, which can be used to identify the components that actually help in synthesizing silver nanoparticles. The analysis of the silver nanoparticles was done through a series of characterization tests, including XRD, UV-Vis, EDAX, and SEM. After the synthesis, with a view to their use as therapeutic additives, a toxin analysis was done and the results were tabulated. The synthesis of silver nanoparticles was done in a series of multiple extraction cycles from leaves, seeds, and commercially purchased leaf extract. The yields and efficiencies were compared to identify the best and cheapest way of synthesizing silver nanoparticles using Perilla frutescens. The synthesized nanoparticles can be used in therapeutic drugs, which have a wide range of applications from burn treatment to cancer treatment. This may, in turn, replace traditional processes of synthesizing nanoparticles, as this method should prove effective in terms of cost and environmental impact.

Keywords: nanoparticles, green synthesis, Perilla frutescens, characterisation, toxin analysis

Procedia PDF Downloads 234
133 Development of an Automatic Control System for ex vivo Heart Perfusion

Authors: Pengzhou Lu, Liming Xin, Payam Tavakoli, Zhonghua Lin, Roberto V. P. Ribeiro, Mitesh V. Badiwala

Abstract:

Ex vivo heart perfusion (EVHP) has been developed as an alternative strategy to expand cardiac donation by enabling resuscitation and functional assessment of hearts donated from marginal donors, which were previously not accepted. EVHP parameters, such as perfusion flow (PF) and perfusion pressure (PP), are crucial for optimal organ preservation. However, with the heart's constant physiological changes during EVHP, such as changes in coronary vascular resistance, manual control of these parameters is imprecise and cumbersome for the operator. Additionally, low control precision and long adjustment times may lead to irreversible damage to the myocardial tissue. To solve this problem, an automatic heart perfusion system was developed by applying a human-machine interface (HMI) and a programmable logic controller (PLC)-based circuit to control PF and PP. The PLC-based control system collects PF and PP data through flow probes and pressure transducers. It has two control modes: an RPM-flow mode and a pressure mode. The RPM-flow control mode is an open-loop system: it influences PF by providing and maintaining the desired speed, entered through the HMI, at the centrifugal pump, with a maximum error of 20 rpm. The pressure control mode is a closed-loop system in which the operator selects a target mean arterial pressure (MAP) to control PP. The inputs of the pressure control mode are the target MAP, received through the HMI, and the real MAP, received from the pressure transducer. A PID algorithm is applied to maintain the real MAP at the target value with a maximum error of 1 mmHg. The precision and control speed of the RPM-flow control mode were examined by comparing the PLC-based system to an experienced operator (EO) across seven RPM adjustment ranges (500, 1000, 2000, and random RPM changes; 8 trials per range) tested in random order. The system's PID performance in pressure control was assessed during 10 EVHP experiments using porcine hearts. Precision was examined by monitoring the steady-state pressure error throughout the perfusion period, and stabilizing speed was tested by performing two MAP adjustments (4 trials per change) of 15 and 20 mmHg. A total of 56 trials were performed to validate the RPM-flow control mode. Overall, the PLC-based system was significantly faster than the EO in all trials (PLC 1.21±0.03, EO 3.69±0.23 seconds; p < 0.001) and reached the desired RPM with greater precision (PLC 10±0.7, EO 33±2.7 mean RPM error; p < 0.001). Regarding pressure control, the PLC-based system had a median precision of ±1 mmHg error, and the median stabilizing times for MAP changes of 15 and 20 mmHg were 15 and 19.5 seconds, respectively. The novel PLC-based control system was three times faster, with 60% less error, than the EO for RPM-flow control. In pressure control mode, it demonstrated high precision and fast stabilization. In summary, this novel system successfully controlled perfusion flow and pressure with high precision, stability, and a fast response time through a user-friendly interface. This design may provide a viable technique for the future development of novel heart preservation and assessment strategies during EVHP.
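A minimal sketch of the closed-loop pressure mode described above, a PID algorithm adjusting pump speed so that the measured MAP tracks the target entered on the HMI, is given below; the gains, limits, and names are illustrative assumptions, not the controller actually deployed on the PLC.

```python
class PIDController:
    """Discrete PID sketch for holding mean arterial pressure (MAP)
    at a target value by adjusting centrifugal pump speed (RPM)."""

    def __init__(self, kp: float, ki: float, kd: float,
                 rpm_min: float = 0.0, rpm_max: float = 5000.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.rpm_min, self.rpm_max = rpm_min, rpm_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, target_map: float, measured_map: float, dt: float) -> float:
        error = target_map - measured_map                 # pressure error in mmHg
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        rpm = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(rpm, self.rpm_min), self.rpm_max)  # clamp to pump limits

# Hypothetical usage at a 10 Hz control rate:
# pid = PIDController(kp=30.0, ki=5.0, kd=1.0)
# rpm_command = pid.update(target_map=65.0, measured_map=current_map, dt=0.1)
```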

Keywords: automatic control system, biomedical engineering, ex-vivo heart perfusion, human-machine interface, programmable logic controller

Procedia PDF Downloads 175
132 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle

Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito

Abstract:

Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for achieving decarbonization in the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them extremely efficient and with low direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it can also manage the battery optimally to increase its life. Among the parameters monitored by the BMS, the main ones are the state of charge (SoC), state of health (SoH), and state of power (SoP). These parameters can be estimated in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach, as it relies on capturing real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable: moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect battery performance and life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations. In this article, a hybrid approach based on a neural network and a statistical method is proposed for the real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial charge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), and (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of the root mean square error (RMSE) and the mean absolute percentage error (MAPE) and comparing it with a reference method from the literature.
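The two error metrics used to judge the estimator, RMSE and MAPE, are computed as sketched below; the short SoC traces are hypothetical placeholders for the estimated and reference values, not data from the study.

```python
import numpy as np

def rmse(estimated: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square error between estimated and reference values."""
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))

def mape(estimated: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute percentage error (reference values must be non-zero)."""
    return float(np.mean(np.abs((estimated - reference) / reference)) * 100.0)

# Hypothetical SoC traces (%) over one driving phase:
soc_est = np.array([100.0, 92.0, 85.0, 70.0, 55.0, 50.0])
soc_ref = np.array([100.0, 93.0, 84.0, 71.0, 54.0, 50.0])
print(f"RMSE = {rmse(soc_est, soc_ref):.2f} %, MAPE = {mape(soc_est, soc_ref):.2f} %")
```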

Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks

Procedia PDF Downloads 69