Search results for: covariance matrix estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4100


500 Climate Species Lists: A Combination of Methods for Urban Areas

Authors: Andrea Gion Saluz, Tal Hertig, Axel Heinrich, Stefan Stevanovic

Abstract:

Higher temperatures, seasonal changes in precipitation, and extreme weather events are increasingly affecting trees. To counter these growing challenges, strategies are being sought both to preserve existing urban tree populations and to prepare them for the years ahead. One such strategy is strategic climate tree species selection: the search for species or varieties that can cope with the new climatic conditions. Many efforts in German-speaking countries address this in detail, such as the tree lists of the German Conference of Garden Authorities (GALK), the project Stadtgrün 2021, and the instruments of the Climate Species Matrix by Prof. Dr. Roloff. In this context, different methods for correct species selection are offered. One possibility is to select certain physiological attributes that indicate the climate resilience of a species. To quantify the dissimilarity between the present climate of different geographic regions and the future climate of a given city, a weighted (standardized) Euclidean distance (SED) over seasonal climate values is calculated for each region of the Earth. The calculation was performed in the QGIS geographic information system, using global raster datasets of monthly climate values for the 1981-2010 standard period. Data from a European forest inventory were used to identify tree species growing in the calculated analogue climate regions. The inventory used is a compilation of georeferenced point data at 1 km grid resolution on the occurrence of tree species in 21 European countries. In this project, the results of the methodological application are shown for the city of Zurich for the year 2060. In the first step, analogue climate regions were identified based on projected climate values for the measuring station Kirche Fluntern (ZH). In a further step, the methods mentioned above were applied to generate tree species lists for the city of Zurich. 
These lists were then qualitatively evaluated with respect to the suitability of the different tree species for the Zurich area to generate a cleaned and thus usable list of possible future tree species.
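The weighted standardized Euclidean distance at the core of this climate-analogue search can be sketched as follows. The seasonal variables, weights, and standard deviations below are hypothetical placeholders; the abstract does not specify the actual values used for Zurich.

```python
import math

def standardized_euclidean_distance(target, candidate, std_devs, weights):
    """Weighted standardized Euclidean distance (SED) between the projected
    future climate of a city (target) and the present climate of a candidate
    region, each given as a vector of seasonal climate variables."""
    return math.sqrt(sum(
        w * ((t - c) / s) ** 2
        for t, c, s, w in zip(target, candidate, std_devs, weights)
    ))

# Hypothetical seasonal values (e.g. summer temperature in °C,
# summer precipitation in mm, ...) for illustration only.
zurich_2060 = [24.5, 280.0, 2.1, 310.0]
candidate_region = [23.8, 260.0, 3.0, 295.0]
std_devs = [2.0, 40.0, 1.5, 50.0]   # spatial standard deviation of each variable
weights = [1.0, 1.0, 0.5, 0.5]      # assumed weighting of the variables

d = standardized_euclidean_distance(zurich_2060, candidate_region, std_devs, weights)
```

Regions minimizing this distance against the projected 2060 values would then be the analogue climate regions searched for tree species.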

Keywords: climate change, climate region, climate tree, urban tree

Procedia PDF Downloads 81
499 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important step in environmental management, and various methods and techniques have been developed and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, and the environment. It can reveal cyclical changes in earth surface objects, and it can delineate the limits of earth phenomena based on recorded changes and deviations in electromagnetic reflectance. This research applied RS techniques to mangrove forest assessment; its aim was the quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries. It was carried out with Landsat satellite images from 1975-2013, matched to ground control points. These mangroves are the last distribution in the northern hemisphere, so the work can provide a good basis for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting changes on Earth. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. After essential corrections such as error fixing, band combination, and georeferencing to the 2012 image as the base image, changes were studied by maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to validate the changes obtained from the satellite images. Results showed that in 2012 the mangrove area in Bidkhoon was 1,119,072 m² by GPS, 1,231,200 m² by maximum likelihood supervised classification, and 1,317,600 m² by IPVI. The Basatin areas were, respectively, 466,644 m², 88,200 m², and 63,000 m². The final results show that the forests have declined; in Basatin this is due to human activities, and although the loss was offset by planting over many years, the trend has again been declining in recent years. Satellite images thus have a high ability to estimate such environmental processes. 
This research showed a high correlation between image-derived indexes such as IPVI and NDVI and the ground control points.
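The IPVI used for classification here is a simple band ratio. A minimal sketch (with assumed reflectance values, since the abstract gives none) also shows why IPVI and NDVI correlate so closely: per pixel, IPVI is just a rescaling of NDVI.

```python
def ipvi(nir, red):
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red), in [0, 1]."""
    return nir / (nir + red)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances for a vegetated pixel.
nir_band, red_band = 0.45, 0.12

# IPVI = (NDVI + 1) / 2 for any pixel, which explains the strong
# correlation between the two indexes reported against ground data.
value = ipvi(nir_band, red_band)
```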

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 271
498 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on hue, saturation and value is performed accordingly, and classification is done according to size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted in the best precision and recall. 
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
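The precision, recall, and F1 figures reported above are related by the usual harmonic-mean formula. A minimal sketch, with hypothetical detection counts chosen only to roughly match the reported ~74%/75% averages:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and their harmonic mean (F1) from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 740 flowers correctly detected, 260 false
# detections, 247 flowers missed (not the paper's actual counts).
p, r, f1 = precision_recall_f1(tp=740, fp=260, fn=247)
```

With precision near 0.74 and recall near 0.75, the harmonic mean lands close to the reported F1 of about 0.73-0.74 (the paper reports a median, which can differ slightly from the average).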

Keywords: agricultural engineering, image processing, computer vision, flower detection

Procedia PDF Downloads 302
497 Control of Lymphatic Remodelling by miR-132

Authors: Valeria Arcucci, Musarat Ishaq, Steven A. Stacker, Greg J. Goodall, Marc G. Achen

Abstract:

Metastasis is the lethal aspect of cancer for most patients. Remodelling of lymphatic vessels associated with a tumour is a key initial step in metastasis because it facilitates the entry of cancer cells into the lymphatic vasculature and their spread to lymph nodes and distant organs. Although it is clear that vascular endothelial growth factors (VEGFs), such as VEGF-C and VEGF-D, are key drivers of lymphatic remodelling, the means by which many signaling pathways in endothelial cells are coordinately regulated to drive growth and remodelling of lymphatics in cancer is not understood. We seek to understand the broader molecular mechanisms that control cancer metastasis, and are focusing on microRNAs, which coordinately regulate signaling pathways involved in complex biological responses in health and disease. Here, using small RNA sequencing, we found that a specific microRNA, miR-132, is upregulated in lymphatic endothelial cells (LECs) in response to lymphangiogenic growth factors. Interestingly, ectopic expression of miR-132 in LECs in vitro stimulated proliferation and tube formation of these cells. Moreover, miR-132 is expressed in lymphatic vessels of a subset of human breast tumours which were previously found to express high levels of VEGF-D by immunohistochemical analysis on tumour tissue microarrays. In order to dissect the complexity of regulation by miR-132 in lymphatic biology, we performed Argonaute HITS-CLIP, which led us to identify the miR-132-mRNA interactome in LECs. We found that this microRNA in LECs is involved in the control of many different pathways, mainly those governing cell proliferation and regulation of the extracellular matrix and cell-cell junctions. We are now exploring the functional significance of miR-132 targets in the biology of LECs using biochemical techniques, functional in vitro cell assays and in vivo lymphangiogenesis assays. 
This project will ultimately define the molecular regulation of lymphatic remodelling by miR-132, and thereby identify potential therapeutic targets for drugs designed to restrict the growth and remodelling of tumour lymphatics resulting in metastatic spread.

Keywords: argonaute HITS-CLIP, cancer, lymphatic remodelling, miR-132, VEGF

Procedia PDF Downloads 105
496 Enhanced Performance of Supercapacitor Based on Boric Acid Doped Polyvinyl Alcohol-H₂SO₄ Gel Polymer Electrolyte System

Authors: Hamide Aydin, Banu Karaman, Ayhan Bozkurt, Umran Kurtan

Abstract:

Recently, proton-conducting gel polymer electrolytes (GPEs) have drawn much attention in supercapacitor applications due to their physical and electrochemical characteristics and their stability at low temperatures. In this research, a PVA-H2SO4-H3BO3 GPE has been used for an electric double-layer capacitor (EDLC) application, in which electrospun free-standing carbon nanofibers are used as electrodes. The introduced PVA-H2SO4-H3BO3 GPE behaves as both separator and electrolyte in the supercapacitor. Symmetric Swagelok cells including the GPEs were assembled using a two-electrode arrangement, and the electrochemical properties were investigated. Electrochemical performance studies demonstrated that the PVA-H2SO4-H3BO3 GPE had a maximum specific capacitance (Cs) of 134 F g-1 and showed excellent capacitance retention (100%) after 1000 charge/discharge cycles. Furthermore, the PVA-H2SO4-H3BO3 GPE yielded an energy density of 67 Wh kg-1 with a corresponding power density of 1000 W kg-1 at a current density of 1 A g-1. The PVA-H2SO4 based polymer electrolyte was produced according to the following procedure: first, 1 g of commercial PVA was dissolved in distilled water at 90°C and stirred until a transparent solution was obtained. This was followed by addition of diluted H2SO4 (1 g of H2SO4 in distilled water) to the solution to obtain PVA-H2SO4. The PVA-H2SO4-H3BO3 based polymer electrolyte was produced by dissolving H3BO3 in hot distilled water and then adding it to the PVA-H2SO4 solution. The mole fraction was set to ¼ of the PVA repeating unit. After stirring for 2 h at room temperature, the gel polymer electrolytes were obtained. The final electrolytes for supercapacitor testing contained 20% water by weight. Several blending combinations of PVA/H2SO4 and H3BO3 were studied to find the optimal combination in terms of conductivity as well as electrolyte stability. 
As the amount of boric acid in the matrix increased, excess sulfuric acid was excluded due to crosslinking, especially at lower solvent content, which reduced the proton conductivity. Therefore, the mole fraction of H3BO3 was chosen as ¼ of the PVA repeating unit. Within these optimized limits, the polymer electrolytes showed better conductivities as well as stability.
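The reported figures can be related through the standard EDLC formulas E = Cs·V²/2 and P = E/t. The sketch below uses a common single-body convention for Cs, and the ~1.9 V voltage window is inferred from the stated 134 F g⁻¹ and 67 Wh kg⁻¹ (an assumption; the abstract does not give the window).

```python
def energy_density_wh_per_kg(cs_f_per_g, v_window):
    """E = Cs * V^2 / 2, converted from J/g to Wh/kg (divide by 3.6)."""
    return cs_f_per_g * v_window ** 2 / (2 * 3.6)

def power_density_w_per_kg(energy_wh_per_kg, discharge_time_s):
    """Average power density P = E / t, with E converted from Wh to J."""
    return energy_wh_per_kg * 3600 / discharge_time_s

# Reported Cs = 134 F/g; an assumed ~1.9 V window reproduces ~67 Wh/kg.
e = energy_density_wh_per_kg(134, 1.9)
```

Under these same assumptions the galvanostatic discharge time at 1 A g⁻¹ is roughly Cs·V/I ≈ 254 s, which puts the average power density near the reported 1000 W kg⁻¹.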

Keywords: electrical double layer capacitor, energy density, gel polymer electrolyte, ultracapacitor

Procedia PDF Downloads 195
495 The Contact between a Rigid Substrate and a Thick Elastic Layer

Authors: Nicola Menga, Giuseppe Carbone

Abstract:

Although contact mechanics has widely focused on the study of contacts between half-spaces, it has recently been pointed out that in the presence of finite-thickness elastic layers the contact problem shows significant differences in the main contact quantities (e.g. contact area, penetration, mean pressure, etc.). There is in fact a wide range of industrial applications demanding this kind of study, such as seal leakage prediction or pressure-sensitive coatings for electrical applications. In this work, we focus on the contact between a rigid profile and an elastic layer of thickness h confined under two different configurations: a rigid constraint and an applied uniform pressure. The elastic problem at hand has been formalized following Green's function method and then solved numerically by means of a matrix inversion. We study different contact conditions, both considering and neglecting adhesive interactions at the interface. This leads to different solution techniques: the adhesive-contact equilibrium solution is found, in terms of contact area for a given penetration, by making the total free energy of the system stationary; whereas adhesiveless contacts are addressed by defining an equilibrium criterion, again on the contact area, relying on the fracture mechanics stress intensity factor KI. In particular, we make KI vanish at the edges of the contact area, as is peculiar to adhesiveless elastic contacts. The results are obtained in terms of contact area, penetration, and mean pressure for both adhesive and adhesiveless contact conditions. As expected, in the case of a uniform applied pressure the slab turns out to be much more compliant than the rigidly constrained one. Indeed, we have observed that the peak value of the contact pressure, for both the adhesive and adhesiveless conditions, is much higher for the rigidly constrained configuration than for the applied uniform pressure. 
Furthermore, we observed that, for small contact areas, both systems behave the same and pull-off occurs at approximately the same contact area and mean contact pressure. This is expected, since in this condition the ratio between the layer thickness and the contact area is very high and both layer configurations recover the half-space behavior, where the pull-off occurrence is mainly controlled by the adhesive interactions, which are kept constant among the cases.
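The Green's function plus matrix-inversion solution strategy mentioned above can be sketched numerically. The influence kernel below is a decaying placeholder, not the actual layer kernel (which depends on the thickness h and on whether the layer is rigidly constrained or under uniform pressure); the sketch only illustrates recovering contact pressures from imposed displacements.

```python
import numpy as np

def solve_contact_pressure(displacement, influence_matrix):
    """Given the imposed surface displacement inside an assumed contact
    region (penetration minus rigid-profile height) and a discretized
    Green's function (influence) matrix, recover the contact pressures
    by solving the linear system G p = u."""
    return np.linalg.solve(influence_matrix, displacement)

# Toy setup: n grid points and a placeholder symmetric kernel that
# decays with distance between points.
n = 64
x = np.linspace(-1.0, 1.0, n)
G = 1.0 / (1.0 + 4.0 * np.abs(x[:, None] - x[None, :]))
penetration = 0.1
u = penetration - 0.5 * x ** 2      # parabolic rigid indenter profile
pressure = solve_contact_pressure(u, G)

# The recovered pressures must reproduce the imposed displacements.
residual = np.max(np.abs(G @ pressure - u))
```

In the actual problem, this inversion sits inside an outer loop that adjusts the contact area until the chosen equilibrium criterion (energy stationarity, or vanishing KI at the contact edges) is satisfied.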

Keywords: contact mechanics, adhesion, friction, thick layer

Procedia PDF Downloads 486
494 Concanavalin A Conjugated Bacterial Polyester Based PHBHHx Nanoparticles Loaded with Curcumin for Ovarian Cancer Therapy

Authors: E. Kilicay, Z. Karahaliloglu, B. Hazer, E. B. Denkbas

Abstract:

In this study, we have prepared concanavalin A (ConA) functionalized, curcumin (CUR) loaded PHBHHx (poly(3-hydroxybutyrate-co-3-hydroxyhexanoate)) nanoparticles as a novel and efficient drug delivery system. CUR is a promising anticancer agent for various cancer types. The aim of this study was to evaluate the therapeutic potential of curcumin-loaded PHBHHx nanoparticles (CUR-NPs) and concanavalin A conjugated curcumin-loaded NPs (ConA-CUR NPs) for ovarian cancer treatment. ConA was covalently attached to the carboxylic groups of the nanoparticles by the EDC/NHS activation method. In the ligand attachment experiment, the binding capacity of ConA on the surface of the NPs was found to be about 90%. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) analyses showed that the prepared nanoparticles were smooth and spherical in shape. The size and zeta potential of the prepared NPs were about 228±5 nm and −21.3 mV, respectively. ConA-CUR NPs were characterized by FT-IR spectroscopy, which confirmed the presence of CUR and ConA in the nanoparticles. The entrapment and loading efficiencies of the different polymer/drug weight ratios (1/0.125 PHBHHx/CUR = 1.25CUR-NPs; 1/0.25 PHBHHx/CUR = 2.5CUR-NPs; 1/0.5 PHBHHx/CUR = 5CUR-NPs; and the corresponding ConA-1.25CUR, ConA-2.5CUR and ConA-5CUR NPs) were found to be approximately 68%-16.8%, 55%-17.7%, 45%-33.6%, 70%-15.7%, 60%-17%, and 51%-30.2%, respectively. In vitro drug release studies showed sustained release of curcumin from both CUR-NPs and ConA-CUR NPs over a period of 19 days. After conjugation of ConA, the release rate slightly increased due to the migration of curcumin to the surface of the nanoparticles, and the matrix integrity decreased because of the conjugation reaction. These functionalized nanoparticles demonstrated high drug loading capacity, a sustained drug release profile, and high, long-term anticancer efficacy in human cancer cell lines. 
The anticancer activity of ConA-CUR NPs was demonstrated by MTT assay and confirmed by apoptosis and necrosis assays. It was measured in ovarian cancer cells (SKOV-3), and the results revealed that the ConA-CUR NPs reduced tumor cell viability more effectively than free curcumin. The naked nanoparticles showed no cytotoxicity against human ovarian carcinoma cells. Thus, the developed functionalized nanoformulation could be a promising candidate for cancer therapy.

Keywords: curcumin, curcumin-PHBHHx nanoparticles, concanavalin A, concanavalin A-curcumin PHBHHx nanoparticles, PHBHHx nanoparticles, ovarian cancer cell

Procedia PDF Downloads 379
493 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. 
Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
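Of the analysis chain described (PCA and t-SNE for feature extraction, then random forest or SVM regression), the PCA step can be sketched with a plain SVD. The aging-indicator matrix below is purely illustrative synthetic data, not the Newman-model database from the abstract.

```python
import numpy as np

def pca_features(data, n_components):
    """Project each row of a data matrix onto its leading principal
    components, computed via SVD of the mean-centered matrix."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Hypothetical database: 200 simulated cell states, each described by
# 6 non-destructive aging indicators (capacity, resistance, assumed
# SEI thickness, ...), here filled with random values for illustration.
rng = np.random.default_rng(0)
database = rng.normal(size=(200, 6))
features = pca_features(database, n_components=2)
```

In the full method, a regressor (random forest or SVM) would then be fit on such low-dimensional features to predict remaining useful life.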

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 49
492 Productivity and Household Welfare Impact of Technology Adoption: A Microeconometric Analysis

Authors: Tigist Mekonnen Melesse

Abstract:

Since rural households are basically entitled to food through their own production, improving productivity may enhance the welfare of the rural population through higher food availability at the household level and lower prices for agricultural products. Increasing agricultural productivity through the use of improved technology is one of the desired outcomes of sensible food security and agricultural policy. The ultimate objective of this study was to evaluate the potential impact of improved agricultural technology adoption on smallholders' crop productivity and welfare. The study was conducted in Ethiopia, covering 1500 rural households drawn from four regions and 15 rural villages, based on data collected by the Ethiopian Rural Household Survey. An endogenous treatment effect model is employed in order to account for the selection bias in the adoption decision that is expected from the self-selection of households into technology adoption. The treatment indicator, technology adoption, is a binary variable indicating whether the household used improved seeds and chemical fertilizer or not. The outcome variables were cereal crop productivity, measured as the real value of production, and household welfare, measured as real per capita consumption expenditure. Results of the analysis indicate a positive and significant effect of improved technology use on rural households' crop productivity and welfare in Ethiopia. Adoption of improved seeds or chemical fertilizer alone increases crop productivity by 7.38 and 6.32 percent per year, respectively. Adoption of these technologies is also found to improve households' welfare by 1.17 and 0.25 percent per month, respectively. The combined effect of both technologies, when adopted jointly, is an increase in crop productivity of 5.82 percent and an improvement in welfare of 0.42 percent. 
Besides, the educational level of the household head, farm size, labor use, participation in extension programs, expenditure on inputs, and number of oxen positively affect crop productivity and household welfare, while a large household size negatively affects household welfare. In our estimation, the average treatment effect of technology adoption on the treated (ATET) is the same as the average treatment effect (ATE). This implies that the average predicted outcome for the treatment group is similar to the average predicted outcome for the whole population.

Keywords: endogenous treatment effect, technologies, productivity, welfare, Ethiopia

Procedia PDF Downloads 622
491 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data that contains documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. 
These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
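The Smith-Waterman local alignment at the heart of this feature extraction can be sketched directly over word tokens. The scoring parameters and the two example "documents" below are illustrative assumptions, not the paper's actual configuration.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences.
    Cells are floored at zero, so the best score reflects the most
    similar local region rather than a global alignment."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

# Applied to word tokens, the score rewards shared local word sequences,
# e.g. the common run "morbid obesity and" in these hypothetical notes:
doc_a = "patient presents with morbid obesity and hypertension".split()
doc_b = "history of morbid obesity and diabetes".split()
score = smith_waterman(doc_a, doc_b)   # 3 consecutive matches at +2 each
```

Such pairwise similarity scores can then serve as features in place of N-gram counts when training the classifier.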

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 276
490 Mechanical Behavior of Hybrid Hemp/Jute Fibers Reinforced Polymer Composites at Liquid Nitrogen Temperature

Authors: B. Vinod, L. Jsudev

Abstract:

Natural fibers as reinforcement in polymer matrix materials have been gaining a lot of attention in recent years, as they are light in weight, low in cost, and an ecologically sound alternative to glass and carbon fibers in composites. Natural fibers like jute, sisal, coir, hemp, banana, etc. have attracted substantial attention as potential structural materials because of their attractive features along with their good mechanical properties. Cryogenic applications of natural fiber reinforced polymer composites, such as cryogenic wind tunnels, cryogenic transport vessels, and support structures in space shuttles and rockets, are gaining importance. In these unique cryogenic applications, the requirements on polymer composites are extremely severe and complicated. These materials need to possess good mechanical and physical properties at cryogenic temperatures, such as liquid helium (4.2 K), liquid hydrogen (20 K), liquid nitrogen (77 K), and liquid oxygen (90 K) temperatures, to meet the high requirements of cryogenic engineering applications. The objective of this work is to investigate the mechanical behavior of a hybrid hemp/jute fiber reinforced epoxy composite at liquid nitrogen temperature. Hemp and jute fibers are used as reinforcement materials as they have high specific strength and stiffness and good adhesion, and they have the potential to replace synthetic fibers. The hybrid hemp/jute fiber reinforced polymer composite was prepared by the hand lay-up method, and test specimens were cut according to ASTM standards. These test specimens were dipped in liquid nitrogen for different time durations. The tensile properties, flexural properties and impact strength of the specimens were tested immediately after the specimens were removed from the liquid nitrogen container. The experimental results indicate that the cryogenic treatment of the polymer composite has a significant effect on the mechanical properties of this material. 
The tensile and flexural properties of the hybrid hemp/jute fiber epoxy composite at liquid nitrogen temperature are higher than at room temperature. The impact strength of the material decreased after subjecting it to liquid nitrogen temperature.

Keywords: liquid nitrogen temperature, polymer composite, tensile properties, flexural properties

Procedia PDF Downloads 324
489 Survival of Micro-Encapsulated Probiotic Lactic Acid Bacteria in Mutton Nuggets and Their Assessments in Simulated Gastro-Intestinal Conditions

Authors: Rehana Akhter, Sajad A. Rather, F. A. Masoodi, Adil Gani, S. M. Wani

Abstract:

During recent years, probiotic food products have received market interest as health-promoting, functional foods, which are believed to confer health benefits. In order to deliver these health benefits, it has been recommended that probiotic bacteria be present at a minimum level of 10⁶ CFU/g to 10⁷ CFU/g at the point of delivery, or be eaten in sufficient amounts to yield a daily intake of 10⁸ CFU. However, a major challenge in the application of probiotic cultures in food matrices is the maintenance of viability during processing, as probiotic cultures are very often thermally labile and sensitive to acidity, oxygen, or other food constituents, for example salts. In this study, Lactobacillus plantarum and Lactobacillus casei were encapsulated in calcium alginate beads with the objective of enhancing their survivability and preventing exposure to the adverse conditions of the gastrointestinal tract, and were then inoculated into mutton nuggets. Micro-encapsulated Lactobacillus plantarum and Lactobacillus casei were resistant to simulated gastric conditions (pH 2, 2 h) and bile solution (3%, 2 h), resulting in significantly (p ≤ 0.05) improved survivability when compared with their free cell counterparts. A high encapsulation yield was achieved by the encapsulation procedure. After incubation at low pH values, micro-encapsulation yielded higher survival rates compared to non-encapsulated probiotic cells. The viable cell numbers of encapsulated Lactobacillus plantarum and Lactobacillus casei were 10⁷-10⁸ CFU/g higher compared to free cells after 90 min of incubation at pH 2.5. The viable encapsulated cells were inoculated into mutton nuggets at the rate of 10⁸ to 10¹⁰ CFU/g. The micro-encapsulated Lactobacillus plantarum and Lactobacillus casei achieved higher survival counts (10⁵-10⁷ CFU/g) than their free cell counterparts (10²-10⁴ CFU/g). 
Thus, micro-encapsulation offers an effective means of delivering viable probiotic bacterial cells to the colon and maintaining their survival during simulated gastric and intestinal conditions and during processing for nugget preparation.

Keywords: survival, Lactobacillus plantarum, Lactobacillus casei, micro-encapsulation, nugget

Procedia PDF Downloads 262
488 Nanoparticles Made of Amino Acid Derived Biodegradable Polymers as Promising Drug Delivery Containers

Authors: Sophio Kobauri, Tengiz Kantaria, Temur Kantaria, David Tugushi, Nina Kulikova, Ramaz Katsarava

Abstract:

Polymeric disperse systems such as nanoparticles (NPs) are of high interest for numerous applications in contemporary medicine and nanobiotechnology owing to their considerable potential for the treatment of many human diseases. The important technological advantages of using NPs as drug carriers (nanocontainers) are their high stability, high carrier capacity, and feasibility of encapsulating either hydrophilic or hydrophobic substances, as well as the high variety of possible administration routes, including oral application and inhalation. NPs can also be designed to allow controlled (sustained) drug release from the matrix. These properties of NPs enable improvement of drug bioavailability and might allow a decrease in drug dosage. The targeted and controlled administration of drugs using NPs might also help to overcome drug resistance, which is one of the major obstacles in the control of epidemics. Various degradable and non-degradable polymers of both natural and synthetic origin have been used for NP construction. Among the most promising for the design of NPs are amino acid-based biodegradable polymers (AABBPs), which can be cleared from the body after fulfilling their function. The AABBPs are composed of naturally occurring and non-toxic building blocks such as α-amino acids, fatty diols, and dicarboxylic acids. The particles designed from these polymers are expected to have improved bioavailability along with high biocompatibility. The present work deals with a systematic study of the preparation of NPs from AABBPs by the cost-effective polymer deposition/solvent displacement method. The influence of the nature and concentration of surfactants, the concentration of the organic phase (polymer solution), the ratio of organic phase to water phase, and some other factors on the size of the fabricated NPs has been studied. It was established that, depending on the conditions used, the NP size could be tuned within 40-330 nm. 
In the next step of this research, an evaluation of the biocompatibility and bioavailability of the synthesized NPs was carried out using a stable human cell line, A549. It was established that the obtained NPs are not only biocompatible but also stimulate cell growth.

Keywords: amino acids, biodegradable polymers, bioavailability, nanoparticles

Procedia PDF Downloads 275
487 Study on Varying Solar Blocking Depths in the Energy-Saving Renovation of the External Shell of Existing Buildings: Using Townhouse Residences in Kaohsiung City as an Example

Authors: Kuang Sheng Liu, Yu Lin Shih*, Chun Ta Tzeng, Cheng Chen Chen

Abstract:

Buildings in the 21st century face issues such as extreme climate and low-carbon/energy-saving requirements. Many countries in the world are of the opinion that a building, over its medium- and long-term life cycle, is an energy-consuming entity. As for the use of architectural resources, initiatives including the United Nations-implemented "Global Green Policy" and "Sustainable Building and Construction Initiative" are all working towards "zero-energy building" and "zero-carbon building" policies. Because of this, countries are supporting industry development with policies such as "mandatory design criteria", "green procurement policy" and "incentive grants and rebates programmes". The results of this study can provide a reference for sustainable building renovation design criteria. Focusing on townhouses in Kaohsiung City, this study uses different solar blocking depths to evaluate the design and energy-saving renovation of the outer shell of existing buildings, based on data collection and the selection of representative cases. Using building resources from a building information model (BIM), simulation and efficiency evaluation are carried out and verified with simulation estimation. This leads into the eco-efficiency model (EEM) for the life cycle cost efficiency (LCCE) evaluation. The buildings selected by this research sit in a north-south orientation and are tested with different solar blocking depths, and their indoor air-conditioning consumption rates are compared. The simulated EUI value at the current balcony depth of 1 metre serves as the reference value of 100%. The solar-blocking balcony is then deepened to 1.5, 2, 2.5 and 3 metres, giving a total of 5 different solar-blocking balcony depths for comparison of the air-conditioning improvement efficacy. This research uses these different solar-blocking balcony depths to carry out air-conditioning efficiency analysis.
Relative to the reference EUI value, a depth of 1.5 m saves 3.08%, 2 m saves 6.74%, 2.5 m saves 9.80% and 3 m saves 12.72% in air-conditioning energy use. This shows that solar-blocking balconies have an efficiency-increasing potential for indoor air-conditioning.

Keywords: building information model, eco-efficiency model, energy-saving in the external shell, solar blocking depth

Procedia PDF Downloads 387
486 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry

Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc

Abstract:

Business Process Outsourcing (BPO) has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centres, more popularly known as "call centers", the BPO firm studied here primarily performs audits of global clients' logistics bills. As a service industry, manpower is considered the most important yet the most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are effectively and efficiently utilized. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was built to replicate the arrival rate of demand, including the different batches and types of bills per day. Next, a mathematical linear programming model was formulated. Its objective is to minimize direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to use the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to an overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators, which increases the utilization of effective capacity by 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department.
Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increasing demand.
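As a toy illustration of the allocation idea, a greedy heuristic and invented bill types, demands, capacities, and unit costs can stand in for the study's full linear programme with simulated demand:

```python
# Hypothetical sketch: allocate bill types to operator skill levels so that
# direct labor cost per bill is kept low. A greedy fill of the cheapest
# (bill type, skill level) cells stands in for the paper's linear programme;
# all figures below are invented.

# unit labor cost (currency per bill) for each bill type at each skill level
cost = {
    ("simple", "junior"): 1.0, ("simple", "mid"): 1.3, ("simple", "senior"): 1.8,
    ("complex", "junior"): 4.0, ("complex", "mid"): 2.5, ("complex", "senior"): 2.0,
}
demand = {"simple": 300, "complex": 200}               # bills per day by type
capacity = {"junior": 250, "mid": 150, "senior": 150}  # bills per day by level

def greedy_allocate(cost, demand, capacity):
    """Fill the cheapest cells first, respecting demand and capacity."""
    demand, capacity = dict(demand), dict(capacity)
    plan, total = {}, 0.0
    for (b, s), c in sorted(cost.items(), key=lambda kv: kv[1]):
        qty = min(demand[b], capacity[s])
        if qty > 0:
            plan[(b, s)] = qty
            demand[b] -= qty
            capacity[s] -= qty
            total += c * qty
    return plan, total

plan, total_cost = greedy_allocate(cost, demand, capacity)
```

An exact solver (e.g. `scipy.optimize.linprog`) would replace the greedy rule in practice; the sketch only shows the structure of the cost-minimizing allocation.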

Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning

Procedia PDF Downloads 489
485 DNA Nano Wires: A Charge Transfer Approach

Authors: S. Behnia, S. Fathizadeh, A. Akhshani

Abstract:

In recent decades, DNA has attracted increasing interest for potential technological applications that are not directly related to its coding for functional proteins, i.e., the expression of genetic information. One of the most interesting applications of DNA is the construction of nanostructures of high complexity and the design of functional nanostructures in nanoelectronic devices, nanosensors and nanocircuits. In this field, DNA is of fundamental interest to the development of DNA-based molecular technologies, as it possesses ideal structural and molecular recognition properties for use in self-assembling nanodevices with a definite molecular architecture. Also, the robust, one-dimensional flexible structure of DNA can be used to design electronic devices, serving as a wire, transistor switch, or rectifier depending on its electronic properties. In order to understand the mechanism of charge transport along DNA sequences, numerous studies have been carried out. In this regard, the conductivity properties of the DNA molecule can be investigated in a simple but chemically specific approach that is intimately related to the Su-Schrieffer-Heeger (SSH) model. In the SSH model, the dependence of the non-diagonal matrix elements on intersite displacements is considered; in this approach, the charge couples to lattice deformations along the helix. The model is a tight-binding linear nanoscale chain originally established to describe conductivity phenomena in doped polyacetylene. It is based on the assumption of a classical harmonic interaction between sites, which is linearly coupled to a tight-binding Hamiltonian. In this work, the Hamiltonian and the corresponding equations of motion are nonlinear and highly sensitive to initial conditions, so we have moved toward nonlinear dynamics and phase space analysis. Nonlinear dynamics and chaos theory, regardless of any approximation, could open new horizons for understanding the conductivity mechanism in DNA.
For a detailed study, we have examined the current flowing in DNA and investigated the characteristic I-V diagram. As a result, it is shown that there are (quasi-)ohmic regions in the I-V diagram; on the other hand, regions with a negative differential resistance (NDR) are also detectable.
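The mean Lyapunov exponent named in the keywords can be illustrated, under strong simplifying assumptions, on a plain one-dimensional tight-binding chain (without the SSH charge-lattice coupling): the average growth rate of the transfer-matrix product distinguishes conducting (in-band) from insulating (out-of-band) energies. All parameters here are illustrative.

```python
import math

def transfer_matrix(E, eps, t):
    # tight-binding recursion psi_{n+1} = ((E - eps_n)/t) psi_n - psi_{n-1}
    return [[(E - eps) / t, -1.0], [1.0, 0.0]]

def mean_lyapunov(E, onsite, t=1.0):
    """Average log growth rate of the transfer-matrix product."""
    v = [1.0, 0.0]
    acc = 0.0
    for eps in onsite:
        M = transfer_matrix(E, eps, t)
        v = [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]
        norm = math.hypot(v[0], v[1])
        acc += math.log(norm)      # accumulate growth, then renormalize
        v = [v[0] / norm, v[1] / norm]
    return acc / len(onsite)

# Uniform chain: energies inside the band |E| < 2t give a near-zero exponent
# (extended states); energies outside the band give exponential localization.
chain = [0.0] * 5000
inside = mean_lyapunov(0.5, chain)   # within the band
outside = mean_lyapunov(3.0, chain)  # outside the band
```

A positive exponent translates into a Landauer resistance that grows exponentially with chain length.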

Keywords: DNA conductivity, Landauer resistance, negative differential resistance, chaos theory, mean Lyapunov exponent

Procedia PDF Downloads 400
484 A Program of Data Analysis on the Possible State of the Antibiotic Resistance in Bangladesh Environment in 2019

Authors: S. D. Kadir

Abstract:

Background: Antibiotics have always been at the centre of the revolution of modern microbiology. Pathogenic micro-organisms, resistant strains, and the inappropriate use or overuse of various types of antibiotic agents have fuelled multidrug-resistant pathogenic organisms. This review mainly focuses on the therapeutic state of antibiotic resistance and the possible roots behind its development in Bangladesh in 2019. Methodology: The systematic review progressed through a series of analyses of manuscripts published on Google Scholar, PubMed, and ResearchGate, together with relevant information collected from established healthcare and diagnostic centres and their subdivisions all over Bangladesh. The assessment of antibiotic resistance was based on selected medical reports and on a random assay of the extent of resistance to individual antibiotics in 2019. Results: Five research articles and 50 medical report summaries were reviewed, and around 5 patients were interviewed during the estimation process. We prioritized research articles in which the analysis was performed with the appropriate use of the Kirby-Bauer method; the Kirby-Bauer technique is preferred as it provides greater efficiency, lower performance expenditure, and greater convenience and simplicity in application. In most of the reviewed studies, Clinical and Laboratory Standards Institute guidelines were strictly followed. Most of the reports indicate significant resistance to the beta-lactam drugs, specifically to the derivatives of penicillins and cephalosporins (rare use of first-generation cephalosporins, overuse of second- and third-generation cephalosporins, and misuse of fourth-generation cephalosporins), which are responsible for almost 67 percent of the bacterial resistance.
Moreover, approximately 20 percent of the resistance was due to drug efflux from the bacterial cell in the case of tetracyclines, sulphonamides and their derivatives. Conclusion: Roughly 90 percent of the antibiotic resistance is due to the usage of relative and true broad-spectrum antibiotics. The environment has been created by the following circumstances: the excessive usage of broad-spectrum antibiotics has disrupted native bacteria and driven a series of antimicrobial resistances, disturbing the surrounding microbial environment and leading to a state of super-infection.

Keywords: antibiotics, antibiotic resistance, Kirby Bauer method, microbiology

Procedia PDF Downloads 107
483 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between two adverse forces: estimator convergence, incentivizing us to use as much data as possible, and the aforementioned non-representativeness, doing the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, this paper gives a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness.
Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
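The convergence-versus-representativeness tradeoff can be caricatured with a stylized error model; this is an invented stand-in, not the paper's formal semimartingale construction: a sampling-error term that shrinks with the window length and a drift penalty that grows with it.

```python
# Stylized tradeoff: using n past observations reduces sampling error
# (sigma^2 / n) but adds a non-representativeness penalty ((c * n)^2)
# because older data follow a different law. sigma2 and c are invented.

def estimation_error(n, sigma2=4.0, c=0.001):
    return sigma2 / n + (c * n) ** 2

# the optimal window length balances the two adverse forces
best_n = min(range(1, 5001), key=estimation_error)
```

For this error shape the optimum sits at n* = (sigma2 / (2 c^2))^(1/3), about 126 here; heavier drift (larger c) shortens the optimal data window.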

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 126
482 Agro-Morphological Traits Based Genetic Diversity Analysis of ‘Ethiopian Dinich’ Plectranthus edulis (Vatke) Agnew Populations Collected from Diverse Agro-Ecologies in Ethiopia

Authors: Fekadu Gadissa, Kassahun Tesfaye, Kifle Dagne, Mulatu Geleta

Abstract:

‘Ethiopian dinich’, also called ‘Ethiopian potato’, is one of the economically important ‘orphan’ edible tuber crops indigenous to Ethiopia. We evaluated the morphological and agronomic trait performance of 174 samples from Ethiopia at multiple locations using 12 qualitative and 16 quantitative traits, recorded at the appropriate growth stages. We observed several morphotypes and phenotypic variations for qualitative traits, along with a wide range of mean performance values for all quantitative traits. Analysis of variance for each quantitative trait showed highly significant (p<0.001) variation among the collections, with non-significant variation for the environment-by-trait interaction for all traits but flower length. Comparatively high phenotypic and genotypic coefficients of variation were observed for plant height, days to flower initiation, days to 50% flowering and tuber number per hill. Moreover, the variability and coefficients of variation due to genotype-environment interaction were nearly zero for all traits except flower length. High genotypic coefficients of variation coupled with high estimates of broad-sense heritability and high genetic advance as a percentage of the collection mean were obtained for tuber weight per hill, number of primary branches per plant, tuber number per hill and number of plants per hill. Tuber yield per hectare showed large positive phenotypic and genotypic correlations with those traits. Principal component analysis revealed 76% of the total variation in the first six principal axes, with high factor loadings again for tuber number per hill, number of primary branches per plant and tuber weight. The collections were grouped into four clusters, with a weak pattern based on region (zone) of origin. In general, there is high genetic-based variability for ‘Ethiopian dinich’ improvement and conservation.
DNA-based markers are recommended for further genetic diversity estimation for use in breeding and conservation.
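The variance-component summaries reported above (genotypic coefficient of variation, broad-sense heritability, genetic advance as a percent of the mean) can be sketched from one-way ANOVA mean squares; the mean squares, replication number, and grand mean below are invented examples, not the study's data.

```python
import math

def summary_stats(ms_genotype, ms_error, replications, grand_mean, k=2.06):
    """GCV (%), broad-sense heritability, and genetic advance as % of mean.

    k = 2.06 is the standard selection differential at 5% intensity.
    """
    var_g = (ms_genotype - ms_error) / replications   # genotypic variance
    var_p = var_g + ms_error / replications           # phenotypic (entry-mean)
    gcv = 100 * math.sqrt(var_g) / grand_mean         # genotypic CV (%)
    h2 = var_g / var_p                                # broad-sense heritability
    ga = k * h2 * math.sqrt(var_p)                    # genetic advance
    gam = 100 * ga / grand_mean                       # GA as % of the mean
    return gcv, h2, gam

gcv, h2, gam = summary_stats(ms_genotype=90.0, ms_error=6.0,
                             replications=3, grand_mean=25.0)
```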

Keywords: agro-morphological traits, Ethiopian dinich, genetic diversity, variance components

Procedia PDF Downloads 167
481 Reduction of Concrete Shrinkage without the Use of Reinforcement

Authors: Martin Tazky, Rudolf Hela, Lucia Osuska, Petr Novosad

Abstract:

Concrete’s volumetric changes are a natural process caused by the hydration of silicate minerals. These changes can lead to cracking and subsequent destruction of the cementitious material’s matrix. In most cases, cracks can be assessed as a negative effect of hydration, and in all cases, they accelerate degradation processes; preventing the formation of these cracks is, therefore, the main effort. One possibility for eliminating this natural shrinkage process is using different types of dispersed reinforcement, for which steel and polymer reinforcement are preferably used. Besides ordinarily used reinforcement, however, it is possible to approach this specific problem from the beginning, through the concrete mix composition itself. There are many secondary raw materials which help reduce the hydration heat and also the shrinkage of concrete during curing. Recent research shows the possibility of shrinkage reduction also through the controlled formation of hydration products, whose morphology can act like traditionally used dispersed reinforcement. This contribution deals with the possibility of the controlled formation of mono- and trisulphates, which are usually considered degradation minerals. The controlled formation of mono- and trisulphates in a cementitious composite can be classified as a self-healing ability: the growth of their crystals acts directly against the shrinkage tension, which reduces the risk of crack development. Controlled formation means that these crystals start to grow in the fresh state of the material (e.g. concrete) but stop right before they could cause any damage to the hardened material. Waste materials with a suitable chemical composition are very attractive precursors because of their added value in the form of reduced landscape pollution and, of course, low cost.
In this experiment, the possibility of using fly ash from fluidized bed combustion as a mono- and trisulphate forming additive was investigated. The experiment itself was conducted on cement pastes and concretes, and specimens were subjected to a thorough analysis of physico-mechanical properties as well as microstructure from the moment of mixing up to 180 days. The hydration process and shrinkage of the cement composites were monitored. In mixtures with the admixture of fluidized bed combustion fly ash, possible failures were identified by electron microscopy and by the dynamic modulus of elasticity. The results of the experiments show the possibility of reducing concrete shrinkage without using traditionally dispersed reinforcement.

Keywords: shrinkage, monosulphates, trisulphates, self-healing, fluidized fly ash

Procedia PDF Downloads 167
480 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost

Authors: German Osma, Gabriel Ordonez

Abstract:

The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis in these cases mostly focuses on each machine or production step; it is less common to study the quality the production process achieves from an aggregated-value viewpoint, which can be used as a quality measurement for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the exergy aggregated to three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique; therefore, the theory is applied to discontinuous flows using single models of workstations, and subsequently the complete exergy model of each product is built using flowcharts. These models represent the exergy flows between components of the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the former describes the operation of the components of each machine and locates the exergy losses, while the latter estimates the exergy-aggregated value for the finished product and the wasted feedstock.
Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while in the thermal workstations this value is less than 5%, and that the effective exergy-aggregated value of each spare is about one-thirtieth of the total exergy required to operate the manufacturing process, which amounts to approximately 2 MJ. These losses are caused mainly by the technical limitations of the machines, oversizing of the metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. In this case, the results show the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
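The headline numbers above imply a simple exergy bookkeeping, sketched below with the reported orders of magnitude (about 2 MJ of exergy input per spare, of which roughly one-thirtieth ends up as aggregated value); the concrete figures are illustrative.

```python
# Minimal exergy bookkeeping: efficiency as product exergy over input
# exergy, applied to the rough magnitudes reported for the process.

def exergy_efficiency(ex_product_kj, ex_input_kj):
    return ex_product_kj / ex_input_kj

ex_input = 2000.0                  # ~2 MJ supplied (electricity + natural gas)
ex_aggregated = ex_input / 30.0    # effective aggregated value, ~1/30 of input
share = exergy_efficiency(ex_aggregated, ex_input)
losses = ex_input - ex_aggregated  # exergy destroyed or lost, kJ
```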

Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling

Procedia PDF Downloads 152
479 Estimation of Noise Barriers for Arterial Roads of Delhi

Authors: Sourabh Jain, Parul Madan

Abstract:

Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population and the rising number of vehicles and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) is found to exceed the standard permissible value at all locations. Noise barriers or enclosures are definitely useful in obtaining an effective reduction of traffic noise disturbances in urbanized areas. The US Federal Highway Administration (FHWA) model and the UK's Calculation of Road Traffic Noise (CoRTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise and for redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barrier, using both the FHWA and CoRTN models. During data collection, it was found that receivers are located far away from the road at the Rithala and Moolchand sites; hence the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CoRTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33 and CoRTN with an average percentage error of -8.5. It was observed that at all sites the noise levels at the receivers exceeded the standard limit of 55 dB. The calculations showed that the existing walls reduce noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height.
This was because of the residential buildings abutting the road. Heavy traffic was observed at Friends Colony, since it lies on a national highway, and the diminishing of noise through the propagation effect was very small there. As the FHWA and CoRTN models were implemented in an Excel programme, laborious manual noise calculations are eliminated. Unlike the CoRTN model, the FHWA model includes no reflection correction.
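The validation metric quoted above, the average percentage error of predicted versus measured Leq, can be sketched as follows; the sample levels are invented, chosen only to land near the reported FHWA figure.

```python
# Average percentage error between model predictions and measurements; a
# negative value means the model underestimates the noise levels.

def avg_percentage_error(predicted, measured):
    errors = [100.0 * (p - m) / m for p, m in zip(predicted, measured)]
    return sum(errors) / len(errors)

measured  = [68.0, 72.5, 70.0, 74.0]  # observed Leq, dB(A), illustrative
predicted = [63.5, 66.8, 64.9, 68.2]  # model output, dB(A), illustrative
ape = avg_percentage_error(predicted, measured)
```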

Keywords: FHWA, CoRTN, noise sources, noise barriers

Procedia PDF Downloads 114
478 A Method for Evaluating Gender Equity of Cycling from a Rawlsian Justice Perspective

Authors: Zahra Hamidi

Abstract:

Promoting cycling as an affordable, environmentally friendly mode of transport to replace private car use has been central to sustainable transport policies. Cycling is faster than walking and, combined with public transport, has the potential to extend the opportunities that people can access. In other words, cycling, besides its direct positive health impacts, can improve people's mobility and ultimately their quality of life. The transport literature well supports the close relationship between mobility, quality of life, and well-being. At the same time, inequity in the distribution of access and mobility has been associated with key aspects of injustice and social exclusion. Patterns of social exclusion and inequality in access are also often related to population characteristics such as age, gender, income, health, and ethnic background. Therefore, while investing in transport infrastructure, it is important to consider the equity of the provided access for different population groups. This paper proposes a method to evaluate the equity of cycling in a city from a Rawlsian egalitarian perspective. Since this perspective is concerned with the differences between individuals and social groups, the method combines accessibility measures with the Theil index of inequality, which captures the inequalities ‘within’ and ‘between’ groups. The paper specifically focuses on two population characteristics: gender and ethnic background. Following Rawls' equity principles, this paper measures accessibility by bike to a selection of urban activities that can be linked to the concept of social primary goods. Moreover, as a growing number of cities around the world have launched bike-sharing systems (BSS), this paper incorporates both private and public bike networks in the estimation of accessibility levels. Additionally, the typology of bike lanes (separated from or shared with roads), the presence of a bike-sharing system in the network, as well as bike facilities (e.g. parking racks) have been included in the developed accessibility measures.
Application of the proposed method to a real case study, the city of Malmö, Sweden, shows its effectiveness and efficiency. Although the accessibility levels were estimated only based on the gender and ethnic background characteristics of the population, the author suggests that the analysis can be applied to other contexts and further developed using other properties, such as age, income, or health.
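The within/between decomposition of the Theil index used in the method can be sketched as follows; the group labels and accessibility scores are invented for illustration.

```python
import math

def theil(values):
    """Theil T index of a list of positive accessibility scores."""
    mu = sum(values) / len(values)
    return sum((x / mu) * math.log(x / mu) for x in values) / len(values)

def theil_decomposition(groups):
    """Split overall inequality into within-group and between-group parts."""
    all_values = [x for xs in groups.values() for x in xs]
    n, mu = len(all_values), sum(all_values) / len(all_values)
    within = between = 0.0
    for xs in groups.values():
        mu_g = sum(xs) / len(xs)
        share = (len(xs) * mu_g) / (n * mu)  # group's share of total access
        within += share * theil(xs)
        between += share * math.log(mu_g / mu)
    return within, between

groups = {"men": [12.0, 15.0, 14.0, 11.0], "women": [8.0, 9.0, 7.5, 10.0]}
within, between = theil_decomposition(groups)
total = theil([x for xs in groups.values() for x in xs])  # = within + between
```

The decomposition is exact: the overall index always equals the within-group plus the between-group term, which is what makes it convenient for comparing population groups.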

Keywords: accessibility, cycling, equity, gender

Procedia PDF Downloads 379
477 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area

Authors: Michelle Eliane Hernández-García, Angélica Lozano

Abstract:

Studies of traffic accidents usually consider the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors of greater or lesser relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located, and four sensors provide information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land use, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks, and 39 traffic and speed sensors are located on the main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, the elements of the traffic mix which are linked to traffic accidents must be identified. Models that attempt to explain which factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector for the natural logarithm of the mean number of accidents per period; this estimate is achieved by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors.
In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1 to C6, standing respectively for cars; microbuses and vans; buses; unitary trucks (2 to 6 axles); articulated trucks (3 to 6 axles); and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the first industrial zone than in the expanded zone.
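A minimal sketch of the model family described above, a Poisson regression with a logarithmic link, is given below. It uses a single invented predictor (trucks per hour) and plain gradient ascent, a drastic simplification of the study's six-class, 17,520-observation model.

```python
import math

def fit_poisson(xs, ys, lr=0.0005, steps=50000):
    """Fit log E[y] = b0 + b1*x by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            lam = math.exp(b0 + b1 * x)  # Poisson mean under the log link
            g0 += y - lam                # d logL / d b0
            g1 += (y - lam) * x          # d logL / d b1
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# synthetic hourly observations: heavier truck traffic, more accidents
trucks    = [0, 1, 2, 3, 4, 5, 6, 7, 8]
accidents = [0, 0, 1, 1, 2, 2, 3, 4, 6]
b0, b1 = fit_poisson(trucks, accidents)  # b1 > 0: positive relation
```

In practice a GLM routine (e.g. statsmodels' `GLM` with a Poisson family) would replace the hand-rolled gradient ascent; the log-likelihood is concave, so both approaches reach the same maximum.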

Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks

Procedia PDF Downloads 115
476 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient in a food matrix and has been widely used to overcome the lack of nutrients in the diet or increasing the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and risks that these foods may cause. Wheat and its by-products, such as semolina, has been strongly indicated to be used as a food vehicle since it is widely consumed and used in the production of other foods. These products have been strategically used to add some nutrients, such as fibers. Methods of analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method, however, it is based on indirect measurements, yielding high amount of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (Chemometrics) to extract analytical information from the corresponding spectra, as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited for NIR, once it can handle many spectra at a time and be used for non-supervised classification. Advantages of the PCA, which is also a data reduction technique, is that it reduces the data spectra to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches the Canonical Variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of maximum ratio between inter and intra-class variances. The present work used a portable infrared spectrometer (NIR) for identification and classification of pure and fiber-fortified semolina samples. 
The fiber was added to semolina at two different concentrations, and after spectral acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
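As an illustration of the chemometric workflow described above, the following minimal sketch applies PCA to synthetic stand-in spectra and classifies the scores with a nearest-centroid rule (a deliberately simplified stand-in for LDA). The spectra, class sizes, and band positions are invented for the example and are not the authors' measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR spectra: 30 samples x 100 wavelengths, two
# classes ("pure" vs "fiber-fortified") differing by a shifted absorption band.
wavelengths = np.linspace(0, 1, 100)
pure = rng.normal(0, 0.05, (15, 100)) + np.exp(-((wavelengths - 0.3) ** 2) / 0.01)
fortified = rng.normal(0, 0.05, (15, 100)) + np.exp(-((wavelengths - 0.5) ** 2) / 0.01)
X = np.vstack([pure, fortified])
y = np.array([0] * 15 + [1] * 15)

# PCA by SVD on mean-centred spectra: scores are projections on the
# principal axes (the "latent variables" kept for interpretation).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T          # keep the first 2 latent variables

# Nearest-centroid classification in PC space (simplified stand-in for LDA).
centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((scores[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(f"resubstitution accuracy: {accuracy:.2f}")
```

A full reproduction of the paper's approach would replace the nearest-centroid step with LDA on the PCA scores and add the cross-validation reported above.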

Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 190
475 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms that could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, which opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome because they hinder a learner's knowledge development. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing rather than going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging a learner's mastery from multiple-choice test scores alone is a naive approach.
Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to capture a learner's knowledge mastery more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be used to make changes to a course, for example, modifications to course content (certain sections of learning material may prove unhelpful to many learners in mastering the intended learning outcomes) or to course design (such as the type and duration of feedback).
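The guessing rules listed above can be sketched as simple checks over an attempt log. The record fields and learner IDs below are illustrative placeholders, not actual xAPI statement vocabulary; a real implementation would parse xAPI statements from a Learning Record Store.

```python
# Toy attempt log standing in for xAPI answer statements.
attempts = [
    {"learner": "u1", "question": "q1", "choice": "A"},
    {"learner": "u1", "question": "q2", "choice": "A"},
    {"learner": "u1", "question": "q3", "choice": "A"},
    {"learner": "u2", "question": "q1", "choice": "A"},
    {"learner": "u2", "question": "q1", "choice": "B"},
    {"learner": "u2", "question": "q1", "choice": "C"},
]

def same_option_for_all(learner):
    """Rule: one option selected for every question on the test."""
    choices = {a["choice"] for a in attempts if a["learner"] == learner}
    questions = {a["question"] for a in attempts if a["learner"] == learner}
    return len(questions) > 1 and len(choices) == 1

def tried_every_option(learner, question, n_options=3):
    """Rule: the learner cycled through each option for one question."""
    tried = {a["choice"] for a in attempts
             if a["learner"] == learner and a["question"] == question}
    return len(tried) >= n_options

print(same_option_for_all("u1"))       # u1 picked "A" for every question
print(tried_every_option("u2", "q1"))  # u2 cycled A, B, C on q1
```

The third rule (repeating tests without revisiting the material) would be detected the same way, by counting failed test attempts with no intervening content-access statements.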

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 101
474 In vitro and in vivo Anticancer Activity of Nanosize Zinc Oxide Composites of Doxorubicin

Authors: Emma R. Arakelova, Stepan G. Grigoryan, Flora G. Arsenyan, Nelli S. Babayan, Ruzanna M. Grigoryan, Natalia K. Sarkisyan

Abstract:

Novel nanosize zinc oxide composites of doxorubicin, obtained by depositing a 180 nm thick zinc oxide film on the drug surface using DC-magnetron sputtering of a zinc target in the form of gels (PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO), were studied for drug delivery applications. Cancer specificity was revealed in both in vitro and in vivo models. The cytotoxicity of the test compounds was analyzed against human cancer (HeLa) and normal (MRC5) cell lines using the MTT colorimetric cell viability assay. IC50 values were determined and compared to reveal the cancer specificity of the test samples. The mechanism of the most active compound was investigated by flow cytometry analysis of DNA content after PI (propidium iodide) staining; data were analyzed with Tree Star FlowJo software using the Dean-Jett-Fox cell cycle analysis module. The in vivo anticancer activity experiments were carried out on mice with inoculated ascitic Ehrlich's carcinoma, with intraperitoneal administration of doxorubicin and its zinc oxide compositions. It was shown that deposition of the nanosize zinc oxide film on the drug surface leads to selective anticancer activity of the composites at the cellular level, with selectivity indices (SI) ranging from 4 (Starch+NaCMC+Dox+ZnO) to 200 (PEO(gel)+Dox+ZnO), the latter higher than that of free Dox (SI = 56). A significant increase in in vivo antitumor activity (by a factor of 2-2.5) and a decrease in the general toxicity of the zinc oxide compositions of doxorubicin in the form of the above-mentioned gels, compared to free doxorubicin, were shown on the model of inoculated Ehrlich's ascitic carcinoma. Mechanistic studies of the anticancer activity revealed a cytostatic effect based on a high level of DNA biosynthesis inhibition at considerably low concentrations of the zinc oxide compositions of doxorubicin.
The results of the in vitro and in vivo studies of the PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO composites confirm the high potential of nanosize zinc oxide composites as a vector delivery system for future application in cancer chemotherapy.
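The selectivity index quoted above is conventionally computed as the ratio of the IC50 on normal cells to the IC50 on cancer cells. A minimal sketch, using hypothetical IC50 values chosen only to illustrate the reported SI magnitudes (they are not the study's measured concentrations):

```python
def selectivity_index(ic50_normal, ic50_cancer):
    """SI = IC50(normal cells) / IC50(cancer cells); SI > 1 means the
    compound is more toxic to cancer cells than to normal cells."""
    return ic50_normal / ic50_cancer

# Hypothetical IC50 values in arbitrary units, chosen for illustration only.
print(selectivity_index(56.0, 1.0))   # SI = 56, the value reported for free Dox
print(selectivity_index(200.0, 1.0))  # SI = 200, as reported for PEO(gel)+Dox+ZnO
```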

Keywords: anticancer activity, cancer specificity, doxorubicin, zinc oxide

Procedia PDF Downloads 385
473 Intensity of Dyspnea and Anxiety in Seniors in the Terminal Phase of the Disease

Authors: Mariola Głowacka

Abstract:

Aim: The aim of this study was to present the assessment of dyspnea and anxiety in seniors staying in a hospice, in the context of the nurse's tasks. Materials and methods: The research was carried out at the "Hospicjum Płockie" Association of St. Urszula Ledóchowska in Płock, in a stationary ward for adults. The research group consisted of 100 people, women and men. The study used the diagnostic survey method, the estimation method, and analysis of patient records; the research tools were the NRS numerical rating scale, the modified Borg scale to assess dyspnea, the Trait Anxiety scale to measure the intensity of anxiety, and a sociodemographic questionnaire. Results: Among the patients, the largest group had no dyspnoea (38 people), followed by those with an average level of dyspnoea (26 people). People with lung cancer had a higher level of breathlessness than people with other cancers. Half of the patients included in the study felt anxiety at a low level. On average, men had a higher level of anxiety than women. Conclusions: 1) Patients staying in the hospice require comprehensive nursing care due to the underlying disease, comorbidities, and the wide range of medications taken, which aggravate the feeling of dyspnea and anxiety. 2) The study showed that the level of dyspnea in hospice patients varied in severity: the largest groups were those without dyspnea (38) and those with a low level of dyspnea (34); 12 people experienced an average level of dyspnea and 15 a high level. 3) The main factor influencing the severity of dyspnea was the location of the cancer. There was no significant relationship between the intensity of dyspnea and the patient's age, gender, or time since diagnosis. 4) The study showed that the level of anxiety in hospice patients also varied in severity.
Most people experienced a low level of anxiety (51); 33 people experienced an average level and 16 a high level. 5) The patient's gender was the main factor influencing anxiety intensity: men had higher levels of anxiety than women. There was no significant correlation between the intensity of anxiety and the respondents' age, type of cancer, or time since diagnosis. 6) The intensity of dyspnea depended on the type of cancer: people with lung cancer had a higher level of breathlessness than those with breast cancer or bowel cancer. Anxiety was not found to increase depending on the type of cancer or the comorbidities of the examined person.

Keywords: cancer, shortness of breath, anxiety, senior, hospice

Procedia PDF Downloads 75
472 Development of a Wound Dressing Material Based on Microbial Polyhydroxybutyrate Electrospun Microfibers Containing Curcumin

Authors: Ariel Vilchez, Francisca Acevedo, Rodrigo Navia

Abstract:

The wound healing process can be accelerated and improved by the action of antioxidants such as curcumin (Cur) on the tissues; however, curcumin delivered through the digestive system is not effective enough to exploit its benefits. Electrospinning presents an alternative for carrying curcumin directly to wounds, and polyhydroxybutyrate (PHB) is proposed as the matrix to load curcumin owing to its biodegradable and biocompatible properties. PHB, one of some 150 identified polyhydroxyalkanoates (PHAs), is a natural thermoplastic polyester produced by microbial fermentation. The objective was to develop electrospun bacterial PHB-based microfibers containing curcumin for possible biomedical applications. Commercial PHB was dissolved in chloroform:dimethylformamide (4:1) to a final concentration of 7% m/V. Curcumin was added to the polymeric solution at 1% and 7% m/m relative to PHB. Electrospinning equipment (NEU-BM, China) with a rotary collector was used to obtain Cur-PHB fibers at different voltages and flow rates of the polymeric solution, with a distance of 20 cm from the needle to the collector. Scanning electron microscopy (SEM) was used to determine the diameter and morphology of the obtained fibers. Thermal stability was assessed by thermogravimetric analysis (TGA), and Fourier Transform Infrared Spectroscopy (FT-IR) was carried out to study the chemical bonds and interactions. A preliminary in vitro curcumin release into phosphate-buffered saline (PBS, pH = 7.4) was measured by spectrophotometry. The PHB fibers presented an intact chemical composition relative to the original material (powder) according to the FT-IR spectra; their diameter fluctuated between 0.761 ± 0.123 and 2.157 ± 0.882 μm, with different qualities according to their morphology.
The best fibers in terms of quality and diameter were samples 2 and 6, obtained at 0-10 kV and 0.5 mL/h, and 0-10 kV and 1.5 mL/h, respectively. The melting temperature was near 178 °C, in agreement with the literature. The curcumin release reached about 14% at 37 °C after 54 h in PBS and fitted a quasi-Fickian diffusion model. We conclude that it is possible to load curcumin into PHB to obtain continuous, homogeneous, and solvent-free microfibers by electrospinning. Between 0% and 7% curcumin, the crystallinity of the fibers decreases as the curcumin concentration increases; thus, curcumin enhances the flexibility of the obtained material. HPLC should be used in further analyses of curcumin release.
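Quasi-Fickian release behaviour of the kind mentioned above is commonly checked by fitting the Korsmeyer-Peppas model Mt/M∞ = k·tⁿ in log-log coordinates, where an exponent n at or below about 0.5 indicates (quasi-)Fickian diffusion. The release data below are synthetic, shaped only to mimic the reported ~14% release at 54 h, not the authors' measurements.

```python
import numpy as np

# Hypothetical cumulative release data (fraction released vs. hours).
t = np.array([2, 6, 12, 24, 36, 54], dtype=float)
release = 0.019 * t ** 0.5            # synthetic curve: k = 0.019, n = 0.5

# Korsmeyer-Peppas fit Mt/Minf = k * t^n via log-log linear regression:
# log(release) = n * log(t) + log(k).
n, log_k = np.polyfit(np.log(t), np.log(release), 1)
k = np.exp(log_k)
print(f"n = {n:.2f}, k = {k:.3f}")    # n <= ~0.5 suggests quasi-Fickian diffusion
```

With real data the fit is restricted to the early portion of the release curve (typically Mt/M∞ < 0.6), where the power-law model is valid.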

Keywords: antioxidant, curcumin, polyhydroxybutyrate, wound healing

Procedia PDF Downloads 107
471 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore chemical dermal exposure assessment models developed abroad and to evaluate their feasibility for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose-Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM), and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between each semi-quantitative model's assessment results and the exposure factors used in the model was analyzed to identify the important evaluation indicators of each dermal exposure assessment model. To assess the effectiveness of the semi-quantitative models, this study also generated quantitative dermal exposure estimates using a prediction model and verified the correlations via Pearson's test. Results show that the strength of COSHH's decision factors could not be determined, because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency show a positive correlation. In the DREAM model, there is a positive correlation between skin exposure, relative working time, and the working environment. In the RISKOFDERM model, the actual exposure situation and exposure time are positively correlated.
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models showed poor correlations, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
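The Pearson correlation used for the model comparison above can be reproduced in a few lines. The paired values below are invented placeholders for a semi-quantitative model score versus a quantitative exposure estimate across five industries; they are not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical paired values: semi-quantitative score vs. quantitative
# estimate for five industries (illustrative numbers only).
semi = [2.1, 3.4, 1.2, 4.8, 3.9]
quant = [0.8, 1.5, 0.4, 2.3, 1.7]
print(f"r = {pearson_r(semi, quant):.2f}")
```

In practice `scipy.stats.pearsonr` is typically used instead, since it also returns the p-value behind significance thresholds such as the p<0.05 quoted above.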

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 145