Search results for: expected annual loss
2151 Compatibility of Sulphate Resisting Cement with Super and Hyper-Plasticizer
Authors: Alper Cumhur, Hasan Baylavlı, Eren Gödek
Abstract:
Use of superplasticizing chemical admixtures in concrete production is widespread all over the world and has become almost inevitable. Super-plasticizers (SPA) extend the setting time of concrete by adsorbing onto cement particles, allowing the concrete to preserve its fresh-state workability. Hyper-plasticizers (HPA), a special type of superplasticizer, enable the production of high-quality concretes by effectively increasing workability. However, the compatibility of cement with super- and hyper-plasticizers is crucial to achieving the workability needed to produce quality concretes. In 2011, the EN 197-1 standard was revised and the cement classifications were updated. In this study, the compatibility of a hyper-plasticizer with CEM I SR0 sulphate-resisting cement (SRC), first classified in EN 197-1, is investigated. Within the scope of the experimental studies, a reference cement mortar was designed with a water/cement ratio of 0.50 conforming to EN 196-1. The fresh unit density of the mortar was measured, and the spread diameters (at 0, 60, and 120 min after mix preparation) and setting time of the reference mortar were determined with flow-table and Vicat tests, respectively. Three further mortars will be prepared with the super- and hyper-plasticizer, conforming to ASTM C494, at 0.50, 0.75, and 1.00% of cement weight. Fresh unit densities, spread diameters, and setting times of the super- and hyper-plasticizer-added mortars (SPM, HPM) will be determined. Theoretical air-entrainment values of the SPMs and HPMs will be calculated from the differences between the densities of the plasticizer-added mortars and the reference mortar. The flow-table and Vicat tests will be repeated on these mortars and the results compared. In conclusion, the compatibility of SRC with SPA and HPA will be assessed.
It is expected that optimum dosages of SPA and HPA will be determined for providing the required workability and setting conditions of SRC mortars, and the advantages and disadvantages of both SPA and HPA will be discussed.
Keywords: CEM I SR0, hyper-plasticizer, setting time, sulphate resisting cement, super-plasticizer, workability
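The theoretical air-entrainment estimate described above — the drop in fresh unit density of a plasticizer-added mortar relative to the reference mortar — can be sketched as a one-line calculation. The density figures below are illustrative placeholders, not values from the study:

```python
def air_entrainment_pct(ref_density, admixed_density):
    """Theoretical air entrainment (%) from the decrease in fresh unit
    density of a plasticizer-added mortar relative to the reference mortar."""
    return (ref_density - admixed_density) / ref_density * 100.0

# Illustrative fresh unit densities in kg/m^3 (not measured values):
print(round(air_entrainment_pct(2250.0, 2205.0), 1))  # 2.0
```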
Procedia PDF Downloads 215
2150 Full-Scale 3D Simulation of the Electroslag Rapid Remelting Process
Authors: E. Karimi-Sibaki, A. Kharicha, M. Wu, A. Ludwig
Abstract:
The standard electroslag remelting (ESR) process can ideally control the solidification of an ingot and produce a homogeneous structure with minimal defects. However, the melt rate of the electrode is rather low, which makes the whole process uneconomical, especially for producing small ingot sizes. In contrast, continuous casting is an economical process for producing small ingots, such as billets, at high casting speed. Unfortunately, a deep liquid melt pool forms in the billet ingot during continuous casting, leading to center porosity and segregation. As such, continuous casting is not suitable for segregation-prone alloys like tool steel or several superalloys. On the other hand, the electroslag rapid remelting (ESRR) process combines the advantages of traditional ESR and continuous casting to produce billets. In the ESRR process, a T-shaped mold is used that includes a graphite ring carrying the major portion of the current through the mold. Only a few reports discussing this topic are available in the literature. Research on the ESRR process is currently ongoing, aiming to improve the design of the T-shaped mold, to decrease the overall heat loss in the process, and to obtain a higher temperature at the metal meniscus. In the present study, a 3D model is proposed to investigate the electromagnetic, thermal, and flow fields in the whole process, as well as the solidification of the billet ingot. We performed a fully coupled numerical simulation to explore the influence of the electromagnetically driven flow (MHD) on the thermal field in the slag and ingot. The main goal is to obtain some fundamental understanding of the formation of the melt pool of the solidifying billet ingot in the ESRR process.
Keywords: billet ingot, magnetohydrodynamics (MHD), numerical simulation, remelting, solidification, T-shaped mold
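The electromagnetically driven flow referred to above is governed, in standard MHD treatments of ESR-type processes, by the Lorentz force and Joule heating source terms. These are generic textbook relations, given here for context rather than the authors' specific formulation:

```latex
\[
\mathbf{F}_L = \mathbf{J} \times \mathbf{B},
\qquad
Q_J = \frac{\lvert \mathbf{J} \rvert^{2}}{\sigma},
\]
```

where $\mathbf{J}$ is the current density, $\mathbf{B}$ the magnetic flux density, and $\sigma$ the electrical conductivity of the slag or metal. The Lorentz force drives the flow in the momentum equation, and the Joule heating enters the thermal field — the coupling explored in the simulation.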
Procedia PDF Downloads 294
2149 Neural Network Based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnosis and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
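The hyperparameter search described above — Grid Search CV over an MLP — can be sketched with scikit-learn. The data and parameter grid below are hypothetical stand-ins; the paper does not list its exact search space or features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Toy stand-in for the combined feature inputs described in the abstract.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Hypothetical grid; the study's actual search space is not given.
param_grid = {
    "hidden_layer_sizes": [(32,), (64,)],
    "alpha": [1e-4, 1e-3],
}

# Exhaustive search with 3-fold cross-validation over the grid.
search = GridSearchCV(MLPClassifier(max_iter=500, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```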
Procedia PDF Downloads 64
2148 Bi-Layer Electro-Conductive Nanofibrous Conduits for Peripheral Nerve Regeneration
Authors: Niloofar Nazeri, Mohammad Ali Derakhshan, Reza Faridi Majidi, Hossein Ghanbari
Abstract:
Injury of the peripheral nervous system (PNS) can lead to loss of sensation or movement. To date, one of the challenges for surgeons is repairing large gaps in the PNS. To solve this problem, nerve conduits have been developed. Conduits produced by means of electrospinning can mimic the extracellular matrix and provide enough surface for further functionalization. In this research, a conductive bilayer nerve conduit of polycaprolactone (PCL), poly(lactic-co-glycolic acid) (PLGA), and MWCNTs was fabricated for promoting peripheral nerve regeneration. The conduit was made of longitudinally aligned PLGA nanofibrous sheets in the lumen to promote nerve regeneration and randomly oriented PCL nanofibers on the outer surface for mechanical support. The intra-luminal guidance channel was made of conductive aligned nanofibrous rolled sheets coated with laminin via dopamine. Properties of the electrospun scaffolds were investigated using contact-angle measurements, mechanical testing, degradation time, scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). SEM analysis showed that the nanofibers were about 600-750 nm in diameter and that MWCNTs were deposited between the nanofibers. The XPS results showed that laminin attached successfully to the nanofiber surface. Contact-angle and tensile tests revealed that the scaffolds have good hydrophilicity and sufficient mechanical strength. In vitro studies demonstrated that the conductive surface was able to enhance the attachment and proliferation of PC12 and Schwann cells. We conclude that this bilayer composite conduit has good potential for nerve regeneration.
Keywords: conductive, conduit, laminin, MWCNT
Procedia PDF Downloads 200
2147 Design of Hybrid Auxetic Metamaterials for Enhanced Energy Absorption under Compression
Authors: Ercan Karadogan, Fatih Usta
Abstract:
Auxetic materials have a negative Poisson’s ratio (NPR), which is rarely found in nature. They are metamaterials with potential applications in many engineering fields. Mechanical metamaterials are synthetically designed structures with unusual mechanical properties that depend on the properties of the matrix structure; they exhibit special characteristics such as improved shear modulus, increased energy absorption, and high fracture toughness. Non-auxetic materials compress transversely when they are stretched: the system naturally tends to keep its density constant, so transverse compression increases the density to balance the loss in the longitudinal direction. This study proposes to improve the crushing performance of hybrid auxetic materials. The re-entrant honeycomb structure has been combined with a star honeycomb, an S-shaped unit cell, a double arrowhead, and a structurally hexagonal re-entrant honeycomb in 9 × 9 cell arrays, i.e., 9 cells in the lateral direction and 9 in the vertical direction. Finite Element (FE) and experimental methods have been used to determine the compression behavior of the developed hybrid auxetic structures. The FE models were developed using Abaqus software. Specimens made of polymer plastic were 3D printed and subjected to compression loading. The results are compared in terms of specific energy absorption and strength. This paper describes the quasi-static crushing behavior of two types of hybrid lattice structures (auxetic + auxetic and auxetic + non-auxetic). The results show that the developed hybrid structures can be useful for controlling collapse mechanisms and provide larger energy absorption than conventional re-entrant auxetic structures.
Keywords: auxetic materials, compressive behavior, metamaterials, negative Poisson’s ratio
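Specific energy absorption, the comparison metric mentioned above, is conventionally the area under the crushing force-displacement curve divided by specimen mass. A minimal sketch using trapezoidal integration, with illustrative numbers rather than the study's measured data:

```python
def specific_energy_absorption(force_N, disp_m, mass_kg):
    """SEA = (integral of F dx) / mass, via the trapezoid rule."""
    energy = sum((f1 + f2) / 2.0 * (x2 - x1)
                 for (f1, x1), (f2, x2)
                 in zip(zip(force_N, disp_m), zip(force_N[1:], disp_m[1:])))
    return energy / mass_kg

# Illustrative plateau-like crushing response (not measured data):
F = [0.0, 800.0, 600.0, 650.0, 700.0]   # force, N
x = [0.0, 0.002, 0.004, 0.006, 0.008]   # displacement, m
print(round(specific_energy_absorption(F, x, 0.05), 1))  # 96.0 J/kg
```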
Procedia PDF Downloads 97
2146 Bilateral Simultaneous Acute Primary Angle Closure Glaucoma: A Remarkable Case
Authors: Nita Nurlaila Kadarwaty
Abstract:
Purpose: This study presents a rare case of bilateral acute primary angle closure glaucoma (PACG). Method: A case report of a 64-year-old woman with a good outcome of acute PACG in both eyes who underwent phacotrabeculectomy surgery. Result: A 64-year-old woman complained of acute pain in both eyes, accompanied by decreased vision, photophobia, and seeing halos for three weeks. There was no prior history of trauma, use of steroids or other systemic drugs, or intraocular surgery. Ophthalmologic examination revealed a right eye (RE) visual acuity of 0.1 and a left eye (LE) visual acuity of 0.2. RE intraocular pressure (IOP) was 12 mmHg and LE IOP 36.4 mmHg on timolol maleate eye drops and oral acetazolamide. The anterior segments of both eyes revealed mixed injection, corneal edema, a shallow anterior chamber, posterior synechiae, a mid-dilated pupil with negative pupillary reflex, and a cloudy lens without intumescence. There was a glaucomatous optic disc, and gonioscopy showed a closed iridocorneal angle. Initial treatment included oral acetazolamide with potassium aspartate 250 mg three times a day, timolol maleate 0.5% eye drops twice a day, and prednisolone acetate 1% eye drops four times a day. The patient underwent trabeculectomy, phacoemulsification, and IOL implantation in both eyes. One week after the surgeries, both eyes showed decreased IOP and good visual improvement. Conclusion: Bilateral simultaneous acute PACG is generally severe and results in a poor outcome; it causes rapidly progressive and often irreversible visual loss. Phacotrabeculectomy has more benefits than phacoemulsification alone with respect to post-surgical IOP reduction.
Keywords: acute primary angle closure glaucoma, intraocular pressure, phacotrabeculectomy, glaucoma
Procedia PDF Downloads 73
2145 Development of Cost-Effective Protocol for Preparation of Dehydrated Paneer (Indian Cottage Cheese) Using Freeze Drying
Authors: Sadhana Sharma, P. K. Nema, Siddhartha Singha
Abstract:
Paneer, or Indian cottage cheese, is an acid- and heat-coagulated milk product that is highly perishable because of its high moisture content (58-60%). Typically, paneer is marble to light creamy white in appearance. A good paneer should have a cohesive body with slight sponginess or springiness; the texture must be smooth and velvety with close-knit compactness, and it should have a pleasing, mildly acidic, slightly sweet, and nutty flavour. Consumers today demand simple-to-prepare, convenient, healthy, and natural foods. Dehydrated paneer can be used in numerous ways. It can be used in curry preparation similar to paneer-in-curry, a delicacy in Indian cuisine. It may be added to granola or trail mix, yielding a high-energy snack. If ground to a powder, it may be used as a cheesy spice mix or as popcorn seasoning. Dried paneer powder may be added to pizza dough or to a white sauce to turn it into a paneer sauce. Drying of such food hydrogels by conventional methods is associated with several undesirable characteristics, including case hardening, longer drying time, poor rehydration ability, and fat loss during drying. The present study focuses on developing a cost-effective protocol for freeze-drying of paneer. The dehydrated product would be shelf-stable at room temperature without any added preservatives and could be rehydrated to its original state with flavor and texture comparable to the fresh form. Moreover, the final product after rehydration would be fresher and softer than its frozen counterparts.
Keywords: color, freeze-drying, paneer, texture
Procedia PDF Downloads 160
2144 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating the accelerations of components, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange’s equations (LE) and experimentally validated. A new mathematical model was developed using Gauss’s Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss’s Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to avoid the possibility that numerical integration drives the system unstable. The GPLC model allows the incorporation of more physical aspects of the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform.
The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system
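For reference, Gauss's Principle of Least Constraint, on which the second model above is built, states that the actual accelerations minimize a mass-weighted quadratic form subject to the constraints. This is the standard textbook statement, not the authors' specific derivation:

```latex
\[
\min_{\ddot{\mathbf{q}}} \; Z(\ddot{\mathbf{q}})
= \tfrac{1}{2}
\left(\ddot{\mathbf{q}} - \mathbf{M}^{-1}\mathbf{Q}\right)^{\!\top}
\mathbf{M}
\left(\ddot{\mathbf{q}} - \mathbf{M}^{-1}\mathbf{Q}\right)
\quad \text{subject to} \quad
\mathbf{A}(\mathbf{q},\dot{\mathbf{q}})\,\ddot{\mathbf{q}}
= \mathbf{b}(\mathbf{q},\dot{\mathbf{q}}),
\]
```

where $\mathbf{M}$ is the mass matrix, $\mathbf{Q}$ the applied generalized forces, and $\mathbf{A}\ddot{\mathbf{q}} = \mathbf{b}$ the rolling constraint written at the acceleration level — which is precisely why constraint stabilization is needed during numerical integration.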
Procedia PDF Downloads 250
2143 Platelet Transfusion Thresholds for Pediatrics: A Retrospective Study
Authors: Hessah Alsulami, Majedah Aldosari
Abstract:
Introduction: A platelet threshold of 10×10⁹/L is recommended for clinically stable thrombocytopenic pediatric patients. Transfusion at a higher threshold (in the absence of research evidence, as determined by clinical circumstances, generally 40×10⁹/L) may be required for patients with signs of bleeding, high fever, hyperleukocytosis, a rapid fall in platelet count, a concomitant coagulation abnormality, critical illness, or impaired platelet function (including drug-induced), and for patients undergoing invasive procedures. Method: This study is a retrospective observational analysis of platelet transfusion thresholds in a single secondary pediatric hospital in Riyadh. From the blood bank database, the list of patients who received platelet transfusions in the second half of 2018 was retrieved. Patients were divided into two groups: group A, those belonging to the category warranting a high platelet level for transfusion (such as those with bleeding, high fever, a rapid fall in platelet count, impaired platelet function, or undergoing invasive procedures), and group B, those who were not. We then examined the pre- and post-transfusion platelet levels for each group. The data were analyzed using GraphPad software and are expressed as mean ± SD. Result: A total of 112 transfusion episodes in 61 patients (38% female) were analyzed. Ages ranged from 24 days to 8 years. The distribution of platelet transfusion episodes was 64% (n=72) for group A and 36% (n=40) for group B. The mean pre-transfusion platelet count was 46×10³ ± 11×10³ for group A and 28×10³ ± 6×10³ for group B. The mean post-transfusion platelet count was 61×10³ ± 14×10³ and 60×10³ ± 24×10³ for groups A and B, respectively. The rise in mean platelet count after transfusion was significantly greater among stable patients (group B) than among unstable patients (group A) (P < 0.001).
Conclusion: The platelet count threshold for transfusion varied with the clinical condition and was higher among the unstable patient group, as expected. For stable patients, the threshold was higher than it should be, which suggests that clinicians do not follow the guidelines in this regard. The rise in platelet count after transfusion was higher among stable patients.
Keywords: platelet, transfusion, threshold, pediatric
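The group comparison reported above — the post-minus-pre platelet rise in stable versus unstable patients — can be sketched as a two-sample Welch t statistic. The abstract does not state which test was used, and the counts below are synthetic placeholders, not patient data:

```python
import math
import statistics as st

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances: (mean_a - mean_b) / sqrt(va/na + vb/nb)."""
    va, vb = st.variance(a), st.variance(b)
    na, nb = len(a), len(b)
    return (st.mean(a) - st.mean(b)) / math.sqrt(va / na + vb / nb)

# Synthetic per-episode platelet rises (x10^3/uL), NOT the study's data:
rise_unstable = [12, 18, 10, 15, 20, 14]   # group A stand-in
rise_stable   = [30, 35, 28, 33, 31, 36]   # group B stand-in
print(round(welch_t(rise_stable, rise_unstable), 2))
```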
Procedia PDF Downloads 71
2142 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
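A minimal illustration of the evolutionary policy optimization the article surveys: a (1+λ) evolution strategy that mutates policy parameters and keeps the best offspring when it improves the return. The quadratic reward below is a toy stand-in for an RL episode return, not any benchmark from the paper:

```python
import random

def reward(theta):
    """Toy stand-in for an episode return; maximum at theta = (2, -1)."""
    return -((theta[0] - 2.0) ** 2 + (theta[1] + 1.0) ** 2)

def one_plus_lambda_es(generations=200, lam=8, sigma=0.2, seed=0):
    """(1+lambda) evolution strategy: Gaussian mutation, greedy selection."""
    rng = random.Random(seed)
    best = [0.0, 0.0]
    for _ in range(generations):
        # Mutate the parent lam times; adopt the best offspring if better.
        offspring = [[p + rng.gauss(0.0, sigma) for p in best]
                     for _ in range(lam)]
        challenger = max(offspring, key=reward)
        if reward(challenger) > reward(best):
            best = challenger
    return best

theta = one_plus_lambda_es()
print([round(p, 2) for p in theta])  # converges near [2.0, -1.0]
```

Derivative-free selection of this kind is exactly what makes EOMs attractive when the return is noisy or non-differentiable.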
Procedia PDF Downloads 80
2141 Phylogenetic Differential Separation of Environmental Samples
Authors: Amber C. W. Vandepoele, Michael A. Marciano
Abstract:
Biological analyses frequently focus on single organisms; however, the biological sample often consists of more than the target organism. For example, human microbiome research targets bacterial DNA, yet most samples consist largely of human DNA, so there would be an advantage to removing these contaminating organisms. Conversely, some analyses focus on a single organism but would greatly benefit from additional information regarding the other organismal components of the sample. Forensic analysis is one such example: in most forensic casework, human DNA is targeted, but it typically exists in complex, non-pristine sample substrates such as soil or unclean surfaces. These complex samples commonly comprise not just human tissue but also microbial and plant material, and these organisms may help provide forensically relevant information about a specific location or interaction. This project aims to optimize a ‘phylogenetic’ differential extraction method that separates mammalian, bacterial, and plant cells in a mixed sample. This is accomplished through size-exclusion separation, whereby the different cell types are separated through multiple filtrations using 5 μm filters. The components are then lysed via the differential enzymatic sensitivities of the cells and extracted with minimal contribution from the preceding component. This extraction method allows complex DNA samples to be more easily interpreted through non-targeted sequencing, since the data will not be skewed toward the smaller and usually more numerous bacterial DNAs. This research project has demonstrated that the ‘phylogenetic’ differential extraction method successfully separates epithelial and bacterial cells from each other with minimal cell loss. We will take this one step further, showing that when plant cells are added to the mixture, they too can be separated and extracted from the sample.
Research is ongoing, and results are pending.
Keywords: DNA isolation, geolocation, non-human, phylogenetic separation
Procedia PDF Downloads 112
2140 Comparative Study of Electronic and Optical Properties of Ammonium and Potassium Dinitramide Salts through Ab-Initio Calculations
Authors: J. Prathap Kumar, G. Vaitheeswaran
Abstract:
The present study investigates the role of the ammonium and potassium ions in the electronic, bonding, and optical properties of dinitramide salts, chosen for their stability and non-toxic nature. A detailed analysis of the bonding of NH₄ and K with dinitramide, the optical transitions from the valence band to the conduction band, absorption spectra, refractive indices, reflectivity, and the loss function is reported. These materials are well known as oxidizers in solid rocket propellants. In the present work, we use the full-potential linearized augmented plane wave (FP-LAPW) method as implemented in the WIEN2k package within the framework of density functional theory. The standard DFT functionals, the local density approximation (LDA) and the generalized gradient approximation (GGA), underestimate the band gap by 30-40% due to the lack of derivative discontinuities of the exchange-correlation potential with respect to occupation number. In order to get reliable results, one must use hybrid functionals (HSE-PBE), GW calculations, or the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. Hybrid-functional and GW calculations are computationally very expensive, whereas the latter method is computationally cheap. The TB-mBJ functional uses the kinetic-energy density along with the charge density employed in DFT. It cannot be used for total-energy calculations but instead yields much improved band gaps. The band gaps obtained at the gamma point with the GGA functional are 2.78 eV for ammonium dinitramide and 3.014 eV for potassium dinitramide. With the inclusion of TB-mBJ, the band gap improved to 4.162 eV for potassium dinitramide and 4.378 eV for ammonium dinitramide. The band gap is direct in ADN and indirect in KDN. Optical constants such as the dielectric constant, absorption, refractive indices, and birefringence values are presented.
Overall, as there are no experimental studies available, we present the improved band gaps obtained with the TB-mBJ functional, together with the optical properties.
Keywords: ammonium dinitramide, potassium dinitramide, DFT, propellants
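For context, the TB-mBJ exchange potential referred to above has the following standard form (as given by Tran and Blaha; reproduced here for reference, not rederived in this work):

```latex
\[
v_{x,\sigma}^{\mathrm{TB\text{-}mBJ}}(\mathbf{r})
= c\, v_{x,\sigma}^{\mathrm{BR}}(\mathbf{r})
+ (3c - 2)\,\frac{1}{\pi}\sqrt{\frac{5}{12}}
\sqrt{\frac{2\, t_{\sigma}(\mathbf{r})}{\rho_{\sigma}(\mathbf{r})}},
\]
```

where $v_{x,\sigma}^{\mathrm{BR}}$ is the Becke-Roussel potential, $t_{\sigma}$ the kinetic-energy density, and $\rho_{\sigma}$ the electron density — the kinetic-energy-density dependence noted in the abstract.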
Procedia PDF Downloads 157
2139 Cocoon Characterization of Sericigenous Insects in North-East India and Prospects
Authors: Tarali Kalita, Karabi Dutta
Abstract:
The North Eastern Region of India, with its diverse climatic conditions and wide range of ecological habitats, makes an ideal natural abode for a good number of silk-producing insects. The cocoon is the economically important life stage from which silk is obtained. In recent years, silk-based biomaterials have gained considerable attention, and their performance depends on the structure and properties of the silkworm cocoons as well as the silk yarn. The present investigation deals with the morphological study of cocoons, including cocoon color, cocoon size, shell weight, and shell ratio, of eleven species of silk insects collected from different regions of North East India. Scanning electron microscopy and X-ray photoelectron spectroscopy (XPS) were performed to examine the arrangement of silk threads in the cocoons and the atomic elemental composition, respectively. Further, the collected cocoons were degummed and reeled or spun on a reeling machine or spinning wheel to determine the filament length, linear density, and tensile strength using a Universal Testing Machine. The study showed significant variation in cocoon color, cocoon shape, cocoon weight, and filament packaging. XPS analysis revealed the presence of the elements (mass %) C, N, O, Si, and Ca in varying amounts. The wild cocoons showed the presence of calcium oxalate crystals, which make the cocoons hard and necessitate further treatment before reeling. In the present investigation, the highest strain at break (%) and toughness (g/den) were observed in Antheraea assamensis, which implies that muga silk has a more compact molecular packing. It is expected that this study will be the basis for further biomimetic studies to design and manufacture artificial fiber composites with novel morphologies and associated material properties.
Keywords: cocoon characterization, north-east India, prospects, silk characterization
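Two of the cocoon metrics discussed above, shell ratio and filament linear density (denier), are simple ratios. A sketch with illustrative numbers, not the reported measurements:

```python
def shell_ratio_pct(shell_g, cocoon_g):
    """Shell ratio (%): shell weight as a fraction of whole cocoon weight."""
    return shell_g / cocoon_g * 100.0

def denier(filament_g, filament_m):
    """Linear density in denier: grams per 9000 m of filament."""
    return filament_g / filament_m * 9000.0

# Illustrative values only (grams and meters):
print(round(shell_ratio_pct(0.45, 2.8), 1))   # 16.1 %
print(round(denier(0.05, 900.0), 2))          # 0.5 den
```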
Procedia PDF Downloads 90
2138 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Through speech, which privileges the functional and interactive nature of text, it is possible to ascertain spatiotemporal circumstances, the conditions of production and reception of discourse, and explicit purposes such as informing, explaining, and convincing. These conditions bring human-human interaction closer to human-robot interaction, making the latter natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, is proposed in order to classify emotions from speech signals and verify the quality of recognition. The goal is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the home. Tests were performed using only the Mel-Frequency Cepstral Coefficients (MFCC), as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. For the training, validation, and testing of the neural networks, the eNTERFACE’05 database was used, which has 42 speakers of 14 different nationalities speaking English. The data in the chosen database are videos that, for use in the neural networks, were converted into audio. As a result, a classification accuracy of 51.969% was obtained when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%.
The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network; in only one case is greater accuracy observed with the recurrent neural network, which occurs when multiple features are used with a batch size of 73 and 100 training epochs.
Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
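The Mel-Frequency Cepstral Coefficients used above rest on the mel scale, a perceptual frequency warping; the common conversion formula (a standard relation, independent of the paper's toolchain) can be sketched as:

```python
import math

def hz_to_mel(f_hz):
    """Convert frequency in Hz to mels: 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping, mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# By construction, 1000 Hz maps close to 1000 mels:
print(round(hz_to_mel(1000.0), 1))
```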
Procedia PDF Downloads 170
2137 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model
Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik
Abstract:
In a fast-growing region, conversion of agricultural lands surrounded by new development sites will occur sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). Because the area adjoins Indonesia’s capital city, rapid conversion of land there is an unavoidable process. Land conversion expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity will invite greater costs than benefits. The current land use is the use that maximizes land value, so in order to maintain land for agricultural activity or conservation, efforts are needed to keep the land value of that activity as high as possible. In this case, knowledge of the functional relationship between land value and its driving forces is necessary. In a fast-growing region, development externalities are assumed to be the dominant driving force. Land value is the product of past land-use decisions; it is also affected by local characteristics and by the observed surrounding land use (externalities) from the previous period. The effect of each factor on land value has dynamic and spatial dimensions, so an empirical spatial dynamic land value model is more useful for capturing them. The model is used to test for and estimate the extent of land use externalities on land value in the short run as well as in the long run, and it serves as a basis for formulating an effective urban growth management policy. This study applies the model to land values in the fringe of Jakarta Metropolitan. The model is further used to predict the effect of externalities on land value, in the form of a prediction map.
For the case of Jakarta’s fringe, there is evidence of significant effects of neighborhood urban activity (a negative externality), previous land value, and local accessibility on land value. The effects accumulate dynamically over the years, but they only fully affect land value after six years.
Keywords: growth management, land use externalities, land value, spatial panel dynamic
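The delayed, cumulative effect of externalities described above can be illustrated with a toy dynamic spatial process. This is purely a sketch: the contiguity matrix, coefficients, and shock below are invented for illustration, not estimates from the study. A one-off development shock in one parcel spills over to neighbours through a spatial lag and decays through an own-lag, so most of the long-run effect accrues within the first several periods.

```python
# Toy dynamic spatial process for land value. W, tau, rho and the shock are
# illustrative assumptions, not parameters estimated in the study.
n = 4
W = [[0, 0.5, 0.5, 0],        # row-standardised contiguity matrix
     [1/3, 0, 1/3, 1/3],
     [1/3, 1/3, 0, 1/3],
     [0, 0.5, 0.5, 0]]
tau, rho = 0.5, 0.3           # own-lag and spatial-lag (externality) coefficients
v = [1.0, 0.0, 0.0, 0.0]      # one-off development shock in parcel 0
cumulative = v[:]

for year in range(1, 11):
    wv = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
    v = [tau * v[i] + rho * wv[i] for i in range(n)]   # carry-over + spillover
    cumulative = [cumulative[i] + v[i] for i in range(n)]

print([round(c, 3) for c in cumulative])   # cumulative effect per parcel
```

Because tau + rho < 1 the process is stable, and the cumulative effect on each parcel converges; the bulk of the response is realised within the first few iterations, mirroring the "fully affect after six years" finding qualitatively.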
Procedia PDF Downloads 256
2136 The Research on the Necessity of Launching Environmental Programs for Studies in Universities as Well as Training Specialists in This Sphere
Authors: Anastasia V. Lazareva
Abstract:
Nowadays, in the light of evolving multifocal challenges in the sphere of environmental and social difficulties, and despite the strong opposition of globalist and anti-globalist movements, we face the urgent need to create a vast pool of educated environmentalists through the implementation of relevant university faculties and programs. Considering the threats humanity has tackled in recent years, portrayed in detail in the 2030 Agenda (poverty, biodiversity loss, marine and terrestrial pollution, lack of sanitation, and equal rights for all), we must admit that professionals are required to address them all. With this purpose, we have conducted research based on questionnaires of students, faculty members, and companies’ chief executives and human resources managers on what particular disciplines should be incorporated into the programs of universities and higher institutions to meet these goals. The research is based on the Likert scale and covers various age groups of students. The topicality of this issue is predetermined by modern reality. The subject of the research is a questionnaire database filled in by 97 students, 17 faculty members of MGIMO University, and 14 companies’ representatives concerning their attitudes towards the implementation of environmental programs of study in universities and the choice of required disciplines. The study has a limitation: it is based on questionnaires from the students and faculty members of only one university. The methods applied are a questionnaire, content analysis, sampling, and categorization. The findings of this survey imply that all three groups of respondents admit the necessity of implementing environmental programs of study in higher education. Nevertheless, different groups favor various programs and disciplines to be incorporated into the curriculum.
Keywords: ecology, university studies, environmentalists, education, global challenges
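The kind of summary such a Likert-scale survey yields can be sketched as follows. The responses below are invented placeholders for the three respondent groups (the study's actual 97/17/14 samples are not reproduced), scaled down for illustration; since Likert responses are ordinal, medians and agreement shares are the appropriate summaries rather than means.

```python
from statistics import median

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
responses = {
    "students":  [4, 5, 3, 4, 5, 4, 2, 5, 4, 4],
    "faculty":   [5, 4, 5, 5, 3, 4, 5],
    "companies": [3, 4, 4, 5, 3, 4],
}

for group, scores in responses.items():
    agree = sum(s >= 4 for s in scores) / len(scores)  # share agreeing or strongly agreeing
    print(f"{group}: median {median(scores)}, agreement {agree:.0%}")
```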
Procedia PDF Downloads 19
2135 Solvent-Aided Dispersion of Tannic Acid to Enhance Flame Retardancy of Epoxy
Authors: Matthew Korey, Jeffrey Youngblood, John Howarter
Abstract:
Background and Significance: Tannic acid (TA) is a bio-based, high molecular weight organic, aromatic molecule that has been found to increase the thermal stability and flame retardancy of many polymer matrices when used as an additive. Although it is biologically sourced, TA is a pollutant in industrial wastewater streams, and there is a desire to find applications in which to downcycle this molecule after extraction from these streams. Additionally, epoxy thermosets have revolutionized many industries, but are too flammable to be used in many applications without additives that augment their flame retardancy (FR). Many flame retardants used in epoxy thermosets are synthesized from petroleum-based monomers, leading to significant environmental impacts on the industrial scale. Many of these compounds also have significant impacts on human health. Various bio-based modifiers have been developed to improve the FR of the epoxy resin; however, increasing the FR of the system without tradeoffs with other properties has proven challenging, especially for TA. Methodologies: In this work, TA was incorporated into the thermoset by solvent-exchange using methyl ethyl ketone, a co-solvent for TA and epoxy resin. Samples were then characterized optically (UV-vis spectroscopy and optical microscopy), thermally (thermogravimetric analysis and differential scanning calorimetry), and for their flame retardancy (mass loss calorimetry). Major Findings: Compared to control samples, all samples were found to have increased thermal stability. Further, the addition of tannic acid to the polymer matrix by the use of solvent greatly increased the compatibility of the additive in epoxy thermosets. By using solvent-exchange, the highest loading level of TA reported in the literature (40 wt%) was achieved in this work. Conclusions: The use of solvent-exchange shows promise for circumventing the limitations of TA in epoxy.
Keywords: sustainable, flame retardant, epoxy, tannic acid
Procedia PDF Downloads 130
2134 Prevalence of Visual Impairment among School Children in Ethiopia: A Systematic Review and Meta-Analysis
Authors: Merkineh Markos Lorato, Gedefaw Diress Alene
Abstract:
Introduction: Visual impairment is any condition of the eye or visual system that results in loss or reduction of visual functioning. It significantly influences the academic routine and social activities of children, and the effect is severe in low-income countries like Ethiopia. So, this study aimed to determine the pooled prevalence of visual impairment among school children in Ethiopia. Methods: Databases such as Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica dataBASE (EMBASE), Web of Science, and the Cochrane Library were searched to retrieve eligible articles. In addition, Google Scholar and the reference lists of the retrieved eligible articles were addressed. Studies that reported the prevalence of visual impairment were included to estimate the pooled prevalence. Data were extracted using a standardized data extraction format prepared in Microsoft Excel, and analysis was performed using STATA 11 statistical software. I² was used to assess heterogeneity. Because of considerable heterogeneity, a random-effects meta-analysis model was used to estimate the pooled prevalence of visual impairment among school children in Ethiopia. Results: The result of 9 eligible studies showed that the pooled prevalence of visual impairment among school children in Ethiopia was 7.01% (95% CI: 5.46, 8.56%). In the subgroup analysis, the highest prevalence was reported in the South Nations Nationalities and Tigray regions together (7.99%; 3.63, 12.35), while the lowest prevalence was reported in Addis Ababa (5.73%; 3.93, 7.53). Conclusion: The prevalence of visual impairment among school children in Ethiopia is significantly high. If not detected and addressed early, it poses a lifetime threat to visually impaired school children, so planning and implementing school vision screening programs may improve the quality of life of future generations in Ethiopia.
Keywords: visual impairment, school children, Ethiopia, prevalence
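The random-effects pooling step can be sketched with a DerSimonian-Laird estimator. The study counts below are hypothetical placeholders (the paper's individual study data are not reproduced here); the code only shows the mechanics of weighting 9 prevalence estimates by within- plus between-study variance.

```python
import math

# Hypothetical (cases, sample size) pairs standing in for 9 school-based surveys
studies = [(42, 600), (55, 720), (38, 510), (61, 800), (47, 650),
           (33, 480), (70, 900), (29, 450), (52, 700)]

# Per-study prevalence and within-study variance (binomial approximation)
p = [c / n for c, n in studies]
v = [pi * (1 - pi) / n for pi, (c, n) in zip(p, studies)]

# Fixed-effect weights and Cochran's Q statistic
w = [1 / vi for vi in v]
p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))

# DerSimonian-Laird between-study variance tau^2
k = len(studies)
c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c_dl)

# Random-effects pooled prevalence with a 95% confidence interval
w_re = [1 / (vi + tau2) for vi in v]
p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled prevalence {100*p_re:.2f}% "
      f"(95% CI {100*(p_re - 1.96*se):.2f}, {100*(p_re + 1.96*se):.2f})")
```

With real data one would also report I² = max(0, (Q - (k-1))/Q) to justify the random-effects choice, as the abstract does.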
Procedia PDF Downloads 37
2133 The Prodomain-Bound Form of Bone Morphogenetic Protein 10 is Biologically Active on Endothelial Cells
Authors: Austin Jiang, Richard M. Salmon, Nicholas W. Morrell, Wei Li
Abstract:
BMP10 is highly expressed in the developing heart and plays essential roles in cardiogenesis. BMP10 deletion in mice results in embryonic lethality due to impaired cardiac development. In adults, BMP10 expression is restricted to the right atrium, though ventricular hypertrophy is accompanied by increased BMP10 expression in a rat hypertension model. However, reports of BMP10 activity in the circulation are inconclusive. In particular, it is not known whether in vivo secreted BMP10 is active or whether additional factors are required to achieve its bioactivity. It has been shown that high-affinity binding of the BMP10 prodomain to the mature ligand inhibits BMP10 signaling activity in C2C12 cells, and it was proposed that the prodomain-bound BMP10 (pBMP10) complex is latent. In this study, we demonstrated that the BMP10 prodomain did not inhibit BMP10 signaling activity in multiple endothelial cells, and that recombinant human pBMP10 complex, expressed in mammalian cells and purified under native conditions, was fully active. In addition, both BMP10 in human plasma and BMP10 secreted from the mouse right atrium were fully active. Finally, we confirmed that active BMP10 secreted from the mouse right atrium was in the prodomain-bound form. Our data suggest that circulating BMP10 in adults is fully active and that the reported vascular quiescence function of BMP10 in vivo is due to the direct activity of pBMP10 and does not require an additional activation step. Moreover, being an active ligand, recombinant pBMP10 may have therapeutic potential as an endothelial-selective BMP ligand in conditions characterized by loss of BMP9/10 signaling.
Keywords: bone morphogenetic protein 10 (BMP10), endothelial cell, signal transduction, transforming growth factor beta (TGF-β)
Procedia PDF Downloads 273
2132 Patient Tracking Challenges During Disasters and Emergencies
Authors: Mohammad H. Yarmohammadian, Reza Safdari, Mahmoud Keyvanara, Nahid Tavakoli
Abstract:
One of the greatest challenges in disasters and emergencies is patient tracking. The concept of tracking has different denotations: one meaning refers to tracking patients’ physical locations, and the other refers to tracking patients’ medical needs during emergency services. The main goal of patient tracking is to provide patient safety during disasters and emergencies and to manage the flow of patients and information across different locations. In most cases, there are no sufficient and accurate data regarding the number of injured, their medical conditions, and their accommodation and transfer. The objective of the present study is to survey the patient tracking issue in natural disasters and emergencies. Methods: This was a narrative study in which the population was e-journals in electronic databases such as PubMed, ProQuest, ScienceDirect, Elsevier, etc. Data were gathered with an extraction form. All data were analyzed via content analysis. Results: In many countries, there is no appropriate and rapid method for tracking patients and transferring victims after the occurrence of incidents. The absence of reliable data on patients’ transfer and accommodation, even in the initial hours and days after a disaster, and of coordination for appropriate resource allocation, has posed challenges for evaluating needs and services. Currently, most emergency services are based on paper systems, which do not perform adequately in major disasters and incidents, causing information loss. Conclusion: A patient tracking system should update the locations of patients or evacuees and the information related to their states. Patients’ information should be accessible to authorized users to continue their treatment, accommodation and transfer.
Also, it should include timely information on patients’ locations as soon as they arrive somewhere and when they leave, in such a way that health care professionals are able to provide patients’ proper medical treatment.
Keywords: patient tracking, challenges, disaster, emergency
Procedia PDF Downloads 304
2131 Simulated Mechanical Analysis on Hydroxyapatite Coated Porous Polylactic Acid Scaffold for Bone Grafting
Authors: Ala Abobakr Abdulhafidh Al-Dubai
Abstract:
Bone loss has risen due to fractures, surgeries, and traumatic injuries. Scientists and engineers have worked over the years to find solutions to heal and accelerate bone regeneration. The bone grafting technique has been utilized, which projects significant improvement in the bone regeneration area. An extensive study is essential on the relation between the mechanical properties of bone scaffolds and the pore size of the scaffolds, as well as the relation between the mechanical properties of bone scaffolds with the development of bioactive coating on the scaffolds. In reducing the cost and time, a mechanical simulation analysis is beneficial to simulate both relations. Therefore, this study highlights the simulated mechanical analyses on three-dimensional (3D) polylactic acid (PLA) scaffolds at two different pore sizes (P: 400 and 600 μm) and two different internals distances of (D: 600 and 900 μm), with and without the presence of hydroxyapatite (HA) coating. The 3D scaffold models were designed using SOLIDWORKS software. The respective material properties were assigned with the fixation of boundary conditions on the meshed 3D models. Two different loads were applied on the PLA scaffolds, including side loads of 200 N and vertical loads of 2 kN. While only vertical loads of 2 kN were applied on the HA coated PLA scaffolds. The PLA scaffold P600D900, which has the largest pore size and maximum internal distance, generated the minimum stress under the applied vertical load. However, that same scaffold became weaker under the applied side load due to the high construction gap between the pores. The development of HA coating on top of the PLA scaffolds induced greater stress generation compared to the non-coated scaffolds which is tailorable for bone implantation. 
This study concludes that the pore size and the construction of HA coating on bone scaffolds affect the mechanical strength of the bone scaffolds.
Keywords: hydroxyapatite coating, bone scaffold, mechanical simulation, three-dimensional (3D), polylactic acid (PLA)
Procedia PDF Downloads 60
2130 The Current Ways of Thinking Mild Traumatic Brain Injury and Clinical Practice in a Trauma Hospital: A Pilot Study
Authors: P. Donnelly, G. Mitchell
Abstract:
Traumatic Brain Injury (TBI) is a major contributor to the global burden of disease; despite its ubiquity, there is significant variation in diagnosis, prognosis, and treatment between clinicians. This study aims to examine the spectrum of approaches that currently exist at a Level 1 Trauma Centre in Australasia by surveying Emergency Physicians and Neurosurgeons on those aspects of mTBI. A pilot survey of 17 clinicians (Neurosurgeons, Emergency Physicians, and others who manage patients with mTBI) at a Level 1 Trauma Centre in Brisbane, Australia, was conducted. The objective of this study was to examine the importance these clinicians place on various elements in their approach to the diagnosis, prognostication, and treatment of mTBI. The data were summarised, and the descriptive statistics reported. Loss of consciousness and post-traumatic amnesia were rated as the most important signs or symptoms in diagnosing mTBI (median importance of 8). MRI was the most important imaging modality in diagnosing mTBI (median importance of 7). ‘Number of Previous TBIs’ and ‘Intracranial Injury on Imaging’ were rated as the most important elements for prognostication (median importance of 9). Education and reassurance were rated as the most important modality for treating mTBI (median importance of 7). There was a statistically insignificant variation between the specialties as to the importance they place on each of these components. In this Australian tertiary trauma centre, there appears to be variation in how clinicians approach mTBI. This study is underpowered to state whether this is variation between clinicians within a specialty or a trend between specialties. This variation is worth investigating as a step toward a unified approach to diagnosing, prognosticating, and treating this common pathology.
Keywords: mild traumatic brain injury, adult, clinician, survey
Procedia PDF Downloads 130
2129 Exploring Key Elements of Successful Distance Learning Programs: A Case Study in Palau
Authors: Maiya Smith, Tyler Thorne
Abstract:
Background: The Pacific faces multiple healthcare crises, including high rates of noncommunicable diseases, infectious disease outbreaks, and susceptibility to natural disasters. These issues are expected to worsen in the coming decades, increasing the burden on an already understaffed healthcare system. Telehealth is not new to the Pacific, but improvements in technology and accessibility have increased its utility, and it has already proven to reduce costs and increase access to care in remote areas. Telehealth includes distance learning, a form of education that can help alleviate many healthcare issues by providing continuing education to healthcare professionals and upskilling staff, while decreasing costs. This study examined distance learning programs at the Ministry of Health in the Pacific nation of Palau and identified key elements of their successful distance learning programs. Methods: Staff at the Belau National Hospital in Koror, Palau, as well as private practitioners, were interviewed to assess the distance learning programs utilized. This included physicians, IT personnel, public health members, and department managers of allied health. In total, 36 people were interviewed. Standardized questions and surveys were conducted in person throughout the month of July 2019. Results: Two examples of successful distance learning programs were identified. Looking at the factors that made these programs successful, as well as consulting with staff who undertook other distance learning programs, four factors for success were determined: having a cohort, having a facilitator, dedicated study time off from work, and motivation. Discussion: In countries as geographically isolated as those of the Pacific, with poor access to specialists and resources, telehealth has the potential to radically change how healthcare is delivered.
Palau shares similar resources and issues with other countries in the Pacific, and the lessons learned from its successful programs can be adapted to help other Pacific nations develop their own distance learning programs.
Keywords: distance learning, Pacific, Palau, telehealth
Procedia PDF Downloads 140
2128 The Effect of "Trait" Variance of Personality on Depression: Application of the Trait-State-Occasion Modeling
Authors: Pei-Chen Wu
Abstract:
Both preexisting cross-sectional and longitudinal studies of the personality-depression relationship have suffered from one main limitation: they ignored that the stability of the constructs of interest (e.g., personality and depression) can be expected to influence the estimate of the association between personality and depression. To address this limitation, Trait-State-Occasion (TSO) modeling was adopted to analyze the sources of variance of the focal constructs. A TSO model operates by partitioning state variance into time-invariant (trait) and time-variant (occasion) components. Within a TSO framework, it is possible to predict change in the part of the construct that really changes (i.e., the time-variant variance) while controlling for the trait variances. A total of 750 high school students were followed for 4 waves over six-month intervals. The baseline data (T1) were collected in senior high schools (students aged 14 to 15 years). Participants were given the Beck Depression Inventory and the Big Five Inventory at each assessment. TSO modeling revealed that 70~78% of the variance in personality (five constructs) was stable over the follow-up period, whereas 57~61% of the variance in depression was stable. For the personality constructs, 7.6% to 8.4% of the total variance came from the autoregressive occasion factors; for the depression construct, 15.2% to 18.1% of the total variance came from the autoregressive occasion factors. Additionally, results showed that when controlling for initial symptom severity, the time-invariant components of all five dimensions of personality were predictive of change in depression (Extraversion: B = .32, Openness: B = -.21, Agreeableness: B = -.27, Conscientiousness: B = -.36, Neuroticism: B = .39). Because the five dimensions of personality shared some variance, models in which all five dimensions of personality simultaneously predicted change in depression were investigated.
The time-invariant components of the five dimensions were still significant predictors of change in depression (Extraversion: B = .30, Openness: B = -.24, Agreeableness: B = -.28, Conscientiousness: B = -.35, Neuroticism: B = .42). In sum, the majority of the variability in personality was stable over the 2 years. Individuals with a greater tendency toward Extraversion and Neuroticism have higher degrees of depression; individuals with a greater tendency toward Openness, Agreeableness and Conscientiousness have lower degrees of depression.
Keywords: assessment, depression, personality, trait-state-occasion model
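The trait/occasion variance partition at the heart of the TSO approach can be sketched with a small simulation. All numbers below (AR coefficient, variance components, sample size) are illustrative assumptions, not the study's estimates; the point is that when each wave is trait plus an AR(1) occasion component, the stable share of variance can be backed out from cross-wave covariances.

```python
import random

random.seed(1)
N, waves = 5000, 4
phi = 0.5                       # assumed autoregression of the occasion factor
trait_var, occ_var = 1.0, 0.36  # assumed variance components (illustrative)

panel = []
for _ in range(N):
    t = random.gauss(0, trait_var ** 0.5)  # time-invariant trait factor
    o = random.gauss(0, occ_var ** 0.5)    # occasion factor, AR(1) across waves
    row = []
    for _ in range(waves):
        row.append(t + o)
        o = phi * o + random.gauss(0, (occ_var * (1 - phi ** 2)) ** 0.5)
    panel.append(row)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

w0 = [r[0] for r in panel]
w3 = [r[3] for r in panel]
total_var = cov(w0, w0)
# Cov(wave 0, wave 3) = trait variance + phi^3 * occasion variance, so with
# phi known the stable (trait) share of variance can be recovered:
trait_est = cov(w0, w3) - phi ** 3 * occ_var
print(f"stable 'trait' share of variance: {trait_est / total_var:.2f}")
```

With these assumed components the stable share is about 0.74, in the same ballpark as the 70~78% the abstract reports for personality.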
Procedia PDF Downloads 176
2127 Effect of Al Addition on Microstructure and Properties of NbTiZrCrAl Refractory High Entropy Alloys
Authors: Xiping Guo, Fanglin Ge, Ping Guan
Abstract:
Refractory high entropy alloys are alternative materials expected to be employed at high temperatures. The comprehensive changes in microstructure and properties of NbTiZrCrAl refractory high entropy alloys are systematically studied by adjusting the Al content. Five kinds of button alloy ingots with different contents of Al in NbTiZrCrAlX (X=0, 0.2, 0.5, 0.75, 1.0) were prepared by vacuum non-consumable arc melting technology. The microstructure analysis results show that the five alloys are composed of a BCC solid solution phase rich in Nb and Ti and a Laves phase rich in Cr, Zr, and Al. The addition of Al changes the structure from hypoeutectic to hypereutectic, increases the proportion of the Laves phase, and changes the structure from cubic C15 to hexagonal C14. The hardness and fracture toughness of the five alloys were tested at room temperature, and the compressive mechanical properties were tested at 1000℃. The results showed that the addition of Al increased the proportion of the Laves phase and decreased the proportion of the BCC phase, thus increasing the hardness and decreasing the fracture toughness at room temperature. However, at 1000℃, the strength of the 0.5Al and 0.75Al alloys, whose compositions are close to the eutectic point, is the best, which indicates that the eutectic structure is of great significance for the improvement of the high temperature strength of NbTiZrCrAl refractory high entropy alloys. The five alloys were oxidized for 1 h and 20 h in static air at 1000℃. The results show that only the oxide film of the 0Al alloy fell off after oxidizing for 1 h at 1000℃. After 20 h, the oxide film of all the alloys fell off, but the oxide film of the alloys containing Al was denser and more complete.
By producing the protective oxide Al₂O₃, inhibiting the preferential oxidation of Zr, promoting the preferential oxidation of Ti, and promoting the combination of Cr₂O₃ and Nb₂O₅ to form CrNbO₄, Al significantly improves the high temperature oxidation resistance of NbTiZrCrAl refractory high entropy alloys.
Keywords: NbTiZrCrAl, refractory high entropy alloy, Al content, microstructural evolution, room temperature mechanical properties, high temperature compressive strength, oxidation resistance
Procedia PDF Downloads 84
2126 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to identify in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad risk coverage difficult and the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16 and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
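The grid-search-with-cross-validation step can be illustrated with a minimal, dependency-free sketch: exhaustively score every combination in a small hyperparameter grid by k-fold cross-validation and keep the best. The model here is a toy logistic regression on synthetic data, standing in for the paper's MLP; the grid values and data are assumptions for illustration only.

```python
import itertools, math, random

random.seed(42)
# Synthetic, linearly separable stand-in data (label = 1 if x0 + x1 > 1)
X = [[random.random(), random.random()] for _ in range(300)]
y = [1 if xi[0] + xi[1] > 1 else 0 for xi in X]

def train_logreg(Xtr, ytr, lr, epochs):
    """Plain SGD on the logistic log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for xi, yi in zip(Xtr, ytr):
            p = 1 / (1 + math.exp(-(w[0]*xi[0] + w[1]*xi[1] + b)))
            g = p - yi                      # gradient of the log-loss
            w[0] -= lr * g * xi[0]; w[1] -= lr * g * xi[1]; b -= lr * g
    return w, b

def accuracy(model, Xte, yte):
    w, b = model
    preds = [1 if w[0]*xi[0] + w[1]*xi[1] + b > 0 else 0 for xi in Xte]
    return sum(p == yi for p, yi in zip(preds, yte)) / len(yte)

def cv_score(lr, epochs, k=3):
    """Mean held-out accuracy over k folds."""
    fold, scores = len(X) // k, []
    for i in range(k):
        lo, hi = i * fold, (i + 1) * fold
        model = train_logreg(X[:lo] + X[hi:], y[:lo] + y[hi:], lr, epochs)
        scores.append(accuracy(model, X[lo:hi], y[lo:hi]))
    return sum(scores) / k

grid = {"lr": [0.01, 0.1, 1.0], "epochs": [5, 20]}
best = max(itertools.product(grid["lr"], grid["epochs"]),
           key=lambda cfg: cv_score(*cfg))
print("best hyperparameters:", best, "CV accuracy:", round(cv_score(*best), 3))
```

scikit-learn's `GridSearchCV` automates exactly this loop (plus refitting on the full training set), which is presumably what the authors used for the MLP's hyperparameters.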
Procedia PDF Downloads 114
2125 Geographic and Territorial Knowledge as Epistemic Contexts for Intercultural Curriculum Development
Authors: Verónica Muñoz-Rivero
Abstract:
The historically marginalized indigenous communities in the Atacama Desert continue to experience and struggle against curricular hegemony in a prevalent monocultural educational context that denies their heritage, culture and epistemologies, in a documented attempt at knowledge negation by the educational policies, the national curriculum and the educational culture. The ancestral indigenous community of Toconce demands a territorially based intercultural education and a school in its ancestral land to prevent progressive cultural loss as it reclaims its negated memory and identity. This case study uses an intercultural theoretical framework and an open qualitative methodology to analyze the local socio-educational reality, integrating aspects related to the educational experience, education demands for future generations and the importance given to formal education. The interlocutors (elders, parents, caretakers and former teachers) described the educational experience of indigenous children as an intergenerational voice that suffered discrimination, exclusion and racism in their K-12 trajectories. By centering indigenous epistemologies, geography and memory, this research proposes a project-based learning approach anchored to the Limpia de Canales ceremony to develop a situated territorial intercultural curriculum, unpacking from the local epistemology and structure of thinking. The work on terraces gives students the opportunity to co-create a real-life application with practical purpose, and it presents the importance of reinforcing notions related to the relevance of a situated intercultural curriculum for social justice in the formative development of prospective teachers.
Keywords: cultural studies, decolonial education, epistemic symmetry, intercultural curriculum, multidimensional curriculum
Procedia PDF Downloads 193
2124 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation
Authors: Min L. Stewart, Patrick Johnston
Abstract:
Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and later neural responses to changes of expected visual sequences in high- relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high- and low-AT show greater and later event-related potentials (ERPs) in occipitotemporal and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category, but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in the autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.
Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding
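The frequency-tagging logic behind fast periodic visual stimulation can be sketched numerically: responses locked to the periodic stimulation show up as narrow spectral peaks at the tagged frequencies, quantified as a signal-to-noise ratio against neighbouring frequency bins. The sampling rate, tag frequencies, and amplitudes below are illustrative assumptions, not the study's parameters.

```python
import cmath, math, random

random.seed(0)
fs, dur = 512, 10.0                 # sampling rate (Hz) and recording length (s)
n = int(fs * dur)
base_f, odd_f = 6.0, 1.2            # assumed base and oddball stimulation rates

# Simulated single-channel EEG: steady-state responses at both tagged
# frequencies buried in white noise (amplitudes are illustrative).
sig = [math.sin(2 * math.pi * base_f * t / fs)
       + 0.5 * math.sin(2 * math.pi * odd_f * t / fs)
       + random.gauss(0, 1.0) for t in range(n)]

def amp(freq):
    """DFT amplitude at one frequency bin (resolution 1/dur = 0.1 Hz)."""
    k = round(freq * dur)
    s = sum(sig[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return 2 * abs(s) / n

def snr(freq):
    """Amplitude at the target bin relative to nearby noise bins."""
    k = round(freq * dur)
    noise = [amp((k + d) / dur) for d in (-3, -2, 2, 3)]
    return amp(freq) / (sum(noise) / len(noise))

print(f"SNR at {base_f} Hz: {snr(base_f):.1f}, at {odd_f} Hz: {snr(odd_f):.1f}")
```

Because the tagged components sit in exact frequency bins while broadband noise spreads across all bins, even the weaker oddball response stands out clearly; this is what makes the paradigm sensitive to expectation-violation responses.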
Procedia PDF Downloads 111
2123 Characterization and PCR Detection of Selected Strains of Psychrotrophic Bacteria Isolated From Raw Milk
Authors: Kidane Workelul, Li Xu, Xiaoyang Pang, Jiaping Lv
Abstract:
Dairy products are exceptionally ideal media for the growth of microorganisms because of their high nutritional content. There are several ways that milk might get contaminated throughout the milking process, including how the raw milk is transported and stored, as well as how long it is kept before being processed. Psychrotrophic bacteria are among those that can deteriorate the quality of milk, mainly through their heat-resistant protease and lipase enzymes. For this research, 8 selected strains of psychrotrophic bacteria (Enterococcus hirae, Pseudomonas fluorescens, Pseudomonas azotoformans, Pseudomonas putida, Exiguobacterium indicum, Pseudomonas paralactis, Acinetobacter indicum, Serratia liquefaciens) were chosen, and their characteristics were determined following the research methodology protocol. The 8 selected strains were cultured, plated and incubated, and their genomic DNA was extracted and amplified. The purpose of the study was to identify their psychrotrophic properties, test for positive lipase hydrolysis, determine their optimal incubation temperature, design primers targeting the conserved region of the lipA gene of the strain P. fluorescens, optimize primer specificity as well as sensitivity, and perform PCR detection of lipase-positive strains using the designed primers. Based on the findings, all 8 selected strains isolated from stored raw milk are psychrotrophic bacteria; 6 of the strains (all except 2) are positive for lipase hydrolysis; their optimal temperature is 20 to 30 °C; and the designed primer is highly specific, amplifying only the lipase-positive strains and not the others. Thus, the result is promising and could help in detecting psychrotrophic bacteria producing heat-resistant enzymes (lipase) at an early stage, before the milk is processed, which will save the dairy industry from production losses.
Keywords: dairy industry, heat-resistant, lipA, milk, primer and psychrotrophic
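The in-silico side of checking primer specificity can be sketched as strand-aware substring matching: a template is "amplifiable" only if the forward primer matches the top strand upstream of the reverse primer's binding site. The sequences and primers below are invented placeholders, not the real lipA gene or the study's primers.

```python
# Hypothetical sequences for illustration only (not the real lipA gene or primers)
lipA_target = "ATGGGTATCTTTGACTACAAGAACCTCGGCACCGAGGGCAGCAAAACCCTGTTCGCC"
off_target  = "ATGACCGTTAGCCTGAAAGGCATTCCGCTGGAAGATCTGAACGGTTCTCCGAAAGAA"

fwd = "ATGGGTATCTTTGACTACAAG"          # forward primer, matches the top strand
rev = "GGCGAACAGGGTTTTGCTGCC"          # reverse primer, written 5'->3' on the opposite strand

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def amplifies(template, fwd, rev):
    """True if the forward primer sits upstream of the reverse primer's
    binding site (its reverse complement) on the template's top strand."""
    start = template.find(fwd)
    end = template.find(revcomp(rev))
    return start != -1 and end != -1 and start < end

print(amplifies(lipA_target, fwd, rev), amplifies(off_target, fwd, rev))
```

A real specificity check would also allow mismatches and screen against whole genomes (as BLAST-style tools do); exact matching is just the simplest version of the idea.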
Procedia PDF Downloads 64
2122 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model
Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond
Abstract:
The surveillance of infectious diseases is necessary to describe their occurrence and help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive result in serology using the viral neutralization test between 2006 and 2013 were used for analysis (n=1,645). Data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, epidemiologists) was reached to consider seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013.
Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks) estimated using the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval, CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful for other human and animal diseases and zoonoses when there is a lack of accurate information in the literature about the serological response in naturally infected subjects. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high and supports its relevance for preventing the spread of the disease.
Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance
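The logic of a unilist zero-truncated binomial estimate can be sketched as follows, with invented town-level counts in place of the study's data (the study itself fitted a Bayesian ZTB model in R). The idea: towns with zero detected seroconversions are unobservable, so the detection probability is fitted by maximum likelihood on the zero-truncated counts, and each detected town is then weighted by the inverse of its probability of being detected at all.

```python
import math

# Hypothetical unilist data: (horses tested, seroconversions found) per detected town
towns = [(12, 1), (8, 2), (20, 3), (5, 1), (15, 1),
         (10, 2), (7, 1), (25, 4), (9, 1), (14, 2)]

def zt_binom_loglik(p):
    """Log-likelihood of the counts under a zero-truncated binomial
    with per-horse detection probability p, conditioning on k >= 1."""
    ll = 0.0
    for n, k in towns:
        log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                     + k * math.log(p) + (n - k) * math.log(1 - p))
        ll += log_binom - math.log(1 - (1 - p) ** n)
    return ll

# Crude grid-search maximum-likelihood estimate of p
p_hat = max((i / 1000 for i in range(1, 1000)), key=zt_binom_loglik)

# Horvitz-Thompson-style total: each detected town stands for
# 1 / P(at least one seroconversion detected) towns overall
n_total = sum(1 / (1 - (1 - p_hat) ** n) for n, k in towns)
sensitivity = len(towns) / n_total
print(f"p = {p_hat:.3f}, estimated total outbreaks = {n_total:.1f}, "
      f"sensitivity = {sensitivity:.2f}")
```

The study's Bayesian fit additionally yields credible intervals for the total and the sensitivity, which this point-estimate sketch omits.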
Procedia PDF Downloads 297