Search results for: computer-assisted image processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5715


675 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes clearer as the size of the k-mers increases. The best-performing classification model had a k-mer size of 10 (the longest k-mer tested) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
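As a rough sketch of the k-mer representation described above, the following Python fragment counts overlapping k-mers and builds a fixed-length frequency vector over all 4^k possible k-mers. The ACGT alphabet, normalization, and function names are illustrative assumptions; the study's actual feature pipeline and AWD-LSTM classifier are not reproduced here.

```python
from collections import Counter
from itertools import product

def kmer_counts(seq, k):
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_vector(seq, k):
    """Fixed-length frequency vector over all 4**k possible k-mers
    (assumes the ACGT alphabet; normalized to sum to 1)."""
    counts = kmer_counts(seq, k)
    alphabet = ["".join(p) for p in product("ACGT", repeat=k)]
    total = max(sum(counts.values()), 1)
    return [counts[km] / total for km in alphabet]

vec = kmer_vector("ACGTACGTAC", 2)   # 16-dimensional feature vector
```

Growing k makes the vector exponentially longer (4^10 entries at k = 10), which is exactly the accuracy/computing-resources trade-off the abstract highlights.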

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 158
674 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes clearer as the size of the k-mers increases. The best-performing classification model had a k-mer size of 10 (the longest k-mer tested) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 145
673 Mesoporous Na2Ti3O7 Nanotube-Constructed Materials with Hierarchical Architecture: Synthesis and Properties

Authors: Neumoin Anton Ivanovich, Opra Denis Pavlovich

Abstract:

Materials based on titanium oxide compounds are widely used in areas such as solar energy, photocatalysis, the food industry and hygiene products, biomedical technologies, etc. Demand for them has also formed in the battery industry (an example of this is the commercialization of Li4Ti5O12), where much attention has recently been paid to the development of next-generation systems and technologies, such as sodium-ion batteries. This dictates the need to search for new materials with improved characteristics, as well as ways to obtain them that meet the requirements of scalability. One way to solve these problems is the creation of nanomaterials, which often have a set of physicochemical properties that differ radically from the characteristics of their counterparts in the micro- or macroscopic state. At the same time, it is important to control the texture (specific surface area, porosity) of such materials. In view of the above, among other methods, the hydrothermal technique seems suitable, allowing a wide range of control over the conditions of synthesis. In the present study, a method was developed for the preparation of mesoporous nanostructured sodium trititanate (Na2Ti3O7) with a hierarchical architecture. The materials were synthesized by hydrothermal processing and exhibit a complex, hierarchically organized two-level architecture. At the first level of the hierarchy, the materials consist of particles with a rough surface, and at the second level, of one-dimensional nanotubes. The products were found to have high specific surface area and porosity with a narrow pore size distribution (about 6 nm). As is known, the specific surface area and porosity are important characteristics of functional materials, which largely determine the possibilities and directions of their practical application. Electrochemical impedance spectroscopy data show that the resulting sodium trititanate has sufficiently high electrical conductivity.
The synthesized, complexly organized, porous sodium trititanate nanoarchitecture is thus expected to find practical application, for example, in next-generation electrochemical energy storage and conversion devices.

Keywords: sodium trititanate, hierarchical materials, mesoporosity, nanotubes, hydrothermal synthesis

Procedia PDF Downloads 98
672 Prediction of Formation Pressure Using Artificial Intelligence Techniques

Authors: Abdulmalek Ahmed

Abstract:

Formation pressure is the main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it will help to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure; other models predicted the formation pressure based on log data. All of these models required different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then with only one method or at most two. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without requiring trend information, unlike other models, which require two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction through its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%).
In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
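The two figures of merit quoted above, the correlation coefficient and the average absolute percentage error, can be restated in a few lines of Python. This is a generic sketch of the metrics, not the study's code, and the pressure values at the end are illustrative only.

```python
import math

def aape(actual, predicted):
    """Average absolute percentage error (%), as quoted in the abstract."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual) * 100

def pearson_r(x, y):
    """Pearson correlation coefficient between predictions and measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# illustrative pore pressures (psi), not field data from the study
r = pearson_r([4500, 5200, 6100], [4480, 5230, 6090])
```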

Keywords: Artificial Intelligence (AI), Formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)

Procedia PDF Downloads 145
671 A Facile One Step Modification of Poly(dimethylsiloxane) via Smart Polymers for Biomicrofluidics

Authors: A. Aslihan Gokaltun, Martin L. Yarmush, Ayse Asatekin, O. Berk Usta

Abstract:

Poly(dimethylsiloxane) (PDMS) is one of the most widely used materials in the fabrication of microfluidic devices. It is easily patterned and can replicate features down to nanometers. Its flexibility, gas permeability that allows oxygenation, and low cost also drive its wide adoption. However, a major drawback of PDMS is its hydrophobicity and fast hydrophobic recovery after surface hydrophilization. This results in significant non-specific adsorption of proteins as well as small hydrophobic molecules such as therapeutic drugs, limiting the utility of PDMS in biomedical microfluidic circuitry. While silicon, glass, and thermoplastics have been used, they come with problems of their own, such as rigidity, high cost, and special tooling needs, which limit their use to a smaller user base. Many strategies to alleviate these common problems with PDMS lack general practical applicability or achieve modifications with limited shelf lives. This restricts large-scale implementation and adoption by industrial and research communities. Accordingly, we aim to tailor biocompatible PDMS surfaces by developing a simple, one-step bulk modification approach with novel smart materials to reduce non-specific molecular adsorption and to stabilize long-term cell analysis with PDMS substrates. Smart polymers, blended with PDMS during device manufacture, spontaneously segregate to surfaces when in contact with aqueous solutions and create a < 1 nm layer that reduces non-specific adsorption of organic and biomolecules. Our methods are fully compatible with existing PDMS device manufacture protocols without any additional processing steps. We have demonstrated that our modified PDMS microfluidic system is effective at blocking the adsorption of proteins while retaining the viability of primary rat hepatocytes and preserving the biocompatibility, oxygen permeability, and transparency of the material.
We expect this work will enable the development of fouling-resistant biomedical materials from microfluidics to hospital surfaces and tubing.

Keywords: cell culture, microfluidics, non-specific protein adsorption, PDMS, smart polymers

Procedia PDF Downloads 288
670 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The sought initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
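The AGM outer loop described above can be illustrated on a much simpler problem. The sketch below is an assumption-laden toy, not the paper's 2D turbulence solver: it applies the same forward/adjoint gradient-descent idea to the 1D linear heat equation with periodic boundaries, recovering an initial field u0 whose forward evolution matches a target v1.

```python
import numpy as np

def laplacian(u):
    """Second-difference Laplacian with periodic boundaries (grid spacing 1)."""
    return np.roll(u, 1) - 2 * u + np.roll(u, -1)

def forward(u0, nu, dt, steps):
    """Explicit-Euler forward integration of the 1D heat equation."""
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * nu * laplacian(u)
    return u

def gradient(u0, v1, nu, dt, steps):
    """Gradient of J = 0.5*||u(1) - v1||^2 w.r.t. u0: transport the final
    misfit back through the adjoint of the (self-adjoint) heat operator."""
    lam = forward(u0, nu, dt, steps) - v1
    for _ in range(steps):
        lam = lam + dt * nu * laplacian(lam)
    return lam

# gradient descent on the initial condition, as in the AGM outer loop
rng = np.random.default_rng(0)
v1 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))  # target at t = 1
u0 = rng.standard_normal(64)                                    # noisy first guess
nu, dt, steps, lr = 0.2, 0.1, 50, 0.5
for _ in range(200):
    u0 -= lr * gradient(u0, v1, nu, dt, steps)
cost = 0.5 * np.sum((forward(u0, nu, dt, steps) - v1) ** 2)     # final misfit J
```

Every integration here runs in the stable (forward/adjoint) direction, mirroring how the AGM replaces the unstable backward integration by repeated stable sweeps.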

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 213
669 Corpus Stylistics and Multidimensional Analysis for English for Specific Purposes Teaching and Assessment

Authors: Svetlana Strinyuk, Viacheslav Lanin

Abstract:

Academic English has become the lingua franca of the international scientific community, which stimulates universities to introduce English for Academic Purposes (EAP) courses into the curriculum. Teaching L2 EAP students can be supported by corpus technologies and digital stylistics. Special software was created to address the manifold task of teaching, assessing and researching the academic writing of L2 students on the basis of digital stylistics and multidimensional analysis. A set of annotations (style markers) was built, covering the grammatical, lexical and syntactic features most significant for academic writing. A contrastive comparison of two corpora, a 'model corpus' of subject-domain-limited papers published by competent writers in leading academic journals and a 'students' corpus' of subject-domain-limited papers written by final-year students, yields data about the features of academic writing underused or overused by L2 EAP students. Both corpora are tagged with software created in GATE Developer. Style markers within the framework of the research may be replaced depending on the relevance and validity of the results obtained from the research corpora. Thus, by selecting relevant (high-frequency) style markers and excluding less relevant, i.e. less frequent, annotations, high validity of the model is achieved. The software compares the data received from processing the model corpus with the students' corpus and generates reports which can be used in teaching and assessment. The less deviation from the model corpus students demonstrate in their writing, the higher their academic writing skill acquisition. The research showed that several style markers (hedging devices) were underused by L2 EAP students, whereas lexical linking devices were used excessively. Special software implemented into the teaching of EAP courses serves as a successful visual aid, makes assessment more valid, is indicative of the degree of writing skill acquisition, and provides data for further research.
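A minimal sketch of the underuse/overuse comparison between the model corpus and the students' corpus might look as follows; the marker set and the per-1,000-token normalization are hypothetical stand-ins for the GATE annotations used in the study.

```python
from collections import Counter

def marker_rates(texts, markers):
    """Frequency of each style marker per 1,000 tokens in a corpus."""
    tokens = [t for doc in texts for t in doc.lower().split()]
    counts = Counter(tok for tok in tokens if tok in markers)
    per_k = 1000 / max(len(tokens), 1)
    return {m: counts[m] * per_k for m in markers}

# hypothetical hedging markers; the study's actual marker set is defined in GATE
HEDGES = {"may", "might", "possibly", "suggests"}
model = marker_rates(["the results may suggest caution", "this might hold"], HEDGES)
students = marker_rates(["the results prove the claim"], HEDGES)
deviation = {m: students[m] - model[m] for m in HEDGES}  # negative = underused
```

A negative deviation flags a marker the students underuse relative to competent writers, which is the pattern the study reports for hedging devices.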

Keywords: corpus technologies in EAP teaching, multidimensional analysis, GATE Developer, corpus stylistics

Procedia PDF Downloads 186
668 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India

Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar

Abstract:

The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are the highest among diagnostic radiology practices, it is of great significance to be aware of the patient's CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of volume CT dose index (CTDIvol) values, is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate the CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination across different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA and FOV combinations used. A total of 100 CT scanners were involved in this study. Data regarding 15,000 examinations of patients who underwent routine head, chest and abdomen CT were collected using a questionnaire sent to a large number of hospitals: 5,000 head, 5,000 chest and 5,000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergences between the measured and displayed CTDIvol values were 5.2, 8.4, and -5.7 for the selected head, chest and abdomen protocols, respectively.
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies currently being performed in South India. This reflects the improved capacity of CT scanners to cover longer scan lengths at finer resolutions, as permitted by helical and multislice technology. Some of the CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This increases the patient radiation dose as well as the measured CTDIvol, so it is suggested that such CT scanners select appropriate slice thicknesses and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest and abdomen procedures are optimized, the dose indices will be optimal, lowering CT doses. In the South Indian region, all the CT machines were routinely tested for QA once a year, as per AERB requirements.
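The divergence between console-displayed and phantom-measured CTDIvol can be expressed as a simple percentage. The sign convention and the numbers below are illustrative assumptions, not the study's raw data.

```python
def percent_divergence(measured, displayed):
    """Percentage divergence of the console-displayed CTDIvol from the
    phantom-measured value (positive = console reads higher)."""
    return (displayed - measured) / measured * 100

# illustrative CTDIvol values in mGy, not measurements from the study
head = percent_divergence(measured=58.0, displayed=61.0)
```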

Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose

Procedia PDF Downloads 249
667 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors

Authors: Tom Nakotte, Hongmei Luo

Abstract:

Lead chalcogenide-based (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb-based materials have long been of interest for gamma- and x-ray detection due to their high absorption cross sections and high Z numbers. The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the properties of carrier transport within QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple coating techniques, such as spin coating and doctor blading. Current QD radiation detectors use the QDs as fluorophores in scintillation detectors. Here, the viability of using QDs in solid-state radiation detectors, in which the incident radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobilities and long diffusion lengths in the QD films. Pb chalcogenide-based QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing in-solution ligand exchange and making single-step deposition of thick films possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand are dispersed in 2,6-difluoropyridine, resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce films several micrometers thick using doctor blading.
Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics and nuclear security, in order to provide sufficient stopping power.

Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots

Procedia PDF Downloads 122
666 Recycled Cellulosic Fibers and Lignocellulosic Aggregates for Sustainable Building Materials

Authors: N. Stevulova, I. Schwarzova, V. Hospodarova, J. Junak, J. Briancin

Abstract:

Sustainability is becoming a priority for developers, and the use of environmentally friendly materials is increasing. Nowadays, the application of raw materials from renewable sources to building materials has gained significant interest in this research area. Lignocellulosic aggregates and cellulosic fibers come from many different sources, such as wood, plants and waste. They are promising alternative materials to replace synthetic, glass and asbestos fibers as reinforcement in the inorganic matrix of composites. Natural fibers are renewable resources, so their cost is relatively low in comparison to synthetic fibers. From the standpoint of environmental consciousness, natural fibers are biodegradable, so their use can reduce CO2 emissions in building materials production. The use of cellulosic fibers in cementitious matrices has gained importance because they make the composites lighter at high fiber content, they have cost-performance ratios comparable to similar building materials, and they can be processed from waste paper, thus expanding the opportunities for waste utilization in cementitious materials. The main objective of this work is to investigate the possibility of using different wastes, hemp hurds from hemp stem processing and recycled fibers obtained from waste paper, for making cement composite products such as mortars based on cellulose fibers. This material was made of cement mortar containing an organic filler based on hemp hurds and recycled waste paper. In addition, the effects of the fibers and their contents on selected physical and mechanical properties of the fiber-cement plaster composites were investigated. In this research, organic material was added to the mortars at 2.0, 5.0 and 10.0% replacement of cement by weight. A reference sample was made for comparison of the physical and mechanical properties of cement composites based on recycled cellulosic fibers and lignocellulosic aggregates.
The prepared specimens were tested after 28 days of curing in order to investigate density, compressive strength and water absorption. Scanning electron microscopy examination was also carried out.
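For illustration, the cement-replacement series described above can be tabulated as batch masses; the 450 g reference cement mass is a hypothetical batch size, not a figure from the study.

```python
def mix_masses(cement_ref_g, replacement_pct):
    """Masses for a mortar batch where replacement_pct of the reference
    cement weight is substituted with organic filler (hemp hurds or
    recycled paper fibre), per the 2/5/10 % series in the abstract."""
    filler = cement_ref_g * replacement_pct / 100
    return {"cement_g": cement_ref_g - filler, "filler_g": filler}

# 0 % is the reference sample; 450 g is a hypothetical batch size
batches = {pct: mix_masses(450.0, pct) for pct in (0.0, 2.0, 5.0, 10.0)}
```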

Keywords: Hemp hurds, organic filler, recycled paper, sustainable building materials

Procedia PDF Downloads 218
665 Analysis of Socio-Economics of Tuna Fisheries Management (Thunnus Albacares Marcellus Decapterus) in Makassar Waters Strait and Its Effect on Human Health and Policy Implications in Central Sulawesi-Indonesia

Authors: Siti Rahmawati

Abstract:

Indonesia has experienced a long period of monetary economic crisis, followed by an upward trend in the price of fuel oil. This situation impacts all aspects of the tuna fishing community. For instance, the basic needs of fishing communities increase while purchasing power falls, leading to economic and social instability as well as poorer health in fishermen's households. To understand this, the AHP method is applied to model tuna fisheries management priorities, the cold chain marketing channel, and the utilization levels that impact human health. The study is designed as development research with 180 respondents. The data were analyzed by the Analytical Hierarchy Process (AHP) method. The development of the tuna fishery business can improve productivity through economic empowerment activities for coastal communities, improving the competitiveness of products, developing fish processing centers and providing internal capital for the development of an optimal fishery business. From the economic aspect, the fishery business is attractive because the benefit-cost ratio is 2.86. This means that over the 10-year economic life of this project it can perform well, as B/C > 1, and therefore the investment is economically viable. From the health aspect, tuna can reduce the risk of dying from heart disease by 50% because of the selenium it supplies to the human body. The consumption of 100 g of tuna meets 52.9% of the body's selenium requirement and activates the antioxidant enzyme glutathione peroxidase, which can protect the body from free radicals and help prevent various cancers. The results of the analytic hierarchy process show that the quality of tuna products is the top priority for export, together with quality control, in order to compete in the global market. Implementation of the policy can increase the income of fishermen, reduce the poverty of fishermen's households, and have an impact on the health of those at high risk of disease.
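The economic-viability check quoted above (B/C = 2.86 > 1) can be sketched as a discounted benefit-cost computation; the cash flows and discount rate below are illustrative stand-ins, not the study's data.

```python
def benefit_cost_ratio(benefits, costs, rate):
    """Discounted B/C ratio over the project's economic life; the venture
    is considered viable when the ratio exceeds 1."""
    pv_b = sum(b / (1 + rate) ** t for t, b in enumerate(benefits, start=1))
    pv_c = sum(c / (1 + rate) ** t for t, c in enumerate(costs, start=1))
    return pv_b / pv_c

# illustrative 10-year cash flows and 10 % discount rate (not the study's data)
ratio = benefit_cost_ratio([40] * 10, [14] * 10, rate=0.10)
```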

Keywords: management of tuna, social, economic, health

Procedia PDF Downloads 308
664 Finite Element Analysis of Mechanical Properties of Additively Manufactured 17-4 PH Stainless Steel

Authors: Bijit Kalita, R. Jayaganthan

Abstract:

Additive manufacturing (AM) is a novel manufacturing method which provides more freedom in design, manufacturing of near-net-shaped parts on demand, lower cost of production, and faster delivery to market. Among the various metal AM techniques, Laser Powder Bed Fusion (L-PBF) is the most prominent, providing higher accuracy and powder efficiency in comparison with other methods. In particular, 17-4 PH alloy is a martensitic precipitation-hardened (PH) stainless steel characterized by resistance to corrosion up to 300°C and tailorable strengthening by copper precipitates. Additively manufactured 17-4 PH stainless steel exhibits a dendritic/cellular solidification microstructure in the as-built condition. It is widely used as a structural material in marine environments, power plants, aerospace, and chemical industries. The excellent weldability of 17-4 PH stainless steel and its ability to be heat treated to improve mechanical properties make it a good material choice for L-PBF. In this study, the microstructures of martensitic stainless steels in the as-built state, as well as the effects of process parameters, build atmosphere, and heat treatments on the microstructures, are reviewed. The mechanical properties of fabricated parts are studied through micro-hardness and tensile tests. Tensile tests are carried out at different strain rates at room temperature. In addition, the effect of process parameters and heat treatment conditions on mechanical properties is critically reviewed. These studies revealed the performance of L-PBF-fabricated 17-4 PH stainless steel parts under cyclic loading, and the results indicated that fatigue properties were more sensitive to the defects generated by L-PBF (e.g., porosity, microcracks), leading to low fracture strains and stresses under cyclic loading.
Rapid melting, solidification, and re-melting of powders during the process, together with different combinations of processing parameters, result in a complex thermal history and a heterogeneous microstructure; high-efficiency, low-cost heat treatments are therefore necessary to better control the microstructures and properties of L-PBF 17-4 PH stainless steels.

Keywords: 17–4 PH stainless steel, laser powder bed fusion, selective laser melting, microstructure, additive manufacturing

Procedia PDF Downloads 110
663 Cellular RNA-Binding Domains with Distant Homology in Viral Proteomes

Authors: German Hernandez-Alonso, Antonio Lazcano, Arturo Becerra

Abstract:

The origin of viruses remains controversial and poorly understood; it is an enigma and one of the great challenges of contemporary biology. Three main theories have attempted to explain it: regressive evolution, escaped host genes, and a pre-cellular origin. Under the escaped-host-gene theory, a cellular origin can be assumed for viral components such as protein RNA-binding domains. These universally distributed RNA-binding domains are involved in RNA metabolism, including transcription, processing and modification of transcripts, translation, RNA degradation, and its regulation. In viruses, these domains are present in important viral proteins such as helicases, nucleases, polymerases, capsid proteins, and regulation factors; they are therefore implicated in the replicative cycle and parasitic processes of viruses, and it is reasonable to expect them to show low levels of divergence due to selective pressure. For these reasons, the main goal of this project was to create a catalogue of the RNA-binding domains found in all available viral proteomes, using bioinformatics tools to analyze their evolutionary process and thus shed light on virus evolution in general. The ProDom database was used to obtain more than six thousand RNA-binding domain families belonging to the three cellular domains of life and some viral groups. From the sequences of these families, protein profiles were created using the HMMER 3.1 tools in order to find distant homologues within more than four thousand viral proteomes available in GenBank. The analysis yielded almost three thousand hits in the viral proteomes. Homologous sequences were found in proteomes of the principal Baltimore viral groups, showing interesting distribution patterns that can contribute to understanding the evolution of viruses and their host-virus interactions.
The presence of cellular RNA-binding domains within virus proteomes seems to be explained by close interactions between viruses and their hosts. Recruitment of these domains is advantageous for viral fitness, allowing viruses to adapt to the host cellular environment.
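The profile-based homology search described above can be illustrated with a toy position-weight-matrix scan. This is a much-simplified stand-in for the HMMER profile-HMM search actually used in the study (no gaps, no HMM states); the alignment and target string below are invented for illustration.

```python
import math

# Toy stand-in for profile-based homology search (the study used HMMER 3.1
# profiles built from ProDom families; this sketch and all sequences are invented).
def build_pwm(aligned_seqs, alphabet="ACGT", pseudocount=1.0):
    """Column-wise log-odds profile from an ungapped alignment."""
    length = len(aligned_seqs[0])
    background = 1.0 / len(alphabet)
    pwm = []
    for col in range(length):
        counts = {a: pseudocount for a in alphabet}
        for seq in aligned_seqs:
            counts[seq[col]] += 1
        total = sum(counts.values())
        pwm.append({a: math.log2((counts[a] / total) / background) for a in alphabet})
    return pwm

def best_hit(pwm, target):
    """Slide the profile along a target sequence; return (best score, offset)."""
    width = len(pwm)
    best = (float("-inf"), -1)
    for i in range(len(target) - width + 1):
        window_score = sum(pwm[j][target[i + j]] for j in range(width))
        best = max(best, (window_score, i))
    return best

family = ["ACGA", "ACGT", "ACGA"]          # hypothetical domain-family alignment
pwm = build_pwm(family)
score, offset = best_hit(pwm, "TTACGATT")  # hypothetical target "proteome" string
```

A real profile HMM additionally models insertions and deletions and scores matches probabilistically, which is what allows HMMER to detect the distant homologues reported here.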

Keywords: bioinformatics tools, distant homology, RNA-binding domains, viral evolution

Procedia PDF Downloads 381
662 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech. BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI detection tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized. The subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest on 20 acoustic features extracted with the librosa library in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D audio representations into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that the proposed method reaches 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model can learn acoustic cues of AD and MCI.
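Three of the baseline acoustic features named above can be sketched in plain numpy. The study extracted them with librosa; these are minimal re-implementations applied to a synthetic test tone, not to the Pitt Corpus data.

```python
import numpy as np

# Minimal numpy versions of three acoustic features named above (the study
# used librosa's implementations); the test signal below is synthetic.
def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs that change sign."""
    return np.mean(np.abs(np.diff(np.signbit(x).astype(int))))

def rms(x):
    """Root mean square energy of the signal."""
    return np.sqrt(np.mean(x ** 2))

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency of the spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

sr = 16000                                    # assumed sampling rate
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)            # 1 s, 440 Hz test tone
features = [zero_crossing_rate(tone), rms(tone), spectral_centroid(tone, sr)]
```

For a pure 440 Hz tone the centroid recovers 440 Hz and the RMS is 1/√2, which is a quick sanity check before feeding such features to a Random Forest.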

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 117
661 A Furniture Industry Concept for a Sustainable Generative Design Platform Employing Robot Based Additive Manufacturing

Authors: Andrew Fox, Tao Zhang, Yuanhong Zhao, Qingping Yang

Abstract:

The furniture manufacturing industry has generally been slow to adopt the latest manufacturing technologies, historically relying heavily upon specialised conventional machinery. This approach not only requires high levels of specialist process knowledge, training, and capital investment but also suffers from significant subtractive manufacturing waste and high logistics costs due to the requirement for centralised manufacturing, with much furniture product neither recycled nor re-used. This paper aims to address these problems by introducing suitable digital manufacturing technologies to create step changes in furniture manufacturing design, as traditional design practice has been reported to build in 80% of a product's environmental impact. In this paper, a 3D printing robot for furniture manufacturing is reported. The 3D printing robot mainly comprises a KUKA industrial robot, an Arduino microprocessor, and a self-assembled screw-fed extruder. Compared to a traditional 3D printer, the 3D printing robot has a larger motion range and can easily be upgraded to enlarge the maximum size of the printed object. Generative design is also investigated in this paper, with the aim of establishing a combined design methodology that allows assessment of goals, constraints, materials, and manufacturing processes simultaneously. ‘Matrixing’ for part amalgamation and product performance optimisation is enabled. The generative design goals of integrated waste reduction, increased manufacturing efficiency, optimised product performance, and reduced environmental impact constitute a truly lean and innovative future design methodology. In addition, there is massive future potential to leverage Single Minute Exchange of Die (SMED) theory through generative design post-processing of geometry for robot manufacture, resulting in ‘mass customised’ furniture with virtually no setup requirements. These generatively designed products can be manufactured using robot-based additive manufacturing.
Essentially, the 3D printing robot is already functional; some initial goals have been achieved and are also presented in this paper.

Keywords: additive manufacturing, generative design, robot, sustainability

Procedia PDF Downloads 122
660 Damage Detection in a Cantilever Beam under Different Excitation and Temperature Conditions

Authors: A. Kyprianou, A. Tjirkallis

Abstract:

Condition monitoring of structures in service is very important, as it provides information about the risk of damage development. One of the essential constituents of structural condition monitoring is the damage detection methodology. In the context of condition monitoring of in-service structures, a damage detection methodology analyses data obtained from the structure while it is in operation. Usually, this means that the data can be affected by operational and environmental conditions in ways that mask the effects of possible damage, which, depending on the methodology, can lead either to false alarms or to missed damage. In this article, a damage detection methodology based on spatio-temporal continuous wavelet transform (SPT-CWT) analysis of a sequence of experimental time responses of a cantilever beam is proposed. The cantilever is subjected to white and pink noise excitation to simulate different operating conditions. In addition, to simulate changing environmental conditions, the cantilever is heated with a heat gun. The response of the cantilever beam is measured by a high-speed camera. Edges are extracted from the series of images of the beam response captured by the camera, and subsequent processing of the edges gives a series of time responses at 439 points on the beam. This sequence is then analyzed using the SPT-CWT to identify damage. The proposed algorithm was able to clearly identify damage under any condition when the structure was excited by white noise. In the white noise case, the analysis could also reveal the position of the heat gun when it was used to heat the structure. The analysis could distinguish the different operating conditions, i.e., between responses due to white noise excitation and responses due to pink noise excitation.
During pink noise excitation, although damage and changing temperature were identified, it was not possible to clearly separate the effect of damage from that of temperature. The methodology proposed in this article enables the separation of the damage effect from those of temperature and excitation in data obtained from measurements of a cantilever beam, and it does not require information about the a priori state of the structure.
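The continuous wavelet transform at the heart of the method above can be sketched in one dimension. The paper's SPT-CWT analyses the 439-point beam response jointly over space and time; the sketch below applies a single-axis Morlet CWT to a synthetic response signal, so it is illustrative only, not the authors' algorithm.

```python
import numpy as np

# Simplified 1-D Morlet CWT via the FFT (the paper's spatio-temporal CWT
# works over space and time jointly; this treats one synthetic signal).
def morlet_cwt(signal, scales, w0=6.0):
    n = len(signal)
    sig_hat = np.fft.fft(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n)          # rad/sample
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier-domain Morlet wavelet at scale s (analytic: zero for omega <= 0)
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        out[i] = np.fft.ifft(sig_hat * psi_hat * np.sqrt(s))
    return out

t = np.linspace(0, 1, 1024, endpoint=False)
resp = np.sin(2 * np.pi * 50 * t)                  # synthetic 50 Hz beam response
scales = np.arange(1, 64)
coef = morlet_cwt(resp, scales)
ridge = scales[np.argmax(np.abs(coef).mean(axis=1))]  # scale of maximum energy
```

The energy ridge lands at the scale matching the response frequency (s ≈ w0·sr/(2π·f)); localized deviations of such ridges along the beam are the kind of signature a damage detection scheme looks for.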

Keywords: spatiotemporal continuous wavelet transform, damage detection, data normalization, varying temperature

Procedia PDF Downloads 271
659 An Integrated Approach to Solid Waste Management of Karachi, Pakistan (Waste-to-Energy Options)

Authors: Engineer Dilnawaz Shah

Abstract:

Solid Waste Management (SWM) is perhaps one of the most important elements constituting the environmental health and sanitation of the urban developing sector. The management system has several components that are integrated as well as interdependent; thus, the efficiency and effectiveness of the entire system are affected when any of its functional components fails or does not perform up to the required level of operation. The Sindh Solid Waste Management Board (SSWMB) is responsible for the management of solid waste in the entire city. There is a need to adopt an engineered approach in redesigning the existing system. In most towns, street sweeping operations have been mechanized and are done by vehicle-mounted machinery. Construction of Garbage Transfer Stations (GTS) at a number of locations within the city will cut the cost of transporting waste to disposal sites. Material processing, recovery of recyclables, compaction, volume reduction, and increased density will enable transportation of waste to disposal sites/landfills via long vehicles (bulk transport), minimizing transport/traffic and environmental pollution-related issues. Development of disposal sites into proper sanitary landfill sites is mandatory. The transportation mechanism uses garbage vehicles with either hauled or fixed container systems, employing crews for mechanical or manual loading. The number of garbage vehicles is inadequate, and due to the comparatively long haulage to disposal sites, there are problems of frequent vehicle maintenance and high fuel costs. Foreign investors have shown interest in improvement schemes and have proposed operating a solid waste management system in Karachi. The waste-to-energy option is being considered as a practical answer that would generate power while reducing the waste load, a two-pronged solution to a growing environmental problem.
The paper presents the results and analysis of a recent study of waste generation and characterization, probing waste-to-energy options for Karachi City.
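A first-order sense of the waste-to-energy option can be obtained from P = ṁ × LHV × η. All input values below are assumed round numbers for illustration, not figures from the study's waste characterization survey.

```python
# First-order waste-to-energy estimate, P = m_dot * LHV * efficiency.
# All input values are assumed round numbers, not the study's survey data.
tonnes_per_day = 10000          # assumed municipal waste reaching the plant
lhv_mj_per_kg = 8.0             # assumed lower heating value of mixed MSW
efficiency = 0.25               # assumed net electrical efficiency

heat_mw = tonnes_per_day * 1000 * lhv_mj_per_kg / 86400   # MJ/s = MW thermal
power_mw = heat_mw * efficiency                           # MW electric
```

Under these assumptions the thermal input is roughly 0.9 GW and the electrical output a couple of hundred MW; the real figures depend entirely on the measured waste composition and moisture content, which is why the characterization study matters.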

Keywords: waste to energy option, integrated approach, solid waste management, physical and chemical composition of waste in Karachi

Procedia PDF Downloads 31
658 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn

Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh

Abstract:

Studying the postmortem interval (PMI) in different causes of death is extremely important, as it often contributes greatly to forming an opinion on the exact cause of death following an incident. With reliable knowledge of the interval, an expert can judge whether the stated circumstances of death are genuine; hence there is a great need to evaluate such deaths at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation, or proteolysis, of a cardiac protein in cases of death due to burns as a marker of time since death. Cardiac tissue samples were collected from (n = 6) medico-legal autopsies at the Department of Forensic Medicine and Toxicology, King George’s Medical University, Lucknow, India, after informed consent from the relatives, and post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20 ± 2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were subjects of burns without any prior history of disease who died in hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As postmortem time progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the PM hours increased. A significant difference was observed between <15 h and other PM hours (p<0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and 56-65 h and >75 h (p<0.01).
Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, and 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low-molecular-weight fragments. Overall, both PMI and the burned state of the cardiac tissue had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, it shows a pseudo-first-order relationship when plotted against postmortem time. A strong, significant positive correlation was found between cTnT and PM hours (r = 0.87, p = 0.0001), and the regression analysis showed a good proportion of variability explained (R² = 0.768). The post-mortem troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
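The pseudo-first-order relationship mentioned above means that ln(% intact cTnT) declines linearly with postmortem time, so the rate constant can be recovered by a log-linear fit. The data points below are synthetic values generated from an assumed rate constant, not the study's measurements.

```python
import numpy as np

# Pseudo-first-order decay: %intact(t) = 100 * exp(-k t), so ln(%intact) is
# linear in time. The values below are synthetic, not the study's data.
t_hours = np.array([7.3, 18.2, 30.3, 41.4, 54.3, 65.2, 88.4])
pct_intact = 100 * np.exp(-0.02 * t_hours)        # assumed rate k = 0.02 /hour

slope, intercept = np.polyfit(t_hours, np.log(pct_intact), 1)
k_est = -slope                                    # recovered rate constant, /hour
half_life = np.log(2) / k_est                     # time for 50% of cTnT to degrade
```

Inverting the fitted line, t = (ln 100 − ln %intact)/k, is how such a calibration would be used to predict PMI from a measured percent-intact value.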

Keywords: burn, degradation, postmortem interval, troponin-T

Procedia PDF Downloads 442
657 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City

Authors: Berhanu Keno Terfa

Abstract:

Flash floods are among the most dangerous natural disasters and pose a significant threat to human life. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering the various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, curve number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed on ArcGIS software platforms, and data from Sentinel-2 satellite imagery (10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter-resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas.
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
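The AHP weighting step referred to above derives factor weights from a pairwise-comparison matrix via its principal eigenvector, then checks Saaty's consistency ratio. The 3×3 comparison matrix below (say, slope vs. drainage density vs. rainfall) is invented for illustration; the study's actual matrix covers all seven factors.

```python
import numpy as np

# AHP sketch: factor weights from a pairwise-comparison matrix via its
# principal eigenvector, with a consistency check. The 3x3 matrix below
# (slope, drainage density, rainfall) is invented, not the study's.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue (lambda_max)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalised factor weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # random index RI = 0.58 for n = 3
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgments; the resulting weights are then applied to the normalised GIS factor layers in a weighted overlay.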

Keywords: remote sensing, flash flood hazards, Bishoftu, GIS

Procedia PDF Downloads 21
656 Convective Boiling of CO₂/R744 in Macro and Micro-Channels

Authors: Adonis Menezes, J. C. Passos

Abstract:

The current panorama of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated the development of this work. Among non-halogenated refrigerants, CO₂/R744 has distinct thermodynamic properties compared to other fluids. R744 operates at significantly higher pressures and temperatures than other refrigerants, and this represents a challenge for the design of new evaporators, as existing systems must normally be resized to meet the specific characteristics of R744, creating the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus capable of storing (m = 10 kg) of saturated CO₂ at (T = -30 °C) in an accumulator tank was used; this fluid was then pumped using a positive-displacement pump with three pistons, with a controlled outlet pressure of up to (P = 110 bar). The high-pressure saturated fluid passed through a Coriolis-type flow meter, with mass velocities varying from (G = 20 kg/m²·s) up to (G = 1000 kg/m²·s). The fluid was then sent to the first test section, of circular cross-section (D = 4.57 mm), where the inlet and outlet temperatures and pressures were controlled and heating was provided by the Joule effect using a direct-current source, with a maximum heat flux of (q = 100 kW/m²). The second test section comprised seven parallel channels, each of square cross-section (D = 2 mm); this section likewise had temperature and pressure control at the inlet and outlet and was heated by a direct-current source, with a maximum heat flux of (q = 20 kW/m²).
The two-phase fluid was then directed to a parallel-plate heat exchanger to return it to the liquid state, so that it could flow back to the accumulator tank, continuing the cycle. The multi-channel test section has a viewing section, and a high-speed CMOS camera was used for image acquisition, making it possible to observe the flow patterns. The experiments carried out and presented here were conducted rigorously, enabling the development of a database on the convective boiling of R744 in macro and micro-channels. The analysis prioritized the processes from the onset of convective boiling until wall dryout in a subcritical regime. R744 has re-emerged as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and low GWP (Global Warming Potential), among other advantages. The experimental results were very promising for the use of CO₂ in micro-channels under convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂.
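For a directly (Joule) heated circular test section like the first one above, the applied heat flux follows from the electrical power spread over the wetted inner surface, q = V·I/(π·D·L). Only the diameter D = 4.57 mm comes from the text; the voltage, current, and heated length below are assumed for illustration.

```python
import math

# Heat flux from Joule (direct resistive) heating of the circular test
# section: q = P_elec / (pi * D * L). D is from the text; V, I, and the
# heated length L are assumed values for illustration.
D = 4.57e-3          # inner diameter, m (from the paper)
L = 0.5              # heated length, m (assumed)
V, I = 3.0, 200.0    # applied voltage (V) and current (A), assumed

area = math.pi * D * L          # wetted inner surface, m^2
q = V * I / area                # heat flux, W/m^2
```

With these assumed settings q comes out near 84 kW/m², i.e. within the stated 100 kW/m² maximum of the circular test section.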

Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels

Procedia PDF Downloads 135
655 A Closed Loop Audit of Pre-operative Transfusion Samples in Orthopaedic Patients at a Major Trauma Centre

Authors: Tony Feng, Rea Thomson, Kathryn Greenslade, Ross Medine, Jennifer Easterbrook, Calum Arthur, Matilda Powell-bowns

Abstract:

There are clear guidelines on taking group and screen (G&S) samples for elective arthroplasty and major trauma. However, there is limited guidance on blood grouping for other trauma patients. The purpose of this study was to review the level of blood grouping at a major trauma centre and validate a protocol that limits the expensive processing of G&S samples. After reviewing the national guidance on transfusion samples in orthopaedic patients, data were prospectively collected for all orthopaedic admissions to the Royal Infirmary of Edinburgh between January and February 2023. The cause of admission, the number of G&S samples processed on arrival, and the need for red cells were collected using the hospital blood bank. A new protocol was devised, based on a multidisciplinary meeting, which limited the requirement for G&S samples to presentations in “category X”, including neck-of-femur fractures (NOFs), pelvic fractures, and major trauma. A re-audit was completed between April and May, after departmental education and institution of this protocol. 759 patients were admitted under orthopaedics at the major trauma centre across the two separate months. 47% of patients were admitted with presentations falling into category X (354/759), and patients in this category accounted for 88% (92/104) of those requiring post-operative red cell transfusions. Of these, 51% were attributed to NOFs (47/92). In the initial audit, 50% of trauma patients outwith category X had samples sent (116/230), estimated to cost £3800. Of these 230 patients, 3% required post-operative transfusions (7/230). In the re-audit, 23% of patients outwith category X had samples sent (40/173), estimated to cost £1400, of which 3% (5/173) required transfusions. None of the transfusions in these patients in either audit was related to their operation, and the protocol achieved an estimated cost saving of £2400 over one month.
This study highlights the importance of sending samples for patients with certain categories of orthopaedic trauma (category X) due to the high demand for post-operative transfusions. However, the absence of transfusion requirements in other presentations suggests over-testing. While implementation of the new protocol has markedly reduced over-testing, additional interventions are required to reduce this further.

Keywords: blood transfusion, quality improvement, orthopaedics, trauma

Procedia PDF Downloads 68
654 Socially Sustainable Urban Rehabilitation Projects: Case Study of Ortahisar, Trabzon

Authors: Elif Berna Var

Abstract:

Cultural, physical, socio-economic, or political changes in urban areas can result in periods of decay that cause social problems. As a solution, urban renewal projects have been used in European countries since World War II, whereas they gained importance in Turkey after the 1980s. The first attempts mostly addressed physical or economic aspects, which later had negative effects on the social pattern. Thus, social concerns have also come to be included in renewal processes in developed countries. This integrative approach, combining social, physical, and economic aspects, promotes the creation of more sustainable neighbourhoods for both current and future generations. However, it is still a new subject for developing countries like Turkey. Concentrating on Trabzon, Turkey, this study highlights the importance of socially sustainable urban renewal processes, especially in historical neighbourhoods, where protecting the urban identity of the area, as well as its social structure, is vital to creating sustainable environments. Located in the historic city centre and containing remarkable traditional houses, Ortahisar is an important image for Trabzon. Because the architectural and historical pattern of the area is still visible but in need of rehabilitation, 'urban rehabilitation' is used as the urban renewal method in this study. A project has been developed by the local government to create a secondary city centre and a new landmark for the city, but it remains unclear whether this project can ensure the social sustainability of the area, which is one of the concerns of this research. The study suggests that the social sustainability of an area depends on several factors. To determine the factors affecting the social sustainability of an urban rehabilitation project, previous studies were analysed and their common features identified.
To achieve this, firstly, several analyses were conducted to characterise the social structure of Ortahisar. Secondly, structured interviews were administered to 150 local people to measure their satisfaction level, awareness, and expectations, and to record their demographic backgrounds in detail. These data were used to define the critical factors for a more socially sustainable neighbourhood in Ortahisar. The priority of those factors was then put to 50 experts and 150 local people to compare their attitudes and find common criteria. According to the results, the social sustainability of the Ortahisar neighbourhood can be improved by considering factors such as the quality of urban areas, demographic factors, public participation, social cohesion and harmony, proprietorial factors, and education and employment facilities. Finally, several suggestions are made for the Ortahisar case to promote a more socially sustainable urban neighbourhood. As a pilot study highlighting the importance of social sustainability, it is hoped that this attempt will contribute to achieving more socially sustainable urban rehabilitation projects in Turkey.

Keywords: urban rehabilitation, social sustainability, Trabzon, Turkey

Procedia PDF Downloads 370
653 Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage

Authors: Goutam Kumar Ghorai, Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, G. Sarkar, Ashis K. Dhara

Abstract:

Diabetes mellitus is one of the most common chronic diseases in all countries and continues to increase significantly in numbers. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance toward preventing progression to imminent irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and a lack of networking between physicians, diabetologists, and ophthalmologists. A few diabetic patients often visit a healthcare facility for a general checkup, but their eye condition remains largely undetected until the patient is symptomatic. This work focuses on the design and development of a fully automated intelligent decision system for screening retinal fundus images to detect the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance due to the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms in noisy images captured by non-mydriatic cameras. The ground truth for microaneurysms was created by expert ophthalmologists for the MESSIDOR database as well as for a private database collected from Indian patients. The network is trained from scratch using the fundus images of the MESSIDOR database.
The proposed method is evaluated on DIARETDB1 and the private database. The method successfully detects microaneurysms in both dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used to develop an AI-based, affordable, and accessible system providing service at grass-roots-level primary healthcare units spread across the country, catering to the needs of rural people unaware of the severe impact of DR.
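One common way to handle the severe foreground/background imbalance described above is to up-weight the rare positive (microaneurysm) pixels in a binary cross-entropy loss. The paper's actual loss function is not specified here, so this numpy sketch is a generic stand-in, not the authors' design.

```python
import numpy as np

# Generic class-weighted binary cross-entropy for severe pixel imbalance
# (a common stand-in; the paper designs its own loss, not shown here).
def weighted_bce(y_true, y_pred, pos_weight, eps=1e-7):
    """Mean BCE with positive (foreground) terms scaled by pos_weight."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    per_pixel = -(pos_weight * y_true * np.log(y_pred)
                  + (1 - y_true) * np.log(1 - y_pred))
    return per_pixel.mean()

# Toy mask: 2 microaneurysm pixels out of 100 -> weight positives 49x
y_true = np.zeros(100)
y_true[:2] = 1
pos_weight = (y_true == 0).sum() / (y_true == 1).sum()   # negatives/positives
loss_balanced = weighted_bce(y_true, np.full(100, 0.5), pos_weight)
loss_plain = weighted_bce(y_true, np.full(100, 0.5), 1.0)
```

With the weight set to the negative-to-positive ratio, the two classes contribute equally to the mean loss, so the network is no longer rewarded for predicting "background" everywhere.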

Keywords: retinal fundus image, deep convolutional neural network, early detection of microaneurysms, screening of diabetic retinopathy

Procedia PDF Downloads 131
652 Economics of Sugandhakokila (Cinnamomum Glaucescens (Nees) Dury) in Dang District of Nepal: A Value Chain Perspective

Authors: Keshav Raj Acharya, Prabina Sharma

Abstract:

Sugandhakokila (Cinnamomum glaucescens Nees. Dury) is a large evergreen native tree species, mostly confined naturally to the mid-hills of the Rapti Zone of Nepal. The species has been prioritized for agro-technology development as well as for research and development by the Department of Plant Resources. Export of this species outside the country without processing has been banned by the Government of Nepal to encourage value addition within the country. The present study was carried out in Chillikot village of Dang district to determine the economic contribution of C. glaucescens to the local economy and to document the major conservation threats to this species. Participatory Rural Appraisal (PRA) tools, such as household surveys, key informant interviews, and focus group discussions, were used to collect the data. The study reveals that about 1.7 million Nepalese rupees (NPR) are contributed annually to the local economy of 29 households from the collection of C. glaucescens berries in the study area. The average annual income of each family was around NPR 67,165.38 (US$ 569.19) from the sale of the berries, which contributes about 53% of total household income. Six different value chain actors are involved in the C. glaucescens business. The maximum profit margin was taken by collectors, followed by producers, exporters, and processors; the profit margin was lowest for regional and village traders. The total profit margin for producers was NPR 138.86/kg, and regional traders gained NPR 17/kg. However, it is possible to increase producers' profit by a further NPR 8.00 per kg of berries through the initiation of community forest user groups and village cooperatives in the area. Open-access resources, infestation of over-mature trees by insects, and browsing by goats were identified as the major conservation threats to this species.
Handing over the national forest as a community forest, linking producers with processors through an organized market channel, and replacing old trees through new plantation are recommended for the future.

Keywords: community forest, conservation threats, C. glaucescens, value chain analysis

Procedia PDF Downloads 129
651 Transition in Protein Profile, Maillard Reaction Products and Lipid Oxidation of Flavored Ultra High Temperature Treated Milk

Authors: Muhammad Ajmal

Abstract:

Thermal processing and subsequent storage of ultra-high temperature (UHT) treated milk lead to alterations in the protein profile, Maillard reaction and lipid oxidation. The carbohydrate concentrations of the normal and flavored versions of UHT milk differ considerably. Transitions in protein profile, Maillard reaction and lipid oxidation in flavored UHT milk were determined over 90 days at ambient conditions, with analyses at 0, 45 and 90 days of storage. Protein profile, hydroxymethylfurfural, furosine, Nε-carboxymethyl-L-lysine, fatty acid profile, free fatty acids, peroxide value and sensory characteristics were determined. After 90 days of storage, fat, protein, total solids contents and pH were significantly lower than the initial values determined at day 0. Compared with the protein profile of normal UHT milk, more pronounced changes were recorded in the different protein fractions of flavored UHT milk at 45 and 90 days of storage. Tyrosine contents of flavored UHT milk at 0, 45 and 90 days of storage were 3.5, 6.9 and 15.2 µg tyrosine/ml. After 45 days of storage, the declines in αs1-casein, αs2-casein, β-casein, κ-casein, β-lactoglobulin, α-lactalbumin, immunoglobulin and bovine serum albumin were 3.35%, 10.5%, 7.89%, 18.8%, 53.6%, 20.1%, 26.9% and 37.5%. After 90 days of storage, the corresponding declines were 11.2%, 34.8%, 14.3%, 33.9%, 56.9%, 24.8%, 36.5% and 43.1%. Hydroxymethylfurfural contents of UHT milk at 0, 45 and 90 days of storage were 1.56, 4.18 and 7.61 µmol/L. Furosine contents of flavored UHT milk at 0, 45 and 90 days of storage were 278, 392 and 561 mg/100 g protein. Nε-carboxymethyl-L-lysine contents of flavored UHT milk at 0, 45 and 90 days of storage were 67, 135 and 343 mg/kg protein. After 90 days of storage of flavored UHT milk, the loss of unsaturated fatty acids was 45.7% from the initial values. At 0, 45 and 90 days of storage, free fatty acids of flavored UHT milk were 0.08%, 0.11% and 0.16% (p < 0.05). Peroxide values of flavored UHT milk at 0, 45 and 90 days of storage were 0.22, 0.65 and 2.88 meq O₂/kg. Sensory analysis of flavored UHT milk after 90 days indicated that appearance, flavor and mouthfeel scores decreased significantly from the initial values recorded at day 0. The findings of this investigation evidenced that more pronounced changes take place in the protein profile, Maillard reaction products and lipid oxidation of flavored UHT milk than of normal UHT milk.
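The percentage declines reported above follow the usual loss-relative-to-initial formula; a minimal sketch (the measured values below are hypothetical illustrations, not data from the study):

```python
def percent_decline(initial, final):
    """Percentage loss of a component relative to its initial value."""
    return (initial - final) / initial * 100.0

# Hypothetical example: a protein fraction at 8.0 g/L initially and
# 7.2 g/L after storage corresponds to a 10% decline.
loss = percent_decline(8.0, 7.2)
```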

Keywords: UHT flavored milk, hydroxymethylfurfural, lipid oxidation, sensory properties

Procedia PDF Downloads 187
650 Humans’ Physical Strength Capacities on Different Handwheel Diameters and Angles

Authors: Saif K. Al-Qaisi, Jad R. Mansour, Aseel W. Sakka, Yousef Al-Abdallat

Abstract:

Handwheels are common to numerous industries, such as power generation plants, oil refineries, and chemical processing plants. The forces required to manually turn handwheels have been shown to exceed operators’ physical strengths, posing risks for injuries. Therefore, the objectives of this research were twofold: (1) to determine humans’ physical strengths on handwheels of different sizes and angles and (2) to subsequently propose recommended torque limits (RTLs) that accommodate the strengths of even the weaker segment of the population. Thirty male and thirty female participants were recruited from a university student population. Participants were asked to exert their maximum possible forces in a counter-clockwise direction on handwheels of different sizes (35 cm, 45 cm, 60 cm, and 70 cm) and angles (0°-horizontal, 45°-slanted, and 90°-vertical). Each participant’s posture was controlled by adjusting the handwheel to be at the participant’s elbow level, requiring the participant to stand erect, and restricting hand placements to the 10-11 o’clock position for the left hand and the 4-5 o’clock position for the right hand. A torque transducer (Futek TDF600) was used to measure the maximum torques generated by each participant. Three repetitions were performed for each handwheel condition, and the average was computed. Results showed that, at all handwheel angles, as the handwheel diameter increased, the maximum torques generated also increased, while the underlying forces decreased. When controlling for handwheel diameter, the 0° handwheel was associated with the largest torques and forces, and the 45° handwheel with the lowest. Hence, a larger handwheel diameter (as large as 70 cm) at a 0° angle is favored for increasing the torque production capacities of users.
It was also recognized that, regardless of handwheel diameter and angle, the torque demands in the field are much greater than humans’ torque production capabilities. As such, this research proposed RTLs for the different handwheel conditions by using the 25th percentile values of the females’ torque strengths. The proposed recommendations may serve future standard developers in defining torque limits that accommodate humans’ strengths.
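The RTL derivation described above (the 25th percentile of the females’ torque strengths) can be sketched as follows; the torque measurements here are hypothetical placeholders, not the study’s data:

```python
def recommended_torque_limit(torques):
    """25th-percentile torque (Nm), linear-interpolation method
    (the same default convention as numpy.percentile)."""
    ordered = sorted(torques)
    rank = 0.25 * (len(ordered) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(ordered):
        return ordered[lo] + frac * (ordered[lo + 1] - ordered[lo])
    return ordered[lo]

# Hypothetical female torque strengths (Nm) for one handwheel condition.
female_torques_nm = [18.2, 21.5, 24.0, 26.3, 29.8, 31.1, 33.4, 36.0]
rtl = recommended_torque_limit(female_torques_nm)
```

Using the 25th percentile rather than the mean ensures the limit accommodates the weaker segment of the population, as the study intends.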

Keywords: handwheel angle, handwheel diameter, humans’ torque production strengths, recommended torque limits

Procedia PDF Downloads 103
649 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation

Authors: A. Raj Kumar, S. Bilaloglu

Abstract:

Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals with stroke at a young age increases, there is a critical need for effective approaches to the rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitutions can facilitate motor relearning. In an ideal situation, real-time task-specific data on the ability to learn, and data-driven feedback to assist such learning, would greatly assist rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients re-learn the use of optimal fingertip forces during a grasp-and-lift task. Measurement of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and progress during learning. Currently, ATI mini force-torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost-prohibitive for deployment in clinical or at-home rehabilitation. A cost-effective mechatronic device was developed to quantify GF, LF, and their rates for stroke rehabilitation purposes using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience.
This paper elaborates on the integration of kinesthetic and tactile sensing through computation of LF, GF and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
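A minimal sketch of how force rates (GFR/LFR) might be computed from sampled sensor readings, as the device does in real time; the sample values and the 100 Hz rate are illustrative assumptions, not the device’s actual firmware:

```python
def force_rate(samples, dt):
    """First-difference estimate of force rate (N/s) between
    consecutive sensor samples taken dt seconds apart."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 0.01                                   # assumed 100 Hz sampling
grip_force_n = [0.0, 0.5, 1.2, 2.0, 2.6]    # hypothetical load-cell readings, N
gfr = force_rate(grip_force_n, dt)          # grip-force rate, N/s
```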

Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile

Procedia PDF Downloads 237
648 Human Health Risk Assessment from Metals Present in a Soil Contaminated by Crude Oil

Authors: M. A. Stoian, D. M. Cocarta, A. Badea

Abstract:

The main sources of soil pollution due to petroleum contaminants are industrial processes involving crude oil. Soil polluted with crude oil is toxic for plants, animals, and humans. Human exposure to contaminated soil occurs through different exposure pathways: soil ingestion, diet, inhalation, and dermal contact. The present study focuses on soil contamination with heavy metals as a consequence of soil pollution with petroleum products. The human exposure pathways considered are accidental ingestion of contaminated soil and dermal contact. The purpose of the paper is to identify the human health risk (carcinogenic risk) from soil contaminated with heavy metals. Human exposure and risk were evaluated for five contaminants of concern out of the eleven identified in the soil. Two soil samples were collected from a bioremediation platform in the Muntenia Region of Romania. The soil deposited on the bioremediation platform had been contaminated through extraction and oil processing. For the research work, two average soil samples from two different plots were analyzed: the first was slightly contaminated with petroleum products (Total Petroleum Hydrocarbons (TPH) in soil: 1420 mg/kg d.w.), while the second was highly contaminated (TPH in soil: 24306 mg/kg d.w.). In order to evaluate the risks posed by heavy metals due to soil pollution with petroleum products, five metals known to be carcinogenic were investigated: arsenic (As), cadmium (Cd), chromium VI (Cr(VI)), nickel (Ni), and lead (Pb). Results of the chemical analysis performed on the contaminated soil samples evidenced heavy metal contamination as follows: As in Site 1 = 6.96 mg/kg d.w.; As in Site 2 = 11.62 mg/kg d.w.; Cd in Site 1 = 0.9 mg/kg d.w.; Cd in Site 2 = 1 mg/kg d.w.; Cr(VI) was 0.1 mg/kg d.w. for both sites; Ni in Site 1 = 37.00 mg/kg d.w.; Ni in Site 2 = 42.46 mg/kg d.w.; Pb in Site 1 = 34.67 mg/kg d.w.; Pb in Site 2 = 120.44 mg/kg d.w.
The concentrations of these metals exceed the normal values established in the Romanian regulation but are smaller than the alert levels for a less sensitive (industrial) use of soil. Although the concentrations do not exceed these thresholds, the next step was to assess the human health risk posed by soil contamination with these heavy metals. The risk results were compared with the acceptable level (10⁻⁶, according to the World Health Organization). As expected, the highest risk was identified for the soil with the higher degree of contamination: the Individual Risk (IR) was 1.11×10⁻⁵, compared with 8.61×10⁻⁶.
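The carcinogenic-risk calculation for the soil-ingestion pathway can be sketched in the general chronic-daily-intake × slope-factor form; the exposure parameters and the slope factor below are assumed placeholders, not the values used in the study:

```python
def ingestion_risk(conc_mg_kg, slope_factor, ing_rate_mg_day=100.0,
                   ef_days_yr=350.0, ed_yr=24.0, bw_kg=70.0,
                   at_days=25550.0):
    """Excess cancer risk = CDI (mg/kg/day) x oral slope factor.
    conc_mg_kg: metal concentration in soil (mg/kg d.w.);
    1e-6 converts mg soil to kg soil for the intake rate."""
    cdi = (conc_mg_kg * 1e-6 * ing_rate_mg_day * ef_days_yr * ed_yr) \
          / (bw_kg * at_days)
    return cdi * slope_factor

# e.g. arsenic at 11.62 mg/kg d.w. (Site 2) with an assumed oral
# slope factor of 1.5 (mg/kg/day)^-1:
risk_as = ingestion_risk(11.62, 1.5)
```

A total IR would sum such pathway risks (here, ingestion plus dermal contact) over all five metals and compare the result against the 10⁻⁶ acceptable level.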

Keywords: carcinogenic risk, heavy metals, human health risk assessment, soil pollution

Procedia PDF Downloads 416
647 Metabolically Healthy Obesity and Protective Factors of Cardiovascular Diseases as a Result from a Longitudinal Study in Tebessa (East of Algeria)

Authors: Salima Taleb, Kafila Boulaba, Ahlem Yousfi, Nada Taleb, Difallah Basma

Abstract:

Introduction: Obesity is recognized as a cardiovascular risk factor and is associated with cardio-metabolic diseases. Its prevalence is increasing significantly in both rich and poor countries. However, there are obese people who have no metabolic disturbance, so obesity is not always a risk factor for an abnormal metabolic profile that increases the risk of cardiometabolic problems. Yet there is no standard definition for identifying the group of Metabolically Healthy but Obese (MHO) individuals. Objective: The objective of this study is to evaluate the relationship between MHO and some factors associated with it. Methods: This longitudinal, prospective cohort study included 600 participants aged ≥18 years. Metabolic status was assessed by the following parameters: blood pressure, fasting glucose, total cholesterol, HDL cholesterol, LDL cholesterol, and triglycerides. Body Mass Index (BMI) was calculated as weight (in kg) divided by height squared (in m²): BMI = Weight/(Height)². According to the BMI value, the population was divided into four groups: underweight subjects with BMI < 18.5 kg/m², normal-weight subjects with BMI = 18.5-24.9 kg/m², overweight subjects with BMI = 25-29.9 kg/m², and obese subjects with BMI ≥ 30 kg/m². A value of P < 0.05 was considered significant. Statistical processing was done using the SPSS 25 software. Results: During this study, 194 participants (32.33%) were identified as MHO among the 416 obese individuals (37%). The prevalence of the metabolically unhealthy phenotype among normal-weight individuals was 13.83%, vs. 37% in obese individuals. Compared with metabolically healthy normal-weight individuals (10.93%), the prevalence of diabetes was 30.60% in MHO, 20.59% in metabolically unhealthy normal-weight, and 52.29% in metabolically unhealthy obese individuals (p = 0.032).
Blood pressure was significantly higher in MHO individuals than in metabolically healthy normal-weight individuals, and in metabolically unhealthy obese than in metabolically unhealthy normal-weight individuals (P < 0.0001). Familial coronary artery disease does not appear to have an effect on the metabolic status of obese and normal-weight patients (P = 0.544). However, waist circumference appears to have an effect on the metabolic status of individuals (P < 0.0001). Conclusion: This study showed a high prevalence of metabolic profile disruption in normal-weight subjects and a high rate of overweight and/or obese people who are metabolically healthy. To understand the physiological mechanisms underlying these metabolic statuses, a more thorough study is needed.
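The BMI grouping described in the Methods is a direct set of cut-offs and can be transcribed as a small helper:

```python
def bmi_group(weight_kg, height_m):
    """Classify a participant by the study's four BMI groups."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal weight"
    if bmi < 30.0:
        return "overweight"
    return "obese"

# e.g. 95 kg at 1.70 m gives BMI ~32.9, i.e. the obese group.
group = bmi_group(95, 1.70)
```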

Keywords: metabolic health, obesity, associated factors, cardiovascular diseases

Procedia PDF Downloads 103
646 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes the process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds, in an unsupervised manner, a contextual model containing vector representations of the concepts in the corpus (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts must be extracted from unseen medical text; the system extracts key-phrases from the new text, matches them against the complete set of terms from step 2, and annotates the most semantically similar ones with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes.” Our evaluations show promising results for extracting concepts from medical corpora.
The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
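Step 2 (matching a seed set against the contextual model) can be illustrated with a toy example: rank candidate terms by cosine similarity to the centroid of the seed vectors. The 3-dimensional vectors below are fabricated for illustration; a real contextual model (e.g., Phrase2Vec trained on PubMed) would supply them.

```python
import math

# Fabricated phrase vectors standing in for a contextual model.
model = {
    "dry mouth":      [0.9, 0.1, 0.0],
    "itchy skin":     [0.8, 0.2, 0.1],
    "blurred vision": [0.85, 0.15, 0.05],
    "fatigue":        [0.7, 0.3, 0.1],   # a symptom we hope to recover
    "metformin":      [0.0, 0.1, 0.9],   # a medication; should rank low
}

def cosine(a, b):
    """Cosine similarity of two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def expand(seed_terms, model, top_k=1):
    """Return the top_k non-seed terms closest to the seed centroid."""
    dim = len(next(iter(model.values())))
    centroid = [sum(model[t][i] for t in seed_terms) / len(seed_terms)
                for i in range(dim)]
    candidates = [t for t in model if t not in seed_terms]
    ranked = sorted(candidates, key=lambda t: cosine(centroid, model[t]),
                    reverse=True)
    return ranked[:top_k]

seeds = ["dry mouth", "itchy skin", "blurred vision"]
expanded = expand(seeds, model)   # "fatigue" ranks above "metformin"
```

Step 3 then reuses the same similarity matching: key-phrases extracted from unseen text are compared against the expanded term set and annotated with the matching concept category.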

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 125